I’m new here, interesting project! I noticed that, compared to Google’s Street View, Mapillary mostly only shows views through the front window of cars. I thought that holding the camera to the side might give more interesting views, because you’d have a better view of the shops or whatever else there is along the road.
You can move through the sequence by clicking “play”, but the algorithm doesn’t really detect the sideways orientation of the camera. On some images you can move sideways with the up and down arrows, which doesn’t really make sense, and sometimes the images aren’t connected at all.
Especially on the bigger streets, where multiple sequences have already been taken by other mappers, I hoped to be able to turn the view to the side and back to the front, like it’s possible in Google’s Street View.
Are there any capture options I can set to support this, or isn’t this possible yet? Or do we maybe need a sequence at a different angle, like ~45° to the left or right, so that the algorithm can connect the pictures taken at 90° to the direction of travel?
I’ve done some sideways pictures out of cars and trains - e.g. the Mariazeller Bahn in Austria.
Most of the time I moved too fast and the pictures were blurry, and often there were too many obstacles in the way.
I chose to mostly do it the 45° way: taking pictures straight ahead in both directions, and once that’s done, at 45° off straight ahead towards the other side of the street, also in both directions on both sides of the street.
If the street is NOT too narrow, I’ll do direct sideways pictures, but mostly not anymore.
I also think that good coverage requires at least these 4 directions - often the interesting part is to the side.
The biggest problem is that Mapillary’s navigation algorithm is not quite there yet. E.g. you have to press forward to go to the next image (not the logical left or right), but often the navigation selects something completely illogical. Fortunately I believe Mapillary will improve over time and fix this.
You can of course also combine different techniques in one session.
Mapillary automatically tries to connect the images, improve the locations, and calculate the headings. For this to work you’ll need a bit of overlap between the pictures. This works best with forward or backward facing pictures. Therefore it is good practice to check whether there already are forward facing pictures; if not, it could be smart to start with those.
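Just to illustrate how quickly side-facing pictures lose that overlap, here is a rough back-of-envelope sketch (my own simplified numbers and names, nothing from Mapillary’s actual pipeline):

```python
import math

def sideways_overlap(speed_kmh, interval_s, scene_distance_m, hfov_deg):
    """Fraction of the scene shared by two consecutive side-facing shots.

    A side-facing camera sees a strip roughly 2*d*tan(hfov/2) wide at
    distance d; each new shot slides that strip forward by speed*interval.
    """
    spacing_m = speed_kmh / 3.6 * interval_s
    footprint_m = 2 * scene_distance_m * math.tan(math.radians(hfov_deg) / 2)
    return max(0.0, 1 - spacing_m / footprint_m)

# Car at 50 km/h, one photo every 2 s, buildings 10 m away, 90 degree lens:
print(sideways_overlap(50, 2, 10, 90))  # 0.0 - consecutive shots share nothing
# A forward-facing camera keeps most of the scene in frame between shots,
# which is why forward sequences connect so much more reliably.
```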
But the best pictures are the pictures you actually take: on a bus it is smart to take sideways pictures, and when you’re in a car on the highway it’s smart to take forward pictures, because any picture is better than no picture at all.
That image looks like motion blur and not rolling shutter blur.
But it is true that for sideways mapping, motion blur is a much bigger problem than for forward/backward. Motion blur is caused by the camera moving too fast relative to what it shoots in the given light, i.e. the shutter is open for too long for the image to look sharp.
Rolling shutter blur usually occurs when the camera shakes quickly (like hitting a bump or uneven ground) and can be recognized by only parts of the image being blurry.
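To get a feel for how fast the side view blurs, here is a rough estimate (all numbers are my own assumptions, nothing official):

```python
import math

def sideways_blur_px(speed_mps, distance_m, exposure_s, image_width_px, hfov_deg):
    """Approximate motion blur, in pixels, for an object straight out the side.

    The object sweeps past at roughly speed/distance radians per second;
    multiply by the exposure time and by pixels per radian.
    """
    angular_rate = speed_mps / distance_m                # rad/s
    px_per_rad = image_width_px / math.radians(hfov_deg)
    return angular_rate * exposure_s * px_per_rad

# 50 km/h past buildings 10 m away, 1/500 s exposure, 4000 px wide, 90 deg FOV:
blur = sideways_blur_px(50 / 3.6, 10, 1 / 500, 4000, 90)
print(f"~{blur:.0f} px of blur")  # about 7 px - already visibly soft
# The same speed looking forward blurs far less, since the scene ahead
# barely moves across the frame.
```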
Thanks for the links. 45° looks really nice and is a good compromise between straight ahead, which is quite boring, and unconnected 90° pictures.
I think the main problem right now is the limited Mapillary viewer. You should be able to choose from any available source at that location, not only those the algorithm has put together somehow. Maybe they could add a little photo list at the bottom where you can freely select any picture taken close to the current location, regardless of which direction Mapillary thinks it is facing.
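On the data side that kind of list wouldn’t need much - something like this toy sketch (the field names and helper are made up, this is not Mapillary’s API):

```python
import math

def ground_distance_m(lat1, lon1, lat2, lon2):
    """Approximate distance in metres (equirectangular, fine below ~1 km)."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000

def nearby_photos(photos, lat, lon, radius_m=30):
    """Every photo within the radius, regardless of heading or sequence."""
    return sorted(
        (p for p in photos if ground_distance_m(p["lat"], p["lon"], lat, lon) <= radius_m),
        key=lambda p: ground_distance_m(p["lat"], p["lon"], lat, lon),
    )
```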
tryl: I’m thinking of putting at least one camera on my bike pointing sideways, but I’m not sure which way would be best. Should I put it on the right side (in right-hand traffic), thereby putting the camera closer to the stuff it’s taking pictures of and perhaps having less overlap, or on the left, across the road, and risk there being traffic in the way, but having a broader view?
Have you found one to be better than the other?
Also, when you have a backwards facing camera, how does Mapillary deal with it? When you press the forward arrow, does the picture move backwards? Or has it figured out how to parse it, and basically reverse the sequence?
@pkoby Which way to point? It depends:
Even with a wide-angle lens you need to keep a certain distance to e.g. a building to get a decent shot. So if the buildings are close to the bike lane I would point the camera left, but if they have a decent front yard or there are 10-15 meters to them, I would point right.
The downside of pointing the camera left is that there are a lot of cars on that side. Then again, if there are many parked cars along the road, left may be better to still get some view. If you have a great view to one side and not the other, the choice is easy.
In the end my best advice is to try and do what you think looks best! But… you don’t know what people are looking for on Mapillary, so it will always be a guess.
I use the interpolate direction script with -180 degrees for the backward camera; 180 would do the same. I don’t think Mapillary’s navigation works that well when you introduce a lot of images into the same space, but I try to provide the best data I can (good GPS and a set direction) and then I think they will continue to improve. Try e.g. https://www.mapillary.com/map/im/tt2EvdE-ehbXENfrM8gOUA/photo, which is from 4 cameras with 90 degrees between them, 2 seconds between images, and everything tagged using the interpolate direction script with the proper angle as argument. On https://www.mapillary.com/map/im/JL6KeD2w4bwy02T1eQOUfg/photo I finally got it to select the back camera (the only one of the 4 which is in 4:3 and not 16:9 format).
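For anyone wondering what the script does conceptually, the idea is roughly this (a simplified sketch of my own, not the actual mapillary_tools code; the names are made up):

```python
import math

def track_bearing(lat1, lon1, lat2, lon2):
    """Compass bearing (0-360) from one GPS fix to the next."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# One offset per camera in the rig: front, right, back, left.
CAMERA_OFFSETS = {"front": 0, "right": 90, "back": -180, "left": -90}

def image_heading(prev_fix, next_fix, camera):
    """Heading written to the image: direction of travel plus the camera offset."""
    bearing = track_bearing(*prev_fix, *next_fix)
    return (bearing + CAMERA_OFFSETS[camera]) % 360

# Driving due north, back camera: (0 - 180) % 360 == 180, i.e. facing south.
print(image_heading((55.0, 12.0), (55.001, 12.0), "back"))
```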
Thanks for the tips. In the sequence you linked that is actually at 180 degrees, I see that Mapillary does seem to be confused. When I lock the sequence and go “forward” with the arrows, the view moves backwards, but in the minimap the viewfinder is pointing along the sequence direction. It seems that Mapillary’s automated connections don’t understand this.
Looking at the ‘edit’ page, though, all the points are shown facing in the direction of movement, when they should be facing backwards, I would think.
Anyway, it seems to me that working with a backwards-facing camera might be too much hassle and confusion for me. As for the sideways direction, I’ll have to play around with it. The majority of places I ride around here don’t have substantial traffic, so pointing left would probably be sufficient. One exception would be our downtown, which has the most interesting stuff to the sides, but also has lots of cars (https://www.mapillary.com/map/im/L4NJuWjJdZubUqd4rBDhTg/photo).
You can ride with 2 cameras (forward and side) in each direction. In that way you will get more angles, but will only cover half the streets.
-180 degrees and 180 degrees are the same. If the number is less than or equal to 0, then 360 must be added until it becomes larger than 0. If it is larger than 360, then 360 must be subtracted until it becomes at most 360.
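In code that rule is simply (trivial sketch):

```python
def normalize_angle(deg):
    """Fold any angle into the (0, 360] range by the rule above."""
    while deg <= 0:
        deg += 360
    while deg > 360:
        deg -= 360
    return deg

print(normalize_angle(-180))  # 180
print(normalize_angle(180))   # 180 - the same camera direction
```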
The way Mapillary is showing the images right now is not final. They can fix the issues, and then it will get better - and I am sure they will. Right now they are hiring a lot of people and may be spending a lot of resources integrating them. The most important thing for us is to deliver the best data possible: good coordinates from the GPS and an accurate direction. Then I think it will be fine eventually.
I understand that, but I also want to provide data that makes sense to me with Mapillary as it is now. That is not to say that I’m tweaking the data to fit my needs (like “mapping for the renderer” in OpenStreetMap), but that I want my sequences to be simple.
I already ride both ways on every road, so I suppose adding a side camera just increases the number of photos for the area but doesn’t decrease the distance traveled. But hey, it’s a good workout riding everywhere, and a good brain exercise figuring out the optimal routes to cover everything twice.
Hi, just wanted to confirm that images in all directions are welcome and that we have work to do for better navigation.
As said above, we use the overlapping parts of the images to understand the camera motion. For side-looking cameras this is sometimes not possible since there is little or no overlap. Navigation in such cases should trust the original GPS and compass information, and it is not doing that well enough yet.
I liked the suggestion of showing nearby images to let the user browse them independently of the navigation.