If several sequences are captured on the same street, by different users or by one user at different times, are they kept separate or are they merged into one sequence with "better resolution"?
There is a prototype 3D viewer where different pictures at the same location get combined into a big 360° picture, but the standard viewer shows only separate sequences.
@Alferic If multiple sequences are uploaded on the same street, they will be mixed together to produce better spatial photo coverage of the area. You can try it here by pressing the forward button two times:
You can also check the Photos Nearby panel on the right to see more images that were taken close to the one you are standing at. Click one of the images to move to it.
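The idea behind mixing sequences for navigation can be sketched roughly like this: candidates for the next photo are chosen by spatial distance, not by which upload they belong to. This is an illustrative sketch only, not Mapillary's actual code; the `Photo` class, field names, and the flat metric coordinates are all assumptions made for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Photo:
    id: str
    sequence: str   # which upload/user the photo came from
    x: float        # position in a local metric frame (metres)
    y: float

def photos_nearby(current: Photo, photos: list[Photo], radius: float) -> list[Photo]:
    """Return all photos within `radius` metres of `current`,
    regardless of which sequence they belong to, closest first."""
    hits = [
        p for p in photos
        if p.id != current.id
        and math.hypot(p.x - current.x, p.y - current.y) <= radius
    ]
    hits.sort(key=lambda p: math.hypot(p.x - current.x, p.y - current.y))
    return hits

# Two sequences from different users covering the same street:
photos = [
    Photo("a1", "seq-a", 0.0, 0.0),
    Photo("a2", "seq-a", 10.0, 0.0),
    Photo("b1", "seq-b", 4.0, 1.0),
    Photo("b2", "seq-b", 14.0, 1.0),
]
near = photos_nearby(photos[0], photos, radius=12.0)
print([p.id for p in near])  # ['b1', 'a2'] -- candidates mix both sequences
```

More sequences simply mean more candidates within the radius, which is why coverage improves as more users contribute.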
@Harry The 3D WebGL viewer is actually the default viewer. There is a fallback viewer that appears if the browser does not support WebGL. In the fallback viewer there are no 3D transitions but the navigation is the same. Images from different users will still be mixed when moving.
You can check if your browser supports WebGL here: https://get.webgl.org/
To summarize: more sequences from more users at different times will give better spatial coverage of an area and in the end make it easier to navigate there.
@oscarlorentzon Yeah, now I see you actually do mix different sequences in the viewer.
But I was thinking of an experimental 360-degree viewer you had some months ago, where multiple pictures were used to form a 360-degree street view. I can't find it anymore.
@Harry You are right. Before we switched to the WebGL viewer there was a feature where you could zoom out to see a stitched panorama and look around. The new viewer does not have the zoom-out feature, so this is not possible.
What you can do though is to look around in stitched panoramas by dragging horizontally. It is shown in the panorama dragging video in this blog post: http://blog.mapillary.com/news/2015/09/30/viewer-improvements.html
We also support look around in native 360 panorama images like this one: http://www.mapillary.com/map/im/kN1fIRYhuPQUzgND-7NXwg/photo
But if several sequences are mixed together, is there not a problem: different cameras, different times, different lighting, different weather, different seasons, …?
@Alferic We handle different cameras with different focal lengths as well as different aspect ratios in our reconstruction algorithm. Every image added to an area improves the reconstruction because there will be more overlap.
In general, different times, lighting, weather and seasons are handled as well, as long as the changes are not too large. For environments like cities, with static structures like buildings, this works well. Correct 3D reconstruction is much harder in ever-changing environments like forests, where the trees look completely different in spring, summer, autumn and winter, as well as over longer time spans.
In extreme situations, such as two photos of the same view taken at day and at night, there will be no common features between the images.
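One reason different focal lengths are not a problem can be shown with the standard pinhole camera model: pixel coordinates are turned into viewing directions by subtracting the principal point and dividing by the focal length, after which images from different cameras are directly comparable. This is a minimal sketch of that idea (no lens distortion, made-up camera parameters), not Mapillary's reconstruction code.

```python
import math

def pixel_to_bearing(u, v, focal_px, cx, cy):
    """Map a pixel (u, v) to a unit viewing direction with a simple
    pinhole model: shift by the principal point (cx, cy), divide by
    the focal length in pixels, then normalize."""
    x = (u - cx) / focal_px
    y = (v - cy) / focal_px
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)

# The same viewing direction seen by two different cameras:
# camera A: focal 1000 px, 1920x1080; camera B: focal 500 px, 960x540.
bearing_a = pixel_to_bearing(1160, 640, focal_px=1000, cx=960, cy=540)
bearing_b = pixel_to_bearing(580, 320, focal_px=500, cx=480, cy=270)
print(bearing_a == bearing_b)  # True: directions match despite different cameras
```

After this normalization, matched features from any pair of cameras constrain the same 3D geometry, which is why every extra image adds overlap instead of confusion.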