I recently uploaded a sequence to Mapillary, but the GPS signal from my GoPro Max was very poor, probably because a lot of tall buildings were disturbing the GPS signal. After the images were processed I noticed that they are located correctly relative to each other when enabling 3D mode, but they are incorrectly placed on the map. Navigating by the arrows therefore works poorly, but clicking on the next 3D sphere works great. Does Mapillary correct the green dots on the map after some further processing?
Yes, Mapillary will attempt to do some correction of GPS signal noise using OpenSfM (https://opensfm.org/) - Structure from Motion. However, we don't yet display these corrected green dots on the map - that is something we would like to do in the future.
The full answer to this is a bit more complicated. Your high-accuracy GPS data does indeed help, but only to a limited extent. Simply put, OpenSfM has never really considered the quality of the GPS data, because there is no really good metric (GPS Exif tag) for this. And although the GPSDOP Exif tag is supposed to describe the quality of a measured GPS position, it is a value only relative to a specific GPS receiver's accuracy (not an absolute distance, like you would expect in Mapillary's use case). Furthermore, many if not most devices do not produce the GPSDOP value at all, or produce buggy GPSDOP values. Hence, GPSDOP is completely useless for OpenSfM in Mapillary's use case, and there is no feasible way to establish the quality of GPS positions by just looking at the data either.
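To make the GPSDOP point concrete, here is a minimal sketch (assuming Pillow is installed; the tag numbers are from the EXIF specification) of reading the tag. Note that the value, when present at all, is a dimensionless dilution-of-precision factor, not a distance in meters:

```python
from PIL import Image

GPS_IFD = 0x8825   # EXIF pointer to the GPS sub-IFD
GPSDOP = 0x000B    # GPSDOP tag within the GPS sub-IFD

def read_gpsdop(path):
    """Return the GPSDOP value, or None if the device never wrote one."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)   # empty dict when there is no GPS data
    return gps.get(GPSDOP)

dop = read_gpsdop("example.jpg")
if dop is None:
    print("No GPSDOP tag - the quality of the fix is simply unknown")
else:
    # A dimensionless factor relative to this receiver, not meters:
    print(f"GPSDOP = {float(dop):.2f}")
```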
OpenSfM computes camera GPS positions from image and GPS data. The final computed positions are an amalgamation of all images in a sector. So yes, bad-quality GPS positions do degrade the accuracy of computed positions, but high-quality GPS positions also improve computed positions (in a sector). Hence, in this situation, the most significant factor that improves computed position accuracy is not so much GPS data quality as the number and types of images per sector. For example, panoramic images usually improve computed position accuracy the most, because they can often be a catalyst for finding many (new) reconstruction matches for many (existing) images in a given sector. In other words, it is enough that many visually overlapping images land in the same sector, so that their positions can be faithfully reconstructed.
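As an intuition aid only (this is an illustrative sketch, not OpenSfM's actual code), you can picture the "amalgamation" as fitting the internally consistent relative reconstruction to all the noisy GPS fixes in a sector at once, for example with a least-squares similarity transform; a single bad fix then gets outvoted by the rest:

```python
import numpy as np

def align_to_gps(recon_xy, gps_xy):
    """Umeyama-style 2D alignment: find scale s, rotation R and translation t
    minimizing sum ||s * R @ x_i + t - g_i||^2 over all image positions."""
    mu_x, mu_g = recon_xy.mean(axis=0), gps_xy.mean(axis=0)
    X, G = recon_xy - mu_x, gps_xy - mu_g
    U, S, Vt = np.linalg.svd(G.T @ X)        # cross-covariance of the point sets
    d = np.ones(len(S))
    d[-1] = np.sign(np.linalg.det(U @ Vt))   # guard against a reflection
    R = U @ np.diag(d) @ Vt
    s = (S * d).sum() / (X ** 2).sum()
    t = mu_g - s * R @ mu_x
    return s * recon_xy @ R.T + t            # the "computed" positions

recon = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]])    # relative geometry
gps = np.array([[10.0, 5.2], [11.1, 4.9], [12.0, 5.1]])   # noisy absolute fixes
print(align_to_gps(recon, gps))
```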
Long story short, it is important to remember that the green dots you see on the map are image metadata GPS positions, while the orange dots, like the one above, are computed positions, not corrected positions.
This is because their positions are reconstructed through a process called photogrammetry.
This is because the green dots on the map are measured GPS positions from image metadata. And, as you have already observed yourself, measurements in the real world can suffer from all kinds of interference and thus produce skewed results.
This is because the (so-called) navigation graph is built from image metadata positions, not from computed positions.
Yes, I understand the overall process. You determine the relative orientation between two images that have common features, and then you do a global optimisation combining the relative orientations with the absolute (although approximate) positions from GPS.
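Something like this toy optimisation, I imagine (a hedged sketch using scipy; the offsets, weights and 2D simplification are all made up for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

gps = np.array([[0.0, 0.0], [5.3, 0.2], [9.6, -0.4]])   # noisy absolute fixes
rel = {(0, 1): np.array([5.0, 0.0]),                     # relative offsets from
       (1, 2): np.array([5.0, 0.0])}                     # feature matching
W_REL, W_GPS = 10.0, 1.0                                 # trust matches over GPS

def residuals(flat):
    p = flat.reshape(-1, 2)
    r = [W_REL * (p[j] - p[i] - d) for (i, j), d in rel.items()]
    r.append(W_GPS * (p - gps).ravel())
    return np.concatenate([np.ravel(x) for x in r])

sol = least_squares(residuals, gps.ravel()).x.reshape(-1, 2)
print(sol)   # positions pulled toward a consistent 5 m spacing
```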
Still, what are the yellow/orange dots?
I see a very bad case, where image view overlapping is also poor:
Is the orange/yellow marker the corrected GPS position?
Still, what are the yellow/orange dots?
This is the computed position of the image you are currently viewing.
I see a very bad case, where image view overlapping is also poor:
Yeah, unfortunately reconstruction is not perfect. There have to be enough matching features in the images for it to work properly. Because OpenSfM works per sector, in iterations, on collections of images with enough matching features, sometimes those collections keep being shuffled to different positions until there is no (satisfying) solution. Note that collections may have 1 or more images. Just to avoid any confusion, please also note that sequences and OpenSfM's internal image collections have nothing to do with each other. Hence, in case there is no solution for matching collections, the last computed collection positions are kept.
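As a rough picture of how images end up in collections (purely illustrative - the real pipeline is far more involved, and the threshold below is invented), think of a match graph whose connected components are the collections; an image with no strong matches becomes a collection of size 1:

```python
from collections import defaultdict

MIN_MATCHES = 30   # assumed threshold, purely illustrative

def collections_from_matches(images, match_counts):
    """match_counts maps (image_a, image_b) -> number of matched features."""
    graph = defaultdict(set)
    for (a, b), n in match_counts.items():
        if n >= MIN_MATCHES:
            graph[a].add(b)
            graph[b].add(a)
    seen, groups = set(), []
    for img in images:
        if img in seen:
            continue
        stack, comp = [img], set()   # depth-first flood fill
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        groups.append(comp)
    return groups

print(collections_from_matches(
    ["a", "b", "c", "d"],
    {("a", "b"): 120, ("b", "c"): 45, ("c", "d"): 3}))   # -> [{a,b,c}, {d}]
```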
Is the orange/yellow marker the corrected GPS position?
No - as explained above, it is the computed position of the image you are currently viewing, not a corrected GPS position.
Having said that, this did not use to be in the UI. Computed positions were rather reserved for retrieval via the API only, and the current image highlighted in orange used to be an image metadata dot. Maybe @boris or @gyllen can elaborate more on why this was changed about a year ago? Imho this behavior is indeed a bit confusing to users, especially when the computed positions visibly deviate from sequence positions.
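For reference, both positions can still be fetched per image from the Graph API (v4); the geometry / computed_geometry field names are from the public API documentation, while the image ID and token below are placeholders:

```python
import requests

ACCESS_TOKEN = "MLY|..."   # your Mapillary client token
IMAGE_ID = "1234567890"    # placeholder image ID

resp = requests.get(
    f"https://graph.mapillary.com/{IMAGE_ID}",
    params={"fields": "geometry,computed_geometry",
            "access_token": ACCESS_TOKEN},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()
print("metadata GPS:", data.get("geometry"))            # the green dot
print("computed    :", data.get("computed_geometry"))   # the orange dot
```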
Basically, the underlying idea is that if local imagery saturation is high enough, the system should be able to reconstruct the position of the camera (or rather all camera positions in the area).
Note the vast position difference between the orange dot and the preceding blue dot in the example above. In this case, reconstruction has done its job very well but only because the Leipziger Platz in Berlin has been sufficiently saturated with imagery.
But, you can see here that reconstruction can still be quite off.
I can understand trying to calculate the "correct location" for some internal processing. But it's wrong for almost all the locations I've checked in the past month. If you want to show that indicator before it's accurate while you develop the algorithms, that's fine I guess. But the actual indicator is now gone - I have no idea which point I have selected, because the estimated point is almost never there. This has turned from an experimental feature into a usability issue. The first time I saw it I was convinced it was some sort of rendering bug offsetting the indicator.
@HellPhoto - Agreed that we have work to do here - though I don't believe this functionality has changed recently? The original position is visible before you click on a point, and the computed one is shown after you click on a point.
Yes, this has been an issue for a while. Once I move the mouse cursor away from the clicked point, it is no longer visible. Only the orange one remains. It only shows up if you mouse over the sequence navigation buttons (and those are technically next/previous points):
Please do not ruthlessly "improve" the location of, for example, Mapillary: viewed against the background of either OpenStreetMap or - logged in at OSM - the Digitaal Vlaanderen GRB (the official map, accurate to a couple of centimeters or so), the GoPro Hero9's position is - in a difficult location, given tall buildings on two sides - much better than Mapillary's "computed", then fiddled - but actually fumbled - location;
just looking at the picture you'd see it was taken somewhere in line with the underpass (which is clearly shown on the GRB layer in OSM), whereas the "corrected" position places it - my guess, as a scale stick is missing on Mapillary - some 20 m to the left, as if I were cycling straight at the middle of that apartment block - quod non.
= = = another example:
Looking at Mapillary you'll see it was taken at the "jump" in the building line. GoPro shows it perhaps 2 m ahead of where I was, and perhaps 1-1½ m to the right; Mapillary has correctly corrected that forward bias, but placed me in the raised planter, which you can see wasn't the case, and thus shows worse sideways accuracy.
So by all means keep trying, but for now you'd likely benefit from betting both on some 3D model builder and on actually analysing the photo for clues.
I spotted other examples where a photo taken on a path was "corrected" to be way beside the path; I think correcting in urban surroundings will be of most benefit?
Absolutely, we will always make the "raw" location available, so that is something users can always consult. As you mentioned, there are lots of issues with the current "computed" location - which is precisely what we need to either fix or perhaps stop showing in cases where we know it's unreliable.
…lots of issues with the current "computed" location…
One reason for this might well be the fact that for some time now, sequences (in a sector) are no longer consolidated, which makes camera position reconstruction rather "half-baked". This also has the side effect that (3D) time travel does not work anymore either. Currently, we have some sort of pseudo fallback time travel, because the images selected for comparison are simply selected based on location, image metadata direction, and date rather than on content, location, and date. This is unfortunate because I have a few places that I would like to compare for changes. I am looking forward to the day when sequence consolidation and time travel come back in their full potential and glory!
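For what it's worth, here is a hedged sketch of what such a location/direction/date fallback selection might look like; the field names and thresholds are my own assumptions, not the actual implementation:

```python
import math

def bearing_diff(a, b):
    """Smallest absolute angle between two compass headings, in degrees."""
    return abs((a - b + 180) % 360 - 180)

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation - good enough at city scale."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(x, y)

def time_travel_candidates(current, catalog, radius_m=15, max_angle=30):
    """Older images that are nearby and face roughly the same way,
    sorted oldest first. Dates are assumed comparable (e.g. ISO strings)."""
    return sorted(
        (img for img in catalog
         if img["date"] < current["date"]
         and bearing_diff(img["angle"], current["angle"]) <= max_angle
         and distance_m(current["lat"], current["lon"],
                        img["lat"], img["lon"]) <= radius_m),
        key=lambda img: img["date"])
```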