Improving the lat/long position of images using timestamps

I have experimented with ‘mocking’ the internal GPS in a Samsung Note 8 phone using an external GPS receiver with an external aerial. This clearly has a better view of the sky than a phone in a car windscreen, and it can also apply other free corrections such as SBAS (WAAS or EGNOS).

My ‘old’ Garmin 78s can do this with a complex set of USB cables, settings, helper apps, and developer phone settings. It is not very robust. It needs a phone capable of OTG, and with that special adaptor the phone becomes the power supplier! So I had to add a USB splitter with another power supply to feed the Garmin GPS. I actually found one from a very old external hard drive with a mini connector :sunglasses: The link runs at 4800 baud. The phone cannot be charged through the OTG plug, and there are too many plugs that vibrate loose. (Industrial or military grade connectors would solve that problem.) I could publish a HOWTO if there is interest, but I am not continuing with it myself.

An option might be to find a better GPS with Bluetooth but still an external aerial and SBAS.
But I haven’t found one yet. The cheap one from Amazon (GlobalSat ND-105C Micro USB GPS Receiver) has neither Bluetooth nor SBAS processing, although it claims to be SBAS capable. It has the same complex cable and app setup, no external aerial, and needs a very long USB lead to place it on the car roof.
edit: I have found a Bluetooth GPS with Glonass and SBAS corrections, the Garmin Glo, and am testing it now. The same mocking hacks are still required.
So I am back to post-processing. If I get a second camera that does not have a GPS, I would have to post-process anyway. But here’s the problem: the times recorded for the images are all over the place! Why?

Perhaps Mapillary could help here by adding more EXIF tags, such as a GPS capture time? Which timestamp does Mapillary use themselves?

It turns out that recreational GPS units are only set up to return a fix once per second. This is partly by design of the GPS signal: the data is spread with a code that repeats 1000 times per second, so that a receiver can recover a reliable data block from a very weak signal by correlating over many repeats. So any sub-second fixes must be extrapolated from heading and velocity, and perhaps a gyroscope in the roving unit.

But photos are not taken on a whole second, so the irritating storage of the EXIF time to the nearest second builds in an uncertainty larger than the GPS accuracy itself. Is that why the file name has milliseconds?

Even worse, the filename timestamp and the EXIF timestamp differ by several seconds - which to believe? The difference is not constant. It is not just a matter of calibrating the first image at the start (which is hard enough!); it is the time tolerance around each image that gives a distance error of about 20 metres when comparing two sources based only on their timestamps.
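A quick back-of-the-envelope check of why this matters (the speed and uncertainty values below are illustrative, not measured):

```python
# Positional error caused by timestamp uncertainty: error = speed * dt.
def position_error_m(speed_kmh: float, time_uncertainty_s: float) -> float:
    """Distance travelled during the timestamp uncertainty, in metres."""
    return speed_kmh / 3.6 * time_uncertainty_s

# Rounding EXIF time to the nearest second is a +/-0.5 s uncertainty:
print(round(position_error_m(50, 0.5), 1))  # ~7 m at 50 km/h
# A second or two of filename/EXIF disagreement dwarfs the GPS accuracy:
print(round(position_error_m(50, 1.5), 1))  # ~21 m at 50 km/h
```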

Would this be avoided using real-time corrections? I’m beginning to wonder if this can be done on a phone, but Trimble do have a solution. I can see why RTK corrections are so hard to do while moving. If you are standing still none of this is a problem!

I am attempting the adjustment using Linear Referencing.

The GPS track is converted into a route polyline with XYM point values. The m-values are decimal seconds since the epoch; I used a temporary origin of midnight to reduce the number of digits. Each route has an ID so there is no confusion with multiple tracks along the same road, even if they were recorded simultaneously. The measures are not linear with distance: the time along the route varies with speed, so there need to be enough points to model the changes in speed. Generalisation of the track is not a good idea; my Garmin has a setting to minimise it.

Now if we create an ‘event table’ of the images and calculate the epoch seconds of each image, each image can be placed with high precision along the GPS route. The image time does not have to coincide with a GPS point record; it is automatically interpolated and may even land on a curve. The measure is then converted back into the lat/long at that point. It all depends on very precise timing.
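A minimal sketch of the event-table idea in plain Python, assuming the route vertices already carry epoch-second m-values (the coordinates and times below are invented for illustration):

```python
from bisect import bisect_right

# A GPS route as (lon, lat, m) vertices, where m is decimal seconds since
# an epoch, plus an "event table" of image times located along the route
# by interpolating between vertices.
route = [  # m must be strictly increasing along the route
    (174.7600, -36.8500, 10.0),
    (174.7610, -36.8502, 12.0),
    (174.7625, -36.8503, 15.0),
]

def locate(route, m):
    """Interpolate (lon, lat) at measure m along the route."""
    ms = [p[2] for p in route]
    if not ms[0] <= m <= ms[-1]:
        raise ValueError("measure outside route")
    i = min(bisect_right(ms, m) - 1, len(route) - 2)
    (x0, y0, m0), (x1, y1, m1) = route[i], route[i + 1]
    t = (m - m0) / (m1 - m0)
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

# Event table: image timestamps in the same epoch seconds.
for image_m in (11.0, 13.5):
    print(locate(route, image_m))
```

A GIS linear-referencing tool does the same thing, with the bonus that the interpolation can follow a curved route rather than straight segments.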

The current Mapillary tools expect the times to match only to the nearest second.

I am using ArcGIS, but I see that QGIS has an excellent LRS plugin that does the same process.
Linear Referencing is also supported in Spatialite; you could do all the processing in a few SQL expressions after loading the data into Spatialite with Python.

The black numbers are hatches placed every 5 seconds (not the GPS track points); they are not evenly spaced because I slow down turning in the cul-de-sac. The green dots are the times of each image, nominally every second, corrected to the measure of the recorded photo time. The lat/long in the image is then replaced with the better one, and the image direction can also be derived from the tangent of the track at the point. (I didn’t show the aerial, for clarity; I had to weave around parked cars.)


Interesting approach.
I have to note that “recreational GPS units” can have a sampling rate of 1 Hz or more (e.g. 10 Hz).

Newer and more expensive units may produce fixes faster, but I would expect the manufacturer to boast about it if true; no information defaults to 1 second. There are other problems with capturing faster than once a second. Just having an option to enter 0.5 seconds does not often work, according to the forum reports.

I have just gone out and purchased a Garmin Glo, which claims 10 Hz but qualifies it by suggesting the phone, or the Bluetooth link carrying the NMEA text sentences, may not be able to keep up. The Glo is a much better option than a 78s.
Both also have SBAS. So far I can’t see any Glonass satellites, in spite of the device name, when using the GNSS Commander app. Garmin suggests Bluetooth GPS as the mocking app.
A Bluetooth link simplifies the cabling and frees up the USB for power. Now, will Bluetooth still work from the car roof? I am just placing the aerial on the dashboard while I make a magnetic mount.
The next test will be to see if it really makes a difference to the tracks.

I have heard mixed reports about the Glo, although that’s true for all other GPS units. I am hoping to test out a Bad Elf Lightning and a Dual X150 at some point.

You should check the dates of the reviews. It has been around since 2013 and all those little niggles have been fixed, such as it turning on in your pack and running down the battery.
I have not had any problems with it.
It does take more effort to mock the GPS in your phone, and you do need a couple of minutes of patience after a cold start. The app suggested by Garmin for mocking is Bluetooth GPS Provider. This has a quality setting that should settle down to 2 to indicate SBAS is working; a value of 1 means no SBAS.
But my resulting coordinates are much better than the phone alone.
No need to adjust.
Because the connection is Bluetooth I cannot put the Glo on the car roof for the best view, so I have just placed it on the dashboard.

Ah, you must be on Android. On iOS, apps pick up certified external GPS units without a mock GPS input. Specific arrangements are needed to receive raw data, though.

I use a GNSS receiver with an antenna connector, so the receiver is inside the car and the antenna is on the roof.
At first I used a NavSpark GL, and now a u-blox NEO-M8T.
As you can see below, it’s a little DIY; I never took the time to build a real PCB and an enclosure.


I’m starting to use a similar configuration, i.e. picture acquisition with the Mapillary app using the smartphone’s (Android) internal GPS. In parallel, I record raw data with a dual-frequency multi-constellation GNSS receiver (u-blox F9P) and an external antenna on the roof of the car. Back home, I do RTK post-processing on the GNSS data to get a centimetric-accuracy track. Now I want to use it to improve the positioning of the pictures, but I am facing difficulties with time synchronisation. With one smartphone, I found a bias of 18 s, which could be GPS_time - UTC. With a second smartphone, it is about 14 s. It is a pity to have to do a manual adjustment, losing the advantage of centimetric positioning.
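If the 18 s bias really is the GPS-UTC offset, the correction is a constant subtraction. A sketch (18 s is the published GPS-UTC leap-second offset since the start of 2017; the 14 s bias on the second phone does not match it, so this is only a hypothesis about the cause):

```python
from datetime import datetime, timedelta

# GPS time runs ahead of UTC by the leap seconds accumulated since the GPS
# epoch (1980); the offset has been 18 s since 1 January 2017. If a device
# stamps GPS time while the track is in UTC, subtracting the offset aligns
# them. NOTE: 18 is a constant valid at the time of writing, not something
# that can be computed; it changes whenever a leap second is added.
GPS_UTC_OFFSET_S = 18

def gps_to_utc(gps_time: datetime) -> datetime:
    return gps_time - timedelta(seconds=GPS_UTC_OFFSET_S)

print(gps_to_utc(datetime(2019, 6, 1, 12, 0, 18)))  # -> 2019-06-01 12:00:00
```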

We really need precise time stamping of pictures, ideally with a correct GPSTimeStamp tag.


I have had even less success with synchronising by time. The first issue is that the resolution in the EXIF header is only to the nearest second, which is about +/-13 metres at 50 km/h. I then noticed that the file timestamp has milliseconds, which might be better, but there is a large disconnect between the two times and it is not constant. I have never got to the bottom of which time is most correct for the photo; it appears that images can be buffered for several seconds. I added a Bluetooth GPS with WAAS corrections, which puts even more load on Android’s poor real-time processing, so this is not unexpected: my phone overheats trying to keep up.
I have recently purchased a dashcam from Uniden that has a built-in GPS and records the coordinates inside the video. This is another challenge to extract, but possible with some free software. I note with approval that they only stamp the video once per second, which is the maximum rate of a GPS signal without interpolation by the device. But no WAAS correction now: it has been turned off after a trial in New Zealand and Australia while a new contract and satellite are found.

Maybe I’m expecting too much in terms of real time from Android. :sweat_smile:

But the challenge is there: we are starting to use centimetric GNSS receivers. How can we get pictures with the corresponding accuracy? Used in conjunction with Mapillary’s orientation determination, we could use them for precise positioning of objects in OSM. But if driving at 10 m/s (36 km/h), we would need a 1/1000th of a second time stamp, although one might accept less; 1/100th could be a good starting point… In my case it is even more difficult because I only get the centimetric accuracy afterwards.
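The arithmetic above as a one-liner (the function name is mine):

```python
# Timestamp resolution needed to match a target positional accuracy:
# dt = accuracy / speed.
def required_timestamp_resolution_s(accuracy_m: float, speed_ms: float) -> float:
    return accuracy_m / speed_ms

print(required_timestamp_resolution_s(0.01, 10.0))  # 1 cm at 10 m/s -> 0.001 s
print(required_timestamp_resolution_s(0.10, 10.0))  # 10 cm at 10 m/s -> 0.01 s
```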

Orientation is not recorded by the Mapillary app on my phone, even though the phone has an internal compass. I wrote a post-processing Python script that derives the orientation by looking back from the previous photo and then updates the EXIF. I wonder if it is worth the bother: Structure from Motion needs the orientation, but it probably creates its own estimate of the location and direction of the photo from the objects in the view anyway. It does not seem to matter that the location is a bit rough.
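The script itself is not published here, but the core of the ‘look back at the previous photo’ idea is the standard initial-bearing formula between two lat/long points (the function name is my own; writing it into the EXIF GPSImgDirection tag is a separate step):

```python
from math import radians, degrees, sin, cos, atan2

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, 90 = east) from point 1 to 2.

    Applied to (previous photo, current photo) this approximates the
    direction of travel, and hence the camera heading for a forward-facing
    camera.
    """
    phi1, phi2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    x = sin(dlon) * cos(phi2)
    y = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlon)
    return degrees(atan2(x, y)) % 360

print(round(bearing_deg(0.0, 0.0, 0.0, 1.0)))  # due east -> 90
```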

Have you done any tests on the location of objects? That is the real test of the final result. Now that we all have access to the object database, that might show whether the improved calibration of the photos is necessary.

How is it that the track weaves from side to side but never goes backwards? It must be some sort of adjustment done by the software, calculating a ‘running fix’ such as I learnt to do with celestial navigation at boat speeds. This suggests that the time and coordinates recorded by the phone have already been adjusted, which adds an apparent error in the time.

With my first GPS, corrections were not hidden. Selective Availability was still in force, so the only way to get a fix better than a few hundred metres was to wait a few minutes at each point and average the readings. When SA was switched off it was better, but still a random walk around each point. Nothing has changed in the GPS signal, so the devices must be trying to pretend that the coordinates are better than they could possibly be, especially when moving, when no averaging is possible; hence my suspicion of the running fix. The faster you go, the smoother the track gets around corners, yet I have to drive a right angle at intersections.

I think using video at 30 frames per second is the only way to improve the time resolution.

Thanks both for your answers. So, in a long-term perspective, it may be useful to store the GPSDOP.

I was also considering using a more precise interpolation, such as a cubic spline, at least when recording from a car, and using it to calculate the direction as well.
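As a sketch of that idea, a Catmull-Rom segment is a simple stand-in for a full cubic spline: it passes through the track points and has a continuous tangent that can double as the direction estimate (uniform time steps between fixes are assumed here for brevity):

```python
# p0..p3 are four consecutive fixes of one coordinate (lat or lon);
# t in [0, 1] interpolates between p1 and p2.
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation of one coordinate between p1 and p2."""
    return 0.5 * ((2 * p1)
                  + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (3 * p1 - 3 * p2 + p3 - p0) * t * t * t)

def catmull_rom_tangent(p0, p1, p2, p3, t):
    """Derivative w.r.t. t, usable for a direction-of-travel estimate."""
    return 0.5 * ((p2 - p0)
                  + 2 * (2 * p0 - 5 * p1 + 4 * p2 - p3) * t
                  + 3 * (3 * p1 - 3 * p2 + p3 - p0) * t * t)

# The curve passes through the inner control points:
assert catmull_rom(0, 1, 2, 3, 0) == 1
assert catmull_rom(0, 1, 2, 3, 1) == 2
```

Applying the tangent to both lat and lon at each image time gives a direction vector, which is the same idea as taking the tangent of the route in the linear-referencing approach above.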

For action cameras, synchronisation could still be an issue. Do you have any experience showing precise time synchronisation without manual adjustment?