I have experimented with ‘mocking’ the internal GPS in a Samsung Note 8 phone with an external GPS that has an external aerial. This clearly gives a better view of the sky than a phone on a car windscreen, and the external unit can also apply free corrections such as SBAS (WAAS or EGNOS).
My ‘old’ Garmin 78s can do this with a complex set of USB cables, settings, helper apps, and developer phone settings, but it is not very robust. It needs a phone capable of OTG, and with that special adaptor the phone becomes the power supplier! So I had to add a USB splitter with a separate power supply to feed the Garmin GPS - I actually found one from a very old external hard drive with a mini connector. The serial link runs at 4800 baud. The phone cannot be charged through the OTG plug, and there are too many plugs that vibrate loose. (Industrial or military grade connectors would solve that problem.) I could publish a HOWTO if there is interest, but I am not continuing with it myself.
An option might be to find a better GPS with Bluetooth but still an external aerial and SBAS.
But I haven’t found one yet. The cheap one from Amazon (GlobalSat ND-105C Micro USB GPS Receiver) has neither Bluetooth nor SBAS processing, although it claims to be SBAS capable. It needs the same complex cables and app setup, has no external aerial, and would need a very long USB cable to place it on the car roof.
Edit: I have found a Bluetooth GPS with GLONASS and SBAS corrections - the Garmin GLO - and am testing it now. The same mocking hacks are still required.
So I am back to post-processing. If I get a second camera that does not have a GPS I would have to post-process anyway. But here’s the problem - the times recorded for the images are all over the place! Why?
Perhaps Mapillary could help here with adding more EXIF tags? Such as GPS Capture Time? Which timestamp does Mapillary use themselves?
It turns out that recreational GPS units are only set up to return a fix every second. This is partly down to the design of the GPS signal: the spreading code repeats every millisecond (1,000 times a second), so the receiver can stack many repetitions and recover a reliable data block from a very weak signal. Any sub-second fixes must therefore be extrapolated from recorded headings and velocities, and perhaps a gyroscope in the roving unit.
But photos are not taken on whole seconds, so the irritating habit of storing the EXIF time to the nearest second builds in an uncertainty larger than the GPS accuracy itself. Is that why the file name has milliseconds?
Even worse, the filename timestamp and the EXIF timestamp differ by several seconds - which to believe? The difference is not even constant. It’s not just calibrating the first image at the start (which is hard enough!); it is the time tolerance around each image that gives a distance error of about 20 metres when comparing two sources based on timestamps alone.
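The offset can at least be measured per image by parsing both clocks. A minimal pure-Python sketch - the millisecond filename pattern and the sample timestamps here are invented for illustration, not Mapillary’s documented format:

```python
from datetime import datetime

def filename_epoch(name):
    # Hypothetical filename pattern "YYYY_MM_DD_HH_MM_SS_mmm.jpg" -> epoch seconds (float)
    stem = name.rsplit(".", 1)[0]
    return datetime.strptime(stem, "%Y_%m_%d_%H_%M_%S_%f").timestamp()

def exif_epoch(s):
    # EXIF DateTimeOriginal is "YYYY:MM:DD HH:MM:SS" - whole seconds only
    return datetime.strptime(s, "%Y:%m:%d %H:%M:%S").timestamp()

# Sample values: the two clocks disagree by nearly three seconds here
offset = filename_epoch("2018_06_01_12_34_56_789.jpg") - exif_epoch("2018:06:01 12:34:54")
print(round(offset, 3))
```

Logging this offset for a whole sequence would at least show whether the drift is systematic or random.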
Would this be avoided using real-time corrections? I’m beginning to wonder if this can be done on a phone, but Trimble do have a solution. I can see why RTK corrections are so hard to do while moving. If you are standing still none of this is a problem!
I am attempting the adjustment using Linear Referencing.
The GPS track is converted into a route polyline with XYM point values. The m-values are decimal seconds since the epoch; I used a temporary origin of midnight to reduce the number of digits. Each route has an ID so there is no confusion between multiple tracks along the same road, even if they were recorded simultaneously. The measures are not linear with distance: the time along the route varies with speed, so there need to be enough points to model those changes in speed. Generalisation of the track is therefore a bad idea - my Garmin has a setting to minimise it.
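A sketch of that conversion in Python - the fix tuples and the midnight origin are invented sample data; a real script would read them from a GPX file:

```python
from datetime import datetime, timezone

# Invented sample fixes: (ISO timestamp, lon, lat)
fixes = [
    ("2018-06-01T10:00:00+00:00", -1.50000, 53.00000),
    ("2018-06-01T10:00:05+00:00", -1.50010, 53.00004),
]

# Temporary origin at midnight of the recording day keeps the M values small
midnight = datetime(2018, 6, 1, tzinfo=timezone.utc).timestamp()

def to_xym(iso, x, y):
    # One XYM vertex: (lon, lat, decimal seconds since midnight)
    t = datetime.fromisoformat(iso).timestamp()
    return (x, y, t - midnight)

route = [to_xym(*f) for f in fixes]
print(route[1])  # (-1.5001, 53.00004, 36005.0)
```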
Now if we create an ‘event table’ of the images and calculate the epoch seconds of each one, each image can be placed with high precision along the GPS route. The image time does not have to coincide with a GPS point record; it is automatically interpolated, and may even fall on a curve. The measure is then converted back into the lat/long at that point. It all depends on very precise timing.
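The interpolation step can be sketched in plain Python. This is straight linear interpolation between neighbouring XYM fixes, treating lat/long as planar over these short segments - real LRS tools interpolate along the true route geometry. The track values are invented sample data:

```python
from bisect import bisect_left

# Invented XYM track: (m, x, y), m = decimal seconds since a midnight origin
track = [
    (0.0,  -1.50000, 53.00000),
    (5.0,  -1.50010, 53.00004),
    (10.0, -1.50030, 53.00006),
]

def locate(track, m):
    """Interpolate (x, y) along the route at measure m (seconds)."""
    ms = [p[0] for p in track]
    i = bisect_left(ms, m)
    if i == 0:                       # before the first fix: clamp
        return track[0][1], track[0][2]
    if i == len(track):              # after the last fix: clamp
        return track[-1][1], track[-1][2]
    (m0, x0, y0), (m1, x1, y1) = track[i - 1], track[i]
    t = (m - m0) / (m1 - m0)         # fraction of the way through the segment
    return x0 + t * (x1 - x0), y0 + t * (y1 - y0)

# An image taken at m = 7.3 s falls between the 5 s and 10 s fixes
print(locate(track, 7.3))
```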
The current Mapillary tools expect identical times to the nearest second.
I am using ArcGIS, but I see that QGIS has an excellent LRS plugin that does the same process.
Linear Referencing is also supported in Spatialite. You could do all the processing in a few SQL expressions after loading the data into Spatialite using Python.
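A sketch of what that SQL might look like - the `routes` and `images` tables and their columns are hypothetical, and it assumes Spatialite’s `ST_Locate_Along_Measure` function on an XYM LINESTRING:

```sql
-- For each photo, find the point on its route whose M value
-- equals the photo's epoch-seconds, then read back lon/lat.
SELECT i.filename,
       ST_X(ST_Locate_Along_Measure(r.geom, i.m)) AS lon,
       ST_Y(ST_Locate_Along_Measure(r.geom, i.m)) AS lat
FROM images AS i
JOIN routes AS r ON r.route_id = i.route_id;
```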
The black numbers are hatches placed every 5 seconds (not the GPS track points); they are not evenly spaced because I slow down turning in the cul-de-sac. The green dots are the times of each image, nominally every second, corrected to the measure of the recorded photo time. The lat/long in the image is then replaced with the better one, and the image direction can also be derived from the tangent of the track at that point. (I didn’t show the aerial imagery, for clarity - I had to weave around parked cars.)
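The tangent-to-bearing step can be sketched from two neighbouring points on the track. This is a small-segment planar approximation with invented coordinates; over a few metres it is well within the accuracy needed for an image heading:

```python
import math

def bearing(x0, y0, x1, y1):
    """Approximate compass bearing (degrees, 0 = north) from point 0 to point 1,
    treating lon/lat as planar over a short segment."""
    # Shrink the longitude difference by cos(latitude) so dx and dy
    # are in comparable ground units
    dx = (x1 - x0) * math.cos(math.radians((y0 + y1) / 2))
    dy = y1 - y0
    return math.degrees(math.atan2(dx, dy)) % 360

# Invented sample: moving due east along a line of constant latitude
print(round(bearing(-1.5000, 53.0000, -1.4999, 53.0000), 1))  # 90.0
```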