I use mogrify under Linux to reprocess JPGs extracted from (BlackVue) MP4s. I have found that ffmpeg extraction at 90% (or so) quality produces more artefacts than extracting at 99% and then running a mogrify step down to 85%. I suggest that 75% is a little low.
At one stage I also found that windscreen tinting caused a colour shift on my old capture system. I rectified that with -channel RGB before -normalize. I did, however, fall into a trap with that, as I had an annotation strip along the base of each picture, which got included in the histogram calculation!
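Roughly, in script form that step looks like this (the directory name and quality value are only examples, and the subprocess wrapper is just one way to invoke a plain mogrify call):

```python
# Shell equivalent: mogrify -channel RGB -normalize -quality 85 *.jpg
import subprocess
from pathlib import Path

frames = sorted(str(p) for p in Path("frames").glob("*.jpg"))  # extracted JPGs
if frames:
    # Note: -normalize uses the whole frame's histogram, so crop off any
    # annotation strip first or it will skew the stretch.
    subprocess.run(
        ["mogrify", "-channel", "RGB", "-normalize", "-quality", "85"] + frames,
        check=True,
    )
```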
Oh, and in the Linux version (I don’t know about Windows), mogrify only runs on one CPU of an SMP machine (e.g. an i5/i7). I just shelled out 8 concurrent processes with different wildcards and the job completes much faster.
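Something like this keeps all the cores busy (the frame-naming scheme and the number of buckets are assumptions; any split that gives each process its own set of files will do):

```python
# One mogrify process per wildcard bucket, run side by side.
import glob
import subprocess

procs = []
for digit in "01234567":
    files = glob.glob(f"frames/frame_{digit}*.jpg")  # assumed naming scheme
    if files:
        procs.append(subprocess.Popen(
            ["mogrify", "-channel", "RGB", "-normalize", "-quality", "85"] + files))
for p in procs:
    p.wait()  # wait for all eight workers to finish
```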
You are right, 85% is slightly better. I used to use that in combination with rescaling to 1920px wide, but full resolution beats the quality gain, so I switched to 75% without scaling. Uploading the original images would take far too much time; even now it takes almost a day for 30,000 images. I also use a Python script with 8 threads working on a queue of image directories, but I guess this would exceed the abilities of most users.
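Not the script itself, but the shape of it is simple enough; a stripped-down sketch (the directory layout, quality and thread count here are only examples):

```python
# 8 worker threads pulling image directories off a queue and recompressing each.
import queue
import subprocess
import threading
from pathlib import Path

dirs = queue.Queue()
for d in sorted(Path("captures").iterdir()):  # "captures" is a placeholder root
    if d.is_dir():
        dirs.put(d)

def worker():
    while True:
        try:
            d = dirs.get_nowait()
        except queue.Empty:
            return
        files = [str(p) for p in d.glob("*.jpg")]
        if files:
            subprocess.run(["mogrify", "-quality", "75"] + files, check=True)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```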
I have never seen any docs on the accuracy/reliability of object/sign detection as related to image distortion. I know that 3/4-rear- or front-facing vehicle cameras with a slow raster rate can skew objects enough that the 3D reconstruction fails, but I have seen nothing equivalent for compression quality. I just chose 85% as being a common standard of sorts!
The mapillary tools ffmpeg extract command line also has an error of sorts. A quality figure of “-qscale 1” is set, but unless an extra -qmin 1 switch is added, the encoder’s minimum defaults to 2 (or roughly 90% JPEG quality), so the value is silently clamped.
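If you extract by hand rather than through the tools, the workaround looks roughly like this (the input name, sample rate and output pattern are placeholders):

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", "fps=1",        # one frame per second; adjust to taste
    "-qscale:v", "1",      # ask for the best JPEG quality...
    "-qmin", "1",          # ...and lower the encoder minimum so it is not clamped to 2
    "frame_%06d.jpg",
], check=True)
```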
Speed of upload isn’t an issue for me. I connect a full 1TB drive (maybe 1.2 million images) to a remote-access server every 3-4 months, then run the upload job unattended/remotely overnight for a few weeks.
I’d like to point out that the most important thing in using ImageMagick is the “normalize” option; it makes the photos look much better in contrast and colours.
Has anybody tried editing the camera type in the Mapillary description? My images from a Mi 70mai and a Lukas dash camera are uploaded with no camera type. I uploaded from the 70mai to Mapillary using 2 scripts: mappillary_upload - Google Drive
As mentioned, “normalise” will help with that, but there could be a major loss of shadow/backlit detail if pointing into the sun. I guess the photo could be analysed, and maybe even the time/direction could be used, but it may not be worth it.
There are a number of batch-type image-processing GUI programs around. One that comes to mind is IrfanView (Windows). It can normalise or contrast-stretch entire folders of images.
A lens hood may also be worth making/fitting, to keep the sun off the lens. A wide-angle lens may limit that, though.
I will check whether it will work for other cameras… Right now I have updated the make and model in an image with exiftool and it worked when Mapillary processed it.
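A batch version of that with exiftool might look like the sketch below (the make/model strings and the directory are examples; whatever your camera actually is goes in those two tags):

```python
import subprocess

subprocess.run([
    "exiftool",
    "-overwrite_original",      # do not leave *_original backup copies behind
    "-Make=70mai",              # example make
    "-Model=Dash Cam Pro",      # example model
    "-ext", "jpg",
    "captures/",
], check=True)
```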
The current version of ImageMagick destroys EXIF data, so I switched to Pillow and Python, which is also faster. Feel free to use this tiny script to optimize JPEGs in a given directory to speed up the uploading a bit:
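In outline, such a script does something like this (the quality value, the *.jpg glob and the in-place overwrite are illustrative choices):

```python
# Re-save every JPEG in a directory at a lower quality while carrying the
# original EXIF block across.
import sys
from pathlib import Path
from PIL import Image

def optimize_dir(directory, quality=75):
    for path in sorted(Path(directory).glob("*.jpg")):
        img = Image.open(path)
        img.load()                      # read the data so the file can be rewritten
        kwargs = {"quality": quality, "optimize": True}
        exif = img.info.get("exif")
        if exif:
            kwargs["exif"] = exif       # keep the EXIF bytes Pillow read from the file
        img.save(path, "JPEG", **kwargs)

if __name__ == "__main__":
    optimize_dir(sys.argv[1] if len(sys.argv) > 1 else ".")
```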