Thank you for the additional info; I really appreciate you taking the time. As you correctly guessed, I was asking because I would favor image mode.
You can set time lapse (called interval) in image mode, but I think it’s not really meant for the Mapillary use case, as the intervals are 5, 8, 10, 15, 20 seconds. I think you also then need to run a stitch on the camera (whereas video is stitched automatically).
A minimum interval of 5 seconds is a bummer. Besides, we live in the age of computers and integrated timing circuits, so it is beyond my understanding why software developers force hard-coded intervals on users. I am aware that there is certainly a minimum hardware limit, but for goodness’ sake, why patronize and limit users? I am also pretty sure that both sensors can capture at well under a second per frame and that internal storage can take the data at the required speed.
Anyway, IMHO for Mapillary’s use case we would need the camera to capture at about 1 to 2 fps in image mode and do the stitching after the user ends time-lapse (interval) mode. Image-mode quality does look better than video mode: you can definitely see fewer digital artifacts and more detail. But I am not sure about sensor noise. Did you set ISO 100 manually in the example?
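To put the 1 to 2 fps proposal in perspective, here is a quick back-of-envelope sketch. The ~8 MB per stitched 8K JPEG is my assumption, not a measured number for this camera:

```java
// Back-of-envelope check: sustained 1 fps image capture is easily
// within modern storage bandwidth. The 8 MB/frame figure is an
// assumed size for a stitched 8K JPEG, not a measured value.
public class CaptureBudget {
    // Total frames captured over a drive of the given length.
    static long framesFor(int fps, int minutes) {
        return (long) fps * minutes * 60;
    }

    // Total data volume in GB for that many frames.
    static double gigabytesFor(long frames, double mbPerFrame) {
        return frames * mbPerFrame / 1024.0;
    }

    public static void main(String[] args) {
        long frames = framesFor(1, 10);         // 10-minute drive at 1 fps
        double gb = gigabytesFor(frames, 8.0);  // assumed 8 MB per frame
        System.out.printf("%d frames, %.1f GB total, %.0f MB/s sustained%n",
                frames, gb, 8.0 * 1);           // 8 MB/s write rate at 1 fps
    }
}
```

Even at 2 fps the required sustained write rate would only be around 16 MB/s, which any modern internal storage should handle with ease, so the 5-second floor really does look like a software choice rather than a hardware one.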
I think you also then need to run a stitch on the camera (whereas video is stitched automatically).
…note that this does take quite a while to capture…
Can you also comment on how long stitching takes in image mode? Can stitching be offloaded or time-shifted to the smartphone app? Is capture speed in image mode limited by stitching speed?
If both sensors are exposed as two separate cameras in Android’s Camera API, then developing an app for Mapillary’s specific use case should not be that hard. Otherwise, we are left hoping for the best from LabPano.
Video mode is I think what we’d want to use (which offers 8K at 1, 2, 5 FPS and 5.7K at 1, 2, 3, 4, 7 FPS)
I understand your reasoning, but it sounds rather like a workaround for Mapillary’s use case, which oddly enough is basically identical to “Google Street View” mode; apparently LabPano was unable to get that right either. “Google Street View” mode should have been what I described above. We do not care about instant stitching in this mode; we care about 1. image quality, 2. timing, and 3. GPS data. Stitching is a post-processing step in this use case.
Overall, this camera has some great potential, but the software needs to be more tailored to Mapillary’s use case. Frankly, I was considering buying this camera (over any GoPro 360 or Insta) just for Mapillary, which is a big commitment for a single use case. I am not interested in vlogging or the like. But image quality alone will not win the game for me here; workflow and software matter too.
Btw, the example’s timestamp is Jan 1st, 1970 (the Unix epoch). Do you have any clue why that happens?
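For what it’s worth, a Jan 1st, 1970 timestamp is the classic symptom of a zeroed clock: the firmware wrote 0 seconds since the Unix epoch into the file’s metadata, most likely because the camera had no time source (RTC battery or GPS fix) when the file was created. That this is merely my guess at the cause, but the mapping from 0 to that date is easy to see:

```java
import java.time.Instant;

// Epoch second 0 is exactly midnight UTC on January 1st, 1970 --
// the date that shows up whenever a device stamps files with an
// uninitialized (zero) clock value.
public class EpochDemo {
    public static void main(String[] args) {
        Instant unset = Instant.ofEpochSecond(0);
        System.out.println(unset); // prints 1970-01-01T00:00:00Z
    }
}
```

If the camera only sets its clock from a GPS fix, capturing before the first fix arrives would explain epoch-dated files.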