YI 360 VR camera REVIEW

As announced, I now own a YI 360 VR camera.

I did write “review” in the subject, but bear with me for a bit. I am quite busy with work right now, but when I find the time I’ll do some tests, upload images and share them here!

Just walked the dog (and I couldn’t resist… :wink: )

First, the image settings: it has a timer, NOT an interval setting! (a small minus…)

But it can take video in interval mode. So when I walked the dog I did a first test at 4K.

With this command to get the separate images (source):
ffmpeg -i video.mp4 -vsync vfr -qscale:v 1 RFile_%3d.jpg

Next I need to get working on my “geo tag assistant” to correctly match my gpx track to the images…

But I’ll post an image here:

This image is “only” 7.3MP

Compare it to the image I took (also in an evening) with the LG360 (16MP!)

(yes the fence is gone now :P)

First impression:
Even though the resolution is half, I’m falling in love with this camera, because (compared to the LG360):

  • the stitching is much, much better
  • the colors are much, much fresher
  • dynamic range seems better

One of the barriers to geotagging seems to be that many software tools only work at one-second granularity, e.g. “gpscorrelate” is only really usable at 1 fps or slower. The picture record time may be correct in the EXIF, but when/if a GPS time is inserted it only has one-second resolution, so a 2 fps stream ends up with two sequential frames at each location.

The mapillary_tools write a special EXIF tag, though, and by default don’t actually write the GPS EXIF tags. If specified, they write lat and lon only.

I’d suggest also trying the tools in video_process mode for your experiments. The video start time can be specified in absolute terms to make gpx matching happen. If you’d like some command syntax ideas shout!

I am working on something to fix that “snap to grid” problem (almost every vector program has such a function, with a few exceptions… I hate the moment I hit the shortcut key that activates it and always have to look up how to disable it again).

I read up on GPX formatting:
1901-12-13T20:45:52.2073437Z
That should be the way for me to go in my “assistant”. I have also written/updated a perl script to match GPX tracks to images.

Even with an interval of once every two seconds I noticed this “granularity problem”. The solution I thought of is reducing (!) the number of GPX points by smoothing the track via brouter, then matching the timestamps of the original GPX to that new track (down to the millisecond!), and then matching the new GPX track to the numbered images (which have no useful date/time info because of the ffmpeg MP4 → JPG conversion).
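For what it’s worth, the timestamp-assignment step for the numbered ffmpeg frames can be sketched in a few lines. This is only a minimal sketch, assuming the video start time is known (from the GPX or the camera clock) and the frame interval is constant; the function name and the example values are made up for illustration:

```python
from datetime import datetime, timedelta

def frame_timestamps(start, interval_s, n_frames):
    """Assign a (sub-second) capture time to each numbered frame that
    ffmpeg extracted, given the video start time and the frame interval."""
    return [start + timedelta(seconds=i * interval_s) for i in range(n_frames)]

# e.g. a 2 fps recording starting at 12:36:53 exactly:
stamps = frame_timestamps(datetime(2019, 5, 21, 12, 36, 53), 0.5, 4)
# stamps[3] is 12:36:54.500000, i.e. the time for RFile_004.jpg
```

Each timestamp can then be looked up in the (smoothed) GPX track.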

If you have a good (better?) way to do this with mapillary_tools, that would save me quite a bit of time :wink:

OK, I also found that I could increase the bitrate to 120Mbps.

New image in 5.7K (16.6MP):

As expected, much better than the 4K (7.3MP)

Also I can see a bit of a stitching echo (a bit to the right of that brown pole, most visible with the tree top)
Not as dramatic as with my LG360, but still.

When I compare it to the LG360, the YI360’s resolution may be a bit higher, but I must admit that the detail is better with the LG360. Then again, I am comparing “MP4 image grabs” to separate stills from the LG360, which isn’t completely fair.

BUT the darn YI360 doesn’t have an image interval setting! Pressing the button on the camera, I have the distinct impression it can handle about one image every second (maybe even faster once I have my new micro SD card?)
Options:

  1. complain at YI
    I will! I mean come on, how hard can it be to add an interval option?
  2. hack the system like this Gear 360 hack?
    Gonna ask YI about that too: how can I disassemble the thing… warranty will be void… but hey, if you want something… :wink:
  3. use an alternative app
    I tried LinkinEyes (source). That one does seem to connect to the camera (it says ‘beep’), but that is about it…

And I tried the Mapillary app… and it sees the camera! It even says ‘beep’ when taking a picture: yay!… but only one, then I get this:


When I press OK, I can take another (and get the same error again, etc). After connecting the camera to the PC again, it really did take a picture!

I don’t mind the red screen (preview), but:

Mapillary team: can you ignore the error for the “YI 360”? (*)

Concluding:
The image quality of the MP4 grabs is nice, the colors are nice, the dynamic range is nice, the stitching is nice, but the detail doesn’t match the LG360. The massive benefit is that I can take these “nice pictures” at a rate of 1 fps, 2 fps or 25 fps (drive up to 450 km/h and still have 5 meters between each image :stuck_out_tongue: ). That is awesome!

I’ll complain at YI, but if the Mapillary team can get rid of that error, then I can test the quality of the separate images… and I think the results will be pleasing… so if you guys/girls have a moment…:innocent::smiling_face_with_three_hearts:

*) ‘YI 360’ is the exact name the Mapillary app identifies the camera with, maybe you can add an exception?
on (error) {
    if (cam_ID != 'YI 360') {
        do_error();
    }
}

It seems the EXIF standard does allow subsecond precision. The problem is that many software tools (and possibly libraries) only write whole seconds. exiftool and, I assume, exiv2 seem to work, though.
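For reference, EXIF keeps the fractional part in a separate tag (SubSecTimeOriginal) alongside DateTimeOriginal. A minimal sketch of how one could format both fields before handing them to, say, exiftool (the function name is my own):

```python
from datetime import datetime

def exif_datetime_fields(ts):
    """Split a sub-second timestamp into the two EXIF fields:
    DateTimeOriginal uses EXIF's 'YYYY:MM:DD HH:MM:SS' format (whole
    seconds only); SubSecTimeOriginal holds the fractional digits."""
    date_time = ts.strftime("%Y:%m:%d %H:%M:%S")
    sub_sec = f"{ts.microsecond:06d}".rstrip("0") or "0"
    return date_time, sub_sec

dt, ss = exif_datetime_fields(datetime(2019, 5, 21, 12, 36, 53, 500000))
# dt == "2019:05:21 12:36:53", ss == "5"  (i.e. 0.5 s)
```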

The problem, though, is that most GPS devices (i.e. NMEA sentences) output a fix every second, so interpolation between data points is needed, i.e. calculate the position between points; don’t modify the GPX itself. The mapillary_tools do this. From memory, the EXIF tag Exif.Photo.DateTimeOriginal is used and correlated against a GPS data file (i.e. NMEA, GPX), and it all just works. There is a switch to use if the EXIF timestamp is in local time rather than UTC.
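The interpolation described above amounts to a simple linear blend between the two surrounding fixes. A minimal sketch (not how mapillary_tools implements it internally; linear interpolation of raw lat/lon is fine at walking or driving distances):

```python
def interpolate_fix(t, fixes):
    """Return a (lat, lon) at time t, linearly interpolated from a
    time-sorted list of (time_s, lat, lon) 1 Hz GPS fixes.
    The track itself is never modified."""
    for (t0, lat0, lon0), (t1, lat1, lon1) in zip(fixes, fixes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0)
    raise ValueError("time outside the track")

# Two fixes one second apart; a 2 fps frame at t=0.5 lands halfway:
track = [(0.0, 52.0000, 5.0000), (1.0, 52.0001, 5.0002)]
# interpolate_fix(0.5, track) is approximately (52.00005, 5.0001)
```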

There are of course a number of ways to do this. Look at the sample_video and video_process subcommands. If you use sample_video, a process step will be needed after that. The README.md has some examples.

Full command list for video_process:
mapillary_tools video_process --advanced --help
[-h] [--advanced] [--version] [--verbose] [--import_path IMPORT_PATH]
[--rerun] --user_name USER_NAME
[--organization_username ORGANIZATION_USERNAME]
[--organization_key ORGANIZATION_KEY] [--private] [--skip_subfolders]
[--video_import_path VIDEO_IMPORT_PATH]
[--video_sample_interval VIDEO_SAMPLE_INTERVAL]
[--video_duration_ratio VIDEO_DURATION_RATIO]
[--video_start_time VIDEO_START_TIME] [--master_upload]
[--device_make DEVICE_MAKE] [--device_model DEVICE_MODEL]
[--add_file_name] [--exclude_import_path] [--exclude_path EXCLUDE_PATH]
[--windows_path] [--add_import_date] [--orientation {0,90,180,270}]
[--GPS_accuracy GPS_ACCURACY] [--camera_uuid CAMERA_UUID]
[--geotag_source {exif,gpx,gopro_videos,nmea,blackvue_videos}]
[--geotag_source_path GEOTAG_SOURCE_PATH] [--local_time]
[--sub_second_interval SUB_SECOND_INTERVAL] [--offset_time OFFSET_TIME]
[--offset_angle OFFSET_ANGLE] [--use_gps_start_time]
[--cutoff_distance CUTOFF_DISTANCE] [--cutoff_time CUTOFF_TIME]
[--interpolate_directions] [--keep_duplicates]
[--duplicate_distance DUPLICATE_DISTANCE]
[--duplicate_angle DUPLICATE_ANGLE] [--skip_EXIF_insert]
[--keep_original] [--overwrite_all_EXIF_tags]
[--overwrite_EXIF_time_tag] [--overwrite_EXIF_gps_tag]
[--overwrite_EXIF_direction_tag] [--overwrite_EXIF_orientation_tag]
[--summarize] [--move_all_images] [--move_duplicates] [--move_uploaded]
[--move_sequences] [--save_as_json] [--list_file_status]
[--push_images] [--split_import_path SPLIT_IMPORT_PATH]
[--save_local_mapping] [--custom_meta_data CUSTOM_META_DATA]

And for sample_video:
mapillary_tools sample_video --advanced --help
usage: see -h for available tools and corresponding arguments, add --advanced to see additional advanced tools and/or arguments and --version to see version. sample_video
[-h] [--advanced] [--version] [--verbose] [--import_path IMPORT_PATH]
--video_import_path VIDEO_IMPORT_PATH
[--video_sample_interval VIDEO_SAMPLE_INTERVAL]
[--video_duration_ratio VIDEO_DURATION_RATIO]

If you want to run the sampling manually with ffmpeg, the EXIF tag Exif.Photo.DateTimeOriginal has to be written before running “process” in the tools. I have done this with an interval touch-and-write bash script (using jhead); shout if you want a copy of the code.

Omitted some sample_video commands. Note the video start time: this is essentially the time of the first video frame. Just point the camera at the GPS device display, then delete those “bad images” after process but before upload.

usage: see -h for available tools and corresponding arguments, add --advanced to see additional advanced tools and/or arguments and --version to see version. sample_video
[-h] [--advanced] [--version] [--verbose] [--import_path IMPORT_PATH]
--video_import_path VIDEO_IMPORT_PATH
[--video_sample_interval VIDEO_SAMPLE_INTERVAL]
[--video_duration_ratio VIDEO_DURATION_RATIO]
[--video_start_time VIDEO_START_TIME] [--skip_subfolders]

optional arguments:
  -h, --help            show this help message and exit
  --advanced            Use the tools under an advanced level with additional
                        arguments and tools available.
  --version             Print mapillary tools version.
  --verbose             print debug info
  --import_path IMPORT_PATH
                        path to your photos, or in case of video, path where
                        the photos from video sampling will be saved
  --video_import_path VIDEO_IMPORT_PATH
                        Path to a video or directory with one or more video
                        files.
  --video_sample_interval VIDEO_SAMPLE_INTERVAL
                        Time interval for sampled video frames in seconds
  --video_duration_ratio VIDEO_DURATION_RATIO
                        Real time video duration ratio of the under or
                        oversampled video duration.
  --video_start_time VIDEO_START_TIME
                        Video start time in epochs (milliseconds)
  --skip_subfolders     Skip all subfolders and import only the images in the
                        given directory path.

I think I have to create something myself… On to testing with images only:

At the highest resolution, the camera can’t stitch inside the camera, I need the Windows software. The stitching software removes all the EXIF data… So I need to read the original file for the EXIF data, match it to the GPX track and add it to the stitched image… (can mapillary tools do that? with wildcards?)

Example: original file: YITL_0029360.JPG (which has the needed EXIF data)
stitched file: YITL_Stitch0029_360_190521123653_360.JPG (no EXIF data whatsoever)
(the numbers at the end are a sort of serial number, no time in there…)
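Based on those two filenames, a stitched output can be paired back to its EXIF-carrying original via the shared four-digit serial. A sketch only; the regexes are an assumption derived from just these two examples and may need tuning for other firmware naming:

```python
import re

def match_stitched_to_original(stitched, originals):
    """Find the original (EXIF-carrying) file for a stitched image by
    the shared 4-digit serial number in both filenames."""
    m = re.search(r"Stitch_?(\d{4})_", stitched)
    if not m:
        return None
    serial = m.group(1)
    for name in originals:
        # original pattern like YITL_0029360.JPG: prefix + serial + digits
        if re.fullmatch(rf"YITL_{serial}\d*\.JPG", name):
            return name
    return None

# match_stitched_to_original("YITL_Stitch0029_360_190521123653_360.JPG",
#                            ["YITL_0029360.JPG"]) -> "YITL_0029360.JPG"
```

Once the pair is found, the EXIF can be copied over with exiv2 or exiftool.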

I’ll upload a sample image:

This image quality easily matches the LG360! (excuse the mess, the house and garden are a bit of a project :stuck_out_tongue: ) Now I can see grass, not just a green blur.

What EXIF data do you want to keep? The tools only need the date/time tag as mentioned above. Then it can merge in the GPS/gpx data.

Exiv2 (the Windows version uses cygwin) allows all the EXIF data to be read from one file and written to another. It is also useful for new/modified tags. There are bound to be other Windows programs that do this.

The tools can parse through a folder of images and insert the (interpolated) GPS/gpx data based on the date/time EXIF tag.

EXIV2:
I did a quick look at their doc page: https://www.exiv2.org/manpage.html

I get the feeling it has a problem with microseconds? And wow, what a list of options… I think you understand what I want/need to do, but I suspect I’ll be faster altering the Perl software I shared earlier than figuring all of this out.

QUALITY COMPARISON YI360 vs LG360
I took a look at more images I have taken with the LG360. I more or less accidentally picked a good image to compare against, but looking with a more critical eye, I see that the LG has a lot of difficulty with a lower sun and with less daylight. The first few tests give me the impression that the image quality of the YI is much more stable… So in movie-grab mode the quality may be less than a good LG image, but overall the quality of the YI seems much more consistent…

Next up is working on a good workflow to get the best possible EXIF data into the images.
Next to that, I really hope the Mapillary team can get rid of the error I’m getting with the app. Then I can do tests with “image mode”. And really get to work :wink:

[update]
NEW MICRO SD CARD
Did some experimenting with my new SanDisk Extreme PRO 128GB memory card. With a timer going off every second, it missed quite a lot of images. At 2 seconds it snapped all the 5.7K (16.6MP) images. A setting of 1.5 felt a bit unstable, and it worked well at 1.75 seconds. All this was done manually (beep, press button :wink: ), so the actual value will probably be around 1.5 to 1.7 seconds per image.
This means that when an “interval setting” becomes workable, this camera is fast enough for taking pictures when walking, and when cycling not too fast (at 12 km/h it’ll be around 5 meters per image).
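The distance-per-image figure follows directly from speed and interval; a one-line sanity check:

```python
def metres_per_image(speed_kmh, interval_s):
    """Metres travelled between two shots: km/h -> m/s, times the interval."""
    return speed_kmh / 3.6 * interval_s

# Cycling at 12 km/h with the ~1.5 s the camera manages per image:
# metres_per_image(12, 1.5) is about 5 m between images
```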

So for now I’m stuck with video mode… That gives me time to get to a good workflow processing an MP4.

But ultimately I of course want “the best”… and separate images have a higher quality than MP4 grabs (obviously), so I’d prefer an “image interval mode”.

I am very (very) surprised that you find LG’s resolution/detail better. Considering how much the Yi costs, and that they did a pro camera as well, I was expecting much better. Maybe it’s the same as with Gear MK1 - the older cameras had plain better sensors?
Could also be the issue of most modern 360 cameras focusing on video instead of photo

I did compare MP4 grabs with LG images! And I did say the separate images are much better! And the focusing, freshness and stitching are better. Just did a test under lower lighting: also better :wink:

But I am certain that it’ll be an “all win” when the interval images starts to work!

Today I did a “duration test”. As I read in other tests, it’s just under an hour at 1 fps.

[update] You really need to see this, really!

http://www.geoarchief.nl/mapillary.html?key=MQbZm9TD3kpAdoalJMPyBA&zoom=.1

Press play!!

I had quite a bit of work getting the EXIF data in correctly. But I did :). I updated my “GEO tag assistant” to find the right start and stop time. For the test I set the YI to take an image every second. At my walking speed that came to one image almost every 2 meters. I set it to one second expecting that the time between images would be a tiny bit more or less than that, so I could test the microsecond matching of the Perl script.
The timer of the camera matched the GPS sequence perfectly, so I ended up deliberately mismatching the sequence by one second to do the testing :wink:

And I must say, I’m quite happy with the result!

At first I feared that approx. 1.8 meters between images would be overkill, but when I press play it’s almost like you’re watching a video, not a sequence of images. NICE!
I also think the overall image quality is very nice! (even though the sun was already quite low…)

Seeing the ‘animation’ underlines that the “mechanical x-axis stabilizer” I bought was a waste of money. But the horizon isn’t extremely stable and you can see me walking… Maybe the FeiyuTech G5GS with a custom connection block (red) might be next on my wishlist :wink:


I think my wife won’t let me test the YI for a while… I had an accident

OK, for more reasons than one I processed the last sequence I created with my “roof rack”.

It’s a two suction-cup system. I used the car antenna to secure the rig (if the cups would fail…). The whole thing stayed put, even with the accident I had, so I expect it’ll hold on the highway as well :wink:

I set the recording to max resolution and max frame rate (25 fps)

I did have a bit of work improving my Perl script to get the matching with the GPX track right. I discovered a few optimizations to the microsecond calculation, since I now have 25 images per second instead of the 1 image per second of the previous test.
Furthermore, I improved the script to exclude images that are less than 1.5 meters away from the previous image. This resulted in distances ranging from 1.5 to 2.3 meters, which in my opinion is more than enough. It reduced the sequence from 4,235 to 1,269 images.
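The 1.5-meter thinning step can be sketched like this. It is a minimal sketch of the idea, not the actual Perl implementation; haversine distance is plenty accurate at these scales:

```python
import math

def thin_by_distance(points, min_metres=1.5):
    """Keep only images at least min_metres from the previously kept one.
    points: (lat, lon) pairs in degrees, in capture order."""
    def haversine(p, q):
        # great-circle distance in metres between two lat/lon points
        R = 6371000.0
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    kept = [points[0]]
    for p in points[1:]:
        if haversine(kept[-1], p) >= min_metres:
            kept.append(p)
    return kept
```

Run over the per-frame interpolated positions, this is what trims a 25 fps sequence down to roughly 1.5 to 2.3 m spacing.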

Also, I discovered that using a faster-moving vehicle doesn’t improve the quality of the GPX track. It is best to drive as slowly as legally allowed to get the best possible GPX track, and to seriously slow down at turns to give the GPS a chance to detect and record the change. I had to manually adjust the sequence at one sharp turn. Besides that, the interpolation went perfectly, and it worked well enough to use a start time and increment 1/25th of a second per image.

This 4-minute track resulted in an MP4 file of 13.5 GB. The raw unstitched data file is almost 4 GB and the separate JPG files are a bit over 14 GB (at the best ffmpeg quality I could export them). So it’s about 1 GB per minute of raw recording, resulting in an MP4 for processing of 3.3 GB per minute. So I could drive around for a bit over two hours to get a raw file that fills my 128 GB micro SD card… Then I’d have a 400 GB MP4 for processing… Also, generating that 13.5 GB file took two hours of stitching on my computer… So really large tracks likely aren’t that good of an idea :wink:

I had done several more tests, but I didn’t like the GPS quality, and I also think I need to extend the distance between the camera and the car. I have published this one sequence to see how it comes out, but I think I need to buy a lightweight “extension stick” to see less car in the images.

When the track is online I’ll publish a link.
[update] here it is:

I’m sure I set the orientation correctly in my Perl program, so what happened? One image is correct, the next looks the other way, etc.!?
[update]
Must need a bit of time, all looks well now

What I feared: too much car in view. How much higher would I need to go to make it decent?
[update]
Now that the sequence is correctly orientated, it’s easier to evaluate:
I think it looks as good as the “walking sequence”, so faster motion doesn’t influence the image quality. How much higher would be OK though? Higher is better, naturally, but too high is a danger to stability and risks putting too much stress on the suction cups… I’ve ordered this:

stay safe out there - I’d be afraid to even ride a quadbike on a narrow path like that.
The car being in the photo is an issue for most contributors, since it’s a trade-off between stability/discreetness/cost etc. Nadir stitching isn’t great, so you are not losing much data from the image; it just doesn’t look quite as “pro”.