Stitching (raw 360) images with Hugin

@StephaneP gave me a tip about the stitching software Hugin (thanks!)

I looked at it, and it looks really promising … I read up on it: it supports command-line processing and runs on several operating systems. So this piece of software might just be ideal for batch processing. Just what I need :wink:

But I can’t make heads or tails of it when trying to stitch a raw image… I found a YouTube tutorial, but that seems to cover a previous version… I tried to follow the instructions, but it ends in a mess.

Anyone here that can help me out?


oh man, usually i am ok at learning new programs but the first time i tried hugin i spent the best part of an hour trying to figure it out. i couldn’t get it to do anything in the end.

i tried again a few months later and managed to stitch an image but i still don’t really understand how it works. i spent another hour trying to get the batch part to work but i still can’t crack it. maybe third time lucky :slight_smile:

what camera are you using anyway?


You could have guessed, and guessed right :stuck_out_tongue: : the YI 360

ah, well this video worked for me but it’s for the Mi Sphere, maybe the instructions are similar though

You load a PTO file? Who built that for the Mi Sphere? That contains the knowledge I seek… how is that done?

I searched the web and found a forum where I popped the question :wink:!msg/hugin-ptx/vYeBUu7_7Ic/__EDTzMVCgAJ

My hero there created an assistant (you can download via the link above)

  1. I placed that file here: c:\Users\[user]\AppData\Roaming\hugin\
  2. I set the interface to ‘expert’
  3. Then I open a raw, unstitched image (or simply drag and drop it into the program)
  4. I need to set a value for “HFOV” (set something, anything… example => 1), then press OK
  5. Via the menu: Edit > User defined assistant > Assistant for dual lens images (vertical) (it’s at the bottom)
  6. In the stitcher (last tab), set the output image size and set the format to JPEG

Run it!

I did a few tests, and some didn’t go perfectly: I got a “black cat eye” (a missing part of the image). I juggled the settings around. In my manual I read that the field of view is 220 (the assistant uses 180), and I also set the crop a bit larger (95% instead of 92%).

Here is my custom file (save it to a text file and name it ‘duallens_vertical.assistant’):

Description=Assistant for dual lens images (vertical fov:220)
Help=Assistant for images shot with dual lens camera/one shot panoramic cameras

Description=Prepare initial pto file
Condition=single image
Arguments=--output=%project% --projection=2 --fov=220 %image0% %image0%

Description=Setting advanced settings
Condition=single image
Arguments=--output=%project% --unlink=v0,d0,e0 --set=y1=180,e0=-0.25*height,e1=0.25*height,d=0 --crop=95%,50% %project%

Description=Searching control points
Condition=not connected
Arguments=--output=%project% %project%

Description=Check result
Condition=not connected

Description=Cleaning control points
Condition=not connected
Arguments=--output=%project% %project%

Description=Searching line cp
Condition=no line cp
Arguments=--output=%project% %project%

Description=Setting advanced settings
Arguments=--output=%project% --opt=y,p,r,v,d,e %project%

Arguments=-n --output=%project% %project%

Description=Searching for best crop
Arguments=--projection=2 --fov=360x180 --canvas=70% --crop=AUTO --output=%project% %project%
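For reference, the steps look like they map onto Hugin’s standalone command-line tools, run in sequence. Here is a sketch of that idea; the tool names (pto_gen, cpfind, …) are inferred from the Description lines, they are NOT named in the file itself, so verify them against your own Hugin install before relying on this:

```shell
# Hedged sketch: tool names are guessed from the assistant's Description
# lines, not stated in the file. "raw.jpg" is a made-up input name.
IN="raw.jpg"
P="project.pto"

if command -v pto_gen >/dev/null 2>&1; then
    pto_gen  --output="$P" --projection=2 --fov=220 "$IN" "$IN"   # same image twice (dual fisheye)
    pto_var  --output="$P" --unlink=v0,d0,e0 \
             --set=y1=180,e0=-0.25*height,e1=0.25*height,d=0 --crop=95%,50% "$P"
    cpfind   --output="$P" "$P"                  # search control points
    cpclean  --output="$P" "$P"                  # clean control points
    linefind --output="$P" "$P"                  # search line control points
    pto_var  --output="$P" --opt=y,p,r,v,d,e "$P"
    autooptimiser -n --output="$P" "$P"          # optimise the variables set above
    pano_modify --projection=2 --fov=360x180 --canvas=70% --crop=AUTO \
             --output="$P" "$P"                  # search for best crop
    STATUS="project-ready"
else
    STATUS="hugin-cli-not-found"                 # graceful skip without Hugin
fi
echo "$STATUS"
```

Note this only prepares the project file; the actual stitch (what the Stitcher tab does) would still need a separate step such as nona plus enblend.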

It works very well! Next up: find out how to do this from the command line…


yea i found the pto file for the mi sphere on github:

but it looks like you’re getting there… please share when you crack the code on this mysterious program!

Those assistants are golden!

When you can get a PTO file for your camera, all you have to do is open the unstitched image (if you have a double-sphere image like me, you open it twice!)

Then select: File > Apply Template

and you’re golden!

When you don’t have a PTO you use the assistant. When you have a good result you can use the generated PTO file exactly the same way (so you have created your own template!).
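To make that “apply template” step scriptable, one trick is to rewrite the image filename stored inside the saved PTO file, since each i-line carries it in the n"…" field. A minimal sketch of that idea; all file names here are invented for illustration, and the stand-in template is fabricated so the snippet is self-contained (a real one would come from saving your Hugin project):

```shell
# Sketch: reuse a saved project as a template by swapping the filename
# in the n"..." field of every i-line. File names are made up.
TEMPLATE="yi360_template.pto"
NEWIMG="raw_0001.jpg"

# Stand-in template (a real one comes from saving your Hugin project):
printf '%s\n' \
  'p f2 w5760 h2880 v360' \
  'i w2880 h2880 f2 v220 n"original.jpg"' \
  'i w2880 h2880 f2 v220 y180 n"original.jpg"' > "$TEMPLATE"

sed "s|n\"[^\"]*\"|n\"$NEWIMG\"|g" "$TEMPLATE" > project.pto
grep -c "$NEWIMG" project.pto    # prints 2: both i-lines now point at the new image
```

After that, the stitch itself could be kicked off from a script with whatever stitching command your Hugin build ships.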

I did discover you need to try, and try again: sometimes, with exactly the same assistant, you can get different results every time! There is quite a bit of fuzzy logic in there!

But now I have a question: the result looks perfect, BUT when I overlay the two there is quite a bit of difference… Is that a problem for Mapillary (SfM)?

Original stitch:

with the “moa” shown with arrows:

The Hugin stitch:

Overlaying with the original stitch:

So optically it looks perfect… the question is… did I just now trade fact for fiction?


I created another template; here is the “delta result” against the stitch from the original software:

It “feels better”, but which is wise? From an optical point of view the Hugin file is much better, but which is better for Mapillary? Or is that detail/distortion something Mapillary has no trouble with?

A side note on “speed”. I created a perl script to process a folder with my “YI360 hugin template”.
I have a Linux server for company testing purposes. It isn’t a “CPU monster” (and only has a simple onboard graphics card), yet it needs 15.5 seconds on average to stitch an image (command line, at 5760 x 2880, including conversion to JPG). I expected that my Windows 10 workstation, with a much faster CPU and a bit of a monster GPU, would be faster, so I extended my perl script to generate templates suitable for the batch processor of the Windows program. But that one needs over 17.7 seconds per image on average…

With my camera in “movie mode” I filter out a lot of images… so I could get smart and stitch only the images I need…

But before I start doing that, I’d like to know if this way of stitching poses a problem for the “Mapillary platform”?

I can’t see how that would be worse for Mapillary. If I remember correctly, the main requirement is that the image is equirectangular.