GoPro Fusion Workflow

Hi guys!
Our GoPro Fusion has just arrived; I’m waiting on the SD cards now.
Could someone who uses it write me a brief description of their process for shooting and uploading to Mapillary?
Thanks in advance!


Regarding shooting and stitching I cannot help, but I hope you can find an automated process.

For uploading I use the Python scripts, which are great for large amounts of images and/or automatic upload. There is also the web uploader, which is easy to get started with but not that great for large amounts of images.

Please give me a ping if you need some specifics on uploading!

It would be great if you could share the script and any specific instructions. I’m familiar with Python, so I should be able to get it working on either OS.
Thank you!

Sorry for the delay. The Python scripts live on GitHub in mapillary/mapillary_tools (command-line tools for processing and uploading Mapillary imagery). I wrote a rough guide to running them in another thread (the link now returns “Page not found”), and a lot of questions are answered in that thread as well.
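Untested quick-start sketch, assuming a recent mapillary_tools (older versions used flags such as --import_path and --user_name instead; check the repository README for your version):

# log in once; credentials are stored locally
mapillary_tools authenticate

# geotag/process the images and upload them in one step
mapillary_tools process_and_upload path/to/images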

Question:
There may be cases where I want to blur parts of my GoPro photos myself (once copied to my PC hard disk), that is, before uploading them to the Mapillary server.

Does anyone know a good graphic tool (Windows?) where you can

1) browse all photos in a folder,
2) inspect each image step by step,
3) blur the image with a kind of brush tool,
4) save it when finished with that single image,
5) and easily move on to the next image?

I have used GIMP for this. It is a bit like Photoshop but open source and free. It can open multiple images at once. I select the area, using either a free select mask or an oval mask, and then run the Gaussian blur filter. You can set up shortcuts for all tools, but most of them have one already. Note that you have to use Export, not Save As.

Hello! Here is what I do to publish the 360° pictures taken with my GoPro Fusion on Mapillary:

  • shooting: I’ve tried 1 image per second, both walking and cycling. I’m currently uploading the sequences to Mapillary and will share the links once they are up.
    I’ve noticed that a picture from a sequence seems to be of lower quality than one taken in standalone mode; you can see this from the remaining-pictures counter for the free space on the SD cards (more sequence pictures fit, so the files must be smaller)
  • stitching the sequence with the Mac app Fusion Studio 1.2.1.400: select JPEG in the renderer options; the pictures will be rotated automatically to the horizontal and the middle of the picture will face North (compass direction = 0.0)
    (I’ve heard about some rounding of the GPS coordinates in the rendered pictures, but this did not happen for me with this Mac version)
  • using exiftool to remove the XMP metadata (which prevents Mapillary from reading the GPS data) and to correct the datetime EXIF data (which the Fusion Studio app sets to the stitching datetime): exiftool "-datetimeoriginal<gpsdatetime" -xmp= MULTISHOT_0028_0000*.jpg (an expanded variant follows below)
  • uploading the stitched pictures of the sequence to Mapillary with the web uploader
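Untested: an expanded variant of that exiftool step, adding the standard -overwrite_original flag so exiftool does not leave “_original” backup copies behind:

# copy the GPS timestamp into DateTimeOriginal and strip the whole XMP block
exiftool -overwrite_original "-datetimeoriginal<gpsdatetime" -xmp= MULTISHOT_0028_0000*.jpg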

Voilà! :wink:


@tryl if you have a 360° image (GoPro Fusion), do you batch-apply the mask before or after rendering the image?
Can you use GIMP to easily script/batch that same mask (Gaussian blur) across several thousand photos? Any tips on how to google this, or good links?

First, I don’t use the GoPro but currently the Xiaomi Mi Sphere; still, I can answer your questions:
What does the mask do? Perhaps I can give advice if you explain it.
You can use GIMP for that, but it requires scripting in a language I find very difficult, and I would not try it.
But I have used ImageMagick (the convert command) for that kind of thing a lot. Being command-line, it requires minimal memory, and processing can be sped up with a tool like GNU parallel.

If you explain what the mask does, and perhaps post one, I might help create a command line for it.
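Untested, but as a sketch of the kind of command I mean, here is how to blur a fixed rectangle in every image (the geometry 2000x1000+0+2000 is just an example; adjust it to your photos):

# blur a 2000x1000 px rectangle whose top-left corner is at x=0, y=2000
convert in.jpg -region 2000x1000+0+2000 -blur 0x8 out.jpg

# the same across thousands of files with GNU parallel
mkdir output
parallel convert {} -region 2000x1000+0+2000 -blur 0x8 output/{} ::: *.jpg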

@tryl I have a series of 360° images; one half of each rendered image is mostly train because of where the camera was mounted. Not essential, but I figure a blacked-out train would look cleaner. It would be the same mask or blur for all the images. I’d look into capturing with only one lens for future captures. I can’t seem to upload an image in this post, but try here: https://photos.app.goo.gl/h2YKex1JbRVibxUv9. Thanks.

Is it not possible to render only the front images?

Good idea @filipc. I don’t know if you can render images from just one card (the Fusion writes each lens to its own card).
However, in this case the camera was mounted sideways (perpendicular to the direction of travel) to avoid bugs and the slipstream, so there is content on both the ‘front’ and ‘back’ images.

Is there any issue with the heading of 360° images, especially if you mount the camera sideways? Do you have to interpolate with a 90°/180° offset? My manual (non-Fusion) pano images on Mapillary still show up with the wrong heading despite numerous attempts to fix it.

First, awesome images! I am looking forward to seeing them (I like trains!).

I will create a command for ImageMagick when I get some time and post it in this thread. What you are requesting is definitely possible.

Install ImageMagick and make sure its commands are on your PATH.

The actual command that does what you want is:
convert before.jpg mask.png -composite out.jpg

where it is assumed that mask.png has the same size as before.jpg and has a transparent background.

Untested: On Windows you can do this for multiple files like this:

for %i in (*.jpg) do convert %i ..\mask.png -composite output\%i

where mask.png is in the parent folder of the JPGs and the current folder contains a subfolder named output, which receives the results.
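Untested bash equivalent of the same loop:

mkdir -p output
for f in *.jpg; do convert "$f" ../mask.png -composite "output/$f"; done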


Yes @4004, I am experimenting with offsetting the compass angles for the sequence. When you change the offset in the viewer, it doesn’t give a preview of what the image would look like; just that small ‘wedge’ moves, which isn’t that intuitive for 360° images! I’ll let you know if I have any success.
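Untested: if you would rather apply the offset before uploading, the mapillary_tools process step has flags for this (flag names vary between versions, so check the repository README; --interpolate_directions and --offset_angle are the ones I have seen documented):

# derive headings from the direction of travel, then rotate them by 90 degrees
mapillary_tools process --import_path path/to/images --interpolate_directions --offset_angle 90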


Question for Fusion owners:
have any of you tried this box for the Fusion?

I am not actually planning to use it underwater, but I am considering it for everyday use (bike and car mounts) to protect the camera from dust, dirt, sand, gravel, and insects.
Wondering if:

  1. Does using this box have a significant impact on image quality?
  2. Does it affect the lens angle in a way that prevents good stitching?

I have found this review on YouTube (sorry, it’s in Russian), but no real-use videos have been posted yet.

It’s definitely overpriced for what it is (cheap hinges, hard plastic), but it’s a good thing to put on the camera while you are testing out a particular mount. From my (limited) experience the image gets a bit more flare, but not terribly so, and stitching seemed normal. But that was only a couple of test shots.

Hi, so does changing the compass/heading angle (manually or using the tools’ offset option) change the default image focus as well?

So, I have made some tests and am now a bit confused, as I don’t understand how it works…

Test setup:

  1. GoPro Fusion mounted on the roof of the car
  2. Two short sequences uploaded covering the same area: one with the camera lenses facing the sides, the other with the lenses facing front and back
  3. Stitching in Fusion Studio with a yaw fix applied (see below)
  4. Upload via the web uploader

Details:

  • First sequence: device mounted with the camera lenses facing the sides (left and right). During stitching in Fusion Studio, the ‘center’ of the resulting panorama images was adjusted to point to the front of the vehicle (yaw = -90), so the rendered images were centered on the direction of movement.
  • Second sequence: the camera was rotated 90 degrees to face front and back. During stitching, these images were processed in the same batch as the first sequence, so the same yaw offset (-90) was applied, which left the center of the stitched panos facing the side.
  • The Fusion does not store a heading in the GPS tags, so the compass angle at upload defaults to 0 (pointing North).

Findings:

  • In both sequences the heading is set to 0, which is understandable.
  • BUT this does not seem to have ANY impact on the images or the way they are displayed. When you hover over an image, the real view sector is shown separately, and it MATCHES reality; when you rotate the image, the correct sector of view is shown.
  • The logic behind the default view sector selection is not clear; maybe it varies, but in my case it was pointing West (270).
  • What is interesting here: the applied view angle is the same for both sequences, and the orientation in space is correct for BOTH, despite the fact that in the second sequence the camera was rotated 90 degrees!

And this is where I come to the
Questions:

  1. Does the compass angle (heading) matter for panos? Should I bother adjusting it and normalizing the pano sequences? Does it have any impact on what is displayed and how? It could eventually be useful; see my feature request at the bottom.
  2. How can the orientation in space be correct in both sequences even though the camera was rotated 90 degrees? How does Mapillary know what the correct orientation is?
  3. Given the above, it seems that any post-processing, whether with the tools (heading) or with Fusion Studio (adjusting yaw to center the panos), makes no difference at the moment?

@djakk
One question about your earlier post:

the pictures will be rotated automatically to the horizontal and the middle of the picture will face North (compass direction = 0.0)

What did you mean by that? When and where will they be rotated? I can see that the user can adjust the pano center during batch processing in Fusion Studio, either keeping the default (no correction, the center is the front lens) OR adjusting the whole batch with yaw (i.e. rotating all selected images horizontally by a specified angle).
Compass direction is not supported by the Fusion: heading tags are not written, so Mapillary assumes 0 (North).
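Untested side note: if you want to write a heading into the files yourself, the standard EXIF tags are GPSImgDirection and GPSImgDirectionRef, which exiftool can set (whether Mapillary honors them for Fusion images is something I have not verified):

# write a fixed true-north heading of 90 degrees into all images
exiftool -overwrite_original -GPSImgDirection=90 -GPSImgDirectionRef=T *.jpg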

Finally, one feature request to consider, @peter:

  • Make use of the compass angle so that it can serve as the default FOV during sequence playback. If a panorama sequence is normalized correctly, it could be played back with the FOV always set correctly and adjusted dynamically, i.e. keeping the focus in the direction of movement.
    Right now it is fixed: you can rotate an image to apply a different FOV, but during playback the angle stays fixed, so if the sequence changes direction, the direction of movement can drift out of ‘focus’.

P.S. I will keep these two sequences as-is and will test the impact of heading offsets and sequence normalization on a different sample.