3D world generation with a 360-degree camera

Hello, I recently managed to create a 3D world using Gaussian splatting with a 360-degree camera. However, I’m concerned that a 2000 × 1000 resolution might be too low for SfM (Structure from Motion) purposes. Since Gaussian splatting doesn’t support equirectangular images, I have been cropping the images into 90-degree segments, each offset by 60 degrees. I would appreciate it if you could consider providing a higher resolution.
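The cropping step described above can be sketched roughly like this: a minimal NumPy version that samples a 90-degree perspective view out of an equirectangular panorama at a given yaw, using nearest-neighbour lookup. The function name, image layout, and sampling details are my own assumptions for illustration, not the poster's actual script.

```python
import numpy as np

def equirect_to_perspective(equi, fov_deg=90.0, yaw_deg=0.0, out_size=512):
    """Sample a pinhole (perspective) view from an equirectangular image.

    equi: H x W x C array covering the full 360 x 180 degree panorama.
    """
    h, w = equi.shape[:2]
    # Focal length in pixels for the requested horizontal field of view
    f = (out_size / 2) / np.tan(np.radians(fov_deg) / 2)
    # Pixel grid of the output view, centred on the optical axis
    x, y = np.meshgrid(np.arange(out_size) - out_size / 2,
                       np.arange(out_size) - out_size / 2)
    # Ray directions in camera coordinates (z forward, x right, y down)
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1).astype(np.float64)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the rays around the vertical axis by the requested yaw
    yaw = np.radians(yaw_deg)
    rot = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(yaw), 0.0, np.cos(yaw)]])
    dirs = dirs @ rot.T
    # Convert rays to longitude/latitude, then to equirect pixel coordinates
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])       # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))  # -pi/2 .. pi/2
    u = ((lon / np.pi + 1) / 2 * (w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (h - 1)).astype(int)
    return equi[v, u]

# Six 90-degree views, each offset by 60 degrees, as described in the post:
# views = [equirect_to_perspective(pano, 90.0, yaw) for yaw in range(0, 360, 60)]
```

A bilinear interpolation step would give cleaner crops for SfM, but the projection math is the same.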

2 Likes

For reference, could you maybe share a video of what you have built?

Hello boris-san,
Thank you for the response.
I made a tutorial video about the 3D mapping I have been doing recently.
If I make a new video, I will post it here.

9 Likes

Hello, I uploaded a Gaussian splatting scene of Kyoto.

3 Likes

Hey, this looks great! :smile: Actually, I was expecting this to come from Mapillary for a long time but they were lacking real interest in it. Anyhow, it is nice to see some proof that it can actually be done. Good job! :+1:

@inuex35 The initial results look really cool already but if I am not mistaken there is still some fine tuning to be done, like removing some noise and outliers? Perhaps full resolution images can help improve this matter a bit?

@boris You got to hire some more smart people to integrate this into Mapillary’s 3D view. :wink:

4 Likes

Gaussian splatting is a new approach to rendering the point clouds produced by SfM, so it is necessary to improve the accuracy of the preceding SfM step. The floating noise comes from SfM. I think it is possible to create more detailed maps with high-resolution images :blush:.

I was looking for a dataset for Gaussian splatting and found Mapillary. Mapillary is a good platform for storing data. As I specialize in self-localization, I plan to develop a system that efficiently acquires extensive data using high-precision GNSS and a 360-degree camera :artificial_satellite::camera:. Ultimately, I want to make this map accessible from a game engine :video_game:.

I would welcome anyone who is willing to support this project :smile:!

4 Likes

Ultimately, I want to make this map accessible from a game engine :video_game:.
I would welcome anyone who is willing to support this project :smile:!

I am not entirely sure what it is you want to accomplish. For example, the Microsoft folks have done something similar for Flight Simulator, and the Google folks have done it for Google Maps and Google Earth. Note that in both cases they limited their result datasets to select places only and used imagery of constant quality. As a source dataset, Mapillary is by design a dataset of diverse quality. Yes, you can use post-processing tricks to smooth out quality imbalances to some extent, but you will still end up with corner cases. However, the biggest issue is removing noise and outliers. Both companies remove noise with armies of human editors. Maybe they have moved to AI by now, but I doubt it because it is a really tricky problem, roughly on the same level as fully self-driving cars. So, automagically pulling in some data from Mapillary and having it presentable in a game (without human intervention) is a rather difficult goal to achieve. Or maybe there is something else you have in mind?

2 Likes

Hello GITNE!
Thank you for your thoughtful opinion. I am not looking for such a high level of detail in Mapillary. I think it is sufficient for Mapillary to serve as a reference when creating realistic environments. Obtaining precise data requires equipment like high-end LiDAR cameras, and metaverse content is concentrated in urban areas, where creating 3D models is more cost-effective. However, if it becomes possible to create environments with easily accessible 360-degree cameras, I believe it could open up possibilities for new industries.

As for using it with a game engine, I am thinking about how interesting it would be to 3D-model tourist spots for promotion in games like Fortnite.
Using 3D data in a game engine is a personal interest of mine. Street views will likely be converted into 3D using technologies like NeRF and Gaussian splatting, and the applications for 3D data will be diverse.

Especially in Japan, where the four seasons are distinct, each season gives places a different atmosphere. Creating this diversity in 3D is something we should do ourselves; I think it is ridiculous to just wait for Google or Microsoft cars to come.

3 Likes

This is very cool - thank you for sharing the tutorial video and the examples you have created! In terms of getting the Mapillary images, have you tried “thumb_original_url” from the API? I believe that should return the URL to the original wide image: API Documentation
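For anyone else fetching the original-resolution images, the request against the v4 Graph API looks roughly like this. The image ID and token below are placeholders, and the field list is an assumption based on the API documentation mentioned above, so double-check it there.

```python
import json
from urllib.request import urlopen

GRAPH = "https://graph.mapillary.com"

def original_image_request(image_id: str, token: str) -> str:
    """Build the Graph API URL that returns an image's original-resolution URL."""
    return f"{GRAPH}/{image_id}?access_token={token}&fields=thumb_original_url"

# Usage (needs a real client token from the Mapillary developer dashboard):
# with urlopen(original_image_request("123456789", "MLY|...")) as resp:
#     url = json.load(resp)["thumb_original_url"]
```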

3 Likes

Hello Boris-san, Thank you for the information! I was able to download it in the original resolution. (I had read somewhere that the maximum was 2000×1000 for privacy reasons, but it seems I was mistaken.) I’ll update here if I make any progress. Thank you!

2 Likes

Hello boris-san,
I just had an idea. Recently, the technology to convert photos taken with smartphone cameras into 3D has advanced, for example with Luma AI. With approaches like the recent Gaussian splatting, it is now possible to convert objects into beautiful 3D models (a 360-degree camera is not required). With this in mind, is Mapillary considering allowing users to upload images and generate 3D models on the map? If individuals could create 3D models and place things on the 3D map that others might have missed, I believe it could lead to new discoveries.

3 Likes

It’s an interesting idea! We’ll take it into consideration - thank you for sharing @inuex35!

All the best and happy holidays!

1 Like

These are awesome, great work!

One issue with using Mapillary data for higher-quality reconstructions is that its capture frequency is quite low. If you look at, say, the Luma Labs app, it captures an image every second or faster, since Gaussian splats work better with high overlap between images. Mapillary defaults to something like one image per meter (I can't remember the exact number), so while you do get a lot of geospatial coverage, you'll probably be stuck with low-resolution outputs like the ones in your video.

This isn't to say it isn't super cool and worth playing with further, but if you are after a reasonably high-resolution reconstruction of the world, this probably won't get you there unless you can add some AI layer to upscale things once the Gaussian has been created.
For a game engine implementation, you could probably make this work with a storyline where the player is a ghost or something, so the blurry world is part of the game mechanics :slight_smile:

2 Likes

Hello, let me share my progress with you. I spent the New Year holiday and some weekends working on this, and I finally managed to get OpenSfM to work with Gaussian splatting. Unlike COLMAP, which does not support 360-degree cameras, OpenSfM does. It is now ready to use, but I believe some parameters still need tuning. Gaussian splatting is intended for single objects rather than whole scenes, so some adjustments are necessary to create a 3D world. Please wait a bit while I write a README before I share the code.
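For reference, the part that tells OpenSfM to treat panoramas as spherical cameras is a small override file in the dataset root. A sketch of generating it, where the dataset path is a placeholder and the `"all"` key follows OpenSfM's `camera_models_overrides.json` convention (check the OpenSfM docs for your version):

```python
import json
import tempfile
from pathlib import Path

# Override every camera in the dataset to the spherical (equirectangular)
# projection, so OpenSfM does not try to fit a perspective model to panoramas.
overrides = {"all": {"projection_type": "spherical"}}

dataset = Path(tempfile.mkdtemp())  # stand-in for your OpenSfM dataset root
override_file = dataset / "camera_models_overrides.json"
override_file.write_text(json.dumps(overrides, indent=2))
```

After writing the file, running the usual `opensfm` reconstruction pipeline on the dataset should pick up the spherical model.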

Indeed, the interval between capturing images is crucial. However, the distance to the object is also important. For large buildings far away, a larger interval is acceptable, but for smaller objects nearby, a narrower interval is preferable. At least, some investigation is necessary. I will now start exploring these values further.
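That distance/interval trade-off can be put into a rough formula: the span a view covers grows linearly with distance to the subject, so the baseline you can afford between captures grows with it too. The function below and the 80% overlap target are my own back-of-envelope assumptions, not measured values.

```python
import math

def capture_spacing(distance_m: float, fov_deg: float = 90.0,
                    overlap: float = 0.8) -> float:
    """Back-of-envelope spacing between captures for a target image overlap.

    A camera with horizontal FOV `fov_deg` sees a span of 2*d*tan(fov/2)
    at distance d; moving (1 - overlap) of that span between shots keeps
    roughly `overlap` of the view shared between consecutive images.
    """
    span = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    return (1 - overlap) * span

# A facade 20 m away tolerates much wider spacing than an object 2 m away:
# capture_spacing(20)  -> 8.0 m between shots
# capture_spacing(2)   -> 0.8 m between shots
```

This matches the intuition above: distant buildings allow a large interval, nearby objects need a narrow one.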

Certainly, Gaussian splatting is realistic, but creating a 3D map that completely eliminates unnaturalness may require further technological advances. What we can do is try new things and keep searching for what works best. :upside_down_face:

3 Likes

Hello, we can now try Mapillary Gaussian splatting here.
Let me know if you find better parameters or anything else worth changing.

You can download the result from this folder in Colab:
/content/360-gaussian-splatting/output/XXXX/iteration_30000/point_cloud.ply

Bug fix: the coordinate transformation was wrong in my script. Someone helped me fix it in a GitHub issue, and the bug is now fixed.

2 Likes

Great work. Thanks for the update.

Pixel8 was trying to do something similar in a somewhat open way before they were bought by Snap and disappeared. But their past posts on the topic are quite excellent: Geospatial Photos — Can Every Pixel Have Real World Spatial Coordinates? | by Pixel8Earth | Medium

Also FYI, Luma Labs now has a nice web plugin, so you can use it to more easily host your own Gaussian splat viewer (and with VR!)

1 Like

Hello, thank you for the information!

I didn’t know that. As a first step, it would be nice to create a website where you can click on icons placed on a map to view Gaussian splatting. It would be great if Mapillary’s point clouds could be directly transformed into Gaussian splatting. The Luma AI plugin seems very promising. I’m not very familiar with web development, but it seems worth exploring.

By the way, I have updated the Colab code to support multiple sequences. If there are enough sequences from 360-degree cameras around, it should be possible to make the point clouds more realistic. The problem is that the API download is a bit slow, and the usage time limit for Colab is reached too soon.

My github code

1 Like

Great stuff! Would it be possible to add 360-degree videos to the Gaussian splatting point cloud as a texture so that blur could be removed?

Hello, I have captured the ring road around Schottegat in Willemstad, Curaçao, in 360 degrees, twice. Maybe that is of use to you, as I just read that you would like to experiment with more 360-degree coverage of the same places.