Thanks for the answer.
Well, I do understand the idea, but isn’t that just how the Mapillary application works? It breaks a sequence into chunks of 300 images, and you can easily lose 299 images of your sequence. I believe merging is really needed.
I made a first attempt at capturing a sequence for NeRF of a small place with 120 pictures. Back home, I discovered that one needs at least 200 pictures and that 600+ is recommended. But, as mentioned by WASD42, sequences are limited to 300 pictures. So how can we suggest a set of 600+ pictures for NeRF processing?
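Just to sketch what I mean by getting a merged 600+ set into one NeRF job (this is only a rough illustration on my side, not an official Mapillary tool, and the folder names are made up): collect the images from the separate 300-image chunks and order them by their EXIF capture time, so the whole set behaves like one continuous capture.

```python
# Rough sketch: merge several 300-image capture chunks into one ordered list
# (hypothetical folder names; uses Pillow to read the EXIF capture time).
from pathlib import Path
from PIL import Image, ExifTags

# Numeric EXIF tag id for "DateTimeOriginal"
DATETIME_ORIGINAL = next(k for k, v in ExifTags.TAGS.items() if v == "DateTimeOriginal")

def capture_time(path: Path) -> str:
    exif = Image.open(path)._getexif() or {}
    # Fall back to the file name if the image carries no EXIF timestamp
    return str(exif.get(DATETIME_ORIGINAL, path.name))

chunks = [Path("sequence_part1"), Path("sequence_part2"), Path("sequence_part3")]
images = sorted((p for chunk in chunks for p in chunk.glob("*.jpg")), key=capture_time)
print(f"{len(images)} images in merged capture order")
```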
Hello all, I’m Arnold from Cameroon.
I submitted my recent capture as a NeRF candidate. I saw that it takes about a week to generate NeRF scenes. I would like to know how I can find out when the scene has been created.
cc @duncanzauss do you have any advice here?
BR, Yaro
Anckargripsgatan in Malmö is a great example where you can see that NeRF works on reflective surfaces too. And not only that: you can also see true perspective-correct reflections, so no simple “environment texture maps”. Look closely at the glass surfaces in the video, especially on the turquoise building. The reflection changes depending on the viewing angle and is perspective-correct, hence more like “cubemaps” or ray-traced reflections.
Hmm, this leads me to think of “perspective-dependent voxels”?
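If I understand the original NeRF paper (Mildenhall et al. 2020) correctly, this view dependence is built right into the model, so take the following only as a sketch of the published formulation, not of the exact Mapillary pipeline: the network is queried with both a 3D position and a viewing direction, and rays are rendered as

$$
(\mathbf{c}, \sigma) = F_\Theta(\mathbf{x}, \mathbf{d}), \qquad
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \qquad
T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big)
$$

Because the emitted colour c depends on the viewing direction d while the density σ depends only on position, the same surface point can return a different colour for each camera angle, which is exactly what perspective-correct reflections need and what a baked texture map cannot do.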
However, I am a bit disappointed about the detail on the bikes and motorcycles in the video. But maybe this is just due to some general low-granularity parameter used to reduce complexity and thus compute time?
BTW, you didn’t record the sequence in cloudy weather as recommended.
The latest 360° renditions from NeRFs are a nice feat, but I am not particularly fond of these. Perhaps because they do not perform as smoothly as videos on my machine, or perhaps rather because they lack depth in contrast to videos. Having a dense point cloud to traverse and discover interactively would be much more interesting. However, current GPUs are not optimized for efficient point cloud rendering. Thus, I would be fine with traversable textured meshes too. Judging by some NeRF videos, I think you generate a skybox already. This alone is really cool stuff!
Did you know that some GPUs in the early 2000s even had dedicated fixed-function circuitry to accelerate point clouds? Nvidia called it point sprites. Later, it got lost to time.
@boris, I suggest having a NeRF category in the forum. I think that could help people share experiences and support each other in capturing these.
I always find it hard to find places where dogs, smokers, and so on are not allowed.
The Suggest for NeRF button in the Advanced … sub‑menu should also be disabled and/or change to Suggested for NeRF once a sequence has been suggested.
I think this is an interesting interview beyond its title, with a lot of info on where we stand today with 3D scanning: https://youtu.be/d2wiQqCGGP0?si=TyVuhUGibAt6UeQc
Indeed, it is an interesting interview despite a rather bumpy delivery, and quite eye‑opening for non‑experts. I think it also basically answers why Gaussian Splatting is not a solution to everything, including mapping, and why NeRF has not picked up either for use cases other than mapping, where real-time rendering is key. From what I understand, real-time rendering is nice to have but not key for mapping. Mapping needs a) object detection, which can happen on flat 2D images and in (slowly, offline) reconstructed 3D space, and b) object triangulation onto any EPSG-projected map, which can also happen offline in any finite amount of time. Thus, I would not assume NeRF is dead for mapping yet. Besides, NeRF may still bear potential for model optimizations. What about making NeRF/AI create meshes instead of point/voxel clouds? Wouldn’t this solve real-time rendering? What do you think @duncanzauss?
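To make the mesh idea a bit more concrete, here is a minimal sketch of what I have in mind (purely illustrative: the density grid below is a fake sphere standing in for queries to a trained NeRF’s σ(x), and it uses scikit-image’s marching cubes rather than anything from the actual pipeline):

```python
import numpy as np
from skimage import measure

# Sample a (hypothetical) NeRF density field sigma(x) on a regular grid.
# A soft solid sphere stands in for queries to a trained network.
n = 64
xs = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
density = 1.0 / (1.0 + np.exp(20.0 * (np.sqrt(X**2 + Y**2 + Z**2) - 0.5)))

# Extract a triangle mesh at a chosen density threshold (marching cubes).
verts, faces, normals, _ = measure.marching_cubes(
    density, level=0.5, spacing=(xs[1] - xs[0],) * 3
)
print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")
```

A mesh like that, plus textures, would rasterize in real time on basically any GPU, which is the appeal for interactive traversal.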