I have identified a historic landmark that I would like to capture for NeRF. However, it is quite tall and I would like to also capture details high above the ground. Hence, I was wondering whether your process can also deal with varying focal lengths (zoom) in the same sequence?
The current OpenSfM flavor used by Mapillary seems to be rather hit-or-miss with varying focal lengths. Often enough, just a handful of long-focal-length images cause the entire point cloud to scale up instead of the cameras being positioned further away.
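Before submitting such a capture, it may help to check how many distinct zoom levels it actually contains, so a few long-focal-length shots don't get mixed unnoticed into an otherwise wide-angle sequence. A minimal sketch, assuming Pillow is installed, the focal length is recorded in EXIF, and with an illustrative function name and folder path:

```python
from collections import Counter
from pathlib import Path

from PIL import Image  # pip install Pillow

FOCAL_LENGTH_TAG = 0x920A  # standard EXIF tag for FocalLength (in mm)

def focal_length_histogram(folder: str) -> Counter:
    """Count how many images in a folder were taken at each focal length."""
    counts = Counter()
    for path in sorted(Path(folder).glob("*.jpg")):
        exif = Image.open(path).getexif()
        focal = exif.get(FOCAL_LENGTH_TAG)
        if focal is not None:
            counts[round(float(focal), 1)] += 1
    return counts

if __name__ == "__main__":
    # "capture" is a placeholder folder name
    for focal, n in sorted(focal_length_histogram("capture").items()):
        print(f"{focal} mm: {n} images")
```

If only a handful of images sit at a much longer focal length, it might be safer to capture them as a separate pass rather than mixing them into the main sequence.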
We have introduced the limit because shorter sequences had some “gaps” which prevented us from creating successful reconstructions. Please try suggesting larger sequences.
We’ll also look into possibly generating reconstructions from multiple combined sequences in the future.
Well, I do understand the idea, but that’s how the Mapillary application works, isn’t it? It breaks the sequence at 300 images, and you can easily lose 299 images of your sequence. Merging is really needed, I believe.
I made a first attempt at capturing a sequence for NeRF of a small place with 120 pictures. Back home, I discovered that one needs at least 200 pictures and that 600+ is recommended. But, as mentioned by WASD42, sequences are limited to 300 pictures. So, how can we suggest a set of 600+ pictures for NeRF processing?
Hello all, I’m Arnold from Cameroon.
I submitted my recent capture as a NeRF candidate. I saw that it takes about a week to generate NeRF scenes. I would like to know how I will be notified when the scene has been created.
Anckargripsgatan in Malmö is a great example where you can see that NeRF works on reflective surfaces too. And not only that! You can also see true, perspective-correct reflections! Hence, no simple “environment texture maps”. Look closely at the glass surfaces in the video, especially on the turquoise building. The reflection changes depending on the viewing angle and is perspective correct. Hence, more like “cubemaps” or ray-traced reflections. Hmm, this leads me to think of “perspective-dependent voxels”?
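To put that speculation into context: in the original NeRF formulation (Mildenhall et al., 2020), the network takes both a 3D position and a viewing direction as input, so the rendered colour is allowed to vary with the camera angle. Roughly:

```latex
% Radiance field: 3D position x and viewing direction d map to colour c and density sigma
F_\Theta : (\mathbf{x}, \mathbf{d}) \mapsto (\mathbf{c}, \sigma)

% A pixel colour is obtained by volume rendering along the camera ray r(t):
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\quad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```

Because the colour c depends on the view direction d, reflective surfaces can change appearance with the viewpoint, without any explicit environment map or cubemap.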
However, I am a bit disappointed by the detail on the bikes and motorcycles in the video. But maybe this is just due to some general low-granularity parameter meant to reduce complexity and thus compute time?
The latest 360° renditions from NeRFs are a nice feat, but I am not particularly fond of them. Perhaps because they do not run as smoothly as videos on my machine, or perhaps rather because they lack depth in contrast to videos. Having a dense point cloud to traverse and discover interactively would be much more interesting. However, current GPUs are not optimized for efficient point cloud rendering, so I would be fine with traversable textured meshes too. Judging by some NeRF videos, I think you already generate a skybox. That alone is really cool stuff!
Did you know that some GPUs in the early 2000s even had dedicated fixed-function circuitry to accelerate point rendering? Nvidia called them point sprites. Later, the feature was lost to time.
@boris, I suggest adding a NeRF category to the forum. I think it could help people share experiences and support each other in capturing these.