What is the current state of the art for creating 3D models from photographs or videos, or for reconstructing scenes? Are there any papers or architectures I can look up?
Hi folks,
Leading this area are Neural Radiance Fields (NeRFs). Introduced by researchers at UC Berkeley, NeRFs offer a novel way of producing realistic 3D scenes from a sparse set of input views.
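For a rough feel of the core idea: a NeRF is essentially an MLP that maps a 3D point and view direction to a color and density, and images are rendered by compositing those values along camera rays. Here is a stripped-down, illustrative PyTorch sketch (class and function names are my own, and the real method adds positional encoding, hierarchical sampling, etc.):

```python
# Minimal sketch of the NeRF idea (not the original implementation):
# an MLP maps (position, view direction) -> (color, density), and a pixel
# color is obtained by volume rendering along the camera ray.

import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Input: 3D position + 3D view direction (no positional encoding here,
        # purely for illustration).
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),   # outputs: RGB (3) + density (1)
        )

    def forward(self, xyz, viewdir):
        out = self.mlp(torch.cat([xyz, viewdir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        sigma = torch.relu(out[..., 3])     # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render one ray: sample points, query the MLP, composite."""
    t = torch.linspace(near, far, n_samples)           # sample depths
    pts = origin + t[:, None] * direction               # (n_samples, 3)
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(pts, dirs)

    delta = t[1:] - t[:-1]
    delta = torch.cat([delta, delta[-1:]])               # spacing per sample
    alpha = 1.0 - torch.exp(-sigma * delta)              # opacity per sample
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)           # composited RGB

if __name__ == "__main__":
    model = TinyNeRF()
    color = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
    print(color)  # predicted RGB for one ray
```

Training then just minimizes the difference between rendered and observed pixel colors over many rays cast from posed photographs, which is why camera poses (e.g. from COLMAP) are usually a prerequisite.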
Neuralangelo (NVIDIA's neural surface reconstruction method) appears to be the state of the art (SOTA) at the moment.
It’s a vibrant field with a lot of activity; a quick Google search will turn up around 50 papers on the subject. The challenges here are varied, so it’s not just about “creating a 3D mesh that resembles my picture.”
It’s worth considering what you intend to do with the final 3D model. If it’s for gaming or metaverse applications, the requirements differ from those for 3D printing or CAD. More demanding designs typically call for precise geometry representations, so approaches that convert images to BREP (Boundary Representation) models or fit CSG (Constructive Solid Geometry) primitives may be more appropriate.
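As a toy illustration of the primitive-fitting side of that (my own example, not from any specific paper): recovering a sphere's center and radius from a point cloud reduces to linear least squares. Real image-to-BREP/CSG pipelines are far more involved (segmentation, primitive detection such as RANSAC, recovering the boolean tree), but the per-primitive fitting step often looks like this:

```python
# Toy example: fit a sphere to a noisy point cloud with linear least squares.
# Uses the identity |p|^2 = 2 c.p + (r^2 - |c|^2), which is linear in the
# unknowns (c, r^2 - |c|^2).

import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: returns (center, radius)."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

if __name__ == "__main__":
    # Synthetic noisy points on a sphere of radius 2 centered at (1, -1, 0.5).
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = np.array([1.0, -1.0, 0.5]) + 2.0 * dirs + 0.01 * rng.normal(size=(500, 3))
    center, radius = fit_sphere(pts)
    print(center, radius)  # ~ [1, -1, 0.5], ~2.0
```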
I’ve come across a few working examples on Hugging Face and some videos on LinkedIn, but nothing has truly impressed me yet. If anyone has found something new and exciting, please share a link. It’s only a matter of time before something impressive surfaces.