Angjoo Kanazawa
10 videos · 76,402 views · Joined Jul 6, 2011
Creative Horizons with 3D Capture, October 2023
Talk recorded at Stanford's HAI Fall Conference: New Horizons in Generative AI: Science, Creativity, and Society on October 24, 2023.
Please see hai.stanford.edu/events/new-horizons-generative-ai-science-creativity-and-society for the Q&A and other talks!
3,164 views
Videos
nerfstudio SIGGRAPH 2023 Trailer
7K views · 1 year ago
🥳 We will be presenting nerfstudio at SIGGRAPH 2023 as a conference paper 🎉 arxiv.org/abs/2302.04264 Many of the nerfstudio team and I will be attending the conference! Looking forward to meeting the @ACMSIGGRAPH community 🙂 nerf.studio/ nerfstudioteam?lang=en
GTC 23 Nerfstudio: A Modular Framework for Neural Radiance Field Development
9K views · 1 year ago
Doc: nerf.studio Neural radiance fields (NeRFs) are rapidly gaining popularity for their ability to create photorealistic 3D reconstructions in real-world settings, with recent advances driving interest from a wide variety of disciplines in academia and industry. However, due to the flux of papers, consolidating code has been a challenge, and few tools exist to easily run NeRFs on user-collecte...
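For readers unfamiliar with the technique: every NeRF variant that a framework like this hosts ultimately renders a pixel by alpha-compositing color and density samples along a camera ray. The sketch below is not nerfstudio's actual API; it is a minimal NumPy illustration of that volume-rendering step, with all inputs invented for the example.

```python
# Minimal sketch of NeRF volume rendering (NOT nerfstudio's API): colors and
# densities sampled along one camera ray are alpha-composited into a pixel.
import numpy as np

def composite_ray(densities, colors, deltas):
    """densities: (N,) non-negative sigma_i per sample
    colors:    (N, 3) RGB c_i per sample
    deltas:    (N,) spacing between adjacent samples"""
    alphas = 1.0 - np.exp(-densities * deltas)             # per-sample opacity
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                               # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)         # final pixel color

# Toy usage with a randomly invented field along one ray of 64 samples.
rng = np.random.default_rng(0)
n = 64
print(composite_ray(rng.uniform(0, 5, n), rng.uniform(0, 1, (n, 3)), np.full(n, 0.05)))
```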
Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image
23K views · 4 years ago
We introduce the problem of perpetual view generation: long-range generation of novel views corresponding to an arbitrarily long camera trajectory, given a single image. This is a challenging problem that goes far beyond the capabilities of current view synthesis methods, which work for a limited range of viewpoints and quickly degenerate when presented with a large camera motion. Methods designe...
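The loop behind this is often summarized as "render, refine, repeat". The sketch below is a hedged paraphrase of that idea, not the paper's released code; warp_forward, refine_net, and next_pose are hypothetical stand-ins for the geometric re-projection, the learned inpainting network, and the camera trajectory.

```python
# Hedged sketch of "render-refine-repeat" for perpetual view generation.
# warp_forward, refine_net, and next_pose are hypothetical stand-ins,
# not the paper's actual code.
def perpetual_view_generation(image, disparity, camera,
                              warp_forward, refine_net, next_pose, num_steps):
    frames = [image]
    for _ in range(num_steps):
        camera = next_pose(camera)  # advance along the (arbitrarily long) trajectory
        # Render: re-project the current frame into the new viewpoint using its
        # disparity; this leaves holes (disocclusions) and stretched regions.
        warped_rgb, warped_disp, mask = warp_forward(image, disparity, camera)
        # Refine: a generative network inpaints the holes and restores detail,
        # yielding the next frame and its disparity.
        image, disparity = refine_net(warped_rgb, warped_disp, mask)
        frames.append(image)  # repeat: the output becomes the next input
    return frames
```

The key design point is that each synthesized frame is fed back in as if it were a real input, which is what lets the trajectory run indefinitely, and also why some form of global consistency is needed to avoid drift.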
Video for "Learning 3D Human Dynamics from Video"
7K views · 5 years ago
Project page: akanazawa.github.io/human_dynamics/
Learning Category-Specific Mesh Reconstruction from Image Collections
6K views · 6 years ago
Project url: akanazawa.github.io/cmr/
Human Mesh Recovery (HMR) Supplementary Video
19K views · 7 years ago
Supplementary video showing more results of Human Mesh Recovery (HMR). arXiv: arxiv.org/abs/1712.06584 Project page: akanazawa.github.io/hmr/
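At a high level, HMR regresses SMPL body-model parameters from a single image with an iterative error-feedback loop. The sketch below is only a schematic of that loop under assumed names (encoder, regressor, and theta_mean are hypothetical), not the released implementation.

```python
# Schematic of HMR-style iterative regression (hypothetical names, not the
# released code). theta packs SMPL pose + shape + weak-perspective camera.
def hmr_forward(image, encoder, regressor, theta_mean, num_iters=3):
    feats = encoder(image)   # global image features from a CNN backbone
    theta = theta_mean       # start from the mean parameter vector
    for _ in range(num_iters):
        # Iterative error feedback: predict a residual correction conditioned
        # on the image features and the current estimate, rather than
        # regressing theta in one shot.
        theta = theta + regressor(feats, theta)
    return theta
```

In the paper, an adversarial prior trained on motion-capture data keeps the regressed parameters on the manifold of plausible human bodies.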
Eurographics '16 fast-forward for the paper "Learning 3D Deformation of Animals from 2D Images"
890 views · 8 years ago
link to the paper: www.umiacs.umd.edu/~kanazawa/papers/cat_eg2016.pdf
Learning 3D Deformation of Animals from 2D Images: Supplementary Material
768 views · 9 years ago
Link to paper: www.umiacs.umd.edu/~kanazawa/papers/cat_eg2016.pdf Link to code: github.com/akanazawa/catdeform
WarpNet: Weakly Supervised Matching for Single-View Reconstruction: Supplementary Video
570 views · 9 years ago
CVPR 2016 Link to paper: www.umiacs.umd.edu/~kanazawa/papers/kanazawa_cvpr2016_warpnet.pdf
Thank you for sharing your work! <3
We will never have data from the entire planet (unless we have robotic bee swarms in the billions that scan every nook and cranny). Somehow one needs to build an AI model that is really good at guessing the missing data for a certain place (essentially generative AI). Maybe the Earth digital twin could be populated by 3D avatars of all the people on Facebook.
I wonder how this will be in 10 years.
The next step would be to use sensors to capture the physical interactions of objects. That could give the model a physics model of the real world and allow it to be used in games.
When will Google Street View be made in 3D? I want to drive around and explore the world as if it were a game or simulator.
Thank you so much!
Amazing technology. Thank you. Maybe someday it will work in real time; I mean training based on prompts or other data. We will let the algorithms dream, and we will have the ability to jump into their visions. It will be more than modern computer games or VR movies.
so cool!
Abu-chan is amazing! 💯🎉
I love the work your team is doing with NerfStudio! Does it work on Mac computers?
Mac too
Are you sure? I can't find any information in the documentation... please send me the link with the available info. @play4fun516
Wow that is amazing 😻 Thank you for sharing!
😎
The only downside to a web viewer is that it's not going to fly in the DOD or government. Although, from my last look, you can run it on a VM used as a sort of server, which IT is a bit more okay with. It would be nice to have just a straight-up local desktop viewer.
Thanks for this informative talk! Is there perhaps a tutorial on how to use the web viewer?
nerfstudio turned out to be suitable for Gaussian splats too
Great presentation! I've certainly come across nerfstudio while researching this topic, but it's really great to hear what's behind it and who does it. Good luck with your project. We live in exciting times.
Imagine if the future of Google Earth was like this and you could drive around and explore every place in great detail.
Congratulations!
So technically, could one use an image sequence and get a series of 3D models that could be rendered as an animation? This could make a huge difference in creature VFX workflows.
That was radical!! Well done and beautifully executed 👍👍
great stuff!
great job🎉
impressive!!
cool
Hello, what do you think about the future of NeRF? I don't know anything about NeRF development, but I've seen a lot of related demos. I feel like there is a big difference between NeRFs based on photos and generated NeRFs. Performance-wise, it seems like NeRF could one day become a fully interactive environment. But we wouldn't be creating interactions based solely on real-life scenes and objects. I thought that if Google and Apple reconstructed all their Street View and Google Earth source photos as NeRFs (their new map releases seem to be doing this), they could then train a network on top of that and use it to generate scenes. But that would still be mostly buildings and cars.
I don't understand English well, but just listening to this makes me excited 🎉. ❤❤❤❤
Legends, all of you. I could see a future where there's a NeRF renderer built into Blender... we're not there yet, but the integration between the two apps right now is so awesome!!
based! cannot wait for these to be used for grounding foundation models in hyperreality 👁️🗨️
fantastic overview , really exciting developments in NeRF , thanks !
Is there any software for doing this online?
Very interesting. Is this going to be productized? When will an MVP be released?
Has this been tested on datasets other than landscape photography? Was thinking this could yield really interesting results when applied to fantasy or science fiction art!
Amazing... I need this in Blender format.
3:34 That was very interesting.
Weird.
Can I use this and try it out?
Why are the windows so small? It's so hard to see and compare the results.
Hi, what do you mean by "small windows"? The video from 2:55 shows a comparison with prior work; you could slow down or pause to see the difference at various steps.
There definitely need to be more cat 3D models
This is exactly what flying in a dream feels like. The world being created as you move forward.
despacito
Very dream-like! Excited to see what amazing results global consistency would hold, will be keeping an eye on this for sure!
THIS MUSIC DRIVES ME CRAZY!
And unintentionally made some of the most exhilarating psychedelic visuals going
Wow this is an interesting topic, will surely go through your paper.
Now show me what my great-great-grandchildren will look like! Really cool stuff! Keep it up.
wow it's amazing
Absolutely amazing. Wow.
Amazing research direction you've opened up! I really wonder what it would look like with a long-term memory. Also, what are those pink "artifacts" at 4:30 in the bottom-left video?
We think those are red roofs of European houses in the dataset :) (We tried to avoid structures, but some are inevitable.)
amazing
great job