[SIGGRAPH 2018] Toward Wave-based Sound Synthesis for Computer Animation

  • Published: 11 Dec 2024

Comments • 143

  • @slademcbride3225
    @slademcbride3225 6 years ago +377

    I'm going to start calling cymbals nonlinear thin shells from now on

    • @Kombi-1
      @Kombi-1 5 years ago +5

      heheheheeeeeeee

  • @bruce_luo
    @bruce_luo 6 years ago +244

    Luke, I am your fawawawawawawawa therrrrrrrrrrrrr.

    • @ThaMentalGod2003
      @ThaMentalGod2003 5 years ago +3

      Bruce Luo darth vader had a really high ping

  • @Frautcres
    @Frautcres 6 years ago +120

    This is by far the most convincing solver so far. Amazing work!

  • @coma-body-stilllife
    @coma-body-stilllife 6 years ago +44

    I've been waiting for someone to piece this together. All the parts of virtual sound synthesis have existed for a while: binaural spatializers, physical gas simulators, and tools to interpret wave patterns as sound. These are very good results!!

  • @Nerdule
    @Nerdule 6 years ago +9

    Woah, this really blows all the previous sound-synthesis work I've seen out of the water. Congratulations!

  • @Peacepov
    @Peacepov 6 years ago +138

    I hope this gets integrated into a game engine or 3D software soon. Thank you all for your hard work, this is amazing!

    • @totalermist
      @totalermist 6 years ago +90

      This is *not* real-time, though. That dripping tap took almost 19 hours to render on 32 CPU cores...

    • @Peacepov
      @Peacepov 6 years ago +5

      No, I mean the code/algorithm that generates the sound. It would mean you'd have to create a UI for the artist to set element types and other attributes. It'll be quite technical, no doubt, but so worth it.

    • @BD12
      @BD12 6 years ago +24

      things like this NEVER make their way out, don't kid yourself hahaha. Every SIGGRAPH or MIT demonstration I've ever seen was just for these guys to wank over while they do their thesis

    • @Reversed82
      @Reversed82 6 years ago +9

      It seems more like it's meant to accentuate Foley work on animated movies or something similar. However, it might be possible to pre-render convolution impulses for a game and use those in real-time applications instead, at least for some use cases.

    • @tempname8263
      @tempname8263 6 years ago +16

      Someday it will be.
      But not in this decade.

  • @8BitEggplant3
    @8BitEggplant3 5 years ago +7

    These videos are cool and all, and I'm amazed by the work that's gone into all of these techniques, but this is the first time a SIGGRAPH demonstration has really made me question my grasp on reality.

  • @RmaNYouTube
    @RmaNYouTube 4 years ago

    Why the hell is this not available for sound designers/visual artists/musicians to use?!!!! The world needs it.

  • @UghZan11
    @UghZan11 5 years ago +18

    3:33
    Is that a cat in the reflection on the trumpet?

  • @mada1241
    @mada1241 5 years ago +12

    Can't wait for this to be implemented in gaming. But I will wait.

    • @ShoryYTP
      @ShoryYTP 1 month ago

      5 years later still nothing

  • @Sl4yerkid
    @Sl4yerkid 5 years ago +2

    3:33 When the plunger went in front of the trumpet the first time (just to show the animation), my brain was automatically changing the sound I heard... When I watched the clip again, this time without looking at the screen, I heard it as it should sound. Very interesting.

  • @thecanadianwombat8486
    @thecanadianwombat8486 5 years ago

    This kind of stuff is really cool; it gets even crazier when you think about it in the context of things like video game applications.

  • @peteblac1
    @peteblac1 6 years ago +5

    Brilliantly conceived and executed. Where science meets art, requiring in-depth knowledge of visual and auditory modalities. Kudos.

  • @maulcs
    @maulcs 6 years ago +5

    I've imagined something like this for a while now; crazy to see it for real.

  • @hypersonicmonkeybrains3418
    @hypersonicmonkeybrains3418 6 years ago +2

    This is really awesome! We need the bucket-over-the-head sound FX for the next Elder Scrolls game.

  • @tomshepperd3535
    @tomshepperd3535 5 years ago +1

    Simulating compression waves in a virtual space to generate real-world organic sounds? Incredible.

  • @unlogik6895
    @unlogik6895 5 years ago +1

    Wow, this technology is awesome. I envision that in 20 years it will be normal to use it in video games and interactive video game movies.

  • @muzikermammoth3995
    @muzikermammoth3995 6 years ago +1

    Acoustic shaders sounds incredible!

  • @JonesCrimson
    @JonesCrimson 5 years ago +1

    For anyone unfamiliar with Latin or how professional research papers are written, "et al." means "and others." So it is researcher Langlois and others being cited, implying that more than one person was deeply involved in or helped write the paper.

  • @lucabluewaterfall
    @lucabluewaterfall 6 years ago +3

    I've been wondering whether this is possible with current technology for ages!! Amazing

  • @stanleyyyyyyyyyyy
    @stanleyyyyyyyyyyy 6 years ago

    This is what I call an excellent understanding of the world around us. Great job, guys!

  • @risist4502
    @risist4502 6 years ago

    Oh god... I was watching another SIGGRAPH video while doing something else at the same time. It was late at night, so it was quite quiet. And suddenly I heard those sounds. I was sure that something was happening with my stomach. Really realistic sounds.

  • @MooseY17
    @MooseY17 6 years ago +9

    Love the Space Odyssey trumpet :D

  • @Malakyte-Studio
    @Malakyte-Studio 6 years ago +2

    Very interesting. Great work.
    I wish to see the results of this development applied to automotive sound (loudspeakers playing music in a complex cockpit).

  • @brainsanitation
    @brainsanitation 4 years ago +1

    I noticed that the fan blocks the voice and maybe reflects it, but it doesn't seem to "chop" the breath carrying the voice like it would in this dimension.

  • @1ucasvb
    @1ucasvb 6 years ago +12

    REALLY excellent! Great work!

  • @alexhein7583
    @alexhein7583 3 years ago +1

    THIS WILL BE REVOLUTIONARY FOR MUSIC PRODUCTION. I am imagining a VST synthesizer where you can model sounds from real-life objects!! Or create virtual objects in a 3D space, then generate the sounds produced by hitting them, blowing on them, etc.

    • @alexhein7583
      @alexhein7583 3 years ago

      Could even lead eventually to an accurate guitar emulator. Guitar sample instruments don't come close to the real thing, but this could change that.

  • @xirustam
    @xirustam 5 years ago

    I knew this was possible, but it probably takes too many resources to be useful nowadays. However, it's good to know that the algorithm already exists.

  • @bananartista
    @bananartista 6 years ago +6

    I want this in my DAW

  • @coma-body-stilllife
    @coma-body-stilllife 6 years ago +27

    Creating VSTi instruments will be a sure way to monetize this research when render times reach near real time.

    • @gloverelaxis
      @gloverelaxis 4 years ago

      You absolutely don't need real-time rendering for this to be totally revolutionary for recorded music.

    • @coma-body-stilllife
      @coma-body-stilllife 4 years ago

      ​@@gloverelaxis ok

  • @junipiter4689
    @junipiter4689 5 years ago

    4:34 inspiration for psycho pass by Xavier wulf

  • @MsJeffreyF
    @MsJeffreyF 6 years ago +6

    This is incredible, great job

  • @VideosBySimon
    @VideosBySimon 5 years ago

    Man, these 3D research papers are the most surreal shit I've ever seen.

  • @totty2524
    @totty2524 5 years ago +1

    Oh my god, this is amazing!

  • @gamecity7265
    @gamecity7265 6 years ago +1

    What an amazing future you are building

  • @XIIF
    @XIIF 6 years ago

    We need this technology: integrated sound with 3D applications.

  • @francisconascimento7447
    @francisconascimento7447 5 years ago

    This is just perfection. Why is this not implemented in games? Or is it?

  • @MelloCello7
    @MelloCello7 6 years ago

    Absolutely incredible!!

  • @alejandroz1606
    @alejandroz1606 6 years ago

    Outstanding work!

  • @maychan26
    @maychan26 5 years ago

    This is extraordinary...!!!

  • @Patrick73787
    @Patrick73787 5 years ago +1

    IS THIS THE AUDIO VERSION OF RAY TRACING???!

  • @gloverelaxis
    @gloverelaxis 4 years ago

    This is absolutely fucking groundbreaking.

  • @JasonSmithDuhmeister
    @JasonSmithDuhmeister 6 years ago

    Really great work. Keep it up!

  • @0hate9
    @0hate9 6 years ago

    Damn, I REALLY want this.

  • @Yizak
    @Yizak 6 years ago +1

    Okay that is amazing

  • @DanielShealey
    @DanielShealey 6 years ago

    It makes me wonder where this tech could take us in the worlds of simulation. Material sciences, engineering, product design, medical research even?

    • @coma-body-stilllife
      @coma-body-stilllife 6 years ago +1

      You could literally say that about any novel technology. Why even say something like that? Ugh...

  • @rudnfehdgus
    @rudnfehdgus 5 years ago

    This is amazing....

  • @Berniebud
    @Berniebud 5 years ago

    We need this shit in games

  • @Serij92
    @Serij92 5 years ago

    Amazing!

  • @selftransforming5768
    @selftransforming5768 6 years ago

    Woah amazing!

  • @AraiKay
    @AraiKay 6 years ago

    Can someone make music out of the sounds in this video?

  • @LovelyJames-l3y
    @LovelyJames-l3y 4 years ago

    And after all of this, we have Among Us

  • @The-Vay-AADS
    @The-Vay-AADS 6 years ago

    Just to make sure I get this:
    a) does this synthesize sound in real time from thin air, depending on materials, physics & force,
    or
    b) does this take 3D sound sources (mono sound) and propagate them physically correctly according to nearby objects with set materials?

    • @HowardCShawIII
      @HowardCShawIII 6 years ago +2

      Well, sort of. It simultaneously performs the sound synthesis and simulates the *effects* of a 3D environment on the vibration of air in its volume, which kind of incorporates a and b at the same time, but more. Hence the comments about the pitch shift of the spalling bowl being due to near-field effects - that part was not a result of synthesizing the sound of the bowl, but of the synthesized sound *interacting with itself* due to reflections off the bowl and the floor. Adding 3D sound sources to that is as simple as simulating a speaker cone vibrating in response to that data (exactly as happens in the real world - a speaker works just by wiggling a cone back and forth in response to the data). Very cool stuff.
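      To make that description concrete, here is a deliberately tiny sketch (Python; not the authors' wavesolver, and every name and parameter below is made up for illustration) of the underlying idea: a finite-difference time-domain (FDTD) update of the acoustic wave equation in a 1-D air column, with a prerecorded signal driving a "speaker cone" boundary and a virtual microphone sampling the propagated pressure.

      ```python
      # Illustrative 1-D FDTD sketch (assumed setup, not the paper's 3-D solver):
      # a recorded signal drives a "speaker cone" boundary, the air column
      # propagates it, and a virtual microphone samples the resulting pressure.
      import numpy as np

      c = 343.0                    # speed of sound in air (m/s)
      dx = 0.005                   # grid spacing (m)
      dt = 0.5 * dx / c            # time step satisfying the CFL stability condition
      n_cells, n_steps = 400, 4000
      courant2 = (c * dt / dx) ** 2

      p_prev = np.zeros(n_cells)   # pressure at step t-1
      p_curr = np.zeros(n_cells)   # pressure at step t
      p_next = np.zeros(n_cells)   # pressure at step t+1

      # Stand-in for a prerecorded mono signal (here just a 440 Hz tone).
      recording = np.sin(2.0 * np.pi * 440.0 * dt * np.arange(n_steps))

      mic_index = n_cells // 2
      mic_signal = np.zeros(n_steps)

      for t in range(n_steps):
          # Second-order update of the 1-D wave equation in the interior cells.
          p_next[1:-1] = (2.0 * p_curr[1:-1] - p_prev[1:-1]
                          + courant2 * (p_curr[2:] - 2.0 * p_curr[1:-1] + p_curr[:-2]))
          # Left boundary acts as the speaker cone, driven by the recording.
          p_next[0] = recording[t]
          # Right boundary: crude first-order absorbing condition so waves exit.
          p_next[-1] = p_curr[-2]

          mic_signal[t] = p_curr[mic_index]
          p_prev, p_curr, p_next = p_curr, p_next, p_prev  # advance one time step

      print(f"captured {mic_signal.size} samples at the virtual microphone")
      ```

      The real system is 3-D, handles moving geometry, and couples synthesized vibration and acceleration-noise sources rather than a toy sine wave, but the time-marching loop above is the "air volume reacting to a wiggling boundary" picture described in the reply.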

  • @AMR-bf8nx
    @AMR-bf8nx 5 years ago

    Maybe Nvidia can create a new sound card using this technology, with advanced AI for producing near-real-time synthesized sound, like they are doing today with ray tracing in the RTX series. That would open a whole new world of opportunities in the music industry.

  • @draeath
    @draeath 6 years ago +1

    "This DOI cannot be found in the DOI System"

  • @xanthirudha
    @xanthirudha 6 years ago

    AMAZING

  • @olivecool
    @olivecool 6 years ago

    Wait, how do they make the examples?
    And is the software free?

  • @DanielShealey
    @DanielShealey 6 years ago

    This is amazing. I've been wondering for a while now when we would be able to truly "render" sound. How long does it typically take to output some of these demonstration files?

    • @totalermist
      @totalermist 6 years ago +5

      Selected numbers:
      • Dripping Faucet: duration 8.5s; 18.6 hours render time on 32 CPU cores
      • Bowl and Speaker: duration 9s; 45 min on 320 CPU cores
      • Trumpet: duration 11s; 33 min render time on 640 CPU cores
      Source: graphics.stanford.edu/projects/wavesolver/assets/wavesolver2018_opt.pdf pg.10 Table 1
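      For a rough sense of the compute cost, the same figures can be folded into total core-hours (a quick back-of-the-envelope calculation in Python using only the numbers quoted above):

      ```python
      # Core-hours for the examples listed above (render hours x CPU cores).
      examples = {
          "Dripping Faucet": (18.6, 32),
          "Bowl and Speaker": (45 / 60, 320),
          "Trumpet": (33 / 60, 640),
      }
      for name, (hours, cores) in examples.items():
          print(f"{name}: {hours * cores:.0f} core-hours")
      # -> Dripping Faucet: 595, Bowl and Speaker: 240, Trumpet: 352
      ```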

  • @OrangeC7
    @OrangeC7 5 years ago +1

    1:40 Because this is how physics works

    • @littlesnowflakepunk855
      @littlesnowflakepunk855 5 years ago +1

      It actually kinda is. It's animated rigidly to demonstrate the change in pitch and timbre when bending a vibrating sheet of metal.

  • @pianojay5146
    @pianojay5146 5 years ago

    acoustic and aesthetic

  • @boriswilsoncreations
    @boriswilsoncreations 5 years ago

    How do I get this? Seriously xD

  • @Noone-of-your-Business
    @Noone-of-your-Business 6 years ago

    So... the processed voice and trumpet are... what? Recorded sounds or completely synthetic?

    • @porksmash
      @porksmash 6 years ago +5

      They were both pre-recorded sounds processed by this system

  • @moth.monster
    @moth.monster 6 years ago

    yall this shit sounds moist

  • @カラス-h6e
    @カラス-h6e 6 years ago

    Wow, it's like magic

  • @idot3331
    @idot3331 5 years ago +1

    2:59
    *_B_*

  • @jayjoonprod
    @jayjoonprod 4 years ago

    WTF, you guys created a world following our physics inside a computer
    If only someday computers get really, really fast

  • @wigwagstudios2474
    @wigwagstudios2474 4 years ago

    1:31

  • @1.4142
    @1.4142 2 years ago

    Relatable

  • @sharonpakk
    @sharonpakk 6 years ago

    insaaaneee

  • @nic12344
    @nic12344 6 years ago +12

    It's not: "Luke, I am your father"
    But rather: "No, I am your father"

  • @userou-ig1ze
    @userou-ig1ze 6 years ago

    Sad this is targeted at offline synthesis. The next step would be an ANN approach to do this in f*ing seconds? @FellowScalars: is this published yet?! Link?!?

    • @AnteQu
      @AnteQu 6 years ago +1

      See the project webpage in the video description: graphics.stanford.edu/projects/wavesolver/ . The page contains a low-res and a high-res paper that you can download.

    • @userou-ig1ze
      @userou-ig1ze 6 years ago +1

      Ante Qu I meant the link to the paper for the online method...

  • @pencrows
    @pencrows 5 years ago

    The legos sound kinda soft

  • @AnityEx
    @AnityEx 4 years ago

    now simulate two drums and a cymbal falling from a cliff

  • @HarmoniChris
    @HarmoniChris 5 years ago

    2:30
    Arch-ae-ol-o-gists they like bones, and
    Ancient civilizations
    arch-ae-ol-o-gists
    (And one of them's gay)

  • @red9317
    @red9317 5 years ago +3

    Archeologists they like bones and ancient civilisations, archaeologists!

    • @HarmoniChris
      @HarmoniChris 5 years ago

      My man. World Doctors is hilarious.

    • @red9317
      @red9317 5 years ago

      @@HarmoniChris I just noticed the character model in the video hahah.

  • @Veptis
    @Veptis 5 years ago

    The metal sheet and bowl were great. Cymbals not at all.

  • @17MetaRidley
    @17MetaRidley 5 years ago

    Any chance of this coming to software like Blender? Is it possible that it is already applied in 9th-generation games?

    • @Dr.W.Krueger
      @Dr.W.Krueger 1 year ago +1

      This isn't for games, blendlet.

    • @17MetaRidley
      @17MetaRidley 1 year ago

      @@Dr.W.Krueger Hmm... Not yet. But how long? 😅

  • @sandersmcmillan5388
    @sandersmcmillan5388 6 years ago

    Wowwww

  • @偽88
    @偽88 6 years ago

    fucking wow, I'm baffled

  • @Slvrbuu
    @Slvrbuu 6 years ago +2

    No! I am your father.

  • @Quaz-jinx
    @Quaz-jinx 5 years ago

    Reddit?

  • @sabrango
    @sabrango 5 years ago

    DAMM

  • @Unreissued
    @Unreissued 6 years ago

    fuck, I'm high

  • @guy3nder529
    @guy3nder529 5 years ago

    well the cymbal was kinda disappointing.

  • @daanhoek1818
    @daanhoek1818 5 years ago

    Now I totally believe we could be living inside a simulation

  • @ldbpictures7212
    @ldbpictures7212 6 years ago

    500th like

  • @vodkacannon
    @vodkacannon 1 year ago

    😄

  • @CariagaXIII
    @CariagaXIII 5 years ago

    i wish i can ABCD in a barrel

  • @tmcchamp8200
    @tmcchamp8200 3 years ago

    I can imagine a time when video games will have real-time sound simulations.
    This would require a lot of computing but would save companies a lot of money, since they'd save on sound recording and stuff…
    Maybe, idk

  • @explosu
    @explosu 6 years ago

    Wat.

  • @AshLordCurry
    @AshLordCurry 6 years ago

    wOw

  • @twister5752
    @twister5752 4 years ago

    🅱️

  • @iLikeTheUDK
    @iLikeTheUDK 6 years ago +3

    Bye bye foley people?

    • @totalermist
      @totalermist 6 years ago +1

      Unlikely - it takes longer to render the sounds than for a Foley artist to create them and a sound engineer to mix them.

    • @DanielShealey
      @DanielShealey 6 years ago +2

      totalermist ... "For now." I really think it won't take that long for this sort of thing to become commonplace in production (5-7 yrs). Not as a replacement, but at least as an aid. CAD doesn't replace engineers and architects. Surely, as with any other tech, the processing time will drop significantly the more people use it.

    • @totalermist
      @totalermist 6 years ago +1

      Daniel Shealey - I wasn't too sure about the "Surely as with any other tech, the processing time will drop significantly" remark, so I went to check what happened in terms of processing power during the last 5 years.
      I took the Intel Xeon E5-2640, a mid-range 6-to-8-core ~90 W data centre CPU, as an example to estimate what happened to processing power in the past 5 years.
      The model went from 6 cores at 2.5 GHz in its first incarnation to 8 cores at 2.4 GHz in its current version.
      Performance went up from 9500 pts [1] to 15331 pts [2], an increase of *62%* in about 5 years (there is no direct successor, and the somewhat similar Xeon Gold 5115 yields no significant performance gains).
      If we take that 62% and round it up to 100%, we get from 24 hours on 36 CPU cores *down to 12 hours* of processing time for the _10-second metal sheet shake_ sound simulation in the next few years [3].
      Now I don't know about you, but I'd like to see a production company that saves time and money by having a pretty beefy server system render for half a day instead of just letting a Foley guy/gal shake a metal sheet for half a minute and having a sound engineer mix it...
      [1] bit.ly/2GUTqnL
      [2] bit.ly/2IKQHTM
      [3] graphics.stanford.edu/projects/wavesolver/assets/wavesolver2018_opt.pdf
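      A quick check of the arithmetic in the comment above (the inputs are the figures quoted there, not independently verified; rounding the gain up to a 2x speed-up is the comment's own optimistic assumption):

      ```python
      # Back-of-the-envelope check of the estimate above.
      passmark_old = 9500       # Xeon E5-2640 score quoted in the comment
      passmark_new = 15331      # current-version score quoted in the comment
      gain = passmark_new / passmark_old - 1
      print(f"throughput gain over ~5 years: {gain:.0%}")   # ~61%, i.e. the quoted ~62%

      render_hours_today = 24.0                     # 10 s metal-sheet example on 36 cores
      render_hours_future = render_hours_today / 2  # rounding the gain up to 2x
      print(f"optimistic future render time: {render_hours_future:.0f} hours")
      ```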

    • @DanielShealey
      @DanielShealey 6 years ago +3

      totalermist Sorry, I wasn't clear. I meant this type of rendering will get faster in the future. Software will get more efficient. Hopefully they'll find a way to change over to something like a GPU-based approach. Otherwise it would have been a pretty useless venture to develop in the first place. For making sounds simulate water... yeah, come on. That's nonsense for now. It will probably first be used in product design: 3D modeling of high-end speaker systems and simulated binaural acoustic design for expensive vehicles. One I could see right away is acoustically engineering auditory "dead" spaces into architecture - placing and testing baffles to create quiet spaces. Sound effects for movies are still a long way away. Also civil engineering, for putting buildings next to highways with solutions other than "a giant concrete wall".

    • @DanielShealey
      @DanielShealey 6 years ago +1

      totalermist But I agree. Rendering the sound of metal sheets alone would be kind of silly. But the render time of a 3D square as a proof of concept was probably met with the same eye rolls until Pixar came along. The people making this seem like they have a little more in mind than these few proofs of concept.

  • @BarnacleButtock
    @BarnacleButtock 6 years ago

    Does this system have any accounting or calculations based on the position of the microphone?

  • @Collinoeight
    @Collinoeight 2 years ago

    Wow. Excellent work.