Faust: A Programming Language For Sound (with Romain Michon)

  • Published: Nov 14, 2024

Comments • 31

  • @aadv1k
    @aadv1k A month ago +16

    I've never been much of a podcast "listener", especially with how flooded the market is with mediocre Q&A sessions masquerading as "informational podcasts". THIS is what I was looking for: an informative and actually interesting session I can watch for entertainment! Awesome (or idk, maybe I am just a nerd). Keep it up man, you have some really dedicated folks rooting for ya!

    • @CjqNslXUcM
      @CjqNslXUcM A month ago +2

      Exactly. I can't take another famous host talking to a famous guest about nothing. Give me something I don't know.

    • @aadv1k
      @aadv1k A month ago +2

      @@CjqNslXUcM Yeah. These podcasts like "Diary of a CEO" and many others I don't care to mention seem to have really accepted their identity as slop that people listen to in the background and don't really care about. Same generic cookie-cutter questions, same circlejerk either trying to appease the guest or shill their product (looking at you, Huberman).

    • @DeveloperVoices
      @DeveloperVoices  A month ago +4

      Yeah, hosting a podcast like that would hold zero interest for me. I'm here to learn too! :-)

  • @danielrs1047
    @danielrs1047 A month ago +4

    A fantastic interview as always!
    I’ve been a hobbyist digital synth designer for ~6 years, and a functional(ish) language for audio that transpiles to C and JS is literally a wish come true. The additional possibility of targeting FPGAs was beyond what I thought prudent to wish for. I will be checking this out immediately.
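
    (For readers who haven't seen Faust before, a minimal sketch of what such a program looks like; the names come from the standard stdfaust.lib and the details are only illustrative. The same source can be compiled to C/C++ or, via the web tools, to JavaScript/WebAssembly.)

      import("stdfaust.lib");
      // A 440 Hz sine wave scaled by a UI slider; `process` is the program's entry point.
      vol = hslider("volume", 0.5, 0, 1, 0.01);
      process = os.osc(440) * vol;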

    • @redcollard3586
      @redcollard3586 A month ago +1

      Yeah I got a little excited when he started talking about FPGAs. In terms of latency they CRUSH a general purpose computer.

  • @AdrianBoyko
    @AdrianBoyko A month ago +8

    I’ve been doing signal processing for software defined radio in Pony-lang and my framework is concurrent in the way that Kris asks about at 27:58. Every “block” in my framework is a Pony Actor and the Pony runtime schedules them across the available cores. And inside each block/actor, the processing almost always takes advantage of vector operations. I haven’t measured the upper limit of performance but I’ve been processing streams with hundreds of thousands of samples per second - much higher than a typical audio data rate like 48k.

    • @encapsulatio
      @encapsulatio A month ago

      Do you like Pony more than Rust?

  • @Johnsormani
    @Johnsormani 6 days ago

    I actually discovered Faust just a few weeks ago, started building my first synthesizer two weeks ago, and am having great results with it. I built a polyphonic synth to my own taste (I have 40-plus years of experience in synths and sound design) which includes subtractive synthesis with a morphing oscillator, wavefolding and other waveshaping, plus additive components. It's really addictive.
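
    (A rough, hedged sketch of the kind of patch described: a sawtooth through a waveshaper and a filter, with an ADSR envelope. The names os.sawtooth, ef.cubicnl, ve.moog_vcf and en.adsr are from the Faust standard library; ef.cubicnl stands in for a generic waveshaper rather than a true wavefolder, and polyphony would normally be added at compile time.)

      import("stdfaust.lib");
      freq   = hslider("freq", 220, 50, 2000, 0.01);   // oscillator pitch
      gate   = button("gate");                          // note on/off
      cutoff = hslider("cutoff", 1500, 100, 8000, 1);   // filter cutoff
      drive  = hslider("drive", 0.3, 0, 0.9, 0.01);     // waveshaping amount
      env    = en.adsr(0.01, 0.2, 0.7, 0.3, gate);      // amplitude envelope
      process = os.sawtooth(freq)
              : ef.cubicnl(drive, 0)       // nonlinear waveshaping
              : ve.moog_vcf(0.7, cutoff)   // subtractive filtering
              : *(env);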

  • @0LoneTech
    @0LoneTech A month ago +1

    For those who'd like to scratch more surfaces, you might want to look into CSP (e.g. Transputer, XMOS, Occam), Clash and Futhark, or some DSPs like the Epiphany processors (with their dedicated loop modes). C is amazingly awkward for functional pipelines.
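
    (To illustrate the contrast with C: Faust writes a processing chain directly with its composition operators. A small sketch using standard library names, purely for illustration:)

      import("stdfaust.lib");
      // ":" chains blocks in sequence, "," runs them in parallel, "<:" splits a signal.
      mono = os.osc(330) : fi.lowpass(3, 1200) : *(0.5);
      process = mono <: _, _;   // duplicate the mono chain to a stereo pair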

  • @robfielding8566
    @robfielding8566 A month ago +1

    I did iOS development, making music instruments. I wrote my own engine, but I tinkered with ChucK, Faust, SuperCollider, and Csound. When I stopped doing iOS music app development, Julius Orion Smith III got the IP for my app, and they made GeoShred with their own company. Having FFTs, filters, and vectorized acceleration is a good idea for a language, similar to how a vectorizable language for AI is a good idea. (Mojo would be the closest contender.)
    Audio is basically a hard real-time app: you get loud pops when you miss an audio deadline, so it usually gets used in a C, Rust, etc. runtime.
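
    (For context, the compiled output is what makes that workable: the Faust compiler turns a program like the illustrative sketch below into a class in C++, C, Rust, or another backend, whose compute() method processes one buffer at a time, and the host's real-time audio callback calls it from its high-priority thread.)

      import("stdfaust.lib");
      cut = hslider("cutoff", 2000, 100, 10000, 1);
      // A simple filter chain; the generated per-buffer compute() loop is where
      // missing a deadline would be heard as a pop or dropout.
      process = _ : fi.lowpass(5, cut) : fi.dcblocker;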

    • @Johnsormani
      @Johnsormani 6 days ago

      Interesting, I thought GeoShred was a really cool app when it came out back then, and I liked it and also SampleWiz, which Jordan Rudess promoted. I read somewhere that Faust had been used. And is that the same Julius Smith who spoke at the ADC last year about Faust?

  • @mattanimation
    @mattanimation A month ago +2

    Sounds like a great project. Will be gnarly if they can sort out the VHDL and Verilog backends.

  • @leonid998
    @leonid998 A month ago +1

    I have a feeling that since aliasing was mentioned and talked about in a broader context, lots of viewers may have no idea what it is in this application or why it happens; it'd be nice to explain it in a few words at least.
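
    (Briefly: aliasing happens when a signal contains frequency content above half the sample rate, the Nyquist frequency; anything above that folds back into the audible range as inharmonic tones. A hedged Faust sketch that makes it audible, assuming the stdfaust names os.lf_saw for a plain, non-bandlimited sawtooth and os.sawtooth for a bandlimited one:)

      import("stdfaust.lib");
      freq = hslider("freq", 3000, 100, 8000, 1);
      // At high pitches the naive ramp's upper harmonics exceed sr/2 and alias;
      // the bandlimited version stays clean.
      process = select2(checkbox("bandlimited"), os.lf_saw(freq), os.sawtooth(freq)) * 0.2;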

  • @leonid998
    @leonid998 A month ago +2

    Porting to MLIR?...

  • @TJ-hs1qm
    @TJ-hs1qm A month ago

    My go-to explainer video for sampling rates and aliasing:
    ruclips.net/video/-jCwIsT0X8M/видео.html

  • @timidlove
    @timidlove A month ago +1

    I'm starting to worry the half-on headphone will drop

  • @adicide9070
    @adicide9070 A month ago

    Where is that Jon Blow interview?? That'd be something...

  • @MichaelSchuerig
    @MichaelSchuerig A month ago

    Starting at around 44:20, Romain explains pipelining on an FPGA. What he describes (AFAICT) is re-using a single hardware resource multiple times to process data. I imagine a case where each channel of multi-channel audio is processed by the same block of hardware, instead of dedicated hardware for each channel. I think "multiplexing" would be a better term for this. However, my understanding of FPGA lingo is nonexistent; I'm extrapolating from what I know about CPUs. With that background, pipelining on an FPGA would involve keeping all hardware resources busy by decomposing computations into small, successive steps that can be executed in parallel for a stream of inputs.
    Can anyone clarify this?

    • @0LoneTech
      @0LoneTech A month ago

      You're right, pipelining is time domain multiplexing at the function block level. If you have some fairly complex function, it takes time to complete it, as it propagates through multiple layers of logic. If we add registers spread out in that deep logic, the depth is lower so we can raise the frequency, but the new registers must then be filled with more data. The stages of the pipeline are like work stations along a conveyor belt. It's the same in CPUs; a pipelined CPU has multiple instructions at varying stages of completion. A revolver CPU, such as XMOS XS1, runs instructions from multiple threads to ensure they're independent (generic name SMT, hyperthreading is one example). MIPS instead restricts the effects, such that the instruction after a branch (in the delay slot) doesn't need to be cancelled. DSPs like GPUs specialize in this sort of thing, and might e.g. use one decoded instruction to run 16 ALU stages for 4 cycles (described as a wavefront or warp).

  • @FunkyELF
    @FunkyELF A month ago +3

    This discussion seems to conflate "real-time" with "fast". Programming in C, C++ or Rust doesn't magically get you "real-time", nor is it required. Real-time is more about being deterministic, which requires help from the operating system itself as well.
    There's no such thing as a "real-time" programming language that runs on Windows, Linux, or any other non-RTOS.

    • @raphaeld9270
      @raphaeld9270 A month ago

      Thanks for the comment.
      I didn't know much about real-time operating systems (RTOS), but it seems Linux only recently got support inside the kernel:
      EDIT: Seems none of the links made it through, but there is an article on zdnet called "20-years-later-real-time-linux-makes-it-to-the-kernel-really"

    • @raphaeld9270
      @raphaeld9270 A month ago +1

      Have a great day :D

    • @tommaisey9069
      @tommaisey9069 A month ago +2

      In the audio context, that 'help' from the OS is that your audio processor is called on a special thread, which has 'more realtime guarantees' than a normal thread. You're right that this isn't a true realtime context like an RTOS, but it's usually pretty good (or your computer's audio would glitch constantly). It's an audio programmer's job not to mess it up by causing priority inversions with lower-priority threads or by making system calls that lack worst-case latency guarantees. This is usually only possible in C, C++ or Rust.

    • @0LoneTech
      @0LoneTech A month ago +1

      Linux is an RTOS, though many don't use that functionality (e.g. core reservation, memory locking, scheduler replacement). And there are real time programming languages, e.g. the Copilot Realtime Programming Language and Runtime Verification Framework. It's common to relegate hard realtime tasks of limited complexity to coprocessors like PRUs in BeagleBone, PIOs in Raspberry Pi MCUs, or separate microcontrollers. An example of such a task is dynamic voltage and frequency scaling in mainline CPUs.

  • @kahnzo
    @kahnzo A month ago

    I'd love to see a "compile to Gleam" backend done for this.

    • @DeveloperVoices
      @DeveloperVoices  A month ago

      From the sounds of it, that would be a very achievable project. 🙂