Izotope RX11 Dialog Isolate CPU Usage // Tech Support's Response

  • Published: 11 Oct 2024
  • While testing Dialog Isolate and Repair Assistant plugins in RX 11, I found the render times were unbelievably slow and my computer struggled with 3 instances. I reached out to Izotope's Tech Support to see if this was the expected behavior when using them as plugins. I'm using a Mac Studio and haven't run into these issues with Supertone Clear or Accentize dx:Revive Pro. This is what I found out.
    Learn how to use Izotope RX to clean up podcast audio with my course, Izotope RX: A Practical Guide for Podcasters - www.tansyaster...
    Do you find my content valuable?
    Become a benefactor at ko-fi.com/jess...
    Improve your podcast editing skills at Podcast Editing School: www.tansyaster...
    Improve your business acumen as a podcast service provider at Tansy Aster Academy's Pro Group: www.tansyaster...
    For podcast production services: www.tansyaster...
    For branding and growth strategies: tansyaster.com/
    My Equipment:
    Camera: Canon R8 amzn.to/3WQ7Tf4
    Lens: Canon RF 28mm amzn.to/3YxNFZ0
    Lights: Elgato Key Lights amzn.to/3SAE53G
    Mic: Earthworks Ethos amzn.to/3WOegzz
    Preamp: Cranborne Audio EC-1 amzn.to/3YvHdBq
    Interface: Universal Audio Apollo x8 amzn.to/46CBnAy
    Headphones: Ollo Audio S4X 1.3: tinyurl.com/Ol...
    *Get a free case and extension cable when you order directly from Ollo. Add the affiliate pack (tinyurl.com/oll...) and use the coupon code "tansy" when you order your headphones.
    Ollo Audio S5X: tinyurl.com/Ol... (this code automatically gets you the freebies above plus Waves nx virtual room)
    Follow me on Facebook: / jesse.mccune
    Disclosure: Some of the links in this post may be "affiliate links." This means if you click on the link and purchase the item, I will receive an affiliate commission.
    @Earthworksaudio
    #earthworksaudio
    #earthworksmicrophones
    @olloaudio

Comments • 14

  • @Tomaslav16
    @Tomaslav16 4 months ago +1

    I respect you for your precision and attention to the details and subtleties of technical processes!!! You're maybe the only one in the media space who is this rigorously accurate!!! Definitely keep going! Your information "saves us a ton of time" when choosing tools for working with sound)))

    • @jesse.mccune
      @jesse.mccune  4 months ago +2

      Thank you for the kind words and I'm happy that you're finding my content helpful.

  • @ChrisPFuchs
    @ChrisPFuchs 4 months ago +3

    Neural network denoisers are generally used as AudioSuite or on the dialogue bus for post audio. It's just too intensive to run on every dialogue track. Podcast editing is perhaps a bit unique in that you need far fewer dialogue tracks, so you could expect to get away with it. I think the real-time DX Isolate serves its purpose fine for post audio and does sound quite good, but the render time for 'Best' is annoyingly long, and I can see why not being able to run multiple instances of it is a big negative for podcasting. I personally offline-render my heavy noise reduction; the two best neural network denoisers, in my opinion, run off dedicated AI accelerators and are only AudioSuite or offline rendering at the moment.

    • @jesse.mccune
      @jesse.mccune  4 months ago

      What industry do you work in? Podcast editing is definitely a unique situation when it comes to dialog editing, for many reasons. I can see why running tools like this as AudioSuite or on buses would be the norm in many situations. What makes podcast editing different is that in most cases we're dealing with low-paying clients and short turnarounds in an industry facing a lot of downward pressure and commodification of our skill set. Between AI and sites like Fiverr and Upwork, clients are increasingly asking for more while spending less, making efficiency the number one factor in remaining profitable. I'd approach things differently if I were working on a project with higher standards than podcasting, where good enough and affordable is all that matters.
      What are the two best Neural Network Denoisers you are referring to?

    • @ChrisPFuchs
      @ChrisPFuchs 4 months ago +2

      @@jesse.mccune Hush Pro is generally regarded as the best dialogue denoiser in the industry at the moment and runs off the AI chips on Apple GPUs. According to the creator, this allows orders of magnitude larger model sizes to be run compared to CPU-based neural denoisers, and I think it shows. Audio companies like Izotope have to develop for a much wider range of systems, however, as AI chips are really only found on a few Apple devices and Nvidia GPUs. This means running it off the CPU.
      Auphonic is the other one; its 'Dynamic' denoiser runs off specialized hardware in the cloud. I know you're a little opposed to cloud-based rendering, but the quality is good. It also has extremely quick render times, and you can have multiple files processing at the same time.
      I do podcast editing regularly for a company, but also do other post audio work.

    • @jesse.mccune
      @jesse.mccune  4 months ago

      @ChrisPFuchs Thanks for the response. I like Hush, but since I'm not a PT user, I don't have access to the Pro version. The biggest issue with the regular Hush is that there's no preview function or a way to render a small section to test the settings. It becomes an all-or-nothing endeavor where I guess at a setting, render, and listen back. I've had a couple of conversations with Ian and suggested this to him; he seems to think he can bring Hush into VST/AU format but needs to figure out a couple of things to allow for that.
      When you say you do podcast editing for a company, are you a contractor for an agency or were you hired by a company to edit their podcast?

    • @ChrisPFuchs
      @ChrisPFuchs 4 months ago

      @@jesse.mccune Oh yeah, I can see how that'd be a deal breaker! Hopefully the real-time VST version is good. Ian seems like a cool dude.
      But to summarize my original comment about RX 11, it just seems like it was engineered within the constraints of post audio: a solid real-time denoiser that sits on the DX bus and a slightly 'better' offline algorithm that works well for AudioSuite processing. I think it's 'fine' in that sense, but as you point out, its weaknesses definitely show a little for podcasting when you have long files to render or are trying to use it on multiple DX tracks.

  • @Karam_Omar
    @Karam_Omar 3 months ago +1

    Great work, and a very important issue.

    • @jesse.mccune
      @jesse.mccune  3 months ago +1

      Thanks. I know it may not be an issue for many people, but as someone who prefers to work directly in my DAW, this is a deal breaker until they better optimize the plugins.

    • @Karam_Omar
      @Karam_Omar 3 months ago

      @@jesse.mccune
      I really appreciate your effort and your wonderful content, continue to the top😍😍

  • @omerylmaz958
    @omerylmaz958 4 months ago +1

    Hahaha, I love it: Adobe Voice Destroyer.

    • @jesse.mccune
      @jesse.mccune  4 months ago

      I can find uses for most noise and reverb reduction tools, but that one changes the voice too much if there's too much noise. It makes people sound like they sucked on some helium. It doesn't even know how to handle decent to good recordings. Case in point: I tested a recording my sister made that didn't have any noticeable noise and needed a little reverb reduction. The audio Adobe returned wasn't even recognizable as her. With these tools, the voice should still sound like the person after processing, not an AI caricature of the person, so Voice Destroyer seemed like a good reference.

    • @omerylmaz958
      @omerylmaz958 4 months ago +1

      @@jesse.mccune To be honest, this feature used to be better when it was first launched, but seeing things like this, everything is shifting to a post-editor kind of workflow. The human aspect will never go away. It's just that the person with more skills will always be ahead.

    • @jesse.mccune
      @jesse.mccune  4 months ago

      @omerylmaz958 Agreed. I haven't tried it out since they moved it into their subscription package, but I noticed that each new algorithm seemed worse than the previous one. That was always a head-scratcher to me. It leaves me wondering what they're training it on and who is testing it if they think the results are getting better.