tyFlow | tyDiffusion - A.I. in 3ds Max

  • Published: 26 Dec 2024

Comments •

  • @JonasNoell
    @JonasNoell  5 months ago +2

    ✅Check out Patreon for all my scene files, bonus videos, a whole course on car rendering or just to support this channel 🙂
    patreon.com/JonasNoell

  • @evelynfive5684
    @evelynfive5684 5 months ago +17

    It's not the end, it's the dawn of a new approach for the best results.

    • @JonasNoell
      @JonasNoell  5 months ago +4

      Yeah, stuff will be changing, and probably changing fast. Curious to see what workflows will look like 2 years down the line from now

  • @FS4U-CH
    @FS4U-CH 2 months ago +1

    Very interesting stuff! Thank you for the video, much appreciated. Cheers.

  • @emf321
    @emf321 5 months ago +6

    I'm very positive about Stable Diffusion, it's very exciting, unlike the people who complain about it all the time. However, I'm trying to see how I can use it in the arch viz industry, and there's just not a lot it can do. If you ask it to draw a pretty house, yes, it can do that. If you ask it to draw a very specific house based on very specific architectural drawings, measurements and QS materials & surfaces, not at all! It's a toy at this moment in time. It can do great organic, general, abstract pictures, textures, etc., but not many very specific tasks in the practical real world.

    • @JonasNoell
      @JonasNoell  5 months ago +3

      I don't think you can use it professionally as a one-button solution at the moment. For me it is a gamechanger because I can quickly try out new ideas and get references. It's basically a reference search engine on steroids which can give you an idea of how the final result could look. So for me at least it is an inspiration-collection tool, because as you described it lacks the fine control and precise revisions required in daily production. So try to use it for what it's good at and we will see what the future brings :-)

    • @juliussaurus
      @juliussaurus 5 months ago

      I've been using it here in the office basically to find moods. If you have a reference image it works even better; it's perfect to bring to a meeting and decide the project direction, even though the building is not exactly how it should be.

    • @dcfuelbrother5914
      @dcfuelbrother5914 4 months ago

      Exactly this. I tried it today (tyDiffusion) after fiddling around with Stable Diffusion for one year. It's still got a way to go. But it's useful for making a scribble/reference, as Jonas said.

    • @fadilvi
      @fadilvi 2 months ago +1

      Use D5 Render with its AI assistant... it's great, I recommend it.

  • @Hung_Nguyen_90
    @Hung_Nguyen_90 5 months ago +3

    I am guessing this won't work for animation yet? It seems like it will treat each frame as a single picture, so it can't create waves and foam as if the ship is moving forward.

  • @yassinedjebbari4819
    @yassinedjebbari4819 5 months ago +2

    Amazing introduction to this wonderful tool! Thanks a bunch

    • @JonasNoell
      @JonasNoell  5 months ago

      Glad it was helpful!

  • @GuillermoOlague-r6e
    @GuillermoOlague-r6e 10 days ago

    Hi, can you explain how to add new LoRAs? I have tried many ways and I get an error. Thanks

  • @L30nHbl
    @L30nHbl 4 months ago

    Thanks a lot! I think it can help in many parts of the image, especially in archviz where you always want great photorealistic surroundings. This will help and cut the time in half!

  • @CaptainSnackbar
    @CaptainSnackbar 5 months ago

    It's really good for exploring the possibilities of where your art can go, but in the end you have to do it yourself to reach that level. Otherwise you will end up cycling through options that never end

    • @JonasNoell
      @JonasNoell  5 months ago

      Yeah that’s also how I see it at the moment. It’s a bit useless if you can’t combine it with the traditional way of doing things, as it lacks control and precision and is just too random. But it will give you lots of ideas and inspiration which you can then utilize to build something new.

  • @chosekriz
    @chosekriz 5 months ago

    Great basic training video, thank you. I just bumped into a problem with depth - it always sends in just a blank black viewport - no depth at all. Tried adding all max (Arnold, V-Ray) cameras, did not help. Any idea how to solve this?

  • @ytmelo
    @ytmelo 5 months ago

    Great video as usual! I have a question: is your video sped up during the image generation phase (after you press the Generate Image button)? I own an i9 13900KF + RTX 4080 GPU PC and it takes much longer than shown here. Is there any particular setting to tweak for gaining speed? Thank you for answering. EDIT: I've already changed the sampler to a GPU one but the change in speed is almost unnoticeable...

    • @JonasNoell
      @JonasNoell  5 months ago +1

      Hi, no - the generation parts of the video are edited roughly 5x faster. This one is on a 4090

    • @ytmelo
      @ytmelo 5 months ago

      @@JonasNoell thank you very much for answering! Keep rocking!!! ❤

  • @onlyyoucanstopevil9024
    @onlyyoucanstopevil9024 5 months ago

    AWESOME, KEEP IT UP😊😊😊

  • @GS3D
    @GS3D 5 months ago +1

    Thank you for the tutorial. Great content and I will play with it soon.

  • @smukkegreen
    @smukkegreen 5 months ago

    Great intro Jonas.
    Thx a lot for this.
    Can we create materials for our objects that look good if the object is rotated?
    Like texture maps etc.?
    I tried a tennis ball on a sphere, but it looks bad when I rotate the sphere.

  • @SteelDrake
    @SteelDrake 2 months ago

    Thanks!

  • @intiazrahim
    @intiazrahim 5 months ago

    @JonasNoell is it my system or is it just really slow to render in the viewport? Maybe you can show a timelapse or something to get an idea of the actual speed? Just tried the cat and it placed my model in front of the cat (behind - blurry).

    • @JonasNoell
      @JonasNoell  5 months ago

      I sped up the generations around 500%, but the speed depends on your GPU. I use a 4090 for reference, so if you use weaker hardware it will of course cause delays. The resolution you choose also plays an important role. You could also choose a smaller resolution and try to use upscaling, which should be faster

  • @miladlahooti545
    @miladlahooti545 5 months ago

    It doesn't work on my system; it shows an error about "torch not compiled with cuda enabled"
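    [Note on this error: "Torch not compiled with CUDA enabled" usually means the Python environment is loading a CPU-only PyTorch build, or no working NVIDIA driver/GPU is visible. A minimal diagnostic sketch — the helper name `diagnose_cuda` is made up, but the `torch` attributes used (`torch.version.cuda`, `torch.cuda.is_available`, `torch.cuda.get_device_name`) are standard PyTorch API:]

    ```python
    def diagnose_cuda():
        """Rough check for why 'Torch not compiled with CUDA enabled' might appear."""
        try:
            import torch  # whichever build this Python environment resolves
        except ImportError:
            return "torch is not installed in this Python environment"
        if torch.version.cuda is None:
            # CPU-only wheels report no CUDA version at all
            return "CPU-only torch build - reinstall a CUDA-enabled wheel"
        if not torch.cuda.is_available():
            return "CUDA build present but no usable GPU - check/update the NVIDIA driver"
        return f"CUDA OK: {torch.cuda.get_device_name(0)}"

    print(diagnose_cuda())
    ```

    [Running this in the same environment that tyDiffusion's ComfyUI backend uses (an assumption about where the error originates) narrows the fix to either reinstalling a CUDA-enabled torch wheel or updating the GPU driver.]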

  • @piergiacomomacri492
    @piergiacomomacri492 5 months ago

    Thank you Jonas for the video... may I ask if there is an option to use an inpaint mask feature, where you can improve some parts locally, especially 3D people?

    • @JonasNoell
      @JonasNoell  5 months ago +1

      There doesn't seem to be an option for this (yet?) though ComfyUI/SD supports it. I hope this will be just a matter of time as this is the first release of the tool. This would also be my most requested feature. I would also like to have localized prompts for different ObjectIDs or masks, for example - also possible in ComfyUI but not existent within tyDiffusion yet...

  • @ozzyosbourne6
    @ozzyosbourne6 4 months ago

    Are you able to export the baked textures with the mesh?

    • @JonasNoell
      @JonasNoell  4 months ago

      It is not really baked, just camera-projected. And yes, you can export this.

    • @ozzyosbourne6
      @ozzyosbourne6 4 months ago

      @@JonasNoell thanks for the reply, I was thinking of creating a flat plane and projecting an AI-generated texture such as wooden planks, a stone wall etc., then exporting that texture.

  • @FreeKiLLuminati
    @FreeKiLLuminati 5 months ago

    wow, really a game changer

  • @tinko8903
    @tinko8903 5 months ago +1

    "tyDiffusion ComfyUI engine could not be started. Try running 3ds Max as administrator." I see this box. I have this problem and can’t generate. Please, how can I solve this?

    • @mehmetyigit2330
      @mehmetyigit2330 5 months ago +1

      same problem

    • @mihabrezavscek9487
      @mihabrezavscek9487 5 months ago

      @@mehmetyigit2330 same here!

    • @JonasNoell
      @JonasNoell  5 months ago

      Have you tried running 3ds Max as an administrator? :-) If yes, try to get some support from tyFlow directly; I can't give you much technical support as it worked flawlessly for me

    • @JonasNoell
      @JonasNoell  5 months ago +1

      Oh, and try upgrading your GPU drivers

    • @mihabrezavscek9487
      @mihabrezavscek9487 5 months ago

      tried running as admin but still no success. even though the models are downloaded they don't show up under: Generate>Model>[none found]

  • @andrefranzke3882
    @andrefranzke3882 5 months ago +2

    Dreams come true

  • @mohamedsabry6871
    @mohamedsabry6871 5 months ago

    If I need to save the final result as a PNG file, what can I do?

  • @axzichannel4837
    @axzichannel4837 5 months ago +1

    Amazing. Thanks for this tutorial. :)

  • @ykadam
    @ykadam 5 months ago

    Thank you Jonas!

  • @sacifair
    @sacifair 5 months ago

    Interesting - how do you do inpainting through this?

  • @Mr.Indiyaah
    @Mr.Indiyaah 5 months ago

    When is MAYA ASSIST coming out?

  • @Rammahkhalid
    @Rammahkhalid 5 months ago +2

    Nice take

  • @onepeacebyisobare1750
    @onepeacebyisobare1750 5 months ago

    Please, which version of 3ds Max do you use?

  • @yanke8154
    @yanke8154 5 months ago

    Amazing

  • @averageman2063
    @averageman2063 5 months ago

    Amazing..

  • @andvfx
    @andvfx 5 months ago

    tyDiffusion is awesome! Great video. It's so fun to try different looks for an image, and so fast!

    • @JonasNoell
      @JonasNoell  5 months ago

      Yeah, I've recently been going through all my recent projects and testing what the AI would have come up with. You can get a good feeling for its potential but also its limitations! 😃

  • @intiazrahim
    @intiazrahim 5 months ago

    Another amazing vid @JonasNoell! This is indeed a game changer for me. Trying it out for matching background landscapes instead of tediously modeling/lighting/texturing. Using this combined with other AI tools like Topaz upscaler or Magnific to further add enhancements... the possibilities are crazy! Clients will especially like this and you can charge them an additional fee for design explorations or artistic styles.

    • @JonasNoell
      @JonasNoell  5 months ago +2

      Yeah, exactly what I was thinking as well - it should work really well for matte painting replacements, even with the possibility to project that directly onto some simple geometry for parallax. It can definitely simplify a lot of things for which you would normally need high-quality assets or some matte painting skills to get decent results

    • @intiazrahim
      @intiazrahim 5 months ago +1

      @@JonasNoell hmmm, never thought of using it for parallax. That would be an amazing use for 3D orthographic projections, for say 3D floor plans! Gotta try it out.

  • @davekite5690
    @davekite5690 5 months ago

    A really interesting video - thanks.

  • @mikkeel_johnnynew
    @mikkeel_johnnynew 5 months ago

    Do you know why mine isn't working? When I press to generate an image it makes my viewport black and I can't even fix that. It doesn't generate at all and I have the default settings. Does anybody know why?

  • @ivanbonavick227
    @ivanbonavick227 5 months ago

    Hey! Does anyone know why I can't make it work with viewport depth? Only color mode seems to work

    • @ivanbonavick227
      @ivanbonavick227 5 months ago

      I also can't use edge mode - "Cannot execute because node CannyEdgePreprocessor does not exist". I have the same models as in the video

    • @JonasNoell
      @JonasNoell  5 months ago

      Try checking the tyFlow support forums, I can’t give you technical support

  • @yautjagang4715
    @yautjagang4715 5 months ago

    Great ❤ that helps a lot for me to kick-start 🎉

  • @aagroupaagroup3917
    @aagroupaagroup3917 5 months ago

    Does it work with the free version of tyFlow?

    • @JonasNoell
      @JonasNoell  5 months ago

      As indicated in the video: yes

  • @pantov
    @pantov 5 months ago

    Is it true that the animation features only work with 24GB GPUs? If so, does the VRAM have to be on a single GPU, or would a 2nd GPU solve this?

    • @JonasNoell
      @JonasNoell  5 months ago

      I haven't heard or read anything about those requirements as of now, but I haven't really tried animation yet so I can't tell for sure. Where did you read this?

    • @pantov
      @pantov 5 months ago

      @@JonasNoell I can't find the thread where I read this - I think it was Tyson himself on the tyFlow forum. The explanation was that to render animation you need to load multiple large AI models into VRAM.

    • @JonasNoell
      @JonasNoell  5 months ago

      @@pantov OK, interesting. As said, I will try animation soon to see how it works. Though I have a GPU that has 24GB of RAM, so if that's the requirement I probably wouldn't notice it. Maybe just make a post on the tyFlow forum if you want a reliable answer. 🙂

    • @pantov
      @pantov 5 months ago

      @@JonasNoell yeah, I was planning to do that a bit later - Tyson is flooded with forum posts at the moment :-) I've got 11GB on my workstation, but I've got a render node that has 16GB with a slower GPU, haven't tried it there yet. Thanks for your reply, looking forward to your further tyFlow vids. Cheers mate!

  • @abdullahubeyd3060
    @abdullahubeyd3060 5 months ago

    great tutorial

  • @EjazShaikh-y5p
    @EjazShaikh-y5p 3 months ago

    How do I uninstall tyDiffusion AI in 3ds Max?

  • @tinrats
    @tinrats 5 months ago

    Great tutorial! Have you experimented with animation yet? When I get a prompt that looks good as an image but then run it through animation, it looks completely different. Have you experienced that? I'm so new to AI I'm probably being stupid ;)

    • @JonasNoell
      @JonasNoell  5 months ago

      Haven't checked out animation yet but will definitely do so. I can imagine making a more advanced 2nd tutorial that covers those features. This one here was mainly for the basics to get you started.

    • @tinrats
      @tinrats 5 months ago

      @@JonasNoell Nice one. I'll look forward to it :)

  • @888berg
    @888berg 5 months ago

    Amazing work! Sorry, just so I understand - it looks like you can use this diffusion AI to make 2D images... but can you use it to export video clips, and use the 3D model to have multiple camera angles etc.? Cheers :)

    • @JonasNoell
      @JonasNoell  5 months ago +1

      Yes, you can do video, but there are issues with inconsistency and just general weirdness that will happen, so I don’t think you can get anything too productive out of it at the moment

  • @Team_rk288
    @Team_rk288 5 months ago

    Waiting for a tyre-tracks-in-mud material

  • @shadergt2610
    @shadergt2610 5 months ago

    Thank you!!!

  • @PsycandyII
    @PsycandyII 4 months ago

    so why not just run the render through the AI? *sigh*

    • @JonasNoell
      @JonasNoell  4 months ago

      Because rendering and generative AI are two completely different things

  • @mehmetyigit2330
    @mehmetyigit2330 5 months ago

    What components are in your computer hardware?

    • @JonasNoell
      @JonasNoell  5 months ago

      RTX 4090, but you can use much cheaper hardware. Also, I sped up the image generation around 500% in editing.

  • @явася-р4с
    @явася-р4с 3 months ago

    🤔👍

  • @kurtv6281
    @kurtv6281 4 months ago

    i am very interested in fluffy cats...

  • @MegaSuperJaBaTo
    @MegaSuperJaBaTo 22 days ago +1

    It's a real shame that you publish your videos mainly in English. There is far too little German content of this quality, so for many interested people the techniques presented here are only accessible via detours.

    • @JonasNoell
      @JonasNoell  10 days ago

      Well, for me as a creator it makes more sense to publish content for the largest possible audience, and that happens to be the English language. Besides, I would guess that Germans who do 3D and don't speak a word of English are a very, very small target group :-)

    • @MegaSuperJaBaTo
      @MegaSuperJaBaTo 10 days ago +1

      @@JonasNoell Well, hand on heart, there are probably 100 other creators covering the same topics, so the added value for the "consumer" approaches zero. It's also not about whether you have a command of a language or not, but rather about which one you understand 100% immediately. There are plenty of examples of people who dare to serve the "niche" and prepare content in other languages, and who are quite successful with it. Look at Stolz3D, for example, who makes his FreeCAD content almost exclusively in German and has built up quite a large community around it. I think you could delight a very grateful clientele with that too. 🙂

  • @caseyj789456
    @caseyj789456 5 months ago

    If you know Comfy and Max (well enough) you can probably do all of this without the plugin, for free 😊

    • @JonasNoell
      @JonasNoell  5 months ago

      Of course - it is just using ComfyUI in the background. It can't do anything that you couldn't already do with ComfyUI. The novelty is the convenience and direct integration, making it frictionless to use.

    • @ViperAleks
      @ViperAleks 5 months ago

      Stable Diffusion works in the free version of tyFlow too.

  • @skylarkstudio
    @skylarkstudio 5 months ago

    I guess it's the end of V-Ray and Corona 😅

  • @zedeon6299
    @zedeon6299 5 months ago

    Blender already had this a year ago, it's not game changing

  • @TauranusRex
    @TauranusRex 5 months ago +5

    Interesting how people with a lack of knowledge can get blinded by an interpolation algorithm...

    • @JonasNoell
      @JonasNoell  5 months ago

      Can you enlighten me as to what I am missing?

  • @charltonleonen8144
    @charltonleonen8144 5 months ago

    Back in those days your rendering had to live up to your V-Ray masters, but now this AI has changed the game. It's ridiculous how much time we spent back then on those damn V-Ray masters!

    • @JonasNoell
      @JonasNoell  5 months ago

      I don’t think that much has changed, as the tools in their current state don’t produce final, controllable and consistent results. You would still have to set this up traditionally.

  • @slavchobrusev
    @slavchobrusev 5 months ago

    10x

  • @aymanali5491
    @aymanali5491 5 months ago

    I think unwrapping is dead, at least

    • @JonasNoell
      @JonasNoell  5 months ago +2

      It is just camera-projecting the image onto the model; there are lots of issues, such as stretching and being dependent on the camera perspective. Unwrapping is not dead for sure, but I guess there will be AI unwrapping tools which will do the job in the future ;-)

  • @hamidmohamadzade1920
    @hamidmohamadzade1920 5 months ago

    It's nothing but a piece of shit!!!!!!!!
    Because it does not allow you to use your own checkpoint. It doesn't even allow you to deselect the models you don't want to download?!!!!!!!!!
    Why should I have to download so many unnecessary things?

    • @JonasNoell
      @JonasNoell  5 months ago +4

      Maybe have a cup of tea to chill out a bit? 😀

  • @MrAsag
    @MrAsag 5 months ago +2

    🤮🤮🤮

    • @JonasNoell
      @JonasNoell  5 months ago

      Why the puking? 😀

  • @AAAAAAndreyAndreev
    @AAAAAAndreyAndreev 2 months ago

    that clickbait title tho

    • @JonasNoell
      @JonasNoell  2 months ago

      Welcome to YouTube, you must be new here 😀

  • @dubtube6691
    @dubtube6691 5 months ago +2

    Cash cow for the developers, useless in a professional environment, still waiting for serious tools

    • @JonasNoell
      @JonasNoell  5 months ago +1

      Why is everyone always so sure that it is useless in a professional environment? In its current state it already has many use cases if you are creative about implementing it.

    • @FoulPet
      @FoulPet 5 months ago

      What are serious tools?

    • @mianokamuru6333
      @mianokamuru6333 5 months ago

      What's a "professional environment"?

  • @Brashenn
    @Brashenn 5 months ago +3

    I really don't see anything game changing about this.

    • @JonasNoell
      @JonasNoell  5 months ago

      For me it is game changing: before, I didn't use ComfyUI/SD for my daily work, as it was too troublesome to export everything, leave your 3D software and deal with the hacky user experience of ComfyUI, so apart from some simple initial testing I didn't use it much. Now with tyDiffusion that is different, as it is integrated so seamlessly that you would be stupid not to use it, at least during the concepting phase or when figuring out how you want the end result to look. It is not a one-button-final-result thing but more like an image search engine on steroids, which can give you much better and faster ideas and references than traditional methods.

    • @Adam.Magyar
      @Adam.Magyar 5 months ago +1

      I agree. Not to mention the complexity of a single render setup. Also, there are other AI generators out there which do something similar.
      Though I'm a paid tyFlow user I cannot see too many use cases... maybe a viewport IPR and/or an animation rendering mode would be a real game changer.

    • @Brashenn
      @Brashenn 5 months ago +2

      @@JonasNoell I'm sorry Noel, but I can't agree with you there. If I wanted some Lovecraftian abstract approximation of life to know what something might maybe look like, I'd just sniff some shrooms. This is just the enshittification of your ability to imagine. I have yet to see anything spewed by these scrape generators that I can honestly say is interesting to me. Plus there's the whole morality of using these generators in the first place. I hope you don't start putting out more content related to this.

    • @JonasNoell
      @JonasNoell  5 months ago +2

      I really struggle to see how you can NOT see any use case for this 😀 Did you watch the release trailer? It’s literally packed with use cases.

    • @JonasNoell
      @JonasNoell  5 months ago +2

      So you can’t even see this being useful as an inspirational tool? I’m a full-time lighting and shading artist, and every project, every day, I’m confronted with a blank viewport filled with 3D models and some rough description of what the client wants. How would this not be useful to me? It’s basically reference search on steroids. The client wants a pirate ship made out of cheese? Just see what comes out of the AI and figure out how I would translate that into a 3D scene and shaders. Good luck finding something like that on Google. I think you assume it is a solution where you just press a button and get something finished. It’s not like that. I really struggle to see how you can see zero use cases for this. Nobody is forcing you to make trippy drug-trip animations; you can use it for whatever you want. Use it for what’s useful to you. I’m not saying it’s the be-all and end-all of the traditional way of doing things; as of now it’s a tool that can and should be used. And if it becomes a valuable tool for me I will of course continue to make videos about it, as I try to provide the most valuable content to my followers from a production perspective. How about you make a video about how crappy and useless it is, if that is your opinion? I would be interested to see your take on it.

  • @ronaldolamont
    @ronaldolamont 3 months ago

    Concept art is dead!! It's over!!

    • @JonasNoell
      @JonasNoell  3 months ago

      Yeah, for concepting AI has a very strong use case that is hard to beat

  • @cyanide227
    @cyanide227 5 months ago +3

    AI is just a waste of time. ComfyUI does the same stuff. People are limited to certain models and they need to pay. PAY PAY PAY. AI in CGI is a no-go.

    • @JonasNoell
      @JonasNoell  5 months ago +5

      tyDiffusion uses ComfyUI in the backend, so yes, it is the same stuff. The innovation is the direct and seamless integration into the 3D software, which at least for me makes it accessible. I didn't use it before through ComfyUI as it was annoying, slow and hacky, but now through tyDiffusion I do, and it would be stupid not to. In CGI, NOT experimenting with it would be the no-go. If you literally can't see any use cases for this and it is and always will be a waste of time for you, then I can't help you :-)

    • @Hung_Nguyen_90
      @Hung_Nguyen_90 5 months ago

      Maybe AI in CGI is a no-go, but CGI in AI is the best way to go. Imagine someday we can create a box on the ground and type "bush", then create a cylinder, animate it going past the bush, and type "a male human wearing a suit". Then type for the whole scene: "a male human walking next to a bush".
      I mean, using 3D objects and a camera to guide everything is the best way to have control over an AI image generator. If we wanted, we could create a more complex scene by using a low-poly human model instead of a cylinder and actually animate it to do what we want. And if you want to go to the extreme, you could still make a complete model and animate it just like you do right now, and use AI simply to render the scene very fast.

  • @SwordToothTiger
    @SwordToothTiger 5 months ago +1

    Your job is to generate tons of tasteless CG shit? Then it definitely completes your workflow.

    • @JonasNoell
      @JonasNoell  5 months ago

      Have you ever worked in professional production? 😀

    • @SwordToothTiger
      @SwordToothTiger 5 months ago +1

      @@JonasNoell yes, and I know that tons of shit are the result of so-called "professional production"

  • @architect-cc
    @architect-cc 3 months ago

    In summary, spending years and years studying 3ds Max and many plugins is of no use; I lost years of study... It is best to put this program aside and just concentrate on studying how to generate prompts to achieve the same result in a second, without wasting time on 3D modeling, materials, lights, effects, etc. - with a single click on an AI you get a spectacular result.

  • @mikegentile13
    @mikegentile13 5 months ago

    excellent explanation! thanks!

    • @JonasNoell
      @JonasNoell  5 months ago

      Thanks for your support :-)