AMD's Hidden $100 Stable Diffusion Beast!

  • Published: 26 Apr 2023
  • Thanks to Gigabuster.EXE for his help!
    forum.level1techs.com/t/mi25-...
    **********************************
    Check us out online at the following places!
    bio.link/level1techs
    IMPORTANT Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
    -------------------------------------------------------------------------------------------------------------
    Intro and Outro Music By: Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 3.0 License
    creativecommons.org/licenses/b...
  • Science

Comments • 289

  • @JeffGeerling (1 year ago, +297)

    Well, they _were_ $100... after this video posts, maybe a little more ;)

    • @zepesh (1 year ago, +22)

      The STH effect

    • @ChatGTA345 (1 year ago, +5)

      How do they compare to the current gamer GPUs btw?

    • @scruffy3121 (1 year ago)

      Still seeing some.

    • @manofsteel110 (1 year ago, +13)

      I copped one at $75 🎉

    • @richardwatkins6725 (1 year ago, +5)

      Those ARM servers sold out after a little review.

  • @magfal (1 year ago, +26)

    Nooooooo, I was about to buy one next week; I was hoping no one else had noticed the low price.
    I'd already printed a fan adapter.

  • @garrettkajmowicz (1 year ago, +107)

    It would be very nice to see real ROCm support for the RX 5000 series and above. That would make it so much easier for students to experiment with AI while still having a decent gaming GPU. After all, most consumer motherboards don't have enough PCIe connectivity to install 2 separate cards and have them both run at full speed.

    • @cromefire_ (1 year ago, +9)

      RX 6000 definitely has it and I'm pretty sure RX5000 series too, but not 7000 series... Yet
      (Just no professional support, similarly to Nvidia's studio drivers)

    • @mareck6946 (1 year ago, +8)

      ROCm currently supports Vega, RX 5k and 6k safely

    • @blkspade23 (1 year ago, +11)

      GPU compute tasks rarely need the full x16 bandwidth, as they aren't continuously moving data in and out of VRAM. The working set of data is copied at the start, and the GPU works through it in VRAM. Operating 2 cards at x8 shouldn't be a problem at all.
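A quick back-of-envelope supports this point: the one-time upload of a working set over PCIe is tiny next to minutes of GPU compute. The bandwidth and working-set figures below are rough assumptions for PCIe 3.0, not measurements:

```python
# Rough arithmetic behind "x8 is fine for compute" (assumed figures).
pcie3_x16_gbs = 15.8    # ~usable GB/s on PCIe 3.0 x16
working_set_gb = 8.0    # model weights + data, copied once into VRAM

for lanes, bw in [("x16", pcie3_x16_gbs), ("x8", pcie3_x16_gbs / 2)]:
    print(f"{lanes}: {working_set_gb / bw:.2f} s one-time upload")
```

Even at x8 the upload is on the order of a second, which is negligible against a compute job that runs for minutes.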

    • @cromefire_ (1 year ago, +1)

      @@blkspade23 Gaming might sometimes, though, and there's power draw, driver conflicts, price and a thousand other reasons why you wouldn't do that

    • @blkspade23 (1 year ago, +3)

      @@cromefire_ Mining rigs run on x1 adapters. I've personally run opencl tasks with Nvidia and AMD GPUs in the same PC. In both Linux and Windows. Games are fine at x8. A 2nd GPU being present is meaningless while gaming since only 1 is in use and the other is idle.

  • @ColonelFrosting (1 year ago, +8)

    I actually picked up one of these for this exact reason about a month and a bit ago, but haven't been able to get around to using it yet due to the cooling issue. Glad more folks thought the same and have been engineering solutions! Thanks for putting this out there (though hopefully this doesn't disrupt the market)

  • @Razor2048 (1 year ago, +10)

    I wish they would add an option for Stable Diffusion to also use system memory as VRAM. While a game using system memory to supplement a lack of VRAM will become nearly unplayable due to the frame rate drop, it would be good for doing a final high-res render if you like a specific iteration of an image. For example, allow it to supplement your 8-16GB of VRAM with, say, 50GB of system memory and render a 4K version, even if it takes a few hours due to the slower system RAM.

    • @jamesbuckwas6575 (1 year ago, +2)

      That would be a very nice feature, especially since 8 channel system memory can achieve still-high memory bandwidth at sky-high capacities of 1 TB or more.
      However, another feature I would love to see is some kind of multi-GPU support. Perhaps pooling of system memory would be difficult to implement in software, but rendering every half, quarter, or 8th of the final image using four MI25 GPUs, or similar consumer GPUs, would be very convenient for those high resolution final renders, like you mentioned
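Some rough memory math puts these ideas in context. The parameter counts below are approximate public figures for SD 1.x-class models (an assumption, not measured): the weights themselves fit comfortably in 16 GB, and it is the activations of a large final render that would spill into (much slower) system RAM.

```python
# Back-of-envelope memory math for an SD 1.x-class model
# (parameter counts are approximate public figures).
unet_params = 860e6   # UNet
vae_params  = 84e6    # VAE
text_params = 123e6   # CLIP text encoder
total = unet_params + vae_params + text_params

for name, bytes_per_param in [("fp32", 4), ("fp16", 2)]:
    gib = total * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.1f} GiB of weights")
```

So roughly 2 GiB (fp16) to 4 GiB (fp32) of weights; everything above that in a 16 GB card is headroom for activations and larger batch/resolution settings.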

  • @Brutaltronics (1 year ago, +48)

    The only reason I've been holding off on AMD is because it's hard to get all the newer AI models to run properly. Might give it a shot with these, especially at that price. But LLMs are what I'd really like to see running on AMD hardware.

    • @cromefire_ (1 year ago, +5)

      Well if they run on pytorch or tensorflow or onnx they probably will, as long as your GPU is supported by ROCm, just be aware of particularly old or new cards. IIRC Vega - RDNA2 should be a safe bet
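A quick way to check this on your own machine: ROCm builds of PyTorch reuse the `torch.cuda` API, so a CUDA-targeted script runs unchanged on a supported AMD GPU. A minimal sketch, assuming you installed a ROCm wheel (the index URL is one example version, not the only one):

```python
# Quick check that a ROCm build of PyTorch can see the GPU.
# Assumes a ROCm wheel was installed, e.g.:
#   pip install torch --index-url https://download.pytorch.org/whl/rocm5.4.2
import torch

# ROCm builds expose the same torch.cuda API names, so CUDA code runs unchanged.
print("HIP runtime:", torch.version.hip)         # None on CUDA/CPU-only builds
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```

If `torch.version.hip` prints a version string and `is_available()` is `True`, PyTorch-based models should generally work.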

    • @Senkiowa (1 year ago, +3)

      Maybe the GGML file format will catch on with models other than LLaMA derivatives. Those just have to be loaded onto SSDs, and they run decently on CPU power.

    • @SpeedKreature (11 months ago)

      There are several LLMs that work quite well on this card; however, thermals and RAM are still a bit of an issue. You're pretty well limited to around 6B parameters, and because of throttling, don't expect results to be instant; they do come fast enough to be useful, however. There are a few code-specific LLMs (e.g. CodeT5+) which work quite well at this size.

  • @hectorvivis3651 (1 year ago, +27)

    Thank you SO MUCH for this video.
    I was planning on doing Stable Diffusion at home, but with only my own Vega 64 to start, it felt a bit complicated outside of a whole, costly system upgrade.
    I'll probably go this route if shipping from the US to Europe isn't too costly/complicated.
    As a hobbyist in tech with a low budget, your content is so valuable.
    Please keep up the good work.

    • @prgnify (1 year ago, +1)

      I'm a fellow Vega 64 owner and haven't tried anything yet; I didn't even think about running these things locally, as I would've expected it simply not to work at all. But you mentioned it seemed complicated; what were some challenges you encountered? What did and didn't you do?

    • @hectorvivis3651 (1 year ago, +1)

      @@prgnify Didn't really try anything yet, to be perfectly frank; I was just skimming through manuals and some other Vega owners' feedback.
      The fact that I'm on Windows (as it's my biggest GPU and I'm a gamer), and the lack of WSL support for ROCm, makes it pretty rough to use for Stable Diffusion.

    • @prgnify (1 year ago)

      @@hectorvivis3651 Thank you

    • @SgtMurasa (1 year ago)

      @@prgnify I have an incredibly low-tech system that isn't built for machine learning (using an Nvidia 1050 Ti) and have had no issues running Stable Diffusion at all. The only caveat is that it's a little slow; a 512x512 image takes about 1 minute to generate, while I've seen others saying their more dedicated setups take 8 seconds or so. A graphics card with more VRAM is also required for running the HiRes Fix option if you want more detailed images. Personally I've been amazed by the images generated with my 4GB VRAM card, so I can only imagine what a more powerful machine can achieve.

  • @Jimmy___ (1 year ago, +19)

    Cool video. If you want more accurate Danny Devito faces you could make a Devito LORA for Stable Diffusion. Also, looking good, Wendell! I know it wasn't by choice but it's a silver lining :p

    • @jag764 (1 year ago, +2)

      He didn't make anything; the AI generated it...
      I really hate how people take credit for it. It's like taking credit for art you commissioned.
      Everyone would agree that taking credit for the work of an artist you commissioned would be idiotic, but it's not any different here...
      You're not creating anything; you're having the AI generate things based on words you give it, using illegally and unethically scraped data, without the consent of people who are actively being harmed by their art being scraped and used against them...

    • @diewindows5628 (1 year ago, +7)

      @@jag764 this is a completely silly rant, you actually do have to gather a bunch of images and train stuff if you want a decent lora, so "make" in this context is accurate. also name one artist that has been harmed and I will download their jpegs 😱😱😱😱😱😱😱😱

  • @ProjectPhysX (1 year ago, +13)

    AI is not the only workload that needs insane amounts of VRAM. A few of these Instinct MI25 would make for a very capable FluidX3D system for CFD simulations. Also compelling cards are the $200 Nvidia Tesla P40 24GB, and $600 AMD Instinct MI60 32GB.

    • @georgebrandon7696 (1 year ago, +1)

      GIS too. And AMD pretty much owns that space when it comes to GPU.

    • @moafwaz5563 (1 year ago)

      The P40 is two 12GB cards fused, no? Isn't that a bad idea? I mean, for the price it's OK if you're on a tight budget.

    • @georgebrandon7696 (1 year ago, +1)

      @@moafwaz5563 The K80 was the same way, and that was used in data centers across the world, including AWS's cloud (Colab) platform. Nothing wrong with that design. I have several in my home lab; picked them up when they hit under $100 on the used market. Viable, even given the low (outdated) CUDA compute capabilities.

    • @ProjectPhysX (1 year ago, +1)

      @@moafwaz5563 No, the P40 is a single GPU with 24GB. It's the same as the Pascal Titan Xp, full GP102, but with double the VRAM.
      Its predecessor, the Tesla M40, is also a single GPU and available in both 12GB and 24GB variants.
      But there are some other models that are dual-GPUs: the Tesla K80 is 2x 12GB and the Tesla M60 is 2x 8GB. And some models are quad-GPUs, like the Tesla M10, which is 4x 8GB.

    • @ProjectPhysX (1 year ago, +1)

      @@georgebrandon7696 A single GPU is generally more valuable than a dual-GPU with the same overall VRAM capacity. Some workloads need as much unified VRAM as possible, and splitting across GPUs introduces communication overhead which reduces efficiency by quite a lot.
      Other workloads, like virtual desktops, are fine on a small toaster GPU, and then it's beneficial to have 2 or 4 of them on a single card.

  • @arugulatarsus (1 year ago, +12)

    I've been using my 6700xt with stable diffusion for a while. If anyone needs a hand, reddit has some decent guides. I can give pointers maybe too. ;)

    • @ravenstarfire8816 (1 year ago)

      I'm about to decide between a 4060 Ti 8GB when it comes out in May and a 6800 XT 16GB for the same price. I want to use Stable Diffusion and heard it requires Nvidia CUDA... so you're saying I'd still be able to use the 6800 XT?

    • @arugulatarsus (1 year ago)

      @@ravenstarfire8816 A- The Azrath The Metrion The Zynthos.
      B- I get ~5 iterations per second on my 6700xt. If you're gaming in Linux, AMD is much more interesting than NV. (My opinion) A 6800xt IMO is a better deal, but you gotta work on the config here and there. I assume if you enjoy L1Tech, you enjoy the challenge too.

  • @mikebutler9332 (1 year ago, +9)

    I *just* picked up one of these and I'm quite happy with the purchase. It's a lot of compute for the price point. I don't do ML but for spectral DE solvers it's been great.

    • @sacamentobob (1 year ago, +2)

      Mike, what's spectral DE ? (differential equations?)

    • @mikebutler9332 (1 year ago, +6)

      @@sacamentobob Spectral solvers use cosine and sine functions (in series form) to approximate the unknown functions of differential equations. They try to solve the whole function at once (globally), rather than a step at a time like standard solvers. These algorithms do a lot of Fourier transforms, so it's really easy to accelerate them on GPUs.
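The idea above can be shown in a few lines. A minimal sketch (not the commenter's actual code): the 1-D heat equation u_t = u_xx on a periodic domain, where each Fourier mode simply decays as exp(-k²t), so the FFT does all the work:

```python
# Minimal spectral solve of the 1-D heat equation u_t = u_xx
# on a periodic domain: advance each Fourier mode exactly.
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u0 = np.sin(x)                          # initial condition
k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers on [0, 2*pi)

t = 0.5
u_hat = np.fft.fft(u0) * np.exp(-(k ** 2) * t)   # mode k decays as exp(-k^2 t)
u = np.real(np.fft.ifft(u_hat))

# Exact solution for sin(x) initial data is exp(-t) * sin(x)
print("max error:", np.max(np.abs(u - np.exp(-t) * np.sin(x))))
```

These per-mode multiplies and FFTs are exactly the batched, memory-bandwidth-heavy operations a card like the MI25 accelerates well.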

    • @sacamentobob (1 year ago, +1)

      @@mikebutler9332 ok thanks for the explanation. It's been a long while since I did GDEs.. lol
      I no longer deal with these as I only did them academically and not at work or for personal interest

  • @TheSleppy (1 year ago, +5)

    This is pretty cool as a budget option, but the extra work requires some skill for sure. Getting a used 5700 XT or non-XT is, I think, more viable to skip the extra work, or just get an RX 6600 for 200 USD, with no work required.
    Regardless, if you do not want to go the Linux route, Nod.ai's SHARK has a GPU-agnostic solution that runs on AMD, Intel and Nvidia, even APUs.
    If you're a non-normie you would appreciate their work.

  • @MikeKasprzak (1 year ago, +1)

    Thanks for the tip! I managed to snag one shipped for $120 CAD.

  • @Machinationstudio (1 year ago, +6)

    I got a Radeon VII as a hand-me-down from a Mac user's external GPU setup. 16GB makes me interested in using it for Stable Diffusion. I tried it before with my 1070 Ti; I wonder if it'll be better with twice the VRAM.

    • @ololh4xx (1 year ago, +1)

      get a 1080TI. 11GB is enough for pretty much all models out there - you can even run Dreambooth if you wanted to.

  • @unclerubo (1 year ago, +9)

    I'm looking forward to the day when I can run this kind of stuff, plus my own personal assistant on my own nearly silent server, without any need to send my information to greedy corporations...

    • @MickeyMishra (1 year ago)

      I'm begging for the day it can do government paperwork. That will be the day!

  • @BAD_CONSUMER (1 year ago, +1)

    That thumbnail was really bugging me, but I didn't want to offend anyone just in case... glad you touched on it in the video

  • @cameriqueTV (11 months ago)

    Would these long boards run in an external case? Then cooling setups could have all the room they need.

  • @gsedej_MB (1 year ago, +1)

    Hi Level1. Did you also try the MI25 on Linux (ROCm)? Is it stable?

  • @xero110 (1 year ago, +9)

    The results I've been getting with Easy Diffusion are insane. My 2070 laptop can generate a 512x768 picture with 40 steps in about 15 seconds.

    • @x1c3x (1 year ago, +4)

      Try AUTOMATIC1111.
      With xformers it should be at least 20% faster than Easy Diffusion.
      My 3080 Ti gets 15-16 it/s or so in A1111, but only 10ish in Easy Diffusion.

    • @xero110 (1 year ago)

      @@x1c3x While 15 seconds isn't a lot of time to wait, I'll check this out.

    • @neonlost (1 year ago)

      @@xero110 xformers also saves VRAM, so you can do slightly bigger renders

  • @Tannius (1 year ago, +1)

    Does anyone have a link for the blower adaptor? I'm not seeing it on the forum.

  • @Gastell0 (1 year ago)

    A Supermicro SC745 chassis with a GPU fan is an incredibly good workstation/server chassis for these GPUs on the cheap.

  • @Jazdude123 (1 year ago, +1)

    Is it possible to use these in something like a TrueNAS system and access the StableDiffusion webGUI by running the program in a jail or something?

  • @AntManeAmp (1 year ago, +2)

    Just picked up 2 MI25s at $74 each… See you in the Level1 forums… Thanks, Big Guy

  • @CycahhaCepreebha (1 year ago, +7)

    I got really excited for a minute, then I realised I confused Mi25 for Mi210.

  • @Everth97 (1 year ago)

    I couldn't get Stable Diffusion running on a 6900 XT on Windows a month or so ago. Did they add proper ROCm support?

  • @cromefire_ (1 year ago, +7)

    Now AMD only has to support their more recent GPUs, like the RX 7900 XTX... 4 months out and still no ROCm support...

    • @Aremisalive (1 year ago)

      Is RDNA 3 not supported by ROCm????? Whaaat?

    • @cromefire_ (1 year ago, +6)

      @@Aremisalive Not fully. Some small stuff works, but TensorFlow and PyTorch are broken. It's supposedly fixed with ROCm 5.5, but I've been waiting for that for months now...
      They won't ever be "officially" supported, because they're not workstation cards and such (same for Nvidia), but I expected at least basic unofficial support for ML to be there...

    • @yenaek (1 year ago, +1)

      @@cromefire_ There was a Docker image of 5.5 out for a few days, but they removed it. With a bit of googling you can find a torrent for it though. It at least got Stable Diffusion working.

    • @cromefire_ (1 year ago)

      @@yenaek Yeah, I have an image based on that, but TensorFlow for example doesn't even work with that yet. (Plus it's kinda janky to always execute in an image.)

  • @I_Lemaire (1 year ago, +1)

    And now you can use this to run DeepFloyd IF. These are amazing times!

  • @kayankara4239 (1 year ago, +1)

    Can I get 2-3 of them and get them to run together for a chatbot?

  • @gonzalodijoux5953 (1 year ago)

    Hello. I currently have a Ryzen 5 2400G, a B450M Bazooka2 motherboard and 16GB of RAM. I would like to run vicuna/Alpaca/llama.cpp relatively smoothly. Could you advise me on a card (MI25, P40, K80...) to add to my current computer, or a configuration with cheap components that can be found second-hand?
    Thanks

  • @JustinAlexanderBell (1 year ago)

    Can you use any of these cards for GPU encoding with Jellyfin?

  • @dtesta (1 month ago)

    So, do I have to connect both 8-pin connectors? I don't have 2x 8-pin on my PSU, and if I connect only one 8-pin, the GPU just constantly beeps when I turn the computer on. I have a 650W PSU.

  • @Dr_Tripper (1 year ago, +2)

    I don't think the MI25 is suitable for 'machine learning' as it has no tensor cores, but for Stable Diffusion and inference it works OK. It still has a 300W TDP and runs extremely hot.

  • @mgeb101 (1 year ago, +1)

    What about all the Radeon VIIs that can be bought cheap used?

  • @sailorbob74133 (1 year ago, +8)

    How big of a difference is that 16GB of HBM compared to running on like a 6800 with 16GB GDDR6? Those are going for like $450 now, which is quite a bit more, but I would think they'd still be quite a bit faster?

    • @2intheampm512 (1 year ago)

      I don't think the 6000 series has ROCm support

    • @cromefire_ (1 year ago, +9)

      @@2intheampm512 It has, just no professional support. Basically all of the cards that have a Radeon Pro counterpart are pretty safe (and lower end cards like the 6600 work too, you just have to... Tell it that you are really sure your card is supported sometimes).
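That "tell it you're sure" step is commonly done with the `HSA_OVERRIDE_GFX_VERSION` environment variable, a well-known community workaround (the exact gfx target for your card is an assumption; `10.3.0` is the usual value for RDNA2 cards like the RX 6600):

```python
# Community workaround for officially unsupported RDNA2 cards:
# make ROCm treat the card (e.g. gfx1032 on an RX 6600) as the
# supported gfx1030 target. Must be set before the ROCm runtime
# (e.g. PyTorch) is loaded in the process.
import os

os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"
print(os.environ["HSA_OVERRIDE_GFX_VERSION"])
```

The same variable can of course be exported in the shell before launching a Stable Diffusion UI instead of being set from Python.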

    • @2intheampm512 (1 year ago, +1)

      ​@@cromefire_ Got it thank you! Wasn't aware

    • @undefinablereasoning (1 year ago, +1)

      The floating point performance is about half, so despite the extra HBM bandwidth, I'd wager it's closer to a 6700 XT.

  • @jamesm2075 (1 year ago, +2)

    What would you recommend someone who enjoys gaming and really wants to get into playing around with AI? I am currently running a Vega 56 or a GTX 1060 for stable diffusion. I would really like to try all sorts of other AIs/LLMs locally as well. I was thinking about either a 7900XTX or a RTX 4080 but maybe there is a workstation card that would better suit my budget/needs?

    • @2intheampm512 (1 year ago, +1)

      If you can stretch your budget to a 4090 get that, or if you're focused more on running LLMs locally then try and get 2x 3090s/3090Tis in good used condition (especially with an active warranty and original invoice for the 3090). If you can stretch your budget to more around $3500 then get an open box/lightly used RTX A6000 from a reputable eBay seller. Also missed that you also want to play games too lol, in that case get a 4090 for sure.

    • @asdfbeau (1 year ago, +3)

      I have an A4000 16GB and 2 7900XTXs:
      A4000 was dead simple to setup and just as fast in image and text generation (if not faster).
      The 7900s will require you to download a leaked ROCm container from some shady fileshare, and you'll only be able to use them in a container, so you need that as a baseline technical understanding...this will change once AMD officially releases rocm 5.5 but that may not be until June/July.
      16GB is really your minimum if you want to mix/match models; other than that it's just, 'what is your time worth?'

  • @AnneHirow-bh6yq (1 year ago, +4)

    First experience with this channel. Informative, accessible, and well-presented, even by old media standards.
    This would have fit comfortably on TechTV back in the day.

  • @madocworks1147 (1 year ago)

    Update: I got it to work! I enabled a BIOS setting called Re-Size BAR Support on my X570. Not sure what it does, but the system now boots and the operating system sees the card. I also got it working on the DS3H by enabling Above 4G Decoding in the BIOS.
    A warning for anyone trying to get one of these: I've been having issues with multiple motherboards not initializing it correctly. I believe it's a BIOS compatibility problem. Some motherboards will not initialize without a graphics card, and since this isn't really a graphics card, they attempt to initialize it and fail. If your motherboard does not have a way to boot without a graphics card, I'm doubtful you'll be able to get this to boot. I've been testing with two different AMD motherboards: an MSI X570 (blank screen, no beeps) and a Gigabyte B450 DS3H V2 (8 beeps, blank screen; my research says that means a GPU initialization error).
    I'm curious if anyone else out there got these to work. What motherboard are you working with? Did you need to change any BIOS settings? Not much information out there on the internet about these.

  • @AG-pm3tc (1 year ago)

    What about connecting it to something like an external GPU enclosure for a laptop?

  • @RSV9 (2 months ago)

    I wish there were external "modules" to connect via USB (Thunderbolt 4) to a PC or Mac that are not graphics cards but rather specialized cards for AI and Stable Diffusion. That would avoid buying new PCs or more powerful graphics cards just for that reason.
    Thanks for the video

  • @tad2021 (1 year ago, +16)

    Nvidia gets more attention in the consumer space from having a longer and generally much easier support tail. Maxwell is still supported by current versions of CUDA, and Kepler will still work with older drivers on CUDA 11.
    I'd love to see AMD support improve. Hopefully the attention from this video will help.
    I saw this thread on the forum. One thing I wondered was how this card compares to the M40 or P40 cards, which are around the same price but a lot easier to get running.

  • @Aremisalive (1 year ago, +5)

    How does this compare to something like a Tesla P40 with 24GB? They're more like $200.

    • @Badjujubee (1 year ago, +2)

      The P40 has int8 support, so it has some more power in some models clock for clock. It also has nowhere near the FP64 penalty the Vega 10 series suffered from. But the MI25 is teasing 75 dollars in some postings, so it legitimately is one of the cheapest foot-in-the-door TensorFlow-type accelerators that actually has the power to be useful in the home lab.

    • @Aremisalive (1 year ago, +1)

      @@Badjujubee Well, that sounds good as my current home server is using my old 980ti, which has a measly 6gb of vram. I could utilize my 6950xt, but for less than $100 I would like the convenience of a separate machine.

  • @Jazdude123 (1 year ago, +3)

    Could you use these compute units in a render farm for Blender?

    • @totalermist (1 year ago, +1)

      Nope. Blender has *very* poor support for anything other than CUDA/OptiX when it comes to hardware accelerated rendering. Expect poor performance and lots of crashes for certain scenes and/or materials.

  • @FREQQLES (1 year ago, +2)

    LDDLM: Large Danny DeVito Learning Model

  • @cdoublejj (1 year ago, +1)

    I wanna see AMD and Arc vGPU in vSphere/ESXi and Proxmox

  • @ChinchillaBONK (1 year ago, +1)

    Just curious, how do you game on AMD Instinct or Nvidia Tesla/workstation GPUs in general when they don't have Display outputs?

    • @TedPhillips (1 year ago, +2)

      There are OS, virtualization, and application techniques that let you decouple the GPU running the graphics workload from the interface providing the output. This adds some overhead but is otherwise viable (e.g. the framebuffer has to be sent over PCIe to the other graphics device that has the output).
      Most typical: Windows has a graphics option that lets you choose which GPU an application runs on.

    • @mytech6779 (5 months ago)

      Short answer: dual cards.

  • @Verpal (1 year ago, +3)

    If the RTX 3060 had been dropping in price like its original competitor, the RX 6600, I would have told everyone to just grab a 3060, but in the real world NVIDIA is going to keep Ampere prices as high as possible, for as long as possible.

  • @kodimolly6082 (9 months ago)

    Great sharing!

  • @Patrick-pu5di (1 year ago, +1)

    I was looking to mess around with an ML library about a month ago and read a lot of good things about PyTorch. I'm not sure what the issue is, but the ROCm release isn't available on Windows :( Any idea what that's about?

    • @misiekt.1859 (1 year ago, +2)

      AFAIK AMD announced Windows support just a week ago. For now you may try WSL2, but I didn't test it. Oh, and BTW, switch to Linux ;)

    • @Patrick-pu5di (1 year ago, +1)

      @@misiekt.1859 I went googling and came across that... announcement? (accidental leak? lmao). I ended up using TensorFlow + DirectML... a mess. Agh, I'd love to switch over to Linux but feel chained to Windows for game compat :/ one day...

  • @Duckers_McQuack (8 months ago, +1)

    What iterations per second does this card perform at? I'm looking for a card to do just AI stuff like image, voice, video and upscaling, and as it's only 80 bucks used, I need to find out how performant it is as a workstation card vs a gaming card like my 3090.

    • @mytech6779 (5 months ago)

      AMD dropped ROCm support for it. Their support has a reputation for being arbitrary and erratic (if you're not a bulk buyer with support contracts).

    • @Duckers_McQuack (5 months ago)

      @@mytech6779 That's an oof. The card's value on eBay nearly doubled, so I'm kinda glad I didn't buy one then.

    • @mytech6779 (5 months ago)

      ​@@Duckers_McQuack If you already had an older version of ROCm set up, or the card in hand, and wanted to try it, it might work fine and just keep on working, but I don't trust AMD support enough to gamble on setting up a new-to-me machine. (AMD generally does not keep older versions of any firmware or software available.)

  • @Elinzar (1 year ago, +5)

    Man, I've been eyeing that card for a long time (for gaming, just to push it balls to the wall) and wanted to see if it dropped a bit more.
    Well, RIP the price now...

    • @vegarnilsen9283 (1 year ago, +1)

      It’s still the same price. The reseller i originally wanted to buy it from (Swing computers or something) seems to be out of stock. For me it seems like a good card flashed to a WX 9100 because some CAD-applications need professional line GPUs like Quadro or WX. At first the single DP output seemed like it would make it hard for a multimonitor workflow, but i found out recently that DP 1.2 and up supports MST/Daisy chaining. Not supported by all monitors but it made it more viable for my use case. The prices might be on the rise tho, so it may be a good time to buy now.

    • @Elinzar (1 year ago)

      @@vegarnilsen9283 Yep, I thought it would be the M40 all over again; I guess nobody really wants them.

  • @ed0c (1 year ago)

    I've been looking for good cards that can do Hyper-V GPU passthrough for VMs.

  • @SquintyGears (1 year ago, +4)

    I thought PyTorch only had CUDA support for GPU acceleration?
    Edit: checked again; you just use the ROCm package instead of the CUDA one and all the built-in CUDA functions work regardless. Neat.

    • @eaman11 (1 year ago, +1)

      I think you can use PyTorch even without all the mess of ROCm, just with AMDGPU.

    • @SquintyGears (1 year ago, +1)

      @@eaman11 No, well... you have to choose which PyTorch package you install. Doing pip install torch alone only gets you CPU and the absolute basics. So they give you a list of combos that include the audio libraries and a bunch of other things: there are 3 versions of CUDA, a CPU-only and a ROCm option. The ROCm option will work on any supported GPU because, in the background, it swapped the CUDA functions with device-agnostic ones. You don't have to do anything special to convert to a ROCm configuration, not until you're getting to a distributed setup at the server tier.
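In practice this is why the standard device-agnostic PyTorch pattern needs no changes on AMD: under a ROCm build the `"cuda"` device string selects the AMD GPU. A minimal sketch:

```python
# Device-agnostic PyTorch pattern: the same "cuda" string selects the
# AMD GPU under a ROCm build, or an Nvidia GPU under a CUDA build.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.ones(4, device=device)
print(x.sum().item())  # 4.0 on either device
```

The only install-time difference is which wheel you pick (e.g. the `rocm` index URL instead of the `cu` one); the script itself is identical.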

  • @MickeyMishra (1 year ago, +1)

    I finally have a use for the Dell Xeon servers I've got laying around. Just plop the cards in and done. YES!!!!!

  • @Decenium (1 year ago, +2)

    Wendell is looking good and healthy. Good stuff.

    • @MickeyMishra (1 year ago)

      I think Wendell looks younger too!
      Whatever the dude is doing, he's doing great!

  • @CloudybayTee (1 year ago)

    Just saw a Radeon Instinct MI50 16GB currently selling for USD 115 with a copper heatsink, fan, and mount.

  • @GIANNHSPEIRAIAS (1 year ago, +5)

    When I tell people that eventually AMD will start to be the norm with ROCm/MIOpen, purely because it's open source and because they are winning the exascale race, they really don't believe me...

    • @Mil-Keeway (1 year ago, +2)

      They are doing their very best to sabotage themselves in the open source parts, significantly slowing their rate of adoption. Not a single consumer GPU is supported by their framework; some are at least partially compatible, but no instructions exist for building in the support and not getting the "no binary has been found" error, or multiple unimplemented features once you get it working. They've been repeating that they'll build support for regular GPUs into ROCm "in 3 months" since at least 2018, while Nvidia drivers currently support pretty much all desktop GPUs since 2016 (1080 Ti) and have been doing so since before then.
      Is "ROCm becoming the norm" the new "year of the Linux desktop"?

    • @mytech6779 (5 months ago)

      AMD has fools managing the department with zero respect for small-time GPGPU customers. Apparently they don't understand elementary business concepts, like: if students can only get reasonable access to CUDA platforms, then they, and the companies hiring them, are going to favor CUDA for the next entire generation.
      I know the big money comes from the big customers buying $10k cards for custom-designed supercluster machines, but you can't expand market share when nobody can get started. Every single student ends up learning CUDA because it's the only practical thing available.

  • @flashcloud666
    @flashcloud666 1 year ago

    What are those monitors behind Wendell?

  • @BattousaiHBr
    @BattousaiHBr 11 months ago +1

    Bad timing, considering they just announced they're dropping ROCm support for these GPUs.

  • @blueguitar4419
    @blueguitar4419 1 year ago +1

    I bought an MI25 last week. Lucky me before you induced the price hike

  • @rudysal1429
    @rudysal1429 1 year ago

    I have an AMD GPU (a 4650G) that I can't get ROCm to work on, but I think it's because I have an Nvidia GPU and the mobo is disabling it. Kinda want to try Stable Diffusion now, but how lol
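ROCm builds of PyTorch reuse the `torch.cuda` API, so a quick way to see which backend a mixed AMD/Nvidia box actually exposes is something like the sketch below (the helper name is mine; it assumes nothing beyond stock PyTorch, and degrades gracefully if torch isn't installed):

```python
import importlib.util


def describe_torch_backend():
    """Report whether the installed PyTorch build targets CUDA or ROCm,
    and which GPU (if any) it can actually see."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    # On ROCm builds torch.version.hip is a version string; on CUDA builds it is None.
    backend = "ROCm" if getattr(torch.version, "hip", None) else "CUDA"
    if torch.cuda.is_available():  # also returns True for visible ROCm devices
        return f"{backend} build, using: {torch.cuda.get_device_name(0)}"
    return f"{backend} build, but no compatible GPU is visible"


print(describe_torch_backend())
```

If this prints a CUDA build with the Nvidia card's name, the AMD GPU being invisible is a packaging issue (wrong torch build), not necessarily the mobo.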

  • @ash0787
    @ash0787 10 months ago +2

    I got a used 3070 for half the price (or less) of what a new 70-class card usually sells for; sadly it didn't get along with my 650W PSU and some expensive RAM died, so I'm still using a GTX 1080 for SD. I was looking at the AMD cards, which are apparently good value for gaming, but their performance with SD is fairly poor, so this workaround you have found is interesting.

    • @peterr6595
      @peterr6595 8 months ago +1

      I use a 3060 12GB for AI and SD.
      $250 brand new

  • @Tannius
    @Tannius 1 year ago

    Anyone getting "adapter not found" while trying to use amdvbflash?

  • @TexasJoe1985
    @TexasJoe1985 Год назад +12

    This is cool, but I would recommend against buying this. It's a lot of work for fairly abysmal performance. A used 3060 12GB will perform at 6-8 it/s at 512x512 on Euler at 20 steps, depending on the model, and you can also train models with 12GB of VRAM. Dollar per performance, that beats this old card and it's significantly faster which is really what's important. The extra memory on this AMD card doesn't get you anything except maybe higher resolution images which you're not going to use given the performance of this card and the way models behave at higher resolutions.

    • @hectorvivis3651
      @hectorvivis3651 1 year ago +6

      I understand your point, and maybe that works in the USA, but at least in France it's impossible to find a 3060 12GB for under 200€+ right now (and we're talking second-hand), while even with shipping from the USA, the MI25 can be found for 130€.

    • @undefinablereasoning
      @undefinablereasoning 1 year ago +1

      Exactly, there's more to how these cards perform than just their VRAM size. In fact, most modern video cards can page their VRAM to system RAM, and although this operation is very slow, they can still be effectively faster than some of these old cards due to their sheer compute performance.

  • @Xanderfied
    @Xanderfied 6 months ago

    I've yet to find one with a DisplayPort out. Hell, even finding one anywhere near that price point is a needle in a haystack. Most for sale are still new and still $2k or more.

    • @mytech6779
      @mytech6779 5 months ago

      You should do some basic product research; no Instinct has any sort of video port. They are accelerator cards.
      The MI25 is super cheap, but no longer supported by ROCm.

  • @m_sedziwoj
    @m_sedziwoj 1 year ago

    I'm running Stable Diffusion, but the version from HuggingFace, on an RX 6800 XT in Python: loop overnight with random seeds, and the next day there's a lot to check out ;)
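The overnight loop described above is easy to script. Here is a minimal sketch where the seed bookkeeping is plain, runnable Python, while the actual HuggingFace diffusers pipeline call (which needs a GPU and the `diffusers` package, and whose exact setup is an assumption) is left as a comment:

```python
import random


def plan_overnight_batch(n_images, master_seed=None):
    """Pick one reproducible random seed per image so any good result
    can be regenerated later at a higher step count or resolution."""
    rng = random.Random(master_seed)
    return [(i, rng.randrange(2**32)) for i in range(n_images)]


for index, seed in plan_overnight_batch(200, master_seed=42):
    # With HuggingFace diffusers this is roughly (hypothetical pipeline setup):
    #   generator = torch.Generator("cuda").manual_seed(seed)
    #   image = pipe(prompt, generator=generator).images[0]
    #   image.save(f"out/{index:04d}_{seed}.png")
    pass
```

Logging the seed in the filename is the important part: it's what lets you rerun a keeper the next morning.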

  • @bakadavi3917
    @bakadavi3917 11 months ago

    Does that card perform better than an RX 6750 XT in SD? Also, could I use both GPUs in my system and get them to work together?

    • @Movierecap998
      @Movierecap998 11 months ago

      Let me know if you get the answer

  • @ronpaul2012robust
    @ronpaul2012robust 1 year ago

    So who is going to make an installer for Stable Diffusion to run locally on Windows?

  • @WarblyWark
    @WarblyWark 1 year ago

    Anyone know how the SystemC support is on these?

  • @whathappenedman
    @whathappenedman 1 year ago +2

    Is doing extra cooling mandatory? Can you run it without that?

    • @gabrielecarbone8235
      @gabrielecarbone8235 1 year ago +1

      It would overheat even at idle without a fan

    • @whathappenedman
      @whathappenedman 1 year ago

      @@gabrielecarbone8235 Thanks. Yes, I realized it has no fan at all in it lol

  • @BenMDepew
    @BenMDepew 1 year ago +1

    lol. Reminds me of when I zip-tied a Corsair H70 AIO to an R9 280X

  • @mckidney1
    @mckidney1 1 year ago

    AI still has a few years to go before it makes me think Miroslav Táborský (born 9 November 1959 in Prague, Czechoslovakia) is Danny DeVito

  • @mattmunroe4928
    @mattmunroe4928 1 year ago +1

    I am running Stable Diffusion on my 3090. I upgraded from a 3080 just for Stable Diffusion. Doing 960x540 renders, which then do the built-in upsample to 1920x1080, takes 25GB of VRAM on the last step. Takes too much VRAM!

    • @strongforce8466
      @strongforce8466 11 months ago

      That's nuts; I was hoping with 24GB you'd be able to do something like 2048x2048, or much more actually, with hires fix (512x768 base). I think with my current settings on a 2080 I can do 1280x1900 with just 8GB

  • @joshuascholar3220
    @joshuascholar3220 4 months ago

    I think I'm getting higher res and speed on a 12GB 3060, so maybe Nvidia has an edge. Though I noticed that you had AUTOMATIC1111 up on your screen, and I saw someone claiming that the install you should use to get speed out of AMD is called SHARK, not AUTOMATIC1111. So maybe that's where you left some performance on the table.

  • @timomustamaki5407
    @timomustamaki5407 2 months ago

    Just got my Tesla P100 running SD (512x512) at 4 it/s. Not blazing fast compared to some 30xx or 40xx cards, but good enough for me...

  • @peterxyz3541
    @peterxyz3541 1 year ago

    Thanks 👍🏼👍🏼👍🏼

  • @PhazerTech
    @PhazerTech 1 year ago +8

    Nice video. I recently did my own, going over my experience using a 6700 XT with Stable Diffusion, YOLO, and chatbots. But I disagree with your prediction about general intelligence. Personally I don't think it's going to happen any time soon, and to be honest, it might not even be possible. AGI requires the ability to deal with new situations, but this technology in its current form completely falls on its face when dealing with new situations. Its capabilities entirely depend on the data it was trained on.

    • @Movierecap998
      @Movierecap998 11 months ago

      What about the 6800 XT?

    • @PhazerTech
      @PhazerTech 11 months ago

      @@Movierecap998 It works on all the RDNA 2 GPUs.

    • @mytech6779
      @mytech6779 5 months ago

      I have a hypothesis that AGI will necessarily be just as flawed as natural intelligence. It's really the lack of precision and repeatability, and even going off on tangents at random, that allows NI to do what it does.

  • @VincentVonDudler
    @VincentVonDudler 1 year ago

    Why would they not build a cooling solution on this card?

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 1 year ago +3

    Would a few Radeon VIIs do well here?

    • @mareck6946
      @mareck6946 1 year ago

      They're about the same, yes.

  • @deathcometh61
    @deathcometh61 1 year ago +2

    The Nvidia M40 24GB is $120 used.

  • @Prophes0r
    @Prophes0r 1 year ago +8

    If only the Instinct cards ACTUALLY worked with SR-IOV like they claim to.
    They claim right on their product page to be for "VDI", but no one seems to have any success with it.
    And it's not for lack of trying at these price points.
    There is plenty of interest, but everyone who tries to get it working seems to crash and burn.
    What we need is for someone with enough visibility to call out AMD on it and get some actual working drivers.
    I mean, if one of the intended use cases for these cards was VDI, surely someone was using it for that... right?
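For anyone wanting to verify the SR-IOV claim on their own card, the standard Linux checks (generic PCIe tooling, not specific to Instinct; the `03:00.0` bus address is a placeholder for your GPU's) look like:

```shell
# Does the GPU advertise an SR-IOV PCIe capability at all?
sudo lspci -vvv -s 03:00.0 | grep -i "SR-IOV"

# If it does, the kernel exposes how many virtual functions it allows:
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs
```

Advertising the capability is necessary but not sufficient; without working host drivers the virtual functions still can't be created or passed through.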

  • @JesseMaurais
    @JesseMaurais 1 year ago +1

    I plan to make movies where everyone is Nick Cage. First on my list is Face/Off

  • @italotorres3653
    @italotorres3653 11 months ago

    Can I run two of these GPUs to get better performance in Stable Diffusion?

    • @mytech6779
      @mytech6779 5 months ago

      Yeah, they are designed to run with 4 or 8 in a server-style machine. But AMD dropped ROCm support for it. Their support has a reputation for being arbitrary and erratic.

  • @Seventeen76
    @Seventeen76 8 days ago

    I have the ROG Strix RTX 3060 12GB and it kicks booty at Stable Diffusion. With xformers it really gets after it

  • @nauseouscustody1440
    @nauseouscustody1440 8 months ago

    On the Russian secondhand markets it now costs about $1k... (no politics, it's just the market)

  • @SoulRollerFIN
    @SoulRollerFIN 1 year ago

    I have no idea about 95% of the stuff in this video, but hot damn, it's still interesting!

  • @bci3937
    @bci3937 6 months ago

    I want to see AMD support for the R9 390 8GB :D - I've got nothing; no AMD support running on Windows or Linux. (Comfy)

  • @RAM_845
    @RAM_845 10 months ago

    I have an RX 6800... tried running SD, it takes ages. My GPU does 5 it/s. We need ROCm for Windows

  • @RaaynML
    @RaaynML 1 year ago +1

    Wow, $100 gets you a lot of compute nowadays

  • @sacamentobob
    @sacamentobob 1 year ago

    Always like your content mate!

  • @adamzahoy1749
    @adamzahoy1749 1 year ago

    What I want to see is the James Bond GoldenEye movie starring Timothy Dalton. :D

  • @neonlost
    @neonlost 1 year ago

    Looks like it runs as fast as my 3060 12GB at 35% of the price; nice, might pick one up

  • @georgebrandon7696
    @georgebrandon7696 1 year ago +2

    I find it funny that all of the AMD fanboys (and some NVIDIA ones) bashed NVIDIA when they were locking consumer cards out of enterprise use. Yet AMD is pretty much doing the same exact thing now when it comes to ROCm. That said, I'm glad PyTorch is teaming up with AMD for AI stuff. That means some of the NVIDIA cards will come down in price. Team NVIDIA/Intel ATW. I give AMD credit, though. From what I know, they've all but owned the GIS space when it comes to GPUs. I MAY decide to switch when ALL of these cloud providers decide to offer AMD cards in their solutions. The only major ones I know of who've decided to provide AMD GPUs are AWS, and I believe Azure. GCS, nope. OCI, nope. Linode, nope.

  • @ulamss5
    @ulamss5 1 year ago

    So this is only for Linux?

  • @FlorianSchauer
    @FlorianSchauer 1 year ago +2

    I bought a Tesla M40 24GB. Can recommend.

    • @Tigermania
      @Tigermania 3 months ago

      What mobo/iGPU/hardware did you pair it with to get it working? Software drivers? BIOS settings? Others could copy your successful setup :)

  • @Marc_Wolfe
    @Marc_Wolfe 9 months ago +1

    The worst part is, you can't just flash your ideal settings as a custom vBIOS. Nope, a bunch of bullshit software workarounds that need to be redone if you reinstall Windows, update drivers, or even have an unexpected shutdown caused by ANYTHING. It really is a clusterfuck. Also, do yourself a favor and just get a hardware programmer like the CH341A.

  • @snake_00x
    @snake_00x 1 year ago +2

    The price is already jumping. If you want one you better get one ASAP...as in right now.

    • @asdfbeau
      @asdfbeau 1 year ago

      like, right now?....what about now?

  • @user-ik7rp8qz5g
    @user-ik7rp8qz5g 1 year ago +5

    The MI100 goes for about $7000 on my local second-hand marketplace. Even the several-times-overpriced top cards of the green 3000 series didn't cost that much during the mining boom, and they sure generate faster.

    • @cromefire_
      @cromefire_ 1 year ago +6

      You might be confusing something; the MI100 is one of the big boys, IIRC. It's competition for the A100, so it'll pretty much destroy the consumer 30-series cards (also, it has 32GB of HBM ECC RAM).

  • @ky5666
    @ky5666 1 year ago

    I asked this question on Reddit a couple months ago: is there a full-cover waterblock that can be used, with some modification, for this card?

  • @dmoneyballa
    @dmoneyballa 1 year ago +1

    Found one on eBay for $75

  • @eaman11
    @eaman11 1 year ago +1

    Intel also supports PyTorch, even on Windows: 16GB for a nice price and some recent computational power.