Llama 3.2 Vision 11B LOCAL Cheap AI Server Dell 3620 and 3060 12GB GPU

  • Published: Feb 4, 2025

Comments • 120

  • @DigitalSpaceport
    @DigitalSpaceport  2 months ago +1

    AI Hardware Writeup digitalspaceport.com/homelab-ai-server-rig-tips-tricks-gotchas-and-takeaways

  • @FaithMediaChannel
    @FaithMediaChannel 2 months ago +3

    Thank you for your video. I will share it with other people and other work organizations and put you on our list of preferred content providers for those who want to do it themselves. Again, thank you for your video. It is so easy, and you are very detailed in the explanation, not only of the application deployment but also of the hardware configuration.

  • @CoolWolf69
    @CoolWolf69 2 months ago +3

    After seeing this video I had to download and try this model myself (also running Open WebUI in Dockge while Ollama runs in a separate LXC container on Proxmox with a 20GB Nvidia RTX 4000 Ada passed through). I was blown away by how accurately the pictures were recognized! Even the numbers shown on my electricity meter's display were identified correctly. Wow ... that is and will be fun to use more over the weekend ;-) Keep up the good work with these videos!

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      Wait, so your model was able to see the numbers on an LCD? I need to figure out what is going on with mine; I have 2 meters I need to log.

    • @CoolWolf69
      @CoolWolf69 2 months ago

      @DigitalSpaceport Yeah. No idea what I did differently or specifically 🤷 Looking at some logs might be a good idea, though I have no clue if/how/where/when/why verbose they might be.

  • @docrx1857
    @docrx1857 2 months ago +10

    Hi. This is an awesome video showcasing Ollama on a 12GB GPU. I am currently using a 12GB 6750 XT. I still find the speed very usable with models in the 18-24 GB range.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +3

      Oh hey, a datapoint for AMD! Nice. Can I ask what tokens/s you hit on the 6750 XT? Any issues with Ollama or does it "just work" OOTB?

    • @docrx1857
      @docrx1857 2 months ago

      @DigitalSpaceport I had to add a couple of lines to the ollama.service file because the 6750 XT is not supported by ROCm, but other than that it works great. I have not measured the token rate; I will get back to you when I do. But I can say that with a 10600K and 32GB of DDR4-3600 it generates responses at a very comfortable reading pace, even when offloading a decent percentage to the CPU.

    • @spagget
      @spagget 2 months ago

      Does AMD RX have good compatibility now? I am planning on an RX 7900 GRE for games and AI, or should I settle for a 3060 16GB?

    • @docrx1857
      @docrx1857 2 months ago

      @spagget The 7900 GRE is ROCm supported. You will have no issues with Ollama; it will work out of the box. Just install Ollama and go.

    • @spagget
      @spagget 2 months ago

      @docrx1857 Thank you. Nvidia is pricey for me, and I want to try out AI stuff before I quit gaming life.

  • @danny117hd
    @danny117hd 7 days ago

    I'm trying this build closer to $500 with your links. It will be my first build since last century.

  • @BirdsPawsandClaws
    @BirdsPawsandClaws 1 month ago

    Thank you for the video. Very helpful

  • @alexo7431
    @alexo7431 2 months ago

    Wow, cool. Thanks for the in-depth tests; they help a lot.

  • @crimsionCoder42
    @crimsionCoder42 2 months ago

    Dude. You have to stop making me love you; this content is fantastic.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      Thanks! I am glad I am hopefully being infotaining while tinkering around with all this stuff.

  • @AndyBerman
    @AndyBerman 2 months ago +1

    Great video! Love anything AI related.

  • @firefon326
    @firefon326 2 months ago +2

    Sounds like maybe you'll be doing a compilation video here soon, but if not, or if it's going to be a while, maybe you should add the guide videos to a playlist. You have so much great content out there that it's hard to figure out which ones to watch if you're starting from scratch.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      I hear this feedback, and it's tough because the critical things change fairly fast. I like the idea of segmenting the playlists by skill level and content type. Then during the intro I can point new folks to that playlist and keep those videos updated. Thanks, soon. And yes, there is a new software guide video coming soon that I am working on right now.

  • @roaryscott
    @roaryscott 6 days ago

    Nice 👍
    What would you think about a quad 3060 set-up?

  • @ToddWBucy-lf8yz
    @ToddWBucy-lf8yz 2 months ago +4

    30:07 If you have the RAM you can always throw up a RAM disk and swap models out of CPU RAM and into VRAM much quicker than off a drive. A more advanced setup would use Memcached or Redis, but for something quick and dirty, a RAM disk all day.
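
    A minimal sketch of that RAM disk idea, assuming a Linux host with root access, plenty of free RAM, and a default Ollama service install; the mount point, the 48G size, and the model path are placeholders to adjust:

    ```python
    # Stage the Ollama model store on a tmpfs so model loads come from RAM.
    # Run as root; requires Python 3.8+ for dirs_exist_ok.
    import os
    import shutil
    import subprocess

    RAMDISK = "/mnt/ollama-ram"
    MODELS = "/usr/share/ollama/.ollama/models"  # default store for the Linux service install; check yours

    os.makedirs(RAMDISK, exist_ok=True)
    subprocess.run(["mount", "-t", "tmpfs", "-o", "size=48G", "tmpfs", RAMDISK], check=True)
    shutil.copytree(MODELS, os.path.join(RAMDISK, "models"), dirs_exist_ok=True)

    # Then restart the server pointed at the RAM copy, e.g.
    #   OLLAMA_MODELS=/mnt/ollama-ram/models ollama serve
    print(f"Models staged in {RAMDISK}/models")
    ```

    The tmpfs contents vanish on reboot, so treat it as a cache in front of the on-disk store, not a replacement for it.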

    • @MitchelDirks
      @MitchelDirks 2 months ago +2

      Dude, genius! I didn't think about this. I personally have a server that has 192ish and might use this method lol

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +2

      Redis/valkey sounds like a great option for this!

    • @ToddWBucy-lf8yz
      @ToddWBucy-lf8yz 2 months ago +1

      @DigitalSpaceport Yup. I use a similar approach for swapping datasets in and out of VRAM during fine-tuning, and have even put my whole RAG store in VRAM via lsync (it works, but no way I would put it into production professionally), and that definitely helped speed things up quite a lot.

  • @alx8439
    @alx8439 2 months ago +16

    Next time, give it a try: ask the new question in a new chat. Ollama by default uses a context size of 2k, and you are most probably exhausting it too quickly with pictures. And the GPU VRAM is too low to accommodate a higher context size without flash attention or smaller quants than the default 4-bit you downloaded.
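
    For reference, a rough sketch of bumping the context window per request through Ollama's REST API rather than living with the 2k default; it assumes Ollama is listening on localhost:11434, the llama3.2-vision tag is pulled, and that 8192 fits your VRAM:

    ```python
    # Ask for a larger context window via options.num_ctx (default is 2048).
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2-vision",
            "prompt": "Explain in two sentences why a larger context window matters for multi-image chats.",
            "stream": False,
            "options": {"num_ctx": 8192},  # images consume context quickly
        },
        timeout=600,
    )
    print(resp.json()["response"])
    ```

    The same option can be baked into a Modelfile or set in Open WebUI's advanced model parameters instead of per request.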

  • @MM-vl8ic
    @MM-vl8ic 2 months ago +2

    I like the way you are "testing" various combos. I'm an old guy progressively having hand issues after years of physical work/abuse, so I'm really interested in using "AI" as a solution for disabilities, as well as a Blue Iris/Home Assistant tie-in. I'm "researching" voice-to-text (conversational) as well as image recognition servers. It would be interesting to see speech-to-text asking/inputting the question(s). I have a 3060 12GB and a 4000A to play with; if you have the time/desire, I would be interested in seeing a dual-GPU setup with the above GPUs (so I don't have to). I'm also curious how they would perform in x8 (electrical) slots, and whether multiple models, voice included, can run simultaneously.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      They will perform just as well for inference in an x8 slot as in an x16; it's a low-bandwidth workload. For training that wouldn't hold true, however. Agreed, I need to do the voice video. It's pretty awesome and I use it often on my cellphone.

  • @lemmonsinmyeyes
    @lemmonsinmyeyes 2 months ago +2

    The terminology 'in this picture' might mean it is looking for photographs within the image. Using the phrase 'what is shown in this image' would be more open-ended. It might classify 'picture' the same as 'painting'. Example: ask 'what is in this painting?' while showing the image of the cat and slippers. IDK, just a guess.
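
    An easy way to A/B that phrasing is to send the same photo with both prompts straight at the Ollama API; a small sketch, assuming the llama3.2-vision tag is pulled and cat.jpg is the test image:

    ```python
    # Compare "picture" vs "image" wording against the same photo.
    import base64
    import requests

    with open("cat.jpg", "rb") as f:
        img = base64.b64encode(f.read()).decode()

    for prompt in ("What is in this picture?", "What is shown in this image?"):
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3.2-vision", "prompt": prompt,
                  "images": [img], "stream": False},
            timeout=600,
        )
        print(f"--- {prompt}\n{r.json()['response']}\n")
    ```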

  • @computersales
    @computersales 2 months ago

    Interesting build. Funny that you made this video not long after I recycled a bunch of them. It would be nice if people found more uses for stuff older than 8th gen; these older machines are still perfectly usable.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      I'm testing out a Maxwell card this weekend, an M2000. I bet it's going to surprise me!

    • @computersales
      @computersales 2 months ago

      @DigitalSpaceport It would be interesting to see a functional ultra-budget build. Curious how much cheaper than this setup you could get. The Dell T3600 with the 625W PSU is really cheap now.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +2

      The power pin for a GPU tends to dictate things, I have found, and it is a must to get enough VRAM cheaply. A strong contender that is even cheaper could be based on an HP workstation-class box, but wow, I do not like their BIOS at all. I have a note that says so taped to my second monitor in case I forget, but it could bring costs down. I think 7th gen Intel is the desirable cutoff, as that iGPU performs the vast majority of the offload needed to also make a decent AIO media center box. Does a T3600 have a 6-pin power connector?

    • @computersales
      @computersales 2 months ago +1

      @DigitalSpaceport The T3600 has two 6-pin connectors if it is the 635W config. The 425W config doesn't support powered GPUs, though. There can also be some clearance issues depending on the GPU design. It looks like they bring the same price as the 3620, though, so it might not be worth pursuing.

  • @womplestilskin
    @womplestilskin 4 days ago

    Do you have a tutorial on maybe using Proxmox with multiple GPUs and an API? Also, the ESP32-CAM is the lightest, cheapest cam to put on a network.

  • @irvicon
    @irvicon 2 months ago

    Thank you for your video. Could you please tell me if you have tested this configuration with the Llama 3.1 8B or Llama 3.2 3B text models? It would be interesting to know the performance figures (tokens/sec) from your tests 🤔.

  • @jamesgordon3434
    @jamesgordon3434 2 months ago

    I would guess that, just as the LLM processes multiple things all at once when you ask them together, vision is the same: it doesn't read left to right or right to left but processes the entire sentence all at once. 29:14

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      Okay, but check this out. It says -80 at first, but that screen only looks like that if read RTL. The "-" is a lower-case watt symbol; it's 08 watts on the screen. I'm testing the big one today, so I will investigate it further.

  • @ardaxify
    @ardaxify 2 months ago

    Did you give it multiple images and try to retrieve the correct one with your query? That would be an interesting experiment. I wonder how many images it can handle at most. Thanks for your series, btw.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      I think you would want to use the new knowledge function; unsure if that would work. The go-to for lots of images would traditionally be classifiers. Now it would be straight to vectors in a RAG pipeline. I'll see if it's possible to send it more than one in OWUI. I doubt it, but it's worth checking into.
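
      Outside of OWUI, the API itself will at least accept a list: the "images" field of /api/generate takes multiple base64 entries. Whether the 11B vision model actually reasons across more than the first image is something to verify, so treat this as an experiment sketch; the file names are placeholders:

      ```python
      # Hand several images to one request and ask the model to pick among them.
      import base64
      import requests

      def b64(path: str) -> str:
          with open(path, "rb") as f:
              return base64.b64encode(f.read()).decode()

      r = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "llama3.2-vision",
              "prompt": "Which of these images shows an electricity meter? Answer with its number.",
              "images": [b64(p) for p in ("img1.jpg", "img2.jpg", "img3.jpg")],
              "stream": False,
          },
          timeout=600,
      )
      print(r.json()["response"])
      ```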

  • @mariozulmin
    @mariozulmin 2 months ago +1

    Thanks, this is nearly my setup! Did you go with PCI passthrough to a VM or to an LXC?
    The card is pretty good for daily tasks and has low power consumption.
    Also, 3.2 Vision is really good at the moment for what I use it for; mine takes about 170W at full load though 😅

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      So in this demo I went with the VM and passthrough, as it "just works" with no cgroups funkiness, but in a stable system I always go with LXC. Plus you can use the card for other tasks, but if it crashes out of VRAM with a lot of tasks it doesn't recover gracefully. I need to figure that out, but yeah, 3.2 Vision is wild stuff.

  • @klr1523
    @klr1523 2 months ago

    18:02 I thought it might be referring to the F-connector and not registering the white Cat 6 cable at all.
    Maybe try again using a Cat 6 in a contrasting color...

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      Good point! I am also now convinced it is reading RTL and not LTR on LCD screens which is weird.

  • @ThanadeeHong
    @ThanadeeHong 2 months ago

    I would love to see the same test at fp16 or fp32. Not sure if it's gonna give more accurate responses.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      I do plan to test the 90b-instruct-q8_0, which is 95GB (4x 3090 is gonna be close), and the 11b-instruct-fp16 is only 21GB, so I might give that a roll also. I think the Meta Llama series of models caps out at fp16, or am I overlooking something?

  • @alcohonis
    @alcohonis 2 months ago +1

    Can you do an AMD test with a 7900 variant? I feel that's more affordable and realistic when it comes down to the $-to-VRAM ratio.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +2

      The number of requests I am getting for testing AMD GPUs does have me strongly considering buying a used one to see. I had a friend who was going to lend me one, but then they sold it. Possibly testing this out soon.

    • @alcohonis
      @alcohonis 2 months ago +1

      I completely understand. I would love to see an AMD build so that we don’t have to offer our kidneys to the Nvidia gods.

    • @slavko321
      @slavko321 2 months ago

      @alcohonis 7900 XTX, accept no substitute. OK, maybe a W7900. Or an Instinct.

  • @TokyoNeko8
    @TokyoNeko8 2 months ago

    What was the inference speed for text generation? Can you ask it to write a 500-word story and look at the llama stats?
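
    If you want those numbers without eyeballing the UI, the non-streaming Ollama response already carries eval_count and eval_duration (in nanoseconds), which is enough to compute tokens/sec; a small sketch, assuming the llama3.2-vision tag is the one being timed:

    ```python
    # Measure generation speed from Ollama's own response statistics.
    import requests

    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2-vision",
              "prompt": "Write a 500 word story about a homelab.",
              "stream": False},
        timeout=600,
    ).json()

    seconds = r["eval_duration"] / 1e9
    print(f"{r['eval_count']} tokens in {seconds:.1f}s -> {r['eval_count'] / seconds:.1f} tok/s")
    ```

    Running `ollama run <model> --verbose` in a terminal prints similar stats after each reply.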

  • @Novalis2009
    @Novalis2009 2 months ago

    I always stumble on the dual-card question. Let's say I want to add another 3060 and have 24GB. How does this work for inference, image generation, and training? Is it a combined, shared 24GB, or is it used and allocated separately by, e.g., the software instances of Ollama, ComfyUI, etc.? Any good reading recommendations, maybe?

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      I think I answered all that and more in detail in this recent video: ruclips.net/video/_xL9r0ygISg/видео.html but basically you WILL get 24GB of VRAM, so you can run larger models. It will not perform at 2x speed with llama.cpp/Ollama (currently; there is a GitHub issue tracking work on this in some fashion, but not today). You can also run additional smaller models fully in each card, and that is something you should do, as it has the best overall performance. Things like a vision model plus a main model that can route vision requests to it benefit from this a lot. Image generation doesn't run multi-GPU in the main repo, but there are some offshoot Comfy repos that I have read about but not run. For training you need full PCIe bandwidth, but you can get speedup benefits from multiple cards. I'm not sure how that would play out at the 12GB 3060 level, however.
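
      One way to get the "a model per card" arrangement mentioned above is to run two Ollama instances, each pinned to its own GPU and port. CUDA_VISIBLE_DEVICES and OLLAMA_HOST are the relevant knobs; the port numbers and the idea of dedicating GPU 1 to the vision model are just an example layout:

      ```python
      # Launch one Ollama server per GPU so each card hosts its own model.
      import os
      import subprocess

      def start_instance(gpu: str, port: int) -> subprocess.Popen:
          env = os.environ.copy()
          env["CUDA_VISIBLE_DEVICES"] = gpu          # pin this server to one card
          env["OLLAMA_HOST"] = f"127.0.0.1:{port}"   # give it its own listen address
          return subprocess.Popen(["ollama", "serve"], env=env)

      main_llm = start_instance("0", 11434)   # e.g. text model on GPU 0
      vision = start_instance("1", 11435)     # e.g. llama3.2-vision on GPU 1
      # Point Open WebUI or scripts at :11434 or :11435 depending on the task.
      ```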

    • @Novalis2009
      @Novalis2009 2 months ago

      @DigitalSpaceport Thanks for pointing that out. I actually came to these questions after watching ruclips.net/video/_xL9r0ygISg/видео.html, so it was not clear to me how things play together, but that is because I am at most only halfway up the learning curve compared to you. I still struggle to understand how VRAM is managed. You say "You can also run additional smaller models fully in each card." What does "each card" mean? If I "WILL get 24GB", do I have 24GB of VRAM, or do I have two 12GB pools, each addressed separately? At the moment I am running Flux.1 via ComfyUI on my 4070 Ti Super, but I want to code in parallel and every now and then ask Ollama a question, and that wouldn't work: my 64GB of RAM would explode and my 16GB of VRAM too, not to mention my 8-core Ryzen 7 CPU. So I'm thinking of a second machine for inference with Ollama, which should also be able to take some Flux workload overnight. It would not necessarily need to be very powerful, but it should run without crashing or becoming I/O locked. So do you think a dual 3060 12GB would be suitable for that scenario?

  • @neponel
    @neponel 2 months ago

    Can you look into running multiple Mac Mini M4s in a cluster, using Exo for example?

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      That is an expensive ask, and unfortunately this YT channel earns, um, let's say not even remotely close to enough to have an R and D budget in the 10K range for a quad setup. I have a hard time even getting people to subscribe, much less sign up for a membership or anything.

  • @aliasfox_
    @aliasfox_ 2 months ago

    I clicked on this because of the $350 in the thumbnail. The RTX 3060 nearly hits that budget alone. I built mine for ~$800, so I was curious. I used 2x Nvidia Tesla P40 24GB cards.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      228 + 132 is what these are priced at used on eBay right now; I just checked. That's $360, so no, it's not $350 for just a 3060.

  • @Micromation
    @Micromation 2 months ago

    Can you use multiple 3060s for this? I mean, does it support memory pooling, or is the model limited by the VRAM capacity of a single GPU? Sorry if this is a dumb question, but this is not my field (and in 3D rendering with CUDA and OptiX you can't pool memory on consumer-grade cards).

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      They all pool, across various VRAM sizes AND even generations! I paired a 1070 and a 4090 just to test this in a prior video. It's the excellent software running these models that handles the parallelism nicely.

  • @xlr555usa
    @xlr555usa 2 months ago

    I have an old Dell i7-4760 that I could try pairing with a 3060 12GB. I have run Llama 3 on just an i5-13600K and it was usable, but a little slow.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      Was it the new llama3.2-vision 11b? What tokens/s did you get?

  • @Alejandro-md1ek
    @Alejandro-md1ek 2 months ago

    Interesting

  • @Prøpħęžzŷ
    @Prøpħęžzŷ 1 month ago

    Would this setup be enough for a local AI that I want to integrate into Home Assistant as a voice assistant?

    • @DigitalSpaceport
      @DigitalSpaceport  1 month ago +1

      Yes, it does work for that. I just did a video overview of my HA integration, and that model will easily run in 12GB of VRAM like this video demos. You can also run HA on the box with Piper and Whisper at much faster speeds vs a Pi.

    • @Prøpħęžzŷ
      @Prøpħęžzŷ 1 month ago

      @DigitalSpaceport Thanks for the reply, and I will check that video out!

    • @DigitalSpaceport
      @DigitalSpaceport  1 month ago

      Yeah, don't use the little blue ESP32 device I used. Its audio and mic are not good enough.

  • @JoeVSvolcano
    @JoeVSvolcano 2 months ago +1

    LoL, now you're speaking my language! Until 48GB VRAM cards under $1000 become a thing, anyway 😀

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      Yeah, this 3060 is pretty sweet. I wish there was a cheap 12 or 16GB VRAM slot-powered card, but maybe in a few years. 20 t/s is totally passable, and the base model this is strapped to is pretty decent.

    • @mariozulmin
      @mariozulmin 2 months ago

      @DigitalSpaceport Yes, and affordable too. It's sad there is no 16GB version for just a little more. The price gap between 12 and 24GB is just insane if it's used only for AI.

  • @NLPprompter
    @NLPprompter 2 months ago

    Could you please test this build with the localGPT-vision repo on GitHub? That repo has several vision models to test with, and seeing how each model performs on RAG with such a build might be really interesting, because this kind of RAG is really different: instead of image-to-text-to-vector, this system does image-to-vector. A different architecture.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      I'm looking at this now and I like the idea of fewer steps in RAG. Img2txt getting the boot would be awesome.

    • @NLPprompter
      @NLPprompter 2 months ago

      @DigitalSpaceport Awesome, glad to know you are into the concept of "image to vector" instead of "image to text to vector". I believe that in the future a model able to handle both without losing speed on consumer hardware would be game-changing, since both architectures have their pros and cons. Thanks for your videos, mate.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      Yeah, I do like the concept, and having been a long-time user of unpaper/tesseract, those are indeed extra steps that it would be ideal to avoid.

  • @meisterblack9806
    @meisterblack9806 2 months ago

    Hi, will you try llamafile on a Threadripper CPU (no GPU)? They say it's really fast.

  • @DIYKolka
    @DIYKolka 2 months ago

    I don't understand what you all use these models for. Can someone maybe explain to me what the benefit is?

  • @StevePrior
    @StevePrior 2 months ago

    I bought the Dell 3620 with the i7-7700 plus the GPU and adapter cable you linked, but while I've got one end plugged into the GPU, I don't see any of those 6-pin connectors within reach to hook up to. The loose cords coming off the power supply all seem to be SATA. Am I missing something? There is a 6-pin connector cable, but it's plugged into the motherboard. Update: I checked, and that 6-pin connector on the motherboard is an output for hard drives and a CD drive, neither of which I'm using (the motherboard has an onboard 1TB NVMe), so I thought I could plug the adapter cord into it, but it's too short to reach and also the wrong gender. So I'm back to wondering what I'm missing. How did you really do it?

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      What is the Dell part number on the PSU? Mine has the 6-pin coming directly off the PSU, not off any header on the motherboard. Usually a 5 to 9 character part number is visible; I'll check it against mine. *Update: mine has PSU D/PN 0T1M43.

    • @StevePrior
      @StevePrior 2 months ago

      @@DigitalSpaceport Model # L290EM-01, Dell P/N: HYV3H, P/N: PS-3291-1DB. Total output wattage 290W.

    • @StevePrior
      @StevePrior 2 months ago +1

      @DigitalSpaceport Looks like your Dell came with a 365W power supply, but mine is only 290W.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      Crap. We have different PSUs in these. This one is 365W, and I'm guessing the 290W you have doesn't have the 6-pin. They are on eBay for ~$20 it looks like, but I wish I had known that when I made the vid; I would have included that info for sure.

    • @StevePrior
      @StevePrior 2 months ago

      @DigitalSpaceport I had previously ordered the 6770-based Dell via your Amazon link and just confirmed it also has the 290W power supply with no extra connector. Do you mind posting the eBay link to the PSU that you're seeing for ~$20? I'm seeing them a lot more expensive than that.

  • @milutinke
    @milutinke 2 months ago

    It's a shame Pixtral is not on Ollama; it's also a bigger model.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +1

      I agree, but I think there is a way to make it work with the new Ollama Hugging Face model support. You would need to kick it off manually, but I think it could work.

  • @i34g5jj5ssx
    @i34g5jj5ssx 2 months ago

    I understand the appeal of the 3060, but why does everyone ignore the 4060 Ti 16GB?

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +3

      I'm not ignoring it myself; at MSRP it's a rather good card. I just can't afford to buy one of everything, so that's why it's not reviewed here.

  • @tungstentaco495
    @tungstentaco495 2 months ago +3

    I wonder how the Q8 version would do in these tests. *Should* be better.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      I do plan on testing the Q8 in the 90B, so we should get a decent hi-lo gradient. If the difference is significant, I will revisit for sure.

  • @FSK1138
    @FSK1138 2 months ago

    i5/ i7 10th gen
    Ryzen 5 / 6 5th gen
    better price/ watts

  • @nhtdmr
    @nhtdmr 2 months ago +4

    Nobody should give their AI data or research to the big providers. Keep your data local.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      Fully agree! The data they collect on us, on top of the paid-for services, is absolutely silly.

    • @NLPprompter
      @NLPprompter 2 months ago

      Fully local, yes. Be careful with APIs too; some models still send data.

    • @AlexKidd4Fun
      @AlexKidd4Fun 17 days ago

      @NLPprompter Can you cite an example model that phones home even when used with Ollama? Asking for research purposes. Thanks!

    • @NLPprompter
      @NLPprompter 16 days ago

      @AlexKidd4Fun What does "phone home" mean?
      Mm... you can try Gemma 2B or any 3B model; if running that turns out too slow, try downloading a lower parameter count like 1.5B or less, but if 3B is fast enough for your phone's specs, then you might want to try higher parameter counts like 5B, 8B and so on.
      FYI, these small models (SLMs) are mostly used for specific use cases rather than general use (like what Apple tried to do and failed, then blamed LLMs in their paper).
      If they are fine-tuned for a specific use, they can work pretty well zero-shot.

    • @AlexKidd4Fun
      @AlexKidd4Fun 16 days ago

      @NLPprompter "Phoning home" refers to a system that runs locally but also reaches across a network to an API or command-and-control server with telemetry, feature usage, license checks, or data collection that is secondary and usually benefits the software creator rather than the user. I'm not referring to a smartphone usage scenario specifically. Basically, anything that gives an open-source or local AI model an element of spying built into it. Thanks!

  • @animation-nation-1
    @animation-nation-1 2 months ago

    Good luck getting this for under $900 USD in Australia, even used.

    • @ali.005
      @ali.005 1 month ago

      Yeesh, wtf? They're 200 USD here in Canada.

  • @JamesMartin2014
    @JamesMartin2014 2 months ago +2

    Open WebUI is dog-crap slow. I've had nothing but problems with it. Lobe Chat has proved to be faster and easier to deploy and support.

  • @NikolaySemenkov
    @NikolaySemenkov 9 hours ago

    After this, it is hard to imagine what is in the brain of a self-driving car.

  • @genkidama7385
    @genkidama7385 2 months ago

    These "vision" models are so bad and unreliable for anything. They need to be way more specialized and fed many more samples to be of any value. Spatial relationships are completely wrong, and blob classification/recognition is weak. I don't see any use for this beyond very, very basic tasks. I don't even know if any of this can be put into production, given the unreliability.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago

      I am about to start testing the big one here and hope for a lot of improvement. I just want to be able to read an LCD that's very clear, which seems like it should be a small hurdle.

  • @zxljmvvmmf3024
    @zxljmvvmmf3024 2 months ago

    Yeah, $350 + GPU lol. Stupid clickbait.

    • @DigitalSpaceport
      @DigitalSpaceport  2 months ago +3

      No. It is $350 with the 3060 12GB GPU, and I don't clickbait like that. I do use clickbait of course, but not an outright lie like you are claiming.

  • @andreas1989
    @andreas1989 2 months ago +1

    ❤❤ Have a happy weekend, brother and followers!!!

  • @andreas1989
    @andreas1989 2 months ago +1

    ❤❤❤ First comment.
    Love your videos man.
    Love from Sweden.
