Building My Ultimate Machine Learning Rig from Scratch! | 2024 ML Software Setup Guide

  • Published: 20 Sep 2024

Comments • 209

  • @Hamisaah
    @Hamisaah 5 months ago +34

    You put so much effort and knowledge into this video! I watched all the way and it was interesting how you demonstrated the whole build from scratch. Keep it up!

    • @sourishk07
      @sourishk07 5 months ago +4

      Thank you so much for watching! Excited to make more ML videos 🙏

    • @paelnever
      @paelnever 5 months ago

      @@sourishk07 Better to use llama.cpp instead of Ollama; it's faster and has more options, including model switching and running multiple models simultaneously.

    • @sourishk07
      @sourishk07 4 months ago

      Thanks for the recommendation! I'll definitely take a look.
      I like Ollama because of how simple it is to get up and running and that's why I chose to showcase it in the video.

    • @paelnever
      @paelnever 4 months ago

      @@sourishk07 You don't seem like the kind of person who likes "simple" things. Anyway, if you want to run llama.cpp in a simple way, you can do that too.

    • @sourishk07
      @sourishk07 3 months ago

      @paelnever I just played around with it and it seems really promising! Definitely want to spend more time looking into it. I appreciate the rec

  • @dominiclovric8984
    @dominiclovric8984 1 month ago +2

    This is the best video I've seen on this topic! Well done!

  • @hasannkursunn
    @hasannkursunn 2 months ago +2

    The resources that you shared are amazing👍 I always see videos that teach you how to build the system, but your video includes much more than that👌 Thank you very much!

    • @sourishk07
      @sourishk07 2 months ago

      Thank you for watching! I'm glad that you enjoyed it!

  • @init_yeah
    @init_yeah 1 day ago

    Nice vid man, I'm still saving up for my setup. Got the 4080 Super already, the rest will be easy!

  • @sergeziehi4816
    @sergeziehi4816 4 months ago +1

    This is days of work, compiled freely into 1 single video. Thanks for that! Priceless information here.

    • @cephas2009
      @cephas2009 4 months ago

      Relax, it's 2 hrs of hard work max.

    • @sourishk07
      @sourishk07 4 months ago

      Don't worry, I had a lot of fun making this video! Thanks for watching and I hope you're able to set up your own ML server too!

    • @JorgeDizDias
      @JorgeDizDias 2 months ago

      @Sourish Kundu Indeed, this video is very nice and very informative. Would you share the commands in one file?

  • @aravjain
    @aravjain 3 months ago +3

    This feels like a step by step tutorial, great job! I’m building my RTX 4070 Ti machine learning PC soon, can’t wait!

    • @sourishk07
      @sourishk07 2 months ago +1

      Good luck and I hope you have fun! I love building computers so much haha

    • @aravjain
      @aravjain 2 months ago

      @@sourishk07 me too!

  • @naeemulhoque1777
    @naeemulhoque1777 20 days ago

    Bro made one OP video! 🔥🔥🔥🔥😃
    Please make more LLM-focused videos:
    1. More PC building guides for LLMs.
    2. Differences between quantized models.

  • @thethinker6837
    @thethinker6837 2 months ago +1

    Amazing project!! One step away from creating a personalized Jarvis, hope you create one 👍

    • @sourishk07
      @sourishk07 1 month ago

      Haha maybe that's my long-term plan!

  • @Eric_McBrearty
    @Eric_McBrearty 5 months ago +1

    This was a great video. I had to pause it like 10 times to make bookmarks to all of the resources you dove into. Then I saved it to Reader and summarized it with ShortForm. Great stuff. You went into just enough detail to cover the whole project and still keep the video moving along.

    • @sourishk07
      @sourishk07 5 months ago +1

      That was a balance I was trying really hard to navigate, so I'm glad the video was useful for you! Hope you have as much fun setting up the software as I did!

    • @crypto_que
      @crypto_que 2 months ago

      I had to slow the video to 0.75 speed to make sure I was understanding what he was saying.

  • @Chak29
    @Chak29 3 months ago

    I echo the other comments; this is such a great video, you can see the effort put in, and you present your knowledge really well. Keep it up :)

  • @chiralopas
    @chiralopas 9 days ago

    You just don't know how many days I was trying to find something that would let me run AI stuff from VS Code itself. Thanks!

  • @deltax7159
    @deltax7159 5 months ago +3

    Can't wait to build my first ML machine

    • @sourishk07
      @sourishk07 5 months ago +2

      Good luck! I'm really excited for you!

  • @DailyProg
    @DailyProg 4 months ago +1

    I found your channel today and binged all the content. Please please please keep this up

    • @sourishk07
      @sourishk07 4 months ago +1

      Wow I'm glad you found my channel this valuable! Don't worry, I have many more videos coming up! Stay tuned :)

  • @akashtriz
    @akashtriz 5 months ago +5

    Hi @sourishk07,
    I had considered the same config as yours but then changed my mind due to:
    1. The unstable 14900K performance due to the MoBo feeding the i9 insanely high power. Please do make sure you enforce Intel's thermal limits in the ASUS MoBo BIOS settings. 😊
    2. Instead of the NR200P I opted for the AP201 case so that a 360mm AIO can be used for the CPU.
    3. I went for a used 3090, as much of my focus will be on using the A100 or AMD MI300X in the cloud.
    ROCm has made huge progress; noteworthy are the efforts George Hotz is making to make ROCm more understandable for the ML community.
    Overall congratulations buddy, hope you succeed at your goals.

    • @sourishk07
      @sourishk07 5 months ago +1

      Hi! Thanks for watching the video and sharing your setup. You bring up completely valid points.
      1. I personally haven't had any issues with 14900K stability. I didn't turn on AI overclocking in the BIOS and just left settings at stock (except XMP for RAM). I'm probably more wary of any sort of overclocking now that that news has come out though lol
      2. The reason I opted for the smaller case was because I wanted to try building in a SFF for the first time. The good thing is that cooling hasn't really been impaired, although a larger radiator never hurts
      3. I should've considered a used 3090, but because I also wanted to do some computer graphics work, I opted for the newer architecture.
      And while the advancements in ROCm do seem promising, I'm not sure anything will ever take me away from NVIDIA's vast software suite for ML/AI, but maybe one day, we'll see!

    • @jwstolk
      @jwstolk 22 days ago

      @@sourishk07 The issue is that the BIOS defaults don't just result in instability; they can permanently damage the CPU. This may eventually be fixed with OS updates that try to update the BIOS, but Intel has been quite slow admitting the long-known issue and providing proper fixes. It may be worth looking into this a bit more before assuming the BIOS defaults are safe, since the issue is specifically about incorrect BIOS defaults.

  • @jefferyosei101
    @jefferyosei101 4 months ago

    This is such a good video. Thank you! Can't wait to see your channel grow so big. You're awesome, and oh, we share the same process of doing things 😅

    • @sourishk07
      @sourishk07 4 months ago

      I really appreciate those kind words! Tell me more about how our processes overlap!

  • @benhurwitz1617
    @benhurwitz1617 5 months ago +1

    This is actually sick

    • @sourishk07
      @sourishk07 5 months ago

      Thank you so much!

  • @halchen1439
    @halchen1439 4 months ago

    This is so cool, I'm definitely gonna try this when I get my hands on some extra hardware. Amazing video. I can also imagine this must be pretty awesome if you're some sort of scientist/student at a university that needs a number-crunching machine, since you're not limited to being at your place or some PC lab.

    • @sourishk07
      @sourishk07 4 months ago +1

      Yes, I think it's a fun project for everyone to try out! I learned a lot about hardware and the different software tools

  • @pradeepr2044
    @pradeepr2044 2 months ago

    Absolutely loved the video. Learnt a lot. Thank you...

    • @sourishk07
      @sourishk07 1 month ago

      You're welcome! I'm glad you learned something!

  • @benoitheroux6839
    @benoitheroux6839 5 months ago +1

    Nice video, well done! This is promising content! Can't wait to see you try some Devin-like stuff or test other ways to use LLMs.

    • @sourishk07
      @sourishk07 5 months ago

      Thank you so much for watching! It'll be really cool to be able to run more advanced LLMs as they continue to grow in capabilities! Excited to share my future videos

  • @mufeedco
    @mufeedco 5 months ago

    This video is truly exceptional.

    • @sourishk07
      @sourishk07 5 months ago

      I'm really glad you think so! Thanks for watching

  • @jordachekirsten9803
    @jordachekirsten9803 3 months ago

    Great, clear, and thorough content. I look forward to seeing more! 🤓

  • @ashj1165
    @ashj1165 2 months ago

    very comprehensive video, thanks a ton!!!

    • @sourishk07
      @sourishk07 2 months ago

      You're very welcome!

  • @BruceWayne15325
    @BruceWayne15325 3 days ago

    Thanks for the video. What kind of context window can your rig support?

  • @alexandre.hsdias
    @alexandre.hsdias 27 days ago

    This video is a gem

  • @HacknSlashPro
    @HacknSlashPro 2 months ago

    I make proper Gen AI and agentic framework videos in Bengali and never got views in 3 digits. Good that you chose English

    • @sourishk07
      @sourishk07 2 months ago +1

      Yeah I'm sure there's demand for Bengali content, but I suppose since more people speak English, it might be easier to get a larger audience. My Bengali isn't good at all so I don't really have an option haha

  • @joelg1318
    @joelg1318 1 month ago +1

    All I need for my AI machine is the GPU. I'm going for dual 3090 Ti's with 24GB VRAM each. An AM5 X670E board with Gen 5 PCIe will support both cards with PCIe bifurcation, splitting the Gen 5 x16 into x8/x8.

    • @sourishk07
      @sourishk07 1 month ago

      That sounds like a sick idea! Good luck with the build!

  • @akshikaakalanka
    @akshikaakalanka 2 months ago

    Thank you Sourish!

    • @sourishk07
      @sourishk07 1 month ago

      You're welcome! I appreciate you tuning in

  • @Snakebite0
    @Snakebite0 14 days ago

    Very informative video 🎉

  • @Zelleer
    @Zelleer 5 months ago

    Cool vid! Not sure about pulling hot air from inside the case through the rad to cool the CPU, though. But really a great video for anyone interested in setting up their own AI server!

    • @sourishk07
      @sourishk07 5 months ago +1

      Hi! That’s a good point, but from my testing, the max difference in temperature is only about 5 degrees Celsius. Also, keeping the GPU cooler is more important.
      And because the only place in the case for the rad is at the top, I don’t want to have it be intake, because heat rises and the fans would suck in dust.
      Thanks for watching though and I really appreciate you sharing your thoughts! Let me know if there were any other concerns that you had. Always open to feedback 🙏

  • @hypernarutouzumaki
    @hypernarutouzumaki 2 months ago

    This is really great info! Thanks!

    • @sourishk07
      @sourishk07 2 months ago

      Glad you enjoyed it!

  • @Gabriel50797
    @Gabriel50797 1 month ago

    Great video. Are you running Nala? :)

    • @sourishk07
      @sourishk07 1 month ago +1

      Thanks! Haven’t heard of this but will definitely look into it more

  • @JsAnimation24
    @JsAnimation24 5 months ago +4

    Thanks for this! I see you went with 96 GB system RAM and a 4080 with 16 GB VRAM. Curious whether the 16 vs 24 GB VRAM (e.g. in a 4090) could make a difference for AI/ML, and especially LLM, apps? I realize a 4090 would have set you back an extra $1000 though. And is more system RAM helpful? What I'm reading is that GPU VRAM is more important.

    • @sourishk07
      @sourishk07 5 months ago +4

      Thanks for the question! Yes, VRAM is king when it comes to ML/AI. Always prioritize VRAM. More system memory will never hurt, especially with massive datasets, but I elected not to go for the 4090 because of its price tag. However, on FB Marketplace, I've seen RTX 3090s with 24 GB of VRAM for as low as $500, which is an option I should've considered while choosing my parts.
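The VRAM math behind this advice can be sketched in a few lines. Weight memory is roughly parameters times bytes per parameter; the 20% overhead factor below for KV cache and activations is a loose rule-of-thumb assumption, not a benchmark:

```python
# Back-of-envelope VRAM estimate for loading an LLM's weights.
# bytes ≈ parameters × bytes-per-parameter, plus a rough 20% overhead
# for KV cache and activations (an assumption, not a measured figure).

def estimate_vram_gb(params_billion: float, bits_per_param: int,
                     overhead: float = 0.2) -> float:
    """Rough GB of VRAM needed just to hold the weights (+ overhead)."""
    bytes_total = params_billion * 1e9 * (bits_per_param / 8)
    return bytes_total * (1 + overhead) / 1024**3

# A 7B model in FP16 wants roughly 15-16 GB, which is why it just fits
# on a 16 GB 4080 Super, while 13B needs 4-bit quantization to squeeze in.
for params, bits in [(7, 16), (13, 16), (13, 4), (70, 4)]:
    print(f"{params}B @ {bits}-bit: ~{estimate_vram_gb(params, bits):.1f} GB")
```

This also makes clear why a 24 GB 3090 is the sweet spot: it clears the 13B-at-FP16 bar that a 16 GB card narrowly misses.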

    • @federicobartolozzi680
      @federicobartolozzi680 4 months ago

      @@sourishk07 Imagine two of them with NVLink and the cracked version of P2P. Too bad you didn't see it earlier, it would have been a great combo. 😢

    • @xxxNERIxxx1994
      @xxxNERIxxx1994 4 months ago

      @@sourishk07 The RTX 3090 is a MONSTER! FP16 models loaded with 32k context running at 60 tokens/s are the future :D
      Great video :)

    • @sourishk07
      @sourishk07 3 months ago

      @federicobartolozzi680 @xxxNERIxxx1994 Stay tuned for a surprise upcoming video!

    • @martin777xyz
      @martin777xyz 42 minutes ago

      I've seen build videos with 4x RTX 3090. VRAM is king

  • @raze0ver
    @raze0ver 4 months ago

    I'm just gonna build a more budget PC than yours for ML this weekend, with a 5900X + 4060 Ti 16GB (not a good card, but enough VRAM). I'll go through your video and follow the steps to set everything up. Hopefully it all goes as smoothly as it did for you! Thanks dude!

    • @sourishk07
      @sourishk07 4 months ago +1

      Thanks for watching and good luck with your build! I think for my next server build I want to use GPUs with more VRAM, but 16 GB should serve you fine for a budget build

    • @raze0ver
      @raze0ver 4 months ago

      @@sourishk07 do you think those pro cards such as the A4000 or higher are really necessary for casual ML given their price tags?

    • @sourishk07
      @sourishk07 4 months ago +1

      @@raze0ver No, probably not. Since those cards are originally targeted at enterprise, they're overpriced. What I should've done is gone for a used 3090 because that's the best bang for your buck when it comes to VRAM or a 4090 if you can afford it.

  • @renegraziano
    @renegraziano 3 months ago

    Wow, super complete information. I'm subscribed now 😮

    • @sourishk07
      @sourishk07 2 months ago

      Thank you so much for watching!

  • @JEM871
    @JEM871 5 months ago

    Great video! Thanks for sharing

    • @sourishk07
      @sourishk07 5 months ago

      Thanks for watching! Stay tuned for more content like this!

  • @matthewmonaghan9337
    @matthewmonaghan9337 2 months ago +32

    Why would you use Intel? The CPU is going to destroy itself in 3 months

    • @slazy824
      @slazy824 2 months ago

      Set PL1 and PL2 to Intel spec and use proper cooling, and then the problem is solved.

    • @zZTrungZz
      @zZTrungZz 2 months ago +3

      @@slazy824 No, it won't be solved. Even running at a super conservative clock speed still degrades the CPU.

    • @sourishk07
      @sourishk07 1 month ago +3

      I have faced some instability issues with my CPU so far, but the funny thing is that by disabling XMP, everything is working. I actually have ASUS's AI overclocking feature enabled with no issues. To be honest, this totally might crap out my CPU but hopefully Intel can push the microcode update soon
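For anyone following this thread on Linux: you can check which microcode revision your CPU is actually running, since Intel's Raptor Lake stability fixes ship as microcode updates. A minimal sketch (the revision to compare against comes from your motherboard vendor's BIOS release notes; the fallback messages are just for robustness):

```shell
# Print the microcode revision the kernel reports for CPU 0 (Linux only).
# On 13th/14th-gen Intel parts, compare it with the latest revision
# listed in your motherboard vendor's BIOS release notes.
grep -m1 microcode /proc/cpuinfo || echo "no microcode field (VM?)"

# The kernel log also records microcode loads at boot (may need sudo):
dmesg 2>/dev/null | grep -i microcode | head -n 3 || true
```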

    • @zhou0001
      @zhou0001 1 month ago

      Recently there's been talk about 14th- and 13th-gen Intel processors having overheating issues, especially when overclocked. Do you think you may be facing a similar problem?

    • @sourishk07
      @sourishk07  27 дней назад

      @@zhou0001 That definitely is a possibility. I have indeed started to experience weird instability issues so I've already submitted an RMA request haha

  • @novantha1
    @novantha1 4 months ago

    I'm not sure I like the idea of an AIO or water cooling in a server context. If it springs a leak, I think you're a lot less likely to be regularly maintaining or keeping an eye on a server that should by definition be out of sight.
    I'd also argue that the choice of CPU is kind of weird; I would personally have preferred to step down to something like a 13600K on a good sale, or a 5900X. They're plenty fast for ML tasks, which are predominantly GPU-bound, and you could have thrown the extra money from the CPU (and the cooler!) into a stronger GPU. The exact price difference depends on the context, but I could see it being enough to do something a bit different.
    I also think that an RTX 4080 Super is a really weird choice of GPU. It sounds kind of reasonable if you're just taking a quick glance at new GPUs, but the price-to-performance ratio is wack. It's in this weird territory where it's priced at a premium but doesn't have 24GB of VRAM; I would almost say if you're spending that kind of money you may as well have gone for a 4090 if you need Lovelace-specific features like lower-precision compute or something. Otherwise, I'd argue that a used 3090 would have made significantly more sense, and you could possibly have gotten two of them if you'd minmaxxed your build; a system with 48GB of VRAM would absolutely leave you with a lot more options than a system with 16GB. You could have power-limited them, too, if that was a concern.
    If you were really willing to go nuts, in a headless server I've seen MI100s go for pretty decent prices, and if you're doing "real" ML work where you're writing the scripts yourself, ROCm isn't that bad on supported hardware nowadays, and that'd give you 32GB of VRAM (HBM, no less) in a single GPU, which isn't bad at all.
    Personally I went with an RTX 4000 SFF due to power efficiency concerns, though.

    • @sourishk07
      @sourishk07 4 months ago +1

      Thank you so much for all of that feedback! Honestly, I agree with all of it; a couple of other people have commented similar things too.
      But in my specific use case, my "server" is right next to my desk, so maintenance should be pretty easy. Not to mention that I've really never had any issues with AIOs in the 7 years I've been using them. Sure, a leak is possible, but I guess I'm willing to take that risk.
      I think I might need to switch this computer to be my main video editing computer and convert my current computer to be the server, because it has two PCIe slots.
      This was my first time building a computer from scratch solely for ML, so I appreciate the recommendations!

  • @ericksencionrestituyo1802
    @ericksencionrestituyo1802 5 months ago

    Great work, KUDOS

    • @sourishk07
      @sourishk07 5 months ago

      Thanks a lot! I appreciate the comment!

  • @electronicstv5884
    @electronicstv5884 4 months ago

    This Server is a dream 😄

    • @sourishk07
      @sourishk07 4 months ago +1

      Haha stay tuned for a more upgraded one soon!

  • @archansen8084
    @archansen8084 4 months ago

    Super cool video!

  • @sohamkundu9685
    @sohamkundu9685 5 months ago

    Great video!!

  • @kawag2780
    @kawag2780 5 months ago

    Could have started the video with the budget you were targeting. When recommending systems to other people, knowing how much the person can spend can heavily dictate the parts they can choose.
    Here are some questions I thought of while watching the video. Why choose a 4080 over a 3090? Why choose a gaming motherboard, or one with a Mini-ITX form factor? Why choose a "K" SKU for a production-focused workload? There's missed commentary there.
    I know that you have tagged some of your other videos, but it could have been better to point out that you already have a NAS tutorial. Linking that video with the introduction of the 1TB SSD would have been helpful.
    And finally, why is the audio not synced up with the video? It's very jarring when that happens. Other than that, it was cool to see the various programs that you can use. However, I feel that the latter part feels tacked on, because it's hard to gauge how the hardware you chose has an effect on the software you chose to showcase.

    • @sourishk07
      @sourishk07 5 months ago

      Wow, thank you so much for your in-depth feedback! I sincerely appreciate you watching the video and sharing your thoughts. I apologize that the video didn't initially clarify some of the hardware choices and budget considerations. In retrospect, you're absolutely right, and I'll ensure to include such details in future content.
      I chose the 4080 Super because it has the newest architecture, along with the fact that I was able to get it at a discount. The extra VRAM of the 3090 would've helped with larger models like LLMs and Stable Diffusion, but for a lot of my personal projects, such as training a simple RL agent or even some work with computer graphics, the extra performance of the 4080 Super will serve me better. Again, something I should've added to the video.
      For the "K" SKU, I got the CPU on sale at Best Buy for about $120 off and the motherboard has an "AI overclocking" feature, which I thought would be kinda on brand with the video lol. I didn't really get a chance to touch upon it in the video or even benchmark any potential performance gains the feature might've gained me. Regarding the SFF build, I chose the form factor just because I have a pretty small apartment and I don't have much space. These are things I'm sure the viewers of this video might've been interested to hear about, and I appreciate you inquiring about them.
      I also agree with your point about my NAS video! I'll keep that in mind the next time I mention a previous video of mine.
      And regarding the audio, everything seems fine on my end? I've played the video multiple times on my desktop, phone, and iPad. Hopefully it was just a one-off issue. Also, I suppose the software I installed isn't really too dependent on this specific hardware; rather, it's the suite of tools I would install on any machine where I plan on doing ML projects.
      Thank you once again for such constructive feedback. I'm curious, what topics or details would you like to see in future videos? Your input helps me create more tailored and informative content.

  • @Four_Kay
    @Four_Kay 2 months ago

    Sick setup, but the 1TB though?

    • @sourishk07
      @sourishk07 1 month ago +1

      Lol yeah fair. But 1 TB hasn't posed an issue yet. I have my NAS mounted on the server so I can easily offload any large model files that I'm not currently using, which makes 1 TB much more usable

    • @Four_Kay
      @Four_Kay 1 month ago

      @@sourishk07 Yeah, if it works for you, you are good... 🥲

  • @yellowboat8773
    @yellowboat8773 1 month ago +1

    Hey man, wouldn't it be better to get an older 3090 with the higher VRAM? That way you get similar performance but more VRAM

    • @sourishk07
      @sourishk07 1 month ago

      Haha yes you're right. I've received a lot of feedback about this, which is why I've upgraded to two 3090s actually! All the software is still the same though.
      This machine is now my editing/gaming rig!

  • @sinamathew
    @sinamathew 2 months ago

    I love this.

  • @maaheedgaming4055
    @maaheedgaming4055 1 month ago +1

    Please make some project videos so we can learn from you, bro.

    • @sourishk07
      @sourishk07 1 month ago +1

      Will do! Feel free to check out my DDQN implementation or my NeRF videos until then!!

  • @alirezahekmati7632
    @alirezahekmati7632 4 months ago

    GOLD!

    • @sourishk07
      @sourishk07 4 months ago

      Thank you so much!

    • @alirezahekmati7632
      @alirezahekmati7632 4 months ago

      @@sourishk07 It would be great if you created a part 2 about how to install WSL2 on Windows for deep learning with the NVIDIA WSL drivers

    • @sourishk07
      @sourishk07 4 months ago +1

      @@alirezahekmati7632 From my understanding, the WSL2 drivers come shipped with the NVIDIA drivers for Windows. I didn't have to do any additional setup. I just launched WSL2 and nvidia-smi worked flawlessly

  • @RazaButt94
    @RazaButt94 5 months ago +1

    With this as a secondary machine, I wonder what his main gaming machine is!

    • @sourishk07
      @sourishk07 5 months ago +3

      LOL you'll be surprised at this: my main gaming machine is an Intel 12700K and a 3080 12 GB. ML comes before gaming 🙏

  • @lucky__verma
    @lucky__verma 22 days ago

    More of a gaming PC than an ML server. Great build though!

  • @GodFearingPookie
    @GodFearingPookie 2 months ago

    This is it

  • @manojkoll
    @manojkoll 3 months ago

    Hi Sourish, the video was very helpful.
    I found the following config on Amazon; how would you rate it? I plan to run some Ollama models and a few custom projects leveraging smaller-size LLMs:
    Cooler Master NR2 Pro Mini ITX Gaming PC - i7 14700F - NVIDIA GeForce RTX 4060 Ti - 32GB DDR5 6000MHz - 1TB M.2 NVMe SSD

    • @sourishk07
      @sourishk07 2 months ago

      Hi, sorry for the late reply, I was busy working on my most recent video.
      The biggest thing I would check is whether that is the 8 GB or 16 GB variant of the 4060 Ti. Definitely avoid the 8 GB one at all costs. Also, consider buying a used GPU, as sometimes you may be able to get good deals on those. The other specs look fine to me, as long as you think the price on Amazon is reasonable.

  • @AvatarSD
    @AvatarSD 4 months ago

    As an embedded engineer, I'm using the 'Continue' extension directly with my OpenAI API key, especially GPT-4 Turbo for auto-completion. Seems my knowledge is not enough for this world.. 😟
    Hello from Kyiv💙💛

    • @sourishk07
      @sourishk07 4 months ago

      Hello to you in Kyiv! I completely understand the feeling. With the field of ML/AI changing at such rapid paces, it's hard sometimes to keep up! I struggle with this often too

  • @abhiseckdev
    @abhiseckdev 5 months ago +2

    Absolutely love this! Building a machine learning rig from scratch is no small feat, and your detailed guide makes it accessible for anyone looking to dive into ML, from hardware selection to software setup.

    • @sourishk07
      @sourishk07 5 months ago +2

      Thank you so much!!! I appreciate the support 🙏

  • @GodFearingPookie
    @GodFearingPookie 2 months ago

    Subscribed

    • @sourishk07
      @sourishk07 2 months ago

      Haha thank you so much! Stay tuned for more ML content!

  • @carlosnumbertwo
    @carlosnumbertwo 3 days ago

    I would've gone with AMD for the CPU, tbh.

  • @punk3900
    @punk3900 4 months ago

    Hi, what is your experience with this rig? Is the temperature not a problem given that the case is so tight?

    • @sourishk07
      @sourishk07 4 months ago

      Temperature has not been an issue, even with this case size

  • @jetman-x4e
    @jetman-x4e 1 month ago

    Why not the RTX 3000 Ada generation, or the 4000 or even 5000, rather than a 4090?

    • @sourishk07
      @sourishk07 1 month ago +1

      Hey, those are valid card choices. In this video I should've considered those and probably chosen a better card than the 4080. I was more focused on the software here

  • @danielgarciam6527
    @danielgarciam6527 5 months ago

    Great video! What's the name of the font you are using in your terminal?

    • @sourishk07
      @sourishk07 5 months ago

      Thank you for watching! The font is titled "CaskaydiaCove Nerd Font," which is just Cascadia Code with icons added, such as the Ubuntu and git logos.

    • @Param3021
      @Param3021 3 months ago

      @@sourishk07 Ohh, I was literally looking for this font for a long time. Will install it today and use it.

    • @sourishk07
      @sourishk07 3 months ago +1

      @@Param3021 Glad to hear it! Hope you enjoy! It works really well with Powerlevel10k

  • @punk3900
    @punk3900 4 months ago

    Is this system good for inference? Will Llama 70B run on this? I wonder whether RAM really compensates for the VRAM

    • @sourishk07
      @sourishk07 4 months ago +1

      Hello! That's a good question. Unfortunately, 70B models struggle to run. Llama 13B works pretty well. I think for my next server, I definitely want to prioritize more VRAM

  • @club4796
    @club4796 2 months ago

    Can we play games from this server remotely, like playing AAA games on an iPad or MacBook?

    • @sourishk07
      @sourishk07 2 months ago

      Yes, you would be able to, although using Windows + Parsec, or some sort of hypervisor might make things easier than natively gaming on Linux.

  • @JayG-hn9kf
    @JayG-hn9kf 5 months ago

    Great video! I never got the Continue extension working in code-server. Is there a step that I may have missed?

    • @sourishk07
      @sourishk07 5 months ago +1

      Thanks for watching! And regarding the Continue extension, what is the issue you're running into?

    • @JayG-hn9kf
      @JayG-hn9kf 5 months ago

      @@sourishk07 Thank you for offering support 🙂 I have followed your steps exactly; however, I don't get the Continue text zone to ask questions, not even the drop-down list to choose the LLM or setup. I tried Continue Release and Pre-release but neither worked. Could the fact that I have Ubuntu Server running as a VM under Proxmox with GPU passthrough have an impact?

    • @sourishk07
      @sourishk07 4 months ago

      I don't believe the virtualization should affect anything. When you go to install the Continue extension, what version are you seeing? Is it v0.8.25?

    • @marknivenczy1896
      @marknivenczy1896 4 months ago

      I've tried twice to post help with this, but YouTube does not like me adding a URL. Anyway, I found I needed to run code-server under HTTPS in order for Continue to run. If you open code-server under HTTP, it will issue an error (lower right) that certain webviews, clipboard, and other features may not operate as expected. This affects Continue. You can find the fix by searching for: Full tutorial on setting up code-server using SSL - Linux. This uses Tailscale, which Mr. Kundu has already recommended.

    • @sourishk07
      @sourishk07 3 months ago

      Thanks for sharing this insight! I probably should've specified that I set up SSL with Tailscale behind the scenes to avoid that annoying pop up message. I apologize for not being clearer!
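For anyone hitting the same wall, the fix boils down to serving code-server over HTTPS. One hedged sketch of what that can look like with Tailscale's built-in certificates (`tailscale cert` is a real Tailscale command; the hostname and file paths below are placeholders, not taken from the video):

```yaml
# ~/.config/code-server/config.yaml -- hostname and paths are placeholders.
# First mint a certificate for this machine's tailnet name:
#   tailscale cert ml-server.your-tailnet.ts.net
bind-addr: 0.0.0.0:8443
auth: password
cert: /home/user/certs/ml-server.your-tailnet.ts.net.crt
cert-key: /home/user/certs/ml-server.your-tailnet.ts.net.key
```

With a valid certificate in place, the browser treats code-server as a secure context, which unblocks the webview and clipboard features that Continue depends on.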

  • @mrb180
    @mrb180 2 months ago

    How do you get VS Code to have this modern-looking, curvy UI? Mine looks nothing like that

    • @sourishk07
      @sourishk07 2 months ago +1

      The font that I use is Cascadia Code and the theme that I use is Material Palenight!

  • @fantomgaming9018
    @fantomgaming9018 2 days ago

    ROG Strix for a server????

  • @LouisDuran
    @LouisDuran 1 month ago

    How is your i9-14900K holding up for you?

    • @sourishk07
      @sourishk07 1 month ago

      I think I might need to RMA it, tbh. I'm definitely facing some instability. Wouldn't recommend, rip

  • @johngou
    @johngou 1 month ago +1

    This is such a weird video. You know what Tailscale is, you know what a NAS is, you have a preference for Proxmox, you know what a server is supposed to do, and then you casually proceed to build the most consumer hardware build in existence with the wrong GPU and overkill components irrelevant to your use case. Is this bait? Literally what.

    • @sourishk07
      @sourishk07 1 month ago +1

      My original intention was to build a gaming computer for fun and have this video be solely about the software, but I decided last minute to include some more information about the hardware.
      You're definitely right, which is why my new machine has two RTX 3090s and I showcase it in my video about parallelism strategies.

  • @AlMuhimen
    @AlMuhimen 19 days ago

    My parents said that if I reach a job level, they'll buy me a 4090 GPU. For now, they've given me a budget for an RTX 4060 Ti (16GB). I have a question: since I'm currently learning at a beginner level, do I really need 32GB of RAM, or can I manage with 16GB for the next 1-2 years? For AI/ML, it seems like 64+ GB is best.
    Could you please give me some advice? They will only provide me with a high-budget build once I reach a job level, not before. Please reply 🙏 AMD R7 7700, RTX 4060 Ti (16GB), and 16/32GB of RAM for 1-2 years?

  • @alvaromorales5967
    @alvaromorales5967 2 months ago

    Could it be done on Windows with WSL?

    • @sourishk07
      @sourishk07  1 month ago

      Yes it can! I love WSL because if you have your NVIDIA drivers for Windows installed, your WSL instance will have them too! Same goes for Docker.
      Some of the setup steps might be different for WSL, so definitely be sure to look out for that
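As a quick sanity check, a sketch of what this looks like in practice (assuming WSL2 with the Windows NVIDIA driver installed; the CUDA image tag is just an example):

```shell
# Inside the WSL2 shell: the Windows NVIDIA driver is passed through
# automatically, so the GPU should show up with no Linux driver install.
nvidia-smi

# With Docker Desktop's WSL2 backend (or the NVIDIA Container Toolkit),
# containers can use the same GPU.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` is missing inside WSL, the usual cause is an outdated Windows driver or an older WSL1 distribution.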

  • @SamKhan-kb3kg
    @SamKhan-kb3kg 1 month ago

    How much did it cost you?

  • @vauths8204
    @vauths8204 2 months ago

    I didn't see thermal paste on that processor. Does it not need it?

    • @kashyapkshitij
      @kashyapkshitij 2 months ago +1

      The thermal paste comes pre-applied on the cooler.

    • @vauths8204
      @vauths8204 2 months ago

      @@kashyapkshitij Oh, that's sick. I haven't attempted this myself. Good stuff.

    • @kashyapkshitij
      @kashyapkshitij 2 months ago

      @@vauths8204 All good, just don't forget to peel off the sticker when you do.

  • @sunilkumarnayak9268
    @sunilkumarnayak9268 2 months ago

    What are the uses of your AI machine, and how can I earn money through it? Can you tell me?

    • @sourishk07
      @sourishk07  2 months ago

      Haha I really enjoy working on different ML projects, such as what you see in my Super Mario Bros RL or my Neural Radiance Fields projects!
      Regarding the second part, making money from this machine isn't really too much of a priority for me now so I'm not sure if I'm the best person to answer that. I still work a full-time job at TikTok lol. I believe that simply learning what I'm learning currently will pay off in the long term so I guess we'll see!

  • @kamertonaudiophileplayer847
    @kamertonaudiophileplayer847 5 months ago

    It's kind of atypical for a PC build to target something other than gaming.

    • @sourishk07
      @sourishk07  5 months ago +1

      Yes, you're definitely right! The motherboard was definitely meant only for gamers 😂

  • @T___Brown
    @T___Brown 5 months ago +1

    I didn't hear what the total cost was

    • @sourishk07
      @sourishk07  5 months ago +4

      Thanks for the comment. While focusing on the small details of the video, I completely forgot some of the important information haha. The cost pre-tax was $2.8k, although components like the motherboard do not have to be as expensive as what I paid. I was interested in the AI overclocking feature but never got around to properly benchmarking it. Anyway, I've updated the description to include a Google Sheet with a complete cost breakdown.

    • @T___Brown
      @T___Brown 5 months ago +2

      @@sourishk07 Thanks! This was a very good video.

  • @notSoAverageCat
    @notSoAverageCat 5 months ago

    What is the total cost of the hardware?

    • @sourishk07
      @sourishk07  5 months ago +2

      I forgot to mention this in the video, but the final cost was $2.8k pre-tax. Check out the link in the description for a Google Sheet with a complete price breakdown.

  • @ketankbc
    @ketankbc 2 months ago

    Where is the CPU cooling gel?????

    • @sourishk07
      @sourishk07  2 months ago

      If you mean the thermal paste, the CPU cooler came with it pre-applied!
      Otherwise, the AIO has its own coolant that it comes with inside to cool the CPU.

  • @arpanchoudhury_
    @arpanchoudhury_ 1 month ago

    I hope your procy is ok now

    • @sourishk07
      @sourishk07  1 month ago

      Lol I'm not sure I know what you mean here

    • @arpanchoudhury_
      @arpanchoudhury_ 1 month ago

      @@sourishk07 Check out Intel's recent issues with the 13th and 14th gen CPUs.

  • @aadilzikre
    @aadilzikre 4 months ago

    What is the total cost of this setup?

    • @sourishk07
      @sourishk07  4 months ago

      Hi! The total cost was about $2.8k, although I probably should've gone cheaper on some parts, like the motherboard. I have a full list of the parts in the description.

    • @aadilzikre
      @aadilzikre 3 months ago

      @@sourishk07 Thank you! I did not notice the sheet in the description. Very Helpful!

  • @bitcode_
    @bitcode_ 5 months ago

    If it doesn't have 4 H100s that I cannot afford, I don't want it 😂

    • @sourishk07
      @sourishk07  5 months ago +1

      LOL maybe if I get a sponsorship, it’ll happen 😂😂😂 That’s always been a dream of mine

    • @bitcode_
      @bitcode_ 5 months ago

      @@sourishk07 i subbed to see that one day 🙌

    • @sourishk07
      @sourishk07  5 months ago +1

      Haha I appreciate it! Looking forward to sharing that video with you eventually

  • @soumyajitganguly2593
    @soumyajitganguly2593 4 months ago

    Who builds an ML system with a 4080? 16GB is actually not enough! Either go 4090 or 3090.

    • @sourishk07
      @sourishk07  4 months ago +1

      Yeah you’re right. Thanks for the comment! Next time I build a server, I’ll keep this in mind!

  • @crypto_que
    @crypto_que 2 months ago

    When bro told his parents there weren't going to be any grandchildren they laughed and were not surprised.

    • @sourishk07
      @sourishk07  2 months ago

      LMAO this is hilarious

  • @Maybemaybexyz
    @Maybemaybexyz 4 months ago

    Has anyone built this yet based on his recommendation?

    • @sourishk07
      @sourishk07  4 months ago

      Hi! Regarding the hardware side of things, I didn't really mean to recommend this specific set of parts. I just wanted to share my experience!
      However, the software consists of tools I definitely use on a day-to-day basis and cannot recommend enough!

    • @marknivenczy1896
      @marknivenczy1896 4 months ago +1

      I built a similar rig, but with a Silverstone RM44 rack-mount case and a Noctua NH-D12L with an extra fan for cooling instead of the water unit. Fitting the GPU in the case required a 90-degree angled extension cable from MODDIY (type B). I used an ASUS ProArt Z790 motherboard. All of the software recommendations were great.

    • @sourishk07
      @sourishk07  4 months ago +1

      @@marknivenczy1896 I'm glad you enjoyed the software recommendations!

  • @nisaybliss855
    @nisaybliss855 9 days ago

    Never go to Kenya or Tanzania with that surname

  • @حودة-الديدو
    @حودة-الديدو 5 months ago

    Great video, truly loved it ❤, but you should hide your ID 😭😭

    • @sourishk07
      @sourishk07  5 months ago

      Thanks for watching! And do you mean my email id?

    • @حودة-الديدو
      @حودة-الديدو 5 months ago

      @@sourishk07 Sorry, I meant to type IP, but I think it's a local IP, so no worries ❤️

    • @حودة-الديدو
      @حودة-الديدو 5 months ago

      @@sourishk07 thanks for your high quality content.

    • @sourishk07
      @sourishk07  5 months ago +1

      I appreciate it! And yeah, all the IPs in the video were my Tailscale IPs which are only accessible to me, so unless my Tailscale account gets hacked, I have nothing to worry about.

  • @punk3900
    @punk3900 4 months ago +1

    this is por**graphy

  • @goblinphreak2132
    @goblinphreak2132 1 month ago +1

    Ultimate machine? Using Intel? Hahahahahaha.

    • @sourishk07
      @sourishk07  1 month ago

      Yeah bro it's not looking so hot

  • @robertthallium6883
    @robertthallium6883 4 months ago

    Why do you move your head so much?