Zero to Hero LLMs with M3 Max BEAST

  • Published: 21 Sep 2024

Comments • 318

  • @AZisk
    @AZisk  5 months ago +1

    JOIN: youtube.com/@azisk/join

    • @AdamsTaiwan
      @AdamsTaiwan 4 months ago

      Just tried LM Studio on my desktop. I was able to connect my 8-year-old notebook's VS Code to it using Code GPT. Pretty nice, but I'm still looking for a solution that can scan my whole VS solution and tell me where to fix my problems.
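
      For reference, LM Studio's local server exposes an OpenAI-compatible HTTP API, which is what editor plugins like Code GPT point at. A minimal Python sketch, assuming the server is enabled on LM Studio's default port 1234 with a model loaded; the model name below is just a placeholder:

        # Minimal sketch: query LM Studio's local OpenAI-compatible server.
        # Assumes the server is running on the default port 1234; the model
        # name is a placeholder.
        import json
        import urllib.request

        payload = {
            "model": "local-model",
            "messages": [{"role": "user", "content": "Write a regex that matches an email address."}],
            "temperature": 0.7,
        }
        req = urllib.request.Request(
            "http://localhost:1234/v1/chat/completions",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["choices"][0]["message"]["content"])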

  • @MaxTechOfficial
    @MaxTechOfficial 9 months ago +119

    Keep up the good hustle, Alex! -Vadim

    • @AZisk
      @AZisk  9 months ago +12

      Thanks Vadim!

    • @univera1111
      @univera1111 9 months ago +1

      @@AZisk If I may ask, can you replicate this on Linux or Windows and see which is easier for users? Or you can just answer here.

    • @zt9233
      @zt9233 9 months ago

      @@univera1111 also benchmarks

    • @abhishekjha9041
      @abhishekjha9041 9 months ago

      @@AZisk Sir, please make a video on MacBook Pro specifications for machine learning. I'm so confused about what to buy: the 16-inch with 30 cores and 96GB RAM, the 16-inch with 40 cores and 64GB RAM, or an M3 Pro with 18 cores and 36GB RAM. Other people are confused like me as well, so please make a separate video on that; it's a request.

    • @abhishekjha9041
      @abhishekjha9041 9 months ago

      @@AZisk And I have a question: I did some research and found out that Delaware has zero sales tax, which means if I buy a $2,500 MacBook Pro there I don't have to pay any tax on it. Is that true, sir?

  • @bawbee27
    @bawbee27 7 months ago +7

    Incredibly helpful - this is the video everyone with an Apple Silicon machine trying to do LLMs should see!

  • @giovannimazzocco499
    @giovannimazzocco499 9 months ago +17

    Excellent stuff. I searched YouTube for weeks to find benchmarks of DNN models on the M3. This is the first and only one I've found so far. There's a ton of videos on video editing, graphics, gaming and music production on M3s, but when it comes to fresh material about machine learning on Apple Silicon, I'm pretty convinced you're the only game in town. Keep it up. Looking forward to seeing more benchmarks.

  • @gargarism
    @gargarism 9 months ago +12

    I think the very first thing I will try out on my already-ordered M3 Max will be to follow what you did. The whole reason I bought the M3 Max is to work with machine learning. So thanks a lot!

    • @AZisk
      @AZisk  9 months ago +1

      Good choice!

    • @zt9233
      @zt9233 9 months ago +1

      @@AZisk Is the M3 Max as good as Nvidia for this?

    • @pec8377
      @pec8377 9 months ago +2

      @@zt9233 No it's not, unless you want to run large models that won't fit into Nvidia's cards; those will always beat the M3 GPU. Maybe not when the ANE is activated, but none of the tools presented here supports Core ML.

    • @MikeBtraveling
      @MikeBtraveling 9 months ago

      @@zt9233 If you are looking for a laptop to work with LLMs on, you can't really beat the Mac for models larger than 7B that you want to run locally.

  • @JasonHorsnell
    @JasonHorsnell 9 months ago +5

    Just got myself an M3 Max and found your videos. You’ve saved me SO MUCH TIME…..
    Very much appreciated…..

    • @danieljohnmorris
      @danieljohnmorris 6 months ago

      How much RAM?

    • @JasonHorsnell
      @JasonHorsnell 6 months ago

      36GB base Max. More than enough for my purposes atm.

    • @TimHulse
      @TimHulse 5 months ago

      Same here!

  • @tonbii
    @tonbii 7 months ago +1

    I bought an M1 Max with 64GB 3 years ago to do this kind of work. I am so happy to find this video.

  • @catarinamoreira4805
    @catarinamoreira4805 9 months ago +7

    This is fantastic! Thank you so much! More content on LLMs, please!

  • @joshgarzaBI
    @joshgarzaBI 6 months ago +2

    Awesome video here. I'm bummed I didn't do it sooner. I have never seen my M1 (16GB) freeze before. Great teaching here!

  • @theperfguy
    @theperfguy 9 months ago +12

    I have to commend you for your effort.
    I haven't seen any other reviewer showing any use case other than media consumption, synthetic benchmarks, and video encoding and editing.
    You are perhaps the only YouTuber I know who tries out other things like code compile times and ML workloads, which is what is going to run on the majority of these high-end machines.

    • @AZisk
      @AZisk  9 months ago

      Glad it was helpful!

  • @aimademerich
    @aimademerich 6 months ago +2

    Thank you for the GPU setting in LM Studio at 15:00!! Can you do more videos on proper GPU setup for LLMs on M1-M3?

  • @LukeBarousse
    @LukeBarousse 9 months ago +3

    Interesting, I didn't know about LM Studio; that makes things A LOT cleaner

  • @mr.w7803
    @mr.w7803 9 months ago +1

    Dang!! Dude, this video sold me on that M3 Max configuration… this is EXACTLY what I want to do on my machine

  • @devdeal4146
    @devdeal4146 6 months ago +1

    Just got the m3 max with 48gb ram. Excited to see how it works with your tutorial. Thanks!

  • @amermoosa
    @amermoosa 9 months ago +1

    Amazing. Just shrinking the whole second year of engineering college into 17 minutes. Incredible 😊

  • @joshbarron7406
    @joshbarron7406 9 months ago +11

    I would love to see a tokens/second benchmark between the M2 Max and M3 Max. Trying to decide if I should upgrade.

    • @abhinav9058
      @abhinav9058 8 months ago

      Hey did you upgrade?

  • @atldeadhead
    @atldeadhead 9 months ago +3

    I enjoy all your videos but this one was particularly interesting. I look forward to future videos that explore machine learning leveraging the power of the M3 Max. Fantastic stuff, Alex. Thank you!

  • @stanchan
    @stanchan 9 months ago +3

    The performance of the M3 is amazing. Waiting for the refreshed Studio, as the M3 Ultra will be a beast. Hoping it will have the 256GB RAM as predicted.

  • @scosee2u
    @scosee2u 9 months ago +5

    I really love your videos and how you explain these cutting-edge concepts! Would you consider researching or interviewing someone to make a video about quantization options and how they impact using LLMs for coding? Thanks again for all you do!

    • @AZisk
      @AZisk  9 months ago +2

      Possibly!

  • @paraconscious790
    @paraconscious790 2 months ago

    This is amazing, awesome, super crisp yet easy to understand with absolute engagement 🙌👌🙏

  • @SebastianWerner82
    @SebastianWerner82 9 months ago +1

    Great to see you creating videos with this type of content as well.

  • @dennisBZC
    @dennisBZC 4 months ago

    Hey Alex,
    I’ve been watching many of your videos, mostly for comedy - as I find you hilarious the way you explain things to a non-tech mortal, but occasionally, try to copy your instructions and try my luck to test out a few things for fun. I’m not one for cutting code, but I still watched the whole thing, just to get to the LM Studio to download a model to try out on my M3 Max. I tried the Phi3, thinking Microsoft might be better than the others.
    I don’t have a clue what I’m doing, but it seems to work a little.
    You are a LEGEND!
    Keep up the great work. Love to see how you train your AI in due course.
    I keep shouting at it to “sit”…my MacBook hasn’t moved, so I guess, it is quite obedient.

  • @ismatsamadov
    @ismatsamadov 9 months ago +1

    I subscribed a few months ago, but I have never seen such quality content. Thanks, Alex! Keep going.

    • @AZisk
      @AZisk  9 months ago

      thx 🙏

  • @facepalmmute3619
    @facepalmmute3619 9 months ago +1

    the bass in your voice on the MBP speakers is phenomenal

  • @hamiltonwmr189
    @hamiltonwmr189 9 months ago +1

    If you are going to do any intensive task on a MacBook, keep it charged at 80% using AlDente. Don't run the models on battery, as churning through cycles will damage its health; keep it on the power adapter at 80% charge. I did some intensive training on my M1 Pro and it went from 100% to 96% battery health in 1 year.

    • @CitAllHearItAll
      @CitAllHearItAll 7 months ago

      4% loss in 1 year is normal. I'm at 2+ years on M1 Pro with 86% battery health. You're either trippin or trollin.

  • @BenWann
    @BenWann 4 months ago +1

    I couldn't agree more - I wanted to really sink my teeth into ML since it's been a while - and I bought an MBP M3 Max after seeing your comparisons. Sorry I couldn't use an affiliate code - Micro Center had a killer deal on it :(. I look for your videos to drop now, and look forward to what you come up with next.

  • @FrankHouston-v5e
    @FrankHouston-v5e 5 months ago

    Best LLM build video on YouTube ❤. I'm buying my 36GB MacBook Pro M3 Max with a 14-core CPU and 30-core GPU. Planning on launching a YouTube AI/ML channel soon 🧐.

  • @anthonyzheng7274
    @anthonyzheng7274 9 months ago

    You are awesome! This is great; I bought an M3 Max several days ago and am really having a great time playing around with LLMs.

  • @jorgeluengo9774
    @jorgeluengo9774 4 months ago +1

    Thank You Alex, this is an amazing video. I will look into the software development tools installation.

    • @AZisk
      @AZisk  4 months ago

      Awesome! Thanks

  • @Mrloganphillips1
    @Mrloganphillips1 6 months ago

    I had so much fun with this project. I just got an M3 Max and wanted a project to work on. After I got Llama running, I made a bash script to run the command and trigger a second bash script that opens a browser window to the IP address after a 5s delay, to let the server get up and running first. Then I made a Shortcuts button to run it. Now I have an on-demand LLM with an easy-to-use on/off button.
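
    A minimal Python sketch of the same idea; the server command, port, and URL below are placeholders for whatever you actually run locally (ollama's defaults are used here purely as an example):

      # Minimal sketch: start a local LLM server, wait a few seconds for it to
      # come up, then open its address in the default browser. The command and
      # URL are placeholders; substitute whatever server/port you actually use.
      import subprocess
      import time
      import webbrowser

      server = subprocess.Popen(["ollama", "serve"])   # placeholder server command
      time.sleep(5)                                    # crude wait for startup
      webbrowser.open("http://127.0.0.1:11434")        # placeholder address/port
      server.wait()                                    # block until the server exits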

  • @JohnSmith762A11B
    @JohnSmith762A11B 8 months ago +1

    Excellent. Many thanks for putting this together! 🥂

  • @01_abhijeet49
    @01_abhijeet49 5 months ago

    These models run soooo well on my RTX 3060 desktop. At last, my investment is worth it.

  • @juangarcia-wp2zr
    @juangarcia-wp2zr 9 months ago +2

    Very cool content, thanks. I feel very curious now to try out some of these LLMs.

  • @christopherr8441
    @christopherr8441 9 months ago +3

    If only we could directly access and use the Apple Neural Engine for doing things like this. Imagine the speed and performance gains.

  • @someone5781
    @someone5781 3 months ago

    So excited for your next video on training on the m3 max!

  • @RadAlzyoud
    @RadAlzyoud 9 months ago +2

    Brilliant. Thanks for sharing.

  • @MikeBtraveling
    @MikeBtraveling 9 months ago +4

    I bought a maxed-out M3 Max to do this. Please run the larger models with ollama. When using LM Studio you need to make sure you are using the correct prompt template for the model; I think that was your issue.

    • @mrsai4740
      @mrsai4740 12 days ago

      @@MikeBtraveling I'm curious, were you able to run a larger model like a 70B Llama model?
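
    For anyone who wants to try a larger model with ollama as suggested above: a minimal Python sketch against ollama's local REST API, assuming ollama is serving on its default port 11434 and the model tag (here "llama2:70b", a placeholder) has already been pulled:

      # Minimal sketch: generate text from a locally running ollama server.
      # Assumes ollama is serving on its default port 11434; the model tag
      # is a placeholder and must already be pulled.
      import json
      import urllib.request

      payload = {
          "model": "llama2:70b",
          "prompt": "Explain the difference between a 4-bit and an 8-bit quantized model.",
          "stream": False,
      }
      req = urllib.request.Request(
          "http://localhost:11434/api/generate",
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])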

  • @nikolamar
    @nikolamar 9 months ago +1

    Alex this is AWESOME!!! Thank you!

  • @jameshancock
    @jameshancock 9 months ago +2

    Nice! Thanks!
    FYI, when you change the preset you're changing how the prompt is fed into the LLM, which is what caused it to go nuts.
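
    That preset point is worth spelling out: chat-tuned models expect their input wrapped in a specific prompt template, and the wrong template is a common cause of rambling output. A rough illustrative sketch of two common formats (the exact strings a given model expects are listed on its model card):

      # Rough sketch of two common chat prompt templates (illustrative only).
      def llama2_chat(system: str, user: str) -> str:
          # Llama-2-chat style template
          return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

      def chatml(system: str, user: str) -> str:
          # ChatML style template, used by several instruct-tuned models
          return (f"<|im_start|>system\n{system}<|im_end|>\n"
                  f"<|im_start|>user\n{user}<|im_end|>\n"
                  f"<|im_start|>assistant\n")

      print(llama2_chat("You are a helpful assistant.", "Write a haiku about GPUs."))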

  • @suburbanflyer
    @suburbanflyer 9 months ago

    Thanks for this Alex! Just got an M3 Max so it'll be great to try out some new things on it, this definitely looks interesting!

  • @XNaos
    @XNaos 9 months ago +1

    Finally, I waited for this

  • @terra8net
    @terra8net 2 months ago

    THX.... Great LLM content for Mac users :)

  • @abhinav23045
    @abhinav23045 9 months ago +1

    That fan noise is like feeling the power of AGI.

    • @AZisk
      @AZisk  9 months ago

      😆

  • @bdarla
    @bdarla 9 months ago

    Super helpful! I hope you will continue with further relevant videos!

  • @tomdonaldson8140
    @tomdonaldson8140 9 months ago +2

    Love it! Looking forward to the training video(s). Now I want a Mac Studio M3 Ultra! Oh, no such thing yet? Come on Apple! We’re waiting!!!

  • @davidpsp89
    @davidpsp89 9 months ago +3

    Super interesting and useful. I'll take this opportunity to ask about MATLAB again and its real performance, since the numbers on Apple's page are not realistic.

  • @DavidCampero26
    @DavidCampero26 8 months ago +1

    Hi Alex! I would love to see a comparison between the M3 Max 14/30 and M3 Max 16/40 with the same processes for LLMs. I read that many people are going with the base model M3 Max and I would like to see how much difference there is. If you know of someone who did it, please let me know!! I want to buy a laptop as soon as possible!! Thanks!!

  • @estebanguillen8110
    @estebanguillen8110 9 months ago

    Great video, looking forward to the LLM fine-tuning video.

  • @Andrew-v2g
    @Andrew-v2g 9 months ago +1

    Alex, thanks.

    • @AZisk
      @AZisk  9 months ago

      You bet!

  • @justisabelll
    @justisabelll 9 months ago +4

    Great video, really looking forward to the next few ML-related ones. You might have had better results with LM Studio though if you disabled mlock after enabling Metal GPU. Also, the model output looks nicer if you enable markdown in the settings as well.

  • @MuhammaddiyorMurodov-l5n
    @MuhammaddiyorMurodov-l5n 9 months ago

    Thank you so much for making this video, it was really helpful. Please do more of this kind of coding video and testing on the M3 MacBook, and push it to its limits. I think you are the best channel for this because you have the knowledge and the intention to do these things, and it will be a win-win situation for both of us.

    • @sujithkumar8261
      @sujithkumar8261 9 months ago

      Are you using the base MacBook M3 variant?

  • @_mansoor
    @_mansoor 7 months ago

    Awesome, Thank you.
    Halo Alex!!!🎉🎉

  • @bobybobybobo
    @bobybobybobo 8 months ago

    Just tried this on an M1 Max; token generation speed is about 15% slower, i.e., 208 vs 238. So the $2100 M1 is still holding up OK compared to the $3500 M3 for this LLM experiment...

  • @keithdow8327
    @keithdow8327 9 months ago +2

    Thanks!

    • @AZisk
      @AZisk  9 months ago

      🤩 thanks!

  • @pbdivyesh
    @pbdivyesh 9 months ago +1

    You're a good lad, thank you!🎉😅

  • @JonNordland
    @JonNordland 9 months ago +5

    I would love some videos where the M3 is pushed a bit harder. For instance, 70B models. The 70B models are much more useful for real work.

    • @AZisk
      @AZisk  9 months ago +2

      Noted!

    • @brandall101
      @brandall101 9 months ago

      I have the 48GB variant so I can't do 70B... but 34B models run fairly slowly as is, seeing a reported 11-12 tok/sec in LM Studio, so I'd expect a 70B to be about 5-6 tok/sec. It's also pushing 110W during inference. For me personally, that's just not performant enough for real use, so I opted not to go through the hassle of swapping for a 64GB BTO.

    • @AZisk
      @AZisk  9 months ago

      @@brandall101 For what it would cost to get a high-end 128+ GB Mac AND all the SSD space you'd need for the ML models, I would just get a 4080 or 4090. The only problem is memory requirements.

    • @brandall101
      @brandall101 9 months ago

      @@AZisk The common thing is to buy a pair of 3090s to get a nice middle-ground between performance / memory / cost... those can be had used for about $1500. I just don't think the hardware is quite there... yet. A couple more generations and I think we'll be golden.

    • @geoffseyon3264
      @geoffseyon3264 9 months ago

      I hope Apple is reading this thread…

  • @fallinginthed33p
    @fallinginthed33p 1 month ago

    Now the Snapdragon X Elite laptops are also pretty good at local LLMs if you run a specific quantized format.

  • @yashen12345
    @yashen12345 9 months ago +6

    YES, AWESOME! More on this please. Do you have an M3 Max with 128GB available? Typically the smaller and quantized models you have showcased will perform worse than the bigger ones. I want to see Llama 2 70B 8-bit running on an M3 Max with 128GB. This is the largest, most powerful model that's able to fit on a MacBook. Let's push this thing to the absolute limit and see how it performs.
    Llama 2 70B is actually able to match ChatGPT 3.5 performance, so if we're able to run this we can have OUR OWN ChatGPT that is actually as good as ChatGPT, running LOCALLY ON A MACBOOK. THAT'S INSANE. Please can I have a video on this?

    • @toddturner6
      @toddturner6 9 months ago +1

      Actually it will run Mistral 180B in the top configuration with 128GB RAM.
      Edit: typo. Falcon 180B.

    • @yashen12345
      @yashen12345 9 months ago

      @@toddturner6 ? Mistral is 7B

    • @toddturner6
      @toddturner6 9 months ago +1

      @@yashen12345 Falcon 180B (typo).
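
    A back-of-envelope calculation for the model sizes discussed in this thread (weights only; the KV cache and runtime overhead add several more GB, so treat these as lower bounds):

      # Approximate weight sizes at different quantization levels (weights only).
      for params_b in (70, 180):
          for name, bits in (("fp16", 16), ("q8_0", 8), ("q4_0", 4)):
              gb = params_b * 1e9 * bits / 8 / 1e9
              print(f"{params_b}B {name}: ~{gb:.0f} GB")
      # 70B:  fp16 ~140 GB, q8_0 ~70 GB, q4_0 ~35 GB
      # 180B: fp16 ~360 GB, q8_0 ~180 GB, q4_0 ~90 GB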

  • @eldee8704
    @eldee8704 7 months ago

    Awesome tutorial! I bought the 14" MacBook Pro M3 Max base model for this to try out.. lol

  • @geog8964
    @geog8964 9 months ago

    Thanks, Alex.

  • @vincentnestler1805
    @vincentnestler1805 7 months ago +1

    Thanks!

    • @AZisk
      @AZisk  7 months ago

      🤩 thanks!

  • @ergun_kocak
    @ergun_kocak 9 months ago

    3 to 5 times faster than a full-spec M1 Max 64GB. Thank you very much for the video 👍

  • @stephenthumb2912
    @stephenthumb2912 9 months ago +1

    Thanks for testing. It's interesting that even with enough memory there's still some slowness on the bigger model quants. My base M2 with 8GB can barely run the Q4 7Bs... I prefer ollama using the CLI, which will run at usable tok/s. It's sort of OK with LM Studio, but generally I need to run 3Bs or below with Q4 quants. Orca Mini 3B is sort of the default test standard for me on 8GB Macs, including the MacBook Air. I can confirm that using the Metal checkbox causes runaways. Funnily enough, textgen runs fine with Metal support as well.

  • @ChitrakGupta
    @ChitrakGupta 9 months ago

    That was really good. I learnt something, and it was fun to run on the new M3 Max.

  • @jeffersonmp4
    @jeffersonmp4 4 months ago +2

    How do you know how to do all these steps?

  • @chillymanny714
    @chillymanny714 9 months ago +1

    This is a great video. I think if you were to make videos teaching intro/intermediate data analysts how to build LLMs, or a series of videos trying out different application creation using Macs with M chips, it would be a big hit. I will try to replicate your approach.

    • @syedanas2083
      @syedanas2083 9 months ago

      I look forward to that

  • @DivineZeal
    @DivineZeal 6 months ago

    Great video! Thinking about getting the MBP M3 for LLMs.

  • @juliana.2120
    @juliana.2120 9 months ago +1

    Ohh, I love that you use conda here because it really helps me keep my hard drive clean with all those different AIs :D I'm an absolute beginner, so I'm afraid of installing stuff I can't find later on.
    Some people say it's "outdated" and runs into errors too often, but I can't really judge that. Is that true?

  • @astrohgamingZero
    @astrohgamingZero 5 months ago

    Looks good. I use text-generation-webui and the chat/chat-instruct modes or input presets can make or break some models.

  • @rhoderzau
    @rhoderzau 9 months ago +3

    I was away when my M3 Max (40 Core GPU, 2TB, 64GB) arrived so just got a hold of it now. Looking forward to giving LM Studio a go and finally learning how everything works rather than just what the outcome is.

  • @MikeBtraveling
    @MikeBtraveling 9 months ago

    very interested in the topic and would love to see you do more in this space.

  • @kman41000
    @kman41000 9 months ago +1

    Awesome video man!

    • @AZisk
      @AZisk  9 months ago

      Glad you enjoyed it

  • @SergeyZarin
    @SergeyZarin 9 months ago +1

    Thanks, great explanatory video!

    • @AZisk
      @AZisk  9 months ago

      Glad it was helpful!

  • @camsand6109
    @camsand6109 9 months ago

    Glad I subscribed. You've been on a roll lately (new subscriber).

  • @PMX
    @PMX 7 months ago

    7:15 No, the one for text generation is S_TG t/s; in your case about 20 tokens per second for f16. And at 7:46, again, that's the wrong column; the correct column (S_TG t/s) shows the correct value, 33.61, which is exactly what I get on an M2 Max for a 7B Q8_0 (the GPU improvements in the M3 Max compared to the M2 Max don't have much impact on LLM inference; they are mostly useful for 3D rendering apps like Blender and for games).
    The column you are using (S t/s) is (prompt processing tokens + text generation tokens) / total time, which is a meaningless number (you can get it to be as fast as prompt processing speed just by having a large prompt and a very small text generation, or as slow as text generation speed by having a short prompt and a large number of generated tokens).
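
    A quick worked example of why that combined number is misleading (the speeds here are made up purely for illustration):

      # Illustrative only: how the combined (S t/s) figure mixes prompt
      # processing with generation.
      prompt_tokens, gen_tokens = 1000, 100
      pp_speed, tg_speed = 400.0, 20.0        # tokens/s: prompt processing vs generation

      total_time = prompt_tokens / pp_speed + gen_tokens / tg_speed   # 2.5 s + 5.0 s = 7.5 s
      combined = (prompt_tokens + gen_tokens) / total_time            # ~147 t/s
      print(f"combined S t/s = {combined:.1f}, actual generation speed = {tg_speed} t/s")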

  • @onclimber5067
    @onclimber5067 9 months ago +1

    Maybe I am a bit late, but I am going to ask anyway.
    If you were to buy a new laptop right now with a budget of about 2500-3000, what would you get? I am currently thinking:
    Dell XPS 15 with 32GB RAM, i9-13900H, 1TB, RTX 4070
    M3 Pro 36GB, 1TB
    M2 Pro 32GB, 1TB
    Can't decide haha

  • @MeinDeutschkurs
    @MeinDeutschkurs 9 months ago

    Thanks for the video, but there's something I don't understand: we explored this together, yet concluding with "find another model" isn't what I expected. Achieving satisfaction through a successful demonstration, despite the difficult journey, is essential. Now, I feel like I've wasted my time. I could have just downloaded LM Studio and figured it out through trial and error myself.

    • @AZisk
      @AZisk  9 months ago +1

      this video is not about finding the best model, but getting set up with an environment that will allow you to use any model you want.

    • @MeinDeutschkurs
      @MeinDeutschkurs 9 months ago

      @@AZisk, and that's why this demonstration was not satisfying. But yes, it was a demonstration: an unsatisfying demonstration. :)

  • @TimHulse
    @TimHulse 5 months ago

    That's great, thanks!

  • @laobaGao-y7f
    @laobaGao-y7f 9 months ago +2

    What do you think of the 96GB version of the M2 Max? I want to deploy my own 13B model locally (training it on some relatively sensitive data), or even have it become my 'digital clone'. Do you think the 38-core 96GB M2 Max is a suitable choice?

  • @RobertMcGovernTarasis
    @RobertMcGovernTarasis 4 months ago

    Llama 3's output for this is pretty decent, and it even broke down the regex. I haven't tested it yet :) (mostly because I don't know Python), but the JavaScript version certainly worked. LM Studio is really nice; it's just a shame you can't use it to benchmark in quite the same way. My poor M1 Air only gets 10.71 tok/s with Llama 3 7B q4_k_m *cries*

  • @timelesscoding
    @timelesscoding 8 months ago +1

    Interesting stuff, I wish I could understand a little more. Thanks

  • @boraoku
    @boraoku 9 months ago

    In my experience trying different open LLMs for code generation, my recommendation is: don't waste your time unless you can't access OpenAI for some reason…

  • @theoldknowledge6778
    @theoldknowledge6778 9 months ago

    This LM Studio is Lit 🔥

  • @SimoneFolador
    @SimoneFolador 9 months ago

    Thanks for the video, man! I loved it and it helped me a lot since I wanted to try some models on my machine. What's your experience with the fans on the M3 Max machine? I've read that they are pretty noisy and that it gets pretty hot as well. I still have an Intel machine (last generation) with 64GB RAM and a 2TB drive, but I wanted to buy a new M3 Max.

  • @Xilefx7
    @Xilefx7 9 months ago +1

    Can you test the LLM performance in low power mode? I believe Apple needs to optimize how they handle the thermals of the MacBook Pro with the M3 Max.

  • @AliHussain-jh3iq
    @AliHussain-jh3iq 5 months ago

    Insightful video! Planning to get a MacBook Pro M3 Max for LLM work. Should I go for 1TB or 2TB, a 14- or 16-core CPU, and 64GB or 128GB RAM? Thanks for your insight!

  • @nickwind2584
    @nickwind2584 9 months ago +4

    I learned more about AI in just 15 minutes with Alex than I did taking an entire AI class in college.

  • @akashtriz
    @akashtriz 7 months ago

    Why is it that no one questions the Metal GPU hardware for bungling up the model? Llama seemed less wacky on CPU.

  • @neodim1639
    @neodim1639 9 months ago +2

    Try ollama instead

  • @Jorge-ls9po
    @Jorge-ls9po 6 months ago

    Nice vid. Now, with the M3 Max, should I stick to 64 GB of unified RAM for this sort of task? A jump to 128 GB will cost me a thousand bucks more. Cheers!

  • @mercadolibreventas
    @mercadolibreventas 9 months ago

    Keep it up! Good job! Can you do a video on getting Llama Factory set up on the M3? Thanks!

  • @RenanHiramatsu
    @RenanHiramatsu 9 months ago +1

    Ok buying my M3 Max tomorrow (really)

  • @petercheung63
    @petercheung63 9 months ago

    Ziskind is a surname meaning "sweet child".

  • @TwstedTV
    @TwstedTV 8 months ago

    Also, when I ask the language model about its knowledge base of programming languages and the latest versions it knows,
    this is the reply I get back, which is further proof that many of the language models are pretty outdated.
    Here is the list it gave me of what it knows.
    Here are some of the latest versions of the programming languages I mentioned earlier:
    1. Python - Python 3.10.2 (released in October 2021)
    2. Java - Java 17 (released in September 2021)
    3. C++ - C++20 (released in December 2019, but still considered the latest version as no official C++21 has been released yet)
    4. JavaScript - ECMAScript 2021 (also known as ES2021, released in June 2021)
    5. PHP - PHP 8.0 (released in November 2020)
    6. Ruby - Ruby 3.0 (released in October 2020)
    7. Swift - Swift 5.5 (released in April 2021)
    8. Kotlin - Kotlin 1.5.30 (released in July 2021)
    9. SQL - The latest version of SQL varies depending on the database system, but for example, MySQL is currently at version 8.0 released in January 2019

  • @speed_mc3990
    @speed_mc3990 3 months ago

    I train my own NNs, but sadly popular ML libraries like TensorFlow and PyTorch don't support multiple GPUs, hence I can only use one GPU core.

  • @42Odyssey
    @42Odyssey 9 months ago +1

    Thanks for the video, Alex. As for me, my laptop at work is the famously loud MBP 16 Intel i9. My personal machine is a 14" M3 Max 64GB. I have the two laptops with me right now, and the 16" Intel is louder than my 14" M3 Max in my opinion. Maybe it's a 16" thing...

    • @AZisk
      @AZisk  9 months ago +4

      When not under stress the Intel will keep being loud and the Apple Silicon will be silent. But when the fans hit over 3500 RPM, the M3 Max is louder than any other one I've heard.

    • @brandall101
      @brandall101 9 months ago

      The main thing with the Intel machines is the GPU. Any moderate load will push it into chaos. With the Max you have to really push it hard - either high performance gaming or inference will do it.

    • @JS-ih4qi
      @JS-ih4qi 9 months ago

      @@AZisk I read that the 14" can throttle from the heat due to the smaller fans. Would this affect how fast an LLM responds after it's set up on the computer? I'm looking at the biggest M3 Max chip with 64GB RAM and 4TB. Appreciate any advice.

  • @innocent7048
    @innocent7048 9 months ago +1

    Very interesting article. I will try this :-)

    • @AZisk
      @AZisk  9 months ago

      🤩 thanks so much!

  • @PietroSperonidiFenizio
    @PietroSperonidiFenizio 9 months ago

    I might need to upgrade from my Apple IIc.

  • @davidpsp89
    @davidpsp89 9 months ago

    LM Studio is an ideal environment for doing this on a Mac, since it is not consuming as much GPU; for that we would need to use Nvidia, and that is not an option on M chips.

  • @PIZZA_KITTY
    @PIZZA_KITTY 6 months ago

    I really want the M3 Max but wish it was a little cheaper 😱