How Did Llama-3 Beat Models x200 Its Size?

  • Published: 21 Sep 2024
  • Sign up Shipd now to start earning while coding! tally.so/r/3jBo1Q
    And check out Datacurve.ai if you're interested: datacurve.ai/
    In this video, I compiled the latest Llama-3 news and information that you might have missed. Llama-3 is actually very impressive, and I am going to go find my jaw because I accidentally dropped it somewhere.
    xAI News
    [Grok-1] x.ai/blog/grok-os
    [Grok-1.5 Vision] x.ai/blog/grok...
    [Code] github.com/xai...
    Llama-3 News
    [Blog] ai.meta.com/bl...
    [Huggingface] huggingface.co...
    [NVIDIA NIM] nvda.ws/3Jn5pxb
    This video is supported by the kind Patrons & YouTube Members:
    🙏Andrew Lescelius, alex j, Chris LeDoux, Alex Maurice, Miguilim, Deagan, FiFaŁ, Daddy Wen, Tony Jimenez, Panther Modern, Jake Disco, Demilson Quintao, Shuhong Chen, Hongbo Men, happi nyuu nyaa, Carol Lo, Mose Sakashita, Miguel, Bandera, Gennaro Schiano, gunwoo, Ravid Freedman, Mert Seftali, Mrityunjay, Richárd Nagyfi, Timo Steiner, Henrik G Sundt, projectAnthony, Brigham Hall, Kyle Hudson, Kalila, Jef Come, Jvari Williams, Tien Tien, BIll Mangrum, owned, Janne Kytölä, SO, Hector, Drexon, Claxvii 177th, Inferencer, Michael Brenner
    [Discord] / discord
    [Twitter] / bycloudai
    [Patreon] / bycloud
    [Music 1] massobeats - swing
    [Music 2] massobeats - lush
    [Music 3] massobeats - glisten
    [Profile & Banner Art] / pygm7

Comments • 296

  • @bycloudAI 5 months ago +26

    Sign up Shipd now to start earning while coding! tally.so/r/3jBo1Q
    And check out Datacurve.ai if you're interested: datacurve.ai/
    On a side note, I'm also looking for some like-minded people who are down to work together, from video scripting to maybe reviving the AI newsletter with me. Feel free to hit me up on Discord if you're interested!

  • @marinepower 5 months ago +542

    Llama 3 is 8B instead of 7B because of the increased vocabulary size -- Llama-3 8B has a feature dimension of 4096, so the initial embedding layer grows from 32000*4096 to 128000*4096 and the final prediction layer from 4096*32000 to 4096*128000. That's a difference of ~800M parameters.
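
    A quick sanity check of that arithmetic (a minimal sketch; the 4096 hidden size and the 32K -> 128K vocab growth are the figures quoted above):

      # Extra parameters from growing the vocab at hidden size 4096.
      # Both the input embedding and the output (unembedding) projection
      # scale with vocab size, hence the factor of 2.
      d_model = 4096
      old_vocab, new_vocab = 32_000, 128_000

      def embed_params(vocab: int, dim: int) -> int:
          return 2 * vocab * dim  # input embedding + output projection

      delta = embed_params(new_vocab, d_model) - embed_params(old_vocab, d_model)
      print(f"~{delta / 1e6:.0f}M extra parameters")  # ~786M, i.e. roughly 0.8B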

    • @samsonthomas6797 5 months ago +21

      This kind of knowledge 🤍🤍

    • @Ginto_O 5 months ago +3

      Bigger vocabulary means the tokens are longer?

    • @Ginto_O 5 months ago +5

      Never mind, he talks about it in the video, sorry

    • @lio1234234 5 months ago +1

      I much prefer this approach of a larger vocabulary, since it makes long contexts more efficient; past a certain point it scales better than the 7B would.

    • @lelouch1722 5 months ago +1

      @@Ginto_O It depends on the tokenization method, but that can be the case. In some methods like WordPiece, high-frequency words are kept as one token while low-frequency words are split into subwords. If you increase the vocab size, you allow for more tokens and hence more "full word" tokens at the same time.
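
      To see the effect concretely, here is a minimal sketch comparing two real BPE vocabularies of different sizes via tiktoken (used as stand-ins for the Llama tokenizers, which are gated downloads; exact counts vary by text):

        # A larger vocabulary keeps more words whole, so the same text
        # usually encodes to fewer tokens. GPT-2's ~50K-entry vocab and
        # the ~100K-entry cl100k_base vocab serve as stand-ins here.
        import tiktoken

        text = "Internationalization complicates tokenization granularity."
        small = tiktoken.get_encoding("gpt2")         # ~50K entries
        large = tiktoken.get_encoding("cl100k_base")  # ~100K entries

        print(len(small.encode(text)), "tokens with the ~50K vocab")
        print(len(large.encode(text)), "tokens with the ~100K vocab")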

  • @xman8908 5 months ago +444

    I usually hate Facebook, but this time they're doing a really good thing by pioneering open-source AI

    • @carkawalakhatulistiwa 5 months ago +9

      We're happy because they spent 100 million on training AI for the public

    • @nangld 5 months ago

      They are accelerating the AI arms race even further, risking it all ending with billions losing their jobs and heavy social unrest suppressed by robots.

    • @starblaiz1986 5 months ago +62

      I know right? I still can't believe we ended up in the timeline where Meta of all companies are the champions of open source! Like seriously, who went back in time and stepped on a butterfly? GIVE ME THEIR NAMES!! 😅

    • @daxtron2 5 months ago +7

      Only because their initial LLaMA models were leaked lol

    • @Vifnis 4 months ago +1

      Facebook becoming "Meta" is still kinda a weird flex, but hey, maybe it'll work out in the end, who knows

  • @6AxisSage 5 months ago +239

    This 3rd phase of Zuck has really hit his stride, amazing.

    • @kiwihuman 5 months ago +106

      As AI gets better, the zuck appears more human.

    • @atomiste4312 5 months ago +26

      @@kiwihuman I mean, if you're a robot and you want your kind to rule over humans, going open source is the fastest way towards improvement

    • @juanjesusligero391 5 months ago +22

      And this isn't even Zuck's final form.

    • @guncolony 5 months ago

      Llama 3 represents the pinnacle of civilization by the new human species, homo zuckerbergus

    • @GearForTheYear 5 months ago +5

      @@kiwihuman yes, improvements in compute are exponential. It's no mere coincidence.

  • @sukantasaha5678 5 months ago +413

    Open sourcing is better because it takes away the leverage that models like GPT4 and other closed-source ones hold over their competitors. If you can't compete, disrupt the competition.

    • @abhi36292 5 months ago +29

      Sad to see Stability AI falling apart tho

    • @luckyb8228 5 months ago +20

      Stable Diffusion is not falling apart; SD3 has hit gold in my view.
      That is the best image generation model right now.
      SD3 is accessible via API and it's gonna make a killing. I don't think we have seen the last of them. As a matter of fact, it's only the start. Stable Diffusion has the potential to give SORA a run for its money. We will see.

    • @abhi36292 5 months ago +18

      @@luckyb8228 I mean the company; didn't bycloud mention that?
      The API access could gradually become closed-source software, although the SD3 demos are amazing, I agree

    • @float32 5 months ago +6

      Or you could see it as meta dumping money to hurt the competition.

    • @RafaGmod 5 months ago +2

      If the model could train more, why would they stop? I think they may be under the expected budget and waiting for better results. In this case, open-sourcing is a good marketing strategy

  • @CorridorCrew 5 months ago +159

    Congrats on graduating and good luck on your foray into doing more YouTube. Your videos always go beyond surface-level news. It's the reason you're the only AI channel I watch, and why I watch all the videos you drop. Looking forward to seeing how your channel grows! -Niko

    • @Vvk2000 5 months ago +6

      Damn corridor commented😮

    • @bruhmoment23123 5 months ago

      your videos are ass bruh

    • @fhub29 5 months ago +3

      Hi Niko, love corridor

    • @EddieBurke 5 months ago +1

      I expected y'all to watch cus he's been making quality vids for a good bit now, but seeing a CorridorCrew comment with like 15 likes is bizarre

    • @GeorgeG-is6ov 5 months ago

      Corridor?

  • @user-qr4jf4tv2x 5 months ago +172

    if you live long enough you see yourself become a hero - reptile zuk

    • @jmvr 5 months ago +15

      You either die a villain, or you live long enough to see yourself become a hero

    • @Terenfear 5 months ago +10

      So I guess that's Zuck's redemption arc, huh.

    • @TheRhopsody 5 months ago

      Gust Jenius 🎉

  • @andrewlescelius474 5 months ago +37

    Congrats on graduating bro 🎓🎉👏 and to clarify, I'm not the "boss man," I only want to support your excellent work. Thank you for all your videos and excited to follow along your adventure 🙂

    • @finalfan321 5 months ago +3

      nice!

    • @bycloudAI 5 months ago +9

      you the goat, thank you so much for your kind words!

  • @user-ex6xc5ox3k 5 months ago +135

    How the hell is Zuck the good guy in this?

    • @jameshughes3014 5 months ago +64

      character arc of the century for sure.

    • @blakecasimir 5 months ago +27

      The lesser evil, perhaps. FB still makes bank selling customer data...

    • @nangld 5 months ago +5

      AI is generally not a good thing. It is here to replace you.

    • @naevan1 5 months ago +12

      Stop thinking in these terms for multibillionaires, please

    • @StevenAkinyemi 5 months ago +12

      @@nangld And what are you going to do about it? Cry more?

  • @samsonthomas6797 5 months ago +49

    Mistral 7B was released based on the Llama 2 architecture; I can't wait to see what Mistral will release in 2-5 months based on this new way of training models from Meta AI

    • @Slav4o911 5 months ago +12

      Llama 3 based models will absolutely beat GPT4.

    • @paul1979uk2000 5 months ago +4

      @@Slav4o911 The signs are promising that Llama 3 will beat GPT4 once the community starts fine-tuning it, especially given how big an improvement was made on Llama 2. It's likely we'll see some big improvements on the newer models, probably more so because they are bigger.

    • @Puerco-Potter 5 months ago +5

      It's impressive what Llama 3 8B can do; I was floored by how well it can comprehend text and improvise

    • @basilalias9689 4 months ago +2

      @@paul1979uk2000 They got there by fine-tuning the shit out of it. I have no idea how the community is supposed to put in that much compute.

    • @someyetiwithinternetaccess1253 3 months ago +2

      ​@@basilalias9689 you underestimate the power of random people on the internet

  • @metacob 5 months ago +27

    I did not expect that I could run an LLM that beats an older version of GPT-4 on my own PC this year.
    For reference, 70B runs at ~1 token/s on an 8-core CPU. Not "interactive", but I sometimes switch tabs when asking GPT-4 something bigger too. And 8B runs at 60 tokens/s on my RTX 4080, which is more than interactive!
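
    Those numbers line up with the usual back-of-the-envelope rule: single-stream decoding is memory-bound, so tokens/s is roughly memory bandwidth divided by the bytes of weights read per token. A rough sketch (the 50 GB/s bandwidth is an assumed desktop ballpark, not a measured figure):

      # tokens/s ~= memory_bandwidth / bytes_read_per_token, since each
      # generated token streams (roughly) all the weights through memory once.
      cpu_bandwidth_gb_s = 50               # assumption: dual-channel DDR4/DDR5
      model_gb_70b_q4 = 70e9 * 4 / 8 / 1e9  # 70B weights at 4-bit ~= 35 GB
      print(f"~{cpu_bandwidth_gb_s / model_gb_70b_q4:.1f} tokens/s")  # ~1.4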

    • @paul1979uk2000 5 months ago +7

      Yeah, it surprises me how quickly these open source models are developing in size-to-performance terms.
      You get the sense that the likes of OpenAI, Microsoft and Google are using a brute-force approach to A.I., which must cost them a fortune to run compared to the smart, nimble way the open source community works. It makes sense: with limited resources, you're going to think outside the box to get better results.
      I really do wonder how much better a 7B, 13B, 40B or 70B can get before we hit limits that need bigger models for better results. It looks like we're still a long way from that, because we keep finding better solutions at given model sizes, which improves performance. Like you said, the pace of development in just over a year is remarkable; it makes me wonder what we'll see over the next 5 or 10 years.

    • @r.k.vignesh7832 4 months ago

      How much RAM do you need for the 70B model? And what level of quantization are you using?

    • @masterneme 4 months ago

      Is it possible to run 8B on a Ryzen 4700U using the iGPU paired with 32GB of RAM?

    • @masterneme 4 months ago

      @@r.k.vignesh7832 I got a notification but your response isn't here; anyway, thanks.
      Is it possible to use the integrated GPU to make it a little bit faster?

    • @r.k.vignesh7832 4 months ago +1

      @@masterneme Damn, I don't know what happened. I said that you can run it, just probably not very fast, as I can easily run 8B models on 16GB RAM + 6GB VRAM, and that you should try it in Ollama and see how you go
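
      On the RAM question above, a rough rule of thumb (weights only; real runtimes add KV-cache and buffer overhead on top):

        # Weight memory ~= parameter count * bits per weight / 8 bytes.
        def weight_gb(params_billion: float, bits: float) -> float:
            return params_billion * bits / 8  # 1e9 params * bits/8 bytes = GB

        for bits in (16, 8, 4):
            print(f"70B at {bits}-bit: ~{weight_gb(70, bits):.0f} GB")
        # 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB of weights alone,
        # which is why a 4-bit 70B wants 40+ GB of combined RAM/VRAM.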

  • @YugKhatri-ht8kd 5 months ago +57

    Bro, you should create an LLM primer playlist, from training to inference, from A to Z.

    • @bycloudAI 5 months ago +43

      I am actually planning something similar to this, it'll be sick

  • @tannenbaumxy 5 months ago +10

    Hey, congrats on finishing university! Please do whatever you like doing most. In my opinion there are already a lot of AI-news youtubers covering what's happening in the AI world at surface level, but what I really like about your content is the way you go a bit deeper into one topic. I really like the entertaining but educational style of your videos, so keep up the great work.

  • @megachelick 5 months ago +8

    Looking forward to seeing more tech stuff from you. Congrats on graduating btw!

  • @chamba149 5 months ago +30

    Peak thumbnail

  • @Puerco-Potter 5 months ago +2

    You are the only person who talks about AI in a way that I understand and who also doesn't waste my time talking about random stuff for 10 minutes.
    I want to thank you for this. When Llama 3 was announced, I watched and read other channels and was so disappointed; you have spoiled me with your quality.

  • @beemerrox 4 months ago +1

    I think you're right to focus more on these in-depth breakdowns. It makes you stand out from all the AI influencers. You've got a new subscriber!

  • @hotlineoperator 5 months ago +40

    It's competition. Open source is a way to pull users away from the GPT-4 user base. Llama is not ready yet; it makes mistakes. So it's not yet time to make money from it; now is the time to gain position in the AI market. Open-sourcing is a clever move.

    • @MangaGamified 5 months ago +2

      I hope someone has already saved the best open-source models offline, so if they ever go behind a paywall, people can still use the versions that were free. For DMCA I guess they should upload them to a torrent, so that everyone is the host.

    • @Slav4o911 5 months ago +12

      What mistakes... have you even tested it? Llama 3 is the best open model ever released. Open models are now just a few finetunes away from flatly beating GPT4, and by a lot. Considering how much Llama 2 based models evolved, almost nudging GPT4, I have no doubt open source Llama 3 based models will beat GPT4; the difference is not even that big, and just a little uncensoring will beat GPT4. When a model is censored it's lobotomized, so it doesn't matter how good the real GPT4 is if people can't reach the unlobotomized model. Llama 3 will be unlobotomized by the community; there is no way a lobotomized model can ever beat a truly open and uncensored model with similar capabilities. It's funny how, because of a few "bad" words, the whole AI field is lobotomized and stifled, because a few human snowflakes can't take reality and can't think for themselves.

    • @elyakimlev 5 months ago

      @@Slav4o911 The problem is you can't really "unlobotomize" an LLM without decreasing its quality.
      I believe the current best uncensored model is WizardLM-2-8x22b. They released it uncensored by mistake, so it wasn't lobotomized in the first place. I use the IQ_4_S version and it's amazing.

    • @Puerco-Potter 5 months ago +8

      @@Slav4o911 OpenAI's business model seems to be throwing more compute at GPT; GPT-5 will need a small country's energy to run. Llama 3 can be run locally; that's an insane difference no matter how you look at it.

    • @hotlineoperator 5 months ago

      @@Slav4o911 Yes, it's the best, but ask the same question twice and you'll get different answers - and only one of them is correct.

  • @matthewmckinney1352 5 months ago +1

    Congratulations on graduating! 🎉That’s huge, and I have loved your videos

  • @Words-. 4 months ago

    Thanks for being one of the few AI youtubers who seem very knowledgeable about ML as a whole. You're doing a good job of condensing the information without leaving the juicy technicals out, imo

  • @AnIndieMaster 4 months ago

    I love it when you explain research papers rather than just AI news. Even this video went a bit deeper into the science of machine learning than other videos out there. So, keep up the good work.

  • @TheMcSebi 5 months ago

    Thanks for all of your great videos! Just keep us updated with the latest and greatest AI news and tutorials :)

  • @jsivonenVR 5 months ago +1

    Interesting twist of events indeed! Small yet capable models pave the way for standalone LLMs like Phi-3 🤯

  • @Gambazin 5 months ago +1

    Really looking forward to your next videos man! I know you will keep doing an amazing job! Will support patreon as soon as my startup is no longer just bleeding money 😂

  • @danishamin6018 4 months ago +1

    Bearded Zuck looked more human

  • @MaJetiGizzle 5 months ago +6

    Also, when it comes to LLMs, they're spending far less money and far less compute than OpenAI, kneecapping the competitive edge of the larger "better" models and setting themselves up to capture a significant share of the AI market later down the line, a la Microsoft making Internet Explorer free versus Netscape, who charged a bunch of money.

  • @123100ozzy 4 days ago

    I used to hate Zucc, but this attitude of making Llama open source and free is amazing. He deserves an award.

  • @paul1979uk2000 5 months ago +1

    I suspect a big reason for them to release it open source is, for one, that the community will help improve the model a lot, which over the long run saves Meta a fortune. And two, it probably levels the playing field: A.I. is likely going to be important in so many areas that it would be dangerous to let so few governments and corporations control it, so open-sourcing blows that open and puts everyone on the same playing field.
    If we ended up with one or two closed models dominating the market, that would give those corporations, and probably the governments of their countries, a massive advantage over everyone else. It's a given that they would use the uncensored version of the model while everyone else gets the restricted one. Because of all this, open source is very important for A.I. models.
    Open models also have the advantage of lowering costs for consumers and giving them far more control and privacy when running locally.

  • @naptimusnapolyus1227 5 months ago +3

    Zuck & Musk are doing some good things now.

  • @virtualalias 5 months ago +1

    Resource requirements are so high on the big models that you can effectively be open source and closed at the same time. Open-sourcing GPT4, for instance, wouldn't halt OpenAI's revenue stream.

  • @what-un4yq 5 months ago +1

    Actually, it makes perfect sense to start with open sourcing. As clearly shown, AI is in its infancy and we are highly ignorant of how to properly train these models. Later models can always be closed source, but this is a crucial period of information gathering and experimentation. So it's not only reasonable but actually rather smart.

  • @isbestlizard 5 months ago +5

    Heck, AI is such a vibrant and fast-evolving industry that this is like trying to surf a 100 ft wave and stay on top. Data curators! God, that's like something from a sci-fi novel five years ago... data curators... "we collate and sell high-quality training data"...

  • @niklase5901 2 months ago

    Glad I found this channel! This is high quality stuff!

  • @Bizlesses 5 months ago +1

    The guy just finished university... And here I am, having finished my Bachelor's in Software Engineering last year by cheating through all the exams, watching this video and not understanding how half the things discussed work.
    That is to say: you've made it, OP! Wish you luck with whatever endeavor you go for next.
    And to everyone else - make sure you're actually interested in the subject enough before applying! 🤣

  • @cdkw2 5 months ago +3

    Full-time YouTube is a good idea, but remember to keep a backup plan

  • @AndersonPEM 4 months ago

    Thank you for your videos. You're very instructive and clear in your assessments.
    Keep at it :)

  • @tawfikkoptan5781 5 months ago

    Video aside (amazing video btw), the thumbnail is absolutely diabolical, I cannot lie.

  • @ayushmanbt 4 months ago

    Loved the video... guilty confession here: I saw the thumbnail and thought it was a Fireship video

  • @KeinNiemand 4 months ago +1

    Now we just need to wait for the uncensored finetunes

  • @xyers9757 5 months ago +2

    Congrats on graduating man!

  • @michmach74 5 months ago +3

    If the whole YT thing doesn't work out, be an ML researcher lol
    In all sincerity, I like it when you go deeper into the papers and research. Most AI YTers either focus on AI news, test-running the tools, or just high-level think pieces. Those are nice and all, but stuff like this is cool too.
    I think Yannic Kilcher does paper deep dives too? No offense to the man, his videos are just too long. And probably too technical. Whereas you balance the technical stuff that I'm curious about without making it, well, too technical.

  • @isbestlizard 5 months ago +3

    This llama definitely not thrown off its groove

  • @marhensa 5 months ago

    I tried Llama-3 on the NVIDIA website and it's very capable at helping with my coding, maybe on par with Claude's Sonnet level.

  • @mackroscopik 3 months ago

    Your vids are both very informative and entertaining.

  • @AfifFarhati 5 months ago

    YES, continuing down the research-analysis path is the more interesting option imo!

  • @HorizonIn-Finite 5 months ago

    LLAMA-3: If ChatGPT gained all his fing-, GPUs, he might cause me a lil trouble.
    Grok-1.5: But would you lose?
    LLAMA-3: Nah, I'd win.

  • @jean-michelgilbert8136 3 months ago

    Llama 3 might beat Mixtral models on synthetic benchmarks, but I still get more useful answers from Mixtral 8x7B and 8x22B. Latest Ollama, Open WebUI, no custom system prompt, all 4-bit quants. Mixtral 8x22B and Llama 3 70B are slow as hell.

  • @AlexanderBukh 5 months ago

    Good vid bruv 🎉🎉🎉 I think you're gonna have success with this.

  • @VorpalForceField 3 months ago

    Great info .. Thank You for sharing .. Cheers :)

  • @yalmeme 5 months ago +1

    Hi bro, thanks for the video, you're doing a great job!
    Just wanted to ask which software you used to create/animate your avatar at the end of the video? It's generally called a PNG-tuber, if I understand correctly, but which one exactly do you use?

  • @jawadmansoor6064 5 months ago

    2:21 What I'm interested in (and what most developers need, though they may not realize it) is the MMLU and HumanEval scores (unbiased and uncontaminated only), because they reflect the ability to do things that until now (before Llama3-8B) only Mixtral could do, and Mixtral is huge compared to this (no need to mention bigger models; obviously they can do it too, they're just too big). So yeah, I love this 8B model. I'm sure the next 3B or even 1B models will be as great as this (Mark Zuck promised mobile-based models in 2025). So I'm really enthused and really love what Meta (not Facebook) is finally doing.

    • @Slav4o911 5 months ago +1

      I think 8B models are also not far from running on future mobile phones. It would be neat to have a model that can outperform GPT4 running locally on your smartphone. That reality is actually not very far away, unless some dumb politician bans open models.

  • @rapidrabbit11485 4 months ago +1

    Now this is some armchair-quarterback-level stuff, but I really don't feel that very-large-parameter models are the solution to AI accuracy. I think you'll soon see a race to the bottom for who can make the smallest well-performing LLM that fits in a smartphone or tablet; the largest use of LLMs in the future will be on-device. I'm really surprised that you can move from 8 billion parameters to over 400 billion and not see anywhere near the return in performance or reasoning. It will be interesting to see what the future holds, and just as interesting to understand the limitations of where we take research going forward. Apple has a very different take on this; I think they'll be showing it off shortly.

  • @TheSpace81 5 months ago

    The thumbnail is peak fiction.

  • @controli5123 4 months ago

    At this Llama pace, 1B models are going to be everywhere and GPT-4 level will be the minimum

  • @BhabaranjanPanigrahi 4 months ago

    7B to 8B because the vocabulary size is much bigger for 3. I've heard there are also some GPU-related advantages.

  • @mayatrash 5 months ago

    Meta is open sourcing it because they learned from Microsoft and VS Code. They will sneak into the middle between the user and the developer, and in the end they can probably monetize it somehow (think Copilot and VS Code).

  • @Lubossxd 5 months ago

    Good luck with your channel. I think you can combine the popular with the studious; find your own mix and popularize it, not the other way around. o7

  • @H0mework 5 months ago

    Hope to see more of you. :)

  • @ColorfullHD 1 month ago

    @bycloud yo where is that graph at 3:15 from? What specific NVIDIA presentation? thanks!!!

  • @UnchartedWorlds 4 months ago

    Zuck looks more human than ever! 8:50

  • @canekpantera14 5 months ago

    Congratulations on graduating!!

  • @Gregorius421 4 months ago

    Key takeaway: Zuck with beard looks more human.

  • @jameshughes3014 5 months ago +1

    more bycloudai videos would be awesome.

  • @inthevibedev 5 months ago +5

    Can't believe you forced me to click on the video with this thumbnail LMAO

  • @nTu4Ka 5 months ago +1

    Besides anything else, making Llama-3 open source will put pressure on OpenAI (take money away from them).
    Lizards are cunning.

  • @bananalyzer-l5y 5 months ago

    Open sourcing it makes it better in the long run.

  • @MarcAyouni 5 months ago

    Benchmarks are one thing, but I found it gives more generic answers, even ignoring specifics in the question. So there is definitely more blur or averaging in it with fewer parameters.

  • @AaronALAI 5 months ago

    I built a 7-GPU rig that lets me run this bad boy at full FP16... frick, it's amazing!

  • @RTBRuhan 4 months ago

    Hmm... watching this breakdown as a common user of the free version of ChatGPT 3.5... I didn't understand anything, but I still enjoyed the content. Thanks anyway

  • @rezasayadi8160 3 months ago

    The scary thing about this is that it's Meta. They stole our data multiple times and they are likely to do it again

  • @OscarTheStrategist 4 months ago

    OpenAI: We gonna steamroll y'all
    Giga-Zuck: I pity the fool!
    Glad to see at least one billionaire take a real risk. That could have been $10Bil+ wasted on a "tiny" model, but they pushed the boundaries, and the longer-token strategy is going to be the way. Exciting AF!

  • @hstrinzel 5 months ago

    Since it's coming from Zuck, I wonder what its answer would be to "How does Socialism/Communism compare to Market Capitalism in terms of quality of life of the citizens?"
    And I hope that the answer is NOT "They are pretty much the same, both have their advantages." And when asked why there are MILLIONS of refugees from Socialism, the answer CANNOT be "well, there are some people who have fled from Capitalism also." Could someone let me know what they find?

  • @نشامي 4 months ago

    How is GPT-4 200x bigger than Llama-3 8B? That would make it 1,600 billion parameters, or is there something I'm missing??

  • @syan224 4 months ago

    Thank you for your content

  • @streamdungeon5166 4 months ago

    So it seems now we live in a world where Facebook is making things that add value to this world. Wow, what a 180° change!

  • @ichiroramenbowls8559 2 months ago

    When God comes back, he's gonna close the gates.

  • @GamingWithBlitzThunder 5 months ago

    NVIDIA x Meta. Not something you see every day; NVIDIA GPUs will now be heavily focused on Llama, with closed-source AI far in the distance.

  • @finalfan321 5 months ago

    You are doing great, keep it up.

  • @danial_amini 5 months ago

    The beard had me 😂

  • @QuantenMagier 4 months ago

    These statistics are fine, but in actual tests I didn't find the Llama3 8B model to be a major leap over previous models; it is quite bad at roleplay, tends to repeat itself and still can't do proper multitasking.

  • @TobiMetalsFab 5 months ago

    The most shocking thing about this video is Zuck with a beard. This is my work account, so I'll just leave it at that.

  • @gingeral253 4 months ago

    Get me into the Llama club

  • @felixjohnson3874 5 months ago +1

    The issue AI is having is the same issue most software has; we're just throwing more compute at it instead of making it better.
    I mean, transformer models are fundamentally very limited at an architectural level, so there are limits here, but for the most part no one has really spent the time making them learn or operate better; we've just thrown more and more data and compute at them.
    That *_does_* work, but it reaches a limit. At some point the requirements grow exponentially for linear or even plateauing returns. In contrast, architectural improvements cost way more upfront (note: "cost" does not necessarily mean cash; it can be as simple as the mental resources to spend time thinking about a complex problem) with near-zero returns at trivial scales, but *_absurd_* returns relative to unoptimized comparisons at scale.
    For an example of this, Tantan released a video about binary-optimizing the rendering topology of his game (it sounds advanced, and it sorta is, but trust me, it's nowhere near the level of jargon that description implies) and, while his old renderer showed a performance difference between his old CPU and his friend's stronger CPU, after optimizing it down to bitwise operations the performance was nearly identical. Further optimizations *_did_* eke out a bit more of a perf difference, but it was still only ~50%. The point is that bitwise ops are so bloody fast that if you optimize to that point your CPU stops mattering. Strong CPUs, weak CPUs, old CPUs, new CPUs, it just doesn't matter, because you're already so damn optimized that throwing more compute at it can't be much more performant. (* it CAN, but the returns are tiny, especially compared to otherwise under-optimized equivalents)
    That isn't saying "okay, so optimizing is bad and leads to poor resource utilization"; it's saying "we've overbuilt CPUs by this much unnecessarily *_because_* we haven't had a culture of good software optimization". For another video, there is the in/famous "Clean Code, Horrible Performance", which explores a similar idea.
    The overall point is that we only hit performance walls with respect to our current operating assumptions. If we challenge those assumptions and change our approach, we can see (and indeed often find) insane, bordering on incomputable, performance boosts. Sure, you could put a Threadripper in there with a 500W power supply to do the job... or you could design an analog circuit that does it for a watt. Performance walls, in lieu of some seriously first-principles basis for asserting otherwise, can only be thought of in reference to a set of assumptions. If you're hitting a performance wall, it probably means you should look at those assumptions. Now, not all assumptions *_can_* be challenged; for instance, any game you make is going to be limited to running on Windows with conventional CPU and GPU architectures. But the more assumptions you can challenge, the more likely you are to find the real bottleneck, and AI just hasn't been challenging many of its assumptions lately. Instead they've favoured just throwing more compute at it, and there are only so many GPUs you can throw before your arm gets tired and you run out of silicon.

  • @Ravisidharthan 5 months ago

    In AI's current phase, whoever gets the advances and enough resources may lead and control the world, maybe through upcoming multimodals or through so-called AGI.
    But Zuck blocked ClosedAI's path to being the one via the Llama updates, just like Elon did: they lowered the stakes and the cost, which drastically cuts into ClosedAI's margins in many ways...
    It's a well-planned game, and we, the community, benefit from it a lot...
    And I thank Zuck for this; at least he did what he did.

  • @c0nsumption 5 months ago +1

    Thanks dude. How long before we see VLMs built on this? Haha

    • @sjcsscjios4112 5 months ago +2

      Probably Llama 4, and they'll likely use the JEPA architecture, which will make it insane

  • @ingusmant 5 months ago +2

    Do not go full time on this right out of college, not in this field; get a job and do this on the side. You don't want to be the 30-year-old failed-youtuber meme whose entire resume is making content. With some XP under your belt you can quit, and if you ever have to come back from this gig to a boring office, you can always show you quit your previous job because YouTube paid more, which is more understandable than being a guy straight out of college who has never had a job besides making videos on the interwebs.

  • @tsilikitrikis 5 months ago

    Mark competing with OpenAI through open source

  • @kulikalov 5 months ago

    good job! Keep it going!

  • @wut3v3r77 5 months ago

    Interested in the video scripting part you mentioned during the life update section. Where can I reach out to you?

  • @danielchoritz1903 5 months ago

    10:23 is a her. The anime is called "Suzumiya Haruhi no Yūutsu". Watch it...

  • @thebrownfrog 5 months ago

    It will sound sus coming from a guy, but your voice is nice to listen to.
    Other "qualities" (like the editing, the mamba edit, etc.) are good too

  • @lukacolic4193 5 months ago

    Congratz on graduating

  • @drlordbasil 5 months ago

    So far I'm noting that Llama3, if prompted properly, does better than any other model on basic tasks that require long-term reasoning.

    • @drlordbasil 5 months ago

      NOTE: using ollama embedding with RAG
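
      For anyone curious what that setup looks like, a minimal sketch (the Ollama client calls, response shapes, and model names are assumptions based on its Python API; adjust to whatever you have pulled locally):

        # Tiny RAG loop: embed docs, retrieve the most similar one by
        # cosine similarity, and prepend it to the chat prompt.
        import numpy as np
        import ollama

        docs = ["Llama 3 was trained on over 15T tokens.",
                "Grok-1 is a 314B-parameter mixture-of-experts model."]

        def embed(text: str) -> np.ndarray:
            out = ollama.embeddings(model="mxbai-embed-large", prompt=text)
            return np.array(out["embedding"])

        doc_vecs = [embed(d) for d in docs]

        def retrieve(query: str) -> str:
            q = embed(query)
            sims = [float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v))
                    for v in doc_vecs]
            return docs[int(np.argmax(sims))]

        question = "How much data was Llama 3 trained on?"
        reply = ollama.chat(model="llama3", messages=[{
            "role": "user",
            "content": f"Context: {retrieve(question)}\n\nQuestion: {question}",
        }])
        print(reply["message"]["content"])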

  • @Serizon_ 4 months ago

    Aren't Mistral and some other AI whose name starts with P (I forgot it) even more impressive than Llama? (I think the name was Phi-2, though I might be wrong)

  • @ShaneSemler 5 months ago

    Meta open sourced it? I'm genuinely surprised.

  • @bacemtayeb326 4 months ago

    How can I test a model against GPT-4 (and other LLMs)? What is the name of the benchmark to use?

  • @zman-1x1 5 months ago

    Well, I completed university too. Time to experiment with LLMs.

  • @masterworm2880 4 months ago +1

    This aged like milk.

    • @GrimeReaperMan 4 months ago

      I've tried llama 3 for a week now, it sucks.

  • @NLPprompter 5 months ago +1

    xx Trillion training data + Censored LLM =
    user : teach me about A
    LLM :
    - Find A
    - A found
    - Breakdown A
    - A = A1, A2, A3, A4, A5
    - cross section A with possible nearly A in context, Find context
    - context found = B, C, F
    - mixing A with B, C, F
    - guardrail triggered
    - F and C possible dangerous or toxic. avoiding A3, C and F
    - mixing A with B.
    result: missing context
    missing context = missing information = censorship in favor of a certain ideology or political view, or even of a certain business.

  • @DonPatro92 4 months ago

    Tried Llama 3 Instruct in LM Studio, but when I ask it something it doesn't stop generating; it just keeps going. Is there any way to fix that?