Meta Open Sources Llama 3.1 ― Should Llama 4, 5, and 6 be Open Source Too?

  • Published: 8 Sep 2024

Comments • 62

  • @jamiethomas4079 · 1 month ago · +49

    I prefer dangerous freedom over peaceful slavery.

    • @quanchi6972 · 1 month ago · +2

      no preservation instinct

    • @xyhmo · 1 month ago

      I don't believe peaceful and secure slavery exists. In North Korea the authorities can grab anyone they want and torture them for fun. Maybe they generally don't, but they certainly could. A slave has no rights and is treated however the masters want. If it's peaceful, it's only at the mercy of the masters and can change at any second.

    • @CYI3ERPUNK · 9 days ago

      1111% this, this is the way of nature

  • @Steve-xh3by · 1 month ago · +20

    There is no conceivable argument that democratization of power is more dangerous than concentration of power. I find it alarming that those who normally believe in democratization of power suddenly turn into authoritarians when discussing things that are VERY powerful. If it is too dangerous to democratize, then it is too dangerous to centralize. I'd much rather live in a world where everyone had access to extremely powerful tools than in a world where only the powerful do. If AI is so dangerous that it can't be democratized, it shouldn't be pursued, period.

    • @priestesslucy3299 · 1 month ago · +3

      @@Steve-xh3by 1,000% this

    • @andrasbiro3007 · 1 month ago · +2

      Would you want everyone to have access to nukes?
      The question is a bit more nuanced; both ways have serious pros and cons.

    • @eSKAone- · 1 month ago

      Everyone with access to nukes? @@priestesslucy3299

    • @DaveShap · 1 month ago · +3

      I don't know why this comment doesn't have more upvotes.

    • @dg-ov4cf · 1 month ago

      Wanna think that through for 5 seconds? Should everyone be able to have an F-35 fighter jet, or an EMP, then?

  • @DeathHeadSoup · 1 month ago · +3

    “God made man, but Colt made them equal”. The Colt revolver only made people physically equal. AI is the Colt revolver for the mind.

    • @CYI3ERPUNK · 9 days ago

      And exactly why/how did Colt make all men equal? Because anyone and everyone can carry and use the pistol, i.e. DECENTRALIZATION. 100% agreed: if the AI is not in everyone's hands equally, then it will 100% be used against the masses by the minority elites in power, as it already is being used at the moment, of course.

  • @vi6ddarkking · 1 month ago · +22

    Short answer: Yes.
    Long answer: YES. Open-source LLMs are uncensorable and therefore objectively better.

  • @philip3283 · 1 month ago · +6

    A million times yes. With what's coming, anyone without free access to a frontier model will be at the whims of the big companies: I'd hate few things more than being at the mercy of Mr. Altman.

  • @priestesslucy3299 · 1 month ago · +9

    My answer: All Open Source All the Way.
    If we do get a major breakthrough, the last thing we would want is for it to be privately owned.

  • @DiceDecides · 1 month ago · +9

    Everyone's worried that the average person will know how to make nukes if advanced models are open source, but guess what: it's a complicated process even if you know how to do it, so open models won't automatically make nukes accessible to everyone.

    • @phen-themoogle7651 · 1 month ago · +2

      Yeah. Getting the materials can be very hard. It's only scary when LLMs get smart enough to find alternate, much easier ways we never thought of to make biological weapons. I don't know how many years that will take, if ever, though.

  • @JamesRogersProgrammer · 1 month ago · +22

    It should be 100% open source because it was trained on everyone else's data, so they shouldn't be allowed to own all that data moving forward.

    • @Custodian123 · 1 month ago

      Markets don't care.

    • @Gael_AG · 1 month ago · +1

      @@Custodian123 People influence markets, so … it's an assumption you have to make as a responsible person.

  • @adolphgracius9996 · 1 month ago · +1

    Open source is good because it is how newbies train and learn new things. These newbies could eventually end up working for companies anyway, so the better they are, the more the company can get out of them.

  • @7TheWhiteWolf · 1 month ago

    Zuckerberg already said he’s open sourcing the first AGI model.

  • @I-Dophler · 1 month ago

    Great analysis on Llama 3.1. Open sourcing it could indeed enhance research opportunities and safety mechanisms by allowing a broader range of experts to examine and test it. This approach might prevent potential misuse and better address emergent AI capabilities. However, it's important to keep an eye on Meta’s future plans and the ethical implications of their strategies. Looking forward to seeing how this impacts the AI landscape.

  • @aaroncrandal · 1 month ago

    The fakest element of the Back to the Future plot line was the weapons-grade plutonium being carted around in a box without detection.

  • @andreinikiforov2671 · 1 month ago

    I can't run Llama 3.1 locally because it's too large. Most firms won't use it either, since it's more expensive to operate than models like GPT-4 mini or Gemma. So, who benefits from increasingly larger and more capable open-sourced models? Recently, OpenAI shut down a large Kremlin influence operation using its API to run a network of fake news sites. Now, such operations could run Llama 3.1 locally for similar or even more advanced campaigns. It seems like the only beneficiaries of these massive open-sourced models (such as Llama 4, 5, and 6) will be malevolent state actors who can't develop their own foundational models. Maybe some academic researchers could benefit, but they usually get early access to more advanced private models anyway.

  • @ashtwenty12 · 1 month ago

    For the Pathfinders community? I assume this is mostly US hours, possibly in the evening? Here in the UK, I'm not sure how it would work.

  • @CronoPVP · 1 month ago

    If you think of this as something dangerous like a weapon, not open-sourcing it gives its owner unlimited power with no counterweight.
    Would you rather have everyone armed, or only one player armed and the rest of us hostages to him?
    If not even our workforce is of any value, why would this player care about any of our interests?

  • @andrasbiro3007 · 1 month ago

    I'm in favor of open source, from the standpoint of safety.
    It's a bit counterintuitive, but you want AI to cause harm as soon as possible. That's because we only have a vague idea of what that harm would look like, and many people don't even believe it could happen, so prevention and mitigation are very hard. We'll likely have to learn from the first few incidents, and it's better to have those when AI is just smart enough to cause trouble, but not smart enough to be an existential threat.
    It's like with nukes. They were first used when only two existed, and making them was very slow and expensive. This way the damage was limited, but bad enough to create a strong taboo against their use. Imagine if Hiroshima and Nagasaki hadn't happened, and nukes were first used in the next big war, when there were hundreds or thousands of them.

  • @Ideagineer · 1 month ago

    I don't consider any model not released under an MIT or Apache 2.0 license to be "open source", but it's such a nerd concept it might as well be taboo.

  • @neomatrix2669 · 1 month ago

    Regarding your earlier prediction of AGI coming next month: it's possible that closed laboratories are already building the first prototype versions of human-level general reasoning and just haven't revealed them publicly yet. But my own conclusion is that the first AGI will indeed emerge next month.

    • @7TheWhiteWolf · 1 month ago

      We have no possible way of validating that; it's a non-falsifiable prediction.

  • @CrypticConsole · 1 month ago

    I hope they focus more on either larger 70B+ models or on better architectural advancements, because I feel like Llama 3.1 8B can already do everything I would want from an on-device, text-only model, and it already has 128k context. For example, smart home interaction with tool use for turning devices on and off is something the 8B is easily capable of, and a new model will not help much there.
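
    Below is a minimal sketch of that kind of local tool use, assuming the `ollama` Python client's tool-calling interface and a locally pulled llama3.1:8b model; the set_device_power helper and the device name are hypothetical stand-ins for a real smart-home backend.

    # Minimal sketch: local tool use with Llama 3.1 8B (assumes the `ollama`
    # Python client with tool calling; pull the model first with
    # `ollama pull llama3.1:8b`). The device helper below is hypothetical.
    import ollama

    # Hypothetical smart-home backend; a real setup would call your hub's API here.
    def set_device_power(name: str, on: bool) -> str:
        return f"{name} switched {'on' if on else 'off'}"

    tools = [{
        "type": "function",
        "function": {
            "name": "set_device_power",
            "description": "Turn a named smart home device on or off",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "Device name, e.g. 'bedroom light'"},
                    "on": {"type": "boolean", "description": "true to turn on, false to turn off"},
                },
                "required": ["name", "on"],
            },
        },
    }]

    response = ollama.chat(
        model="llama3.1:8b",
        messages=[{"role": "user", "content": "Turn off the bedroom light, please."}],
        tools=tools,
    )

    # Execute whichever tool calls the model decided to make.
    for call in response["message"].get("tool_calls") or []:
        if call["function"]["name"] == "set_device_power":
            print(set_device_power(**call["function"]["arguments"]))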

  • @OnigoroshiZero · 1 month ago

    Open source is the only way forward.
    In a few years, with models being more optimized and people having much better hardware, no one will even care about the closed-source models, which will be heavily censored, regulated, and far less safe for personal data.

  • @robxsiq7744 · 1 month ago · +2

    Question: should only corporations and governments have power in the future? Ugh. Do you want a cyberpunk future? Because closed source is how you get a cyberpunk future!

  • @dustymoses729 · 1 month ago

    Open source vs closed source; either way, we the people should be the ones to decide!

  • @GaryBernstein · 1 month ago · +1

    Free the AI. It's evolution.

  • @ryzikx · 1 month ago

    Let's go.

  • @a.thales7641 · 1 month ago

    Llama 4 should, yes. I'm not sure there will be a 5. Maybe there will be a new name once it goes multimodal.

  • @HUEHUEUHEPony · 1 month ago

    How is this even a question? If you want closed source, go to OpenAI and the like.

  • @quanchi6972 · 1 month ago

    Did you delete my comment because you disagree with me? These people, literally willing to walk off a cliff like lemmings for the next shiny new toy, have done zero thinking about what threats even exist.

    • @DaveShap · 1 month ago

      No idea what you're talking about.

    • @quanchi6972 · 1 month ago

      @@DaveShap Darn, well, I had written a very thoughtful response but it apparently got caught by the critical-thinking filters. Can I suggest you take some time to speak concretely to your audience about some of the object-level dangers we face as OS gets stronger? It's astounding how few people in this comments section have taken the time to think through their OS zealotry beyond first principles and understand where it really leads down the road. The prevailing attitude here is almost "this is my toy and you can't take it away from me".

  • @picklepopsickle · 1 month ago

    I wish

  • @user-gt2wj9ss6b · 1 month ago

    I have a strong impression that your channel is degrading into a source of pointless questions and obvious information. In the beginning, when I subscribed, there was even programming, attempts to build cognitive architectures and solve algorithmic challenges... Now, it seems, you are running out of ideas. It would be nice to return to those roots. I thought that was your mission? At least that's the message you were sending.
    Otherwise, what next? "Stunning blabla shocks the entire industry" just to show YouTube ads?
    P.S. But I see you have created your own course on AI. That is the point of the video, not the obvious question.

  • @mehdihassan8316 · 1 month ago

    first