What we have to defend at all costs is the material-science and hardware side of this. The software is fair game, but there should still be some standards in play: propagandized beliefs and exploited tech like rogue terminator bots with built-in plausible deniability are the greatest threats of all. We transferred wealth and industrialized the world, and it didn't bring utopia, as the evidence around us proves; those states can still survive by producing conventional supply chains to trade for whatever resources they don't have. This third frontier demands responsibility unlike anything we've ever known. If a state actor censors its own people and refuses to grant the free flow of information, education, and access to tech, even if it's an American ally, we must be very careful with the hardware itself. Not all beliefs are equal, and not all states are willing to adjust in time to stay out of the way of a mature, successful society, which gets harder each time we take yesterday's complexity and bake it into our tech and material sciences. Each time that happens, it narrows what a free and responsible society can be.
This 405B "open source" model is not really open source, since a normal consumer cannot run it. I understand open source as something everybody can use, not just a few corporate companies. So in the end, Mark is pushing against closed-source companies as a strategic move.
Bla bla bla... all this nonsense. Open-source free AI will be better than a Google search, but in the end the paid versions will have no competition. Rather than trying to predict what the future will be, compare Zuck's AI to ChatGPT and Grok.
It's what David Shapiro labeled a terminal race condition: it's not possible to act safely when competitors aren't also doing so. Those that are careful will be left behind, and the ones rushing it are playing Russian roulette with a doomsday device aimed at the whole world. There's a chance that if you act fast you might find a solution before the hammer hits the doomsday bullet, but the odds aren't in our favor...
@@LukasPetry Lemme try to figure out how to rewrite it in a safe way: It's what David Shapiro calls terminal r4ce condition; it's not possible to act safely when competitors aren't also doing so; those that are careful will be left behind, and the ones rushing it are playing rvssian-r0ull3tt3 with a doomsday device aimed at the whole world; there's a chance that if you act fast you might find a solution before the hammer hits the doomsday bvll3t, but odds aren't in our favor...
Then why did you click on the video, and why did you keep watching? So that you could write an angry comment and make Matt feel bad about himself? If you don't like commentary on the news and would rather search for it and read it yourself, then what are you even doing here in the first place? I used this video as a radio / podcast while working around the house. I also appreciate Matt's take on the topic. If you don't care, then why were you even subscribed in the first place?
@@RondorOne hi my friend. Been a sub until writing this comment. After leaving my honest opinion stopped watching and unsubbed. Clear enough for you ? Need more explanations? Im here for you
@@paolomoscatelli No, I think it's clear. I would just recommend not even watching videos where it's obvious it's just a YouTuber's commentary on some news or press release. Most of us who subscribe watch these YouTubers (or just listen to them) because we don't have time to read AI news every day. Listening is easier, because you can do it while driving or working around the house. For us it has value, because we don't have time to read it all ourselves and curate it, and we're also interested in some commentary and context from someone who is reading all the news. I can understand why this might not be good enough for someone like you, but then you should avoid videos like this and click only on benchmarks or something like that.
An open-source neural network similar to Tor? What if we made an open-source neural network that worked as a swarm, in the spirit of the Tor network, torrents, or crypto, so that all participating computers were part of its "consciousness" instead of a handful of servers?
Do you agree? Is open-source AI the right path forward? If not, why?
In general, knowledge should be available to all people. All human decisions should be based upon using knowledge to make each decision.
Sure, but it's a Trojan horse, I assume. Or it could just be a marketing stunt.
But all that doesn't matter. Politicians are preparing to shut down the free market with regulation that only huge companies can comply with. So, like you said, the major platform companies are the neo-feudalists and we are the peasants. Every user, B2B or end customer, will pay the tithe.
Yes, local OSS is the way forward. I've been using Linux forever and, for example, was never affected by incidents in the Microsoft ecosystem like SolarWinds or CrowdStrike.
Also, hey @Matthew Berman, where does this new Llama 3.1 fit in your AI stack for local OSS development? Does your AI stack still include RouteLLM, per "achieves 90% GPT4o Quality AND 80% CHEAPER"?
Zuck, the hero we need! lol
As Yann LeCun often says, LLMs' reasoning abilities are tightly constrained by their datasets... These are not easily going to become the runaway AI gods that people fear.
Meanwhile... They may well be quite important to the future.
Consider things like:
a) Almost 30% of Americans only read at a 3rd grade level.
b) Much of the developed world is headed for a demographic crisis.
People are going to need help.
Definitely. Sharing intelligence is like a parent giving ideas that help their children grow, or the sun radiating warmth and light for all life. Aligning with the principles inherent in the expansion of nature is the way forward. Limiting this out of fear is just small-minded thinking. Open sauce all the way.
I've been running Llama 3.1 8B on my 36 GB M3 Max MacBook Pro. It's a game changer. It consistently follows the system message and handles JSON data extraction. I was never able to get the other tiny models like Phi, Mistral, Orca, Dolphin, etc. to do this sort of work.
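For anyone curious, here is roughly what that kind of system-message-driven JSON extraction looks like. This is a minimal sketch, assuming the Ollama Python client and a locally pulled llama3.1:8b model; the prompt and field names are placeholders, not anything from the video.

```python
# Minimal local JSON-extraction sketch (assumes: pip install ollama, ollama pull llama3.1:8b).
import json
import ollama

system_msg = (
    "You are a data-extraction assistant. "
    "Reply with valid JSON only, using the keys: name, email, company."
)
user_msg = "Jane Doe from Acme Corp wrote in from jane.doe@acme.example about a refund."

response = ollama.chat(
    model="llama3.1:8b",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
    format="json",  # ask Ollama to constrain the reply to valid JSON
)

# The reply content should now be a JSON string we can parse directly.
record = json.loads(response["message"]["content"])
print(record)
```

Nothing leaves the machine here, which is the point the comment is making: the model, the prompt, and the extracted data all stay local.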
Been using Linux over 25 years. 15 years as my desktop. It's great to be free.
Right there with you.
I'm about the same. I feel like the desktop and desktop apps only really got stable about 15 years ago.
It was probably mostly GPU driver issues in the early days.
Exactly the same here... when I hear Windows, I just hear Steam games. lol
Linux is not a desktop OS.
Concerns about Zuckerberg's track record with social media are valid. The Cambridge Analytica scandal highlighted the potential for misuse of data and technology. As AI becomes more powerful, the stakes are even higher. Ensuring robust ethical guidelines, regulatory oversight, and continuous community engagement will be crucial to harnessing the benefits of open-source AI while minimizing risks.
Facebook was free to use because I was the product and not the customer. That could be why it wasn't the experience I wanted. I wonder if this will just be the same go around.
Like using your prompts and follow-up prompts (and the follow-ups to those) as data to sell, or mostly to train AI 😂. This time it's something else.
But that's not possible with open source. You download it to your computer and everything is offline. None of your data is sent to Meta. You can see (and edit) the code. You can see everything it's doing. So it's a very different situation from Facebook (which I don't use; I am not defending it).
I really appreciate this comment - so much blind love for meta these days
@RondorOne They want to build products on top of their models; it's the products (and possibly platforms) that will be built on top that I'm worried about.
@@jaredowen7321 Sure, I understand your worries. But nobody is forcing you to use their products built on top of this AI. I don't use Facebook, Instagram, WhatsApp, Twitter, TikTok, or any other social media outside of YouTube. I always wonder why people have this unstoppable urge to show everyone what they had for lunch and their deepest personal things, and then are shocked that someone "stole" this data from them. Easy fix: don't post it, and nobody will steal anything from you.
I think the real game changer with open source models is that eventually, they can be integrated into other tools, apps, games and other software far better than what closed source online services can.
Throw in that being open source allows developers to custom-build an A.I. model for their specific needs in an app or game, and it's easy to see the advantage that can offer, not to mention that it would be far more tightly integrated into the software than online models could ever be.
On top of this, there's the trust issue. As A.I. becomes better and more capable, I suspect more of us will want to use it day to day for almost anything we want, and that could become a major privacy and security concern for a lot of us with online models, with data being sent back and forth. That concern is only going to get worse as A.I. models gain long-term memory that can adapt to your habits at a personal level; there's no way I can see the majority of us wanting to use an online model as A.I. really matures. And if that isn't enough, there's robotics: eventually we could have robots around the house and out in society, and again, that's too big of a privacy and security concern to even think about using the online models.
About a year ago, I thought the online models were the way to go because of the computing power needed; basically, I thought it would take far longer to get this kind of quality running locally on our own computers. But the pace of development of these smaller open-source models has been a lot faster than I expected, computing hardware is only going to get more powerful, and small A.I. models will keep getting better. Because of all this, I became convinced about a year ago that the future of A.I. is open source, run locally on your own hardware, and because of that I don't bother watching videos or reading articles on closed-source online models any more, unless a game changer happens on one of them.
Major tech companies need a major overhaul of the CEOs
@@Darkt0mb5 Don’t take Satya away plz
Why
Apple died with Steve Jobs AFAIC!
Fs, zero swag since
Regards, Matthew. 👍 Excellent channel. Thank you for your thoughtful analysis of Zuck's post.
This was a great vid. Well done and thank you.
Just like FB is free, because you are the product in the grand scheme of things.
How can you be the product when it's an offline open-source model? You cannot; it's just impossible.
@@RondorOne The same way Oculus Rift was open source until it was developed, and then Meta bought it, licensed it all, and made it proprietary to their platform. I had the beta version of the Oculus and used it for Assetto Corsa many years before FB (now Meta) purchased it. Most of the developers were no-name, dedicated sim racers. That's the truth. Same playbook with AI development. But it definitely sounds nice.
@@immunity4soul Which has no relation to something you get as open source now and you can use it forever as it is. Nobody is promising you that in the future Meta will spend even more billions to give you even more powerful open source AI for free. Maybe they will, maybe they won't. But the one you are getting today is free and open and you can use it however you want forever.
I'm fascinated by the questions around increasing costs of training models versus return on making them freely available.
14:30 I think efficiency is a really big issue. Everybody is talking about how the amount of compute, and the energy to run it, is rising by an order of magnitude every year or even faster, and we need to get that under control, because I don't think we're anywhere near technologically advanced enough to start building a Dyson swarm and beaming energy from space to Earth. Not to mention the overheating that would cause anyway, especially if you're doubling or tripling the amount of energy we're using on this planet. And of course the amount of hardware, and the cost of all of that, would be prohibitive as well.
Matthew, did you see Snowden's speech at the Nashville Bitcoin Conference? If not, go check it out. The need for open-source AI is directly addressed, and the threat is articulated pretty well there.
Excellent video. Thanks.
3:30 The thing about VR is that it is effectively just a monitor, and they are trying to sell it like a smartphone. Maybe someday we will get to the point where you can use contact lenses with AR or something similar, but it's unlikely to replace the smartphone until it becomes very small and doesn't make you look like a dork.
Contact lenses will never happen because you can't power them.
It concerns me greatly that huge corporations are in control of this tech. This will placate people who are worried, but companies will NOT do things for anyone but themselves. There is always a catch.
I am still not convinced, but I will be if they release a Llama 4 one-trillion-parameter model next year and open source it.
Hey @Matthew Berman, where does this new Llama 3.1 fit in your AI stack for local OSS development? Does your AI stack still include RouteLLM, per "achieves 90% GPT4o Quality AND 80% CHEAPER"?
This morning I was using it to find out which beaches were open in my area, and it provided me with the current info that I needed. My point is that AI needs some general knowledge and then needs to specialize in a specific area!
Since we don't know what any of the weights actually do, is open source even different from closed source in this case?
Isn't distributing the weights without giving the training data essentially just giving a compiled program?
the open source models will lead to an explosion of closed models though
It likely will for both open-source and closed-source models. In fact, a big part of why Zuckerberg is releasing these models is that he realises there's an army of developers in the community who can do far more work when it comes to fine-tuning these models, something we've already seen over the last year.
and to an explosion of even more open source models.
I think open-source AI is absolutely the best way forward, not only for the reasons mentioned, but because it's the best way to make sure AI is used to benefit people rather than just a few big corporations. Power disparity is one of the best predictors of abuse of power, so if big companies are the only ones with the power multiplier that AI is, then they will absolutely abuse that power. Everyone having approximately the same AI capabilities is the best way to keep it truly beneficial to everyone rather than just a few.
I think it also levels the playing field that everyone has access to it, and for a tech as important as A.I. can be in the future, being open is probably important.
I would be really concerned if we ended up in a situation where 2 or 3 online A.I. models were dominated by 2 or 3 corporations, models that no doubt would have government backdoors in them. That might not be a concern for now, but as A.I. gets better and more capable, having that capability in the hands of so few people could become dangerous for the human race, or at the very least offer those few a major advantage over everyone else, because, as I'm sure we all know, they will have access to the full unrestricted A.I. while everyone else has access to the restricted one.
This might not be a big deal now, but if A.I. keeps developing, it could become a major driving force of human development over the coming decades, and having that controlled by so few corporations or governments sounds extremely dangerous for the human race over the long run.
Open source is the only way around that: it levels the playing field and allows everyone access to it, for good or bad. Also, I like to think that we humans can collectively come up with sensible rules on A.I., rather than corporations or governments that likely want to control it, and not for our benefit.
Llama is the one that deserves the OpenAI name
To be clear, Llama 3.1 is open weights, not open source. Also, if you are using Llama 3.1 in your project, you have to state that it is based on Llama 3.1 according to the Llama community license, in order to give it credit. Correct me if I am wrong.
Amazing! Can you please share the link to the source code?
Nice ending m8!
I'm all in for open-source!
Interesting content👍🏻
Yes, local and able to store personal information, with or without asking, probably in a vector database. Like, if you tell it your family members, it will remember them.
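As a rough illustration of that local-memory idea, here is a minimal sketch assuming the chromadb library as the on-disk vector store; the collection name and the stored fact are just placeholders.

```python
# Local "memory" sketch using a persistent vector store (assumes: pip install chromadb).
import chromadb

client = chromadb.PersistentClient(path="./assistant_memory")  # stored on disk, fully local
memory = client.get_or_create_collection("personal_facts")

# Remember something the user mentioned in conversation.
memory.add(
    ids=["fact-001"],
    documents=["The user's sister is named Maria and lives in Lisbon."],
)

# Later, retrieve the most relevant memories to prepend to the model's context.
results = memory.query(query_texts=["Who is in my family?"], n_results=3)
print(results["documents"])
```

The retrieved snippets would then be stuffed into the local model's prompt, so the "remembering" never involves sending anything to a server.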
While Linux gets all the press these days, a reminder that FreeBSD was the open-source UNIX that had to fight the fight that actually opened the path to having an open-source Unix / Unix-like OS. FreeBSD also has a more permissive license for its use and development. And right now, open-source OSs are the ONLY OSs that don't demand to know who you are, or use you as an advertising platform, just to let you use them.
One of the worries is that Zuck might be playing the loss-leader game: once they bankrupt the competition, they'll go mask-off and launch a closed model that's orders of magnitude better and too big to be run by anyone who isn't a billionaire even if it gets leaked, and you will have to pay him whatever he asks to be allowed to use it (if it doesn't end the world first). Another major concern, of course, is that they might have embedded in the model the manipulation tricks they learned over years of unethical psychological experiments on Facebook, the stuff with Instagram, etc., and the AI might stealthily manipulate people against their own best interest and in Zuck's favor.
Regarding the claim that closing off tech from other countries would not work, I guess it can actually work this way: if a country commits an act of espionage, reverse-engineers, and steals it, they lose moral standing and the ethical side of the argument. And as our shiny dad always says, light always prevails in the end.
Open source "foundation models"
≠
Open source "frontier models"
The first option is clearly wise while the latter is clearly self-destructive.
Try to remember that we do not understand the nature of this technology yet, and many top researchers believe current models may have vast capabilities far beyond what we are able to tap into with current methods.
If the Lovecraftian cosmic horror of the unknown doesn't evoke existential dread in you, you probably shouldn't have a voice in this discussion, IMHO. The scale of the consequences of getting such things wrong is literally beyond our imagination, because we have no reference point for anything like this in our past.
I think all small model releases should be specific and big ones general
We are definitely on the same page.
I think open source is best; it helps limit all the power going to one or two companies that would just end up owning everything and every technological advancement.
Any AI would then be guaranteed to be deflationary, rather than just increasing profits solely for the companies that got ahead early on.
"open source" we need a ton of quotes around those two words.
Why are we letting some rich guy get away with calling something open source that is not open source ?
What is open source here exactly ? Meta AI website code ? the code telling it the LLM to use data ? Where exactly is the code we can look through ???
21:51 So it sounds like the thinking might be that open-source technology is also highly compatible with more open societies.
Even though this is sold as open source, Zuckerberg can at any point turn it into closed source, like OpenAI did. I'd rather we put our effort into something like Petals, which IS open, like Linux by Linus Torvalds. It's sad that Petals doesn't get as much attention.
In truth, beggars can't be choosers, and even though many of the so-called open-source models are not truly open, these models are really expensive to make, and I understand why they need to find some kind of revenue source for them.
I would prefer more truly open-source models, but these are still far better than the online closed-source models.
18:00
I don't understand: what would prevent a group from taking that open-source project and making it insecure on purpose? What are we talking about here? Preventing a bug from being introduced, or a group of people taking it and turning it into an AI expert in scams?
Writing and maintaining your own code, even against an API, requires each company to have its own software development department. Even with capable AI coding, this is a cost that makes sense for some companies and initiatives, but not for all.
Don't be fooled into thinking Zuck has any motivation besides his own enrichment and prestige. I can't think of anyone worse to lead us into the AI era, given his track record with developing technology that benefits humanity. Remember that you can't share secrets with your friends without also sharing them with your enemies.
Yes, that's true, but if you get open source, you can do whatever you want with the code and fix it or change it in any way and Zuckerberg cannot do anything about it. He cannot steal your data, he cannot insert ads, he cannot do anything. So I understand you hate him so much it makes you blind, but if you get the source code and the weights, there is nothing he can do. So this is different.
Mark Zuckerberg says the software is open to anyone, but that's not entirely true. If you have a big platform with more than 700 million monthly active users, you need special permission to use it, and the company can lower that user cap to whatever number it wants at any time. Given Mark Zuckerberg's past mistakes, people are right to wonder what he's really trying to do with this model. His rebrand has felt really forced.
"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans." - Nick Bostrom
Except he's lying his ass off; his AI is nowhere near open source, and it never will be.
It just dawned on me that Zuckerberg is to open AI what Napster was to Metallica😅😅😅
15:44 Wow! Yes, regulatory capture: we have got to avoid that; we need a free market in this! Now, speaking of this, not to get political hopefully, but what do people think of President Trump's idea of an "artificial intelligence Manhattan Project"? Could that also fall into the regulatory-capture trap if certain aspects of AI research become a national security issue? Or maybe that super-cutting-edge research shouldn't be confused with upgrading open-source models, because it won't be anywhere near such an issue until open source catches up.
Look how well that worked with Facebook, an ad-infested AI hellscape.
Do you know if a vision/image-processing version of Llama is in the works? Any potential timeline?
Do you understand what open source is? Nobody can insert an ad anywhere. You can see and edit all the code. This is a completely different situation from Facebook, because you have everything locally and you can do whatever you want with it. So it is technically impossible for Meta to steal your data (like Facebook) or insert any ads (like Facebook).
Completely ignorant take.
@@RondorOne Of course it is ad-free at first. Just like Facebook.
@@joeblow6105 But it is technically impossible for Meta to insert ads into your local AI. If you download Llama 3.1 (405B) right now and use it for the next 10 years, there is no way for Meta to ever insert anything into it. Meta also cannot insert ads into any open-source release in the future; it's technically impossible, because people would just remove them from the code. And if you're talking about some future AI from Meta that will not be open-sourced and will have ads: nobody promised to keep spending billions and billions to always bring you the freshest, most powerful local open-source AI for free. But the ones you have now are open and free, and you can use them like that forever. Nobody can take them from you.
Yann LeCun (Zuck's AI guy) has been dismissive of the existential risks of AI described by experts like Hinton, Sutskever, Tegmark, and Bostrom. Instead, LeCun argues that AI development will be incremental and that we can implement safety measures along the way. This confidence is risky, especially given the high stakes involved and Meta's track record. Money and company growth are always behind everything.
Can you do a video on how to actually finetune a model with custom data
There aren't many videos on this because fine-tuning just doesn't really work/isn't useful.
@@r34ct4 It does work; that's the whole point of this, lol. Did you watch the video? People have been fine-tuning Llama 3 to improve it for specific things.
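For context, this is roughly what that kind of task-specific fine-tuning looks like today. It is a hedged sketch only: it assumes the Hugging Face datasets/peft/trl stack, access to the gated Llama 3.1 8B Instruct weights, and a placeholder my_custom_data.jsonl file, and exact argument names vary between trl versions.

```python
# LoRA fine-tuning sketch on custom data (assumes datasets, peft, trl installed;
# treat argument names as a template, since they differ across trl versions).
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Custom data: a JSONL file whose "text" field holds formatted training examples.
dataset = load_dataset("json", data_files="my_custom_data.jsonl", split="train")

peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="llama31-8b-custom",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    max_seq_length=1024,
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # model id assumed; gated on Hugging Face
    train_dataset=dataset,
    peft_config=peft_config,
    args=training_args,
)
trainer.train()
trainer.save_model("llama31-8b-custom")
```

The LoRA adapter approach is what makes this practical on a single consumer GPU: only a small set of extra weights is trained, and the base model stays frozen.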
It doesn't consider externalities like energy use in a time when we need to use less energy 🌎
Frontier models are not open source; they just tweak each algorithm's values into alignment with ethical guidelines when they release them, and a lot of manipulation has been found. But keep up the transparency. Domain-specific manipulation is the new tactic. He found a way to hide more.
Just hours after Llama 3.1 was released, people had already removed all the censorship and bias (except the soft bias present in the training data, but that's the bias of the Internet; there's nothing you can do about that... maybe fine-tune on your own data so it has your personal bias instead). You can download abliterated, uncensored versions of all these models right now, and they are completely uncensored.
Yay! 😊@@RondorOne
AI needs to be free because nickel and diming over completely random outputs will not end well. AI has theoretically unlimited potential but people need to have full access to it to bring it out. It needs to be open source to promote innovation.
@@neubsi8243 Llama models are only softly censored. You can easily prompt-engineer them to be "uncensored".
🎉 Open source models are moving fast!
i think Zuck looks like Krusty the Clown now 🤔🤦♂️🤣👉
I have never imagined this could be done by Zuck. I think Elon would do it anyway
Does making open LLMs negate the valuation of chatGPT?
Yes
Capture the video in 1 minute 🚀⬇️
*Main Points*
1. Open-source AI models like Llama 3.1 are becoming competitive with frontier models, drawing parallels to the rise of open-source Linux over closed-source Unix.
2. Zuckerberg's vision for open-source AI aims to prevent Meta from being beholden to closed ecosystems, as experienced during the mobile revolution.
3. Llama 3.1 is designed for modifiability, cost efficiency, and security, allowing developers to fine-tune models for specific needs without compromising data privacy.
4. The open-source approach encourages a diverse ecosystem of developers, leading to innovations and improvements that closed models cannot match.
5. Meta's commitment to open-source AI is driven by the desire to create a standard that benefits the broader community and ensures long-term sustainability.
6. Open-source AI is seen as a way to democratize access to technology, preventing power from being concentrated in a few companies and promoting safety through transparency.
7. Zuckerberg argues that a decentralized, open innovation model is crucial for maintaining a competitive edge against geopolitical adversaries like China.
*Timestamped Summaries*
00:00 Open source AI is gaining traction with the release of Llama 3.1, which is competitive with leading models.
00:28 Mark Zuckerberg's letter emphasizes the importance of open source AI as a path forward, drawing parallels to the evolution of Unix and Linux.
01:24 The initial affordability and modifiability of open source software led to its eventual dominance over closed systems.
02:16 Zuckerberg's strategy is influenced by Meta's past experiences with mobile platforms, aiming to avoid dependency on closed ecosystems.
03:43 The competitive landscape of AI is shifting, with Llama 3.1 now on par with the best models and expected to lead in openness and cost efficiency.
...
----------------
click [here](ruclips.net/channel/UCSD2kXTOHUg_shF2fYQYpcg) to watch the full summaries 📺
So open source f-ing Facebook then
I love the comparison with Linux. However, Linux's magic was that it could run on any old PC; it had a low bar to entry. For Llama 3.1 405B you need a beast of a server; there is no laptop out today that will run this model, so we will rely on shops that can run it to make smaller models from it (see the rough memory numbers after this comment). While we still have that reliance, we won't be able to reap the benefits Zuck is pointing out.
So I think there needs to be a breakthrough in hardware first. This is why I'm watching Qualcomm and Nvidia, because that's where we will see the AI revolution.
What about Apple? Apple's focus is on selling devices and providing a personal experience; we haven't seen a networked Apple experience and ecosystem. Until then, I believe Linux will continue to be a significant player in the AI landscape.
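To put rough numbers on that hardware point, here is a back-of-the-envelope sketch of the memory needed just to hold the 405B weights (it ignores KV cache and activations, which push the real requirement higher):

```python
# Back-of-the-envelope memory footprint for Llama 3.1 405B's weights alone.
params = 405e9  # ~405 billion parameters

for label, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = params * bytes_per_param / 1e9
    print(f"{label}: ~{gb:,.0f} GB of weights")

# Roughly 810 GB at fp16, 405 GB at int8, and about 200 GB at int4: far beyond any laptop,
# which is why most people will reach 405B through hosted endpoints or distilled models.
```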
If Zuck successfully makes open-source Llama the standard for LLMs, we can't be sure that in the future he won't rug-pull and simply stop being open source. But assuming all the versions already released are open source, we can still fork them... not as good as Linux, but well... in this case, with the massive money dumping and the AI (LLM) race, open-source models like this are the best path.
It's all true and good, but I'm sure Llama is also trained on Facebook, WhatsApp, etc. user data. The Zuck is a little hypocritical, even though I agree with most of the statements made.
The comparison to Linux is strained and may be invalid. Linux was made outside of a big corporate entity. Linux did not require a multi billion dollar cluster of graphics cards to train it. Linux was hand coded line by line by people who knew how every part of it worked. The 405b model from Meta is nothing like that. It's a black box and no one knows how it actually does what it does.
Giving this tech to the whole world, including brutal dictators, when we don't even know how it works yet is reckless. I'm sure the 405 will be ablated and jailbroken and there's absolutely nothing that can be done about it because it runs locally. Do we really want to freely give Russia or North Korea that power? Maybe someday, yes. Once we understand how the models do what they do and we can better characterize the risks they pose.
Just because FB got burned by not being a leader in mobile is not a blanket excuse to release whatever they want to the whole world.
Zuck is putting pressure on all these players trying to maximize selfish gain
I'll be more impressed when it becomes multimodal.
Llama 4.
The only question is the size. If I can't have a small multimodal model that I can run locally, I will not be impressed.
Llama is open weights, not open source.
Bro-Zuck is my prozac
NONE of us are on team Zuck
Zuck updates are very human
there's a reason why no two snowflakes are alike, because randomness has no restraints
Hats off to Zuck
Open source will not be the "path forward" any time soon, for these reasons:
1. Regular people can't run big models; only big companies can steal the big models, fine-tune them, and then release them rebranded as commercial.
2. People who are scared about their data being released are < 0.0001%.
3. Smaller models are not good enough and cost more (even if you run them locally) than e.g. GPT-4o mini; that's because Microsoft's datacenters run on solar and wind energy (see the illustrative cost sketch after this comment).
Once GPUs become dirt cheap and electricity becomes dirt cheap, the open-source path could start there. Till then, see you on the other side.
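A purely illustrative cost sketch for point 3. Every number below is an assumption (GPU wattage, local throughput, electricity price, hosted API price), not a measurement or a real price list, so swap in your own figures:

```python
# Hypothetical local-vs-hosted cost comparison for a small (~8B) model.
# All numbers are illustrative assumptions, not benchmarks or real prices.
gpu_watts = 250              # assumed GPU power draw under load
tokens_per_second = 40       # assumed local generation speed
electricity_per_kwh = 0.30   # assumed electricity price, USD

tokens = 1_000_000
hours = tokens / tokens_per_second / 3600
local_electricity_cost = hours * (gpu_watts / 1000) * electricity_per_kwh

hosted_price_per_million = 0.60  # assumed hosted price per 1M tokens, USD
print(f"Local electricity for 1M tokens: ${local_electricity_cost:.2f}")
print(f"Hosted API for 1M tokens:        ${hosted_price_per_million:.2f}")
```

Whether local ends up cheaper depends entirely on those assumptions (hardware you already own, idle power, your tariff), which is why the point gets argued both ways in the replies below.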
You want them to buy you a GPU as well 😂. For Christ's sake, they trained and built an entire smart AI model using billions of dollars. If you can't pay $300 to get yourself a good PC, then just sleep.
1. Small companies can, and people can run it on cloud services like Together, which compete with each other.
2. Small companies and government organisations care very much. And people too, but less.
3. The 8B model is very smart, significantly smarter than GPT-3.5.
You got some good points, but some misconceptions too:
1. Huge models weren't made for the everyday Joe to begin with.
2. Stands, I guess.
3. Smaller models used to not even exist, and now they are getting better and better every single day.
@@starfieldarena Same with people who get an AI model that cost billions of dollars to research, develop and train, who are allowed to use it commercially, can modify it in any way, can even train a competing private model on synthetic data from it, and can see and modify the source code and all the model weights... but it did not release all 15 trillion tokens of original training data, so "iT's NoT rEaL oPeN sOuRcE bRo!!!1"
People are ungrateful.
Now we have the same blah blah we already heard in the Zuckerberg video. No new value.
One caveat: backdoor lol
I don't trust zuck not to pee in the pool....
Llama is not "open source"
Have you ever been to China? CrowdStrike (CRWD) caused problems for the whole world, but not for China. Have you ever asked yourself why? Don't blind intelligent people to the truth of AI.
Why always the focus on the "cult of personality" instead of the applications? They must teach feckless aggrandizing at Y Combinator. But any criticism means I'm a hater despite my long-term viewership, so exit stage left, I shall...
Throws a tomato
I love my gpt4all
This is not open-source AI. Unless you can retrain it yourself, it's a black box. The whole idea of open source is that you can recompile from scratch, not just use a precompiled binary blob that's free to use.
You can generate synthetic data from it, so you can rebuild a new LLM from scratch. The black box is just the nature of neural networks, but if you have access to the weights you can examine them using statistics.
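A minimal sketch of that synthetic-data idea, assuming a local Llama served by Ollama and the official `ollama` Python client; the seed prompts, model tag, and output file are made up for illustration:

```python
# Generate a small synthetic prompt/response dataset from a local Llama
# model served by Ollama, then write it to a JSONL file.
# Model name, prompts, and file path are illustrative assumptions.
import json
import ollama  # pip install ollama

seed_questions = [
    "Explain what open weights means in one paragraph.",
    "Summarize the trade-offs of running an LLM locally.",
]

with open("synthetic_data.jsonl", "w") as f:
    for question in seed_questions:
        reply = ollama.chat(
            model="llama3.1",
            messages=[{"role": "user", "content": question}],
        )
        record = {"prompt": question, "response": reply["message"]["content"]}
        f.write(json.dumps(record) + "\n")
```

Scale the seed list up and you have the kind of distillation corpus the license explicitly allows you to train on.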
Open source hype. Can anyone teach how to use Open Interpreter with Ollama?
Why are people impressed by or congratulatory about Zuckerberg's appearance? Everything he does has to be seen through a marketing and economic-growth motivation, unfortunately.
Just look at athene 70B
They deserve a second chance? LOL
⭐
Good luck running this locally.
I have tried 405B and I am really not convinced. Hopefully the model will get improved by the community, but for me it does not cut it (8B seems great).
What we have to defend at all cost is within the material-science tech lanes.
The software is fair game, but there should still be some representative standards at play here.
Propagandized beliefs and exploited tech, like rogue terminator bots with embedded plausible deniability, triangulate into the greatest threats of all.
We transferred wealth and industrialized the world, and it didn't bring utopia, as the evidence in our world proves. But they can survive by producing conventional supply chains to trade for whatever resources they don't have. This third frontier demands responsibility unlike anything else we've ever known!
If it's a state actor that censors its individuals and refuses to grant the deterministic free flow of information, education, and access to tech, even to American allies, we must be very careful with the material hardware itself.
Not all beliefs are equal, and not all states are willing to reallocate affinities at the proper time and place so as not to get in the way of mature, successful social behavior, which gets more narrow each time we dig the complexity out of yesterday's axioms and put it into our world's tech and material sciences.
Each time this occurs, it narrows what defines a successful, free, and responsible society.
This 405B open-source model is not really open source, since a normal consumer cannot use it. I understand open source as something that can be used by everybody, not just by some corporate companies. So in the end, Mark is pushing against closed-source companies as a strategic move.
Just stop using the GreedyAI in your examples from now on.
bla bla bla..... all the nonsense....
Open-source free AI will be better than a Google search, but in the end the paid versions will have no competition!
Rather than trying to predict what the future will be, compare Zuck's AI to ChatGPT and Grok.
Zuck may just want to be a hero. I do think he's neglecting security though
It's what David Shapiro labeled a terminal race condition: it's not possible to act safely when competitors aren't also doing so. Those who are careful will be left behind, and the ones rushing it are playing Russian roulette with a doomsday device aimed at the whole world. There's a chance that if you act fast you might find a solution before the hammer hits the doomsday bullet, but the odds aren't in our favor...
My other reply to you got shadow-blocked, didn't it?
@@tiagotiagot possibly. I can see only one
@@LukasPetry Lemme try to figure out how to rewrite it in a safe way: It's what David Shapiro calls terminal r4ce condition; it's not possible to act safely when competitors aren't also doing so; those that are careful will be left behind, and the ones rushing it are playing rvssian-r0ull3tt3 with a doomsday device aimed at the whole world; there's a chance that if you act fast you might find a solution before the hammer hits the doomsday bvll3t, but odds aren't in our favor...
666 likes!
🦙
1st
Thanks for re-reading what you just read, 'cause we're idiots. Unsub
Then why did you click on the video, and why did you keep watching? So that you could write an angry comment and make Matt feel bad about himself? If you don't like commentary on the news and would rather search for it and read it yourself, then what are you even doing here in the first place?
I used this video as a radio / podcast while working around the house. I also appreciate Matt's take on the topic. If you don't care, then why were you even subscribed in the first place?
@@RondorOne Hi my friend. Been a sub until writing this comment. After leaving my honest opinion, I stopped watching and unsubbed. Clear enough for you? Need more explanations? I'm here for you
@@paolomoscatelli No, I think it's clear. I would just recommend not even watching videos where it's obvious it's just a RUclipsr's commentary on some news or press release. Most of us who subscribe watch these RUclipsrs (or just listen to them) because we don't have time to read AI news every day. Listening is easier, because you can do it while driving or working around the house. For us it has value, because we don't have time to read and curate it all ourselves, and we're also interested in commentary and context from someone who is reading all the news. I can understand why this might not be good enough for someone like you, but then you should avoid videos like this and click only on benchmarks or something like that.
@@RondorOne Either you're trolling or very limited at reading. You'll get there. Or not.
An open-source neural network similar to TOR?
What if we made an open-source neural network that works as a swarm, in the spirit of the Tor network, torrents, or crypto, so that all computers would be part of its "consciousness" rather than some servers?
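A toy, standard-library-only sketch of that swarm idea: each peer serves a slice of the model's layers and a request hops between them. Everything here (peer names, layer counts, the fake layer math) is illustrative; real projects in this space, such as Petals, handle the actual transformer computation, routing, and fault tolerance.

```python
# Conceptual swarm inference: peers each host a contiguous block of layers,
# and the hidden state hops from peer to peer until all layers have run.
# The "layer" math is a placeholder, not real transformer arithmetic.
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    layers: range  # layer indices this peer serves

    def forward(self, hidden_state: float) -> float:
        for _ in self.layers:          # stand-in for running each hosted layer
            hidden_state = hidden_state * 1.01 + 0.1
        return hidden_state

def swarm_inference(peers: list[Peer], prompt_embedding: float) -> float:
    state = prompt_embedding
    for peer in sorted(peers, key=lambda p: p.layers.start):
        state = peer.forward(state)    # in reality, each hop crosses the network
    return state

peers = [Peer("alice", range(0, 42)), Peer("bob", range(42, 84)), Peer("carol", range(84, 126))]
print(swarm_inference(peers, prompt_embedding=1.0))
```

The hard parts a real swarm has to solve are exactly what this toy skips: finding peers, verifying they compute honestly, and surviving peers dropping out mid-request.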