The only reason humanity still exists is that technologies that make it easy for an individual or small group of people to create WMD have not been available. It's super costly to refine U-235, for example, which keeps its use very limited. Now imagine an AGI that invents completely new ways of cheaply and easily creating CBRNs and that tech gets in the hands of bad actors. Nightmare fuel.
Also, the artists complaining about "AI art is theft" are just whining about other people not having to spend years learning to do as well as them. AI does not steal, it learns. Just like an aspiring artist learning from other artists. The difference is that AI only has to learn once, and then it enables anyone to make art, even people who *GASP* have no artistic talent but still have an ability to distinguish good art from bad art. These are the same type of people who smugly told factory workers and tradesmen to "Just get a degree" or "Just learn to code." Even worse are the artists using and actively promoting AI tools (hypocrites) to poison AI art generators. They are the equivalent of the historical Luddites smashing factory machines and steam engines. As for AI safety, at this point I distrust anyone saying AGI should be limited to governments and megacorps. I would rather everyone have access to AGI than just the oligarchs that already rule us having exclusive access. I also think they are very much overstating the danger in order to justify institutional and legalistic capture of the technology, to suppress and control normal people even more. I don't trust them as far as I can throw them. Would you trust the US government, the CCP, Microsoft, Google, Amazon, or Meta with exclusive access to AGI tech?
@@DiceDecides Groq is the one to watch between the two. Token speed will become increasingly valuable as we venture deeper into the "10x" that Sam Altman sees "everywhere" as potential to further improve around the o1 model expending tokens at inference time to "think"...
Because if you do not open-source models and keep them to companies only, that eventually leads to a cyberpunk dystopia. Knowledge should be open; we need to educate people about consequences and make sure they will be empathetic and take responsibility for themselves and others, instead of trying to control them by force. Stupidity is the enemy, and we should strive to fight it instead of limiting knowledge.
It would also be dystopia if every joe on the street has the power to end billions of people. If the models can do stuff at that level, it would be pretty silly to let every person be able to unilaterally instruct it to.
Why open source? Why would you want it closed source? Why are you giving your safety and security to a handful of people you don't know? We need a level playing field, and that means open source.
To answer your question: if a model is dumbed down to the point it can't, for example, explain how to make dynamite, it's too stupid to do any task more complicated than that. Do you want the intelligence of AI capped at the stupidest possible WMD, say like leaving a can of tuna in a metal tin in your fridge to create botulism? You seem to believe dangerous parts of AI can be neatly removed from training data sets but even if that were possible, a very intelligent AI would still be able to figure lots of potentially dangerous substances or activities out. Once again: you are masquerading as a supporter of AI while in reality are a lickspittle working to assure only the elite have access to it. The only question is: are you a fool or a knave?
Remember that Google guy who 'quit' some years ago and said he thinks the AI might be conscious, or something like that? That was a few years ago, and even now, in my experience, AI is exceedingly stupid. So why would he say that, if not as some sort of PR stunt? Remember that Facebook guy, or at least their company team, using Anthropic (if I recall correctly) in the background as their own? That too seems to have gone down the memory hole.
9:35 "I suspect there is no such thing already as 'closed source' anything in data that cannot be opened within the snap of a finger?" Not from this perspective anyway. Maybe wrong, the data suggests otherwise, likely the timing is through dilation. Best practice is to assume everything is open source and that there is nowhere to hide that can't be altered, re-engineered, and factored into truthful accountability. Or not.
We, I speculate, need to be optimistic about AI, just as we should have been optimistic about fire, the wheel, TNT, the automobile, and nuclear power, even though we know from experience that the use of these technologies, along with all of their benefits and potential benefits, has empowered activities that resulted in many millions of human deaths and other great harms. Professor Hinton's cautionary comments should be read, I speculate, in the context of such history. As any situation moves along, extensive regulation is highly likely to be needed for any highly impactful, highly empowering technology, AI being no exception. Let's keep in mind what might have happened if fire and all the other technologies I just mentioned had only been seen through the lens of their dangers. Would they have been killed in their cradles if they had been put in the hands of over-zealous regulators? On the other hand, if those imagined over-zealous regulators had existed and had been over-ridden or over-ruled, would they not now be somewhat justified in giving us a long stream of "I told you so"s? Just as we have muddled along with these other technologies, we need to learn from those experiences and turn all the more to nurturing AI along with as much balance as we can muster. The outcome is not certain no matter what rule-making decisions are made or fail to be made. At some point, Professor Hinton's concerns will have to be weighed heavily and with urgency, if for no other reason than that, along with the good things, crises will arise and bad things will happen. We may thrive with AI or we may not even survive. One thing is, or should be, clear: the potential benefits of AI are massively humanitarian in nature. The situation may eventually look nearly as dire as "can't live with it, can't live without it."
I'm highly optimistic; but to put that in perspective, I'm also highly optimistic that we are not going to blow up the whole world with nuclear weapons. However, as the Randy Newman song goes, "...I could be wrong, but I don't think so."
I'm an accelerationist, but I don't think we should open source large models. The risk of them falling into the hands of an extremist group is too high. The last thing we want is to give dangerous people a team of digital geniuses / agents.
AGI will be trained on human brainwaves. We have to agree in order to achieve utopia. Stop being afraid. Accept the truth: they have access to everything you do and think.
AI has identified your scam: I apologize, but I do not have accurate information about what XAI850F is. The search results provided do not contain reliable or factual information about a cryptocurrency or token called XAI850F. The sources appear to be promoting unverified presales or investments, which are often associated with scams. I would caution against trusting or acting on any claims made about XAI850F without thorough independent verification from reputable sources.
You "don't understand" why artists are upset? What a joke, are you serious?! Have you even read what they wrote with this leak? Do you not care at all that OpenAI and others just vacuum the internet of all its creations, taking articles, books, artist images, photos, personal data, and just taking it for themselves? Did you not see Mira Murati's insane facial tick when asked if they used RUclips videos for Sora? Get real, tons of people are suing OpenAI for good reason. It used to be relatively clear with Fair Use and how we could *see* the derivatives that someone had produced, but these rules are just not made to judge a model made of billions of numeric weights - it has to happen on input to prevent companies from just regurgitating someone else's art for a profit.
I'm not sure "taking it for themselves" applies fully when the public is benefitting from the information in the AI models. The courts have seen and ruled, in some cases both ways. If you truly understand what's coming, the fact that artists, or bricklayers, or factory workers go first or before or after anyone in losing their source of income won't really matter in the grand scheme. No one today even knows whether typewriters or fax machines or vinyl records went out first and unemployed the thousands working in those industries. The final destination is the end of human labor so exactly what order we get there in won't matter that much.
I would say if AI were physically independent, then I would be concerned. Until then, AI could just hack ICBMs or something, but it would be its own demise. Don't forget, all the infrastructure needed to keep an AI running is still operated and maintained by humans, and it's very vulnerable. Shutting down the internet and it's done. That's in case the AI is interested in its own survival.
PS: Another thing: I give a lot of weight to Professor Hinton's observation about the idea that if we don't like AI, super-AI, we can always just turn it off, unplug it. We can't say that about nuclear weapons; we can't, in any practical sense, just decide to destroy all of them and be done with it, even in the face of their great, looming, rushing, screaming danger, can we? Humans don't function like that. We do, however, have some highly powerful and so far effective restrictions and regulatory measures in place. That has worked so far, even though, true enough, they could massively fail at any second.
LOL: "I give a lot of weight to Professor Hinton's observation that the idea that if we don't like AI, super-AI, we can always just turn it off, unplug it. " Do tell, how would you "turn off" an AI that is smarter than the entire human race by, oh, say 5x?
This tool, somewhere in the middle of the video, honestly thinks an event of most people losing their jobs over 10 YEARS!! would be sufficient to initiate new economics. The dude realises the world falls apart when most of those people riot in the first months after losing their work, right? 😂😂😂
LLMs are more the danger for human interactions. Agents will be responsible for "fixing things." Do not use LLMs and assume you are not responsible. This will be a method of measuring
It would be incumbent on the platforms that provide access to upload and download open source models to screen users and models, to make a safe venue that doesn't allow malicious activity. But then, what government agency is going to competently regulate that, and bad actors will still find a way, yada yada. Seems like a place to start. Fight fire with fire: train an AI to detect malicious code. 🤷♂️
OpenAI is just hot air. They are falling behind and need to up their game. There is really no good reason why they get so much attention, considering how fast other players advance. Especially in open-source projects, things are really levelling up. OpenAI is like the old saying about the arrogant king: he's got no clothes on.
Yeah, the tech bros still don't understand that stealing the works of others to train their model isn't something people particularly appreciate... Artists are something almost no one in tech seems to respect. They're seen as a customer or a material to exploit, but nothing more.
The idea of the economy doubling every month is absolutely absurd. These people think the physical world is as quick to scale as an AWS server. Good luck doubling the electricity grid in a month. Good luck doubling food production in a month. What nonsense.
But what if there's a humanoid robot factory producing 10M robots per year (fully operated by robots), and then a software update makes them 1000x smarter, allowing them to build out 12 more 10M-robots-per-year factories in 1 year (1 factory per month), and as soon as each month completes, new robots from new factories do the same thing
Leading to power of 10 exponential growth
And produced robots to go into every field or industry on the planet
What then?
Keeping in mind that, bundled into the “1000x more smart” software update is Pyramid-of-Giza construction-technique level intelligence that facilitates any problem you can fathom being solved
Including power methods not known today, etc
Doubling every month.
2,4,8,16,32,64x.....
Let's assume it is true; that would likely create abundance within a year for everyone. Basically, the whole economic system would likely collapse within a few months.
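The compounding claim in the 2, 4, 8, 16... sequence above can be sketched in a few lines; this is purely illustrative arithmetic, with the 10M-robots-per-year starting figure taken from the hypothetical earlier in the thread:

```python
# Illustrative only: what "doubling every month" compounds to over a year.
def monthly_doubling(start: float, months: int) -> list[float]:
    """Return total output after each month if capacity doubles monthly."""
    out = []
    value = start
    for _ in range(months):
        value *= 2
        out.append(value)
    return out

growth = monthly_doubling(10_000_000, 12)
print(growth[-1] / 10_000_000)  # 4096.0 — i.e. 2**12 after one year
```

Twelve doublings is a 4096x multiple, which is why the thread argues the economic system would break long before the year was out.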
Now remember Nvidia said they are upgrading Servers for some companies so they take up less space and use less energy.
I think they said around a decade for that.
I very much doubt the doubling.
But obviously, CEOs are here to raise the stock prices so they will always overhype their products.
It's not in light of him referencing it happening in say 15 years, which by conservative estimates would be 5 years after AGI...
Norman Borlaug's new wheat multiplied the harvest in Mexico by 6x, and his brilliance wouldn't even begin to TOUCH super intelligent AI.
Human beings are just simply bad at imagining what something that is 4 to 10 times smarter than all of humanity combined could actually pull off...
Yeah, I'm not sure we can see demand double indefinitely, but if efficiency keeps scaling, that's a win.
I think the level of consciousness in LLMs is equivalent to going to sleep after every thought, with continuity within the conversation. With long-term memory and high-frequency inference calls, you will approach consciousness that is functionally similar to organic consciousness, with or without emotional qualia, depending on the design.
Nice! By the way GLHF chat performs really well for those who don't have local GPUs
It's not about open-sourcing the development of bioweapons, but open-sourcing the ability to develop them. If they want to ban that, they will ban the ability to develop many other things: the whole mechanism of development and knowledge.
I think it was Karpathy who said you could miniaturize LLMs down to 2-3 billion parameters without a performance cut
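A quick back-of-the-envelope sketch of why that 2-3 billion parameter range matters for local use; the precisions and the 3B figure are illustrative assumptions, not anything from the thread:

```python
# Rough memory footprint of an LLM's weights at common precisions,
# to show why ~3B-parameter models fit on consumer hardware.
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB (ignores activations and KV cache)."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

for label, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: {model_memory_gb(3, nbytes):.2f} GiB")
```

At fp16 a 3B model is roughly 5.6 GiB of weights, and around 1.4 GiB at 4-bit quantization, which is comfortably within a laptop's or phone's memory budget.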
Because those big models will be able to do not just harm but also a lot of useful work. Open-sourcing them means that I can get access to them, not just big enterprises, so as a small business I can run my own system on a server and compete with big corporations. At least, that's how I understand the benefit to all humanity. We compete on ideas, not on choked-down access.
That's true to a certain extent, but the state of the art models that are pushing the boundaries toward AGI will not be running on hardware anyone can afford at home (unless you have a last name like Zuckerberg or Musk).
Claude MCP is great. It can't delete anything though, I added a custom MCP for it.
What's amazing is that people go for the fingerprint and facial recognition things built into their computers, to say nothing of what's in and on their cellphones...
Just open the cage door and let them all willingly walk in.
'You will own nothing and be happy.' ~ Klaus Schwab, World Economic Forum (WEF)
AGI is just software pretending to be human. To make it truly useful, we don’t need giant models; we need small, efficient multimodal models. Train them to handle text, images, audio, and movement together in real-time loops. They should predict responses while working on other tasks, like controlling a game character or reacting to your voice.
Focus on making them lightweight and local, so any device (a laptop or phone) can have a responsive assistant that listens, sees, speaks, and acts seamlessly. The tech is already here; it's just about putting it together.
Powerful AI should be open sourced. Do we really want a handful of giant corporations, and the government to be the ones in control of that power? Sure, open sourcing has risks, but it has the potential to do so much more good.
Also, AI in the future has the possibility of reaching personhood. Do you want persons to be closed source to companies, or governments? That sets a bad precedent.
I really must ask, though: what do proponents of open-source AI make of the alignment risks that would arise from a powerful AI system, not something as harmless as a Llama model but an actual AGI or ASI, being in the hands of anyone? There would be nothing at all preventing someone from creating an AI system that would be dangerous to humans, would there? I guess you could say governments are already doing this with military AI, and catastrophic risks from government use of AI are already present. But if only governments and corporations have top-down access to a theoretically powerful AI model, even if alignment problems are perhaps inevitable down the line, it is much less likely that someone would align the model for explicit harm. Whereas if any actor who wanted to, and was particularly misanthropic or self-interested, had access, there would be absolutely nothing stopping them from unleashing a model to cause harm and death upon us.
The fact that they targeted artists in the first place with Sora breaks down the whole idea that AI democratizes skills. Not so OPEN. But it's Hollywood, so whatever. (And obviously Hollywood would have something to complain about.) BUT on that note, as a consultant, the way tech behaves right now is fucked up: they cut out what should be very well paying jobs and instead find ways to do it for free. Many different ways to do it for free.
A very quiet week
Government (military) will obtain AGI/ASI before all...
I wonder if the reason everyone thinks it's going to happen soon is that we've made AI researchers (I wouldn't be surprised if this exists behind closed doors), where the next breakthrough can be discovered by AI. That's why they might think it will happen soon: because we have automated the ability to research the next breakthrough. Just speculation, of course.
Why are you saying this like it isn’t the consensus
@@tack3545 - Precisely!! Everybody on the inside that I have spoken to states they have AGI across a number of domains. Today.
All domains? Nope! Not just yet... but within 2-5 yrs for sure! Based on the current trajectory - AGI (defined as being able to outperform average human performers, up to top-20th-percentile performance).
They definitely exist behind closed doors. Google DeepMind released a paper yesterday titled Boundless Socratic Learning with Language Games. The crazy thing is, I was playing around with OpenAI’s o1 preview latest model beforehand, and it came up with the exact same idea and implementation.
When you say you don't quite understand the artists complaints, have you considered asking AI to explain why they might hold this view based on their position and rhetoric? (Or, better yet, ask the artists?) Or do you prefer to simply use it as a vehicle to repeatedly and condescendingly scorn them to pad your content?
The chess thing is not available in the EU, like most Google AI stuff
It seems to work in the UK.
British lobbing missiles into Russia and trying to tell us how risky AI is. LOL You are being played.
artist gets invited to beta test; doesn’t understand what a beta test is and proceeds to freak out about it. Sounds right.
Artists have been hijacked by activists, just like journalists. Some of the greatest artists in history were always innovators, early-adopters, and technologically forward.
we will have AGI in 2025 and ASI in 2027
Why would we opensource LLM? A: Because of free corn with p, of course :3
Not kidding, reproduction is the main motivator of all living beings.
There is that, and they're less biased, which makes them far more valuable in intellectual tasks.
As much as I don't take all the haters of generative AI seriously, I understand this artist's frustration very well. OpenAI had collaborated with well-known artists before and let them use Sora for MVs and whatnot. I'm sure they were paid.
And now they gave it to more artists, probably seen as less influential. But not paid. And still with moderated content releases.
We all know that a lot of artists hate AI, because it's coming for their jobs, especially the smaller ones. And you ask them to test your AI that's supposed to replace them? They might just pose as AI enthusiasts, join the program at first, and try to mess things up.
Also, from my perspective, the most shady party here is OpenAI. This is all a PR move. They are creating a fake illusion for the world of AI "helping" artists instead of replacing them, and of artists embracing it.
Again, in 5 years, a small company won't pay freelance artists to make a YouTube ad, an Instagram ad, or an image for their product, especially if they are not design focused. Just tell the young kid on the team to figure it out, right? The kid knows computer stuff.
Are you for real ?
3d modeling and building virtual worlds!! Let's go!!! 🥳🎉
Right now is the time to get settled into big jobs and earn money. Of course, jobs where AI isn't having that much impact.
Does money still mean anything in the future after AI automation? What's the point?
You get position, influence, expertise, new paths in life, and much more than just mediocrity
Word, yes!!! You are ahead of the debt digging, culture crying, media munching mindless plebs
@@UltraK420 take the OCs comment seriously
Because closed source is LESS safe, not more safe. AI threats need to be TRANSPARENT so that they may be countered.
bro your video ended abruptly (not for the first time either) you didn't even get to finish the sentence you were speaking so you've left it on a cliffhanger
Optimus was teleoperated for the ball catching. Still quite impressive, admittedly, but the control systems are still (from what I can see from watching youtube vids) slow to train.
I don't agree that there is no room for improvement for average people. Think about AGI managing our savings to grow them, making decisions autonomously, or an AI so capable it can work with us on every daily task, increasing our ability to be capable and independent in a new job, or otherwise building our independence.
12:58 - But he didn't say AGI is going to arrive as soon as Sam Altman claims (2025); he said it's not going to be here within 1-2 years and would likely be achieved within 5-10 years instead, so how is the AGI timeline shortened?
This channel is the pinnacle example of quantity over quality
@@GodbornNoven ?
@@GodbornNoven I feel like you are being a conceited person that thinks he is above anyone else, either explain what I was getting wrong or be quiet and don't post any comments, it's normal to not get stuff, we're all just humans and sometimes we fail to understand stuff, besides I really don't see how I'm wrong here.
@@MrRandomPlays_1987 he’s not insulting you. What he meant by "quantity over quality" about this AIGRID channel is that it throws out as much content as possible, with false information and clickbaiting
@@efraimmukendi7137 Oh, I actually had a short thought that his comment came about as not necessarily hostile and that it might have been something else he was trying to say but I wasn't sure, my first instinct felt like he meant to insult me and I didn't get exactly why, thanks for the heads up, I guess you are right.
The only reason humanity still exists is that technologies that make it easy for an individual or small group of people to create WMD have not been available. It's super costly to refine U-235, for example, which keeps its use very limited. Now imagine an AGI that invents completely new ways of cheaply and easily creating CBRNs and that tech gets in the hands of bad actors. Nightmare fuel.
Didn't sam altman want to give chatgpt a birthday present ?
Also, the artists complaining that "AI art is theft" are just whining about other people not having to spend years learning to do as well as them. AI does not steal, it learns. Just like an aspiring artist learning from other artists. The difference is that AI only has to learn once, and then it enables anyone to make art, even people who *GASP* have no artistic talent but still have an ability to distinguish good art from bad art.
These are the same type of people who smugly told factory workers and tradesmen to "Just get a degree" or "Just learn to code."
Even worse are the artists using and actively promoting the use of AI tools (hypocrites) to poison AI art generators. They are the equivalent of the historical Luddites smashing factory machines and steam engines.
As for AI safety, at this point I distrust anyone saying AGI should be limited to governments and megacorps. I would rather everyone have access to AGI than just the oligarchs that already rule us only having exclusive access to them. I also think they are very much overstating the danger in order to justify institutional and legalistic capture of the technology to supress and control normal people even more. I don't trust them as far as I can throw them. Would you trust the US government, the CCP, Microsoft, Google, Amazon, or Meta with exclusive access to AGI tech?
bit confused by this groq vs grok stuff
so Groq is fast LLM inference hardware for quick responses; Grok is Elon's LLM on X, kinda like GPT
@@DiceDecides Groq is the one to watch between the two. Token speed will become increasingly valuable as we venture deeper into the "10x" that Sam Altman sees "everywhere" as potential to further improve around the o1 model expending tokens at inference time to "think"...
Iteration of intelligence is the key.
Because if you do not open-source models and keep them to companies only, that eventually leads to a cyberpunk dystopia. Knowledge should be open; we need to educate people about consequences and make sure they will be empathetic and take responsibility for themselves and others, instead of trying to control them by force. Stupidity is the enemy, and we should strive to fight it rather than limit knowledge.
It would also be dystopia if every joe on the street has the power to end billions of people.
If the models can do stuff at that level, it would be pretty silly to let every person be able to unilaterally instruct it to.
You cut in the end? Lazy editing
Why open source?
Why would you want it closed source?
Why you giving your safety and security to a handful of people you don't know?
We need a level playing field & that means open source.
Models that cause catastrophic events shouldn't be built in the first place. Open sourced or in the hands of government no difference
You are creating AI systems that can do harm, we are building highly intelligent systems that humans can do harm with.
You can't halt progress.
To answer your question: if a model is dumbed down to the point it can't, for example, explain how to make dynamite, it's too stupid to do any task more complicated than that. Do you want the intelligence of AI capped at the stupidest possible WMD, say like leaving a can of tuna in a metal tin in your fridge to create botulism? You seem to believe dangerous parts of AI can be neatly removed from training data sets but even if that were possible, a very intelligent AI would still be able to figure lots of potentially dangerous substances or activities out. Once again: you are masquerading as a supporter of AI while in reality are a lickspittle working to assure only the elite have access to it. The only question is: are you a fool or a knave?
Remember that Google guy who 'quit' some years ago and said that he thinks the AI might be conscious or something like that? That was a few years ago, and even now, by my experience, AI is exceedingly stupid. So why would he say that if not as some sort of PR stunt?
Remember that Facebook guy, or at least their company team, using Anthropic (if I recall correctly) in the background as their own? That too seems to have gone down the memory hole.
MCP only works on paid plans tho... bummer .. i use chat gpt , i wont pay for both..
9:35
“I suspect there is no such a thing already as “closed source “ anything in data that cannot be open within the snap of finger?”
Not from this perspective anyways.
Maybe wrong, the data suggests otherwise, likely the timing is through dilation.
Best practice is to assume everything is open source and that there is no where to hide that can’t be altered, reengineered, and factored into truthful nature accountability.
Or not.
🎉great review. You didn't seem to cover f1 preview and QWQ 32B though?
let’s go ai 🤖 baby
so Yann LeCun finally got on with the schedule...
We need, I speculate, to be optimistic about AI, just as we should have been optimistic about fire, the wheel, TNT, the automobile, and nuclear power, even though we know from experience that the use of these technologies, along with all of their benefits and potential benefits, has empowered activities that have resulted in many millions of human deaths and other great harms. Professor Hinton's cautionary comments should be read, I speculate, in the context of that history.

As any situation moves along, extensive regulation is highly likely to be needed for any highly impactful, highly empowering technology, AI being no exception. Let's keep in mind what might have happened if the downsides of fire and all the other technologies I just mentioned had only been seen through the lens of their dangers. Would they have been killed in their cradles if they had been put in the hands of over-zealous regulators? On the other hand, if those imagined over-zealous regulators had existed and had been over-ridden or over-ruled, would they not now be somewhat justified in giving us a long stream of "I told you so's"?

Just as we have muddled along with those other technologies, we need to learn from those experiences and turn all the more to nurturing AI along with as much balance as we can muster. The outcome is not certain no matter what rule-making decisions are made or fail to be made. At some point, Professor Hinton's concerns will have to be weighed in heavily and with urgency, if for no other reason than that, along with the good things, crises will arise and bad things will happen. We may thrive with AI, or we may not even survive. One thing that is, or should be, clear is that the potential benefits of AI are massively humanitarian in nature. The situation may eventually look nearly as dire as "can't live with it, can't live without it."
I'm highly optimistic; but to put that in perspective, I'm also highly optimistic that we are not going to blow up the whole world with nuclear weapons. However, as the Randy Newman song goes, "...I could be wrong, but I don't think so."
I'm an accelerationist, but I don't think we should open-source large models. The risk of them falling into the hands of an extremist group is too high
The last thing we want is to give dangerous people a team of digital geniuses / agents
Governments ARE extremist groups.
Do you really want corporations or the government to be the only ones in control of such power?
Is this his son??
That's a lot of bots in the comments 😐
AGI will be trained on human brainwaves. We have to agree in order to achieve utopia. Stop being afraid. Accept the truth: they have access to everything you do and think
Is that agi timeline legit or fake?
You know the potential of XAI850F? I really hope that
AI has identified your scam:
I apologize, but I do not have accurate information about what XAI850F is. The search results provided do not contain reliable or factual information about a cryptocurrency or token called XAI850F. The sources appear to be promoting unverified presales or investments, which are often associated with scams. I would caution against trusting or acting on any claims made about XAI850F without thorough independent verification from reputable sources.
it doesn't even make sense. they gave them Sora for free and they're mad? it's also their choice if they want to be a tester lol
You "don't understand" why artists are upset? What a joke, are you serious?! Have you even read what they wrote with this leak? Do you not care at all that OpenAI and others just vacuum the internet of all its creations, taking articles, books, artist images, photos, personal data, and just taking it for themselves? Did you not see Mira Murati's insane facial tick when asked if they used RUclips videos for Sora? Get real, tons of people are suing OpenAI for good reason. It used to be relatively clear with Fair Use and how we could *see* the derivatives that someone had produced, but these rules are just not made to judge a model made of billions of numeric weights - it has to happen on input to prevent companies from just regurgitating someone else's art for a profit.
I'm not sure "taking it for themselves" applies fully when the public is benefitting from the information in the AI models. The courts have seen and ruled, in some cases both ways.
If you truly understand what's coming, the fact that artists, or bricklayers, or factory workers go first or before or after anyone in losing their source of income won't really matter in the grand scheme.
No one today even knows whether typewriters or fax machines or vinyl records went out first and unemployed the thousands working in those industries. The final destination is the end of human labor so exactly what order we get there in won't matter that much.
oh no, you're in favor of regulatory capture
I would say that if AI were physically independent, then I would be concerned. Until then, AI could just hack ICBMs or something, but that would be its own demise. Don't forget, all the infrastructure that sustains an AI is still operated and maintained by humans... and it's very vulnerable. Shut down the internet and it's done... that is, assuming the AI is interested in its own survival.
PS: Another thing: I give a lot of weight to Professor Hinton's observation about the idea that if we don't like AI, or super-AI, we can always just turn it off, unplug it. We can't say that about nuclear weapons; we can't, in any practical sense, just decide to destroy all of them and be done, even in the face of their great, looming, rushing, screaming danger, can we? Humans don't function like that. We do, however, have some highly powerful and so far effective restrictions and regulatory measures in place. That has worked so far, even though, true enough, they could massively fail at any second.
LOL: "I give a lot of weight to Professor Hinton's observation that the idea that if we don't like AI, super-AI, we can always just turn it off, unplug it. "
Do tell, how would you "turn off" an AI that is smarter than the entire human race by, oh, say 5x?
Bro your last vid was definitely an ai generated voice
This tool, somewhere in the middle of the video, honestly thinks an event where most people lose their jobs over 10 YEARS!! would be sufficient to initiate new economics. The dude realises the world falls apart when most of those people riot in the first months after losing their work, right? 😂😂😂
was 25:30 your way of proving you're human?
why would he do that, you know AI could simulate "human mistakes" anyway right?
What's amazing is that people go for the fingerprint and facial recognition things built into their computers, to say nothing of what's in and on their cellphones...
Just open the cage door and let them all willingly walk in.
'You will own nothing and be happy.' ~ Klaus Schwab, World Economic Forum (WEF)
Cool news
Robin Hanson aka the Grabby Aliens theorist.
So creating AGI from LLMs is like making a drivable, stable car out of LEGOs?
The LLMs are more the danger for human interactions.
Agents will be responsible to “fix things.”
Do not use LLMs and assume you are not responsible.
This will be a method of measuring
It would be incumbent on the platforms that provide access to upload and download open-source models to screen users and models, to make a safe venue that doesn't allow malicious activity. But then what government agency is going to competently regulate that, and bad actors will still find a way, yada yada. Seems like a place to start. Fight fire with fire: train an AI to detect malicious code.. 🤷♂️
Are you aware that your videos end abruptly in the middle of a thought? It's pretty annoying for someone just listening to it and not viewing.
Tell me more about XAI850F haha
Infinite infinity ♾️💖♾️
You talk too much. These videos could be A LOT shorter. Still love you though. Thank you for your work! ❤
Self righteous artists… no surprise there.
Reveals plans for next year lmao .... Sounds like a cellphone company or Apple
OpenAI is just hot air. They are falling behind and need to up their game. There is really no good reason why they get so much attention, considering how fast other players advance. Especially in open-source projects, things are really levelling up. OpenAI is like the old saying about the arrogant king: he's got no clothes on.
It was an emperor with no clothes. But, ya, AI is the nude emperor now.
OpenAI's o1 model is currently ranked #1 on the LMSYS Chatbot Arena leaderboard.
Exactly HOW is that "falling behind"?
Yeah, the tech bros still don't understand that stealing the works of others to train their model isn't something particularly appreciated... Artists are something almost no one in tech seems to respect. They're seen as a customer or a material to exploit, but nothing more.
is anybody here able to help me manage server bots?
Someone needs help, or the Internet is going to crash from the XAI850F bots...