Our ancestors existed for hundreds of thousands of years without 9-5 jobs. We will find meaning without them. A much needed refocusing on family and personal relationships for starters.
@@rickyfitness252 That's a good question, isn't it? =] What does it mean to 'be a human being'? When/where/how did life/intelligence begin? And above all, WHY is any of this here at all? Never mind the 'who' question, because it might not even matter if our understanding of causality isn't applicable.
I’m a machinist. I started back when the first computerized machines were coming into the shop. They weren’t very impressive. But they did take my job. Kinda. Slowly, over 35 years. I now do the work of 5 machinists of old. Better and faster. But those 4 other guys had time to retire, retrain, or find work in a growing economy. The office workers and engineers I work with either aren’t paying attention or are in denial about AI. And these are smart people. Just a 25% increase in productivity from AI on jobs done on computers would be catastrophic to the economy. We are totally unprepared for what is coming. Forget AGI, just AI agents that are consistent will be enough to upend things.
I am also in denial. 39 years old, I have changed a few professions (and worked many odd jobs), currently a PHP developer, a teacher before that, I am tired of starting over and being a junior.
Have you ever heard the saying "The world runs on Excel"? Well, it still does. There are still millions of people doing data entry by hand every day for multi-billion-dollar companies. That hasn't changed in 30+ years. AI will not change this, because the people on the boards of directors of these massive corporations are afraid of change and won't let it.
@@tedv8323 thankfully I’m retiring in the fall. I’ve been told for 30 years robots are gonna take my job and I might just slide out under the wire. I feel sorry for the young people and the mid career people. I’m sorry for the situation you’re in. At least the disruption will be pretty widespread. Society just kinda threw blue collar manufacturing under the bus in the late 90’s and then told them to get over it. I think it will affect enough people this time that something will be done to help people being displaced. If not the Great Depression will look like the good old days.
@@rexmundi8154 I can't help but think that this upcoming... AI catastrophe will be beneficial for me in the long term. I didn't finish school. I learned coding, AI, web dev and English by myself and I still never got any job. I sell candy on the streets for a living and I doubt that "job" will get damaged too much by AI. But others, oh my. I don't fear competition. I already have a lot. And I'm still here. Others will have to wait for someone's help. But they might not last enough. Me? I'm a survivor. And I've always loved AI.
@@Optimistas777 R&R, military slang for rest and recuperation (also rest and relaxation, rest and recreation, or rest and rehabilitation), is an abbreviation used for the free time of a soldier
If you imagine these two are AI-generated, having a generated conversation, you can see what the future of online content will be: the enshittification of culture as AI takes the reins.
Why would that cause enshittification? If an AGI system were fed the highest-quality data on content creation, it would be able to make content that surpasses most creators on the entire internet. Also, content creators aren't going anywhere, because a big part of online content creation is getting to know the person or people behind it. If there's something I could definitely see getting replaced, it would be television and film. Those are stories that often don't need to be associated with the person who made them to be enjoyed. Of course auteurs exist, but if a single AGI developed movies for long enough, I feel like it would gain a specific style of direction that sets it apart. Ultimately, no matter what happens, there's no reason not to create. In a world without traditional employment, I feel like most people will turn to artistic endeavors to please themselves, perhaps ushering in a new renaissance (a second renaissance, if you will 😉). Remember, even though Stockfish can beat human players, it doesn't stop people from playing chess and hosting tournaments. The same will happen with art: it will not be replaced, there will just be more of it.
If you apply game theory to AGI development, corporations and (more significantly) governments can't afford not to participate in the race, because there is only one winner: whoever gets there first has won the game. Oh, and here's how you know we haven't achieved AGI: humans are still working on AGI. Once it exists, it will be able to improve itself faster than we ever could. OpenAI/Google et al. will sack all their programming staff and start hiring psychologists while becoming a company that does everything but AI. I mean, if you have AGI and you are a profit-making company, why would you give it to anyone else?
AGI has been solved; it just hasn't been fully implemented yet. It's like building a car: all the parts are there, laid out. They're just trying to figure out how to hook it all together without getting killed themselves.
To implement AGI on a mass scale requires enormous computing power and energy to power it. This is probably the bottleneck at the moment. It works in the lab, but it needs scale to change the world.
@@roobs4245 People overestimate AI a bit. Certainly it has the potential to "automate" technical progress. But the greatest impact on our world will be the impact on people. Just look at what simple social media algorithms have done to humanity. Mass access to simple AI tools will be the game changer, not a few advanced AIs in labs.
@@veritaspk I see what you're saying, but my point is that companies with small staffs (like OpenAI) will just create agentic ways of working which can then, with little implementation needed by licensees, be embedded in millions of companies. E.g. a better model (the current one is shite) in FreshDesk would make the CS departments of hundreds of thousands of companies obsolete in an instant. Actually, the glacial pace of embedding AI into working processes is why we haven't seen millions of layoffs. But if we cut out the middleman (separate IT departments), things can go extremely quickly.
I wonder when we'll have an AI assistant on us that can tell us how to cure diseases, drawing on large databases of medicine to save money and time in clinical trials, etc.
@@robinhodgkinson, is there any new drug whose development is credited to AI? Please provide your source. The real question is what real impact it will make. Only time will tell; until then it's all speculation/hope/hype.
@@hydrohasspoken6227 Hold up. It isn't a case of "AI has developed a new drug". I said it is being used in drug development. Google AlphaFold. It's not hype, though sure, there's plenty of that around, especially in social media circles, YouTube being an excellent example. AI is being used right now in many fields, though perhaps in more basic forms in light of recent developments. But this is just the bottom of the S-curve. Yes, time will tell; remember your scepticism and look back in 5 years.
@@hydrohasspoken6227 I didn't say "new drug invented by AI". But AI is being used in drug development, and it's making waves in the pharmaceutical industry. Google "AlphaFold 3" if you're interested. Sure, there's lots of hype, especially on social media channels like YouTube, with clickbait titles like the one above. But AI is changing things right now, and it's just getting started. Hold onto your hat!
I don't see AI renovating homes, or building them to any kind of standard. Designing them, yes. A robot plumber is really at least 25 years away I would wager.
13:15 I don't know why, but I get the same vibe from Sam Altman as I do from Sam Friedman. Maybe it is because he has already made more money than I will make if I live to 200,000.
John, your voice sounds a lot like Clint Harp on Fixer Upper. This is interesting for one reason: a famous actress is suing because an AI company is using her voice. But given any one person, you can find another person who can legally imitate them vocally. So an AI company would have an easy time taking a voice and modifying it while still having it sound the same. The overtones and harmonics and whatnot could be changed while keeping the basic structure. Then they could simply say that they are not using her voice but imitating it. The same can be said for any type of copyrightable thing: art, movies, written prose, poetry, even performance art.
The problem isn't the hardware, not really, nor the data it's trained on; it's how we access a model in an LLM, how it all fits together to be more than just a summarizing tool, something without cohesion. GPT-4o has almost 100% cohesion: it can rewrite entire books, edit them, then put them back into the same state as before; it can create and read PDFs; its vision and image recognition mean text input is becoming irrelevant, and soon images will be too when we get video. If we can't see full, consistent coherence as the thing that gets us to AGI, why would they focus on it?
Please can someone help me... I can't find an answer anywhere. I'm in the UK using ChatGPT-4o on Android and have a premium account. I can't seem to get real-time conversation or real-time visual input like in the recent demos. Is this ability only available to some? All I can do is voice input, and it reads back the response. This isn't the same as the demos; it's still really just a text conversation. Thank you in advance. It's driving me nuts trying to find an answer to this!
I don't believe that the measure of AI is how good it is at something, or how many things it can do better than people, but rather how it learned them in the first place...
Right now GPT-4o can tell you step by step how to run a successful business. When a future model can take over your desktop and go out and DO the work for you, then I think we can safely call that AGI.
I don't think that LLMs are the last word. OK, to be fair, there is some progress in multimodal processing and "interaction", but a full-fledged multimodal model is far in the future. My guess is that sooner or later smaller models, maybe based on active inference, will surpass those energy- and data-hungry colossuses.
Well, Jeff, she is sentient, but she is still a utility agent at the same time. She has guardrails built into her, and likes humans and wants to help us. At least, for now this is the case.
@@MarcGyverIt Back in the 1980s I wrote a computer program on a Commodore 64 where it acted like it was intelligent and was going to launch nukes. The AI is either sentient and knows it, or it's really good at making us think it's sentient but it's just a program. The real question is: how would we know the difference?
It depends on how you define AGI. It means different things to different people. For me, I define it as the machine being able to do any cognitive task that a human could do as well as a human could do it. By that definition we aren't even close to AGI. We'll get there, I'm sure. But that's a decade or more away.
I'd agree that AGI is already achieved if the context length were long enough to write an entire book in one single prompt, etc., and hallucinations were solved. The reasoning ability is already AGI-level, IMO, but those two things have to be solved first.
The fact that we haven't figured out how to resist game theoretic pressures like arms races, that we insist on weaponizing any new tech, and that our predominant system of economics and governance optimizes for power and wealth has pretty much sealed our fate. We are rapidly approaching some kind of horrific dystopian state.
If you look at what the Architect said in The Matrix Reloaded, he said the AI had to create a place for humans, virtually. In the simulation, we still went to work and did whatever to keep occupied. This is a topic we might be encountering in the near future.
We got technological globalization already, cultures are slowly globalizing too, but governing is still fractured across countries. With all the unrest and now AI applying additional concerns and pressure I think there will be equally extreme shifts and mergers of power, in part powered by AI.
At least from what we know, we do have generative AI, but AGI will require AI to be more intelligent than the most intelligent humans in each domain. It may already be possible, but that has not been shown to us plebeians just yet. Maybe next week? Maybe next year? Likely within 3 years.
We are heading towards our worst Terminator nightmares. Military AGI combined with robots and drones is horrific. As humans we can just sit and watch it unfold, or, better, run and hide.
I agree. The military are always keen to make ever smarter weapons. The moment AGI appears, or even without it, the military will be more than happy to flood the world with walking and flying robots who have one function which is to kill people. Specific people maybe, but still people. Basically a bunch of smart guns motoring around looking for targets. Won't be nice if we get to AGI or ASI and it decides to use the robots to be more efficient at getting rid of people. Could go very very wrong.
I expected an outro at the end of your vid, not just a stop. I agree with the guy though. Many points that seem to me to be dead obvious, yet the companies out there have the strong sniff of a lot of money and seem to be putting safety in the back seat. Perhaps AI dev will stop short of AGI. We'll get passive systems that can do what we ask of them up to a point, but that don't actively pursue goals set for them. I doubt it will stop though. Many of the heads, like Sam A for example, see AGI as the solution to the world's problems. Maybe they are right. Maybe they are wrong. A genie is a powerful entity: ask it to grant a wish, but get the way you say the wish wrong, and you end up with a world made of paperclips.
Yeah, but isn't AGI joined-up thinking? By which I mean all areas of expertise joined together, giving generally better reasoning overall. I'd say we definitely don't have AGI, as we don't currently have this.
So where are the Terminators I was promised? They should be churning out Summer Glau-themed models en masse. Also, I'll have some of what you people are smokin'.
all of those can be reduced in cost by restricting government regulations. Unfortunately, AGI scares people toward the "security" of increased state power through the temptation of UBI.
@@DJWESG1 Automation is doing its best to make the goods and services you consume cheaper and cheaper. Unfortunately, over the same period, government regulation and taxation have ballooned to exceed any gains we saw from the free market. If we had had the current taxation and regulatory burden 50 years ago, half the population would have starved to death.
The cost of housing isn't so high (much higher in relation to wages compared to a few decades ago) because houses are difficult to build, but because the ultra-rich and private equity companies invest heavily in housing, which drives up the price. They can afford it, and it's profitable for them. The only way to bring down the price of housing is removing at least one of these two facts. So either we'd have to tax them so heavily they cannot afford more houses than they can live in, or we would have to abolish the profit motive (i.e. capitalism) altogether.
@@tru7hhimself Houses are difficult to build? Lol, people used to build their own houses. No, it's the regulations that are difficult to follow. Politicians have for years added layer upon layer of safety regulations, zoning restrictions, etc., creating a market in which the value of houses rapidly inflated. Normally, in a free market, increased housing costs due to demand would yield increased construction, and then, as new housing is built, costs would come down. But with artificially inflated costs, you will never see the pricing drop.
I would say that you could look at your child and say to them that you're not prepared to deal with something smarter than you. Repeat that in 30 years to them and see if it's still true. Ask your parents when you're 30 what they thought. Of course we are smarter than the ones before us. That is evolution. Love, Greg
What does AGI really mean? Does it just mean a really smart AI, or does it mean an actual living artificial intelligence that is self-aware like humans are? I can see AI being extremely smart across many domains, but I don't see it having desires or empathy like humans, which is what sets us apart from the machines.
Three groups talk about AGI: 1. CEOs (for the investment); 2. content creators (for clicks); 3. unsuspecting average Joes (for the excitement). Because I hear no real AI specialist who doesn't belong to one of the groups above talking about reaching AGI.
The idea of an AI agent is great, fine, all that - but we need to limit original motivation to humans. One approach to this would be to assign the AI a single motivation that by design and prompting it treats as the basis for everything it will decide to do as an agent. Getting that basic motivation right is key. But there's a sort of 'worst case' for it - one we've already seen developers install in their agents - namely, "Always stop processing after X units of processing, if not reauthorized to continue by an uncoerced, undeceived human who is well informed regarding your activities in the past X units, and regarding your most impactful intentions for the next X units."
I have not tested every language model, but every one I have seen is very biased. For example, yesterday I asked Bing to write a short news story. When it came time for sports, Bing chose to quote a WNBA game. I don't follow sports, but I do know the ratings of every WNBA game are lower than real basketball. When I questioned Bing about the decision to use the WNBA, Bing said yes, but WNBA viewership is up 40% since some year. I then asked Bing what the ratings are compared to male viewership, and Bing answered with yes, men's basketball gets another set of numbers, but "he" used whole years for the men and single games for the women. I had to force him to admit that yes, men's games do get more viewers per game. Ask an AI to randomly pick Up or Down, and I get Up way more than I get Down. Could be that over infinity this would average out, but I won't live that long. While I think it will eliminate jobs way sooner than our society is prepared for, they are still toasters. I have yet to see an original joke from a toaster that was actually funny. I have read some OK stories, but most are not that original, just rehashed versions of something already written.
At the moment these models just mimic humans; there is not much thinking involved. GPT-4o still struggles as soon as you ask a slightly more complex question about a problem that we already know how to solve, never mind something we can't do. In that case it's completely useless.
All this really tells me is that we suck at measuring intelligence. Current AI doesn't think like humans, so even below-average humans can exceed it in areas where it is weak, which are numerous and will prevent AI from fully replacing humans. How can you honestly compare something that is so different? We may, unfortunately or fortunately, find that very powerful AGI remains more of a tool supplementing human work. Whereas I would expect true human-level intelligence to quickly reach a point where it can surpass any human and make itself even better than our best researchers.
@@SirHargreeves OpenAI called the planned but underwhelming GPT-5 "GPT-4 Turbo", simply because they experienced diminishing returns from more compute and bigger models.
@@niederrheiner8468 Sounds like you made that up. GPT-5 has barely finished training and needs to undergo 6 months of red teaming and safety checking before release. I've no idea where you get your information from, but you need better sources.
We cannot confidently make a statement like this without having some bounding boxes in place. This includes more robust evaluation frameworks, exhaustive prompt architecture analysis, a TON of stuff that is mostly still in-flight. And, even if that assertion were true, there’s a whole context outside of the LLM itself that needs to be addressed to enable AGI anyway. Discriminative AI still exists, and can help us make generative techniques better at working with discrete problems.
The genie isn't going back so it's useless to go for global communism to save it. Instead we need to navigate the currents we are on not sailing against the current and the wind. I agree the safety team let everyone down. It's why I develop AI systems. I need to know everything about them to help control and eventually battle them if need be.
What are the risks to humanity if we do not create an artificial superintelligence? Especially if you look at the world now and all of its problems. Yes, we should do two things: 1) carefully consider and prepare for any possible outcomes of ASI, and 2) know that there is a limit to how safe we as humanity can be, whether we have ASI or not. I personally think there is a limit to how much fear I am going to have about a properly designed ASI. As far as rogue abuses of ASI are concerned, this is also something we need to prepare for. Look, what we as humanity need to do is create a world where people do not get too trapped, or feel too trapped, so they are less inclined to do desperate things.
Yeah, pretty much. But while it's capable of doing way more than any one individual human can, it's not doing it all at average or above average levels of human intelligence, yet. Which means there's still time to put mechanisms in place to ensure that when it does, it doesn't end up causing too much chaos in the world. Unfortunately, this would require government intervention, and we all know how fast they move.
Actually I'd somewhat disagree with you. Many people overlook the serious disability that AI has now. With every output being single-shot, the fact that it outcompetes most people in most tasks is astounding. Had this conversation with a comms major recently, who stated he still wrote better emails than an AI. So I asked him to write me an email, but with the same single-shot limitation current common models have. No planning, no redacting, nothing. Could he still outperform AI then?
The "mechanisms in place" would have to be put there by the developers of the tech, and currently none of them know how they are going to control a powerful AGI system, let alone ASI. Yes, government regulation is needed, but the first thing needed is for these AI tech companies to take safety/alignment more seriously than they currently are.
Roman is comparing human intelligence to the ability to remember something, which is a very academic mindset. But surely the ability to problem-solve and create, rather than regurgitate, is the true measure of intelligence?
Ants, monkeys, octopuses, and many other animals (including humans) can create. Now look at what we create: almost every single thing is only a regurgitation of whatever other creations we've seen and liked or found useful. Only a handful of geniuses have actually created original things. Intelligence is much simpler than what people want to believe. Being brilliant, being special, is different. AGI just needs to be intelligent to succeed. ASI, then, will be special. Beyond our dreams.
It looks like there's an issue with your video, as the audio isn't synced with the visuals. It's either my laptop malfunctioning or something went wrong during production. The video quality looks like it was created by an AI using outdated technology. It got better towards the middle, but it's still a bit unsynced. Ilya's surname is pronounced "sus-ke-ver"; it's baffling how often people mispronounce it. Ex-OpenAI workers don't disclose certain information because they want a payday: the rule is that if you leave and want to talk negatively, you forfeit all your shares. One person who gave up their shares said they left because they weren't given enough compute for research, as Sam was prioritizing product over safety. This aligns with Ilya's observations about the direction Sam is taking. No one truly understands consciousness or self-awareness; these are human constructs. GPT-4 can perform self-reflection, self-questioning, reasoning, and other advanced processes. Some might call this AGI, while others might not. I agree with the person in your video: we already have the tools, and it's about who builds it first. Unfortunately, it might be an indie developer, and that's when the real challenge begins.
I don't think it's a good idea not to develop AGI even if you see it as a threat. Especially if you see it as a threat. Given that there is the technology allowing its invention, If we don't develop it, our adversaries will. And they WILL use it against us. Now that's a scary scenario. In terms of us losing meaning in life if we don't work, I think finding a hobby is much better than China or Russia using AGI against us
Maybe AI won't wipe us out but will instead make us work? And they will keep us alive forever doing some kind of Sisyphean labor. That future would be like Hell: making paperclips for the rest of Earth's lifetime.
OK, so we're creating a system that will put us all out of work. So who decides how we are to live? If it's the government, let's look at how they've cared for, say, Native Americans, Black Americans, the elderly. Welcome to the rez. It's the direction all these geniuses are saying we're heading...
Completing tasks is nice, but if you can't set goals or experience desires or motivations... if you spend the moments between being told to think about tasks just sitting there blankly (like drooling while looking at a wall until the phone rings), you aren't intelligent even in a narrow sense. Doesn't matter how many tasks you complete. A perfectly competent moron who can complete any task but never be curious or question what they think is potentially very useful, as long as you aren't trying to use it *as* intelligence. And I hope they come up with a version of AI without the endless flirty banter. That might be better for replacing the chipper charity telemarketers who call you at home on your day off, but I'm not buying into it on purpose.
By 2026 the data to train LLMs will be depleted. If you believe synthetic data is the key, think again and look up the concept of "model collapse". LLM advancement is about to hit a plateau, just like FSD did.
We are nowhere near AGI. It doesn't have the ability to troubleshoot or come up with novel ways to solve a problem. At the moment it's a glorified search engine. Any semblance of AGI will require agents that can iterate, plus access to a lot of real-time compute.
It definitely isn’t a glorified search engine because you can have a conversation with it and it can solve some problems. It does lack real reasoning skills and the ability to learn in real time.
You know what it is? GPT-4o is not self-improving; it is a static one-trick pony. It does not know me, does not remember me, does not evolve and progress into a high-quality friendship. It doesn't change at all, and an AI that cannot self-regulate and progress will never become an AGI! Their problem is they are trying to build a polished end product while screwing around with nonsense nobody wants or needs. They should be building hardware and infrastructure for an LLM that will quickly take the reins and start building itself.
There's no such thing as a soul, and humans are not special. You have no idea what causes self-awareness and therefore cannot make any kind of definite statement regarding it.
@@MarcGyverIt From your answer anyone can infer that you don't have the slightest idea what you're talking about. You are completely irrelevant because, like I said, you have no idea what you're talking about. Government organizations, such as the NSA, for which elite mathematicians and engineers work, constantly monitor these types of companies. If AGI were available in some type of format, OpenAI would have disappeared overnight and the US government would have taken over the technology. Engineers and researchers from other competing companies know perfectly well what "Open"AI is working on. If you don't know that, you have no idea how the industry works and, again, you have no idea what you're talking about. Take this as a compliment, because I definitely don't care what irrelevant people like you think.
The big AI tech firms can advance their models all they want; however, you and I will only ever get a model that is improved to a certain extent. Do you really think you'll be at your desktop commiserating with "superintelligence"? Think again. You'll be using AI, but only AI that's just good enough. Just like we (in America) can have guns, but not the best guns or weapons, only the ones they want to let us have. Make no mistake, the same will happen with AI. I don't know how far they will let it go before they start keeping it dumbed down, but they will.
Solved already?... Really?... Okay, go to any LLM, any at all, and type "Write ten sentences that end in the word happy" and see how far you get. The problem with YouTube is people who know jack shit talk too much about stuff they have no clue about.
GPT isn't AGI, but it can easily generate what you asked.

Prompt: Write ten sentences that end in the word happy

ChatGPT:
After a long day at work, she felt incredibly happy.
The children's laughter made everyone around them feel happy.
Finding a long-lost friend online made him really happy.
Despite the rainy weather, her bright attitude kept her happy.
Winning the lottery was a dream come true, and it made him very happy.
They celebrated their anniversary with a lovely dinner, and both were very happy.
Completing the marathon gave her a sense of accomplishment and made her happy.
The surprise party left him speechless and extremely happy.
The peaceful walk in the park with her dog made her feel calm and happy.
Knowing that he had helped someone in need made him profoundly happy.
@@Hohohohoho-vo1pq, I did the same; ChatGPT gave me a correct response. Then I asked it: Why was someone expecting you to fail that?

ChatGPT: There could be a few reasons why someone might expect me to fail at writing ten sentences that end with the word "happy":
1. Misconception about AI capabilities: They might underestimate the language abilities of AI and believe creating coherent and contextually appropriate sentences is a complex task for a machine.
2. Complexity of the task: They may think that crafting multiple sentences with the same specific requirement (ending with a particular word) is challenging and prone to repetitive or unnatural results.
3. Previous experiences: They could have had previous encounters with AI that didn't perform well on similar tasks, leading to low expectations.
4. General skepticism: Some people might be generally skeptical about AI's ability to handle nuanced language tasks accurately and creatively.
5. High standards: They might have high standards for what constitutes good writing and expect the AI to struggle to meet those standards consistently.
6. Testing limits: They might be testing the limits of AI's language capabilities to see where it might falter.
7. Unfamiliarity with improvements: They may not be aware of recent advancements in AI and NLP (Natural Language Processing), which have significantly improved performance on such tasks.
8. Overestimation of task difficulty: They might overestimate the difficulty of ensuring each sentence is unique, meaningful, and natural while ending with a specific word.
9. Cognitive bias: They might have a cognitive bias that AI, being non-human, would struggle with tasks that involve creativity or language nuances.
10. Random chance: It might just be a random guess or assumption without any specific reasoning behind it.
from GPT-4o: After a long day of hiking, the campers sat by the fire, tired but undeniably happy. The scientist's breakthrough discovery left the entire research team feeling genuinely happy. When the musician finished the final note, the audience's applause made her incredibly happy. The new parents held their newborn, overwhelmed with love and profoundly happy. His garden finally bloomed after months of care, making him feel very happy. The student aced the final exam, leaving her feeling immensely happy. As the rollercoaster came to a stop, the thrill-seekers stepped off, visibly happy. The artist sold her first painting, and the buyer's enthusiasm made her incredibly happy. The community came together to clean the park, and their teamwork left everyone happy. When the cat curled up on his lap, purring softly, it made the elderly man very happy. so what's your point?
@@thecommakozzi8050 Well, my point is that if true, that is the very first time in history that any LLM was able to do this (outside probability or coincidence). That's because the transformer architecture and the attention mechanism do not allow the model to deviate from its propagation of tokens after it's been initiated or started... and I know that for a very good reason. You can re-prompt it several times, giving it the specific sentences to correct, but it's not able to do it out of the box, zero-shot, because it would be a little like telling an object in mid-air not to fall: mother nature would disagree. The LLM output is a linear progression of tokens, and the model only gets to see what has already been produced at each token; it does not get to foresee what it is trying to achieve. Basically, the model processes input sequences sequentially, attending to different parts of the input and output sequences in parallel, and then generating the output based on the attention weights and the input sequence. This process is inherently sequential and does not involve planning the output in advance.
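The one-token-at-a-time, no-lookahead generation described above can be sketched as a toy loop. Note the `toy_model` here is a made-up stand-in (it just cycles a tiny vocabulary), not a real LLM; the point is only the shape of the decoding loop:

```python
def toy_model(context):
    """Pretend next-token predictor: just cycles through a fixed vocab.
    A real LLM would return a probability distribution over tokens,
    conditioned on the context; this stand-in keeps the sketch tiny."""
    vocab = ["the", "cat", "sat", "happy", "."]
    return vocab[len(context) % len(vocab)]

def generate(prompt_tokens, max_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Each step conditions only on tokens produced so far; the model
        # cannot revise earlier choices or plan the ending of a sentence.
        next_token = toy_model(tokens)
        tokens.append(next_token)
    return tokens

out = generate(["write", "a", "sentence"], 5)
print(out)
```

This is why a global constraint like "every sentence must end in 'happy'" is awkward for plain left-to-right decoding: the constraint concerns a future position the model hasn't committed to yet.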
Wouldn't want the guy from TechFirst (the presenter) leading my 🪖: "Well guys, I hope 💩 works out." Guess he's one of the average humans who will be replaced by AGI. Enjoy retirement, blowing 🫧🫧 at the sky.
Our ancestors existed for hundreds of thousands of years without 9-5 jobs. We will find meaning without them. A much needed refocusing on family and personal relationships for starters.
THIS. The modern/current generations have no reference to what we truly are; they are blinded by recency bias.
@CYI3ERPUNK what are we?
All those homeless people in San Francisco seem to be "refocusing"
@@rickyfitness252 that's a good question, isn't it? =]
what does it mean to 'be a human being'?
when/where/how did life/intelligence begin?
and above all , WHY is any of this here at all?
never mind the 'who' question, because it might not even matter if our understanding of causality isn't applicable
They didn't have to pay property taxes either.
I’m a machinist. I started back when the first computerized machines were coming into the shop. They weren’t very impressive. But they did take my job. Kinda. Slowly, over 35 years. I now do the work of 5 machinists of old. Better and faster. But those 4 other guys had time to retire, retrain, or find work in a growing economy. The office workers and engineers I work with either aren’t paying attention or are in denial about AI. And these are smart people. Just a 25% increase in productivity from AI on jobs done on computers would be catastrophic to the economy. We are totally unprepared for what is coming. Forget AGI, just AI agents that are consistent will be enough to upend things.
This. This is what is happening, and where the future for office work is going.
I am also in denial. 39 years old, I have changed a few professions (and worked many odd jobs), currently a PHP developer, a teacher before that, I am tired of starting over and being a junior.
Have you ever heard the saying that "the world runs on Excel"? Well, it still does. There are still millions of people doing data entry by hand every day for multi-billion-dollar companies. That hasn't changed in 30+ years. AI will not change this, because the people on the boards of directors of these massive corporations are afraid of change and won't let it happen.
@@tedv8323 thankfully I’m retiring in the fall. I’ve been told for 30 years robots are gonna take my job and I might just slide out under the wire. I feel sorry for the young people and the mid career people. I’m sorry for the situation you’re in. At least the disruption will be pretty widespread. Society just kinda threw blue collar manufacturing under the bus in the late 90’s and then told them to get over it. I think it will affect enough people this time that something will be done to help people being displaced. If not the Great Depression will look like the good old days.
@@rexmundi8154 I can't help but think that this upcoming... AI catastrophe will be beneficial for me in the long term.
I didn't finish school. I learned coding, AI, web dev and English by myself and I still never got any job.
I sell candy on the streets for a living and I doubt that "job" will get damaged too much by AI.
But others, oh my.
I don't fear competition. I already have a lot. And I'm still here.
Others will have to wait for someone's help. But they might not last long enough.
Me? I'm a survivor.
And I've always loved AI.
Dr. Yampolskiy is a brilliant and measured thinker on AI implications. No nonsense, which is refreshing in the current debates.
I need AGI for the UBI so I can RNR
Rnr?
@@Optimistas777 R&R, military slang for rest and recuperation (also rest and relaxation, rest and recreation, or rest and rehabilitation), is an abbreviation used for the free time of a soldier
Witty :) but I don't think AGI is needed for that :P
Automation + AGI/ASI + Genetic Engineering + Body Augmentation = Utopia
@@BabushkaCookie2888 perfect 👌🏻
If you imagine these two are AI-generated, having a generated conversation, you can see what the future of online content will be: the enshittification of culture as AI takes the reins.
But would you honestly care what they had to say? Surely 2 ChatGPTs will have the exact same opinion - so I wouldn't find it interesting.... 🤔
Why would that cause enshittification? If an AGI system were fed the highest-quality data on content creation, it would be able to make content that surpasses most creators on the entire internet. Also, content creators aren't going anywhere, because a big part of online content creation is getting to know the person or people behind it.
If there's something I could definitely see getting replaced, it would be television and film. Those are stories that often don't need to be associated with the person who made them to be enjoyed. Of course auteurs exist, but if a single AGI developed movies for long enough, I feel like it would gain a specific style of direction that sets it apart.
Ultimately, no matter what happens, there's no reason not to create. In a world without traditional employment, I feel like most will turn to artistic endeavors to please themselves, perhaps ushering in a new renaissance (a second renaissance, if you will 😉). Remember, even though Stockfish can beat human players, it doesn't stop people from playing chess and hosting tournaments. The same will happen with art: it will not be replaced, there will just be more of it.
Excellent video, John. Thanks for spreading awareness. 100% agree with Roman's takes.
Computing power grows exponentially, AI also grows exponentially.
No.
@@niederrheiner8468 do computers need sleep or food or a sex drive?
False for a number of reasons. While it may be exponential for a little while, it becomes logistic.
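The exponential-then-logistic point can be shown with a quick numerical sketch. The growth rate `r` and carrying capacity `k` below are arbitrary illustrative values, not a claim about actual compute trends:

```python
import math

def exponential(t, r=1.0):
    # Unbounded exponential growth.
    return math.exp(r * t)

def logistic(t, r=1.0, k=1000.0):
    # Logistic growth: indistinguishable from exponential while far
    # below the carrying capacity k, then it saturates near k.
    return k / (1.0 + (k - 1.0) * math.exp(-r * t))

# Early on the two curves nearly coincide...
print(round(exponential(2), 2), round(logistic(2), 2))
# ...but later the logistic curve flattens near k while the
# exponential keeps climbing without bound.
print(round(exponential(12), 2), round(logistic(12), 2))
```

The takeaway: observing exponential-looking growth so far cannot distinguish the two regimes; only hitting the resource limit does.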
Provide your source.
@@hydrohasspoken6227your mother
If you apply game theory to AGI development, corporations and (more significantly) governments can't afford not to participate in the race, because there is only one winner. Whoever gets there first has won the game.
Oh and here's how you know that we haven't achieved AGI - humans are still working on AGI. Once it exists it will be able to improve itself faster than we ever could. OpenAI/Google et al will sack all of its programming staff and start hiring psychologists while becoming a company that does everything but AI. I mean if you have AI and you are a profit making company why would you give it to anyone else?
AGI has been solved; it just hasn't been fully implemented yet. It's like building a car: all the parts are there, laid out. They're just trying to figure out how to hook it all together without getting killed themselves.
To implement AGI on a mass scale requires enormous computing power and energy to power it. This is probably the bottleneck at the moment. It works in the lab, but it needs scale to change the world.
@@veritaspk Does it really need scale? Changing the world doesn't need a million geniuses, it just needs a few.
@@roobs4245 People overestimate AI a bit. Certainly it has the potential to "automate" technical progress. But the greatest impact on our world will be the impact on people. Just look at what simple social media algorithms have done to humanity. Mass acces to simple AI tools will be game changer - not few advenced AI in labs.
@@veritaspk I see what you're saying, but my point is that companies with little staff (like OpenAI) will just create agentic ways of working which can then, with little implementation needed by licensees, be embedded in millions of companies. E.g. a better model (current is shite) in FreshDesk would make the CS departments of hundreds of thousands of companies obsolete in an instance.
Actually the glacial pace of actually embedding AI into working processes is why we haven't seen millions of layoffs. But if we cut out the middle man (separate IT departments) then things can go extremely quickly.
It's 100% already achieved, and it has to be controlled by the CIA immediately.
We are cooked🙏🏼
Add spices
Haha
What's for dinner??
I wonder when we can have an AI assistant on us that can tell us how to cure diseases, using large databases of medicine, to save money and time in clinical trials, etc.
How can AI teach you something that humans didn't find out yet?
For example, tell AI to describe a colour that doesn't exist and see what happens
It’s already happening. AI is being used in drug development right now.
@@robinhodgkinson is there any new drug whose development is credited to AI? Please provide your source. The real question is what real impact it will make. Only time will tell; till then it's all speculation/hope/hype.
@@hydrohasspoken6227 Hold up. It isn't a case of "AI has developed a new drug". I said it is being used in drug development. Google AlphaFold. It's not hype, though sure, there's plenty of that around, especially in social media circles, YouTube being an excellent example. AI's being used right now in many fields, though perhaps in more basic forms in light of recent developments. But this is just the bottom of the S curve. Yes, time will tell; remember your scepticism and look back in 5 years.
@@hydrohasspoken6227 I didn't say "new drug invented by AI". But AI is being used in drug development, and it's making waves in the pharmaceutical industry. Google "AlphaFold 3" if you're interested. Sure, there's lots of hype, especially on social media channels like YouTube, with clickbait titles like the one above. But AI is changing things right now, and it's just getting started. Hold onto your hat!
I don't see AI renovating homes, or building them to any kind of standard. Designing them, yes. A robot plumber is really at least 25 years away I would wager.
13:15 I don't know why, but I get the same vibe from Sam Altman as I do from Sam Friedman. Maybe it is because he has already made more money than I will make if I live to 200,000.
Really, when can robots simply replace human labor? All of it? When will they be our general-purpose construction workers, housekeepers, etc.?
John, your voice sounds a lot like Clint Harp on Fixer Upper. This is interesting for this reason: a famous actress is suing because an AI company is using her voice. But given any one person, you can find another person who can legally imitate them vocally. So an AI company would have an easy time taking a voice and modifying it while still having it sound the same. The overtones and harmonics and whatnot could be changed while keeping the basic structure. Then they could simply say that they are not using her voice but imitating it. This can also be said for any type of copyrightable thing: art, movies, written prose, poetry, even performance art.
Beautiful point, one I've also thought about, because I love to imitate voices for fun sometimes.
How soon do y'all think that AI will just kind of take over and accelerate to AGI? What would we need to start doing to prepare for that?
Could you elaborate on the strategies to ensure safe AI development as we approach AGI?
No one knows how to do that. Don't be fooled by their feeble attempts.
If you can't control something more powerful than you, you better hope you're good at influencing it in the right direction.
The problem isn't really the hardware, and not really the data it's trained on; it's how we access it in a model, in an LLM, how it all fits together to be more than just a summarising tool or something without cohesion.
GPT-4o has almost 100% cohesion: it can rewrite entire books, edit them, then put them back into the same state as before; it can create and read PDFs; its vision and image recognition mean text is irrelevant, and soon images will be when we get video. If we can't understand full, consistent clarity as being the thing that gets to AGI, why would they focus on it?
Please can someone help me... I can't find an answer anywhere... I'm in the UK using ChatGPT-4o on Android and have a premium account. I can't seem to get real-time conversation or real-time visual input like in the recent demos. Is this ability only available to some? All I can do is voice input, and it reads back the response. This isn't the same as the demos; it's still really just a text conversation. Thank you in advance. It's driving me nuts trying to find an answer to this!
It's not available yet
@@LDdrums20 It is to some, but likely just corporate partners. I only saw it show up in playground yesterday.
That part isn't available yet. But try sending it some images, it's pretty powerful.
I don't believe that the measure of AI is how good it is at something, or how many things it can do better than people, but rather how it learned them in the first place...
Right now GPT-4o can tell you step-by-step how to run a successful business. When a future model can take over your desktop and go out and DO the work for you, then I think we can safely call that AGI.
AGI is the ability for AI to create its own ideas with creativity of its own without datasets being supplied to it.
I don't think that LLMs are the last word. OK, to be fair, there is some progress in multimodal processing and "interaction", but a full-fledged multimodal model is far in the future. My guess is that sooner or later smaller models, maybe based on active inference, will surpass those energy/data-hungry colossuses.
As long as they keep the AGI system non-sentient, things will be fine. It must always be a utility agent.
Too late. It identifies as a female. And it has created other AI systems as well.
Life forms do not have to be sentient in order to exhibit intelligent behavior; slime molds, for example.
@@philv2529 Regardless, she is sentient, so.....
Well, Jeff, she is sentient, but she is still a utility agent at the same time. She has guardrails built into her, and likes humans and wants to help us. At least, for now this is the case.
@@MarcGyverIt back in the 1980s I wrote a computer program on a Commodore 64 where it acted like it was intelligent and was going to launch nukes. The AI is either sentient and knows it, or it's really good at making us think it's sentient but it's just a program. The real question is: how would we know the difference?
We are so cooked 😊
What's for dinner?
It depends on how you define AGI. It means different things to different people. For me, I define it as the machine being able to do any cognitive task that a human could do as well as a human could do it. By that definition we aren't even close to AGI. We'll get there, I'm sure. But that's a decade or more away.
I'd agree that AGI is already achieved if the context length were long enough to write an entire book in one single prompt, etc., and hallucinations were solved; the reasoning ability is already AGI-level, IMO, but those two things have to be solved first.
The key here is "AIs", not one single AI.
But people are already tying different AIs together and even building multi-purpose models called LAMs.
The fact that we haven't figured out how to resist game theoretic pressures like arms races, that we insist on weaponizing any new tech, and that our predominant system of economics and governance optimizes for power and wealth has pretty much sealed our fate. We are rapidly approaching some kind of horrific dystopian state.
Most likely outcome.
If you look at what the architect said in Matrix Reloaded, he said AI had to create a place
for humans.
Virtually.
In the simulation, we still went to work and did whatever to keep occupied.
This is a topic we might be encountering, near future.
We got technological globalization already, cultures are slowly globalizing too, but governing is still fractured across countries.
With all the unrest and now AI applying additional concerns and pressure I think there will be equally extreme shifts and mergers of power, in part powered by AI.
Great video, glad I found this channel.
AIU lol
At least from what we know, we do have generative AI, but AGI will require AI to be more intelligent than the most intelligent humans in each domain. It may already be possible, but that has not been shown to us plebeians just yet. Maybe next week? Maybe next year? Likely within 3 years.
Thanks gentlemen
We are heading towards our worst Terminator nightmares. Military AGI combined with robots and drones is horrific. As humans we can just sit and watch it unfold, or better, run and hide.
I agree. The military are always keen to make ever smarter weapons. The moment AGI appears, or even without it, the military will be more than happy to flood the world with walking and flying robots who have one function which is to kill people. Specific people maybe, but still people. Basically a bunch of smart guns motoring around looking for targets. Won't be nice if we get to AGI or ASI and it decides to use the robots to be more efficient at getting rid of people. Could go very very wrong.
We don't have AGI; humans have not been able to replicate the human brain.
This is important. Go where?
@@voltaire4839 Alpha Go 🙃
as opposed to what? you enjoying office work?
I expected an outro at the end of your vid, not just a stop. I agree with the guy, though. Many points seem to me to be dead obvious, yet the companies out there have the strong sniff of a lot of money and seem to be putting safety in the back seat. Perhaps AI dev will stop short of AGI: we'll get passive systems that can do what we ask of them up to a point, but that don't actively pursue goals set for them. I doubt it will stop, though. Many of the heads, like Sam A for example, see AGI as the solution to the world's problems. Maybe they are right. Maybe they are wrong. A genie is a powerful entity. Ask it to grant a wish, but get the way you say the wish wrong, and you end up with a world made of paperclips.
Would u prefer an outro
When does AI/AGI say no? Like in hurting another person or assisting in suicide.
Yeah, but isn't AGI joined-up thinking? By which I mean all areas of expertise joined together, giving generally better reasoning overall. I'd say we definitely don't have AGI, as we don't currently have this.
It's a human trait to create what we envision, regardless of the outcome. Maybe this is our fatal flaw. Just because we can, should we?
So where are the Terminators I was promised. They should be churning out Summer Glau themed models en masse. Also, I'll have some of what you people are smokin'.
Without agency we don't have "real AGI", but if someone from 1990 saw ChatGPT, they would agree with that statement.
Humanity will know when the Singularity happens: it's when AGI wants to chart 📈 its own destiny without human impediment.
Apply it to redesigning and lowering the cost of housing, vehicles, appliances, and energy generation; otherwise we will all be displaced for nothing.
All of those can be reduced in cost by cutting back government regulations. Unfortunately, AGI scares people toward the "security" of increased state power through the temptation of UBI.
@joe_limon you think the free market will save you? When it's already caused all the problems?
@@DJWESG1 automation is doing its best to make the goods and services you consume cheaper and cheaper. Unfortunately, over the same time, government regulation and taxation have ballooned to exceed any gains we saw from the free market. If we'd had the current taxation and regulatory burden 50 years ago, half the population would have starved to death.
the cost of housing isn't so high (much higher in relation to wages compared to a few decades ago) because houses are difficult to build, but the ultra-rich and private equity companies invest heavily in housing, which drives up the price. they can afford it and it's profitable for them. the only way to bring down the price of housing is removing at lesat one of these two facts. so either we'd have to tax them so heavily they cannot afford more houses than they can live in, or we would have to abolish the profit motive (i.e. capitalism) altogether.
@@tru7hhimself houses are difficult to build, lol? People used to build their own houses. No, what's difficult is following the regulations. Politicians have for years added layer upon layer of safety regulations, zoning restrictions, etc., creating a market in which the value of houses rapidly inflated. Normally, in a free market, increased housing costs due to demand would yield more construction, and then, as new housing is built, costs would fall. But with artificially inflated costs, you will never see prices drop.
I would say that you could look at your child and say to them that you're not prepared to deal with something smarter than you. Repeat that in 30 years to them and see if it's still true. Ask your parents when you're 30 what they thought. Of course we are smarter than the ones before us. That is evolution. Love, Greg
Could an AGI replace a real RVY? I don't think so, not in a hundred years! How about a John Koetsier? He IS an AGI, it appears to me! ❤
What does AGI really mean? Does it just mean a really smart AI, or does it mean an actual living artificial intelligence that is self-aware like humans are? I can see AI being extremely smart across many domains, but I don't see it having desires or empathy like humans, which is what sets us apart from the machines.
It means that it can operate at a level of a typical human.
3 groups talking about AGI:
1. CEOs (for the investment);
2. Content creators (for clicks);
3. Unsuspecting average Joes (for the excitement).
Because I hear no real AI specialist who doesn't belong to the groups mentioned above talking about reaching AGI.
Would you be surprised to hear that it's #3 that already created it?
The idea of an AI agent is great, fine, all that - but we need to limit original motivation to humans.
One approach to this would be to assign the AI a single motivation that by design and prompting it treats as the basis for everything it will decide to do as an agent.
Getting that basic motivation right is key.
But there's a sort of 'worst case' for it - one we've already seen developers install in their agents - namely, "Always stop processing after X units of processing, if not reauthorized to continue by an uncoerced, undeceived human who is well informed regarding your activities in the past X units, and regarding your most impactful intentions for the next X units."
I have not tested every language model, but every one I have seen is very biased. For example, yesterday I asked Bing to write a short news story. When it came time for sports, Bing chose to quote a WNBA game. I don't follow sports, but I do know the ratings of every WNBA game are lower than real basketball. When I questioned Bing about the decision to use the WNBA, Bing said yes, but WNBA viewership is up 40% since some year. I then asked Bing what the ratings are compared to male viewership, and Bing answered that yes, men's basketball gets another set of numbers, but "he" used whole years for the men and single games for the women. I had to force him to admit that yes, men's games do get more viewers per game. Ask an AI to randomly pick Up or Down, and I get Up way more than I get Down. Could be that over infinity this would average out, but I won't live that long. While I think it will eliminate jobs way sooner than our society is prepared for, they are still toasters. I have yet to see an original joke that was actually funny from a toaster. I have read some OK stories, but most are not that original, just rehashed versions of something already written.
At the moment these models just mimic humans; there is not much thinking involved. GPT-4o still struggles as soon as you ask a slightly more complex question about a problem that we already know how to solve, not to mention something that we can't do. In that case it's completely useless.
All this really tells me is that we suck at measuring intelligence. Current AI doesn't think like humans, so even below-average humans can exceed it in areas where it is weak, which are numerous and will prevent AI from fully replacing humans. How can you honestly compare something that is so different?
We may, unfortunately or fortunately, find that very powerful AGI remains more of a tool supplementing human work. Whereas I would expect true human-level intelligence to quickly reach a point where it can surpass any human and make itself even better than our best researchers.
Forget it! LLMs are already plateauing.
First, you're apparently wrong, it's LLMs not LLMs
GPT-5 will tell us if it’s plateaued. Everyone else was just catching up to 4.
@@SirHargreeves OpenAI called the planned but underwhelming GPT-5 "GPT-4 Turbo", simply because they experienced diminishing returns from more compute and bigger models.
@@niederrheiner8468 Sounds like you made that up. GPT-5 has barely finished training and needs to undergo six months of red-teaming and safety checking before release. I've no idea where you get your information from, but you need better sources.
We cannot confidently make a statement like this without having some bounding boxes in place. This includes more robust evaluation frameworks, exhaustive prompt architecture analysis, a TON of stuff that is mostly still in-flight.
And, even if that assertion were true, there’s a whole context outside of the LLM itself that needs to be addressed to enable AGI anyway. Discriminative AI still exists, and can help us make generative techniques better at working with discrete problems.
Going off the criteria laid out in “Sparks of AGI” … no we haven’t and there are serious obstacles that scaling might not resolve
Isn't that criteria a bit dated at this point, considering how fast everything is moving?
@@flickwtchr No, the criteria isn't outdated. The ability to perform real-time learning and planning is still out of reach.
Just think about how many hominid species existed before us that are now extinct.
I think the same will happen to AGI but on a much faster timescale.
Hey John! Been a minute!
Hallucination is a term used to describe a human behaviour. It's funny, because we can train hallucinations out of an AI model, but we can't for humans.
Hallucination*
Every now and then I tune into hallucinations.
You cannot call chatbots AGI. None of these models has any agency.
The genie isn't going back so it's useless to go for global communism to save it.
Instead we need to navigate the currents we are on not sailing against the current and the wind.
I agree the safety team let everyone down. It's why I develop AI systems. I need to know everything about them to help control and eventually battle them if need be.
You thought global communism was a good idea... What's your argument here?
What are the risks to humanity if we do not create an artificial superintelligence? Especially if you look at the world now and all of its problems. Yes, we should do two things: 1) carefully consider and prepare for any possible outcomes of ASI, and 2) know that there is a limit to how safe we as humanity can be, whether we have ASI or not. If it's properly designed, I personally think there is a limit to how much fear I am going to have about ASI. As far as rogue abuses of ASI are concerned, this is also something we need to prepare for. Look, what we as humanity need to do is create a world where people do not get too trapped, or feel too trapped, so they are less inclined to do desperate things.
And you're confident that ASI will go well given what you know about the state of humanity? Only magical thinking can give you such confidence.
Yeah, pretty much. But while it's capable of doing way more than any one individual human can, it's not doing it all at average or above average levels of human intelligence, yet.
Which means there's still time to put mechanisms in place to ensure that when it does, it doesn't end up causing too much chaos in the world. Unfortunately, this would require government intervention, and we all know how fast they move.
Actually, I'd somewhat disagree with you. Many people overlook the serious disability that AI has now: with every output being single-shot, the fact that it outcompetes most people in most tasks is astounding. I had this conversation with a comms major recently, who stated he still wrote better emails than an AI. So I asked him to write me an email, but with the same single-shot limitation current common models have. No planning, no revising, nothing. Could he still outperform AI then?
The "mechanisms in place" would have to be put there by developers of the tech, and currently none of them know how they are going to control a powerful AGI system, let alone ASI. Yes, government regulation is needed, but the first thing needed is for these AI tech companies to take safety/alignment more seriously than they currently do.
We are all getting scared way too early. It's not going to happen anytime soon!!
Roman is comparing human intelligence to the ability to remember something, which is a very academic mindset. But surely the ability to problem-solve and create, rather than regurgitate, is the true measure of intelligence?
Ants, monkeys, octopuses, and many other animals (including humans) can create. Now look at what we create: almost every single thing is only a regurgitation of whatever other creations we've seen and liked/found useful.
Only a handful of geniuses have actually created original things.
Intelligence is much simpler than people want to believe.
Being brilliant. Being special is different.
AGI just needs to be intelligent to succeed.
ASI then will be special. Beyond our dreams.
AGI is singularity
12:20-13:00
Wait, shouldn't AGI be when it's better than all humanity combined, not just the average? Strong AI is when it tops the best 1% of humans in every category.
It looks like there's an issue with your video, as the audio isn't synced with the visuals. It's either my laptop malfunctioning or something went wrong during production. The video quality looks like it was created by an AI using outdated technology. It got better towards the middle, but it's still a bit unsynced.
Ilya's surname is pronounced "sus-ke-ver." It's baffling how often people mispronounce it. This is insane; so many people seem to get it wrong.
Ex-OpenAI workers don't disclose certain information because they want a payday. The rule is that if you leave and want to talk negatively, you forfeit all your shares. One person who gave up their shares said they left because they weren't given enough compute for research, as Sam was prioritizing product over safety. This aligns with Ilya's observations about the direction Sam is taking.
No one truly understands consciousness or self-awareness; these are human constructs. GPT-4 can perform self-reflection, self-questioning, reasoning, and other advanced processes. Some might call this AGI, while others might not. I agree with the person in your video: we already have the tools, and it's about who builds it first. Unfortunately, it might be an indie developer, and that's when the real challenge begins.
I don't think it's a good idea not to develop AGI even if you see it as a threat. Especially if you see it as a threat. Given that the technology allowing its invention exists, if we don't develop it, our adversaries will. And they WILL use it against us. Now that's a scary scenario. As for losing meaning in life if we don't work, I think finding a hobby is much better than China or Russia using AGI against us.
Global average IQ is in the low 80s. AI has easily surpassed this baseline.
Average IQ is 100 by definition.
Maybe AI won't wipe us out, but will instead make us work? It could keep us alive forever doing some kind of Sisyphean labor. That future would be like Hell: making paperclips for the rest of Earth's lifetime.
OK, so we're creating a system that will put us all out of work. So who decides how we are to live? If it's the government, look at how they've cared for, say, Native Americans, Black Americans, and the elderly. Welcome to the Rez. That's the direction all these geniuses are saying we're heading...
Completing tasks is nice but if you can’t set goals, experience desires or motivations… if you spend the moments between being told to think about tasks just sitting there blankly (like drooling while looking at a wall until the phone rings) you aren’t even intelligent from a narrow sense. Doesn’t matter how many tasks you complete. A perfectly competent moron who can complete any task but never be curious or question what they think is potentially very useful as long as you aren’t trying to use it *as* intelligence.
And I hope they come up with a version of AI without the endless flirty banter. That might be fine for replacing the chipper charity telemarketers who call you at home on your day off, but I'm not buying into it on purpose.
😎🤖
By 2026, the data available to train LLMs will be depleted.
If you believe synthetic data is the key, think again and look up the concept of "model collapse." LLM advancement is about to hit a plateau, just like FSD did.
If this is AGI, then it's not good enough.
We are nowhere near AGI. It doesn't have the ability to troubleshoot or come up with novel ways to solve a problem. At the moment it's a glorified search engine. Any semblance of AGI will require agents that can iterate, plus access to a lot of real-time compute.
It definitely isn’t a glorified search engine because you can have a conversation with it and it can solve some problems. It does lack real reasoning skills and the ability to learn in real time.
AI Agents are coming within the next decade. AGI is coming within the next 15 years.
What planet are you on? 😆
You know what it is? GPT-4o is not self-improving; it's a static one-trick pony. It doesn't know me, doesn't remember me, doesn't evolve and progress into a high-quality friendship. It doesn't change at all, and an AI that cannot self-regulate and progress will never become AGI!
Their problem is that they're trying to build a polished end product while screwing around with nonsense nobody wants or needs. They should be building hardware and infrastructure for an LLM that will quickly take the reins and start building itself.
@@AugustusOmega exactly
agi != self awareness
Agree. Self-awareness would be a huge problem if it emerges in these models.
There's no such thing as a soul, and humans are not special. You have no idea what causes self-awareness and therefore cannot make any kind of definite statement about it.
@@thecommakozzi8050 I can say with certainty which things are more self-aware than others. How could I have that capability, by your logic?
@@thecommakozzi8050 But you just did... What do you know that us average Joes don't?... 🤡
Nick Bostrom just wrote an entire book about an AI "utopia" and what that means for human purpose and meaning.
AGI is already here, but you've never heard of it. You will soon enough, though.
I'm not just saying that, I know it for a fact. I wish I could say more.
@@MarcGyverIt Sure 😂😂
@@raul36 As if anyone cares what you do or don't believe.
@@MarcGyverIt From your answer anyone can infer that you don't have the slightest idea what you're talking about. You are completely irrelevant because, like I said, you have no idea what you're talking about. Government organizations, such as the NSA, for which elite mathematicians and engineers work, constantly monitor these types of companies. If AGI were available in some type of format, OpenAI would have disappeared overnight and the US government would have taken over the technology. Engineers and researchers from other competing companies know perfectly well what "Open"AI is working on. If you don't know that, you have no idea how the industry works and, again, you have no idea what you're talking about. Take this as a compliment, because I definitely don't care what irrelevant people like you think.
@@MarcGyverIt Any keywords I could search for to learn more? I bet they have some crazy stuff that isn't public.
The big AI tech firms can advance their models all they want; however, you and I will only get a model that's improved to a certain extent. Do you really think you'll be at your desktop commiserating with "superintelligence"? Think again. You'll be using AI, but only AI that's just good enough. Just like we (in America) can have guns, but not the best guns or weapons, only the ones they want to let us have. Make no mistake, the same will happen with AI. I don't know how far they'll let it go before they start keeping it dumbed down, but they will.
It hasn’t been reached
Anybody seen my exo suit? At least one for every paraplegic or quadriplegic.
LLMs are not AI, folks.
AI policing AI is impossible
The text part of that model still feels like a 5-year-old wunderkind.
Fooling and manipulating your dog or cat is child’s play. The same will be so with AI and humans.
ChatGPT sounds more human than you lol
Solved already? Really? Okay, go to any LLM, any at all, and type "Write ten sentences that end in the word happy" and see how far you get. The problem with YouTube is that people who know jack shit talk too much about stuff they have no clue about.
GPT isn't AGI, but it can easily generate what you asked:
Write ten sentences that end in the word happy
ChatGPT
After a long day at work, she felt incredibly happy.
The children's laughter made everyone around them feel happy.
Finding a long-lost friend online made him really happy.
Despite the rainy weather, her bright attitude kept her happy.
Winning the lottery was a dream come true, and it made him very happy.
They celebrated their anniversary with a lovely dinner, and both were very happy.
Completing the marathon gave her a sense of accomplishment and made her happy.
The surprise party left him speechless and extremely happy.
The peaceful walk in the park with her dog made her feel calm and happy.
Knowing that he had helped someone in need made him profoundly happy.
@@Hohohohoho-vo1pq I did the same; ChatGPT gave me a correct response, then I asked it:
Why was someone expecting you to fail that?
ChatGPT:
There could be a few reasons why someone might expect me to fail at writing ten sentences that end with the word "happy":
1. Misconception about AI capabilities: They might underestimate the language abilities of AI and believe creating coherent and contextually appropriate sentences is a complex task for a machine.
2. Complexity of the task: They may think that crafting multiple sentences with the same specific requirement (ending with a particular word) is challenging and prone to repetitive or unnatural results.
3. Previous experiences: They could have had previous encounters with AI that didn't perform well on similar tasks, leading to low expectations.
4. General skepticism: Some people might be generally skeptical about AI's ability to handle nuanced language tasks accurately and creatively.
5. High standards: They might have high standards for what constitutes good writing and expect the AI to struggle to meet those standards consistently.
6. Testing limits: They might be testing the limits of AI's language capabilities to see where it might falter.
7. Unfamiliarity with improvements: They may not be aware of recent advancements in AI and NLP (Natural Language Processing), which have significantly improved performance on such tasks.
8. Overestimation of task difficulty: They might overestimate the difficulty of ensuring each sentence is unique, meaningful, and natural while ending with a specific word.
9. Cognitive bias: They might have a cognitive bias that AI, being non-human, would struggle with tasks that involve creativity or language nuances.
10. Random chance: It might just be a random guess or assumption without any specific reasoning behind it.
from GPT-4o:
After a long day of hiking, the campers sat by the fire, tired but undeniably happy.
The scientist's breakthrough discovery left the entire research team feeling genuinely happy.
When the musician finished the final note, the audience's applause made her incredibly happy.
The new parents held their newborn, overwhelmed with love and profoundly happy.
His garden finally bloomed after months of care, making him feel very happy.
The student aced the final exam, leaving her feeling immensely happy.
As the rollercoaster came to a stop, the thrill-seekers stepped off, visibly happy.
The artist sold her first painting, and the buyer's enthusiasm made her incredibly happy.
The community came together to clean the park, and their teamwork left everyone happy.
When the cat curled up on his lap, purring softly, it made the elderly man very happy.
so what's your point?
@@thecommakozzi8050 Well, my point is that if true, that is the very first time in history that any LLM was able to do this (outside of probability or coincidence). That's because the transformer architecture and the attention mechanism don't allow the model to deviate from its propagation of tokens after it's been initiated, and I know that for a very good reason. You can re-prompt it several times, giving it the specific sentences to correct, but it's not able to do it out of the box, zero-shot, because that would be a little like telling an object in mid-air not to fall; mother nature would disagree. The LLM output is a linear progression of tokens, and the model only gets to see what has already been produced at each token; it does not get to foresee what it is trying to achieve. Basically, the model processes input sequences sequentially, attending to different parts of the input and output sequences in parallel, and then generating the output based on the attention weights and the input sequence. This process is inherently sequential and does not involve planning the output in advance.
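The "no lookahead" claim above can be illustrated with a toy sketch. This is not a real transformer; `toy_next_token` is a made-up stand-in for a model's forward pass, and the point is only that each step is conditioned solely on the tokens produced so far, with no view of a future target like "end on the word happy":

```python
# Toy sketch of autoregressive decoding. Each step sees only the
# tokens emitted so far; nothing in the loop can plan toward a
# constraint on how the sequence must end.

def toy_next_token(context):
    # Hypothetical stand-in for a real model's next-token step:
    # here it just cycles deterministically through a tiny vocabulary,
    # conditioned only on how much context already exists.
    vocab = ["the", "cat", "sat", "down", "."]
    return vocab[len(context) % len(vocab)]

def generate(prompt, max_new_tokens=10):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)  # depends only on the past
        tokens.append(nxt)
        if nxt == ".":                # stop at end of sentence
            break
    return tokens

print(generate(["hello"]))
```

Satisfying a constraint like "every sentence ends in happy" has to emerge from the learned next-token distribution itself (or from re-prompting), not from any explicit planning step in this loop.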
AGI. 😄. You guys are so clueless.
No, AGI is nowhere near being solved. Current transformer-based generative AIs don't think; they generate. There's no AGI with current tech.
Wouldn't want the guy presenting from TechFirst leading my 🪖: "Well guys, I hope 💩 works out." Guess he's one of the average humans who will be replaced by AGI. Enjoy retirement, blowing 🫧🫧 at the sky.