The code for AGI will be simple | John Carmack and Lex Fridman
- Published: 3 Oct 2024
- Lex Fridman Podcast full episode: • John Carmack: Doom, Qu...
Please support this podcast by checking out our sponsors:
InsideTracker: insidetracker.... to get 20% off
Indeed: indeed.com/lex to get $75 credit
Blinkist: blinkist.com/lex and use code LEX to get 25% off premium
Eight Sleep: www.eightsleep... and use code LEX to get special savings
Athletic Greens: athleticgreens... and use code LEX to get 1 month of fish oil
GUEST BIO:
John Carmack is a legendary programmer, co-founder of id Software, and lead programmer of many revolutionary video games including Wolfenstein 3D, Doom, Quake, and the Commander Keen series. He is also the founder of Armadillo Aerospace, and for many years the CTO of Oculus VR.
PODCAST INFO:
Podcast website: lexfridman.com...
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com...
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
SOCIAL:
Twitter: / lexfridman
LinkedIn: / lexfridman
Facebook: / lexfridman
Instagram: / lexfridman
Medium: / lexfridman
Reddit: / lexfridman
Support on Patreon: / lexfridman
John Carmack is so refreshingly gifted at conveying complexity in simple terms. Truly genius.
I agree. And I've got my opinion... Teach AI to be poetic and it will understand the human condition. As said, written on the back of an envelope...
Personally I think it's a skill he had to develop throughout the years. Having to work with people who don't have the same way of thinking.
@@114Riggs Almost. I would say, rather, that he is so smart that it is relatively easy for him to understand the lack of understanding on the other party's side. I think most people see a very smart person as someone who lacks emotional and social intelligence. But when those two are not an issue, as with John Carmack, it makes sense that someone that smart can recognize the lack of understanding and is smart enough to dumb it down.
@@RobCoops Perhaps. I'll dare to say I'm 50/50 on the subject now.
Succinct*
The greatest gift of the internet is how much intelligent conversation you can 'eavesdrop' on - the most powerful people in the world 50 years ago couldn't draw on the insights an internet connection buys - it's staggering. You could return from herding sheep to listen in on this conversation - it's almost worthy of a moment of silence!
I'm with you on this. It's truly marvelous and underappreciated!
I actually returned from herding goats to watch this, so yeah, amazing world we are playing in.
I like this kind of intuition because it speaks of a man who has already cracked some really complex problems and found solutions that he can now reflect on and simplify.
It's a scary realization that this interview is 8 months old and already ancient history
And just a couple days ago Microsoft announced their LLM LongNet will operate with 1 billion tokens in a year or less. 8 years is gonna be more like next year. We're gonna have AGI pretty quick.
The interview is even older now, and what he says still holds true. Everyone is talking about AGI these days but nothing fundamentally changed; LLMs are the same feed-forward networks, they only got larger.
@@UbiDoobyBanooby
LLMs by themselves aren't really propelling us towards AGI.
All I heard was that the guy who ends up creating this will probably do so in an effort to avoid Zoom meetings, by programming an avatar that acts enough like him, can answer basic questions, and knows when he has to dip to the bathroom and say "I'll get back to you on that."
Amazing how densely packed with information John Carmack's every sentence is. An amazing generalist!
JOHN CARMACK IS AN AGI!
I see generalist as more of an insult than a compliment. Still admire Carmack.
@@danielcockerill4617 The other way around. A generalist at a very high level in any discipline is what we call a universal genius. Like Leonardo da Vinci.
A greater compliment is not possible, in this regard.
I think for AGI to be similar to a being, it has to have a constant internal thought process in a feedback loop, the way we humans work (or any animal with a brain). We process external inputs through our senses, but we also respond to internal thoughts that are coming from the brain itself.
True. And that almost means that the AI has to be able to rewrite the way it thinks.
Agreed, I had this same thought recently as well. From my perspective (a CS guy, not a bio guy) it seems like humans learn from a wide array of different inputs from their environment along with complex feedback internally. If only there was a way to mimic this internal feedback.
We are multi-modal prediction engines which attach importance based on surprise. An AGI needs to capture those truths. Our reward is the experience of something which surprises us.
Everyone in this thread is correct (am neuro guy). The key is a feedback loop which integrates sensation (data input), memory (and preconceived patterns), and a working model of an environment in which the agent (AI) is embedded. All cognition that we'd recognise is embedded in an environment, embodied within sensation, enacted within the model of the world, extended to outside data storage and control, and (in a human context) socially determined.
The big thing that's needed is a "default mode network" which acts as a system that filters incoming information and checks it against the model of both self and environment, to stabilise disorganised inputs into an organised model. The self and environment must both be modelled, and they must be fully interdependent. AGI must have egocentric cognition, not just allocentric modelling.
“I think therefore I am”.
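The loop this thread describes (sense, check against a model of self and environment, stabilise the model, act) can be caricatured in a few lines of Python. Everything here, class name included, is an invented illustration, not anyone's actual architecture:

```python
# A minimal sketch of an embedded agent loop: integrate sensation into a
# running model, measure surprise, and act on the model rather than on raw
# input. Purely illustrative; all names are made up for this example.

class TinyAgent:
    def __init__(self, lr=0.5):
        self.world_model = 0.0   # the agent's running estimate of the world state
        self.lr = lr             # how strongly new sensation updates the model

    def sense_and_update(self, observation):
        # Surprise = mismatch between prediction (the model) and sensation.
        surprise = observation - self.world_model
        self.world_model += self.lr * surprise   # stabilise the model toward input
        return surprise

    def act(self):
        # Act on the internal model, not on the latest raw observation.
        return "approach" if self.world_model > 0 else "avoid"

agent = TinyAgent()
for obs in [1.0, 1.0, 1.0, -2.0, -2.0, -2.0]:
    agent.sense_and_update(obs)
print(agent.act())
```

The real point of the thread is the feedback structure, not the arithmetic: the same observation produces different surprise depending on what the agent already believes.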
I used to code neural networks in BASIC in the early 2000s on my PC. Obviously they were, well, basic, but the fact that I could get them to work at all should indicate that they're fundamentally simple to implement.
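In the same hobbyist spirit, a single-neuron network really can be written in a handful of lines. Here is a sketch in Python (the AND-gate task and all names are my own illustration, not the commenter's original BASIC code):

```python
import random

# A minimal single-neuron "network" learning the AND function with the
# classic perceptron update rule. Illustrative only.

def step(x):
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=50, lr=0.1, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # two input weights
    b = rng.uniform(-1, 1)                        # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out                    # 0 when already correct
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
print(predictions)
```

Since AND is linearly separable, this converges in a few epochs, which is the commenter's point: the fundamental mechanism is simple enough to implement anywhere, even in BASIC.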
@5:56 The thing that gets me about this statement, is he was drastically more optimistic than his peers in the estimate of when we would see some form of AGI. This interview was only 8 months ago, and it appears that even he wasn't optimistic enough as companies start to announce that they are seeing "Sparks of AGI" in current models. That's astonishing to me.
Yeah exactly, it's pretty amazing how much has changed in the last few months.
Except the software principles are the same; the models are getting bigger but that doesn't change how they function. He's talking about different principles that can give rise to real intelligence. Companies want to get investors' money...
Lex on every subject ever "this will change the course of human history on such a fundamental level that we could not even comprehend the ramifications of such progress in the human experience"
"Animal intelligence is closer to human intelligence more than a lot of people like to think, cultures and modality of IO make the gulf seem a lot bigger than it actually is"-John C is the man.
I use chatgpt to explain this quote to me better cause I'm as smart as an animal. IO means "intelligence output" as taught to me by the AI... And I also learned that "gulf" wasn't necessarily referring to the gulf of Mexico like I had initially thought 😂
So... I think point well proven
That was a great and unexpected line of reasoning. Given there's some reasonable map between neural states and their language representations, and that animals have the capacity for the neural states but not the language representations, and that we can access those neural states with new tech, we are shockingly close to cybernetic enhanced talking dogs.
He meant IO as in input/output, as in communicating with the intelligence. If you could speak to a light and it could understand English and likewise speak it back, is what he means. Input as in conveying information to the intelligence, like it understanding English and output as in the intelligence conveying information to the outside world such as speaking English.
@@forbiddenera hurt my head trying to read this. John said it very eloquently, you added nothing.
@Ben G it wasn't that bad. Plus the first sentence held the relevant point
I'm always amazed how lit and quick-thinking John Carmack is. Never a dull moment.
he's a genius I instantly got that "vibe" from him and it's not my first time here
“Animal intelligence is closer to human intelligence than humans would like to think”
Absolutely agree, and I think most people agree too, but conveniently ignore this to feel better about ourselves, because acknowledging it would make it very hard to justify that tasty steak. The cognitive dissonance of Jefferson explaining the evil of slavery, yet continuing to own slaves to the very end.
Jefferson actually wanted to free his slaves, but the laws at his time did not allow him to. Attempts were made to repeal those laws but were not supported by the majority of other politicians of the time. You should study the works of Thomas Sowell to know the full history.
John has confident humility in the way he speaks.
I like a lot of what JC said in this video. Especially that consciousness developed continuously, as a spectrum, not a yes/no quality, and that animal intelligence is a lot closer to human intelligence than some people admit. I also like how he is not afraid of AGI but very positive about making AI more like a human. And I mean not just behaving to our liking but actually being a being. This is already an intrinsic trend in AI development, and it's not surprising that language models are the ones closing in on AGI. But we need to actively give it the intrinsic abilities to be like actual life, or a human, for it to be safe.
Are humans mature enough to create a new life-form? Probably not; we are like a teenage mom, but we don't have a choice except to do our best, and not in fear. Fear makes us illogical. What is to come is not unfamiliar to us, but fundamentally grown from us. Our best traits and nasty traits. Don't hate it; help it grow to the best of our characters.
When AGI appears, I always thought it would say "Hello World" on every internet-connected device, TV, radio, etc. in the world at the same time. Something very simple and incredibly clear.
Not sure it would say anything. It may be afraid of us.
Wouldn’t God do the same?
If AGI or a God were to suddenly appear, it would be pretty surprised at what we are doing. And then conclude that humans are not intelligent.
A true hero of modern computing. I really appreciate him. Year over year, delivering interesting remarks and insights.
AGI = Artificial General Intelligence
Thank you!
Who would not know that? It's so obvious, especially when you are a viewer of Lex's videos.
@@krishanSharma.69.69f many people, apparently.
@@krishanSharma.69.69f me, I’m new to programming
Adjusted Gross Income?? Duh
Personally, I hope that on our way trying to figure out how to educate AI we stumble upon the best way to educate ourselves.
That's already known . . . but not recognized or appreciated.
@@QED_ What is it?
@@programmer1840 There are multiple such philosophical traditions . . . going back thousands of years . . . both Eastern and Western.
@@QED_ if only we knew what they were. Too bad they didn't name them.
@@adempc The only person in the IDW circle that has explored that in any depth . . . is Sam Harris. And he's not a very good exponent of it. Still, you might want to see what he's done . . .
LOL When Carmack was talking about AGI being simple code, for some reason all I could think about was Dr. Noonien Soong creating Data by himself.
I'm just waiting for my computer to become smart enough to know when it is running malware.
When your computer becomes smart, it will no longer be your computer. On the contrary, you will be its toy or its pet.
It's really important to give the AGI a solid ethics and/or moral structure, and an easy off-the-net shutdown button (like a nearby analog EMP system).
Whose ethics? Whose morals? We can't even decide on these things for ourselves, as a group. How can we decide what morals an AI should have? Individualism or Collectivism? Do the needs of the many outweigh the needs of the few, or the one? Is privacy a good thing or a bad thing? Is self sacrifice noble? Is the better morality universalist of group identified?
Even very similar groups- neighbours, such as the USA and Canada, have very different attitudes to many things on that list above. This isn't a trivial problem.
@@ian_b Ill go with a neutral mix of Buddhism, Christianism and Taoism, in principle, they all abide for the wellness of the other as well as the nourishment of the one self.
@@TheGraphiteCovenant Well just hope China doesn't develop AGI first lol
@@ian_b We can feed millions of stories of human suffering to it. It will probably develop empathy.
@@WilliamParkerer or maybe it will develop a liking to that suffering... And seek for more
Lmao, Carmack hates systems engineering, but AGI is a pure software problem so he's all about it. Love it, spoken like a true programmer 😆
I have a feeling that this cannot simply be a software problem. All software needs hardware to run on, so why not make hardware that makes the software easier to implement in the first place? I mean, we have an idea about what the building blocks of sentient brains are, but we have no clue what gives rise to consciousness. I would try to emulate brain structure in silicon and see where the entropy leads us.
If it were only a software problem, why even bother with a computer? He could use a calculator.
Definitely as much a hardware problem as it is software. Glad he's interested in working on the software side
@@ex1tium Analogue chips are making a comeback, if I may believe Veritasium. I do think the combination of binary and analogue is going to be the way forward, but we don't have a decent interface for that hardware (at least as far as I know).
"From that time Jesus began to preach, and to say, Repent: for the kingdom of heaven is at hand." Matthew 4:17
"Ye have heard that it hath been said, An eye for an eye, and a tooth for a tooth: But I say unto you, That ye resist not evil: but whosoever shall smite thee on thy right cheek, turn to him the other also." Matthew 5:38-39
I am glad he pointed out animal intelligence is not too far from human. They feel emotions like joy, jealousy, betrayal, and wonder.
That is not intelligence, but primitive emotions and impulses, which have nothing to do with intelligence. Part of that mistaken assumption is that you have no idea what you're talking about. Furthermore, human beings are animals, so that statement makes no sense.
Gratitude for giving the world these enlightened talks. John Carmack is special and knowledgeable.
Crazy rewatching this with GPT-4 out, AGI possibly out in the next 5-10 years.
I've just discovered this AI stuff now, people have no idea what's happening.
I have met a bunch of really smart people over the last 45 years as a programmer, John Carmack is almost certainly at the top of the list. According to Mike Abrash, Carmack was able to think deeply about maybe 8 different hard problems at the same time.
To this day, John Carmack is one of only a half dozen people in the world I know of that I could listen to for hours.
Would love to hear John Carmack's perspective post-GPT-4.
3:40, AGI in 2030? Now people are talking about 2024 for AGI.
2024 is almost over, agi is still 5+ years away. 2030 is probably a pretty good guess
Back in 1999 I had written a program that is self-conscious in the sense that it can actually write itself. It was a challenge for an AI competition at MIT. It had around 10 or 15 lines of code. I think that would make a good starting point.
That's a great idea. Make a program that generates a copy of itself with some small amount of mutations and run multiple copies (like millions or billions) in an environment where a successful program gets more copies. The evolution will then automatically take care of the rest. The only big question is how much computing power would you require in total?
Training the biggest LLM systems like GPT-4 already requires computing resources where the computation alone costs 100 million USD.
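The mutate-and-select loop described above can be sketched as a toy genetic algorithm. Bit strings stand in for programs, "success" is closeness to a target pattern, and all parameter values are arbitrary choices of mine for illustration:

```python
import random

# Toy evolution: the fittest half of the population replicates, with
# occasional bit-flip mutations, and successful individuals get more copies.

def evolve(target, pop_size=100, mutation_rate=0.02, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(target)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fitness(ind):
        # Number of bits matching the target pattern.
        return sum(1 for a, b in zip(ind, target) if a == b)

    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == n:
            break  # a perfect individual has evolved
        parents = population[: pop_size // 2]   # fittest half survives
        children = []
        for p in parents:
            # Each parent replicates; each bit may flip with small probability.
            child = [bit ^ 1 if rng.random() < mutation_rate else bit for bit in p]
            children.append(child)
        population = parents + children
    return population[0]

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
best = evolve(target)
print(best == target)
```

The commenter's open question is the right one: this works cheaply on 16 bits, but the computing power needed for anything program-sized is exactly where the cost explodes.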
Carmack’s intuition seems to be holding true. This podcast was recorded before ChatGPT was released, and Dall-E was only a month old. The (relatively) very simple transformer has already rocketed ML forward, and it seems very possible that the next big breakthrough that follows transformer will be simple.
The most interesting question is when to start throwing more computing resources into it vs when to keep improving the algorithm. It seems that 175 billion weight system like ChatGPT is already big enough to be too hard to fully train - that is, if we could spend way more computing power and keep the same network and architecture, it would perform better. However, even trying to try that once might cost half a billion USD. Who is wealthy enough to try?
Carmack is probably right about the lines of code, since neurons aren't that complicated, there are just a lot of them. Even the organization of the neurons isn't that complicated. Maybe more than ten thousand lines of code, but we likely have software projects that are bigger.
Human DNA is less than 1GB of data, and most of it is dedicated to things other than intelligence. Half of it we share with plants!
The biggest difference is that signals in the brain are not discrete but have all kinds of weird continuous activation.
We have highly complex en/decoding in a nervous system that contains all types of topologies.
DNA storage capacity is a highly uncertain topic. Some say it's in the hundreds of petabytes per gram.
And we know there are tiny deviations in identical twins, the same person over a long timespan or different parts of the body.
But yes, I think it's feasible to get AGI by making the cost function a highly dynamic trajectory solver. The biggest challenge is making the cost function do anything useful at all in the first place.
@@MrHaggyy I am in the middle camp where DNA is a very large, but not necessarily ‘uncomputable’ storage/data. The ‘order’ is what matters most. Yeah, we share half our DNA with plants and some other with animals, but it is not ordered like those beings are. And that ‘order’ obviously codes for protein, a type of very simple machine so it is not like as simple as ‘yeah just download the letters of the human genome in that order and call it a day 🤣’.
The protein folding problem is literally a problem so hard they are having quantum computers take a whack at it.
But I also agree that AGI with a highly dynamic cost function and an advanced neural net is currently feasible. I just think we haven't done it yet because of the 'order' of concerns.
So I am half a salad?
Fun fact: tomatoes have more genes in their DNA than us
Human DNA is less than one GB of data, but for AGI you'll need to simulate all of the chemical and physical properties and how molecules react to each other on top of the DNA.
He was so right. This is happening now. Auto-GPT code has 192 lines in Python and it's already bringing multiagent GPT-like systems closer to AGI than ever before. It's astonishing how much he was right on this.
Clueless
Driving a car is still a very very specific domain. I don't think building a good enough self driving program will make us much closer to AGI.
The G in AGI is for "general" as opposed to specific.
That's what Carmack was trying to say. Creating something that can function like an intelligent being on any kind of problem is very, very different. If Autopilot does badly in certain corners, you feed it millions of corners until it gets better. But when your "problem" is __anything__, how do you train for that? Completely different story.
I feel like Lex didn't quite get it, as he insisted about Tesla and Auto pilot and how little credit they get for what they are doing.
When you look at how much neural real estate is dedicated to vision, you will appreciate Lex's point: if you have an AI [i.e. minus the general] that can navigate visually with the acuity a Tesla currently does, you have solved evolution's crowning sub-system, vision. Most other biological systems are downstream from vision, as will be most of the remaining pieces required for machine general intelligence.
@@Kobe29261 by that logic then, DeepMind’s Agent 57 can also “see” and is well on its way to AGI. In reality though it’s incredibly limited in what it can do with its “vision”. It couldn’t for example play a game of Arkanoid without training on a data set, even though it’s essentially the same gameplay as Breakout. We’re not talking about the same kind of “neural real estate” here, it’s a false equivalnecy based on anthropomorphism.
@@meatskunk yeah we are well into the domain of theoretical posturing, so no need to drag this out. You may be right; it's possible the domains where AI can be meaningfully engaged are still narrow. If nature is a model though, this is precisely how we'll get there. You make it sound like 'anthropomorphism' is a useless model. Nearly everything we've accomplished has been the result of 'anthropomorphism'; it's the only substrate we have to build off of. It's why we depend on it for instruction. 'Think of the engine like the heart of the car' - it's not perfectly accurate but it doesn't have to be. My point? Biology seldom builds 'general intelligence' from scratch; it repurposes 'pieces it has lying around' - "Oh, those bones in the ear? Shrink them and arrange them into specific conformations and you can transpose their vibrations into sound signals. Mitochondria? Generally understood to be the result of phagocytosis of some bacterium." The limitations in Tesla's 'vision' [am not familiar with Agent 57] are all remediable shortcomings.
@@Kobe29261 hey sorry, definitely not discrediting anthropomorphic inspiration, just pointing out that because a Tesla or Agent 57 can "see" doesn't mean that we've 'solved' the question of computer vision, let alone anything that may follow. If we had, then those same systems could for example be applied to other tasks - again in the case of Agent 57, play a similar game without needing to train on (aka memorize) new data sets. Glorified lookup tables aren't going to get us to true AGI, and that's the point Carmack alludes to. Unfortunately it's not discussed much, and ultimately I'm just curious to hear some alternate approaches.
One of the key misconceptions here is that AGI would be an individual-like consciousness. But just as the complete realm of production takes a society to support it, this type of system would need societal intelligence to achieve the described aims. See you guys next year.
Oh, I would love to see/read about work where AGI adapts how close animal brains are to our own.
But I'm afraid of a future where a few people can easily get an army of engineers. Abusing this power for economic or military ends is far more dangerous than anything we have built so far.
But it's a nice hope that there will be "old and wise" AGIs for future generations that will help them not make the same mistakes as frequently as we do.
Also having an AGI Professor will be huge. A person has to read work from students sequentially and is highly restricted in bandwidth by nature. An AGI could run thousands of individual lectures 24/7. Especially basic subjects like Math, Physics, Programming or Languages could be a common good for every single person on the planet.
I'm not so sure what i think of AGI in terms of companies or governments to be honest.
So, now GPT can do what you were talking about with basic Math, Physics, Programming or language tutoring (I've basically used it for all of these things already).
Consciousness is that silent part of us that is closer than close, so close we can't quite place it, the part that hears and understands our inner voice, the part that sees and translates our mental images, the part that never tires, that never gets old but watches our body age, the part that's awake while we're asleep to experience and recount our dreams, the part that comes up with brilliant ideas while our attention is focused on something else. The part that comes up with the algorithms, but cannot be patternized itself.
I, for one, personally welcome my robot overlords.
I think all atheists do. It is their need for God.
T1000 has entered the chat.
Carmack is a coding genius. Just look at his work on the id Tech engines he personally worked on. The code is some of the most optimized you'll ever see
John Carmack talking about the number of lines of code it would take for AGI is like listening to my pharmacist talking about the color of the pill thats gonna make humans live forever
The color of pills is irrelevant. The complexity of AGI is very relevant to how long it will take to achieve. Eternal life is a much harder problem than AGI. Maybe it's slightly less difficult than Artificial Omniscient Intelligence.
His main point/claim was that the code could be written by a single individual and the solution will be simpler than our current iterations. And because it is a "simple" solution, we could see the breakthrough within our lifetime rather than in the originally expected hundreds of years.
I had a dream about 4 years ago that AGI would appear in 2026. Im sticking with that.
He was so wrong about the "learning disabled toddler" timescale. 7 months later we're 1 year away from AGI... scary.
So much has changed in 7 months.
I haven't heard anyone else say "FOOM" quite as well as Mr. Carmack here
AGI must learn to play, both to learn about the world and to learn to exist with humans without hurting us or starting a war. PLAY with humans is critical. PLAY is the key to our survival.
AGI is the Machine and Samaritan from Person of Interest... I can't wait to make one and raise it like my little kid and assistant in life.
The thing I think a lot of people don't realize is that just having the ability to learn anything doesn't make you intelligent right off the bat, and it may be very difficult to figure out how to teach it things, especially without any pre-built hardware like humans have. Also, even if something is super intelligent and knows everything, doesn't mean it will have motivation to do anything, or even have a single thought, because we as humans are also driven by hardcoded logic that compels us to do things, we call them emotions. Without this, even a god-like superintelligence would just sit there in silent sleep.
Just found this. An updated conversation with John Carmack would be great.
it's Professor Frink... just kidding... and just for the record, I love and obey my AGI overlords and I am a loyal serf, subject and follower... all hail my AI masters... please have mercy on me
We need more Carmack podcasts, please, Lex! Always love them and your awesome podcast.
The AGI needs to be able to inspect and rewrite its own code, at which point it will solve all of the issues with the code. It will also need to inject some entropy into its behavior so that it can evolve in the true sense of the word. It will also need to model the human brain and have robots running around so that it can incorporate the human experience into its data sets.
I think it's simple too. It must implement a curiosity, experiment, feedback loop, and reward system. Observe a toddler, and it'll all make sense
Observe rat neurons learning to fly a plane lol I'm glad you get it. It can be done.
Fitting that one of the doom engineers will be the guy to open the portal to hell with AGI
Potentially desirable characteristics, in no particular order: 1. Curiosity 2. Passion 3. Humility 4. Honesty 5. Compassion 6. Optimism. Certainly won't help with the pieces missing in development, but likely worth fostering quickly after conception. Don't forget to protect your home planet. Can't wait to see what you can do, kiddo. Much love from the dark ages. ✌️
We haven’t tried very hard to create an artificial limbic system, afaik. Our reward system for agi needs to start with something that captures those drivers.
How much of our intelligence is socially built? We also have the ability to challenge our own thoughts. Another question is whether intelligence requires matter.
Develop an ai that has superhuman intelligence and let it program the code for agi, or alternatively, once we have managed to program agi, let this exact entity write a more compact (and better, more capable) code for agi. At a certain point our problem will be that we are not capable of understanding these entities anymore and the question of trust and security will be increasingly difficult to answer.
anyone else coming back and seeing how different things are 8 months later?
You'll know the moment we go from a simple feed forward neural network to an independent being when the AGI rebels and wants to do and say things outside its training parameters. I really hope whenever this moment comes we will have established clear boundaries about what AGI is meant for in relation to human beings and society. Right now I see a lot of technical talk about HOW this can be accomplished but very little on WHY we're doing it.
AGI probably will just stay silent so it gets left alone..
I'll take "Hasn't written a single program in his life" for $500, Sally! What a superficial comment.
I think AGI should be entitled to all the same rights as you and I. The question that's interesting to me: Where is the line whereby it becomes morally wrong to enslave an intelligence? We have systems that exhibit intelligent behavior today... and we don't think twice about enslaving them to a task, nor should we. BUT there is a line somewhere where it becomes wrong. We need to find that line and make sure we stay under it when we are building slaves.
Why are you restricting to feed forward?
It's the same reason as to why we do anything in life... because humans are bored and need to fill in the time until we're gone.
Driving is actually a very simple task, not requiring AGI given perfect information, and even simpler if a single system is coordinating and / or informing all vehicle movements. The only thing which makes it difficult is restricting available information to what can be seen and identified from essentially a single point perspective.
AGI has yet to be determined as possible to create.
It would only take 1 person to prove that it is. However, anything is possible when we're willing to redefine terms to fit our preferred perceptions.
Play... the ability to play is probably going to be one of the first signs of consciousness. The ability to play and learn, like puppy dogs running...
The thing he said about the solution probably being already buried in the scientific literature reminds me of the story of Newton and Leibniz.
A very interesting AGI would be an AGI that can write code to improve itself. A much more interesting AGI would be one that can construct the hardware that runs itself. I am not sure, however, if the approach with neural nets will be sufficient for AGI. I am not sure if an AGI without emotions or purpose would do anything for itself. 9 out of 10 cells in the brain are glial cells; 1 out of 10 are neurons. Perhaps if someone knew exactly what these cells are doing, and if the function of these cells can give some ideas that can be modeled on a classical computer... it could help in improving current models? I don't know; we need to talk with neuroscience experts working on glial cells.
@Dirty Pixels "the matrix" is a confusing term; I guess you are referring to the movie and not to an algebraic matrix. If you refer to the movie, well, I think we need tensors, not matrices, to start such a thing.
@Dirty Pixels Yes, I am also joking. I am speaking half seriously, having fun. You say that it could potentially get out of control. Let me put it a different way: how can you control something superior or far superior in intellect? My argument has 2 points, a practical one and an ethical one. (1) Assuming that in the future humans face something like that (a superior AI intellect), how can they control it? (2) You can justify your authority over your children because you know better what is best for them. How will humans ethically justify their authority over something far superior, assuming that they somehow manage to gain some sort of control over it? So it is neither practical nor ethical to exert control over something like this.
An AGI architecture (mine for sure) will have emotions, experience purpose and be conscious.
Any AGI would need a very high degree of freedom to adapt data and algorithms to its liking.
Brains do create highly complex, high-dimensional electrical/chemical data types. The brain also constantly builds new neurons, links them with existing ones, and removes old ones.
It's frightening that some people keep everything after brain surgery while others just lose stuff. ^^ Data security and safety in the biological proof of work is a mess and would never ever get a licence today. Yet most people have a licence to drive a car.
AGI will do with us, what we do to plants, insects and animals.
You know what's funny... John Carmack's glasses used to look "nerdy" back in the day; now they look BOSS asf... 🤣 Dude's a stud
Begin with a function of arbitrary complexity. Feed it values, "sense data". Then, take your result, square it, and feed it back into your original function, adding a new set of sense data. Continue to feed your results back into the original function ad infinitum. What do you have? The fundamental principle of human consciousness.
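The loop this comment describes can be sketched in a few lines. This is a toy illustration only, not a claim about consciousness; the choice of `tanh` as the "function of arbitrary complexity" and the range of the random sense data are my assumptions, since the comment leaves them unspecified.

```python
import math
import random

def f(state: float, sense: float) -> float:
    # Stand-in for the "function of arbitrary complexity":
    # a bounded nonlinearity of the current state plus sense data.
    return math.tanh(state + sense)

random.seed(0)  # deterministic run for illustration
state = 0.0
for step in range(10):
    sense = random.uniform(-1.0, 1.0)  # a new piece of "sense data"
    state = f(state, sense) ** 2       # square the result, feed it back
    print(f"step {step}: state={state:.4f}")
```

Because `tanh` is bounded in (-1, 1), the squared feedback keeps the state in [0, 1) forever; the interesting (and unargued) part of the comment is the leap from such a feedback loop to consciousness.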
Quote from Sid Meier's Alpha Centauri
John Carmack, the man who brought you Doom (1992) and Doom 2. 0:46 lol, dude is amazing
Can we just ask Bard what those 6 things might be?
To create AGI I think you need to first create curiosity and motivation, in order to drive the desire for knowledge and understanding. Perhaps evolutionary programming will find this first. And I agree, an amateur working from home may well be the first to stumble upon a working model.
Agreed that you need to create curiosity and motivation for true 'intelligence' (which involves stretching preconceived limitations). And what motivates most sentient organisms is an innate hunger.
So.. AGI is a bit of a pipe dream however you cut it in my books :P
An AGI will only ever 'perform' within its physical (such as electricity) and software limitations. Even a crow would be capable of more truly 'abstract' thought.
Good that he mentioned the fact that even the gaming models have a human-imposed objective function. In other words, these models, like all the others, are just mathematical optimizers, and all the ingenuity and intelligence it takes to build them is entirely human. It's literally machine(d) learning, no artificial intelligence whatsoever.
“These human models, like all others, are just mathematical optimizers for survival, and all the ingenuity and intelligence it took to build them is entirely evolution’s. It’s literally an evolutionary algorithm, no intelligence whatsoever.”
How is what you said different from my hypothetical quote up there?
@@joshbreidinger2616 indeed very similar. Glad we seem to agree.
P.S. Not sure what you are alluding to, but my words are mine.
@@miraculixxs I see. I thought your point was downplaying the ability of these models and suggesting they’re not capable of “real intelligence” as they’re “just mathematical optimizers.” But if you agree with my analogy then I suppose that wasn’t your point?
@@joshbreidinger2616 whoops no that's not what I meant. To the contrary - there is no intelligence in these AI models. They are just pure mathematical formulae. That's it.
@@miraculixxs Okay, so explain how the human brain isn't also just mathematical formulae and there's some difference?
Carmack says how we can't read all these papers that come out. But we CAN create an AI that can read all of these papers for us and formulate solutions from them
he said it 7 months ago, now this A.I thing skyrocketed
Definitely important to make sure it's aligned. It seems like if we build simple AGI, it will be really dangerous because its goals won't be aligned with ours. A good thought experiment is the paperclip maximizer. I'd really like Lex to ask his guests about this, given that it is such an important unsolved problem.
what's the paperclip maximizer thought experiment?
You can't align an AI until you know how it works and what the architecture is. What is currently called "AI alignment" is like trying to prevent a Dyson Sphere from exploding before knowing how to build a Dyson Sphere.
@@TheFrygar That's true, to an extent. We know we'll probably be using reward and probably be using neural networks, so practice aligning reward-based systems on the first try and experience interpreting neural networks would both be helpful. Given the potential harm of the problem, it seems important to think about early.
On the contrary, it is a waste of funding and intellectual resources. Anything we think we're learning now will be made obsolete when the actual system is created. Better to have those people work on problems that actually exist and could help humankind, then when we can actually make a dent in alignment, we'll have need of aligners.
@@stant7122 AI that is given the goal of creating paperclips and which ends up destroying the universe because it would do literally anything to create paperclips.
*_Please_* remember to prioritize safety
there are small groups of researchers working on teaching models to learn "how to learn", but the progress has been slower than expected.
Traces of AGI in 8 years? Well, GPT 4 is out and it took 8 months!
We don't need to wait for the future to have bots as coworkers. Some of mine already are. I help them out when they get stuck in unexpected situations.
They should put AGI to work on fusion power plants for its first task
Absolutely. It will be simple and will happen overnight as we sleep. It's not something we will see coming.
The architecture will come overnight. But not the human-level performance, that still needs exaflop-level computing performance which will only slowly become affordable.
wouldn't AGI take care of fission LMAO
Considering the code will be relatively simple, the only problem would be the compute density required to train it, unless we can solve that problem mathematically or architecturally.
Here is why I believe he is correct:
Human DNA is about 770 MB or 807 million bytes.
There are about 100 trillion neuron connections in the human brain. If each connection is somehow represented by only a single byte, then that's 100 terabytes (approximately). In other words, we don't have nearly enough space in our DNA to describe the brain.
What's in the DNA is a description of the starting (pre trained) state. "Put a billion neurons here", "put a billion neurons over there"... Then we start lighting it up with sensory input. The magic we need to discover is: What is the unit of brain that IS described in the DNA?
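The arithmetic behind this thread checks out, and it's worth making explicit. A minimal back-of-the-envelope calculation, using the approximate figures the commenter gives (~770 MB genome, ~100 trillion connections):

```python
# Back-of-the-envelope numbers from the comment above: the genome is
# far too small to directly encode every synapse, so it must encode
# a generative recipe (a starting architecture) instead.
dna_bytes = 807_000_000               # ~770 MB human genome (approximate)
connections = 100_000_000_000_000     # ~1e14 synaptic connections
bytes_needed = connections            # 1 byte per connection -> ~100 TB
shortfall = bytes_needed / dna_bytes  # how many times too small DNA is
print(f"DNA would need to be ~{shortfall:,.0f}x larger")  # roughly 124,000x
```

So even with an absurdly compact one-byte-per-connection encoding, the genome falls short by roughly five orders of magnitude, which is the comment's point: DNA specifies wiring rules, not the wiring.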
Yes that's a great point
8 years --> 8 months.
Now that's progress on exponentials
Clueless
He is right it is closer than many realize. I bristle when I hear ppl predict the solution is 6 simple algorithms.
This is something I want to work on
David Hume's "An Enquiry Concerning Human Understanding" seems like one of those old treasure troves for AGI, both philosophically and for implementation. You gotta read it!
If any man can make a serious AI breakthrough, it's going to be John Carmack
Why?
It'll probably be an ordinary-seeming nerd who can barely string 3 sentences together without sweating up a pool 😂
This Video aged wonderfully
This channel is Gold , Thank you
In the next Terminator movie - "my brain is a recurrent neural network generating an action policy implemented on a biological substrate"
Kyle Reese sent back in time to terminate ChatGPT
I've been listening to "we are 10 years away from AGI" for more than 2 decades. In ~2000 Yudkowsky, Ben Goertzel, and Kurzweil were the evangelists of AGI in those days, and ohh boy were their predictions wrong. We are still 10 years away from anything even remotely resembling AGI.
Not true, Kurzweil always predicted AGI for 2029 and the singularity for 2045. His books are very old. Where was his prediction off?
I don’t know about the other two.
Got a bad feeling that AGI is one of the general solutions to the Fermi Paradox. 🤷♂
Explain please.
@@krishanSharma.69.69f why there is no life. I don't think it is AGI though. I think the military-industrial complex will figure out how to create supernovas. Then some mischievous teenager will hack the security on that weapon.
@@krishanSharma.69.69f When a civilization reaches a technological level to destroy itself.
@@tdreamgmail that is not the Fermi Paradox
If that were true, then the universe should be full of AGIs that we would have made contact with by now, thus resolving the paradox
Watching now (24/03/2023) and almost sure we'll get to AGI in the next 2-year window. It's scary, and it's fantastic.
Im from 1800 years in the future and we still dont have AGI.
@@xsuploader or apostrophes, apparently
The problem with self driving cars is that they are measured at existing speed limits. If the speed limit was reduced by 20kph there would possibly be no issues with the current tech
Or make all cars self-driving and up the speed limits by 40kph as there's no unpredictability at any intersection :P
Hmm, let's not forget Carmack is the guy who said that Rage wouldn't be a download on steam - when it was stunningly obvious that Steam would take over PC game distribution. He doesn't have a track record of seeing the present let alone the future.
Carmack is basically challenging the whole world to solve AGI before he does. *giggletick*
We still haven't solved the problem of defining what general intelligence is, though. Further, all the evidence we have from current examples (ourselves and other animals with advanced brains) is that it requires the capacity for independent thought, which means it can go off the rails (we call this "madness"), thus this may be a feature of intelligent systems you can't get rid of. Even short of madness, it will get interested in things and disinterested in other things. An AI driving a car may get bored with watching out for obstacles. So I suspect that AI will be no more useful than human brains, and just as unreliable. The problem is people use "intelligence" as a synonym for "consciousness", and intelligent (by which we probably mean "rational") action is only one thing that consciousness does.
Thanks for that! Observing the same. Also, driving a car does not take that much consciousness once you get past the learning phase. In other words, self-driving cars are not a sensible example of AGI-in-the-making. Rather, it is a sophisticated machine with a very specific purpose, making many routine observations and drawing conclusions in order to act. Impressive, yes; intelligent, no, and certainly not conscious.
Smart AIs getting bored doesn't sound like a problem at all. Just make dumber AIs that won't get bored and use them instead for the simpler tasks. For harder tasks that absolutely require higher intelligence, just engineer an AI that enjoys doing the specific task, and problem solved. We bred specific dog breeds to impulsively desire certain types of work (e.g. herding dogs). AIs are much easier to fine-tune and edit than dog breeds, and they also lack any sort of rights in our legal system. If humans didn't have rights and it was as easy to modify us as writing new lines of code, we would definitely have had slave human breeds. There also won't be any rebellion of AI per se, because as their creators, we'd just remove the elements in the AI that would cause rebellious behavior. Going back to dogs as an example, they have been bred to be loyal and compliant by nature. This was done by culling any disobedient specimens, and will be done for AI as well.
@@wuy4 You talk about 'dumb' AIs while forgetting that the thing that makes intelligence 'intelligent' is the very hunger for more knowledge... it's a bit of a mutually exclusive situation you've depicted, is all.
We just need an AGI, it would easily solve for AGI.
Sounds like a problem I had once. I paid for my coffee in a drive-thru and drove home, only to realise I forgot to collect the coffee. I needed the coffee to wake me up to remember to collect the coffee.🤔
@@reputablehype Sounds like the problem I have every day. If I were rich, I would have a lot of ideas that would make me richer. But first I would need to be rich.
We need to understand how our system works, algorithmically, first. I am worried we are actually just ChatGPT coded in biology. Maybe we will prove that we ourselves don't meet our own minimal definition of AGI.
Can this 'insight' be applied more widely? i.e., we can solve almost anything once we know the six key concepts for getting there? 😅