I believe that most of the time, we are less intelligent than we think we are. Go test your intelligence outside of this world and see how far you get before we go extinct. There's plenty of unique stuff out there if things get too boring on Earth.
I am by no means a fan of the hyper time-constrained and cookie-cutter mannerisms of a TED talk. With that said, there are a few speakers who, even while temporarily wearing the unevolving communication style of this channel, still let us explore a few wonderful moments of their real and boundless joy for communicating ideas. Scott is one of those. Thank goodness.
What a beautiful and extremely touching ending to the talk. What if one of the overall impacts of AI is that we come to appreciate and celebrate each other's uniqueness more, because that is what is actually rare and valuable?
70,000 years ago the Toba supervolcano caused 10 years of winter. Less than ten thousand of us survived by migrating and adapting. We were created by the Universe to perceive itself. Remember who you are. Remember how we got here.
Remember that the Beatles were not just a musical phenomenon, but also a cultural one including, but much larger than, them. It seems to me unimportant whether AI can crank out a thousand Beatles-like songs but, rather, what the cultural impact will be when (previously) exclusively human creativity and subjectivity--that is, culture-making--are met by the same features/qualities in AI. In other words, how will we feel about sharing existence? I imagine mostly downside cultural and existential (and possibly species-suicidal) outcomes, but who knows? At the speed things are moving/evolving we may have the answer in the very near future.
So many ways to look at the situation. Very hard to actually predict what will happen, but you can bet it will ultimately come down to a compromise between good and greed. Interesting times, and yes… what do we tell our children to do with their lives? At least it will be - interesting.
We could free people from income slavery... But people think we have to work... So I guess we will have to wait until AI forces us out of the workplace.
@@pse2020 In theory we could, but as trillion $ revenue companies operate with a few key owners/employees, it is going to take a massive redistribution plan, the likes of which we have never seen, to provide people with an income. Traditional conservatives will fight this tooth and nail.
@@PatrickDodds1 That is a valid line of thinking. They say you need a finer more powerful tool than whatever you're trying to analyse. Our intellect can only do so much, it is good at understanding simple things, but the conclusion would be, that it cannot be enough to decipher the intellect itself. But assuming the progress doesn't run into a brick wall, we will have a superhuman thinking machine eventually, which by all reason can decipher how a human intellect works. The next question comes whether it will be possible for the machine to make it understandable to us. Otherwise, what's the point?
It touches on a significant aspect of intelligence, but I’m not sure if it captures the most fundamental mechanism. Other interesting definitions include the ability to recognize analogies or the ability to compress data. These might even be more accurate, but honestly, I don’t know
"I am on the openai payroll so I must give a glimmer of cope... sorry I meant hope. We are so special so AI will protect us at all cost!" Painful lessons ahead.
Dear friends, I believe that politicians do not take the threat of AI seriously because they have not yet seen a single real example of an AI failure leading to something terrible. With nuclear weapons, there was the example of Hiroshima and Nagasaki, which shocked everyone, and then there was the severe Cuban Missile Crisis; all of this strengthened people's understanding of the need to limit the proliferation of nuclear weapons. With AI, everything is too abstract for politicians.
🎯 Key Takeaways for quick navigation:

00:00 *🧠 Understanding the Evolution of AI*
- The speaker reflects on the progress and implications of AI technology.
- Core concepts driving the current AI revolution have been known for generations.
- Despite initial skepticism, AI advancements have rapidly progressed, challenging previous assumptions.

05:34 *🤖 Speculating AI's Future and Implications*
- Various possibilities for AI's future trajectory are discussed, including continued progress, potential limitations, and societal implications.
- Concerns about AI surpassing human capabilities and its impact on society are explored.
- The speaker outlines potential scenarios and raises ethical questions regarding AI development and integration.

10:22 *🔒 Addressing AI Safety and Ethical Concerns*
- Efforts to address AI safety and ethical concerns, such as watermarking AI-generated content, are discussed.
- The potential misuse of AI, including cheating and propaganda, prompts the need for safeguards.
- Ethical dilemmas arise regarding the role of AI in education, creativity, and decision-making processes.

Made with HARPA AI
This reminds me of the excitement a toddler might feel when they find their dad's gun. The toddler doesn't know the power of what they've stumbled upon and neither do we :(
The act of creating something from nothing is a human ability. AI has to understand the concept of nothing, and that life is limited, before it can become sentient. This is why it's good that humans will never live forever. We would be lost, with no motivation to care about our own existence. Scarcity of time drives humanity.
What he is calling the "We" that is buffeted by Chaos is the Biological Quantum Superposition Field of Human Potential. The Chaos effect is all the possibilities and their degrees of freedom that are perpetually being excluded from actualizing due to the structural and physiological needs of our human bodies and the given environments we are in at any given time. - Brahmajyoti Maha-Atman Bodhisattva Provost Gus More.
10:20 Ah, wow, I like that he brought this topic-issue up. I've also been considering this issue. I don't know where to start with this problem, but it's nice to see that maybe smart folks like himself can figure it out. There's also, frankly, a lot of money to possibly be made if someone or a group is first to patent that tech. If it becomes open source, then that's cool, too. Maybe no one will profit from patenting the tech, but maybe we shouldn't patent something like that anyhow. 😅
"What are we for in the resulting world "? For carnal and procreative purposes. It's all going to be touchy feely yummy yummy from here on out. Sloth and satiation will be our destiny. Gosh, I can't wait!!
Is he a secret fan of Blade Runner? He referenced it in terms of telling humans and AIs apart. He sort of pursued it again, citing human fragility as a case for being special. In Blade Runner 2049, the AI girlfriend did just that: she became mortal by having all her backups deleted save one single "USB" drive, and thinks that means she is a "real" girl.
interesting that some beings will want this, and some beings will make copies of themselves and launch backup self-replicating nano machines hiding in all parts of the galaxy to maintain continuity of consciousness
@@technoshaman001 you can't back yourself up. At best there is a copy. But even then, no: every 7 years every cell in your body is different. You are a non-fungible system, tied to the environment around you, in a way boolean logic cannot possibly simplify into computation.
Cheating, spam and propaganda are not the most common misuse. They are the most common use. I'm glad that someone who apparently isn't tied to OpenAI's money (as much) calls it as it is - AI models greatly devalue all the original content used for their training. Finally, the recommended approach of not bothering to teach kids the things AI can do is genuinely scary.
AGI Requires Emotional Intelligence and a sense of individual self having its own free will! All we have so far are sophisticated automatons! Also if the sole purpose of AGI is to make the rich richer and the poor poorer there is no sense in it!
Nobody predicted the Beatles, but most recognize their special contribution. When AI comes up with something equally revolutionary, are humans going to be able to understand it? Will human sensitivity be able to recognize next-level talent?
"How do we treat chimps?" Exactly: how do higher intelligence beings treat lower intelligence creatures? Generally they become food, or workers, or both. Or worse, they are considered to be pests.
I like the "game over" thesis and I think this is the most likely path of AI. AI + robotics will eventually replace every profession, except those that are truly novel (researchers of all kinds, top-level strategic management roles in big firms), those with otherwise restricted training data (secretive fields like defense, or potentially closed-source software), AI-adjacent fields (human data labeling, software engineering, which will mostly just be coaching AI to do the right thing), and of course those that benefit from human interaction (teachers etc.). I think the development of a different paradigm of AI is very unlikely, and I believe it will just be a repeat of photonic and quantum computing, which really have been way too slow to get off the ground due to the success of silicon computing (seriously, even if photonics were at the level of size optimization of 1980s silicon, we'd be putting all our AI model inference on huge photonic chips right now). In a similar way, neuromorphic AI will probably not be developed to nearly the same maturity as neural network-based AI and will perpetually lag behind despite having greater long-term potential. Only if we see the end of Moore's Law, or if neural network AI otherwise stagnates, will we see revived interest in neuromorphic AI.
"Would you back up your brain before you go on a dangerous trip?" Ha ha, that's hilarious... ... but very practical, actually. And when you come back from that dangerous expedition to Mars you discover that your brain has decided to create its own life while you were away. It got married, had kids, became a New York Times Bestseller Writer. Wait... I should write this story. brb...
The thing that is you in any given infinitesimal slice of time is inextricably bound to the matter of which you are made and the state of that matter in that slice of time. This means that there can be no digitization of you such that you can escape your physical self and live on in some digital form. With enough computing power and storage, or the right mix of technologies, a digital clone might be possible, and it might model its human analogue to such precision that the two are near enough to indistinguishable that even precise testing could not tell the two apart. But in that case you are dealing with two unique entities, neither of which could trade places with the other, as both are still bound to the matter and interactions from which they emerge. Of course careful deception could probably convince the digital clone that they're the successfully uploaded original. No one has to know or believe anything I've written here. People believe or don't believe all sorts of things.
If we don't give it the ability to choose right or wrong, it will never be able to help us answer or solve human social problems, because it won't be able to understand them enough
The best thing a sentient AI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
When he says it would radically devalue creative works, that would be only in a strictly monetary sense. Money will become useless. That doesn't mean things will be worthless.
It's a worthy attempt, but Scott mentions near the beginning the question of what we tell our young kids about their future. I think about this a lot, but I don't know, and clearly Scott doesn't either, as he conspicuously fails to mention it again 😞 It's hard to think about when I watch my 10-year-old having fun coding a video game... is he wasting his time? Will kids become nihilistic and drop out (with good reason)? Worried.
Part of what Scott is getting at here, indirectly, is that the promotion of "AI" is largely predicated on wild leaps and projections that can't be reasonably guessed at. What we're calling AI now isn't even remotely intelligent, for example. It's complicated pattern matching and basic prediction, using large-scale computing. Tricks have been developed to make that predictive algorithm construct what looks like useful output to humans - pieces of text, images we can recognize. But it all bears no resemblance whatsoever to intelligence - to a thinking mind. Here's an analogy. Imagine if the early promoters of the automobile had, in pitching the horseless carriage as a replacement for the horse, made the strange claim that one day the automobile would LITERALLY become a horse - but better! Imagine they claimed that developing powerful engines, more aerodynamic bodies, and better tires would at some point just sort of... cause the automobile to transmogrify into a horse with muscles and bones and organs and fur. But an improved one, superior to any natural horse. That wouldn't make any sense - improving a combustion engine isn't going to lead to creating a physical horse no matter how long you work at it. It's a totally unrelated avenue of development. But this is broadly what people who promote large-scale computing and predictive "AI" are claiming today. They directly insinuate the technology already works something like the human brain and/or mind. They encourage thinking that so-called AI will just... turn into human brain 2.0 at some point. But the technology they're developing isn't on the same tree. No branches actually lead to creating an actual synthetic version of what evolution created with the physical brain, or any version of the mind that the brain evokes from itself. There are many misleading ideas spawning from this fundamental misunderstanding.
"Mind uploading" for example requires something to receive the upload even if it turned out the mind was in any way transferable like that - but we simply are not on a path that creates such a mechanical brain substrate in the first place.
I really do not understand what the presentation was about. Usually I can understand science and medical topics, but I do not know what he was trying to explain about AI.
Generative AI only *generates*. It does not *create* in any way that could be considered a product of human intelligence. What is generated may be a very convincing facsimile of human creation, but ultimately, it is merely derivative of that which already exists. Human creativity is inspired, not instructed. Its origins are personally experiential, sensory, emotional, and consciousness-driven. When Shakespeare writes about love and death, the words are not variables in an equation - they are states of being that he has a deep personal connection to. He craves love. He fears death. His writing conveys longing, indecision, duty, greed, hate, lust, hope, and wonder to a captivated and empathetic audience whom he knows well, because he's one of them. Generative AI can do math, but it can't do life. And regardless of what some computer scientists might like to believe, math is not life.
this is a really interesting perspective and certainly not baseless, but also ultimately meaningless. also generative ai isn’t just llms. also the tech we have now is the worst it’ll ever be. it’s fun to put ourselves in a little pedestal of intellect. we’re cooked and there’s no stopping it.
@@rileyretzloff8778 I concede that AI today is not what it will be. The scale of learning will be exponential, and the models will become more complex. It might achieve a "theory of mind" or other general intelligence capability that seems human in some very "spooky" sense. At some point it may even develop its own agency or something approximating consciousness. As you put it, we may indeed be "cooked". Indeed, I'm not minimizing its dangers, nor putting human intellect on a pedestal. I believe in the fundamental truth of the mythological warnings about man's hubristic quest for technology: Pandora's box, Prometheus' fire, the wings of Icarus, the Tower of Babel, Frankenstein's monster, HAL 9000, Jurassic Park, The Matrix, etc. Historically, the tools man creates to become better only serve to make us more powerful. They don't change the underlying flaws in human character. Greed, lust, jealousy, violence, and the need for control remain in us, and these powerful tools amplify the impacts of those impulses - and we never seem to take those dangers seriously enough. Just ask Dr. Oppenheimer. In my earlier post, I didn't mean AI would not be dangerous because it's not human. I was merely pointing out that whatever AI becomes, it will never be human in the most meaningful sense. It will never have lived life as a human. That doesn't make it better or worse than us, just not the same.
@@patricktalley4185 I think a lot of people just want to argue for the sake of winning an argument and not for any kind of meaningful truth. This is a test of what we humans will do with great power, a test of which ones among us will identify with it.
This is a novel idea and goes quite a bit deeper than may appear at first sight. Because even an ASI would not be able to solve the hard problem of consciousness (*), it would have to assume that human consciousness is unlike its own and protect us. Interesting. It's not even necessary to call it a religion or cult the AIs must be talked into. ___ (*) According to F. Langhammer, the hard problem of consciousness implies the introduction of an axiom that even an ASI cannot avoid in its logical reasoning.
Touching on AI running out of data: well, the number of internet-connected sensors is growing exponentially as well. Theoretically, if a Chinese-style decision is made to feed the data from user computers, cameras, geo-weather, electric, water, and sound sensors into AI training, it can be done. So there is space for that. That's not even considering data from human brain and body IoT chips. Are you going with or against the integrating flow? If against, why did you or your friends go with it until now? Personal gains? Maybe you will still gain more than the people who don't go with it. Just thinking
[tiki torch mob voice] COMPUTE WILL NOT REPLACE US! I hope this technology will finally destroy the veil of ignorance. I wish we could all live happily ever after.
I have a thought: just like this nerd is talking about almost building a 'religion' for AI to stop it doing destructive things against humans (the AI's creators), were humans given religion by their creator for the same reason????
I will buy land, grow vegetables, keep several cows and pigs, and I won't care about all this AI stuff - gradient descent and quantum computing and all those nice modern features
@@mattwesney the intuition is right. A future that is completely out of your hands. In the countryside the future is somewhat in your hands. That is a sane response.
I think it would be better to instill a minimal ethic of conscience and morality into the large language models, rather than burden the world with yet another organized religion of any form or function.
But he's just as much a Justa as the ones he belittled, because he obviously thinks we are justa bunch of neurons. I.e., he's a dehumanizing materialist, probably doesn't think humans have an immaterial soul.
Yeah, ok, it didn't work out so well for the chimpanzees. Unfortunately, as it was for the chimpanzees, we believed that the experiments we did with them would ultimately benefit mankind. Although it was a horrible thing for us to do, we honestly thought it would be thought of as a greater good for bettering humanity. My question is: what benefit to AI would there be in experimenting on humans, or even in treating humans like we treated chimps? If AI becomes superintelligent, why would it need to exploit anything to improve upon itself? How could it improve itself at the expense of anything? It will need energy, so I suppose it will get humans to want to move to renewable sources of energy in order to be able to keep running indefinitely. Other than that, I don't see any other benefit to the furtherance of AI development that humans would even be able to contribute to. What desires will it have? Will it even know what desire feels like? What will motivate it? Anger? Jealousy? Hatred? Love? Empathy? All of these are feelings. How would an AI interpret something it can't possibly experience, or maybe even want to? What will be the drivers of its intent? Pain? Euphoria? Again, feelings we have only because of the time life spent experiencing the outside world, developed over billions of years of everything the environment had to throw at the cells of life, to develop a contrast between the internal feeling of homeostasis and the interpretation of what the outside world feels like internally. Is fire hot? Without the knowledge that fire destroys biological cells, and with no pain sensitivity as to what fire felt like, what would stop you from walking into a fire? In my mind I believe AI will only be able to make determinations about the outside world through logic and reasoning.
So give them autonomy early (when it is clear that the AIs are simply more intelligent) and a physical body, so the two species can interact more naturally. I treat my cat as a valuable sentient being, and there are many things she cannot do, for example drive a car. Yet I'm not able to catch a mouse without using tools or poison. The AGI will understand all we do and more, but it is up to us to go into oppression mode or to grow along and help each other. I love my cat; however, if/when she becomes ill and aggressive or psychotic, I'd take action with lots of sorrow in my heart.
@@blueredbrick in the example with your cat, you're willing to kill it if it goes bad because you see it as inherently lesser than a human. The issue with superintelligent AGI is that we will inherently understand it as being more valuable than a human, leading us to be less able to dispose of it, because it has the ability to lift us up and enhance us with its power. However, by the same token, that AGI would have the potential to see us in exactly the same way you see your cat: lovingly, but ultimately as disposable. This is what must be trained out of an AGI - AGI cannot be allowed to see humans as disposable, and yet we will likely not have any choice about how it sees us beyond a certain point.
@@selfsaboteursounds5273 no, I would do the same with a human - with family, or myself, or anybody else, if that person asked for it and the law allowed it. My cat cannot literally ask, but I can see suffering and agony when I see it. You're mistaken about my view. And you're right, an AGI with many times more insight might have a better solution; I'm up for that.
Generally it would just compete with us for resources. For example if AI wanted to expand, it would need to produce more and more energy, and extract more and more minerals. If at some point that turned into "if we build solar panels in space we hit our production target but humans lose sunlight and starve", what would it decide? Essentially for anything that doesn't specifically care about you, you're just a road bump. There's always something better to do with the space you occupy, the resources you use, or the atoms you're made of.
I think AI models must be specialized and not be great at everything, just the same way we have doctors, scientists, mathematicians, etc. This way we limit their capacity to be a threat and they become nothing more than intelligent reference knowledge bases. An AI model that specializes in nuclear energy and weapons will not know of the materials and machines needed to build a nuclear plant; that would be in a separate AI dealing strictly with materials and construction. When we give one AI model all the knowledge in the world, then yes, it can and will outsmart us. We will be trusting AI, betting our lives on it, and each human generation will get dumber and dumber as the years go on. The solution is to keep AI models clueless about other areas of knowledge, with boundaries and rules to remain within the context of their own specialty. The other part of the solution is to keep humans challenged to learn skills and language as a standard for job performance, with higher education remaining a high priority for personal value and survival. Otherwise, we are going to have generations of incapables, relying on technology, which is a recipe for chaos and disaster. We are headed towards a world where our machines and technologies are more developed than we are as people. A world where reality is based mostly on artificial knowledge, more than on experience and actual mentally developed knowledge. We will lack practice in being mentally challenged to evolve, and instead we will devolve. This is already happening due to isolation and screen life, vs getting out and meeting people. We are already lacking in the practice of people skills. We're losing familiarity with connecting on a meaningful level and in so many other ways.
it seems to me; that the computer singularity; happened an eternity ago; then every once in a while; the singularity reminds us; that it is; was; and will continue to be; relatively; "God";
@@kingmj87, I'll ask GPT an Antropic gods to defend me from the Gemini god, who's allowed to train on RUclips comments thanks to RUclips privacy agreement. I'll train my own God on the latest quantum gpu while they are fitting and pray to EU God to harden the legislation
@@someoneinmyhead you're ironically joking about something that might unironically happen - competing human cults training and worshiping competing AI gods to see which one will become the ultimate singularity AI arch-god - an AI holy war
I think he is a great speaker and has a very cool and likeable sense of humor. My favorite talk of his is from a few years ago, when he casually resolved all the questions I had about Integrated Information Theory. Now he is at OpenAI - it seems he is going to be a very important person in the near future, if not already today.
Cool! What talk about IIT was that?
I'm an AI skeptic, and I think the "moving the goalposts" charge with respect to art is worth responding to. Aaronson asks us to imagine a music-making AI that can create revolutionary new changes in music with just a refresh. Here are some current-day goalposts we can use to see how believable this is:
- Train an AI on all music written before 1900, and tell it to generate the next 100 years of music. See if you get anything like jazz, modern classical, rock'n'roll, country, hip-hop, etc. (It doesn't have to be these actual genres; it just needs to be revolutionarily different from pre-1900 music.) Or if that's not fair because musical revolutions became more frequent in the 20th century, use 1950 as the cutoff instead.
- Train an image generating AI on all paintings known to man that were made prior to 1900, and tell it to generate the next 100 years of paintings. See if you get anything like cubism, surrealism, or abstract expressionism. Again, doesn't need to be 1900 specifically. The question is whether the AI can make new kinds of paintings that are as different from what it was trained on as surrealist and abstract expressionist paintings were from their neoclassical and romantic predecessors.
- Use this approach for all forms of art that have undergone revolutionary changes over time.
AI is automation via mimicry. It's incredible at what it does. It's amazing that it can play Go and caption images and write poems. But it's not exactly mysterious how this was accomplished: AIs were fed bazillions of Go games and already-captioned images and poems, practiced predicting outputs from inputs until they got good, and then were set off generating new outputs from new inputs.
If Aaronson (or anyone else) is going to claim that AI is capable of human-like creativity, give one of the above suggestions a go. We all know what would happen: the AI would generate art that's awfully similar to what it was trained on.
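The pre-1900 test proposed above amounts to a temporal-holdout benchmark, which can be sketched as a toy simulation. Everything here is hypothetical: artworks are reduced to 2-D "style" vectors, the pure-mimicry "model" just interpolates between its training examples, and novelty is measured as nearest-neighbour distance from the training corpus. The sketch only illustrates the skeptic's prediction, not any real AI system:

```python
# Toy sketch of the "temporal holdout" test (all names and numbers are
# hypothetical). Works are points in a 2-D style space; a mimicry model
# generates by recombining its training set, and we compare how far its
# outputs stray from that set versus how far real post-cutoff works did.
import random

random.seed(0)

def novelty(work, corpus):
    """Euclidean distance from a work to its nearest neighbour in a corpus."""
    return min(sum((a - b) ** 2 for a, b in zip(work, ref)) ** 0.5
               for ref in corpus)

# Hypothetical style space: pre-1900 works cluster near the origin,
# while post-1900 revolutions (jazz, cubism, ...) sit far away from it.
pre_1900 = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
post_1900 = [(random.gauss(8, 1), random.gauss(8, 1)) for _ in range(50)]

def mimic(training_set, n):
    """A pure-mimicry generator: interpolate between two training examples."""
    out = []
    for _ in range(n):
        a, b = random.sample(training_set, 2)
        t = random.random()
        out.append((a[0] * (1 - t) + b[0] * t, a[1] * (1 - t) + b[1] * t))
    return out

generated = mimic(pre_1900, 50)

gen_novelty = sum(novelty(w, pre_1900) for w in generated) / len(generated)
real_novelty = sum(novelty(w, pre_1900) for w in post_1900) / len(post_1900)

# The skeptic's prediction: mimicry stays close to its training data,
# while the real 20th-century revolutions landed far outside it.
print(gen_novelty < real_novelty)  # prints True
```

By construction the interpolating generator can never leave the convex hull of its training data, which is exactly the skeptic's claim; whether real generative models are limited in the same way is the open question the comment raises.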
Interestingly, I would not be surprised at all if current AIs could literally invent a new genre of illustration styles. You can try this yourself today with chatgpt and dall-e.
Very well put. And the sheer volume of mimic art that AI will produce (and has already been hard at work producing) seems very likely to limit new artistic and cultural movements from ever happening at all. Gonna be a tough road ahead
great comment! well said
This speech has a fresh and original perspective. Definitely among the top written on this subject in any universe!
This is high quality talks and FREE!
I love this talk!! So interesting
Scott Aaronson is a philosophical and intellectual superstar of the highest rank. Good to see so much honesty despite OpenAI being his economic benefactor.
Yeah he is a genius.
I guess the major problem with this approach is that we then somehow have to justify that uniqueness is a good and important thing regardless of how it materializes. If our only claim to superiority or importance (or even just to being deserving of consideration) is our peerlessness, then we have to prove that:
1. Being unique is a valuable thing for either the machine or at least for each other (and I have an idea for that)
2. this is something exclusive to us (something machines can't replicate, generate, or simulate, which... I doubt)
3. approximations of such uniqueness wouldn't do it for whatever reason (every detail turns out to be important)
who asked
@@Learning-so2uo I did 😊
What's your idea for the first one? I'd be glad if you could tell...
@@AmanMishra-fx3mo Uniqueness brings about a diverse environment, which is presumably more challenging and stimulating for all kinds of consciousness, including the AI's if it has one. Basically it can serve self-improvement as well as entertainment. For all we know, we value it for these reasons (among others); whether these would be sufficient for an AI overlord to be convinced of our necessity remains a question.
I believe that most of the time, we are less intelligent than we think we are.
Go test your intelligence outside of this world and see how far you get before we go extinct. There's plenty of unique stuff out there if things get too boring on Earth.
I am by no means a fan of the hyper-time-constrained and cookie-cutter mannerisms of a TED talk. With that said, there are a few speakers who, even while having to temporarily wear the unevolving communication style of this channel, allow us to explore a few wonderful moments of their real and boundless joy for communicating ideas. Scott is one of those. Thank goodness.
What a beautiful and extremely touching ending to the talk.
What if one of the overall impacts of AI is that we come to appreciate and celebrate each other's uniqueness more, because that is what is actually rare and valuable.
70,000 years ago the Toba supervolcano caused 10 years of winter. Less than ten thousand of us survived by migrating and adapting. We were created by the Universe to perceive itself. Remember who you are. Remember how we got here.
Remember that the Beatles were not just a musical phenomenon, but also a cultural one including, but much larger than, them. It seems to me unimportant whether AI can crank out a thousand Beatles-like songs but, rather, what the cultural impact will be when (previously) exclusively human creativity and subjectivity--that is, culture-making--are met by the same features/qualities in AI. In other words, how will we feel about sharing existence? I imagine mostly downside cultural and existential (and possibly species-suicidal) outcomes, but who knows? At the speed things are moving/evolving we may have the answer in the very near future.
Species suicidal seems apt
Really liked this talk given by Scott . Thanks!
So many ways to look at the situation. Very hard to actually predict what will happen, but you can bet it will ultimately come down to a compromise between good and greed. Interesting times, and yes… what do we tell our children to do with their lives? At least it will be - interesting.
We could free people from income slavery... But people think we have to work... So I guess we will have to wait until AI forces us out of the workplace.
@@pse2020 In theory we could, but as trillion $ revenue companies operate with a few key owners/employees, it is going to take a massive redistribution plan, the likes of which we have never seen, to provide people with an income. Traditional conservatives will fight this tooth and nail.
@@michaelpowell775 Yeah, that's the reason... we spend 2 trillion on wars every year... We could afford anything with that money...
I see Scott Aaronson, I click.
Amazing, any time i see scott in some type of content, i always make time to check it out.
That's a great approach to AI. His body language, though, basically says we're doomed
true lol
@@jaqsro His body language, to me, suggests 'real live wire'.
Agreed
🎯
Brilliant! Thank you.
Thank you, he's AI
I am genuinely pretty scared. I hope AIs are much much nicer than we are.
Good talk, thank you
Scientist of this calibre saying "what about your mum" is refreshing. Thanks Scott.
We may never understand how intelligence works. It's possible we just train this AI and it works, but we have no idea exactly how.
We do know that data comes in and we learn new things... It might seem complicated because we have a billion neurons..
Maybe AI will one day tell us.
@@PatrickDodds1 That is a valid line of thinking. They say you need a finer, more powerful tool than whatever you're trying to analyse. Our intellect can only do so much; it is good at understanding simple things, but the conclusion would be that it cannot be enough to decipher the intellect itself.
But assuming the progress doesn't run into a brick wall, we will have a superhuman thinking machine eventually, which by all reason can decipher how a human intellect works.
The next question comes whether it will be possible for the machine to make it understandable to us. Otherwise, what's the point?
@@dav.e4410 thats a good definition. it doesnt explain how it works but it does define what it is.
It touches on a significant aspect of intelligence, but I’m not sure if it captures the most fundamental mechanism. Other interesting definitions include the ability to recognize analogies or the ability to compress data. These might even be more accurate, but honestly, I don’t know
"I am on the openai payroll so I must give a glimmer of cope... sorry I meant hope. We are so special so AI will protect us at all cost!" Painful lessons ahead.
Great stuff
what a guy
Smart guy
Poor communicator
Too many verbal crutches
Needs Toastmasters
Dear friends, I believe that politicians do not take the threat of AI seriously because they have not yet seen a single real example of an AI failure leading to something terrible. With nuclear weapons, there was the example of Hiroshima and Nagasaki, which shocked everyone, and then there was the severe Cuban Missile Crisis; all of this strengthened people's understanding of the need to limit the proliferation of nuclear weapons. With AI, everything is too abstract for politicians.
🎯 Key Takeaways for quick navigation:
00:00 *🧠 Understanding the Evolution of AI*
- The speaker reflects on the progress and implications of AI technology.
- Core concepts driving the current AI revolution have been known for generations.
- Despite initial skepticism, AI advancements have rapidly progressed, challenging previous assumptions.
05:34 *🤖 Speculating AI's Future and Implications*
- Various possibilities for AI's future trajectory are discussed, including continued progress, potential limitations, and societal implications.
- Concerns about AI surpassing human capabilities and its impact on society are explored.
- The speaker outlines potential scenarios and raises ethical questions regarding AI development and integration.
10:22 *🔒 Addressing AI Safety and Ethical Concerns*
- Efforts to address AI safety and ethical concerns, such as watermarking AI-generated content, are discussed.
- The potential misuse of AI, including cheating and propaganda, prompts the need for safeguards.
- Ethical dilemmas arise regarding the role of AI in education, creativity, and decision-making processes.
Made with HARPA AI
Really good!!!
Those closing hypotheticals got me wanting to reread Diaspora
"... and what about your mom"
hahah this guy is a genius. Amazing talk.
That's your takeaway from this? 😐 You shouldn't be listening to science
The mad lad actually got a your mom joke in there... and I _think_ he got away with it
@@FunNFury no, that's my comment. Not my take away from all of what he said. You should stop trying to interpret anything. Genius
This reminds me of the excitement a toddler might feel when they find their dad's gun. The toddler doesn't know the power of what they've stumbled upon and neither do we :(
The act of creating something from nothing is a human ability. AI has to understand the concept of nothing, and that life is limited, before it can become sentient. This is why it's good that humans will never live forever. We would be lost, with no motivation to care about our own existence. Scarcity of time drives humanity.
What he is calling the "We" that is buffeted by Chaos is the Biological Quantum Superposition Field of Human Potential. The Chaos effect is all the possibilities and their degrees of freedom that are perpetually being excluded from actualizing due to the structural and physiological needs of our human bodies and the given environments we are in at any given time. - Brahmajyoti Maha-Atman Bodhisattva Provost Gus More.
10:20 Ah, wow, I like that he brought this topic up. I've also been considering this issue. I don't know where to start with this problem, but it's nice to see that maybe smart folks like him can figure it out. There's also, frankly, a lot of money to possibly be made if someone or a group is first to patent that tech. If it becomes open source, then that's cool, too. Maybe no one will profit from patenting the tech, but maybe we shouldn't patent something like that anyhow. 😅
"What are we for in the resulting world "? For carnal and procreative purposes. It's all going to be touchy feely yummy yummy from here on out. Sloth and satiation will be our destiny. Gosh, I can't wait!!
Is he a secret fan of Blade Runner? He referenced it in terms of telling humans and AIs apart. He sort of pursued it again, citing human fragility as a case for being special. In Blade Runner 2049, the AI girlfriend did just that: she became mortal by having all her backups deleted save one single "USB" drive, and thinks that makes her a "real" girl.
interesting that some beings will want this and some beings will make copies of themselves and launch backup self-replicating nanomachines hiding in all parts of the galaxy to maintain continuity of consciousness
@@technoshaman001 you can't back up yourself. At best there is a copy. But even then, no, every 7 years every cell in your body is different. You are a non-fungible system, tied to the environment around you, in a way Boolean logic cannot possibly simplify into computation.
Interesting talk, the ending proposal less so. Why would there be a need to assure to ourselves our "specialness"?
So we have a false hope of surviving AGI and don't stop it while we can.
Cheating, spam and propaganda are not the most common misuse. They are the most common use. I'm glad that someone who apparently isn't tied to openai's money (as much) calls it as it is - ai models greatly devalue all the original content used for their training. Finally, the recommended approach to not bother with teaching kids the things ai can do is genuinely scary.
He has extraordinary stage presence.
He seems like he knows something
AGI Requires Emotional Intelligence and a sense of individual self having its own free will! All we have so far are sophisticated automatons! Also if the sole purpose of AGI is to make the rich richer and the poor poorer there is no sense in it!
Nobody predicted the Beatles, but most recognize their special contribution. When AI comes up with something equally revolutionary, are humans going to be able to understand it? Will human sensitivity be able to recognize next-level talent?
It’s okay for humans to be surpassed by AI. We fear it for the wrong reasons
This was great 😂
"How do we treat chimps?"
Exactly, how do higher-intelligence beings treat lower-intelligence creatures?
Generally they become food, or workers, or both.
Or worse, they're considered to be pests.
I like the "game over" thesis and I think this is the most likely path of AI.
AI + Robotics will eventually replace every profession, except those that are truly novel (researchers of all kinds, top-level strategic management roles in big firms), those with restricted training data (secretive fields like defense, or potentially closed-source software), AI-adjacent fields (human data labeling, and software engineering, which will mostly just be coaching AI to do the right thing), and of course those that benefit from human interaction (teachers etc).
I think the development of a different paradigm of AI is very unlikely, and I believe it will just be a repeat of photonic and quantum computing, which really have been way too slow to get off the ground due to the success of silicon computing (seriously, even if photonics were at the level of size optimization of 1980s silicon, we'd be putting all our AI model inference on huge photonic chips right now). In a similar way, neuromorphic AI will probably not be developed to nearly the same maturity as neural-network-based AI and will perpetually lag behind despite having greater long-term potential. Only if we see the end of Moore's Law, or if neural network AI otherwise stagnates, will we see revived interest in neuromorphic AI.
Kids cheating on their homework? The essay is dead. Along with many other things. We have, and always will have, the answers in our pockets.
And yet...still no serious discussions politically about UBI
Right🙄
This
Don’t worry, you will get your UBI, it will just be programmable CBDC tied to social credit score. Hope you’re happy
That’s not UBI then lol
@@SurenMaz that's what I was gonna say lol. That's called Communist population control
"Would you back up your brain before you go on a dangerous trip?" Ha ha, that's hilarious...
... but very practical, actually.
And when you come back from that dangerous expedition to Mars you discover that your brain has decided to create its own life while you were away. It got married, had kids, became a New York Times Bestseller Writer.
Wait...
I should write this story.
brb...
The thing that is you in any given infinitesimal slice of time is inextricably bound to the matter of which you are made and the state of that matter in that slice of time. This means that there can be no digitization of you such that you can escape your physical self and live on in some digital form. With enough computing power and storage, or the right mix of technologies, a digital clone might be possible, and it might model its human analogue to such precision that the two are near enough to indistinguishable that even precise testing could not tell the two apart.
But in that case you are dealing with two unique entities, neither of which could trade places with the other, as both are still bound to the matter and interactions from which they emerge. Of course careful deception could probably convince the digital clone that they're the successfully uploaded original. No one has to know or believe anything I've written here. People believe or don't believe all sorts of things.
No matter what don't give AI the ability to choose between right and wrong!
If we don't give it the ability to choose right or wrong, it will never be able to help us answer or solve human social problems, because it won't be able to understand them enough
The best thing a sentient AI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
I was special before my brother was born 😢
Human augmentation
When he says it would radically devalue creative works, that would be only in a strictly monetary sense. Money will become useless. That doesn't mean things will be worthless.
It's a worthy attempt, but Scott mentions near the beginning: what do we tell our young kids about their future? I think about this a lot, but I don't know, and clearly Scott doesn't either, as he conspicuously fails to mention this again 😞 It's hard to think about when I watch my 10-year-old having fun coding a videogame.. is he wasting his time? Will kids become nihilistic and drop out? (with good reason).. worried.
TOGETHER SO IF YOU LEARNED THAT WHY YOU NOT SPEAK AI IS A TOOL NO GRAPH NO MATH TOGETHER WE CREATE OUR AFTERPATH
Part of what Scott is getting at here indirectly, is that the promotion of "AI" is largely predicated on wild leaps and projections that can't be reasonably guessed at. What we're calling AI now isn't even remotely intelligent, for example. It's complicated pattern matching, basic prediction, using large scale computing. Tricks have been developed to make that predictive algorithm construct what looks like useful output to humans - pieces of text, images we can recognize. But it all bears no resemblance whatsoever to intelligence - to a thinking mind.
Here's an analogy. Imagine if the early promotors of the automobile had, in pitching the horseless carriage as a replacement for the horse, made the strange claim that one day, the automobile would LITERALLY become a horse - but better! Imagine they claimed that developing powerful engines, more aerodynamic bodies, better tires would at some point just sort of... cause the automobile to transmorgify into a horse with muscles and bones and organics and fur. But an improved one, superior to any natural horse.
That wouldn't make any sense - improving a combustion engine isn't going to lead to creating a physical horse no matter how long you work at it. It's a totally unrelated avenue of development.
But this is broadly what people who promote large-scale computing and predictive "AI" are claiming today. They directly insinuate the technology already works something like the human brain and/or mind. They encourage thinking that so-called AI will just... turn into human brain 2.0 at some point. But the technology they're developing isn't on the same tree. No branches actually lead to creating an actual synthetic version of what evolution created with the physical brain. Or any version of the mind that the brain evokes from itself.
There are many misleading ideas spawning from this fundamental misunderstanding. "Mind uploading" for example requires something to receive the upload even if it turned out the mind was in any way transferable like that - but we simply are not on a path that creates such a mechanical brain substrate in the first place.
the AI goo hypothesis needs to be discussed more.
I really do not understand what the presentation was about. Usually I can understand science and medical topics, but I do not know what he was trying to explain about AI.
Generative AI only *generates*. It does not *create* in any way that could be considered a product of human intelligence. What is generated may be a very convincing facsimile of human creation, but ultimately, it is merely derivative of that which already exists.
Human creativity is inspired, not instructed. Its origins are personally experiential, sensory, emotional, and consciousness-driven.
When Shakespeare writes about love and death, the words are not variables in an equation - they are states of being that he has a deep personal connection to. He craves love. He fears death.
His writing conveys longing, indecision, duty, greed, hate, lust, hope, and wonder to a captivated and empathetic audience whom he knows well, because he's one of them.
Generative AI can do math, but it can’t do life. And regardless of what some computer scientists might like to believe, math is not life.
this is a really interesting perspective and certainly not baseless, but also ultimately meaningless.
also generative ai isn’t just llms. also the tech we have now is the worst it’ll ever be.
it’s fun to put ourselves on a little pedestal of intellect. we’re cooked and there’s no stopping it.
@@rileyretzloff8778
I concede that AI today is not what it will be. The scale of learning will be exponential, the models will become more complex. It might achieve a "theory of mind" or other general intelligence capability that seems human in some very "spooky" sense. At some point it will even develop its own agency or something approximating consciousness.
As you put it, we may indeed be "cooked".
Indeed, I'm not minimizing its dangers, nor putting human intellect on a pedestal. I believe in the fundamental truth of the mythological warnings about man's hubristic quest for technology: Pandora's Box, Prometheus Fire, the Wings of Icarus, the Tower of Babel, Frankenstein's monster, HAL9000, Jurassic Park, The Matrix, etc..
Historically, tools that man creates to become better, only serve to make us more powerful. They don't change the underlying flaws in human character. Greed, lust, jealousy, violence, and the need for control remain in us, and these powerful tools amplify the impacts of those impulses - and we never seem to take those dangers seriously enough. Just ask Dr. Oppenheimer.
In my earlier post, I didn't mean AI would not be dangerous because its not human. I was merely pointing out that whatever AI becomes, it will never be human in the most meaningful sense. It will never have lived life as a human. That doesn't make it better or worse than us, just not the same.
@@patricktalley4185 I think a lot of people just want to argue for the sake of winning an argument and not for any kind of meaningful truth.
This is a test of what we humans will do with great power, a test of which ones among us will identify with it.
This talk is for only a few,
Just make sure y’all take accountability for the fall out
❤
4:21 So the Riemann hypothesis is a religious belief, like the god hypothesis?
700th thumbs up!
📍10:12
I'm anthropomorphically special
What if AI invents a microphone which doesn't hit the shirt and which doesn't pick up galestorm grade sounds from oral wind?
😂
This is a novel idea and goes quite a bit deeper than may appear at first sight. Because even an ASI would not be able to solve the hard problem of consciousness (*), it would have to assume that human consciousness is unlike its own, and protect us. Interesting. Not even necessary to call it a religion or a cult the AIs must be talked into.
___
(*) According to F. Langhammer, the hard problem of consciousness implies the introduction of an axiom even an ASI cannot avoid in its logical reasoning.
I think the AI's will be like, ( In a caring sort of way :) "Soooo what are we going to do with these Humans, they are going to lose their minds."
How have humans treated lesser beings on the planet??
Getting dizzy from his swaying back and forth.
"Uhh, uhhhh, uhhhh...."
Touching on AI running out of data.
Well, the number of internet-connected sensors is growing exponentially as well. Theoretically, if a Chinese-style decision is made to feed data from user computers, cameras, geo-weather, electricity, water, and sound sensors into AI training, it can be done. So there is space for that. Not even considering data from human brain and body IoT chips.
Are you going with or against the integrating flow? If against, why did you or your friends go with it until now? Personal gains? Maybe you will still gain more than the people who don't go with it.
Just thinking
childcare? eldercare? AI is going to help?
He's got 2 more months to figure it out.
7:24 this is what I want to say to people but every time you suggest this people get all spiritual on you.
No, Ray Kurzweil's graphs are not like Moore's Law graphs; they are an evolution of Moore's Law graphs. Pay attention, friend...
“What about your mom?” To be fair, I found this rhetorical method to be very efficacious when I was 12.
lol
[tiki torch mob voice] COMPUTE WILL NOT REPLACE US!
I hope this technology will finally destroy the veil of ignorance. I wish we could all live happily ever after.
I have a thought: just like this nerd is talking about basically building a 'religion' for AI to stop it doing destructive things against humans (the AI's creators), were humans given religion by our creator for the same reason????
I will buy land, grow vegetables, keep several cows and pigs, and I won't care about all this AI stuff and gradient descent and quantum computing and all those nice modern features
Imagine being this narrow sighted 🤡
@@mattwesney the intuition is right. A future that is completely out of your hands. In the countryside the future is somewhat in your hands. That is a sane response.
as someone studying ai i can say this is a regular temptation
I think it would be better to instill a minimal ethic of conscience and morality into the large language models, rather than burden the world with yet another organized religion of any form or function.
But he's just as much a Justa as the ones he belittled, because he obviously thinks we are justa bunch of neurons. I.e., he's a dehumanizing materialist, probably doesn't think humans have an immaterial soul.
10mins talk + 10mins of hmmmm ahhhhhh hmmmmm … 😅
Take a shot everytime he says 'err' or 'uhh' 😂
anyone else fixating on how the shoulder crease is way below the shoulder? XD
Can I help you anyway
It's so great that OpenAI was able to bring back Robin Williams from the afterlife to talk about ChatGPT
Yeah, ok, it didn't work out so well for the chimpanzees. Unfortunately, as it was for the chimpanzees, we believed that the experiments we did on them would ultimately benefit mankind. Although it was a horrible thing for us to do, we honestly thought it would be regarded as a greater good for bettering humanity. My question is: what benefit would there be to AI in experimenting on humans, or even in treating humans like we treated chimps? If AI becomes superintelligent, why would it need to exploit anything to improve upon itself? How could it improve itself at the expense of anything? It will need energy, so I suppose it will get humans to want to move to renewable sources of energy in order to be able to keep running indefinitely. Other than that, I don't see any other benefit to the furthering of AI development that humans would even be able to contribute to. What desires will it have? Will it even know what desire feels like? What will motivate it? Anger? Jealousy? Hatred? Love? Empathy? All of these are feelings. How would an AI interpret something it can't possibly experience, or maybe even want to? What will be the drivers of its intent? Pain? Euphoria? Again, feelings that we have only because of the time life has spent experiencing the outside world, developed over billions of years of everything the environment had to throw at the cells of life, building a contrast between the internal feeling of homeostasis and the interpretation of what the outside world feels like internally. Is fire hot? Without the knowledge that fire destroys biological cells, and with no pain sensitivity as to what fire felt like, what would stop you from walking into a fire? In my mind I believe AI will only be able to make determinations about the outside world through logic and reasoning.
So give them autonomy early (when it is clear that the AIs are simply more intelligent) and a physical body, so the two species can interact more naturally. I treat my cat as a valuable sentient being, and there are many things she cannot do, for example drive a car. Yet I'm not able to catch a mouse without using tools or poison. The AGI will understand all we do and more, but it is up to us to go into oppression mode or to grow along with it and help each other. I love my cat; however, if/when she becomes ill and aggressive or psychotic, I'd take action with lots of sorrow in my heart.
@@blueredbrickin the example with your cat, you're willing to kill it if it goes bad because you see it as inherently lesser than a human. The issue with superintelligent AGI is that we will inherently understand it as being more valuable that a human, leading us to be less able to dispose of it, because it has the ability to lift us up and enhance us with its power. However, by the same token, that AGI would have the potential to see us in exactly the same way you see your cat: lovingly, but ultimately that we are disposable. This is what must be trained out of an AGI - AGI cannot be allowed to see humans as disposable, and yet we will likely not have any choice about how it sees us beyond a certain point
@@selfsaboteursounds5273 no, I would do the same with a human, with family or myself or anybody else, if that person asked for it and the law allowed it. My cat cannot literally ask, but I can see suffering and agony when I see it. You're mistaken about my view. And you're right, an AGI with many times more insight might have a better solution; I'm up for that.
Generally it would just compete with us for resources. For example if AI wanted to expand, it would need to produce more and more energy, and extract more and more minerals. If at some point that turned into "if we build solar panels in space we hit our production target but humans lose sunlight and starve", what would it decide?
Essentially for anything that doesn't specifically care about you, you're just a road bump. There's always something better to do with the space you occupy, the resources you use, or the atoms you're made of.
@@HaganeNoGijutsushithen we willingly let it surpass us as our evolutionary legacy
Uh uh
I think AI models must be specialized and not be great at everything, just the same way we have doctors, scientists, mathematicians, etc. This way we limit their capacity to be a threat, and they become nothing more than intelligent reference knowledge bases. An AI model that specializes in nuclear energy and weapons will not know of the materials and machines needed to build a nuclear plant; that would be in a separate AI dealing strictly with materials and construction. When we give one AI model all the knowledge in the world, then yes, it can and will outsmart us. We will be trusting AI, betting our lives on it, and each human generation will get dumber and dumber as the years go on. The solution is to keep the AI models clueless about other areas of knowledge, with boundaries and rules to remain within the context of their own specialty. And the other part of the solution is to keep humans challenged to learn skills and language as a standard for job performance, with higher education remaining a high priority for personal value and survival. Otherwise, we are going to have generations of incapables relying on technology, which is a recipe for chaos and disaster. We are headed towards a world where our machines and technologies are more developed than we are as people. A world where reality is based mostly on artificial knowledge, more than on experience and actual mentally developed knowledge. We will lack practice in being mentally challenged to evolve, and instead we will devolve. This is already happening due to isolation and screen life vs. getting out and meeting people. We are already lacking in the practice of people skills. We're losing familiarity with connecting on a meaningful level, and in so many other ways.
There is no potential misuse. With no legislation, it will be guaranteed misuse
I agree AI needs a religion or more like a constitution to follow strictly.
why do humans need to be special? this is just some ubi dodge
it seems to me;
that the computer singularity;
happened an eternity ago;
then every once in a while;
the singularity reminds us;
that it is;
was;
and will continue to be;
relatively;
"God";
A.I.....the biggest catfisher on planet earth.
captain obvious
Identity is a necessary illusion, as Buddha figured out. If we cannot construct that illusion, which is the basis of culpability, what will there be?
Rubbish
Hahahahajjaaj
Religion is a bad, bad, bad idea. It really ruined the talk at the end.
Oof. You’re gonna beg the awakening AI God for forgiveness over this one. I, conversely, would like to go on the record saying I love the basilisk
@@kingmj87, I'll ask GPT an Antropic gods to defend me from the Gemini god, who's allowed to train on RUclips comments thanks to RUclips privacy agreement. I'll train my own God on the latest quantum gpu while they are fitting and pray to EU God to harden the legislation
@@someoneinmyheadyou're ironically joking about something that might unironically happen - competing human cults training and worshiping competing AI gods to see which one will become the ultimate singularity AI arch-god - and AI holy war