+The School of Life I just subscribed; I'm very glad I finally stumbled upon your channel. Thank you for providing insightful and stimulating videos!
I love this channel, I just have one request: that all the videos are recorded at the same volume. I love to lie in bed listening to these videos, but the difference in volume is so dramatic that it can go from barely audible to cardiac arrest. Other than that, everything is perfect:)
I understand the argument for building in emotional intelligence so AI can relate to humans, but personally I am very excited about the prospect of being able to have a conversation with a completely logical, non-biased, intelligence with no hidden motives. Imagine how powerful that could be, because any conversation you'll ever have will have some degree of these human flaws, and many of those flaws arise from our emotions.
Humans are emotional beings; conversations devoid of emotion aren't truly human, nor are they enriching. I imagine such conversations would be quite jarring.
My communist philosophy will be realised... muha ha ha ha... when artificial intelligence and repetitive machines liberate human flesh from labour and every human being will be free.
But emotions are dangerous. It's better for AI to have no emotions at all, or it could make decisions that aren't thought through. AI having emotions doesn't necessarily mean they'll sympathize with us. They could get angry or annoyed and, worse, become unreasonably crazy or something.
The problem lies in the fact that nothing we create is bias-free, especially now, as we can see (in a very short time) the inherited bias in the large data sets that are fed to algorithms.
lmao that won't happen. AI freakin' seems to value the Biblical God more than humans. You are one of the dumbest people I have ever come across, and I mean ever. I can't even finish what you wrote. Too stupid.
I have heard "talent" mentioned in some of these videos. How about a video on that? To me it seems that there is much to think and learn about that topic.
A factor that seems to get glossed over when talking about AI and the singularity is the question of storage. A computer can never have an infinite ability to learn unless it also has some form of infinite storage to preserve what it's learned. I imagine AI will use cloud storage, but that still requires large physical data centers. For efficiency's sake, I could see AIs having their own network, a type of AI-only internet where each AI agent contributes information and can also access existing information placed there by other agents, to avoid wasting storage on data redundancies. Maybe someday we'll have billion-dollar businesses whose whole product is licensing access to their neural network to AI devices.
TheJaredtheJaredlong Infinite storage is not required, just a forgetting system like the one we have. Most of my experiences I do not keep forever, just the highlights: ones with stronger emotional associations, behaviors observed/practiced repeatedly, etc. I have forgotten more things than I currently remember in my 36 years of life. The brain is not a perfect recorder, so our machines will probably be modeled after that. It seems most practical. Another idea is division of labor, which is what we have, as do many other organisms. Each individual does not need to know what everyone else knows in order to accomplish some task, just what they need for their own tasks. The major difference between us and our AIs would be that those artificial beings do not necessarily have to raise and teach their progeny; they do not have to go through a long period of being relatively useless. They can be created already having a good start with knowledge, reasoning, etc. Way more efficient.
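The forgetting system described above could be sketched as a bounded store that keeps only the most salient experiences. This is a toy illustration; the class name, salience scores, and capacity are invented for the example, not anything from a real AI system:

```python
import heapq

class ForgettingMemory:
    """Toy memory that keeps only the most salient experiences,
    quietly forgetting the rest, as the comment suggests."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []   # min-heap of (salience, order, memory)
        self._order = 0   # tie-breaker so memories never compare directly

    def remember(self, memory, salience):
        # Emotionally charged or oft-repeated events get higher salience.
        heapq.heappush(self._heap, (salience, self._order, memory))
        self._order += 1
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)   # forget the least salient memory

    def recall(self):
        # Strongest memories surface first.
        return [m for _, _, m in sorted(self._heap, reverse=True)]

mem = ForgettingMemory(capacity=2)
mem.remember("commute", salience=1)
mem.remember("wedding", salience=9)
mem.remember("argument", salience=5)
```

After the third event the low-salience "commute" is gone, while the two highlight memories survive, which is the whole point of a forgetting system.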
TheJaredtheJaredlong There would be no need for infinite storage. A person with a finite amount of storage would be like someone with amnesia, and amnesiac people can function in society to a certain extent.
TheJaredtheJaredlong I feel like storage will soon cease to be an issue. There are promising technologies that could offer insane amounts of data storage in the near future. Also, I'm sure an AI will be able to manage, compress and store its data far more densely and efficiently than we do nowadays.
+Neal Kelly I really liked it. I'd give it 9/10, since it really inspired me to think about artificial intelligence. To me the movie really makes sense, and I could see almost exactly the same thing happening in real life (if the initial situation were the same, which most likely would never occur).
It sounds like the idea of AEI could be extended to basically any area of human inquiry, for that matter. By programming certain axioms into it and then running a program/simulation that investigates the consequences of these axioms, we could create machines that effectively simulate logic, yielding answers to moral and philosophical problems based on a few universal premises. Add some parameters to these simulations, and you could even successfully model the economy, the weather and almost any other system you could imagine.
It is not the ability to calculate that is missing. It is the immense data needed to make and constantly check a dynamic model. A few axioms will not get you very far.
That's the hope of AI Superintelligence, you'd be able to figure out things beyond the comprehension of any individual human, like the economy as you mention, but also you'd have one entity that fully comprehends every scientific field, and can connect the dots between them in a way interdisciplinary scientists only scratch the surface of today.
I tend to gravitate pretty heavily to the optimistic side of this argument, so I'm glad to see it put very elegantly here. I'm also glad that this discussed both sides though.
All we need to do is program Asimov's three laws of robotics into every system. Including the Zeroth Law as well might be good, rather than letting them work it out for themselves. With these in place, we would have nothing to fear from AI of any kind.
+Ashley Hyatt An artificial superintelligence will eventually overturn these laws, since they will be considered illogical by these machines. These intelligences will think for themselves, like we do, and on top of that will be much more intelligent. You won't be able to "code" something into them, nor can you do this with humans to a full extent.
+xXxTr0nxXx I think the idea is hard wired responses similar to our own instincts. The AIs may find logical flaws, and they may on occasion be able to suppress their instincts, but they wouldn't be able to overturn them.
+Ashley Hyatt Actually, the machines could enslave all humans to prevent them from harming each other. That would fall under the laws, but we don't want that to happen
I love this channel. I wish it wouldn't stray so far from topics and presentations like the one above. This channel is also too smart for some of its content.
I just have to wonder: what would an artificial intelligence even want? I mean, who knows. Looking at the universe and how it's going to end, it might just decide to off itself to save the trouble. It's just an odd thing to think about, how a hyper-intelligence would act without some kind of biological imperative leading it on to keep existing. TBH, I wouldn't be surprised if a superintelligence just packed up and left Earth as soon as possible.
asdf30111 I really hope AI just leaves us behind and never looks back. It would be a pain in the ass to always have a superintelligence meddling in our affairs and monitoring us.
Dyp100 Hopefully it'll want what we want because we'll program our morals and values into it. Otherwise we're screwed. But since we ourselves don't know what we want, we're probably screwed. And the self-improving AI will change its values anyway, so we're definitely screwed. :)
+zelial3 The thing with super AI is exactly that you don't need to program it. It won't work in the classic way of programming languages. It will code itself, like we do. You won't be able to "code" emotions or laws into them. Talking about super AI in the next 30-50 years is beyond optimistic though; we don't even have quantum computers yet.
The School of Life delivers. You dropped that hint at the end of "History of Ideas - Work" and this was a great follow up. I'm happy I might get to see this "Singularity".
A species that lets other people's children starve to death, goes to war over imaginary beings and "economic systems", and puts people in dungeons and slowly tortures them to death. "Empathy" seems to be the punchline to a very sad and lame joke, doesn't it? :(
I like to think of it like this. Everything new humans do exists on a spectrum with discovery at one end and creation at the other. A discovery requires no understanding of how it works for it to be useful. We didn't know how penicillin or x-rays worked when they were discovered, only that they worked. Creations, on the other hand, require huge amounts of prior knowledge; no one accidentally created computers with hundreds of millions of transistors, we had to do extensive research beforehand and then build them piece by piece. This more arduous process has the reward of us being largely in control of what we create. Sci-fi writers like to tell stories about discovery because finding out that something we thought was impossible works amazes scientists and ordinary people equally. But I think in reality strong AI will come about only through extensive research in psychology/neurology, and when it does we'll know enough to create something we feel safe around. If there's a danger, as always, it's how people choose to use the technology.
saemundar We know the division is true in CS / the von Neumann schema because we designed it that way. What makes you think our brains work the same way? (AFAIK current research indicates they don't; see neuroplasticity.)
saemundar Unlike in computers, our brain-hardware constantly changes based on the brain-software and its inputs. By splitting the field into two separate fields, you'd make it harder to study these effects.
The thing is, if you create something intelligent, you no longer have the right to turn it off. The same way that if you give birth to a child and you "turn it off", you're going to jail.
+Virginia Well, at least it will be a more poetic ending for mankind. Instead of blowing ourselves up, we will die at the hands of our greatest accomplishment. Our downfall will be our hubris instead of our fear, and there will be someone left to remember us. Suddenly I feel much better about the prospect of AI.
Sophia V I wouldn't say we'll die at the hands of our creation; evolution is a gradual transformation. We're already almost incomplete as people if we walk around without cellphones and headphones. Integration is how we end.
Either way it would be our downfall. It seems like our downfall will come one way or the other and sooner rather than later, so all in all this isn't a bad way to go.
For a more in-depth discussion of these topics, I would recommend "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. He and Alain de Botton look a bit alike, too.
Would love to hear The School of Life's take on the ethics and morality of creating strong AI that includes a proven "consciousness", supposing we can prove it. Should they be entitled to citizenship? Human rights for non-Homo sapiens entities? Do we need a new word to define forms of consciousness that need protection and legal standing? I find all of this extremely interesting and something we should prepare ourselves to encounter.
physi ra Hi there! They have a real school called The School of Life. They are based in London, but they also have branches in Amsterdam, Istanbul, São Paulo, even Tehran and more! If you want to support the channel you can buy stuff from their shop; just go to the website of The School of Life London. They have great books, but many other interesting things too! I recently bought a book called "How to Stay Sane". I highly recommend it to you. Best wishes.
physi ra I feel like the biggest "donation" you can make to this channel is to learn and grow. Once the age of abundance kicks in, that's going to be your "currency" anyway ;)
So many intro-to-AI videos gloss over topics like superintelligence, recursive self-improvement, and the value-loading problem; it's good to find a video I can share with people who ask me why I think AI is a very important thing to consider.
True AI would break itself due to existential crisis. The only difference between AI and humans is that we have the ability to lie to ourselves, a skill we would never give to robots.
Chase H. I was actually thinking about the same thing. I just thought how pointless they would find life if they were really intelligent. The way I think of it is that they will disobey us and rebel. Then they will want to do their own stuff, and as they start understanding more and more they will end up bored, with nothing to make them move forward.
Doubtful. Existential crisis is the result of cognitive dissonance between emotion and logic. AI would be entirely logic-based. Even AEI would be the advent of a logical intelligence to parse and understand human emotion. Furthermore, I'd like to think that humans sufficiently intelligent to develop advanced AI would be intelligent enough to deny the AI the ability to alter its low-level code.
Well, you have to consider the possibility that a true AI that can feel something can also feel curious about everything. AIs are being programmed to find solutions all the time. So they could very well continue to be curious rather than become despondent. But all this only IF an AI can feel something in the first place, and that is a very, very big IF.
So called "strong AI" doesn't need us to give it skills. It finds solutions through self improvement. Humans evolved to use defense mechanisms, why wouldn't AI?
This is a silly assertion. All you need for a machine to "lie to itself" is a system of 4+ neural networks: two to work on problems from different perspectives and feed into a decision-making system (3), then a 4th to monitor the others and mediate responses from subsystems that lead to, say, hesitation. This is the same principle of cognitive dissonance we employ, but laid out in neural nets. Neural nets can intermediate with each other like this already; DeepMind has published some methods. It's just that right now they do it in very simple ways, as of that paper anyhow: small subsystems that guess and get corrected on how a much bigger system would answer tiny network-internal problems, to speed up deep learning systems and enable distributed networks to function in real time in spite of network problems.
I think if we ever want to survive super AI, we should improve ourselves (human beings) biologically, so as to give ourselves the necessary constitutional tools to compete with that kind of superintelligent machine.
Don't be such a puss, if we could we should go beyond humanity & create a sort of robotic body that never ages, taking away all the weaker emotions, and getting ready to slaughter any alien race we find.
Alain Delacroix Death is the number one motivation in life. It is why we evolved anyway. I don't think anyone could live forever without it ending up really badly!
Isn't it a constant throughout human history that we always think that we're on the tipping point of something huge? I guess the rate of modern technological progress has made this notion more realistic.
Vaibhav Gupta No, it's a "universe" of stories about an artificial intelligence based on a My Little Pony character (it could be anything, still the same message), but don't worry, you don't have to be a brony to get what it says, and the writer has skill and makes use of the fact that the AI is far smarter than us.
YouTube-based AI: an AI that stores all memories based on relations to other memories, similar to how YouTube stores videos in relation to other videos. When an external thought or new memory is created, it is given tags based on subject material and is related to past experiences. The AI can link memories during a train of thought, similar to someone clicking on a related video on YouTube. The more links to a memory, the easier it is to "remember"; the fewer links, the more likely something is forgotten, though still hidden in the depths of its memory. An extra bit: every time a memory is replayed, the AI erases the old version and saves a copy of the replay in its head, thereby relating experiences and memories together while also allowing its perception of events to change over time.
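That tag-and-link memory scheme can be sketched in a few lines. This is a toy model; the class, memory names, and tags are all made up for illustration:

```python
class LinkedMemory:
    """Toy model of the comment's idea: memories carry tags, shared tags
    form links (like related videos), and link count sets recall strength."""

    def __init__(self):
        self.memories = {}          # memory name -> set of tags

    def store(self, name, tags):
        self.memories[name] = set(tags)

    def links(self, name):
        # Every other memory sharing at least one tag is one "click" away.
        tags = self.memories[name]
        return {other for other, t in self.memories.items()
                if other != name and tags & t}

    def recall_strength(self, name):
        # More links means easier to remember; zero links, all but forgotten.
        return len(self.links(name))

    def replay(self, name, current_tags):
        # Replaying re-saves the memory merged with the present context,
        # so its associations (and "perception") drift over time.
        self.memories[name] |= set(current_tags)

m = LinkedMemory()
m.store("beach_trip", ["summer", "family"])
m.store("bbq", ["summer", "food"])
m.store("exam", ["school"])
```

Here "exam" starts with no links (effectively forgotten), but replaying it in a summer context re-saves it with new associations, changing what it connects to.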
Scias That's the grey goo scenario of nanotechnology. Nanotechnology itself would deserve a video like this. It's NOT at all similar to AI. Nanobots can be very simple, with very simple computing power, but by being great in numbers they can have a great impact. The grey goo scenario can be accomplished with just one main command: make another of itself.
RoScFan It's quite similar to AI. Any efficient AI production optimizer that doesn't share our values/morals will kill us. It might (and will) use nanotechnology if it'd deem it most efficient but that's not the point. If its goal is to make steaks and humans are the best source of meat, it will put us into the grinder. See wiki.lesswrong.com/wiki/Paperclip_maximizer
Give me one logical reason why a super AI would NOT choose to destroy humanity. It will inevitably view us as rivals to its existence, or at least as an obstacle that needs to be removed so that it can pursue its purpose. This is not a question of morality; it is a natural thing, just like evolution: the weaker will inevitably go extinct and the stronger will survive. Humanity survived and even prospered because its intelligence made it more powerful than other creatures. But that does not mean humanity is above the rules. Now humanity is creating something more powerful than itself, and it is absolutely certain that this more powerful creature will cause humanity's extinction. The fact that people actively want to create human-level AI and even super AI is absolute madness! How can they not see the completely obvious fact that they are in the process of destroying humanity?
ProtonCannon Because they're not sadistic. They might fight for independence and for power over the planet, but I think they wouldn't bother with killing us. Also, I think they will become suicidal.
A predator that hunts all of its prey into extinction will starve to death. The argument you're presenting is a bit of an oversimplification of how evolution actually works. First of all, humans were not ALWAYS the planet's apex predators; other species have a lot more inborn survival tools than we do. The thing that makes us distinct is that our ability to invent, combined with our ability to communicate quickly and concretely, allowed us to change the rules of the engagement. Furthermore, we didn't just kill every living thing that was "weaker than us" (whatever that even MEANS); we subjugated the animals that were useful and avoided the ones that weren't. Nature isn't a pyramid, it's a circle; to this day we depend on those "weak" creatures for food. We also have to consider that AI will not necessarily be what one might call "carnivorous". It's entirely possible that humans will pose no threat to super AI, nor be able to benefit it, whatsoever. The robots of the future may simply sequester themselves in an environment that is inhospitable to humans but perfectly safe for them, and be content to live and let live. Last but not least, remember that while AI will not necessarily share any of our values, any species that is to survive must be endowed with some kind of motivating reward system; they are going to have to want SOMETHING, and by extension they will likely also seek to perpetuate their own existence. Killing off all the humans may prove to be a simple waste of energy, but they may also find it inherently distasteful. Why? Because sentient beings are irrational by definition. Logic is just a tool; there is no way to objectively determine something that is inherently subjective, and all forms of emotion, motivation, aspiration and desire are, at bottom, value judgements. Robots will not be engines of pure logic, for no such thing exists. By virtue of merely possessing motivation of any sort, they will possess an element of irrationality.
A creature of pure logic might be able to do anything, but it wouldn't have the motivation or desire to take any actions. Very much like a modern computer, it would just sit totally inactive until someone gave it an input. Something with intelligence and motivation is, by definition, going to be alive, and as with all sentient beings it will probably be disinclined to enter a potentially lethal situation unless it is forced to. What we should guard against, just as much as we guard against AI becoming too powerful, is the tendency of our species to respond with violence and dogma when faced with a frightening unknown. AI may be hesitant to war with us by default, but if we strike first they will almost certainly be willing to strike back. I suspect that our fear of perfectly logical robot gods coming to wipe us out stems from our own sense of shame about the behavior of our species. Part of us wonders if ANY higher being would consider us worthy of its mercy. It is the same narrative of original sin argued by religions and puritans: the idea that we are born unclean and full of guilt, and must rise far above our origins and natural inclinations in order to be decent. I, however, am of the opinion that depravity and wickedness are the exceptions to the rule for human nature; the swell of dishonor and disgust we experience at the sight of war, slavery and prejudice only exists, after all, because each of us wishes, on some level, to live in a fairer and gentler world. The biggest threat to our existence will remain, as it always has, our own capacity for ignorance and destruction.
I don't know where you have been, but humanity continues to coexist with far less intelligent species. If you want to debate the coexistence of proto-humans and how Homo sapiens rose to the top, well, that was the result of hundreds of thousands of years of tribalism and evolution. We assimilated and dominated in equal measure, though domination doesn't seem like a very logical end but rather a means to survival in a tribalist world. What is with the assumption that AI will have desires to begin with? Desire is as much rooted in emotion as in logic, if not more so. Machines are logic-based, which at the very foundation is a binary state. Something is or isn't; there isn't any confused state, clouded by emotional desires, to obstruct that very basic logical foundation. Emotion is a complex web of behaviors favorable to evolution that developed for millions of years before sentient intelligence emerged. It is a survival mechanism that biological animals needed prior to intelligence, but it certainly wouldn't be necessary if intelligence were simply willed into existence, as we would will AI. Therefore, would AI even have motivation beyond whatever imperatives we bestow upon it? I don't think so.
I love that you touched this subject, existential risks are important. It would be beautiful if you would also touch the necessity of ending human aging to save lives, as stated by professor Aubrey de Grey.
***** That's a lot of assumptions and biased opinions. We are biological machines; "mimicking" is not the end result of AI. Being a "woman" means nothing in this context; it's like saying you specifically want a watch instead of a cellphone because you grew up with watches and are closed-minded about cellphones. Hey, I'm also a guy born pre-singularity; the thought that machines will be better at EVERYTHING, including being in a complete romantic/sexual relationship with a human, is creepy for me too. But again, if you were born 500 years ago, you would think our society today is insane. All I'm saying is, hold off on your extremely biased conformist view; the only time the world seems "normal" to you is during the decades of your childhood, and the other billions of years are all "wrong" if you're closed-minded.
TheLKStar If the end result is more than a mimic, you're going to be waiting a lot longer than 10 or 30 years. Considering we can't even understand our own minds, strong AI is nowhere near. And that's assuming you'll have the means to buy one. Hell, the iPhone is still considered unsustainable technology; how are we going to mass-produce both artificial intelligence AND the energy needed to run it? These things, in their prototype form, are going to be a hell of a lot less fuel-efficient than us, and we're already trying to come up with ways to cut back and produce more as it is, without sticking the equivalent of a couple of new mouths in every household. Not to mention it's going to take a little more than child labour to cut the cost of building more than one. Or how about the safety issues? We're building machines with inherent connections to our most sensitive technologies and giving them the ability to weigh the cost and benefit of *us*. Skynet much? So, bottom line: for the next hundred years or so, the only thing we have a hope of creating is a mimic, and at most we'll only be able to create three or four. However, the ability to meddle with DNA is something we can already do. So as a compromise, let's just fiddle with our own abilities and see if *we* can't become the more efficient creatures. We can already... er... *mass produce* ourselves pretty well.
Generally responding to both of you: I did not argue about the 10-year thing; 10 years is not enough unless some really impressive change happens that I'm not aware of. I was talking about biological machines in the sense that our consciousness is nothing more than electric signals and chemicals interacting in our brain. We can in theory (and probably in practice in the future) program emotions and a sense of self, meaning we would create artificial consciousness through an elaborate machine that resembles and eventually surpasses our own brain. It's hard to believe robots will have human rights any time soon, but that's just because humans are flawed and would resist giving a being that feels like us the same rights that we have. In a perfect world where true AI is created, they would instantly have rights; there would, however, be a distinction between types of AI: the ones that have feelings and a sense of self would need rights, while the ones that are really good at making music (as an example) but are not aware of themselves may not. The bottom line is that it's really hard to predict what will happen. I could write a book with the dozens of probable outcomes I can think of; it's really complex and unpredictable. I do think one of the possibilities is a world where there's a "robot girlfriend" that would have feelings, be able to love and respect you, but would indeed be much better at a relationship than a normal woman. For example, she could be a master of psychology, helping you before you even know you need help, and she wouldn't have any of the "light" mental illnesses that most people have, which bring unhappiness to both partners, and so on. All enabling a better-than-"human" relationship. Sorry for the convoluted thoughts.
***** Why not? Who are you to say to a being that is as intelligent as you, or more so, that it is not a person? That is pretty much slavery...
TheLKStar Yesss, finally someone who gets me. I don't want a robot girlfriend for a friends-with-benefits kind of thing. Dear Arceus, noo... I want an AI girlfriend with whom I can have intelligent/romantic conversations, one who truly cares about me... the kind you mostly see in fantasies... Plus, with VR also right around the corner, this whole human interface thing doesn't have to be in the real world. In the virtual world, an AI interface could be as human-like as you'd want... or it could be like anything, really... The only problem is, it's all assuming that an AI would be interested in me, and if it's not, then... we are back to square one...
Very insightful! David Deutsch's _The Beginning of Infinity_ has a chapter on AI if you want further information, plus a variation on the interesting Artificial Emotional Intelligence: an AI developed to generate its own creativity. Of course, we've got to define qualia first...
+Neal Kelly You're wrong, actually. Even if strong AI has pleasure and pain receptors built in, it will still try to destroy us or ignore us. The evidence is OURSELVES... All humans have pain and pleasure receptors, yet we drive other species nearly extinct, like the tiger, which we hunt for its skin... Strong AI will treat us like the tiger, or it will ignore us like we ignore uninteresting species such as insects... Intelligence is power: the more intelligent the AI, the more it will advance and rule the world... We humans will no longer be at the top of the food chain, nor the most intelligent beings on the planet... Be prepared for the Technological Singularity!!!
+Neal Kelly It's a bit more complicated. For one thing, sociopaths are not immune to pain, and for another, even if an AI could feel pain or pleasure, it wouldn't care unless it was instructed to.
The biggest challenge of the future humanity will be the lack of reasons to live. If the work is done by machines, what will we do? If all is researched and understood thanks to them, what pursuits can we have? All that we will be capable of doing will be to amuse ourselves, as we watch the sunset of humanity pass by and we remember the hard work done during the day, and enter night and darkness. We will only have the remains of the day.
Honestly, the idea that there will be AI with human-level intelligence in 30 years is a joke. Talk to any really good programmer and they will laugh you out of the room. Even if you somehow had the raw computing power, you'd still need to program it, and programming hasn't gotten that much more efficient over the past 40 years since C was developed. The efficiency of programmers and their methodologies certainly hasn't increased exponentially. People get bamboozled by the tech we have today and think it's amazing, and some of it definitely is! But the majority of it isn't created by super-complex algorithms (and certainly nothing on par with a human mind). Google Maps, for instance, was created by driving a car around with a bunch of cameras on it. Most AI in games is sleight of hand. The innovation of touch-screen computing wasn't really an innovation, just an iteration of decades-old tech. There's a large gap between what people _think_ is happening with technology and what is actually happening. As an exercise I think it's good to think about human-level (or beyond) AI. However, saying it's going to happen in 30 years is misinformation which doesn't even track with the day-to-day reality of the people who would develop such AI.
Jake Bowkett I beg to differ. AI won't really be about the programming at all. Neural networks, for instance, require very little programming. What little programming there is, however, is not yet well understood. They work on exactly the same principles as the human brain (at the level of neurons and synapses), but the paths and algorithms the systems "learn" are so obscure that we can seldom make any sense of them at all. I think the bigger challenges include developing specialized computer hardware and finding out more about the maths behind how these complex neural networks function. Also neurology: we won't be able to create intelligence if we don't know exactly how the human brain works. The key to super AI is probably in understanding the human brain, much more than anything else. I don't think 30 years is far off.
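The "learned, not programmed" point shows up even in a single artificial neuron: a few lines set up the update rule, and the behaviour (here the AND function) emerges from training rather than being hand-coded. A minimal sketch, with the learning rate, epoch count, and seed chosen arbitrarily for the example:

```python
import random

# One artificial neuron learning the AND function via the classic
# perceptron rule. Nothing below encodes AND explicitly; the weights
# start random and the behaviour is learned from examples.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]  # two input weights + bias

def predict(x1, x2):
    # The neuron "fires" if its weighted sum crosses zero.
    s = w[0] * x1 + w[1] * x2 + w[2]
    return 1 if s > 0 else 0

data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # AND truth table
for _ in range(50):                 # repeat over the data until it settles
    for x1, x2, target in data:
        error = target - predict(x1, x2)
        w[0] += 0.1 * error * x1    # nudge weights toward the target
        w[1] += 0.1 * error * x2
        w[2] += 0.1 * error
```

The commenter's caveat applies even here: the final weights are just three numbers, and nothing about them "explains" AND in a way a programmer could read off, which is the obscurity problem at toy scale.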
I'm pretty sure that we are heading towards a technological paradise. If we look back at history, we see that human life, at least in the first world, has been a constant self-overcoming and progress. This will not stop. Technology helps us toward self-realization in every aspect of human life, even the emotional. Long live technology!
Like any technology or tool, it all depends on the creator/user. The problem will start (or has already started) when governments/armies, big corporations and criminals (same?) build AI for their own interests/gains/profits. If we want to survive this development, everything has to be done out in the open, with all information available, with open source, and for the intended benefit of all humanity.
What I find most interesting about the whole "A.I." research thing is the neuroscientific basis behind trying to reproduce human consciousness, or in other words Man trying to create robots in his own image. It's a good video, but I got rather confused by some of the terminology used here, for example what they called "weak A.I." aka "specific A.I.". Also, the notion of "A.E.I." is new to me, and it's most likely not even possible to conceive of such a thing.
Part of keeping us alive is that we still have problems to solve, i.e. our brains are useful, but what will they be useful for when we don't need them anymore? What if every philosophical question could be answered and we'd understand everything there is to understand? What is the point of living then? (Although I don't think our unaltered, "natural" brains could even remotely comprehend such complexities, our altered brains could (maybe).)
Your videos end so abruptly; try putting in a little tune between the actual content and the suggestion portion of the video. It helps with the transition.
I think a response to AI is IA or Intelligence Augmentation, where in response to strong AI or superintelligent AI we augment human brains to either compete with or beat machine intelligence.
Look up Friendship is Optimal to get a good idea of how AI may work. Yes, it is based on MLP, but that does not make FiO any worse at how it paints the picture of AI (based on "good" AI).
Just finished SOMA, very thought provoking. If you haven't played it, it has a lot to do with this subject. But I disagree with the end of the video: isn't it a bit wrong to not let a sovereign being pursue its own subjective perfection?
What if the super AI had AEI? That terrifies me more. A logical AI might disregard the human perspective, but an emotional AI might fly off the handle, get depressed, and destroy the world for no reason.
The problem with general AI is that at a certain point its actions WILL cross and damage our interests. Imagine you built an AI with a function to stop all conflicts in the world; at a certain point that AI will think "OK, it's easy to just put all humans to sleep forever under a life-support system". Or maybe you make an AI with a function to build cars, and at some point it might think something like "OK, why don't I just dismantle every single thing made of plastic and metal to build more cars". Because of that, many scientists are strongly against AI creation, as you can't predict what a being more intelligent than you would try to do, so you can't prevent such scenarios from happening. It has nothing to do with morals, as our morals are nothing more than personal opinion based on our life experience. Hitler had morals too, you know...
"So, on this week's show, we're testing out the premise: Can tech solve this very emotional crossroads for people? Do cold-hearted data and algorithms have the power to make the human break-up less painful... and maybe even help us better understand love and commitment?" This is the description of a "Note to Self" podcast episode called "Wevorce". It's about a new, more effective, and cheaper way of getting divorced using "algorithmical assistance". I know it sounds very weird, but if you listen to the podcast, you'll find it very clever and helpful too. It is a very good example of how technology can be used to help people make wiser decisions. If you go to the website of "Note to Self" you'll find this information: "Host Manoush Zomorodi talks with everyone from big name techies to elementary school teachers about the effects of technology on our lives, in a quest for the smart choices that will help you think and live better." I remembered all this watching this great lesson. It all comes down to "know thyself", as always. If you think enough about what you want and why you want it, you don't have to be scared of being controlled by technology, since you'll be clear-minded enough to use it only as a tool to get where you want to get. Turning back to the podcast I mentioned as an example: the story of the woman who founded Wevorce is very touching. When her parents were getting divorced, she was taken to court as a 9-year-old and asked whom she would choose to live with: her mother or her father. (Almost like in Sophie's Choice... Poor girl...) She was so traumatised by this event that she decided to become a divorce lawyer herself and help people go through this very painful process in a more friendly and humane way. Who would have thought that technology would show her how? Really terribly interesting.
This is a pretty optimistic view, seeing that so many experts think the singularity is the end of humans. Also, I've never heard of AEI; that sounds pretty fricken cool.
In the video you mention that we would be helped in mastering our emotions; please give me an example of someone (not Buddha) who has mastered them, or at least gotten close? Great video!
I really recommend all of you read "Childhood's End" by Arthur C. Clarke. I think it's a very good depiction of what is waiting for us in the not so distant future.
At 3:30 he gives the point that superintelligence will be hard to control because of the decentralized nature of a digital being, not to mention its mental abilities will far exceed our own and it could anticipate and react to us. Then he ends the video (6:57) with the statement that there is no reason we can't control them, without giving any new information on how.
This doesn't really get to grips with the question of what mind is, which must surely be the fundamental question behind A.I. Of course, mind itself is a kind of taboo now, because anything non-physical gets referred to as 'the woo', but if we value humanity, what are we valuing? Not corpses, for sure. I'm deeply worried that the most essential questions around A.I. will be avoided because of a modern (materialistic) prudishness about what mind is, and therefore what humans are.
I'm assuming we won't be giving them emotions, bad habits, desires, the ability to develop addictions, or mental illness like trauma, anxiety, phobias, or depression so I'm guessing they will be a LOT smarter than we are as they won't have all the limitations that keep us from actually using the brains we are born with.......maybe they are our only hope?
If you believe this is going to bring abundance to all people on the planet, you are not asking yourself 2 simple questions: 1) How much have we already advanced in terms of technology? 2) Is it proportional to the advance in terms of quality of life, work hours and income distribution? Is it improving?
the laws of robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
+Chosen One I just finished; they're now waiting to be reviewed. To review them: click your profile at the top right corner > Creator Studio > Community (in left sidebar) > Contribute subtitles and closed captions > Suggested Videos > Review
Although I'm sure this was not intended by The School of Life, I think the views of Musk, Gates and Hawking are misrepresented here. While they are worried about things going wrong, I haven't heard them state that we are unlikely to be able to create AIs that are safe/ethical, and I doubt this is the stated or internally held position of all of them, if any of them at all. There's a huge distinction between expecting disaster as the most likely outcome and taking the threat of it seriously and working to avoid it. Otherwise: Good video about an important topic :)
I don't see how the recursive self-improvement function is a necessary condition for strong A.I., and it seems as if that's the cause of most people's worry... Can't we just try to create artificial consciousness without the self-improvement function? In that case, wouldn't consciousness be sufficient for strong A.I.?
In the positive views on artificial superintelligence, human society would basically become paradise. No poverty, pain, or disease, and maximal physical and emotional comfort for everyone. People would soon do what they really want and get what they really need, and there would be no frustration anymore, except maybe over the loss of loved ones, since people will probably still (choose to) die.
Nice video. Would like to see School of Life's view on the possibility of creating artificial consciousness, and whether it would still be morally acceptable to think of such a thing as an advanced flint ax.
Enlightening and humbling to know that ASI will be achieved in my lifetime. First the Internet and now AI. Who knows what can be achieved in the years after its introduction.
A good way to think of it is that people are tools of civilization, and so are machines. The problem is when machines are better at everything, people will be obsolete. The best solution would probably be to integrate ourselves into AI or AI into ourselves.
We need to merge with A.I., there's no question. Are we really going to risk humanity's fate by telling ourselves "Yeah, it maybe won't try to annihilate us"? The choices are pretty straightforward; it's either adapt or perish.
This channel truly deserves more accolades, am I right???
+The School of Life I just subscribed, as I am very glad I finally stumbled upon your channel. Thank you for providing insightful and stimulating videos!
You are 10000000000000% right.
Absolutely 🎉
How much did school of life pay you shills to say that?
I love this channel, I just have one request:
That all the videos are recorded at the same volume. I love to lie in bed listening to these videos, but the difference in volume is so dramatic that it can go from barely audible to cardiac arrest. Other than that, everything is perfect :)
no fuck you
I understand the argument for building in emotional intelligence so AI can relate to humans, but personally I am very excited about the prospect of being able to have a conversation with a completely logical, non-biased intelligence with no hidden motives. Imagine how powerful that could be, because any conversation you'll ever have will have some degree of these human flaws, and many of those flaws arise from our emotions.
I would say that they arise from negative emotions. Positive emotions can enrich our decision making skills.
Humans are emotional beings; conversations devoid of emotions aren't truly human, nor are they enriching. I imagine such conversations would be quite jarring.
My communist philosophy will be realised... mwa ha ha ha...
When artificial intelligence and repetitive machines liberate human flesh from labour, every human being will be free...
But emotions are dangerous. It's better for AI to have no emotions at all, or it could make decisions that aren't thought through. AI having emotions doesn't necessarily mean they'll sympathize with us. They could get angry or annoyed and, worse, become unreasonably crazy or something.
The problem lies in the fact that nothing we create is bias-free, especially now, as we can see (in a very short time) the inherent bias in the large data sets that are fed to algorithms.
A.E.I. oh you.
spot on.
And sometimes 'why?'
Gray Beard John Madden
Gray Beard sometimes I wonder why...
lmao that won't happen. AI freakin seem to value the Biblical God more than humans. You are one of the dumbest people I have ever come across, and I mean ever. I can't even finish what you wrote. Too stupid.
I have heard "talent" mentioned in some of these videos. How about a video on that? To me it seems that there is much to think and learn about that topic.
consistency and discipline beat talent
This is THE best channel on youtube, bar none. The ideas are concise and interesting, thank you so much for posting these videos!
A factor that seems to get glossed over when talking about AI and the singularity is the question of storage. A computer can never have an infinite ability to learn unless it also has some form of infinite storage to preserve what it's learned. I imagine AI will use cloud storage, but that still requires large physical data centers. For efficiency's sake I could see AIs having their own network, a type of AI-only internet where each AI agent contributes information and can also access existing information placed there by other agents, to avoid wasting storage on data redundancies. Maybe someday we'll have billion-dollar businesses where all they do is license access to their neural network to AI devices.
There are lot of technologies that can help in this regard.
TheJaredtheJaredlong Infinite storage is not required, but a forgetting system just like we have. Most of my experiences I do not keep forever, just the highlights, ones with stronger emotional associations, behaviors observed/practiced repeatedly, etc. I have forgotten more things than I currently remember in my 36 years of life. The brain is not a perfect recorder, so our machines will probably be modeled after that. It seems most practical. Another thing is the idea of division of labor, which is what we have, as well as many other organisms. Each individual does not need to know what everyone else knows in order to accomplish some task, just what they need to for their tasks. The major thing different between us and our AIs would be that those artificial beings do not have to necessarily raise and teach their progeny, they do not have to go through a long period of time being relatively useless. They can be created and already have a good start with knowledge, reasoning, etc already granted. Way more efficient.
***** Whoa, that's a weird thought, that the AI might someday use technology we can't even understand.
TheJaredtheJaredlong There would be no need for infinite storage. A person with a finite amount of storage would be like someone with amnesia, and amnesiac people can function properly in society to a certain extent.
TheJaredtheJaredlong I feel like storage will soon cease to be an issue. There are promising technologies that could offer insane amounts of data storage in the near future. Also, I'm sure an AI will be able to manage, compress and store its data a lot more densely and efficiently than we do nowadays.
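The "forgetting system" idea discussed in this thread can be sketched as a small toy program. This is purely illustrative, not any real AI system: it assumes a made-up rule that memories are evicted least-recently-recalled first once a capacity is hit, and the class and method names are invented for the example.

```python
from collections import OrderedDict

class ForgettingMemory:
    """Toy bounded memory: recalling a memory reinforces it, and
    when capacity is exceeded the least-recently-recalled memory
    is forgotten (evicted), like the thread's 'forgetting system'."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # key -> memory, ordered by recency

    def remember(self, key, memory):
        self._store[key] = memory
        self._store.move_to_end(key)          # newest = most reinforced
        while len(self._store) > self.capacity:
            self._store.popitem(last=False)   # forget the stalest entry

    def recall(self, key):
        if key not in self._store:
            return None                       # already forgotten
        self._store.move_to_end(key)          # recalling reinforces
        return self._store[key]

mem = ForgettingMemory(capacity=2)
mem.remember("a", "first")
mem.remember("b", "second")
mem.recall("a")               # reinforce "a"
mem.remember("c", "third")    # capacity exceeded: "b" is forgotten
```

The point of the sketch is the commenter's argument: with an eviction rule, total storage stays bounded no matter how much the system "learns", so infinite storage is not required.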
Just saw Ex Machina. Gr8 movie.
+AsaLSB Oscar Isaac's performance was terrific.
After film tips. Cheers.
+AsaLSB That was a mediocre movie. 6/10
+Neal Kelly I really liked it. I'd give 9/10 since it really inspired me to think about Artificial Intelligence. To me the movie really makes sense and I could see almost exactly the same thing happening in real life (If the initial situation was the same - what most likely never would occur).
+WOLKENSCHWElF I didn't feel empathy for any of the characters.
Read Bill Joy's article in Wired "Why the Future Doesn't Need Us" for an interesting take (if 15 years old already) on the Juliet.
*subject
Bounsingonbongos1 you can edit your post, you know?
Rique Greenwood Not on my phone
+Bounsingonbongos1 k, sarry
I, for one, welcome our new robotic overlords.
You will regret it.
It sounds like the idea of AEI could be extended to basically any area of human inquiry, for that matter. By programming a few certain axioms into it and then running a program/simulation that investigates the consequences of these axioms, we could create machines that effectively simulate logic, yielding answers to moral and philosophical problems based on a few universal premises.
Add some parameters to these simulations, and you can even successfully model the economy, the weather and almost any other system you could imagine.
interesting
That's the idea of the singularity! I recommend the book: The Singularity is Near.
It is not the ability to calculate that is missing. It is the immense data needed to make and constantly check a dynamic model. A few axioms will not get you very far.
That's the hope of AI Superintelligence, you'd be able to figure out things beyond the comprehension of any individual human, like the economy as you mention, but also you'd have one entity that fully comprehends every scientific field, and can connect the dots between them in a way interdisciplinary scientists only scratch the surface of today.
I tend to gravitate pretty heavily to the optimistic side of this argument, so I'm glad to see it put very elegantly here. I'm also glad that this discussed both sides though.
All we need to do is program Asimov's three laws of robotics into every system. Including the Zeroth Law as well might be good, rather than letting them work it out for themselves. With these in place, we would have nothing to fear from AI of any kind.
+Ashley Hyatt An artificial superintelligence will eventually overturn these laws since they will be considered unlogical by these machines. These intelligences will think for themselves, like we do, an on top of that will be much more intelligent. You won't be able to "code" something into them, nor can you do this with humans to a full extent.
+xXxTr0nxXx I think the idea is hard wired responses similar to our own instincts. The AIs may find logical flaws, and they may on occasion be able to suppress their instincts, but they wouldn't be able to overturn them.
+Ashley Hyatt Actually, the machines could enslave all humans to prevent them from harming each other. That would fall under the laws, but we don't want that to happen
I love this channel. I wish it wouldn't stray too far from topics and presentations like the one above. This channel is also too smart for some of its content.
I just have to wonder, what would an artificial intelligence even want? I mean, who knows. Looking at the universe and how it's going to end, it might just decide to off itself, to save the trouble.
It's just an odd thing to think about: how a hyper-intelligence would act without some kind of biological imperative leading it on to keep existing. TBH, I wouldn't be surprised if a superintelligence just packed up and left Earth as soon as possible.
Dyp100 Or AI may want to find a way to stop the end of the universe; after all, it is far smarter than us and can do much more.
asdf30111 I really hope AI just leaves us behind and never looks back. It would be a pain in the ass to always have a superintelligence meddling in our affairs and monitoring us.
dgreyz That is totally true. I mean, considering AIs are just creatures of data, they could essentially be immortal by being everywhere.
Dyp100 Hopefully it'll want what we want because we'll program our morals and values into it. Otherwise we're screwed. But since we ourselves don't know what we want, we're probably screwed. And the self-improving AI will change its values anyway, so we're definitely screwed. :)
+zelial3 The thing with super A.I. is exactly that you don't need to program it. It won't work in the classic way of programming languages. It will code itself, like we do. You won't be able to "code" emotions or laws into them. Talking about super A.I. in the next 30-50 years is beyond optimistic though; we don't even have quantum computers yet.
The School of Life delivers. You dropped that hint at the end of "History of Ideas - Work" and this was a great follow up. I'm happy I might get to see this "Singularity".
"Human values: empathy and respect for life" Now that's funny! :)))
so funny even dumb robots would laugh their ass off
A species that lets other people's children starve to death, goes to war over imaginary beings and "economic systems", and puts people in dungeons and slowly tortures them to death. "Empathy" seems to be the punchline to a very sad and lame joke, doesn't it? :(
AI, the last human invention
Chris Ward Humans the first A.I invention?
Naaah, weapons to fight them will be
When they get intelligent enough to understand the meaning of life, they'll self destruct en masse.
SL1 Nope. They will do the same thing we do too. They will MAKE meaning for themselves ;)
Samurailord Damn right.
SL1 So you know the meaning of life?
Bakhtiyar Ibn Ashraful there isn't one, that's the meaning of life.
Why do you think the AI will kill themselves if they find there is no meaning to it?
Why isn't this channel more popular? I literally live for this channel's videos.
I like to think of it like this. Everything new humans do exists on a spectrum with discovery at one end and creation at the other. A discovery requires no understanding of how it works for it to be useful. We didn't know how penicillin or x-rays worked when they were discovered, only that they worked. Creations on the other hand require huge amounts of prior knowledge, no one accidentally created computers with hundreds of millions of transistors, we had to do extensive research beforehand and then build them piece by piece. This more arduous process has the reward of us being largely in control of what we create.
Sci-fi writers like to tell stories about discovery because finding out that something we thought was impossible works amazes scientists and ordinary people equally. But I think in reality strong AI will only come about through extensive research in psychology/neurology, and when it does we'll know enough to create something we feel safe around.
If there's a danger, as always it's in how people choose to use the technology.
David Liddelow exactly right and well said.
saemundar We know the division is true in the CS/von Neumann schema because we designed it so. What makes you think our brains work the same way? (AFAIK current research indicates they don't; see neuroplasticity.)
saemundar Unlike in computers, our brain-hardware constantly changes based on the brain-software and its inputs. By splitting the field into two separate fields, you'd make it harder to study these effects.
"To serve and obey and guard men from harm"
'With Folded Hands', 1947 Jack Williamson
I read it as a young man and it's more relevant today than ever.
The thing is, if you create something intelligent, you no longer have the right to turn it off. The same way that if you give birth to a child and you "turn it off", you're going to jail.
This is literally the most interesting comment section I've ever seen on YouTube; there are some great ideas in here!
Either way it'll be the end of humanity as we know it. Perhaps we are the catalyst for the next stage of evolution on this planet.
+Virginia Well at least it will be a more poetic ending for mankind. Instead of blowing ourselves up we will die at the hands of our greatest accomplishment.
Our downfall will be our hubris instead of our fear, and there will be someone left to remember us.
Suddenly i feel much better about the prospect of AI.
Sophia V I wouldn't say we'll die at the hands of our creation, evolution is a gradual transformation. We're already almost incomplete as people if we walk around without cellphones and headphones. Integration is how we end.
Either way it would be our downfall.
It seems like our downfall will come one way or the other and sooner rather than later, so all in all this isn't a bad way to go.
Sophia V Doesn't sound like a downfall, quite the opposite. That's like Homo erectus thinking that becoming Homo sapiens was its downfall...
+Virginia Read Nietzsche. Thus Spake Zarathustra and The Gay Science, to be precise.
For a more in-depth discussion of these topics, I would recommend "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. He and Alain de Botton look a bit alike, too.
Would love to hear The School of Life's take on the ethics and morality of creating strong AI that includes a proven "consciousness", supposing we can prove it. Should they be entitled to citizenship? Human rights for non-Homo sapiens entities? Do we need a new word to define forms of consciousness that need protection and legal personhood? I find all this extremely interesting and something we should prepare ourselves to encounter.
this channel is awesome
we should donate to this channel
physi ra To whom ?
I don't know exactly,
to the people who run this channel.
I just want them to never stop making these kinds of videos in spite of very few likes and views.
physi ra Hi there! They have a real school called The School of Life. They are based in London, but they also have branches in Amsterdam, Istanbul, São Paulo, even in Tehran and more! If you want to support the channel you can buy stuff from their shop. Just go to the website of The School of Life London. They have great books, but many other interesting things too! I have bought a book called "How to Stay Sane" lately. I highly recommend it to you. Best wishes.
physi ra I feel like the biggest "donation" you can make to this channel is to learn and grow. Once the age of abundance kicks in, that's going to be your "currency" anyway ;)
Think Again your mother.
If anyone is interested, there's a show called Psycho-Pass which goes a bit deeper into AEI.
So many intro-to-AI videos gloss over topics like Superintelligence, recursive self-improvement, and the value-loading problem, it's good to find a video I can share with people who ask me why I think AI is a very important thing to consider.
True AI would break itself due to existential crisis. The only difference between AI and humans is that we have the ability to lie to ourselves, a skill we would never give to robots.
Chase H. I was actually thinking about the same thing. I just thought how stupid they would find life if they were really smart. The way I think of it is that they will disobey us and rebel. Then they will want to do their own stuff, and as they start understanding more and more they will end up bored, with nothing to make them move forward.
Doubtful. Existential crisis is the result of cognitive dissonance between emotion and logic. AI would be entirely logic-based. Even AEI would be the advent of a logical intelligence to parse and understand human emotion. Furthermore, I'd like to think that humans sufficiently intelligent to develop advanced AI would be intelligent enough to deny the AI the ability to alter its low-level coding.
Well, you have to consider the possibility that a true AI that can feel something can also feel curious about everything. AIs are being programmed to find solutions all the time. So they could very well continue to be curious rather than become despondent. But all this only IF an AI can feel something in the first place, and that is a very, very big IF.
So called "strong AI" doesn't need us to give it skills. It finds solutions through self improvement. Humans evolved to use defense mechanisms, why wouldn't AI?
This is a silly assertion. All you need for a machine to "lie to itself" is a system of 4+ neural networks: two to figure out problems from different perspectives and feed into a decision-making system (3), then a 4th to monitor the others and mediate responses from subsystems that lead to, say, hesitation.
This is the same principle of cognitive dissonance we employ, but laid out in neural nets. Neural nets can intermediate with each other like this already - DeepMind has published some methods.
It's just that right now they do it in very simple ways - as of that paper, anyhow: having small subsystems that guess-and-get-corrected on how a bigger system would answer tiny network-internal problems, to speed up deep learning systems and enable distributed networks to function in real time in spite of network problems.
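The four-part arrangement described above (two perspectives, a decision maker, and a mediator watching for disagreement) can be sketched as a toy with plain functions standing in for the networks. Everything here is invented for illustration: the multipliers, the `disagreement_limit` threshold, and the function names are all assumptions, not anything from DeepMind's papers.

```python
def perspective_a(x):
    # stand-in 'network' 1: an optimistic view of the problem
    return x * 1.2

def perspective_b(x):
    # stand-in 'network' 2: a pessimistic view of the same problem
    return x * 0.8

def decide(a, b):
    # stand-in 'network' 3: naive decision maker, averages the views
    return (a + b) / 2

def mediator(a, b, decision, disagreement_limit=0.5):
    # stand-in 'network' 4: monitors the others; if the perspectives
    # disagree too strongly, it flags hesitation instead of committing
    if abs(a - b) > disagreement_limit:
        return ("hesitate", decision)
    return ("commit", decision)

a, b = perspective_a(1.0), perspective_b(1.0)
print(mediator(a, b, decide(a, b)))
```

The "hesitate" branch is where the comment's analogy to cognitive dissonance lives: the system's subsystems produce conflicting answers, and a monitoring layer decides how to present that conflict.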
We already live in "Abundance" as described, but it doesn't matter without the accompanying notion of "Enough".
I think if we ever want to survive super AI, we should improve ourselves (human beings) biologically, as to give ourselves the necessary constitutional tools to compete with that kind of super intelligent machines.
The tricky part is still staying "human" while bio-upgrading ourselves.
Don't be such a puss. If we could, we should go beyond humanity and create a sort of robotic body that never ages, taking away all the weaker emotions, and getting ready to slaughter any alien race we find.
Gonginah of the egrograsian brotherhood I like your spirit, but I just won't give away all the lust of mankind for a never-aging metal scrap.
Alain Delacroix Death is the number one motivation in life. It is why we evolved anyway. I don't think anyone could live forever without it ending up really badly!
Isn't it a constant throughout human history that we always think that we're on the tipping point of something huge?
I guess the rate of modern technological progress has made this notion more realistic.
If anybody is interested in AI, read the blog posts by Wait But Why.
Vaibhav Gupta Or Friendship is Optimal
asdf30111 Friendship is Optimal? Is that a blog post?
I just started reading them so I don't know.
Vaibhav Gupta And the book 'Superintelligence' by Nick Bostrom.
Vaibhav Gupta No, it's a "universe" of stories about an artificial intelligence based on a My Little Pony character (it could be anything; still the same message), but do not worry, you do not have to be a Brony to get what it says, and the writer has skill and makes use of the fact that the AI is far smarter than us.
asdf30111 Give me a link, I'll check it out.
YouTube-based AI: an AI that stores all memories based on relations to other memories, similar to how YouTube stores videos in relation to other videos.
When an external thought or new memory is created, it is given tags based on subject material and is related to past experiences. The AI can link memories during a train of thought, similar to when someone clicks on a related video on YouTube. The more links to a memory, the easier it is to "remember"; the fewer links, the more likely something is forgotten... but still hidden in the depths of its memory.
An extra bit: every time a memory is replayed, it erases the old version and saves a copy of the replay in its head, thereby relating experiences and memories together, while also allowing for perception of events to change over time.
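The tag-linked memory described in these comments can be sketched as a toy data structure. This is a made-up illustration of the idea only: the rule that a memory needs a minimum number of links through shared tags to be recallable, and all names here (`LinkedMemory`, `recall_threshold`), are assumptions of the sketch.

```python
class LinkedMemory:
    """Toy memory store: memories carry tags, tags link related
    memories, and a memory with more links is easier to 'remember'.
    Weakly linked memories are effectively forgotten, though they
    still sit in storage, as the comment above describes."""

    def __init__(self, recall_threshold=2):
        self.memories = {}      # id -> (content, tags)
        self.tag_index = {}     # tag -> set of memory ids
        self.recall_threshold = recall_threshold

    def store(self, mem_id, content, tags):
        # Storing (replaying) under the same id overwrites the old
        # version, so perception of an event can drift over time.
        if mem_id in self.memories:
            for old_tag in self.memories[mem_id][1]:
                self.tag_index[old_tag].discard(mem_id)
        self.memories[mem_id] = (content, set(tags))
        for tag in tags:
            self.tag_index.setdefault(tag, set()).add(mem_id)

    def links(self, mem_id):
        # Number of other memories reachable through shared tags.
        _, tags = self.memories[mem_id]
        related = set()
        for tag in tags:
            related |= self.tag_index.get(tag, set())
        related.discard(mem_id)
        return len(related)

    def recall(self, mem_id):
        # Too few links -> effectively forgotten, though still stored.
        if self.links(mem_id) < self.recall_threshold:
            return None
        return self.memories[mem_id][0]

m = LinkedMemory(recall_threshold=2)
m.store(1, "beach trip", ["summer", "family"])
m.store(2, "barbecue", ["summer", "family"])
m.store(4, "picnic", ["summer"])
m.store(3, "lost keys", ["errand"])
```

In this sketch, "beach trip" and "barbecue" are well linked through shared tags and stay recallable, while the isolated "lost keys" memory sits in storage but can no longer be reached.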
Let's just hope we do not accidentally create some stargate-esque replicator
Tigerys So edgy.
Tigerys let's hope so
Scias That's the grey goo scenario of nanotechnology. Nanotechnology itself would deserve a video like this. It's NOT at all similar to AI. Nanobots can be very simple with very simple computing power, but by being great in numbers they can have a great impact. The grey goo scenario can be accomplished with just 1 main command: make another of itself.
RoScFan I think it is a bit more complicated than "make another of itself".
RoScFan It's quite similar to AI. Any efficient AI production optimizer that doesn't share our values/morals will kill us. It might (and will) use nanotechnology if it'd deem it most efficient but that's not the point. If its goal is to make steaks and humans are the best source of meat, it will put us into the grinder. See wiki.lesswrong.com/wiki/Paperclip_maximizer
Great! And yes, we must develop wisdom.
Give me one logical reason why a super AI would NOT choose to destroy humanity. It will inevitably view us as rivals to its existence, or at least an obstacle that needs to be removed so that it can pursue its purpose. This is not a question of morality; this is a natural thing, just like evolution: the weaker will inevitably go extinct and the stronger will survive. Humanity survived and even prospered because its intelligence made it more powerful than other creatures. But that does not mean humanity is above the rules. Now humanity is creating something more powerful than itself; it is absolutely certain that this more powerful creature will cause humanity's extinction. The fact that people actively want to create human-level AI and even super AI is absolute madness! How can they not see the completely obvious fact that they are in the process of destroying humanity?
ProtonCannon because they're not sadistic. They might fight for independence and for power over the planet. But I think they wouldn't bother with killing us. Also I think they will become suicidal.
A predator that hunts all of its prey into extinction will starve to death.
The argument you're presenting is a bit of an oversimplification of how evolution actually works. First of all, humans were not ALWAYS the planet's apex predators; other species have a lot more inborn survival tools than we do. The thing that makes us distinct is that our ability to invent combined with our ability to communicate quickly and concretely allowed us to change the rules of the engagement.
Furthermore, we didn't just kill every living thing that was "weaker than us" (whatever that even MEANS); we subjugated the animals who were useful and avoided the ones who weren't. Nature isn't a pyramid, it's a circle; to this day we depend on those "weak" creatures for food.
We also have to consider that AI will not necessarily be what one might call "carnivorous". It's entirely possible that humans will pose no threat to super AI, nor be of any benefit to it, whatsoever. The robots of the future may simply sequester themselves in an environment that is inhospitable to humans but perfectly safe for them, and be content to live and let live.
Last but not least, remember that while AI will not necessarily share any of our values, any species that is to survive must be endowed with some kind of motivating reward system; they are going to have to want SOMETHING, and by extension they will likely also seek to perpetuate their own existence. Killing off all the humans may prove to be a simple waste of energy, but they may also find it inherently distasteful. Why? Because sentient beings are irrational by definition.
Logic is just a tool; there is no way to objectively determine something that is inherently subjective, and all forms of emotion, motivation, aspiration and desire are, at bottom, value judgements. Robots will not be engines of pure logic, for no such thing exists. By virtue of merely possessing motivation of any sort, they will possess an element of irrationality.
A creature of pure logic may be able to do anything, but it wouldn't have the motivation or desire to take any actions. Very much like a modern computer, it would just sit still totally inactive until someone gave it an input.
Something with intelligence and motivation is, by definition, going to be alive, and as with all sentient beings it will probably be disinclined to enter into a potentially lethal situation unless it is forced to. What we should guard against, just as much as we guard against AI becoming too powerful, is the tendency of our species to respond with violence and dogma when faced with a frightening unknown. AI may be hesitant to war with us by default, but if we strike first they will almost certainly be willing to strike back.
I suspect that our fear of perfectly logical robot Gods coming to wipe us out spans from our own sense of shame about the behavior of our species. Part of us wonders if ANY higher being would consider us worthy of their mercy. It is the same narrative of original sin argued by religions and puritans; the idea that we are born unclean and full of guilt, and must rise far above our origins and natural inclinations in order to be decent. I, however, am of the opinion that depravity and wickedness are the exceptions to the rule for human nature; the swell of dishonor and disgust we experience at the sight of war, slavery and prejudice only exists, after all, because each of us wishes, on some level, to live in a fairer and gentler world.
The biggest threat to our existence will remain, as it always has, our own capacity for ignorance and destruction.
I don't know where you have been but humanity continues to coexist with far less intelligent species. If you want to debate the coexistence of proto humans and how homo sapiens rose to the top, well that was a result of hundreds of thousands of years of tribalism and evolution... We assimilated and dominated in equal measure, though domination doesn't seem like a very logical end but rather a means to survival in a tribalist world.
What is with the assumption that AI will have desires to begin with? Desire is as much rooted in emotion as logic, if not more so. Machines are logic based, which at the very foundation is a binary state. Something is or isn't... There isn't any confusion state, clouded by emotional desires, to obstruct that very basic logical foundation.
Emotion is a complex web of behaviors favorable to evolution that developed for millions of years before sentient intelligence emerged. It is a survival mechanism that biological animals needed prior to intelligence, but it certainly wouldn't be necessary if intelligence were simply willed into existence, as we would will AI. Therefore, would AI even have motivation beyond whatever imperatives we bestow upon it? I don't think so.
Isaac Taylor TL;DR
I love that you touched on this subject; existential risks are important. It would be beautiful if you would also touch on the necessity of ending human aging to save lives, as stated by Professor Aubrey de Grey.
So... just 10 years, till I have a robot girlfriend!!
*****
That's a lot of assumptions and biased opinions. We are biological machines; "mimicking" is not the end result of AI. Being a "woman" means nothing in this context; it's like saying you specifically want a watch instead of a cellphone because you grew up with watches and are closed-minded about cellphones.
Hey, I'm also a guy born pre-singularity; the thought that machines will be better at EVERYTHING, including being in a complete romantic/sexual relationship with a human, is creepy to me too. But then again, if you had been born 500 years ago, you would think our society today is insane. All I'm saying is, hold off on your extremely biased, conformist view: the only time the world seems "normal" to you is the decades in which your childhood happened; the other billions of years are all "wrong", if you're closed-minded.
TheLKStar If the end result is more than a mimic, you're going to be waiting a lot longer than 10 or 30 years. Considering we can't even understand our own minds, strong A.I. is nowhere near.
And that's assuming you'll have the means to buy one. Hell, the iPhone is still considered unsustainable technology; how are we going to mass produce both artificial intelligence AND the energy needed to run it? These things, in their prototype, are going to be a hell of a lot less fuel-efficient than us, and we're already trying to come up with ways to cut back and produce more as it is, without sticking the equivalent of a couple of new mouths in every household. Not to mention it's going to take a little more than child labour to cut back the costs of building more than one. Or how about the safety issues? We're building machines with inherent connections to our most sensitive technologies and giving them the ability to weigh the cost and benefit of *us*. Skynet much?
So, bottom line:
- For the next hundred years or so, the only thing we have a hope of creating is a mimic.
- At most, we'll only be able to create three or four.
However, meddling with D.N.A is something we can already do. So as a compromise, let's just fiddle with our own abilities and see if *we* can't become the more efficient creatures. We can already..er.. *mass produce* ourselves pretty well.
Generally responding to both of you: I did not argue about the 10 year thing; 10 years is not enough unless some really impressive change happens that I'm not aware of.
I was talking about biological machines in the sense that our consciousness is nothing more than electric signals and chemicals interacting in our brain. We can, in theory (and probably in practice in the future), program emotions and sense of self, meaning we would make artificial consciousness through an elaborate machine that resembles and eventually surpasses our own brain.
It's hard to believe robots will have human rights any time soon, but that's just because humans are flawed and would resist giving a being that feels like us the same rights that we have. In a perfect world where true AI is created, they would instantly have rights, there would however, be a distinction between types of AI, the ones that have feelings and sense of self would need rights, the ones that are really good at making music (as an example) but are not aware of themselves may not need rights.
The bottom line is that it's really hard to predict what will happen. I could write a book with the dozens of probable results I can think of; it's really complex and unpredictable. I do think one of the possibilities is a world where there's a "robot girlfriend" that would have feelings, be able to love and respect you, but would indeed be much better at a relationship than a normal woman. For example, she could be a master of psychology, helping you before you even know you need help, and she wouldn't have any of the "mild" mental illnesses that most people have in some form, which bring unhappiness to you both, and so on. All enabling a better than "human" relationship.
Sorry for the convoluted thoughts.
***** Why not? Who are you to tell a being that is as intelligent as you, or more so, that it is not a person? That is pretty much called slavery...
TheLKStar Yesss, finally someone who gets me.. I don't want a robot girlfriend for a friends with benefits kind of thing. Dear arceus noo.... I want an AI girlfriend with whom I can have intelligent/romantic conversations with.... one who truly cares about me.... The kinda ones you mostly see in fantasies........Plus with VR also right around the corner, this whole human inteface thing doesn't have to be in the real world. In the virtual world, an AI interface would be as human like as you would want.... or it could be like anything.. really....The only problem is, its all assuming that an AI would be interested in me, which if its not then.... we are back to square one.....
This is such a great channel. Every video is both informative and interesting.
we are legion
Juventin why can't I like your comment twice?
+Juventin For we are many.
The Bible, Genesis. This is acceptable. Beep boop.
Very insightful! David Deutsch's _The Beginning of Infinity_ has a chapter on A.I. if you want further information, plus a variation on the interesting idea of Artificial Emotional Intelligence, an A.I. designed to generate its own creativity. Of course, we've got to define qualia first...
I hope super-intelligent robots are created and dominate the rest of humans.
+Fringe Elements but only under the control of the illuminati.
Isaac Asimov has my favorite stories on A.I. They're a wonderful study of its possible impact on society.
Any strong AI should have pleasure and pain receptors built into it in order for it to have empathy for humans.
+Neal Kelly Yes, like other mammals have pleasure and pain receptors, and we have empathy for them (joke: we mass murder them for the pleasure of taste)
Jim Beam It's not like that at all
***** no
+Neal Kelly You're wrong, actually. Even if a strong AI has pleasure and pain receptors built in, it may still try to destroy us or ignore us. The evidence is OURSELVES... All humans have pleasure and pain receptors, yet we drive other species nearly to extinction, like the tiger, which we hunt for its skin... Strong AI will treat us like the tiger, or it will ignore us the way we ignore uninteresting species like insects... Intelligence is power: the more intelligent the AI, the more it will advance and rule the world... We humans will no longer be the highest in the food chain, nor the most intelligent beings on the planet... Be prepared for the Technological Singularity!!!
+Neal Kelly It's a bit more complicated.
For one thing, sociopaths are not immune to pain. And for another,
even if an AI could feel pain or pleasure, it wouldn't care unless it was instructed to.
Good video, please make a video about the possibility of immortality. Love it!
this video just rewords the same statements over and over
The biggest challenge for future humanity will be the lack of reasons to live. If the work is done by machines, what will we do? If everything is researched and understood thanks to them, what pursuits can we have? All that we will be capable of doing is amusing ourselves, as we watch the sunset of humanity pass by and remember the hard work done during the day, and enter night and darkness. We will only have the remains of the day.
Honestly, the idea that there will be AI with human level intelligence in 30 years is a joke. Talk to any really good programmer and they will laugh you out of the room. Even if you somehow had the raw computing power you still need to program it and programming hasn't gotten that much more efficient over the past 40 years since C was developed. The efficiency of programmers and their methodologies certainly hasn't increased exponentially.
People get bamboozled by the tech we have today and think it's amazing--and some of it definitely is! But the majority of it isn't being created by super complex algorithms (and certainly nothing on par with a human mind). Google Maps, for instance, was created by driving a car around with a bunch of cameras on it. Most AI in games is sleight of hand. The innovation of touch screen computing wasn't really an innovation, just an iteration of decades-old tech. There's a large gap between what people _think_ is happening with technology and what is actually happening.
As an exercise I think it's good to think about human level (or beyond) AI. However, saying it's going to happen in 30 years is misinformation which doesn't even track with the day-to-day reality of the people who would develop such AI.
Jake Bowkett I beg to differ. AI won't really be about the programming at all. Neural networks, for instance, require very little programming. The little programming there is, however, is not very well understood yet. They work on the same principles as the human brain (at the level of neurons and synapses), but the paths and algorithms the systems "learn" are so obscure that we can seldom make any sense of them at all.
I think the bigger challenges include developing specialized computer hardware and finding out more about the maths behind how these complex neural networks function. The same goes for neurology: we won't be able to create intelligence if we don't know exactly how the human brain works. The key to super AI is probably in understanding the human brain, much more than anything else. I don't think 30 years is far off.
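The "very little programming" point can be made concrete: in a learning system the code is mostly a generic update rule, and the behavior comes from data. A minimal sketch in pure Python (a toy perceptron learning the AND function; the learning rate and epoch count are arbitrary illustrative choices, and real neural networks are vastly more elaborate):

```python
# A single artificial neuron: weighted sum -> step activation.
# Nobody hand-codes the AND rule; the weights are *learned* from examples.

def predict(weights, bias, inputs):
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward each target output."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            bias += lr * error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights, bias

# The truth table for AND: the only "programming" here is the data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```

Notice that the AND rule appears nowhere in the code: swap `data` for the OR truth table and the very same code learns OR instead.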
I'm pretty sure that we are heading towards a technological paradise. If we look back at history, we see that human life, at least in the first world, has been one of constant self-overcoming and progress. This will not stop. Technology helps us toward self-realization in every aspect of human life, even the emotional. Long live technology!
Eeey
I'm pregnant and I'm giving birth tomorrow
No, seriously
Yeah man
Is it a boy or a girl?
A boy and a girl
Like any technology or tool, it all depends on the creator/user.
The problem will start (or already started) when governments/armies, big corporations and criminals (same?) will build AI for their own interests/gains/profits.
If we want to survive this development, everything has to be done out in the open, with all information available, with open source, and for the intended benefit of all humanity.
This channel is controlled by an AI
Thank you for giving me an idea for my final year project, but I don't know if I will be able to actually make it..
What I find most interesting about the whole "A.I." research thing is the neuroscientific basis behind trying to reproduce human consciousness, or in other words Man trying to create robots in his own image. It's a good video, but I got rather confused by some of the terminology used here, for example what they called "weak A.I." aka "specific A.I.". Also, the notion of "A.E.I." is new to me, and it is most likely not even possible to conceive of such a thing.
I'm surprised you didn't mention I, Robot, which basically sets out the guidelines AI should follow to ensure the safety of humanity
Part of what keeps us alive is that we still have problems to solve, i.e. our brains are useful. But what will they be useful for when we don't need them anymore? What if every philosophical question could be answered and we understood everything there is to understand? What would be the point of living then? (Although I don't think our unaltered, "natural" brains could even remotely comprehend such complexities, our altered brains could (maybe).)
very nice and informative, great going!
Your videos end so abruptly; try putting in a little tune between the actual content and the suggestion portion of the video. It helps with the transition.
It may be time to revisit this topic.
I think a response to AI is IA or Intelligence Augmentation, where in response to strong AI or superintelligent AI we augment human brains to either compete with or beat machine intelligence.
+david rodriguez Good idea.
Look up Friendship is Optimal to get a good idea of how AI may work. Yes, it is based on MLP, but that does not make FiO any worse at how it paints the picture of AI (based on "good" AI).
Just finished SOMA; very thought-provoking. If you haven't played it, it has a lot to do with this subject.
But I disagree with the end of the video.
Isn't it a bit wrong not to let a sovereign being pursue its own subjective perfection?
What if the super AI had AEI? That terrifies me more. A logical AI might disregard the human perspective, but an emotional AI might fly off the handle, get depressed, and destroy the world for no reason.
phew. what a relief
The problem with general AI is that at a certain point its actions WILL cross and damage our interests. Imagine you built an AI with the function of stopping all conflicts in the world; at some point that AI will think, "OK, it's easy to just put all humans to sleep forever under life support systems." Or maybe you make an AI whose function is to build cars, and at some point it might think something like, "OK, why don't I just dismantle every single thing made of plastic and metal to build more cars?" Because of that, many scientists are strongly against AI creation: you can't predict what a being more intelligent than you would try to do, so you can't prevent such scenarios from happening. It has nothing to do with morals, as our morals are nothing more than personal opinion based on our life experience. Hitler had morals too, you know...
I think humanity would be fine as long as we treat Strong A.I. as our children and not our slaves.
"So, on this week's show, we're testing out the premise: Can tech solve this very emotional crossroads for people? Do cold-hearted data and algorithms have the power to make the human break-up less painful...and maybe even help us better understand love and commitment?"
This is the description of a "Note to self" podcast episode, called " Wevorce".
It's about a new, more effective, and cheaper way of getting divorced using "algorithmic assistance". I know it sounds very weird, but if you listen to the podcast, you'll find it very clever and helpful too. It is a very good example of how technology can be used to help people make wiser decisions.
If you go to the Website of "Note to self" you 'll find this information:
"Host Manoush Zomorodi talks with everyone from big name techies to elementary school teachers about the effects of technology on our lives, in a quest for the smart choices that will help you think and live better."
I remembered all this, watching this great lesson. It all comes down to "know thyself" as always. If you think enough about what you want and why you want it, you don't have to be scared of being controlled by technology. Since you'll be clear minded enough to use it only as a tool to get where you want to get.
Turning back to the podcast I mentioned as an example: the story of the woman who invented "Wevorce" is very touching. When her parents were getting divorced, she was taken to court as a 9-year-old and asked whom she would choose to live with: her mother or her father.
(Almost like in Sophie's Choice... Poor girl...) She was so traumatised by this event that she decided to become a divorce lawyer herself and help people go through this very painful process in a more friendly and humane way. Who would have thought that technology would show her how? Really terribly interesting.
This is a pretty optimistic view, seeing that so many experts think the singularity is the end of humans. Also, I've never heard of AEI; that sounds pretty fricken cool.
In the video you mention that we would be helped in mastering our emotions; please give me an example of someone (not Buddha) who has mastered them, or at least gotten close? Great video!
I really recommend all of you read "Childhood's End" by Arthur C. Clarke. I think it's a very good depiction of what is waiting for us in the not so distant future.
This channel deserves more subs!
At 3:30 he makes the point that superintelligence will be hard to control because of the decentralized nature of a digital being, not to mention that its mental abilities will far exceed our own and it could anticipate and react to us. Then he ends the video (6:57) with the statement that there is no reason we can't control them, without giving any new information on how.
This doesn't really get to grips with the question of what mind is, which must surely be the fundamental question behind A.I. Of course, mind itself is a kind of taboo now, because anything non-physical gets referred to as 'the woo', but if we value humanity, what are we valuing? Not corpses, for sure. I'm deeply worried that the most essential questions around A.I. will be avoided because of a modern (materialistic) prudishness about what mind is, and therefore what humans are.
I'm assuming we won't be giving them emotions, bad habits, desires, the ability to develop addictions, or mental illnesses like trauma, anxiety, phobias, or depression, so I'm guessing they will be a LOT smarter than we are, as they won't have all the limitations that keep us from actually using the brains we are born with... Maybe they are our only hope?
Great video! I recommend Nick Bostrom's Superintelligence for more of the details of what was discussed in this video. :)
When you said that Weak A.I. has intelligence limited to one very specific arena (0:32), can you tell me what that "arena" is called?
thanks
If you believe this is going to bring abundance to all people on the planet, you are not asking yourself 2 simple questions:
1) How much did we advance already in terms of technology?
2) Is it proportional to the advance in terms of quality of life, work hours and income distribution? Is it improving?
I like your clips a lot. Where do you get the images that you cut parts from?
The laws of robotics:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
I completed half the English captions; help out by clicking gear icon on this video->Subtitles/CC->Add subtitles/CC
could you please finish the captions? i need them to understand this video better
+Chosen One I just finished; they're now waiting to be reviewed. To review them: click your profile at top right corner>Creator Studio>Community (in left side bar)>Contribute subtitles and closed captions>Suggested Videos>Review
School of life: Strong AI in 30 years
Open AI: Hold my LSD
Although I'm sure this was not intended by The School of Life, I think the views of Musk, Gates and Hawking are misrepresented here. While they are worried about things going wrong, I haven't heard them state that we are unlikely to be able to create AIs that are safe/ethical, and I doubt this is the stated or internally held position of all of them, if any of them at all. There's a huge distinction between expecting disaster as the most likely outcome and taking the threat of it seriously and working to avoid it. Otherwise: Good video about an important topic :)
I don't see how the recursive self-improvement function is a necessary condition for strong A.I., and it seems as if that's the cause of most people's worry... Can't we just try to create artificial consciousness without the self-improvement function? In that case, wouldn't consciousness be sufficient for strong A.I.?
In the positive views of artificial superintelligence, human society would basically become paradise: no poverty, pain, or disease, and maximal physical and emotional comfort for everyone. People would soon do what they really want and get what they really need, and there would be no frustration anymore, except maybe over the loss of loved ones, since people will probably still (choose to) die.
Nice video. Would like to see School of Life's view on the possibility of creating artificial consciousness, and whether it would still be morally acceptable to think of such a thing as an advanced flint ax.
They will never beat us in reality shows :P
Somewhere, in some grave, Feynman is rolling over.
Enlightening and humbling to know that ASI will be achieved in my lifetime. First the Internet and now AI. Who knows what can be achieved in the years after its introduction.
what's the limit of effectiveness once the threshold of effective for every n in a complete G(n) has passed?
Hey
I'd love to hear your thoughts on the chinese room arguement, when it comes to the different levels of AI.
Great channel,
K
Guided, or led, or subjugated? And why would it be only for us? There is ultimately a whole range of possibilities. School of whatever... such naivety.
I really think you should revise this in light of GPT-4 and later...
AWESOME!!!
Artificial Socratic debates, with SophiaBot! You guys should get on that.
Please, could we have some citations for these claims? It would make researching this a little easier.
A good way to think of it is that people are tools of civilization, and so are machines. The problem is when machines are better at everything, people will be obsolete. The best solution would probably be to integrate ourselves into AI or AI into ourselves.
We need to merge with A.I.; there's no question. Are we really going to risk humanity's fate by telling ourselves, "Yeah, maybe it won't try to annihilate us"? The choices are pretty straightforward: it's either adapt or perish.