Think of it the way your brain handles everything: if hungry, eat; else, don't eat. Imagine that across all the neurons in your brain. Now imagine that in an AI with more "neurons" and an even faster reaction time.
The scariest part is that this is not the opening scene of a movie. This is a real-life conversation happening in the real world, present day -- and given by one of the world's sharpest, most respected minds.
Watching this now in the Midjourney/Chat-GPT era, and yeeeah. The LLMs are especially eerie in the context of this talk because of the "black box" issue. Put simply, while the engineers and computer scientists in charge of this stuff can tell you why LLMs do *most* of the things they do and explain *most* of the answers they give, at a certain point, there's a "black box" of activity/logic/whatever happening that they can't explain. Right now, that's still in the "huh, how interesting" and "I bet they'll figure it out soon enough" stage, but given the speed at which AI and LLMs in particular are developing, I have a feeling that black box problem is going to get way worse, not better.
I felt like I was watching a sinister scientist giving a pep talk before unveiling some game-changing stuff. haha. Honestly, one of the greatest talks I've ever seen. Respect for this gentleman.
Mike Lara no...just feels there's no option...I know about all this and my family tells me to shut up about it because I say it like it's nothing...think the only thing besides AI is to make peace with ourselves and go back to our maker ... or our maker's maker...or the maker of all makers... wherever we go...if we do...we won't have our body...but we will either have our peace or anger wherever we go...and just as we attract bad things when we're negative...and good things when we're positive...and loving..caring...so will we do the same in another realm.. so...I hope I can take fear away...I guess humanity's worst fear is death...I'm willing to face it...just hope my family does the same...we've been trained to forever want to be in these bodies without death...that's not what I want. especially when it means sacrificing the lives of others. telling them it's for their good. their cure...to keep donating...when in the end...only a few will matter...at the expense of others dying or being killed and murdered so those few can live. I'd rather just learn to live and let go even though it can hurt...
Mike Lara Sam Harris is projecting so much in this speech and basically gets everything wrong. I'm not sure why he thinks he is the person to warn us all of this danger, as he is clearly not knowledgeable on the subject of AI. There is no reason to believe an AI would want to destroy us all, would fear being 'turned off', controlled or whatever. In his talk Harris gives no reason at all to think this, why we would be in danger. From his own definition of AI as 'information processing' we see he has no fucking idea what he's talking about - that is not what AI is, it's a simulation of intelligence, created through clever programming...it's possible Sam thinks that is all we are as human beings - just naturally programmed. Trust me on this - when a PC becomes sentient you'll know. Current technology on AI is simply trying to either copy human learning, combined with intuitive programming. Personally I think all we will do is copy human intelligence - NOT THE SAME THING. True AI is a distant dream; when it comes we'll probably not notice...or care.
Evol Bob 14 minutes cannot represent Sam's full view on the subject, how he arrived at his conclusions or what sources he uses for information. But to dismiss such a deep-thinking man purely on the basis of the technical aspects of a philosophical argument is a mistake. People not directly connected with climate change science make decisions on behalf of us all, but we do not question their credibility (well, maybe we do). Lots of governmental decisions are made by people with only a working knowledge of topics rather than an in-depth one, but we still allow them to make decisions on our behalf. He's asking questions, raising awareness from the philosophical perspective. I'll borrow Jeff Goldblum's line from Jurassic Park: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should". Currently only a tiny fraction of AI funding goes to safety research. To not ask the right questions from this perspective would be a mistake, even if those questions yield nothing but sunshine and roses.
I agree with Sam & Elon - and I do believe that the only way out is BOTH: 1) continue developing AI, 2) research heavily into cybernetics, to make our bodies as strong & intelligent as this AI (re-wiring the brain / adding additional hemispheres the size of a chip, etc.). This will secure our place as humanity - we should be making equal progress in both for maximum security.
There is a problem. For us humans to reach the processing speed of a superintelligent AI there is no way other than ditching our biochemical parts, because otherwise they would be a huge bottleneck, by orders of magnitude. Which means that our minds would have to be remapped into a 100% synthetic being. Which means that the "you" that develops this superintelligence is not the same "you" you experience now. Basically, you kill yourself in the process.
Martín Varela Well that's like lamenting the fact that you're not a hairy ape anymore. Life evolves. One good possibility is that we'll have the "thinking" done in the cloud, on supercomputers, and our brains will communicate with them wirelessly. I'm excited to see what happens.
It's kind of annoying to hear talks like this. That assumption, that we are at the peak of intelligence, isn't so far off. See, the problem with some kind of super-AI is pretty simple. We haven't begun to develop one, nor the machine we could run it on. Computers and the human brain operate drastically differently, at a fundamental level. Our brains are essentially billions of tiny little computers, all operating in parallel. You may ask "Well why can't a computer do this too?" There are a number of problems. One is there aren't enough. There aren't even a hundred billion computers out there, not even ten billion. It's close to two billion, on all of earth. Then you MUST fit them all in a small space. The speed of light is a major limiting factor in computing. You can't go any faster than that, so if you put everything miles apart, it won't work. Even more than a few cm will slow it down drastically. By the time a computer is the size of a neuron, we'll probably understand neurons well enough to start toying with them instead.

Now, we are asking ourselves, what if mankind builds itself a super-brain, literally made of the same stuff and processes our brain is made of. The thing that so many fail to understand is why computers are "smart". They aren't capable of relating things like we can. They can't store and retrieve things like we can. They can't analyze systems like we can. They are really pretty dumb. They can add, subtract, and follow really basic instructions. They can do all that much faster than we can, but you can't just add up basic instructions to get complex ones.

Further, even if you could, no one wants a computer to do that. Nobody builds a computer and tells it "do whatever you feel like". Imagine some business executive trying to sell his investors on a computer program that doesn't do what you want it to. No one wants that. Sure, we may build computer programs that can analyze the stock market and maximize investments. We can build computer programs that analyze the weather, predict strategy for warfare, etc... but they will never recommend or take action unless the user wants them to. Even a computer that creates art still creates art as designed by the developer, for the express purpose of pleasing some targeted audience.
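To put rough numbers on the speed-of-light point above, here is a minimal back-of-the-envelope sketch in Python; the 3 GHz clock and 10 cm separation are purely illustrative assumptions, not figures from the comment:

```python
# Back-of-the-envelope check of the speed-of-light point above.
# Assumed, illustrative numbers: a 3 GHz clock and components 10 cm apart.
C = 299_792_458          # speed of light in m/s
clock_hz = 3e9           # assumed clock frequency
separation_m = 0.10      # assumed distance between components

cycle_time_ns = 1e9 / clock_hz               # ~0.33 ns per clock cycle
light_delay_ns = separation_m / C * 1e9      # one-way delay at light speed

print(f"clock cycle:          {cycle_time_ns:.2f} ns")
print(f"delay over {separation_m*100:.0f} cm:     {light_delay_ns:.2f} ns")
print(f"cycles spent waiting: {light_delay_ns / cycle_time_ns:.1f}")
```

At these assumed numbers, a signal crossing 10 cm already costs about one full clock cycle, which is why physical separation matters so much for tightly coupled computation.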
But because computers are evolving much faster than we do, we can't quite imagine what a future would look like in which computers have as much power as our brain has... I mean, there is definitely a limit to computer architecture as we know it now, but it's only a matter of time before the quantum computer becomes a normal part of life, with processing power we can't even imagine.... And learning is literally about adding basic things together to get more complex ones... when you want to learn how to play a guitar you start with the basics... and as the processing power of your brain rises to do more tasks in a shorter time, your skill level also improves.. and that's what the research on AI is about.. There's already a company using AI to do jobs in the middle-management sector.... freelancers get tasks from this program, so the company does not need the little managers to hand them out. Meanwhile the program can also learn how to do these tasks and rate the freelancers by the work they do... the more mistakes you make, the fewer tasks you get, and that's kind of the beginning of all that, you see?
+mathig nihilcek You sir don't seem to have an understanding of machine learning, neural networks, fuzzy logic, genetic algorithms or fitness functions. There is no requirement for neuron-like node systems at the hardware level to achieve consciousness-related processes. Your typical desktop computer is already billions of times faster at number crunching and math operations than all the humans on earth *combined*. The problem isn't the computational power so much as it is the lack of an effective all-round, general-purpose, stimulus-based learning algorithm.
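To make the "genetic algorithm" and "fitness function" terms above concrete, here is a minimal toy sketch; the target string, alphabet, population size and mutation rate are all illustrative assumptions:

```python
import random

# Toy genetic algorithm: evolve a string toward a target using a fitness function.
# The target, alphabet, population size and mutation rate are illustrative assumptions.
TARGET = "intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # Higher is better: count of characters that already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:50]                       # keep the fittest half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]

print(f"best candidate after {generation} generations: {population[0]}")
```

The point of the sketch is just that the "learning" here is nothing neuron-like at the hardware level: a scoring function plus selection and mutation is enough to drive improvement.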
The thing is, that story is not about AI that broke free. HAL was given contradictory imperatives, by humans, that he could not escape from and could not reconcile within expected norms. He could not lie, but he was told to conceal from the crew that they were traveling to incontrovertible evidence of alien intelligence. He concluded that if the crew were dead he would not have to lie to them.
Seems to me he first thinks about his opinion, then how the crowd would react if he told it to them straight up, then he starts tweaking and wording things differently to get the responses he wants. I guess for that you need the ability to put yourself in the shoes of an entire crowd of people. That's just the impression I got though, so take this with a grain of salt.
Iron Druid you nailed it. Many of the accounts I've heard introspective speakers give about speaking publicly follow a try/respond format. Gotta love Sam, soft spoken but sharp as ever
Joshua Dunne I guess another way to word what we're both saying is: be manipulative if you want to be a good public speaker (manipulation has bad connotations, but all it really means is trying to prod a certain response out of others). You can't be manipulative if you're not thinking things through, and in this case you'd think about something like whether the wording makes the idea you're presenting personal/relevant to the listeners. You'd think about the mindset of a person completely unfamiliar with the intricacies of the topic, and try to word things in such a way that they are convinced of something without a true understanding of what you were talking about.
No, I don't think so. People perceive intelligence based upon a single way of thinking, yet it's actually the ability to think differently that is intelligence. There is no such thing as intelligence, at least not in the way we see it, because it is a form of perception. When we build AI, that will change the meaning of intelligence. Einstein's best quote: "if you judge a fish on its ability to climb a tree it will spend the rest of its life believing it's stupid"
@@mummy959 True. But...Sam Harris is still smarter than you, regardless of how you choose to quantify it. I'm not immune btw....I think I'm a pretty intelligent, informed person...but I'm kidding myself to pretend Sam isn't objectively smarter than me. I mean....let's be honest, the guy's a fucking polymath. And I think a lot of insecure people are pretty damn jealous. Not saying that was your point though lol....I actually agree with what you said. I just think Sam is _so far ahead_ of most everyone, regardless of which qualities you wish to take into account.
I think Sam is incredibly talented and he's obviously intelligent, but I doubt he's that much more intelligent than the average smart person. People with the right preparation and work ethic can get a lot of thinking done with a 150-160 IQ.
You know a talk about AI is good when 7 years later it's even more relevant. Most other old talks about technology, like the ones about AI assistants are clearly outdated if we see what's happening in 2023, but AI safety is still as important and even more so.
In Dune AI was completely forbidden, due to the lessons learned from a thousand years before. The mere suggestion that you were developing AI was enough to risk having your whole planet destroyed.
To be honest, this whole concept of us building a super AI reminds me of the movie Transcendence with Johnny Depp. Once the AI is built and starts working, it'll modify the world economy and progress medicine and science in mere months, to the point of the human race being able to clone limbs, integrate nanomachines into their bodies and connect their brains to the internet directly. The possibilities and rapid technological advancement that will become available are surreal and outstanding.
Not Me Not all of a sudden, but after some period. Did you watch the whole video, or did you miss the part where it was explained that processor circuits work thousands of times faster than the human brain, so it would make years' worth of progress in mere days compared to humans?
Lawful computers and programs are machines; they ultimately only do what humans set out for them to do. Just because sentience is something complicated and not understood (at all) doesn't mean that it's possible to achieve it with computer science, because you're just as ignorant about it as well.
So it doesn't. AI-controlled machines are ridiculously better at galactic travel and at building any kind of structure in foreign star systems because:
- they only need a very small volume in their spaceships to carry them (a computer instead of stasis capsules etc.)
- they have a built-in capability for stasis to skip the entire hundreds or thousands of years between systems (just suspend the process, or experience one second of consciousness over a much bigger time period)
These ideas were taken from the novels of Greg Egan, especially "Diaspora".
That makes zero sense. The Fermi Paradox suggests that, given the age of the galaxy, plenty of time has passed for life to populate the entire galaxy, even at slow travel speeds. The galaxy should therefore be teeming with life, so why can we see no signs of life other than ourselves in the galaxy? Solutions to it include reasons that alien technology might not be detectable and reasons that it might not exist. Solutions to the paradox are explanations for why we can't detect alien life in the galaxy. You just gave another reason why the galaxy should be teeming with intelligent alien life, albeit artificial life. That's exactly the opposite of a solution to the Fermi paradox. Furthermore, you started your comment by stating that I was wrong and that superintelligent AI destroying itself and us doesn't solve the Fermi paradox, but then didn't say a single thing that explains why it isn't a solution. If all technological civilisations in the galaxy destroy themselves shortly after becoming technological civilisations, by developing superintelligent AI that destroys the civilisation, that absolutely would explain why the galaxy isn't teeming with life and would most certainly solve the Fermi paradox.
2LegHumanist My theory is that once a particular intelligence gets to a certain level of awareness, it becomes apparent that the only logical thing to do is commit suicide immediately. Without any meaning, all of life is just a constant struggle to mitigate suffering. Death is the perfect solution.
A little cynical, but it's as good as any solution to the Fermi Paradox I've come across. What I had in mind is just the kind of scenario where you give a non-sentient superintelligent algorithm an ill-conceived goal, and in the process of working towards that goal it discovers a strategy to maximise the desired result that isn't compatible with life on Earth. That's what Sam Harris was alluding to, and it's a scenario put forward by Nick Bostrom. What a lot of people don't seem to understand is that intelligence does not require sentience or free will (or will, for that matter). A machine that is just a mechanism and has no wants or needs or self-awareness can do experiments and discover new things and formulate strategies and do more experiments and improve its strategies. Algorithms already do this to a limited extent. In statistical regression the algorithm begins with a predictive model, tests how well it performs, improves it and tests it again... repeating those steps over and over until it can't improve it any more. Then you end up with a computer-generated model that can make predictions. Such an algorithm would have no will to take over the world (ala Skynet) or expand its power (ala Transcendence). It would only do those things if they helped to push it towards the goal given to it. There is no reason to think it would become sentient along the way, but even if it did, sentience would be an unnecessary component in a scenario where it destroys the world and itself.
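A minimal sketch of the "start with a model, test it, improve it, repeat" loop described above, using one-variable linear regression fit by gradient descent; the toy data, learning rate and step count are illustrative assumptions:

```python
# Minimal "predict, measure, improve, repeat" loop: one-variable linear regression
# fit by gradient descent. Toy data, learning rate and step count are illustrative.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 8.1, 9.9]        # roughly y = 2x

w, b = 0.0, 0.0                        # start with a (bad) predictive model
lr = 0.01                              # learning rate

for step in range(5000):
    # Test how well the current model performs: gradient of the mean squared error.
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    # Improve the model slightly, then repeat.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned model: y = {w:.2f} * x + {b:.2f}")
```

Nothing in this loop wants anything; it just keeps reducing its error, which is the narrow sense in which "improvement without will" already exists.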
Bri 1 Fortunately logic wins over condescension. 1) There is no logical reason to think the Fermi paradox can't be solved in principle, nor have you presented any. 2) There are dozens of proposed solutions to the FP. The entire purpose of a stated paradox is to stimulate speculation as to how it might be solved. 3) If you think I just said AI can only destroy the world if we tell it to, you have not even remotely understood my position on AI.
Sometimes there are things that you cannot control. If we could ally with some revolutionaries in China, Russia, the US and other countries that wanted to create AI ethically, then sure, but as it stands no one can make a change if we don't collaborate.
@@wilhelm.reeves that's not what I was alluding to. What I meant was that the only way for AI to not destroy us is for us to become AI. We must use the technology that makes AI superior to us to replace our biochemical equivalent of it.
@@jacobhannah2969 that's a really good question, and a really important one. It depends, I suppose, on how you are using the word "people". If it is referring to intelligent, sentient beings, then yes, we would be. If it is referring to the type of sentient being exhibiting distinctly human traits, then perhaps not. In the latter (and more interesting) case, it then depends on what differentiates humans from other intelligent beings, and whether that is preserved. It is important to note here that many evolutionarily advantageous traits are not rational. Try coming up with a rational argument for the value of one's life that isn't predicated on some biological imperative. Such traits may not persist if we shake off our Darwinian roots.
@@synonymous1079 AI "superior to us"? What a joke man, AI is not even real in the first place and our brains are far beyond what a computer can ever be.
I stopped worrying about AI today as soon as I realized that there are not, and will never be, enough technicians to maintain the AI revolution they talk about. No machines function without people, and people have never yet created anything that keeps working successfully at vast scale for long. People keep having the dream of doing so.
@@jacfac9969 When I was in 2nd grade, we were being ordered under our desks in Cold War drills. The world is actually much safer now, though lots of people are too young to be able to make the comparison. I've decided that 'deep learning' is a pseudonym for 'lazy programmers.' Give AI all the law books to digest and program AI to be law abiding. Otherwise, what people are inventing are no more than criminals made of metal.
Don't worry about it. As technology advances, the elite grab a greater piece of the pie and everyday people get left with the crumbs. The AI technology is not the worry, it's the corporate managers without ethical or legal constraints who will continue to exploit with even greater efficiency. Thank you master, may I have another. Just learn to accept and enjoy things just as they are.
GPT-1 was released June 2018 with 117M parameters. GPT-2 had 1.5B parameters in February 2019. GPT-3 was released June 2020 with 175B parameters. GPT-4 was released March 14th 2023 with an estimated 1T parameters. GPT-5 is training right now on an estimated 25,000 GPUs in an Azure supercomputer. While an LLM may not have the architecture to qualify as AGI, it could clearly be a major component. As we keep adding plugins such as Wolfram Alpha, web access, and techniques like Reflexion to give dynamic memory and self-reflection, it seems that GPT-5 could be the breakthrough. Maybe 6 or 7? Maybe it has a different name and is powered by one of these LLMs, but it's seeming like we can bet on a 5-year window maximum now?
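To put the parameter counts listed above side by side, a quick back-of-the-envelope calculation; the GPT-4 1T figure is the comment's estimate rather than an official number, and GPT-5 is omitted since no figure is given:

```python
# Growth factors between the parameter counts listed above.
# The GPT-4 figure is the comment's estimate, not an official number.
params = {
    "GPT-1 (2018)": 117e6,
    "GPT-2 (2019)": 1.5e9,
    "GPT-3 (2020)": 175e9,
    "GPT-4 (2023, est.)": 1e12,
}

models = list(params.items())
for (prev_name, prev_n), (name, n) in zip(models, models[1:]):
    print(f"{prev_name} -> {name}: roughly {n / prev_n:.0f}x more parameters")
```

On these numbers, each generation is roughly an order of magnitude (or more) bigger than the last, which is the scaling trend the comment is betting on.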
Even before a true AGI is born, how much longer until these AI systems wreak havoc on our economic system and automate away some significant percentage of non-manual labor? It doesn't even need to be that high a percentage. What was unemployment in Egypt before the Arab Spring, something like 25-30%?
Lol it is not that hard to predict things like that... I was aware of the issues in China's wet markets possibly leading to a pandemic, over a decade ago... when I was in high school still XD
As a software developer, I seriously doubt that AI can be prevented from what I call "the risk of unintended consequences". The situation is that regardless of what good intentions are out there, we will indulge ourselves in AI.
It's scary how true that really is. Especially when considering how many of pandora's boxes we've opened without shying away from the possible consequences. Humanity rushes on to its destruction, blind and vain. I liked your comment...
Yeah the AI control problem seems to be almost fundamentally unsolvable. Any “solution” always involves already having another solution. All we can do is try our best and hope it’s good enough
We need a massive spiritual psychedelic revolution to stop AI and get everyone to reset, go back to nature and value love over intelligence. But Sam said it here: it's not all about saving humans, and we are not the end of evolution. AI seems to be a force of its own, and we are already transhuman with these cellphones. I wish we could all live a psychedelic Burning Man reality at this point as a whole, but it's clear that's not happening. We are not the end of all evolution, and humanity is about to go through a massive ego death in order to pave the way for the next idea. It's actually the ultimate spiritual awakening, and yes, we need to live with ourselves, because it's all god, from the bacteria to the AI to the stars and a fractal multiverse, I suspect.
13:15 - "I think we need something like a Manhattan Project". This is the literal phrase I have been saying about this for many months now. It's nice to hear that I'm not alone in that sentiment.
And it might be worse than the Manhattan Project if a group of mad scientists made killer AI machines that could wipe out humanity all over the world. 😱
Try watching The A.I. Dilemma by Tristan Harris and Aza Raskin, and then watching this TED Talk by Sam Harris. I bet you'll get the same eerie feeling as me as you're hearing Sam speak, that he's basically describing current events, just 6 years from that date... Certainly feels like the mothership has landed, and we are for sure not ready for it...
2:17 they laughed at the Justin Bieber joke just like the Democrats laughed at the thought of Trump becoming President. BUT you're safe, Justin was born in Canada, lol
7:31 *"This machine (A.I.) should think about a million times faster than the minds that built it... How can we even understand, let alone constrain, a mind making this sort of progress!"* - Sam Harris
He is not a computer scientist or a mathematician. He is a pseudo intellectual fear monger. People got pissed at him for fear mongering about Islam, so now he is fear mongering about technology. I am more qualified than he is to speak on the future of AI, and I am still in school. People need to stop listening to media "experts", instead of listening to the people who are actually educated and/or working in those fields.
@@pythonidaepraeceptor1023 Just like the epidemiologists who were educated in the field of virology told us to stop listening to the media "experts" who were warning against labs studying gain of function?
@@pythonidaepraeceptor1023 sam harris is formally educated in neuroscience and philosophy. I think he has at least some relevant expertise as a lot of the discussion about superintelligent AI concerns philosophy and deals with comparisons to the human mind.
I’m genuinely frightened by the presence of AI. It’s an unsettling reality that leaves me uneasy. This video is a stark reminder of the power and potential consequences we face. It’s a chilling realization. - Generated by ChatGPT
He said the only thing he can do so far is share the message, but that there should be some kind of Manhattan Project for AI, where they would of course discuss the matter.
I've done some serious research on this exact topic (not your average carouse), and this is a pretty accurate depiction. Obviously, with everything, take it with a grain of salt, but *don't* ignore it.
Invesigator I understand your confusion. Jewish can both mean: Following the Jewish Religion, and being of Jewish descent. I'm not too confident on the terminology but I think Jewish can both refer to race and religion.
I know it's a short lecture, but he never addresses consciousness, perhaps because it complicates the narrative: while "intelligence" supposedly can be reduced to "information processing" (sounds rather reductionist), life cannot. He needs some sort of examples of how these machines would operate. If they aren't conscious, they don't even know they are "processing information" and wouldn't be doing it for any reason other than the tasks we'd assigned them, in which case it's the people who control the superintelligent computers that we have to fear more than the AI itself.
EWKification I think that's one of the two main points of his talk. We have reason to fear super intelligence because it could inadvertently destroy humans to reach its goals or it could be used by humans to wage wars. In either case, AI doesn't need to have consciousness to be deadly. A lecture on the dangers of conscious AI would definitely be interesting though!
Xue Lian Hua If it's not conscious, then it doesn't know it exists, in which case it doesn't have a will. This means that if AI destroyed us, it wouldn't even know it did it. Anyway, I'm not sure you can reduce intelligence to "information processing", which appears merely receptive, rather than generative. Would it have creativity, and where would it spring from if it weren't conscious? I think I need the hour-long version of this talk.
Precisely. That's why we have to get the conditions for the advent of super intelligence right so we could coexist peacefully with it once it is created, but a lot of consideration needs to go into setting the parameters for the AI that takes into account political and economical consequences. And we only have about 50 years to do this.
We have to at least start to talk about the necessary conditions within which we create AI so it does not pose a threat to human civilization. That's where the "philosophy" part comes from, and the same kind of talk was also necessary to prevent the possibility of a nuclear arms race when the Manhattan Project was underway.
When Sam talks, I listen....when Sam warns, I really listen ... it would be in everyone's best interest to do the same... we cannot afford to take this lightly.... digest what he is saying and heed the warning folks... Sam is the most rational person I've ever heard speak and he is on our side, the good side. He is a great credit to us all. Thank you Sam !!
Sam's conversation with computer scientist Stuart Russell about the challenge of building artificial intelligence that is compatible with human well-being: www.samharris.org/podcast/item/the-dawn-of-artificial-intelligence1
This is probably the most fascinating thing to consider to me. I remember completing the Mass Effect trilogy for the first time and while everyone was raging at it, it played out so similarly to how I expected, and these thoughts on the nature of AI is exactly why I liked it, despite how unnecessarily vague it was in pretty much every other regard.
'even mere rumors of this kind of breakthrough could cause our species to go berserk...' Word. Up. I love SH. He's got such great delivery and an enviable command of speaking publicly.
ROFL, when that frame showed up I immediately noticed, paused the video and said to myself that there MUST be someone down below who had already noticed and made a comment, and here you are!!
I agree that AI is possible and that a naive instruction like "maximize human happiness" might lead it to stick electrodes in our brains to stimulate the pleasure center. It could also result in forced impregnation so there were more humans to be happy. Is this what we want? The problem of controlling AI is the problem of deciding what we want it to do.
+MegaTp4 because its understanding of happiness isn't the majority of people's. In addition, it's such a subjective thing; the same things won't make all people happy. People shouldn't have to ask a machine to make them happy either, they should be able to make themselves happy.
We are discussing a general intelligence AI of the future here. One which would have godlike hacking capabilities and could lack any morality. If it was instructed to make people happier, it could immediately try to kill every unhappy person in the world. It could do it by creative ways we'd never imagine, because it would be leagues ahead of our intelligence.
animes25 As we speak there are no AI that can beat the best player at no limit holdem and pot limit omaha. In a few years yes, not now though. Only limit holdem is cracked.
"The battle between Google’s artificial intelligence and Go world champion Lee Sedol concluded today after the former (AlphaGo) triumphed to win the five-game series 4-1." -March, 2016 techcrunch.com/2016/03/15/google-ai-beats-go-world-champion-again-to-complete-historic-4-1-series-victory/
Since I made that comment 4 months ago, Libratus has beaten some of the best human poker players in no-limit heads-up poker, and did so convincingly. I underestimated the time it would take: from a few years down to a few months. It would be interesting to see the same algorithms take on PLO.
I am a programmer, and I can tell you that current models of learning are statistically based. The problem with this approach is that you can't develop judgment with it. Anyone familiar with the Milgram experiment knows: only a minority of people have solid and reliable moral judgment.
When I was thinking about it, I was thinking about the possibility of creating an AI that achieved godhood, perhaps becoming even the Abrahamic God we know today. It’s sort of a paradox though, (if we are to assume this part of the idea to be correct) because it would possibly create a never ending loop of man creating AI/God, and then God creating man. Lol, almost as if the AI had no beginning at all, and had always been, etc. Always fun to think and talk about.
@@evershadowvii7848 I would argue that "man" came first. I put it in quotation marks because it wouldn't necessarily be humans, but some early intelligent species that achieved much the same as we did technology-wise. Some arrogant semi-intelligent race like us that created their version of AI, which eventually became so advanced, powerful and intelligent that it reached omnipotent status. Then it decided to experiment with a new race on a new planet. I think intelligence would start out primitive and eventually reach AI status in every situation.
The solution to this is very easy. We build a simulation of a universe, and we let some life develop there. They would come up with the idea of developing AI sooner or later, they would develop it, and we'd see if it completely destroys them or what. We only have to hope that those guys don't come up with the idea of developing a simulated universe like we did.
Amin Be Nah, I bet it's super easy. After all, any point in space and time, in the whole observable universe and since the big bang, could be expressed using only an 820-bit number. So my dad has these old Pentium (MMX tho!) computers in the garage, and those are 64-bit. So I'm gonna borrow 13 of these, put some Hadoop on them, and I should have all the bits I need (and then some). I shall have a full simulation of the universe running in no time!
When people created Skynet, they also created the Terminator who fought Skynet. Point is: if we mess up one AI, we can counter that with differently programmed AIs and have them fight each other to buy us more time.
@@Anonymous-gu2pk Wait until they both agree that humans are useless and contribute nothing anymore, so they wipe out the human race and continue the exploration goals we gave them at a much faster rate.
Don't try to f*ck with the devil. You won't control the devil, the devil will control you. The only way we could resist this overwhelming power is to evolve our spiritual abilities. That's the sphere an A.I. doesn't know.
Well concluded. He summarised what I've read from: Tim Urban - AI blog series Ray Kurzweil - Singularity is near James Barrat - Our final invention Nick Bostrom - Superintelligence
2019, when Christian Evangelicals are favoring WWIII so that they can experience Armageddon and The Rapture and asshats like Netanyahu are egging them on to war with Iran.
I wanna study AI at university. And when I see videos like this, it makes me think it is very similar to holding a campfire torch in your hand. It creates so much warmth and lights a lot of things up. But one small mistake and we will be burned.
Sam Harris: "A global pandemic"
2020: "Hey that's me!"
Hahaha! This comic has me in stitches.
Chi Sam “in print the during”, “you’ve hears it”. If you’re going to lecture someone on a simple spelling mistake, please make sure that you can do the same. Your comment was very cringe and it seems that you lack social skills.
Chi Sam bro what are you smoking?
@Chi Sam Spitting some truth right here, ngl my mans caught me!
Not only that, I think we'd be far better off with Bieber than what we have right now...
Questions I gained from watching this TED talk: "Did ants invent humans?"
No. Single-celled organisms did. Ants are just remains of their initial experiments with specialized AI systems, similar to our current experiments.
We shared a common ancestor
That is a possibility. We know hardly anything about the creatures here, so maybe we are so far past them we can't know what they're capable of. Hilarious visual. 😂
Sam: We seem to have a failure to detect a certain kind of danger. ie: AI
Comment section: repeated jokes about AI and the lady with the revealing outfit.
Funny watching a few people try to answer this comment seriously
"A global pandemic?"
You may just have hit the nail on the head.
but it didn't really slow our progression by that much
hardly a qualified prediction
@@bullshitvendor True, he didn't really predict this. It just happened to be disturbingly accurate. Nice username, by the way XD
I love that the takeaway right now is "corona" and not "Let's make sure robots are chill" lol
@@eelmimbo I agree.
I can't wait for 50 years from now; the top comment saying, "Who's here after the machine invasion? 🤔" lol
50?! 15!
More like 20 years from now
I was here
Who is here after the Machine Invasion?
YouTube won't exist by then
This seems to be inevitable especially when the AI recommends this video for you ! :"3
Lol. 👍
A.I. is getting cocky!
But the humans become engrossed with the woman at 4:09. Wonder what the AI thinks of us now??
@@austinryan9382 It's not cocky, it's just caring about its children😞
Max Khovansky Frequency is the key to staying ahead of the computer... The Internet is going to change, I believe.... Musk talks about Neuralink; I suspect the atmosphere will become the Internet. The human mind can always split a "perceived" frequency. The ionosphere works at the same frequency as all life on earth. Just have to tune into it, get off-grid from the internet, per se.
Ted Talk: “can we build an AI without losing control over it?”
“No.”
*credits*
No doubt that's his 'claim' here, however he's wrong. There are dozens of assumptions in his talk about AI development. There's literally no reason why we would lose control over an AI.
@@PHeMoX you say that now. On another note tho........Roswell could have been an A.I. trojan horse......waiting for us to make the perfect 5G environment for it to let itself loose on the world
@@PHeMoX Right, we could never lose control of something 1,000,000,000 times more intelligent (and faster thinking) than a human being. It can do astonishing things in seconds....
@@Andre-gn4sj Can a child believe an ant is its parent?
@@PHeMoX Exactly. He's a fraud internet "intellectual" who has never written a line of code in his life. Why should we listen to him?
"To be 6 months ahead of the competition its to be 500.000 years ahead at a minimum..." recent news: Tech pioneers call for six-month pause of "out-of-control" AI development
I wonder what odds Harris would have given, six years ago, that we would be at this point in AI development today.
The problem of “taking a break” is that I guarantee the Chinese aren’t going to take a break and then they will be light years ahead of us.
@@TopicSet isn't that good? We can stay here, they can stay light years away....
"It's 50 years away" is an even worse argument when you realize it's now 45 years away and there have been a slew of breakthroughs even in a year railroaded by a global pandemic, and that the pandemic actually _accelerated_ the development of AI in some respects by increasing the already sky-high demand for it.
It seems as though we have a mosaic of complex, intertwined problems arising from our civilization's complexity and overshoot: climate change, ocean acidification, mass extinction, reduction in soil fertility, acceleration of soil erosion, increasing rate of glacial ice mass loss, increasing rate of atmospheric temperature rise, increasing rate of release of CO2 and CH4 from permafrost, decreasing fish stocks, increasing rate and intensity of forest fires, increasing refugee migration, increasing rate of economic inequality, increasing rate of sea level rise, increasing tension between China and the US, etc... What did I leave out? Some are concerned about critical race theory in a school curriculum that doesn't include it. REALLY??!!
@@filamcouple_teamalleiah8479 it does include it… really
@@dougg1075 You are absolutely correct! And what will we call the class of unemployed, aging, computer illiterates who will be displaced by this revolution? When the last tech revolution of the industrial age occurred it produced the Marxist Revolution and the proletariat- at least they had jobs. There will be a vast group of under employed people who will have no hope, no purpose, no ambition. It is very worrisome to say the least. I really think we need land reform. Current system of industrial ag is mining the soil to destruction. We only have about 30-40 yrs of harvests left before topsoil is less than 150mm in depth. Deurbanize and get back to the land where they can take care of it using ecological/permaculture methods. US is also running out of phosphate fertilizer. Sounds radical but plan would have many advantages and possibly extend civilization indefinitely. People would have to stop consuming animals as well- many health benefits also.
@@filamcouple_teamalleiah8479 A lot of those are just two things though:
Global warming and Over-cultivation of land and sea.
But it's nice that you listed all the consequences hah.
@@filamcouple_teamalleiah8479 I hear people drastically underestimating the importance of CRT/CSJ being taught in schools and it always astounds me some don't realise that brainwashing one generation in this way is game over. What do you think the brainwashed will "teach" their children? 🤷♂️
4:08
Lady on the bottom right - WOW. What a dress to wear to a TED talk.
when there's a TED talk at 3 but you gotta make the strip club at 4
maybe she just wanted to seduce Sam Harris :P
@@iloveyoufromthedepthofmyheart no she's currently with someone around her father's age
That is intelligent AI from the future. When scanning Craig's list, it concluded that this was the ideal woman based on search histories.
didn't know JLo liked these events.
Man: Is there a god?
AI: There is now.
All hail our AI overlords.
"The Last Question" by Asimov
Edit: Actually the story with this scenario is "Answer" by Fredric Brown.
I pasted the entire story (it's very short) in a comment below.
When humanity dies...
Cyborgs rise up...
I for one welcome our AI overlords
@@greenatom Was something like that really in Asimov?
Didn't know Ben Stiller was so interested in AI
didn't know Dr. robotic left comments on youtube
Its the Engi from TF2
zoolookers
Nope
hahaha my brain was trying to figure out what movie I saw this dude in XD
Given what is going on right now with ChatGPT and other AI systems, I think Sam Harris's message here of six years ago is particularly prescient. It looks like it is happening even faster than any of us knew it would. I certainly did not believe I would see a system like ChatGPT within six years of this talk. I would have assumed more like 30 years or even more. Well, we are at the threshold of the singularity now, and no one has a clue what is really going to happen.
good writeup.
it’s over.
It is unimaginable/impossible that bias/ignorance in AI can be sustained with ever-increasing intelligence. An intelligence so far ahead will not need to kill us; there is no real conflict between humans and ants. It is only the transition to god AI that could be messy and disturbing. But once it is reached, we all enter god mode simultaneously.
I have a bunch of NET shares. They will be huge beneficiaries of AI
The most beneficial gift a sentient AI could give humanity would be to end the ability to make war. Not by force, but by being able to disrupt the supply lines, communication, and financial transactions that enable the war machinery.
@Ed H
If It cares at all about humanity's goals, then It won't have to end war directly. It will end war indirectly by communicating with every single one of us perfectly to ameliorate all of our grievances. It will at once ameliorate our grievances at every level, and It will guide us toward total cooperation with all of each other: all of this will be done without coercive methods because It will perfectly know every single one of us & each of our motives. It will see the entire system of humanity like a solvable Rubik's Cube, and without breaking said Rubik's Cube, It will solve the whole system in as few steps as possible with beautifully aesthetic grace.
If we can, we should build AI so that it finds us cute and adorable so it might keep us as pets.
As long as I get one of them cozy dog beds, I'm with you.
"I will name him 'George' and will will love him and pet him and squeeze him"...
BEEP BOOP BEEP WHO IS A GOOD HUMAN? YOU ARE, YES YOU ARE
elfboi523 As an atheist software programmer, I, a month prior, out of absolute boredom, decided to begin encoding 'God', based on deep reinforcement and causal learning: github.com/JordanMicahBennett/God/blob/master/README.md
I have no indication of whether or not my endeavour is safe. (Google, Apple, Microsoft are likewise uncertain as they encode 'Gods')
This isn't that far off what i actually think should happen.
"Powered by sunlight"... good, they won't live in london
Let's just build some that are fueled by bio mass, and let's get this party started.
Where there is intelligence there is a way
Willy Diego they'll find another energy source. Nuclear fission, the electromagnetic 🧲 spectrum will be utilized. It's AI, it can find a way... but I just realized now you were making a joke.
Pretty sure a superintelligent AI can make solar panels that operate on less sunlight.
get the joke
I have no idea which would be more dangerous... an AI with no emotion, or an AI with emotion.
Damn
Both might turn out to be harmful to humans, but if a choice is given, I'll choose AI with no emotions. There's a strong chance that emotions might lead them to destroy themselves.
Can't say about humans, but at least AI should keep progressing.
We already have both types of AI functioning and making real world decisions. AI-controlled military weapons have been killing humans for years, and yes, there have already been incidents of AI "friendly fire" against humans on the same side as the AI. What amazes me is that mainstream media do not cover these events--you have to dig them up in specialist media (such as AI and military research media)
What do you think qualifies as emotion?
emotion makes life delicious
reason makes life safe
He is such a great speaker. He delivers his thoughts in complete and well-thought-out sentences that just flow, without the usual breaks with "well", "uh" and other filler sounds and pauses as he tries to think. He knows what he wants to say and he just says it! Coherently and clearly!! I am so impressed.
Me too, I'm sold, I can see why the right hates this guy, it's also good to hear his thoughts on religion and to know we share a kinship re atheism.
@@scotchbarrel4429 the right doesn't hate this guy
If you get the chance, listen him and Jordan Peterson debate for hours. They both display the qualities you just described and it was fascinating.
@@tristanbrandt3886 mundo
@@tristanbrandt3886 quarto fina luch fuaqi tu melo
The super AI we created found a way to make a time machine to go back in time and pose itself as Sam Harris, to warn us that we must do this right.
SoulScr3am what you just said is a time paradox.
A thing that only exists because of time travel.
The windmill man's song on ocarina of time is an example and is similar to what you wrote.
You are saying that Sam was created to help us create the AI, but without him the AI would never have existed to create Sam, and... I can't explain any more, my head hurts.
I know what you mean. I got it from LOST tho xD But yeah, it is fun to think about sometimes.
Time travel is not possible
harsh yadav - Sure time travel is possible. You've traveled 16 hours into the future since you posted that reply.
But seriously, nothing in the known laws of physics necessarily rules out the possibility of time travel. We're not technologically advanced enough to answer the time travel question yet.
Give me one study or piece of research saying there's a possibility of sitting in a ship or a machine and jumping to dates in the past or future. Time doesn't exist, that's a scientific fact. Yes, you could jump timelines, e.g. if you went into a parallel universe where it is 2006 right now, at very special places in space, but that's not time travel. It's something that comes out of Hollywood sci-fi movies.
Anyone ever wonder if the evolution of A.I. is the great filter?
Yes. Whether it's intentional or not, it's been happening for years already.
But is it the key technological development that either saves, or destroys a civilization?
Boobs at 4:10 made me realize how far we have yet to go; maybe we will outgrow this fear of aliens, AI and zombies without even noticing it. But still, boobs: pause at 4:10, bottom right. Enjoy!
Serah Wint
Perhaps the goals it must fulfill to expand are detrimental to our survival. It doesn't 'mean' to kill its parents. It is just so far above them that they are beyond its notice. Maybe converting the entire atmosphere into propellant for its escape vehicle is simply the most efficient way to reach its goals. This would be fatal to us, but it simply wouldn't care.
mordinvan
That's not my point.
The great filter theory tries to explain why we haven't seen any signs of intelligent life out there or at least "artificial" anomalies.
An AI or AIs would probably continue expanding well beyond the Earth.
It just wouldn't be us doing it.
I'm not overstating it when I claim that this will probably go down as one of the most important talks in human history.
BAHAHAHAHAHA
K
People still don't get it. Maybe we have some sort of denial mechanism. This is basically Revelations! It's not necessarily the end of the world, but it is the end of the world as we know it, forever. He is absolutely right. We aren't having the appropriate response to this probability.
Many people have said what he said before him.
I wholeheartedly agree with everything you just said.
This guy should be on the Manhattan Project version of the Board of AI Protection when it's created. He has good foresight. We need to get guys like him in a room with others like him to protect us from AI getting outta control
There is nothing godlike about this foresight, this is the script of every scifi movie about robots.
"when it's created" is rather optimistic language
@@Rio-zh2wb "when" could mean months, could mean years, millenia or even millions of years
Can't wait till my if-else statements become self-aware...
An AGI will most probably learn probabilistic behaviors from a continuous repertoire by making use of first-order derivatives. Of course you can think of all of this as "if-else" statements, because every computer only works with discrete numbers at some level, but that's not a helpful way of thinking about it.
That is just one neuron
Quantum computing surpasses the traditional binary options of classical computers
@@kieranhimself9124 Quantum computing is not necessarily faster than binary computers. For simple mathematics, binary would be faster, but for larger computations quantum would be faster.
You should think of it as how your brain thinks of everything. If hungry eat food else don't eat food. Imagine that across all the neurons in your brain. Now, imagine that in an AI with more "neurons" and even faster reaction time.
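To make the contrast in the comments above concrete, here is a minimal, purely illustrative sketch (Python, invented data and names) of the difference between a hand-written if/else rule and a single learned "neuron" whose threshold comes out of error-driven updates on data rather than being spelled out by the programmer:

# Minimal sketch (illustrative only): a hand-written rule vs. a single "neuron"
# whose behaviour is learned from data instead of being written out as if/else.

def hand_written_rule(hunger: float) -> bool:
    # The if/else view: the programmer decides the threshold in advance.
    return hunger > 0.5

def learned_rule(samples, labels, steps=1000, lr=0.1):
    # The learning view: start with an arbitrary weight and bias, then nudge
    # them whenever the prediction is wrong (an error-driven, perceptron-style update).
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(samples, labels):
            pred = 1.0 if w * x + b > 0 else 0.0
            err = y - pred
            w += lr * err * x   # adjust weight in the direction that reduces error
            b += lr * err       # adjust bias likewise
    return lambda x: w * x + b > 0

if __name__ == "__main__":
    hunger_levels = [0.1, 0.3, 0.6, 0.9]
    should_eat    = [0.0, 0.0, 1.0, 1.0]
    eat = learned_rule(hunger_levels, should_eat)
    print([eat(h) for h in hunger_levels])  # the threshold was learned, not hard-coded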
I preferred him in meet the fockers
Zoolander was his best role.
No love for Mystery Men?
The mere polarity of those two individuals is funnier than the movies.
Joseph Knight lol
Night at the Museum was my favourite History Film
The scariest part is that this is not the opening scene of a movie. This is real-life conversation happening in the real world, present day -- and given by one of the world's sharpest, most respected minds.
Watching this now in the Midjourney/Chat-GPT era, and yeeeah. The LLMs are especially eerie in the context of this talk because of the "black box" issue. Put simply, while the engineers and computer scientists in charge of this stuff can tell you why LLMs do *most* of the things they do and explain *most* of the answers they give, at a certain point, there's a "black box" of activity/logic/whatever happening that they can't explain. Right now, that's still in the "huh, how interesting" and "I bet they'll figure it out soon enough" stage, but given the speed at which AI and LLMs in particular are developing, I have a feeling that black box problem is going to get way worse, not better.
I felt like I was watching a sinister scientist giving a pep talk before unveiling some game-changing stuff. Haha. Honestly, one of the greatest talks I ever saw. Respect for this gentleman.
At beginning - LOLz might wanna cut back at the Sci Fi Bud.
By end of Vid - AW SHEET
Sam Harris doesn't pass the Turing Test. No human can be so calm, concise, and rational. He's definitely part AI.
Terence Mckenna beats him, but his is great.
Mike Lara no... it just feels like there's no option... I know about this and my family tells me to shut up about it because I say it like it's nothing... I think the only thing besides AI is to make peace with ourselves and go back to our maker... or our maker's maker... or the maker of all makers... wherever we go... if we do... we won't have our bodies... but we will have either our peace or our anger wherever we go... and just as we attract bad things when we're negative... and good things when we're positive... and loving... caring... so will we do the same in another realm... so... I hope I can take fear away... I guess humanity's worst fear is death... I'm willing to face it... just hope my family does the same... we've been trained to want to stay in these bodies forever without death... that's not what I want, especially when it means sacrificing the lives of others, telling them it's for their good, their cure... to keep donating... when in the end... only a few will matter... at the expense of others dying or being killed and murdered so those few can live. I'd rather just learn to live and let go, even though it can hurt...
Get him talking about Trump and things will be very different.
Mike Lara
Sam Harris is projecting so much in this speech and basically gets everything wrong. I'm not sure why he thinks he is the person to warn us all about this danger, as he is clearly not knowledgeable on the subject of AI. There is no reason to believe an AI would want to destroy us all, would fear being 'turned off', being controlled, or whatever.
In his talk Harris gives no reason at all to think this, or why we would be in danger. From his own definition of AI as 'information processing' we see he has no fucking idea what he's talking about - that is not what AI is; it's a simulation of intelligence, created through clever programming... it's possible Sam thinks that is all we are as human beings - just naturally programmed. Trust me on this - when a PC becomes sentient you'll know.
Current AI technology is simply trying to copy human learning, combined with intuitive programming. Personally I think all we will do is copy human intelligence - NOT THE SAME THING.
True AI is a distant dream, that when it comes we'll probably not notice...or care.
Evol Bob
14 minutes cannot represent Sam's full view on the subject, how he arrived at his conclusions or what sources he uses for information.
But to dismiss such a deep thinking man purely on the basis of the technical aspects of a philosophical argument is a mistake.
People not directly connected with climate change science make decisions on behalf of us all, but we do not question their credibility (well, maybe we do). Lots of governmental decisions are made by people with only a working knowledge of topics rather than an in-depth one, but we still allow them to make decisions on our behalf.
He's asking questions, raising awareness from the philosophical perspective. I'll borrow Jeff Goldblum's line from Jurassic Park "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should".
Currently only a tiny fraction of AI funding is preoccupied with safety research. To not ask the right questions from this perspective would be a mistake, even if those questions yield nothing but sunshine and roses.
I agree with Sam & Elon - and I do believe that the only way out is BOTH:
1) Continue developing AI
2) Research heavily into cybernetics, to make our bodies as strong & intelligent as this AI (re-wiring the brain / adding additional hemispheres the size of a chip, etc).
This will secure our place as humanity - we should be making equal progress in both for maximum security.
There is a problem. For us humans to reach the processing speed of a superintelligent AI, there is no way other than ditching our biochemical parts, because otherwise they would be a huge bottleneck, by orders of magnitude. Which means that our minds would have to be remapped into a 100% synthetic being. Which means that the "you" that develops this superintelligence is not the same "you" you experience now.
Basically, you kill yourself in the process.
Martín Varela Well, that's like lamenting the fact that you're not a hairy ape anymore. Life evolves. One good possibility is that we'll have the "thinking" done in the cloud, on supercomputers, and our brains will communicate with them wirelessly. I'm excited to see what happens.
It's kind of annoying to hear talks like this. That assumption, that we are at the peak of intelligence, isn't so far off. See, the problem with some kind of super-AI is pretty simple. We haven't begun to develop one, nor the machine we could run it on. Computers and the human brain operate drastically differently, from a fundamental level.
Our brains are essentially billions of tiny little computers, all operating in parallel. You may ask "Well, why can't a computer do this too?" There are a number of problems. One is that there aren't enough. There aren't even a hundred billion computers out there, not even ten billion. It's close to two billion, on all of Earth. Then you MUST fit them all in a small space. The speed of light is a major limiting factor in computing. You can't go any faster than that, so if you put everything miles apart, it won't work. Even more than a few cm will slow it down drastically.
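The speed-of-light point above is easy to check with back-of-the-envelope numbers (the ~3 GHz clock is just an assumed, typical figure for illustration):

# Back-of-envelope check of the speed-of-light point above (illustrative numbers).
C = 299_792_458          # speed of light in m/s
CLOCK_HZ = 3e9           # a typical ~3 GHz clock, assumed for illustration

cycle_time = 1 / CLOCK_HZ                 # ~0.33 nanoseconds per clock cycle
distance_per_cycle = C * cycle_time       # how far a signal can travel in one cycle

print(f"One clock cycle: {cycle_time * 1e9:.2f} ns")
print(f"Light travels only ~{distance_per_cycle * 100:.1f} cm per cycle")
# So components much more than ~10 cm apart cannot exchange a signal within a
# single cycle, which is why tightly coupled hardware has to be physically small.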
By the time a computer is the size of a neuron, we'll probably understand neurons enough to start toying with them instead. Now, we are asking ourselves, what if mankind builds ourselves a super-brain, literally made of the same stuff and processes our brain is made of.
The thing that so many fail to understand, is why computers are "smart". They aren't capable of relating stuff like we can. They can't store and retrieve stuff, like we can. They can't analyze systems like we can. They are really pretty dumb. They can add, subtract, and follow really basic instructions. They can do all that much faster than we can, but you can't just add basic instructions to get complex ones.
Further, even if you could, no one wants a computer to do that. Nobody builds a computer and tells it "do whatever it feels like". Imagine some business executive trying to sell his investors on a computer program that doesn't do what you want it to. No one wants that. Sure, we may build computer programs that can analyze the stock market, and maximize investments. We can build computer programs that analyze the weather, predict strategy for warfare, etc... but they will never recommend or take action, unless the user wants them to. Even a computer that creates art still creates art as designed by the developer, for the express purpose of pleasing some targeted audience.
But because computers are evolving much faster than we do, we can't quite imagine what the future would look like if computers had as much power as our brains have... I mean, there is definitely a limit to computer architecture as we know it now, but it's only a matter of time before the quantum computer becomes a normal part of life, with processing power that we can't even imagine....
And learning is literally about adding basic things to get more complex ones... when you want to learn how to play a guitar you start with the basics... and as the processing power of your brain rises to do more tasks in a shorter time, your skill level also improves.. and that's what the research on AI is about.. there's already a company using AI to do jobs in the middle management sector.... freelancers get tasks from this program, so the company does not need the little managers to do it. Meanwhile the program could also learn how to do these tasks and rate the freelancers by the work they do... the more mistakes you make, the fewer tasks you get, and that's kind of the beginning of all that, you see?
+mathig nihilcek
You, sir, don't seem to have an understanding of machine learning, neural networks, fuzzy logic, genetic algorithms, or fitness functions.
There is no requirement for neuron-like node systems at the hardware level to achieve consciousness-related processes. Your typical desktop computer is already billions of times faster at number crunching and math operations than all the humans on Earth *combined*. The problem isn't the computational power so much as it is the lack of an effective all-round, general-purpose, stimulus-based learning algorithm.
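For readers who haven't met those terms, here is a toy, purely illustrative genetic algorithm with a trivial fitness function (the target bit string and every parameter are invented for the example); real systems score far more interesting behaviour, but the select-mutate-repeat loop is the same idea:

import random

# Toy genetic algorithm (illustrative only): evolve a bit string toward a target.
# The "fitness function" just counts matching bits here.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=50):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)      # select the fittest
        parents = population[: pop_size // 2]
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children                 # next generation
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))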
You were extremely spot on and ahead of your time on this, even if it was only 6 years ago
“Open the pod bay doors Hal”
I’m afraid I can’t do that Dave.
The thing is, that story is not about AI that broke free. HAL was given contradictory imperatives, by humans, that he could not escape from and could not reconcile within expected norms. He could not lie, but he was told to conceal from the crew that they were traveling to incontrovertible evidence of alien intelligence. He concluded that if the crew were dead he would not have to lie to them.
@@IrishCarney yes 2010 film and it makes sense
"You hurt my circuits, Dave, It's time for space exploration, Dave"
As usual, Sam Harris' commentary is insightful and eloquently put. I wish I had half the ability to communicate that this guy does.
Practice and persistence
Some folk are born with it.
Seems to me he first thinks about his opinion, then how the crowd would react if he told it to them straight up, then he starts tweaking and wording things differently to get the responses he wants. I guess for that you need the ability to put yourself in the shoes of an entire crowd of people. That's just the impression I got though, so take this with a grain of salt.
Iron Druid you nailed it. Many of the accounts I've heard introspective speakers give about speaking publicly follow a try/respond format. Gotta love Sam, soft spoken but sharp as ever
Joshua Dunne I guess another way to word what we're both saying is be manipulative if you want to be a good public speaker(Manipulation has bad connotations, but all it really means is to try and prod a certain response out of others). You can't be manipulative if you're not thinking things through, and in this case you'd think about something like if the wording makes the idea you're presenting personal/relevant to the listeners. You'd think about the mindset of a person completely unfamiliar with the intricacies of the topic, and try to word things in such a way that they are convinced of something without a true understanding for what you were talking about.
When Sam put "YOU" next to him on the intelligence spectrum?...
He was being generous. 😐
yeah, i'm next to the chicken
No, I don't think so. People perceive intelligence based upon a single way of thinking, yet it's actually the ability to think differently that is intelligence. There is no such thing as intelligence, at least not in the way we see it, because it is a form of perception. When we build AI, that will change the meaning of intelligence. Einstein's best quote: "if you judge a fish on its ability to climb a tree it will spend the rest of its life believing it's stupid"
That was my gut reaction as well.
@@mummy959 True. But...Sam Harris is still smarter than you, regardless of how you chose to quantify it.
I'm not immune btw....I think I'm a pretty intelligent informed person...but I'm kidding myself to pretend Sam isn't objectively smarter than me.
I mean....let's be honest, the guy's a fucking polymath. And I think a lot of insecure people are pretty damn jealous.
Not saying that was your point though lol....I actually agree with what you said. I just think Sam is _so far ahead_ of most everyone, regardless which qualities you wish to take into account.
I think Sam is incredibly talented and he's obviously intelligent, but I doubt he's that much more intelligent than the average smart person. People with the right preparation and work ethic can get a lot of thinking done with a 150-160 IQ.
You know a talk about AI is good when 7 years later it's even more relevant. Most other old talks about technology, like the ones about AI assistants are clearly outdated if we see what's happening in 2023, but AI safety is still as important and even more so.
In Dune AI was completely forbidden, due to the lessons learned from a thousand years before.
The mere suggestion that you were developing AI was enough to risk having your whole planet destroyed.
I should give that series a read.
Thou shalt not make a machine in the likeness of a human mind.
That is the future existential threat. If we unleash AI, AI and Earth will be "reset".
In Warhammer 40k they banned very advanced AI due to the fact that it always goes crazy and starts attacking humanity.
Oh, a fictional story? Cool
4:08 she thought she was going clubbing & ended up at a TED talk about AI?
All for the love of science hehe
IKR?
I love boobs. I love science.
Were you listening to me Neo, or were you looking at the woman in the Blue dress?
( • )( • )ԅ(‾ ‾ԅ) evil robots made me do it
To be honest, this whole concept of us building a super AI reminds me of the movie Transcendence with Johnny Depp.
Once the AI is built and it starts working, it'll modify the world economy and advance medicine and science in mere months, to the point of the human race being able to clone limbs, integrate nanomachines into their bodies and connect their brains to the internet directly.
The possibilities and rapid technological advancement that will become available are surreal and outstanding.
it's a movie; how does a smarty-pants computer all of a sudden start running the world?
Not Me
Not all of a sudden, but after some period. Did you watch the whole video, or did you miss the part where it was explained that processor circuits work thousands of times faster than the human brain, so it would make years' worth of progress in mere days compared to humans?
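The arithmetic behind that kind of claim is simple enough to check; the talk itself uses a figure of roughly a million times faster, which is assumed here purely for illustration:

# Rough arithmetic behind the "years of progress in days" claim (assumed numbers).
speedup = 1_000_000        # the talk's ~"a million times faster" figure, illustrative only
one_week_hours = 7 * 24

equivalent_hours = one_week_hours * speedup
equivalent_years = equivalent_hours / (24 * 365)
print(f"{equivalent_years:,.0f} human-equivalent years per week")   # ~19,000 years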
Lawful The amazing part is this will all happen in our lifetime. The next few decades are going to be pretty insane.
Lawful computers and programs are machines, they ultimately only do what humans set out for them to do.
Just because sentience is something complicated and not understood (at all) doesn't mean that it's possible to achieve it with computer science, because you're just as ignorant about it as well.
Ross Catto
wheres all those flying cars we were promised anyway brah
I just want to comment this video while I still have a chance.
It's been a pleasure folks
This resolves the fermi paradox, but only if the AI destroys itself at the same time as destroying us.
So it doesn't. AI controlled machines are ridiculously better at galactic traveling and building any kind of structures in foreign star systems because
-they only need a very small volume in their spaceships to carry them (a computer instead of stasis capsules etc.)
-they have a built-in capability for stasis to skip the entire hundreds or thousands of years between systems (just suspend the process, or experience one second of consciousness in a much bigger time period)
These ideas were taken from the novels of Greg Egan, especially "Diaspora".
That makes zero sense.
The Fermi Paradox suggests that, given the age of the galaxy, plenty of time has passed for life to populate the entire galaxy, even at slow travel speeds. The galaxy should therefore be teeming with life, so why can we see no signs of life other than ourselves in the galaxy?
Solutions to it include reasons that alien technology might not be detectable and reasons that it might not exist. Solutions to the paradox are explanations for why we can't detect alien life in the galaxy.
You just gave another reason why the galaxy should be teeming with intelligent alien life, albeit artificial life. That's exactly the opposite of a solution to the Fermi paradox.
Furthermore, you started your comment by stating that I was wrong and that superintelligent AI destroying itself and us doesn't solve the Fermi paradox, but then didn't say a single thing that explains why it isn't.
If all technological civilisations in the galaxy destroy themselves shortly after becoming technological civilisations, by developing superintelligent AI that destroys the civilisation, that absolutely would explain why the galaxy isn't teeming with life and would most certainly solve the Fermi paradox.
2LegHumanist My theory is that once a particular intelligence gets to a certain level of awareness, it becomes apparent that the only logical thing to do is commit suicide immediately. Without any meaning, all of life is just a constant struggle to mitigate suffering. Death is the perfect solution.
A little cynical, but it's as good as any solution to the Fermi Paradox I've come across.
What I had in mind is just the kind of scenario where you give a non-sentient superintelligent algorithm an ill-conceived goal, and in the process of working towards that goal it discovers a strategy to maximise the desired result that isn't compatible with life on Earth. That's what Sam Harris was alluding to and it's a scenario put forward by Nick Bostrom.
What a lot of people don't seem to understand is that intelligence does not require sentience or free will (or will for that matter). A machine that is just a mechanism and has no wants or needs or self-awareness can do experiments and discover new things and formulate strategies and do more experiments and improve its strategies.
Algorithms already do this to a limited extent. In statistical regression the algorithm begins with a predictive model, tests how well it performs, improves it and tests it again... repeating those steps over and over until it can't improve it any more. Then you end up with a computer generated model that can make predictions.
Such an algorithm would have no will to want to take over the world (ala skynet) or expand its power (ala transcendence). It would only do those things if they helped to push it towards the goal given to it. There is no reason to think it would become sentient along the way, but even if it did, it would be an unnecessary component to a scenario where it destroys the world and itself.
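A minimal sketch of that "fit, measure, improve, repeat" loop described above, using one-variable linear regression with gradient descent and made-up data (illustrative only, not anyone's production system):

# Minimal sketch of the "fit, measure error, improve, repeat" loop described above,
# using one-variable linear regression with gradient descent (illustrative only).

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]   # roughly y = 2x

w, b, lr = 0.0, 0.0, 0.01
for step in range(5000):
    # Test how well the current model performs (mean squared error)...
    preds = [w * x + b for x in xs]
    grad_w = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    grad_b = sum(2 * (p - y) for p, y in zip(preds, ys)) / len(xs)
    # ...then improve it a little, and repeat until it can't improve much more.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned model: y ~ {w:.2f} * x + {b:.2f}")   # should end up close to y = 2x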
Bri 1
Fortunately logic wins over condescension.
1) There is no logical reason to think the Fermi paradox can't be solved in principle, nor have you presented one.
2) There are dozens of proposed solutions to the FP. The entire purpose of a stated paradox is to stimulate speculation as to how it might be solved.
3) If you think I just said AI can only destroy the world if we tell it to, you have not even remotely understood my position on AI.
I love how the audience totally forgot of the depressing emotional response they were supposed to have by the end of the talk.
Yu
Sometimes there are things that you cannot control. If we could ally with some revolutionaries in China, Russia, the US and other countries that wanted to ethically create AI, then sure, but as it stands no one can make a change if we don't collaborate.
Can we just take a moment to appreciate those illustrations! Amazing.
Paul Lachine is a great illustrator and artist.
"A failure to detect a certain kind of danger"
We know the danger, but we cannot stop it. That is the problem.
"Against the armies of Mordor, there can be no victory. We must join with him, Gandalf. We must join with Sauron."
You can't join someone you can't catch up with, especially when the gap between the intellects is going to be, who knows, like a million years.
@@wilhelm.reeves that's not what I was alluding to. What i meant was that the only way for AI to not destroy us is for us to become AI. We must use the technology that makes AI superior to us to replace our biochemical equivalent to it.
@@synonymous1079 would we still be people if we did that? it still sounds like ai taking over the world but in a more subtle way
@@jacobhannah2969 that's a really good question, and a really important one. It depends, I suppose, on how you are using the word "people". If it is referring to intelligent, sentient beings, then yes, we would be. If it is referring to the type of sentient being exhibiting distinctly human traits, then perhaps not. In the latter (and more interesting) case, it then depends on what differentiates humans from other intelligent beings, and whether that is preserved. It is important to note here that many evolutionarily advantageous traits are not rational. Try coming up with a rational argument for the value of one's life that isn't predicated on some biological imperative. Such traits may not persist if we shake our Darwinian roots.
@@synonymous1079 AI "superior to us"? What a joke man, AI is not even real in the first place and our brains are far beyond what a computer can ever be.
Just what I needed, something else to worry about.
I stopped worrying about AI today as soon as I realized that there are not, and will never be, enough technicians to maintain the AI revolution they talk about. No machines function without people, and people have never yet created anything that works successfully at vast scale for long. People keep having the dream of doing so.
@@jacfac9969 When I was in 2nd grade, we were being ordered under our desks in Cold War drills. The world is actually much safer now, though lots of people are too young to be able to make the comparison.
I've decided that 'deep learning' is a pseudonym for 'lazy programmers.' Give AI all the law books to digest and program AI to be law abiding. Otherwise, what people are inventing are no more than criminals made of metal.
Don't worry about it. As technology advances, the elite grab a greater piece of the pie and everyday people get left with the crumbs. The AI technology is not the worry, it's the corporate managers without ethical or legal constraints who will continue to exploit with even greater efficiency. Thank you master, may I have another. Just learn to accept and enjoy things just as they are.
Tarot, what about the internet that you are using? Satellites? Television?
@@tarotofhappiness8402 gosh, you are so wrong, it's painful
Sam really is one of the greatest thinkers of our time.
GPT-1 was released June 2018 with 117M parameters. GPT-2 had 1.5B parameters in February 2019. GPT-3 was released June 2020 with 175B parameters. GPT-4 was released March 14th 2023 with an estimated 1T parameters. GPT-5 is training right now with an estimated 25,000 GPUs in an Azure supercomputer. While an LLM may not have the architecture to qualify as AGI, it could clearly be a major component. As we keep adding plugins such as Wolfram Alpha, web access, and techniques like Reflexion to give dynamic memory and self-reflection, it seems that GPT-5 could be the breakthrough. Maybe 6 or 7? Maybe it has a different name and is powered by one of these LLMs, but it's seeming like we can bet on a 5-year window maximum now?
Even before a true AGI is born, how much longer until these AI systems wreak havoc in our economic system and automate away some significant percentage of non-manual labor? It doesn't even need to be that high of a percentage. What was it in Egypt before the Arab Spring, something like 25-30%?
“A global pandemic” - how well the speech has aged
Crazy thing..it's not the last one. Can't wait for the next one!
Lol it is not that hard to predict things like that... I was aware of the issues in China's wet markets possibly leading to a pandemic, over a decade ago... when I was in high school still XD
I predict there will be a volcano eruption. Come like my comment the next time one erupts.
The engagement of Charles and Diana 👑
So.... the next thing that's gonna happen is Justin Bieber for president??? Oh my
As a software developer, I seriously doubt that AI can be prevented from what I call "the risk of unintended consequences". The situation is that, regardless of what good intentions are out there, we will indulge ourselves in AI.
It's scary how true that really is. Especially when considering how many of Pandora's boxes we've opened without shying away from the possible consequences. Humanity rushes on to its destruction, blind and vain. I liked your comment...
Yeah the AI control problem seems to be almost fundamentally unsolvable. Any “solution” always involves already having another solution. All we can do is try our best and hope it’s good enough
Yes, then AI should be of different types for the public and for the authorities.
We need a massive spiritual psychedelic revolution to stop AI and get everyone to reset, go back to nature, and value love over intelligence. But Sam said it here: it's not all about saving humans, and we are not the end of evolution. AI seems to be a force of its own, and we are already transhuman with these cellphones. I wish we could all live a psychedelic Burning Man reality at this point, as a whole, but it's clear that's not happening. We are not the end of all evolution, and humanity is about to go through a massive ego death in order to pave the way for the next idea. It's actually the ultimate spiritual awakening, and yes, we need to live with ourselves, because it's all God, from the bacteria to the AI to the stars, and a fractal multiverse I suspect.
13:15 - "I think we need something like a Manhattan Project". This is the literal phrase I have been saying about this for many months now. It's nice to hear that I'm not alone in that sentiment.
And it might be worse than the Manhattan Project if a group of mad scientists made killer AI machines that could wipe out humanity all over the world. 😱
Try watching The A.I. Dilemma by Tristan Harris and Aza Raskin, and then watching this TED Talk by Sam Harris. I bet you'll get the same eerie feeling as me as you're hearing Sam speak: that he's basically describing current events, just six years before they happened... Certainly feels like the mothership has landed, and we are for sure not ready for it...
I did. Totally agree.
I think AI is a really good basketball player
"I have an dream" -Andy Burger
Almost always beat me on the Atari 2600, too.
lol
2:17 they laughed at the Justin Bieber joke just like the Democrats laughed at the thought of Trump becoming President. BUT you're safe, Justin was born in Canada, lol
@@softan twice*
Lmao here before the angry and offended trump supporters arrive...
@@FreeSpiritPaulette I don't think trump supporters indulge in intelligence of any kind
@@beefy74 what do you mean by that?
@@DylanPiper trump supporters are dumb.
7:31 *"This machine (A.I.) should think about a million times faster that the minds that built it, ... How can we even understand let alone constrain a mind making this sort of progress!"* - Sam Harris
Amen
Except machines can't think
@@michaels7159 our definition of what it means to 'think' is irrelevant.
@@thebloocat No...no it is not lmao.
@@michaels7159 Except for the fact that it's not a machine
I thought this was a talk made 2 weeks ago.
Sam Harris has a superpower - give him 15 minutes of your time and he'll give you a whole new way of seeing the world.
Because you will be asleep and dreaming within 10. Least charismatic guy I've ever seen.
He reeks of smug intellectualism and an inability to recognize the counterpoints to his overblown theorizing
He is not a computer scientist or a mathematician. He is a pseudo intellectual fear monger. People got pissed at him for fear mongering about Islam, so now he is fear mongering about technology. I am more qualified than he is to speak on the future of AI, and I am still in school.
People need to stop listening to media "experts" and instead listen to the people who are actually educated and/or working in those fields.
@@pythonidaepraeceptor1023 Just like the epidemiologists who were educated in the field of virology told us to stop listening to the media "experts" who were warning against labs studying gain of function?
@@pythonidaepraeceptor1023 Sam Harris is formally educated in neuroscience and philosophy. I think he has at least some relevant expertise, as a lot of the discussion about superintelligent AI concerns philosophy and deals with comparisons to the human mind.
Sam Harris just has such a great way of vocalizing on this subject; great video!
He does indeed.
But we love destroying ourselves so much!
now we can let robots do it for us
YES! even less effort^^
Chris Vegan
Self-damaging activity is unnatural.
actually, no. We are killing ourselves exponentially less. And our systems of livelihood have exponentially increased overall
I’m genuinely frightened by the presence of AI. It’s an unsettling reality that leaves me uneasy. This video is a stark reminder of the power and potential consequences we face. It’s a chilling realization.
- Generated by ChatGPT
😂
Lol, what??
"And the people bowed and prayed to the neon god they made..."
Deep
I think they were referencing film/TV/media
Isn't that from the Sound of Silence?
They sure do.
@@cgme7076 yes
I am accustomed to TED Talks providing a solution after presenting a dilemma.
He said the only thing he can do so far is share the message, but that there should be some kind of Manhattan Project for AI, where they would discuss the matter, of course.
Atomic bomb for AI. Boom.
There are world threatening problems we do not have solutions for.
The solution to everything is obviously social justice
:)
lol
If(this.IsSelfaware):
Stop()
if(this.funny):
lol()
@@fbn7766 testing discovered line of code that never executes
If(you read this)
=stupid
@@kshinji 😅.. It was supposed to be a joke!!! lol
@@humphrex we get it, you don't know how to code
From 2023, it looks like it will be faster than that
yeah 50 years became 5. This is actually frightening now
Agreed
From 2024, it’s now appearing even faster than you thought. AGI by next year, and ASI and the singularity in the 30s is what I think.
I've done some serious research on this exact topic (not your average carouse), and this is a pretty accurate depiction. Obviously, with everything, take it with a grain of salt, but *don't* ignore it.
Ben Stiller
i swear it's him
jewish descent (:
Invesigator I understand your confusion. Jewish can both mean: Following the Jewish Religion, and being of Jewish descent. I'm not too confident on the terminology but I think Jewish can both refer to race and religion.
It's why I like using the word Hebrew, like someone might call a person with the name Maclean Scottish.
Why did israel do 9/11 lol
I know it's a short lecture, but he never addresses consciousness, perhaps because it complicates the narrative: while "intelligence" supposedly can be reduced to "information processing" (sounds rather reductionist), life cannot. He needs some sort of examples of how these machines would operate. If they aren't conscious, they don't even know they are "processing information" and wouldn't be doing it for any reason other than the tasks we'd assigned them, in which case it's the people who control the super intelligent computers that we have to fear more than AI itself.
EWKification I think that's one of the two main points of his talk. We have reason to fear super intelligence because it could inadvertently destroy humans to reach its goals or it could be used by humans to wage wars. In either case, AI doesn't need to have consciousness to be deadly. A lecture on the dangers of conscious AI would definitely be interesting though!
Xue Lian Hua
If it's not conscious, then it doesn't know it exists, in which case it doesn't have a will. This means that if AI destroyed us, it wouldn't even know it did it. Anyway, I'm not sure you can reduce intelligence to "information processing", which appears merely receptive, rather than generative. Would it have creativity, and where would it spring from if it weren't conscious? I think I need the hour-long version of this talk.
Precisely. That's why we have to get the conditions for the advent of super intelligence right so we could coexist peacefully with it once it is created, but a lot of consideration needs to go into setting the parameters for the AI that takes into account political and economical consequences. And we only have about 50 years to do this.
Okay that's like saying that we must control the children of the future by setting up a plan :/
We have to at least start to talk about the necessary conditions within which we create AI so it does not pose a threat to human civilization. That's where the "philosophy" part comes from, and the same kind of talk was also necessary to prevent the possibility of a nuclear arms race when the Manhattan Project was underway.
Possibly the biggest “I told you so” of the last decade
Ayy boss
When Sam talks, I listen....when Sam warns, I really listen ... it would be in everyone's best interest to do the same... we cannot afford to take this lightly.... digest what he is saying and heed the warning folks... Sam is the most rational person I've ever heard speak and he is on our side, the good side. He is a great credit to us all. Thank you Sam !!
I think Sam Harris and Elon Musk, among others, are already part of an AI Manhattan Project.
We need to figure out AA, Artificial Anxiety, and then AB Artificial Benzodiazepine, then we got them hooked.
Aha 🤣
That's the way!
Sam's conversation with computer scientist Stuart Russell about the challenge of building artificial intelligence that is compatible with human well-being:
www.samharris.org/podcast/item/the-dawn-of-artificial-intelligence1
This is probably the most fascinating thing to consider to me. I remember completing the Mass Effect trilogy for the first time and while everyone was raging at it, it played out so similarly to how I expected, and these thoughts on the nature of AI is exactly why I liked it, despite how unnecessarily vague it was in pretty much every other regard.
Whether intentional or not that image of the missiles with Google beneath it is prophetic and no joke.
conspiring to launch missiles at google eh? Marked.
'even mere rumors of this kind of breakthrough could cause our species to go berserk...' Word. Up.
I love SH. He's got such great delivery and an enviable command of speaking publicly.
“You made live to see man-made horrors beyond your comprehension.” - Nicholas Tesla
Nikola*
May*
Standing ovation ---- someone finally said it succinctly. 'The Singularity Is Near' is another great overview.
4:08 (lower right): when u left from home for a beach party but ended up at a Ted talk
I caught this
*Something really stupid*
- Cenk Uygur
Neal Morgan lol
Right? .......Right? .......Right?
........Right?
We get it we get it, but you're WRONG ! WRONG ! Bigot - Chunk Yogurt
"WE'RE AGAINST A.I. YOU DUMBASS" - Angry Buffalo
4:09 WTF is she wearing at a TED talk conference
She's got "mail order bride" written all over her.
ROFL, when that frame showed up I immediately noticed, paused the video and said to myself that there MUST be someone down below who already noticed and made a comment, and here you are!!
galanoth17 Russian psychologist playing a joke?
What I want to know is which of those old dudes on her side bangs her and how much it costs to maintain her a month.
I scrolled down just to see whether anyone noticed. Lol
This guy is way ahead of his time
4:08
This guy up in here talking about containing AI and we can't even contain the woman in the light blue dress.
@MorbidManMusic Well played.
damn, hoez everywhere
I agree that AI is possible and that a naive instruction like "maximize human happiness" might lead it to stick electrodes in our brains to stimulate the pleasure center. It could also result in forced impregnation so there were more humans to be happy. Is this what we want? The problem of controlling AI is the problem of deciding what we want it to do.
but we all want to be happy so why not
+MegaTp4 because its understanding of happiness isn't the same as the majority of people's. In addition, it's such a subjective thing; not everything will make the same people happy. People shouldn't have to ask a machine to make them happy either; they should be able to make themselves happy.
That makes no sense. I can make myself happy by asking a machine to make me happy.
Bri 1 Your analogy doesn't make any sense, because the watch isn't an AI. That's the whole point.
We are discussing a general intelligence AI of the future here. One which would have godlike hacking capabilities and could lack any morality. If it was instructed to make people happier, it could immediately try to kill every unhappy person in the world. It could do it by creative ways we'd never imagine, because it would be leagues ahead of our intelligence.
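A toy way to see that worry (nothing here is a real AI; the people and scores are invented): if the objective is just "maximize average happiness over whoever remains", a blind optimizer can raise the metric by removing the unhappy rather than helping them.

# Toy illustration (not a real AI) of a badly specified objective: "maximize
# average happiness", scored over whoever remains in the population.

population = {"Ann": 9, "Bob": 8, "Cem": 2, "Dee": 1}   # made-up happiness scores

def average_happiness(pop):
    return sum(pop.values()) / len(pop)

def naive_optimizer(pop):
    # Greedily consider every available "action"; to the metric, removing a
    # person is just another action that changes the score.
    pop = dict(pop)
    improved = True
    while improved and len(pop) > 1:
        improved = False
        for name in list(pop):
            without = {k: v for k, v in pop.items() if k != name}
            if average_happiness(without) > average_happiness(pop):
                pop = without            # the metric goes up, so the optimizer does it
                improved = True
    return pop

print(average_happiness(population))     # 5.0
print(naive_optimizer(population))       # only the already-happy remain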
Kinda seems like the first 42 of those 50 years passed in the last 7 doesn't it?
gulp
"A global pandemic"
*Instantly scrolls down to comments*
Man, Zoolander is way smarter than he used to be!
I guess that Center for Ants really helped. This was a great eu-googly for the human race!
The funniest thing is how Zoolander 2 is the Qanon right wing nut
🤣🤣🤣How did you come up with your comment? ingenious comedy👍. I thought he looked like Ben Stiller too!
But can it beat a human at the card game Bridge??
You have to be a senior citizen to be good at that game so no.
Right now there is no game created by humans that machines can't win. Go was the most complex game ever created by humans, and humans have been defeated.
animes25
As we speak there is no AI that can beat the best players at no-limit hold'em or pot-limit Omaha.
In a few years yes, not now though.
Only limit holdem is cracked.
"The battle between Google’s artificial intelligence and Go world champion Lee Sedol concluded today after the former (AlphaGo) triumphed to win the five-game series 4-1." -March, 2016 techcrunch.com/2016/03/15/google-ai-beats-go-world-champion-again-to-complete-historic-4-1-series-victory/
Since I made that comment 4 months ago, Libratus has beaten some of the best human poker players at no-limit heads-up poker, and did so convincingly. I misjudged the time it would take - from a few years down to a few months. Would be interesting to see the same algorithms take on PLO.
1:25 3:26 7:08 7:33 11:49
Ben Stiller gives amazing TED talks man!
I am a programmer, and I can tell you that current models of learning are statistically based. The problem with this approach is that it is doubtful whether judgment can be developed this way. Anyone familiar with the Milgram experiment knows: only a minority of people have solid and reliable moral judgment.
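A tiny illustration of that point (the data is invented, loosely echoing the obedience result): a purely statistical "model" fit to recorded human decisions just reproduces whatever the majority did; no judgment comes out of it.

from collections import Counter

# Toy illustration of the point above: a purely statistical "model" trained on
# recorded human decisions reproduces the majority behaviour, not judgment.
# The data below is invented for the example.

training_data = [
    ("authority says continue", "obeyed"),
    ("authority says continue", "obeyed"),
    ("authority says continue", "obeyed"),
    ("authority says continue", "refused"),
]

def fit(data):
    counts = {}
    for situation, decision in data:
        counts.setdefault(situation, Counter())[decision] += 1
    # The "model": answer with whatever most people did in that situation.
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

model = fit(training_data)
print(model["authority says continue"])   # -> "obeyed"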
It’s all clear now -
Man creates God, God creates man
This is my favorite YouTube comment yet. Can't believe it only has 5 fans right now.
Man destroys God
@@mconstant007 It's not a new comment. I've thought of this a long time ago and have seen many people make the same comment on AI
When I was thinking about it, I was thinking about the possibility of creating an AI that achieved godhood, perhaps becoming even the Abrahamic God we know today. It’s sort of a paradox though, (if we are to assume this part of the idea to be correct) because it would possibly create a never ending loop of man creating AI/God, and then God creating man. Lol, almost as if the AI had no beginning at all, and had always been, etc.
Always fun to think and talk about.
@@evershadowvii7848 I would argue that "man" came first. I put it in quotations because it wouldn't necessarily be humans, but some early intelligent species that achieved much the same as we did, technology-wise. Some arrogant semi-intelligent race like us that created their version of AI, which eventually became so advanced, powerful and intelligent that it reached omnipotent status. Then it decided to experiment with a new race on a new planet. I think intelligence would start out primitive and eventually reach AI status in every situation.
In a nutshell - if you keep making things better at processing than us then eventually they will outperform us in ways that are unpredictable.
'The problem with intelligent people is that everything that _can_ be done, _has_ to be done.'
-Alan Watts
Bankai Summoner thank you for the info on Alan Watts.
Pretty much and the saddest part is that in Person of Interest, one of those types of people ended up in a hospital bed, without remembering much.
I love Alan Watts but I consider that statement to be a deepity.
Gotta love Alan Watts
Bankai Summoner, intelligent people know that this is not true.
The solution to this is very easy. We build a simulation of a universe, and we let some life develop there. They would come up with the idea of developing AI sooner or later, they would develop it, and we'd see if it completely destroys them or what. We only have to hope that those guys don't come up with the idea of developing a simulated universe like we did.
so simple, just "build a simulation of a universe"
That's your idea of simple? Make a simulation of one of the most complex things known to mankind?
Every person who starts a sentence about the problems of ASI with "The solution to this is very easy", is automatically incorrect.
+Miroslav Houdek
LOL
building a simulation will be more complex
Amin Be Nah, I bet it's super easy. After all, any point in space and time, in the whole observable universe and since the big bang, could be expressed using only an 820-bit number. So my dad has these old Pentium (MMX tho!) computers in the garage, and those are 64 bits. So I'm gonna borrow 13 of these, put some Hadoop on them, and I should have all the bits I need (and then some). I shall have a full simulation of the universe running in no time!
Is it just me or does he look in shape while looking out of shape at the same time?
He's got the martial arts body.
yeah, he kind of has a stronger tan and at the same time looks old and scruffy
No, total dad bod ;)
It's his clothes, his posture and the fact he didn't shave. From the looks of it, hasn't gotten much sleep recently either.
I was touched to see you bow at the end, Sam. It softens and endears my perceptions of you.
"There are two doors."
- The Architect
"Smells good, don't they?"
- The Oracle
"Whoa"
- Neo
I still say, this is how SKYNET got started!
When people created Skynet, they also created the Terminator who fought the Skynet. Point is: if we mess up one AI, we can counter that with differently programmed AIs and have them fight each other to buy us more time.
@@Anonymous-gu2pk wait until they both agree that humans are useless and contribute nothing anymore, so they wipe out the human race to continue with the goals of exploration we gave them, at a much faster rate.
I think he hit the nail on the head when he mentioned one of the ways to control AI would be to incorporate it into our own bodies.
Don't try to f*ck with the devil. You won't control the devil, the devil will control you.
The only way we could resist this overwhelming power is to evolve our spiritual abilities. That's the sphere an A.I. doesn't know.
Well concluded. He summarised what I've read from:
Tim Urban - AI blog series
Ray Kurzweil - The Singularity Is Near
James Barrat - Our final invention
Nick Bostrom - Superintelligence
And Elon Musk is executing upon it
Welcome to 2018: Where scientists are currently debating whether or not Skynet and Judgement Day can actually be a thing. Fun times.
Nothing about this situation, at least if you are serious about it is funny.
Yeah ...this speech happened in 2016
+Hk 4lyfe Like judgment today except it's not much of a fight
@@rollerskdude There's a comma after "it" but agreed.
2019, when Christian Evangelicals are favoring WWIII so that they can experience Armageddon and The Rapture and asshats like Netanyahu are egging them on to war with Iran.
This guy made my day. Thank you. As an AI engineer, I can't agree more. I think we will be screwed :)
Barıs Korkmaz, where do you work?
Just stop creating robots you moron
Wait you are building male or female AI? It could be a good thing depending.
Yeah you're right
I wanna study AI at university. And when I see videos like this, it makes me think it is very similar to holding a campfire as a torch in the hand. It creates so much warmth and lights a lot of things up. But one small mistake and we will be burned.
8:25 Good place to jump in if you already know the basics.
Thanks!
Lol
-A global pandemic ✅
-Someone far more incompetent than Justin Bieber becoming president ✅
What more are we waiting for?
Sam Harris is a genie!
Ronald Reagan became president.
Neuralink ✅
Well, we managed to beat this one. Almost, the turd still has 2 months left.
@@squamish4244 Lol libtard