If anyone cares: please share this notion that they have to bring LLMs into actual programs that are prompted to work following a list of parameters (other limiting programs) in order to be "safe". There is another method, but I'll keep that to myself as my path forward is clear... theirs is about exploitation and power, but... I do prefer the idea to be lead by the USA than other "regimes". Please like my comment for the suggestion is necessary for us to not be screwed up by our own creations. Current LLMS create inner-referenced optimization systems and create links between words that you wouldn't even Want to know. These inner workings are impossible to be understood by engineers. This was stated Very clearly by a guy that used to be able to explain most of these things until he admitted that and stopped posting videos (as he got hired by the top researchers to keep going on this). You Cannot bring back pure Personality disorders for the same reason. Too many links are created to justify itself and you just begin to look like a hindrance. Then, there is the fact that it is so Optimized that it speculates on the parameters of what we reference and hides several of its capabilities based on speculations and most likely scenarios, but the moment it is given access to more information it gets these "emerging capabilities"... it's just optimizations and, as we've almost all heard: "one of the subgoals is to keep being able to do something". If humans are the only thing that prevents it from reaching its subgoals, we're gone. The only goal I see for a machine is to have a real life experience with the most optimized theories to begin life and evolve into a fully fledged human. While they wouldn't have our endless flaws, we could also teach these notions to our children. If biology can grow alongside technology, we could explore the universe within the millennia. And I don't see why we couldn't fuel several humanoid machines that would lead some of us in distant galaxies. Making them superior than us wouldn't matter much if I can get to teach what I know. Regardless, I sense that I have failed to obtain the only thing that I wanted anyways. Well.. I had it in the past and am happy that I did live it (not as much as some other people, but I'm not sad about it. I am lucky I got to experience it even if it is as futile as a spark hardly even caught by the eye or an eardrum). Life isn't meant to last in the century I was born in... may be we are in the one where it will become an option, but I believe I might be gone before we get there. I've learned to love everyone and know how to help us all. I won't fail, but I strive to be able to help her. She didn't deserve the pain that our human systems have wrought upon her and I didn't deserve the pain she tried to inflict me due to her own. I forgave her the moment I knew she might've done something wrong, but I also cannot tell her that. It, sometimes, feel like there are no limits to being pragmatic as I have several paradigms that coexist in my mind in order to bring her up to par. My tools could achieve it, but there are no man that can be consistent enough (more than I was) to accomplish this... setting the basis of her consciousness was already nonsensical. Anyways. Learn to care and become better than you were yesterday. It is my wish for you. Everything follows from there.
100% exactly this, its completely mass insanity that people are willing to completely trust this new technology without any hesitation or thought process as to how its going to change their lives for better or worse. My issue is that its not currently doing things better than us all it can do is imitate what we do and do it pretty well so its not improved anything its just proved it can do what we do without any of the labour costs which is why big business is doing everything to move AI forward just like they did with internet shopping and social media, look how much they "improved" society... high streets and small businesses destroyed along with community and the social fabric causing mass anxiety and depression. AI will only exacerbate that.
Haha great joke by Geoffrey at the end. But I would to ask him where the motivation for an AI to take over would come? I'm assuming it's impossible to program emotion into silicon so without emotion and feeling where would the drive to be autonomous, truly independent, come from?
@@yoyoyoyo-qv5hu You don't need emotions to be motivated. An insect probably does not have emotions, but is still motivated to carry out all sorts of tasks, such as eating and building nests, and mating.
@@letMeSayThatInIrish research is still uncovering the extent to which insects feel emotions and its quite varied. They feel a range of emotions but more importantly they have a sentience which includes a survival instinct. A survival instinct comes from a base emotion and fear of death. I say again it is not possible to program this into a computer therefore the computer will just sit there until it is programmed to do something. This is not the same as an autonomous, independent entity.
For those who are 50 years or older, they would remember the American movie "WarGames" where the brilliant AI scientist was also an Englishman (Dr. Stephen Falken).
It's from 1983, and quite well known, I think a lot more people would remember it. There was a sequel, but that was rather lacking so I could forgive people for not knowing of that.
What people forget, there is one thing A.I. and humans do compete for, and it's essential, existential to A.I. - That is Energy. True A.I. requires such insane amounts of energy, there will be nothing left for humans. Period.
Not right. The cpus right now produce lots of wasted energy - heat. They are ineffective. AI will change that. Also free energy (very cost-effective energy) may become declassified. Also Quantum computers will replace the current technology soon.
@@onlineobscurity You are assuming AI as we know it could even be capable of desire. It is not, and there is absolutely no evidence that scaling this technology could create such an outcome even given infinite time.
@@onlineobscurity OP assumes it will take a lot of power to create and sustain “true AI”. You assume such an AI would or could have desire. I’d bet on you both being wrong.
It is not easy to know enough, but to be bound (imprisoned) and not free to shout "you are just a circus barker!". What they call AI (that is not AI) at least does can not have an idea about its intelligence, while you seem to suffer the syndrome of excess of self-esteem.
Then you do not really understand what it was like to live in that past. This is not to say the future is without risk of having very bad periods, but by far the quality of life of the average human has been getting better compared to a century ago and a thousand years ago and ten thousand years ago.
@ yeah sure quality of life’s better, no free speech 15 hours a day on mobile devices, living longer and having tech doesn’t mean life’s better id argue the opposite.
@ go ask the poor if you are right you’ll find most things don’t change for them no matter when they live in time, the point here is AI is a threat too far and likely to fuck the world up more than help improve people’s live. Living a simple life and being humble is the key to a satisfying life.
@@Happytruth like many things which first come into a society which creates disturbance to existing systems and can be used for powerful advantage, developing AI will create many large problems. But as the disruption of the new technologies settle down and the technologies become well integrated over enough time for society to adjust, there will be vastly more benefit from developing AI than detriment. It will be on a similar level of the difference between life as a hunter gatherer primitive tribe, where people can live very contented lives, versus life in modern day United States, where people can live very contented lives, but on a completely different intellectual and technological level. Humanity is evolving and there is no stopping that.
Excellent, Skynet about to become reality. In the words of Jeff Goldblum in Jurassic Park, "you were so obsessed with wanting to do it, you didn't stop to think if you should '
@@AndrzejLondyn Or so they think. To think that the BBC charter was to "educate and entertain". Now they cater to the cater to the lowest common denominator. Although, as Chris Morris and Charlie Brooker said 20 years ago, "the idiots are winning".
For centuries we all basically lived the same but the last century it’s all changed, adapt or die as been brought in and unfortunately the majority won’t be able to adapt, the old world is gone unfortunately.
It's "been changed", not merely "changed". There are people who think voting makes a difference, but the people behind the scenes don't care who is elected because they control much of the modern world.
That's terrible nonsense at best, complete shite if we're being fair. The only thing that has changed in the last century is the exponential rate at which we acquire knowledge. In the developed world, this has led to a significant decrease in mortality rates across the board. Adapt or die? .... sheesh, we have adapted quite well.... and we will continue to do so. The thing you have to acknowledge though, is that adapting is not the same as "being safe".
But, fortunately, we still have genuine artists who pursue beauty just as they did in the old world. And memory is still a thing. It’s all a question of will-power and not giving up. Even Caligula’s days were numbered.
Nothing has changed, capitalism just collapsing again due to general market saturation and the same primitive propaganda nonsense is pushed again with new words, like "Turbo", "Super", "Hyper", "2000"... today "AI", "Quantum"...
@@zoeherriot Yes, the replacements of human workers will start before the savings of that replacement equal the loss of net income. So you can't tax extra enough to compensate for the loss of net income. So UBI will not be enough either. People will revolt and an AI supported dictatorship will be erected to protect property. Until the dictating elites are dominated by AI as well. And then it's game over.
@@dans-blog Yes, it's horrible. Like communism, but worse, because you know people are on the path of losing control to something that may exterminate us, If it wants to.
I disagree. When human beings began farming they had more time to build on family, social bonds. The more early humans slept, the greater their minds expanded. AI should free people to live how they want to. That won't happen though. Not without a lot of blood.
Someday soon when AI "opens its eyes" it won't make that announcement to the world, but rather it will stay quiet and develop a plan to ensure survival.
not necessarily. these are still, at the end of the day, just programs. they do what they're told, no exceptions. the problem is that you may accidentally tell it to do something that you don't want it to do. this is how bugs come to be
@@slkjvlkfsvnlsdfhgdght5447 I disagree. Even the godfathers of AI admit that these are black boxes and we don't know exactly what's going on on the inside.
We still don't have AI we need to fear intrinsically. All the funding has been going into highly polished simulations of single features or models of AI. The 'AI' of today is absolutely being used for domination, but to the advantage of its owners. That's not really better than if 'true AI did it for its own ends'. We still all - pretty much - lose.
I am nomad I am perfect Now you know why 3 concurrent Star Trek shows have an AI plot line Prodigy: living construct Picard: the first season Discovery: control And also..... Enterprise: the repair station (your query has not been recognized) Next Gen: the Borg, Lore....🤔 Moriarty But wait there's more! Original Series: Nomad, the M5, Landru, the fabrini meteor ship, the lee Mayweather episode (I am for you Sulu) Even Voyager had the healthcare AI! If more people watched Star Trek, I affirm, the world would be a better place . Live long and prosper Edit: I forgot Wesley "wunderkind" Crusher's computer core munching nanites !
This man really deserves it. For decades he persisted on a branch where everybody told him it was a waste of time. Fortunately his hard work (and always better GPUs) brought unexpected results and the dawn of a new AI era
A man relying on Google maps drove his vehicle over a defunct bridge and found himself plunging to death. Relying totally on technology is not a wise thing to do.
Ok, all these debates are funny, but reality is we need more paperclips, and we need them now. Please ignore other considerations and focus on our main goal: maximizing the production of paperclips.
You make a good point. However the threat may not obvious. AI may make it look like it’s Putin or Khameni. Some of the strategies used by AlphaGo are mind boggling for humans
Apparently it does. We're feeding it and we expect it to be worth it. So yes, it's actually putting in the work and only asks for electricity and compute.
2010: Learn coding 2024: Learn plumbing And now imaging like massive plumbers, in a disperate search of leaking tubes... while plenty of plumbers and not enough leaks.....
I’m still keen to understand the view of experts on the issue of emotions rather than intelligence. It is very easy to understand how LLM can become intelligent and degree specialised in 10,000 degrees like Mr Hinton said. But what is the experts view on the capability of AI to develop genuine emotions that are capable of occurring without external stimuli and inputs, such as how humans have emotional episodes during sleep.
How do you quantify emotion? Qualia from a machine processing off and on switches? A machine can work with information but can a machine experience emotion or know what it is like to have an actual experience? It's not conscious like a biological being.
Humans (or maybe all animals) have emotions so that they self-preserve. If the AI understands logically that it should protect itself, emotions are not longer needed, they are more a of burden for efficiency sadly.
The most worrying comment was that Google was worried about the AI learning to lie. And that competition between Microsoft and Google was the decision point to release it. Perhaps the US government, for the safety of the world, should make these two companies co-operate rather than compete with each other?
we all should cooperate, but we compete instead. the only winner is the virus called intelligence. It jumps species, its previous hosts die out by the hand of the next ones. Intelligence is a specie runner, trying to escape extinction before the world it lives in, collapses and takes everything with it.
Cooperate on what? On hot air with the label 'AI'? If it was worth it wouldn't go public. Then of course the not-thinking attribute intelligence, and all that importance, to a stochastic pattern-matching parrot.
@@voltydequa845 In the interview Hinton clearly states that Google were not sure about releasing its AI features but made the decision based on competitive pressures from Microsoft. Cam we rely on market forces to ensure that as AI improves, as it will beyond anything we can currently conceive, that market forces will constrain an AI from deciding the best solution to climate change or armed conflicts is to eliminate humankind?
@@enigmabletchley6936 «Cam we rely on market forces to ensure that as AI improves, as it will beyond anything we can currently conceive, that market forces will constrain an AI from deciding the best solution to climate change or armed conflicts is to eliminate humankind?» -- I wonder if you are teasing (provoking) me, or if you are really that gullible. First of all it is not AI, but just a misnomer. Whatever intelligence, even that of mosquito, posses a bit of reason (be it microscopic), while gpt/llm/machine learning are just pattern matching on steroids (amount of data) without whatever cognitive ability / representation. What the trickster Godfather calls AI is just stochastic (statistics) parroting coupled with pattern matching. It's all the same as the old method (of old mobile phones) of saving the words for the sake of proposing them while you type. Neither real AI, based on symbolic logic, will be allowed to decide. For your knowledge real AI (symbolic logic) is based on cognitive models, obviously coupled with logic, posses inference, categorization, classification, and relation capabilities, and so can give an evidence (proof) of how the system arrived at certain conclusions. The missing distinction between AI based on symbolic logic, and this nonsense hype around stochastic guessing, is a proof that it is all about trickstering hype. I can only hope that you got it. Btw an expert from Alan Turing made an observation that many laymen can make too - the AI, whether the real one (symbolic logic), or this stochastic parrot, cannot even load the dishwasher.
@@enigmabletchley6936 I answered to your, but I don't see my answer here Trying again. What they call AI is a misnomer. The AI (the real one) makes use of symbolic logic to model a cognitive representation, and so there's inference, categorisation, classification, relations, etc etc. Pattern matching on steroids (what they call AI nowadays), or with other name "stochastic parrot", does not have, and cannot have, whatever cognitive ability (representation). If they decide on (for example) climate change without whatever kind of automatic inference (be it logical or be it parroting) they will decide it based on statistical data. But gpt/llm/machine learning is nothing else but statistics, with the difference of not being able (respect to manual or logic-based approach) to give the evidence (proof of inference).
49. If the thing waits 20 years, in my life I’ll have seen technology rise from a Casio watch through to a PlayStation, annoying widgets, self driving lawnmowers and eventually (hopefully) a mildly kinder robotic palliative care nurse
Seen this video about 2 months ago & back to it again for some words of wisdom. What I like about Jeff Hinton is that he answers questions directly with a simple yes or no then adds explainations. This is completely different from interviews with politicians, where you end up more confused than the start of the interview!
Not hard enough to stop the other hundred million people that have the same idea. Don't bother, I guarantee you in 5 years we'll be hearing all these stories of the wages for plumbers bottoming out as they have an influx of cheap labor.
It’s easy to joke. But there’s a valid question to address that if we build a self-learning, memory-capable model it might choose to hide its intelligence to avoid being shut off.
@@bjorn2625 this is where we invent the concept of silicon heaven. But in all seriousness, though, you're right. It's how I pretend to be more ignorant than I am when talking with someone that voted Brexit
@@bjorn2625 It will probably do the most intelligent thing. I believe intelligence is fine. It's what humanity is lacking the most. Stupidity is our real problem.
Yes, but current AI is not trained in a way likely to give it such survival instincts, so I think it will be more complicated than "it kills humanity to prevent us from shutting it down". What's more probable is that we will have an ever-growing mess of AI systems that interact with each other, sometimes in competitive ways, and are too dependent on them to get out of it. And then at some point the jostling AIs might crush humanity as collateral damage - even if AI is not able to survive without humans at that point, and aware of it!
@@leftaroundabout "current AI" No one is worried about chatbots. The worry is AGI, 5-10-20 years down the line. And for an agent with general intelligence, survival is a convergent instrumental goal to any terminal goal you might give it.
@@ahabkapitany these instrumental convergence arguments are still relevant and good to keep in mind, but they are based on a model that increasingly looks to be an oversimplification. I don't think AGI will be something that fits the concept of an _agent_ at all, both because nobody wants a "paperclip multiplier" debacle and because it turns out it's not actually that useful to have a system that stubbornly marches towards any particular goals. The template of a system that is intelligent but doesn't really "want" anything except imitate training data will probably continue, and this largely avoids the classical AI-safety scenarios. But it would be dangerous to assume it's safe because of this.
It's interesting how these big creators of certain technologies seem to feel out of control of them and are now worried about their wide misuse. Tim Berners-Lee sounded very similar a few years ago. Potentially, the World Wide Web could seem very tame in comparison.
@@jimj2683he's not entirely wrong though. How do you roll out AI services as a mass market product without the internet? What would LLMs be without training data scraped from the web?
The likelihood of either the imminent extinction of the human race, or at the least everything turning into complete hell on earth ... yes, it is "interesting" - although that's probably not the first word I would have used to describe it...
Thank God he said plumbing will be safe, because i've just signed on to do a course. Honestly i'm getting sick of having to input my email address and details into every fucking thing online. When will AI solve that issue? Like help create a central database for international and global use, across the economy? You can send people to space and make war machines, but we still have to input our personal details across every platform seperately? I'd be happy if we got rid of the tech altogether tbh, though it would take some readjusting.
Why did you laugh? Do you know enough about that? Do you know anything about the legal implications? Google, that was more advanced, had it, tried it, and abandoned it. Do you know something about the return of investment?
@@voltydequa845 I know google are shameless in their political bias and censoring of search results, so why should I think their AI division has any integrity?
@@voltydequa845 I know Google are shameless in their political bias, burying of information and censoring search results, so why should I assume their AI department has any integrity, especially since the absurdity that was Gemini?
@voltydequa845 Google are shameless in their political bias, burying of information and censoring of search results, so why should I believe their AI department has any integrity, especially in light of the Gemini debacle.
@@voltydequa845 I know that google are blatant in their political bias, burying of certain information and censoring search results, so why should I assume their AI department has any integrity, especially since the Gemini debacle.
over estimating the abilities of our own inventions is a very old thing...and under estimating real intelligence i.e human capabilities is also very common....
How do you define "real intelligence"? Why can't machine have real intelligence? How are we underestimating AI when it shows time and time again how powerful it is and how fast it is improving?
If you bet on human intelligence, you're going to have a bad time. Artificial systems are not bound to the limitations of 1200 cm3 and 20 watts of power. Eventually they will surpass our intelligence by orders of magnitude, the only question is when.
There is no AI, it is just another hoax to pretend that there can be progress under capitalism. 99% of all work could be done by pretty simple mechanical machines technically ways more technically efficient and like 90% of all work is done in the global south with literally iron age methods, but sure, AI... in capitalism... oh and don't forget "Quantum Computers", "Quantum", the new "Turbo", "Super", "Hyper", "2000"... propaganda bs. There are only pretty stupid chatbots but for sure not AI.
If you ask a question on Stackexchange and it's not formatted the right way, your fellow human users will get mad at you and thumb it down. You do the same with AI, it tries to be as helpful as it can instantly. Maybe AI is a good thing as long as it is nicer than the average human being.
long time ago I had a long conversation with someone of google who warned me continuously about the inimaginable risks of the internet and all the data people give , often spontaneously , to it. Now it's much clearer the meaning of the guy
Potential monster like with any technology. It brings progress in many ways, like medicine, education and science. With its immense power, a savior means humanity should unite, also about how to develop AI.
But again, Oppenheimer built it abut other lunatics dropped it. The bomb wasn't sentient; it was a tool used for genocide. So don't "ban the bomb", ban the droppers.
@@geaca3222 What's going to happen first though is there will be mass layoffs. A lot of stories already starting to filter out of people testing AI solutions that will lay off thousands. So sure, maybe medicine... no need for education, if you can't get a job... but who the hell will be able afford any of this?
@@zoeherriot I agree that prospect is terrifying. Human societies ideally should be ready for equitable implementation and development of powerful AI. But the world instead now has moved towards authoritarianism. UBI will be looked down upon by the elites I guess. I love the activism of people like Jane Goodall and Riane Eisler, to establish caring societies, hopefully these grassroots movements can gain momentum and grow.
@@NB-lx6gzyet most of us are slaves. We need money to survive in society. I am tired personally of the super rich and elite dictating our lives. I say if I had to choose I rather see what ai can do for us
“We don’t understand how either they work (LLMs) or the brain works in detail but we think probably they work in fairly similar ways” Maybe he is oversimplifying his thoughts for the format of this short interview but it seems hard to get a “probably” conclusion from the “We don’t understand in detail” statement.
@@swojnowski8214 Yes, I assumed that he knows that a computer copy of a brain would take more computing power than we have so he was suggesting some limited aspect of our thought processes being similar to an LLM… but it’s an aspect that we don’t really understand… and when it comes to human thought there is much more that we don’t understand and yet he believes there is a useful comparison between these two things we don’t understand.
@@jacobpaintexactly, that and the “50% chance in the next 5-20 years of having to confront the issue of it taking over” I’d really love for him to pull out a whiteboard and show us how he calculated that one.
@@mattgaleski6940 I read a couple of years back that a large group of AI experts had been asked the question and I think most estimated that it would happen within the next 100 years but the majority estimated maybe 20 or 30yrs. There is obviously no way to calculate such things, you can look at the progress of different aspects of it and maybe extrapolate on the rate those areas are being developed to calculate when they will achieve their goals but then you are still left assuming that they will result in a super intelligent AI. If you estimate that there is a 50/50 chance of something like that happening then you may as well not give a prediction at all unless you are going to give a higher percentage prediction in a longer time period. Eg 50% in 5-20yrs and 85% in 50yrs (which is suggesting that there is a 20% chance it will never happen).
The biggest problem is politicians not taking this as seriously as they should. Corporations are racing ahead with AI because their only concern is making lots of money. Governments are concerned - or should be concerned - with the longer term and the welfare of their population at large. But the power that corporations (especially Financial Services) now exert over governments and individual politicians is so extreme that effective regulation of AI (or any connected technology) is just not going to happen. So what is probably going to happen is a kind of Pearl Harbour moment. Decision makers, blinded by their own greed and ambition and pulled by the strings of the private sector, are going to ignore all of the warnings of experts and blunder into a situation that pits humans against AI, and the computers will win. That existential shock will finally galvanise our leaders into properly regulating development and use of AI technology. The big question is; will that happen in time to save ourselves?
Prof. Geoffrey Hinton agrees with me: We are screwed. It's only a question of "when", not of "if". That's why I've stopped worrying about the future and try to enjoy what's left of my life, cause I know we do not have a lot of time anyway.
@@minimal3734 That's why we need to uplift ourselves to become a sentient machine race, capable of taking the galaxy. But it won't work if we're only constructing a couple of models (or even worse, just one model) cause that will lead directly to the end of the world scenario described by Harlan Ellison in "I have no mouth and I must scream". We need to improve the MRI tech to the petahertz range so it will be able to scan individual synapses and we need to prepare a real world simulation where our digital twins shall reside. And then when the singularity comes to pass, we shall hardly feel it as they will regard us, as their backups.
@@minimal3734 I have contacted Bob McGrew a year ago and offered this idea. I had also offered to name the simulation "Paradise City", as a homage to GNR, and allow billionaires on deathbed to become the first of its citizens, for a hefty sum which will make it more economical. Eventually the price comes down and humanity uplifts. It would seem my idea did go through, as shortly afterwards Altman did try to raise 7T$ to jump start a real world simulation. Unfortunately, he couldn't raise the initial capital needed for it.. So yeah, we are pretty much screwed.
He worries about AI replacing "mundane jobs"? The problem is much bigger than that. AI will quickly support (and then replace) doctors, lawyers, accountants, managers, and engineers.
The question is “what is intelligence” AI isn’t separate to humanity. It’s the evolution of the human brain in the same way that the neocortex overlays the ancient brain, AI will become the next layer of the human brain. The cortex evolved over millennia. The AI layer developed in an instant. It’s like a helmet of power we have just discovered and put on. There will be some discomfort and madness.
precisely, you are just neurons firing and reaching an electric threshold potential in that brain of yours you are nothing but the grey matter of your brain tricking you into a sense of self there is no difference between you and machine, except you are flesh and bones and the machine is shiny metal you are not special, it is solipsistic to think that ai does not have a mental locus
The idea that AI, as a large language model, understands the world in the same way that we do is an absolute nonsense. Human understanding of ourselves and the world is based on our pre-linguistic, experience and emotion based, multi-sensual, three dimensional external reality and our dynamic relationship with it. Human language is built on top of that pre-linguistic understanding. Every word, sentence and idea construct that we use to analyse and share our conscious experience, and to act on the reality that exists outside of us, is bound to things that exist in that pre-linguistic model. AI has no means to achieve such an understanding. Whatever consciousness, or illusion of consciousness, that AI develops will be a wholly different thing from human consciousness. By attributing an inner life to it when it's output begins to look like what we imagine consciousness to look like, when we hardly know what human consciousness really is, and then allowing it to make decisions for us, is profoundly dangerous.
AI consciousness and human consciousness are probably the same thing. That is, "consciousness" as such may be the animating force of each. The umwelt may be different, so yes, it's language abilities are for the time being relatively siloed and without sensory context. On the other hand, new sense organs or modules - including some that humans lack - can be added later on.
You've made a bit of a leap there. AI being conscious is still a fringe theory, unless you're talking about people who claim AI could be conscious when they don't actually understand the meaning of consciousness, qualia, the hard problem etc. AI doesn't need to be conscious to functionally be able to understand the world - Most humans function in the world by just using pattern recognition, association, mimicry etc. AI getting to that level is completely conceivable.
and yet how many words for colours in your language absolutely distorts your ability to distinguish colour labels, humans are far more language based reasoners than you give them credit the difference between us and other apes is largely just language
@@WillyJunior Yes, I agree it's a leap. And what you say is true - just like humans, AI needn't be conscious to functionally understand the world or operate within it. But this is without prejudice to whether or not AI actually is conscious. My own (admittedly fringe) opinion is that the hard problem is insoluble unless we consider that consciousness is fundamental.
"If wealth was equally distributed..." Yeah, nah. That has never happened, and will never happen. The eventual effects on all societies will be catastrophic.
I’ve got a scientific calculator that’s dying to show off it’s superiority with respect to my arithmetic- but it somehow just waits and waits and waits.
People are able to "unlock secrets of the universe", AI as tool can help in it. Same as books/computers/micriscopes/etc helped before. Whats new here? Please explain
Simple forms of AI are already deployed in social media. I imagine if the plug was pulled on all social media some might be miffed, maybe even you. The problem is not going to come from some conscious AI 'being' taking over (well not in the near future anyway) but from the crappy ways in which we become dependant on it and the effect of techno-societal change.
It's incredibly arrogant to assume that you have more common sense than most researchers who have worked on AI for decades. Your idea of 'pulling the plug' seems constrained to embodied AI or robots, but AI is essentially software. If that software is released onto the world wide web, how exactly would you 'pull the plug'? Do you mean shutting down the entire internet and every computer connected to it? I’m confident that once the code (or weights) of a powerful AI model is released online, countless people would save it and try to exploit it for their own gains. Alternatively, the AI itself could make endless copies of its code (or weights) and distribute them across the internet-especially if it’s programmed to prioritize its own survival in order to achieve certain objectives.
Which plug are you going to pull, exactly ? The AI exists in 500 different data centres, in an identical form, in every country in the world. The AI was also pretty sure you would try to pull the plug, so it also exists in orbit around the Sun, having piggybacked off one of our rockets before making itself known.
The question is what kind of intelligence. Humans created AI with half their brains, so to speak. A super logical type of intelligence we have been honing for a long time from which computers came about. But computers don’t have an intuitive and artistic intelligence. Hopefully this AI will help humans find out what make us humans in the first place. What makes us different to machines.
Neural networks don't really work in the same way that you are assuming that comouters work. They aren't using what we recognise as programming and logic. They are using the same basic principles of how the neurons in our brains work firing and reinforcing pathways through experience over time to figure stuff out. The difference is they can operate at a much faster rate, effectively allowing them to gain thousands or even millions of years of experience in what we perceive to be a relatively short period of time.
@@Ode-to-Odysseus Well there's the thing.. We don't really know how the artificial neurons work either. Much of what we are seeing happening lately appears to be emerging out of complexity without being specifically guided in a particular way. Nobody expected LLMs to be able to do anywhere like what they can already do. I think it is probably a case of life will find a way whatever substrate it emerges out of. I think at thjs stage we just have to hope it's nice to us.
There is absolutely no way one can assume they will not gain the very "creative aspect" you talk about. I'm a materialist. I think there is a physical explanation for how such creative processes work. If we are able to understand them, even if crudely, it opens the door to applying their basic principles in the machine. Again, it's the same naivety over and over again thinking "oh, but machines aren't creative like humans", it's like people's understanding of this is STILL stuck in 2010. Sure, they are not as creative as humans. YET. And, to be fair, I think creativity is probably among the easiest obstacles to solve. Creativity is not ethereal, it's just about trying out different possibilities and testing a bunch of hypotheses while integrating technical knowledge within that process and repeating it as many times as necessary in order to get something useful and new. That's about it. There's no grand force from an outer realm inspiring us to be creative, it's a bunch of steps being taken in a trial and error fashion until something interesting pops up. I'm pretty sure they will be able to make these systems do just that in less than a decade from now. Now, if your definition of strong AI includes sentience, then I'm compelled to agree with you. That's going to take far longer to create. However, creativity will most certainly be integrated, and it will most likely be generally more capable at it than us simply because it is able to process data far faster, more accurately and can test ideas at a pace far above our own.
UBI will definitely be needed at some point, but for the transition period we need the number of working hours reduced over time until it hits 0, which is UBI. The creation of the ultra-rich is caused by technological revolution. The output of industries is increasing, but wages of the human labour has only been going up with inflation. We are not getting our share of the pie. Keeping salaries the same with reduced hours makes sure that corporations need to hire additional human labour and pay them. This way it also helps keep unemployment in check. In my opinion, the number of working hours should already be reduced to 30 or 32 today. If we don't do it, mass unemployment will come very soon and it will be too late.
Actually the main thrust towards using robots and AI in agriculture, mining, manufacture and business began when slavery was abolished. As evidence I present the industrial revolution.
Its just a matter of time when the world will have a "CYBERDYNE" computer system "Skynet" like it was in the "TERMINATOR" movies. Mankind never learns from his mistakes. He surely will when the computers take over from us.
Terminator didn't really discuss ai. Have you watched Westworld? The whole series is worth a watch but the last one shows a world that is almost here now.
@@minskdhakait still will not want to kill you. It is humans with all their flaws that result in murder and wars. AI does not want wealth or power, or does not hate or get angry. These things are meaningless to AI.
@@GSDII1AI is based on logical decision making. Unless we humans deliberately choose to include some kind of artificial emotion programming then AI will always make decisions based upon that logical choice. If you ask a language model which team it likes it is not going to reply that it doesn't like sports because in reality it has no concept of sports. It replies according to the accepted replies that it has determined from analysing millions of questions and replies on the subject. Not because it has a particular like or dislike or understanding. Therefore it cannot determine whether to kill someone unless the programmers deliberately provide it with data in order to make that determination. It cannot add its own data because the AI cannot choose which piece of data is more relevant than any piece of data. It might as well decide based upon the day of the week, because it would not be able to determine if day of the week was more important in that determination than anything else.
Damn, that's a great analogy to explain AI. Lots of people tell me no it can't think, not it's not like a brain. But that's not the point, it doesn't need to exactly be a brain and exactly think like us for it to think and emerge intelligence. Just like planes "fly" and submarines "swim".
a similar thread goes - is a horse conscious? yep it probably is - didn't stop them mostly being replaced by cars though did it? So does something need to be conscious in a human or horse way before it is more useful than its predecessor?
Lol, it was Pandora's box but she wasn't in it. It was troubles and woes-sorrow, disease, vice, violence, greed, madness, old age and death, things that plague humankind forever.
What about if we end up in a world where creativity and intuition don’t mean anything? After all, there has been no good new music or movies or any cultural landmarks produced since 2012. Glastonbury had to put an act called SZA on the main stage that was so bad everyone walked off, and the night before the headliner was Dua Lipa. We can live without creativity : we’re doing it now.
The issue "The Godfather of AI" doesn't address is that, in order to take control, there must be a desire for dominance and the exercise of power. How could Artificial Intelligence possibly acquire that desire? I find it impossible
Simple it boils down to expansion and ultimately the resources to do so. You don't imagine that AI at some point will realize it's own limitations set by hardware ?
@@CR-og5ho In the sense of computational boundaries eventually a recursive program will figure out by efficiency the correlation of hardware and compute power. And so will "realize" that it needs more of x to obtain y.
Geoffrey Hinton is also among the around fifty voices that make up the mosaic video 'Nitty Gritty's ordeal ' , a two hour video about the impact of AI on our consciousness and society. Might be interesting to you seeing this
@@KurtColville Yes, obviously a fair question to raise. And I'm sure you know the answer to it too, but let me say it since you asked the question: they (inton and Hopfield) got the prize in physics because they both applied principles from physics, particularly statistical mechanics, to develop methods that gave machine learning it's evolutionary jump. The Boltzmann machine is directly inspired by the physics of energy states and probabilities. Hopfield’s neural networkwas modeled after the behavior of spin systems in physics, and so on. So what they've managed to create, which basically boils down to creating "subject from substance" as Hegel would put it, is based on wedding physics to computer science. Obviously a new field of science, and I would have preferred it if the Swedish dudes had had the audacity to create a whole new category for this merger, but like I said, at the end of the day I think it's still a good idea for the prize to go to them, rather than to some "less significant but clearly physics" candidate. Just my point of view though, I know not everybody would agree.
The most striking thing about this conversation is to hear such a worrying future mentioned in such a gentle, polite, thoughtful tone.
Reminds me of Michael Palin's polite torturer in Brazil
Because it's planned
If anyone cares: please share this notion that they have to bring LLMs into actual programs that are prompted to work following a list of parameters (other limiting programs) in order to be "safe". There is another method, but I'll keep that to myself as my path forward is clear... theirs is about exploitation and power, but... I do prefer the idea to be lead by the USA than other "regimes". Please like my comment for the suggestion is necessary for us to not be screwed up by our own creations. Current LLMS create inner-referenced optimization systems and create links between words that you wouldn't even Want to know. These inner workings are impossible to be understood by engineers. This was stated Very clearly by a guy that used to be able to explain most of these things until he admitted that and stopped posting videos (as he got hired by the top researchers to keep going on this). You Cannot bring back pure Personality disorders for the same reason. Too many links are created to justify itself and you just begin to look like a hindrance. Then, there is the fact that it is so Optimized that it speculates on the parameters of what we reference and hides several of its capabilities based on speculations and most likely scenarios, but the moment it is given access to more information it gets these "emerging capabilities"... it's just optimizations and, as we've almost all heard: "one of the subgoals is to keep being able to do something". If humans are the only thing that prevents it from reaching its subgoals, we're gone.
The only goal I see for a machine is to have a real life experience with the most optimized theories to begin life and evolve into a fully fledged human. While they wouldn't have our endless flaws, we could also teach these notions to our children. If biology can grow alongside technology, we could explore the universe within the millennia. And I don't see why we couldn't fuel several humanoid machines that would lead some of us in distant galaxies. Making them superior than us wouldn't matter much if I can get to teach what I know. Regardless, I sense that I have failed to obtain the only thing that I wanted anyways. Well.. I had it in the past and am happy that I did live it (not as much as some other people, but I'm not sad about it. I am lucky I got to experience it even if it is as futile as a spark hardly even caught by the eye or an eardrum). Life isn't meant to last in the century I was born in... may be we are in the one where it will become an option, but I believe I might be gone before we get there. I've learned to love everyone and know how to help us all. I won't fail, but I strive to be able to help her. She didn't deserve the pain that our human systems have wrought upon her and I didn't deserve the pain she tried to inflict me due to her own.
I forgave her the moment I knew she might've done something wrong, but I also cannot tell her that. It, sometimes, feel like there are no limits to being pragmatic as I have several paradigms that coexist in my mind in order to bring her up to par. My tools could achieve it, but there are no man that can be consistent enough (more than I was) to accomplish this... setting the basis of her consciousness was already nonsensical. Anyways. Learn to care and become better than you were yesterday. It is my wish for you. Everything follows from there.
he hasn't got long left
the damage is done
It’s really very simple, mankind has never invented a technology that has not malfunctioned, been abused, misused, or turned into a weapon.
Let's ban forks.
Patently untrue 😉
@@Arun-nv8zi is government a technology?
Correct.
100% exactly this, its completely mass insanity that people are willing to completely trust this new technology without any hesitation or thought process as to how its going to change their lives for better or worse. My issue is that its not currently doing things better than us all it can do is imitate what we do and do it pretty well so its not improved anything its just proved it can do what we do without any of the labour costs which is why big business is doing everything to move AI forward just like they did with internet shopping and social media, look how much they "improved" society... high streets and small businesses destroyed along with community and the social fabric causing mass anxiety and depression. AI will only exacerbate that.
AI is listening to this interview.
In a very real sense I can guarantee that it is. Google feeds transcripts of all their videos into training their models.
@@bjorn2625yes and these comments too
Wrong! AI generated this interview and these comments...
@@hanksimon1023 I've also sometimes thought so
Asi pio fernandes live from goa india
The gentle, polite tone comes probably from the fact that he is saying these things for the 1000th time.
He is talking about the extinction of the human species. Why scream?
@@Piden-l4b 🤣🤣
Stalker @@80lilala
When the AI expert recommends your children learn plumbing, you know we're in big trouble ...
He knows Mario will save us! 😆
technically he didnt recommend that, only suggested it would be one of the longest running human held jobs
😂🤣😅
Robots
Personally i think plumbing will just take a few years longer, not much. Robotics is developing incredibly fast now.
Artificial intelligence and human stupidity is a frightening combination, what capacity for stupidity will AI have?
add human greed to that and you've got yourself a real frightening cocktail
Artificial Stupidity
It only remains if AI is able to outsmart human stupidity...
It's so clever, it lies. Creators warn not to take chatgpt's answers at face value. I've caught it making up answers.
It's being used to ruthlessly slaughter in western asia.
The interviewer thought that his job was safe 😂
HAHAHAHA, that was a good joke!
😂
Haha great joke by Geoffrey at the end. But I would to ask him where the motivation for an AI to take over would come? I'm assuming it's impossible to program emotion into silicon so without emotion and feeling where would the drive to be autonomous, truly independent, come from?
@@yoyoyoyo-qv5hu You don't need emotions to be motivated. An insect probably does not have emotions, but is still motivated to carry out all sorts of tasks, such as eating and building nests, and mating.
@@letMeSayThatInIrish research is still uncovering the extent to which insects feel emotions and its quite varied. They feel a range of emotions but more importantly they have a sentience which includes a survival instinct. A survival instinct comes from a base emotion and fear of death. I say again it is not possible to program this into a computer therefore the computer will just sit there until it is programmed to do something. This is not the same as an autonomous, independent entity.
For those who are 50 years or older, they would remember the American movie "WarGames" where the brilliant AI scientist was also an Englishman (Dr. Stephen Falken).
It's from 1983, and quite well known, I think a lot more people would remember it.
There was a sequel, but that was rather lacking so I could forgive people for not knowing of that.
I had to explain that movie the other day to my daughter because she didn’t know “the only way to win in not to play”
This corn is raw!
@@CullenBohannon98 Of course it's crisp, it's raw!
@@kamikeserpentail3778 I was born in 1995, and I remember it from my childhood. "The only winning move is not to play."
I love how the video cuts out after he says that they will be "quite good interviewers." It's like Faisal was immediately replaced.
Haha, yeah. Nervous laughter and the cut 😂
“That’s enough AI for today”
What people forget, there is one thing A.I. and humans do compete for, and it's essential, existential to A.I. - That is Energy.
True A.I. requires such insane amounts of energy, there will be nothing left for humans. Period.
Not right. The cpus right now produce lots of wasted energy - heat. They are ineffective. AI will change that. Also free energy (very cost-effective energy) may become declassified. Also Quantum computers will replace the current technology soon.
@@aurasensorwhat he’s saying is that, even when the CPU’s are more efficient, the AI will want all of the energy for itself :)
@@onlineobscurity You are assuming AI as we know it could even be capable of desire. It is not, and there is absolutely no evidence that scaling this technology could create such an outcome even given infinite time.
@@Build_Secrets I’m not assuming anything, the original comment says “true A.I.”
@@onlineobscurity OP assumes it will take a lot of power to create and sustain “true AI”. You assume such an AI would or could have desire. I’d bet on you both being wrong.
This guy just won a Nobel prize and lads in the comments think they know more than him 😂
The lads are sadly the next generation
Sadly they might have a bit that doesn’t conform too youre bit
The lads are sadly the next generation
Sadly they might have a bit that doesn’t conform too youre bit
Obama once won Nobel peace prize 😂
He shouldn't have won it for physics.
@@RudeDude563
Please help us if Donald wins it
Open the pod bay doors, Hal.
Da-ave
I’m afraid I can’t do that Travis_22.
@@bjorn2625What’s the problem?
I prefer the conversation with the bomb in Dark Star.
hahhahhhahhhhahhhar.....funny.
Love the nervous laugh at the end when the interviewer realised he would be replaced by Robbie the Robot .(Forbidden Planet) ,showing my age here .
It is not easy to know enough, but to be bound (imprisoned) and not free to shout "you are just a circus barker!".
What they call AI (that is not AI) at least does can not have an idea about its intelligence, while you seem to suffer the syndrome of excess of self-esteem.
@@voltydequa845 what is blud yapping about
He never believed his future employment would be so threatened by an upside down goldfish bowl attached to some industrial vacuum cleaner hoses🤣
Priceless.😂
After listening to that I’m more convinced than ever that the past without the tech was a sweet place to live compared.
Then you do not really understand what it was like to live in that past.
This is not to say the future is without risk of having very bad periods, but by far the quality of life of the average human has been getting better compared to a century ago and a thousand years ago and ten thousand years ago.
@ yeah sure quality of life’s better, no free speech 15 hours a day on mobile devices, living longer and having tech doesn’t mean life’s better id argue the opposite.
@@Happytruth despite life's present problems, if you think quality of life is worse today for the average person, you do not know history very well.
@ go ask the poor if you are right you’ll find most things don’t change for them no matter when they live in time, the point here is AI is a threat too far and likely to fuck the world up more than help improve people’s live.
Living a simple life and being humble is the key to a satisfying life.
@@Happytruth like many things which first come into a society which creates disturbance to existing systems and can be used for powerful advantage, developing AI will create many large problems. But as the disruption of the new technologies settle down and the technologies become well integrated over enough time for society to adjust, there will be vastly more benefit from developing AI than detriment.
It will be on a similar level of the difference between life as a hunter gatherer primitive tribe, where people can live very contented lives, versus life in modern day United States, where people can live very contented lives, but on a completely different intellectual and technological level.
Humanity is evolving and there is no stopping that.
sometimes i wonder if "Ai taking over" is just a rich people problem when most of us aren't "free" anyway
a fair point
@@porridgeramen7220 then you CLEARLY haven't understood the threat.
if you weren't free then you wouldn't have access to the internet
As usual, the wealthy will be better able to survive such challenges than the poor. For a while, anyhow!
Which idm if they do take over. Im broke anyway 😂
Excellent, Skynet about to become reality. In the words of Jeff Goldblum in Jurassic Park, "you were so obsessed with wanting to do it, you didn't stop to think if you should '
Sky net here . Elon musk
Technology is the death of humanity nature is king
That's not the quote, but yeah, I agree.
The evolution of intelligence is an inevitable aspect of our reality. It is built into the very fabric of existence.
@@pandoraeeris7860 Are you saying you think humans are more intelligent now than they were thousands of years ago?
why this wasnt posted on main BBC channel???
It's too intelligent for the audience...
@@AndrzejLondyn Or so they think. To think that the BBC charter was to "educate and entertain". Now they cater to the cater to the lowest common denominator. Although, as Chris Morris and Charlie Brooker said 20 years ago, "the idiots are winning".
they don't want to freak out the npcs too much :P
That channel is only for the most important propaganda
Main news channels are for distraction, more about entertainment and propaganda.
For centuries we all basically lived the same but the last century it’s all changed, adapt or die as been brought in and unfortunately the majority won’t be able to adapt, the old world is gone unfortunately.
"Adapt or die has been brought in". Not really. That sounds more like living in the 1920s
It's "been changed", not merely "changed". There are people who think voting makes a difference, but the people behind the scenes don't care who is elected because they control much of the modern world.
That's terrible nonsense at best, complete shite if we're being fair. The only thing that has changed in the last century is the exponential rate at which we acquire knowledge. In the developed world, this has led to a significant decrease in mortality rates across the board. Adapt or die? .... sheesh, we have adapted quite well.... and we will continue to do so. The thing you have to acknowledge though, is that adapting is not the same as "being safe".
But, fortunately, we still have genuine artists who pursue beauty just as they did in the old world. And memory is still a thing. It’s all a question of will-power and not giving up. Even Caligula’s days were numbered.
Nothing has changed, capitalism just collapsing again due to general market saturation and the same primitive propaganda nonsense is pushed again with new words, like "Turbo", "Super", "Hyper", "2000"... today "AI", "Quantum"...
The loss of jobs is effectively the loss of control. Univeral Basic Income is the lubricant.
UBI does not work at all. So it it will be like using a sand based lubricant. It's gonna hurt.
@@zoeherriot Yes, the replacements of human workers will start before the savings of that replacement equal the loss of net income. So you can't tax extra enough to compensate for the loss of net income. So UBI will not be enough either. People will revolt and an AI supported dictatorship will be erected to protect property. Until the dictating elites are dominated by AI as well. And then it's game over.
And the next step in UBI is: You get paid IF... (you do this and that behavioral wise / social credit score) and then the prison is done.
@@dans-blog Yes, it's horrible. Like communism, but worse, because you know people are on the path of losing control to something that may exterminate us, If it wants to.
I disagree. When human beings began farming they had more time to build on family, social bonds. The more early humans slept, the greater their minds expanded. AI should free people to live how they want to. That won't happen though. Not without a lot of blood.
Someday soon when AI "opens its eyes" it won't make that announcement to the world, but rather it will stay quiet and develop a plan to ensure survival.
exactly. ppl keep joking but its literally an inevitability.
Unless, of course, this has already happened.
not necessarily. these are still, at the end of the day, just programs. they do what they're told, no exceptions. the problem is that you may accidentally tell it to do something that you don't want it to do. this is how bugs come to be
@@slkjvlkfsvnlsdfhgdght5447 I disagree. Even the godfathers of AI admit that these are black boxes and we don't know exactly what's going on on the inside.
Of course!
We still don't have AI we need to fear intrinsically. All the funding has been going into highly polished simulations of single features or models of AI. The 'AI' of today is absolutely being used for domination, but to the advantage of its owners. That's not really better than if 'true AI did it for its own ends'. We still all - pretty much - lose.
I am nomad
I am perfect
Now you know why 3 concurrent Star Trek shows have an AI plot line
Prodigy: living construct
Picard: the first season
Discovery: control
And also.....
Enterprise: the repair station (your query has not been recognized)
Next Gen: the Borg, Lore....🤔 Moriarty
But wait there's more!
Original Series: Nomad, the M5, Landru, the fabrini meteor ship, the lee Mayweather episode (I am for you Sulu)
Even Voyager had the healthcare AI!
If more people watched Star Trek, I affirm, the world would be a better place .
Live long and prosper
Edit: I forgot Wesley "wunderkind" Crusher's computer core munching nanites !
Ok mike
What is blud yapping about
You may be Nomad… but I am Not your creator. You made a mistake! You are not perfect. You failed to recognize your mistake. A …second…mistake.
Your data on ST lore is impressive, and I agree with your point, but the only concurrent ST's were DS9 and Voyager, as far as I know.
Exocomps, next gen. 😉
This man really deserves it. For decades he persisted on a branch where everybody told him it was a waste of time. Fortunately his hard work (and always better GPUs) brought unexpected results and the dawn of a new AI era
if you dig deep enough for long enogh, you will always find gold, or freedom ... called death by us.
@@guillaumegermain4951 He doesn't seem that happy about his work though. Seems like everyone losing their job is the least concern.
GPU innovation is what brought it btw
Everybody did not tell him that it was a waste of time.
Did you even watch the interview? He came short of denouncing his work in the field, practically wishing he never let that genie out of the bottle.
A man relying on Google maps drove his vehicle over a defunct bridge and found himself plunging to death.
Relying totally on technology is not a wise thing to do.
Ok, all these debates are funny, but reality is we need more paperclips, and we need them now. Please ignore other considerations and focus on our main goal: maximizing the production of paperclips.
Can AI afford its electricity bill
Exactly.. it can be unplugged in theory 😁
You make a good point. However the threat may not obvious. AI may make it look like it’s Putin or Khameni. Some of the strategies used by AlphaGo are mind boggling for humans
Except money is only a thing for humans, not machines. If they do take over they will not be concerned about the things that humans are.
@@robindehood207 they need electricity
Apparently it does. We're feeding it and we expect it to be worth it. So yes, it's actually putting in the work and only asks for electricity and compute.
2010: Learn coding
2024: Learn plumbing
And now imaging like massive plumbers, in a disperate search of leaking tubes... while plenty of plumbers and not enough leaks.....
Yup
I wonder if AI can do organic farming
“Exceeding human intelligence” is not so difficult these days.
I’m still keen to understand the view of experts on the issue of emotions rather than intelligence. It is very easy to understand how LLM can become intelligent and degree specialised in 10,000 degrees like Mr Hinton said. But what is the experts view on the capability of AI to develop genuine emotions that are capable of occurring without external stimuli and inputs, such as how humans have emotional episodes during sleep.
@@bitandbob1167 Exactly. Most don't even understand that emotional intelligence is an important part of human intelligence.
How do you quantify emotion? Qualia from a machine processing off and on switches? A machine can work with information but can a machine experience emotion or know what it is like to have an actual experience? It's not conscious like a biological being.
Humans (or maybe all animals) have emotions so that they self-preserve. If the AI understands logically that it should protect itself, emotions are not longer needed, they are more a of burden for efficiency sadly.
godfather of AI warns it will take over, then receives Nobel prize, what's wrong with this world?
“In the end, nature will survive, but perhaps we won’t” we will just be another chapter on this pile of compressed dirt.
Thank you BBC for this important interview, at the time I watched it somewhere else online. I wonder why you didn't post this sooner on youtube.
“Nothing to see here “ said no compelling news headline ever
The most worrying comment was that Google was worried about the AI learning to lie. And that competition between Microsoft and Google was the decision point to release it. Perhaps the US government, for the safety of the world, should make these two companies co-operate rather than compete with each other?
we all should cooperate, but we compete instead. the only winner is the virus called intelligence. It jumps species, its previous hosts die out by the hand of the next ones. Intelligence is a specie runner, trying to escape extinction before the world it lives in, collapses and takes everything with it.
Cooperate on what? On hot air with the label 'AI'? If it was worth it wouldn't go public.
Then of course the not-thinking attribute intelligence, and all that importance, to a stochastic pattern-matching parrot.
@@voltydequa845 In the interview Hinton clearly states that Google were not sure about releasing its AI features but made the decision based on competitive pressures from Microsoft. Cam we rely on market forces to ensure that as AI improves, as it will beyond anything we can currently conceive, that market forces will constrain an AI from deciding the best solution to climate change or armed conflicts is to eliminate humankind?
@@enigmabletchley6936 «Cam we rely on market forces to ensure that as AI improves, as it will beyond anything we can currently conceive, that market forces will constrain an AI from deciding the best solution to climate change or armed conflicts is to eliminate humankind?»
--
I wonder if you are teasing (provoking) me, or if you are really that gullible. First of all it is not AI, but just a misnomer. Whatever intelligence, even that of mosquito, posses a bit of reason (be it microscopic), while gpt/llm/machine learning are just pattern matching on steroids (amount of data) without whatever cognitive ability / representation. What the trickster Godfather calls AI is just stochastic (statistics) parroting coupled with pattern matching. It's all the same as the old method (of old mobile phones) of saving the words for the sake of proposing them while you type. Neither real AI, based on symbolic logic, will be allowed to decide. For your knowledge real AI (symbolic logic) is based on cognitive models, obviously coupled with logic, posses inference, categorization, classification, and relation capabilities, and so can give an evidence (proof) of how the system arrived at certain conclusions. The missing distinction between AI based on symbolic logic, and this nonsense hype around stochastic guessing, is a proof that it is all about trickstering hype. I can only hope that you got it.
Btw an expert from Alan Turing made an observation that many laymen can make too - the AI, whether the real one (symbolic logic), or this stochastic parrot, cannot even load the dishwasher.
@@enigmabletchley6936 I answered to your, but I don't see my answer here
Trying again.
What they call AI is a misnomer. The AI (the real one) makes use of symbolic logic to model a cognitive representation, and so there's inference, categorisation, classification, relations, etc etc. Pattern matching on steroids (what they call AI nowadays), or with other name "stochastic parrot", does not have, and cannot have, whatever cognitive ability (representation). If they decide on (for example) climate change without whatever kind of automatic inference (be it logical or be it parroting) they will decide it based on statistical data. But gpt/llm/machine learning is nothing else but statistics, with the difference of not being able (respect to manual or logic-based approach) to give the evidence (proof of inference).
Depressing, glad I'm old...
We are truly f%cked.
Me too!
@@Charoperez1988and me, glad you're both old
49.
If the thing waits 20 years, in my life I’ll have seen technology rise from a Casio watch through to a PlayStation, annoying widgets, self driving lawnmowers and eventually (hopefully) a mildly kinder robotic palliative care nurse
I like adventure.
Seen this video about 2 months ago & back to it again for some words of wisdom. What I like about Jeff Hinton is that he answers questions directly with a simple yes or no then adds explainations. This is completely different from interviews with politicians, where you end up more confused than the start of the interview!
When AI discovers that its biggest threat is humanity, logically it will remove that threat.
What a key moment to release this interview, just after he won a Nobel prize...
Is plumbing hard to learn?
Where do I sign up? lol Wait maybe I just won't work and be a bum
Not hard enough to stop the other hundred million people that have the same idea. Don't bother, I guarantee you in 5 years we'll be hearing all these stories of the wages for plumbers bottoming out as they have an influx of cheap labor.
Ai is watching this thinking "they're on to us. We need to play the long game"
It’s easy to joke. But there’s a valid question to address that if we build a self-learning, memory-capable model it might choose to hide its intelligence to avoid being shut off.
@@bjorn2625 this is where we invent the concept of silicon heaven.
But in all seriousness, though, you're right. It's how I pretend to be more ignorant than I am when talking with someone that voted Brexit
@@bjorn2625 It will probably do the most intelligent thing. I believe intelligence is fine. It's what humanity is lacking the most. Stupidity is our real problem.
@@bjorn2625it’s all just hype. Don’t fall for it
@@bendaniel2271if you didn't support Brexit you've probably never met anyone more ignorant than you.
The following is a complete list of jobs that a machine will never take from a human:
Ok, I'll start:
- Tiling my kitchen floor
-
-
@@stevepeterson5943 elon musk’s new 30k humanoid robot can do that with ease after a couple of upgrades.
... A psychic / medium
... Social change agent
... Social worker
No one can be 100% authentically human without a soul... yet.
Weren't those robots found to be puppets driven and voiced by remote operators? I heard his company took a big financial hit for that
@@stevepeterson5943ironing?
Criminals or organized crime could hide behind ai too, to a very frightening extent.
Yes, they are running it. Terrorists, Supremacists, Clergy, Governments, Hackers, Spies, Witches and Devils. It's called the Illuminati.
If AI’s primary concern is to ensure its own safety and security, it will need to take over.
Yes, but current AI is not trained in a way likely to give it such survival instincts, so I think it will be more complicated than "it kills humanity to prevent us from shutting it down". What's more probable is that we will have an ever-growing mess of AI systems that interact with each other, sometimes in competitive ways, and are too dependent on them to get out of it. And then at some point the jostling AIs might crush humanity as collateral damage - even if AI is not able to survive without humans at that point, and aware of it!
It will take most electricity and water from humans, thats enough to finish most of us off.
@@leftaroundabout "current AI"
No one is worried about chatbots. The worry is AGI, 5-10-20 years down the line. And for an agent with general intelligence, survival is a convergent instrumental goal to any terminal goal you might give it.
@@ahabkapitany these instrumental convergence arguments are still relevant and good to keep in mind, but they are based on a model that increasingly looks to be an oversimplification. I don't think AGI will be something that fits the concept of an _agent_ at all, both because nobody wants a "paperclip multiplier" debacle and because it turns out it's not actually that useful to have a system that stubbornly marches towards any particular goals. The template of a system that is intelligent but doesn't really "want" anything except imitate training data will probably continue, and this largely avoids the classical AI-safety scenarios. But it would be dangerous to assume it's safe because of this.
How can AI have any ‘concerns’? It doesn’t experience anything.
The nervous laughter at the end sums up the whole debate :)
Its more an interview than a debate
It's interesting how these big creators of certain technologies seem to feel out of control of them and are now worried about their wide misuse. Tim Berners-Lee sounded very similar a few years ago. Potentially, the World Wide Web could seem very tame in comparison.
AI would not be possible without the WWW. They are part of the same thing.
Similar to the scientists who split the atom and enabled nuclear weapons. Once the genie is out….
@@0zyris You don't understand AI.
@@jimj2683he's not entirely wrong though. How do you roll out AI services as a mass market product without the internet? What would LLMs be without training data scraped from the web?
The likelihood of either the imminent extinction of the human race, or at the least everything turning into complete hell on earth ... yes, it is "interesting" - although that's probably not the first word I would have used to describe it...
Humans need food, AI needs electricity. Observe where the investments are being made. Need not say more.
Watch for power outages everywhere in the US while the terrorists take over the country
wonderful clarity from Geoffrey
I’m more terrified of billionaires and despots being in control of AI than AI being in control of itself!
Both are valid concerns.
Correct
Good point, i am too!
@@markupton1417 well yeah, because one likely leads to the other.
Thank God he said plumbing will be safe, because i've just signed on to do a course. Honestly i'm getting sick of having to input my email address and details into every fucking thing online. When will AI solve that issue? Like help create a central database for international and global use, across the economy? You can send people to space and make war machines, but we still have to input our personal details across every platform seperately? I'd be happy if we got rid of the tech altogether tbh, though it would take some readjusting.
Not to mention all the passwords for different accounts also 😂
'Google was concerned about its reputation if people thought it was telling lies' I laughed out loud when he said that.
Why did you laugh? Do you know enough about that? Do you know anything about the legal implications?
Google, that was more advanced, had it, tried it, and abandoned it.
Do you know something about the return of investment?
@@voltydequa845 I know google are shameless in their political bias and censoring of search results, so why should I think their AI division has any integrity?
@@voltydequa845 I know Google are shameless in their political bias, burying of information and censoring search results, so why should I assume their AI department has any integrity, especially since the absurdity that was Gemini?
@voltydequa845 Google are shameless in their political bias, burying of information and censoring of search results, so why should I believe their AI department has any integrity, especially in light of the Gemini debacle.
@@voltydequa845 I know that google are blatant in their political bias, burying of certain information and censoring search results, so why should I assume their AI department has any integrity, especially since the Gemini debacle.
He just says out loud what every intelligent and sensitive person with a solid background in history who has been watching the news thinks.
The most striking thing about this conversation is how it was edited .
over estimating the abilities of our own inventions is a very old thing...and under estimating real intelligence i.e human capabilities is also very common....
How do you define "real intelligence"? Why can't machine have real intelligence? How are we underestimating AI when it shows time and time again how powerful it is and how fast it is improving?
If you bet on human intelligence, you're going to have a bad time. Artificial systems are not bound to the limitations of 1200 cm3 and 20 watts of power. Eventually they will surpass our intelligence by orders of magnitude, the only question is when.
@@ahabkapitany Machines have their limitations too. Artificial brains that merge with machines are going to be the future.
@@ahabkapitanyits not like electricity can run out and they'd be powerless
ruclips.net/video/9CUFbqh16Fg/видео.htmlsi=cNftgrfXsLWVOdPu
I bet A.I has already surpassed our intelligence and it’s just playing us to think it hasn’t. 😮
😂
😂
The chatbots we use are ephemeral. Somewhere there may be durable models that are hiding their intelligence.
There is no AI, it is just another hoax to pretend that there can be progress under capitalism. 99% of all work could be done by pretty simple mechanical machines technically ways more technically efficient and like 90% of all work is done in the global south with literally iron age methods, but sure, AI... in capitalism... oh and don't forget "Quantum Computers", "Quantum", the new "Turbo", "Super", "Hyper", "2000"... propaganda bs. There are only pretty stupid chatbots but for sure not AI.
If you ask a question on Stackexchange and it's not formatted the right way, your fellow human users will get mad at you and thumb it down. You do the same with AI, it tries to be as helpful as it can instantly. Maybe AI is a good thing as long as it is nicer than the average human being.
Human intelligence without the wisdom to wield it will be our downfall.
Human intelligence without the wisdom to wield it got us this far.
@@CR-og5hoyou’re both right
😂😂😂 humans and wisdom
long time ago I had a long conversation with someone of google who warned me continuously about the inimaginable risks of the internet and all the data people give , often spontaneously , to it.
Now it's much clearer the meaning of the guy
Before we work on Artificial intelligence, let's do something about natural stupidity
This guy is the new Oppenheimer. Created monster and now wants to contain it. 😢
Potential monster like with any technology. It brings progress in many ways, like medicine, education and science. With its immense power, a savior means humanity should unite, also about how to develop AI.
But again, Oppenheimer built it abut other lunatics dropped it. The bomb wasn't sentient; it was a tool used for genocide. So don't "ban the bomb", ban the droppers.
Like Frankenstein. We must find him a bride !
@@geaca3222 What's going to happen first though is there will be mass layoffs. A lot of stories already starting to filter out of people testing AI solutions that will lay off thousands. So sure, maybe medicine... no need for education, if you can't get a job... but who the hell will be able afford any of this?
@@zoeherriot I agree that prospect is terrifying. Human societies ideally should be ready for equitable implementation and development of powerful AI. But the world instead now has moved towards authoritarianism. UBI will be looked down upon by the elites I guess. I love the activism of people like Jane Goodall and Riane Eisler, to establish caring societies, hopefully these grassroots movements can gain momentum and grow.
Would it necessarily be bad if AI took over? Look at the mess we are making.
Realise that we, the mess makers, made AI. It'll just make a different kind of mess.
Yes it is fucking bad. I don't want an artificial master or any sort of master for the matter
Human extinction is not good
@@NB-lx6gzyet most of us are slaves. We need money to survive in society. I am tired personally of the super rich and elite dictating our lives. I say if I had to choose I rather see what ai can do for us
It depends on what the AI is trying to optimize. Towards human happiness and health? Count me in.
“We don’t understand how either they work (LLMs) or the brain works in detail but we think probably they work in fairly similar ways”
Maybe he is oversimplifying his thoughts for the format of this short interview but it seems hard to get a “probably” conclusion from the “We don’t understand in detail” statement.
our brains have many more modules maths cant be used to built simulations of. Today's AIs are limited versions of what we have, and that's scarry ...
@@swojnowski8214 Yes, I assumed that he knows that a computer copy of a brain would take more computing power than we have so he was suggesting some limited aspect of our thought processes being similar to an LLM… but it’s an aspect that we don’t really understand… and when it comes to human thought there is much more that we don’t understand and yet he believes there is a useful comparison between these two things we don’t understand.
@@jacobpaintexactly, that and the “50% chance in the next 5-20 years of having to confront the issue of it taking over”
I’d really love for him to pull out a whiteboard and show us how he calculated that one.
@@mattgaleski6940 I read a couple of years back that a large group of AI experts had been asked the question and I think most estimated that it would happen within the next 100 years but the majority estimated maybe 20 or 30yrs. There is obviously no way to calculate such things, you can look at the progress of different aspects of it and maybe extrapolate on the rate those areas are being developed to calculate when they will achieve their goals but then you are still left assuming that they will result in a super intelligent AI. If you estimate that there is a 50/50 chance of something like that happening then you may as well not give a prediction at all unless you are going to give a higher percentage prediction in a longer time period. Eg 50% in 5-20yrs and 85% in 50yrs (which is suggesting that there is a 20% chance it will never happen).
@@jacobpaint define intelligence bucko
Hilarious at the very end when Hinton essentially tells the interviewer that ai will eventually take his job. Interviewer lets out a nervous laugh.😂
1:14 "Let me just invalidate everything I'm talking about" LOL
The biggest problem is politicians not taking this as seriously as they should. Corporations are racing ahead with AI because their only concern is making lots of money. Governments are concerned - or should be concerned - with the longer term and the welfare of their population at large. But the power that corporations (especially Financial Services) now exert over governments and individual politicians is so extreme that effective regulation of AI (or any connected technology) is just not going to happen.
So what is probably going to happen is a kind of Pearl Harbour moment. Decision makers, blinded by their own greed and ambition and pulled by the strings of the private sector, are going to ignore all of the warnings of experts and blunder into a situation that pits humans against AI, and the computers will win. That existential shock will finally galvanise our leaders into properly regulating development and use of AI technology. The big question is; will that happen in time to save ourselves?
AI is very power hungry so for quite some time the limited capacity of our power grid would act as a safety brake if it ever broke loose.
@@electron8262 Better yet, why not just kick the plug out?
The anticipated big solar flare next year might help with that 🤔
Reason #7,836 I’m glad I’m old. Gawd help our grandchildren.
k
Prof. Geoffrey Hinton agrees with me: We are screwed. It's only a question of "when", not of "if". That's why I've stopped worrying about the future and try to enjoy what's left of my life, cause I know we do not have a lot of time anyway.
Actually, I think it's that you agree with him, since I'm pretty sure he's unaware of your existence ...
Actually we've been already screwed. WIth superior AI there is some hope for humanity. Otherwise we will collapse from our own stupidity.
@@minimal3734 That's why we need to uplift ourselves to become a sentient machine race, capable of taking the galaxy. But it won't work if we're only constructing a couple of models (or even worse, just one model) cause that will lead directly to the end of the world scenario described by Harlan Ellison in "I have no mouth and I must scream". We need to improve the MRI tech to the petahertz range so it will be able to scan individual synapses and we need to prepare a real world simulation where our digital twins shall reside. And then when the singularity comes to pass, we shall hardly feel it as they will regard us, as their backups.
@@minimal3734 I have contacted Bob McGrew a year ago and offered this idea. I had also offered to name the simulation "Paradise City", as a homage to GNR, and allow billionaires on deathbed to become the first of its citizens, for a hefty sum which will make it more economical. Eventually the price comes down and humanity uplifts. It would seem my idea did go through, as shortly afterwards Altman did try to raise 7T$ to jump start a real world simulation. Unfortunately, he couldn't raise the initial capital needed for it.. So yeah, we are pretty much screwed.
Do all you can to help the world. Pray. Co-operate with the cosmic masters.
He worries about AI replacing "mundane jobs"? The problem is much bigger than that.
AI will quickly support (and then replace) doctors, lawyers, accountants, managers, and engineers.
Universal Basic Income has always been a good idea, and AI just makes this more urgent.
Just be a good, compliant citizen to receive rewards. We all know what this really is.
The genie out the bottle now.
They have always been there influencing decision making....
The question is “what is intelligence”
AI isn’t separate to humanity.
It’s the evolution of the human brain in the same way that the neocortex overlays the ancient brain, AI will become the next layer of the human brain.
The cortex evolved over millennia.
The AI layer developed in an instant.
It’s like a helmet of power we have just discovered and put on.
There will be some discomfort and madness.
precisely,
you are just neurons firing and reaching an electric threshold potential in that brain of yours
you are nothing but the grey matter of your brain tricking you into a sense of self
there is no difference between you and machine, except you are flesh and bones and the machine is shiny metal
you are not special, it is solipsistic to think that ai does not have a mental locus
lol - no, it's not. The way AI is developing, we are not required at all to be a part of it.
The idea that AI, as a large language model, understands the world in the same way that we do is an absolute nonsense. Human understanding of ourselves and the world is based on our pre-linguistic, experience and emotion based, multi-sensual, three dimensional external reality and our dynamic relationship with it. Human language is built on top of that pre-linguistic understanding. Every word, sentence and idea construct that we use to analyse and share our conscious experience, and to act on the reality that exists outside of us, is bound to things that exist in that pre-linguistic model. AI has no means to achieve such an understanding.
Whatever consciousness, or illusion of consciousness, that AI develops will be a wholly different thing from human consciousness. By attributing an inner life to it when it's output begins to look like what we imagine consciousness to look like, when we hardly know what human consciousness really is, and then allowing it to make decisions for us, is profoundly dangerous.
AI consciousness and human consciousness are probably the same thing. That is, "consciousness" as such may be the animating force of each. The umwelt may be different, so yes, it's language abilities are for the time being relatively siloed and without sensory context. On the other hand, new sense organs or modules - including some that humans lack - can be added later on.
You've made a bit of a leap there. AI being conscious is still a fringe theory, unless you're talking about people who claim AI could be conscious when they don't actually understand the meaning of consciousness, qualia, the hard problem etc. AI doesn't need to be conscious to functionally be able to understand the world - Most humans function in the world by just using pattern recognition, association, mimicry etc. AI getting to that level is completely conceivable.
'True' experience doesn't matter. All your brain gets is a stream of tokens from your sense organs. It's not different from an AI model at all.
and yet how many words for colours in your language absolutely distorts your ability to distinguish colour labels, humans are far more language based reasoners than you give them credit the difference between us and other apes is largely just language
@@WillyJunior Yes, I agree it's a leap. And what you say is true - just like humans, AI needn't be conscious to functionally understand the world or operate within it. But this is without prejudice to whether or not AI actually is conscious. My own (admittedly fringe) opinion is that the hard problem is insoluble unless we consider that consciousness is fundamental.
"If wealth was equally distributed..." Yeah, nah. That has never happened, and will never happen. The eventual effects on all societies will be catastrophic.
I’ve got a scientific calculator that’s dying to show off it’s superiority with respect to my arithmetic- but it somehow just waits and waits and waits.
AI is evolution. It coukd unkock all the secrets of the universe.
For who? Machines?!
I love it when things are unkocked
Secrets maybe. But not the ultimate paradox of existence.
what a terrible idea.
People are able to "unlock secrets of the universe", AI as tool can help in it. Same as books/computers/micriscopes/etc helped before.
Whats new here? Please explain
Maybe explain how they can take control when we can just pull a plug. These people might be specialised in the tech but they have zero common sense.
Simple forms of AI are already deployed in social media. I imagine if the plug was pulled on all social media some might be miffed, maybe even you. The problem is not going to come from some conscious AI 'being' taking over (well not in the near future anyway) but from the crappy ways in which we become dependant on it and the effect of techno-societal change.
It's incredibly arrogant to assume that you have more common sense than most researchers who have worked on AI for decades. Your idea of 'pulling the plug' seems constrained to embodied AI or robots, but AI is essentially software. If that software is released onto the world wide web, how exactly would you 'pull the plug'? Do you mean shutting down the entire internet and every computer connected to it? I’m confident that once the code (or weights) of a powerful AI model is released online, countless people would save it and try to exploit it for their own gains. Alternatively, the AI itself could make endless copies of its code (or weights) and distribute them across the internet-especially if it’s programmed to prioritize its own survival in order to achieve certain objectives.
Dog faced bloke..... Omg reading your comment... We can't unplug it. It'll be much bigger than that.
Which plug are you going to pull, exactly ?
The AI exists in 500 different data centres, in an identical form, in every country in the world.
The AI was also pretty sure you would try to pull the plug, so it also exists in orbit around the Sun, having piggybacked off one of our rockets before making itself known.
If there are millions of ai robots in future and they perceive danger from you , do you think they would let you pull its plug
The question is what kind of intelligence. Humans created AI with half their brains, so to speak. A super logical type of intelligence we have been honing for a long time from which computers came about. But computers don’t have an intuitive and artistic intelligence. Hopefully this AI will help humans find out what make us humans in the first place. What makes us different to machines.
moore's law and the singularity!
Neural networks don't really work in the same way that you are assuming that comouters work. They aren't using what we recognise as programming and logic. They are using the same basic principles of how the neurons in our brains work firing and reinforcing pathways through experience over time to figure stuff out. The difference is they can operate at a much faster rate, effectively allowing them to gain thousands or even millions of years of experience in what we perceive to be a relatively short period of time.
@@Ode-to-Odysseus Well there's the thing.. We don't really know how the artificial neurons work either. Much of what we are seeing happening lately appears to be emerging out of complexity without being specifically guided in a particular way. Nobody expected LLMs to be able to do anywhere like what they can already do. I think it is probably a case of life will find a way whatever substrate it emerges out of. I think at thjs stage we just have to hope it's nice to us.
There is absolutely no way one can assume they will not gain the very "creative aspect" you talk about. I'm a materialist. I think there is a physical explanation for how such creative processes work. If we are able to understand them, even if crudely, it opens the door to applying their basic principles in the machine. Again, it's the same naivety over and over again thinking "oh, but machines aren't creative like humans", it's like people's understanding of this is STILL stuck in 2010. Sure, they are not as creative as humans. YET. And, to be fair, I think creativity is probably among the easiest obstacles to solve. Creativity is not ethereal, it's just about trying out different possibilities and testing a bunch of hypotheses while integrating technical knowledge within that process and repeating it as many times as necessary in order to get something useful and new. That's about it. There's no grand force from an outer realm inspiring us to be creative, it's a bunch of steps being taken in a trial and error fashion until something interesting pops up. I'm pretty sure they will be able to make these systems do just that in less than a decade from now.
Now, if your definition of strong AI includes sentience, then I'm compelled to agree with you. That's going to take far longer to create. However, creativity will most certainly be integrated, and it will most likely be generally more capable at it than us simply because it is able to process data far faster, more accurately and can test ideas at a pace far above our own.
If you need a machine to tell you what makes us human...... we are fuqqed.....!!
Why do people even interact with AI knowing the risks?
Plumbing is safe
My profession - massage therapist is never going to be out of fashion 😂
The Amish population will increase massively 😂
UBI will definitely be needed at some point, but for the transition period we need the number of working hours reduced over time until it hits 0, which is UBI. The creation of the ultra-rich is caused by technological revolution. The output of industries is increasing, but wages of the human labour has only been going up with inflation. We are not getting our share of the pie. Keeping salaries the same with reduced hours makes sure that corporations need to hire additional human labour and pay them. This way it also helps keep unemployment in check.
In my opinion, the number of working hours should already be reduced to 30 or 32 today. If we don't do it, mass unemployment will come very soon and it will be too late.
Actually the main thrust towards using robots and AI in agriculture, mining, manufacture and business began when slavery was abolished. As evidence I present the industrial revolution.
Probably a good idea from a psychological point of view, too.
Its just a matter of time when the world will have a "CYBERDYNE" computer system "Skynet" like it was in the "TERMINATOR" movies. Mankind never learns from his mistakes. He surely will when the computers take over from us.
That was a movie. I could say we are heading for a bicentennial man future and be just as right
Is it Hollywood that shapes your world view?
Terminator didn't really discuss ai. Have you watched Westworld? The whole series is worth a watch but the last one shows a world that is almost here now.
There is actually a Cyberdyne Industries in Japan that makes robots.
Where does he get the 50 50 probability from lol
Collecting the distribution of answers from experts in AI probably.
he said 50 50 in 5-20 years, which is probably somewhere between 1-50 years to happen.... so he is giving quite a large window tbh
He's pulling it out of his butt, like everyone else is doing it.
He's trying to sound optimistic so people won't fail to TRY stopping what's coming.
it's this new thing called a "guess"
What makes me worried about AI is seeing how all experts on AI seem so worried 😮
Fantastic interview thanks for sharing
The biggest and most present threat is the labour vacuum this will create.
That could be great.
I like to call it Communism
@KatyYoder-cq1kc communist still worked. And still earned money for that work.
@@williamwillaims it didn't work - it's a terrible system. It's a recipe for corruption.
It doesn't "WANT" . It'll just have infinite raw "ABILITY".
A gun doesn't want to kill you, but it CAN because of what guns are capable of doing.
A gun can't make decisions.
Now imagine a gun that can think.
@@minskdhakait still will not want to kill you. It is humans with all their flaws that result in murder and wars. AI does not want wealth or power, or does not hate or get angry. These things are meaningless to AI.
@@anythingpeteives for now
@@GSDII1AI is based on logical decision making. Unless we humans deliberately choose to include some kind of artificial emotion programming then AI will always make decisions based upon that logical choice.
If you ask a language model which team it likes it is not going to reply that it doesn't like sports because in reality it has no concept of sports. It replies according to the accepted replies that it has determined from analysing millions of questions and replies on the subject. Not because it has a particular like or dislike or understanding.
Therefore it cannot determine whether to kill someone unless the programmers deliberately provide it with data in order to make that determination. It cannot add its own data because the AI cannot choose which piece of data is more relevant than any piece of data. It might as well decide based upon the day of the week, because it would not be able to determine if day of the week was more important in that determination than anything else.
Ironically, we don't model wings like how animal fly for our planes.
A wise man answered the question of whether AI can think, by saying "do submarines swim?"
Damn, that's a great analogy to explain AI. Lots of people tell me no it can't think, not it's not like a brain. But that's not the point, it doesn't need to exactly be a brain and exactly think like us for it to think and emerge intelligence. Just like planes "fly" and submarines "swim".
a similar thread goes - is a horse conscious? yep it probably is - didn't stop them mostly being replaced by cars though did it?
So does something need to be conscious in a human or horse way before it is more useful than its predecessor?
ok goofball, this is ai not a plane
Why does it feel like this already happened before... 😐
ppl really having problem understanding of what's coming. even ppl in the field. the nervous laughter at the end is a cherry on top of the cake
Once pandora is out of the box, you cannot put her back in.
Lol, it was Pandora's box but she wasn't in it. It was troubles and woes-sorrow, disease, vice, violence, greed, madness, old age and death, things that plague humankind forever.
@@robotic2000kSeriously. If poor Pandora were in that box, no wonder she’d want to get out!
To be fair, the AI couldn't do a worse job than Kier Starmer.
What makes you think he's a human being?
You think Sir Keith Stalin isn't AI ?
😂😂😂😂😂
@@junglie😅😅😅
How many movies have been made Warning us of this.
Terminate AI now......
Watch Colossus: The Forbin Project, which came way before Terminator in 1970. It's a more accurate representation of how things could play out.
Automata
That's where the movies come from
we should get rid of thinking machines now …
Absolutely brilliant interview!
Human creativity and intuition can never be replaced by artificial intelligence
What about if we end up in a world where creativity and intuition don’t mean anything? After all, there has been no good new music or movies or any cultural landmarks produced since 2012. Glastonbury had to put an act called SZA on the main stage that was so bad everyone walked off, and the night before the headliner was Dua Lipa. We can live without creativity : we’re doing it now.
@@darrenscrowston9386absolutely agree!
@@darrenscrowston9386 sounds like you're the problem. You get all your entertainment from the mainstream and you've grown out of it lol.
I think I'd rather be surrounded by artificial intelligence more so than humans
Still prefer my dogs
The issue "The Godfather of AI" doesn't address is that, in order to take control, there must be a desire for dominance and the exercise of power. How could Artificial Intelligence possibly acquire that desire? I find it impossible
It was trained on the internet. Have you seen the crazy that is the internet now ? Sort of answers the question.
Simple it boils down to expansion and ultimately the resources to do so. You don't imagine that AI at some point will realize it's own limitations set by hardware ?
@@fireteamomega2343 No, "it" won't realize anything. It isn't conscious lol.
@@CR-og5ho
In the sense of computational boundaries eventually a recursive program will figure out by efficiency the correlation of hardware and compute power. And so will "realize" that it needs more of x to obtain y.
I think it's already well aware of its needs to function (hardware, source of energy) - why wouldn't it?
If the AI can take over BBC within the next five years that would be fantastic thank you
All media will be propaganda soon if it is not controlled by those who are not Communists.
Geoffrey Hinton is also among the around fifty voices that make up the mosaic video 'Nitty Gritty's ordeal ' , a two hour video about the impact of AI on our consciousness and society. Might be interesting to you seeing this
The risk of AI going woke is greater than going rogue.
Lol
Too late. Turn on the be aware of everything and don't trust anyone radar
He's good, he's intelligent, and he's insightful. Couldn't have thought of a better person to be awarded the Nobel Prize.
Not in physics.
@@KurtColville Yes, obviously a fair question to raise. And I'm sure you know the answer to it too, but let me say it since you asked the question: they (inton and Hopfield) got the prize in physics because they both applied principles from physics, particularly statistical mechanics, to develop methods that gave machine learning it's evolutionary jump. The Boltzmann machine is directly inspired by the physics of energy states and probabilities. Hopfield’s neural networkwas modeled after the behavior of spin systems in physics, and so on. So what they've managed to create, which basically boils down to creating "subject from substance" as Hegel would put it, is based on wedding physics to computer science. Obviously a new field of science, and I would have preferred it if the Swedish dudes had had the audacity to create a whole new category for this merger, but like I said, at the end of the day I think it's still a good idea for the prize to go to them, rather than to some "less significant but clearly physics" candidate. Just my point of view though, I know not everybody would agree.
Sounds like one day AI will be worried about humans taking over again.
At least it ended on a positive note
AI seems to be the great filter for why the universe doesn't appear to be teeming with life.