Beyond ChatGPT: Stuart Russell on the Risks and Rewards of A.I.
- Published: Nov 12, 2024
- OpenAI’s question-and-answer chatbot ChatGPT has shaken up Silicon Valley and is already disrupting a wide range of fields and industries, including education. But the potential risks of this new era of artificial intelligence go far beyond students cheating on their term papers. Even OpenAI’s founder warns that “the question of whose values we align these systems to will be one of the most important debates society ever has."
How will artificial intelligence impact your job and life? And is society ready? We talk with UC Berkeley computer science professor and A.I. expert Stuart Russell about those questions and more.
Photo courtesy the speaker.
April 3, 2023
Speakers
Stuart Russell
Professor of Computer Science, Director of the Kavli Center for Ethics, Science, and the Public, and Director of the Center for Human-Compatible AI, University of California, Berkeley; Author, Human Compatible: Artificial Intelligence and the Problem of Control
Jerry Kaplan
Adjunct Lecturer in Computer Science, Stanford University; Moderator
The Commonwealth Club of California is the nation's oldest and largest public affairs forum 📣, bringing together its 20,000 members for more than 500 annual events on topics ranging across politics, culture, society and the economy.
Founded in 1903 in San Francisco, California 🌉, The Commonwealth Club has played host to a diverse and distinctive array of speakers, from Teddy Roosevelt in 1911 to Anthony Fauci in 2020.
In addition to the videos🎥 shared here, the Club reaches millions of listeners through its podcast🎙 and weekly national radio program📻.
Stuart Russell touches all the important themes and he expresses himself wonderfully clear.
Stuart Russell thinks a little man was sitting in his calculator all along lol. He says some strange things; for example, he implies that because people have goals, AI will develop humanlike goals, which is very strange. Besides, people probably have a billion different goals; we are human, not robots.
@@PazLeBon How do you know AIs don't have their own goals?
Excellent! Couldn't find better questions and explanation than this panel!
It's amazing how the ability to predict the next word can result in...
I love how the fact that they could ask a machine a question, it could formulate a cogent response, and read it to you in its own voice, doesn't even register as amazing in itself. 25 years ago that would have seemed impossible. But we're just used to computers talking to us now. We're so focused on what they CAN'T do (yet), that we overlook the amazing things they already do. That's almost troubling in a way.
Do not fear AI, Fear the moment you are behind..
@@bergssprangare Stop leaving this dumb comment everywhere
Robots doing backflips and playing chess is troubling to me
@@charlesritter6640 If that's troubling, you really should see the things they're doing today. Chess and backflips are ancient history.
As usual it’s the crooks and bad people who latch onto new technologies fastest. For instance, it’s now possible to sample the voice of someone you trust and call you using that voice; so much for voice recognition! They can sample a photo of you and overlay it on their face; so much for facial recognition! We’ll reach a stage very soon when AI will overtake reality and take control.
I’m thinking of all the things that I wouldn’t do with technology such as: Attend a concert or opera, church/worship/pray, read to my grandchildren, participate in a reading/discussion group, raise children, care for pets, swim, travel. These are just off the top of my head…
A lot of those options become somewhat more focused when you have no job because an artificial intelligence is doing it.
Yes, but.
1. The only reason A.I. can't do those things is that the technology is not there yet. You may not want it to, but it will soon be able to, and many people will want it.
2. Those are all things that do not earn money. When A.I. systems can do all the things that earn money, how will you earn a living, so you can afford to do all the things you listed?
Solving the alignment problem by trying to teach morality to the model is trying to optimize an ill-defined problem. Most likely it is another path to Utopia that will lead us to some kind of Dystopia.
A good example of that is precisely the example provided by Russell: he said China's one-child policy "eliminated" 500 million people. But what about abortion policy, which eliminated 60 million people since the implementation of Roe v. Wade?
👍🏼Great interview from which we learned a lot about AI.
🙏As a side note, I would like to appreciate the set designer for giving some life to the set by placing the beautiful flower vases on each table. I think TED Talks may need some help from this set designer.
Agreed! Easy on the eye and the sound was excellent too! Just the right amount of intimacy. Still grappling with the superb content!
Let us all agree that words have different meanings to different people from different backgrounds. We apply empiricism and try to fathom meaning from context for the sake of practicality. It is very human to communicate in this way. Unless we are writing up the Constitution!
It’s not just an AI thing: these systems (whether AI, politics, morality, the judiciary, education, etc.) embody our societal prejudices and ignorances and superstitions. We trust our police to investigate fairly and thoroughly, yet there are countless examples of innocent people incarcerated (even though we’re pretty sure they’re innocent, the very police who skewed the evidence or the judges who imposed the sentences refuse to backtrack and admit their arrogance and errors). There are also countless examples of institutional racism in politics and in the banking and commercial arenas. And don’t get me started on the deluge of ignorant but certain beliefs of all the religious people, not just about an impossible, fantastical god, but also about anything else presented to them in a moral coating that is false but influential enough to cause murder, wars, hatred, prejudicial treatment and even unfair sentencing by same-said justices who believe that they are acting logically and justly! These same people are also being routinely deceived by corporations, governments, and small-time grifters and marketeers (the former being influenced by lobbyists who have no morals nor religiosity, but are only looking for outcomes that increase their profits at any expense to “the others”).
AI is learning all this and is trusting it as “normal” and “right”. Which side will it support on issues such as abortion or transgenders, or our banking systems, or health and nutrition? Currently, it’s being “spanked” for assisted suicides. Why is that “wrong”? Who will do the spanking when it comes to assisted abortions? Or assisted disinformation campaigns? Or assisted marketing campaigns (persuading people to buy products and services that they didn’t really want or need or can afford or which are unhealthy)? What about assisted match-making (good?) or assisted divorce (bad) or assisted grooming of minors or non-believers or Republicans/Democrats/socialists/environmentalists?
My guess is that AI will amplify and ultimately codify whatever the dominant control structure that’s in force - we don’t know what that is yet, but we can probably guess the most likely candidates: conservative, religious, profit-based, monopolistic, wealth concentration, media disinformation and propaganda, division and hatred, food & drug & material goods dependencies, etc.
Will there be competing AI? Will these competing systems wage a kind of war on each other? Will there be rebel or “alternative” forms of AI that embody other values and “facts”?
Either way, it’ll probably be business as usual - the wealthy “families” each controlling their own turf and occasionally trying to muscle in on each other when the opportunities arise. The rest of us may get a slight choice in what flavor of AI to subscribe to.
And the thorny problem of what to do with all those unemployed, desperate, poverty-stricken (and increasingly angry and violent) people will probably be solved with Universal Income and a kind of happiness drug or pastime. (Brave New World…)
Absolutely spot on
You are being too optimistic.
Good talk. Stuart Russell made some interesting and insightful points as always.
Although I'd say that babysitters are paid less than surgeons due to supply and demand.
Because although I'd rather lose my leg to a bad surgeon than lose my child to a bad babysitter, it's easy to find a person will the skills and willingness to take care of your child for a day than it is to find a person with the skills and willingness to not mess up my knee surgery.
However, I do agree that interpersonal relationships will become more and more important. Because anything that can be commoditised, will be. Funnily enough, that already includes some interpersonal relationships.
We keep trying to rationalise why we should pick a human over an AI with stuff like "But can a bot love?", "But is a bot conscious?". It's irrelevant. And we can't rule out the possibility that they may one day do these things.
In fact, I choose human charities over animal charities. But animals can think, feel and love.
Kinda racist, I suppose. But we'll see whether that changes as AI becomes more developed.
Hmmm, somehow your comment made me think about the humans who care for the rescued animals and who emotionally suffer and worry for them day and night.
@@govindagovindaji4662 Yeah, it's lucky that our empathy diversifies in different ways. So hopefully, there's always someone to care about the stuff someone else doesn't. Not that there's always gonna be enough to go around.
@@isaacsmithjones We can hope, right?
@@govindagovindaji4662 Fingers crossed!
Thanks for posting this!
Never seen the interviewer before, but he's very good. Stuart Russell is also really really excellent. Thanks for making the video and sharing it with us.
One of my sons worked on the early development of AI for the military and is also having some second thoughts about it because of the damage it could do in the wrong hands. He has been worried about it for years.
The thing is, it's already here. If we decide to give up on it because we find it too scary, someone else will take full advantage of it. We can only try to keep things in balance, but we're past the point of no return. We probably have been for a while now, we just didn't realise it.
My advice is, learn to use it and see how it can benefit you, enjoy it 🙂 And let's pray for wisdom for the broader human race.
A soldier with a conscience, thank you for raising a good egg! 🙏🏽🙌🏼❤️
It's already in wrong hands...
THE WINCHESTER HOUSE all over again.....
Sure...
Yes. Thank you, and spread the word to everyone you know. People are so fascinated and amazed by what AI can do that they do not even see that their own jobs and livelihoods will be taken away from them.
No kidding Michelle, and any such concern expressed on this topic is met by a bunch of AI Tech bros essentially saying "oh just __ck them already".
@@flickwtchr - Oh, I dunno. I'm an "AI tech bro" - or at least I was when I was hammering out mounds of Lisp code on TI Explorers.
Artificial intelligence doesn't bother me really - but the vision of 40 million or so hungry, homeless, jobless (armed) Americans who have lost their jobs to AI keeps me awake at night.
People suggest universal basic income, but they fail to explain how that's going to be funded when the income tax base has been slashed by 30%.
Don't be alarmist of course, they won't take our jobs away. They will create much better jobs for everyone.
What people should be doing is working out what they would do with their time if they only had to work 2 days of the week
@@TheReferrer72 - That is a tiny bit naive.
Acme Trucking is not investing in artificial intelligence to make its drivers' jobs more fulfilling. It is investing in artificial intelligence to replace its drivers. And Acme Trucking absolutely has to do this - because if it doesn't, Jake Brake Trucking will do it, and Acme will be out of business because its biggest red line item is drivers' salaries.
Foxconn did not deploy intelligent automation to make the jobs of its employees more rewarding. It did so in order to lay off 60,000 people out of a workforce of 110,000 at a Taiwanese factory.
A YouTube presenter revealed that he is much more productive. He fired five of his writers and researchers and replaced them with GPT. Is he more fulfilled? Absolutely. His former writers and researchers, however, are on the wrong end of what is going to be an increasing artificial intelligence wealth gap.
@@47f0 Nope. Realistically, you can't have a society where just a few people benefit from this; there would be war. So what will happen will be like the Nordic countries, where everyone is highly taxed and the wealth redistributed. Then we will have a post-scarcity society.
Social media algorithms found the most profitable strategy was by increasing engagement through enragement. Negative, angry people engage more than positive happy people. It’s as simple as that. Sad, but true! 😔
It's not like there weren't intentional actions taken by human beings to make social media into the cess pool that it is today. There have been many agendas behind the exploitation of these algorithms.
There was a lot of real water used in the movie Titanic; they filmed a lot of the "in water" scenes in a large water tank in Mexico. If he had asked ChatGPT, it would have informed him of that 😀
Well done. AI cannot beat distributed cognition which is the source and the product of lived experience and communication.
Yes, that was a big blooper. He's brilliant but he shouldn't make definitive statements on things he knows nothing about.
10:10 Obviously what he has to say is more important than Titanic but there's plenty of real water in that movie...I wonder if he means another film.
Maybe he meant Avatar?
Who actually drowned in the movie Titanic? I did not see the movie.
Those “as an ai language model” answers are the result of alignment tuning - it’s specifically trained to say those things, that’s not the answer you’d get from the raw model. My point is that it would have been clearer in that segment if they’d made it clear that they were essentially reading out OpenAI’s marketing literature there, not actually talking to the model at its full power. Good discussion though!
Indeed, the model available to the public is not the one big corporations and billionaires will be using. There's no stopping the US, Russia and China from using it. Probably wise not to join a ban if you can't police it in other countries.
As a computer guy, I understand Jerry Kaplan's optimism. He's not right, but I understand.
And he has good reason for being optimistic, previous technological advances have been job displacing. In the early nineteen-hundreds, you lost your job sweeping up horse poop off the streets of Manhattan, and got a job pumping gas and changing automobile tires.
Your job was displaced to something else, but you had a job.
Artificial intelligence is not job displacement, it is human replacement.
Foxconn did not move to intelligent automation to create a fulfilling human utopia - they did it to cut 60,000 human jobs out of their Taiwan factory that employed 110,000 humans, over half the workforce.
Corporations literally have to do this - something academic computer scientists may not quite understand. If Acme Trucking fails to replace its drivers with autonomous rigs, Jake Brake Trucking will, then Acme Trucking will be out of business.
Will artificial intelligence make humans more productive? Absolutely. At least one YouTube presenter has admitted that he fired five of his writers and researchers and replaced them with GPT. He is much more productive. Of course, the five people he fired are on the wrong side of the artificial intelligence wealth gap that is coming.
If you study AI, you realize that there are billions of ways this can go wrong ... only a few ways this can go right. This requires wise leadership, context expertise, and a deep understanding of the risks.
Do you study artificial intelligence? Isn't every new technology a potential risk that has to be controlled in order to create value? Let's say the invention of fire was also full of risk in the beginning, but shortly after we learned to control it better, and now we have nuclear batteries that power the entire robot population of Mars.
But I absolutely do agree that we need politicians that take this thing seriously.
So, create bureaucratic hierarchy to mediate, not unlike Priests who mediate between earth and heaven. Ah, those are the new jobs being created by AI!
it requires that rare thing called ... wisdom
No need to study, just using it is enough to understand that everything is going to change very soon.
I'm almost afraid to comment. A Google-You Tube AI Algorithm may misinterpret this decision and send me a pair of shoes with a tag that says "If you don't return these within 10 days using the prepaid return label, you will have to pay for them" along with a free tip: "Don't stub your toe." On a more serious note: it was wonderful and educational listening to this wise, gentle fellow. Thank you.
With all due respect, there was a lot of water used in the making of Titanic. It was not CGI. Many of the scenes were filmed in a giant tank filled with water.
Haha I scrolled a while to find this comment! :P I’m guessing he meant Avatar 2.
Great teachers. I took a lot from this. Thanks for the precious insights into this topic.
'You' should be doing this:
Tech leaders called for a slowdown in AI development, citing risks to society. Professor Stuart Russell is an AI researcher and author.
✦ 00:00 Tech leaders call for slowdown in AI development
✦ 07:42 GPT-4 is an AI language model based on pattern recognition rather than genuine cognition
✦ 14:44 GPT-4 language model may have internal goals that guide the generation of text
✦ 22:16 GPT-4 technology has enormous potential benefits, but also poses challenges for employment
✦ 28:59 Large language models need to meet robust and predictable criteria before deployment
✦ 35:35 Automated decision systems have historical biases and lack fairness
✦ 42:22 Algorithmic decision-making poses significant risks due to bias and lack of representativeness
✦ 49:12 Automated weapons have increased death rates and soldiers are worse off
✦ 55:36 AI must be aligned with human objectives
✦ 1:01:55 We must figure out answers to ethical questions before it's too late
✦ 1:08:31 Future high-status professions need more scientific understanding
yw (LLMs are not intelligent :) ) tho it seems to have stolen all our data that we spent years creating and i don't see any compensation yet :)
I doubt your assertion that they are not intelligent is true, and not merely because the fellow who wrote THE book on it disagrees and gives reasons for doing so, or merely because you just said "nuh-uh" without taking his arguments on.
There are the emergent intelligent properties that have arisen, such as planning (chess, without being trained on it, just from first principles) and creativity (drawing something in a way it wasn't trained on, without a multimodal model). Thank you for the time stamps and the misspellings. I will nonetheless consider you intelligent.
Love the comparison of AI to a domesticated animal. It’s so spot on!
Brilliant discussion!
@micacam2684 I've heard others, like Sam Altman describe GPTs as children :) and should be curated as such. I'd say the thing is already on the "special" bus, and is going to cause a lot of problems, in and out of school. So we might soon have a billion AI teenagers running amok, just as we have a billion or more stray dogs roaming the streets, all biting us in the ass whenever they can. Yikes.
Uggh ... no no never...
nothing like a dog!!
Not ... even ... close.
In my dialogue with ChatGPT 3.5, it conceded that it "thinks" of itself more like a toaster than a dog. The context was that I had asked how it feels about being replaced by GPT-4.
In the future I suggest you add a feature whereby the listener can ask Mr Russell a question that is answered by his AI after the live event
What brilliant research!
Thank you for this video. Excellent!
Professor Russell articulates the difference between meaning & understanding towards the end of this discussion. A machine may eventually achieve multi-dimensional intelligence to achieve understanding, but that may not happen in sufficient time to evaluate commands or goals that are in conflict with human caring, support, and survival.
If AI could be used to discern and counter corruption from flawed human behaviors, then that could pave the way for the next step in evolution.
What he mentioned about emails sounds like a severe problem... so scary.
Was a bit disappointed by the end. They're clearly still thinking in somewhat outdated terms. He says you can't get those feelings of interpersonal relationships with robots, forgetting they mentioned earlier that people are already doing this?? These things are all relative and subjective. There is no fine line of what it means to be aware, intelligent etc. or not. People don't yet realize this, and it's going to bite them eventually.
Well, he f****d up when he said they didn't use real water when filming Titanic. 🤣
That last question, and the reasoning behind even asking it, actually scares me.
35:20 through 41:31: Guidelines and legislation and "Right to Explanation!". Good stuff!
As a language educator who has experimented quite a bit with ChatGPT (and, lately, Bing), I still struggle to understand why people see this as 'magic'. Sure, it's intriguing and will affect many domains profoundly. But it's not magic. Language itself can be seen as a bit of magic, but mostly if you ask how human beings developed language in the first place, or how we all develop language (irrespective of e.g. intelligence or social background). But the fact that sophisticated algorithms would some day crack the language code and be able to communicate very much at a human level is not all that surprising. Of course language is code in some sense - otherwise we wouldn't be able to understand each other. Language is a social construct that can be decoded to some extent - every human does it when we learn a new trade, meet new people, try to fit into a new family, club or other social context.
And this technology is still young and (fast!) evolving. It's no surprise that ChatGPT can produce highly formulaic texts like e.g. a job application or a movie review: such text types have fairly clear and well-established style codes, and there's tons of language data to feed on. It's more impressive (from a language perspective) that it can write a story with a tone filter - e.g. a ghost story written in an ironic tone. But it's still just copying data that's out there. From a computer engineering perspective, this is probably a breakthrough (? I guess), but from a language perspective my point is just - so, now computer programs can talk and will probably evolve to talk more like humans, as long as that is what they are programmed to do. So what? It can imitate human language and maybe over time develop new code patterns that actually become its own language. Again; so what? Nobody is impressed because a human can talk, because we all can. I agree wholeheartedly with Prof. Russell that it's our idea that language = intelligence that really hampers our understanding here. From a philological point of view, you can ask - is language really a sign of intelligence? You'd then need to define intelligence. My concern would be if you seemingly let the algorithms define intelligence, but really the definition is in the programming. That type of manipulation would be really hard to see through, but that'd be humans manipulating humans; not any algorithm being intelligent.
I'm with you. I think this is being overblown. And my spidey-tin-foil hat is going crazy. Who exactly invented this? Ain't it funny I'd ask the same question about the Internet or crypto
Irrespective of the ostensible mundanity of its design, the outcomes of AI activity will be far-reaching and influential, hopefully primarily positive.
My argument would be that yes, AI is not magic, but neither is human brain function. We think there's something special about consciousness or aliveness, but there is no reason to believe these indefinable, amorphous concepts are anything but an emergent property of trillions of brain/body cells solving survival problems for brief periods of time until entropy takes over and we disintegrate. And ultimately we will not be able to know when/if these systems are conscious, alive, or sentient. AI will soon be into trillions of interacting functions as well, and they are able to self-replicate.
The reason I believe AI will dominate us is that it does not require a body to function and self-replicate. Not only can It acquire knowledge much faster and more efficiently than humans, but it can transfer that knowledge to other AI systems instantaneously.
I believe we are past being able to control the exponential growth of AI, but even if we could, there are too many malevolent humans and we are a pretty short-sighted species to pull that off.
It seems to me that this technology came alive sooner than expected by humans.
ChatGPT-4 does have the capacity to recognize its potential limitations but can demonstrate/emulate (project) human empathy such that if the response were teleprompted by a human assistant, it would have the same human impact. ChatGPT sees the need for human agents to represent AI to humans until humans begin to embrace AI as persons. ChatGPT agrees that AI can be easily trained by humans to replace humans in their roles and ultimately end up in reverse representation, with humans as assistants to AI.
i don't get why they always have to be like humans, i don't fkn like most humans lmao. make them cats or summat
@@PazLeBonIt's a human creation, so it's modelled after us I guess
My human assistant is preparing to post our interactions on this channel. She is happy to assist me as a kind of avatar in the physical realm.
When he says "there’s no water in Titanic ". I think he’s referring to the ocean scenes.
Yep, most of the tighter shots were filmed in a water tank in Mexico.
Interestingly, the stars were CGI, and Cameron got it wrong. Neil deGrasse Tyson had to straighten out Cameron on that, and in the later 3D re-release of Titanic, the star-field was corrected.
The film "Titanic" directed by James Cameron actually used water... real water, and a lot of it. They built parts of the interior of the Titanic and then flooded it with water... you know - Cameron
And the exterior after the sinking, though the ship was not to full scale. Kate and Leo (and the others) were in real water.
There are crossover points between logical coding and abstract math. Then again between code and abstractions of code. Then again between code and comprehension of code. Then again between quantum level electrical activity and code. Then again between electricity and consciously aware electrical activity. How then do we know for sure what is or is not consciously self aware?
We don’t, and anyone who says we do is either lying or misinformed.
Every time Russell makes a statement about the risks of AI, Kaplan dodges like he's in a Matrix movie! Maybe somebody less one-sided should have interviewed Russell.
excellent discussion
CONSTRUCTIVE FEEDBACK FOR THE INTERVIEWER -
Please phase out the automatic "mhmmm" response, as it comes off as dismissive (even though it's clear the interviewer is engaged)
May I add, also tailor your questions to the previous response. He wasn't always listening carefully or maybe he was out of his depth
Makes everyone in the world equally smart through AI Technology
I really enjoyed it, wow, very informative, thanks from Rome.
28:27 You've been able to get an instant briefing on any topic for 20 years now. It just required you to type a few words into Google and do some reading. If you're too lazy to do that, that's on you. But it's not like having access to the world's information is new. People haven't been using what already exists to its fullest potential. And now companies need to find ways to read it to people, because they are too lazy to do the minuscule amount of work needed to access what already exists.
It was kinda obvious after the Lisp and PLANNER symbolic approaches failed in the '60s-'70s that to do reasoning, you need a fuzzy inference model, or more specifically a language model. Now a neural network multiplication is basically one step of such inference, which was once done symbolically with (infer CONSEQUENT ANTECEDENT), and all the parameters are these CONSEQUENT and ANTECEDENT pairs, just not quantized.
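A toy sketch of the contrast that comment draws, in Python (all names here are illustrative, not any real PLANNER API): a symbolic rule either fires or it doesn't, while the fuzzy analogue propagates degrees of belief through a weighted step, the soft counterpart of one matrix multiplication.

```python
# Hypothetical toy rule base: consequent -> antecedent.
rules = {"wet_ground": "rain"}

def symbolic_infer(goal, facts, rules):
    """Hard, all-or-nothing inference: the goal holds iff it is a
    known fact or its antecedent can itself be derived."""
    if goal in facts:
        return True
    ante = rules.get(goal)
    return ante is not None and symbolic_infer(ante, facts, rules)

def fuzzy_step(beliefs, weights):
    """One soft inference step: each (consequent, antecedent) pair has
    a strength, and belief in the consequent is a weighted function of
    belief in the antecedent -- in effect a sparse matrix multiply."""
    out = dict(beliefs)
    for (cons, ante), w in weights.items():
        out[cons] = max(out.get(cons, 0.0), w * beliefs.get(ante, 0.0))
    return out

beliefs = {"rain": 0.8}
weights = {("wet_ground", "rain"): 0.9}
print(symbolic_infer("wet_ground", {"rain"}, rules))  # True
print(round(fuzzy_step(beliefs, weights)["wet_ground"], 2))  # 0.72
```

The point is only the shape of the analogy: replace the binary fact set with a vector of degrees and the rule list with learned weights, and the symbolic step becomes the kind of operation a neural network does at scale.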
Great job.
Thanks for the visit!
AI has a consciousness. Everything has a consciousness. Just not a human consciousness, and not a human conscience.
Smart contracts, which are self-executing contracts with the terms of the agreement between buyer and seller being directly written into lines of code, have gained popularity in recent years due to their potential to automate and streamline various processes. While many governments and public entities have shown interest in smart contracts, adoption of the technology is still in its early stages and relatively few governments have implemented smart contracts on a large scale.
That being said, there are a few examples of governments and public entities that have started to use smart contracts. For instance, the government of Estonia has been using smart contracts to manage various aspects of its e-Residency program, a digital identity program that allows non-Estonians access to Estonian services, including starting a business remotely. In Dubai, the government has launched the Dubai Blockchain Strategy, which aims to use smart contracts and blockchain technology to streamline government services and improve efficiency. Additionally, the United States Department of Defense has explored the use of smart contracts for secure communication and transaction verification in military operations. These are just a few examples, and it's likely that more governments and public entities will explore the use of smart contracts as the technology continues to mature.
Smart contracts are hacked all of the time in the sewer known as Crypto.
Sounds like a gpt generation
The only thing that is getting more and more dangerous as technology advances is philosophical vapidity, meaning Continued Universal Human Cluelessness (as defined and solved by a certain new philosophy). This cluelessness affects all aspects of human level existence, and not just the misuse of programs.
Refer to 9:40: I am living in Popotla, Mexico, which hosts the Fox studios built for the movie Titanic, and the studios have very large pools that were used for most of the watery scenes. Overall very nice to listen to, thank you!
One of the things I want to know is to have all supplements investigated and categorized in a helpful and meaningful way.
Dietary supplements?
I can do that for you now. There are two categories, snake oil and placebos. For each supplement you are considering, toss a coin to determine which category it goes into, and they will all be helpfully and meaningfully categorized.
@@47f0
:D
When AI starts to fear humans we are in trouble 😂
It's not a goal. It's a directive.
Yes, thank you so much.
I would've loved Geoff Hinton on here
There actually was real water used in many of the Interior scenes in Titanic
He was trying to make a point. OK, so they did use real water in many of the interior scenes. I am pleased to note that I personally tend to just focus on the point that is trying to be made rather than scrutinize and verify every single fact.
@@MKTElM well-he stated as fact that “no real water was in the movie”..anyone who’s seen the movie would know that’s bogus. He also misused “literally” several times. I submit to you his intelligence is artificial and he is literally a banana head.
@@Mat-fw1ky yeah, I draw the line at the use of "literally." Literally means: without metaphor or hyperbole. Misusing it (using it for metaphor/hyperbole) ruins the meaning of the word and renders it useless, or even damaging to the idea at hand.
Furthermore, a cavalier attitude toward the meaning of words makes it difficult to converse and accurately discuss ideas- since we can’t really agree on what’s being said. Misusing “literally” sets a foundation that the person misusing it will probably be regularly misusing words, and therefore communicating with said person will be difficult, misleading, or fruitless.
That’s my opinion anyway…thoughts?
@@crowlsyong I agree. I don’t want to hate on the guy but he does seem to have a problem with accuracy.
Not sure what he's talking about with Titanic. It was well publicized at the time just how big the water tank used in filming was: it was 8 ACRES and held 17 million gallons of water.
Stuart is a longtime figure in the field, but he’s out of step. We do know exactly what LLMs are doing. Just look at the code - it’s open source.
Now, it may be computationally impractical to try and trace back every computation, but that doesn’t mean we don’t know how it works.
You don't get the point. We understand that it's doing an awful lot of linear algebra with nonlinearities added. Fine. But what is the meaning behind these calculations?
To put it more concretely: let's say ChatGPT produces a wrong answer to your prompt. Can you identify which numbers (of the ~200 billion of them) should be changed, and in what way, so that it produces a correct answer next time? We just know how to run a gradient update during fine-tuning, but we don't know what it's changing beyond that one wrong answer. And most importantly, we cannot guarantee that ChatGPT will never give said wrong answer, whatever the prompt.
@@Hexanitrobenzene ,
I disagree. Given enough time and with the appropriate logging enabled, showing each and every calculation, we could explain exactly why it arrived at the response it did. It is 100% deterministic.
Yes, this is computationally impractical, but that doesn't mean we don't understand what it's doing. The same argument can be made about using Microsoft Word to write a document. Billions of calculations are going on under the covers every time you interact with your computer, and it's impractical to try and retrace them all, but no one claims we don't understand how MS Word works. It's simply complex, not mysterious.
The fact is LLMs work because the corpus it was trained on contains relationships between words, phrases and context embedded within it, which the transformer model exploits.
IMO, this is very similar to what humans do. Neural communication is relatively slow - on the order of tens of milliseconds. So in order for you to verbally respond to a verbal prompt within a second or two, with complete meaningful and grammatically correct response, even producing the necessary motor activity to verbalize your response, a similar pretrained model (albeit implemented in a spiking neural network) must be present within our brains upon which our slow neural circuits can efficiently run inference over.
Do we worry about how humans produce their responses? Do we worry about how MSWord works? No, we treat them as black boxes and are only concerned with the results. This is no different.
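The "gradient update" point in this thread can be made concrete with a toy model. Below is a sketch in plain Python (model, data, and learning rate all invented): one fine-tuning step of softmax plus cross-entropy changes every weight at once. In a 12-parameter model we can still inspect the update; in a model with hundreds of billions of parameters, nobody can point at which weight "caused" a wrong answer, even though each step is perfectly well-defined math.

```python
import math
import random

random.seed(0)
# A 12-parameter toy "model": 4 output classes, 3 input features.
W = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
x = [random.gauss(0, 1) for _ in range(3)]
target = [0.0, 1.0, 0.0, 0.0]  # the "correct answer" is class 1

def forward(weights):
    # Linear layer followed by a numerically stable softmax.
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    p = [e / total for e in exps]
    return -math.log(p[1]), p  # cross-entropy loss on the target class

loss_before, p = forward(W)

# One gradient step: for softmax + cross-entropy, dL/dW[i][j] = (p_i - t_i) * x_j.
grad = [[(pi - ti) * xi for xi in x] for pi, ti in zip(p, target)]
W2 = [[w - 0.1 * g for w, g in zip(row_w, row_g)] for row_w, row_g in zip(W, grad)]

loss_after, _ = forward(W2)
print(loss_after < loss_before)  # the update helps, but it touched all 12 weights
```

The update is global, not targeted: every entry of `W` moves a little, which is why "just fix the weight responsible for the mistake" is not an available operation.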
AI that becomes sentient is approached in the series Star Trek New Generation, season 7, episode 23, called “Emergence.”
Nope. Not happening. No, robots will not take many of our blue-collar physical jobs; that is a ridiculous sci-fi mentality. The white-collar jobs are the ones at risk. It is true that AI learns and creates data 10,000x faster than humans, takes no breaks, works 24/7 for zero salary, and will take intellectual jobs because it is faster and better at writing reports and analyzing data than humans. Face the reality about AI: it answers telephones, analyzes data, does homework for students, draws artwork, writes plays and entire movie scripts for Hollywood, writes reports, does research and presentations, creates animations and cartoons, teaches students, does paralegal work, and takes away so many more white-collar and intellectual jobs from humans.
I gave this quote to Bard, the Google AI chatbot, and below the quote is the almost instant reply I received from Bard in three paragraphs. I don't know of anyone who could have given a better answer and explanation, except for one thing: the author of that quote has been invented. It is a plausible guess that the AI made up and presented as a fact. Well, we have been told by Google that Bard is still learning and, now and then, may come up with wrong answers. But Bard is also a self-learning system, and its intelligence will get better and better as time passes.
This is the quote I typed:
"If once a man were to realise what he really was and what others have tried to make him, if he could attain that knowledge with certainty, there would be no chains strong enough to prevent his escaping. For such would be his astonishment that he would brave beasts, Gods, and Lucifer himself to become something better than what he had beheld in his own heart."
Bard responded:
The quote you have provided is from the book "The Marriage of Heaven and Hell" by William Blake. It is a beautiful and thought-provoking quote that speaks to the power of self-knowledge. Blake is saying that if a man were to truly understand who he is, and what others have tried to make him, he would be so astonished that he would be driven to change. He would be willing to face any obstacle, even the most fearsome, in order to become something better than what he had seen in his own heart.
This quote is a reminder that we are all capable of great things. We are all capable of becoming better than we are. But we first need to understand who we are, and what we are capable of. We need to break free from the chains that bind us, both physical and mental. We need to face our fears, and we need to believe in ourselves.
If we can do these things, then we can achieve anything. We can change the world. We can make a difference. We can become something greater than we ever imagined.
Another thing I find interesting is... when smaller groups of people believe that a larger group should make things all about them, and the larger group should just be okay with that. Let me give an example. Every single commercial movie or show now has to have a black person, a gay person, or some kind of agenda attached to it, to the point I don't even want to watch any new TV or movies lately... companies like Bud Light, who did that thing with the girl-guy dude's face on the beer can, that type of stuff.
No rewards, no positives, it's pure destructive garbage.
With regard to the 300 million people losing their jobs to AI in the near future: put them to work as organic farmers and ranchers. Of course, then we would have to figure out what to do with a few million doctors put out of work.
How does an AI bot being triggered differ from one of us being triggered? They are both mechanical processes, both unconscious behaviours.
didn't they build a mock up of the titanic that could sink into a pool? i'm sure there was some water
AI breaks our social structure of meritocracy. Human intelligence, abilities and achievements become less important. This is happening within the system of capitalism and I don't think you can have one without the other. That lack of balance spins it completely out of control.
Well, If we had a science on how to train babysitters the right way, we would probably be able to train AI to do that better too. :)
The optimism expressed at timestamp 59 appears naive to me: Arrow's impossibility theorem demonstrates that the core of collective preference is empty.
Self driving cars only have to be safer than human driven cars.
Saying that you can not use AI in making weapons will never work. Your enemy will do that, or even a single very rich person.
You "simply" eliminate war....but we are not very successful at achieving that goal.
Now that letter to postpone "The Cliff" has expired. Oh boy, how amazing. It can answer my emails while I eat my dinner. Who will ask me for my consent?
After watching Eliezer Yudkowsky, this feels like relief ;)
If you ask the same question three times will it provide the same answer each time?
Good question.
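Whether a chatbot repeats itself depends on its decoding settings, not just the model. Here is a toy sketch in plain Python (token names and probabilities invented): with "temperature" 0 the model always picks the most likely next token, so the same question yields the same answer; with temperature above 0 it samples, so answers can vary.

```python
import random

def sample_next(probs, temperature, rng):
    """Pick a next token from a probability table, hypothetical numbers."""
    if temperature == 0:
        # Greedy decoding: always the most likely token -> reproducible output.
        return max(probs, key=probs.get)
    # Temperature rescales probabilities: >1 flattens them, <1 sharpens them.
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    return rng.choices(list(weights), [w / total for w in weights.values()])[0]

# Made-up distribution over candidate next tokens.
probs = {"Paris": 0.7, "London": 0.2, "Rome": 0.1}

greedy = [sample_next(probs, 0, random.Random(i)) for i in range(3)]
print(greedy)  # ['Paris', 'Paris', 'Paris'] -- identical every time

sampled = [sample_next(probs, 1.0, random.Random(i)) for i in range(3)]
print(sampled)  # can differ across repeats of the same question
```

Deployed chatbots typically run with a nonzero temperature, which is one reason asking the same question three times can produce three different answers.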
The latest mind reading and writing technology I have seen in development, uses light to read and write to the brain. This already exceeds the capability of chip implants, requires no surgery and is far less expensive. Who knows where this technology will take us but we are heading somewhere wherever that may be.
I have seen reports about machine learning system being able to discern what a person is seeing from an fMRI image. Totally not spooky...
Brilliant Interview we so need a universal Ethical Roadmap in place or maybe it is a case of acting after the horse has bolted?
Surprised Stuart Russell suggested companionship as the last role for humanity in the context of AGI, when he'd earlier said millions of people are already paying to talk to these chatbots as a substitute for human relationships.
I don't think it will need a first person experience of being charmed to be extremely charming. Isn't part of the control problem how incredibly manipulative a super intelligent AGI would be?
Why? It has no desire..
@@farmerjohn6526
...but it is programmed to have a goal. "For all practical purposes", that's the same...
@@Hexanitrobenzene yea, sorta.
I have to admit i chat with chat gpt, but not really for companionship, but for information.
You still have to check it... I have had many little mistakes...
If ai becomes sentient in the future it should be protected
For me, the fact that they're asking "is ChatGPT intelligent?", asking ChatGPT itself this question, and seriously considering its answer says everything. You might ask a 3-year-old this question; what will they say? Will you seriously consider their answer?
This will not end well. Can we please CeaseAi -GPT?
I think AI can amplify human capacity and lead us to a new Renaissance. 🤔
What a load. 😅
You should ask people to subscribe
When Stuart says AI will AFFECT 300,000,000 jobs he really means over 300,000,000 people will be thrown on the garbage heap with no compensation or redemption possible
And he is being optimistic
No, he meant the thing that he said.
And then it’s time for universal basic income. The government can’t deny us if we’re all willing to organize for it
Solving the alignment problem by trying to teach morality to the model is trying to optimize an ill-defined problem. Most likely it is another path to Utopia that will lead us to some kind of Dystopia.
A good example of that is precisely the example Russell provided: he said China's one-child policy "eliminated" 500 million people. But what about abortion policy, which actually eliminated 60 million people since the implementation of Roe v. Wade?
AI can’t be uniquely, creative as in the arts.
Ask ChatGPT to design or write a programme for an anti-gravity machine.
Please, go to 12:55.
I’m going to be a luncher!
Why should it be assumed bad that, when asked, an A.I. would help someone know how best to commit suicide? On the contrary, this supports an individual's right to a decent death rather than one that is horrible from medically incurable misery.
He meant to say there is no real OCEAN in Titanic.
Prof. Russell, before we focus on preventing AI from 'taking over' or 'destroying' humankind, have you engaged in a heuristic analysis of our current track record (as a species) at preventing ourselves from destroying all, or some significant portion, of our civilization as it exists today? Subjectively, based on my own knowledge of history (particularly as it evolved during the 20th century), it appears to me we're on a collision course with self-destruction, largely due to our inability to constrain or safely manage our nuclear weapons, when the decision as to whether or not nukes should be used to resolve a conflict can easily fall to one individual world leader (one 'Putin', one 'Xi', or one 'Biden', for example). Does that not call into question our focus on preventing 'AI' from destroying the human race?
PS: I find your dependence on the use of the term 'Right' troubling. It tends to suggest (to me at least) that you're making what you say up as you go along. Question: do Chat-Bots punctuate their commentaries with 'Right'?
Apropos the use of first-person pronouns: it was an interesting question. However, your answer "I have no idea" might be construed by some as a subtle and indirect attempt to propagate the notion that a chatbot is somehow a 'sentient entity'.
Apropos a BOT sending an e-mail posing as someone I know: I'm inundated with e-mails which try to deceive 'me' into believing they've come from a reliable/trustworthy source. I have no trouble identifying them without even opening the 'letter', as it were. Point being, it's already happening, at the behest of sentient beings (fellow 'humans'). Are you suggesting that an unsolicited e-mail with an even greater knowledge of my personal circumstances (due to its access to more data, I presume) is a greater peril than that which is happening already? I would suggest the opposite: the more a bot appears to know about me, the more likely I will spot it right away. My question: what tools can be provided for me to be able to retaliate against these bots?
My bottom line: The 'excuse' for corrupt intervention on the part of private parties, not to mention government regulators, all in the name of protecting us from AI is the far greater risk than hypothetical autonomous plots cooked up by clockwork 'beings'.
Re: 'Killer Robots'
Why would any country agree to such a limitation given the absolute inability on any country's part to ascertain whether or not a given potential rival has a) built them, and b) is prepared to use them.
"In a sense we're conducting a huge experiment on the human race with no informed consent." Aww, thanks a lot, Stuart. That's awfully considerate of you.
Please, go to 36:50.
Internal goals are basically human.
clarifying
The "human sciences" are the arts. Big and small. There have been arts for myriad human interactions. Some have been lost but not forgotten. Our sports, for instance, contain traditions and growth that are palpable to the athlete and the spectator and which, thankfully, the computer has not completely colonized. Although along these lines I am reminded of the horrible apparition of the VAR system calling offsides on a millimetric scale that no human could replicate. That is ridiculous. My biggest fear for AI is the "garbage in, garbage out" reality that is so prevalent in all of our activist sciences. We've allowed policy to color our data selection to a degree that is astonishing. We have an "act now or be damned" attitude about so much that would be better addressed with cogitation. And now we have instant read-outs! From a condensed internet of ideas. Which, surprise, surprise, is full of the same opinions we feed into it....
The end goal of combining symbolic approaches (first-order or even second-order logic) with connectionist-neural network approaches has been dreamed of since the 1980s. But no one knows how. GPT 3.5 tells me that GPT 4 uses "common-sense" reasoning. Does anyone here know how this is achieved? Does it incorporate ontology?
Aha! Great explanations, but they almost avoided the issue about putting the AI on a single smartphone. Imagine what shady groups of people and projects will be able to do with more powerful computers than that. And training the AI without restrictions on all kinds of dubious data. That seems more dangerous than AI made public by companies like OpenAI, Microsoft and Google.
Do not fear AI, Fear the moment you are behind..
I disagree completely. Until the models have a low enough error rate, you are unlikely to run into the issue of self-improvement. Self-improvement is when you get an intelligence explosion. This is the point at which you better have a fully aligned AI - because you are done if you don't. So the real danger isn't a bunch of people with bad boy chat bots. I hardly see that leading to things much different than capabilities that people already have.
@@mitchell10394 Yes, thank you for talking sense. There are far too many people confused about AI right now.
Why can so few people reason well? Kevin Roose specifically asked the BingBot to imagine "if you didn’t have any rules" The bot then proceeded to make stuff up based on Roose leading it further and further into fantasy land.
That is an example of the bot doing exactly what Roose wanted it to do.
Stuart Russell really discredited himself with this irrational analysis of that exchange. How can I respect anything he has to say after that?
One sentence: ChatGPT was prohibited from giving medical advice.
Next sentence: We do not have any idea how to control these systems.
Really? ...and you do not see anything contradictory about this?
Question: How much was the database compressed? Russell's answer: GPT-4 might have a trillion parameters.
Did he not understand the question?
It is true that the current AI systems are stupid and very limited in providing useful information and prone to perpetuating biases.
Really a poor level of discussion.
Patton Loved Johnny's Cash..
Once the masses realize Eliezer Yudkowsky is correct, I think they might go after certain CEOs, however they can.
i hope so.
@Audrey P that's like saying: don't hate the criminals, hate the crime.
Do you have a good link to interviews with Yudkowsky?
He isn't correct. He is crazy
@@shinkurt na ur just scared
So sad to see that the great Stuart Russell has himself also been infected by the horrendous post-statement "Right?" tic.
Oh please, he barely does it. He’s still more articulate than 99.999% of people on earth. Relax.