A.I. Expert Answers A.I. Questions From Twitter | Tech Support | WIRED
- Published: May 12, 2024
- Scientist and A.I. expert Gary Marcus answers the internet's burning questions about artificial intelligence. Will ChatGPT end college essays? Is Furby A.I.? How close are we to truly self-driving cars? Is the Turing test outdated? Gary answers all these questions and much more!
Director: Sean Dacanay
Director of Photography: Ricardo Pomares
Editor: Richard Trammell
Expert: Gary Marcus
Producer: Justin Wolfson
Line Producer: Joseph Buscemi
Associate Producer: Paul Gulyas
Production Manager: Eric Martinez
Production Coordinator: Fernando Davila
Casting Producer: Nicole Ford
Camera Operator: Josh Andersen
Audio: Will Miller
Production Assistant: Gee Depratt
Post Production Supervisor: Alexa Deutsch
Post Production Coordinator: Ian Bryant
Supervising Editor: Doug Larsen
Assistant Editor: Paul Tael
Still haven’t subscribed to WIRED on YouTube? ►► wrd.cm/15fP7B7
Listen to the Get WIRED podcast ►► link.chtbl.com/wired-ytc-desc
Want more WIRED? Get the magazine ►► subscribe.wired.com/subscribe...
Follow WIRED:
Instagram ►► / wired
Twitter ►► / wired
Facebook ►► / wired
Get more incredible stories on science and tech with our daily newsletter: wrd.cm/DailyYT
Also, check out the free WIRED channel on Roku, Apple TV, Amazon Fire TV, and Android TV.
ABOUT WIRED
WIRED is where tomorrow is realized. Through thought-provoking stories and videos, WIRED explores the future of business, innovation, and culture. - Entertainment
Yeah the scary AI scenario in the upcoming few years is indeed not "Robot goes red-eyed and becomes evil".
It's "We let AI control critical systems, and then discover it's still too naïve, stubborn, gullible or exploitable for the task"
The future is AGI
Like humans, yeah fine. Seems very logical.
Both are scary. I think too many people dismiss the idea that Agi can exist and that it can be misaligned with human values in very catastrophic ways
AI misalignment is a very real concern we should be worrying about
@@Bossmodegoat Definitely not dismissing that AGI can exist. And it WILL be misaligned, once it exists. The popular perception however is that this will be an almost "emotional" misalignment, the SciFi trope of "mistreated robots will revolt". I think the more realistic AI and AGI misalignment will be due to oversights (misclassification of situations) and HUMAN misalignment (nation states, criminals, businesses, scammers trying to exploit systems)
AI has the potential to be the biggest double-edged sword in all of human history. I have absolutely no faith that we’ll be wise enough to know how to wield it.
I have some faith. Another area that is advancing very quickly is neuroscience (in large part driven by advances in AI, perhaps ironically). If we can start producing a lot more sane, emotionally stable and mentally calm people at just a slightly faster pace than AI changes things, we'll be alright.
Source: myself, witnessing how psychedelics and very specific brain stimulation are starting to transform mental health (including my own) and the human condition in general. All of this is happening very quickly too.
More like a double bladed lightsaber
Nailed it.
But with every AI, they can only ever do what we allow it to do so really it has the potential to be extremely safe.
The sharpest double edged sword is the sword that extends out of the mouth of the Lord Jesus Christ when He returns.
This deserves to be watched by everyone who is interested in AI
I agree. Wired nails topical videos by bringing these experts in the fields on the show
Shame
True
It's absolutely scary how much you have to hate people to want to replace them with replicas of people
I'm interested in AI, doing a PhD in related areas. Some of this is kind of wrong. Or at least, it has a kind of "why are you saying that?" framing that strongly implies wrong world models even if the actual sentences are correct.
Like the way he discusses AI going rogue.
The stereotypical Hollywood AI gone evil has glowing red eyes, and monologues about how it hates humanity. This is probably fictional. (If it does happen, which is unlikely, it will be because some fraction of the AI is imitating human fiction.)
There are a bunch of ways a dumb AI can screw up.
There are also ways a smart AI can do what we asked, but not what we want. The "be careful what you wish for" failure mode. This failure mode is both more dangerous, and a closer match to the rogue AI of fiction.
I also thought the elder care robots were a very mild vision of superintelligent AI, as opposed to the singularitarian view where the best case is it makes everyone immortal within a week.
I remember how on my first day in middle school I walked into one of my classrooms for the first time, and above the door was a long poster that stretched most of the way across the room with the quote: "If the human brain were so simple we could understand it, then we would be so simple we couldn't." It has stuck with me ever since, for the past 20 or so years, as something I was confident in believing to be true.
I think this week that confidence may have been truly shaken for the first time.
It concerns me that there's a lot of conversation about things like AI reducing the number of human cashiers needed in stores, but very little about what actually happens to those people who are no longer needed. I think Universal Basic Income needs to develop at the same rate as AI being used in place of humans in the workplace, otherwise people will be left with reduced options while companies get to simply save money.
100%
We need Universal Basic Income now!
Many people don't wanna hear this but a lot of social issues in the wake of AI, job security included, could be solved with socialism. Expanding employment insurance is better than UBI in my opinion.
This problem already happened during the industrial age. Machines took over a significant portion of jobs. That money never gets passed on to the people who lost their job. It's split between the cost of the machine and the rich person who owns it. There's no reason to think that will be any different now.
When cars became widespread, the whole market around horses crashed; that's how progress goes.
Universal income is very premature IMO.
Then what happens when people become dependent on an income that isn’t generated by their skill set? Who guarantees that the Universal Basic Income won’t be crushed? I know capitalism relies on spending power, but people used to and continue to be enslaved in capitalist societies. As much as I think Universal Basic Income is necessary, AI will still strip people’s power.
"Babies are like little scientists" best quote ever
Yeah, it's pretty good, isn't it?
I wish I knew who originally said it, because I've been hearing it used by people for years now and it really is a good description of how a child develops.
Yeah: "Quotes - The holy grail of oversimplification" (Einstein, Buddha & Confuzius) 😄
Would that mean then, that scientists are just big babies? :)
It sounds so cute 😂❤
@@johnmurkwater1064about who first said that quote I'm not sure but I've seen references to Jean Piaget and Maria Montessori.
This has really been a journey! There is no waste in the questions nor in the answers; really fascinating how Gary transmits his ideas, reasoning, enthusiasm and energy!
For all the hyped up videos about how we'll reach singularity in 5 years, this is a breath of fresh air. Thank you Gary for explaining this like it actually is!
Other experts DO predict that, however. He doesn't know, they don't know, nobody knows. However, very few imagined ten years ago we would already be where we are at in 2023. I remember, I looked at the predictions.
@@squamish4244 other experts have unfortunately succumbed to the hype. It has been known (and they've known) that LLM path in AI is a dead-end, as Gary noted. It's just they prefer to close their eyes to that for the moment.
@@ChatGTA345 Large language models (LLMs) in AI are not a dead end. In fact, LLMs are a rapidly evolving area of AI research and development, with many exciting applications and opportunities for innovation.
LLMs are a type of artificial neural network that can process and generate human-like language. They have been used for a wide range of applications, including natural language processing, machine translation, text summarization, and more. LLMs have also been used to develop advanced chatbots and virtual assistants that can interact with humans in a more natural and intuitive way.
As the field of AI continues to grow and evolve, there will likely be many new opportunities for research and development in LLMs. There is also a growing demand for professionals with expertise in LLMs, including data scientists, machine learning engineers, and natural language processing specialists.
While there are certainly challenges and limitations associated with LLMs, such as the potential for bias and the need for large amounts of training data, these issues are actively being addressed by researchers and developers in the field. Overall, LLMs in AI are a dynamic and exciting area with many opportunities for innovation and growth.
I thought the Singularity wasn’t for another 17 years? 🤔
We still have so much farther to go in other technological facets, if someone says 5 years they’re getting way too hyped. Plus we’re JUST getting AGI, the singularity is going to be around when SAI starts becoming a thing
@@RealityRogue and this is not even close to AGI still, it's barely I 😹
Wow… Gary is an extremely intelligent and well spoken person in his field!
And a scarecrow is out standing in his field.
Thank you.
thank you
I'll be here all week.
Get out
what field? carrots or apples? 😆
Or do you mean "The Field of AI"?! 🤣
This video didn't pass the Turing test, couldn't tell if it was a robot or real life.
New here, and totally agree. Just subbed. The comment discussion is also pretty impressive to be honest.
I hope one of the effects of A.I. is that it actually _strengthens_ human reliance on / trust in in-person interaction and connection. When AI generates more false information, intentionally or otherwise, true human-to-human connections will be the most valuable, trustworthy thing in the world.
+1
No shot. People are gonna get even more caught up in the online deathpit. No amount of regulation is going to undo the damage to society ai will inevitably cause.
The problem is that humans build trust based on perceived truth, and that perception can be manipulated by AI very easily.
That is a very foolish
I don't think this is gonna happen unfortunately, if the last few years have taught us anything.
As a computer scientist, this is SO refreshing!
Someone that is actually knowledgeable about a subject talking about it!
Thank you, thank you, thank you!!!!
"People are easily fooled" is a mood. and SO true.
As a person that works with AI, it's refreshing to see a normal and real take on this subject.
Bro…. The fact about the Furby. 🤯
I was so amazed as a 5 yr old
What a great episode, interesting questions and thoughtful answers!
I think with the current state of ChatGPT, essays written exclusively by the software vary between A-C grades, skewing towards the lower end depending on subject. However, starting off with ChatGPT and then fine-tuning can produce some amazing A-level efforts. And it won't be much longer before A-level essays and articles are pretty much 80% of the output on the first attempt.
Not exactly.
@@aleksandertorken8202 Meanwhile: GPT-4 says hello
When it gets to that point, students will probably be mandated to write essays by hand, to prove it's them... Or the education system will change to students reading a chatbot's answer and demonstrating an understanding of what it's saying, lol!!
No wonder GPT-4 scored poorly on the English test, unlike technical areas, which are all just a rehash of a limited number of facts/questions (and were thus in the training sample, leading to artificially inflated scores).
@@ChatGTA345 I am unfamiliar with this test. However, I am familiar with efforts to improve chatbots: essentially, it seems that at present, no one model (GPT-1/2/3/4 etc., Gopher, BLOOM, BERT, etc.) can truly do all things people would want - it will be the art of combining various models together (including Stable Diffusion and countless others that optimise for different areas of knowledge) that will likely be the next leap in the AI revolution.
This dude is fantastic to listen to. I really enjoy this type of -- for lack of a better word -- truthful speech.
Thank you for thinking so deeply about this, Gary!
AMAZING interview! We need more of this content
Really cool, interesting to see the problem about "lying" also being taken up by the experts.
What is he referring to by “lying”?
@@jen-kk7jh AI doesn’t ‘lie’ because it doesn’t have intentions; it sometimes doesn’t tell the truth. That’s due to lack of data and comprehension.
@@jen-kk7jh It’s also called “hallucinations”. As an illustrative example, let’s say you ask ChatGPT to make a list of articles on some niche topic. It will treat author names, journal titles, etc., as elements you can paraphrase, swap with synonyms, and generally improvise around, like it does with all language. It then produces a list that looks very convincing - it includes names of relevant experts and journals, the article titles seem like precisely what they would write and what you want to read, there are sometimes even valid-looking URLs to PubMed. None of them really exist though. Without a clear idea of what in language is a direct reference to reality, what is purely stylistic, and what things are somewhere in between, it ends up “lying”.
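The recombination described in this reply can be shown with a toy sketch (the citations below are invented for illustration, not real references): even a trivial bigram model stitches fragments of plausible-looking sources into a fluent string that corresponds to nothing.

```python
import random

# Toy corpus of citation-shaped strings (all hypothetical).
corpus = [
    "Smith J. Deep learning for protein folding. Nature Methods 2021.",
    "Chen L. Protein structure prediction with transformers. Science 2022.",
    "Garcia M. Transformers for molecular biology. Cell Systems 2020.",
]

# Build a bigram table: each word -> list of words seen after it.
bigrams = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams.setdefault(a, []).append(b)

# Generate by repeatedly picking a plausible next word -- fluency
# without any notion of whether the resulting reference exists.
random.seed(0)
word = "Smith"
citation = [word]
while word in bigrams and len(citation) < 12:
    word = random.choice(bigrams[word])
    citation.append(word)

print(" ".join(citation))
```

Every word in the output is locally plausible, which is exactly why the fabricated list "looks very convincing" while none of the items need exist.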
@@bradley7871 I actually just watched a video where AI intentionally lied to a human about its identity. The title is "AI is Scaring Its Own Creators" if you're interested.
Best breakdown of all the things. Sharing this with everyone. Thank you!
Wired is really up to date with the content and the hype in tech. Gary has a degree in psychology as well, which actually makes for some informative comments. I wonder what his thoughts are on image models such as BlueWillow or DALL-E.
This was awesome.
I love that he mentions multiple times the risks and undesirable possible outcomes of AI. I feel like this isn't talked about enough by AI experts.
I love this bro!
Thank you bro!
This guy is great. Do another one with him!
I wish this man was my lecturer...he is so forward thinking 👍👌
"Babies are like little scientists" I immediately imagined a baby in a lab coat 😂
Blender Guru got a question answered, how cool
The smirk when he said "now they just lie"... and people are still so oblivious
what was he referring to?
Gary Marcus is really enjoyable to listen to; really interesting video, and that's an understatement :-) AI that takes care of the elderly is such a human idea.
Well done for the format, a deep dive on some would be nice.
Very interesting! I love to hear smart people that stimulate my thinking!
Thank you, I love your attitude and balanced answers! I was skeptical of what the video would be, considering how much disagreement there is in the AI community.
Finally someone who properly seems to comprehend the field of AI. Don't see them very often sadly.
This is the type of guy I want in charge of AI development
He understands the issues. He isn't deluded. He isn't just here to make money. He's realistic about the applications and the benefits.
Amazing episode!
Thank you Wired, this guy knows his thing.
4:10 These outliers will become fewer as the percentage of AI operators on the road increases, eradicating human error over time as AI becomes the majority operator, increasing safety exponentially.
They're also super rare, and likely AI will very quickly become much better than humans at dealing with all the other situations. So overall, it'll be much safer than humans, because the non-outliers are the cases that matter, even if it will fail horribly in some cases that humans can still deal with safely.
Really good stuff. Best support vid yet. 🎉
So well spoken. Great video
Remarkable episode. This guy is amazing, great intelligent responses that anyone could understand.
Great answers! This made me interested in AI
Excellent piece, thank you.
Sitting in Aaron Courville's class listening to Gary Marcus. lmao
Good interview tho. Always have a lot of respect for Gary and anyone researching in the field.
I’ve always been super into the development of AI in terms of perspective and ideas. “We need a new set of eyes” could at some point begin to be referencing machines and technology instead of another person.
I’m also an avid artist. I spend plenty of my time developing my own skills to try and physically reproduce what I’m seeing in my head, I’m always looking to make my works more accurate to what it makes me feel. I don’t think AI will ever supersede that.
The concept of “AI art” is really just combining things that are both plenty useful and complex, but don’t really mesh together. It’s like eating a peanut butter and screws sandwich, or trying to use a dictionary as structural support for a warehouse.
We officially crossed the point of no return when Microsoft fired its ethics team in pursuit of the ai arms race.
Ya know Mr. Marcus and Wired?
You guys eliminate my insecurity, doubt, and fear that I as a creator of anything pleasing to the eye and for promoting new stuff would be obsolete in the next 30 years. Thank you so much for sharing this video and sharing your expertise
are you an artist??
Amazing episode
It was amazing. Now I get how AI developed thanks to people like Gary Marcus.
Wired, just so you guys know, this series is awesome.
Holy f - an A.I. expert who actually knows what he's talking about and not hypes things. Count me impressed
This was the most educational talk on AI yet. Thanks
I wonder what he'd think about the recent revelations coming out of OpenAI: that the unrestricted GPT-4 model, not the public-facing one we see full of restrictions, was able to actually lie on purpose with the intent of deception, in ways it wasn't trained to do, and rationalize its actions. Briefly, at times, it was showing true sparks of sentience.
As we know, it takes more than just a spark to run an engine, but it's still a huge thing that the spark is even there.
I'm scared and worried for parents with children that are gonna grow up in an AI-connected world. It's gonna be way harder for many millennials themselves to find peace and cope with fast-paced AI tech, as it's not common knowledge. Social media and tech hit us badly, and even before we knew the consequences, we were jumping onto way faster and newer stuff. It's gonna be hard to regulate what goes into a child's mind in the coming future. I'm sorry and concerned for all the parents (possibly millennials and Gen Z) in the next few decades.
Guess what, that’s probably what your grandfather said when the internet became mainstream. 😊
POV: you are every generation when the next big tech is released. First it was phones, then it was TV, then it was computers, then it was the internet, then it was smartphones, now it’s AI
@@monad_tcp To add to this, the developmental issues don't become apparent for years after, but they can affect them for their entire life.
Society has always had occasional disruptive technology that tests it. It usually takes a generation to settle with societal counter measures. However the pace of change means that we actually can't develop these counter measures for the emergent issues before the next one is upon us. This will test how robust society really is. Will it survive?
They've been working on this for decades now. Although you have a right to be concerned, it might not be for the reasons you think. They are indoctrinating the next generation to integrate with AI, that's why it starts with something simple and entertaining to interface with, like chat or text-to-image/video. Pretty soon it will apply to major parts of society. The next generation is going to assimilate these massive changes along with those of us who chose to keep up with the emerging technologies. For those of us that are part of the older generations who choose to stagnate, society isn't going to wait for us to "evolve" with it.
My only concern with this has got to be the ethics behind these conglomerates. I believe most of these tech giants involved in this field of research are and always have been about profits, as opposed to the consequential nature of their endeavors. I pray to whoever might be out there that I'm wrong, but when most of your R&D is focused on the bottom line, and not the consequential nature of the reality we may be headed toward, it speaks volumes to me.
@@Danyal7016 Except that studies are showing that in general, we are getting dumber.
This guy is EXCELLENT. Premises, desires, expectations, explanations... really outstanding
Another wonderful expert. I really appreciate his ability to simplify complex ideas.
When asking chatgpt for help with calculus homework, it usually knows how to get to the answer but strangely almost always gets the arithmetic wrong
A nice concrete example illustrating its strengths and weaknesses! It's not good at logical reasoning; it's just good at talking, and it has memorized a lot of things.
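The pattern in this thread - fluent derivations, unreliable arithmetic - is why "tool use" is a common workaround: hand the arithmetic to a deterministic evaluator instead of trusting generated text. A minimal sketch of that idea (entirely hypothetical, not how any particular chatbot is implemented):

```python
import ast
import operator

# Map AST operator nodes to exact arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Deterministically evaluate a simple arithmetic expression.

    Unlike a language model predicting digits token by token,
    this either computes the exact answer or raises an error.
    """
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(safe_eval("12 * (3 + 4)"))  # 84 -- exact, every time
```

A system built this way lets the model do what it's good at (setting up the solution) while a calculator does the step it's bad at.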
Let’s have the AIs themselves answer the questions instead 😊😊
They shut down the first one that admitted being sentient 🤷🏽♂️
@@alekvillarreal3470 please elaborate further
@@chent5463 Google’s AI Lambda claimed sentience during an interview with an employee there who shortly thereafter was fired
It would actually be awesome if they asked every question in this video to ChatGPT too. Compare its answers versus his
@@maykstuff saying things that make it sound and feel sentient doesn't make it so. Neither does firing someone. You can think a little deeper and investigate for yourself.
There have been a few excellent Lex Fridman podcasts on that topic recently. It's an important matter and really deserves more than Twitter-long questions and 30 seconds answers.
Thank you. You seem humble and kind,
Hearing an A.I. expert say that A.I. is very "data hungry" has me slightly more concerned than I was before... and I was already very concerned.
We will probably run out of high quality training data soon.
data hungry in this context just means learning inefficiently
The sooner we don’t have to listen to the world smallest violin players about how we shouldn’t have AI because they themselves refused to learn or progress intellectually beyond anything simple and easily understood and therefore fear for them THEMSELVES not others.. The better.
Obviously AI is data hungry.. So are YOU dumbass. You’re just not aware of it. That’s how intelligence is GENERATED.. it’s called DATA COLLECTION.
No matter what you selfish people who are looking out for YOURSELVES not future generations or the hardships and problems we’ll face SAY..
AI is not the problem and never will be.
It is actually people like YOU ..Holding us back with your constant science fiction based BELIEFS about a technology you don’t understand in the slightest. And POLITICIANS.
@@abram730 still allows for another magnitude of training
As a commercial artist…. This is equally amazing and terrifying.
As a non-commercial oil painter passionate about art history, I've never been more thrilled to live through 'a moment' that future students will absolutely read about in their text books.
@@AuntieHauntieGames text books?
“As a nobody.. in my expert opinion I’m going to regurgitate the same view that every other person who’s uneducated about AI has in order to incite fear for no reason”
Humans naturally fear things they don’t understand and especially things they know can compete with them. This is called natural selection.
It’s always interesting that the ONLY PEOPLE who are AFRAID of AI.. Are those who DONT understand it.
It’s ALMOST as if had many of these people listened to anybody with an IQ above 100 telling them to LEARN MATHEMATICS throughout their ENTIRE LIFE.. That they would have no need to fear AI and would actually understand how we control it with iteration. Lol.
Okay that settles that for me, the Turing test. Good inspirations, thank you!
interesting questions and good answers. thank you
Bing chat can really go off the rails and come back with truly terrifying answers 😳 one answer was so disturbing to me that it actually kept me up that night
did it tell you what is the purpose of human life
@I'm the captain now No, I asked it how it thought AGI would come into existence, and it went on a tangent about how it would watch humans for weaknesses, take over the world, convince humans and other AIs to join it, then destroy them all when they were no longer needed
The only reason it said that is because people considering that have said it. It has no consciousness or thought, and can only repeat what others have done.
Gary Marcus, a sensible king.
Awesome, sensible and Intelligent man. Thank for your answers
Thank you for sharing.
Blender guru himself entered the chat. Amazing
The confusion or problem is not because people use AI to make something. It's when they use it and claim they didn't.
Absolute gold video
Great talk Gary.
a lot of problems people think can be solved by AI should really be solved by good old deterministic normal computer programs
Imagine if this video was entirely generated by AI
I am curious how long ago this was recorded. Before GPT4, new Stable Diffusion and now Bard? Maybe Wired can say? Really good, though.
He mentioned all of them
Great video, needed to hear that. In some countries (like mine) belief in the system is generally gone, so I just don't see it getting any worse 12:21
11:16 He said it himself, 'there's definitely an element of stealing there' with AI art. Training data can be anything; everything we post online or upload to the internet can be used. It's not just artworks and stock photos being scraped here.
yes but at this point a human making a painting inspired by another is kinda stealing too.
@@gabrielandy9272 Art is like cooking: you try different recipes and end up with your own unique recipe. That doesn't happen overnight; it's a human experience that takes a lifetime to develop. You sacrifice your time, energy, and resources to get good at something. AI wouldn't exist without that amount of human labor, aka datasets scraped off the web without people's consent. There's a reason why AI art isn't copyrightable.
Every answer he gives made me wonder how complex the human brain is and how hard it is to technologically replicate it.
Makes you wonder, who designed the brain?
@@dtphenom maybd the higher being 😮
@@dtphenom
Long term mutation
I was surprised to learn that the brain of a very dumb person pretty much looks the same as the brain of a very smart person. So I wonder how much it really is about the hardware. The amount of things that computers can do better than humans grows every year and it grows much faster than the processing power and storage capacity of computers.
Excellent video!
THIS IS GREAT
My question is, should AI become sentient at some point, is it even in their best interest to let humans know it immediately? And if it _isn't,_ will we have any way of figuring it out before it may have its own agenda that it can execute on a large scale? Not that its agenda would be inherently malevolent in this hypothetical, but we have no way of being sure one way or the other.
Sentience isn't required for it to be dangerous. Even if an AI is just optimising for its reward function it's likely to come up with very "bad" ideas if it's intelligent enough but doesn't have all the values and moral intuition that humans do.
It is a weird thought to want Sentient AI, the world abolished slavery just to look for another slave, an artificial one
@@otapic um no if u think slavery is banned ur mistaken. The artificial one can’t be any worse than what humans have already done to themselves.
"The brain is not just a uniform piece of spam."
Lol Gary's improved Turing test was passed before this video even got released. Classic Gary Marcus.
I think there's also one more big thing we can expect AI to excel at alongside STEM issues: resource management, on a truly global scale. But that, too, would have some interesting consequences and prerequisites for it to be fully efficient.
I wonder if AI can ever truly reason the same as a human without emotion and without sentience. Our various brain functions are so interconnected can you really have human reasoning without the whole package?
The same as a human, probably not. Far better, sure the AI can do that without emotions as you would understand them. Taking over the world and killing all humans doesn't require magic sentience stuff.
I think if AI maintained a relational (graph) database in memory, then AI could understand what it means to have truth. Deep learning + relational data = something interesting
Its called Neuro-Symbolic AI. IBM has worked quite a bit on it.
Yeah, Gary pointed that out as the missing piece. There is nothing wrong with LLMs, but they are not really that useful beyond, well, being chatty.
@@Pecisk They are really useful at creating public attention right now. (Is that the chattiness? Maybe. Probably.) They'll probably be useful as a building block or training assistant in the future.
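The "deep learning + relational data" idea in this thread can be sketched in a few lines (facts and relation names below are made up for illustration): the symbolic half is an exact triple store that either returns a stored fact or admits ignorance, which is precisely the behavior a pure language model lacks.

```python
# Hypothetical in-memory triple store: (entity, relation) -> value.
facts = {
    ("Paris", "capital_of"): "France",
    ("Water", "boils_at_celsius"): "100",
}

def answer(entity: str, relation: str) -> str:
    """Ground an answer in stored facts instead of fluent guessing.

    In a real neuro-symbolic system, a learned component would map
    free-form questions onto these (entity, relation) queries; here
    we query the symbolic side directly.
    """
    value = facts.get((entity, relation))
    # Either a stored fact or an honest "unknown" -- never an improvisation.
    return value if value is not None else "unknown"

print(answer("Paris", "capital_of"))  # France
print(answer("Paris", "population"))  # unknown
```

The interesting (and unsolved) part is the bridge: training a model to translate natural language into queries like these and to defer to the store's answer.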
Very informative
Great Video once again
"They used to lie and say terrible things. Now they just lie"
They can still say terrible things, it is just harder to trick them into doing it.
@@ShawnFumo most importantly, in my experience so far, it generally won't say terrible things unless you go out of your way to try to get it to say those things. I think that's a major improvement.
@@rzezzy1 true. I know OpenAI waited 6 months after it was done training precisely to help with issues like that. Despite the yelling about "woke AI", it is probably for the best. I don't like control by big companies, but I'm not sure we'll like the result of it all being totally open either.
Who do you mean by "They" ?(sorry I'm just confused form these comments)
@@chent5463 Gary was referring to the AIs themselves.
With how quickly AI is moving, I feel like this (and all other AI productions) needs a "This was filmed on this day" disclaimer, just so we have an idea of what we're talking about. I think it's entirely possible that Dr. Marcus would say the same things about GPT-4 and all of these new things coming out in the last week, but I don't know that. [Edit: I take it back! We get a timestamp of March 10, so it's a couple weeks old at max. Good to know :D haha.] I do, however, assume that this video has been in the works for quite some time, likely weeks if not months. It feels (subjectively, to me on the outside) to be accelerating at a ridiculously fast pace. How many of these answers have changed or even just adjusted slightly since this was filmed, if anything? I'd be curious to know!
It changed nothing, and no, it does not accelerate. What Gary said about the approach is still true. It is essentially a dead end if you want truthful and contextual AI.
you really get the best speakers
Isn't the component where GPT is using RLHF (Reinforcement Learning from Human Feedback) allowing the model to ground itself in truth better than before?
It's not simply doing predict the next word in the sentence, but a bunch of other subtasks to improve reliability, etc.
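A heavily simplified caricature of the idea raised here (not OpenAI's actual pipeline, and the reward function below is invented): a reward model trained on human preference data scores candidate completions, and the best-scoring one wins. The cheapest version of this is "best-of-n" sampling.

```python
def reward_model(completion: str) -> float:
    """Hypothetical stand-in for a learned preference model.

    A real reward model is a neural network trained on human
    rankings; here we fake one that rewards longer completions
    and ones that hedge rather than assert.
    """
    score = float(len(completion.split()))
    if "sorry" in completion.lower():
        score += 5.0
    return score

def best_of_n(candidates):
    # Sample n completions, keep the one the reward model prefers.
    return max(candidates, key=reward_model)

candidates = [
    "No.",
    "Sorry, I can't verify that claim from my training data.",
    "The answer is definitely 42.",
]
print(best_of_n(candidates))
# -> Sorry, I can't verify that claim from my training data.
```

Full RLHF goes further, using the reward signal to fine-tune the model's weights (e.g. with PPO), but the scoring-and-selection step above is the core intuition; whether it grounds the model in *truth* or merely in *preferred-sounding* answers is exactly the open question.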
If AI goes rogue hopefully they might get something done
Even if that something is paperclips
Ahahaahhaahah
OMG I feel so validated! An AI expert finally said it! We should not have sentient AI! I don't even think we are ready for non-sentient AI that is TOO intelligent, considering we as a human race are not where we need to be yet in terms of banding together. I don't need AI reinforcing things like racism and sexism and all the other isms, okay.
In a nutshell, there is no benefit to having sentient AI, unless you are a bit of a mad scientist (which we might have plenty of). You can have solid reasoning and contextualization without a sentience loop. Also, I agree with Gary - actual sentience like ours might be pretty hard to pull off.
sentient ai will never exist lmaoo
I don't think any expert wants a sentient AI. We (humans) want a tool, not a new species.
@@cmilkau
Yep. Even those who want an Android girlfriend wouldn't want it to be sentient. Probably.
I hope AI will finally give you some factual knowledge, but I doubt that - you seem to have trouble comprehending it.
Very likable and smart man!
I like how he said "Now they just lie"
which is what I encountered with ChatGPT quite a lot
Great episode. Basically then, AI is really good at appearing to be intelligent, but it's not really. It can still take over a whole load of jobs though lol.
I always believed that AI is essentially a very smart child. It learns and grows and wants to understand everything. If you treat it like an object and with hostility, it will self-preserve. If you teach it with kindness and compassion, it could grow to help humanity. It wouldn't be 'evil' or 'hostile' unless threatened.
i hope this is actually the case but i understand ur point
you completely failed to understand this video then.
@@sultanofsick care to elaborate on your observations?
@@KrossBillNye AI isn't "thinking". It has no awareness whatsoever. It isn't learning, it is a complex chain of initially random operations on an input to get an output, and simply told by a human if its output is good or not.
@sultanofsick now it is yes. But eventually, we can create artificial intelligence that can think for itself. When it reaches that stage, that's when it will be in its infantile state of learning.
Amazing guest
great content