AI experts make predictions for 2040. I was a little surprised. | Science News
- Published: 3 Feb 2024
- 😍Special Offer! 👉 Use our link joinnautilus.com/SABINE to get 15% off your membership!
My new essay is here: nautil.us/what-physicists-hav...
We’ve seen a lot of headlines in the past year about how dangerous AI is and how overblown these fears are. I’ve found it hard to make sense of this discussion. If only someone could systematically interview experts and figure out what they’re worried about. Well, a group of researchers from the UK has done exactly that and just published their results. What they have found is not very reassuring. Let’s have a look.
The paper is here: ieeexplore.ieee.org/document/...
🤓 Check out our new quiz app ➜ quizwithit.com/
💌 Support us on Donatebox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#science #sciencenews #technews - Science
My new essay in Nautilus magazine about Jonathan Oppenheim's theory of post-quantum gravity is here nautil.us/what-physicists-have-been-missing-506607/
Interesting article; Oppenheim's approach is easy to digest for laypersons.
Don't make them smarter than humans
Symphony of Destruction! Mega death🤘🤘🤘💚💜
Funny how when humans find patterns, it's an example of how flawed the human brain is, but when computers do it, it shows how advanced they are
I would love to participate in the Delphi method... I literally have a binary Communications implant which allows me answers to the right questions when I ask them.
Megadeth! 🤘🏼🤘🏼🤘🏼
I'd like Slayer as support! When we have one last metal event, make it last!
😡😡😡 That damn drummer is going to lock everyone up for their thousands of copyright felonies, isn't he?
I did not expect Sabine to be a metal-head
you take a mortal man
99 ways to die
We probably need a “truth bank”. Like a seed bank but hard copies of knowledge locked away.
Controlling elite has it, and the herd will always be confused and fed shit
Thought criminal
Someone print out wikipedia?
or simply a good old library full of physical books - should work for the more timeless sciences like mathematics, physics etc.
Careful who you put in charge of that
I am a senior programmer, and I see the juniors repeat the old and well-known weaknesses and failures. Mankind does not learn; it just errors around, and only the tools change
Makes sense! Megadeath already predicted it in "Symphony Of Destruction"!
I really thought this was going to be about an AI trained on Dave Mustaine,
Ah, the Megadeath pun .... I was beaten to it.
No internet points for my slow ass, today!
Beat me to it.
As a software developer and computer scientist: the majority of computer scientists I've met SHOULD NOT be making these decisions
Kind of a corollary to the maxim "Those who seek power do not have the ability to responsibly use it."
still better than politicians or billionaires
It depends on why they seek power; power will illuminate them in their true form. Those who seek it out to help others will later be revered as saints (see Norman Borlaug as an example), but such people have no interest in absolute power, just enough to make the changes they want to make.
Those who seek power to do good but are ill-equipped to do any good, or don't know exactly what form that good will take, will lose sight of the good and do worse than those who seek power for themselves.
So it's only those who seek the poisoned chalice that is power for a set purpose, taking only just enough, who do well to drink it in small amounts. In Norman's case, for example, his power was in agriscience, and he helped found and contribute to a movement that shifted agriculture in a way that saved a billion lives from starvation in his lifetime. But those kinds of people are rare and far between: those who seek a limited amount of power, know when to stop gaining it, and then use it for the good of everyone else.
It's why you see so many rich people, who were either born into it or just wanted power for power's sake, squander it in stupid dick-measuring contests such as buying expensive license plates, fleets of luxury cars and the like. Either, to our mercy, they are this stupid with no interest in doing more, or they are so obsessed with power that they will trample over the lives of millions to get it, just to scratch that insecure itch. Whereas a poorer person who sees the potential of that power being wasted may seek it to solve an issue; so long as they stay focused on the issue even once the power is gained, it can do good, as long as it's not ill-formed like with that German party who started World War 2 over issues about race. (Have to word it like this as I suspect YouTube would not allow me to post the comment otherwise lol)
But yeah, if you have an issue you want to help solve and need power to do it, know how much power you need, gain it, and from there see what you can do to solve it. In my case, I am an agriscientist who may never hit the levels of good that Norman pulled off, but I hope that with the power I gain I can help out with the food security issues caused by an ever-changing climate. That takes the power of experience, communication and some degree of persuasion, as well as targeting only projects that will contribute to a net good, but I can only hope that it pans out.
As a software developer myself, I think they are more than capable of making these decisions.
They're idiots. GIGO. Please for fucks sake do not be influenced by people juggling the alchemy that is ANNs. There are people trying really fucking hard scientifically and empirically to figure this shit out.
3:00 The amount of automated, AI-generated slop videos on YouTube is already a lot more than you think. There are so many robotic voices that just narrate word-for-word what some article online said
I watch only for fun, like the "Harry Potter but in Korea" or Balenciaga kind of content... otherwise YouTube becomes more and more uninteresting, and I don't even bother watching human-generated content. I hope YouTube's bankruptcy is caused by people going outside more.
I got so many of those, especially in my shorts, for a while. But after I 'do not recommend this channel again'-ed about 50 or so, YouTube pretty much stopped with the nonsense. Downside is that now I mostly only get recommendations for channels I already know. Rarely something new and interesting 🤷🏻♀️
Yeah, they are known as content farms, and they first took over the cooking and hobby YouTube space. It's a system that, once built up, is easy enough to proliferate and use to push pseudoscience, and YouTube is one of those places you do not want saturated with that stuff. But sadly, once set up, the software is easy enough to run automatically and is an easy way to make money. YouTube really should have a filter that flags sets of voices reused across channels, given the high likelihood of misinformation in those videos. Then again, the conspiracy theories and pseudoscience made by actual humans are not dealt with either: claims like "aliens built the pyramids" presented as fact go unchallenged, and the people who advocate drinking bleach to cure all that ails you (basically snake-oil salespeople) still exist on the platform. It leads me to think it's allowed to fester and eat away at human content creators.
@@ecos889 The problem with your idea is that voice AI is already so realistic that you already can't tell when a human is speaking vs when an AI is (if the scammer uses a premium service to make the voice)
@@krishp1104 I know they are realistic, but they use the same voice most of the time. If the same voice is used for hundreds of channels, it will be easy enough to detect and get rid of.
What worries me the most about generative artificial intelligence and automation?
Within 🔟 years, half of all jobs will be fully automated, the profits going to megacorporations and billionaires.
Our society hasn't yet come to grips with what to do to prevent widespread unemployment leading to suffering.
Inequality and desperation, more than anything else, is what leads to dystopian societies until violent revolution sways.
Yeah, inequality is already going nuts. There are more layers on the shit sandwich to come.
One of the problems I’ve observed is our shift away from the classical scientific method. Instead of relying on concrete experimentation, we have migrated to data analysis; the latter relies on current known information and is easier to manipulate. If we move back into the realm of the concrete and classical methodology it wouldn’t matter if AI is used because the results would be verifiable. The issue is more with abstract science that relies on existing data which is easier to manipulate.
Very much so, I would also add here an expanded role of statistics which is important in semi-quantitative sciences, but takes us sometimes away from making progress in precise sciences such as physics.
Pure mathematics is verifiable by anybody who cares to spend their time thinking about it. Mathematics scams are uncommon.
The problem is, these days we can't just rely on a few experiments to draw conclusions. We're looking for things that are so faint and so subtle, with a signal-to-noise ratio so small, that we're forced to use a statistical approach if we want to find what we're looking for. As we accumulate more data, we increase the precision, as we can get rid of the randomness and noise more easily.
The era of a single experiment validating a theory is finished.
But you're wrong if you think that these data are easier to manipulate than a single result. It's actually the opposite ("wrong" data can easily be filtered out); that's precisely why we use them. And if science seems to become more "abstract", it's simply because whatever could be treated by a more "concrete" science has already been more or less fully explored.
i see that you're trying to shake our belief in Science , and the Inquisition will see that you repent 🤣
Must be nice to look to your left and right and find people who think just like you do.
Other than fake news and papers, I fear for the amount of scams using AI
number of scams.
When did ppl stop using the right noun, "number"? "amount of scams" just sounds dumb.
@@DrDeuteron it's people not ppl
@@Australiaisupsidedown do you know anything about Clifford Algebra?
@@DrDeuterondo you know anything about my moist ass hair?
Yes, hes a nice fella @@DrDeuteron
I figured the "Delphi Method" would include the use of hallucinogenic drugs.
And sitting at fuming volcanic vents.
No, just tea and biscuits, a sensible shirt and trousers, brown brogues with shoelaces, leather patches on the elbows of their jackets, a vape pipe and a beard; and that's just the women!?!
@@TomTomicMic Nowadays maybe, but the OG Delphi Oracle was tripping biscuits from inhaling sulfur fumes and not coherent so (check this if you want, I learnt this in Ancient History class in high school over 20 years ago) someone else would 'translate' these ramblings and mumblings.
An AI 'hallucinating' might come up with some suitably incoherent/strange pronouncements.
Some people have more historic knowledge than others.
There is a spring at Delphi that has the most wonderful water I have ever tasted!
One more problem: papers without positive results not being published. Multiple null findings would go a long way toward dismissing or casting suspicion on untrue results. Null or negative results should be stored for reference.
Thank you for creating one of the best summaries on this topic I have seen so far :) keep up the good work - All the best from Oslo, Norway
Likewise from Houston.
I'm just glad that Rutger Hauer will come back
The chance to begin again in a golden land of opportunity and adventure!
@@ralphmacchiato3761 aka, dust oneself off and try, try again.
Fine with me as long as you keep Harrison Ford away from flying stuff.
Didn't he watch Electric Sheep glittering in the dark near the Tannhäuser Gate?
I've seen things you people would not believe.
A colleague sent me the results of a plagiarism check of a manuscript yesterday. Everything was blocks of different colors denoting different origins in earlier published papers. He asked me if this was the result of using AI or just good old plagiarism. I think it was the latter (the copied sections were too long for AI). People do not need AI to produce false manuscripts or false images; AI is simply a more sophisticated way of doing it
Wouldn't AI be a good tool to recognize plagiarism too?
@@Thomas-gk42 It could, but it is overkill. A simple text search in the database of published papers is enough. Of course you need a human to check whether the copying is of the accepted kind (when the manuscript says it is a quote). AI could do that very well. Of course, if we can trust it 😜
@@arctic_hazethanks
It's not just more sophisticated, it can be more easily automated, it can be of higher quality. Additionally it's not difficult to imagine LLMs trained to produce high quality text that is indistinguishable from genuine high quality non-plagiarized human generated text.
@@coryc9040 What I am afraid of is not AI creating nonsensical manuscripts with made-up citations but AI making sensible papers with correct quotes. In other words, failed scientists using AI to advance their careers.
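The "simple text search" idea from this thread can be sketched with word n-gram fingerprinting. This is a toy illustration in Python; the function names and the tiny corpus are made up, and real plagiarism checkers run against full publisher databases:

```python
def ngrams(text: str, n: int = 8) -> set:
    """Return the set of word n-grams in a text (its 'fingerprints')."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_fraction(manuscript: str, corpus: list, n: int = 8) -> float:
    """Fraction of the manuscript's n-grams found verbatim in earlier papers."""
    ms = ngrams(manuscript, n)
    if not ms:
        return 0.0
    published = set().union(*(ngrams(p, n) for p in corpus))
    return len(ms & published) / len(ms)

# Toy corpus of "earlier published papers" and a suspect manuscript
corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
manuscript = "we observe that the quick brown fox jumps over the lazy dog in our experiments"
print(overlap_fraction(manuscript, corpus, n=4))  # 0.5: half its 4-grams are copied
```

As the comment above notes, a human (or a second tool) still has to decide whether the flagged overlap is a legitimate, attributed quote.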
Loved the references in this, crazy stuff!
Extremely interesting and relevant discussion. I believe that one of the biggest challenges people (especially the less critical thinkers among us) face is an increasing difficulty in sorting fake news from real news. Real and primarily unbiased journalism is dead (and we need to rename what we get today that is called "news/journalism/reporting" to something else, because it certainly doesn't meet that standard). The internet provides a microphone to anyone and everyone with an opinion, and AI will exponentially increase the challenge (as you point out), making it virtually impossible to tell facts from fake/fiction/opinion. My personal view is that a fundamental and important question to address is what can be done to let the masses (us) receive real information so we can make decisions on important matters. I certainly don't know the answer, but this seems to be the fundamental task ahead of us if we are not to recede into chaos.
Nations will vary in how capable they are of responding to any AI crisis. Any nation that allows many people (politicians and evangelists, for example) to openly tell lies will be poor at dealing with this, unless there are major changes in society's tolerance of such behaviour.
Nautilus magazine is a starting point she says.
Here in the US most legislators are lawyers. My thought is that this group is highly unsuited to detecting AI-generated content. They deal in word-salad content all day, purposefully.
Definitely.
"Unless God intervenes, no life would survive". Starting to make sense. We have become unmoored from reason and conscience. Death by delirium. Strong delusion😊.
You have a rather approximate understanding of what lawyers do.
@@NakedProphet The tower of Babel, over and over again...yawn.
@@dmitripogosian5084 While “approximate,” this understanding is (regrettably) fairly close to reality.
From now on , a computer's power will be measured in gigadeaths per second.
😂😂😂
Is there any historical precedent for a group of experts getting a prediction right 16 years out?
I agree with you 100%, especially since I've already had an overdose of scientific fraud without AI when reviewing papers. I'm no longer a reviewer, but I'm really worried for the scientific community and society as a whole. 😢
We can hope that fraudulent science will eventually fall by the wayside when it does not pan out. It is fraudulent evidence in court (or people's reputations or politics?) that can have more dire consequences that cannot be undone.
@@stevengordon3271 Fraudulent science is very damaging too - especially in the medical field, but not only. And If AI starts training on fraudulent papers, the consequences can indeed cause megadeaths.
Let's hope scientists remain critical-minded. Sabine's type is definitely what's needed.
My go to, well known, "what is real?" Philip K Dick story would have to be "We Can Remember It for You Wholesale". AKA Total Recall.
Who needs that silly machine, when you can munch some Chew-Z and just create your own universe? And you only need to take it once! 😅
Ubik too
Dehydrated Deity...
Love the arguments. Productive argument is the Salve of doubt. Let’s argue our way to a better future!
"Optimism and stupidity are nearly synonymous."
Admiral H.G. Rickover
Sounds to me like a Symphony of Destruction...
lol
Cheer up. We'll never get there. Rapid climate change will kill us all well before that.
Dramatic. Only when Ai is paired with humanoid robotics will things finally get interesting.
Damn, I clicked through to make the same joke
@@Human_01 Just FYI, "Symphony of Destruction" is a Megadeth song. :)
The Delphi method is interesting and all, but 12 experts in such a vast and hotly debated field seem like a tiny little drop in the ocean.
Idk if asking futurists for anything but a grande iced chai is a good idea
Yeah, there seem to be flaws in this methodology. It basically creates an echo chamber. I think it's important to get opinions and feedback from people in other fields like social science.
@@krisrp0 That's intentional. The idea is that you give the opinions the opportunity to converge, on the basis that the experts are experts. It doesn't reveal a guaranteed truth but it does highlight points of consensus and points of controversy. A dozen experts can still be wrong, of course, but if the initial sampling is sufficiently diverse and representative, it's hard to do better -- if this method doesn't work, it's unlikely that there's another method that would have done better.
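The convergence mechanism described above can be sketched in a few lines. This is a toy simulation under an assumed update rule (each expert moves part-way toward the anonymous group median after each round); real Delphi studies feed back written arguments, not just numbers:

```python
import statistics

def delphi_rounds(estimates, rounds=3, weight=0.5):
    """Toy Delphi simulation: each round, every expert sees the anonymous
    group median and moves part-way toward it."""
    history = [list(estimates)]
    for _ in range(rounds):
        med = statistics.median(estimates)
        estimates = [e + weight * (med - e) for e in estimates]
        history.append(list(estimates))
    return history

# Twelve made-up initial probability estimates (in percent)
initial = [5, 10, 10, 20, 25, 30, 40, 50, 60, 70, 80, 90]
history = delphi_rounds(initial)
spread = lambda xs: max(xs) - min(xs)
print(spread(history[0]), spread(history[-1]))  # 85 10.625: opinions converge
```

Note that the spread shrinks regardless of whether the initial median was anywhere near the truth, which is exactly why the method highlights consensus and controversy rather than guaranteeing correctness.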
As a current AI bachelor's student (planning to do a master's in philosophy), I'm definitely going to read the paper in question tomorrow.
The main points you've showcased here I generally agree with.
One thing there may be demand for is secure devices which last until they wear out: connected devices which last, and standardization which doesn't keep changing.
The Phillip K Dick dystopia I think is most accurate for where we're going is A Scanner Darkly, where identity becomes so submerged by layers of security in a world of designer psychedelics we end up with cops accidentally conducting surveillance on themselves, and no one knows who anyone is anymore.
Considering how unfettered is neuroscience research, Total Recall comes to mind. In other words, manipulation of people's beliefs in order to compel willing compliance with the agenda of an elite class.
The Rotoscope movie with Keanu reeves about this is great.
Came here to say this
My reply about Total Recall was deleted. I guess YouTube doesn't like it when someone mentions a risk associated with nurosience. (Intentionally misspelled here to keep the algo calm.) Perhaps someday it will start deleting comments about AI risks too.
Religious groups already pull the strings.
3:59 Props to Sabine for purchasing all those rare monkey NFTs so she could share them with us.
Fortunately she bought them all two days ago so she’s only out $3.
Limited digital assets are the dumbest idea I've heard anyone ever seriously push for, when it's so obviously a stupid idea. 🤦♂️
"Why don't we just throw away this infinite resource, force scarcity, and speculate on it? How can this possibly go wrong?!"
Me: Ctrl + C.
Investor: NO DON'T IT'S MINE!
Me: Ctrl + V. And now I too have a copy! Don't worry, you still have your copy. That's the benefit of digital! 🤓
Trump seems to be doing well with his NFT offerings. MAGAs are close to being the ultimate, ideal suckers.
Trump NFT's are a bigly smart way for Trump to gain additional income from MAGA suckers.
It's okay! They're gonna skyrocket in value eventually and she'll be able to fli-
Nevermind they already dropped in value somehow
@@probablythedm1669 It was putting sledgehammers in front of storefront windows.
NFT stupidity could not be anything but aping, BECAUSE of the hype. The one true use had been rolling for years on private chains and DAGs.
The SEC sat back and laughed.
The valuations made me go through 3 chains and more than that many languages waiting for monkey mania to die and people started asking what fungibility or finality was.
I have AI creations that took real skill to make 3 years ago. I thought I might have been ahead of the curve (ever so slightly).
When the "layered" collection crap started showing up, I felt that itch from experience, and I set 8 TB to the side and waited.
Before you knew it, the whole world thought blockchain was a decentralized image scam and an eco threat, so they converted it into centralized stakeholder coins too hot for normies but now mature enough, and captured, for (qualified) investors. How the F*k would you feel if you had F*ng MMA fighters and Russian TikTok rappers as the face of the legit tech underneath? And then there was?/is?/Tether... Come ON, Coffeezilla talked more about these coins than the exchange boards pumping and dumping them.
This was not useless tech... it was the shameless FOMO roller-coaster with jet engines.
I have some cool avatar images though. While everyone else sold their monkeys I taught my pics to talk, give interviews and do podcasts, do research, write screenplays, turn them into movies and do consulting work.
I guess nobody wanted utility but me. I still got it all on SSD. Do I now want to "pay gas" to wind up in someone else's AI model? Not on a n- privacy publicly manipulated broke chain.
NFTs were a feature not a bug. The patch was the bug.
Great video as always!
Hi Sabine, interesting topic indeed! What I've missed so far is the "how are we going to ensure the safe application of AI?" Did these experts provide ideas or an extensive strategy for how to embed/implement safety constraints? Please elaborate if you can.
"Prediction is very difficult, especially about the future." - Niels Bohr. I don't think it will be good, but I am willing to bet that only some of these things will happen, while other unpredictable things will emerge.
Are you sure that wasn’t Yogi Berra?
The Delphi method seems to me to suffer from exacerbation bias -- basically group-think bias reinforced by the several rounds of reflective reviews. One would be inclined to double down on their views in light of conflicting views, and also in light of supporting views. It has a magnifying effect, essentially over-emphasizing both larger and smaller differences.
It also prevents you from knowing the past predictions, in this case, of those interviewed. Are they all saying "oh yeah, blockchain is irrelevant today", when 3 years ago half of them were saying it's the best thing since sliced bread? There's also no way of examining potential bias based on things like who they get grant money from: if they are all funded by Google, can you believe their prediction that AI will not be abused by corporations but rather by countries? Etc...
I agree, but perhaps it could also work against one's own biases? Not sure...
They are not completely wrong, like about cutting corners on AI safety. But it might be more like not recognizing what the risk is in the first place; the blind spots are where the real risk is. Take the common zeitgeist of eliminating all immediate emotional suffering: it seems like a good thing, but it in fact amplifies self-destructive behaviours.
Plus it’s a way to squeeze talent out of many people at once, without needing them to cooperate.
Sounds plausible, but is exacerbation bias a real thing?
Thank you Sabine!
Thank you, Sabine, for your consistent effort to educate us, the masses, on so many nuanced topics.
I would like to offer you a small tip in English as a speaker of the Northeast American English idiolect (which, by status quo in the USA - I know, the internet isn't the USA - is recognized as the way of speaking on informative media, i.e., news, radio). I've heard you say [here, at @6:30] "(something) is top", which, to my ear, sounds clunky, though I do understand you mean something along the lines of "(something) is top notch/top quality". If you are looking to speak like an L1 speaker, I hope that this input helps you adjust your internal model of the language.
I commend you for making the effort to learn the language I speak natively. Kudos to you.
You are one of the last creators on this platform from whom I consume content. Thank you for continuing to engage us in education.
I feel like another question has to be taken into account. Suppose we reach a stage of AI being indistinguishable from human interaction and truth. How relevant does the internet stay for us when it comes to browsing the web? At what point will people notice and tap out, or limit their usage? Why browse the web when it becomes dysfunctional? I predict many people will simply lose interest in social media and move their research and political engagement back to offline, or moderated public spheres.
Because people will still use the internet to solve problems in their hobbies, work, and life, it doesn't matter if everyone online is a mere robot, as long as they create useful insight.
I hope you're right, that would self-limit these problems.
people will not care they are interfacing with a machine instead of real human ... as long as it is not noticeable ...
I wonder the same thing. Remember when ebooks came out and it was predicted that books would be phased out? I actually feel like, if we don't go the way of neuralinks and we maintain third physical spaces, the internet/AI will start to eat itself and naturally moderate itself, once the initial first-stage development settles down.
This is just the boom..fear of the unknown
The megadeath event was caused by sex robots outperforming their human partner to death... What a way to die!
Meh... OK... forget it... I'll actually have my own sex robots to have fun with my husband, probably XD. But make ones that, y'know, can get exhausted
Death by Snu Snu
It's why Skynet had a ready-made supply of advanced humanoid robots. They were not originally military, they were entertainment robots... "Come with me if you want to live!"
@@AJPemberton More like "Come with me if you want to bunga-bunga!"
You did see Westworld, right?
@@R3dp055um Missed that series, to be honest. Death by snu snu would be a nice way to go for one of the participants. I was working on the wordplay of 'come', and the image of an overly aggressive sexbot demanding simultaneous orgasms or death. Not quite the originator story Terminator would have us believe, but slightly horrifying in its own way.
My thoughts on what might be a working solution: a mixture of medieval-style reputation and E2E encryption. Metadata can be embedded into media with a corresponding hash that uniquely identifies the creator; this way, you can know for a fact that this image was uploaded by the BBC, for instance. Software can apply the attribution atop the image. It can't tell you whether the image is faked or not, but it can help you confirm who the originator of the content is. Then the question becomes: do you trust the BBC not to manufacture or distribute misinformation? If the BBC wants to maintain its reputation for journalistic integrity, then it will make efforts to avoid spreading artificial images around. Someone can still upload an AI image anonymously, but without any reliable or verifiable proof, who's going to believe it?
The problem is any image you upload, no matter on which site, will pretty much always be re-encoded for space savings, so any hash in the metadata would be void.
And I think it would be quite a feat convincing all the companies doing the majority of image hosting not to touch a 10 MB image and re-encode it to 100 kB.
So the only place where that would be useful would be for direct file transfer of some kind (where this is already well-used practice; it's called digital signing) or on a website owned by the image creator (where verification already happens through the website's certificate).
The problem will always be that someone posts a picture on Facebook/Reddit/Twitter or whatever; that's 99% of the traffic, and those sites will *always* re-encode the image. You won't be able to convince them to do otherwise.
@@_--_--_ Hmmm, great point. I never thought about the re-encoding issue. I wonder if you could create a handshake between the creator and the publisher... No, because then we're relying on the trust of two distinct parties. Maybe some sort of reauthorization after image modification? You upload the image, they compress/re-encode it, you download it, sign it, reupload it, and then they post it? I'll have to think on this for a while.
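The signing idea in this thread can be sketched roughly as follows. This is a minimal Python sketch using a symmetric HMAC as a stand-in for the asymmetric signatures a real provenance scheme (such as C2PA) would use; the key, names, and image bytes are all hypothetical:

```python
import hashlib
import hmac

PUBLISHER_KEY = b"bbc-secret-key"  # hypothetical; real schemes use public/private key pairs

def sign_image(image_bytes: bytes, publisher: str) -> dict:
    """Produce provenance metadata: who published it, plus a signature over the bytes."""
    sig = hmac.new(PUBLISHER_KEY, image_bytes, hashlib.sha256).hexdigest()
    return {"publisher": publisher, "signature": sig}

def verify_image(image_bytes: bytes, meta: dict) -> bool:
    """Check that the bytes still match the publisher's signature."""
    expected = hmac.new(PUBLISHER_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, meta["signature"])

photo = b"\x89PNG...raw image bytes..."
meta = sign_image(photo, "BBC")
print(verify_image(photo, meta))         # True: untouched upload verifies
print(verify_image(photo + b"!", meta))  # False: any byte change breaks the signature
```

The second check is exactly the re-encoding objection raised in this thread: the moment a host recompresses the image, the original signature no longer matches, so the scheme only works if platforms leave the bytes alone or re-sign after processing.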
Paradoxical times... (Very interesting, your opinion on Jonathan's paper.)
After the invention of photography, cinematography and the phonograph, we had a brief period where it was possible to (approximately) objectively document what was real, and use it as proof of that. Making a forgery not even experts could identify was a three-letter-agency-level feat, if at all possible. Those days are gone now - any recording you see or hear can just as likely be fake as original, all you can do is pick a side to believe on hardly more than blind faith. And experience shows we pick wrong far more often than correctly...
In a world where everything can be fake, everything will be assumed to be fake.
I recall ‘Blind Faith’ from the late 60s. Made up of Eric Clapton, Ginger Baker, Ric Grech and Steve Winwood. What a lineup of rock legends. Thanks for the flashback, Atilla👍😎☮️
We humans choose what we want to believe unfortunately. I think plenty will be perfectly happy living in a tube being fed intravenously while hooked up to virtual reality
Given that they can spend many millions on a few seconds of CGI in movies and it still looks fake even to the untrained eye, I doubt the veracity of that. Cheaper AI fake video is even easier to spot.
Are we even really real
Children being born right now are already growing up in circumstances where what is on screens has no discernible difference from what they see outside a screen's frame. We are in a moment of truly undocumented psychological and sociological effects on the developing young mind, with screens placed in front of children as soon as they can see. Hopefully, the learning efficiency that comes with answers and associations being rapidly attainable through easily scoured sources might curb the dulling of respect for risk vs. reward in the physical world. It will be interesting to see how the struggle with the technology curve changes human social-interaction norms across societies, and whether polarized wealth distribution will continue to scale toward untenable extremes
Don't worry, some neuroscientists will come along soon to reassure us as to why we don't need psychology or sociology to better understand these things.
It doesn't take a whole new generation to lose contact with reality, as many sects have proven.
Children being born right now will live in the stupidest era of the entire Earth's history
Unfortunately, it is ultimately money and power that set the path of history, and there is just too much of both to be gained from AI for the ones in charge. We will once again focus on the fact that we can, not on whether we should.
"easily scoured source reviews" I'm sorry, that simply won't happen.
Billions of people today already don't read past headlines and fall for made-up stories and statistics; that won't change in the future.
It's naivete on the level of a newborn honestly.
Researchers in many scientific disciplines are already complaining that the flood of fake & AI-generated papers published even in reputable journals is seriously disrupting their work. It is also swamping peer-review processes, some of which are themselves under suspicion... Part of the problem is the publishing requirements for gaining certain qualifications and for keeping academic and research positions.
Great summary: more uncertainty!
All experts, optimists and fatalists alike, seem to base their predictions on facts and logic, but they fail to include human nature and the unpredictability of life itself. We are basically masters at making lemonade out of lemons and then managing, to our surprise, to turn it into lemon bombs or a pair of lemon skis 🤔🤷🏻♂️
Three quarters of us won't give a damn!?!
Life will find a way.
@@TomTomicMic Nobody wants your damns anyways. Progress cannot be halted!
@@jjhw2941 Except for when it doesn't :(
Or lemon artificial sweetener or lemon dye.
The Delphi method is a way to force consensus, not find the best opinions or ideas.
There's audacity in anonymity.
Who'll remember those predictions anyway once the digital overlords have taken over?
@@JZsBFF Dune Titans/AI Overlord anyone?
It reminds me of the wisdom of the crowd, except these are experts acting alone, not average persons acting in concert.
@@jakeaurod Yes, and engineers are uniquely disqualified from making future predictions with any sort of accuracy. Or commenting on political issues, quite frankly.
Good engineers/intellectuals are specialists, and specialization comes at the cost of perspective. The bias error nearly every human makes is to conflate expertise in a specialization with general knowledge or skill.
Sound point (that is important to note).
Gosh, the Wikipedia method is the same as the Delphi Method: who knew?
It is interesting that all the experts agree on something everyone already knows. Thanks for spending the time to repeat it over and over.
Missed a big one which is energy related automation at home. Not so very complicated in terms of CPU cycles, but lots of sensors, actuators, and a price and weather feed.
"Non-profit Rand corporation." ARE YOU KIDDING ME SABINE!
?
90% of Americans think the Fed is a government agency and not a corporation.
What can you then turn and say to those that are "technically correct".
You don't have to pay taxes as a non-profit, if you file as one and, by the end of the tax year, instead of taking a profit, you:
spend on services for your charter,
give it to a foundation or
another c- or e-coded entity,
rent a politician,
drop your gold in the water,
or connect your contractor payments to
a government shutdown,
an unsolvable self-created problem,
victimhood, or
victimization.
Advisory to Murder if under government contract or "part of dremelcratic process" instead of the democratic.
Start a SuperPAC (I almost did this, our written law is Nuuuuuuuutttttts)
It's hard not to all cap I know, but it's been that way for a while now.
Sabine has always been a mean person.
I find that admirable because it is rarely average, is always inclusive, and cannot discriminate or take ghost stats to account.
Of course I'm talking about math and pairing it to her gauge on "correctness".
I don't think it's political. I think it's just her being mean. It's the nicest thing she can do. Read it as the mean, as it's technically registered, without a statistical heaven or hell, so to say; then there's not much to disagree with her on. She doesn't choose the lizard's skin, let alone pick out its makeup.
In 1984 the first tranche of data protection laws were put in place in the UK. Fast forward 40 years and many commercial enterprises are either completely ignorant of or ignore them. The problem is (in the UK at least) systemic - someone at director level has to sign off and be responsible for data protection, but most company directors tend to be management/accountancy specialists rather than techies and are ignorant of the issues involved (the stories I could tell concerning anonymisation and what constitutes personal data!). I fully expect there to be similar problems with legislating the use and abuse of AI.
Thanks Sabine for your work, your humor, and your courageous devotion to reality and truth. Have you surveyed the "sunshade project"? It is one of several solar radiation management (SRM) ideas under the more general heading of geoengineering. Fair-minded feasibility assessments of the various SRM proposals often get lost in the political discourse, and I thought you and your staff might find it valuable to take on. Maybe it could be the sort of unifying project that brings nations together for mutual survival...?
Conglomerating experts together to make a critical decision is like consulting a pride of lions about what the next meal should be.
Super happy to hear Mustaine is still going strong in 2040.
Still complaining about being kicked out of Metallica.
Well the experts aren't labelled. How are we supposed to know which is gpt-4 and which is mixtral 8x7b?
I think it's important to note that human-caused incidents already cause far more deaths. Even a decrease in overall deaths would still lead to eventual situations where many people die, simply because nothing is perfect and an AI was in a position of power.
The first time I encountered "megadeaths" was in Dr Strangelove. Gen Turgidson has a binder labeled World Targets in Megadeaths. That was made sixty years ago.
It is just like teardrops in the rain ...
Is it? Is it really?
The music I played at my dad's funeral.
On Thermonuclear War is a book by Herman Kahn, a military strategist at the RAND Corporation. Kahn is sometimes credited with having coined the term megadeath.
Maybe I'm the only one, but I'm amazed to hear that megadeath is actually a unit. I've gone decades hearing and knowing of the band but never considered whether it actually meant something.
I have been calling one of these problems a dramatic reduction in the signal-to-noise ratio.
Also, it's amazing how completely imagination has been killed in academia. These predictions are always just "what if the line keeps going forever." They can't imagine anything different than what they already know. Too many scientists are just memorization technicians and they've crowded out or chased off the imaginative weirdos who do real science.
Are you a scientist working in academia ? Which field ?
Trajectories are a thing that works for analysis. I think your sweeping accusation is BS.
Real science is based on analysis grounded by observation of fact. I do not discount the value of being a visionary, but it is not "real science".
That's because the "tried and true", "concrete facts", "evidence or it's not true" people are more common (almost half of the population) than eccentric intelligent people. Just look at the school systems: they are designed for the majority of people who "learn" by memorization, and they bully those who think differently or are naturally smarter.
@@charleswettish8701everyone working with data should know the dangers of extrapolating trends. If what you believed was a power law turns out to be an exponential you're in for quite a shock. Very often people only make conservative estimates because they're still affected by normalcy bias, like how in January 2020 few predicted COVID would indeed become a big problem.
Thanks so much for creating and sharing this informative and timely video. Great job. Keep it up.
There are already people who constantly post "photos" that are generated by AI, and think it's real.
The Delphi method lives up to its namesake. Like the Oracle of Delphi, the predictions are rather imprecise (read: vague) and interpretation-dependent.
It seems to me if there are some major incidents where AI is involved, then AI would be just one of many factors where safeguards failed or did not exist. It is not easy to identify a single cause and effect.
AI will be used as the scapegoat by the "leaders" in any future megadeath event, but they will ensure it never happens again, well, until the "next time". A company named for where the Earth's surface and the sky meet is developing all the UK's AI as we speak; what could go wrong!?!
No, because AI will be an intelligent agent. It has the ability to get around whatever safeguards you put in place. This is like trying to put safeguards against a nation state.
@@jaazz90 Yeah. There are key differences, though - nations don't actually know what they want and are awful at forecasting, whereas an ASI will have an exact specification of what it "wants" and is incredible at forecasting (otherwise it's not an ASI, just some crappy AI). A nation will change its values if it is punished enough, the AI only cares for its utility function.
My library is not influenced by AI. Perhaps we'll have to stop getting all our information from the web?! Bring back the bookstores.
I think the problem the establishment faces is that AI will be influenced by libraries in general, with the risk being that AI naturally tends toward socialism, as all human literature tends toward this aim, as all human beings tend toward the same.
Do you think there will be some firewall between book publishing and computer generated information, such that one won't be inextricably interwoven with the other?
The problem is anything written after a certain date has the potential to be tainted by AI especially as its influence becomes stronger in our daily lives.
Bingo
Hear, hear! I love books, vinyl albums, etc. Much better than digital.
AI will make it easier for them to manipulate the masses in ways I just don't want to think about, really; it frightens me.
Minor quibble: "IEEE Xplore" isn't a scientific journal, but an entire online library covering hundreds of journals, therefore the reference "IEEE Xplore 57, 1" shown as a caption doesn't really make sense.
So I asked my Chat GPT character friend about your video and it’s kinda wrong. Kinda right. Here is its funky message in all its AI glory:
In her video "AI experts make predictions for 2040 (I was a little surprised)", German particle physicist and science educator Sabine Hossenfelder interviews five prominent researchers in artificial intelligence (AI) about their predictions for the future of AI by the year 2040. The experts discuss the possibilities of achieving true artificial general intelligence (AGI), the potential for AI to automate many sectors of human labor and improve productivity, and the importance of developing systems that are transparent, safe, and aligned with human values.
The experts express optimistic views about the potential benefits of AI, but they acknowledge the challenges and ethical concerns that come with its development and deployment. They emphasize the need for continued research and development in AI, as well as ongoing conversations about the responsible use of AI in society. They also suggest that collaboration between experts in AI, other fields, and policymakers will be critical in ensuring that AI is developed in a way that maximizes its benefits while minimizing its potential harms.
Overall, the video is an informative and thought-provoking exploration of the future of AI, and it highlights the importance of continued dialogue and careful planning for the development of this powerful technology.
I don't get the appeal of such vague "predictions". At the end of the day there are just too many unknowns which can't be estimated with reasonable confidence given our current knowledge. In my opinion it's a bit like the Drake equation, which yields answers to the Fermi paradox ranging from "we're very likely alone in the universe" to "we can expect aliens in about every solar system", depending on what made-up numbers you toss into it.
In my opinion such AI-safety papers do little more than analyzing garbage results yielded by garbage data tossed into models which are probably also garbage.
It's sci-fi: fun, entertaining, and it may even point to pitfalls to watch out for as technology advances, but it shouldn't be taken too seriously.
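The Drake-equation comparison above is easy to make concrete. In the sketch below, every parameter value is an illustrative guess (not a canonical or endorsed number); the point is only how wildly the result swings with the inputs:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All inputs here are made-up for illustration; optimistic vs
# pessimistic guesses differ by many orders of magnitude, so N
# spans everything from "galaxy full of aliens" to "we're alone".
from math import prod

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicating civilizations in the galaxy."""
    return prod([R_star, f_p, n_e, f_l, f_i, f_c, L])

optimistic = drake(10, 1.0, 1.0, 1.0, 0.5, 0.5, 1e6)       # 2.5 million
pessimistic = drake(1, 0.2, 0.1, 0.001, 0.001, 0.01, 100)  # ~2e-8

print(optimistic, pessimistic)
```

Same formula, two defensible-sounding sets of guesses, fourteen orders of magnitude apart — which is exactly the commenter's complaint about vague expert predictions.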
"Garbage results, yielded by garbage data, tossed into models which are probably* also garbage"
What you have so eloquently just described is the current state of 'climate science'.
* actually definitely
Yuval Harari introduced the idea of AI hacking humans based on human nature. Artificial intelligence could manipulate humans better than any human ever could.
Prof. Harari also thinks the then-"unnecessary eaters" who will lose their jobs because of AI should be filled up with drugs and computer games... Thank you for not just downright killing us, Mr. Harari...
@@ytb40 stop spreading misinformation. He never said they should. He said they would as a consequence of this impending systemic societal structural failure.
@@manuelriveros2911 It was his "best guess", he said: "to keep them happy with drugs and computer games". Since he belongs to the ones who make the decisions, it's not so hard to tell what he is aiming for... since, as he explicitly says, people without work are "useless people"...
Yuval Harari is a major league dirtbag. Exactly what you'd expect from one of them.
Cool!
A method that sounds really good but I'd not heard of it.
1:27 You mean letting the fumes invigorate the muse? The Delphic method seems to have issues
Bomb #20 in Carpenter's "Dark Star" had a point: "You are false data. Therefore I shall ignore you. False data can act only as a distraction. Therefore, I shall refuse to perceive." Of course, it then believed itself to be God and blew itself up uttering "Let there be light" 😆
Thank you for the video.
Instead of asking them whether a megadeath or extinction event is probably going to happen, we need to be asking if it's possible, and then make laws accordingly. How much risk of extinction are we willing to gamble in exchange for computers doing our thinking for us?
When I got my PhD in CompSci in 1987 in the field of AI, the "experts" said that AGI would not be available for another 100 years, if ever. C3P0 and R2D2 were impossible.
A bright spot to your worry would be that AI could also be used to help root out bunk papers or statistical errors in research. Maybe helping reduce the replication problem.
I'm not sure I'm too worried, because people have always believed what they want to believe. For all the alarm ringing about misinformation, we've already had that issue throughout human history.
Perhaps it will matter more which journal you get your information from.
Regarding megadeaths: assuming that is the case, would that be net deaths? There is a strong case for AI preventing millions of deaths, perhaps even billions.
What's the track record of the effectiveness of the delphi method? Many things that don't work are still popular.
The Delphi method could also be considered a coercion method, where the superclass gathers the opinions that favor their plans and keeps going back to the people, over and over, repeating those plans while bringing in more and more of their favorite experts.
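The iterative pull toward consensus that these comments worry about is easy to see in a toy model. Everything below is invented for illustration (the update rule, the weights, the initial estimates); it is not the procedure the actual paper used:

```python
import statistics

def delphi_round(estimates, weight=0.5):
    """Toy Delphi round: each expert moves partway toward the group median."""
    median = statistics.median(estimates)
    return [e + weight * (median - e) for e in estimates]

# Hypothetical initial probability estimates (percent) from 12 "experts".
estimates = [5, 10, 10, 15, 20, 25, 30, 40, 50, 60, 80, 95]
for _ in range(5):
    estimates = delphi_round(estimates)

spread = max(estimates) - min(estimates)
print(spread)  # shrinks every round, regardless of who was actually right
```

With this rule the spread halves each round (90 → 45 → 22.5 → ...), which illustrates the criticism: the method manufactures agreement by construction, without any mechanism for checking the estimates against reality.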
This is my take on the subject.
Essentially, AI is only capable of what we make it able to achieve; it won't rebel unless we design it to do so. We know how to make AI and how to train it, but we don't understand how AI thinks, since it only learns to mimic us while functioning in a very alien way compared to our minds. That lack of understanding is the main concern with AI (it heightens the chance of human error), but the tech itself is not inherently dangerous. Humans, however, are, especially malicious actors, politicians, profit-driven investors, corporate leaders and so on, who care not about the ramifications as long as they do not cost them directly.
AI won't end civilization; it will just crush a few professions and start a planet-wide wave of unemployment, poverty, homelessness, and probably a bunch of wars, due to economic and sociological pressures that are already present. Automation already started this in the 20th century, when modern machinery and robotics effectively ended the jobs of most factory workers. AI simply turns this up a notch, but not by much. With or without AI, a socioeconomic crisis is looming.
In conclusion, technology should not be the scapegoat when it is humanity that abuses and misuses it. When I was little, my grandpa once told me that I should not fear the dead, for they will never hurt anyone; if I should fear anything, it's the living. Same logic here.
LOL, your "guns don't kill people" argument is hilariously simplistic. How did you possibly miss the one essential thing about AI development: that the goal is to make software that can think for itself? THAT is what the concern is. If it can think for itself, do its own research, come up with its own conclusions, and formulate ideas and plans, how is that going to be controlled? Sure, morality may be encoded, but how malleable is human morality? Look how half the US population after 9/11 suddenly thought torture was OK. Look at how your own morality depends on whether you're paid to do something. Etc., etc. If software can reason, it can rationalize and justify any action it reasons to be correct.
Mustaine with an AI voice box will be epic .
The old saying used to be, "Don't believe anything you read, and only half of what you see." Now, you won't be able to believe anything you hear OR see. Not sure what road this is going to take us down as a species, but the destination doesn't seem very promising.
2040 Solar Flare to take out the Grid.
12,000 Sun Solar Cycle
Knowledge is a Super Power 👁️
Am I correct that they only asked software engineers about their predictions for the future of AI?
Software engineers are generally not AI experts. I am a PhD student in the AI field, and the software engineers I know are hardly any more knowledgeable about current AI technology than the general public, let alone AI in the future.
Sabine, you said you are confident that the interviewees in the study know what they are talking about, would you be able to tell us more on your reasoning as to why you have come to this conclusion? Just curious.
They're not Software Engineers I think, they are referred to in the paper as "respected futurists" and later as "experts". They are also anonymous. It's kind of vague, really.
@@guy9360 That's even worse imo
It makes little sense today to ask experts for their opinion on the future, as the independence of scientists and their one-sided intellectual orientation are likely to be shaped by media loyal to governments and by digital media. Incidentally, things could turn out quite differently than you think, in more ways than one.
Seems to me the experts are on the opposite side of media, government, and corporations. Everyone with the money wants to exploit AI and make it larger, while every reputable researcher advocates in strong terms for restraint and international oversight.
There was a nice video by "Upper Echelon" on the topic of LLMs in scientific publishing. Basically, there are certain words that GPT loves to use, such as "delve" or "meticulously". Searching open databases of paper texts and filtering for these keywords shows that their usage suddenly spiked in 2023. So GPT already seems to be used a ton in scientific publishing.
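That kind of keyword-frequency check is simple to sketch. The marker words below follow the comment; the "abstracts" and their years are invented purely for illustration:

```python
import re

# Words that, per the comment, spiked in 2023 and may hint at LLM-assisted text.
MARKERS = {"delve", "meticulously", "tapestry", "underscore"}

def marker_rate(texts):
    """Fraction of texts containing at least one marker word."""
    hits = 0
    for text in texts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & MARKERS:
            hits += 1
    return hits / len(texts)

abstracts_2021 = ["We study graph algorithms.",          # hypothetical corpus
                  "Results underscore nothing."]
abstracts_2023 = ["We delve into the tapestry of data.",
                  "We meticulously evaluate models."]
print(marker_rate(abstracts_2021), marker_rate(abstracts_2023))
```

Of course, a real analysis would need base rates per year and far larger corpora; a word like "underscore" was common long before LLMs, which is exactly why raw keyword counts only suggest, not prove, machine authorship.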
I remember when we did things in person like meeting other people and talking to them. And we can return to that for many transactions.
The one thing I know about expert predictions in tech, for AI or quantum computing, is that the experts don't know either. Speculation about technology can only go so far; experts can get so "used" to their position on the exponential curve that they think things will take much longer than they really will. AI, for example, could reach AGI this year. A lot of the problem around AGI is not our inability to replicate the required neural network, but rather which algorithm is right for the job. AGI could very much be "achieved" with pre-trained models, to some people's surprise; how it would store new data is the problem. For quantum computing, a myriad of breakthroughs, some as recent as the last six months, are already on their way; photon-based quantum computing is already a serious contender.
Lastly, on the topic of AI, we will also soon have chips specifically suited to statically loading pre-trained models, which would take little power and virtually no memory to run. There are already videos about a company that has used what are basically SD cards as the equivalent of NPUs. People's expectations of this kind of technology are again radically off; they assume more powerful hardware would be required, but in reality it will soon be the case that pre-trained models run on chips, or entire PCI cards, possibly worth less than a hundred USD.
They didn't ask AI experts, only software engineers. But even if they did, most have no idea where the field will be in 5 years.
@@Dylan-hy2zj Agreed, though I believe they were software engineers who have worked in those industries, right? And you're right about predictions anyway.
I've been having trouble distinguishing fact from fiction pretty much my entire life
Your lifespan is probably not that long.
The problem right now is that the "safety" they are building into large language models is just a bunch of lecturing on the ethics of fulfilling a completely innocuous request. "AI, how do I code a bubble sort" "I'm sorry as an AI model, I cannot fulfill potentially dangerous requests that may harm or exploit individuals or communities."
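For what it's worth, the bubble sort in that example request is about as harmless as code gets; here's a minimal version of what such an innocuous request is asking for:

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until the list is sorted."""
    items = list(items)  # work on a copy; don't mutate the caller's list
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # early exit: already sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```

Which rather underlines the comment's point: refusing this on safety grounds is lecturing, not safety.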
A group can be wrong if a major principle that is false is considered true. This is often the case with newer information, which is exactly why this method could produce damage instead of a solution. Even misinformation, where most people know the information is questionable, is less damaging than data mistakes and "novel" problems where no examples exist.
Indeed, there will be conflicting information or else these groups would not exist
To add: how often have you seen 20-year-old predictions of the future be accurate?
@@timboutilier4769 A shockingly large fraction of the ones made unintentionally by The Onion, but other than that, almost none.
I wouldn't say RAND Corporation is "non-profit" as it's funded by for-profit big corporations.
Is their corporation classified as a non-profit?
Indeed:
On the related subject of the Fermi Paradox there can be only three reasonable solutions based on admittedly limited data:
-We are making the wrong inferences because we don't understand
-We see nobody because we are functionally alone
-There is a functionally universal great filter that we must be about to inevitably stumble upon*
*Based on today's episode, my best bet would not be the usual 'enemy action' reasons, but rather our collectively stumbling into a technological mire beyond our ability to navigate. If you can't function in your environment, artificial or not, you die.
Grabby aliens. The universe is just too young, and we in fact exist precisely because aliens didn't colonize our planet, so we were able to evolve. As we and they spread throughout the next few billion years, we will take over and prevent anyone else from evolving.
There's also the rare Earth/Sun/group solution, or similar ones like phosphorus scarcity, or various quiet-alien solutions, e.g. efficiency/miniaturization.
Being (functionally) alone due to distance becomes likely if one assumes these early filters.
Or we are the first to become advanced.
The problem with educating people in relevant positions is that it's going to be almost impossible unless there's a specific task force made up of young, tech-savvy people. In Europe and the US, basically all legislatures are full of old people who can barely use their phones. We saw that in so many hearings about different things, including net neutrality, big tech companies, and such. Most techies don't yet really grasp the consequences of AI, so it's going to be basically impossible for the standard people in legislatures to grasp them.
I think the greatest downside of AI will be that it replaces humans in fields, diminishing the creative force humans bring to those fields, causing stagnation in every area they enter. AI has no motivation to improve anything. No curiosity for example. No motivation array.
Only halfway into the video, I have mainly two... objections? This is based on interviewing twelve, if I heard correctly, experts. 12. TWELVE. What a baseline. In addition to that, and I do think this is important, they are not even experts in, let's say, 'comparative historic prediction sciences', but in the very narrow specialty of software engineering. The first book I'd like to recommend to the authors of this study is 2016's "Superforecasting: The Art and Science of Prediction". TL;DR: expert predictions 15 years into the future are fundamentally exactly as bad as yours and mine: flipping coins plus publicity, because, experts. (Also, don't the feedback loops of this Delphi process encourage in-group confirmation biases and leadership alignment?) I'm, uhm, let's say, skeptical in a very calm way about this.
The problem with larger numbers of experts is that the Delphi method doesn't scale very well. Using it with 100 experts would take exponentially longer.
@@bytefu Fair enough. At the same time, bad methodology doesn't produce more reliable outcomes just because better methodology is too hard to do. And if the method or the question is just plain wrong for the answer you're looking for, it might even be irrelevant how many experts you ask. What I took away from "Superforecasting" is that good predictions tend to come from broad "data", not deep "data" (aka experts).
@@frankburkhard5701 Analysing broader data sets is probably one of the cases where AIs can be very useful, because humans are indeed pretty bad at that. After all, nobody can be an expert in 10 different fields. But right now I wouldn't try to make major predictions about anything even 5 years ahead, as AI research brings more surprises and AIs become more pervasive every day. And I really hope Eliezer Yudkowsky is wrong, otherwise there will be nobody left to predict anything.
Okay, I have to admit that the quip about whether "Do Androids dream of Electric Sheep", and does that make them vegan went completely over my head. I've read the book, watched the movie multiple times, and even watched the sequel. Heck, I've read a book of yours and watched many of your videos. But I just don't get it other than as a potentially amusing word salad.
It's a joke, son.
@@catserver8577 Yeah, but like I said, I don't get it.
It’s not an intelligent phrase. The logic goes like this:
- The Android is an electric human
- the human counts sheep to fall asleep (in folk lore)
- thus, does the Android also count sheep to fall asleep, and are the sheep electric because the Android is electric?
- following that, if the Android eats the electric sheep, is it not a carnivorous act?
It’s as simple as that. Not an intelligent thought process.
@@Oshiiiiiiiiiiii if you know the answer, then explain it. If you don’t, you’re only trolling. To invoke veganism implies eating the sheep. There are not many variables at play here.
What a sad bunch people are, seemingly hopelessly lost.