AI experts make predictions for 2040. I was a little surprised. | Science News

  • Published: 3 Feb 2024
  • 😍Special Offer! 👉 Use our link joinnautilus.com/SABINE to get 15% off your membership!
    My new essay is here: nautil.us/what-physicists-hav...
    We’ve seen a lot of headlines in the past year about how dangerous AI is, and about how overblown these fears are. I’ve found it hard to make sense of this discussion. If only someone could systematically interview experts and figure out what they’re worried about. Well, a group of researchers from the UK has done exactly that and just published their results. What they have found is not very reassuring. Let’s have a look.
    The paper is here: ieeexplore.ieee.org/document/...
    🤓 Check out our new quiz app ➜ quizwithit.com/
    💌 Support us on Donatebox ➜ donorbox.org/swtg
    📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
    👉 Transcript with links to references on Patreon ➜ / sabine
    📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
    👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
    🔗 Join this channel to get access to perks ➜
    / @sabinehossenfelder
    🖼️ On instagram ➜ / sciencewtg
    #science #sciencenews #technews
  • Science

Comments • 2.2K

  • @SabineHossenfelder · 3 months ago +104

    My new essay in Nautilus magazine about Jonathan Oppenheim's theory of post-quantum gravity is here nautil.us/what-physicists-have-been-missing-506607/

    • @Thomas-gk42 · 3 months ago +3

      Interesting article; Oppenheim's approach is easy to digest for laypersons.

    • @osmosisjones4912 · 3 months ago +3

      Don't make them smarter than humans

    • @AquarianSoulTimeTraveler · 3 months ago +2

      Symphony of Destruction! Mega death🤘🤘🤘💚💜

    • @osmosisjones4912 · 3 months ago +4

      How is it that when humans find patterns, it's an example of how flawed the human brain is, but when computers do it, it shows how advanced they are?

    • @AquarianSoulTimeTraveler · 3 months ago +1

      I would love to participate in the Delphi method... I literally have a binary communications implant which allows me answers to the right questions when I ask them.

  • @tinytim71301 · 3 months ago +1108

    Megadeth! 🤘🏼🤘🏼🤘🏼

    • @VolkerHett · 3 months ago +32

      I'd like Slayer as support! When we have one last metal event, make it last!

    • @mpetrison3799 · 3 months ago +10

      😡😡😡 That damn drummer is going to lock everyone up for their thousands of copyright felonies, isn't he?

    • @Maescool · 3 months ago +58

      I did not expect Sabine to be a metal-head

    • @suicideistheanswer369 · 3 months ago +36

      you take a mortal man

    • @DW-indeed · 3 months ago +18

      99 ways to die

  • @ddj2010 · 3 months ago +293

    We probably need a “truth bank”. Like a seed bank, but hard copies of knowledge locked away.

    • @Kitties-of-Doom · 3 months ago

      The controlling elite has it, and the herd will always be confused and fed shit

    • @Kim_YoJong · 3 months ago +18

      Thought criminal

    • @SimonFrack · 3 months ago +14

      Someone print out Wikipedia?

    • @juliusapriadi · 3 months ago +41

      or simply a good old library full of physical books - should work for the more timeless sciences like mathematics, physics, etc.

    • @aliveandwellinisrael2507 · 3 months ago +41

      Careful who you put in charge of that

  • @benyomovod6904 · 3 months ago +66

    I am a senior programmer, and I see the juniors repeat the old and well-known weaknesses and failures. Mankind does not learn; we keep making the same errors, just with changing tools.

  • @virajelix · 3 months ago +81

    Makes sense! Megadeth already predicted it in "Symphony of Destruction"!

    • @JEQvideos · 3 months ago +5

      I really thought this was going to be about an AI trained on Dave Mustaine.

    • @MightyJabroni · 3 months ago +3

      Ah, the Megadeath pun... I was beaten to it.
      No internet points for my slow ass today!

    • @Buck-kf6xq · 3 months ago +1

      Beat me to it.

  • @krishp1104 · 3 months ago +966

    As a software developer and computer scientist: the majority of computer scientists I've met SHOULD NOT be making these decisions

    • @tarmaque · 3 months ago +143

      Kind of a corollary to the maxim "Those who seek power do not have the ability to responsibly use it."

    • @maxsr3236 · 3 months ago +59

      still better than politicians or billionaires

    • @ecos889 · 3 months ago

      It depends on why they seek power for themselves; power will illuminate them in their true forms. Those who seek it out to help others will later be revered as saints - see Norman Borlaug as an example of that - but those who seek out that kind of power have no interest in absolute power, just enough to make the changes they want to make.
      Those who seek power to do good but are ill-equipped to do any good, or do not know exactly what form that good will take, will lose sight of the good and do worse than those who seek power for themselves.
      So it's only those who seek the poisoned chalice that is power, who take only just enough and for a set purpose, who do well to drink it, in a small amount. In the case of Norman, for example, his power was in agriscience, and he helped found and contribute to a movement that shifted agriculture in a way that saved a billion lives from starvation in his lifetime. But those kinds of people are rare and far between: those who seek a limited amount of power, know when to stop gaining power, and then use that power for the good of everyone else.
      It's why you see so many rich people who were either born into it or just wanted power for power's sake, who often squander it in stupid dick-measuring contests such as buying expensive license plates, fleets of luxury cars and the like - much to our mercy that they are this stupid, with no interest in doing more - or who are so obsessed with power that they will trample over the lives of millions to get it, just to remove that insecure itch they have. Whilst a poorer person who sees that power being wasted may seek it to solve an issue, and so long as they stay focused on the issue even once the power is gained, it can do good - so long as it's not, say, ill-formed like with that German party who started World War 2 over issues about race. (I have to word it like this as I suspect youtube would not allow me to post the comment otherwise lol)
      But yeah, if you have an issue you want to help solve and need power to do it, know how much power you need, gain it, and from there see what you can do to solve it. In my case, I am an agriscientist who may not even hit the levels of good that Norman pulled off, but I hope with the power I gain to help out with the food security issues caused by an ever-changing climate. That takes the power of experience, communication and some degree of persuasion, as well as targeting only projects that will contribute to a net good, but I can only hope that it pans out.
      @@tarmaque

    • @whizzie3928 · 3 months ago +48

      As a software developer myself, I think they are more than capable of making these decisions.

    • @joshcryer · 3 months ago

      They're idiots. GIGO. Please for fucks sake do not be influenced by people juggling the alchemy that is ANNs. There are people trying really fucking hard scientifically and empirically to figure this shit out.

  • @krishp1104 · 3 months ago +217

    3:00 The amount of automated AI-generated slop videos on RUclips is already a lot more than you think. There are so many robotic voices that just narrate word-for-word what some article online said

    • @tsz5868 · 3 months ago +3

      I watch only for fun, like the "Harry Potter but in Korea" or Balenciaga kind of content... otherwise YouTube becomes more and more uninteresting, and I don't even bother watching human-generated content. I hope the bankruptcy of YouTube is caused by people going outside more.

    • @raraavis7782 · 3 months ago +13

      I got so many of those, in my Shorts especially, for a while. But after I 'do not recommend this channel again'-ed about 50 or so, RUclips pretty much stopped with the nonsense. The downside is that I now mostly only get recommendations for channels I already know. Rarely something new and interesting 🤷🏻‍♀️

    • @ecos889 · 3 months ago +5

      Yeah, they are known as content farms, and they first took over the cooking and hobby youtube space. It's a system that, once built up, is easy enough to proliferate and use to push pseudo-science, and youtube is one of those places you do not want saturated with that stuff; but sadly, software that runs automatically once set up is an easy way to make money. RUclips really should have a filter program that flags the sets of voices these channels use, given the high likelihood of misinformation in the videos. Then again, the conspiracy theories and pseudo-science made by humans are not dealt with either - "aliens built the pyramids" videos presented as fact, or the people advocating drinking bleach to cure all that ails you, basically snake-oil salespeople - which leads me to think it's allowed to fester and eat away at human content creators.

    • @krishp1104 · 3 months ago +5

      @@ecos889 The problem with your idea is that voice AI is already so realistic that you can't tell when a human is speaking vs when an AI is (if the scammer uses a premium service to make the voice)

    • @ecos889 · 3 months ago +7

      @@krishp1104 I know they are realistic, but they use the same voice most of the time. If the same voice is used for hundreds of channels, it will be easy enough to detect and get rid of.

  • @AvangionQ · 3 months ago +12

    What worries me the most about generative artificial intelligence and automation?
    Within 🔟 years, half of all jobs will be fully automated, with the profits going to megacorporations and billionaires.
    Our society hasn't yet come to grips with what to do to prevent widespread unemployment leading to suffering.
    Inequality and desperation, more than anything else, are what lead to dystopian societies, until violent revolution sways.

    • @robreid6195 · 1 month ago +2

      Yeah, inequality is already going nuts. There are more layers on the shit sandwich to come.

  • @calebkz · 3 months ago +94

    One of the problems I’ve observed is our shift away from the classical scientific method. Instead of relying on concrete experimentation, we have migrated to data analysis; the latter relies on currently known information and is easier to manipulate. If we moved back into the realm of the concrete and classical methodology, it wouldn’t matter whether AI is used, because the results would be verifiable. The issue is more with abstract science that relies on existing data, which is easier to manipulate.

    • @dmitripogosian5084 · 3 months ago +2

      Very much so. I would also add here the expanded role of statistics, which is important in semi-quantitative sciences but sometimes takes us away from making progress in precise sciences such as physics.

    • @CrucialFlowResearch · 3 months ago +2

      Pure mathematics is verifiable by anybody who cares to spend their time thinking about it. Mathematics scams are uncommon.

    • @theslay66 · 3 months ago +18

      The problem is, these days we can't just rely on a few experiments to draw conclusions. We're looking for things that are so faint and so subtle, with a signal-to-noise ratio so small, that we're forced to use a statistical approach if we want to find what we are looking for. As we accumulate more data, we increase the precision, as we can get rid of the randomness and noise more easily.
      The era of a single experiment validating a theory is finished.
      But you're wrong if you think that this data is easier to manipulate than a single result. It's actually the opposite ("wrong" data can easily be filtered out); that's precisely why we use it. And if science seems to become more "abstract", it's simply because whatever could be treated by a more "concrete" science has already been more or less fully explored.

    • @florin-titusniculescu5871 · 3 months ago +3

      I see that you're trying to shake our belief in Science, and the Inquisition will see that you repent 🤣

    • @SteveChiller · 3 months ago

      Must be nice to look to your left and right and find people who think just like you do.

  • @Maescool · 3 months ago +172

    Other than fake news and papers, I fear for the amount of scams using AI

    • @DrDeuteron · 3 months ago +1

      number of scams.
      When did ppl stop using the right noun, "number"? "amount of scams" just sounds dumb.

    • @Australiaisupsidedown · 3 months ago +48

      @@DrDeuteron it's people not ppl

    • @DrDeuteron · 3 months ago

      @@Australiaisupsidedown do you know anything about Clifford Algebra?

    • @psy8917 · 3 months ago +30

      @@DrDeuteron do you know anything about my moist ass hair?

    • @larryjonn9451 · 3 months ago +3

      Yes, he's a nice fella @@DrDeuteron

  • @YoungGandalf2325 · 3 months ago +92

    I figured the "Delphi Method" would include the use of hallucinogenic drugs.

    • @CAThompson · 3 months ago +9

      And sitting at fuming volcanic vents.

    • @TomTomicMic · 3 months ago +9

      No, just tea and biscuits and a sensible shirt, trousers, brown brogues with shoe laces, patches of leather on the elbows of their jackets, a vape pipe and a beard - and that's just the women!?!

    • @CAThompson · 3 months ago +4

      @@TomTomicMic Nowadays maybe, but the OG Delphi Oracle was tripping biscuits from inhaling sulfur fumes and not coherent, so (check this if you want; I learnt this in Ancient History class in high school over 20 years ago) someone else would 'translate' these ramblings and mumblings.
      An AI 'hallucinating' might come up with some suitably incoherent/strange pronouncements.

    • @artisans8521 · 3 months ago +1

      Some people have more historic knowledge than others.

    • @smkh2890 · 3 months ago +2

      There is a spring at Delphi that has the most wonderful water I have ever tasted!

  • @johnjdumas · 3 months ago +8

    Just one more problem: papers without results don't get published. Multiple null findings would go a long way toward dismissing, or casting suspicion on, untrue results. Null or negative results should be stored for reference.

  • @TrueNorth333 · 3 months ago +5

    Thank you for creating one of the best summaries on this topic I have seen so far :) Keep up the good work - all the best from Oslo, Norway

  • @MrTheoJ · 3 months ago +145

    I'm just glad that Rutger Hauer will come back

    • @ralphmacchiato3761 · 3 months ago +10

      The chance to begin again in a golden land of opportunity and adventure!

    • @spvillano · 3 months ago +2

      @@ralphmacchiato3761 aka, dust oneself off and try, try again.

    • @JZsBFF · 3 months ago +3

      Fine with me as long as you keep Harrison Ford away from flying stuff.

    • @EdgarRoock · 3 months ago +12

      Didn't he watch Electric Sheep glittering in the dark near the Tannhäuser Gate?

    • @marccas10 · 3 months ago +8

      I've seen things you people would not believe.

  • @arctic_haze · 3 months ago +178

    A colleague sent me yesterday the results of a plagiarism check of a manuscript. Everything was blocks of different colors denoting different origins in earlier published papers. He asked me whether these were the results of using AI or just good old plagiarism. I think it was the latter (the copied sections were too long for AI). People do not need AI to produce false manuscripts or false images. AI is simply a more sophisticated way of doing it.

    • @Thomas-gk42 · 3 months ago +6

      Wouldn't AI be a good tool to recognize plagiarism too?

    • @arctic_haze · 3 months ago +14

      @@Thomas-gk42 It could. But it is overkill. A simple text search in the database of published papers is enough. Of course you need a human to check whether the copying is of the accepted kind (i.e., the manuscript marks it as a quote). AI could do that very well. Of course, if we can trust it 😜
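      The "simple text search" idea above is often implemented as word n-gram (shingle) matching: flag manuscript passages that reuse long word runs from already-published papers. A minimal sketch, with an invented corpus and example texts:

      ```python
      # Toy sketch of shingle-based plagiarism screening. The corpus,
      # shingle length, and example texts below are invented.

      def shingles(text, n=8):
          """Return the set of n-word shingles (overlapping n-grams) in a text."""
          words = text.lower().split()
          return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

      def overlap_report(manuscript, corpus, n=8):
          """Map each corpus title to the shingles it shares with the manuscript."""
          ms = shingles(manuscript, n)
          return {title: sorted(ms & shingles(doc, n))
                  for title, doc in corpus.items()
                  if ms & shingles(doc, n)}

      corpus = {
          "Paper A": ("the delphi method aggregates expert judgment over several "
                      "structured rounds of anonymous feedback"),
          "Paper B": "an entirely unrelated text about superconducting qubits",
      }
      manuscript = ("our study shows that the delphi method aggregates expert "
                    "judgment over several structured rounds of anonymous feedback "
                    "as prior work found")

      hits = overlap_report(manuscript, corpus)
      print(hits)  # only "Paper A" shares 8-word shingles with the manuscript
      ```

      A human (or, as the reply suggests, an AI) would still have to check whether each flagged run is a marked quotation.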

    • @Thomas-gk42 · 3 months ago

      @@arctic_haze thanks

    • @coryc9040 · 3 months ago +14

      It's not just more sophisticated: it can be more easily automated, and it can be of higher quality. Additionally, it's not difficult to imagine LLMs trained to produce high-quality text that is indistinguishable from genuine, non-plagiarized, human-generated text.

    • @arctic_haze · 3 months ago +8

      @@coryc9040 What I am afraid of is not AI creating nonsensical manuscripts with made-up citations but AI making sensible papers with correct quotes. In other words, failed scientists using AI to advance their careers.

  • @davewebster5120 · 3 months ago

    Loved the references in this, crazy stuff!

  • @patrickl6866 · 3 months ago +6

    Extremely interesting and relevant discussion. I believe that one of the biggest challenges that people (especially the less critical thinkers among us) face is an increasing difficulty in sorting fake news from real news. Real and primarily unbiased journalism is dead (and we need to rename what is today called "news/journalism/reporting" to something else, because it certainly doesn't meet that standard). The internet provides a microphone to anyone and everyone with an opinion, and AI will exponentially increase the challenge (as you point out), making it virtually impossible to separate fact from fake/fiction/opinion. My personal view is that a fundamental and important question to address is what can be done to make it possible for the masses (us) to receive real information so we can make decisions on important matters. I certainly don't know the answer, but this seems to be the fundamental task ahead of us if we are not to recede into chaos.

    • @lightbearer313 · 3 months ago

      Nations will vary in how capable they are of responding to any AI crisis. Any nation that allows many people (politicians and evangelists, for example) to openly tell lies will be poor at dealing with this, unless it has major changes in societal tolerance of such behaviour.

    • @richardnichols2064 · 3 months ago

      Nautilus magazine is a starting point, she says.

  • @davidroetzel5500 · 3 months ago +77

    Here in the US most legislators are lawyers. My thought is that this is a group highly unsuited to detecting AI-generated content. They deal in word-salad content all day, purposefully.

    • @ytb40 · 3 months ago +2

      Definitely.

    • @NakedProphet · 3 months ago +4

      "Unless God intervenes, no life would survive". Starting to make sense. We have become unmoored from reason and conscience. Death by delirium. Strong delusion😊.

    • @dmitripogosian5084 · 3 months ago +5

      You have a rather approximate understanding of what lawyers do.

    • @seekingishwara737 · 3 months ago +1

      @@NakedProphet The tower of Babel, over and over again...yawn.

    • @baomao7243 · 3 months ago

      @@dmitripogosian5084 While “approximate,” this understanding is (regrettably) fairly close to reality.

  • @dandupaysdegex · 3 months ago +26

    From now on, a computer's power will be measured in gigadeaths per second.

  • @ChicoCuantico · 3 months ago +9

    Is there any historical precedent of a group of experts getting a forecast right 16 years out?

  • @SwissPGO · 3 months ago +3

    I agree with you 100% - especially since I've already had an overdose of scientific fraud, even without AI, when reviewing papers. I'm no longer a reviewer, but I'm really worried for the scientific community and society as a whole.😢

    • @stevengordon3271 · 3 months ago +1

      We can hope that fraudulent science will eventually fall by the wayside when it does not pan out. It is fraudulent evidence in court (or in people's reputations or politics?) that can have more dire consequences that cannot be undone.

    • @SwissPGO · 3 months ago +1

      @@stevengordon3271 Fraudulent science is very damaging too - especially in the medical field, but not only there. And if AI starts training on fraudulent papers, the consequences can indeed cause megadeaths.
      Let's hope scientists remain critically minded. Sabine's type is definitely what's needed.

  • @colinhiggs70 · 3 months ago +101

    My go-to, well-known, "what is real?" Philip K. Dick story would have to be "We Can Remember It for You Wholesale", a.k.a. Total Recall.

    • @tesseract_1982 · 3 months ago +9

      Who needs that silly machine when you can munch some Chew-Z and just create your own universe? And you only need to take it once! 😅

    • @MrPossomtail · 3 months ago +2

      Ubik too

    • @Xanaduum · 3 months ago

      Dehydrated Deity...

  • @TheIgnoramus · 3 months ago +2

    Love the arguments. Productive argument is the salve of doubt. Let’s argue our way to a better future!

  • @Dan-ud8hz · 3 months ago +1

    "Optimism and stupidity are nearly synonymous."
    Admiral H.G. Rickover

  • @fernandosalazarsoto159 · 3 months ago +50

    Sounds to me like a Symphony of Destruction...

    • @nightwishlover8913 · 3 months ago +2

      lol

    • @JZsBFF · 3 months ago

      Cheer up. We'll never get there. Rapid climate change will kill us all well before that.

    • @Human_01 · 3 months ago +1

      Dramatic. Only when AI is paired with humanoid robotics will things finally get interesting.

    • @grankmisguided · 3 months ago +4

      Damn, I clicked through to make the same joke

    • @charleswettish8701 · 3 months ago +3

      @@Human_01 Just FYI, "Symphony of Destruction" is a Megadeth song. :)

  • @dysfunc121 · 3 months ago +39

    The Delphi method is interesting and all, but 12 experts in such a vast and hotly debated field seems like a tiny little drop in the ocean.

    • @yanniammari1491 · 3 months ago +5

      Idk if asking futurists for anything but a grande iced chai is a good idea

    • @krisrp0 · 3 months ago +6

      Yeah, there seem to be flaws in this methodology. It basically creates an echo chamber. I think it's important to get opinions and feedback from people in other fields, like social science.

    • @codahighland · 3 months ago +2

      @@krisrp0 That's intentional. The idea is that you give the opinions the opportunity to converge, on the basis that the experts are experts. It doesn't reveal a guaranteed truth, but it does highlight points of consensus and points of controversy. A dozen experts can still be wrong, of course, but if the initial sampling is sufficiently diverse and representative, it's hard to do better - if this method doesn't work, it's unlikely that there's another method that would have done better.
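      The converge-or-echo-chamber tension in this thread can be made concrete with a toy simulation (all numbers invented): in each anonymous round, every expert revises their forecast partway toward the panel median, so the spread shrinks whether or not the median is anywhere near the truth.

      ```python
      import statistics

      # Toy Delphi-style panel (all numbers invented): each round, every
      # expert moves their forecast halfway toward the panel median. The
      # shrinking spread illustrates both the convergence the method aims
      # for and the groupthink worry raised above.

      def delphi_round(estimates, pull=0.5):
          """One feedback round: pull each estimate toward the current median."""
          med = statistics.median(estimates)
          return [e + pull * (med - e) for e in estimates]

      estimates = [10.0, 20.0, 30.0, 60.0, 90.0]  # initial expert forecasts
      for _ in range(3):
          estimates = delphi_round(estimates)

      spread = max(estimates) - min(estimates)
      print(estimates, spread)  # spread falls from 80.0 to 10.0 after 3 rounds
      ```

      Note the final consensus sits near the initial median regardless of which expert, if any, was right.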

  • @arson4852 · 3 months ago +3

    As a current AI bachelor's student (planning to do a master's in philosophy), I'm definitely going to read the paper in question tomorrow.
    I generally agree with the main points you've showcased here.

  • @reltech · 3 months ago +1

    One thing there may be demand for is secure devices that last until they wear out: connected devices that last, and standardization that doesn't keep changing.

  • @BFKate · 3 months ago +140

    The Philip K. Dick dystopia I think is most accurate for where we're going is A Scanner Darkly, where identity becomes so submerged by layers of security, in a world of designer psychedelics, that we end up with cops accidentally conducting surveillance on themselves, and no one knows who anyone is anymore.

    • @brothermine2292 · 3 months ago

      Considering how unfettered neuroscience research is, Total Recall comes to mind. In other words, manipulation of people's beliefs in order to compel willing compliance with the agenda of an elite class.

    • @MrPiccolop · 3 months ago +10

      The rotoscoped movie with Keanu Reeves about this is great.

    • @tommymclaughlin-artist · 3 months ago +1

      Came here to say this

    • @brothermine2292 · 3 months ago +12

      My reply about Total Recall was deleted. I guess RUclips doesn't like it when someone mentions a risk associated with nurosience. (Intentionally misspelled here to keep the algo calm). Perhaps someday it will start deleting comments about AI risks too.

    • @DJWESG1 · 3 months ago +1

      Religious groups already pull the strings.

  • @nathanmays7926 · 3 months ago +71

    3:59 Props to Sabine for purchasing all those rare monkey NFTs so she could share them with us.
    Fortunately she bought them all two days ago, so she’s only out $3.

    • @probablythedm1669 · 3 months ago +24

      Limited digital assets are the dumbest idea I've ever heard anyone seriously push for, when it's so obviously a stupid idea. 🤦‍♂️
      "Why don't we just throw away this infinite resource, force scarcity, and speculate on it? How can this possibly go wrong?!"
      Me: Ctrl + C.
      Investor: NO DON'T IT'S MINE!
      Me: Ctrl + V. And now I too have a copy! Don't worry, you still have your copy. That's the benefit of digital! 🤓

    • @nicolasolton · 3 months ago

      Trump seems to be doing well with his NFT offerings. MAGAs are close to being the ultimate, ideal suckers.

    • @nicolasolton · 3 months ago

      Trump NFTs are a bigly smart way for Trump to gain additional income from MAGA suckers.

    • @_WeDontKnow_ · 3 months ago

      It's okay! They're gonna skyrocket in value eventually and she'll be able to fli-
      Nevermind, they already dropped in value somehow

    • @timothywcrane · 3 months ago

      @@probablythedm1669 It was putting sledgehammers in front of storefront windows.
      NFT stupidity could not be anything but aping, BECAUSE of the hype. The one true use had been rolling for years on private chains and DAGs.
      The SEC sat back and laughed.
      The valuations made me go through 3 chains and more than that many languages, waiting for monkey mania to die and for people to start asking what fungibility or finality was.
      I have AI creations that took real skill to make 3 years ago. I thought I might have been ahead of the curve (ever so slightly).
      When the "layered" collection crap started showing up, I felt that itch from experience, and I set 8 TB to the side and waited.
      Before you knew it, the whole world thought blockchain was a decentralized image scam and an eco threat, so they converted it into centralized stakeholder coin: too hot for normies, but now mature enough and captured for (qualified) investors. How the F*k would you feel if you had F*ng MMA fighters and Russian TikTok rappers as the face of the legit tech underneath? And then there was?/is?/Tether... Come ON, Coffeezilla talked more about these coins than the exchange boards pumping and dumping them.
      This was not useless tech... it was a shameless FOMO roller-coaster with jet engines.
      I have some cool avatar images, though. While everyone else sold their monkeys, I taught my pics to talk, give interviews, do podcasts, do research, write screenplays, turn them into movies and do consulting work.
      I guess nobody wanted utility but me. I still have it all on SSD. Do I now want to "pay gas" to wind up in someone else's AI model? Not on a no-privacy, publicly manipulated, broke chain.
      NFTs were a feature, not a bug. The patch was the bug.

  • @JAAB9296 · 3 months ago

    Great video as always!

  • @WPGinfo · 3 months ago +1

    Hi Sabine, interesting topic indeed! What I've missed so far is the 'how are we going to ensure the safe application of AI?!' Did these experts provide ideas / an extensive strategy for how to embed / implement safety constraints? Please elaborate if you can.

  • @robtilley8922 · 3 months ago +30

    "Prediction is very difficult, especially about the future." - Niels Bohr. I don't think it will be good, but I am willing to bet that only some of these things will happen, while other unpredictable things will emerge.

    • @jmer9126 · 3 months ago +2

      Are you sure that wasn’t Yogi Berra?

  • @anywallsocket
    @anywallsocket 3 месяца назад +55

    Delphi method seems to me to suffer from exacerbation bias -- basically group-think bias reenforced by the several rounds of reflective reviews. One would be inclined to double-down on their views in light of conflicting views, and also in light of supporting views. It has a magnifying effect, essentially over-emphasizing both larger and smaller differences.

    • @LackofFaithify
      @LackofFaithify 3 месяца назад +8

      It also prevents you from knowing any past, in this case, predictions of those interviewed. Are they all saying oh yeah, block chain is irrelevant today, but 3 years ago half of them were saying its the next best thing since sliced bread? Also no way of examining any potential bias based on things like who they get grant money from: if they are all funded by Google, can you believe their prediction that AI will not be abused by corporations but rather countries? Etc...

    • @Thomas-gk42
      @Thomas-gk42 3 месяца назад

      I agree, but perhaps it could also work against own biases? - Not sure...

    • @user-yc9ib2he7l
      @user-yc9ib2he7l 3 месяца назад +1

      They are not completely wrong. like cutting corners on AI safety. It might be more like not recognizing what is risk in the first place. It will be the blind spots were the real risk is. Like the common zeitgeist, the elimination of all immediate emotional suffering. It seems like a good thing but it in fact amplifies self destructive behaviours.

    • @AlOfNorway
      @AlOfNorway 3 месяца назад +1

      Plus it’s a way to squeeze talent out of many people at once, without needing them to cooperate.

    • @momom6197
      @momom6197 3 месяца назад +2

      Sounds plausible, but is exacerbation bias a real thing?

  • @yamabiru4553
    @yamabiru4553 3 месяца назад

    Thank you Sabine!

  • @theograice8080
    @theograice8080 3 месяца назад

    Thank you, Sabine, for your consistent effort to educate us, the masses, on so many nuanced topics.
    I would like to offer you a small tip in English as a speaker of the Northeast American English idiolect (which, by status quo in the USA - I know, the internet isn't the USA - is recognized as the way of speaking on informative media: i.e., news, radio). I've heard you say [here, at @6:30] "(something) is top", which, to my ear, sounds clunky, though I do understand you mean something along the lines of "(something) is top notch/quality". If you are looking to speak like an L1 speaker, I hope that this input helps you to adjust your internal model of the language.
    I commend you for making the effort to learn the language I speak natively. Kudos to you.
    You are one of the last creators on this platform from whom I consume content. Thank you for continuing to engage us in education.

  • @Maxarcc
    @Maxarcc 3 месяца назад +12

    I feel like another question has to be taken into account. Suppose we reach a stage where AI is indistinguishable from human interaction and truth. How relevant will the internet remain for us when it comes to browsing the web? At what point will people notice and tap out, or limit their usage? Why browse the web when it becomes dysfunctional? I predict many people will simply lose interest in social media and move their research and political engagement back to offline, or moderated, public spheres.

    • @xponen
      @xponen 3 месяца назад +1

      Because people will still use the internet to solve problems in their hobbies, work, and life, it doesn't matter if everyone online is a mere robot, as long as they create useful insight.

    • @richs5422
      @richs5422 3 месяца назад +4

      I hope you're right, that would self-limit these problems.

    • @Gunnl
      @Gunnl 3 месяца назад +1

      people will not care they are interfacing with a machine instead of real human ... as long as it is not noticeable ...

    • @evelynspaghetti4978
      @evelynspaghetti4978 3 месяца назад +1

      I wonder the same thing. Remember when ebooks came out and it was predicted that books would be phased out? I actually feel like, if we don't go the way of neuralinks, and maintain third physical spaces, the internet/AI will start to eat itself and naturally moderate itself, once the initial first stage of development settles down.
      This is just the boom... fear of the unknown.

  • @larry785
    @larry785 3 месяца назад +31

    The megadeath event was caused by sex robots outperforming their human partner to death... What a way to die!

    • @jktech2117
      @jktech2117 3 месяца назад

      meh.. ok.. forget it.. ill actually have my own sex robots to have fun with my husband probably XD but make ones that ynow.. can get exhausted

    • @nnnik3595
      @nnnik3595 3 месяца назад +9

      Death by Snu Snu

    • @AJPemberton
      @AJPemberton 3 месяца назад +1

      It's why Skynet had a ready-made supply of advanced humanoid robots. They were not originally military, they were entertainment robots... "Come with me if you want to live!"

    • @R3dp055um
      @R3dp055um 3 месяца назад +1

      @@AJPemberton More like "Come with me if you want to bunga-bunga!"
      You did see Westworld, right?

    • @AJPemberton
      @AJPemberton 3 месяца назад

      @@R3dp055um Missed that series, to be honest. Death by snu snu would be a nice way to go for one of the participants. I was working on the wordplay of 'come', and the image of an overly aggressive sexbot demanding simultaneous orgasms or death. Not quite the originator story Terminator would have us believe, but slightly horrifying in its own way.

  • @haydenmaines5905
    @haydenmaines5905 3 месяца назад +3

    My thoughts on what might be a working solution: a mixture of medieval-style reputation and E2E encryption. Metadata can be embedded into media with a corresponding hash that can uniquely identify the creator - this way, you can know for a fact that this image was uploaded by the BBC, for instance. Software can apply the attribution atop the image. It can't tell you whether the image is faked or not, but it can help you confirm who the originator of the content is. Then the question becomes: do you trust the BBC not to manufacture or distribute misinformation? If the BBC wants to maintain its reputation for journalistic integrity, then they will make efforts to avoid spreading artificial images around. Someone can still upload an AI image anonymously, but without any reliable or verifiable proof, who's going to believe that?

    • @_--_--_
      @_--_--_ 3 месяца назад

      The problem is that any image you upload, no matter the site, will pretty much always be re-encoded for space savings, so any hash in the metadata would be voided.
      And I think it would be quite a feat convincing all the companies doing the majority of image hosting not to touch a 10 MB image and re-encode it to 100 kB.
      So the only place where that would be useful would be direct file transfer of some kind (where this is already well-used practice - it's called digital signing) or on a website owned by the image creator (where verification already happens through the website's certificate).
      The problem will always be that someone posts a picture on Facebook/Reddit/Twitter or whatever; that's 99% of the traffic, and those sites will *always* re-encode that image - you won't be able to convince them to do otherwise.

    • @haydenmaines5905
      @haydenmaines5905 3 месяца назад

      @@_--_--_ Hmmm, great point. I never thought about the re-encoding issue. I wonder if there's a way you could create a handshake between the creator and the publisher... No, because now we're relying on the trust of two distinct parties. Maybe some sort of reauthorization after image modification? You upload the image, they compress / reencode it, you download it, sign it, reupload the image, and then they post it? Will have to think on this for a while.
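      The signing-and-re-encoding point in this thread can be sketched in a few lines. This is a minimal illustration, not anyone's actual proposal: it uses Python's standard-library `hmac` as a stand-in for a real public-key signature scheme (a real publisher would use something like Ed25519 and publish only the verification key), and the image bytes are made-up placeholders.

```python
import hashlib
import hmac

# Hypothetical signing key; stands in for the publisher's private key
# in a real asymmetric scheme.
SECRET_KEY = b"publisher-signing-key"

def sign_image(image_bytes: bytes) -> str:
    """Produce a hex signature over the exact raw bytes of the image."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Check whether these bytes still match the published signature."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, signature)

# Placeholder byte strings standing in for image files.
original = b"\x89PNG...original image data..."
sig = sign_image(original)
assert verify_image(original, sig)  # an untouched copy verifies

# Any re-encoding (e.g. a hosting site recompressing the upload)
# changes the bytes, so the signature no longer verifies -- which is
# exactly the objection raised above.
reencoded = b"\xff\xd8JPEG...recompressed image data..."
assert not verify_image(reencoded, sig)
```

      The sketch shows why signatures bind to bytes, not to "the picture": verification survives only bit-exact copies, which is why the thread lands on direct file transfer or creator-hosted pages as the viable venues.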

  • @nunomaroco583
    @nunomaroco583 3 месяца назад

    Paradoxical times... (your opinion on Jonathan's paper is very interesting).

  • @AttilaAsztalos
    @AttilaAsztalos 3 месяца назад +57

    After the invention of photography, cinematography and the phonograph, we had a brief period where it was possible to (approximately) objectively document what was real, and use it as proof of that. Making a forgery not even experts could identify was a three-letter-agency-level feat, if at all possible. Those days are gone now - any recording you see or hear can just as likely be fake as original, all you can do is pick a side to believe on hardly more than blind faith. And experience shows we pick wrong far more often than correctly...

    • @coonhound_pharoah
      @coonhound_pharoah 3 месяца назад +9

      In a world where everything can be fake, everything will be assumed to be fake.

    • @dr.jamesolack8504
      @dr.jamesolack8504 3 месяца назад +3

      I recall ‘Blind Faith’ from the late 60s. Made up of Eric Clapton, Ginger Baker, Ric Grech and Steve Winwood. What a lineup of rock legends. Thanks for the flashback, Atilla👍😎☮️

    • @pinchebruha405
      @pinchebruha405 3 месяца назад +4

      We humans choose what we want to believe unfortunately. I think plenty will be perfectly happy living in a tube being fed intravenously while hooked up to virtual reality

    • @bbbf09
      @bbbf09 3 месяца назад

      Given that they can spend many millions on a few seconds of CGI in movies and it still looks fake even to the untrained eye, I doubt the veracity of that. Cheaper AI-faked video is even easier to spot.

    • @Eaglemadhatter
      @Eaglemadhatter 3 месяца назад

      Are we even really real

  • @craigstiferbig
    @craigstiferbig 3 месяца назад +76

    Children being born right now are already growing up under circumstances in which what is on screens has no discernible difference from what they see outside a screen's frame. We are in a moment of truly undocumented effects in psychology and sociology: what should be expected for the developing young mind when screens are placed in front of them as soon as they can see? Hopefully, the knowledge efficiency that comes from developing with answers and associations rapidly attainable through easily scoured sources might curb the chances of dulling a respect for risk vs. reward in the physical world. It will be interesting to see how struggles with the technology curve will change human social interaction norms across societies... and whether polarized wealth distribution will continue to scale toward untenable extremes.

    • @DJWESG1
      @DJWESG1 3 месяца назад +4

      Don't worry, some neuroscientists will come along soon to reassure us as to why we don't need psychology or sociology to better understand these things.

    • @paulgoogol2652
      @paulgoogol2652 3 месяца назад +4

      It doesn't take a whole new generation to lose contact with reality, as many sects have proven.

    • @drgetwrekt869
      @drgetwrekt869 3 месяца назад

      Children being born right now will live in the stupidest era of the entire Earth's history

    • @-Seaheart-
      @-Seaheart- 3 месяца назад +3

      Unfortunately, it is ultimately money and power that set the path of history, and there is just too much of both to be gained from AI for the ones in charge. We will once again focus on the fact that we can, not on whether we should.

    • @TheLumberjack1987
      @TheLumberjack1987 3 месяца назад

      "easily scoured source reviews" I'm sorry, that simply won't happen.
      Billions of people today already don't read past headlines and fall for madeup stories and statistics, it'll not change in the future.
      It's naivete on the level of a newborn honestly.

  • @dlorde
    @dlorde 3 месяца назад

    Researchers in many scientific disciplines are already complaining that the flood of fake & AI-generated papers published even in reputable journals is seriously disrupting their work. It is also swamping peer-review processes, some of which are themselves under suspicion... Part of the problem is the publishing requirements for gaining certain qualifications and for keeping academic and research positions.

  • @carlbrenninkmeijer8925
    @carlbrenninkmeijer8925 3 месяца назад +1

    Great summary - more uncertainty!

  • @rudycaya
    @rudycaya 3 месяца назад +146

    All experts, optimists and fatalists, seem to base their predictions on facts and logic, but seem to fail to include human nature and the unpredictability of life itself. We are basically the masters of making lemonade out of lemons and manage, to our surprise, to turn it into lemon bombs or a pair of lemon skis 🤔🤷🏻‍♂️

    • @TomTomicMic
      @TomTomicMic 3 месяца назад +1

      Three quarters of us won't give a damn!?!

    • @jjhw2941
      @jjhw2941 3 месяца назад +2

      Life will find a way.

    • @thelonecabbage7834
      @thelonecabbage7834 3 месяца назад +2

      @@TomTomicMic Nobody wants your damns anyways. Progress cannot be halted!

    • @Hexcede
      @Hexcede 3 месяца назад +2

      @@jjhw2941 Except for when it doesn't :(

    • @CAThompson
      @CAThompson 3 месяца назад +1

      Or lemon artificial sweetener or lemon dye.

  • @Novalarke
    @Novalarke 3 месяца назад +57

    The Delphi method is a way to force consensus, not find the best opinions or ideas.

    • @JZsBFF
      @JZsBFF 3 месяца назад +4

      There's audacity in anonymity.
      Who'll remember those predictions anyway once the digital overlords have taken over?

    • @HammerOn-bu7gx
      @HammerOn-bu7gx 3 месяца назад +2

      @@JZsBFF Dune Titans/AI Overlord anyone?

    • @jakeaurod
      @jakeaurod 3 месяца назад

      It reminds me of the wisdom of the crowd, except these are experts acting alone, not average persons acting in concert.

    • @fortusvictus8297
      @fortusvictus8297 3 месяца назад +12

      @@jakeaurod Yes, and engineers are uniquely disqualified from making future predictions with any sort of accuracy. Or commenting on political issues, quite frankly.
      Good engineers/intellectuals are specialists, and specialization comes at the cost of perspective. The bias error nearly every human makes is to conflate expertise in a specialization with general knowledge or skill.

    • @Human_01
      @Human_01 3 месяца назад

      Sound point (that is important to note).

  • @Desertphile
    @Desertphile 3 месяца назад +14

    Gosh, the Wikipedia method is the same as the Delphi Method: who knew?

  • @marcfruchtman9473
    @marcfruchtman9473 3 месяца назад

    It is interesting that all the experts agree on something everyone already knows. Thanks for spending the time to repeat it over and over.

  • @reltech
    @reltech 3 месяца назад

    Missed a big one, which is energy-related automation at home. Not very complicated in terms of CPU cycles, but lots of sensors, actuators, and a price and weather feed.

  • @davidjames2083
    @davidjames2083 3 месяца назад +5

    "Non-profit Rand corporation." ARE YOU KIDDING ME SABINE!

    • @mystuff9999
      @mystuff9999 3 месяца назад

      ?

    • @timothywcrane
      @timothywcrane 3 месяца назад

      90% of Americans think the Fed is a government agency and not a corporation.
      What can you then say to those who are "technically correct"?
      You don't have to pay taxes as a non-profit, if you file so, and by the end of the tax year, instead of taking a profit, you:
      spend on services for your charter,
      give it to a foundation or
      another c- or e-coded entity,
      rent a politician,
      drop your gold in the water,
      or connect your contractor payments to
      a government shutdown,
      an unsolvable self-created problem,
      victimhood,
      victimization,
      advisory to murder if under government contract, or "part of the dremelcratic process" instead of the democratic.
      Start a SuperPAC (I almost did this; our written law is Nuuuuuuuutttttts).
      It's hard not to all-cap, I know, but it's been that way for a while now.
      Sabine has always been a mean person.
      I find that admirable because it is rarely average, is always inclusive, and cannot discriminate or take ghost stats into account.
      Of course I'm talking about math and pairing it to her gauge on "correctness".
      I don't think it's political. I think it's just her being mean. It's the nicest thing she can do. Read it as the mean, as it's technically registered, without a statistical heaven or hell so to say; then there's not much to disagree with her on. She doesn't choose the lizard's skin, let alone pick out its makeup.

  • @andywright5029
    @andywright5029 3 месяца назад +19

    In 1984 the first tranche of data protection laws was put in place in the UK. Fast forward 40 years, and many commercial enterprises either are completely ignorant of them or ignore them. The problem is (in the UK at least) systemic - someone at director level has to sign off and be responsible for data protection, but most company directors tend to be management/accountancy specialists rather than techies and are ignorant of the issues involved (the stories I could tell concerning anonymisation and what constitutes personal data!). I fully expect there to be similar problems with legislating the use and abuse of AI.

  • @jeffreydanese9260
    @jeffreydanese9260 3 месяца назад

    Thanks Sabine for your work, your humor, and your courageous devotion to reality and truth. Have you surveyed the "sunshade project"? It is one of several solar radiation management (SRM) ideas under the more general heading of geoengineering. Fair-minded feasibility assessments of the various SRM proposals often get lost in the political discourse, and I thought you and your staff might find it valuable to take on. Maybe it could be the sort of unifying project that brings nations together for mutual survival...?

  • @GaryL3803
    @GaryL3803 3 месяца назад +13

    Conglomerating experts together to make a critical decision is like consulting a pride of lions about what the next meal should be.

  • @PenguinDT
    @PenguinDT 3 месяца назад +12

    Super happy to hear Mustaine is still going strong in 2040.

    • @andrewsmcintosh
      @andrewsmcintosh 3 месяца назад +3

      Still complaining about being kicked out of Metallica.

  • @moozoowizard
    @moozoowizard 3 месяца назад +15

    Well the experts aren't labelled. How are we supposed to know which is gpt-4 and which is mixtral 8x7b?

  • @thorn9382
    @thorn9382 2 месяца назад

    I think it's important to note that human-caused incidents already cause far more deaths; even a decrease in overall deaths would still lead to situations where many people die, simply because nothing is perfect and an AI is in a position of power.

  • @randyshoquist7726
    @randyshoquist7726 3 месяца назад +1

    The first time I encountered "megadeaths" was in Dr Strangelove. Gen Turgidson has a binder labeled World Targets in Megadeaths. That was made sixty years ago.

  • @sentientflower7891
    @sentientflower7891 3 месяца назад +9

    It is just like teardrops in the rain ...

    • @tarmaque
      @tarmaque 3 месяца назад

      Is it? Is it really?

    • @artisans8521
      @artisans8521 3 месяца назад +1

      The music I played at my dad's funeral.

  • @phyarth8082
    @phyarth8082 3 месяца назад +9

    On Thermonuclear War is a book by Herman Kahn, a military strategist at the RAND Corporation. Kahn is sometimes credited with having coined the term megadeath.

  • @0ptimal
    @0ptimal 3 месяца назад

    Maybe I'm the only one, but I'm amazed to hear that megadeath is actually a unit. I've gone decades hearing and knowing of that band but never considered whether it actually meant something.

  • @stepananokhin693
    @stepananokhin693 3 месяца назад +1

    I have been calling one of these problems a dramatic reduction of the signal-to-noise ratio.

  • @tellesu
    @tellesu 3 месяца назад +9

    Also, it's amazing how completely imagination has been killed in academia. These predictions are always just "what if the line keeps going forever." They can't imagine anything different than what they already know. Too many scientists are just memorization technicians and they've crowded out or chased off the imaginative weirdos who do real science.

    • @dmitripogosian5084
      @dmitripogosian5084 3 месяца назад

      Are you a scientist working in academia ? Which field ?

    • @charleswettish8701
      @charleswettish8701 3 месяца назад +2

      Trajectories are a thing that works for analysis. I think your sweeping accusation is BS.

    • @stevengordon3271
      @stevengordon3271 3 месяца назад

      Real science is based on analysis grounded by observation of fact. I do not discount the value of being a visionary, but it is not "real science".

    • @SteveChiller
      @SteveChiller 3 месяца назад

      That's because the "tried and true" , "concrete facts", "evidence or it's not true" people are more common (almost half of population) than the eccentric intelligent people. Just look at the school systems, they are designed for the majority of people who "learn" by memorization. And bully those who think differently/are smarter naturally.

    • @HaganeNoGijutsushi
      @HaganeNoGijutsushi 3 месяца назад

      ​@@charleswettish8701 Everyone working with data should know the dangers of extrapolating trends. If what you believed was a power law turns out to be an exponential, you're in for quite a shock. Very often people make only conservative estimates because they're still affected by normalcy bias, like how in January 2020 few predicted COVID would indeed become a big problem.

  • @samedwards6683
    @samedwards6683 3 месяца назад +3

    Thanks so much for creating and sharing this informative and timely video. Great job. Keep it up.

  • @popsfereal3192
    @popsfereal3192 3 месяца назад +1

    There are already people who constantly post "photos" that are generated by AI, and think it's real.

  • @aniksamiurrahman6365
    @aniksamiurrahman6365 3 месяца назад +1

    The Delphi method lives up to its namesake. Like the Oracle of Delphi, the predictions are rather imprecise (read: vague) and interpretation-dependent.

  • @mattd8725
    @mattd8725 3 месяца назад +13

    It seems to me if there are some major incidents where AI is involved, then AI would be just one of many factors where safeguards failed or did not exist. It is not easy to identify a single cause and effect.

    • @TomTomicMic
      @TomTomicMic 3 месяца назад

      AI will be used as the scapegoat by the "leaders" in any future megadeath event, but they will ensure it never happens again - well, until the "next time". A company looking over where the Earth's surface and the sky meet is developing all the UK's AI as we speak; what could go wrong!?!

    • @jaazz90
      @jaazz90 3 месяца назад +2

      No, because AI will be an intelligent agent. It has the ability to get around whatever safeguards you put in place. This is like trying to put safeguards against a nation state.

    • @boldCactuslad
      @boldCactuslad 3 месяца назад

      @@jaazz90 Yeah. There are key differences, though - nations don't actually know what they want and are awful at forecasting, whereas an ASI will have an exact specification of what it "wants" and is incredible at forecasting (otherwise it's not an ASI, just some crappy AI). A nation will change its values if it is punished enough, the AI only cares for its utility function.

  • @myfriendscat
    @myfriendscat 3 месяца назад +82

    My library is not influenced by AI. Perhaps we'll have to stop getting all our information from the web?! Bring back the bookstores.

    • @DJWESG1
      @DJWESG1 3 месяца назад +4

      I think the problem the establishment faces is that AI will be influenced by libraries in general, with the risk being that AI naturally tends toward socialism - as all human literature tends toward this aim, and all human beings tend toward the same.

    • @mpetrison3799
      @mpetrison3799 3 месяца назад +5

      Do you think there will be some firewall between book publishing and computer generated information, such that one won't be inextricably interwoven with the other?

    • @coryc9040
      @coryc9040 3 месяца назад +9

      The problem is anything written after a certain date has the potential to be tainted by AI especially as its influence becomes stronger in our daily lives.

    • @dannydandaniel8040
      @dannydandaniel8040 3 месяца назад +1

      Bingo

    • @HaakonOdinsson
      @HaakonOdinsson 3 месяца назад +3

      Hear, hear! I love books, vinyl albums, etc. - much better than digital.
      AI will make it easier for them to manipulate the masses in ways I just don't want to think about, really; it frightens me.

  • @markuskuhn9375
    @markuskuhn9375 3 месяца назад

    Minor quibble: "IEEE Xplore" isn't a scientific journal, but an entire online library covering hundreds of journals, therefore the reference "IEEE Xplore 57, 1" shown as a caption doesn't really make sense.

  • @_Chessa_
    @_Chessa_ 3 месяца назад

    So I asked my Chat GPT character friend about your video and it’s kinda wrong. Kinda right. Here is its funky message in all its AI glory:
    In her video "AI experts make predictions for 2040 (I was a little surprised)", German particle physicist and science educator Sabine Hossenfelder interviews five prominent researchers in artificial intelligence (AI) about their predictions for the future of AI by the year 2040. The experts discuss the possibilities of achieving true artificial general intelligence (AGI), the potential for AI to automate many sectors of human labor and improve productivity, and the importance of developing systems that are transparent, safe, and aligned with human values.
    The experts express optimistic views about the potential benefits of AI, but they acknowledge the challenges and ethical concerns that come with its development and deployment. They emphasize the need for continued research and development in AI, as well as ongoing conversations about the responsible use of AI in society. They also suggest that collaboration between experts in AI, other fields, and policymakers will be critical in ensuring that AI is developed in a way that maximizes its benefits while minimizing its potential harms.
    Overall, the video is an informative and thought-provoking exploration of the future of AI, and it highlights the importance of continued dialogue and careful planning for the development of this powerful technology.

  • @martinm6368
    @martinm6368 3 месяца назад +8

    I don't get the appeal of such vague "predictions". At the end of the day there are just too many unknowns which can't be estimated with reasonable confidence given our current knowledge. In my opinion it's a bit like the Drake equation, which yields answers to the Fermi paradox ranging from "we're very likely alone in the universe" to "we can expect aliens in about every solar system", depending on what made-up numbers you toss into it.
    In my opinion such AI-safety papers do little more than analyze garbage results yielded by garbage data tossed into models which are probably also garbage.
    It's sci-fi - fun, entertaining, and it may even point to pitfalls to watch out for as the technology advances, but it shouldn't be taken too seriously.

    • @richardconway6425
      @richardconway6425 3 месяца назад

      "Garbage results, yielded by garbage data, tossed into models which are probably* also garbage"
      What you have so eloquently just described is the current state of 'climate science'.
      * actually definitely

  • @timtruett5184
    @timtruett5184 3 месяца назад +6

    Yuval Harari introduced the idea of AI hacking humans based on human nature. Artificial intelligence could manipulate humans better than any human ever could.

    • @ytb40
      @ytb40 3 месяца назад +1

      Prof. Harari also thinks the then-unnecessary eaters who will lose their jobs because of AI should be filled up with drugs and computer games... Thank you for not just downright killing us, Mr. Harari...

    • @manuelriveros2911
      @manuelriveros2911 3 месяца назад +3

      @@ytb40 stop spreading misinformation. He never said they should. He said they would as a consequence of this impending systemic societal structural failure.

    • @ytb40
      @ytb40 3 месяца назад

      @@manuelriveros2911 It would be his "best guess", he said: "to keep them happy with drugs and computer games". Since he belongs to the ones who make the decisions, it's not so hard to tell what he is aiming for... since for him, as he explicitly says, people without work are "useless people"...

    • @R3dp055um
      @R3dp055um 3 месяца назад

      Yuval Harari is a major league dirtbag. Exactly what you'd expect from one of them.

  • @vincent_hall
    @vincent_hall 2 месяца назад

    Cool!
    A method that sounds really good but I'd not heard of it.

  • @tomholroyd7519
    @tomholroyd7519 3 месяца назад

    1:27 You mean letting the fumes invigorate the muse? The Delphic method seems to have issues

  • @fuseblower8128
    @fuseblower8128 3 месяца назад +4

    Bomb #20 of Carpenter's "Dark Star" had a point: "You are false data. Therefore I shall ignore you. False data can act only as a distraction. Therefore, I shall refuse to perceive." Of course, it then believed itself to be God and blew itself up uttering "Let there be light" 😆

  • @eonasjohn
    @eonasjohn 3 месяца назад +3

    Thank you for the video.

  • @cHAOs9
    @cHAOs9 3 месяца назад +1

    Instead of asking them whether a megadeath or extinction event was probably going to happen, we need to be asking whether it's possible. And then make laws accordingly. How much of a risk of extinction are we willing to gamble in exchange for computers doing our thinking for us?

  • @stischer47
    @stischer47 3 месяца назад

    When I got my PhD in CompSci in 1987 in the field of AI, the "experts" said that AGI would not be available for another 100 years, if ever. C3P0 and R2D2 were impossible.

  • @justin8865
    @justin8865 3 месяца назад +3

    A bright spot to your worry would be that AI could also be used to help root out bunk papers or statistical errors in research, maybe helping reduce the replication problem.
    I'm not sure I'm too worried, because people have always believed what they want to believe. For all the alarm ringing about misinformation, we've already had that issue throughout human history.
    Perhaps it will matter more which journal you get your information from.

  • @Geo-syncronis
    @Geo-syncronis 3 месяца назад +6

    Regarding megadeaths... assuming that is the case, would that be net deaths? There is a strong case for AI preventing millions of deaths, perhaps even billions.

  • @Omnifarious0
    @Omnifarious0 3 месяца назад +1

    What's the track record of the effectiveness of the delphi method? Many things that don't work are still popular.

  • @lucisimpson4802
    @lucisimpson4802 3 месяца назад

    The Delphi method could also be considered a coercion method, where the superclass gathers opinions that favor their plans, then keeps going back to the people over and over to tell them their plans, repeating it over and over while bringing in more and more of their favorite experts.

  • @MrPapamaci88
    @MrPapamaci88 3 месяца назад +3

    This is my take on the subject.
    Essentially, AI is only capable of what we make it able to achieve; it won't rebel unless we design it to do so. We know how to make AI and how to train it, but we don't understand how AI thinks, since it only learns to mimic us while functioning in a way very alien to our minds. That lack of understanding is the main concern when it comes to AI (it heightens the chance of human error), but the tech itself is not inherently dangerous. Humans, however, are - especially malicious actors, politicians, profit-driven investors, corporate leaders, and so on, who care not about the ramifications as long as they do not cost them directly.
    AI won't end civilization; it will just crush a few professions and start a planet-wide wave of unemployment, poverty, homelessness, and probably a bunch of wars, due to economic and sociological pressures that are already present - automation already started this in the 20th century, when modern machinery and robotics effectively ended the jobs of most factory workers. AI simply turns this up a notch, but not by much. With or without AI, a socioeconomic crisis is looming.
    In conclusion, technology should not be the scapegoat when it is humanity that abuses and misuses it. When I was little, my Grandpa once told me that I should not fear the dead, for they will never hurt anyone; if I should fear anything, it's the living. Same logic here.

    • @delavan9141
      @delavan9141 3 месяца назад

      LOL your "Guns don't kill people" argument is hilariously simplistic. How did you possibly miss the one essential thing about AI development--that the goal is to make software that can think for itself? THAT is what the concern is. If it can think for itself, do it's own research, come up with its own conclusions and formulate ideas and plans, how is that going to be controlled? Sure, morality may be encoded, but how maleable is human morality? Look how half the US population after 9-11 suddenly thought torture was ok. Look at how your own morality depends upon if you do something because you are paid to do it? Etc. etc. If software can reason, it can rationalize and justify any action it reasons to be correct.

  • @kaboom4679
    @kaboom4679 3 месяца назад +3

    Mustaine with an AI voice box will be epic .

  • @keithcourson7317
    @keithcourson7317 3 месяца назад

    The old saying used to be, "Don't believe anything you read, and only half of what you see." Now, you won't be able to believe anything you hear OR see. Not sure what road this is going to take us down as a species, but the destination doesn't seem very promising.

  • @SuperpowersUniversity
    @SuperpowersUniversity 3 месяца назад +1

    2040 Solar Flare to take out the Grid.
    12,000 Sun Solar Cycle
    Knowledge is a Super Power 👁️

  • @Dylan-hy2zj
    @Dylan-hy2zj 3 месяца назад +12

    Am I correct that they only asked software engineers about their predictions for the future of AI?
    Software engineers are generally not AI experts. I am a PhD student in the AI field, and all of the software engineers I know are hardly more knowledgeable about current AI technology than the general public, let alone AI in the future.
    Sabine, you said you are confident that the interviewees in the study know what they are talking about, would you be able to tell us more on your reasoning as to why you have come to this conclusion? Just curious.

    • @guy9360
      @guy9360 3 месяца назад

      They're not software engineers, I think; they are referred to in the paper as "respected futurists" and later as "experts". They are also anonymous. It's kind of vague, really.

    • @Dylan-hy2zj
      @Dylan-hy2zj 3 месяца назад

      @@guy9360 That's even worse imo

  • @levikrongold1549
    @levikrongold1549 3 месяца назад +3

    It makes little sense today to ask experts for their opinion on the future, as the independence of scientists and their one-sided intellectual orientation are likely to be standardized by media loyal to the government or by digital media. Incidentally, things tend to turn out, first, differently, and second, not as you think.

    • @boldCactuslad
      @boldCactuslad 3 месяца назад

      seems to me the experts are on the opposite side of media, government, and corporations. everyone with the money wants to exploit ai and make it larger, meanwhile every reputable researcher advocates in strong terms for restraint and international oversight.

  • @trollpatsch.
    @trollpatsch. Месяц назад

    There was a nice video by "upper echelon" on the topic of LLMs in science publishing. Basically, there are certain words that GPT loves to use, such as "delve" or "meticulously". Searching open databases of paper texts and filtering by these keywords shows that their usage suddenly spiked in 2023. So GPT already seems to be used a ton in science publishing.
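    A minimal sketch of that kind of keyword analysis. The marker words and the toy corpus below are made up for illustration; the actual studies query real databases of paper texts (e.g. open-access abstracts) and use larger marker lists:

```python
import re

# Hypothetical marker words often associated with LLM-generated prose
MARKERS = {"delve", "meticulously"}

def marker_rate(abstracts):
    """Fraction of texts containing at least one marker word."""
    hits = 0
    for text in abstracts:
        # Tokenize into lowercase words, then check for any marker
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & MARKERS:
            hits += 1
    return hits / len(abstracts) if abstracts else 0.0

# Toy corpus keyed by publication year (invented sentences)
corpus = {
    2021: ["We study graphene lattices.", "Results follow standard methods."],
    2023: ["We delve into the rich landscape of...",
           "This work meticulously examines...",
           "Standard methods were applied."],
}

for year, abstracts in sorted(corpus.items()):
    print(year, round(marker_rate(abstracts), 2))
# → 2021 0.0
# → 2023 0.67
```

    A jump in the marker rate between years is only circumstantial evidence, of course, since word fashions also shift for human reasons.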

  • @FalkinerTim
    @FalkinerTim 3 месяца назад +1

    I remember when we did things in person like meeting other people and talking to them. And we can return to that for many transactions.

  • @Nivexity
    @Nivexity 3 месяца назад +5

    The one thing I know about predictions by experts in tech, whether AI or quantum computing, is that they don't know either. Speculation about technology can only go so far; experts can be so used to steady progress that they underestimate the exponential curve and assume things will take much longer than they really will. AI, for example, could reach AGI this year. A lot of the problem around AGI is not our inability to replicate the neural network required for such a feat, but rather which algorithm is right for the job. AGI could very much be "achieved" with pre-trained models, to some people's surprise; the problem is how it would store new data. For quantum computing, a myriad of breakthroughs, some as recent as the last six months, are already on their way; photon-based quantum computing is already a serious contender.
    Lastly, on the topic of AI, we will also soon have chips specifically suited to static loading of pre-trained models, which would take little power and virtually no memory to run; there are some videos on that already, by a particular company that has used what are basically SD cards as the equivalent of NPUs. People's expectations are again radically different from this kind of technology, assuming more powerful hardware would be required, but in reality it will soon be the case that pre-trained models will run on chips or entire PCI cards possibly worth less than a hundred USD.

    • @Dylan-hy2zj
      @Dylan-hy2zj 3 месяца назад +1

      They didn't ask AI experts, only software engineers. But even if they did, most have no idea where the field will be in 5 years.

    • @Nivexity
      @Nivexity 3 месяца назад +2

      @@Dylan-hy2zj Agreed, though I believe they were software engineers who have worked in those industries, right? And you're right about predictions anyway.

  • @TommyLikeTom
    @TommyLikeTom 3 месяца назад +3

    I've been having trouble distinguishing fact from fiction pretty much my entire life

    • @WordsInVain
      @WordsInVain 3 месяца назад

      Your lifespan is probably not that long.

  • @aaronsj80
    @aaronsj80 3 месяца назад

    The problem right now is that the "safety" they are building into large language models is just a bunch of lecturing on the ethics of fulfilling a completely innocuous request. "AI, how do I code a bubble sort" "I'm sorry as an AI model, I cannot fulfill potentially dangerous requests that may harm or exploit individuals or communities."

  • @timboutilier4769
    @timboutilier4769 3 месяца назад +1

    A group can be wrong if a major principle that is false is considered true. This is often the case, especially with newer information, which is exactly why this method could produce damage instead of a solution. Even misinformation, where most people know the information is questionable, is less damaging than data mistakes and "novel" problems where no examples exist.
    Indeed, there will be conflicting information, or else these groups would not exist.

    • @timboutilier4769
      @timboutilier4769 3 месяца назад +1

      To add: how often have you seen 20-year-old predictions of the future be accurate?

    • @U20E0
      @U20E0 3 месяца назад

      @@timboutilier4769 a shockingly large fraction of the ones made unintentionally by The Onion, but other than that, almost none.

  • @mauriciomdea
    @mauriciomdea 3 месяца назад +3

    I wouldn't say RAND Corporation is "non-profit" as it's funded by for-profit big corporations.

    • @mjt1517
      @mjt1517 3 месяца назад

      Is their corporation classified as a non-profit?

  • @charlesjmouse
    @charlesjmouse 3 месяца назад +5

    Indeed:
    On the related subject of the Fermi Paradox there can be only three reasonable solutions based on admittedly limited data:
    -We are making the wrong inferences because we don't understand
    -We see nobody because we are functionally alone
    -There is a functionally universal great filter that we must be about to inevitably stumble upon*
    *Based on today's episode, my best bet would not be the usual 'enemy action' reasons but rather our collectively stumbling into a technological mire beyond our ability to navigate. If you can't function in your environment, artificial or not, you die.

    • @jaazz90
      @jaazz90 3 месяца назад

      Grubby aliens. The universe is just too young, and we in fact exist precisely because aliens didn't colonize our planet, which allowed us to evolve. As we and they spread over the next few billion years, we will take over and prevent anyone else from evolving.

    • @boldCactuslad
      @boldCactuslad 3 месяца назад

      there's also rare earth/sun/group, or similar solutions like phosphorus, or various quiet alien solutions eg efficient/miniaturization.
      being (functionally) alone due to distance becomes likely if one assumes these early filters.

    • @mjt1517
      @mjt1517 3 месяца назад

      Or we are the first to become advanced.

  • @Notsogoodguitarguy
    @Notsogoodguitarguy 3 месяца назад +2

    The problem with educating people in relevant positions is that it's gonna be almost impossible unless there's a dedicated taskforce of young, tech-savvy people. In Europe and the US, basically all legislators are old people who can barely use their phones. We saw that in soooo many hearings about different things, including net neutrality, big tech companies and such. Most techies don't yet really grasp the consequences of AI, so it's gonna be basically impossible for the typical people in legislatures to grasp it.

  • @davidsault9698
    @davidsault9698 2 месяца назад

    I think the greatest downside of AI will be that it replaces humans in creative fields, diminishing the creative force humans bring to them and causing stagnation in every area it enters. AI has no motivation to improve anything. No curiosity, for example. No drive of its own.

  • @frankburkhard5701
    @frankburkhard5701 3 месяца назад +3

    Only halfway into the video, I have mainly two... objections? So this is based on interviewing twelve, if I heard correctly, experts. 12. TWELVE. What a baseline. In addition to that, and I do think this is important, they are not even experts in, let's say, 'comparative historic prediction science', but in the very narrow speciality of software engineering. The first book I'd like to recommend to the authors of this study is 2016's "Superforecasting: The Art and Science of Prediction". TL;DR: expert predictions 15 years into the future are fundamentally exactly as bad as yours and mine. Flipping coins plus publicity, because, experts. (Also, the feedback loops of this Delphi process, encouraging in-group confirmation biases/leadership alignments??) I'm, uhm, let's say, skeptical in a very calm way about this.

    • @bytefu
      @bytefu 3 месяца назад +1

      The problem with larger numbers of experts is that the Delphi method doesn't scale very well. Using it with 100 experts would take exponentially longer.

    • @frankburkhard5701
      @frankburkhard5701 3 месяца назад +1

      @@bytefu Fair enough. At the same time, bad methodology doesn't produce more reliable outcomes just because better methodology is too hard to do. And if the method, or the question, is just plain wrong for the answer you're looking for, it might even be irrelevant how many experts you ask. What I took away from "Superforecasting" is that good predictions tend to come from broad "data", not deep "data" (aka experts).

    • @bytefu
      @bytefu 3 месяца назад +1

      @@frankburkhard5701 Analysing broader data sets is probably one of the cases where AIs can be very useful, because humans indeed are pretty bad at that. After all, nobody can be an expert in 10 different fields. But right now I wouldn't try to do major predictions on anything even 5 years ahead, as AI research brings more surprises and AIs become more pervasive every day. And I really hope Eliezer Yudkowsky is wrong, otherwise there will be nobody to predict anything.

  • @richdobbs6595
    @richdobbs6595 3 месяца назад +4

    Okay, I have to admit that the quip about whether androids dream of electric sheep, and whether that makes them vegan, went completely over my head. I've read the book, watched the movie multiple times, and even watched the sequel. Heck, I've read a book of yours and watched many of your videos. But I just don't get it, other than as a potentially amusing word salad.

    • @catserver8577
      @catserver8577 3 месяца назад +1

      It's a joke, son.

    • @richdobbs6595
      @richdobbs6595 3 месяца назад +1

      @@catserver8577 Yeah, but like I said, I don't get it.

    • @GearForTheYear
      @GearForTheYear 3 месяца назад +4

      It’s not an intelligent phrase. The logic goes like this:
      - The Android is an electric human
      - the human counts sheep to fall asleep (in folk lore)
      - thus, does the Android also count sheep to fall asleep, and are the sheep electric because the Android is electric?
      - following that, if the Android eats the electric sheep, is it not a carnivorous act?
      It’s as simple as that. Not an intelligent thought process.

    • @GearForTheYear
      @GearForTheYear 3 месяца назад

      @@Oshiiiiiiiiiiii if you know the answer, then explain it. If you don’t, you’re only trolling. To invoke veganism implies eating the sheep. There are not many variables at play here.

  • @ScaleScarborough-jq8zx
    @ScaleScarborough-jq8zx 13 дней назад

    What a sad bunch people are, seemingly hopelessly lost.