Artificial Intelligence: 10 Risks You Should Know About

  • Published: Aug 2, 2024
  • This video has been updated -- please watch • Ten AI Dangers You Can...
    What are the dangers of artificial intelligence, and how do we develop ethical, safe, and beneficial AI? This was produced before GPT and LLMs existed, but it is more relevant now than ever - simply switch out Alexa for ChatGPT.
    Risk Bites dives into AI risk and AI ethics, with ten potential risks of AI we should probably be paying attention to now, if we want to develop the technology safely, ethically, and beneficially, while avoiding the dangers. With author of Films from the Future and ASU professor Andrew Maynard.
    Although the video doesn't include the jargon usually associated with AI risk and responsible innovation, the ten risks listed address:
    0:00 Introduction
    1:07 Technological dependency
    1:25 Job replacement and redistribution
    1:43 Algorithmic bias
    2:03 Non-transparent decision making
    2:27 Value-misalignment
    2:44 Lethal Autonomous Weapons
    2:59 Re-writable goals
    3:11 Unintended consequences of goals and decisions
    3:31 Existential risk from superintelligence
    3:51 Heuristic manipulation
    There are many other potential risks associated with AI, but as always with risk, the more important questions concern the nature, context, type, and magnitude of the risks' impacts, together with the relevant benefits and tradeoffs.
    The video is part of Risk Bites series on Public Interest Technology - technology in the service of public good.
    #AI #risk #ethics #GPT #ChatGPT #LLMs
    USEFUL LINKS
    AI Asilomar Principles futureoflife.org/ai-principles/
    Future of Life Institute futureoflife.org/
    Stuart Russell: Yes, We Are Worried About the Existential Risk of Artificial Intelligence (MIT Technology Review) www.technologyreview.com/s/60...
    We Might Be Able to 3-D-Print an Artificial Mind One Day (Slate Future Tense) www.slate.com/blogs/future_ten...
    The Fourth Industrial Revolution: what it means, how to respond. Klaus Schwab (2016) www.weforum.org/agenda/2016/0...
    ASU Risk Innovation Lab: riskinnovation.asu.edu
    School for the Future of Innovation in Society, Arizona State University sfis.asu.edu
    RISK BITES LITE
    Risk Bites Lite videos are shorter and lighter than regular Risk Bites videos - perfect for an injection of fun thoughts when you're not in the mood for anything too heavy!
    RISK BITES
    Risk Bites videos are devised, created and produced by Andrew Maynard, in association with the Arizona State University School for the Future of Innovation in Society (sfis.asu.edu). They focus on issues ranging from risk assessment and evidence-based decision making, to the challenges associated with emerging technologies and opportunities presented by public interest technology.
    Risk Bites videos are produced under a Creative Commons License CC-BY-SA
    Backing track:
    Building our own Future, by Emmett Cooke. www.premiumbeat.com/royalty-f...
    ANDREW MAYNARD
    Professor Andrew Maynard is a scientist, author, and leading expert on risk and the ethical and socially responsible development and use of new technologies. He is an elected Fellow of the American Association for the Advancement of Science, serves as co-chair of the Institute for the Advancement of Nutrition and Food Science (IAFNS) Board of Trustees, is a member of the Canadian Institute for Advanced Research President’s Research Council, has served on a number of National Academies of Sciences committees, and has testified before congressional committees on several occasions.
    As well as producing Risk Bites, Andrew’s work has appeared in publications ranging from The Washington Post and Scientific American, to Slate, Salon, and OneZero. He co-hosts the podcasts Mission: Interplanetary and Future Rising, and is the author of the books Films from the Future: The Technology and Morality of Sci-Fi Movies, and Future Rising: A Journey from the Past to the Edge of Tomorrow.
    Andrew received his PhD in aerosol dynamics from the University of Cambridge in 1993, and is currently a professor in the Arizona State University School for the Future of Innovation in Society, and an Associate Dean in the ASU College of Global Futures.
    More at andrewmaynard.net
  • Science

Comments • 30

  • @Renaissanced · 5 years ago · +9

    This should have way more views. Nice job!

  • @ronaldlogan3525 · 3 years ago · +3

    We can look to other technology advancements and see whether the eventual outcome tilted toward the negative outcomes or the positive ones. The problem is that we have normalized the negative outcomes, so we no longer see them. Lead in gasoline is one example of a very bad idea that had lots of political and pseudo-scientific support. A.I. will be no different, but the results will be a billion times more devastating.

  • @nompumelelokhoza2699 · 1 year ago

    Thank you, nice job

  • @abdulrahmanalhumaid3302 · 6 years ago · +5

    What do you think the benefits are? And are they worth those risks?

    • @riskbites · 6 years ago · +1

      There are plenty of problems in the world that we've failed to address, that AI might help with -- everything from disease and pollution to global warming and feeding 8 billion people. Whether it's worth the risk is nearly impossible to answer - what is easier to address is asking what the risks might be, and how we can manage them effectively.

    • @CarlosRamirez-mu6zj · 3 years ago

      @@riskbites It's clear by now that we know the risks, and while maybe we can manage them effectively, I suspect the solution will not be purely scientific. We're basically in the 19th century and have correctly predicted the latest tech will lead to global warming. How do we prevent that? Whatever the solution, it will not be something scientists can provide.

  • @juangabrieldeguzman2364 · 2 years ago · +2

    I may not be a scientist or an inventor but they should think first before they invent. We shouldn't put too much faith and reliance on technology. I can't ignore the warnings of the late physicist Stephen Hawking or even the concerns of the late novelist Samuel Butler.

  • @thegamezterb6615 · 4 years ago · +4

    Wow, my dude! This tells me why some people think AI will take over the world!

    • @Edzhjus · 4 years ago

      According to The Matrix, they already did - until humans catch up.

  • @criscraftsknowledge5692 · 5 years ago · +3

    We have to learn the dangers of these intelligences

    • @seratonyn · 3 years ago

      The problem, however, is that some of the dangers are so beyond the realm of HUMAN intelligence that humans are unlikely to take those dangers as serious rational considerations.

    • @CarlosRamirez-mu6zj · 3 years ago

      We already know them. What will you do?

  • @jetcamp101lol8 · 3 years ago · +2

    It's almost like we're inventing a new type of life

    • @CarlosRamirez-mu6zj · 3 years ago

      Almost indeed. But it's not. Whatever displaces life can only be death.

  • @bulukkugufuh6304 · 6 years ago

    Can I share this video? Hehe

  • @TheOCG99 · 3 years ago

    My teacher made this a class video

  • @ArjunCool · 3 years ago · +1

    Better luck next time ☺️

  • @ErenYeager-pq6pl · 3 years ago · +1

    thank you I need this for my speech

    • @geraldine_gws · 2 years ago · +2

      Who would've thought Eren Yaeger would one day do a speech about AI... How time flies

  • @bradmathias7513 · 3 years ago · +3

    That's some great content. Thank you for sharing it with us. I found this interview in which they talk about the march toward ethical AI and found it quite fascinating. Hope it adds value! ruclips.net/video/6uNFoY9dH1Q/видео.html

    Keep up the good work though!

  • @buakaw · 6 years ago · +7

    I for one, welcome our new AI overlords

    • @riskbites · 6 years ago · +1

      We, for one, value your allegiance

    • @Edzhjus · 4 years ago · +3

      Same for aliens, if that's what it takes to fix planet Earth.

    • @CarlosRamirez-mu6zj · 3 years ago

      I, for one, hope civilizational collapse induced by climate change prevents the AI overlords.

  • @fairai9229 · 3 years ago

    Ideally, AI will be as dangerous as the humans developing that algo...So it all boils down to...???

  • @eeveelynee · 3 years ago

    O

  • @jennifermarieherron7948 · 3 years ago

    AI is 1 year older than me. I just began at it. As soon as I did, invisible violence fell on me. So I had to swim up logic. The worst I found is insults at me first.
    It's fine to feel the way you do. But the force coupled with insults at who, as I've had to, has to fight the (most likely) assault who tosses themselves at a nobody by comparison is just an obvious.
    I don't insult so low. Nope. I don't insult homeless persons. They're below my wealth class and in there are, as I am, polite society. Everyone in policies aren't polite society.
    Pick on someone your sizes. Your insult is an uninvolved other's risk first, no matter your reputation.
    Thanks

  • @thebluebeyond2329 · 3 years ago

    Technological dependency is real. I'm guilty

  • @koppadasao · 6 years ago · +2

    Don't worry; B4, Lore, and Data won't be invented for at least 300 years

    • @riskbites · 6 years ago

      Someone should probably let Ray Kurzweil know ...