The AI Dilemma | Sune Hannibal Holm | TEDxCopenhagen

  • Published: 4 Oct 2024
  • Should important decisions about citizens be taken on the basis of their statistical profile? Artificial Intelligence is finding its way into public administration. We face the AI Dilemma. Sune’s research focuses on ethical issues arising in the context of artificial intelligence and biotechnology. He is particularly interested in questions concerning bias and fairness in algorithmic decision-making and in the dilemmas that arise when algorithmic decision-making tools are introduced into public administration. He also does research on the use of machine metaphors in the biological sciences and on the ethics of risk.
    Sune is an associate professor at the Department of Food and Resource Economics at the University of Copenhagen. He holds a Ph.D. in … This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at www.ted.com/tedx

Comments • 74

  • @productgeneration
    @productgeneration 4 years ago +3

    Human Above AI. Period.

  • @SveinNOR
    @SveinNOR 4 years ago +8

    No, we should not let AI decide! How is this even up for consideration? The possible, and plausible, dark side is way too scary!

    • @BlaZay
      @BlaZay 4 years ago +1

      I mean, self-driving cars have tremendously smaller chances of crashing themselves than humans do, statistically speaking. The thing is, there are numerous fields where AI makes much better decisions than us, but there are also fields where the roles are reversed. The key is to only give AI tasks that it does better than us, and where an AI-made decision is undeniably better than a human-made one. Also, I believe important decisions can be studied by AI, but actually doing it should always be a human decision.

    • @lisaclark4550
      @lisaclark4550 1 year ago

      I do not agree. Any way you look at it, you're crossing a very dangerous line.

  • @dominikxxxxx9642
    @dominikxxxxx9642 4 years ago +52

    The buttom line is to use AI as a supporting tool and not as a final decision maker.

    • @carmenschumann826
      @carmenschumann826 4 years ago +1

      . . . button line ? bottom line !

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +2

      @Dominik Rohde : Even that can be *(and often is)* falsified and programmed with extreme bias.

    • @dominikxxxxx9642
      @dominikxxxxx9642 4 years ago +1

      @@aylbdrmadison1051 Indeed, and for that reason a human needs to be the final decision maker, in order to go back and change the input (learning data) if needed. AI needs to be treated like a child: correct the learning if the outcome is not the desired one.

    • @1Pineapple
      @1Pineapple 4 years ago

      @@dominikxxxxx9642 The child comparison is such a good one! 🙏
      But damn, it sounds expensive in human resources to now have to foster digital children as well 🙈

  • @mbonuchinedu2420
    @mbonuchinedu2420 4 years ago

    Well said.

  • @michaelwallace9291
    @michaelwallace9291 4 years ago +1

    I'm against Minority Report concepts.

  • @songhoa3590
    @songhoa3590 4 years ago +1

    Discard AI now. We don't need AI.
    We are humans. We need genuine interest all the time.

    • @dakshit04
      @dakshit04 4 years ago

      You can't. Your race is predestined to give rise to AI.
      Only human extinction can save you now.
      Ma'am, you are growing old, and you will most probably want to make your life blissful in the future.
      As ignorance is actually bliss.

  • @janitoivanen4166
    @janitoivanen4166 4 years ago +3

    No to a technocratic elite making choices for people based on AI. Instead, we can give the AI's big-data insights to autonomous individuals so they can make better-informed and freer choices.

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      I absolutely agree with your first sentence. But biases *are* being programmed into A.I. Even just at a collection level the biases of political parties, companies, corporations, and programmers are already controlling us to a large and very real extent.

  • @Saxtorial
    @Saxtorial 4 years ago +2

    So there is no perfect solution to societal and social problems via artificial intelligence. Got it. Micro-implementation and boxed case studies make sense. However, I believe firmly that the power structures of government and military-industrial complex, general deep state intelligence and rule, make it near impossible to stop the weaponization of any technological development. In short, each narrow innovation will be analyzed for "defense" applications, "regulated", and structured to serve whatever hierarchy we have in place. I love the idea of disruptive innovation and tech... but prioritizing truth and transparency is near impossible. All nations and peoples have incentive to embrace and perpetuate secrecy and privacy, independence and boundaries. We operate in a very small window of autonomy as it is. Universal efficiency renders humans, our bodies, our brains, our biology in general, obsolete. We are pursuing utopia via perfection, but we are imperfect.
    So, if we start with a jumbled mess, not from scratch, then it is the uprooting of old, persistent and prideful systems and programming, yes, misinformation and trickery, which need upheaval. But how? Neuralink is a step in that direction. No thought hidden. As it is, we are working toward being a brain of sorts. All humans, continuously connected, no lies, no thought private, no incentive to destroy and steal and dominate... only to contribute. But we don't know how to do that. Biology demands that we serve our own desires and needs.
    Ultimately, ELIMINATION of privacy is the state's goal. Or whoever is in power. Not that entity's privacy, just everyone else's. It would be best if this ubiquitous problem-solving, sleepless, faultless machine could be implemented with the motive of universal transparency whilst retaining autonomy. That is something we do not understand how to develop.
    I love the possibility of justice and peace, but a surveillance state leads mostly to abuse of power. We do need transparency for governance. But the governing must be transparent themselves.
    Do you want people to know all the details of the last time you masturbated, for free and on recall to anyone at any time, forever? Our desire to be private stems from fear. How can we eliminate fear when we are threatened? Privacy has been stolen from all of us in the name of a better planet, but we all know it is merely to usurp as much power from the less fortunate as possible. Those with good intentions are being deceived.

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +2

      So, I "liked" your comment because it was thoughtful and made some very good points.
      But I must bring your attention to a major flaw in it. You pointed out (and rightly so) that _"prioritizing truth and transparency is near impossible"_, but then go on to say that Neuralink is a step in the direction of _"uprooting of old, persistent and prideful systems and programming, yes, misinformation and trickery, which need upheaval."_
      And yet Neuralink will be exactly what you yourself described: _"I believe firmly that the power structures of government and military-industrial complex, general deep state intelligence and rule, make it near impossible to stop the weaponization of any technological development."_

    • @Saxtorial
      @Saxtorial 4 years ago

      @@aylbdrmadison1051 I did not explain why I said that. It seems that the only way to save humanity is some kind of bizarre access to shared experience that isn't tainted by the detrimental parts thereof. Yes, Neuralink will be weaponized, but hopefully the most powerful weaponization will be one for good rather than a transhuman atomic bomb, in a manner of speaking. I think only a machine can solve our problems, but how can we avoid losing ourselves? So that is what I mean. Does that make sense?
      And I ran out of time in that moment and simply posted. Thanks!

  • @johndemeritt3460
    @johndemeritt3460 4 years ago +3

    Let's start out by examining the assumptions that underlie AIs. First, AIs are built upon assumptions about how the human social world works. Many of these assumptions are untested, and many of them are wrong. Second, statistical processes reflect only the models and data we have, and they assume the relationships between data are valid. The statistical processes only work if the data actually reflect the population they are supposed to represent. Third, the AIs are assumed to have no faults in their predictive functions. However, to date there have been no technologies that are faultless.
    The AIs we are developing have been shown to be biased in a number of ways. To depend upon AIs without humans thoroughly vetting the results is foolish. The biggest problem I see here is that after enough experience with AI coming up with the right answer every time, we are likely to become lax in critically analyzing the AI's answers in every subsequent case.
    Moreover, there's a problem with outliers -- those people who don't conform to the profiles AIs develop. This goes back to the assumption about how the human social world works. There are people who don't neatly fall into any statistically significant category, and AIs will tend to ignore them -- until the AI is given a problem involving those people. Without sufficient data, AIs can't work out solutions that actually work for those outliers. In that case, the AIs will probably default to solutions worked out for the people most like the outlier they can't figure out -- and, of course, that probably will not be a good fit.
    Given the problems with AIs, the assumptions they're based on, and the datasets they operate with, I'm not very confident that they will solve complex problems better than humans -- they'll only arrive at the wrong answers faster than actual people. Fortunately, having just turned 63, I don't anticipate living through the worst aspects of our transition to AI-driven futures.
    But I wish the rest of you the best of luck.

    • @PanteraRosa91
      @PanteraRosa91 1 year ago

      John you are on point👏🏽👏🏽

  • @DanFrederiksen
    @DanFrederiksen 4 years ago

    If somehow a high probability were found, you could approach it delicately rather than with a SWAT team. Let's say the indicator is a lot of shouting in the family; then they could be offered some sort of support for the known elements, rather than making dramatic statements about the future and putting everyone in straitjackets. It's not an issue.

  • @pyschologygeek
    @pyschologygeek 4 years ago +9

    I use TED Talks to improve my English and improve my videos. Who else does the same?

  • @LotsOLuck777
    @LotsOLuck777 4 years ago

    The answer is no. Because whoever controls the AI will be able to decide what is right and wrong for you. No thanks.

  • @larasmith2931
    @larasmith2931 4 years ago

    🦋 It sounds a little bit like Minority Report. In my own life, if I worried about meeting an imaginary line, I would need to feed my son fast and processed foods to meet some line set by doctors (um, nope).

  • @Jimi_Bozo
    @Jimi_Bozo 4 years ago +2

    When the AI nukes us all, I'm the guy slowly clapping...

  • @semblt
    @semblt 4 years ago

    Asimov was a prophet

  • @janwarie
    @janwarie 4 years ago +5

    In relation to this, I invite you all to watch an anime called Psycho-Pass.
    Let me know how it goes. :)

  • @user-hh2is9kg9j
    @user-hh2is9kg9j 4 years ago

    Bring it on already

  • @dakshit04
    @dakshit04 4 years ago +1

    *Sponsored by Agent Smith*

  • @historicaref
    @historicaref 4 years ago +2

    Interesting!

  • @georgesos
    @georgesos 4 years ago +2

    Final critique: this was useless. A philosopher who doesn't break down his arguments... I'd like to have a long discussion with him, to explain to him what a philosopher does...

  • @RF-db9st
    @RF-db9st 4 years ago +2

    Please add subtitles to every video.

  • @calholli
    @calholli 4 years ago +2

    Propaganda at its finest.
    "You're gonna lose your rights and regret it... but just get on the train."

  • @benb.8550
    @benb.8550 4 years ago +1

    It is the information era version of death masking.

  • @pyschologygeek
    @pyschologygeek 4 years ago +1

    After years of waiting, nothing came

  • @kakarl4792
    @kakarl4792 4 years ago

    Helpful

  • @danielduran7829
    @danielduran7829 4 years ago

    Does anyone remember the movie "Minority Report"? It is scary!

  • @mayflowerlash11
    @mayflowerlash11 4 years ago

    The silence emanating from the audience indicates the impact these ideas are having on them. They are hanging on every word because the topic is important to them.

  • @sebastianmunoz1134
    @sebastianmunoz1134 4 years ago +2

    Ooo y ellaa

    • @CrysoK
      @CrysoK 4 years ago

      xd

    • @njdotson
      @njdotson 4 years ago +1

      what

    • @CrysoK
      @CrysoK 4 years ago

      @@njdotson Argentine jokes, you wouldn't understand

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      @CrysøK : Of course they wouldn't understand if you are so rude and/or lazy that you don't even bother to make any attempt whatsoever to answer their *one incredibly simple question,* and instead just leave a snarky (and empty) comment.

  • @sachiperez
    @sachiperez 4 years ago

    Humanity evolves through moral and ethical struggles. What will happen if Siri takes care of it for us?

  • @KayefX
    @KayefX 4 years ago

    1:20 Yes

  • @khaliltayem1695
    @khaliltayem1695 4 years ago

    First I asked him about

  • @mikebeatstsb7030
    @mikebeatstsb7030 4 years ago +1

    Is this guy's name really Hannibal ⁉️ That's very unfortunate, isn't it...?!
    I would seriously have thought long and hard about whether or not to change my name by deed poll upon reaching the legal age 🔞 if I were in his shoes. 👟 👞 🥾 🥿

  • @khaliltayem1695
    @khaliltayem1695 4 years ago

    First dogville

  • @sayakhalder8601
    @sayakhalder8601 4 years ago

    Yay!! I'm first!!

  • @everardogutierrez2809
    @everardogutierrez2809 4 years ago

    🙈🙊🙉🧟🧟‍♂️🧟‍♀️

  • @majkaadam9749
    @majkaadam9749 4 years ago

    I'm for AI 👌😊😎

    • @aylbdrmadison1051
      @aylbdrmadison1051 4 years ago +1

      Then you are for being programmed to not think for yourself.

  • @DDominoGeronimo
    @DDominoGeronimo 4 years ago

    🙌🏾✨🙌🏾✨🙌🏾✨🙌🏾✨🙌🏾✨🙌🏾✨