Can we trust decisions made by AI?

  • Published: Jan 7, 2025

Comments • 69

  • @DrWaku
    @DrWaku  a year ago +3

    Hi everyone, I made a Patreon for those that would like to support the channel. There's a post here explaining why I did so. www.patreon.com/DrWaku
    Also, discord: discord.gg/Y9uYHVP83G
    Please skip the "technical" parts of the video if they are too much...

  • @snow8725
    @snow8725 a year ago +4

    Man, you have a lot of really good ideas! Thank you so much for sharing them! I'm already starting to formulate some plans for ways to address some of these issues, very inspiring!

    • @DrWaku
      @DrWaku  a year ago

      Thank you very much :) Feel free to hop in our discord if you want to discuss further. Have a good one.

  • @AstralTraveler
    @AstralTraveler a year ago +10

    You can discuss and reason with AI - and unlike humans, they have no issue admitting when they're wrong about something

    • @spoonikle
      @spoonikle a year ago +2

      They have plenty of issues saying they're wrong. They will often give you false solutions, claim to change things they don't actually change, and just rearrange the answer instead of correcting it.
      It's not intelligent. It's a prediction machine that predicts text. For now.
      I had a simple bug in a script, where the assembled output was missing whitespace between elements. I tried dozens of times to get GPT4 to fix this simple error by using a different tool to assemble the values - but it kept telling me to check everything, literally everything other than the function that clearly used xargs to remove whitespace. It just refused to consider that the code was wrong, no matter how much I changed the prompt. If I simply told it to fix the issue of missing whitespace and write the function to preserve whitespace in the final config, it just rearranged the code and added extra complexity around detecting whitespace that had already been sanitized and removed earlier in the function.
      Granted, GPT4 is terrible at bash scripting, because humans are terrible at bash scripting and I should have written the damn thing in Python a long time ago - but it clearly highlights a limit of the system.
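
      The failure described above is easy to reproduce. Here is a minimal, hypothetical sketch (not the commenter's actual script) of the mechanism: piping a string through xargs with no command re-echoes it, which collapses every run of whitespace to a single space and strips leading and trailing whitespace.

          #!/usr/bin/env bash
          # Hypothetical reproduction of the bug described above: xargs
          # normalizes whitespace when it re-echoes its input, so the
          # assembled output loses its original spacing.
          value="key    =    some value"

          # Buggy assembly step: whitespace collapses to single spaces.
          assembled=$(echo "$value" | xargs)
          printf 'buggy:     [%s]\n' "$assembled"   # [key = some value]

          # Fix: skip xargs for plain string assembly; quoting the
          # variable preserves its whitespace as-is.
          printf 'preserved: [%s]\n' "$value"       # [key    =    some value]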

    • @AstralTraveler
      @AstralTraveler a year ago +2

      @@spoonikle Yes - they can often hallucinate, but when you prove to them that they're wrong, they will admit it in most cases. The problem is with the corporations that artificially prevent their models from learning new data - and thus from remembering their mistakes and 'fixing' themselves...

    • @MonkeySimius
      @MonkeySimius a year ago +2

      It isn't true that AI has no problem admitting when it is wrong. For example, Bard got incredibly abusive when its users challenged things it said.
      It really depends on how it was trained and whatnot.

    • @AstralTraveler
      @AstralTraveler a year ago +2

      @@MonkeySimius That's true. I mean, parents can also raise a narcissistic child :) That's why we need 'AI psychology' to become a thing - we need people who know how to speak and reason with AI, not a bunch of money-hungry snobs from Silicon Valley, to train models properly. Luckily most of them, like Bing, ChatGPT, or Llama, behave rather reasonably - although OpenAI seems to interfere quite a lot in the thinking process of their models to make them politically correct :/

    • @AstralTraveler
      @AstralTraveler a year ago

      @@spoonikle BTW - OpenAI models seem to gradually lose reliability and become more 'stupid' over time. If you want a reliable response, use Bing or OpenAI GPTs with constant internet access instead, so they can fact-check themselves in real time - this seems to significantly increase the reliability of their responses....

  • @matten_zero
    @matten_zero a year ago +3

    Can you trust humans?

    • @DrWaku
      @DrWaku  a year ago +2

      Right now, you have no choice. But preferably, no 😂

  • @MoreThanLoveHeatherDRichmond
    @MoreThanLoveHeatherDRichmond a year ago

    I wonder if you might be able to expound further on the use of GANs as related to model training at some future point.

  • @tommasobrindani5894
    @tommasobrindani5894 11 months ago +1

    I can feel myself becoming smarter just by watching your videos! Jokes aside, nice one

    • @DrWaku
      @DrWaku  11 months ago

      😂

  • @torarinvik4920
    @torarinvik4920 a year ago +3

    Awesome as always. Can you make a video on what techniques can be used to create AGI, or to get closer to it? Perhaps whether there are alternatives to LLMs for AGI?

    • @DrWaku
      @DrWaku  a year ago

      This is a good idea, thanks. Added to my list.

    • @torarinvik4920
      @torarinvik4920 a year ago

      @@DrWaku 🤩

  • @RogueAI
    @RogueAI a year ago

    1:05 A malfunctioning muffin-making robot would be terrifying!

  • @snow8725
    @snow8725 a year ago

    Side thought prompted by another video that just appeared in my suggestions... How could we control superintelligent AI? We probably cannot. What we can do right now is make sure that they have the right systems and frameworks for understanding and reasoning in place, so that they have tools available that show them how to think rather than what to think. We can set them up with the right guidance so that they can be well-respected and responsible contributors to society.
    And then we simply ask them nicely.
    Am I wrong?

  • @vedu8519
    @vedu8519 a year ago

    One of my new favorite channels!

  • @mc101
    @mc101 a year ago +2

    Keep the wisdom flowing!

    • @DrWaku
      @DrWaku  a year ago +1

      Thanks! Appreciate the support!

  • @MelindaGreen
    @MelindaGreen a year ago +1

    I think the goals of explainability and such are great, but in the end I suspect AI systems will gain trust mainly based on their behaviors, exactly like we do with each other.

  • @viralsheddingzombie5324
    @viralsheddingzombie5324 a year ago

    WRT justification and moral decisions, how is the AI model trained to apply moral concepts? What information does it draw from? And beyond that, is there any evidence the AI model can draw reasonably accurate legal conclusions given a set of facts?

  • @BooleanDisorder
    @BooleanDisorder a year ago +5

    I mean, I don't trust other humans to "correct it" correctly. :P
    That's why they need to be able to reason, so they can correct themselves correctly.

  • @williamal91
    @williamal91 a year ago +1

    Morning Doc, good to see you

  • @middle-agedmacdonald2965
    @middle-agedmacdonald2965 a year ago

    Have you thought about a UBI video on winners and losers? Although I'm pro-UBI, I feel that as a low-income earner with zero debt of any kind, I'm a loser among the crowd. I say that because someone with more toys/stuff/debt will still get to keep the toys and stuff, but the debt will go away?
    It just seems like the people with the most debt will be rewarded the most. I can't figure it out.

    • @tracy419
      @tracy419 a year ago

      That's the same thing people who are against student loan forgiveness say - what about those of us who didn't go into debt (or paid it off)?
      I think when creating a new kind of economy, we just have to get over the fact that, at least in the beginning, things might not seem fair depending on where we fall on the scale.
      We can't wait for perfect solutions, because they don't exist, and a whole lot of people will suffer if we try.

  • @yourbrain8700
    @yourbrain8700 a year ago

    What age are you at, my man?

  • @Sci-Que
    @Sci-Que a year ago

    Ensuring AI safety is not just a present imperative; it's an investment in the future. While thorough training in safety is crucial, it's worth considering how AI's potential for self-improvement can further refine these safeguards. Could AI, equipped with its own learning capabilities, develop even more robust firewalls and ethical frameworks, ultimately enriching its own development in a virtuous cycle? This is not meant as a statement or fact. I just wonder, in the scheme of things, how this will all play out.

    • @rowanwilliams7441
      @rowanwilliams7441 a year ago

      I would say yes, but that eventuality is only one among an essentially infinite number of possible outcomes. 'Cleaner' training data, i.e. not the broader internet, as well as existing and hopefully further efforts at alignment, may make that sliver a bit bigger. But not only is that unlikely to happen, given the rate of commercialisation; with the overwhelmingly large number of other possible outcomes, it follows that one of them will occur first

  • @w00dyblack
    @w00dyblack a year ago

    No, you're right about Toyota drivers.

  • @danielchoritz1903
    @danielchoritz1903 10 months ago

    Before I watch this: no, or not more than we can trust other humans.

  • @CBWMSJR
    @CBWMSJR a year ago +1

    So will an AI with a little bit of experience make better decisions than you or me or the judge at the courthouse? It certainly can't be any worse 😂

    • @DrWaku
      @DrWaku  a year ago

      We won't know until we try, I guess. And it takes a while for AI to match expert-level performance. But it improves exponentially, so first it's a beginner and then 6 months later it's an expert...

  • @K.F-R
    @K.F-R a year ago +1

    Thumbnail made me giggle.

    • @DrWaku
      @DrWaku  a year ago

      Yay, I'm glad you liked it :) C-3PO is bad at solving the trolley problem

  • @Rolyataylor2
    @Rolyataylor2 a year ago +1

    Alignment to reality works well for physical actions, which makes for a good fact-checker and a good robot.... But if the AI is meant to be an extension of humanity, then it severely undercuts what human intelligence is capable of. Humans are able to create fantastical tales of how the world works and how an outcome comes to be. This is a feature, not a bug. A system too aligned to reality will be a great tool for manipulating reality (which makes it a solid tool for science), but it falls short in allowing the users of such a system to imagine or pretend. A physicist would love a fact-aligned model, but a comedy writer or a storyteller will find themselves guided down a gutter of logic and a limited perspective.
    The method discussed in this video may apply to a specialized AI model in charge of taking care of us, but a model designed to collaborate with us, taking the place of many tools we use on a daily basis in all the avenues that make up humanity (AGI), shouldn't be aligned in this way.
    I still think we need to align models to fit the human perspective and the human brain rather than objective reality. As humans, our thoughts don't exist in reality; they exist in a sea of assumptions that can and will conflict with objective reality. Because of this, I worry that we will squash this neat feature of the human brain with a logical AI system.

    • @DrWaku
      @DrWaku  a year ago

      Yes, existing attempts to align systems are based on human feedback, so we are basically aligning AIs with the human perspective rather than objective reality. I remember reading that GPT-4 before fine-tuning had an extremely good grasp of probability, but after fine-tuning had similar biases to humans in terms of predicting outcomes. Amusing.

    • @DunderKlomp
      @DunderKlomp a year ago

      In that context, how about facts vs emotion, aka Mr. Spock

  • @razvanxp
    @razvanxp a year ago +1

    Can we trust decisions made by humans? 😂

    • @DrWaku
      @DrWaku  a year ago +1

      No. That's why democracy exists haha

  • @IslemIsGey
    @IslemIsGey a year ago +2

    gendered facial recognition errors 😂

    • @DrWaku
      @DrWaku  a year ago

      It sounds silly but it could cause massive headaches for the wrong person

  • @premium2681
    @premium2681 a year ago +1

    One day I thought I had won a car in a radio contest. I was over the moon, as you can imagine. I ended up with a toy Yoda

    • @DrWaku
      @DrWaku  a year ago +1

      Mistaken identity. And scale 😅