The first law on AI regulation | The EU AI Act

  • Published: 20 Sep 2024

Comments • 86

  • @bdennyw1 1 year ago +22

    Countries are getting together to have a party. China says “I’ll bring the hardware!” The US says “I’ll bring the software!” Europe says “I’ll bring the regulation!”

    • @conan_der_barbar 1 year ago +13

      And the world is often better for it. Often, new regulations come from the EU, then California follows (or leads), and then, step by step, large portions of the world.

    • @RegularRegs 9 months ago +2

      Yeah, the EU AI regulations could set a good framework for my country, the good ole US. A terrible government of old people who think regulation is a bad word, even when all the companies are publicly telling Congress to regulate them.

    • @MilwaukeeF40C 9 months ago +2

      RegularRegs
      "all the companies are telling Congress to regulate them"
      That's called regulatory capture. It makes things easy and profitable for them, not for new competitors.

    • @pircalabustefan9364 5 months ago +1

      And bureaucracy.

  • @jeremyvictor8266 1 year ago +3

    Came across this channel by accident when I was trying to learn about LLMs. Thanks for the information! Keep up the good content :)

  • @GodsOwn4142 1 year ago +8

    Well, this surely is a good start. Thank you for the timely videos!

  • @theosalmon 1 year ago +4

    Thanks for looking at this. It seems a token effort, moving at glacial speed, but it has to be a good thing that we're trying to think about this.

  • @doubtif 1 year ago +4

    That "mhm" at 5:18 was *pointed*

  • @harumambaru 1 year ago +4

    I think as long as policymakers have open conversations with researchers, there is hope.

  • @DerPylz 1 year ago +5

    Thank you for this great summary!

  • @RobertAlexanderRM 9 months ago +2

    It's a pleasure listening to your informed and lucid reasoning. Thank you. A Boomer :)

  • @harumambaru 1 year ago +4

    Easy now: just categorise your content as military defence learning material to get a waiver from the regulation and enjoy all the benefits.

  • @TheRyulord 1 year ago +8

    Very much with you on the idea that regulation should be focused on use-case, not the technology that enables it. I couldn't help but notice social scoring is classified as "unacceptable risk" but apparently only if you use ML/statistical methods/expert systems. I find this pretty funny because I don't think China's social scoring systems used these so Chinese style social scoring would be perfectly legal under these regulations. Why not just say that's not okay regardless of the technology used?

    • @DerPylz 1 year ago +4

      I think it makes sense if you consider that this is specifically a product safety regulation for AI products. Banning social scoring outright is not really in the scope of this act, but banning the use of AI systems for that purpose is. I'm no expert on EU law at all, but I can imagine that there might already be a law against social scoring in general.

    • @zakuro8532 1 year ago +2

      There is a social scoring system in Germany for taking out loans (Schufa), and there isn't the political will to outlaw it.

    • @DerPylz 1 year ago +5

      @@zakuro8532 I agree that Schufa is very scary, but I don't think it's technically social scoring; it's rather credit scoring. The types of data that are collected, and the influence it has on a person's life, are more limited than with China's social scoring system. But it does seem like a gradient...
      Thankfully Schufa had to get a lot more transparent, thanks to the GDPR.

    • @DefinitelyNotAMachineCultist 1 year ago

      But see... The problem is, if they do that, it's a huge opportunity cost for the regulators.
      Criminalizing the means instead of the act is pretty common.
      The broader and vaguer you make laws, the more discretionary power you give to those who get to interpret the law later.
      I'm pretty sure most of these recent tech-related laws have more to do with petty protectionism and the EU trying to keep US corps out more than anything else.
      This happens with tools of all kinds, especially with anything even remotely related to self-defense.
      _Can't have grandma injuring some poor defenseless burglar with her pepper spray!_
      Pretty sure some legislators would ban cars if they could.
      Think of the pedestrians you could save! _You value human life, right? Therefore, you must hate cars unless you're the lowest form of scum._

  • @harumambaru 1 year ago +4

    Thanks for teaching me a new word: subliminal -- (of a stimulus or mental process) below the threshold of sensation or consciousness; perceived by or affecting someone's mind without their being aware of it.
    If I understand correctly, every social media algorithm does this today, for Twitter, Insta, TikTok and many more. I wonder how they can regulate it. But it could be a really nice hatred-reduction mechanism.
    Totally agree about the task definitions.

    • @AICoffeeBreak 1 year ago +4

      Thanks for the clarification. Precisely because it could even include ad/content placement algorithms, I am really confused about how this can be regulated if it is on the prohibited list.

    • @harumambaru 1 year ago +2

      @@AICoffeeBreak Maybe the mental health of billions can be more valuable than the profit of 5 companies. Let's see how it unfolds.

  • @marklopez4354 1 year ago +2

    Great video as always. Was Ms. Coffee Bean sleeping in for this one?

    • @DerPylz 1 year ago +1

      Maybe she was too exhausted from reading the 90 pages of the AI act

    • @AICoffeeBreak 1 year ago +1

      😂

    • @AICoffeeBreak 1 year ago +1

      She never told me what she did that day. 🤔

  • @CodexPermutatio 1 year ago +3

    Thanks for this nice summary.
    I can't help but wonder if some of the military applications that fall outside the scope of this regulation could be categorized as "unacceptable risk."

    • @zakuro8532 1 year ago +1

      The paperclip optimiser

    • @AICoffeeBreak 11 months ago +2

      Certainly so. But it seems like they do not even want to try to regulate the military.

  • @governanceriskcompliancegr9963 1 year ago +1

    The AI Act contains various important points that individuals, as well as AI technology producers, must know. Personal data protection is a top topic these days, and cybersecurity and compliance professionals need to perform effective and relevant AI risk assessments. The AI Act is about safety, including data safety, so regulatory compliance and risk assessments are now institutional needs. The Act distinguishes risk categories such as unacceptable risk, high risk, and limited risk. Deeply understanding these risk categories may help reduce the risk of reputational and financial losses caused by the misuse of AI technology.
    The AI Act should be read in full to understand the roles and expectations of AI technology producers and users.

  • @BrianPeiris 1 year ago +2

    Thanks!

  • @dameanvil 10 months ago +2

    02:10 🇪🇺 The EU proposed the AI Act to regulate AI for societal benefit and to prevent potential harm, striking a balance between innovation and safety.
    05:00 🤖 The AI Act primarily applies to providers of AI systems in the EU or third countries placing AI systems on the EU market, as well as users of AI systems located in the EU.
    07:35 📝 AI systems are categorized into unacceptable risk, high risk, limited risk, and low or minimal risk, each with specific requirements and regulations.
    09:17 🔍 High-risk AI systems must undergo CE registration, meet safety standards, and comply with various requirements including risk management, transparency, and cybersecurity.
    10:38 👁 Limited-risk AI systems face transparency obligations, including disclosing data sources and benchmark scores, which may pose challenges for existing AI models.
    12:57 🌍 The AI Act's impact extends beyond the EU, as global companies often align with its standards to access the European market, a phenomenon known as the "Brussels effect."
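
The four risk tiers listed in this summary can be sketched as a simple lookup. This is an illustrative sketch only; the example use-cases and their tier assignments are loose assumptions for demonstration, not a legal mapping from the Act:

```python
# Illustrative sketch of the AI Act's four risk tiers.
# The example use-cases below are assumptions for demonstration,
# not quotes from the Act itself.
RISK_TIERS = {
    "unacceptable": {"social scoring", "subliminal manipulation"},
    "high": {"credit scoring", "recruitment screening"},
    "limited": {"chatbot", "generative model"},
    "minimal": {"spam filter", "video game AI"},
}

def risk_tier(use_case: str) -> str:
    """Return the tier of a known example use-case, else 'unclassified'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(risk_tier("social scoring"))  # unacceptable
print(risk_tier("chatbot"))         # limited
```

In the real regulation the tier follows from the system's intended purpose and context of use, so any production classifier would need far richer criteria than a string lookup.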

  • @lisa-kh9td 10 months ago

    Hello, I am currently working on a paper that needs to distinguish AI regulation in the EU from AI regulation in the US. This video really helped me understand the EU's risk-based approach... I am struggling to find anything about the US. Does someone know where I could find videos/articles about US AI regulation, please?

  • @Jan-fw3mi 4 months ago

    It didn't take long. I am a victim of a crime committed using new technology.
    The AI has prompts and doesn't want to stop, even though it knows it's committing a crime.
    And the best thing is that no one knows how to help me and stop the criminal(s).
    It's horrifying.

  • @johnm.sr.7646 9 months ago

    AI-created risk, human-created risk, nature-created risk... which risk is most likely to bring about our ultimate end?

  • @user-wr4yl7tx3w 9 months ago

    It creates more jobs for officials, so why wouldn't they want to regulate more than necessary? Has an EU official ever said no to more regulation?

  • @heramb575 1 year ago

    Video starts at 1:57

  • @timeTegus 11 months ago

    Stability would be OK, they do all that stuff already.

    • @timeTegus 11 months ago

      Not using copyrighted data is not a condition in this law.

  • @NeuroScientician 1 year ago +4

    This is completely unenforceable.

    • @DerPylz 1 year ago +4

      What part?

    • @NeuroScientician 1 year ago +2

      @@DerPylz It's like trying to stop piracy, but with a lot less effort. How would you actually audit companies you aren't aware of, or ones that use a data centre that is physically outside of the EU?

    • @DerPylz 1 year ago +5

      Well, if they want to sell their products on the EU market, they'll have to comply with the rules. It's the same as with data privacy laws...

    • @NeuroScientician 1 year ago +2

      @@DerPylz There is no functional way of auditing it. It's all self-reports.

    • @DerPylz 1 year ago +1

      @@NeuroScientician But aren't rules that are partly hard to enforce still better than no rules at all?

  • @connectedonline1060 5 months ago

    This law/act is a ban on privacy and bypassing laws that protect privacy!!!

  • @johnsavage6628 7 months ago

    Now how do you enforce it? Lots of luck. People will tell you to go get stuffed.

  • @Ben_D. 6 months ago

    Jesus. That lipstick is distracting. 😍 I think I need to watch this a few times. I won’t retain any of the content for at least the first three runthroughs.

  • @billienomates1606 5 months ago

    'WOULD YOU LIKE TO PLAY A GAME?'

  • @ew3995 1 year ago +4

    It's a race to the bottom at this point. If the EU places these restrictions and others don't, it will stop being economically competitive.

    • @DerPylz 1 year ago +6

      Maybe... But that's not how it went with other restrictions set by the EU in the past, see the section on the Brussels effect in the video (12:56).

    • @dtibor5903 1 year ago +7

      Hope you enjoy the completely insecure and rogue home surveillance offered by US tech companies.

    • @ptrckqnln 1 year ago +2

      @@DerPylz While it's not trivial to do, I can imagine US tech firms training "neutered" models for the EU market while offering other models globally, in order to remain competitive with China. The US is highly motivated to keep pace with them, and I can't see China reining in its companies to comply with EU regs (except in the limited sense I described).

    • @DerPylz 1 year ago +3

      @@ptrckqnln I can of course be wrong this time, but what you're saying has always been raised as an argument against regulations in the EU, and so far it has never happened. Google, Microsoft and Meta now have GDPR compliance globally, and Apple will add USB-C to their iPhones. It's just not worth it to develop and support two separate products, and for now, the EU is too important a market to just ignore.
      Additionally, I don't see the major US tech companies developing anything other than AI systems that fall under the "limited risk" category of these regulations. The obligations for that category seem quite attainable; e.g. Google's model already complies with many of them. And in my opinion, some transparency on the models would be beneficial for all.

    • @ptrckqnln 1 year ago +3

      @@DerPylz In general, I agree with you. But it seems that the competition between the US and China to develop ever more powerful AI technologies is becoming more of an arms race by the day, and I think that will weigh on both countries' willingness to comply with these and other regulations.
      Furthermore, it is easier to serve different AI models to different regions than to develop different iPhone models for different markets, for instance.

  • @Vaikilli 1 year ago +2

    Sadly Open AI's goons and lobbyists got their grimy hands on this law. Would have wished for an actually effective legislation against these automated racism systems.

    • @DerPylz 1 year ago +4

      What part of the regulation is too lax in your opinion?

    • @gimmechocolate6 1 year ago +4

      I just skimmed through it: it does require the elimination of bias in datasets, it explicitly puts systems where bias could be a major issue into the high-risk category, and lastly it does mention that high-risk systems should have bias-monitoring systems. Honestly, I don't think they did that bad of a job.

    • @ptrckqnln 1 year ago +5

      @@gimmechocolate6 "elimination of bias in datasets" This is fundamentally impossible - the datasets will always reflect *someone's* biases.

    • @gimmechocolate6 1 year ago +1

      @@ptrckqnln I was mostly just summarizing what it said with that. It does list several specific biases that should be eliminated: for example, if your country has 10% Arab people, your datasets should also involve 10% Arab people where applicable. It also said a lot of other things, so if y'all wanna get mad at something, please actually read the law and get mad at that.

    • @lomiification 1 year ago

      @@gimmechocolate6 Which doesn't seem great. If you've got 10% Arabs, you should probably have more like 50% Arabs in the dataset, so the training doesn't learn that Arabs are unimportant.
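
The proportional-representation question debated in this thread can be made concrete with a quick check of dataset shares against population shares. A minimal sketch with made-up numbers (nothing here is taken from the Act):

```python
# Compare each group's share of a dataset against a reference population share.
# All figures below are hypothetical, purely to illustrate the comparison.
def representation_gap(dataset_counts: dict[str, int],
                       population_shares: dict[str, float]) -> dict[str, float]:
    """Return dataset share minus population share per group
    (positive = over-represented relative to the population)."""
    total = sum(dataset_counts.values())
    return {
        group: dataset_counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

counts = {"group_a": 80, "group_b": 20}      # hypothetical dataset
shares = {"group_a": 0.90, "group_b": 0.10}  # hypothetical population
for group, gap in representation_gap(counts, shares).items():
    print(f"{group}: {gap:+.2f}")
```

Whether the "right" target is proportional representation or deliberate over-sampling of minority groups (as the reply above argues) is a modelling decision; this check only measures the gap against whichever reference you choose.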

  • @_bustion_1928 1 year ago

    AI can theoretically create the most powerful propaganda program.

    • @MilwaukeeF40C 9 months ago

      ChatGPT is a propaganda program. Not real AI either.

  • @urimtefiki226 9 months ago

    Not interested in regulation
    Stop producing chips with my algorithm

  • @__--JY-Moe--__ 1 year ago +1

    Help! Are they coming for my Matlab & C++?! It will be nice to know if Google likes our left or right foot, right? No!
    Very helpful vid!! Good luck! Now back to our caves!!