We Need To Talk about AI

  • Published: Jan 15, 2025

Comments • 797

  • @LowLevelTV
    @LowLevelTV  26 days ago +29

    🔴 Come chat with me about this at lowlevel.tv/live

    • @LowLevelTV
      @LowLevelTV  26 days ago +11

      or don't, idk

    • @Hmmindividualguy
      @Hmmindividualguy 26 days ago

      @@LowLevelTV I think I will

    • @CaptMirage
      @CaptMirage 26 days ago +3

      @@LowLevelTV wish u were live 🫠

    • @kazkz5331
      @kazkz5331 26 days ago

      RMA your Intel CPU, and tell them to refund instead of requesting cross-shipping.

    • @quantumblurrr
      @quantumblurrr 26 days ago

      Didn't expect to hear the hard right-wing ‘anti big government’ line from you, lol. Pretty wild. The idea comes from CEOs wanting to deregulate all of America’s industries

  • @TheBackyardChemist
    @TheBackyardChemist 26 days ago +731

    I think the correct response would be a "fine, so be it, you can train on unlicensed data, but your model weights are not protected by trade secret laws and any output is instantly public domain"

    • @ZeAlfredo
      @ZeAlfredo 26 days ago +106

      The whole model's source code should be public domain. Lol

    • @J.erem.y
      @J.erem.y 26 days ago +36

      @ZeAlfredo A lot of them are. Source code isn't personal data, though. This conversation is about data.

    • @tiorontoron7531
      @tiorontoron7531 26 days ago +27

      It's all vaporware unless I can run it offline, airgapped, on my own machine

    • @glowingone1774
      @glowingone1774 26 days ago

      @J.erem.y You sure source code can't be personal info?

    • @linklovezelda
      @linklovezelda 26 days ago +49

      @@tiorontoron7531 the word vaporware has a definition, and that's not it lol

  • @Conz3D
    @Conz3D 26 days ago +116

    "I’m not pirating movies, I’m just training my model."

  • @nagi603
    @nagi603 26 days ago +165

    The fact that they have to take steps to prevent training data leakage means it's not anonymized at all. That's the end of the story as far as anonymization goes. (And it's inherent to the tech itself, like hallucination.)

    • @georgesmith4768
      @georgesmith4768 26 days ago

      @@nagi603 Data anonymization has always been a scam… It just means they carve out name, SSN, etc. Knowing your address, place of work, medical history, gender, age, etc. has always been more than enough to deanonymize someone.
      The AI model just doesn’t know to pretend it isn’t connecting the list of people who live at certain addresses with the list of addresses in the merely “anonymized” bank payment info
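
    The linkage attack described above is easy to demonstrate concretely. Below is a minimal sketch; the records, names, and field choices are entirely invented for illustration. It joins a "de-identified" table against a public roll on quasi-identifiers (ZIP code, birth year, sex):

    ```python
    # Hypothetical toy data for a linkage attack: the "anonymized" records
    # have names stripped, but quasi-identifiers remain.
    anonymized_medical = [
        {"zip": "02138", "birth_year": 1945, "sex": "F", "diagnosis": "hypertension"},
        {"zip": "60614", "birth_year": 1988, "sex": "M", "diagnosis": "asthma"},
    ]

    # A separate, public dataset (e.g. a voter roll) that carries names
    # alongside the same quasi-identifiers.
    public_roll = [
        {"name": "A. Example", "zip": "02138", "birth_year": 1945, "sex": "F"},
        {"name": "B. Sample", "zip": "60614", "birth_year": 1988, "sex": "M"},
    ]

    def link(records, roll, keys=("zip", "birth_year", "sex")):
        """Re-identify 'anonymized' records by joining on quasi-identifiers."""
        index = {tuple(p[k] for k in keys): p["name"] for p in roll}
        return [
            {**r, "name": index[tuple(r[k] for k in keys)]}
            for r in records
            if tuple(r[k] for k in keys) in index
        ]

    for row in link(anonymized_medical, public_roll):
        print(row["name"], "->", row["diagnosis"])  # each re-identified record
    ```

    With only three mundane attributes, every record in this toy dataset is re-identified, which is the point the comment is making: stripping names alone is not anonymization.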

    • @JamesTDG
      @JamesTDG 26 days ago +12

      Indeed, the AI is literally trained blindly on content; it can and has de-anonymized some of its training data. Remember the whole watermark and signature fiasco a bunch of image generators had?

    • @imeakdo7
      @imeakdo7 26 days ago +1

      The reason is that the data is what makes the AI. If the data leaks, competitors will have their work done for them

    • @Imperial_Squid
      @Imperial_Squid 25 days ago +1

      Nope, it's just that training data is a trade secret in the same way that the code of closed-source software is.
      ML models train on incredibly similar code no matter the domain (most of the time it's a big for loop with bells and whistles), but it's the data that makes models useful.
      There's a saying in machine learning, "garbage in, garbage out", i.e. the quality of the model reflects the quality of the data, so similarly "gold in, gold out" also applies.
      If a company has worked really hard and sunk a bunch of money into collecting and pruning a high-quality dataset, they're not going to want it to leak, since that would allow competitors to shortcut their advantage.

    • @dojohansen123
      @dojohansen123 24 days ago +2

      Well, say a model "reveals" to you in a chat that a famous person P is HIV positive, and assume P denies it, or simply declines to comment. There is no way to know whether the model hallucinated or "knows what it's talking about", for you or for anyone else, including the model itself. So while technically it might be correct to call it a "revelation" in cases where what's claimed is in fact true, there would be no way to tell the revelations from the hallucinations. Obviously P might not be pleased about rumours of him being HIV positive floating around, should it impair his ability to get laid, but this too is equally true whether the rumours are due to hallucination or revelation.

  • @Cinarbayramic
    @Cinarbayramic 26 days ago +385

    AI wasn't going to replace us anyway, at least not by training on my messy code...

    • @lizardkeeper100
      @lizardkeeper100 26 days ago +8

      If it works, I don't care how messy my code is until someone finds a bug and I curse my younger self.

    • @el_larva
      @el_larva 26 days ago +2

      😂😂 so true

    • @Eagledelta3
      @Eagledelta3 26 days ago

      especially with groups out there intentionally trying to poison the data

    • @ImSquiggs
      @ImSquiggs 26 days ago +15

      If AI trained itself on my code, all it would learn is how to program something for a year until it gets complex enough that it’s hard to troubleshoot, thus indicating the correct time to abandon the project

    • @pixelcatcher123
      @pixelcatcher123 26 days ago

      You know it, I know it, but some people just don't

  • @Noksus
    @Noksus 26 days ago +77

    GDPR isn't just about telling users that cookies are being used. That is part of it, and it requires websites to be opted out of tracking cookies by default, because privacy should be the default. But GDPR also covers data privacy well beyond cookies: it is about companies needing to take care of the security of personal data, and it outlines the potential fines for ignoring data privacy.

    • @Imperial_Squid
      @Imperial_Squid 25 days ago +15

      Exactly. For some specifics, just so people have an idea of just how huge GDPR was legally speaking:
      - It defined what counts as personal data legally
      - It said that companies must have a _legitimate_ reason to collect the data
      - It gave users the right to know what data was being collected and for what reasons, in clear and understandable terms
      - It gave users the right to access the data companies hold about them, as well as know why they have it, who has access, etc
      - It gave users the right to have that data deleted (with some conditions)
      - It gave users the right to object to any processing of their data (with some conditions)
      - It gave users the right to compensation from the company should their mismanagement of personal data cause harm to the user
      - Companies must only collect the absolute minimum data necessary
      - Companies must have infrastructure in place such as audits and internal controls
      - Companies must report data breaches to a legal body within 72 hours
      - Companies must store data in a "pseudonymous" form, i.e. it must be useless without some additional information to transform it back into a useful form (e.g. it must be encrypted), and that key information must be stored separately
      - If a company matches certain criteria (e.g. it has more than X employees, etc), it must keep data access logs
      - Companies must have a dedicated data protection officer to oversee all of this
      - Companies who share your personal data with other entities must ensure those entities _also_ uphold GDPR
      - Companies in breach of GDPR may be fined €20m or 4% of annual turnover, whichever is _greater_
      The cookies thing is very much just a downstream effect of all of the above, but it's FAR from the only thing GDPR does.
      Edit: notably, "they have to tell people they're training models on their data, and those people have to consent to this" _is already covered_ by GDPR. In fact, it's the reason Facebook had to send out emails a few months ago alerting people to the fact that they would be training models on their data and give the users a way to opt out of that.
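
    The "pseudonymous form, key stored separately" bullet above can be sketched in a few lines. This is an illustrative sketch of one common approach (a keyed hash), not legal advice; all names and the example record are invented:

    ```python
    import hashlib
    import hmac
    import secrets

    # The secret key ("pepper") would live apart from the data store,
    # e.g. in a separate key-management system.
    pepper = secrets.token_bytes(32)

    def pseudonymize(user_id: str) -> str:
        """Derive a stable, non-reversible token for a user identifier."""
        return hmac.new(pepper, user_id.encode(), hashlib.sha256).hexdigest()

    # The data store sees only tokens, never raw identifiers.
    data_store = {pseudonymize("alice@example.com"): {"plan": "pro"}}

    # Whoever holds the key can still look a user up...
    assert data_store[pseudonymize("alice@example.com")]["plan"] == "pro"
    # ...but a leaked copy of data_store alone does not reveal who "alice" is.
    ```

    The point of keeping the key separate is exactly the one the bullet makes: a breach of the data store by itself yields records that cannot be tied back to people.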

    • @autohmae
      @autohmae 25 days ago +5

      Strictly speaking, cookies are not even part of GDPR. Also, when it comes to cookies, nobody seems to know the law; they just put a cookie banner on their site without checking whether they even need to do so, thus giving website visitors no option to go to another website that doesn't do tracking.

    • @clray123
      @clray123 24 days ago +3

      Since nobody can control or enforce GDPR at scale, the effect is mostly (1) annoyance for everyone and (2) the ability of corrupt politicians to terrorize certain big companies and extort bribes for personal benefit if need be. Also, it exempts governments from the same strict privacy laws that they demand from even small companies.

    • @clray123
      @clray123 24 days ago +4

      BTW, as an IT guy I get regular inquiries from organizations worried about whether their cookie banner is adequate. Meanwhile they outsource all their data, authentication and authorization to Microsoft without even batting an eye. That is how GDPR works in practice.

    • @cherubin7th
      @cherubin7th 23 days ago +1

      GDPR is completely useless and a scam.

  • @bryanburns3391
    @bryanburns3391 26 days ago +90

    It isn't even just about reading the policy: you can read it, but if you disagree with it you still can't use the product. As you mentioned, I constantly feel pressured to either give up my information or be unable to use most products today. Hardware included, with all the devices tied to proprietary apps.

    • @Threedogsinatrenchcoat
      @Threedogsinatrenchcoat 26 days ago +15

      Not to mention the policy is written with the intent to dissuade understanding; they have full-fledged legal departments to develop new jargon that would allow them to do as they please but not hold up in court.

    • @viralarchitect
      @viralarchitect 26 days ago +11

      Exactly. It's all gate-kept by big tech. Don't like it? Don't use it! But EVERYONE uses it, so you are ostracized from real people in the real world if you don't consent.

    • @J.erem.y
      @J.erem.y 26 days ago +1

      Serious question: would you use the service if it wasn't free? Which is more important, data privacy or price? Unfortunately, society chose price.

    • @neruneri
      @neruneri 26 days ago +10

      @@J.erem.y This is a false dichotomy. Data privacy would be eroded either way. Data brokerage does not actually prop up "free" services; investment capital does. Likewise, the very many companies who charge for entry do not attempt data privacy. That is not a trade being made.

    • @gearboxworks
      @gearboxworks 26 days ago +5

      @@J.erem.y - Captains of the tech industry are answering your rhetorical question among themselves with "Why not both? 🤔"

  • @nojuanatall3281
    @nojuanatall3281 26 days ago +268

    As a musician, I quit putting my music on social media to keep Facebook and others from stealing it and training AI on my work. It is hard to be an artist these days.

    • @el_larva
      @el_larva 26 days ago +37

      AI was supposed to help with things like housekeeping chores, not replace artists... we are being pushed to use it for the wrong kinds of things

    • @Un_Pour_Tous
      @Un_Pour_Tous 26 days ago +12

      Who are you again?

    • @viralarchitect
      @viralarchitect 26 days ago +14

      You thought you could escape corporate politics by following your passion and pursuing a career in the arts? Nice try! lol I'm sorry you've got to deal with that.

    • @makesushi
      @makesushi 26 days ago

      @@el_larva right!

    • @sebastianramirez5781
      @sebastianramirez5781 26 days ago +8

      that's stupid lol

  • @handle_your_set
    @handle_your_set 26 days ago +93

    Just last week, I asked my Aetna-appointed therapist a very specific question:
    “Is there any possibility that my session notes have been viewed by others outside these sessions?”
    He hesitantly answered with a few departments.
    I then asked, “Is there any possibility that my session notes are being used to train any large language model, ergo, AI?”
    There was hesitation, then there was evasion, followed by...
    “I don’t think so…”.
    So yeah. Oh, and he didn’t call me this week, as he has every week since we started.

    • @ProgrammedAttempts
      @ProgrammedAttempts 26 days ago +2

      Vendors have requirements and people follow them blindly. This is going to happen regardless of how we feel about it.

    • @handle_your_set
      @handle_your_set 26 days ago +15

      @@ProgrammedAttempts My point being, maybe it won’t, if enough people express the same interest in their own care and privacy as I have.
      I’m not pretending that this is some gotcha moment from me to them. I’m purely speculating that my particular line of questioning (which I was advised to ask by another in the field),
      and this upcoming release by the government, may be the reason he didn’t call this week.
      There are a few other possibilities. Maybe he didn’t call this week because he knows I’m done talking my innards out to a corporation.
      Either way, I’m done with it. If I can’t even find sanctity in therapy, is there any left to be had?

    • @omertaprimal6913
      @omertaprimal6913 26 days ago +1

      So wild

    • @AshnSilvercorp
      @AshnSilvercorp 25 days ago

      YouTube is just siphoning our comments as we speak. They think just like people who force themselves on women: they don't understand the definition of consent. They will never actually ask you for it. They'll just email you their EULA and say they are legally covered.

    • @clray123
      @clray123 24 days ago

      Professionals generally prefer not getting their custom from crazy customers. You may need to seek help from another professional while being less crazy.

  • @Bean-Time
    @Bean-Time 26 days ago +116

    5:40 Also, in American schools you are forced to use Google Drive and their other tools to turn things in. I am not sure how much of a choice it is when the alternative is flunking out of school. They could have put in the human-centipede clause, but is a third grader really going to know what the implications are, or have the power to reject it?

    • @2639theboss
      @2639theboss 26 days ago +13

      @@Bean-Time I mean, even at a corporate level it's mandatory. I work for a tech startup, which is about as flexible as it gets, and we're still locked into a couple of vendors for certain things.

    • @boredstudent9468
      @boredstudent9468 26 days ago

      I'd assume it's different for enterprise solutions, like Microsoft does it.

    • @georgesos
      @georgesos 26 days ago +3

      In Greece, the same.
      Our education ministry also gave the personal details of every student in the country to Cisco in exchange for the video conferencing platform that our schools used during COVID.

    • @alfonzo7822
      @alfonzo7822 23 days ago +2

      Same in the UK. Kids are assigned accounts from day one. It's actually so weird...

  • @wizardadmin
    @wizardadmin 26 days ago +42

    5:55 Because there are so few of those big tech companies, it's idiotic that a company can decide to just exclude you from like 1/5 of the internet for violating policies they just came up with

  • @SimonCopsey
    @SimonCopsey 26 days ago +22

    Fully agree. Satya Nadella's recent announcement that SaaS is dead and AI agents will replace it silently says two things: (1) anything you do using Microsoft WILL be used, processed and shared with AI, and (2) any green credentials are dead (if they were ever alive). The idea that everything runs on a foundation of LLMs is so ungreen and power-hungry it just isn't funny. Satya Nadella is proposing we multiply the power used by online services maybe a million-fold. That's ridiculous. It is 100% capitalism over planet.

    • @imeakdo7
      @imeakdo7 26 days ago +1

      Wouldn't the AI agent be SaaS? Also, Microsoft just purchased power from a retired nuclear plant

    • @nomore6167
      @nomore6167 25 days ago +3

      Regarding power usage, also look at the massive push for cryptocurrency, which is explicitly based on wasting as much energy as possible.

    • @SlinkyD
      @SlinkyD 25 days ago +1

      @@nomore6167 Banks at idle (closed times) use more power than crypto miners. You really think they shut down computers when they leave?

    • @nomore6167
      @nomore6167 25 days ago +2

      @@SlinkyD "Banks at idle (closed times) use more power than crypto miners" - I don't think you really know much about crypto mining.
      "You really think they shut down computers when they leave?" - I know for a fact that at least some of them do (I know because I've done IT work at banks). Even for those that don't, the systems most likely go to sleep after 15-30 minutes of inactivity, as that is the default power management plan.

  • @rickymort135
    @rickymort135 26 days ago +16

    The standard should be "informed consent". When enrolling in a clinical trial, the regulations around informed consent require that it be written at an 8th-grade reading level, and all possible negative future consequences need to be explicitly explained to you. No part of the consent you give these companies even approaches this standard of informed consent. Combined with the consolidation of the web, the choice isn't even a free choice; it's made under duress, as not consenting means not taking part in society. We need to break up these companies, start enforcing antitrust laws, and regulate informed consent to a standard similar to what we have for medical data.

  • @ImSquiggs
    @ImSquiggs 26 days ago +48

    Apple’s latest update forces their Voice Recorder app to make text transcriptions of all your recordings. There’s no way to turn this off with any settings; it’s purposely excluded, and you can guess why - to illegally train their AI on your personal data without your knowledge or consent.
    For anyone else who uses this app as frequently as I do and now hates the Apple corporation, I dug deep and found some angry nerds on a forum with a workaround, haha. Install the Japanese language on your phone and then force just that app to open in Japanese. Assuming you don’t speak Japanese, the app won’t be able to parse your speech and generate a transcript.
    Insane that I have to do this, but that’s the world we live in now. I think Apple needs a class action against them for this, among the other dozens that should be levied against other companies doing similar things.

    • @josho225
      @josho225 26 days ago +6

      Your voice print will be used to identify you. The conspiracy theorists were right. We have been conditioned (lulled) into giving up our privacy for convenience 😢

    • @JamesTDG
      @JamesTDG 26 days ago +2

      Can't class-action sue them; binding arbitration.

  • @mercifuldeath
    @mercifuldeath 26 days ago +16

    The only reason the government cares is because it affects them. We had to stop using Adobe, for example.
    Can we look at forced arbitration while we're at it?

  • @nerdy_crawfish
    @nerdy_crawfish 26 days ago +33

    18:05 The sad thing is we haven’t been defunding education. According to NCES statistics, inflation-adjusted spending per pupil has been increasing since the 1970s and is now at an all-time high of $15,000 per student on average. Yet at the same time, teacher pay has been going down when adjusted for inflation… So we are spending more money, but it’s not getting to the people on the ground where it would actually make the system better.

    • @pantallahueso
      @pantallahueso 26 days ago +1

      It's worth noting that it's not just teacher pay that matters, but also the support staff. In addition, the tools for providing education have also gotten more expensive, especially as schools have started to incorporate technology into the workflow. And the expense isn't just in the technology itself, but also the staff that is needed to provide service to make sure that's all working properly -- which is often lacking, in my opinion.
      That is to say, the increased funding may very well be going to something important, but it could also be going to something frivolous. That question would determine whether the solution is better funding or better allocation. Either way, the only way to know for sure is to know exactly where that money is going in the first place -- it's not enough to know where it's *not* going.

    • @RobertFletcherOBE
      @RobertFletcherOBE 26 days ago

      If that concerns you, take a look at the comparison of what Americans pay for healthcare compared to Europe, and the difference in quality. America deregulated itself into hell, then the corporations used the media to make you all blame regulation for the problem. You're sleepwalking into feudalism all the while cursing the things that could save you.

    • @draconightwalker4964
      @draconightwalker4964 26 days ago

      so where is it going then?

    • @AuxiliaryPanther
      @AuxiliaryPanther 25 days ago +5

      Yeah, at my local community college, our calculus professor said he was one of 10 people capable of teaching high-level math, yet they got a notice that the college was looking to cut one member from the physics or math department. He also said the number of admin staff had doubled to 200 in his several years there.

    • @mage3690
      @mage3690 25 days ago

      @@draconightwalker4964 I heard one person claim it goes to admin staff; i.e. the principal, his secretary, his secretary's secretary, etc., double that for the vice principal, yada yada. And the reason it's so difficult to fix that problem (because obviously people see and recognize the problem and are fighting against it) is that it creates jobs, and creating jobs is the ultimate economic good in modern economic theory, proving Goodhart's law yet again.

  • @paradox8425
    @paradox8425 26 days ago +12

    I think a new law should be created which says: "Any data used to train AI must be consented to by its owner in a separate document dedicated to that purpose only, and that document must have a clear title that specifies that intent."
    Also another one saying: "Access to a service can't require consent to using user data for AI training, unless it's an AI-only service like ChatGPT."

    • @buycraft911miner2
      @buycraft911miner2 25 days ago

      What do those help with? I'm pretty confused; they seem pretty roundabout

    • @paradox8425
      @paradox8425 25 days ago +6

      @buycraft911miner2 Basically, the first one says you can't hide AI consent in a 500-page privacy policy; it needs to be a dedicated document, so you know you are consenting to AI.
      The second one says they can't force you to consent to AI in order to use services like YouTube, Twitter or anything else (it could even be a physical service), unless it's an AI-only service like ChatGPT

  • @pav431
    @pav431 26 days ago +39

    One thing I'm very concerned about is the state of the law over here in the EU.
    Sure, Czechia has already declared that AI-derived works are uncopyrightable by nature, but we still lack broader legislation applicable throughout the whole of the European Union.
    ...And that's despite this continent spearheading privacy revolutions like the world-famous GDPR...
    I really hope the law can be amended or something so that in order for a company to use user-provided data, they'd need an explicit, direct agreement from the user...

    • @TheBackyardChemist
      @TheBackyardChemist 26 days ago +12

      The "no copyright for robots" approach is really the only workable solution here: anything that comes out of an ML model is public domain. If some corporation wants to replace artists with robots, I say fine, so be it, but no copyright protections for them; whatever they publish that was made with AI is free for all to use.

    • @CmdZOD
      @CmdZOD 26 days ago +1

      @@TheBackyardChemist I kinda agree with you. It's a very brutal solution, but if AI works were non-copyrightable, they would ultimately be useless to big corporations because everybody could reuse them. That would make AI works unusable by any serious company.
      This being said, it would only shift the problem to proving a work was made by an AI. And to that, I have no realistic solution.

    • @georgesos
      @georgesos 26 days ago

      In Greece, the (right-wing) government gave all the personal info of all students in all schools to Cisco for free. (They justified it by saying it was in exchange for the video conferencing platform our schools used during COVID.) No one was asked for permission, and no EU official intervened.
      The laws are made for the poor. They don't apply to the rich and their corporations.

    • @jorge69696
      @jorge69696 26 days ago +1

      The voice of Darth Vader is now AI-generated. Does that mean it's now free to use in Europe?

    • @JamesTDG
      @JamesTDG 26 days ago

      And the agreements should be a per-work deal. If a company wants to use my content to train their AI, then I should be able to DIRECTLY pick and choose which pieces are allowed to be used in their system. Oh, and it needs to be opt-in.

  • @kuhluhOG
    @kuhluhOG 26 days ago +15

    9:30 Fun fact: not all countries allow you to surrender (or even transfer) your copyright.

  • @VikiSil
    @VikiSil 26 days ago +4

    The only use of AI that is legitimately a net positive for humanity, that I have seen thus far, is solving the protein folding problem. And to be fair, that was a huge breakthrough.

  • @dz1rkys
    @dz1rkys 21 days ago +3

    I just got an AI girlfriend ad under this video 😭

  • @boredstudent9468
    @boredstudent9468 26 days ago +9

    Since it was referenced: no, the GDPR doesn't really say you have to show cookie banners. It's just something many companies do because they are data-greedy or uncertain about the legal details. There are multiple justifications for collecting data, e.g. it is required to provide the service, or there is a directly connected business interest, like logging.
    And the duty to inform is also passive; there needs to be a privacy policy in the legal fine print, but no need for a popup or explicit consent.

  • @ZeAlfredo
    @ZeAlfredo 26 days ago +42

    "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
    -- Joanna M.

    • @roknovak9991
      @roknovak9991 25 days ago +1

      @@ZeAlfredo we've had washing machines and dishwashers for a while

    • @ZeAlfredo
      @ZeAlfredo 25 days ago +8

      @roknovak9991 If you don't get it, just say that.
      When this line was spoken, it was regarding AGI and the purpose of having automation handle menial tasks automatically, not intellectual and creative tasks.
      The idea was to have a robot servant in your home that would collect the clothing, wash it, fold it for you, etc.
      The idea wasn't to have it doing painting, music, etc. while people still have to do household chores.
      The point is that the jobs that pay well and the fun creative work are now being automated, leaving just menial tasks for the average person.
      The average person is not benefiting from automation

    • @roknovak9991
      @roknovak9991 25 days ago

      @@ZeAlfredo I get what the argument is attempting to convey. And there's absolutely a need to discuss the future of AI and its impact on society. But that goofy one-liner is on a Saturday-morning-cartoon level of understanding of what AI is supposed to be. It's basically saying: I don't want AI, I want a robot butler instead. It's not tackling any actual problems of AI, and it's doing a disservice to real discussion about the issue

    • @ZeAlfredo
      @ZeAlfredo 25 days ago

      @roknovak9991 I beg to differ. While not an eloquent line, I think it summarizes the feeling of the average person in regards to AI.
      AI has not been marketed or sold as something that actually helps the average consumer any more than algorithms already have for the past 10 years.
      AI has been sold as a replacement for human labor, without concern for what this massive displacement of jobs will do to society.
      The expectation was that it would NOT take jobs but make lives easier. That is the AI utopia everyone expected.
      What we got is AI that is trying to compete against humans for creative jobs, and particularly the well-paying jobs.
      We already have a problem with wealth gaps between rich and poor; AI, as these companies promote it in their B2B sales pitches, effectively eliminates middle-class employment opportunities...
      It renders higher education useless for 90% of Americans and effectively demolishes the economy.
      Ultimately, it is not being marketed as something to help consumers but as a way to replace consumers. While you may not like the line, you cannot simply dismiss its candid observation on public sentiment.

    • @NinaGothMambaNegra
      @NinaGothMambaNegra 24 days ago

      love this!!! is so anti annunaki!!!!!

  • @evan5237
    @evan5237 25 days ago +12

    Require training data to be explicitly opt-in only from the individual copyright holder, with no grandfather clause. Any existing model needs to guarantee compliance or start over from a compliant dataset. As I suspect the vast majority of creators would not opt in, the industry could effectively be brought to heel. It would probably lead to a new industry of paying creators to opt in, but that's more ethical than now, as long as the compensation is fair and not coerced.

    • @Houshalter
      @Houshalter 25 days ago

      Facebook, Twitter, YouTube, etc. all have huge amounts of data from users who post on their site and sign away their copyright. It wouldn't hurt them much at all, just small AI companies and open-source projects.
      I also don't think people realize just how worthless their data is on an individual level. These things need 5 billion images or a trillion words at the minimum. Even if the AI model made $100 million in profit (none of them ever have), that's what, a couple of cents per image or paragraph? Not enough to be worth the time to license. Each data point only trains a single bit of a multi-gigabyte model, after all.
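
    The per-item figures in the comment above are easy to check. A quick back-of-envelope calculation, using the commenter's own hypothetical numbers ($100M profit, 5 billion images, a trillion words):

    ```python
    # Back-of-envelope: spread a hypothetical profit over a training set.
    profit = 100_000_000           # dollars, the comment's hypothetical
    images = 5_000_000_000         # "5 billion images ... at the minimum"
    words = 1_000_000_000_000      # "a trillion words"

    per_image = profit / images    # about two cents per image
    per_word = profit / words      # a hundredth of a cent per word

    print(f"${per_image:.4f} per image, ${per_word:.6f} per word")
    ```

    So on these numbers an individual image is worth roughly two cents and an individual word a hundredth of a cent, which is the comment's broader point: far too little to be worth licensing one item at a time.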

  • @rudomg
    @rudomg 26 дней назад +58

    Worst even is these kind of policies about sharing data to third party are being done by health providers too; good luck trying to explain a doctor how dangerous is.

    • @kazkz5331
      @kazkz5331 26 дней назад +10

      Just wait until they start charging a premium because they saw your grocery list.

    • @viralarchitect
      @viralarchitect 26 дней назад +6

      In my experience, it's because hospitals have to outsource their IT because it's way too complicated. The hospital budget doesn't cover what you need in terms of the wide range of subject matter experts and MSP would provide. Those are the 3rd parties they are referring to.

    • @theodorekorehonen
      @theodorekorehonen 26 дней назад +5

      Yup and it's trivial to deanonymize so called private health information

    • @nomore6167
      @nomore6167 25 дней назад +1

      "Worst even is these kind of policies about sharing data to third party are being done by health providers too" - It's also done by the governments, state and federal. And we, the people, have no way to opt out of any of it. And even worse than sharing is that the companies with whom they share the data have no controls/restrictions. Health care providers (in theory, at least) are bound by HIPAA. The companies with whom they share our data are not.

    • @SlinkyD
      @SlinkyD 25 days ago +2

      My data was stolen in a medical 3rd party breach & I can't do squat about it. Can't even file a report because I can't get access to needed info, even with a lawyer. Took them almost a year to let me know, all while the 3 credit report companies polled my credit weekly.

  • @privateprivacy5570
    @privateprivacy5570 26 days ago +8

    7:48 You are kinda missing the point. Tech bros don't need a problem to solve. They need dragons to slay. And creating real AI would be one of those dragons. They think creating other beings is our natural destiny just like many see colonizing other planets as our destiny. What's human well-being compared to never ending expansion?

  • @zeroskill.
    @zeroskill. 26 days ago +43

    the no catching up to us megabillionaire ai corporation laws yay

    • @Dan-cm9ow
      @Dan-cm9ow 26 days ago +14

      Bipartisan is the main clue... The two parties don't work together toward the same goal unless it's for a donor.

    • @sambhatia7692
      @sambhatia7692 26 days ago

      Why such low trust in the institutions? @@Dan-cm9ow

    • @calholli
      @calholli 26 days ago +6

      Exactly... Now that we are in the club, let's shut the doors behind us.

  • @DaveCavalari
    @DaveCavalari 26 days ago +50

    With the new president being Elon, there's no chance of Congress passing anything that will block him and his billionaire buddies.

    • @foxxrider250r
      @foxxrider250r 25 days ago +6

      orange man bad?

    • @nomore6167
      @nomore6167 25 days ago +7

      @@foxxrider250r "orange man bad?" - It has nothing to do with what anybody thinks of orange man, whether they think he's bad or the next coming of the messiah. It has to do with acknowledging reality, and the reality is that Musk is being treated as the next president. He has had immense power in the government for a long time, and that power is only going to increase in the next administration.

    • @jared_bowden
      @jared_bowden 25 days ago +3

      Here's the weird thing about that: the President's real cozy with Elon and some other tech billionaires, but at the same time a lot of influential figures in the MAGA movement are critical of big tech and accuse them of monopoly - and then there's JD Vance, who is simultaneously both a tech bro _and_ super critical of other tech companies. So there's a conflict of interests in the party, and it will be interesting to see if it blows up like the many other Republican conflicts of interest (for example, the recent fiasco over the government shutdown). My guess is that politicians will be politicians and do whatever big tech tells them to do while actively lying to their voter bloc about how they are "fighting for their rights", hoping the voters don't notice - this worked very well in Trump's first term. However, this in and of itself gives the Democrats an attack vector that, *IF* they use it correctly, could cleave into Trump's support, especially if those supporters really start to feel the "enshittification" of media and services that AI is already causing in their own personal lives. Currently the Democrats couldn't collectively figure out how to use a stapler if you gave them illustrated instructions; however, that may well change in the coming year.

    • @foxxrider250r
      @foxxrider250r 24 days ago +3

      @@nomore6167 I was really just kidding. Elon does have a lot of power, that is true. I think the things he stands for are good for the most part, though.

    • @Kwazzaaap
      @Kwazzaaap 23 days ago

      How is that different from what's already going on? It's just a different group of billionaire buddies.

  • @giochanturia1304
    @giochanturia1304 26 days ago +5

    1:04 Was the report really 256 pages or did you round it?

  • @Eagledelta3
    @Eagledelta3 26 days ago +17

    So how would a law differentiate between a person being inspired by/learning from a copyrighted work and AI doing the same thing via web scraping?
    My biggest concern is that legislators are lazy and try to make catch-all laws (or are lobbied into making things illegal that shouldn't be). I could totally see lobby groups pushing to make it illegal to learn from copyrighted work (how that would be enforced is beyond me) OR pushing to make all work someone does owned by an employer... which would have implications for OSS projects.

    • @2639theboss
      @2639theboss 26 days ago +7

      I mean, that's still copyright infringement. If you're "inspired" so much by Stephen King and you write a book that sounds real close to "It", you're getting sued and losing.
      It only matters if you're plagiarizing copyrighted or otherwise protected content, but this isn't a school essay, and virtually everything AI companies use as training data is protected. Your random social media posts may not be, so if you want to train a rage bot, you're good. But creator content, art, books, code... 99% is protected.

    • @Dominexis
      @Dominexis 26 days ago

      To add to what @2639theboss said, inspiration is not a thing that exists within AI. All it can do is plagiarize at some level, though it's plagiarizing several things simultaneously in an amalgamated abomination. The human mind is fundamentally different and can take an idea, reinterpret it, develop it, and turn it into something totally unrecognizable. Machines aren't people. We have to keep those two things in separate boxes, especially with respect to the law.

    • @zyansheep
      @zyansheep 26 days ago +3

      @@2639theboss It's copyright infringement, but only if it's like *really* close, and usually it is up to the courts. Most art is some Frankenstein amalgamation of ideas sourced from various places. The way AI models function was inspired by human brains. You can create really similar derivative works with humans as well as with AIs. You can create really unique works that combine a bunch of different styles into something novel, with both humans and AIs.

    • @2639theboss
      @2639theboss 26 days ago +3

      @@zyansheep It doesn't have to be close. Copyright and other IP laws protect derivative work, as in: you have the exclusive rights to make derivative works.
      If your AI model uses 50% protected content in its training data, congrats, you've now created a derivative work from each component of that 50% and you've violated copyright law.
      It really is that simple. Is a 75-year-old judge going to understand that? Well, no, they can barely drive without being overwhelmed, but at that point it's not a legal problem, it's a problem of who we're electing.

    • @Eagledelta3
      @Eagledelta3 25 days ago +1

      @@2639theboss If that were true, then most software would be copyright infringement, as the ideas are ALL taken from what came before. In fact, most of the STEM and medical fields are literally ideas built on ideas. By the very nature of science, tech, engineering, math, and medicine, you can't have advancement in isolation - especially since tech, engineering, and medicine are built on math and science, which deal with the nature of the universe.
      Anyway, that wasn't my point. I am more referring to the fact that everyone from artists to musicians creates based on a combination of their own experiences and what they've learned from others. Nothing in this world is created in a vacuum.
      I'm not a fan of AI being used to try and replace artists, programmers, etc. However, the people behind these AIs/LLMs aren't stupid; they based them on the way the human brain operates because they KNOW that legislating against it could end up legislating against the very creative mind we want to protect. And it's only because legislators can't be bothered to write laws that are tailored and specific.

  • @tgburnes
    @tgburnes 15 days ago +1

    A lot of the lifestyle promotions that these tech companies constantly demonstrate are utter rubbish. "You can ask our device to remind you when you have a doctor's appointment... blah blah blah"... As if we all couldn't already do that using paper and pencil.

  • @johnpica1082
    @johnpica1082 26 days ago +38

    Re: your closing statement about AI - I'm also unconvinced by generative AI LLMs. However, there are some great "AI" (I hate the word) models out there that are great at pattern recognition and quite useful.
    For example, the healthcare space is currently being revolutionized by models (typically CNNs) that can analyze ECGs or ultrasound recordings. The model will parse long recordings and flag abnormalities for a human (medical professional) to review. While the doctor can make the same diagnoses, the AI can parse a long recording very quickly, whereas it would take a person far longer.
    So I think there are great use cases for targeted ML models; we just shouldn't be pouring so much effort (not to mention energy) into these generative LLMs, and should instead be putting money & time into these more helpful specialized models.
    I know you were talking about LLMs and the greedy things these corporations are doing with them, though.

    • @todayonthebench
      @todayonthebench 26 days ago +3

      I agree that there are plenty of useful ways to apply machine learning to various fields.
      But to a degree I think the video focuses more on generative systems. (In oversimplified terms, anything applicable to systems that "increase entropy in the output data compared to the input".)
      Like large language models, image/sound/video generators, and other such systems that often take an input and make it into something far more complex.
      Whereas a lot of content recognition, object tracking, speech-to-text, and other analysis systems looking for things in bigger datasets typically do the opposite. And there, a lot of the issues discussed in this video are much less applicable.

    • @hjups
      @hjups 26 days ago +1

      @@todayonthebench There are plenty of useful ways to apply generative systems to various fields as well. Specifically in healthcare, I am aware of it being used for CT reconstruction (reduces radiation dosage and scanning time) [generative diffusion models], and for record summarization and retrieval for clinical staff [generative LLMs].

    • @todayonthebench
      @todayonthebench 26 days ago +1

      @@hjups Well, I didn't say it wasn't of use in such fields. Only that a lot of non-generative stuff falls largely outside the scope of this video's topic.

    • @hjups
      @hjups 26 days ago

      ​@@todayonthebench Many of the non-generative models also rely on large datasets which may contain copyrighted data. For example, ImageNet is the standard benchmark for image classification, but those images were scraped from Flickr without consent. I don't personally think that's an issue, but the copyright discussion in the video certainly applies. Other modalities such as text (e.g. BERT, T5, etc.) are likewise trained on copyrighted data (the Pile?). I believe many of the medical imaging datasets did receive consent for non-commercial research use, but I am not certain of this.
      You can also use the predictive models as generative models via differential updates (find an input that satisfies the classification).

    • @tiorontoron7531
      @tiorontoron7531 26 days ago +2

      I understand how you feel, but this is like someone telling me that bitcoin isn't money in 2008... I know you're wrong, you know you're wrong, it's just a matter of when you're going to be wrong.

  • @Warflay
    @Warflay 26 days ago +9

    The idea that lawmakers can't effectively regulate tech because they "don't understand software" stems from a complete misunderstanding of how governments work.
    Laws aren't written by elected politicians, because yes, they don't know enough about the topic (not just tech; the same goes for chemistry, engineering, etc.).
    Politicians decide what direction to go in at a very high level, as in: these are the outcomes we want to achieve.
    Then these ideas are given to the specialized governmental agency in charge - so in the case of IT security this would, for example, be the DHS in cooperation with other departments like the DoD or NIST - where experts who've been working in that area, often for decades, discuss the technically feasible solutions to make the idea happen.
    Politicians only decide the "what". The contents of the laws describing the "how" are written by engineers and other experts with experience in the field.

  • @skygradient6248
    @skygradient6248 26 days ago +5

    I asked Gemini to generate an image of Kermit the Frog, and it spit out an image of literally Kermit the Frog... so, like, how is that not copyright infringement lol

    • @imeakdo7
      @imeakdo7 26 days ago

      Wouldn't it be like fan merch? I guess the difference here is that fan merch benefits a person while AI benefits evil corporations

  • @el_larva
    @el_larva 26 days ago +14

    Amazing how big tech does not offer an option where you can pay for their services but they don't use your data to train their models. You can pay not to have ads, but not to really own your data.

    • @sayorancode
      @sayorancode 26 days ago

      WRITE THAT DOWN

    • @vangelistad
      @vangelistad 26 days ago

      @@sayorancode he already did! ;)

  • @TNH91
    @TNH91 13 days ago +1

    Any training on copyrighted data by definition needs to copy that data, most of the time without having explicit consent to do that copying, thus infringing on the copyright of the copyright holder. Copyright doesn't (AFAIK) care about whether the data copied without consent was deleted afterwards or other such things. And by definition training a generative AI model on copyrighted material is making a derivative work from that copyrighted material.

  • @jmacku35
    @jmacku35 24 days ago +1

    I think the problem with this is that copyright is too long anyway. If we cut it down to twenty to fifty years, we could provide new training data and strengthen laws protecting recent works.

  • @mb2776
    @mb2776 26 days ago +8

    I love the science behind machine learning and the mathematics of it. I hate the current use of AI...

  • @EmmanuelIstace
    @EmmanuelIstace 26 days ago +1

    A note on GDPR: it's more than just accepting cookies. You can decline them; websites are not forced to provide all functionality, but most do in the end, as cookies nowadays don't often play roles other than tracking the user. It's also about which data are collected and how they are treated within the company: justifying their collection, being forced to disclose it, and how the company should behave in case of a data breach. It's a regulation that makes it possible to legally attack companies with bad behavior. If it wasn't a problem for "big tech" and was just about a popup banner to accept, they wouldn't have fought against it or threatened to remove their services from Europe. It's not perfect, but to me it was a step in the right direction.

  • @jumphigher-runfaster
    @jumphigher-runfaster 25 days ago +1

    Cookies are not related to GDPR; in fact, the cookie law is considered a mistake by the regulators. GDPR has requirements on transparency about data processing, not hoarding data, not sharing it with third parties without explicit consent for each of them, and giving the ability to view and remove stored data.

  • @PJutch
    @PJutch 25 days ago +1

    Honestly, I still find it funny how OPEN AI decided to stop their work from being OPEN the moment they smelt money.

  • @NoSponsoredContent
    @NoSponsoredContent 25 days ago +1

    Thank you for actually speaking up about this issue on a major platform despite the very real consequences that could result from doing so. It truly shows that your demeanor is genuine and that you have strong moral character as an INDIVIDUAL, and not merely as an audience-facing YouTuber.
    You have my utmost respect, and I think any person who is competent enough about the technology to understand what is already going on, and the detrimental path its current utilization is headed toward, should feel a sense of moral obligation to do something, anything, in an attempt to prevent a very real and dangerous sequence of events from arising.
    To the developers who are actively and directly involved in perpetuating the deplorable ways in which AI and data collection are currently being utilized: I would advise you to read up on the Nuremberg Trials, and think about them the next time you try to justify why what you are doing is not inherently wrong.

  • @ALulzyApprentice
    @ALulzyApprentice 19 days ago +1

    Frank Herbert’s Dune: “Thou shalt not make a machine in the likeness of a human mind.”

    • @TNH91
      @TNH91 13 days ago

      Time to go Butlerian!

  • @ragnarok7976
    @ragnarok7976 14 days ago

    My big issue with EULAs is that they're presented to you after you buy the product.
    In the case of the iPad, you are already hundreds or thousands of dollars in when you get a notice that essentially says "agree to this or your expensive item is now a paperweight".
    Sure, you might be able to refund that product if you just bought it, but that doesn't cover the new, updated EULA that comes out months or years after purchase.

  • @carrycat876
    @carrycat876 17 days ago

    Great video. Thanks for sharing your voice on this topic and giving some insights to understand what's going on. It's hard to keep up with what's happening.

  • @jeffcauhape6880
    @jeffcauhape6880 26 days ago

    Thanks for producing this session. You've brought up issues I hadn't considered.

  • @HyenaEmpyema
    @HyenaEmpyema 19 days ago +1

    1) It's amazing how IP law was held up so stringently for the past several decades. Many of you don't know this: up until about 15 years ago, the song _Happy Birthday_ was under copyright claimed by Warner/Chappell. This is why so many movies used "For He's a Jolly Good Fellow" in its place - they otherwise would have had to pay royalties. On YouTube, anyone can claim copyright if a song is used without a license, and YouTube says you're guilty first, until proven innocent. Suddenly, IP laws don't matter anymore now that the corporations are doing the violating.
    2) AI isn't going to make anything cheaper. It is only going to vacuum up working people's salaries and send that money to billionaires. For instance, if they made an AI radiologist that actually works (these "researchers" have been trying for 20+ years now), they aren't going to sell it for cheaper; they are going to charge the same and feed the money back to the corporation. It's only job theft, identity theft (not the normal usage of those two words - what it will do is actually steal our human identities by emulating us, taking our voices away, etc.), and IP theft, as all its "knowledge" is stolen from people. It's not going to make anyone's life easier except the CEO's. AI can write a country song, but it still cannot do my dishes or laundry. Why? Because that's not profitable to billionaires.

  • @JoshVoyles
    @JoshVoyles 26 days ago +1

    Right now, I'm using AI to do two meaningful things. I have AI take useful notes for podcasts I consume and then I have AI write keywords/tags for notes I have in Obsidian to make finding notes easier.

  • @IzzyIkigai
    @IzzyIkigai 26 days ago +3

    All of this is cute, but... in the EU we have laws like the "right to be forgotten", which absolutely does not work once data that falls under those laws is slurped into the model, since (at least AFAIK) there's currently no way to remove data from a model's weights. So that line of argumentation will be a very fun discussion in European courts at some point, and might completely crash the whole bubble because of this technical limitation.

    • @foobarf8766
      @foobarf8766 26 days ago

      Yeah whether data was rasterised (like an image) or vectorised (into middle layer parameters) makes no real difference, theft is theft.

    • @TNH91
      @TNH91 13 days ago

      @@foobarf8766 It's not going to fall under theft, unfortunately. The same way piracy is not theft. It'll be some other unlawful data harvesting or copying or somesuch it would fall under. _Maybe_ fraud.

  • @quantum5768
    @quantum5768 26 days ago +1

    I think something that is often left out of discussions of AI training data is open source code/hardware. Most people tend to see open source as copyright-free, and that report's description of trade secrets gives that vibe too. Just because something is visible on the internet doesn't mean it's copyright-free; most of my software and hardware designs were once open source under an MIT license. That doesn't mean I consented to my software/hardware designs being used for AI training. I had no qualms about commercial use of my open source material, but any usage at least required attribution of the original author, which AI doesn't do.
    I basically see generative AI as a copyright laundering service for big business; it has been known to reproduce licensed code from its training data verbatim, without also reproducing the license. Imagine how Microsoft would feel if their source code for Windows leaked and was used as AI training data - they'd complain and sue over it. Yet when open source code, or even the private repos of smaller companies, is stolen for the same purpose, Microsoft is totally okay with it being used for AI training. It's a massive double standard that won't be fixed without stringent regulation, and I don't foresee that regulation being done well or implemented on a reasonable timeline.

  • @nil0bject
    @nil0bject 26 days ago +7

    We have not made an intelligence. "AI" is what we called NPCs because it sounded cooler.

    • @vastabyss6496
      @vastabyss6496 26 days ago +4

      Intelligence is a super broad term though, present in almost every living thing from slime mold, to ant colonies, to mice, to humans.
      Likewise some level of machine intelligence exists in many computer programs from expert systems, to pathfinding algorithms, to neural networks.
      Of course, it's a spectrum though. Expert systems are not very intelligent at all, and are incapable of doing much of anything useful, and because of that I personally don't call expert systems "AI". While a ResNet trained using deep RL and MCTS can achieve superhuman performance on a wide range of tasks, making it much more intelligent.

    • @nil0bject
      @nil0bject 26 days ago +1

      All life may be sentient, but not intelligent - unless you hypothesise that intelligence is merely chemical response and reaction. Sentience covers that definition; therefore intelligence is more specific and defined. Not mold.

  • @kaupaxup
    @kaupaxup 25 days ago

    I was just telling a couple friends (in business and academia) about this and one responded, "What's AGI?" Some folks are seriously unprepared for the world that is upon us.

  • @davidlocontes3564
    @davidlocontes3564 26 days ago +27

    Cognitive dissonance: I don't want big government regulating stuff, except the stuff that I care about. Wake the f up, right wing libertarianism is the same thing as feudalism.

    • @Samcharleston24
      @Samcharleston24 26 days ago +1

      Oh right, because adding more rules and regulations that only the big monopolistic tech companies will be able to afford to deal with will somehow free the peasants. The amount of mental gymnastics here is astounding; you literally have the cattle not just asking but outright demanding to be first in line to the slaughter. Explain to me how adding more laws and regulations will let small companies and average people be more free and level the playing field? It's the fucking tech monopolies that are pushing for this regulation. It's literally called regulatory capture for a reason.

    • @davidlocontes3564
      @davidlocontes3564 25 days ago

      @Samcharleston24 That's the problem: you live in a corporatocracy controlled by oligarchs. You just voted to give the richest oligarch on the planet total control over your lives. Is that what will make you free? Regulations should prevent monopolies from forming and oligarchs from gaining control of your lives and of the government. You failed to do that because you kept voting for neo-liberals and neo-cons for 45 years. Your country is so f..ed-up.

    • @nomore6167
      @nomore6167 25 days ago +3

      @Samcharleston24 "Explain to me how adding more laws and regulations will allow small companies and average people be more free and equal the playing field?" - Tell me you're not an artist, and that you have no respect for artists, without telling me.

    • @Xur______
      @Xur______ 10 days ago

      Apparently you've never noticed how large corporations lobby the government to their benefit. Like, Walmart gets subsidies, favorable zoning laws, etc. Governments facilitate corporations, something that wouldn't happen under libertarianism.

  • @eugene_dudnyk
    @eugene_dudnyk 13 days ago

    The only way to fix the copyright issue is to pay royalties to the creators, which at some point would translate into a basic income paid by the big tech companies to humanity.

  • @X4R2
    @X4R2 26 days ago +10

    13:00 AI proponents argue that it's like a person learning to write/draw by watching others write/draw. AI proponents completely miss the major aspect of art that is creating a unique style.

    • @Konanzor
      @Konanzor 25 days ago +2

      I've seen many outputs from generative models that are unique. What do you think "unique style" is anyway? It's just a blend of existing styles.

    • @X4R2
      @X4R2 25 days ago +1

      @@Konanzor Can AI create a new style in an evolutionary step like we see in movements such as impressionism or cubism? Even if you don't recognize an AI's output, it's potentially a style from an artist you are unfamiliar with. Style is "just a blend of existing styles"? Humans can create art in a vacuum of art, e.g. cave paintings. Machines cannot, because they do not have an innate motivation to do so. Humans provide an external motivation for machines to create art, and they fuel them with humans' art. Without human creativity, there will be model collapse (a documented phenomenon).

    • @X4R2
      @X4R2 25 days ago +1

      @@Konanzor IMO AI will not create true art until it gains consciousness. That's not a great answer since consciousness is difficult to define, but I do think current AI models are a small step towards understanding consciousness.

  • @Silly-s8n
    @Silly-s8n 18 days ago

    I think the only solution is to have model policies that companies _have to_ adopt, rather than writing their own.
    When they want to change something, they enter it in red, and they list the changes on a specific page.
    If the changes exceed a word limit, they need to pass them through a legal body.
    That way, you only need to read them once, and legal tubers or tech tubers can inform you about the different forms.

  • @Basil-the-Frog
    @Basil-the-Frog 24 days ago

    AI/ML (machine learning) has been: (1) making electronic circuits better, (2) finding new chemicals, and (3) saving gas for delivery companies.

  • @ryanwillingham
    @ryanwillingham 18 days ago +2

    The thing is, AI isn't even good as a search engine. It hallucinates, it can't cite its sources, and it can't tell fact from satire. The astronomically small number of potential problems that AI could solve is far outweighed by the number of problems it causes.

  • @scarletevans4474
    @scarletevans4474 26 days ago +2

    Why do we still not have autonomous 8-legged ambulance stretchers that can walk up and down stairs, so that paramedics or firefighters don't need to carry the people who need help?
    We have self-driving cars, intelligent drones, etc., but when a 200 lbs lady needs to go to an ER, she has to be carried downstairs with muscle-requiring methods like those from Ancient Egypt?

    • @atijohn8135
      @atijohn8135 24 days ago +1

      Because of capitalism. Capitalism incentivizes profit, and you don't profit from a 200 lbs lady who needs to go to an ER; but you do profit from stealing other people's work and selling it as your own.

  • @ncascini01
    @ncascini01 6 days ago

    What's most disconcerting about this is that these people don't care about our privacy in the slightest, as they have actively worked to remove it in the past using technology.

  • @nagi603
    @nagi603 26 days ago +5

    12:20 Also let's not forget that EVEN IF their whole argument stands, there is a significant portion of uploaded works where the uploader does not have actual rights / control over. Films, music, art, certain revenge pictures... yet these all get ingested too.

  • @lewiskirvan5106
    @lewiskirvan5106 23 days ago

    I think they are saying it's unclear whether legislative action is needed because we don't have court rulings under existing law; it could be that judges decide that use in training a model is infringement under existing law.

  • @SunsetGraffiti
    @SunsetGraffiti 25 days ago

    This is extremely meaningful content. Thanks for keeping us hip to the chaos, you handsome brute ~~

  • @timelinegod2995
    @timelinegod2995 26 days ago +4

    Ah sweet! Cannot wait to see AI-made horror beyond our comprehension ~by Peter Griffin

  • @nomore6167
    @nomore6167 25 days ago +1

    That "Key Findings" section's "It is unclear whether legislative action is necessary..." is literally Congress saying, "our corporate overlords don't want us to protect your property because they want to use it to generate profit for themselves". The section "It is often difficult for creators to know...", which states "While the question of fair use in the context of generative AI remains the subject of ongoing litigation", is more than disingenuous. It is not fair use, full stop.
    Fair use has four basic considerations:
    1. "Purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes" - The purpose of using copyrighted works to train AI is to generate profit for the company providing the AI model. It is not for nonprofit educational or noncommercial use.
    2. "Nature of the copyrighted work" - AI is trained using all kinds of copyrighted work, including creative works (novels, movies, songs, paintings, software code, etc).
    3. "Amount and substantiality of the portion used in relation to the copyrighted work as a whole" - Copyrighted works are used in their entirety to train the AI model.
    4. "Effect of the use upon the potential market for or value of the copyrighted work" - The explicit purpose of the AI model is to generate content substantially similar to the copyrighted work, thus the effect is to severely diminish the value of the copyrighted work.
    Using copyrighted works to train AI models fails all four considerations of fair use.

  • @BryanSeigneur0
    @BryanSeigneur0 20 days ago

    7:40 I like your thesis. Do you apply it to code as much as other forms of intellectual work?

  • @JamesTDG
    @JamesTDG 26 days ago

    4:55 don't forget that the data can end up getting de-anonymized depending on what training data is used, such as GenAI outputs containing attempts to reproduce signatures and watermarks.

  • @JeffMooredotcom
    @JeffMooredotcom 25 days ago +1

    This is the root problem with cloud systems: if it doesn't reside on your server, it ain't yours. Tip: use their AI to deobfuscate these policies and EULAs, bring the corporate motives to the forefront, and publish the result for others to learn from.

  • @letsRegulateSociopaths
    @letsRegulateSociopaths 26 days ago +2

    This is the definition of regulatory capture by monopoly corporations.

  • @Jo_Wick
    @Jo_Wick 26 days ago +5

    This type of behaviour from companies isn't new. For example, about 5 or so years ago, I read the entire privacy policy of Snapchat. What I realized is that (and I'll bet it's the same now) when you agree to the "privacy policy," it also means that for *all* data you send to the service-things like pictures, comments, etc-you are surrendering your copyright of that data to them. Case closed. You have access to that data, but that doesn't mean it's yours. I'm sure it was an easy extension of that policy to include generative AI training.
    Looking towards the uncertain prospect of today and the future, you can already see artists losing their jobs. This could be the impetus for a change in policy not only in privacy policies, but copyright itself. While I was initially on the side of using AI to train on others' works, I've since changed my mind. It's not ok to allow large companies to swallow up creativity. Human minds thrive on that stimulation, and in a different occupation they'll feel less enjoyment and fulfillment.

    • @metaforest
      @metaforest 26 days ago

      It is not surrendering your rights to the data. You are granting a license to the copy of that data you gave them so that they can publish it on your behalf... but it also lets them use that copy any way they see fit. You can still turn around and publish that data in a book, or with another service provider, usually under the same non-exclusive terms.

  • @AliceTurner-q3f
    @AliceTurner-q3f 26 days ago +3

    Usually people forfeit the rights to their data by agreeing to the terms of service at the point they share it. People literally hand their data to companies which are basically malware, like Facebook or Windows. So the people this is happening to literally gave permission.

    • @josho225
      @josho225 26 days ago +2

      yep, basically agree to this shit sandwich or you cannot use this OS on the new computer you just bought and already paid the license for.

  • @nati7728
    @nati7728 26 days ago +15

    It’s not that they don’t understand how software works it is that they are bought by corporations. You shouldn’t have to “understand computers” to want a law passed that protects consumer privacy.

    • @Un_Pour_Tous
      @Un_Pour_Tous 26 days ago +4

      Wow, that's an oxymoronic comment. You need to understand 100% of something before it becomes law, or loopholes will be made lol

    • @Squiggy545
      @Squiggy545 26 days ago +6

      I think that's the difficulty. You don't need to understand computers to want to make a law protecting privacy and online rights. You do need to understand computers to actually make the law and have it work the way you wanted.

    • @JamesTDG
      @JamesTDG 26 days ago +1

      I mean, there is an understandable requirement to have someone understand what they are legislating upon, otherwise you trigger loopholes that can and historically have been abused.

  • @IndyAdvant
    @IndyAdvant 26 days ago

    Dope video dawg

  • @heck-r
    @heck-r 26 days ago

    I'm not sure about the training being an issue with copyright as long as it doesn't give the original back, and even then it is questionable for the same reason you don't sue an artist when they practice by making existing art (though I'm really not sure what's the case with selling that).
    My point is that, when isolating this argument, the underlying mechanism is similar to human training (it is kinda modelled after it), so it should reflect the same restrictions, whatever those may be. So if a human can go look at the Mona Lisa, copy it by hand, and sell it, then it should technically be the same for the model (and if the human is not allowed, then the model should also not be allowed). If a human can practice by copying other stuff and then make stuff that's "their style", which is just the combination of stuff they trained on and could learn from, then the same should not be restricted in principle.
    It only bothers people because it does it good enough, and very fast.
    If a freak of nature human would be able to do the same (same quality and same speed) then would it be right to not allow them to look at others' works and restrict this person's work or make him pay others? I'd say not.
    That said, I would definitely not want artists to go out of business, because that's not only bad for them, but for everyone, simply because as it is, models are hitting a plateau, and as they kill off artists and flood the internet with works, they overfit themselves, resulting in an overall decrease of quality.
    Also, these models are kinda designed not to introduce new ideas, as they are pretty much autocorrect on steroids (although it's a different discussion what amount of combination of existing stuff may constitute a new idea), so creators must be protected to avoid anchoring ourselves to the existing concepts that already exist in mediocre quality.
    So I'm not saying this should be left without regulation, I'm just saying that the copyright argument doesn't really make sense in case of this technology due to its design.

  • @sfertman
    @sfertman 26 days ago +1

    Same on LinkedIn -- they opted everyone in by default, and then you have to dig through the menus to opt out.

  • @asdf9769
    @asdf9769 23 days ago

    42 seconds in, you're 100% right. Subbed.

  • @TxRedneck
    @TxRedneck 26 days ago

    You 100% summed up my POV on "ai" in the first 15sec. No need to finish the vid. 😝
    Ok, now I'll finish listening.

  • @hiramcofer2407
    @hiramcofer2407 26 days ago +1

    I don't think complaints about profiteering belong in the conversation about AI, because profit and competition are the methods by which better products and services are brought to the market.

  • @JamesTDG
    @JamesTDG 26 days ago +2

    12:56 Seriously, how TF can they argue that the generated output has no effect on the value of the work, when we have thousands of artists already out of work because of GenAI. I've read messages from some who've taken their own lives, because they cannot even find labor in a sector they worked in for so long, after they were replaced with AI at their previous job.
    They need to get it through their thick skulls that if creators cannot even get paid from the upper slice of media creation, that it will cascade down to affect other media companies too. If Cartoon Network replaces all of their animation staff with AI, other companies will follow, and it will directly devalue the work of an animator in the eyes of the market, just because AI is cheaper and faster, despite a noticeable decline in quality.

    • @Konanzor
      @Konanzor 25 days ago

      "already out of work because of GenAI" - not true. They're out of work because employers only care about the bottom line. Generative art is just what happened to let those companies fire the employees. It could have been literally anything else as well, and the end result would be the same. People fired so that company makes profit. No different than automation.

  • @零云-u7e
    @零云-u7e 5 days ago

    The sources/assets should have a public doc: listed, modelled, tiered, chain-of-custody, agreements, licenses, etc. General exposure. Ecosystems could be (maybe should be) forced to allow opt-out of AI agreements. That sounds like common sense. Big question: would any laws actually squelch open source AI software with online sources? That benefits big AI. It becomes a gatekeeper, a buy-in with a backplane of licensing.

  • @brandonw1604
    @brandonw1604 26 days ago +5

    Pull the plug on it, make companies go back to actually building decent software/OS, and quit shoving AI into everything.

    • @nomore6167
      @nomore6167 25 days ago +1

      "and quit shoving AI into everything" - Including processors, which needlessly drives up the price of processors and the devices into which they're embedded.

  • @nomore6167
    @nomore6167 25 days ago +1

    Here's a simple fact -- every document and every piece of art that is created is automatically copyrighted by its creator. By training AI algorithms on that data, you are effectively copying that data, and everything generated by that AI algorithm is a derivative work of everything on which it was trained. That's textbook copyright infringement. Unfortunately, corporate copyright infringement became acceptable once Google started copying books in their entirety.

    • @zzzzzzz8473
      @zzzzzzz8473 25 days ago

      Training a neural network is not a derivative work; you don't understand how neural networks work. If we analyze an image through code to produce a pie chart of the dominant colors, we have not violated its copyright, even if we create 13B analyses across different dimensions of composition, detail frequency, etc.; at no point is the original being copied "into" the neural network. Lastly, copyright is completely toxic and manipulated by Disney and other corporates, far from its original intent of short duration. If you extrapolate a world where copyright is more enforced, it becomes far more suffocating than one without, as corpos would claim and consolidate ownership of general concepts, barring anyone from being creative for fear of infringement. Look at what Nintendo is doing right now, or how Disney successfully lobbied to make it illegal to do what they did (animated adaptations of Brothers Grimm fairy tales).
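      The "pie chart of dominant colors" analysis mentioned above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the function name and the synthetic pixel data are invented here, not from any real model): it extracts aggregate statistics from an image without reproducing the image itself.

```python
from collections import Counter

def dominant_colors(pixels, top_n=3):
    """Count pixel colors and return the top_n most common as (color, share) pairs.

    The output is a summary statistic; the original pixel arrangement
    (i.e. the image itself) is not recoverable from it.
    """
    counts = Counter(pixels)
    total = len(pixels)
    return [(color, count / total) for color, count in counts.most_common(top_n)]

# A tiny synthetic "image": a flat list of RGB tuples.
pixels = [(255, 0, 0)] * 6 + [(0, 0, 255)] * 3 + [(0, 255, 0)] * 1
print(dominant_colors(pixels))
# → [((255, 0, 0), 0.6), ((0, 0, 255), 0.3), ((0, 255, 0), 0.1)]
```

      Whether training a large model is legally analogous to this kind of aggregate analysis is exactly what the litigation discussed in the video is about; the sketch only illustrates the commenter's claim, not a settled conclusion.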

    • @nomore6167
      @nomore6167 25 days ago +1

      @@zzzzzzz8473 "training a neural network is not a derivative work you dont understand how neural networks work " - You clearly don't know what a derivative work is.
      "copyright is completely toxic and manipulated by disney and other corporates" - Spoken like someone who does not value artists or the works they create.

  • @traolin5877
    @traolin5877 26 days ago +11

    I'm really going to miss Lina Khan. I wish we could vote for FTC chair as well. It's a position that helps the people. Why don't the people have a say in who gets the position?

    • @EmbeddedSorcery
      @EmbeddedSorcery 26 days ago +3

      Too many appointed positions in general.

    • @Xathian
      @Xathian 26 days ago

      Yeah, definitely going to miss the clown that let telco and grocery mergers that fuck Americans go through without a peep, but drives up a circus over Microsoft owning CoD or free internet services sustaining themselves with ads. God, the delusion around that woman is bigger than even Trump's.

    • @foobarf8766
      @foobarf8766 26 days ago

      You vote for appointments (in general, not the US specifically) by becoming a member of a political party. I don't know about the US, but here in NZ the parties will often have internal (member-only) polls on who to appoint, etc.

  • @jackypaulcukjati3186
    @jackypaulcukjati3186 22 days ago

    Got a question: what about people that scrape Twitter to train their models? And given the regulations that you are suggesting, how do you intend for the USA to compete against other countries in the machine learning technology realm?

  • @HardKore5250
    @HardKore5250 26 days ago

    Where can I get a copy of this report?

    • @AuxiliaryPanther
      @AuxiliaryPanther 25 days ago

      It appears to be linked in the description of the video.

  • @noone-ld7pt
    @noone-ld7pt 25 days ago +2

    I'd love to hear your take on the o3 breakthrough from OpenAI. Seems like a paradigm shift to me.

  • @racvets1
    @racvets1 26 days ago +3

    I don’t know how to apply meaningful laws to AI without totally wrecking useful applications of it. The machine learning prediction algorithms are cool, object detection in live video is amazing, super complex physical modeling is great.
    But how do you balance it with the issues of generative AI? Or the AI crime-prediction models? Most search engines are also models; are they bad? Opening up the models isn't really useful either: it's just a bunch of weights and connections, not human-observable enough to find biases.

    • @trisimix
      @trisimix 26 days ago +1

      Big-data LLMs are probably a dead end anyway. The datasets required to do the things society actually needs don't exist and won't be collectable through things like website tracking.

  • @josephfredbill
    @josephfredbill 26 days ago

    I agree with your argument @LL (great nickname?). There is one area where the effect benefits us all, though admittedly with problems in knowing its accuracy and reliability in situations where that matters: translation (and communication across cultures). I would expect, with appropriate technical standards, that there will be beneficial effects for humanity there. We need to be careful how it's used.

  • @letsRegulateSociopaths
    @letsRegulateSociopaths 26 days ago

    So once Musk gets Grok (also a copyright violation against Robert Heinlein) trained, then he will be forced to retrain it on Grimm's fairy tales (wouldn't surprise me, that sht is truly dark).

  • @ivanmaglica264
    @ivanmaglica264 15 days ago

    One problem might occur: if the US starts hardcore legislating AI, it's possible that those organizations will just move outside the borders of the US, like the equivalent of tax havens. I'm much more interested in how AI-produced data/art that is trained on personal data is going to be treated in the future, and the whole question of whether AI art can be copyrighted. Training AI on some obscure artist's songs and then producing an AI song producer...

  • @Waterwater743
    @Waterwater743 26 days ago

    Did you see the OpenAI whistleblower "die of suicide" recently? He was a witness in a copyright case.

  • @lifein240p8
    @lifein240p8 26 days ago +8

    People - both artists and AI techbros - are focusing too much on the copyright aspects of the output and not THE FACT THAT SOMEONE IS BUILDING A COMMERCIAL TOOL using their work without their consent or any compensation.

  • @tjgonline1304
    @tjgonline1304 25 days ago

    It's starting to look like humanity won't pass "The Great Filter."

  • @RobColbert
    @RobColbert 26 days ago +14

    This argument is becoming simply the "sampling in rap music" argument and will end the same way. If someone wants to "sample" your work and create a derivative, they owe you a license fee, royalties, and similar. It's no different. How is this even an argument? We literally have precedent.

    • @viralarchitect
      @viralarchitect 26 days ago

      and output can be tracked, so there's no reason they can't pay a license fee any time the inputs from that work are weighted in the output, and give the owner their fair cut.

    • @markcoren2842
      @markcoren2842 26 days ago +3

      The dilemma now is that in most cases, there is no straight-line audit path to say "this definitively came from that" and there are no centralized licensing authorities for publishing data in general. Tokenizing methods end up adding yet another layer of obscuring heredity.
      Additionally, value can be derived from the structure of an input without technically violating its copyright. Add in fuzzification and the means of enforcement become enigmatic at best. That leads to the "nobody can own chords" argument, where most nobody is going to get paid ☹️.

    • @landlubbber
      @landlubbber 26 days ago +2

      Because generative models do not sample anything in a way humans could, and calling the works "derivative" would only make sense if the models overfitted to the point that the output was practically identical, which does happen, but not as often as is believed for the larger models.

  • @hawkbirdtree3660
    @hawkbirdtree3660 26 days ago

    I wonder if AI will increase demand for physical human-made objects, like humans have been doing for thousands of years. Finding a diamond is an amazing find, but if everyone in the world has diamonds, they become just another crystal.

  • @salty_cucumber
    @salty_cucumber 22 days ago

    Aaand the situation would be this: billionaire companies already used this data (OpenAI allegedly used it to its possible max) and new companies would not be able to catch up to them.

  • @PwnySlaystation01
    @PwnySlaystation01 26 days ago

    Just for clarity, you give YouTube a LICENSE. The difference may be subtle, because you also grant them the right to sublicense your video, so they can probably still get away with ingesting it. But they do not own your content; you still own the copyright to your videos.

  • @novousabbott4926
    @novousabbott4926 26 days ago

    I see AI, specifically OpenAI's LLM, as a coding assistant, but I notice that when I'm having specific issues it'll give me a piece of code that, when googled, leads to a Stack Overflow page.

    • @loopingdope
      @loopingdope 26 days ago

      I got 1:1 code from Stack Overflow for one particular function I needed

  • @KrumpetKruncher
    @KrumpetKruncher 23 days ago

    I wonder how politicians would like it if they were told someone was making a "Politician AI" that would watch how they interacted with constituents and lobbyists, and then record their votes to make an AI model of a politician that could take their place in the future. "But don't worry, it will anonymously analyze their actions."

  • @luketurner314
    @luketurner314 26 days ago

    A potentially hot take (an option for some people): vote with our wallets by not paying for access to AI models. If you must use AI for your profession, maybe try training and running it locally.
    I am convinced that the next era in tech is going to be self-hosting; it's partially here already. Personally, I think YouTube is the last thing from Google that I use on a daily basis (yes, I self-host a search engine)