This Lawsuit Might End AI as We Know It

  • Published: 13 Jan 2025

Comments • 15

  • @geraldmerkowitz4360
    @geraldmerkowitz4360 1 year ago +2

    I hope it'll have an impact, but I don't expect it to be major unless other countries step up and act in the same direction. It's a bit like climate policy: always good to see people act, but the endgame is that everyone follows, otherwise the problem isn't solved.

  • @aldproductions2301
    @aldproductions2301 1 year ago +7

    Honestly? GOOD.
    Companies have not been responsible with AI, and we *need* something that halts it quickly.
    We need to deal with the questions of theft but also the questions of liability, and if this lawsuit succeeds, we'll get a chance to deal with those at a better pace.

    • @financemadesimple_official
      @financemadesimple_official 1 year ago +1

      Agreed, the reckless abandon with which these companies are pursuing AI is concerning. There needs to be better regulation - I think it’s a bad idea to just let big tech companies push the envelope and potentially break things.

    • @g.personal342
      @g.personal342 11 months ago

      And now they're able to "generate" footage. Imagine the billions of hours of footage they've stolen. There are probably so many files in there that they haven't even sifted through 0.1% of the garbage they're inputting into this thing. Terrifying, there are probably so many illegal files in there. If god exists, they must give these AI companies karma for the irresponsibility and lack of etiquette. They are just criminals, but sadly the government is so old, they can barely keep up with the advancements. Get ready for the war venture capitalists are funding! If these AI companies have these illegal files in them, THAT MEANS THEY ARE DISTRIBUTING THE ORIGINAL FILES. They need to be sued immediately. Imagine all the sick things they are distributing, like gore, cp, propaganda, and a list of other outrageous garbage. Wait until the CCP gets a hold of this and makes propaganda videos. China is gonna love this one! No child labor necessary either, it can just be funded on the backs of the billions of people it has stolen data from. Sam Altman is summoning a demon; he is anti-social, anti-humanity, pro-psychopathy. AI is an anti-social machine. If justice exists, they will be sued into the ground. I hope every Fortune 500 company sues them, so these demonic people cannot bully people into submission anymore. Elon Musk just lost 200 billion, there is plenty of hope.

  • @OgdenM
    @OgdenM 1 year ago +1

    *rolls his eyes* I'd like to see the NYT or anyone take on Google or Microsoft for real.
    Besides, LLMs are now open source. There is no stopping them.
    The most that is going to come out of this is that OpenAI etc. will have to have their AI state sources at the bottom every time someone uses it to generate something. Which, sure, might be a mess and hard if they didn't already keep track of them, but I bet they did.
    You know, like how Google and a lot of other search engines with summarizing features already state sources.
    Besides that, I don't see the NYT going after all the other news organizations that outright copy their stuff on a daily basis.

    • @suckmyartauds
      @suckmyartauds 1 year ago

      Perplexity AI is able to cite sources, and I really appreciate it for that. I agree this functionality is the most likely outcome of the lawsuit.

    • @financemadesimple_official
      @financemadesimple_official 1 year ago +3

      The legal and regulatory environment is more hostile to big tech than it’s ever been with the federal government itself launching several antitrust cases against companies. I think getting a win against Microsoft in this case here is a bigger possibility than you are estimating given that context and the actual legal merits of the case.
      I also don’t think citing sources is enough to get out of the copyright issues in this one. The fundamental difference between this and Google’s summarizing features is that Google still pushes traffic to sites (helping publishers) while AI even with citing sources would be diverting traffic away from sites (hurting publishers). This is relevant from a copyright perspective because the effect on the copyright holder’s market potential for their copyrighted material is one of the main legal criteria for determining fair use.
      I think there’s a real possibility that tech companies will have to license the content. There’s a reason OpenAI is quickly trying to strike licensing deals with tons of publishers (it’s easy to find articles talking about this on Google if you want to read more). If they legitimately didn’t think that they had to strike licensing deals, they wouldn’t be spending the money to do that right now. That tells me that internally they think they are at real legal risk on this issue.
      I could easily see this playing out similarly to Napster in the '90s, with how that changed licensing in the music world.

  • @GraveUypo
    @GraveUypo 1 year ago

    oh this has as much chance of ending AI as it does of making computers illegal

    • @financemadesimple_official
      @financemadesimple_official 1 year ago +1

      It won’t end AI, but it might “end AI as we know it.”
      I think there’s a very real chance that after this plays out in court, AI companies will be required to license the data in their models going forward and that copyright holders can opt out. That might not sound like a huge deal at first, but it would fundamentally change the industry, because creating and maintaining an AI model would become much more expensive for these companies. This would change the competitive landscape of the industry and likely the cost structure.

    • @g.personal342
      @g.personal342 11 months ago

      It would absolutely end AI as we know it; no artist, film-maker, or writer would opt into this garbage if they could help it. It just sifts the internet and steals people's data, and they expect us to not complain or say anything. The stupid "robots" they call AI are just information compilers with no original thoughts of their own. Do you propose all the data it has was legally inputted into these algorithms? No... I don't want my art or data in these data compilers. Why would I voluntarily give it my data? It's basically like Facebook, but 100x more malicious. They think this robot will take over the world, but it hasn't had a single thought of its own, it works NOTHING like a human brain, nor is it sentient. Proposing AIs are "learning" is like saying a parrot can speak English because it can pronounce English words. AI is just a fancy parrot. If I had the resources, I would be suing them right now. In fact, once I do get the money, I will be suing AI companies. All they do is steal. Steal. Steal. They want to replace humans with the data we've given this data analytics machine. They need to stop stealing our data.

  • @evilpcdiva9
    @evilpcdiva9 1 year ago

    The arguments presented in this video are seriously flawed. While it would definitely be copyright infringement if I asked ChatGPT to print me the specific New York Times article from Sept. 5th, 2021, and the language model spat it out word for word, that's not what the model does. The model reads public-facing websites and simply trains the network based on them. The network itself does not copy the works in question, nor, to my knowledge, does it access information that's not already available for public reading. As such, no infringement is taking place.
    Essentially, when training a large language model (LLM) or any other AI model, what these companies are doing is putting a toddler in a library that has access to every written work, teaching it to read and write, and then allowing it to read everything that's ever been written and made available to the public. When you ask it to write a steamy fanfic about your two favorite fantasy characters in the style of Joe A. Uthor, it's using those works as simple inspiration to create a new piece.
    It is no different than a human being well-versed in the works of Shakespeare creating a new sonnet in his style.
    Certainly, like every technological innovation, it can be abused by less than savory actors. However, until we see actual proof of such abuses, such as hacking the NYT's servers to access articles not available for public consumption, these arguments hold no weight.

    • @financemadesimple_official
      @financemadesimple_official 1 year ago +1

      Respectfully, I think it’s very disingenuous for you to just hand-wave the arguments away as “flawed” when legal experts are mixed and debating this very issue right now. These are uncharted waters, so to act like it’s clear-cut fair use and to ignore the unique circumstances surrounding how generative AI works is not realistic. Your opinion that it is protected by fair use is a reasonable one, but acting like it’s obviously protected is just not true.
      I think you’ve also shifted the goalposts on what constitutes copyright infringement. Something doesn’t need to be a word-for-word copy or exact reproduction to violate copyright. The Supreme Court itself ruled on this just last year in “Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith,” when it decided in a 7-2 decision that a reimagined artistic reproduction of a photograph was not sufficiently transformative to be fair use. Generative AI is going to have to clear that same bar too, and the legal precedent of that case isn’t going to help it.
      I’m very familiar with how an LLM is built and trained, and your analogy comparing it to the way a human learns also seems to ignore key differences between the two. AI is fundamentally different in that if you took away the data, it couldn’t “create” anything. To characterize the data in the analogy as simply “inspiration” for the generative output is wrong, because there couldn’t be an output at all without the data.
      Big tech might win these cases in court and be able to continue exactly the same way as they are now, but I think your response is overly reductive about this issue. It’s nowhere near as cut and dry as you make it and there’s a lot of legal precedent to show that this data needs to be licensed. The legality of this and licensing could play out very similar to Napster in the 90s.

    • @evilpcdiva9
      @evilpcdiva9 1 year ago +1

      I am compelled to maintain my disagreement with you on this matter. While I acknowledge that my statement simplifies events, please point to one modern work that is completely original and not inspired by something that came before. All of these instances fall under fair use (again, provided that there is no blatant copyright infringement happening).
      If one argues that the data used to train the LLMs violates copyright protection, one is, by the same token, stating that any work even remotely inspired by another must also violate said protections, even if only inspiration occurred.
      Additionally, I argue that if the source material is not explicitly stored within the model, the NYT (et al.) will face significant challenges in proving infringement in court. This is particularly true because their data was not reproduced without a license but was merely used to train a model based on publicly available works. This, in itself, seemingly does not in any way constitute infringement.
      I would also like to point out that from the moment humans are born, we are taking in "data" - from people speaking around us to news on television, books we read, and even the puffy clouds we imagine shapes from on lazy summer days. All of these serve as the same sort of input that the neural nets receive from scraping publicly available websites, and I feel that your argument -- whether intentionally or otherwise -- ignores this key similarity.
      From Samuel Clemens, we see that:
      "There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn, and they make new and curious combinations. We keep on turning and making new combinations indefinitely..."
      And this is the crux of my point and why this lawsuit should be thrown out of court, because it could very well lead to a slippery slope wherein the simple act of inspiration is forbidden under copyright.

    • @mekingtiger9095
      @mekingtiger9095 2 months ago

      Kinda late to the party, but I'd argue that if we are to treat LLMs the same way a human mind works, then while it does not constitute copyright infringement on the prompter's part, it also means that the prompter is not eligible to copyright the product of what was prompted either, depending on how much agency and intent was put behind the process. Because the real "owner" of the IP would then be the machine itself and not the user. And then we get to the issue that the United States Copyright Office has declared that works not created by a human author are not eligible for copyright protection.
      So while no copyright infringement happened, I can hardly see AI having much of a commercial use in this case. Especially for big companies who rely a lot on protecting their IPs. But I suppose that's kind of beside the point discussed here. Still, I wanted to give my two cents on this.

  • @christietang3985
    @christietang3985 11 months ago

    Open source!