Copyright Lawyer Explains Drake AI Song and More | WSJ

  • Published: Nov 3, 2024

Comments • 173

  • @wsj
    @wsj  6 months ago +12

    AI’s explosive rise is causing soaring demand for data center development across the U.S. But it faces a major obstacle: getting enough power. Watch this video on the AI data center boom here: on.wsj.com/4aykCr8

  • @trogdorstrngbd
    @trogdorstrngbd 6 months ago +18

    This was an extremely easy-to-digest summary of the current situation. Good job, WSJ!

  • @matthewbaynham6286
    @matthewbaynham6286 6 months ago +61

    It would be good if AI-generated art could never be copyrighted. That would mean Hollywood isn't going to get replaced by some computers.

    • @BeIlG
      @BeIlG 6 months ago +3

      AI-generated images*. I feel they also shouldn't hold the protections some artists receive under fair use, which is about using others' work to create a narrative different from the original subject via a human author. Thus AI image generators are never making art. There is zero purpose or intent behind them, so they should only be considered images and NEVER art. I too hope you're right and they never get copyright.

    • @stompysnake8233
      @stompysnake8233 6 months ago +4

      What's wrong with replacing Hollywood with some computers?

    • @matthewbaynham6286
      @matthewbaynham6286 6 months ago +4

      @@stompysnake8233 It would hurt the US economy. The film industry is a multi-billion-dollar industry. You would have software development teams in India and server farms in India generating films, and the US would not be able to compete on price.

    • @FreshSmog
      @FreshSmog 6 months ago +3

      That's why there are mass layoffs everywhere?

    • @huntercowles9657
      @huntercowles9657 6 months ago +1

      This is big for authors, too! No author scraping a living wants to see their work and characters copied, tweaked a little, and sent back out (for zero money or recognition).

  • @johnl.7754
    @johnl.7754 6 months ago +62

    The Copyright Office will probably get swamped if AI works can be copyrighted.

    • @WoodEe-zq6qv
      @WoodEe-zq6qv 6 months ago +7

      Sure, but the complexity behind that case is that the guy who made the picture *also made the AI model*,
      so it's kind of ridiculous for the court to say he didn't make the picture.

    • @douglassun8456
      @douglassun8456 6 months ago +3

      Probably not, as you don't have to file in order to have your copyright recognized. As long as you put your copyright bug on the work, it is recognized that you have implied copyright. What will happen is that the AI program (or whoever is representing the AI) will have to spend a lot of time policing possible violations of its copyright, then file the DMCA takedowns and generate the cease-and-desists as necessary. It's a huge task, which is why even obvious violations slip through the cracks all the time. Just spend a day on Etsy and you'll see what I mean.

    • @arthura.2587
      @arthura.2587 6 months ago +1

      So if AI-generated art can't be copyrighted... how is Midjourney etc. able to decide which parts of their free or paid plans give you commercial rights?
      I.e., just use freely generated Midjourney images and sell them commercially as digital files or prints... and if they want to sue you for it, they can't, right?
      Because the images have no human authorship, it should be legal for anyone to use them as they please, because Midjourney doesn't have rights to them either... ?

    • @abdiganiaden
      @abdiganiaden 6 months ago +2

      Yeah, allowing AI copyright will flood the system with copyright junk to the point where it will be meaningless to enforce.

    • @AB-wf8ek
      @AB-wf8ek 6 months ago +2

      Like a DoS attack on the Copyright Office

  • @ericcartmansh
    @ericcartmansh 6 months ago +2

    Brilliant breakdown. I can easily see the expert in this video coming to Congress to distill the issues down!

  • @CoinOpTV
    @CoinOpTV 6 months ago +3

    Would like to see more of Naruto's selfie portfolio!

  • @JAbc360
    @JAbc360 6 months ago +18

    #Justice for Naruto

  • @homewall744
    @homewall744 6 months ago +3

    If you allow access to anything, you must accept that they can learn/train from it. Now if they produce new content based on it that makes significant use of your images/words, then they violate copyright just as a person would. After all, the tool does not create anything beyond what the owner/operator of the tool had it do. I mean, if you use your hands, a shovel, or an excavator to dig a hole, the hole is what was created, regardless of the tool used or whether you'd seen other holes before.

    • @arizvisa
      @arizvisa 6 months ago

      This. That is exactly how information works.

  • @alexandrugheorghe5610
    @alexandrugheorghe5610 6 months ago +8

    Nice to put a face to the WSJ narration voice

    • @gus473
      @gus473 6 months ago

      Yes! And increasingly important as we get more AI-generated videos on YouTube and elsewhere! 😎✌️

    • @timogul
      @timogul 6 months ago

      That was just an AI metahuman. Maybe.

    • @interestsavvy6813
      @interestsavvy6813 6 months ago

      Who knows, maybe this video was made by AI.

  • @nikkivieler3761
    @nikkivieler3761 6 months ago +22

    So artistic and creative works are protected... That's good news... 😍

    • @MrJwyne
      @MrJwyne 6 months ago +2

      Very good news! Can’t believe (well I can actually believe it) people would try to copyright AI work 😄

    • @colbzyk2128
      @colbzyk2128 6 months ago

      They were always protected, but if someone builds a machine learning model and uses their data, that should be protected too.

    • @BeIlG
      @BeIlG 6 months ago

      Unless it is for grinding down your life's work into AI image generators, apparently... :'o

    • @mf--
      @mf-- 6 months ago

      Are they though? If the courts ruled data can be scraped and used for any reason, that's definitely not in service of data protection. A poor ruling.

  • @stafonvoncamron
    @stafonvoncamron 2 months ago

    A good argument here could be that Disney and other studios have been using computers and CGI to make all their films and still get to copyright them.

  • @osigano
    @osigano 6 months ago

    Great piece! Thank you, Jeff Vezos!

  • @Burnlit1337
    @Burnlit1337 6 months ago +2

    I find it odd that none of the organizations WSJ contacted wanted to comment. I mean, it's good that the journalist reached out, and these orgs are free not to reply, but when all of them decline, I have to question the common denominator.

  • @duran9664
    @duran9664 6 months ago +2

    🔆The lady at the end is definitely an AI 🤏

  • @mf--
    @mf-- 6 months ago +1

    Creating software based on unlicensed assets is an obvious violation of copyright.

  • @AndreaDoesYoga
    @AndreaDoesYoga 6 months ago

    Fascinating insights, AI really is changing the game! 🤖

    • @ttt5205
      @ttt5205 4 months ago

      For the worse

  • @BeIlG
    @BeIlG 6 months ago +3

    SCRAPING NEEDS TO BE BANNED IF YOU DO NOT CITE YOUR DIRECT SOURCES. EVERY PIECE OF INFO IS BEING SOURCED AND YOU DESERVE THE RIGHT TO OPT OUT! If every tweet holds some amount of publishing copyright, then you are legally allowed to prevent others from using your work. Image and video scraping should be banned outright, no matter what. It should be a solely opt-in situation.

    • @arizvisa
      @arizvisa 6 months ago

      Unenforceable... and it also does not solve the problem. Preventing scraping will only hand data silos, or people with access to said data silos, a potential business opportunity. Not to mention that it's not easy to prove whether someone has or has not "stolen" information.

  • @arizvisa
    @arizvisa 6 months ago

    We should be signing all of our content anyway. Anything digital is easily reproducible. Without some cryptographic/mathematical cost to creation, verification, or even viewing, the whole point is moot. We're ~30 years into the information age, we treat algorithms as adversaries, and we still don't understand how information works.
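
    For anyone curious what "signing all of our content" could look like in practice, here is a minimal sketch, assuming the third-party Python cryptography package and Ed25519 signatures; the file name is a placeholder. Note that a signature like this only proves who signed the bytes and that they haven't changed since, not who authored the underlying work.

    # pip install cryptography  (assumed dependency)
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # creator keeps this secret
    public_key = private_key.public_key()        # published alongside the work

    artwork = open("my_artwork.png", "rb").read()  # hypothetical file
    signature = private_key.sign(artwork)

    # Anyone with the public key can check that the bytes are unmodified and
    # were signed by the holder of the private key.
    try:
        public_key.verify(signature, artwork)
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")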

  • @SomeNerd361
    @SomeNerd361 6 months ago +24

    If AI generated content can be copyrighted, human-created works are doomed. No human is going to output work at the pace of a computer, ever.

    • @timogul
      @timogul 6 months ago +2

      Yes, but this is not the problem. The _problem_ is that if a human cannot compete with a machine, then his wellbeing is at risk. THAT is the problem that needs solving, not by hobbling the machine, but by disconnecting "wellbeing" from "productivity."

    • @BeIlG
      @BeIlG 6 months ago +5

      It is also backwards from the point of copyrighting things. There is no capacity for originality in AI. It is like buying a custom IKEA desk and trying to copyright the design... YOU DID NOTHING THOUGH!!! lol

    • @ttt5205
      @ttt5205 4 months ago

      @@timogul AI will demoralize artists from making anything regardless of financial factors. It cheapens the medium just by existing.

    • @timogul
      @timogul 4 months ago

      @@ttt5205 Meh, an artist that gets demoralized because someone else can do better is not worth bothering with. I've been a working artist for over twenty years now, and there have always been TONS of better artists than me. It's just the nature of the business. People who have art in their souls will produce art regardless of what attention they get from others for it.

    • @ttt5205
      @ttt5205 4 months ago

      @@timogul Artists don't get demoralized because someone "does better." Another artist making better art is something to strive for. Artists get demoralized when the work they've spent hours pouring their time and energy into gets buried under hundreds of pieces of AI-generated mediocrity that can be created in that same time. You know nothing of artists or our motivation.

  • @ryan_singh
    @ryan_singh 6 months ago +2

    Why did you guys skip NYTimes vs. OpenAI?

  • @sommmeguy
    @sommmeguy 6 months ago +1

    How can you copyright art that was stolen from other artists? All "AI" is just an amalgamation of images scraped from the internet, art that was created by humans.
    If it were copyrighted and then shared on the internet, another AI could just use that image to create another just like it.
    This is probably the fate of all art in the AI age: endless regurgitations of mediocre images copied from other mediocre images.
    And it is all a moot point, because the way copyright law works is that the infringed party must assert its copyright, i.e., hire lawyers and fight the big tech companies. You can imagine how often and how effectively that will happen. Creators' only hope is government regulation and bans.

  • @kylokat
    @kylokat 6 months ago

    A huge issue that this video misses is that people will simply not disclose the use of AI if it really is not copyrightable.

  • @jarrettbobbett5230
    @jarrettbobbett5230 6 months ago +2

    I vote no copyrights for AI.

    • @arizvisa
      @arizvisa 6 months ago

      Unenforceable. Information doesn't work like that, since it has no real monetary cost (unless you explicitly assign one). Even then, because it's digital, it's impossible to distinguish whether the material was made by a human, an augmented human, or software. Information is like speech; it's impossible to know where it came from without metadata/references.

  • @marcusmoonstein242
    @marcusmoonstein242 6 months ago +4

    If I take a photograph of the Empire State Building, then I can copyright that photograph. I'm not saying that I created the Empire State Building, I'm just copyrighting an image I made of it using my camera and my skills as a photographer.
    To me the same principle applies to AI generated art. It was my skills as a prompt writer and image editor that produced that image with the help of AI. That image would never have existed without my input so it belongs to me and I should be able to copyright it.

    • @truthboom
      @truthboom 6 months ago +2

      Except the Empire State Building is not copyrighted. Also, the AI needed image references to train on.

    • @mf--
      @mf-- 6 months ago +1

      Just because someone mines a particular diamond from a particular seed in Minecraft does not mean they own it. Mining the latent space of an image generator is not creative when the output can be replicated exactly with the same inputs of noise seed and text prompt.
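
      To make the reproducibility point concrete, here is a rough sketch of how fixing the seed and prompt pins down the output in a typical open diffusion pipeline. It assumes the Hugging Face diffusers and torch packages and uses an illustrative checkpoint ID; bit-exact repeatability still depends on running the same weights, scheduler, and hardware.

      import torch
      from diffusers import StableDiffusionPipeline  # assumed dependency

      # Illustrative checkpoint; any Stable Diffusion-style weights behave the same way.
      pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

      def generate(seed: int, prompt: str):
          # A fixed seed fixes the starting noise, and hence the denoised output.
          generator = torch.Generator().manual_seed(seed)
          return pipe(prompt, generator=generator, num_inference_steps=30).images[0]

      img_a = generate(1234, "a cat in the style of van gogh")
      img_b = generate(1234, "a cat in the style of van gogh")
      # img_a and img_b should come out (near-)identical: same seed, prompt, and weights.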

    • @ttt5205
      @ttt5205 4 months ago +1

      "skills as a prompt writer" 😂

  • @carkawalakhatulistiwa
    @carkawalakhatulistiwa 6 months ago

    Feel the AGI

  • @timogul
    @timogul 6 months ago +3

    I think that outside of arbitrary "nope" rulings, AI work _should_ be copyrightable. It's just work; an AI run by a company should be considered no different than a work-for-hire artist. I think the "training" issue is a non-starter: if a human can look at a ton of art, learn from it, and draw better, then there's nothing wrong with that, so there's nothing wrong with an AI doing it either.
    The issue of an AI work looking too similar to another work should also be handled exactly as with a human, though, so AI might get in trouble for this, just as humans sometimes do. The bright side is that you can build AI "similarity detectors" that compare a finished piece to every piece on record and check whether it is substantially similar, so you can catch these mistakes before they become a liability.

    • @AB-wf8ek
      @AB-wf8ek 6 months ago

      The irony is, it's possible to use AI to measure exactly how similar one image is to another; all lawmakers would have to do is define a threshold.
      Though the problem with defining a hard line is that soon everyone will optimize for it and open the floodgates of automated copyright registration, like some kind of digital gold rush, or domain squatting.
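
      As a rough illustration of the "measure similarity, then pick a threshold" idea, here is a minimal sketch. It assumes you already have embedding vectors for the two images from some image-embedding model (a CLIP-style encoder, for instance), uses random vectors as stand-ins, and the 0.9 cut-off is purely hypothetical, not any legal standard.

      import numpy as np

      def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
          # Cosine similarity between two embedding vectors, in [-1, 1].
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      # Stand-ins for embeddings produced by an image model.
      rng = np.random.default_rng(0)
      emb_a, emb_b = rng.normal(size=512), rng.normal(size=512)

      SIMILARITY_THRESHOLD = 0.9  # hypothetical "hard line"

      score = cosine_similarity(emb_a, emb_b)
      print(f"similarity={score:.3f}  flagged={score >= SIMILARITY_THRESHOLD}")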

    • @timogul
      @timogul 6 months ago

      @@AB-wf8ek Yeah, although "copyright squatting" is a lot harder than patent or trademark squatting. You can't just spit out a million different characters and then claim ownership over anything "substantially similar" to their designs. The works would need to be published in a significant manner, such that the future person who might run afoul of the copyright could possibly be aware of them. It would be a lot of work.
      You would also have to register each one, which would cost you money, and you would be unlikely to get that money back.

    • @AB-wf8ek
      @AB-wf8ek 6 months ago

      @@timogul That's good to know, so many rent-seeking grifters out there trying to profiteer off every little loophole without caring about the negative impacts.
      What do you think about using AI to measure the similarity between images as the legal basis for infringement? Do you see any ways in which people would try to game a system like that?

    • @timogul
      @timogul 6 months ago

      @@AB-wf8ek Well, I don't think "The AI says so" will be admissible in court, at least not any time soon. I think that there will be tools for AI to determine "substantial similarity," if there aren't already, but that these will just be "suggestions," rather than absolute fact. That is, if you use one, it might look through ten million pictures and go "I think this one looks like that one," but ultimately, a human would have to look at the two and agree that they do, rather than just accepting that as fact. Likewise, if an AI missed a connection, but a human showed two similar images and made their case, "the AI didn't catch it" is no excuse.
      But then on the other hand, a lot of people take things like DNA evidence, lie detectors, handwriting analysis, and other things as being more 100% proof than they might actually be, so having an AI back you up would probably be strong in a court case.

    • @AB-wf8ek
      @AB-wf8ek 6 months ago

      @@timogul I think people generally misunderstand machine learning due to it being marketed as AI. Fundamentally it's a process for mapping the connection between data points, but to such a large degree that it can emulate human conversation and generate images.
      You wouldn't "ask" a ruler what the distance is between 2 objects, you would just use it as a measuring device. Similarly with a language model, you could use it to measure the distance between 2 words/concepts, and with an image model you could use it to measure the distance between 2 images. It's a mechanical device.
      I guess it will just take a while for people to understand.

  • @TheArcanis87
    @TheArcanis87 6 months ago +4

    Nothing prevents an artist from looking at a Van Gogh painting and making and selling his own creation in the style of Van Gogh.
    I don't see why a generative AI can't look at Van Gogh's works and make new art in the style of Van Gogh.

    • @poorboyjim6392
      @poorboyjim6392 6 months ago

      His work is no longer protected by copyright so the point is moot.

    • @jasenfoster5973
      @jasenfoster5973 6 months ago

      @@poorboyjim6392 Yeah, but the same principle applies to works that are not protected.

    • @d.b5954
      @d.b5954 5 months ago

      I'm going to tell you why: because AI doesn't actually "look" at a picture. In fact, it cannot see anything.
      If I ask you to draw a cat in Van Gogh's art style, you will look for references to understand which colors you should use, the anatomy of the cat, the shape of your lines, etc., meaning you have the ability to make something original.
      But an AI doesn't understand any of that; it only knows where the pixels are and their color codes,
      meaning it is completely dependent on the references, since it takes directly from them. So AI cannot do something original like a human does.

  • @jashannon
    @jashannon 6 months ago

    How can the copyright office determine if a human was involved or not?

  • @ayanefuji
    @ayanefuji 4 months ago

    This is assuming that new technology innovations should be copyrighted immediately...

  • @aliettienne2907
    @aliettienne2907 6 months ago +1

    7:46 If the script were flipped, these gigantic companies wouldn't spare any punishment for infringements or violations committed against them. Can anyone explain to me why those big companies refuse to examine those dangerous violations before treading on them? To me it's like these folks don't care if AI progress comes to a dead halt, when copyright violations are massively preventable for gigantic tech companies. They could largely avoid those violations, but they still decide to tread on dangerous ground. Are they that desperate, to overlook the potential lawsuits that can be filed against them? Who does that? Now they are going to ruin AI technology for me and others because of silly, preventable violations that they willingly and arrogantly commit. Shame on these greedy big companies. You're already wealthy; why be so greedy? 💭🤨💭🤨 Now they've ruined it for everybody.

    • @poorboyjim6392
      @poorboyjim6392 6 months ago

      They didn't mention that the New York Times is suing too. For billions.

  • @Jexep
    @Jexep 6 months ago +2

    Very funny that when WSJ asked many AI companies, they all declined to reply or comment 😂 Your interview can be used in court, huh?

    • @timogul
      @timogul 6 months ago +1

      They should have asked ChatGPT to give them an answer, but they were too lazy.

  • @ambi.music_
    @ambi.music_ 5 months ago +1

    AI companies should pay the creators they train on; publicly available data is different from creative works. It takes years to become proficient in an art and 10 minutes to update your LinkedIn page. What they are stealing is not the data, it's the human perspective on what will be appealing to another human.
    When AI trains on AI-generated content, each generation devolves further from something a human relates to, because it inherently has no understanding of the human condition. It literally cannot create art; it can just mix and match bits of other people's art with various weightings. Given that it is 100% dependent on prior art to generate these weightings, it cannot be argued that it's creating original content.
    This is theft of creativity in the same way someone would steal 10 cents from everyone's bank account and launder it into a new billion-dollar balance. AI just mixes and matches 0.001% of everyone's existing creativity, calls it something new, and then hopes to get paid for it. Silicon Valley nerds hope no one will notice, so they can profit off everyone else's hard work becoming proficient in their respective arts and buy a new superyacht.
    Put it this way: AI needs humans for "creativity"; humans don't need AI for creativity.
    If all the humans gave up creating music, and AI was left to train on its own data in the subsequent years, it would rapidly devolve.
    "Whenever we are training an AI model on data which is generated by other AI models, it is essentially learning from a distorted reflection of itself. Just like a game of “telephone,” each iteration of the AI-generated data becomes more corrupted and disconnected from reality. Researchers have found that introducing a relatively small amount of AI-generated content in the training data can be “poisonous” to the model, causing its outputs to rapidly degrade into nonsensical gibberish within just a few training cycles. This is because the errors and biases inherent in the synthetic data get amplified as the model learns from its own generated outputs.
    The problem of model collapse has been observed across different types of AI models, from language models to image generators. Larger, more powerful models may be slightly more resistant, but there is little evidence that they are immune to this issue. As AI-generated content proliferates across the internet and in standard training datasets, future AI models will likely be trained on a mixture of real and synthetic data. This forms an “autophagous” or self-consuming loop that can steadily degrade the quality and diversity of the model’s outputs over successive generations."
    If big tech lawyers successfully argue the case, and these tech billionaires get to yoink everyone's hard work via their online data, launder the fragmented data through an LLM so they can repackage it and sell it back to us, it will be one of the biggest crimes against humanity.

  • @skatedurr
    @skatedurr 6 months ago

    If you can't tell does it matter?

  • @MageOfGravity
    @MageOfGravity 6 months ago

    Intellectual Property is not real property as it is not scarce. It's time to end copyrights, patents and trademarks.

    • @poorboyjim6392
      @poorboyjim6392 6 months ago

      You really want to talk about what's scarce?

    • @ttt5205
      @ttt5205 4 months ago

      Copyright is the only reason the creative industry even exists. Nobody is going to draw or make movies when copyright ceases to exist.

  • @wheressteve
    @wheressteve 6 months ago +1

    Thank goodness "most" people are honest and would never lie about authorship.

    • @JohnShawOhio
      @JohnShawOhio 6 months ago

      😅

    • @timogul
      @timogul 6 months ago +3

      I assume sarcasm, but it still matters, because if you built some multimedia empire on an AI generation and it _ever_ became public, that could knock the sandcastle completely out from under you. Very risky.

    • @arizvisa
      @arizvisa 6 months ago

      @@timogul Doesn't even have to be public... If you were a victim of a disinformation campaign, in a period where generative content was frowned upon, it would also have the exact same effect. The nature of information is that it is similar to speech; the only difference is that it is typically digitized and exposed to us through software, or through mediums interacted with by software.

    • @ttt5205
      @ttt5205 4 months ago

      At least when it comes to images, it's practically impossible to have any significant social media presence without being found out as an AI user. It's too easy to identify, especially across multiple images.

  • @callibor3119
    @callibor3119 6 months ago

    What's going on is simple.
    Big Tech is testing what is and is not legal on their end. It's all costly, and when they are hit with lawsuits left, right, and center across the globe, they have to raise their prices while working on new technologies.
    And depending on the class, the AI reads on who is financially wealthy and who are not doing so well to meet the needs of the customer.
    But to do that, they also have to work minimally with each software, hardware, manufacturer, distributors and businesses to keep the competition alive as they give room for even us average folks to work with their open source codes and technology and make another boom which leads to another Golden Age.
    And our tribute to working with them pays a lot and that is not the common thought that people have out in public or on the internet.
    The internet is also pushing itself to the new WEB 3.0 era with Metaverses like Unreal Engine and with an atmosphere that suits the customers needs.
    All of that is slowly but surely working in the favor of the customers if the customers can stop thinking that their life is better when they only consume.

  • @prilep5
    @prilep5 6 months ago +1

    People will start using AI for creative work that copies/reproduces similar pictures. Then the copyright should go to the AI generator, because it was the earlier inspiration for the human reproduction.

  • @MrLegendra
    @MrLegendra 6 months ago

    There must be copyright for AI so people's work is protected.

    • @ttt5205
      @ttt5205 4 months ago

      AI generations aren't "people's work."

  • @TheEmpire822
    @TheEmpire822 6 months ago +1

    So everything created by AI is just going to be fair game? That sets a bad precedent, because there is no telling what AI can actually do in the future. Another issue: what if AI helped you write code for an app or make a YouTube video? Then you wouldn't be able to monetize that whatsoever. I don't see this lasting at all; they will come out with a rule that if you have a non-human entity that is under your control and you can prove ownership, then you will be able to claim any work done by it. There will always have to be human input to make it work.

    • @ttt5205
      @ttt5205 4 months ago

      Determining the degree of human involvement is going to be unsustainable. It's significantly better if AI isn't part of the final product at all.

  • @gosselka
    @gosselka 6 months ago +3

    If AI output were copyrightable, couldn't someone just create everything and then copyright every conceivable idea...

    • @timogul
      @timogul 6 months ago +4

      Copyright doesn't work that way. It's very specific, you can't copyright "ideas," it has to be a completed work. You can copyright "The Lord of the Rings," but you can't copyright the _concept_ of "some fantasy adventurers go on a trip."
      So in theory you could have a million AIs writing a million books, but it would still take a lot of output to monopolize "all stories." Not to mention, if someone did write a book you wanted to go after them over, you would have to figure out a way to find out _which_ of your millions of books it was actually copying.

  • @fragr33f74
    @fragr33f74 6 months ago +2

    My hope is that these scummy AI companies are held to account and liable for copyright infringement, but I think we all know our corrupt and inept politicians will fail us on that front.

    • @dibbidydoo4318
      @dibbidydoo4318 6 months ago

      Please read this: en.wikipedia.org/wiki/Copyright_law_of_the_United_States. AI companies are not infringing.

    • @dibbidydoo4318
      @dibbidydoo4318 6 months ago

      AI companies are not infringing.

  • @Kagari-lf3me
    @Kagari-lf3me 6 months ago +2

    All the copyright lawsuits over training data will fail, as "learning from" and "analyzing" a work (which is what AI does) is out of scope for copyright, which only covers copying and reproduction.
    AI getting copyright protection would, however, be a bad idea, as it would be trivially easy for copyright trolls to generate 1,000,000 pictures and start suing people. It would be like patent trolls, but even worse.

    • @federicoaschieri
      @federicoaschieri 5 months ago +1

      Wrong. To train AI you need to make a local copy of the copyrighted work, so the computer can process it.

    • @ttt5205
      @ttt5205 4 months ago

      Then why do AI companies argue fair use if "copying" isn't a problem in the first place? This argument makes no sense.

  • @nutssense7499
    @nutssense7499 6 months ago

    Hi Wall Street Journal, I would like to do video editing for your content. Looking forward to hearing from you soon.

  • @landoftheunknow
    @landoftheunknow 5 months ago

    Bruh, I thought he was gonna explain "Taylor Made"...

  • @goodfortunetoyou
    @goodfortunetoyou 6 months ago +1

    One way to think of these AI tools is as very sophisticated collage generators.
    They sample from x inputs, blend and modify them, then access parts of the modified inputs to use in the final work. This is similar to something like a collage or an AMV, because it's a composition of copyrighted works made by others. So the proportional retention of a given work in the new work, and its contribution (or non-substitutability), should matter.
    -my opinion

    • @ain92ru
      @ain92ru 6 months ago +2

      Have you ever used DALL-E, MJ or SD? It's actually very hard to make it work like a collage generator, it very much "wants" to generate original content, even if prompted for something as well-known and old as Mona Lisa

    • @gus473
      @gus473 6 months ago

      Maybe, in some very general sense. Not sure if the analogy will hold up when something is a cultural icon (or the money is huge). We'll see. ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯

    • @goodfortunetoyou
      @goodfortunetoyou 6 months ago +2

      @ain92ru I tried them out. I disagree that it "wants" anything. I can't say I'm an expert on neural nets, but the basic premise is that you train/encode data into a mathematical object that allows you to use linear algebra to converge from random noise, to a picture.
      The training step requires data to know what to converge to, and the encoding has quirks. However, the internal representation is a manipulation of the input data. It's not quite interpolation, or copying and pasting, but in the same manner a collage might contain sub-parts of a copyrighted work, so does AI. Even if it's taking a single pixel from a million paintings, or creating a new internal representation where it learns the key attributes of concepts and objects, the output was necessarily derived from the inputs.
      I recall reading an interesting article on the memory capacity of neural nets before corruption occurs, which would prevent you from ever recreating the original training data exactly. These things are clearly somewhat tricky.

    • @TheArcanis87
      @TheArcanis87 6 months ago +1

      A Stable Diffusion model is 6 GB in size. There just isn't space to store a thousand terabytes of images to make "collages." The model learns patterns; it doesn't make collages.
      Likewise, an artist isn't making a collage of a lifetime of memories they saw with their eyes; an artist learned art.

    • @goodfortunetoyou
      @goodfortunetoyou 6 months ago

      @@TheArcanis87 Yes, but I think that's a false distinction in the context of training a model. Conceivably, you could train a model that solves the traveling salesman problem. Realistically, I don't think you would get great answers outside the data you train on.
      Similarly, the input data must have been compressed during training in some sense to get a reasonably sized model, so it's clearly "learning patterns". However, it's also not possible to get away from the fact that without the input data, no training could happen, and you couldn't generate something. The "patterns" are based on the training data. If you consider simple linear regression as a proxy, every data point has an influence on the calculated slope and intercept, even if that influence diminishes the more data is used.
      As far as I'm aware, a collage is the use of prior works, after transforming them, into a new work of art. So, if an artist has seen n works of art, and their brain is a black box which learns from those pieces, they ARE producing a collage that includes/references a lifetime of memories including copyrighted works, every time they make anything. That's what it means to learn how to make art.
      Artists do complain that other people steal their ideas, copy their compositions, and use their work without permission. Whether the copying rises to a level at which it might be worth litigating is not the same concern as whether seeing something has an influence. It depends on the amount of the specific work retained in the resulting image, and whether what was retained is copyrightable, among whatever other considerations the law provides.
      Remember, I'm not an expert on neural nets or law, but I feel like there's a lot of nuance and depth to this topic.
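
      The linear-regression analogy above can be made concrete with a few lines of Python. This is a toy illustration on synthetic numbers, assuming only numpy: dropping a single point shifts the fitted slope and intercept, and the shift shrinks as the dataset grows, which is the "every data point has an influence" claim in miniature.

      import numpy as np

      rng = np.random.default_rng(42)
      x = np.arange(20, dtype=float)
      y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)  # synthetic "training data"

      # Fit y ~ slope * x + intercept on all points, then with one point left out.
      slope_full, intercept_full = np.polyfit(x, y, deg=1)
      slope_loo, intercept_loo = np.polyfit(x[1:], y[1:], deg=1)

      print(f"all points:     slope={slope_full:.4f}  intercept={intercept_full:.4f}")
      print(f"one point gone: slope={slope_loo:.4f}  intercept={intercept_loo:.4f}")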

  • @nopiihere
    @nopiihere 6 months ago

    Kendrick is coming

  • @DoctorOfProduct
    @DoctorOfProduct 6 months ago

    Plot twist: this lawyer is AI

  • @newmanpc4253
    @newmanpc4253 4 months ago

    This is an attempt by AI companies to steal all the pictures that people have made. JAIL THESE criminal companies' owners. Justice must be served.

  • @ChintanCG
    @ChintanCG 6 months ago

    The courts are also going to get replaced with AI, right? 😅

  • @GokuLevelKi
    @GokuLevelKi 6 months ago +1

    Take some percentage of the profit from AI and put it into UBI (universal basic income).

    • @ttt5205
      @ttt5205 4 months ago

      AI barely makes a profit. You'd be giving everyone a couple of cents.

  • @ForeheadPushUps
    @ForeheadPushUps 6 months ago

    An AI-generated legal justice system must be used in these cases.

  • @kinga-635
    @kinga-635 6 months ago

    So the moral of the story is what? Can't be bothered to watch all this.

  • @flareonspotify
    @flareonspotify 6 months ago +2

    Copyright kills art because it stops people from creating it.

    • @dibbidydoo4318
      @dibbidydoo4318 6 months ago +1

      People were making art for thousands of years before copyright was even a concept. The Mona Lisa was created before copyright.

    • @poorboyjim6392
      @poorboyjim6392 6 months ago

      Is your Mom still doing your laundry? This is juvenile.

    • @flareonspotify
      @flareonspotify 6 months ago

      @@poorboyjim6392 I bet your wife doesn't do you or your laundry

    • @ttt5205
      @ttt5205 4 months ago

      Incorrect. Copyright actually allows people to own what they create. There wouldn't be nearly as many artists if there wasn't any copyright protection. Similarly, the film industry wouldn't exist at all if it weren't able to sell a product, which it can due to copyright protection. Copyright can be abused sometimes, but overall it's a necessity to protect artistic endeavor.

    • @ttt5205
      @ttt5205 4 months ago

      @@dibbidydoo4318 Not comparable at all. The Mona Lisa is a physical painting that can't just be reproduced; it's protected by the very medium it's created with. Copyright is essential since most art is digital now.

  • @iamtafara
    @iamtafara 6 months ago

    People have money to burn. Why say it was generated by AI if you know it will be rejected?

    • @timogul
      @timogul 6 months ago

      They probably want to challenge the case in hopes of future profits.

  • @rodfer5406
    @rodfer5406 6 months ago

    No

  • @quixoticelixer.
    @quixoticelixer. 6 months ago

    😮

  • @KaawSauce
    @KaawSauce 6 months ago

    Engagement

  • @rosbifle413
    @rosbifle413 6 months ago

    Just want to add. I absolutely hate Drake. I hate what he does, I hate his 'music'. I hate people who like him. Drake disgusts me.

  • @posthocprior
    @posthocprior 6 months ago

    If human authorship is required and machine authorship is banned, what's stopping anyone from saying that a human created an AI image/song/logo/etc.? That is, since there is no way to distinguish a finished work made by a computer from one made by a human, the legal argument is only valid if the author is honest about how it was created. However, because there is a strong incentive to deceive the Copyright Office, I don't see how this legal distinction between human and machine is enforceable.

    • @dibbidydoo4318
      @dibbidydoo4318 6 months ago

      What's to stop someone from stealing their recently deceased friend's works and telling the Copyright Office that they made them?

    • @ttt5205
      @ttt5205 4 months ago

      That's called... fraud, and it's illegal.

  • @United_Wings
    @United_Wings 6 months ago

    Oh

  • @auro1986
    @auro1986 6 months ago +1

    This same lawyer will get paid by AI publishers and WSJ in the future, so it's no use thinking about what AI gives you.

  • @Yodakaycool
    @Yodakaycool 6 months ago

    Yo, she was made by AI at the end!!!!

  • @federicoaschieri
    @federicoaschieri 5 months ago

    The legality of web scraping doesn't make it legal to train AI. Collecting the data is a prerequisite for achieving a goal, so what matters is how you actually use the data. For pure scientific research, it's OK. But if I scrape the web and republish WSJ articles, is that OK? No, of course not. So the point is whether using the data you collected to train AI is fair use or not. And the likely answer in art and music is no, because that use of the data damages the market for the stolen works. Incredibly bad journalism here.

  • @MikeisRelic
    @MikeisRelic 6 months ago +2

    Capitalism and AI don't seem to be compatible. To be fair, humans weren't that compatible with capitalism either...
    Emergent systems are only going to acquire more ability to match their agency. Humans don't even recognize nationalism as a bad idea.
    How AI gets its training data is a poor framing of our current problems and of how AI can be integrated to solve them.

    • @WoodEe-zq6qv
      @WoodEe-zq6qv 6 months ago

      Under capitalism we have experienced:
      The best increase in life expectancy.
      The best literacy rate.
      The highest measured IQs.
      The most access to material goods.
      The least number of wars.
      The highest number of democratic nations.
      In human history.
      Not compatible my a*

    • @jajefan123456789
      @jajefan123456789 6 months ago +2

      Only comment that knows what they’re talking about

    • @arizvisa
      @arizvisa 6 months ago +1

      100% not compatible. Trying to determine how to enforce where training data comes from demonstrates a complete misunderstanding. We're ~30 years into the information age and we still don't get how information works in a society.

    • @arizvisa
      @arizvisa 6 months ago +1

      @@jajefan123456789 For real.