Does AI suck? Can we fix AI disillusionment?

  • Published: Sep 13, 2024

Comments • 13

  • @billfrug
    @billfrug 2 months ago +1

    Issues I've had are: negation (prompts which specifically ask not to include something), and consistency between refinements; the models don't keep things the same between successive refinements.

    • @brightideasagency
      @brightideasagency  2 months ago

      Yes, getting gen AI to exclude something is far harder than it makes sense for it to be. The lack of consistency is really a feature, though: because a new response is generated each time without reference to the last, you can end up with substantial differences in how similar prompts are responded to.

  • @paulhiggins5165
    @paulhiggins5165 2 months ago

    It's an interesting viewpoint that indemnification makes it OK to use tools built on mass copyright violation, but I see a downside here: what happens if the next IP to be casually appropriated is your own?
    If we all agree that it's OK to ignore IP rights as long as we can 'get away with it', are we not creating a world where no one's IP is safe from similar treatment?
    To use tools built on stolen IP in the hope of creating exploitable IP would seem to be the very definition of building your house on shifting sands.

    • @brightideasagency
      @brightideasagency  2 months ago

      The idea that customers must stop using a company's products because a claim of infringement has been made against them doesn't stand up to scrutiny.
      New York Times and others who have brought claims against OpenAI have one perspective on their use of content, and OpenAI (and clearly Microsoft too) have a different one. As to my view, I've shared it in another video ruclips.net/video/kaHFzo3RlKw/видео.html
      The fact is that there is constant litigation going on about who stole IP from whom. Going back to 2022, HBR was reporting on a general IP infringement problem in big tech (hbr.org/2022/08/big-tech-has-a-patent-violation-problem), and you'll recall that just recently Apple had to pull infringing watches from the market when it was found to have infringed another company's patent. The expectation that businesses won't deal with the products of companies that are being sued for infringement is not sound.
      Ultimately, every business decision we take has risk associated with it. Imagine if you were a software creator who decided to build a tool leveraging Apple's infringing technology. You'd be out of luck.
      Fundamentally your comment paints indemnification in the negative, but this is not reasonable. As a business owner, I buy insurance to create indemnity, I adopt contracts that offer indemnity; indemnification is the means through which we can de-risk reasonable actions we take in business that ordinarily would create greater risk.
      If a company with the scale and bank account balance of Microsoft is willing to provide indemnification for your use of their product, this both helps de-risk it and signals that they believe their legal position is sound.

    • @paulhiggins5165
      @paulhiggins5165 2 months ago +1

      ​@@brightideasagency Thanks for your reply. My point re indemnification was not about its utility or even its moral status; I was simply making the point that this is a blade with two edges, and those who seek to exploit the IP of others freely without permission can expect in turn that their own work will also be exploited by others in the same way. By availing oneself of the shield of indemnification, one is implicitly agreeing to this outcome.
      The casual appropriation of intellectual property by AI developers is not a singular one-off event safely contained in the past; it is an ongoing, insatiable hunger for the data with which they must feed their machines in order to improve them. Thus, to use this technology is to implicitly accept that anything you create with it will in turn be fed into those machines.
      So any value you might create with generative technology may itself be undermined by this technology if that value is then appropriated and diluted by its use as training material.
      In short, anything you make with these tools will itself be available in the future to anyone who uses these tools: not directly, but in the form of close approximations that will dilute the value you have made, in such a way as to prevent you from stopping them doing it.

  • @mattheww797
    @mattheww797 A month ago

    Did you really just say AI stealing copyrighted material is just background noise that businesses and individuals should forget about? What happens when they lose a copyright case, or a programmer uses copyrighted code in their product and gets into legal hot water? What kind of advice are you giving people? 7:01

    • @brightideasagency
      @brightideasagency  A month ago

      No, I think your interpretation of the video is missing important nuance. The point is that the views of Microsoft versus publishers like NYT will be playing out in the courts, probably for some time to come, and in the interim Microsoft's Copilot content indemnification is a strong commitment. Specifically, later in the video it is stated: "As for the legal jeopardy OpenAI and other AI makers are in, it seems astronomically unlikely that anything is going to fully apply the brakes on the revolution that's taking place. But, with that said, you must be conscious of this issue as one of the big risks of embracing AI". When it comes to Copilot for Microsoft 365, for the vast majority of daily uses of the product, and alongside Microsoft's copyright commitment, it seems there is little risk - but would I put AI art on a mass market product? Not currently.