Daniel Dennett Investigates Artificial Intelligence | Big Think

  • Published: 22 Apr 2012
  • Daniel Dennett Investigates Artificial Intelligence
    New videos DAILY: bigth.ink
    Join Big Think Edge for exclusive video lessons from top thinkers and doers: bigth.ink/Edge
    ----------------------------------------------------------------------------------
    Luis Perez-Breva, a professor at MIT, thinks we've probably been watching too many Terminator movies to really understand what AI actually is. It will (hopefully, knock on wood) be much less a hyper-intelligent humanoid killing machine and much more of a sidekick. Luis brings up a great point that many in the AI world gloss over: we saw this kind of so-called "job killing" a century ago when Henry Ford brought automation to the workplace. Luis posits that AI won't be that much different from what we're used to, and that mankind should be creative enough to figure out how to fit human jobs and AI side by side. Luis Perez-Breva's new book is Innovating: A Doer's Manifesto for Starting from a Hunch, Prototyping Problems, Scaling Up, and Learning to Be Productively Wrong.
    ----------------------------------------------------------------------------------
    LUIS PEREZ-BREVA:
    Luis Perez-Breva, PhD, is an expert in the process of technology innovation, an entrepreneur, and the author of Innovating: A Doer’s Manifesto for Starting from a Hunch, Prototyping Problems, Scaling Up, and Learning to Be Productively Wrong (MIT Press, 2017).
    Perez-Breva currently directs the MIT Innovation Teams Program, MIT’s flagship hands-on innovation program jointly operated by the Schools of Engineering and Management. During his tenure, i-Teams has shepherded over 170 MIT technologies toward a path to impact. He has taught innovating as a skill worldwide to professionals and students from all disciplines, and has gotten them started innovating from pretty much anything: hunches, real-world problems, engineering problem sets, and research breakthroughs.
    He is a serial innovator whose successes include emergency cell phone location technologies currently deployed worldwide and a fully automated portfolio allocation and trading system, along with numerous other stories from his trial-and-error adventures conceiving artificial intelligence technologies that tackle real-world problems and driving them to market.
    As an innovator and entrepreneur he has worked on cell phone location for emergency response and national security, genetics, healthcare intelligence, and automated portfolio allocation, and he has developed several non-profit organizations, including building a new university centered around innovation. In 2011, the government of Spain recognized his career achievements with the Order of Civil Merit of the Kingdom of Spain.
    ----------------------------------------------------------------------------------
    TRANSCRIPT:
    Luis Perez-Breva: A lot of people are scared of AI. And the reason, I think, is we’ve seen too many Terminator movies. So we’re mixing many things up. It is true Terminator is not the scenario we are planning for. But when it comes to artificial intelligence people get all these things confused. It’s robots, it’s awareness, it’s people smarter than us to some degree. So we’re effectively afraid of robots that will move and are stronger and smarter than we are, like Terminator. That’s not our aspiration. That’s not what I do when I’m thinking about artificial intelligence. When I’m thinking about artificial intelligence I’m thinking about it in the same way that mass manufacturing brought forth a whole new economy. Mass manufacturing allowed people to get new jobs that were unthinkable before. And those new jobs actually created the middle class. To me artificial intelligence is about developing... making computers better partners, effectively. And you’re already seeing that today. You’re already doing it, except that it’s not really artificial intelligence. Today, whenever you want to engage in a project, you go to Google. Google uses advanced machine learning, really advanced.
    And you engage in a very narrow conversation with Google, except that your conversation is just keywords. So a lot of your time is spent trying to come up with the actual keyword that you need to find the information, and Google gives you the information. And then you go out and try to make sense of it on your own and come back to Google for more. And then go back out, and that’s the way it works. So imagine that instead of having a narrow conversation with keywords you could actually exchange more, and actual information, meaning have the computer reason with you about stuff that you may not know about. It’s not so much about the computer being aware. It’s the computer being a better tool to partner with you. Then you would be able to go much further. The same...
    For the full transcript, check out bigthink.com/videos/luis-pere...

Comments • 15

  • @salasvalor01 • 11 years ago

    I've been researching Kurzweil extensively, now it's time to see what Dan has to say.

  • @Tnu1138 • 5 years ago

    Of course there are good reasons to try: if we can understand the human mind we can improve on it and remove limitations.

  • @lyleg4584 • 6 years ago

    There are robotic birds that fly like real birds. There's a video of one demonstrated in a TED talk 7 years ago.

  • @AlexSeeMr • 11 years ago

    You can go to MIT and shoot a laser to bounce off the reflector that's located on the Moon.
    I guess your claim has a good explanation for that.

  • @ThunderChunky101 • 9 years ago

    I always like to think like this -
    Imagine a computer that's at least as complex as a human brain. It has to defrag itself. If it can think, and by definition I think it can, then exactly what would it do when it defrags? What would it do physically and what would it be "thinking" while it was rearranging that data? All that sensory input, all those images and sounds.
    The data would have to be partially read in order to do so, and I expect it would take a lot of processing power. It would have to shut down physically, at least partially, for stretches of the day in order to get it done.
    This I imagine would be like sleep, and those dreams would be those of the electric sheep variety...
    Any objections? 

  • @TopShelfization • 12 years ago

    has this guy heard of Myon?

  • @I_have_solved_AGI • 4 years ago

    wrong..

  • @kcsunnyone • 11 years ago +2

    Why duplicate humans? There is no reason to do it; we have billions of humans already. Make something better and more useful.

  • @rahmiaksu • 9 years ago

    There is a big difference between constructing a generic AI computer and an AI agent. We ideally want the former, an all-purpose tool that can compute things without the heavy workload of trying to figure out how to teach it. It could learn by itself. An AI agent is not useful to us, and that is where Dennett's argument comes in. Give me your results and do the tasks that I tell you. If my commands are not specific enough, ask me to specify. Don't do things with your own "initiative". That's where problems can occur. But a generic AI machine would be infinitely useful. It would be the greatest tool ever created.

    • @eurethnic • 9 years ago

      Wouldn't it have to be self programming to be a true intelligence? Would it then create agency for itself to help solve the problems we give it?

    • @rahmiaksu • 9 years ago

      LEE WILLIS: Well, it hasn't been invented yet, so we can't really say what such a tool would technically entail. However, I'd imagine that it may not necessarily need to "program" itself; rather, it would be as flexible in its execution as needed to reach a goal without knowing exactly how to reach it. It would try to find out how to do the task without supervision. If this amount of comprehension requires agency or self-awareness, then indeed my point would be moot. But I really doubt this...