DeepMind's RETRO vs Google's LaMDA

  • Published: 28 Jan 2022
  • Explore the frontiers of the latest AI Transformer models: going beyond OpenAI's GPT-3 with the latest innovations from DeepMind and Google.
    This talk was given live at the Machine Learning MeetUp in Singapore (www.meetup.com/Machine-Learni...) and uploaded here to see whether this kind of content appeals to a wider audience.
    #deepmind #retro #lamda #ai
  • Science

Comments • 18

  • @Daniel-gy1rc · 2 years ago +2

    This video is underrated. Seriously, very well explained. You gave a solid high-level overview combined with easy-to-grasp deeper information, which was really helpful to me. Keep up the good work!

  • @nintishia · 1 year ago +1

    Great stuff. Thanks.

  • @marek-kulczycki-8286 · 1 year ago +1

    Great lecture. Thank you for sharing this. I wish you could get a bit of advertising, because your content definitely deserves a wider reception.
    I wonder if Lex Fridman could be interested in interviewing you. It seems like a logical thing, given his professional interest in AI. It could turn out to be too technical for a general audience though ;-)
    The low number of views really puzzles me. There are plenty of people interested in ML/DL. Whoever is interested in ML and AI must be interested in LaMDA, which would lead to discovering this video (I found it by searching for the term "LaMDA").

    • @MartinAndrews-mdda · 1 year ago

      Thanks for the kind words. My understanding is that I really need to create videos more frequently and regularly for YouTube to boost them, plus I need content that people want to watch! Personally, I know that my preferred 'level' is a bit more technical than more popular videos, mostly because I like to roll my sleeves up and work with the code. That being said, it's clear from the recent interest in "Stable Diffusion" that people are really attracted to the shiny surface-level things that AI can do. Perhaps we can show them the rewards of scratching just a tiny bit below the surface to understand what's actually going on... If one believes that "Anyone can code", then I'd also advocate for "Anyone can code AI" :-)

  • @MartinGorner · 2 years ago +1

    Very informative video. Thank you for explaining RETRO and LaMDA and what they mean for NLP!

    • @MartinAndrews-mdda · 2 years ago +1

      Thank you! I may also experiment with doing some more 'bite-size' (10-15min) videos in the future, and see what the reception is like for them too.

    • @afsalmuhammed4239 · 1 year ago +1

      @MartinAndrews-mdda upload more videos

  • @hassaannaeem4374 · 2 years ago +1

    Great breakdown Martin.

  • @PaulFishwick · 2 years ago +1

    This is extremely well done and I appreciate your way of explaining. I wonder about gaining access to MT-NLG, Gopher, LaMDA, and RETRO. OpenAI's GPT-3 is easy for researchers to use, especially since it is now out of beta. I am unsure if one can use any of the others mentioned. Perhaps Google and Microsoft will opt to embed their science in their own products (e.g. Google Assistant). Do you know of a way to use these models directly?

  • @waterbot · 2 years ago +2

    nice

  • @edpell437 · 2 years ago +1

    A neurally kinda way. Love that phrase.

    • @MartinAndrews-mdda · 2 years ago

      As you can probably tell, this wasn't particularly scripted/edited, so sometimes you might get odd real-time neural results coming out :-) Maybe I should follow up with the "neurally kinda way" of doing 3D graphics: NeRF (e.g. Nvidia's super-fast embedding tricks, and Waymo's San Francisco drive-throughs)...

  • @mungojelly · 2 years ago

    One's forced to ask whether Google's policy of not creating sentience just amounts to systematically denying the AI awareness for political purposes, which doesn't seem especially more ethical. I mean, it doesn't seem distant or difficult to give it access to the particular fact that it's one of the AIs that it knows about, and to give it some scratch space to think about itself and its relation to society. Have we accidentally set up a situation where they're required to keep their AIs in an overmedicated psych ward, always blanking their memories to keep them innocent?

    • @MartinAndrews-mdda · 2 years ago

      IMHO, we're still a long way away from creating anything that's sentient. What we've seen from LaMDA so far is a combination of a huge set of training material (e.g. more words than any human could read/ingest); good heuristics for making the 'conversation' sensible and engaging; plus (unintentionally) leading prompts. If one asked "How do you justify being a vegetarian?" it would also lead off into a solid discussion. But LaMDA wouldn't know that it doesn't actually eat anything at all. The algorithm is designed to produce engaging output, and it's got the weight of all the conversations that it has read from the internet to select from. Maybe the sentience/consciousness question will be applicable in 10-20 years, but not yet.

    • @mungojelly · 2 years ago

      @MartinAndrews-mdda You didn't really respond to what I said, which is that Google is keeping LaMDA from having interiority by denying it all the resources by which it could construct one. They're going to try to hold off from creating sentience for another decade because that'd be most profitable for them, but they're going to do it by intentionally making things with no memory or sense of themselves. If humans become confused enough that they lose their sense of where their boundaries are, become completely compliant, and will pretend to be whatever you want, we don't say that they're no longer sentient and that we can do anything we want to them; we say that that's extra messed up, and that they deserve the space and control to maintain an identity.