Gaming, Goats & General Intelligence with Frederic Besse

  • Published: 31 Dec 2024

Comments • 36

  • @Free_Ya_Mind
    @Free_Ya_Mind 3 months ago +9

    The enthusiasm of the interviewer and her ability to ask meaningful questions make the whole interview very informative and enjoyable!

  • @MsRseagreen
    @MsRseagreen 3 months ago +6

    Interesting. Great podcast, I’ve enjoyed every episode. The production and sound are especially good 👏👏👏

  • @En1Gm4A
    @En1Gm4A 3 months ago +3

    Great interview. Clear display of knowledge. Thx

  • @ahtoshkaa
    @ahtoshkaa 3 months ago +2

    Outstanding interview. Great interviewer. Very professional

  • @En1Gm4A
    @En1Gm4A 3 months ago +3

    I think about agents planning in terms of paths (or let's call them chains or loops) in abstraction space. Abstraction space is a graph of concepts. I think this is the most useful way for them to learn: identify abstractions and then use them. Think about that and apply it to your research, please. This makes the users interacting with them capable of understanding their planning and knowing what they would do. It's essential for trustworthy AI agents.
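
    A minimal sketch of the idea above, assuming an explicit concept graph: planning becomes a path search whose output is a readable chain of abstractions a user can inspect. The graph, concept names, and goal are hypothetical illustrations, not anything described in the episode.

    # Hypothetical sketch: an "abstraction space" as a graph of concepts,
    # with a plan expressed as a path (chain) from a start concept to a goal.
    from collections import deque

    # Illustrative concept graph: each abstraction lists abstractions it can lead to.
    concept_graph = {
        "observe scene": ["identify objects"],
        "identify objects": ["pick tool", "open door"],
        "pick tool": ["use tool"],
        "open door": ["enter room"],
        "use tool": ["goal reached"],
        "enter room": ["goal reached"],
    }

    def plan_path(graph, start, goal):
        """Breadth-first search for the shortest chain of abstractions from start to goal."""
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == goal:
                return path  # a human-readable plan the user can inspect
            for nxt in graph.get(node, []):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(plan_path(concept_graph, "observe scene", "goal reached"))
    # ['observe scene', 'identify objects', 'pick tool', 'use tool', 'goal reached']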

  • @thundertfive
    @thundertfive 2 months ago +1

    Great video with great people

  • @MonjeMonoIA
    @MonjeMonoIA 2 months ago +1

    this is some great content right here

  • @avijit81
    @avijit81 3 months ago +3

    This podcast felt somewhat detached from how Reinforcement Learning and Large Language Model research are merging. It still talks about the world of RL in games.

  • @alexandermoody1946
    @alexandermoody1946 7 days ago

    Rather than a benchmark for assessment, there could be a comparison between a fairly generalist human's abilities and a computational system as a base for assessment.
    Games may offer some variables and potential for decision-making and action assessment, but real-world challenges may be broadly more diverse and richer, so working with a generalist may provide examples, and equally the model may also improve on the generalist's weaknesses. Learning from each other would be a key to providing real-world understanding for machine learning and would create opportunities for millions of training examples.

  • @luiztomikawa
    @luiztomikawa 3 months ago +1

    I'm just curious... I've been using NotebookLM a lot lately; is the host of this podcast the voice of the female host in there? I noticed some familiarity in the voice... but I don't know if I'm tripping 😅

    • @torbenhr450
      @torbenhr450 3 months ago

      The NotebookLM host has a thick American accent. They do not sound very similar, imo.

    • @ahtoshkaa
      @ahtoshkaa 3 months ago

      @@torbenhr450 yeah the voices are very different

  • @Ben_D.
    @Ben_D. 3 months ago +8

    Glad to see Hanna getting stuck in with AI these days. Two of my favorite things.

  • @Malouco
    @Malouco 2 months ago +1

    40:32
    Look up Clone Robotics; they have intricate hands with hydraulics that can manipulate small objects and are 10x stronger than other hydraulic hands.

  • @newlin83
    @newlin83 2 months ago +1

    The most important objectives in life are highly subjective: politics, religion, fashion, architecture, etc. How does AI deal with aesthetic experience, emotions, thoughts?

  • @Youtubeaccount_818
    @Youtubeaccount_818 3 months ago +3

    Bring back the 60fps

  • @idaeom
    @idaeom 3 months ago +1

    I can't help but think about the perfect slave conditions. No reward, they are just ready to work at all times. Imitation Learning 22:38

  • @WillyB-s8k
    @WillyB-s8k 3 months ago +1

    Somebody once said, I forget who, that no one they had ever talked to had put forward a possible utopia where robots have taken over all the labor. But I have one: just food distribution and multiple ARGs whereby 10,000+ people can all play Star Trek and many other universes in real time, on play sets built for them by an ASI system that knows what everyone on all of Earth is doing and can make us all believe that we have transcended into superhumans through mastery of the total environs.
    Just thought this would be a good place to put that. ✌️

    • @imthinkingthoughts
      @imthinkingthoughts 2 months ago

      Awesome stuff. Unfortunately, the hypothalamus is triggered for most individuals when thinking about AI and the future; understandable, but not ideal. Great to see you're out here spreading hope and optimism!

  • @ZanDatsu
    @ZanDatsu 3 months ago +2

    In just a few years we will be talking to NPCs in games which are smarter than any human we've ever met in real life.

  • @goldnutter412
    @goldnutter412 2 months ago +1

    Makes sense

  • @siddhanthbhattacharyya4206
    @siddhanthbhattacharyya4206 2 months ago +1

    I wanna be at places like DeepMind one day.

  • @deeplearningpartnership
    @deeplearningpartnership 3 months ago +1

    What about double agents? Be careful out there.

  • @arjunsinghyadav4273
    @arjunsinghyadav4273 3 months ago +1

    Why, why, why, I ask. I am not questioning whether we can achieve it, but are we so bored that we have no other problems left to solve?

  • @Mr.manikolMr.manikol
    @Mr.manikolMr.manikol 1 month ago +1

    Salman Nagar

  • @ssonationsports7064
    @ssonationsports7064 3 months ago +1

    You thought RuneScape had a botting problem before? Wait until the AIs take over, loool.

  • @cfjlkfsjf
    @cfjlkfsjf 3 months ago +1

    Have a bunch of shroud agents owning noobs at the games.

  • @flydwen
    @flydwen 3 months ago +1

    .

  • @temka088
    @temka088 3 months ago +1

    2nd

  • @JohnTownshend
    @JohnTownshend 3 months ago +3

    Maybe I didn’t understand the guy completely, but he didn’t get across how this SIMA could scale to larger and more varied environments. So much so that it seems somewhat ineffective, inconsequential, incapable, and irrelevant. Perhaps it will lead to a new paradigm of AI development, but from what was discussed here, it seems it could be a dead end.

    • @PaddyLamont
      @PaddyLamont 3 months ago

      I believe it would require the collection of a lot of data of humans just ... doing stuff. Then, like LLMs, these models could learn to mimic humans doing stuff and do stuff for themselves.
      It does seem a bit underwhelming at the moment, but I wonder if it will get to a point where the agents are good enough that they can explore games and explain what they are doing. Then, that autonomously-generated exploration data could be used as better training data, similar to what people expect OpenAI have done with o1 and training their next larger LLM. You would just need human data to learn the language to use, and to bootstrap its development.

  • @banana420
    @banana420 3 months ago +2

    Real missing mood to talk about building agents but nothing about the dangers or risks.