Theory of mind through the lens of algorithms | Andreea Diaconescu | TEDxZurich

  • Published: 1 Dec 2024

Comments • 15

  • @zciliyafilms5508
    @zciliyafilms5508 2 years ago +7

    As a mathematics major who has been thinking about questions of consciousness for a while, she's onto something.

  • @rodrivers4073
    @rodrivers4073 5 years ago +12

    Like other people who commented, I found this talk rather confusing. I think this might have been something to do with the contrast between the deep significance of the topic and the way in which it is presented. Theory of mind must be one of the highest-level human capabilities, forming the foundation for trust and all kinds of social interaction. Yet the implication is that there is a simple mathematical formula to explain it. The talk seems to leave so much out in its explanation of the algorithms, its assumptions and its implications. So much so that I followed up by finding some of the speaker's research papers. One of them concludes: "Overall, our results suggest that humans (i) employ hierarchical generative models to infer on the changing intentions of others, (ii) use volatility estimates to inform decision-making in social interactions, and (iii) integrate estimates of advice accuracy ….". This seems fair enough, because its claims really concern a very general learning process that is not necessarily specific to theory of mind. I would feel more comfortable if it were presented as a hypothesis about individual differences in the way people develop trust, rather than as a mathematical representation of theory of mind. Theory of mind is really very much more complicated, and this mathematical model would only be a small part of the algorithms required to give an artificial intelligence the capability to model human intention.
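
    To make the quoted conclusion concrete: the idea is roughly that an estimate of how changeable ("volatile") the adviser is sets how quickly you revise your belief in the advice's accuracy. Below is a minimal Python sketch of that idea only; it is not the speaker's published model, and the parameter names, values and update rules are simplified assumptions for illustration.

      import numpy as np

      def update_beliefs(advice_outcomes, kappa=1.0, omega=-2.0, theta=0.5):
          # Toy, volatility-weighted belief update (illustrative only).
          mu1 = 0.5   # estimated probability that the advice is accurate
          mu2 = 0.0   # rough estimate of how changeable the adviser is
          trace = []
          for o in advice_outcomes:   # o = 1 if the advice was correct, else 0
              # Learning rate grows with the current volatility estimate.
              lr = 1.0 / (1.0 + np.exp(-(kappa * mu2 + omega)))
              pe = o - mu1            # prediction error about advice accuracy
              mu1 = float(np.clip(mu1 + lr * pe, 0.01, 0.99))
              mu2 = mu2 + theta * (abs(pe) - 0.5)   # big surprises push volatility up
              trace.append((mu1, mu2, lr))
          return trace

      # A hypothetical adviser: helpful for 10 trials, then misleading for 10.
      outcomes = [1] * 10 + [0] * 10
      for t, (mu1, mu2, lr) in enumerate(update_beliefs(outcomes), start=1):
          print(f"trial {t:2d}: P(accurate)={mu1:.2f}  volatility={mu2:.2f}  lr={lr:.2f}")

    Once the adviser's behaviour switches, the surprises grow, the volatility estimate rises, and beliefs about advice accuracy are revised faster, which is the flavour of points (ii) and (iii) in the quote.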

  • @heromaiker4414
    @heromaiker4414 3 years ago +2

    I would love to see a video explaining this model in detail

  • @alinad3038
    @alinad3038 4 years ago +1

    Love it! Nicely summarized a difficult topic and its outstanding potential! Inspiring!

  • @ArtVandelay99
    @ArtVandelay99 8 years ago +3

    What a pleasure it is to see such a good presentation given by a scientist who is (i) Romanian, (ii) female, and (iii) has such a natural attitude, free of any "airs". The most salient observation, speaking as a man, I will not make, so as not to distract from the more serious observation above ;)

    • @mrjores
      @mrjores 8 years ago

      I think you already made it!

  • @atthehops
    @atthehops 8 years ago +3

    I'm confused by this talk and the research. While the talk mentions that the parameters here vary from person to person (4:36, "internal representations"), it fails to supply a context: do we represent everyone the same, or do we represent each individual separately, and if so, how? There appear to be two algorithms at work, perhaps, with the first informing the second.
    But there may also be two distinct models: one where all individuals are perceived and represented as the same, and a second where each individual is considered separately (see the sketch below for the contrast).
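
    One way to make that contrast concrete (a toy sketch only; the class names, the trust variable and the update rule are invented for illustration, not taken from the talk): a "shared" model keeps one belief about people in general, while an "individual" model keeps a belief per person, initialised from a shared prior so that the first can inform the second.

      from collections import defaultdict

      class SharedModel:
          # One trust estimate applied to everyone (everyone represented the same).
          def __init__(self, prior=0.5):
              self.trust = prior
          def update(self, person, outcome, lr=0.1):
              self.trust += lr * (outcome - self.trust)
          def predict(self, person):
              return self.trust

      class IndividualModel:
          # A separate trust estimate per person, each starting from a shared
          # prior, so a general model of "people" informs each individual model.
          def __init__(self, prior=0.5):
              self.trust = defaultdict(lambda: prior)
          def update(self, person, outcome, lr=0.1):
              self.trust[person] += lr * (outcome - self.trust[person])
          def predict(self, person):
              return self.trust[person]

      # After good advice from Ann and bad advice from Bob, only the
      # individual model ends up distinguishing between them.
      for model in (SharedModel(), IndividualModel()):
          for person, outcome in [("Ann", 1), ("Bob", 0), ("Ann", 1), ("Bob", 0)]:
              model.update(person, outcome)
          print(type(model).__name__, round(model.predict("Ann"), 2), round(model.predict("Bob"), 2))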

    • @helixalgorithm3160
      @helixalgorithm3160 7 years ago

      Those symbols and graphs looked hazy to me too, and the cross-section of that patient's brain struck me as an empty nutshell.
      However, we humans are limited. Our intellect relies on the well-being of our body, so it must take its own identity as the standard representation of others. For adaptive reasons, this identity makes us bond with similar people, because their actions are the most common and transparent to us. As it goes for normal folks, it also works this way for criminals: they perceive other criminals as safe and transparent people.
      The bottom line is that we lack the resources for discriminating perception (obviously not in the populist sense of the word). Would it be peaceful if 8,000,000,000 people lived inside your head? Even starting with two distinct ideas can already be a mess. So individuals are filtered by their resemblance to our dominant representation of a person. The filter consists of neural nets with billions of local and hundreds of global parameters. Between the two extremes of representation models there are billions of spectral lines.

  • @Kryonsmommy
    @Kryonsmommy 4 years ago

    9:37

  • @TheNutCollector
    @TheNutCollector 2 years ago

    This talk was confusing and I checked out halfway through. The only thing I did catch was that she miscategorized autism. It is a neurological disorder, not a psychiatric disorder.

  • @TokyoShemp
    @TokyoShemp 4 years ago

    Vast numbers of people somehow get good grades and then become tools.

  • @wulphstein
    @wulphstein 5 years ago

    A theory of mind so worthless, they can't predict human behavior, make us happy, or build a robot that can do dishes. Worthless!

    • @zciliyafilms5508
      @zciliyafilms5508 2 years ago

      Can't really blame them if no one has made any large scale attempt to implement their ideas.

    • @borntodoit8744
      @borntodoit8744 1 year ago

      UPDATE 05 May 2023:
      Four years ago the THEORY OF MIND was just that: a theory.
      Today (2023) ToM is being implemented using AI reasoning models.
      ChatGPT was the first to go viral (released Nov 2022 by OpenAI).
      Bard was released by Google.
      Many more are coming online every day, with free and paid open APIs.
      AIs create domain-specific embedded knowledge.
      AGIs are created by daisy-chaining domain AIs into a general intelligence.
      There's also OpenAssistant, which avoids the overhead of training AI on static data by training it on dynamic data (from the web).