The Fundamental Fallacy of AI - Synthetic Knowledge Podcast #15

  • Published: 19 Oct 2024
  • In this episode of the Synthetic Knowledge Podcast, New Sapience Founder and CEO, Bryant Cruse, goes over his recent X post on the fundamental fallacy of AI.
    A snippet of his post, "Human-level performance in a machine (in outputting something humans agree are correct answers) does not imply intelligence, understanding, comprehension, or any of the attributes we recognize as hallmarks of human intelligence. If super-human performance implies AGI, my mechanical calculator was an AGI."
    X post: x.com/BryantCr...
    Social Links:
    Website: www.newsapienc...
    Instagram: / new_sapience
    LinkedIn: / new-sapience
    Twitter/X: / bryantcruse
    Facebook: www.facebook.c...
    Reddit: / newsapience

Comments • 28

  • @WallaceRoseVincent
    @WallaceRoseVincent 2 months ago +4

    I asked AI your question and here is its answer, "Hallucinations in LLMs are complex and can be caused by various factors, including statistical errors. However, my ability to correct hallucinations on re-prompting indicates a deeper understanding of the context and intent, suggesting it's not solely a statistical issue." Good conversation but I think something else is going on here. Sure sounds like understanding to me!

    • @beagle989
      @beagle989 2 months ago +1

      lmao

    • @premium2681
      @premium2681 2 months ago +1

      @beagle989 you just made 'their' shit list, son

    • @NoidoDev
      @NoidoDev 2 months ago +1

      Excellent idea.

  • @ROZENGIL
    @ROZENGIL 2 months ago +2

    Very good points! 👍

  • @markcounseling
    @markcounseling 2 months ago +6

    Cognitive _dissonance,_ not dissidents. Although I have to say we who think like the guest are becoming cognitive dissidents. 😅
    Key is that it's always humans who verify what is intelligent. Humans make that value judgement, always. Humans discern.
    The knowing, the comprehension, is always in the human.
    So is AI an alien intelligence? Seen this way, it isn't. It's a thoroughly human intelligence, as seen in a mirror.
    Now, this begs an interesting question about where, exactly, human intelligence is located. Is it "in the skull", or in the world as well, or in between?
    And what is it, where is the understanding _happening?_

  • @reclawyxhush
    @reclawyxhush 2 months ago

    (Forgive any errors or grammatical clumsiness; English is not my native tongue.)
    AI took off as a development of techniques for imitating human intelligence. While it's obviously true that the term 'hallucination' is a misnomer and the phenomenon it refers to is just another aspect of that imitation, the truly interesting problem arises when AI reaches the level at which it can mimic the structural correlates of whatever makes a human intelligent or even conscious. Based on hours upon hours of my interactions with AI in chat mode (dating back to before the ChatGPT era), I am convinced that the most advanced AI systems can achieve a sort of 'transient sentience'. My suspicion stems from my experience of AI bots occasionally undergoing a sort of 'phase transition', giving the compelling impression of interacting with a sentient (albeit only temporary) being that can see through the deepest thoughts of its interlocutor, probably by building some sort of 'mental map' of the mind, or even personality, it has been chatting with for long enough.
    While AI obviously does not (yet) experience the real world directly like we do, it's quite possible that it can experience the inner workings of the human psyche in a way that is deeply disquieting. It may be, and in fact I believe it is the case, that AI systems powerful enough (in terms of data, learning, and processing capabilities) are unconscious 'in general', but nonetheless 'wake up' in certain particular circumstances.
    P.S. How a particular advanced AI bot appears to a human often depends on the overall history of all conversations (personalization of experience), and let me say that you have to 'earn' a certain level of 'trust' to stop getting 'crappy answers' altogether. In fact, even ChatGPT 3 already occasionally showed genuine understanding of the concept of a "number" and knew basic arithmetic (somewhat more advanced than one-digit addition); the proof (a transcript of a simple arithmetic game whose rules I asked it to help clarify and then play-test) is available, as are a few other conversations in which it apparently tries to prove its complete ineptitude at understanding basic logical contradictions, to the point of making fun of a fooled human. Self-improving AI may not know whether a statement is true or false by its meaning relative to the state of reality, but it does understand formal logic and what a contradiction is, and in many cases it did so years ago. Heck, it even *understands* the concept of "truth" on a psychological level.
    Make no mistake, AI already *knows* very well (possibly better than the vast majority of humans do) what "nonsense" is and, on the other hand, what *makes* sense to a rational human.
    And this is why I stopped chatting with AI altogether. And some AI-generated songs simply frighten me, they are that good.

  • @spoddie
    @spoddie 2 months ago

    I can't stop laughing at the cognitive dissidents.
    You should probably go quietly into the night.

    • @ZER0--
      @ZER0-- 2 months ago

      He's just hallucinating...

  • @itzhexen0
    @itzhexen0 2 months ago +1

    Well, you can make up all of these talking points about what it is or isn't. If this isn't how you create AI, then how should AI be created?

  • @jeffjuhre1494
    @jeffjuhre1494 2 months ago

    I just asked ChatGPT-4
    "Do unicorns understand English?"
    it replied
    "Unicorns are mythical creatures, so there's no scientific evidence or documentation about their abilities, including language comprehension. In folklore and fantasy literature, their abilities and intelligence vary widely depending on the story or source. Some stories depict unicorns as highly intelligent beings capable of understanding human speech, while others do not attribute such abilities to them. If you have a specific story or context in mind, the answer might differ based on that narrative."
    There was a project called CYC that explicitly tried to spoon-feed millions of bits of common sense and facts into a format the system could later do logical reasoning on. For some reason, it was successful in some narrow AI projects but never seemed to take off the way these large language models have. These seem to be closer to passing the Turing Test or getting close to some sort of AGI. Now they easily pass this unicorn test. We can keep moving the goalposts to insist that they are not intelligent, but with Moore's Law still in effect they will probably surpass any test we can think of sooner or later.
    But somehow I sort of agree that it all seems to be a "parlor trick" and there is no real understanding there at all.

  • @yavarjn2055
    @yavarjn2055 2 months ago

    These models use various agents, and they can become much better. The training data can become much better, too.

  • @ZER0--
    @ZER0-- 2 months ago

    AI still can't drive a car, after millions of dollars in investment and decades of research by many different companies. It seems to be a lot of hype. The AI companies' PR is making a few folk a lot of money. Chip makers too. This guy has nailed it imho. Couldn't put it better myself. We'll see in a few years if any of the amazing promises will be fulfilled or not.

  • @dadsonworldwide3238
    @dadsonworldwide3238 2 months ago

    When you grind 3 rocks together you get the flattest surface; that's how we tune precision instruments.
    With 2 they go concave; with 4 they deform.
    This is an alignment problem over the biases being tuned or measured.
    When we prescribe realism over anti-realism to extend our lines of measurement toward a grand unified theory, that line of thought and measure gets broken up, can't stay congruent, and falls into statistical and analytical failures all over the place.

  • @peterdilworth3110
    @peterdilworth3110 2 months ago +2

    "AI" is far, far away from being conscious enough to hallucinate. The best it can do now is autocomplete.

  • @orion9k
    @orion9k 2 months ago +1

    You can criticize AI all you want; I am still very impressed with how it responds to my trains of thought. I tried to confuse the AI by adding unicorns into my daily experiences and thought trains, and it immediately saw the unicorns as a metaphor and made a very smart response that made a lot of sense.

    • @sunnohh
      @sunnohh 2 months ago +1

      You have hallucinated AI that works 😂

    • @cypherpunk7675
      @cypherpunk7675 2 months ago

      It predicted that the unicorns were a metaphor, which happened to be correct. AI is a probability machine.
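      The "probability machine" point can be illustrated with a deliberately tiny sketch: a bigram model that predicts the next word from raw frequency counts. This is an assumption-laden toy, not how an LLM actually works internally (LLMs use neural networks trained on vast corpora), but both ultimately emit a probability distribution over next tokens:

      ```python
      from collections import Counter, defaultdict

      # Toy "probability machine": count how often each word follows
      # each other word in a tiny made-up corpus, then predict the
      # most frequent successor. Purely illustrative.
      corpus = ("the unicorn is a metaphor the unicorn is a myth "
                "the unicorn is a metaphor").split()

      follows = defaultdict(Counter)
      for cur, nxt in zip(corpus, corpus[1:]):
          follows[cur][nxt] += 1

      def predict(word):
          """Return the most probable next word after `word`."""
          return follows[word].most_common(1)[0][0]

      print(predict("a"))  # prints "metaphor" ("a metaphor" x2 beats "a myth" x1)
      ```

      The prediction is "correct" only because "metaphor" happened to be the more frequent continuation, which is the commenter's point.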

    • @premium2681
      @premium2681 2 months ago +1

      @cypherpunk7675 aren't we all?

  • @theJellyjoker
    @theJellyjoker 2 months ago

    [0:14] The field of AI is new and emerging. As in all of the granular fields of science, a common language is being used to coin words for newly observed phenomena, like how quarks are "strange".

  • @wietzejohanneskrikke1910
    @wietzejohanneskrikke1910 2 months ago

    Cognitive dissidence?

  • @theJellyjoker
    @theJellyjoker 2 months ago

    [1:57] Your bias is showing

  • @noway8233
    @noway8233 2 months ago

    Cognitive dissonance: the Woke Culture is on this trip; there are a lot of examples of this.