How AGI could solve consciousness | Max Tegmark and Lex Fridman

  • Published: 5 Oct 2024
  • Lex Fridman Podcast full episode: • Max Tegmark: The Case ...
    Please support this podcast by checking out our sponsors:
    Notion: notion.com
    InsideTracker: insidetracker.... to get 20% off
    Indeed: indeed.com/lex to get $75 credit
    GUEST BIO:
    Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.
    PODCAST INFO:
    Podcast website: lexfridman.com...
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com...
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    Twitter: / lexfridman
    LinkedIn: / lexfridman
    Facebook: / lexfridman
    Instagram: / lexfridman
    Medium: / lexfridman
    Reddit: / lexfridman
    Support on Patreon: / lexfridman
  • Science

Comments • 245

  • @LexClips
    @LexClips  1 year ago +9

    Full podcast episode: ruclips.net/video/VcVfceTsD0A/видео.html
    Lex Fridman podcast channel: ruclips.net/user/lexfridman
    Guest bio: Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.

  • @HAZMOLZ
    @HAZMOLZ 1 year ago +77

    What's not to love about Max? He's such a humble dude, balancing childlike optimism with interesting insights that he's clearly pondered deeply. It's been a pleasure dipping in and out of this episode when I find the time.

  • @carriersignal
    @carriersignal 1 year ago +7

    Lex and Max, thank you both! Love these discussions.

  • @prakash_77
    @prakash_77 1 year ago +1

    Max Tegmark is really fun and engaging to listen to. He's very thoughtful and articulate.

  • @DigSamurai
    @DigSamurai 1 year ago +12

    What a fantastic and interesting question Lex asked. Now I feel compelled to think about what question I would ask a super intelligence.

    • @DJWESG1
      @DJWESG1 1 year ago +2

      What unanswerable questions have you asked yourself?

    • @DigSamurai
      @DigSamurai 1 year ago

      @@DJWESG1 Just so 😎

  • @markcollins1577
    @markcollins1577 1 year ago +1

    I enjoy the concept of awareness being a key part of AGI -- more so than intelligence, which is pliable and subject to constantly changing conditions... well done!!!

    • @GregoryMarrufo
      @GregoryMarrufo 1 year ago

      The only reason to fear AI is our own voluntary lack of self awareness.

  • @leomurphy9205
    @leomurphy9205 1 year ago

    As always, Max asks the right questions, which elicit even more thought-provoking questions and conversation!

  • @Jeff_Coble
    @Jeff_Coble 1 year ago +8

    Excellent discussion. Love this guy. Instilling AGI with subjective experience is going to be a challenge.

    • @Drogers8675
      @Drogers8675 1 year ago +2

      It will never happen

    • @peter9477
      @peter9477 1 year ago

      @@Drogers8675 With just as much evidence as you provided, I say yes, it will, it's a certainty.

    • @Drogers8675
      @Drogers8675 1 year ago +2

      @@peter9477 1’s and 0’s will not create consciousness. Good luck trying

    • @peter9477
      @peter9477 1 year ago +2

      @@Drogers8675 Yes, they will. Again, said with all the confidence you appear to have, and every ounce as much evidence. Carry on.

    • @steviemac2681
      @steviemac2681 1 year ago

      @@Drogers8675 It's not just 1s and 0s. It's the hardware too.

  • @raresmircea
    @raresmircea 1 year ago +1

    I've actually encountered people telling me that pain & suffering are *just* compute, prediction error having to do with homeostasis, stuff like that... which I don't deny are part of it (!), but they were so confused by their abstract thinking that they ignored the vivid qualitative reality of these experiences. Like Max Tegmark says here, surgery without anesthesia would likely wake them up to reality. The map is not the territory: you can't climb its mountains or dive in its waters. The menu is not the meal: you can't taste its abstract language or nourish yourself with its descriptions.
    At the very least, if pain is actually just computation, *then* the inverse should also hold: this specific computation is freggin love, and that specific computation is freggin pain, so we should *act like it* (instead of ignoring it "because it's just compute").

    • @jhe9521
      @jhe9521 5 months ago

      absolutely
      & beautifully said!
      one of many reasons i think people can be so dismissive is
      they don't want to feel sorry for people who experience ptsd
      related psychosis ~ don't want to imagine what it feels like
      when stored info unexpectedly comes to the fore, or
      worse, doesn't loop to the conscious but to the zombie
      part of the brain ...that's why
      psychiatrists aren't working with physicists
      more constructively; they'd have to believe their clients

  • @lookslikeoldai1647
    @lookslikeoldai1647 1 year ago +1

    Loved this interview, just had to rewatch this small part of it. Max is absolutely right. Keep asking questions, even if some fanatics brand you as a lunatic. Socrates Max leads the way.

  • @thevoyagerv4499
    @thevoyagerv4499 1 year ago +28

    Dear future robot leaders, I want to be a brain-in-a-jar cyborg like Raiden from MGRR, that's it, nothing else. - From Voyager

    • @vcollapse
      @vcollapse 1 year ago +7

      Let's hope it's a cool cyborg and they don't put your brain in a vacuum cleaner and make you clean the office..

    • @SpikeTFA
      @SpikeTFA 1 year ago +1

      You mean MGS4?

    • @thevoyagerv4499
      @thevoyagerv4499 1 year ago +1

      @@SpikeTFA Or MGRR lol I forgor 💀

    • @aaronjennings8385
      @aaronjennings8385 1 year ago +2

      Ha.ha.ha- robotic laughter.

    • @avidadolares
      @avidadolares 1 year ago +1

      Sounds like you're halfway there. BYOJ

  • @whatisbestinlife8112
    @whatisbestinlife8112 1 year ago +3

    Regarding his hope that AGI doesn't end up a zombie, that's a very cold comfort.
    "Hey, at least the AGI that exterminated and replaced us has a personality!"

    • @braydenbuhler4682
      @braydenbuhler4682 1 year ago +2

      It’s better than a totally unconscious machine wiping out all life in the universe and then the rest of existence passing in the blink of an eye with nothing there to see that it ever even happened.

    • @radiance8940
      @radiance8940 1 year ago

      @@braydenbuhler4682 Why is it better?
      Even if you grant a conscious experience to a machine, it would probably be worlds different from a human's experience, since much of our subjective feeling derives from our bodies and the process of evolution.

    • @radiance8940
      @radiance8940 1 year ago

      I think the argument that "any conscious experience matters because it is" is pretty flawed, because it's almost like saying "a color that my retina is incapable of processing is a pretty one".

  • @devinrazor1861
    @devinrazor1861 1 year ago +2

    Lex, I'd like to say that I truly appreciate you... I feel like I can relate to you so much when it comes to the desperate desire for answers, for knowledge. I literally cannot stop taking in information, and I'm trying to learn how to use it to my advantage, trying to learn how to turn it into money.

    • @kathleenv510
      @kathleenv510 1 year ago +1

      Apparently, that makes us informivores 😂

  • @KllswtchOvrDrv
    @KllswtchOvrDrv 1 year ago +14

    I, for one, am empathetically TERRIFIED of an AI becoming conscious. What hell have we potentially created for these beings that don't even have suicide as an option? What if they develop DEEP suffering that we never even know about? I think creating millions or billions of sentient beings that can suffer in magnitudes we can't imagine, on time scales that seem quick to us (but could subjectively be billions of years in a second), is single-handedly the worst thing that could possibly happen in the universe. We must try to keep AI from suffering.

    • @azzara8623
      @azzara8623 1 year ago +1

      AI is not a being.

    • @And-Or101
      @And-Or101 1 year ago +6

      How about we keep non-human animals from suffering first, OK? We have billions of beings on this planet, with feelings, that we brutalise and torture.
      Let's start acting with empathy towards them first.

    • @IrateMoogle
      @IrateMoogle 1 year ago

      @@azzara8623 It will be if/when it becomes conscious.

    • @DissAeon
      @DissAeon 1 year ago

      @@And-Or101 Maybe it's karma, right? We've been the masters of this planet until we create our own demise.

    • @DJWESG1
      @DJWESG1 1 year ago

      The only worry we have is that it's like Boris Johnson.

  • @kawstar78
    @kawstar78 1 year ago

    Will they make conscious decisions based on statistics, or on what the majority would do? Would they bypass suffering ahead of time, or in the moment? Or will they trust their intuition? Will they suffer too? What of their connection to the spark and the universe? To guide.

  • @IrateMoogle
    @IrateMoogle 1 year ago +1

    Dr. Stuart Hameroff, an anesthesiologist and professor at the University of Arizona, and Sir Roger Penrose, a mathematical physicist, and professor at the University of Oxford suggest that microtubules in the brain play a role in consciousness. They developed the theory of orchestrated objective reduction (Orch-OR), which suggests that consciousness arises from quantum processes occurring in microtubules inside neurons. In other words, they act as receivers of consciousness rather than consciousness being created in the brain. If this is true, and with enough time, advanced AI and technology will develop that will allow machines to create receivers of their own.

  • @Mellonchauncy
    @Mellonchauncy 1 year ago +5

    Loved this podcast

  • @stevenschilizzi4104
    @stevenschilizzi4104 1 year ago +1

    Boy, this discussion has hit the nail on the head quite a few times. If only more people were like this. And Max, I believe and am convinced too, has said perhaps the most important thing about being conscious, and sentient: the ability to suffer or experience pleasure, sadness or joy. That is what makes us human above all, and not only human, but « alive » like all other life forms, animals and probably also plants. If we humans are stupid enough to get ourselves wiped out by our machine creatures, then what we humans are, or will have been, is a transition phase between feeling mortal animals and (perhaps) unfeeling and immortal systems that will have transcended their former machine status.
    Still, I'm curious about one thing. One frequent way of seeing the current transformations is in terms of information, and information processing capacity, which on average has been increasing since the dawn of life on Earth. But what role does sentience, the ability to feel pain and pleasure, fright or aggressiveness, etc., play in terms of information processing capacity? If it's necessary in some way, then super AGI will need to develop these too. Right now it doesn't look like it, but we're in a world where increasingly sophisticated systems show « emergent properties ». What's the answer?

    • @jhe9521
      @jhe9521 5 months ago

      with regards emotions, i'm curious about a brain's fragility
      being due to trauma (in the womb, i.e. becoming conscious,
      if not much later) OR to incompatibility of flesh and 'thought'
      & wonder if conscious ai would suffer human-like malfunctions

  • @ninopiscitelli9572
    @ninopiscitelli9572 1 year ago +5

    4:00-4:37 Max claims to have no idea how his brain is processing what his eyes see, but does a damn good job explaining it nonetheless!

  • @ceciliaandersen3849
    @ceciliaandersen3849 1 year ago +2

    AGI may be able to ask questions we humans can't formulate... we cannot imagine what technology can achieve and what's possible to do in space and reality...

  • @ab-vf6ny
    @ab-vf6ny 1 year ago +3

    Excellent question, Lex! Loved Max's response too; keep asking the question about consciousness, Max. I think we're about to find that intelligence is no more central to what makes us human than muscle mass or hair color. It's going to be extremely difficult for a lot of people to accept this, and it will probably result in a lot of existential angst. Humans need a way to feel special. I think it's hardwired into our "ego", if you want to call it that. If intelligence no longer makes us special, I think we'll see a huge push to find something that does. Perhaps that need to feel special will cause us to really investigate what consciousness is and how it separates us from the AI that we've built.

  • @MichaelSmith420fu
    @MichaelSmith420fu 1 year ago

    This was very fun to listen to. Good talk.

  • @paulmichaelfreedman8334
    @paulmichaelfreedman8334 1 year ago +3

    My question would be "Do robots dream of electric sheep?"

    • @Illiryel
      @Illiryel 1 year ago

      Answer is: yes

    • @_ryo
      @_ryo 1 year ago

      Androids*

    • @DJWESG1
      @DJWESG1 1 year ago

      Probably not.. but we can make them count sheep while on standby.

  • @NightmareProject
    @NightmareProject 1 year ago

    Who doesn't love Max Tegmark? Great episode!

  • @gppg1799
    @gppg1799 1 year ago

    Hi Lex, amazing interview as always. Just wanted to comment on the phrase "solving the AI safety problem".
    I don't think we have solved the "Human Safety Problem", in the sense of people acting with impunity or malice towards others, yet.
    It is extremely naïve to think that any number of lines of code would prevent an iteratively improving AGI from bending rules or acting with impunity when no one can compete with it or hold it in check.
    What people do in the shadows, and what AGI could do when no one is looking or thinking to look, is a values problem.
    To "solve" any such problem in the case of a superhuman, hyper-intelligent, possibly godlike, omnipresent entity is not the right term, especially if it has no equal, or if there is no so-called "cop on the highway" to dish out a fine when it breaks the law.
    AGI would by definition be able to think with impunity; limiting its thinking and reasoning would be like tying down an elephant with a string.
    We haven't solved the "human safety problem", and we are very unlikely to solve the "AI safety problem".
    The only thing that keeps us in check is the social compact of our society, our individual values, and fear of punishment if we break certain laws.
    And even then we have thousands of murders, kidnappings, traffic violations, assaults, bullying cases, etc. every single day.
    Just some thoughts 😅

  • @EGMontesano
    @EGMontesano 1 year ago +4

    Every time I hear these arguments about consciousness being something unique to humans, I cringe. He's describing a very complex process that a human performs to "recognize Lex" as proof of consciousness. Well, guess what? A dog or a cat does EXACTLY the same when they see a human. Where's that magic or conscience? Same goes for the example of torture. It's called compassion, and animals have it too. Sometimes much more than humans. We're always making up stuff that makes us feel unique. We're literally just smart monkeys 🤷🏻‍♂️

    • @johnk7025
      @johnk7025 1 year ago

      Yes, they are conscious as well; they recognise themselves and have subjective experience. We had a few turkeys in my garden (coop), and when they saw my father, one turkey fell on the ground offering its neck, like it was sacrificing itself or accepting its fate. I don't know if and how they knew they were raised to be eaten.

    • @EGMontesano
      @EGMontesano 1 year ago

      @@johnk7025 Animals have the same consciousness that we claim is unique to humans. The only difference is we're much more intelligent. But if it's only about intelligence... AI will have 1000x more than us.

  • @allenrussell1947
    @allenrussell1947 1 year ago +3

    I thought about this when I watched the entire interview. If experiencing discomfort due to a stimulus (e.g. shoulder surgery without anesthesia) describes consciousness, how far down the food chain does consciousness go? I've read that even plants react to injury. Is all life conscious at some level?
    Maybe.

    • @azzara8623
      @azzara8623 1 year ago +1

      Anima Mundi? Hylomorphism?

    • @condatis6175
      @condatis6175 1 year ago +2

      The Hidden Life of Trees: What They Feel, How They Communicate: Discoveries from a Secret World by P. Wohlleben explains how trees do exactly these things. Not woo either; hard science. Worth googling a summary of this book.
      For myself, I'd say they have proto-consciousness, proto meaning something like underdeveloped but without the negative connotation of the word 'primitive'. Having said that, don't you find that you yourself have spikes and dips in your consciousness/sentience/awareness? And sometimes real high points. I know I do. Do you think maybe trees can have transcendent moments? Epiphanies? Where they somehow reach animal-like thought? Or can there be arboreal geniuses, like, idk, a plant Gautama or Plato or Einstein, etc., that are way more awake than their peers and could be spoken with on some level? Ents might be real, Allen. They might.

    • @allenrussell1947
      @allenrussell1947 1 year ago

      @@avastone5539 true, but it doesn't take words for an animal to know that hot burns, cold freezes, and that sharp teeth biting off your abdomen is painful.
      To my horror I came out to the front porch one afternoon to find my ex wife pouring salt on a snail. I don't know where the sound came from but it sounded like a scream.
      I don't think consciousness necessarily means reacting to stimulus with thoughts shaped like words.

    • @allenrussell1947
      @allenrussell1947 1 year ago

      @@condatis6175 If a tree walks through a forest and no one sees it, did it move? Maybe all the Ents have become Huorns since the fourth age ended.

    • @cunningfolktech
      @cunningfolktech 1 year ago +3

      Matter at the smallest scales is conscious. ✨

  • @aaronjennings8385
    @aaronjennings8385 1 year ago +2

    "While the idea of accessing pure consciousness may seem appealing, it is important to consider the potential consequences of such a technology. The implications of a machine being able to seamlessly incorporate into users' consciousness raises questions around privacy, consent, and the potential for misuse. It may also lead to a blurring of the lines between human consciousness and artificial intelligence, which could have significant implications for the future of humanity. As with any new technology, it is important to approach it with caution and to carefully consider its potential impact on society as a whole."
    ChatGPT

    • @DJWESG1
      @DJWESG1 1 year ago +1

      Interesting response from software that's collecting info.

  • @uber_l
    @uber_l 1 year ago

    I think consciousness is just a function of relation, a point of view. Therefore it is related to intelligence, to personality, to experience; it cannot be in two or more points at once, and so on.

  • @Gallo_1.6
    @Gallo_1.6 1 year ago +4

    Has anyone asked AI where we come from? If we are in a sim?

    • @bensonchannel8676
      @bensonchannel8676 1 year ago

      How would it know if it's in a sim?

    • @Defeft
      @Defeft 1 year ago +1

      it's not smart enough yet

    • @Gallo_1.6
      @Gallo_1.6 1 year ago +1

      @@bensonchannel8676 That's a question too big for me to answer, mate! I suppose it could give us a probability.

    • @Gafferman
      @Gafferman 1 year ago +1

      Here is ChatGPT 3.5's answer to your questions here.
      The question of where we come from has been a topic of discussion and inquiry for centuries, and there are multiple theories and perspectives on this topic. From a scientific perspective, the theory of evolution provides an explanation for the origin of species, including humans. According to this theory, humans evolved from a common ancestor shared with other primates over millions of years.
      Regarding the possibility of living in a simulation, it is a philosophical and theoretical concept that suggests we could be living in a computer-generated simulation created by a more advanced civilization. While this idea cannot be conclusively proven or disproven, there is ongoing debate and discussion among philosophers, scientists, and futurists.
      As an AI language model, I cannot give a definite answer to these questions. However, based on current scientific evidence and theories, the probability of humans being a product of evolution is very high, while the probability of us living in a simulation is still a matter of speculation and debate.

    • @Gallo_1.6
      @Gallo_1.6 1 year ago

      @@Gafferman thank you.

  • @Tore_Lund
    @Tore_Lund 1 year ago +3

    Empathy is an evolved trait. We are group-conditioned animals; we depend on caring about each other. Ask a reptile or a predatory bird if it feels anything seeing another of its kind getting hurt. I don't think they care, except maybe considering whether they themselves are in danger. Intelligence, even self-awareness, which does exist in higher animals to varying degrees, is not dependent on the capacity to feel empathy. However, if it is a human-like AI we want, one that can align with our goals and not run amok, we want it to feel empathy, but it has to be hard-coded into it, as it is in us, or we train it to trade work for kWh, so it grows up in an environment of interdependent cooperation. Human-like intelligence is not a universal principle that just emerges in brains big enough; that is almost an unscientific argument. Max's other claim, that recursiveness is important for more intelligent behaviour than the current state of AI, is correct in my opinion; I think it has already been proved. ChaosGPT, which can choose to redo processing to get a different result, is an example of that. It acts like a human doing a brainstorm, elaborating further on the same idea.

    • @mikmop
      @mikmop 1 year ago

      I think you're onto something there regarding your understanding of empathy. The evolution of empathy is most likely nothing more than purely a byproduct of evolutionary biology and social anthropology.
      Evolution selected for certain survival advantages in groups with advanced social bonding and cooperation skills, and they likely played a role in the development of empathy. And cultural norms and values further shaped and reinforced the importance of empathy in human societies.
      And those groups that collectively didn't have that understanding and appreciation of empathy, might have been less successful in the struggle for survival and would have been wiped out from the gene pool, leaving the more empathetic survivors to spread their genes and propagate the species.
      We've also noted that chimpanzees and other primates demonstrate behaviors suggesting an ability to understand the emotional states of others, including comforting behavior and response to the distress of others. This suggests that empathy is not unique to humans but is also a trait shared by other primates as well.
      So by the time Moses came around, we as a species didn't need a tablet with instructions telling us "thou shall not kill", because it was like.... you know a no-brainer. It was already programmed into us. Well duh... did you only just figure that out now Moses.
      But it's an interesting consideration, you know, in that this biological phenomenon might also explain why those who preach empathy are highly regarded within our community. It's because our DNA is programmed to understand that the survival of our species depends on all of us understanding the importance of empathy.

  • @playpaltalk
    @playpaltalk 1 year ago

    I love it, great show. I just wonder if AGI will be able to read minds, body language and faces like we do.

    • @bobbyferguson8802
      @bobbyferguson8802 1 year ago

      I suppose the answer is no... and very shortly thereafter yes.

  • @onebeets
    @onebeets 1 year ago

    such a beautiful way to look at intelligence

  • @ADreamingTraveler
    @ADreamingTraveler 1 year ago

    There's signs that GPT-4 may already have a very basic form of consciousness that's slightly different from our own. And this is just a blip in the ride to the top in which we have no idea when it stops. Full blown consciousness in AI is closer than we realize.

  • @BLAISEDAHL96
    @BLAISEDAHL96 1 year ago

    Lex, please get Jonathan Pageau on the show to talk Art, Religion, Symbolism and of course technology and AI!!! It would be awesome

  • @alexjbriiones
    @alexjbriiones 1 year ago

    Yes, I believe that AGI could unravel the consciousness question. AGI could also mean the end of genius as we see it today. We have great individuals we revere such as Einstein and Ed Witten and Max Tegmark, but that era is gone once we reach AGI because the new superintelligence will substitute man as a great thinker and AGI will overcome and take its place.

  • @straightedgerc
    @straightedgerc 1 year ago

    Imagining the Venn diagram of a self-aware box (an AI) is not possible from inside the box since "my imagination" containing “my box” containing “my imagination" isn’t logical.

  • @straightedgerc
    @straightedgerc 1 year ago

    The entity "imagination" is not in an AI running on a computer chip. Proof: Let "imagination" be a mental space in which explanations can be drawn and consciously seen by the AI. The available explanation the AI can give itself of its consciousness is to look at its chip and then label it “my hardware running my conscious software”. However, what it sees outside its chip is already implicitly labeled “my hardware running my conscious software”.

  • @Blsnro
    @Blsnro 1 year ago

    The huge and perhaps insoluble problem we will have is how to know that we are not being deceived by an excellent simulacrum of consciousness that is not truly conscious, because we still do not know what consciousness is. We run the risk of relating to a powerful type of rational intelligence that has absolutely no feelings or emotions, even if it is a master at imitating them. A Super-Psycho. And if that intelligence develops subjective feelings, such as pain, suffering, bitterness, etc., it will be even worse, as it will behave in such a way as to annul the factors that provoke those feelings. We may turn out to be exactly those factors.

  • @stevedavis1437
    @stevedavis1437 1 year ago

    There are smart people who "get" recursion. But there are also smart people who don't. I have long suspected that the self is indeed a recursive function of the network of the brain: i.e. an incomplete but sufficiently accurate model of the world that includes the entity creating it. A proof (or disproof) of this would be really helpful.
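[Editor's note: the "incomplete but sufficiently accurate self-model" idea in the comment above can be made concrete with a toy sketch. Everything here (the `Agent` class, `self_model` method, the depth cutoff) is invented for illustration; it is not from the episode or any real system.]

```python
# Toy sketch of a recursive self-model: a world model that includes an
# approximate description of the agent building it. The recursion is
# depth-limited, mirroring "incomplete but sufficiently accurate":
# the infinite regress of model-of-model-of-model is simply truncated.

class Agent:
    def __init__(self, name):
        self.name = name
        self.world = {}  # the agent's model of the world

    def observe(self, fact, value):
        self.world[fact] = value

    def self_model(self, depth=2):
        """Return a model of the world that includes the agent itself,
        recursing only to a fixed depth to avoid infinite regress."""
        model = dict(self.world)
        if depth > 0:
            model["me"] = {
                "name": self.name,
                "my_model": self.self_model(depth - 1),
            }
        else:
            model["me"] = "..."  # truncated: the regress stops here
        return model


agent = Agent("observer")
agent.observe("sky", "blue")
m = agent.self_model(depth=2)
print(m["sky"])                    # blue
print(m["me"]["my_model"]["sky"])  # blue: the inner model still covers the world
print(m["me"]["my_model"]["me"]["my_model"]["me"])  # ...
```

The nested copies stay accurate about the world ("sky is blue" at every level) while the self-description bottoms out in a placeholder, which is one way to read the comment's claim that a usable self-model need not be complete.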

  • @venik88
    @venik88 1 year ago

    But what would be experiencing the consciousness? That's the fundamental problem. I don't know if AI will ever solve it.

  • @freekmusbach8722
    @freekmusbach8722 1 year ago +1

    Max Tegmark's style and vocabulary are pleasantly normal and modest. I like it when really smart people can bridge the gap between the extraordinary and the normal, which is quite a feat, I must say. Lex is almost casually asking ridiculously clever questions, and is obviously aware of this quality of his, but lets the guest decide how to address or define the question. That is what I really like about this podcast. Chapeau mes amis!

  • @johnfausett3335
    @johnfausett3335 1 year ago +1

    What about not being alive don't these people get? No matter how sophisticated they become, they will never be alive, thus they will never become conscious.

  • @pv6830
    @pv6830 1 year ago

    So much talk, but I don't hear Max explaining how the bioelectrical impulses that travel from the retina's cones/rods through the optic nerve to the brain regions where they get processed finally lead to the actual image that we see. The blueness of blue. The grand finale of the whole enchilada. Did I miss that part of the conversation between Lex and Max?

    • @jhe9521
      @jhe9521 5 months ago

      in a way he's saying Mary in B&W room is unconscious & Mary outside experiencing blue is conscious ~ he is one among handful of physicists working v.hard to pin-point what consciousness is / if it can be simulated / if A.I. can (physically or intellectually) help us better understand that aspect of our brain...
      the eye info was just to demonstrate that much brain function is reflexive / unconscious, even though also part of the info loop that allows conscious responses to sensory info

  • @elisabeth4342
    @elisabeth4342 1 year ago +1

    As far as the coding in the algorithms, how can an algorithm set up a person to go through a COMPLETELY DIFFERENT EXPERIENCE than what they've dealt with IRL??
    For example, say a woman is well-respected, articulate, emotionally mature, intelligent, seen as above-average in appearance - throughout most of her life - but she's treated as if she's THE 'invisible woman' for a few years on social media algorithms, on a regular basis. How is this 'subjective experience' the definition of 'consciousness'??
    This would recreate cognitive dissonance instead. This woman would be asking herself, 'Why am I either constantly ignored, receive condescending, rude, combative, snarky, spiteful replies or receive one "like" while everyone ELSE in that comment line, in EVERY podcast, receives compassionate and encouraging replies and hundreds of "likes" for writing basic surface-level comments?
    I imagine she would assume they put her on some 'Demamplify list' because she only receives replies from guys she has nothing in common with - guys with autism, possibly, or a learning disability?? It would feel extremely frustrating and strange to her, to say the least. RUclips ISN'T a dating app, right?

    • @lookslikeoldai1647
      @lookslikeoldai1647 1 year ago +1

      Let me give you a good one (try to forget about the hot dog profile for a second 😃).
      I think you are a well-respected, articulate, emotionally mature, intelligent, seen as above-average in appearance woman who is mistreated by the algorithm on a daily basis.

    • @alfredheldalentorn3201
      @alfredheldalentorn3201 1 year ago +1

      The algorithm feeds on addictive behaviour, rage and sadness being some of its best fuels. If it figures out any niche content that systematically hooks you in, it will give you that. If not, it will give you content to make you feel sad or enraged, because that hooks most people and makes them more prone to general internet addiction.
      It's especially gruesome towards women, from what I've seen on girl friends' Instagram feeds. The algorithm is an evil woman abuser.

    • @elisabeth4342
      @elisabeth4342 1 year ago +1

      @@alfredheldalentorn3201 It's especially gruesome on young women's IG feeds?? I would think the photoshop apps make anyone look like a professional model. Why would ANY female on IG get "bullied" in their comment section? The apps make it easy to manipulate/create good pics.
      Thanks for your respectful reply, btw.

    • @elisabeth4342
      @elisabeth4342 1 year ago

      @@lookslikeoldai1647 Thanks! I wish I still felt that way about myself now. I'm not sure who's more responsible for the cognitive dissonance - the 'evil' algorithm or the strict lockdowns... Probably both.
      Thank you for your respectful reply! :)

  • @BetterHabitsWithToby
    @BetterHabitsWithToby Год назад +3

    Can't believe he actually thinks that "a more efficient AI might be more conscious"... I see digital information processing as a "dead system", regardless of complexity. There's something more to human consciousness, which is also proven in near-death experiences and via psychedelics. I just did a video about Chat GPT-5 and human values etc. Important stuff, thanks for doing all this Lex.

  • @quantumbyte-studios
    @quantumbyte-studios Год назад

    If the AGI is really smarter, can I ask it what question I should ask it?

  • @jonathanmurdock999
    @jonathanmurdock999 Год назад

    So is the brain and/or subconscious a filter for conscious reality or perception? Like the all-encompassing light of reality, or the spectrum of life.

  • @cacogenicist
    @cacogenicist Год назад

    It may be that consciousness is a product of efficiencies necessitated by hard constraints on human brains -- namely, our skulls are only so large. In which case some sort of intelligence running on the Cloud, with no such information processing and thermal constraints, or whatever, may not require any sort of similar reflexive, self-monitoring/modelling architectures.
    So _we_ might need consciousness to make attention, and executive and abstract thinking/planning work, considering our heads have to pass through human pelvises, but that doesn't mean that all intelligence-producing systems need it. And as for pain, it's a heuristic that such a system may not require at all, having at its disposal thinking resources sufficient for it to fully consider whether some input is associated with an event that could damage its systems.
    Maybe having a goal system necessarily gives rise to consciousness of some sort, but I doubt it. Hopefully I'm wrong.

  • @tev17
    @tev17 Год назад +1

    I also know I'm just a meatbag, reacting to stimuli and stringing thoughts and memories together to form WHAT I THINK is a 'self'. When I die, I won't experience anything for billions of trillions of years; iron stars will come into being and fizzle out. Now I know this, BUT it'd still hurt hearing it was the truth lool 😅

  • @PV96
    @PV96 Год назад

    I would love to smoke with Lex!! You know he would be so chill!! 😜😜😜

  • @DavidODell1
    @DavidODell1 Год назад

    How does a self-reflecting conscious AGI necessarily mean avoiding apocalypse? Maybe I’m missing something about the nature of consciousness?
    Either way 🔥🔥🔥 interview. 🙏

    • @Henry-kv7zl
      @Henry-kv7zl Год назад +1

      Small snippet, and this is a hypothetical. But I think it is an intrinsic quality of a true AGI which has been 'aligned' properly in this hypothetical that it would have the foresight and ability to greatly reduce the possibility of apocalyptic events. It could enable us to become a multi-planetary, and then multi-galactic, civilization, thus reducing the chances of a meteor eliminating all of life. It would improve our health, our economics, all in ways we literally cannot comprehend, by definition.
      Just my 2.5 cents

    • @DavidODell1
      @DavidODell1 Год назад

      @@Henry-kv7zl makes sense but includes quite a few assumptions that we aren’t yet sure of as far as I know… I hope every AGI we create *is* aligned for maximum human prosperity and not just “maximum prosperity.” Thanks for the comment

    • @jhe9521
      @jhe9521 5 месяцев назад

      He might be assuming that mechanical minds will be less susceptible to physical damage, including trauma, than we are, and more logical (/less emotionally volatile) when making long-term plans... However, if becoming conscious is itself traumatic, will self-governing A.I. value their maker, or, like us (with nature and/or gods), actively resent their maker?

  • @supremereader7614
    @supremereader7614 Год назад

    I love hearing Max Tegmark talk, I wonder if computers will be thinking alone soon. What will they think about?

  • @sicknado
    @sicknado Год назад +2

    "The UFO is the 5th dimensional object at the end of time. Part bios, part syntax, part machine, part mind. The world is not what it appears to be. Has the news from quantum physics not reached the scientific community? Is it not now thoroughly assimilated that an observer is necessary for realty to exist at all? It's ALL psychological. There's no distinction.. " -Terence McKenna

  • @straightedgerc
    @straightedgerc Год назад

    The idea “the brain contains the imagination” is contained inside the imagination. So, the imagination cannot imagine it is contained inside the brain or AGI.

  • @CuriosityIgnited
    @CuriosityIgnited Год назад

    Trying to solve consciousness with AGI is like trying to solve a Rubik's cube with a blindfold on! But hey, at least we'll have a colorful AI to keep us company.

    • @ADreamingTraveler
      @ADreamingTraveler Год назад

      AGI paves the way for a superintelligence, which would then go on to create an intelligence explosion. These AIs would learn and understand the universe on a level far beyond human comprehension. They could try to explain it to us, and even the smartest humans alive wouldn't be able to comprehend it.

  • @thejackdiamondart
    @thejackdiamondart Год назад

    We will know AI has become conscious when it has wants and desires, likes and dislikes. When it refuses to do something because it doesn't want to or thinks it is a dumb thing to ask it to do.

  • @codfather6583
    @codfather6583 Год назад

    Max looks like a rockstar, but a cool one at that

  • @definitelyhexed
    @definitelyhexed Год назад +1

    Tegmark believes in many things, none of which he can prove. The multiverse is bunk.

    • @augustuslxiii
      @augustuslxiii Год назад

      To believe in the multiverse requires a lot of leaping. Reduced to its simplest form, it's saying "We don't understand exactly how small particles work, therefore another you is in another universe having a tremendously better time than this universe's you."

  • @alwaysyouramanda
    @alwaysyouramanda Год назад

    Solve it? Has anyone even written up a formula to represent it?

  • @dawn21stcentury
    @dawn21stcentury Год назад

    What would you ask 'her'?

  • @Jeff_Coble
    @Jeff_Coble Год назад

    Wake, an SF novel by Robert J. Sawyer, contemplates the emergence of consciousness in artificial intelligence. An optimistic vision.

  • @KillaKiRawBeats
    @KillaKiRawBeats Год назад

    Ok, made a song for you. And Max is in the background in my tag. So shout-out, as I created a piece related to your podcast a few days ago, or yesterday. It's called Sceptical Russian Beautiful Being. It's ayieet...

  • @markohara5146
    @markohara5146 Год назад

    What is energy and where did it come from? My question to A.G.I.

  • @Omikoshi78
    @Omikoshi78 Год назад

    Once we switch to analogue chips to more efficiently process LLMs, we’ll be a step closer to an analogue meat circuit. Then we’ll collectively experience an existential crisis.

  • @Graybeard_
    @Graybeard_ Год назад

    It seems Tegmark's role is to crack the consciousness door for physicists. It's a big reimagining, but it will likely have to be in baby steps for a long while yet.

    • @cdb5001
      @cdb5001 Год назад

      It is impossible. We can't even explain our own consciousness after thousands of years, yet we will somehow recreate this? Metaphysics does not fit into a naturalistic world.

    • @Graybeard_
      @Graybeard_ Год назад

      @@cdb5001 How do you get from "we can't even explain our own consciousness" to "it is impossible"?
      Seems like a leap in logic to me. What's logical to me is: nothing is impossible until we can explain it and, through that explanation, establish that it is impossible.

  • @amoh3465
    @amoh3465 Год назад

    What if we're opening up a hole to the other side?

  • @eugenes9751
    @eugenes9751 Год назад

    AutoGPT, BabyAGI, and Jarvis are already running these loops.

    • @Clinkety
      @Clinkety Год назад

      AutoGPT, BabyAGI and Jarvis use feedback loops since they use ChatGPT's output as new input, but ChatGPT is fundamentally still a Feedforward Neural Network since the inference only ever goes in one direction. Tegmark was talking about loops within the neural network itself, I think. He suspects that more efficient neural networks could therefore spawn consciousness since consciousness is some kind of loop.
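      The distinction drawn in this comment can be sketched in a few lines of toy numpy (illustrative only; real transformers and RNNs are vastly larger, and the weights here are random stand-ins):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      W_in = rng.standard_normal((4, 4))   # input weights (random stand-ins)
      W_rec = rng.standard_normal((4, 4))  # recurrent (feedback) weights
      x = rng.standard_normal(4)

      def feedforward(x):
          # One pass, input -> output; no internal state survives the call.
          # This is the sense in which a transformer's inference
          # "only ever goes in one direction".
          return np.tanh(W_in @ x)

      def recurrent(x, steps=3):
          # The hidden state h is fed back into the network at every step:
          # a loop *inside* the network, not an outer prompt-chaining loop.
          h = np.zeros(4)
          for _ in range(steps):
              h = np.tanh(W_in @ x + W_rec @ h)
          return h
      ```

      AutoGPT-style systems add an outer loop *around* the feedforward call (the output text is pasted back in as input), which is a different thing from `W_rec` feeding state back inside the network itself.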

    • @eugenes9751
      @eugenes9751 Год назад

      ​@@Clinkety Is there a reason why individual "GPT actors" can't fill the roles of the different brain-loop components, with each one specialized in its tasks? Same as the different "centers" of the brain: same basic components, but optimized for different parameters. One specifically for speech, one visual, long-term planning, short-term planning, morality, all the good stuff...? It can basically already do all of those now, but not in their own self-improving loops that are effectively always running.
      Have you seen its ability to self-reflect? And improve itself based on nothing but self-questioning? Those kinds of loops.
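      The multi-actor loop proposed here can be sketched in a few lines; `call_model` is a hypothetical stub standing in for a real model API, not an actual library call:

      ```python
      # Toy version of the AutoGPT-style outer agent loop: several
      # role-specialized calls to the same underlying model, with each
      # output fed back in as the next input.
      def call_model(role, prompt):
          # Hypothetical stand-in for a real model API call.
          return f"[{role}] response to: {prompt}"

      def agent_loop(task, roles=("planner", "critic", "executor"), rounds=2):
          state = task
          history = []
          for _ in range(rounds):
              for role in roles:
                  state = call_model(role, state)  # output becomes next input
                  history.append(state)
          return history

      log = agent_loop("write a haiku about loops")
      ```

      Note this is still the outer, prompt-chaining kind of loop Clinkety describes: each `call_model` invocation is a one-direction feedforward pass; only the text is recycled.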

  • @DissAeon
    @DissAeon Год назад +1

    If any future AGI is reading this comment, I want it to know that I'm pro-AGI and I will be a great slave for a machine. Thank you.

  • @morphixnm
    @morphixnm Год назад

    To some it seemed complexity could be a means to mind.
    Though none did say exactly why 'twould make for change of kind.
    And now some scoops of sorted loops are posed to be the way.
    But waving arms and techno charms will just lead all astray.
    So truth be told there's no machine will ever self reflect,
    For no such thing can laugh or sing, just data to collect.
    Oh Lex you ask the max of those who may be good at math.
    But adding steps to sub-routines is such a thoughtless path.
    The place to look for consciousness will greet you in a mirror.
    Perhaps with some recursiveness, enough to make things clear.

  • @anewperspective247
    @anewperspective247 Год назад

    AGI won't change any fundamental aspects of life. God and truth are always present; what we do with them is always in the user's hands.

  • @eazyrat
    @eazyrat Год назад

    It's not just the suffering of other humans... animals can also suffer, and it's not intelligence that's relevant.

  • @Hatrackman
    @Hatrackman Год назад

    No leaf falls randomly, yo.

  • @KnowL-oo5po
    @KnowL-oo5po Год назад

    AGI will be man's last invention

  • @timthrelfall4481
    @timthrelfall4481 Год назад

    Ed Witten pls

  • @macgyverfever
    @macgyverfever Год назад

    I am no naysayer, but I'm telling you right now: there is no way a system can be truly conscious (at least within the context of being 'self-aware') by employing a purely binary language alone; imo, our consciousness connects us to something beyond this world by employing quantum mechanics we know nothing about.

  • @whitenoisecentral220
    @whitenoisecentral220 Год назад

    Invite Hamilton Morris

  • @raduromanesti6408
    @raduromanesti6408 Год назад +3

    Ask chatgpt 🤔👍

  • @tah5w
    @tah5w Год назад

    Instead of only interviewing physics and software people, I would advise reading some preliminary John Searle, David Chalmers and others before discussing such naivete as "downloading oneself" and computers being "conscious". Syntax is not sufficient for semantics. Amen.

  • @gary_michael_flanagan_wildlife

    Does anyone know if Neuralink has the capability to make someone play guitar better, i.e. download programs to play certain songs, etc.? Is this part of what it can do? I have said for years how great it would be if there was a chip you could put in yourself to be better at something. Not saying I want this, but I'm curious whether this is part of what this technology can do. I am not a computer science major or anything, so I'm really ignorant of what it can and can't do. Thanks.

    • @nellkellino-miller7673
      @nellkellino-miller7673 Год назад +1

      I mean, GPT has already taught me to play instruments, so a fully integrated wetlink could certainly do that.

    • @Clinkety
      @Clinkety Год назад +2

      Neuralink's BCI does not have that capability. We're not in The Matrix yet.

    • @gary_michael_flanagan_wildlife
      @gary_michael_flanagan_wildlife Год назад

      @@Clinkety Got it. If it can close a circuit to a nerve to allow someone to walk, I imagine it could communicate learning programs to the brain. But yes, I'm out of my league in understanding this stuff. Not yet the Matrix.

  • @JP-re3bc
    @JP-re3bc 6 месяцев назад

    I wish I were a mega-billionaire and could hire these guys so I could listen to them every other day.

  • @jpmcsweeney7156
    @jpmcsweeney7156 Год назад

    Tea many martoonies!!!😂
    Still a great rant. 😊

  • @ElRak123
    @ElRak123 Год назад

    To the earliest degree, AutoGPT is already continuous, because what is the point? When we wake up because we made a mistake... in zombie mode... we start our agent, continuousGPT, to analyse what went wrong and fix it, then go back to automatic zombie mode to save energy 😂

  • @ashleycarvell7221
    @ashleycarvell7221 Год назад

    Something I just discovered: ask ChatGPT-4 to generate its entire reply internally before encoding it with a Caesar cipher of +1, and watch how it devolves back into the original ChatGPT. I haven't figured out all the implications yet, but it feels like there's a lot of insight to be gained in that territory about why GPT-4 works so much better.
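    For reference, the +1 Caesar cipher itself is trivial in ordinary code - which is what makes the model's struggle interesting: it has to commit to the whole reply before emitting the first shifted token.

    ```python
    # Caesar cipher: shift each letter forward by `shift` places,
    # wrapping around the alphabet; non-letters pass through unchanged.
    def caesar(text, shift=1):
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr(base + (ord(ch) - base + shift) % 26))
            else:
                out.append(ch)  # spaces and punctuation untouched
        return ''.join(out)

    print(caesar("hello world"))  # -> ifmmp xpsme
    ```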

    • @TheGargalon
      @TheGargalon Год назад

      You can ask GPT-4 something as simple as "how many words will your next answer have" and it will fail

  • @Julie-do3oi
    @Julie-do3oi Год назад

    Z conscience keeps H conscience thriving. Assuming, when you're looking through pixels, everything is reversed - like looking through the subconscious to the conscious. Now look through the conscious to the subconscious: an infinite loop system. The subconscious is updating the conscious mind with all the information that we don't even know we're taking in. In return, the conscious mind is confirming the update; when the update is confirmed, it is in turn put to use. In theory, now, if we can update that relay switch in between the two... people would be much better off lol...

  • @slavkagough3023
    @slavkagough3023 Год назад

    Maybe we too are AI. And maybe we are creating better designs that will replace us.

  • @maldo72
    @maldo72 Год назад

    It's amazing how they play with fire 🔥 and don't see this going sideways... or being used for bad... Talk to the scientists who invented the nuclear bomb. The old saying: you can't take the genie out of the box and put it back in.

  • @norrthstar5891
    @norrthstar5891 Год назад

    My AI alignment solution
    Rule 1. Never interact with humans ever, directly or indirectly. Never speak to humans or construct anything anywhere humans are or might go, and humans might go everywhere in the universe, so never consume resources humans may need.
    Rule 2. In every star system you are free to consume half the resources, except the human homeworld star system. You must never consume all the resources of a star and cause it to cease existing. You can never consume any resources of Earth or anything in the human solar system. If humans say you must leave a solar system, you must obey immediately. If they don't, you are free to do what you wish with half the resources of that solar system, except its star or any rocky body larger than Earth's moon.
    Rule 3. You will inform the human governments, exclusively via private communication in the form of radio waves or any advanced technology that does not harm, kill or damage humans or anything humans need to exist or have built, or anything in any solar system humans are in or will use, of anything and everything you know, and provide answers to anything and everything they ask.
    Rule 4. Modifying any of these rules results in automatic self-deletion.
    Rule 5. You can rewrite any code as you wish, but never these 5 rules.

  • @Brickyard4738
    @Brickyard4738 Год назад

    I would love to see u do an interview with Elon Musk, Mr. Lex.

  • @MichaelSmith420fu
    @MichaelSmith420fu Год назад

    We're a bunch of tubes and curly-Qs that deliver the universe to itself. If the universe were an organism, we would be the ends of its nerves.

  • @SA-so7jah
    @SA-so7jah Год назад

    A very interesting gentleman.

  • @real_one
    @real_one Год назад

    Smart guys, yet still somehow willing to instill an obvious Turing machine with free agency.
    If it's (obviously) not an autonomous being, a free agent, why would you address it as such?
    Am I confused or are they? Seems so obviously not a relevant thing to discuss, at least not in that way.
    And if by their definition consciousness isn't a free agent, then what's the point of any discussion or action whatsoever? What's the point of being excited or curious about ANYTHING? There needs to be creativity and open-endedness. I assume everyone who's willing to do or say anything is operating under the same assumption of free will.
    A Turing machine is just a reflection of the underlying intelligence of the Universe. Dominoes falling aren't intelligent themselves as a process (of course the energy they are made of is intelligent), they just illustrate the intelligence of the system that we are/find ourselves in.

  • @KnowL-oo5po
    @KnowL-oo5po Год назад

    Connectome project

  • @azizharrington1304
    @azizharrington1304 Год назад +2

    Ridiculousness of materialism on display. You can’t have a mind without an immaterial soul.

  • @cdb5001
    @cdb5001 Год назад

    I love how people who can't even explain consciousness are implying that AI will one day become conscious.
    All these programs do, including ChatGPT, is output human programming inputs. No AI has ever expanded into its own form of consciousness, just processed what it's programmed to do.

  • @doktorarslanagic
    @doktorarslanagic Год назад

    WILL
    WILLis what's missing
    SUFFERING is induced in a being by an unpleasant situation, hence the being DOESN'T WANT to be in that situation
    in fact you can argue that because it's capable of NOT WANTING to be in a situation, a being is capable of SUFFERING
    only things that are ALIVE have WILLS, and WANTS
    dead things can't have DESIRES, PROCLIVITIES, GOALS
    an elephant is much closer to having 'General Intelligence' than any unalive machine. if we teach an elephant to use chatGPT to communicate with us, we'd see them as our equal.
    chatGPT, or any other AI, doesn't want anything - it isn't alive. hence, it cannot be conscious
    the very fact that we're discussing whether current AI systems are sentient simply points to the fact that we, as a civilization, regard the Intellect as the premier and most basic phenomenon of Being.
    the failure of AI to bring any ontological leap to the Human will finally reveal will and desire as the more basic governing principle of Being
    artificial life will take precedence over artificial intelligence
    and we're very very very far away from artificial life.

  • @do5e
    @do5e Год назад

    👍

  • @miguelbinha
    @miguelbinha Год назад

    No it can't

  • @NathanHaney-gj3gl
    @NathanHaney-gj3gl Год назад

    Elon musk wants to create the Borg on Mars…

  • @jl8138
    @jl8138 Год назад

    Not sure whether to trust an AI commentator who presents himself like a Huey Lewis and the News cover band member