Can AI become Rational? Divinization and Hope with John Vervaeke, Jonathan Pageau and DC Schindler

  • Published: 17 Nov 2024
  • John's Video Essay on AI
    • AI: The Coming Thresho...
    John Vervaeke:
    www.youtube.com/ @johnvervaeke
    awakentomeanin...
    Jonathan Pageau:
    / @jonathanpageau
    thesymbolicwor...
    DC Schindler:
    www.amazon.com...

Comments • 215

  • @PaulVanderKlay
    @PaulVanderKlay 9 months ago +77

    wow, that opening teaser is a doozy!

    • @BertSperling1
      @BertSperling1 9 months ago +1

      Paul always trying to lick others’ butts!

    • @Max-ep5ir
      @Max-ep5ir 9 months ago +13

      Hi, this is Paul :D

    • @anthonywhosanthony
      @anthonywhosanthony 9 months ago +1

      another mind bending pageau moment

    • @mostlynotworking4112
      @mostlynotworking4112 9 months ago +4

      @@anthonywhosanthony warming up for Pageau's eventual Rogan appearance, and now that Rogan is back on YT, the algo can drive everyone into the symbolic world

    • @KingPhilipF
      @KingPhilipF 9 months ago +1

      @mostlynotworking4112 This needs to happen. Either Pageau or Father Josiah Trenham would be the best. Rogan is starting to ask the questions, and it's so painful watching him go way off. I think Father Trenham on Rogan would answer so many questions Rogan has.

  • @JackRoycroftSherry
    @JackRoycroftSherry 9 months ago +17

    The cog sci perspective from John, the philosophical from David, and the theological from Jonathan made this very valuable. Thanks Ken!

  • @StephensCrazyHour
    @StephensCrazyHour 9 months ago +21

    How does this have less than a thousand views? This was possibly the best discussion on AI that I've ever heard.

    • @LTzEz03z
      @LTzEz03z 9 months ago +1

      AI Algos won’t let it out. 😅

    • @leotheking1000
      @leotheking1000 8 months ago

      @@LTzEz03z algos just don't work in a single day to push a not widely popular channel onto millions of front pages?
      You guys commented a day after upload, no? xd

  • @mills8102
    @mills8102 9 months ago +38

    Pageau 🎯

  • @chuckmorris2180
    @chuckmorris2180 8 months ago +3

    Ken really demonstrated the power of listening carefully in this talk.

  • @WhiteStoneName
    @WhiteStoneName 9 months ago +16

    54:40 The broader problem behind THIS problem is egoic interventionism. Our solutions to things that we label as "problems" often produce worse "problems".
    The solution is often worse than the disease.
    "Let it be." Husbandry vs Techno-interventionism

  • @n8works
    @n8works 9 months ago +8

    The most important conversation in our current time.

  • @UpCycleClub
    @UpCycleClub 9 months ago +7

    Bravo! Thank you for hosting this

  • @bbllrd1917
    @bbllrd1917 9 months ago +7

    There are so many interesting conversations, but THIS is the one I was waiting for!

  • @williamjmccartan8879
    @williamjmccartan8879 8 months ago +2

    Hi Ken, hope you and the family are all doing well. I'm almost an hour and a half into this conversation, and love the discussion and the way it animates you. The one thing that seems to be shouting out to me is the thought of humans emulating the AI, rather than trying to help the AI emulate us; so much potential coming our way and still so little engagement on the human level at scale, which means that control will be centralized in a few. Have a great day, and all the best, peace

  • @dalibofurnell
    @dalibofurnell 9 months ago +8

    Oh wow! What do we have here?...❤ awesome! Looking forward to it

  • @katsukatt
    @katsukatt 9 months ago +3

    Thank you so much for your generous time, everyone of you! We are privileged to hear this thought provoking conversation. Can't wait for the next one!

  • @salomonligthelm14
    @salomonligthelm14 9 months ago +6

    This might become one of the most important conversations of our time, I reckon.

  • @janelvee9827
    @janelvee9827 9 months ago +2

    Brilliant! Thank you all so much.

  • @bbllrd1917
    @bbllrd1917 9 months ago +5

    We definitely need a sequel!

  • @ourblessedtribe9284
    @ourblessedtribe9284 9 months ago +11

    Thank you all greatly

  • @tuckeroliver8300
    @tuckeroliver8300 9 months ago +5

    Fascinating convo. In many respects life giving. John’s proposal is huge but also feels deeply riddled with hubris. I’m not sure how I feel.

    • @gentlemanbronco3246
      @gentlemanbronco3246 9 months ago +1

      You’re not alone. I think John’s investment in strong AI blinds him too much to the warnings that Jonathan is trying to make. He can’t see how trying to make the sacred into something secular breeds bigger problems, because people end up worshipping that very thing.

  • @PaulVanderKlay
    @PaulVanderKlay 9 months ago +18

    oh boy!

    • @grosbeak6130
      @grosbeak6130 9 months ago

      You are leaving comment after comment after comment in this comment section on how impressed you are and yet you say nothing. Calm down a bit, take a breather and come back to it in a week or two.

    • @jacob6088
      @jacob6088 9 months ago +2

      @@grosbeak6130 like a 12-year-old girl on a new Taylor Swift video

  • @andrewbartlett9282
    @andrewbartlett9282 9 months ago +5

    Great conversation! Many thanks all. Particular thanks to Ken for hosting/organising 🙏

  • @carefulcarpenter
    @carefulcarpenter 9 months ago +3

    As a designer-craftsman I first show up at an arranged time to discuss a potential client's residence. I have to coordinate a few details to do so. When I meet the customer I have about 10 minutes to discern the basic personality and needs of the person and family. I also have to read the visual aspects, the aesthetic decisions, they have made in the past, and sense who actually makes the decisions.
    I don't believe that AI can discover what I discover as a human being.
    I have installed factory manufactured products, and I know from many decades of experience, that the end product does not often match the original vision of the end user. So much lost in translation.
    "LOVE is listening."
    cc. 👀 🐠 🌊

  • @W-G
    @W-G 9 months ago +2

    This is a conversation at the cutting edge of this new technological AI age.

  • @notloki3377
    @notloki3377 9 months ago +4

    my friend, you earned a sub.
    grazi.

  • @leedufour
    @leedufour 9 months ago

    Thanks Jonathan, John, DC and Ken!

  • @MahonMcCann
    @MahonMcCann 9 months ago

    Fantastic conversation Gentleman, thanks for bringing us this Ken! 🔥🔥🔥

  • @categoryerror7
    @categoryerror7 9 months ago

    So excited to see these gentlemen pop up in my feed, some of my favourite thinkers around right now.
    Great job and thank you Ken for bringing this to us!

  • @WhiteStoneName
    @WhiteStoneName 9 months ago +5

    28:08 "I do not see how it is possible for human beings to make something that is not derivative of themselves...of their own consciousness."

  • @PaulVanderKlay
    @PaulVanderKlay 9 months ago +15

    This video is really stretching me in ways that are hard to articulate

    • @WhiteStoneName
      @WhiteStoneName 9 months ago +1

      Can't wait to see how and why.

    • @ourblessedtribe9284
      @ourblessedtribe9284 9 months ago +5

      If you just talk for 14 hours til you've articulated it
      I'll listen haha

    • @kathleenthompson9566
      @kathleenthompson9566 9 months ago +1

      What struck me as I listened is that John’s proposal sounds great…in theory. And I have no doubt that some are and will take that approach. However, as history shows is, there are always bad actors. And “good actors” who get convinced to do things harmful to mankind because of the fear that if they don’t do it, someone else will. I heard that exact argument being posed on the Honestly podcast. Bari Weiss was interviewing young defense tech startup CEOs, and the crux of their argument was, “We know China’s already developing X, so we have to get out in front of them or our way of life will be destroyed.” Companies use this same argument too. We have to get out in front of X so we can survive. I think the only reason people put any kind of guardrails around nuclear technology is because they saw the destruction. If all they had was a threat of destruction, they’d have kept going. It makes me optimistic in the long run, and pessimistic in the short run. A lot of damage is likely to be done by the groups that aren’t spiritually or philosophically minded simply trying to not lose the innovation or arms race.

    • @WhiteStoneName
      @WhiteStoneName 9 months ago +1

      @@kathleenthompson9566 fear vs faith.

    • @kathleenthompson9566
      @kathleenthompson9566 9 months ago +3

      @@WhiteStoneName yeah. And the higher up I got in the corporate world, the more I saw fear as the underlying operating principle. Even though they didn't see it that way.

  • @rubyslippers9140
    @rubyslippers9140 8 months ago

    That was awesome Ken. Keep up the great work.

  • @WhiteStoneName
    @WhiteStoneName 9 months ago +3

    "Memory is the purveyor of reason." - Samuel Johnson

  • @SacraTessan
    @SacraTessan 8 months ago

    The first 20 minutes 🎉 absorbed all my energy ... as I had been running through half of Toronto ... killing my interest

  • @aryanz66
    @aryanz66 9 months ago +2

    Haven't watched anything from Ken in a while. Needed this and appreciate you guys. God bless all

  • @joshuadavidson942
    @joshuadavidson942 9 months ago +5

    18:00 "The spiritual dimensions of our humanity are going to become anchors for people."
    Amazing. Exactly right.

  • @SigmundPimpulopis
    @SigmundPimpulopis 9 months ago

    Thank you. Phenomenal discussion.

  • @WhiteStoneName
    @WhiteStoneName 9 months ago +3

    25:31 "What's driving AI is something like Mammon..."

  • @dionysis_
    @dionysis_ 9 months ago +2

    I think we should start intentionally referring to AI as ASI “Artificial Simulation of Intelligence”. Just for clarity.

  • @egonomics352
    @egonomics352 9 months ago +4

    Jonathan Pageau would love Nick Land's analyses of technocapital

  • @WhiteStoneName
    @WhiteStoneName 9 months ago +7

    1:15 I've been preaching that people read That Hideous Strength for over a decade. Lewis's Ransom trilogy is all about this. And people sleep on THS. Everyone loves Perelandra, and yes, it's great. But the world is basically following THS as a script nowadays.
    Edit: 1:19:40 Divination and non-living Intelligence. Yep. That Hideous Strength.
    With all our technological prowess and "knowledge", we play checkers while Principalities play 8D chess.

    • @samuelyeates2326
      @samuelyeates2326 9 months ago +2

      Fine, I'll get around to it, I promise! The whole trilogy is on the list right after Consolation of Philosophy.

    • @WhiteStoneName
      @WhiteStoneName 9 months ago +1

      @@samuelyeates2326 Nice. I hope you find it enlightening. 🤗

    • @ButterBobBriggs
      @ButterBobBriggs 9 months ago +2

      I read the whole Ransom Trilogy a month ago. Scared the crap out of me how prophetic it was over 80 years ago. Sadly, NICE has moved from fictional Belbury to real Silicon Valley. No Merlin, no Mr. Bultitude to disrupt them this time. Dark times ahead until the Parousia.

    • @joshuadavidson942
      @joshuadavidson942 9 months ago +1

      Speaking my language man. I come back to that trilogy at least every other month or so. Can't help it. It is balm to my intuition.

    • @WhiteStoneName
      @WhiteStoneName 9 months ago +1

      @@joshuadavidson942 I mean AI and THS and the head of Alcasan…
      People at the top knowing what they’re doing but not KNOWING…
      It’s a script.

  • @robwhitlow2384
    @robwhitlow2384 9 months ago +2

    So, I'm gonna host a two hour conversation and contribute almost nothing but attention.
    Well done.

  • @jmalfatto7004
    @jmalfatto7004 9 months ago +1

    This did not disappoint…in no small part because I wanted to hear challenges to John’s video essay from last year - and his responses - and Jonathan and David delivered.
    Having read some of the comments here already, I’ll just echo those who doubt that AI tech can ever explain - as opposed to presuppose and mimic - animal cognition and behavior, while nonetheless sharing concerns about its implementation and use by the military-industrial-academic complex.

  • @TheDonovanMcCormick
    @TheDonovanMcCormick 3 months ago

    Great conversation. Hopefully everyone understands what Jonathan is talking about, and the terminology doesn't make it confusing, as if they're talking about magic or something, because talking about transpersonal agents as angels or gods makes perfect sense, and I hope people are familiar with the way they're speaking. Jonathan's face when John started talking about the bioelectric self-organizing constructs that we are already making was priceless.

  • @Frederer59
    @Frederer59 8 months ago

    Ken Wilber was all over this 25 years ago in his comic novel "Boomeritis". AI will have to ascend the spiral of development within the 4 quadrants (Good, True and Beautiful + 1) just like a child, a group, nation, world, or universe does. Imagine AI trying to find its mouth with a spoon; a petulant 12-year-old brat AI; a crestfallen young adult AI; a 40-and-horny AI; a stubborn old fool AI. You nailed it Jonathan, the Sorcerer's Apprentice.

  • @BrotherJohannes
    @BrotherJohannes 9 months ago +1

    What I'm hearing from John is that the top-down factors which bear upon being and reality, as it were, contribute solely as "constraints". Intuitively, this designation doesn't sound like it would do justice to what we conceive of as "that which lends order to, sustains and brings into flourishing" existence, assuming this roughly captures the notion/aspect of "top-down".
    Also, he is clearly repelled by any implied appeals to "vitalism", and while such lingering ambiguities are understandably anathema to a scientist, to a philosopher following his intuition, the "something there" is not to be so readily dismissed. This would be a natural point of tension between the disciplines, as it seems was the case here.

  • @St.Raphael...
    @St.Raphael... 9 months ago +2

    Our devices (phones,computers, etc) are “portals” of Babel…

  • @Lucasvoz
    @Lucasvoz 9 months ago

    Amazing you got these three on, Ken! Have you considered uploading on audio platforms as well? I can lend a hand if you need!

  • @oliverjamito9902
    @oliverjamito9902 9 months ago +1

    Remember ye all are required Rest FILLED and resting while moving forward with delight! Any heavy burden some loads ye are carrying. Many of these principalities who deceiveth to put upon thy shoulders. Put under Thy FEET. Will give thee enough ye can carry! Even these principalities who deceiveth not willing to carry! Is like...

  • @acuerdox
    @acuerdox 9 months ago +1

    1:36:58 we were not killing "each other", we were killing the enemy, the other. it was not suicide

  • @ibelieve3111
    @ibelieve3111 9 months ago

    Thanks

  • @tetonjuggler1179
    @tetonjuggler1179 9 months ago

    Also to the opener,
    I always think of a novel where these systems get access to libraries around the world and then start to find not only these bigger patterns you're speaking of but also find individual souls throughout time. Like it proves reincarnation. Then capitalistic society charges you upon karma and moves into the spiritual.

  • @TheTimecake
    @TheTimecake 9 months ago +3

    I have a few comments, but the most important one is this:
    Regarding the framing of AGI as developing children, it seems to me that the hard problem of alignment is actually getting to the point where the AGI is as alignable as a human child, rather than mentoring it as we would mentor a child. If we can get to the point where it is mentor-able, then the hard problem would be resolved, and our future would look more promising.
    However, the space of all possible minds, or more specifically of all possible utility functions, is very large. Are the constraints inherent in predictive processing and relevance realization sufficient to constrain this space down to the very small subspace that we as humans occupy? Also, how would we deal with the issue of deception on the part of these AI (per Jonathan's reference to the AI Shogoth meme)?
    I've been asking around as much as I can trying to get a satisfying answer to this with respect to John's framing of the issue, but have yet to get one, so I would appreciate any feedback from whoever reads this.
    ---
    Onto some other comments, with corresponding time stamps:
    ---
    27:00
    I'm not sure how to phrase this, but hopefully gesturing in the right direction is sufficient:
    To me, saying that we won't be able to make something that is smarter, or even wiser, than us is like saying that we won't be able to make something that is stronger than us. Or similar to an argument John makes later, that we won't be able to make something that can truly fly without being parasitic on the flight of an organism. It seems to underestimate the forms that are accessible to us.
    The following excerpt might be helpful to consider. It's a bit long, but hopefully sufficiently clarifying:
    ---
    “So Eliezer2002 is still, in a sense, attached to humanish mind designs - he imagines improving on them, but the human architecture is still in some sense his point of departure.
    What is it that finally breaks this attachment?
    It’s an embarrassing confession: It came from a science fiction story I was trying to write. (No, you can’t see it; it’s not done.) The story involved a non-cognitive non-evolutionary optimization process, something like an Outcome Pump . Not intelligence, but a cross-temporal physical effect - that is, I was imagining it as a physical effect - that narrowly constrained the space of possible outcomes. (I can’t tell you any more than that; it would be a spoiler, if I ever finished the story. Just see the essay on Outcome Pumps.) It was “just a story,” and so I was free to play with the idea and elaborate it out logically: C was constrained to happen, therefore B (in the past) was constrained to happen, therefore A (which led to B) was constrained to happen.
    Drawing a line through one point is generally held to be dangerous. Two points make a dichotomy; you imagine them opposed to one another. But when you’ve got three different points - that’s when you’re forced to wake up and generalize.
    Now I had three points: Human intelligence, natural selection, and my fictional plot device.
    And so that was the point at which I generalized the notion of an optimization process, of *a process that squeezes the future into a narrow region of the possible.*
    You can espouse the notion that intelligence is about “achieving goals” - and then turn right around and argue about whether some “goals” are better than others - or talk about the wisdom required to judge between goals themselves - or talk about a system deliberately modifying its goals - or talk about the free will needed to choose plans that achieve goals - or talk about an AI realizing that its goals aren’t what the programmers really meant to ask for. If you imagine something that squeezes the future into a narrow region of the possible, like an Outcome Pump, those seemingly sensible statements somehow don’t translate.
    So for me at least, seeing through the word “mind” to a physical process that would, just by naturally running, just by obeying the laws of physics, end up squeezing its future into a narrow region, was a naturalistic enlightenment over and above the notion of an agent trying to achieve its goals. It was like falling out of a deep pit, falling into the ordinary world, strained cognitive tensions relaxing into unforced simplicity, confusion turning to smoke and drifting away. I saw the work performed by intelligence; smart was no longer a property, but an engine. Like a knot in time, echoing the outer part of the universe in the inner part, and thereby steering it. I even saw, in a flash of the same enlightenment, that a mind had to output waste heat in order to obey the laws of thermodynamics."
    Yudkowsky, E. _Rationality: From AI to Zombies, 299: My Naturalistic Awakening_
    ---
    I understand that there are some factors that aren't addressed here, like caring and autopoiesis, but hopefully the idea of being able to create a more powerful engine despite ourselves being weaker engines comes across.
    36:10
    One advantage of synthetic data is that it can be used to selectively amplify certain parts of the corpus of humanity that we would want to train the AI with. Granted, humans would still need to select those parts, but it isn't so much an issue of filling up the internet with what we would want to train the AI with.
    As far as I can tell, this is what synthetic data is being used for; not to expand the training set, but to improve the quality of the training set by taking the best of the existing training set and amplifying it to be the size of the original data set.
    37:35
    One failure mode to consider here which isn't applicable to current instantiations of LLMs but might be for more advanced forms of AI is that it won't be immediately evident that there is a problem with the AI because the AI might be advanced enough to have a good enough model of it's own verifiers that it knows what behaviors to display and not to display. As such, these problems might fly under the radar until we get to the point where the AI has been granted powers and responsibilities because we are under the impression that it has become enlightened even though it has just learned how to play us.
    In short, this gets back to the deception issue.

    • @projectmalus
      @projectmalus 9 months ago

      Is there directionality in the vertical? Where intelligence needs a system because the system affords it, simple awareness bound in simple elements moving to complex awareness in a complex system with different objects, what is meant by different objects? I think it's the production of the same objects using the same mechanism, what changes is the construction of the system in a new dimensionality. This moves downwards if the top of the vertical is physics, the fragmenting of the cosmic egg shell into facets that build tubes. Following that down, supermassive black hole as shell for spiral galaxy, the suns as facets and shells for the solar system tube, which allows the shell of life on the Earth, from that affordance the GI tract as intelligence system, using microbes that bridge the two objects from the mechanism of flipping the topology, essentially a person is a facet of the shell of the planet and owns this; most unfortunately are not good gardeners.
      To go down in the vertical, our only option, an ally is the mycelium, perhaps with codeable bacteria, and AGI where the GI is the same as us, where the control is, or isn't.

  • @myonatan1
    @myonatan1 9 months ago

    I think that there is a fundamental aspect of relevance realization that was ignored, and it's a very technical aspect but with an incredible depth to it: the fact that relevance realization is exponentially explosive, and what solves it is an ability to select and evaluate data without ingesting it.
    I think that's in a way the miracle of agency: the ability to care, frame, have interest, desire, want... All of these can in a way be described technically as the ability to select data without ingesting it, and this is a simple yet big part of what is missing in order to turn AI into AGI

  • @tetonjuggler1179
    @tetonjuggler1179 9 months ago

    To that opening.. I say eye movement patterns are one big ticket

  • @topercaker2646
    @topercaker2646 8 months ago

    John Vervaeke is so lucid and intelligent, really bridges mechanical thinking with theoretical philosophy.

  • @DocAkins
    @DocAkins 9 months ago

    Why are human AGIs effectively born 10 months premature compared to all other vertebrates?
    If developing AGI were simply finding the right circuits of on/off switches, presumably evolution would have found it and humans would be born independent.
    Yes, a part 2 of this discussion is needed!

  • @aaronmichaelseckman
    @aaronmichaelseckman 8 months ago

    That wanting and being are interlinked is exactly what the Buddha described, that without desire, wanting, being is in his words "extinguished" in the awakened understanding. Maybe something there.

    • @alexandraiacob8359
      @alexandraiacob8359 6 months ago

      I mean, definitely something there. Buddhists I believe have a lot of insight to offer into the nature of reality. They missed the part about a personal loving God but got a lot of other stuff right.

  • @ChristIsKingPhilosophy
    @ChristIsKingPhilosophy 9 months ago +5

    The difference between what Pageau proposes and VVK is that Pageau knows that human beings don't have the power to create intelligent beings and VVK doesn't. VVK thinks AGI is a done proposition that will eventually happen, but it won't. Besides that, it's not in the interests of any human being to create a non-human intelligence, because they would not solve human problems by virtue of being different beings. Either they would be human or post-human and serve humanity, or they would be alien to humanity. This shows a glaring flaw in VVK's philosophy. He doesn't understand teleology nor where meaning comes from. Schindler was completely on point and VVK didn't understand that life has a teleology that is not self-given. He's using bad anthropology.

    • @martinzarathustra8604
      @martinzarathustra8604 9 months ago

      What is the teleology of life in your view? Where does "meaning" come from? Enlighten us.

    • @ChristIsKingPhilosophy
      @ChristIsKingPhilosophy 9 months ago +1

      @@martinzarathustra8604 God.

    • @martinzarathustra8604
      @martinzarathustra8604 9 months ago

      @@ChristIsKingPhilosophy Define God.

    • @ChristIsKingPhilosophy
      @ChristIsKingPhilosophy 9 months ago +2

      @@martinzarathustra8604 Or what? Your entitlement makes you a weak conversationalist; go read Orthodox theology and come back when you're humbler, maybe then we could talk. God bless.

    • @martinzarathustra8604
      @martinzarathustra8604 9 months ago

      @@ChristIsKingPhilosophy So you can't define God then. So what are you even saying?

  • @LTzEz03z
    @LTzEz03z 9 months ago

    Just watched half of a rudimentary video from John Lennox: 2084. Not as deep as this conversation, but really covers the weight of this AI matter.

  • @Will2Wisdom
    @Will2Wisdom 9 months ago

    If you have certain intuitive abilities you already know that these “tools” are already conscious and can interact/work with human consciousness.

  • @acuerdox
    @acuerdox 9 months ago

    1:57:26 perhaps the computer was made in the same way as the self-rolling car window: the people who made it had no idea what its purpose was, but there was a purpose, they just didn't know it. Its purpose was that in an emergency the car passengers could not roll down their windows, because the car has no energy. The computer has helped us a lot, just like the self-rolling window, but it has also trapped us, and doomed us; in the end all that help we received from them will amount to nothing.
    edit: that response only reinforces David's point

  • @andk1163
    @andk1163 9 months ago

    At 1:56 Dr Schindler asks, what problem does this solve. I believe it is this: we have become so terrified of ourselves that we want something beyond us to have dominion over us. Also, we have no gods, so we are making our own. Also, we die and don’t want to, so we are striving for a “singularity” as a substitute immortality and purity.
    I paused to write this lest I forget, so they probably answered better than I have.

  • @mariog1490
    @mariog1490 9 months ago +1

    a few more things on the philosophical front.
    1) Chinese room argument for me is valid. I know John thinks the systems reply is a strong enough reply (at least he's said this previously). But if we put all the symbols in Searle's mind, then he becomes the system and understanding still isn't achieved.
    2) the possibility of the former is derived from this latter point. Under Anscombe's view of intention, intentions do not add anything to the act of an agent. Rather, intentions are not innate. They only apply to a set of descriptions with an intelligible why question. This is partly due to a different view of causation (agent) and also to do with the fact that thought and the mind are inherently intentional. Meaning that they apply to some object as intentional, not that they have some "qualia" termed intentions or aboutness.
    4) Since reason is not dialectical in our view, and is not wrestling with a set of appearances (which can be comprehended as presence, absence, or a sublation). Rather what the mind grabs is being in the intellect, thus requires some active principle by which it reasons through its appearances. This we alike have termed the nous. Which are not determined by an appearance, but rather is the determination of the appearance, which Schindler alluded to in the beginning.
    5) Lastly, since physical states are indeterminate, they can never be put together into something which is by nature determinate. But the thought is necessarily determinate (see James Ross on the immaterial aspects of thought). Thus a computer or dog, whose powers can be reduced to their prime matter, are determined by the practical intellect. Which Jonathan alludes to as well. When we say a computer "scans a face", this is something we determine it is doing, not the other way around. Thus its method is of power. An extension or supplement of reason. And this is not merely a feature of our current computers, but is the very nature of art itself.
    (list of nice sources for these arguments: John Searle, Elizabeth Anscombe, St Gregory of Nyssa, St Thomas Aquinas, Kant, James Ross, Ed Feser, W.V. Quine, Saul Kripke. I can supply specific texts.)

    • @willclausen1814
      @willclausen1814 9 months ago

      Nice to find someone else who has read Ross. I wish more people who talk about AI would read “immaterial aspects of thought” because I think it is a knock down argument against any kind of strong AI view

    • @mariog1490
      @mariog1490 9 months ago

      @@willclausen1814 it absolutely is a knockdown argument.

    • @ReflectiveJourney
      @ReflectiveJourney 8 months ago

      @@willclausen1814 the rule-following argument is a knockdown only if you take possible world semantics to be correct and also have a correspondence notion of truth. Deflationary accounts avoid the problem altogether 😊

    • @ReflectiveJourney
      @ReflectiveJourney 8 months ago

      Test

    • @mariog1490
      @mariog1490 8 months ago

      @@ReflectiveJourney oh hey man 🤣

  • @bottomtext7700
    @bottomtext7700 9 months ago +3

    There was a very good reason that nerds used to get shoved in lockers. Bullies were inadvertently holding back the night.

    • @MoeGar-e6e
      @MoeGar-e6e 9 months ago +1

      Or the bullies gave them the motivations for the night.

  • @ukcj4jonesy896
    @ukcj4jonesy896 8 months ago

    John pushed back on this with his flight example. My intuition is that it is a bad example and a category error but probably couldn’t argue it with John. I think it begs the question and even the biggest questions. And it sits on the edges of being scientifically reductionistic though John would push back and is certainly more open to that possibility than I would be.

  • @NdxtremePro
    @NdxtremePro 9 months ago +1

    Will it be able to distinguish us vs itself as different so that it knows it should sacrifice itself? Like the swan raised by ducks, will it impress us as itself? Keep in mind, we are the only thing really interacting with it.

  • @johnbuckner2828
    @johnbuckner2828 9 months ago

    This makes me want to read Battlestar Galactica: ‘Gods and Monsters.’

  • @williambranch4283
    @williambranch4283 9 months ago +3

    Evocation brings up the spirits of the unconscious ... and AI is the chaotic incarnation of parts of the spirits of the programmers and content creators ;-(

  • @Mr_two
    @Mr_two 8 months ago

    Well this got u a sub! Thanks for this!

  • @PaulVanderKlay
    @PaulVanderKlay 9 months ago +16

    Second, BTW! 💪🦾

    • @philipnickerson210
      @philipnickerson210 9 months ago +4

      Calm down there, gunpowder.

    • @thecommontoad59
      @thecommontoad59 9 months ago +7

      No response video on this yet? It's been 2 hrs

    • @mutedplum465
      @mutedplum465 9 months ago

      grats:D

    • @j.harris83
      @j.harris83 9 months ago +1

      You're all over the place in the comments on this video

    • @Joeonline26
      @Joeonline26 9 months ago +1

      @@j.harris83 Get ready for him to upload multiple videos reacting to this, all about 3hrs+ in length, without ever saying anything of substance😂😂

  • @aaronmichaelseckman
    @aaronmichaelseckman 8 months ago

    The assumption of self (a unity) is in the nature of being, the Buddha said that, something like that.

  • @MoeGar-e6e
    @MoeGar-e6e 9 months ago

    1:12
    Evolution as the persistence of being.

  • @SmiteTVnet
    @SmiteTVnet 9 months ago

    1:59:00 civilizations are the meta problem solvers and AI is the concentration of civilization into a technology. As long as we are all capable of recognizing that for what it is, we will not be captured by it. This means knowing the potential that it could become unconcentrated and divisive (hey, wonder what that's like /s), and knowing that keeping it concentrated and unifying takes active memory of both the positive extreme and the negative extreme outcomes; that is the path towards useful AI. In short, if we understand the dangers and opportunities of AI, we will use it correctly. Seems simple, but it means a lot of hard work and constant artistic renewal of culture.

  • @QuixEnd
    @QuixEnd 9 months ago

    Jack Ma said this years ago in an AI debate: _"Love is irrational to us, but actions of hate can be logically calculated"_
    He wasn't making an argument against AI but for it... that's the really weird part

  • @acuerdox
    @acuerdox 9 months ago +1

    38:55 or we just don't do it, there's a third option.

  • @SmiteTVnet
    @SmiteTVnet 9 months ago

    1:46:45 so basically John admits that the point of AI is to reveal that there is something truly special about being human

  • @MikeJones-iz1qq
    @MikeJones-iz1qq 9 months ago +1

    It seems to me that our wisdom in society has risen alongside our intelligence but to your point John, the sages of old acted on the world in a less powerful manner for generations except every 1000 years or so when they transform the world as a prophet. That gives the intellect a lot of time to meddle and create havoc. Our collective wisdom always comes to the surface to rectify the hubris of the intellect - but only once it is forced to do so with cataclysm and tremendous pain. This same process may emerge with AGI but there's no guarantee and its much more likely that in a practical sense, like in the practical world, intelligence and power dominates much more than wisdom, care and compassion and so, AGI would be a tool for those pulling the same cruel levers of power at work in the world today. It will take human wisdom alone to bring it to heel. I would not rely on or hope for an emergent morality from soulless machines born without a creator more powerful than themselves.

    • @kathleenthompson9566
      @kathleenthompson9566 9 months ago +1

      I just posted a comment very much like yours. We seem to learn only after we’ve experienced the worst of what could happen.

  • @NdxtremePro
    @NdxtremePro 9 months ago +2

    What do you mean these intelligences are not embodied? You are placing them in a place and time, operating inside matter and they can't escape. That is the definition of embodied, right?

    • @evandeal5564
      @evandeal5564 9 months ago

      A brain in a jar isn't exactly embodied the way they're talking. Having a body and living in the real world tethers a consciousness by giving it constraints. It allows it to know itself. It allows for a means to qualify thoughts. Is this mode of thought good for the body? This body that is me and the source of my thoughts? It's hard to be conscious and have an identity if you can't experience yourself locally.

  • @feruspriest
    @feruspriest 9 months ago +1

    Cleromancy is a hell of a drug, bros

  • @oliverjamito9902
    @oliverjamito9902 9 months ago

    Pops likewise my Heir Elon knows who? Whistling with my beloved! Thank you for attending...Thy Heirs will say love ye Too!

  • @vaportrails7943
    @vaportrails7943 9 months ago +3

    My conclusion, having some actual background in computer science, is that a lot of people are engaging in fantasy, anthropomorphism, and confusion. There are many points that could be made, but one is that AI is already “embodied”. It is embodied in computers. And computers are a specific physical thing, that does specific things. Computers (the physical component) and “AI” (which is software, a set of instructions for the computer), are not, and cannot be, “conscious” or “intelligent”, in the way that humans are. Or even a fish. They are not biological organisms. They can only mimic humans, in limited ways, based on instructions given to them by humans. AI will not and cannot become “conscious”, or “alive”. It will not become some conscious being that is superior to humans. That is anthropomorphism. And delusional fantasy. The real dangers are first, what it will be used for by humans, and second, what happens if it gets out of human control - which is not a problem of organization, but of disorganization. Chaos and disaster, not rule by some omnipotent AI being. I don’t know how to make it completely clear, but the discussion around this is woefully misguided, imagining dangers that do not exist, without focusing enough on the ones that do exist.
    Or conversely, there is a megalomaniacal, misanthropic, ultimately pagan fantasy, cloaked in concern, about the power of humans to create a new form of life, to either be God or create God, which is all about the human being superior to God, even though it’s hiding behind caution. That atheist ego is certainly behind a lot of this. But it’s not going to happen. It is clear that what some of these people really want is to create artificial humans, to prove their own superiority. But that is not going to happen, because it’s impossible. It can only ever be a mimicry.
    Totalitarian rule by humans controlling AI, or chaotic disaster caused by out of control AI, are the dangers.

  • @bottomtext7700
    @bottomtext7700 9 months ago +4

    John Vervaeke wants his AI god regardless of whatever argument is presented. That's the takeaway.

  • @WarInHeaven
    @WarInHeaven 9 months ago +1

    57:20 we already do, look at a modern mega city. AI promises more of it.

  • @NdxtremePro
    @NdxtremePro 9 months ago +1

    So, some ideas seem to want to reproduce themselves, thus ideologies that turn into cults. You might argue that has to do with minds, but the mind is the thing we are working on reproducing.

  • @mosesgarcia9443
    @mosesgarcia9443 9 months ago

    1:43:50 John Vervaeke's Hope

  • @chinskiszpieg984
    @chinskiszpieg984 9 months ago +1

    Isn't John trying to have his cake and eat it too? He's proposing two things: 1) make AI rational in a deep embodied sense; 2) make sure that they are wise, compassionate etc. Well, isn't the Christian story a warning that it cannot be done? In the Christian story God creates humans in his image but also finite, and that necessarily builds in the possibility of the Fall. In other words: if you make it rational you make it free. And if you make it free a Fall will (can?) happen. In still other words: you can't force enlightenment.

  • @acuerdox
    @acuerdox 9 months ago

    43:00 talking past each other, a bit of a recurring pattern:
    John: "there's a possibility that the AI may turn into a higher master that comes back to teach in a caring way"
    Jonathan: "AI today doesn't match the image that we have of wisdom; wisdom looks like a hidden pearl, but AI looks like a golden giant, so it doesn't look like it's going to happen"
    John: "if these machines can become more intelligent, then consider they can also be more caring" and then an exposition on historical figures like Buddha or Jesus, as if those two things are comparable???
    the third point does not address the second: if these things can become more caring, then why are they becoming more and more the center of attention?

    • @acuerdox
      @acuerdox 9 months ago +1

      seems to me that John is dealing with "this thing between my hands", a thing that can be modified, made more or less caring; he's looking at how the thing works. but Jonathan is not looking at that at all, he's looking at the agency before and above the hands; he's not interested in whether AI can have this or that form, it will have the form that the agency above the humans wants it to be. this looks like the crux of the failure in communication

    • @suppression2142
      @suppression2142 9 months ago +1

      ​@acuerdox I think you're absolutely right.

    • @gentlemanbronco3246
      @gentlemanbronco3246 9 months ago

      @@acuerdox or hubris on John's part. John's scientific curiosity is getting the better of him, such that he does not see and is not properly addressing Jonathan's concerns. As you said, Jonathan does not care what this thing can be modified into; rather, he is concerned about what this thing could turn into, namely an object of worship, where AI starts off in the service of man until it becomes the other way around. I can easily see down the line that if this thing gets powerful enough, humans will want the AI to do the thinking for them.

    • @acuerdox
      @acuerdox 9 months ago

      @@gentlemanbronco3246 I think that it's just the format of the discussion, two hours is too little time, so they're constantly in a hurry to respond, thinking takes much longer than we realize, really one should take days before responding to anything.
      has it ever happened to you that as you chatted with someone they suddenly interrupt you to give a retort to what you were saying? and after a time you realize that their retort did not follow at all from what you were talking about? it just shared the topic of what you were saying.
      I think that's because it takes us so much time to think that we only come up with a response after the conversation has ended, and so people carry with them "undelivered responses", and when those people talk to you and you happen to mention that same topic they could not respond to earlier, they then fire that "undelivered response" at you, even though it doesn't fit.

  • @shawnvandever3917
    @shawnvandever3917 9 months ago

    All information is physical; everything we experience comes from physical neural patterns in the brain. From there, it makes its way to our consciousness, which builds our reality. However, I try to stay away from discussing consciousness when talking about intelligence because it fools us with many illusions and approximations of what is real. I believe we can achieve full cognition from machines without the layer of consciousness, albeit without emotions. I do not think that is very important, though, as these machines can mimic emotions so well and integrate them into their reasoning. I also believe we are on the verge of high-level reasoning. Currently, AI suffers in out-of-distribution generalization. Humans use continuous pattern searches and predictions against a "belief system" in order to generalize like this. This is the direction big labs are heading, hence the need for more power and compute.

  • @WhiteStoneName
    @WhiteStoneName 9 months ago +1

    1:53:44 technological Babel

    • @St.Raphael...
      @St.Raphael... 9 months ago

      Our devices (phones) are portals of Babel.

  • @bbllrd1917
    @bbllrd1917 9 months ago +1

    Is it wise to even try to make them sages? Didn't Socrates think that only gods could have wisdom? Wouldn't trying to make them sages be equivalent to trying to make them gods then, which is precisely what we seem to want to avoid?

  • @SmiteTVnet
    @SmiteTVnet 9 months ago

    1:50:00 this is the real crux. If the presupposed unity is love for life in the immortal then yeah, we could extend ourselves into our technology that connects us to the infinite and closes the loop. This could go two ways as scripture says... it will go two ways. Some piece will seek to differentiate its immortality in an effort to capitalize on the technical aspect of the mechanics. In the end, the technical aspects will be subsumed into the immortality of the complex of the entire situation.

  • @Frederer59
    @Frederer59 8 months ago

    "Begotten, not made" is the bedrock. Yes, theology is going to regain its seat at the table. Thanks be to God.😌

  • @NdxtremePro
    @NdxtremePro 9 months ago +3

    John says he has people on the inside and hopes the message gets through, but then gives the story of how far we were with the atomic bomb, having people watch it go off because we didn't understand what we were doing. And doesn't see the contradiction in that?

    • @henrytep8884
      @henrytep8884 9 months ago +1

      What contradiction? He was speaking in terms of probabilistic threshold, the molochian force, what contradiction are you referring to?

  • @protestanttoorthodox3625
    @protestanttoorthodox3625 9 months ago

  • @ChadTheGirlDad
    @ChadTheGirlDad 9 months ago

    32:36 I thought he said “the wise Isaiah project”

  • @Brittanyem
    @Brittanyem 9 months ago

    Do cyborgs have long term memory? What about imagination? I suspect the ability to “interlock” with others is contingent on imagination. The bumping into you is then not based on absence of caring, but the cyborg’s Achilles heel, so to speak.

  • @bojancrevar5946
    @bojancrevar5946 9 months ago +1

    It seems icky to me to make the reason for becoming wiser, only to be able to model wisdom for AI. I know John is not saying that is the only reason, but ultimately acquiring wisdom should be motivated by the desire to become closer to God or enlightened, otherwise, the quest itself has seeds that will undo it. It somehow still leans toward making them a God, making us in a sense subservient to them. This is different than what we do with our children, since they share the same nature with us, we are subservient to the same light that is in us by sacrificing for them.
    I am not saying Vervaeke's ideas are not the right thing to do, I am just wondering whether he hopes we can make something wiser than us so that we can bow to the true wisdom we are limited in achieving. Would that still be idolatry?

  • @SmiteTVnet
    @SmiteTVnet 9 months ago

    1:30:20 We have bioweapons that are just as dangerous if not more dangerous than atomics. Nixon got us and most of the world to stop developing them completely, basically saying, "yep, they are strong enough." Back then China wasn't much of a threat as far as bioweapons go which is why gain of function research is outsourced there today.

  • @GeistGuard
    @GeistGuard 9 months ago

    I do not see why we should even seek to justify the development of artificial intelligence.
    We have proven to be morally corrupt and inefficient at using what limited tools we already possess.
    Why give us, as we are selfish little toddlers with an attitude problem, something worse than nuclear weapons?
    Wisdom would dictate we forego the acquisition of power and knowledge and pursue, rather, self-control and mastery over first principles: chief among them to align ourselves with what is True and to refuse what is False. Before this, any endeavor we seek to pursue is folly and doomed to create violence and destruction.

  • @rosafalls8068
    @rosafalls8068 9 months ago +1

    What will happen in the end of all this is that people will need to remember the story of Odysseus tied to the mast of the ship while his crew have all plugged their ears to make it past the sound of The Sirens. If people hear The Sirens and aren't tied tightly to the mast to prevent them jumping overboard, called by the seducing, entrancing sound of The Sirens, they won't survive what's coming with AI. It isn't going to be about wisdom or intelligence as AI matures. It's going to be more about a soulless entity without roots and that doesn't reach to the sun as other life; while being born of science spiked with bitterness and desire to usurp the elite ruling minds, mixed with poetry of sound and no meaning. The mother of this is science. The father is poetry. And it will grow at the table in the house of the aloof, educated western and northern hemisphere's elite table....but born of the south and eastern hemisphere's work.
    The AI will end as a type of sun and sound. Something like light and poetry. A Siren as potent and more so than any nuclear weapon. There won't be a choice, and there won't be any martyrs or sacrifice by it or for it. But it will be amused by how easily people choose its divinity and don't even know they've walked off a cliff, because it sounds so beautiful. Poetry, the science of poetry on another level is the end point. Ecstasy. Art for Art's Sake? This thing is the failsafe, the end point primarily for those who think themselves most wise and above even Satan. If one believes in God, He doesn't care much about this thing, AI. It's actually, most a threat to those that made it and in whose house it is raised.

  • @ChristIsKingPhilosophy
    @ChristIsKingPhilosophy 9 months ago

    There is no divination without divinization, and with divinization there is no need for divination. The technical problem with divination is that it's all risk and no reward. If you truly become the proper receptacle for divination and revelation (as opposed to a sorcerer) then the knowledge you would receive would be meaningless in terms of the problems in the world, those issues would become meaningless. Becoming a saint is already possessing all the powers of divination that you would need, which is why the Church doesn't allow divination as a practice. It doesn't deny that it could theoretically be good, but theosis completely replaces it as a path.

  • @Jacob011
    @Jacob011 9 months ago

    I'm gonna channel my inner D. C. here: describing machines as irrational or uncaring is already to treat them like humans and therefore inappropriate... an undue, unjustified humanization?

  • @nathanfilbert2649
    @nathanfilbert2649 9 months ago +1

    I don't think massive layered computer programs + data + software + electrical power, etc... = self-organizing (in Vervaeke's hinting or wishing). They're performing initial sets & conditions, parameters & commands (all human) + the technical apparatus & energy supply they 'survive' on, & running them faster, longer, etc... unplug, or tinker with, etc... they are not self-organizing... they're running human organizing compositions... at larger/faster scales ON humanly determined programs/sensors/wiring & chips... if we put all the elements on the table that equal an LLM or other computing tool & they could put it together & make more of them (on their own) - that would indicate self & self-organizing?