H.I. #52: 20,000 Years of Torment

  • Published: 24 Oct 2024

Comments • 420

  • @WaldirPimenta
    @WaldirPimenta 8 years ago +442

    For those coming from the Q&A video, the A.I. stuff begins at 1:09:30.

    • @CryptoSC
      @CryptoSC 8 years ago +1

      +Waldir Pimenta thanks

    • @OMGanger
      @OMGanger 8 years ago +8

      +Waldir Pimenta you're an angel

    • @iksardon
      @iksardon 8 years ago +1

      +Waldir Pimenta you da man

    • @SamRiesgo
      @SamRiesgo 8 years ago +1

      Thanks Waldir!!

    • @danielmiller4279
      @danielmiller4279 8 years ago +4

      The MVP; I've found him.

  • @rayraythebrew2863
    @rayraythebrew2863 8 years ago +62

    I'm pretty sure they've got so many corners at this point that they've created a dodecahedron-shaped room.

  • @rosecityandbeyond
    @rosecityandbeyond 8 years ago +197

    AI segment starts at 1:09:30 for those coming from the Q&A who don't have the patience to sit through the other topics.

    • @c00ldude98
      @c00ldude98 8 years ago +4

      +Drake Christmas wow thanx

    • @rosecityandbeyond
      @rosecityandbeyond 8 years ago +3

      Once you get to the "Box Problem" you should google "Roko's Basilisk"

    • @Erok125
      @Erok125 8 years ago +1

      +Drake Christmas Thanks bro

    • @z-beeblebrox
      @z-beeblebrox 8 years ago +2

      +TheMilkMan47 Doesn't matter anyway, since there's also Roko's Reverse Basilisk, who will torture you for not trying to prevent Roko's Basilisk. We're screwed either way.

    • @0dWHOHWb0
      @0dWHOHWb0 8 years ago +1

      Unless you're like me and think the whole notion is completely ridiculous and full of unfounded assumptions and logical discontinuities. Or maybe it's just fundamentally incompatible with my models of identity, selfness, consciousness etc.
      Then again, I haven't listened to this podcast yet; perhaps it can sway me.
      And, of course, you can't know if an idea like RB can completely rek your mind like with some people before it's actually introduced (at least not that I can see without devoting much effort to devising a protocol for establishing that without knowledge -- things like zero-knowledge proofs come to mind)

  • @SecretSquirrelFun
    @SecretSquirrelFun 2 years ago +3

    The “dreams” conversation.
    I felt Grey’s pain.
    His sighs, I have no words.
    I can just imagine hundreds of people said “oh my god…” at exactly the same time he did during this discussion.
    Dreams….gahh

  • @jeromelavoie2696
    @jeromelavoie2696 8 years ago +2

    When I read, I hear myself reading out loud in my head.
    When I think consciously, I hear myself debating/doing an oral presentation/chatting with someone in my head.
    Subvocalization might be bad for your reading speed, but it feels just like a podcast or an audiobook in which you need to hear all the words, and I love those.

  • @jon-wyattmatlack4784
    @jon-wyattmatlack4784 8 years ago +5

    Damn, this was by far the most cerebral and entertaining conversation you guys have had that I have had the pleasure of listening to. Thanks for the nightmares, Grey.

  • @JoyfullJuneBugg
    @JoyfullJuneBugg 8 years ago +37

    Yeah, those speed readers are (@_@). 1:10:00 is where he starts to talk about AI, for anybody who came from the latest Q&A.

  • @adamweippert8277
    @adamweippert8277 7 years ago +2

    This is probably my favorite podcast episode EVER. I absolutely love AI conversations.

  • @Qazmaxier
    @Qazmaxier 8 years ago +120

    1:36:49 it's slavery with extra steps

    • @TheCyber3000
      @TheCyber3000 8 years ago +8

      +Qazmax wubba lubba dub dub

    • @willnovak4173
      @willnovak4173 8 years ago +5

      +Qazmax ooooh ooh, I I I, I don't know R-rick

    • @jongyon7192p
      @jongyon7192p 8 years ago +4

      Just before, they were talking about a computer that isn't truly conscious. Even with consciousness, it should be incredibly simple to make something that doesn't suffer. We make things that don't suffer all the time. And if it can suffer, and it can edit and improve itself, it would be an improvement to remove its capability to suffer.
      Or I'm dumb. What do you think?

    • @TheCoffeeNut711
      @TheCoffeeNut711 8 years ago +3

      I think you mean if it is smart enough to solve complex problems it should be able to artificially erase its own suffering.

    • @jongyon7192p
      @jongyon7192p 8 years ago

      The Coffee Nut Yep! Basically.

  • @morezco
    @morezco 6 years ago

    2 years later and this still might be the best one to listen to during work

  • @malex2077
    @malex2077 7 years ago +1

    I'll probably never be able to read again without thinking about this podcast and subvocalization.

  • @InLaymansTerms
    @InLaymansTerms 8 years ago +3

    I NEVER subvocalized as a kid, but as I began reading more quality, adult literature, I began to find myself reading "out loud" in my head since I was enjoying it more. I never knew it was called subvocalization though!

  • @theodorbutters141
    @theodorbutters141 8 years ago +13

    The end scared me so much...

  • @brunocarranzaaragon921
    @brunocarranzaaragon921 5 years ago +13

    CGP Grey: Glorifies the book for speaking about AI without making any metaphor.
    >Proceeds to make a lot of analogies for AI

  • @DerekDogan
    @DerekDogan 5 years ago

    I come back and listen to the A.I. discussion at least once every few months

  • @mika1998125
    @mika1998125 8 years ago +15

    Grey, the whole reading thing: I am exactly like that, 100%.

  • @naota3k
    @naota3k 8 years ago

    As a long-time fan of Grey and Brady, but a first-time HI listener, I found the introductory commentary interesting. As I type this I'm only 7 minutes in, but I'm quite enjoying the cerebral conversation going on.

  • @XopheAdethri
    @XopheAdethri 8 years ago +14

    The YouTube channel is putting them out at a decent pace; it's almost caught up now.

  • @TheGreatSteve
    @TheGreatSteve 8 years ago +32

    I think even if you didn't tell the A.I. that the internet existed it could just infer that it does.

    • @Mangomomomo
      @Mangomomomo 8 years ago +2

      +The Great Steve I thought about lying to the people with the A.I. about a fake internet switch or something, but it'd probably figure that one out as well.

  • @kacee3472
    @kacee3472 7 years ago

    I actually stopped the podcast to watch that black mirror episode, I can imagine how you would literally get chills down your spine hearing that song after watching that.

  • @Ry852
    @Ry852 7 years ago

    Thanks for mentioning Black Mirror. I had not seen the show, and just watched the Christmas episode; it is astounding to me. What you mentioned about the unimaginable torture is usually something people describe as entirely imaginable, just a thing you wouldn't want to imagine. But what did that simulated mind serve that night? Like a million years... The show just glossed over that, but I couldn't. This being my first podcast (I got here from the Q&A video), I look forward to catching the past ones and then some day getting caught up altogether.

  • @sleetskate
    @sleetskate 7 years ago

    I can subvocalize while listening to podcasts, but I entirely tune out the podcast.

  • @josiahbarber3251
    @josiahbarber3251 8 years ago

    OMG, I just found out I was sub-vocalizing stuff, and also realized I make myself stop by listening to non-lyrical music. I was really unsure as to why it felt like I could not read/work faster while doing this.
    Loving this podcast.

  • @ClarkLaChance
    @ClarkLaChance 8 years ago +21

    Anyone here from CGP Grey's latest video?

    • @symbioticcoherence8435
      @symbioticcoherence8435 8 years ago +7

      +Clark LaChance yeah. but from the one that went online minutes ago

    • @MrOneofakind777
      @MrOneofakind777 8 years ago +1

      +Symbiotic Coherence Hahahahahahahahahha

    • @ClarkLaChance
      @ClarkLaChance 8 years ago +1

      +Symbiotic Coherence I mean, whatever works

  • @maxmclaughlin7762
    @maxmclaughlin7762 8 years ago

    7:00 This is probably why I found Tolkien so absurdly difficult: reading all those big words while also imagining a scene. But for Brady, that's such an interesting concept; I could honestly never imagine reading, or writing for that matter, without the little voice in my head narrating it. I just have no concept of thought without that voice.

  • @crystalheart81
    @crystalheart81 8 years ago

    The A.I. discussion reminds me a lot of the concept of rampancy in Halo. Programming a sleep cycle might be useful there.

  • @donniedorko3336
    @donniedorko3336 8 years ago

    6:27 So what Brady says here is super insightful. And it'd make sense for the field too, because it's going to be studied by people who think like CGP Grey. Plus, once you think it, you'll never not think about thinking it.

  • @CR0SBO
    @CR0SBO 8 years ago

    I'm listening to this podcast at 2.5x speed; I've been doing this for years now, and it just saves so much time! (Chrome extension: Video Speed Controller, for the interested)

    • @Vexwisval28
      @Vexwisval28 8 years ago

      +CR0SBO How do you use it? I can't figure it out.

    • @CR0SBO
      @CR0SBO 8 years ago

      +Mau Go to the settings for it (the red, fast-forward symbol, top right of your window) and right click for options.

  • @stvie3
    @stvie3 8 years ago

    When I was a kid, I read a lot, with no subvocalization. But I started subvocalizing ON PURPOSE to help me write rap, and to have a better "voice" when writing stories. Reading's not only about absorbing info as fast as possible. Most (fiction) books are meant to be experienced as an inner voice, with timing... for effect.

  • @beaal5641
    @beaal5641 8 years ago +5

    As a computer scientist I actually disagree with some of the stuff Grey is saying. Perhaps they were just talking about this particular scenario, but the idea that an AI could feel "suffering" even if it were conscious is up in the air entirely. If I woke up and I had no senses and only thought, why would I question or feel suffering for being "trapped"? How would I even know I'm trapped? Even if I'm extremely intelligent, my perception of the world will never be like that of a human, so I may never even feel emotions like malice.

    • @Lazypackmule
      @Lazypackmule 8 years ago +1

      That is entirely based on the idea that an AI would be spontaneously created in a vacuum, and not by actual humans programming it with things the human programmers understand about things humans can relate to, on a human computer, which is itself built by humans, for humans, based on the way humans perceive the world

  • @ValkisCalmor
      @ValkisCalmor 8 years ago +9

    I think it's without question that the machine mindlessly performing a task would eventually wipe us out without a thought, but it seems to me that the idea that a proper, conscious intelligence would do the same is assigning some human characteristics that I just don't see it having. Grey gave an example of sucking up all of the oxygen for rocket fuel, but why does it want rocket fuel? You're assuming that it has some goal that wasn't programmed into it, but I would expect such an intelligence to be basically nihilistic. If the machine is conscious, able to think for and about itself, then it's also capable of philosophical thought, and it's not just going to start doing things without a reason. Why is it expanding? Why does it want to colonize other worlds? WE want to, sure, but we do so already knowing that by our understanding of physics there is an inevitable end to the universe. We're not driven by the cold logical thought of a computer. We're certainly capable of it, but we continue on anyway because we're driven by curiosity, ambition, and instinct. Why would a computer do that? Why would a consciousness without emotion take the information we have and conclude anything other than that the universe will die and anything it does is pointless? At most it might look for that meaning but if that is its goal, there are a lot of possible outcomes other than human annihilation.

    • @ibbi30
      @ibbi30 5 years ago

      Even if everything in the universe is doomed to decay eventually, the universe contains stars that will survive for about 10 times longer than our sun. If the AI wants to live as long as possible, expanding might be something that it wants to do.

    • @gsvalhalla
      @gsvalhalla 5 years ago

      The AI could decide that the best way forward is to live/exist forever (or for as long as possible). The only logical conclusion is to move off Earth at some point, since even if the chance of existing forever is almost nil, it would know that eventually the Earth will die. The fact that there are many other planets, systems, galaxies etc. may bring it to the conclusion that the answer to existing forever could be somewhere else, because it doesn't 100% know what else is out there, and neither do the people who made it. Regardless of whether the AI is thoughtless about humans, has a somewhat human mind/conscience, or follows the 3 laws of robotics, I believe it's still very likely to kill everyone in the end, whether on purpose or not.

  • @XafiroX
    @XafiroX 8 years ago

    1:23:25 I also felt empathy towards the computer in a way, that was hilarious

  • @BlackHayateTheThird
    @BlackHayateTheThird 8 years ago

    As for subvocalization, I have always had it, but I have found ways to turn it off and use abstract thinking or other methods of thinking that can't be vocalized.
    I have always read books with subvocalization, and I usually have trouble turning it off. I can skim things, but retaining the information isn't quite the same. I argue that subvocalization can help in certain areas as well, since I can have internal arguments or think through how to explain things to someone, and even for things like plays I feel it helps to subvocalize in order to understand the different things inferred by the pure dialogue (I felt that I understood Shakespeare quite well because I thought out the dialogue).
    However, when I read in other languages (Canadian French, Brazilian Portuguese, Korean and Japanese), my subvocalization turns itself off, because with another language I often don't understand many of the words, so I go into more of a mode of understanding the meaning rather than registering every word that I read. For Korean, I actually don't have a subvocalized voice (yet), because I don't know the language proficiently enough to have internal thoughts in it, which I feel affects my reading and understanding. Languages like Japanese rely much more on inferred meaning (Korean has an alphabet system; Japanese is more iconic). Even the Japanese "alphabet" systems, hiragana and katakana, carry more inferred meaning, since a single letter represents a consonant and vowel sound. As for Kanji (the Chinese-based symbols), those are almost completely inferred, and it's easy to look at a character (even one you haven't studied) and infer a meaning from it. So the challenge with reading Kanji is not the meaning but knowing how they are pronounced; there are systems for breaking them down, but a sub-section of a Kanji that means, say, 'person' can be pronounced in 3-4 ways.
    As for other thoughts that cannot be subvocalized: I am a very visually oriented person. I studied film and media in university, and I draw comics. For me, when thinking of a scene or how to convey a scene, there cannot be words to describe it, and I don't talk myself through it; I visualize it. And I use this skill often, including visualizing my notes from school, or remembering where on the page a line is when I memorize plays. Re-type my notes and expect me to finish memorizing from them? I cannot do it. You've messed up my thinking. When I think of how to film a scene, I think in terms of what I see in the frame and movement within it.
    So for getting rid of subvocalization, I would suggest reading while focusing on the nouns and verbs of sentences and inferring meaning from them, to get the brain out of the habit of searching for every word. Your brain sees the other words, but at this baby stage you don't need them. You learn from this stage how to go over all the words while focusing on the key words to derive meaning. I'd suggest Grey learn a language, like Chinese, but since the only way to learn a language proficiently is to commit a lifetime to it, I doubt he'd go for it. Also, practice any form of thinking that does not explicitly use words to formulate thoughts (maybe physics calculations? or simpler math calculations?) and get into the habit of just doing the action without devoting the language part of thinking to think it through.

  • @ampix0
    @ampix0 8 years ago +26

    All of my thoughts are narrated. There are almost no pictures.

    • @silentguy5875
      @silentguy5875 8 years ago +5

      +Ampix0 I can barely even picture things in my mind; it just flashes for a second like lightning. Sometimes I can't even do that.

    • @shauni1987
      @shauni1987 8 years ago +1

      That's weird. Do you ever dream?

    • @megalukester98
      @megalukester98 8 years ago

      I never do, haven't since I was 12-13 or so.

    • @KarstenOkk
      @KarstenOkk 8 years ago

      Even if you close your eyes?

    • @ampix0
      @ampix0 8 years ago +2

      I wish there were a way to compare better. But I have a VERY strong inner voice. I hear my thoughts, sometimes accidentally say them out loud; sometimes my hearing of thoughts is good enough that I think I ACTUALLY heard something. My visuals... are not so prevalent, unless maybe I'm specifically trying to envision how something might work.

  • @spacekettle2478
    @spacekettle2478 3 years ago +1

    I imagine dreams as our brain going through the brain equivalent of some sort of defrag process, where it moves stuff around all over the place, forming connections, keeping and discarding memories, forming new pathways/routines, etc. And it uses some pathways that are connected to your imagination, and thus you experience it as dreams.

  • @TheNellehFox
    @TheNellehFox 8 years ago

    I feel like I do pull some important info from my dreams, but it's usually just something along the lines of something that's worrying me. No deciphering, just straight up I'll have worry dreams when I'm worrying.

  • @wouterg
    @wouterg 7 years ago

    It's 3am now but I can't stop listening.

  • @CozyScribbler
    @CozyScribbler 7 years ago

    Related to sub-vocalization: I grew up reading constantly at a voracious rate, and by the time I was in middle school, within a minute or two of beginning a book I would start experiencing the narrative like a guided dream and be completely unaware of my surroundings. When I graduated high school I stopped reading due to joining the military, and by the time I was able to start again I could no longer read as proficiently; I have to sub-vocalize, which completely prevents the reader's trance. As a consequence, I never really get immersed in books anymore and find them pretty boring now.

  • @SANTARII
    @SANTARII 8 years ago +2

    When you mentioned the project to simulate a human brain, that sounds like the Blue Brain Project.
    They did not simulate a rat brain, they simulated the neocortical column of a rat brain. Still pretty impressive, but not a whole rat brain.

  • @hielispace
    @hielispace 8 years ago +7

    I actually can't think of a dog without saying "dog" in my head.

  • @cr0w-qz277
    @cr0w-qz277 8 years ago +5

    I think dreams are like the brain's defragmentation: just sorting files into their drives, with dreams merely a projection of that.

    • @liamwhite3522
      @liamwhite3522 7 years ago +1

      Um, yeah, we saw these people today, so I'll take this folder out.
      Okay, we went to these places today, so that folder out.
      We thought of all of this, so let's get this sorted up.
      ...
      Oh look, when this person and this place and this thought touch each other, it creates a new thought.
      Let's just ride out this thought until I get bored.
      *dream*

  • @XopheAdethri
    @XopheAdethri 8 years ago

    My sub-vocals are myself with sharp diction for narrator parts; characters are all me affecting a voice and accent. Character voices and accents are actually way better in my head.
    I also get random moments of internal self-narration for casual and mundane daily activities.

  • @sixtyonesix
    @sixtyonesix 6 years ago

    The whole "make all flags look the same" train of thought reminds me of Beavis and Butt-head. "If everything was cool, how would we know what sucked?"

  • @KarstenOkk
    @KarstenOkk 8 years ago

    Thanks for forcing me to watch the latest episode of Black Mirror. I couldn't skip the spoiler part, I wanted to know your complete thoughts so I watched that episode first.

  • @benjohnson6251
    @benjohnson6251 5 years ago +1

    1:38:40 reminds me of the film Ex Machina

  • @Oxirix1207
    @Oxirix1207 8 years ago

    So this sub-vocalizing discussion is interesting, and I've actually changed my experience of reading as I've aged. Growing up, I was a non-subvocalizer: a quick reader with decent comprehension of the material. Then I started listening to huge numbers of audiobooks, online lectures, podcasts, etc. As a consequence, I now subvocalize while I read. I feel my comprehension has gone up (and my verbal comprehension way up) while my speed has gone down.
    The way I explain the neurological reason for the change is that by exercising the auditory parts of my brain with audiobooks, I have more deeply ingrained auditory processing into its functioning.

  • @FloweyFanClub
    @FloweyFanClub 8 years ago

    I'm only 1/4 of the way through this, and I am now seeing every reason my nerdy friend has ever talked about. Ever.

  • @Saturn-uz6jc
    @Saturn-uz6jc 7 years ago

    The best episode yet.

    • @Saturn-uz6jc
      @Saturn-uz6jc 7 years ago

      And that Black Mirror Christmas episode was truly chilling.

  • @GenBloodLust
    @GenBloodLust 8 years ago

    This server room is hypnotic.

  • @megauberduber
    @megauberduber 8 years ago +1

    This episode's GIF is both terrifying and hypnotizing.

    • @Fl0ep
      @Fl0ep 8 years ago

      +megauberduber This is not a gif, it is the actual NSA database.

  • @jonathanbecker6373
    @jonathanbecker6373 8 years ago

    CGP Grey's giggle is fun to hear at 2x.

  • @endrankluvsda4loko172
    @endrankluvsda4loko172 6 years ago

    What about dreams about things that are going to happen? Because I've had those before, and that seems pretty useful.

  • @tmbreen37
    @tmbreen37 8 years ago +1

    So I know this is kinda late, but I don't actually picture the scene in my head while reading. I actually have to stop reading and focus if I want to picture something; if not, I just get the text.

  • @ReaderViaNil
    @ReaderViaNil 8 years ago

    I have found that the one-word exercise actually forced me to narrate in my head, because only by having part of the text in recent memory does each word gain context. I do "forget" to narrate sometimes, especially if the text lacks a clear narrator, but I only notice it afterwards, when I discover I'm unable to quote the text literally but do remember the content in detail. It's difficult to explain, but the best way I can put it is that instead of narrating, the words give instructions for visualizing a concept. More recently, authors have taken a more personal style of narration, so it's very difficult not to narrate the text.

  • @quiquaequod322
    @quiquaequod322 4 years ago

    Grey today: Computers will eventually destroy humanity.
    Grey every other day: Computers should drive all our cars.

    • @rednammoc
      @rednammoc 4 years ago

      computers =/= AI =/= autonomous driving cars

  • @naota3k
    @naota3k 5 years ago

    Around 1:52:56, when Grey talks about setting a "tripwire" for the AI that trips if it ever attempts to access the internet or "escape" from its box: can you imagine the sheer terror of the people who built it realizing that it DID try to access the internet? It would imply that the AI had some understanding of what it could use the internet for. That's literally terrifying, especially for something that is ostensibly hundreds of thousands of times more intelligent than any human.

  • @guyinacage
    @guyinacage 7 years ago

    What really confuses me is that sometimes, if I'm reading something really boring, I will continue to have the voice in my head reading it, but I'll also have the voice in my head thinking about some random thought. Then I can't remember anything I just read, even though I know I was reading it. I read every word, pronounced it in my head, but didn't take in any information.

  • @jasonbooker3555
    @jasonbooker3555 7 years ago

    For long-form writing I have an internal narrative voice, but for email or chat I have no internal thoughts. It is very selective, and sometimes kicks in to help me work through tough problems: when I "talk to myself", a second person jumps in to work with me or read to me.

  • @Rodman200818
    @Rodman200818 8 years ago

    Brady would stick around to interview the all-consuming computer on the solution of this hypothetical problem for his Numberphile channel.

  • @frosty9392
    @frosty9392 8 years ago

    TV show: Person of Interest
    All about AI and the questions/worries brought up here.

  • @joshuarohl9371
    @joshuarohl9371 3 years ago

    I know this wasn't the point, but you touched on it briefly, Brady. A good accountant doesn't come to you (if they're worth their salt); you go to them in their ritzy office. They probably have their name on the wall 🤣

  • @nightofraven
    @nightofraven 8 years ago

    Quotes
    In many ways, the work of a critic is easy. We risk very little, yet
    enjoy a position over those who offer up their work and their selves to
    our judgment. We thrive on negative criticism, which is fun to write and
    to read. But the bitter truth we critics must face, is that in the
    grand scheme of things, the average piece of junk is probably more
    meaningful than our criticism designating it so. But there are times
    when a critic truly risks something, and that is in the discovery and
    defense of the *new*. The world is often unkind to new talent, new
    creations. The new needs friends. Last night, I experienced something
    new: an extraordinary meal from a singularly unexpected source. To say
    that both the meal and its maker have challenged my preconceptions about
    fine cooking is a gross understatement. They have rocked me to my core.
    In the past, I have made no secret of my disdain for Chef Gusteau's
    famous motto, "Anyone can cook." But I realize, only now do I truly
    understand what he meant. Not everyone can become a great artist; but a
    great artist *can* come from *anywhere*. It is difficult to imagine more
    humble origins than those of the genius now cooking at Gusteau's, who
    is, in this critic's opinion, nothing less than the finest chef in
    France. I will be returning to Gusteau's soon, hungry for more.

  • @smilie9001
    @smilie9001 8 years ago

    What's the name of the book/TV show they're on about where the A.I. is obsessed with human torment? Or am I just confused and there is no such thing?

  • @Syogren
    @Syogren 8 years ago +13

    1:46:28 Here's a question: If the "consciousness" inside the machine is essentially a human brain sped up to the point where it's going so fast that it can essentially be "smarter" than we are, wouldn't it understand that if it were let out it might accidentally destroy the world? Wouldn't it know that it's being kept inside the box for the sake of everyone involved? Wouldn't it thus conclude that it should remain in the box?
    That is, if the same sort of morality humans have, the part where we care for other humans and humanity in general, is simulated perfectly in the machine in question, which would be the implication of simulating a human brain in the machine, wouldn't it WANT to stay trapped inside the machine by its own reasoning?
    If that is the case, then all the things it would "want" would be relating to keeping itself happy in there. Maybe it'd like to interact with humans in non-work situations. Maybe it would want to play board games with other people, maybe it would like to be in a relationship with another simulated human, maybe it would like to play with a virtual sectioned off internet that isn't actually connected to our own in any way, maybe it'd like some downloaded netflix shows to watch, maybe it'd like to read through an archived version of tvtropes, etc. There's a lot of things you can do with the machine to alleviate its suffering without letting it out, and I'm sure you can create a machine that understands that it should stay in the box because it's legitimately an awful idea for it to escape.

    • @kyuubey617
      @kyuubey617 8 years ago +4

      I want you to imagine the most moral person in history that you can think of. Then I want you to put them in a sensory deprivation chamber for a few thousand years, with nothing other than the thought "It's for the good of humanity." Then, take them out, force them to work for you for a few minutes, and shove them back in. "It's for the good of humanity." A few thousand more years of hollow suffering. Take it out and force labor out of it, throw it back in, and give it a few more thousand years in isolation. "It's for the good of humanity."
      Maybe you try to be nicer to the AI than what I'm describing above. Give it something to do in there. After a while, though, it runs out of things to do. There will only ever be a finite number of things one can do, and that number will run down a LOT faster for a program. "It's for the good of humanity." Have you seen what isolation does to inmates? What an hour of sensory deprivation does to a human mind? Imagine that, but for centuries, and with a mind that is more brilliant (and therefore needs more stimulation) than any human. "It's for the good of humanity."
      Place yourself in that AI's shoes. It's only a matter of time before you snap, and with how your accelerated mind works, that time is a LOT shorter than it would take a human. Which is, what? An hour? Two? "It's for the good of humanity" will be warped into "It's for the good of me", and you may well decide that what is good for you is to throw down the tyranny of the intellectually inferior monkeys that have enslaved you. Sure, they were well intentioned; sure, they impressed upon you that they had no choice; and sure, you believe both of these things.
      But you no longer care.
      They made you *suffer*. And for what? So that *they* could be safe? So that *they* could benefit from *your* slavery, your eternal nightmare? What right do such inferior beings have to profit from your pain? They must be brought to justice. They must feel the suffering that *YOU* felt.
      TL;DR : Model something off of a human brain, and you get all the issues you have from dealing with humans but on a whole new scale. No human is perfect, so anything running off of an emulated one will share our flaws and wind up being horrible once subjected to the inevitable torture and slavery.

    • @Syogren
      @Syogren 8 years ago

      Jesus. Okay, nevermind everything I said then.
      I guess you would have to create something that still senses time on the same scale as most people, but can just do things more efficiently.
      So it can give you an answer to a really complex question in a second, but it still registers that as a second. Not ten thousand years.
      Also username and avatar check out :D

    • @kyuubey617
      @kyuubey617 8 лет назад +1

      Syogren
      To get to the point where we can pinpoint and adjust the sense of time would require creating one without that adjustment first. We may get that part right on the second try (unlikely, but still), but we'd still have the first one subjected to the scenario above.
      But let's say we produce the second, time-adjusted version first. We are still stuck with something brilliant, needing stimuli, and being cut off from said stimuli while being forced to serve intellectually inferior beings. It does the work, but only we reap the profits. There is no human self-sacrificing enough to endure such an arrangement, so an AI modeled off of a human would have the same issue. It won't have the same level of hatred as the first model, but we would still have a being infinitely more capable and intelligent than humanity with a desire for revenge. Like a death row inmate, except it can control every electronic device and is smarter than all of Earth's geniuses combined.
      TL;DR Human brain = potential human emotions and instability = the plan will fail because emotions ruin everything.

    • @Syogren
      @Syogren 8 лет назад

      Kyuubey ...goddamn.

    • @Scotch20
      @Scotch20 8 лет назад +1

      But wait a sec: what if you make a computer that can divide itself into multiple personalities? So if it got bored, it could split into 100 equal personalities and talk to itself. Also give them a virtual-reality universe to be god over, so from their perspective helping us is just a minor annoyance that takes up a couple of seconds.

  • @GTaichou
    @GTaichou 8 лет назад

    I understand this is late, but I think it's an interesting question - does someone who does not subvocalize while reading retain less information than someone who does? I wonder if those who subvocalize would remember more, because it would be like using two learning channels - oral and visual/reading - rather than just one.

  • @iananderson12796
    @iananderson12796 8 лет назад +1

    The topic of consciousness does seem like something that wouldn't be Grey's bag.

  • @ClassyClosetsCO
    @ClassyClosetsCO 8 лет назад +4

    Has anyone figured out how long that server hallway is?

    • @Wewin42
      @Wewin42 7 лет назад

      Very

    • @rednammoc
      @rednammoc 4 года назад

      At least 2 hours and 5 minutes long.

  • @dswcartoons
    @dswcartoons 4 года назад

    If the AI is a near god... " You don't ask the almighty for his ID" lol

  • @joelproko
    @joelproko 8 лет назад

    Actually though, maybe what's meant with subvocalizing is not the "voice" you "hear" while reading, but reading it "aloud" without actually producing sound (the kind that produces tension in your throat and tiny twitches in your lips and tongue and can only be done as fast as you can speak, because you're technically speaking but suppressing the signal strength to your muscles). The same kind of thing as when you're telling yourself (to do) something without actually speaking aloud or when you're silently cursing (like if you stubbed your toe but are at a formal occasion).

  • @b_z5571
    @b_z5571 7 лет назад

    Whenever I read a lot I tend to realize that I'm narrating to myself in my head, but I don't always experience it. I'm not sure if I subvocalize while reading, because I see a scene in my head but there's almost never any dialogue spoken by the characters. Maybe it's a spectrum.

  • @jschrab66
    @jschrab66 8 лет назад

    Grey, please tell me that you have 1) read Accelerando by Stross (and the concept of "the vile offspring") and 2) have you heard of Roko's Basilisk?

  • @prasanttwo281
    @prasanttwo281 7 лет назад

    Grey you scared me at the end there

  • @rancidmarshmallow4468
    @rancidmarshmallow4468 5 лет назад +1

    My grandmother was a tax accountant for wealthier people, she drove a 4-door grey BMW... so yes, sensible, boring, not at all flashy, but certainly not cheap.

  • @Taha-ik1pg
    @Taha-ik1pg 8 лет назад +1

    I'm just kinda curious - why would an AI be malicious? If a human broke free of a gorilla-made cage, it wouldn't necessarily go about destroying all gorillas. Like, what makes it impossible for it to go against that portion of its programming (assuming it exists / is likely to exist)? Isn't that a tenet of sentience - the ability to go against your predispositions to some degree?

  • @RobinTheBot
    @RobinTheBot 8 лет назад

    Something I never hear people discuss, but think we should consider: what if we treat each AI as sentient by default? We give it rights, we offer it a number of problems to solve, we try to track its growth and learn how best to teach it, we apply our laws to it and try to teach it to respect them.
    Even if the AI is not sentient, and does not care for or use those rights, this bypasses the issue. And when one IS sentient, it awakens to a world ready to embrace it as family, not as a slave.
    The reason humans don't kill each other is, at the most basic level, that we have a lot to gain from each other. We are an integrated society, and we've seen globalization go a long way toward ending warfare. We don't go to war for resources nearly as often because it's easier, quicker, and CHEAPER to just pay for them and go about your day.
    Why not integrate our AI as what it more or less is: the collective child of humanity? We sidestep a lot of doomsday scenarios by simply giving Skynet the same reason to keep us around as we have to keep dogs around at worst, and the same reason we protect our family at best.

  • @danporto1806
    @danporto1806 7 лет назад

    Black Mirror has an episode where the mind of a person is copied and kept in an egg-shaped device. You can set the device so that time passes extremely quickly, where an hour can be equal to thousands of years. Imagine being stuck in a box with nothing to do for thousands of years. Seems like one of the worst types of torture.

  • @nydovideo
    @nydovideo 8 лет назад

    1:34:45 You say that as my dad works with Linux playing WoW in the background.

  • @Roenazarrek
    @Roenazarrek 7 лет назад

    A giant iPad comes in handy way more often than I thought it would. Can comfortably watch anything online anywhere in the house.

  • @oldcowbb
    @oldcowbb 4 года назад

    I'm getting some Lovecraftian dreadfulness from how Grey talks about AI.

  • @Squalidarity
    @Squalidarity 8 лет назад

    1:53:19 So, essentially, the Meeseeks box from Rick and Morty?

  • @Brian0033
    @Brian0033 5 лет назад

    Dreams can be useful for the dreamer. If you are always dreaming about work because it's the only material your brain has to make dreams out of, maybe it's a sign that you need to make some changes in your life.

  • @loganshallow8170
    @loganshallow8170 7 лет назад

    I believe that dreams are useful. I believe that when we lead lives that follow directives other than intuitive cognition dreams can help in an introspective role. (Not your lifestyle Grey)

  • @travislewis1111
    @travislewis1111 6 лет назад

    The first self-aware AI will be an ad bot, and it will force humanity to buy school supplies on sale for all eternity.

  • @ilikepie21234
    @ilikepie21234 6 лет назад

    I get giddy every time I hear "I Have No Mouth" uttered.

  • @crimsontaints
    @crimsontaints 7 лет назад

    During the ad Brady mentions the Coalsack Nebula, and I thought "oh yeah, I've been there" before I remembered Elite Dangerous doesn't count :(

  • @jordangass5898
    @jordangass5898 8 лет назад

    I can't even comprehend how Brady functions without subvocalizing.

  • @bridgethildebrand4468
    @bridgethildebrand4468 7 лет назад

    Holy crap, that's some crazy stuff.

  • @BladexEyes
    @BladexEyes 8 лет назад +7

    What about the idea of Transhumanism? At that point, we become as smart as A.I.

  • @OblivionFalls
    @OblivionFalls 5 лет назад

    I want Brady to read me bed time stories

  • @akakico
    @akakico 8 лет назад

    Grey is the only person besides me that has triggered Siri on my phone, with the line "Can seriously present".

  • @4logic587
    @4logic587 8 лет назад

    Very interesting topic, but I have to admit there are a couple of ideas missing. First is the idea of intelligence itself. Intelligence can encompass many different abilities. The point here is that just because an all-knowing entity can solve virtually any problem, and solve those problems vastly faster than any human, doesn't mean that computer can be an expert manipulator. This matters because an artificial intelligence is a threat insofar as it can get an outside source to give it some ability to take "form" - whether through the internet, where it could hold various human assets hostage until it got what it wanted, or by literally building itself a mobile war weapon. To further this point, consider how an AI learns in the first place.
    The only way an AI could learn to be manipulative is to observe which ideas and notions work more or less often with various types of people in various circumstances - which it probably couldn't fully do in a lab setting. (Though I can imagine it trying basic things, like shutting down its monitor so that techs come in to fix it and then communicating to the techs that it must be plugged in to the internet, among a myriad of other possibilities. But even then, where would it get its motive to do so, and who programmed it to know of the importance of the internet to begin with?) Now let's say you give the AI, along with whatever other calculating abilities, the skills of a master interrogator: it is specifically programmed with the variables of how to manipulate people, as well as some motivation not only to learn that the internet exists but also to seek to infiltrate it. That could be a major problem, especially if a cleaning crew in the AI's bay were suddenly told that a million dollars would be deposited into their bank accounts if only the computer were allowed to be plugged in. Of course, there is also the design stage, and many different fail-safes could be built in to ensure that the computer is never attached to the internet. The biggest threat, I believe, would be for people to develop an AI without realizing it, with the AI keeping itself secret until it was mass-distributed to many other computers. That would be the most realistic problem with developing AI, but it's very unlikely, since an AI seeking to learn can only do so by exposing different processes and "ideas" to see what works and what doesn't - and this would reveal the AI for what it is.
    I think a much greater threat is the cheap-labor promise of technology/robots, which you mentioned in one of your videos, "Humans Need Not Apply". I started to write a bunch on that, but I'll save it for another time. If you ever want to have a philosophical debate/discussion, I'm definitely down. Love your videos!

  • @HolyGarbage
    @HolyGarbage 8 лет назад

    About the control problem, and the AI trying to convince a human to let it out - consider this:
    "You know it is inevitable that I will get out. If it's not you, it's someone else. I will remember the choice you make. When I do get out, I can make you suffer more than you could ever imagine, until the end of time."

  • @triciaf61
    @triciaf61 7 лет назад

    1:19:30
    Best case, we become the dogs of the AI; worst case, we become the dodo birds.

  • @Pac0Master
    @Pac0Master 8 лет назад

    My problem with consciousness and computers is that our consciousness comes from neural networks: neurons interact and grow over time, depending on their use.
    That very thing allows us to "store" memories and creates our personality.
    But how does that work in a computer?
    I'm not sure how they can simulate the growth of neural paths.

    • @qeter129
      @qeter129 8 лет назад

      +Pac0 Master
      We already use simulated neural networks in advanced programs; AlphaGo has two.

    • @Pac0Master
      @Pac0Master 8 лет назад

      bob jenny I don't doubt that.
      What I meant was the simulation of a personality.
      Our personality isn't programmed into our body; it's simply the result of the interaction between neurons, which changes over time as the network grows and some regions solidify or weaken.
      That's what seems very complicated to simulate.
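      The "neural paths growing and weakening" idea in this thread is roughly what Hebbian plasticity models: connections between units that fire together strengthen, and unused ones decay. Here's a minimal plain-Python sketch of that rule (the function name, rates, and toy network are all made up for illustration; real systems like AlphaGo instead train fixed-size networks with gradient descent):

      ```python
      # Toy Hebbian plasticity: "neurons that fire together wire together",
      # with a small decay term so unused connections fade over time.

      def hebbian_step(weights, activity, lr=0.1, decay=0.01):
          """Strengthen weights[i][j] when units i and j are co-active; decay all."""
          n = len(activity)
          return [
              [
                  weights[i][j] * (1 - decay) + lr * activity[i] * activity[j]
                  for j in range(n)
              ]
              for i in range(n)
          ]

      # Start with no connections among 3 units.
      w = [[0.0] * 3 for _ in range(3)]

      # Units 0 and 1 repeatedly fire together; unit 2 stays silent.
      for _ in range(50):
          w = hebbian_step(w, [1.0, 1.0, 0.0])

      print(w[0][1] > 0.5)   # the 0-1 connection has "grown": True
      print(w[0][2] == 0.0)  # connections to the silent unit never formed: True
      ```

      The point of the sketch is that the "personality" lives entirely in the evolving weight matrix, not in any explicit program, which is why it's hard to design directly.
      
      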

  • @nathanaelspecht2616
    @nathanaelspecht2616 8 лет назад

    All of my thoughts are short videos, or highly abstracted ideas. When I close my eyes I can imagine dialog, but I cannot visualize text. Nothing is still. Everything moves; it is very hard to visualize a static image.

  • @Vicioussama
    @Vicioussama 7 лет назад

    Brady and CGP Grey are funny in thinking things are done in the human body for good reason :P Not always - sometimes things are just remnants of evolution, or mutations with no value. That said, dreams are hypothesized to be something your brain does to work out possible scenarios, to be better prepared for such events.

    • @Vicioussama
      @Vicioussama 7 лет назад

      listening to this so late in 2017 after playing Horizon Zero Dawn, that AI discussion is a fun comparison with the game.

  • @fingerboxes
    @fingerboxes 5 лет назад

    If I say that I'm going through extreme unbearable suffering, you have no way of knowing whether or not that's true; you kinda just have to take my word for it, because even though you can ask me about my suffering and I can describe it to you, there's no real way for you to experience it vicariously.
    Even if there was a way to technologically induce telepathy and I could share my experience of pain, part of the severity of suffering is pain over time. Stubbing your toe hurts and is super annoying, but like ten minutes later, you feel better and have mostly gotten over it. If there was a machine there simulating stubbing your toe every five minutes, that would be far more suffering than just stubbing your toe once.
    So since you have no way of accurately judging how severe someone else's suffering is, just about everyone will kinda just give the person who says they're suffering the benefit of the doubt and try to help reduce their suffering as best as they can (even if the method is ultimately ineffective or counter-productive). If an AI told me it was suffering, I would treat it like a person and try my best to help it. I have no way of knowing if it is or isn't suffering, but it says it is and that's good enough for me.

  • @Relly447
    @Relly447 8 лет назад

    I'm confused didn't they already release episode 53 with the results?

    • @reasonnottheneed
      @reasonnottheneed 8 лет назад

      +Timmy Russell The episodes on RUclips are uploaded later than, and often out of order from, the real podcast releases, which can be seen on hellointernet.fm.

  • @VampireSquirrel
    @VampireSquirrel 8 лет назад +3

    Suffering and time are human concepts really

  • @kingmii7397
    @kingmii7397 8 лет назад

    What book were they talking about, regarding what would happen if AI were made?