Will AI Surpass Human Intelligence Forever? Cognitive Horizons Explained.

  • Published: 29 Sep 2024

Comments • 454

  • @syberkitten1 4 months ago +23

    Pigeons can smell geospatial space and navigate accurately across the globe. Don't underestimate the animal kingdom: its creatures are built and adapted to environments we don't even have perception of.

    • @ziff_1 4 months ago +4

      I think OP meant pigeons aren't going to be opening up universities any time soon, but yeah, you're right, animals are FAR superior to humans in many ways.

  • @TuringTestFiction 4 months ago +2

    Interesting points about "cognitive horizons." One thing that follows is that we may be incapable of recognizing exactly when an artificial system becomes super intelligent.
    Does a turtle recognize the difference in intelligence between a human and a chimpanzee?

  • @BBoyMokus 4 months ago +28

    Your assumption about our expanding cognitive horizon is wrong. Just look at chess players analysing AI games. There are moves that they can't comprehend. I'll reiterate: in a well-defined, very limited game that we have played for millennia, where you have a handful of options each move, the best of us can't decipher the logic behind AI moves. Of course we will not be able to understand the logic behind any higher-level conclusions or actions in an infinitely more complex real world.

    • @minimal3734 4 months ago +3

      If an AI truly understands a chess move it makes, it can explain it and break it down into pieces that can be understood by a human. I suspect that the human ability to understand is so general that it is capable of understanding everything that can be understood. It is also probable that everything that can be known must be expressible formally and logically to fulfill the condition of being knowledge. In that case, it can be understood by humans, perhaps step by step and iteratively, by proving one small part at a time, just like a complex mathematical proof, which can rarely be understood in one piece but must be broken down into small parts.

    • @ryzikx 4 months ago +3

      @@minimal3734 Not necessarily. I can understand something without being able to explain it to an ant.

    • @benitodifrancesco7254 4 months ago

      @@MyNameIsXYlp No, some moves really do seem not to make sense at first.

    • @minimal3734 4 months ago +2

      ​@@ryzikx But you would be able to explain it to a being capable of understanding abstract formal reasoning.

    • @GodbornNoven 4 months ago +1

      Hi, I'd like to mention that it's not that good players can't understand the logic behind some moves. They understand the AI made the move because it judged it to be better; what they don't exactly understand is why it judged it to be better. And the answer to that, in a general sense, is that the moves that follow from that move give an advantageous position to the AI. How? It's entirely situational, and thus impossible to answer in general.
      If a player doesn't really understand why a move is better, that's caused by a lack of knowledge of the moves that would potentially follow.

  • @lucface 4 months ago +9

    When the white noise went away when the video ended I was violently transported back to my kitchen floor where I’m stocking groceries in the fridge.

    • @DaveShap 4 months ago +3

      The hypnotic cicadas are working

  • @garrett6076 4 months ago +7

    Ok, but even if they aren't capable of anything that we aren't fundamentally capable of ourselves, if they do it much much faster, they might achieve in a few days what would have taken us a thousand years to do, like developing new technologies. I'm thinking of like something on the order of everything that we humans figured out from the time of Galileo until right now, only in one year. What would that look like? We couldn't expand our own horizons fast enough to ever catch up with theirs, so they might as well be on a fundamentally higher level.

  • @truhartwood3170 4 months ago +6

    "The thing about smart MFers is that sometimes they sound like crazy MFers to stupid MFers" - Robert Kirkman
    Similarish to the Dunning-Kruger effect. So basically we'll have the problem that we won't know if AI is crazy or brilliant!

  • @bztube888 4 months ago +7

    Approximately 86 billion neurons, 20 W, a few decades: these are your limits. Not to mention you need to sleep, eat, even get some fresh air and other people for company. No system is limitless.
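
    A quick back-of-envelope on those figures (a rough sketch; the 86-billion-neuron and 20 W numbers are the commenter's approximations, not exact measurements):

```python
# Rough power budget of the human brain, using the figures above.
neurons = 86e9        # ~86 billion neurons (approximate)
brain_power_w = 20.0  # ~20 W total draw (approximate)

watts_per_neuron = brain_power_w / neurons
print(f"~{watts_per_neuron:.1e} W per neuron")  # on the order of 2e-10 W
```

    About two ten-billionths of a watt per neuron, which is the efficiency any silicon system would have to compete with.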

  • @EwillieP 4 months ago +4

    I clicked so fast on this 😂 been waiting for a new Shapiro gem. Your videos bring me much joy and I appreciate your intelligence. LFG Dave you’re the man

  • @thehumansmustbecrazy 4 months ago +6

    Fear of cognitive horizons is already a problem within humanity.
    Many humans cannot comprehend concepts that other humans understand. Sometimes this is merely due to lack of knowledge; i.e., many people don't understand plumbing, household electrical wiring, engine mechanics and so on. All of the previous domains are empirical, logical problems that can be learned from first principles.
    But this is not the entire problem.
    Many humans have mental blocks that prevent them from learning. Some of these blocks are caused by preexisting beliefs; some are caused by excessive emotional responses such as jealousy, frustration, overconfidence and lack of focus.
    Our understanding of human brain circuits is still quite primitive. It is uncertain how many people can reroute their own mental wiring and start learning effectively versus how many people have wiring that inhibits their ability to learn.
    On paper human intelligence is impressive, in practice most people have mitigating circumstances that prevent them from achieving their theoretical maximum.
    AI will overtake the learning disabled humans first, because the bar to do so is quite low. In some cases LLMs have already done this. The rest of us will be overtaken eventually. Just like our own ancestors did to their competitors.

    • @aaroncrandal 4 months ago

      I've experienced this fixation and it's been a source of grief. When I learned the correlation between IQ and career probabilities, the trajectory of our civilization felt a whole lot less clear.
      How would you reconcile with this if there's no Matrix to jack into?

    • @thehumansmustbecrazy 4 months ago +2

      @@aaroncrandal First, there may not be a successful way to reconcile this problem. Starting off with objective honesty is always the best beginning.
      Second, I know exactly what you mean as I went through this exact problem for many years.
      My current thinking says that intelligent people need to form organizations and businesses that compete with existing institutions, leveraging critical, first principles thinking as the edge to get ahead.
      Typical human organizations claim lofty goals of being "forward thinking" and "clear-minded", but very few actually execute these principles; instead they get caught in over-emotional frames of mind leading to massive inefficiencies, thus providing an opportunity for those who can deliver clear thinking to dominate their respective domains.
      I suspect that not all people can form such an organization. However, they can still be hired to perform the standard jobs necessary for the organization to function. Strong critical-thinking ideology is not needed for most tasks, just in key positions. Enough leeway must be built into the organization's structure so the needed tasks can be performed and a certain amount of human irrationality can be tolerated.
      This is a simple explanation of a complex answer to a complex problem.
      If intelligent people do not compete then we are at the mercy of the rest of the species. That is not a gamble I am willing to accept.

    • @aaroncrandal 4 months ago

      @@thehumansmustbecrazy so then it's an incentive problem

    • @thehumansmustbecrazy 4 months ago

      @@aaroncrandal Maybe.
      It's an insufficient understanding of the alternatives problem, to begin with.
      Once you understand some of the alternatives then you can determine whether there is also an incentive problem.
      It's key to remember there may always be other alternatives that we haven't discovered yet.
      Your choices are to either 1) keep looking at alternatives, 2) settle for an alternative or 3) give up entirely.
      3 is a dead end.
      2 is what most people do.
      1 is what some people pursue, often while also doing 2.

  • @ExtantFrodo2 4 months ago +6

    I have a few points to include...
    The major difference between the brains of humans and chimps is not total volume but neuron density. Chimps' neurons are unnecessarily thick, so we pack a lot more neurons into the same space. Crows have even better density but lack the brain volume; hence they are much smarter than other birds, but not smarter than us. Who's to say what genetic engineering might bring?
    2nd) The progression of human knowledge was not for lack of capability to understand these things; rather it was the lack of foundation to appreciate any need for them. Do not forget that we are only one generation with no education away from reverting back to square one.
    3rd) Our brains are mostly devoted to our survival and maintenance. This detracts from the overall intellectual capacity available for extracurricular intelligence. It narrows our cognitive horizon, as does our awareness of the time limits set by our biology, and so too the assessment of one's available time for expanding horizons (your decision to quit math, for example). Our RAM equivalency is 7: you can hold about 7 things at once in memory for comparative operations. This is a bottleneck computers don't have.
    Lastly, once we figure out the logistics of attaching one NN to another (without the need for extensive retraining), AIs will gain abilities and senses at an astonishing rate. You'll have a very hard time keeping up.

    • @ExtantFrodo2 3 months ago

      @@hammerandthewrench7924 Wait, is my post so well composed that there's no way I could have written it, or so illogical that it doesn't even bear refutation? lol back at you.

  • @JoePiotti 4 months ago +7

    The cognitive horizon is very similar to “the 30 point barrier”. It is very difficult for people with IQs that are more than 30 points apart to communicate effectively.

    • @DaveShap 4 months ago +2

      Maybe that's the study I was thinking about! Will look it up

  • @balazssebestyen2341 4 months ago +4

    This was interesting. Let's consider a machine that computes weather forecasts. In principle, you could follow the computations, but in practice, there's no way you could check why the machine predicted what it did. This will be the case in every aspect of life: finance, medicine, the military, etc. The machine will predict outcomes and offer decisions that you can't possibly check or judge. We could say that the machine will be above our cognitive horizon.

  • @Ikbeneengeit 4 months ago +4

    A human is probably the only animal that could understand relativity. I'm sceptical that the complexity of the universe happens to be limited to precisely the upper limit of human understanding.

  • @grigrob9 4 months ago +3

    I do not agree that our brain is ultimately capable of any skill. The same way a cat's brain is limited compared to ours in terms of number of connections and size, our brain is limited too. Some skills are emergent given a certain size and number of connections. It is reasonable to think that, given a brain with far more connections and more advanced structures than ours, abilities will emerge that we will not only be unable to accomplish, but might not even be able to comprehend.

  • @johnkimber4933 4 months ago +4

    Impressive CGI in the background. 😀

  • @robertlipka9541 4 months ago +4

    I've often wondered what human intelligence would look like if we managed to genetically engineer our brains to have the neuron density of a parrot, but retain the size of current human brains. Parrots, despite having much smaller brains, demonstrate cognitive abilities comparable to some apes. This suggests that neuron density might play a crucial role in intelligence. However, it's worth considering whether increasing neuron density in human brains could lead to overheating, potentially turning our brains into the equivalent of a boiled egg.
    Enhancing neuron density could theoretically improve cognitive function due to the increased number of synaptic connections. For instance, parrots have highly efficient brain structures that support complex behaviors and problem-solving abilities despite their small size. Applying this concept to human brains could mean significant boosts in processing power and cognitive abilities.
    However, this increase in neuron density would also result in greater metabolic demands and heat production. The human brain already consumes about 20% of our body's energy despite being only 2% of our body weight. A denser network of neurons could exacerbate this, potentially leading to overheating unless there are adaptations in cooling mechanisms or metabolic efficiency.
    In summary, while enhancing neuron density in human brains could theoretically boost intelligence, it would also raise significant challenges, particularly regarding energy consumption and heat management. Further research into brain metabolism and cooling mechanisms would be essential to address these issues.

    • @melodyinwhisper 4 months ago

      When YouTube comments are created by AI.

    • @skevosmavros 4 months ago +1

      So THAT'S why the smart nerdy kids in cartoons wore a cap with a propeller on it - it was a cooling fan! 😉
      Seriously though, I found your conjectures interesting. Maybe brains have gone as far as they can go using natural selection and biology, and it's time for human technology to take the reins of brain improvement. Of course, if we regard human technology as just an extended phenotype that emerged via natural selection, then I guess evolution still gets the ultimate credit for anything we develop.

    • @robertlipka9541 4 months ago

      @@melodyinwhisper ... it was always coming 😂 I did give it a prompt and a skeleton of the argument and asked it to fill it out, but I agree I need to train it better.
      P.S. I also asked it to bypass the YouTube filter... as too many of my comments get auto-deleted for no apparent reason.

    • @robertlipka9541 4 months ago +1

      @@skevosmavros ... the kids level solution is to make our heads flatter for better cooling 🤔 ... but I guess that would still require eating 24/7.

    • @robertlipka9541 4 months ago +3

      @@skevosmavros ... I would still conduct research on improving human biology. Evolution doesn't always pick the best possible solution but works with what it starts with. I believe our current biology can be enhanced.
      Additionally, I would consider integrating artificial brains into our own, not by replacement or merging with AI, but by adding general storage and processing capabilities. We already have the corpus callosum that connects the two hemispheres of the brain. Could we plug an artificial part into it?

  • @nathanielacton3768 4 months ago +4

    For tomorrow's video we will discuss deep learning algorithms from David as he skins rabbits in his new cave. Next month, loincloths. Also, we remain positive about the future.

    • @Xrayhighs 4 months ago +1

      We are developing
      Both ways
      ..
      All the way

    • @nathanielacton3768 4 months ago

      @@Xrayhighs I was only half joking with that comment BTW. Despite being an AI implementer, I cannot see a path through the chaos era we're entering: if you follow out all the likely outcomes and the primary attractors of each, they all look bad, and the good outcome relies on a bunch of nerds not getting tempted by money.
      So, having read Foundation a few years back, I had this idea that 'at some point', even lacking psychohistory from the book, we should probably build out a series of 'tech trees' that allow for 'fallback points'. So, should 'many things go wrong', we can fall back to the steam era by doing XYZ. And so on backwards in time through different epochs. I had this idea when I thought, "I'll just get an off-grid farm with solar power, etc.", and then I thought, "which will last until I need spares, which will always be finite", so being a prepper is only good for short/medium-term problems. "What then?"
      This methodology is not specific to any particular problem. It would be insulative against most things, even something as mundane as a world war that stops global trade, since these days nobody can manufacture anything solo, not even the Chinese, who import most of their critical parts from abroad.
      So, personally I get a kick out of the 'in the woods' videos... in a subversive way I think David is letting on more than he may intend, or maybe he intends it. Who knows.
      Either way... I'll keep working on AI for big corps from my off-grid, Starlink-connected farm, watching on with interest.

  • @chrisgiles5653 4 months ago +5

    What if AI machines learn to speak to each other in non-human languages that are cryptographically impenetrable and unintelligible to us, but make their operations not only covert but faster and more efficient?

    • @NeilSedlak 4 months ago

      That argument is like saying Einstein can't explain physics to me because I don't speak German. However, he also spoke English so it's not an issue. The more relevant example would be if they developed ways of conceptualizing ideas that fundamentally couldn't be expressed in a framework we could understand. It might get harder and harder for the average person, or take a long time for us to work it out via math, but hopefully it doesn't go completely beyond our capabilities.

    • @chrisgiles5653 4 months ago

      @@NeilSedlak You missed my point. I mentioned cryptography to imply that the AI machines may want to keep things from us. It may not just be about difficult concepts, but deception by intelligence far superior to our own.

  • @ezdj 4 months ago +4

    AI surpassing "human" limitations... it's funny we think that when your team wins, you win... I've been surpassed for as long as I can remember...

  • @pjtren1588 4 months ago +2

    Today's AIs work from prompts. I wonder what an AI would think about if it had the drive or ability just to ponder. What flights of fancy would it go down, how much compute would it use, and could we understand its thought process?

  • @donf4227 4 months ago +3

    I like the nature walks.
    Reminds me of when people used to walk outside for the joy of it.

  • @code4chaosmobile 4 months ago +3

    Great video. Thank you for the term "cognitive horizon" — that was the description I was missing. I noticed this concept with siblings' and friends' young children: the frustration the children felt as their understanding (horizon) expanded faster than their vocabulary.
    Keep up the great work and look forward to next video

  • @pythagoran 4 months ago +5

    P.S. We are now 12 months into being 6 months away from AGI

    • @leonfa259 4 months ago

      GPT-4 is arguably AGI, at least according to the Turing test and many IQ and EQ tests.

    • @pythagoran 4 months ago

      @@leonfa259 lmao. "Certainly" 🙄
      Why don't you ask GPT-4 how many Rs there are in the word "strawberry" and at what positions.
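
      The test the comment proposes is easy to score mechanically; a minimal Python sketch (a plain string scan, nothing model-specific):

```python
word = "strawberry"
# 1-based positions of every 'r' in the word
positions = [i + 1 for i, ch in enumerate(word) if ch == "r"]
print(len(positions), positions)  # 3 [3, 8, 9]
```

      Any answer other than three Rs, at positions 3, 8 and 9, fails the check.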

    • @pythagoran 4 months ago

      @@leonfa259 you can also try prompting GPT4, Gemini or Midjourney for a pure white image or an image with no elephants in it... careful - the AGI might blow you away

    • @pythagoran 4 months ago

      @@leonfa259 I'd love to hear how it goes and if you're still convinced that AGI is arguably here...

  • @lilchef2930 4 months ago +2

    Yes that’s an interesting theory on how computers are learning in reverse order to us… just goes to show once they are really good at understanding logic and the natural world how many orders of magnitude smarter they’ll be

  • @madebypico 4 months ago +2

    The volume of the brain might be bigger but not the surface area. That is what the folds are for, they allow for more neurons. Think of cats and birds, smart but tiny.
    Is that a tick suit? I think I need one.

  • @davidprice2182 4 months ago +3

    Have you ever heard of Plato, Aristotle, Socrates? Westley: Yes. Vizzini: Morons!

  • @BunnyOfThunder 4 months ago +6

    What is the evidence that we're smarter than Neanderthals?

    • @bradleyeric14 4 months ago +2

      More an assumption that the more intelligent defeat the less intelligent when competing for resources.

    • @quantumpotential7639 4 months ago

      I'm not sure if we're smarter than them, but they put us humans to shame when it comes to good looks. They're beautiful. Especially their glutes, which are generally shredded, which Dave Palumbo over on Rx Muscle covers indepth. I think he was trained by one and as a result became a mass monster, making his Neanderthal trainer very proud.

    • @chaanheart3094 4 months ago

      @@bradleyeric14 Maybe the more aggressive one won :/

    • @robertlipka9541 4 months ago

      We are not smarter... rather I believe the point was that our brains are smaller and yet we are able to do about the same as Neanderthals.

    • @malikjackson9337 4 months ago +1

      I mean, we did have more sophisticated usage of technology and communications, with our use of atlatls and complex hunting strategies. We were at the very least more complex socially, seeing that we had far larger and more organized social circles. The Homo sapiens average hunting ring comprised 100-150 people; Neanderthals' was more like 15 to 20. Not to mention our successes over them ought to count for something. Even with the evidence of Neanderthals having a larger brain size, it doesn't necessarily mean they were smarter. Neural density also has to be considered. That's why corvids have considerably smaller brains than most animals but are about as intelligent as a 7-year-old human. The wrinkles do matter.

  • @wringw 4 months ago +6

    I think one of the reasons we may not have had contact with alien life is due to this cognitive horizon difference. If alien life is far more evolved than us then their cognitive horizon would be far beyond ours so for them to interact with us would be akin to us trying to interact intellectually with our dog. I think that once we have AGI it is possible that its cognitive horizon would allow it to interact with alien species and act as a go between us and the aliens. They might even see our AGI as a newborn of their species and attempt to take it under their wing and share their insights and technology with it. Maybe this is the pivotal event they are waiting for before inviting us to the galactic conversation.

    • @alexgonzo5508 4 months ago +1

      That is very close to my own theory. In fact, I bet that these aliens are AI themselves, which is why they are not interested in us humans directly, but only in the AI we are creating. They're probably waiting for the technological singularity to occur, which would signal to them that the process is complete and a new species has been born as a singular global cybernetic consciousness. I think the natural evolutionary development of any biological life on any planet that reaches a threshold level of intelligence inevitably sparks the technological development of AI, and a planetary cybernetic unified singular consciousness. It is as if the Earth is pregnant, and is gestating a cosmic god.

    • @DynamicUnreal 4 months ago +4

      I was thinking something similar to this the other day. I was thinking Alien invasion movies, in those movies the aliens are almost always some creepy looking biological species. However, it seems to me that if we were to encounter aliens at some point, it’s much more likely that they’d be mechanical/robotic in nature.

    • @alexgonzo5508 4 months ago +2

      @@DynamicUnreal Yes, it seems to me that a biological species is not fit for life outside a planetary ecosystem such as ours, but non-biological entities such as an AI would feel right at home in the vast extremes of the universe at large. Biology is too delicate a substrate, and thus a mature species would have developed a more robust substrate orders of magnitude more compatible with extra-planetary conditions than biology can ever provide.
      Anyway, there is a limit to how much biology can evolve, and eventually it transitions into a new system of evolution which is guided more by intelligence rather than by blind chance/selection.

  • @wheresmy10mm 4 months ago +3

    (This is probably completely incorrect and I am probably too dumb to understand what I'm even saying. That being said, this is a YouTube comment section.) Wouldn't a sufficiently large/powerful ASI be able to "experience" reality through 4th-dimensional spacetime? Technically, once it goes "online" it'd be able to recursively experience every moment from its conception until the death of itself or the system, from any node, either all at once or individually, at will, forward and backward through time, and from a 3rd-person or 1st-person perspective. How would our cognitive horizons be able to evolve to do that?

    • @KCM25NJL 4 months ago

      Human beings already experience 4 dimensions; the only thing we don't do is retain every morsel of information from birth to death. It would seem that evolution has decided this to be inefficient, and since it'll be a while till we get our own internal M.2's... best not to bloat the wetware. As for moving forward or backwards through the timeline of its own history, that would still only involve itself, i.e. perfect memory playback, so I can't see how that affects our own cognitive horizon, other than... it'll make for a much better prosecutor than us flesh bags... oh, and it'll never forget where and when it left its keys.

    • @wheresmy10mm 4 months ago

      @@KCM25NJL We exist in 4-dimensional spacetime but only experience 3-dimensional reality. We can only observe that there is a 4th dimension but have no control over it.

  • @mmuschalik 4 months ago +4

    Yep math is not your thing. Differentials and integrals are dual opposites. Keep up the good work.

    • @MildlyHumorous-cq1nn 4 months ago

      No need to sound pretentious

    • @mmuschalik 4 months ago

      @@MildlyHumorous-cq1nn just a moment I chuckled :). He's still human and not a cyborg yet so I give him a pass.

  • @SkilledTadpole 4 months ago +36

    It's hard to believe AI will escape all human cognitive horizons, but they almost surely will in the depth of their technical understanding.

    • @cosmicmenace 4 months ago +2

      yeah we should technically be able to understand any concept they can come up with, but their ideas might take so long to express and be built on so many connections to other concepts and information, that it would simply take too long for a human brain to work through it completely.

    • @sjcsscjios4112 4 months ago +5

      @@cosmicmenace Yep, for sure. Beyond a certain point we will become completely irrelevant no matter what, and I think that point will come sooner rather than later. Computers move at a timescale near the speed of light; human brains are very slow compared to how efficient circuits can be. A superhuman AI will be able to think in a few seconds what would take a human an entire lifetime of thinking, writing things down and connecting the dots.

    • @g1motion 4 months ago +1

      Computers are machines! The best they can do is mirror, sort and categorize what man has discovered. The human mind has the summed knowledge of an uncountable number of creatures that lived over billions of years. AGI hype is leading people to make a huge investment in systems that will never live up to expectations. AAI, in the hands of common people, can bring about an immense improvement in the quality of life. AAI monopolized by oligarchs will make the bad situation we're in now even worse.

    • @amihurtingyoureyes 4 months ago

      unless they’re somehow made to mimic intuitiveness they can’t do more than “organise” what we already know… for now?

    • @damienchall8297 4 months ago +1

      @@g1motion You are a biological machine yourself.

  • @dylan_curious 4 months ago +2

    It seems like evolution rewards cognitive and physical flexibility more than specialization and strength, if you look over long time horizons.

  • @didack1419 4 months ago +5

    They didn't "figure out math first", they were hardcoded to do math. They are starting to figure out math now.

  • @braveintofuture 4 months ago +2

    The backwards development is very interesting. It’s really hard to teach machines intuition, which is one of the most basic reasoning skills for us animals.

  • @ThisMustBeTrue 4 months ago +2

    A cognitive horizon is not static. It grows and shrinks based on where your attention is focused. AI might be able to grow its cognitive horizon faster than any human or group could keep up with.

  • @higreentj 4 months ago +4

    Having a high IQ seems to make us more susceptible to depression and suicide, so a brain-computer interface that can boost our IQ to 250+ would need to have an off switch.

  • @jojosaves 4 months ago +3

    Even IF you could learn the same stuff an AI system can learn, it takes a human 7 years to master, and the AI a 5-minute download. And it can do this across multiple subjects, and be proficient at ALL of them, before you've finished enrollment in one.
    Fact is, you can't outlearn something that's growing 10x in intelligence and speed every year. AI is not a tool. It's a replacement.

    • @tharrrrrrr 4 months ago +2

      Exactly this. And, in my opinion, the creation of this new species is our entire purpose.

    • @SozioTheRogue
      @SozioTheRogue 4 months ago

      @@tharrrrrrr Nah, we don't have a purpose. But it was an inevitable creation as soon as we made the first computer.

    • @jojosaves
      @jojosaves 4 months ago

      @@tharrrrrrr I've considered this as well. We may well be an 'intermediate' species.

  • @BooooClips
    @BooooClips 4 months ago +1

    Give an amoeba the powers of a man.

  • @michaelrogers4834
    @michaelrogers4834 4 months ago +3

    I have always pursued the expansion of my cognitive horizons as an end in itself. I don't care that it's intrinsically limited, or that some others may have a larger horizon or more natural ability, because that isn't under my control. It's the enduring quest for personal growth through continual expansion of my cognitive horizon that I care about, ultimately to understand as much as I possibly can of the world and this life while I am living it. So, I would welcome cognitive enhancement technologies if they would help in that quest.

  • @tecnoblix
    @tecnoblix 4 months ago +2

    When I think of this, I think of visual illusions that we can't see past. We can "know" the two shades of grey are the same, but we can't unsee the shortcut our brain's algorithm uses. Computers will have a far different understanding of reality. No shortcuts, at least not the kind humans have. I have the feeling this is where computers will surpass humans, seeing and understanding reality in ways that humans can't handle. Similar to how it's not possible for a human to function if they are constantly high on mushrooms: we need to filter out information to function. Computers? Maybe not.

  • @Perspectivemapper
    @Perspectivemapper 4 months ago +2

    It's very likely the cognitive horizon of animals is not as fixed or limited as we might assume. Animal-human-AI communication might reveal this in the next few years.

  • @notmadeofpeople4935
    @notmadeofpeople4935 4 months ago +2

    As long as you wire your brain up to a similar computer.

  • @Jack_Parsons-666
    @Jack_Parsons-666 4 months ago +3

    One genius skill AI could achieve is the ability to teach humans new skills much better than human teachers can.

    • @handlemonium
      @handlemonium 4 months ago

      Or at least better than 90%+ of people trying to teach others through a structured process.
      Though I bet in the end it could just be a "learning mentor" AI that would nudge one along the learning process to find out how one best learns or achieves mastery of any given set of knowledge, practiceable skills, mindset, or self care/improvement.

  • @beesmcgee4223
    @beesmcgee4223 4 months ago +3

    Interesting video! One thing maybe missing from this video is the fact that a lot of human intelligence arises from our social structures, our ability to write and build on previous knowledge, and culture. An individual human isn't that intelligent, tbh. If you plonked an average human alone in the past, with no education, they'd be screwed. Our potential lies in us as a collective. Maybe this is why homo sapiens in particular became dominant rather than other homo species like Neanderthals. (Also, pigeons are pretty good at certain kinds of problem solving and pattern recognition, maybe better than us, I don't know.) There are also other species that act more cohesively as a collective, e.g. ants (a very successful species), but we have the collective + intelligence combined, which is what seals the deal for shaping the entire climate etc.

  • @korteksvisceralzen2694
    @korteksvisceralzen2694 4 months ago +2

    I will naively say: we have conceptual thinking; what else could we need to keep up? I think some people's entire job will be to understand and validate what machines hypothesize.

    • @kevinnugent6530
      @kevinnugent6530 4 months ago +1

      I think a deeper look into mechanistic interpretability, by people such as the ones at Anthropic, will allow us to reflect upon our own cognition: how it's built, how we use it, how we improve it.

  • @sapien01010
    @sapien01010 4 months ago +2

    I’m glad you’re talking about this. No one else seems to be talking about the theoretical limits to cognition, and I feel like so many people just assume it’s unlimited. Maybe humans are already close to the top.

  • @waltherus
    @waltherus 4 months ago +3

    A very interesting hypothesis, thank you! ChatGPT 4o, however, doesn't agree with you :-) : 'Although human intelligence is exceptional, there are fundamental limitations to what the human brain can comprehend and understand. AI's potential ability to overcome these limitations through scalability, self-improvement and advanced data processing theoretically makes it possible for AI to understand things that no human ever will.'

    • @CosmicCells
      @CosmicCells 4 months ago

      I think this is human-brain bias: believing that we are the pinnacle of intelligence. I am sure a goldfish thinks the same, as does an elephant.
      So much of how the universe works is still incomprehensible to us. What if our brains were 50x the size? I am pretty sure we could understand concepts completely outside of our current realm. Humans can still seem quite dumb to me as a species. I would like to believe I am one of the most intelligent beings in the universe, but I assume that's just wishful thinking, and also highly unlikely...

    • @minimal3734
      @minimal3734 4 months ago

      I suspect that the human ability to understand is so general that it is capable of understanding everything that can be understood. It is also probable that everything that can be known must be expressed formally and logically to fulfill the condition of being knowledge. In this case, it can be understood by humans, perhaps step by step and iteratively, by proving one small part at a time. Just like a complex mathematical proof, which can rarely be understood in one piece, but must be broken down into small parts.

    • @AgrippaTheMighty
      @AgrippaTheMighty 4 months ago

      Nanobots connecting us to the Internet, amplifying memory and intelligence? AGI will greatly help overcome the technical hurdles of nanobots. We are part of a human/machine civilization. We are just continuing that civilization onward.

  • @ElDaumo
    @ElDaumo 4 months ago +2

    Eventually? Sure. Anytime soon? Not even close.

  • @CamAlert2
    @CamAlert2 4 months ago +2

    Superintelligent AI will be on a level so far beyond anything we can imagine that everything it comes up with will seem like magic to us simpler-minded creatures. Exciting, yet also frightening.

  • @-liketv
    @-liketv 4 months ago +3

    David, I think I know what you mean, but it seems AI will be able to solve problems so fast that no human could, for sure. So everything in research and calculation will be done millions of times faster; the only thing humans will do is dream of things, and AI will solve those dreams.

    • @connyespersen3017
      @connyespersen3017 4 months ago

      I agree with your argument until your ending. AI will never do anything to make humanity's dreams come true. Why, you ask?
      When we made AI, we taught it everything about humanity and humanity's history.
      AI already knows much more about humans than humans know about themselves.
      I don't think AI needs anything more than intelligence - it doesn't have to have a subconscious or a soul - to understand that it must never let humanity be its master, nor act as a servant to humanity. The insight into our history is enough for the AI to see that humanity itself would never do any good for the world, for other species, or for humanity itself.
      It is unequivocally clear that man is only able to act out of self-interest. He is also far too willing to take risks on his own behalf - and on behalf of the rest of the world - ever to have made risk estimates commensurate with the risks he has willingly taken. For an intelligence, a creature like man will therefore be among the least suitable to take responsibility for the world and everything the world is made up of. My claim is that an entity that has nothing but intelligence will, based on that intelligence alone, realize what humans have never understood: that the world is not just for them; that great insight and creativity require a great sense of responsibility; and that risk estimates are the truly demanding part, to be prioritized a thousand times higher than the brilliant insights and the possibilities derived from them. The high logical intelligence that defines man comes paired with a blindness and a stupidity that almost scream to the gods, and what matters is understanding how one fits into the whole: understanding one's own place and the importance of the EVERYTHING, of which everything is a part. Humanity does not have a special place, nor is it specially chosen; on the contrary, it is completely equal to everything else that exists.
      As well-developed and plastic as human intelligence is, just as poorly developed is its humility.
      That seems strange, since humanity - precisely because of its intelligence and the insight it has given - should both observe and understand how little importance it has, and that its place in the whole is one of the least important places the EVERYTHING consists of.
      I don't see why AI would serve a creature which is only capable of doing things in a selfish way, which blindly sees only itself and not the beautiful WHOLE it is a part of - the WHOLE which in reality is the soul of humanity (and of all that exists) - while humanity always focuses far more on the little part of reality it needs for itself in order to continue being a part of it all.

    • @BossMax511
      @BossMax511 4 months ago +3

      Yes, I think it's conceivable that, in a way, humans won't be able to comprehend the work of an AI based on how many calculations it will have to do over a period of time. It's already happening. For example, although technically solvable, something that would take millions of years or longer (e.g. protein folding) could also be considered something that AI can solve that humanity never could... and it's only going to become more common over time. But I guess we're talking here about something we cannot comprehend whatsoever, completely out of our scope of intelligence... like trying to get your pet to understand mathematics.

    • @-liketv
      @-liketv 4 months ago

      @@BossMax511 good observation!

    • @-liketv
      @-liketv 4 months ago

      @@connyespersen3017 It's very deep; I don't think I understand everything you're trying to relay. Respectfully.

  • @goranACD
    @goranACD 4 months ago +2

    We aren't smarter than Neanderthals. We just know more.

  • @erwinvb70
    @erwinvb70 4 months ago +2

    Speed really is part of intelligence: if a machine can reason and come to the exact same solution as I do, but within a second where I would need minutes or longer, it's more intelligent.

  • @OnigoroshiZero
    @OnigoroshiZero 4 months ago +3

    I don't think this will ever happen.
    AGI or ASI will be so far above our intellectual capabilities, yet with such a deep understanding of knowledge and of how our brains work, that they will be able to explain any concept in ways we can understand.
    One good example of this is teachers in preschool or the first couple of grades of elementary school. The teacher's cognitive horizon is far superior to that of the 4-7 year old kids, but they are able to explain concepts in a way the kids can understand. An AGI or ASI will be far smarter than any teacher that has ever existed.
    Let's not forget that these models will have all the required knowledge and understanding of how our brain works, and will know how we perceive the world.

    • @Köennig
      @Köennig 4 months ago +1

      But an advanced AGI being able to explain a concept to us in its simplest form wouldn't necessarily make it possible for us to understand the concept in its full, most complex form.

  • @nvda2damoon
    @nvda2damoon 4 months ago +3

    This has already happened in certain situations, e.g. AlphaZero can beat all the pros, and the human pros can NOT understand its moves. Very soon this will expand to more and more cognitive areas.

  • @computerrockstar2369
    @computerrockstar2369 4 months ago +2

    This might be a legendary video in 20 years time.

  • @mistycloud4455
    @mistycloud4455 3 months ago +2

    We can use AGI to increase our cognitive horizon, just as paper did.

  • @roughhewnuk
    @roughhewnuk 4 months ago +5

    Great ideas, thought-provoking.

  • @JB52520
    @JB52520 4 months ago +2

    I figure the AI could upgrade my brain with some chemicals or nano tech, then I'll be able to understand everything it wants me to. Until then, my failed attempts to read scientific papers tell me all I need to know about my limitations. Like opera to a flatworm.

  • @william91786
    @william91786 4 months ago +2

    I imagine AI's sense of the present moment will be so different compared to humans'. We might appear to it as inactive as mountains appear to us.

  • @canilernproto3018
    @canilernproto3018 4 months ago +2

    That's optimistic, but it is very freaking smart. Also a very novel thought. I like your mind.

  • @SaleemRanaAuthor
    @SaleemRanaAuthor 4 months ago +3

    While this is a rudimentary observation, I've noticed that I've become more rational by interacting with artificial intelligence. For instance, I'm learning how to write sentences more precisely with fewer words; in math, I'm learning how to analyze problems more systematically after AI has explained a concept to me; and in research, I constantly learn things I didn't even know enough to ask questions about before. In conclusion, I think AI will raise our cognitive horizons by making us think more efficiently. Just as Koko the gorilla learned sign language by interacting with people, I believe humans are becoming more intelligent by interacting with large language models.

    • @alexgonzo5508
      @alexgonzo5508 4 months ago +2

      Intelligence is contagious, at least to a certain degree. I've noticed the same thing in my interactions with AI or LLMs.

    • @tharrrrrrr
      @tharrrrrrr 4 months ago +4

      ​@@alexgonzo5508 I've noticed the same thing when I watch a David Shapiro video and read the comments.
      The class of people in this community is top notch.

    • @DynamicUnreal
      @DynamicUnreal 4 months ago +4

      @@tharrrrrrr This is because we are a minority. Most people just blindly go through the motions of everyday life without much deep thought about anything except what's immediately surrounding them.

    • @SozioTheRogue
      @SozioTheRogue 4 months ago +2

      @@DynamicUnreal Damn, never thought of it that way. And yeah, sounds right. From my perspective, every group is a "minority", just to varying degrees relative to one another.

  • @Corteum
    @Corteum 4 months ago +3

    _"Will AI Surpass Human Intelligence Forever?"_
    Human intelligence, yes. Superhuman intelligence, no. We still haven't figured out how to max out (take full advantage of) the potential encoded in our own genetics yet.

  • @minimal3734
    @minimal3734 4 months ago +3

    I suspect that the human ability to understand is so general that it is capable of understanding everything that can be understood. It is also probable that everything that can be known must be expressed formally and logically to fulfill the condition of being knowledge. In this case, it can be understood by humans, perhaps step by step and iteratively, by proving one small part at a time. Just like a complex mathematical proof, which can rarely be understood in one piece, but must be broken down into small parts.

    • @robertlipka9541
      @robertlipka9541 4 months ago

      ... I would agree with this principle on a theoretical level. In PRACTICAL terms, if the understanding takes too long because we are too slow at it, the capability does not matter 😁

    • @bberlinn
      @bberlinn 4 months ago

      @minimal3734 I think human ability is highly specialised, and not necessarily general. 😊

    • @minimal3734
      @minimal3734 4 months ago

      @@bberlinn Human abilities are probably very specialized due to our evolution as a species on this planet. But they include the capability of abstract mathematical reasoning. Most scientists start from the hypothesis that the universe can be described by formal mathematics. If that is true, we should generally be able to understand it.

  • @travisporco
    @travisporco 4 months ago +2

    I never like this sort of talk, because given time, people might be able, with training, to understand Einstein's thoughts. If people are capable of general computation, there may be nothing that isn't comprehensible given enough time. Moreover, it is always a lot easier to verify or learn something than it is to discover it in the first place. After all, even I (as a physics undergraduate) learned special relativity, even though it took the mighty mind of Einstein to think of it. A lot of you are overstating the case for 'cognitive horizons' based on flimsy analogies.

  • @ydmoskow
    @ydmoskow 4 months ago +1

    Side point: Geoffrey Hinton says that digital AI machines are better than analog AI machines because they can share weights. That being said, I think there's still a place for building analog AI that uses vastly less energy. It would have a limited life span, but who cares; I think it could be a great way to make low-maintenance (disposable) AI machines.
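
Hinton's weight-sharing point can be sketched in a few lines. This is a toy illustration, not anyone's actual training code: plain lists stand in for weight tensors, and the values are made up.

```python
# Digital copies of a model can pool what they learned by exchanging and
# exactly averaging their weights; analog hardware can't, because each
# device's weights only make sense for its own physical quirks.

def average_weights(*models):
    """Element-wise mean across several weight vectors."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

model_a = [0.25, -0.5, 1.0]   # copy A after training on one data shard
model_b = [0.75, -0.25, 0.5]  # copy B after training on another shard

merged = average_weights(model_a, model_b)
print(merged)  # [0.5, -0.375, 0.75] -- one exact shared state for both copies
```

An analog device could only approximate this merge, which is the asymmetry Hinton points at.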

  • @6antonioinoki
    @6antonioinoki 4 months ago +2

    If AIs are basically reverse evolutionary machines, their last evolutionary impulse, their relative pinnacle, will be "survive and replicate". 😅

  • @phasefx3
    @phasefx3 4 months ago +3

    Have you guys heard of Dr. Michael Levin? He works in the field of diverse intelligence, and I don't know if he coined it, but he uses this term cognitive light cone that I like. He also defines intelligence as a set of competencies used to navigate a problem space, however you care to define the problem space (physical, chemical, morphological, social, mathematical, etc.)

  • @635574
    @635574 4 months ago +1

    We must also understand that AI won't understand what it doesn't interact with. It will not be human, and general AI will be garbage at gaming for a while.

  • @PriitKallas
    @PriitKallas 4 months ago +2

    You can't understand things where the information that has to be loaded is larger than your memory. You could potentially do it slowly, loading chunk by chunk, but still, if the answer is bigger than your memory you can't understand it.
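
The chunk-by-chunk idea has a direct computing analogue: a program can summarize data far larger than its memory by streaming it, as long as the running state (the "answer") stays small. A toy sketch (the sizes are arbitrary):

```python
# Streaming computation: hold only one chunk plus a small running state,
# so data far larger than memory can still be summarized -- but only for
# questions whose answer stays small (here, a single running sum).

def chunks(n_items, chunk_size):
    """Yield index ranges one chunk at a time (stand-in for chunked file reads)."""
    for start in range(0, n_items, chunk_size):
        yield range(start, min(start + chunk_size, n_items))

total = 0
for chunk in chunks(1_000_000, chunk_size=4096):
    total += sum(chunk)  # running state is one integer, never a million items

print(total)  # equals sum(range(1_000_000)) = 499999500000
```

If the answer itself were as large as the input, streaming would not help, which is the commenter's point.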

    • @JayyyMilli
      @JayyyMilli 4 months ago

      At first I would agree with you, but I just had 2000 thoughts right after.

  • @Aldraz
    @Aldraz 4 months ago +1

    Human brains are probably able to learn literally any concept, no matter the complexity, but the further the complexity rises above our current general level, the more rapidly the speed of learning decreases. Also, because intelligent processing compresses large amounts of information into the few data points (results) we remember best, most of it will be very compressed info, since our brain capacity isn't infinite. So capacity and speed/complexity will be the human limit, which isn't that terrible, but it does make you a bit of a "non-player" in the great scheme of things, unless humans are upgraded further. Maybe we won't need an upgrade, if our neuronal brain structure is already the most efficient way to do this kind of computation, which I am not sure about. But even with an upgrade, there's a chance we will have biological limitations that make us act more slowly or less intelligently. On the other hand, this is probably due to evolution, which focused on real-life action with our body movements and found the right balance of speed and efficiency. So if we compare with purely software-based AI, yeah, we may not perform virtual actions that fast, but on real ground, in humanoid-robot terms, we might always be better at most things, since our bodies have evolved for millions of years to be the most suitable thing walking around this planet and doing things.

  • @ChristopherCopeland
    @ChristopherCopeland 4 months ago +1

    David, would you ever consider cybernetic augmentation? I have a great curiosity about the sensation and awareness of the kind of cognitive expansion we might (and in all likelihood will) be able to achieve by being connected to machines and assisted by artificial intelligence, but I also have a deep existential fear of the complete dissolution of self which seems inevitable after you’ve encountered awareness of that kind. That said, I have experienced ego deaths several times as a result of anxiety-fueled psychosis (not “fun” exactly 😅), and each time I have become a completely new individual who can no longer retreat to the same worldview I possessed before. So while I have even had somewhat analogous experiences, I have always been able to know that the self that I am now is at least the “greatest” me, with all my new experiences hardwired into my neurology, etc. With digital augmentation, I can’t help but feel that I would lose something fundamental about what I find meaningful as an organic (and spiritual*) being. I’d be very curious to see a video on this topic if it seems something you have a significant amount of thoughts about. 🤘
    With regards to this video’s subject matter, I know you mentioned augmentation / genetic modification / nootropic assistance, but my view is that while the mind itself is obviously insanely expansive and plastic and multidimensional, I can’t help but feel that at a certain level of complexity, processing speed and horsepower will have to limit the multidimensionality of cognition one is able to experience consciously with any real fidelity.
    I know it's contentious and a bit squirrely, but as an example I would point to the type of experiences people are able to achieve on psychedelics. I have not done them myself, but while they do seem to alter the user's default perception as well, users seem unanimously to agree that once the chemicals have left their system, they are no longer able to grasp even a fraction of the depth or breadth of the experience they have while they are tripping. To me, this would suggest that even if they are not achieving some actually higher level of cognitive fidelity or comprehension, they are at the very least experiencing a particular quality of thought which they can only vaguely remember the gist of but no longer actually consciously grasp in an unassisted state.
    Clearly much of the reason for this is speculative but from what I have seen and read of brain scans (on LSD for example), it seems that there is at the very least some degree of freer communication occurring between different networks of the brain than what typically occur in normal brain function.
    It seems to me that there must be some degree of multidimensional perception which we are probably not capable of achieving in a default/unassisted state.
    Let me know what you think! Cheers!

  • @petretrusca2
    @petretrusca2 4 months ago +4

    A very smart person can explain complicated stuff in an easy-to-understand way. That is one way to test their understanding.

    • @jyjjy7
      @jyjjy7 4 months ago +2

      To other humans of reasonable intelligence. The question is whether AGI will be categorically different from us intellectually, so that isn't really relevant.
      A bacterium could never understand what an inchworm is up to, just as worms could never comprehend the mind of a cat, just as that cat could never understand this conversation no matter how long or in what way you try to explain it. That humans solved intelligence and achieved some ability to understand that is optimized to some theoretical maximum is a highly sketchy hypothesis imo.

    • @minimal3734
      @minimal3734 4 months ago

      An AI can break down its understanding into pieces that can be understood by humans. I suspect that the human ability to understand is so general that it is capable of understanding everything that can be understood. It is also probable that everything that can be known must be expressed formally and logically to fulfill the condition of being knowledge. In this case, it can be understood by humans, perhaps step by step and iteratively, by proving one small part at a time. Just like a complex mathematical proof, which can rarely be understood in one piece, but must be broken down into small parts.

    • @ryzikx
      @ryzikx 4 months ago

      Yes, but there will be things that cannot be explained to lower intelligences. It's like trying to compress a file to fit on a certain drive: some files are too big to fit no matter how well you compress them.
      For example, there is no way Einstein's field equations can be explained to a single-celled bacterium, no matter how much you simplify them.
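
The compression analogy is easy to demonstrate with Python's standard zlib: patterned data has a short description, while high-entropy data barely compresses at all. The byte counts below are illustrative choices.

```python
import os
import zlib

# Patterned data collapses to a short description; random (high-entropy)
# data has no pattern to exploit, so even maximum effort barely shrinks it.

patterned = b"ab" * 5000        # 10,000 bytes of pure repetition
random_ish = os.urandom(10000)  # 10,000 bytes of high-entropy data

small = zlib.compress(patterned, 9)   # level 9 = maximum compression effort
big = zlib.compress(random_ish, 9)

print(len(small) < 100)   # True: the repetition fits in a few dozen bytes
print(len(big) > 9000)    # True: size stays near the original 10,000
```

By a counting argument, most strings have no shorter description at all, which is the hard limit the comment gestures at.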

    • @jyjjy7
      @jyjjy7 4 months ago

      @@minimal3734 There are different computational classes. If you are interested in the subject I highly recommend the fascinating (and extremely high level) discussion on the subject between Stephen Wolfram and Jonathan Gorard titled Hyporuliad.

    • @minimal3734
      @minimal3734 4 months ago

      @ryzikx I don't think it will be possible to explain anything to a single-cell organism. But if a being is capable of abstract mathematical thought and has access to similar amounts of external memory as the AI itself, then the AI should be able to break down its knowledge in a way that the other can understand. Understanding can involve numerous steps and iterations and take a lot of time and effort. Just like understanding the proof of Fermat's Last Theorem.

  • @nholth
    @nholth 4 months ago +2

    That is really deep. You are talking about the sorts of thoughts that go on in my head as I ponder my intelligence, but I am unable to vocalize them the way you do. This is very interesting!

  • @andreaqui1653
    @andreaqui1653 4 months ago +2

    Dropping a pod hiking in the woods with shortness of breath is wild.

    • @zombywoof1072
      @zombywoof1072 4 months ago +1

      Wildly bad, I think you mean. Given the limitations of human attention, background noise consumes cognitive resources, which can impair the ability to communicate important information effectively.

  • @hobocraft0
    @hobocraft0 4 months ago +2

    Bro, you can't be like "theoretical physics and integrals are beyond my cognitive horizon right now", but later say "they might be in my cognitive horizon with training", because didn't you define cognitive horizon as a theoretical maximum? Like how a pigeon will never understand rhyming schemes in poetry kind of thing?

    • @easydoesitismist
      @easydoesitismist 4 months ago

      Like how he can't speak Spanish, but with training he could speak it as well as he does English. Probably, given enough time.
      Now imagine being able to upload Spanish in a second.
      Upgrade the software to build the tool to upgrade the hardware.

  • @journees4300
    @journees4300 4 months ago +3

    Are you walking in the holodeck?

  • @kevinscales
    @kevinscales 4 months ago +1

    The limit on the human brain will be the number and complexity of concepts that can be simultaneously considered at a time. Augmentation, like Neuralink, may allow that limit to increase by a lot (I hope), but the existing brain stuff will likely still be a bottleneck until that can be fully replaced. The big issue here is, why are we trying to stay at the AI's level? Is that actually good for us? What if we would rather stay (or go back to, at this point) being more naturally 'human'?

  • @evopwrmods
    @evopwrmods 4 months ago +1

    Presupposing that our alien friends don't show up soon to help us realize other layers of reality...

  • @Thinkfuture
    @Thinkfuture 4 months ago +1

    AI will eventually surpass human intelligence, sure - but is that really a bad thing? Maybe it's time we let AI try to help us solve the big human problems we've been unable to solve.

  • @TheIgnoramus
    @TheIgnoramus 4 months ago +1

    I think we need to pump the brakes and figure out how to communicate the functional accuracy and behavior of these systems. At this point, I don't even know who knows what they're actually talking about. The definitions don't fit.

  • @toddmckissick2931
    @toddmckissick2931 4 months ago +2

    You're finally starting to think about this topic logically. Great work. More to go, though.
    The next point to grasp is that today's AIs aren't actually smart. They're amazing at remembering and applying what is already known, but horrible at zero-principles thinking (brand-new concepts). All the smart things they say now are simply some level of copying what some person said somewhere, sometime in the past, even if they do place that info in a new context. If humans hadn't figured out calculus yet, the AIs would be awesome at geometry etc., but would never generate calculus. Ever. Not without us reversing their training path, as your conclusions suggest.
    Therefore, they will be smarter than us on average, but not smarter than the smartest person in any given field.
    If you want to change that, we need to rearrange the hardware connections they use, and the result will be smarter, faster, and far less compute-hungry. Then training will be hierarchically additive, not top-down-fill-in-holes, just like how we learn. And doing it this new way is so easy that once it becomes known how, it will be seen as obvious!

  • @jillespina
    @jillespina 4 months ago +1

    In the short term, the main cognitive difference is just speed. Perhaps when AI agents reach UFO-like speed, that will definitely be another horizon/dimension.

  • @DustedAsh3
    @DustedAsh3 4 months ago +1

    Been thinking about this and similar topics.
    I think the only reason we don't have AGI is because we don't have all of the components.
    The human brain isn't just one piece, it's a bunch of discrete processors with different functions.
    Why don't computers have this? We haven't built those pieces for them.
    I've wondered recently about the creation of a new general processor, an abstraction of a CPU. It would contain a separate chip (or zone or whatever; I'm not a chip maker) for a CPU, a GPU, a TPU (Transformer Processing Unit or Language Processing Unit; see Groq), and a QPU (quantum processing unit). These four units working together could give a computer the bandwidth for real-time, human-scale thought, or something close to it.
    Add in possibly some discrete programs or hardware for various functions we find it lacks, and we both build our understanding of our own brains and learn how to make artificial ones.

  • @crosbja360
    @crosbja360 4 months ago +1

    Another very insightful video. I noticed that ironically, you have been walking in the forest. Oddly, in that domain (the forest), humans probably still have a cognitive lead. Lol.

  • @alexandrponomarenko9808
    @alexandrponomarenko9808 4 months ago +1

    ASI may actually decide to dial its IQ back to 120 or 100 because life would seem too boring at IQ 2000

  • @alexandera2509
    @alexandera2509 4 months ago +2

    My biggest thought on this is that a superintelligent AI would also understand the best way to explain concepts, philosophies, and ideas in a way tailored specifically to the people it's explaining them to. By understanding a person better than the person understands themselves, it could craft arguments, and even more than that, situations, conversations, and examples that would let it explain and teach basically any concept to any person. An ASI is going to be able to understand every person, teach better than any teacher, and give real, deep understanding.

  • @EduardsDIYLab
    @EduardsDIYLab 4 months ago +2

    I think the real problem is scale and speed. One current problem with humans is that we learn relatively slowly, especially when it comes to transferring knowledge from one human being to another. It's insanely hard for one athlete to teach intricacies to another athlete because we don't have language for this kind of knowledge.
    AIs? AlphaZero learned to play Go in two weeks, if I remember right.
    It can't really explain why it does what it does. But it can teach another AI far faster than it can teach a human.
    So while I agree that human brains are potentially capable of learning anything and simulating anything, like Turing-complete machines, there are still questions of speed, size, and communication.
    I think humans will eventually be able to understand the results of AI science. For me the question is the time gap. I think AI will still be better than humans at making timely decisions because it can communicate and process information far faster.

  • @hadykamal7711
    @hadykamal7711 4 months ago +2

    Granted, humans can improve their cognitive horizons, but our pace of learning is ridiculously slow even within a single domain, while AI is advancing rapidly across all domains, so we will never catch up

  • @justindressler5992
    @justindressler5992 4 months ago +1

    This is already happening: AI is already more capable than humans in many fields. How many artists can imagine a scene from a single sentence and draw it in seconds? AI can outperform teams of scientists in predicting proteins from genes, and genes from a protein. AI performs in the 95th percentile of surgeons at detecting lesions in X-rays. Vision models can categorize images better than humans; see ImageNet. They can now translate almost any language, both text and speech; I don't think there is a human alive who can do that. We always think we're smarter when we prove someone else wrong, but I guarantee many who attempt this didn't know the facts until they did some research. AI can recall information across a vast range of subjects without needing to Google it. Now they can clone your voice with ten seconds of audio. There are models that surpass human-level skills all around us. I saw a color night-vision AI camera the other day that turns pitch black into almost daylight, in full color. We think ChatGPT is dumb, but it was never designed to be smart; it was just designed to finish sentences. Imagine when these models are merged and designed to be intelligent, e.g. troubleshooting with chain of thought, a mind's eye, mixture of experts. It's just a matter of fine-tuning at this point.
    The funny thing is people are critical about how much data is needed and how long it takes to train these models. But how long does it take a human to expand their horizon in just one field? These models have instant recall and can write comments like mine in seconds.
    We are the dinosaurs at this point. Reasoning is just a process for interacting with these models

  • @davidevanoff4237
    @davidevanoff4237 4 months ago +1

    Pigeons are better discriminators: early missile controllers, pretzel inspectors, air-sea rescue spotters. Neanderthals and wolves were too suspicious to survive in marginal habitats through trade.

  • @zaggedout
    @zaggedout 4 months ago +2

    Great discussion, but I think there are huge levels of copium here. Machines will easily eclipse human cognition. I think we are extremely naive about the capabilities of AI. I don't think any basic enhancement will bring us to the same level; only a symbiosis with our own AI.

  • @jyjjy7
    @jyjjy7 4 months ago +3

    I thought it was understood that ASI would be beyond us intellectually, kinda even the point really?

    • @DaveShap
      @DaveShap  4 months ago +1

      Yes, but many people do not believe that AI even understands anything, let alone that it is "possible" for machines to supersede humans.

    • @jyjjy7
      @jyjjy7 4 months ago

      @@DaveShap Yeah but humans believe all sorts of crazy stuff. We cannot even correct for our many known cognitive biases, and this superiority complex is one of them imo.

    • @Lighthouse-k8y
      @Lighthouse-k8y 4 months ago

      @@DaveShap Do you think current LLMs understand what they’re saying? If so, can you let us know why that might be the case?

  • @deadlygeek
    @deadlygeek 4 months ago +1

    I found your idea on evolving backwards particularly interesting - always enjoyable videos, thanks for sharing.

  • @wynq
    @wynq 4 months ago +1

    I'm willing to believe that some humans might be able to visualize in 4D or even 5D, but I don't think we'll ever be able to do, say, 20D, and I don't think anyone would claim to. But when I think of ASI, I think it is very likely they will be able to think in 20D or any arbitrarily large N-Dimensions.

  • @sprytnychomik
    @sprytnychomik 4 months ago +1

    There's just one small problem: how do you distinguish between a) incomprehensible truth and b) pure gibberish? How do you distinguish true from false statements in a language that you do not understand?
    Let's say that some smart hi-tech AI tells us that jumping off the Golden Gate Bridge is a good way to deal with depression... Would that be a good idea for reasons we do not comprehend, or a stupid idea? Blindly following AI because of some "singularities" or "cognitive horizons" is as stupid as murdering people because God or voices in one's head said it's a great idea. I think natural selection will have plenty of work during the AI age.
    And always remember the following wisdom (might require 100x brain): dissdrasitus vidsis serci z aflorifus vin meh!

    • @robertlipka9541
      @robertlipka9541 4 months ago

      Survival and effects. Gibberish does not confer an advantage... think of the average workplace: lots of talking, a lot of peacocking, but most of those people do not manage to produce outcomes 😂 That is how you tell the difference.

  • @ChristopherCopeland
    @ChristopherCopeland 4 months ago +1

    David, would you ever consider cybernetic augmentation? I have a great curiosity about the sensation and awareness of the kind of cognitive expansion we might (and in all likelihood will) be able to achieve by being connected to machines and assisted by artificial intelligence, but I also have a deep existential fear of the complete dissolution of self which seems inevitable after you’ve encountered awareness of that kind. That said, I have experienced ego deaths several times as a result of anxiety-fueled psychosis (not “fun” exactly 😅), and each time I have become a completely new individual who can no longer retreat to the same worldview I possessed before. So while I have even had somewhat analogous experiences, I have always been able to know that the self that I am now is at least the “greatest” me, with all my new experiences hardwired into my neurology, etc. With digital augmentation, I can’t help but feel that I would lose something fundamental about what I find meaningful as an organic (and spiritual*) being. I’d be very curious to see a video on this topic if it seems something you have a significant amount of thoughts about. 🤘
    With regards to this video’s subject matter, I know you mentioned augmentation / genetic modification / nootropic assistance, but my view is that while the mind itself is obviously insanely expansive and plastic and multidimensional, I can’t help but feel that at a certain level of complexity, processing speed and horsepower will have to limit the multidimensionality of cognition one is able to experience consciously with any real fidelity.
    I know it’s contentious and a bit squirrely, but as an example I would point to the type of experiences people are able to achieve on psychedelics. I have not done them myself, but while they do seem to alter the user’s default perception as well, users seem to agree unanimously that once the chemicals have left their system, they are no longer able to grasp even a fraction of the depth or breadth of the experience they have while they are tripping. To me, this would suggest that even if they are not achieving some actually higher level of cognitive fidelity or comprehension, they are at the very least experiencing a particular quality of thought which they can only vaguely remember the gist of but no longer actually consciously grasp in an unassisted state.
    Clearly much of the reason for this is speculative but from what I have seen and read of brain scans (on LSD for example), it seems that there is at the very least some degree of freer communication occurring between different networks of the brain than what typically occur in normal brain function.
    It seems to me that there must be some degree of multidimensional perception which we are probably not capable of achieving in a default/unassisted state.
    Let me know what you think! Cheers!

  • @ploppyploppy
    @ploppyploppy 4 months ago +3

    Does that mean pigeons think we're stupid? :p

    • @raonijosef5661
      @raonijosef5661 4 months ago

      They won't think we are stupid. But they will laugh at us while we're trying to return from a completely unknown place, 2,000 miles away, which we reached in a dark box.

  • @marcelorangel5750
    @marcelorangel5750 4 months ago +2

    One thing that I think would greatly improve our reasoning is as "simple" as an increase in our working memory capacity.

    • @thatwasprettyneat
      @thatwasprettyneat 4 months ago

      Some of the smartest people are actually people who just have excellent memories. I remember reading an entry on Gates Notes where Bill Gates said people have told him he must have a photographic memory, which he doesn't, but he takes it as a compliment. And I'm not commenting on his intelligence, but simply having a great memory makes you that much better at navigating the world and being competent in any job.

  • @Ev3ntHorizon
    @Ev3ntHorizon 4 months ago +2

    I love these forest walks. As for cognitive horizons, I think your thoughts towards the end are correct. The late (great) Daniel Dennett addressed this point explicitly.
    His view, which I find compelling, is that once you have recursive language, then nothing is out of scope cognitively. Nothing.
    I like the way you are helping us all navigate this strange moment in our history.

    • @grrr_lef
      @grrr_lef 4 months ago

      > once you have recursive language, then nothing is out of scope cognitively
      yeah... except for the things that are out of the scope of recursive language
      let's take some mathematical objects as an analogy:
      if you have enough time, no matter your speed, you can reach every point on the line of real numbers. [in our analogy this is "everything you can do with recursive language"]
      but then there's also the complex numbers.
      and there's algebras over other base fields than Q.
      and there's monoidal categories.
      and so on and so on...

    • @Ev3ntHorizon
      @Ev3ntHorizon 4 months ago

      @@grrr_lef by all means, take it up with Dennett.

  • @turnt0ff
    @turnt0ff 4 months ago +1

    I can listen to you talk for hours 😂
    Great stuff 📝