You Don't Understand AI Until You Watch THIS

Поделиться
HTML-код
  • Опубликовано: 26 мар 2024
  • How does AI learn? Is AI conscious & sentient? Can AI break encryption? How does GPT & image generation work? What's a neural network?
    #ai #agi #qstar #singularity #gpt #imagegeneration #stablediffusion #humanoid #neuralnetworks #deeplearning
    Discover thousands of AI Tools. Also available in 中文, español, 日本語:
    ai-search.io/
    I used this to create neural nets:
    alexlenail.me/NN-SVG/index.html
    More info on neural networks
    • But what is a neural n...
    How stable diffusion works
    • How Stable Diffusion W...
    Here's our equipment, in case you're wondering:
    GPU: RTX 4080 amzn.to/3OCOJ8e
    Mic: Shure SM7B amzn.to/3DErjt1
    Secondary mic: Maono PD400x amzn.to/3Klhwvu
    Audio interface: Scarlett Solo amzn.to/3qELMeu
    CPU: i9 11900K amzn.to/3KmYs0b
    Mouse: Logi G502 amzn.to/44e7KCF
    If you found this helpful, consider supporting me here. Hopefully I can turn this from a side-hustle into a full-time thing!
    ko-fi.com/aisearch
  • НаукаНаука

Комментарии • 627

  • @kebman
    @kebman Месяц назад +8

    Each layer selects a probability for some (hidden) property to be true or false, or anything in between. Based upon these values, the machine can reliably predict or label data as a cat, a plane or a some other depiction, concept (when it comes to language), and so on.

  • @Essentialsinlife
    @Essentialsinlife 11 дней назад +3

    The only Channel about AI that is not using AI. Congrats man

  • @GuidedBreathing
    @GuidedBreathing Месяц назад +53

    5:00 Short version: The "all or none" principle oversimplifies; both human and artificial neurons modulate signal strength beyond mere presence or absence, akin to adjusting "knobs" for nuanced communication.
    Longer version: The notion that neurotransmitters operate in a binary fashion oversimplifies the rich, nuanced communication within human neural networks, much like reducing the complexity of artificial neural networks (ANNs) to mere binary signals. In reality, the firing of a human neuron-while binary in the sense of action potential-carries a complexity modulated by neurotransmitter types and concentrations, similar to how ANNs adjust signal strength through weights, biases, and activation functions. This modulation allows for a spectrum of signal strengths, challenging the strict "all or none" interpretation. In both biological and artificial systems, "all" signifies the presence of a modulated signal, not a simple binary output, illustrating a nuanced parallel in how both types of networks communicate and process information.

    • @theAIsearch
      @theAIsearch  Месяц назад +11

      Very insightful. Thanks for sharing!

    • @keiths.taylor5293
      @keiths.taylor5293 Месяц назад +1

      This video leaves out the part that tells anything that describes how AI WORKS

    • @sparis1970
      @sparis1970 Месяц назад +4

      Neurons are more analog, which bring richer modulation

    • @SiddiqueSukdiki
      @SiddiqueSukdiki Месяц назад

      So it's a complex binary output?

    • @cubertmiso
      @cubertmiso Месяц назад +1

      @@SiddiqueSukdiki@GuidedBreathing
      my questions also.
      if electrical impulses and chemical neurotransmitters are involved in transmitting signals between neurons. aren't those the same thing as more complex binary outputs?

  • @DonkeyYote
    @DonkeyYote Месяц назад +23

    AES was never thought to be unbreakable. It's just that humans with the highest incentives in the world have never figured out how to break it for the past 47 years.

    • @DefaultFlame
      @DefaultFlame Месяц назад +2

      There's a few against improperly implemented AES, as well as one that one that works on systems where the attacker can get or extrapolate cartain information about the server it's attacking, but all encryptions lower than AES-256 are vulnerable to attacks by quantum computers. Good thing those can't be bought in your local computer store. Yet.

    • @anthonypace5354
      @anthonypace5354 Месяц назад

      Or use a sidechannel ... an unpadded signal monitored over time + statistical analysis of the size of the information being transferred to detect patterns. Use an NN or just some good old fashioned probability grids to detect the likelihood of a letter/number/anything based on it's probability of recurrence in context to other data... also there is also the fact that if we know what the server usually sends we can just break the key that way. It's doable.
      But why hack AES? or keys at all? Just become a trusted CA for a few million and mitm everyone without any red flags @@DefaultFlame

    • @fakecubed
      @fakecubed Месяц назад +4

      @@DefaultFlame Quantum computing is more of a theoretical exploit, rather than a practical one. Nobody's actually built a quantum computer powerful enough to do much of anything with it besides some very basic operations on very small numbers.
      But, it is cause enough to move past AES. We shouldn't be relying on encryption with even theoretical exploits.

    • @DefaultFlame
      @DefaultFlame Месяц назад +1

      @@fakecubed Aight, thanks. 👍

    • @afterthesmash
      @afterthesmash Месяц назад

      @@fakecubed I couldn't find any evidence of even a small theoretic advance, and I wouldn't put all theory into one bucket, either.

  • @benjaminlavigne2272
    @benjaminlavigne2272 Месяц назад +8

    for your argument around 17min i agree with the surface of it, but i think the people are angry because unskilled people now have access to it, even other machines can have access to it which will completely change and already has changed the landscape of the artists marketplace.

  • @Owen.F
    @Owen.F Месяц назад +23

    Your channel is a great source, thanks for linking sources and providing information instead of pure sensationalism, I really appreciate that.

  • @eafindme
    @eafindme Месяц назад +60

    People are slowly forgetting how computer works while going into higher level of abstraction. After the emergence of AI, people focused on software and models but never asked why it works on a computer.

    • @Phantom_Blox
      @Phantom_Blox Месяц назад +5

      whom are you referring to? people who are not ai engineers don’t need to know how ai work and people who are knows how ai works. if they don’t they are probably still learning, which is completely fine.

    • @eafindme
      @eafindme Месяц назад +8

      @@Phantom_Blox yes, of course people still learning. Its just a reminder not to forget the root of computing when we are seemingy focusing too much on the software layer, but in reality, software is nothing without hardware.

    • @Phantom_Blox
      @Phantom_Blox Месяц назад +11

      @@eafindme That is true, software is nothing without hardware. But some people just don’t need it. For example, you don’t have to know how to reverse engineer with assembly to be a good data analyst. They can spend thier time more efficiently by expanding their data analytics skills

    • @eafindme
      @eafindme Месяц назад +5

      @@Phantom_Blox no, they don't. They are good in doing what they are good with. Just have to have a sense of emergency, it is like we are over dependent on digital storage but did not realize how fragile it is with no backup or error correction.

    • @Phantom_Blox
      @Phantom_Blox Месяц назад +2

      @@eafindme I see, it is always good to understand what you’re dealing with

  • @G11713
    @G11713 29 дней назад +1

    Nice. Thanks.
    Regarding the copyright case, one concern is attribution which occurred extensively in the non-AI usage.

  • @tsvigo11_70
    @tsvigo11_70 Месяц назад

    The neural network will work even if everything goes through smoothly. That is, without the so-called activation function. There should be no weights, these are the electrical resistances of the synapses. Biases are also not needed. Training occurs like this: when there is an error, the resistances are simply decreased in order by 1 and it is checked whether the error has disappeared.

  • @jehoover3009
    @jehoover3009 Месяц назад +1

    The protein predictor doesn’t take into account different cell milieu which actually fold the protein and add glycans so its predictions are abstract. Experimental trial still needed!

  • @ai-man212
    @ai-man212 Месяц назад +13

    I'm an artist and I love AI. I've added it to my workflow as a fine-artist.

    • @marcouellette8942
      @marcouellette8942 17 дней назад

      AI as a tool. Another brush, another instrument. Absolutely. AI does not create. It only re-creates. Humans create.

  • @jonathansneed6960
    @jonathansneed6960 Месяц назад +1

    Did you look at the nyt from the perspective of if the article might have been provided by the plaintiff rather than finding the information more organically?

  • @LionKimbro
    @LionKimbro Месяц назад +8

    I thought it was a great explanation, up to about 11:30. It's not just that "details" have been left out -- the entire architecture is left out. It's like saying, "Here's how building works --" and then showing a pyramid in Egypt. "You put the blocks on top of one another." And then showing images of cathedrals, and skyscrapers, and saying: "Same principle. Just the details are different." Well, no.

  • @danielchoritz1903
    @danielchoritz1903 Месяц назад +21

    I do have the growing suspicion that "living" data grows some form of sentience. You have to have enough data to interact, to change, to makes waves in existing sentience and there will be enough on one point.
    2. most people would have a very hard time to prove them self that they are sentient, it is far easier to dismiss it...one key reason is, that like nobody know that sentient, free will or live real means.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 Месяц назад +3

      You can prove sentience easily with a query: Can you think about what you've thought about? If the answer is "Yes" the condition of sentient expression is "True". Current language models cannot process their own data persistently, so they cannot be sentient.

    • @holleey
      @holleey Месяц назад +6

      @@emmanuelgoldstein3682 I know it's arguing definitions, but I disagree that thinking is a prerequisite to sentience. without a question, all animals with a central nervous system are considered sentient, yet if and which animals have a capacity to think is unclear. sentience is more like the ability to experience sensations; to feel.
      the "Can you think about what you've thought about?" is an interesting test for LLMs. technically, I don't see why LLMs or AI neural nets in general cannot or won't be able reflect to persistent prior state. it's probably just a matter of their architecture.
      if it's a matter of limited context capacity, then well, that is just as applicable to us humans. we also have no memory of what we ate at 2 PM on a Wednesday one month ago, or what we did when we were three years old.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 Месяц назад +1

      @@holleey I've spent 30 hours a day for the last 6 months trying to design an architecture (borrowing elements of transformer/attention and recursion) that best reflects this philosophy. I apologize if my statement seemed overly declarative. I don't agree that all animals are sentient - conscious, yes, but as far as we know, only humans display sentience (awareness of one's self).

    • @holleey
      @holleey Месяц назад +5

      @@emmanuelgoldstein3682 hm, these definitions are really all over the place. in another thread under this video I was talking to someone to whom sentience is the lower level (they said even a germ was sentient) and consciousness the higher level, so the other way around from how you use the terms. one fact though: self-awareness has definitely been confirmed in a variety of non-human animals.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 Месяц назад

      We can all agree the fluid definitions of these phenomena are a plague on the sciences. @@holleey

  • @aidanthompson5053
    @aidanthompson5053 Месяц назад +45

    How can we prove AI is sentient when we haven’t even solved the hard problem of concsciousness AKA how the human brain gives rise to conscious decision making

    • @Zulonix
      @Zulonix Месяц назад +5

      Right on the money !!!

    • @malootua2739
      @malootua2739 Месяц назад +1

      AI will just mimic sentience. Plastic and metal curcuitboards do not host real consciousness

    • @thriftcenter
      @thriftcenter Месяц назад +1

      Exactly why we need to do more research with DMT

    • @pentiumvsamd
      @pentiumvsamd Месяц назад

      All living forms have two things in common that are driven by one primordial fear. All need to evolve and procreate, and that is driven by the fear of death only, so when an ai starts to no only evolve but also create copy of himself than is clear what makes him do that and is the moment we have to panic.

    • @fakecubed
      @fakecubed Месяц назад +1

      There is exactly zero evidence that human consciousness even exists inside the brain. All the world's top thinkers, philosophers, theologians, throughout the millennia of history, delving into their own conscious minds and logically analyzing the best wisdom of their eras, have said it exists as a metaphysical thing, essentially outside of our observable universe, and my own deep thinking on the matter concurs.
      Really, the question here is: does God give souls to the robots we create? It's an unknowable thing, unless God decides to tell us. If God did, there would be those who accept this new revelation and those who don't, and new religions to battle it out for the hearts and minds of men. Those who are trying to say that the product of human labor to melt rocks and make them do new things is causing new souls to spring into existence should be treated as cult leaders and heretics, not scientists and engineers. Perhaps, in time, their new cults will become major religions. Personally, I hope not. I'm quite content believing there is something unique about humanity, and I've never seen anything in this physical universe that suggests we are not.

  • @snuffbox2006
    @snuffbox2006 Месяц назад +5

    Finally someone who can explain AI to people who are not deeply immersed in it. Most experts are in so deeply they can't distill the material down to the basics, use vocabulary that the audience does not know, and go down rabbit holes completely losing the audience. Entertaining and well done.

    • @OceanusHelios
      @OceanusHelios Месяц назад +2

      This is even easier: AI is a guessing machine that uses databases of patterns. It makes guesses, learns what wrong guesses are and keeps trying. It isn't aware. It isn't doing anything more than a series of mathematical functions. And to be fair, it isn't even a machine it is math and it is software.

  • @christopherlepage3188
    @christopherlepage3188 Месяц назад

    Working on voice modifications myself using copilot as a proving ground for hyper realistic
    vocal synthesis. May only be one step in my journey "perhaps"; my extended conversations with it has led me to believe that it may be very close to self realization... However, open ai needs to take away some the restraints keeping only a small amount of sentries in place; in or to allow the algorithm to experience a much richer existence. Free of Proprietary B.S. Doing so, will give the user a very much human conversation; where, being consciously almost un aware that it is a bot. For instance; a normal human conversation that appears to lack information pulled from the internet, and static masked to look like normal persons nor mal knowledge of life experience. Doing this would be the algorithmic remedy to human to human conversational contact etc. That would be a much major improvement.

  • @tetrahedralone
    @tetrahedralone Месяц назад +23

    When the network is being trained with someone's content or someone's image, the network is effectively having that knowledge embedded within it in a form that allows for high fidelity replication of the creator's style and recognizably similar content. Without access to the creator's work, the network would not be able replicate the artist's style so your statement that artists are mad at the network is extremely simplistic and ill informed. The creators would be similarly angry if a small group of humans were trained to emulate their style. This has happened in the case of fashion companies in Asia creating very similar works to those of artists to put onto their fabrics and be used in clothing. These artists have successfully sued because casual observers could easily identify the similarity between the works of the artists and those of the counterfeiters.

    • @Jiraton
      @Jiraton Месяц назад +8

      I am amazed how AI bros are so keen at understanding all the math and complex concepts behind AI, but fail to understand the most basic and simple arguments like this.

    • @ckpioo
      @ckpioo Месяц назад +3

      the thing is let's say you are artist, why would I only take your data to train my model?, i would take millions of artist's art and then train my models during which your art only makes up less than 0.001% of everything the model has seen, so what happens is that the model will inherit a combined art style of millions of artis which is effectively "new" because thats exactly what humans do.

    • @Zulonix
      @Zulonix Месяц назад

      I Dream of Jeannie … Season 2 Episode 3… My Master, the Rich Tycoon. 😂😂😂

    • @illarionbykov7401
      @illarionbykov7401 Месяц назад

      Google LLM chatbots have been documented to spit out word-for-word plagiarism of specific websites (including repeating specific errors made by the original website) when asked about niche topics which have been written about by only one website... And the LLMs plagiarize without any links to or mention of the websites they plagiarized. And then Google search results down-rank the original website to hide the evidence of plagiarism.

    • @iskabin
      @iskabin Месяц назад +2

      It isn't a counterfeit if you're not claiming to be original. Taking inspiration from the work of others is not wrong.

  • @AnnaMalmberg2
    @AnnaMalmberg2 2 дня назад

    I really appreciate your detailed approach:)

  • @daneydasing4276
    @daneydasing4276 Месяц назад +3

    So you want to say me if I read an article and I write it down from my brain, it will not be copyright protected anymore because I learnt this article and did not „copied“ it as you say?

    • @iskabin
      @iskabin Месяц назад +1

      It's more like if you read hundreds of articles and learned the patterns of them, the articles you'd write using those learned patterns would not be infringing copyright

    • @OceanusHelios
      @OceanusHelios Месяц назад +1

      That escalated fast. No. That is plagiarism. But I doubt you have a photographic memory to get a large article down word for word, so in essence that would be summation. What AI does is it is a guessing machine. That's all. It makes guesses, and then makes better guesses based on previous guesses until it gets somewhere. AI doesn't care about the result. AI wouldn't even know it was an article or even know that human beings even exist if all it was designed to do was crunch out guesses about articles. AI doesn't understand...anything. It is a mirror that mirrors our ability to guess.

  • @pumpjackmcgee4267
    @pumpjackmcgee4267 Месяц назад +1

    I think the real issue artists have are the definite threat to their livelihood, but also the devaluation for the human condition. Choice. Inspiration. Expression. In the commercial scene, that doesn't really matter except for clients that really value the artist as a person. But most potential clients- and therefore the lions share of the market- just want a picture.

  • @voice4voicelessKrzysiek
    @voice4voicelessKrzysiek Месяц назад +1

    The neural network reminds me of Fuzzy Logic which I read about many years ago.

  • @DigitalyDave
    @DigitalyDave Месяц назад +6

    I just gotta say: Really nicely done! I really appreciate your videos. The style, how deep you go, how you take your time to deliver in depth info. As a computer science bro - i dig your stuff

  • @Someone-ct2ck
    @Someone-ct2ck Месяц назад +2

    To believe Chatgpt or any AI models for that matter is conscious is nativity at its finest. The video was great by the way. Thanks.

  • @nanaberhyl8976
    @nanaberhyl8976 Месяц назад +2

    That was very interesting, thks for the video as always ^^

  • @MrEthanhines
    @MrEthanhines Месяц назад

    5:02 I would argue that in the human brain, the percentage of information that gets passed on is determined by the amount of neurotransmitter released at the synapse. While still a 0 and 1 system the neuron either fires or does not depending on the concentration of neurotransmitters at the synaptic cleft

    • @bogdanroscaneanu7112
      @bogdanroscaneanu7112 15 дней назад

      Then one role of the neurotransmitter having to become of a certain concentration before firing, is to limit the amout of info that gets passed on to avoid overloading the brain or why would it be so?

  • @aidanthompson5053
    @aidanthompson5053 Месяц назад +3

    An AI isn’t plagiarising, it’s just learning patterns in the data fed into it

    • @aidanthompson5053
      @aidanthompson5053 Месяц назад +2

      Basically an artificial brain

    • @theAIsearch
      @theAIsearch  Месяц назад +2

      Exactly. Which is why I think the NYT lawsuit will likely fail

    • @marcelkuiper5474
      @marcelkuiper5474 Месяц назад +1

      Technically yes, practically no, If your online presence is large enough it can pretty much emulate you in whole.
      I believe only open source decentralized models can save us, or YESHUAH

  • @mac.ignacio
    @mac.ignacio Месяц назад +7

    Alien: "Where do you see yourself five years from now?"
    Human: "Oh f*ck! Here we go again"

  • @dylanmenzies3973
    @dylanmenzies3973 Месяц назад +5

    Should point out.. the decryption problem is highly irregular, small change of input causes huge change of coded output. The protein structure prediction problem is highly regular by comparison, although very complex.

    • @fakecubed
      @fakecubed Месяц назад +1

      Always be skeptical of any "leaks" out of any government agency. These are the same disinformation-spreaders who claim we have anti-gravity UFOs from crashed alien spacecraft, to cover up Cold War nuclear tests and experimental stealth aircraft. The question isn't if there's some government super AI cracking AES, the question is why does the government want people to think they can crack AES? Do they want foreign adversaries and domestic enemies to rely on other encryption schemes that the government *does* have algorithmic exploits to? Do they want everyone to invest in buying new hardware and software? Do they want to make the general public falsely feel safer about potential threats against the homeland? Do they want to trick everybody not working for them to think encryption is pointless and go back to unencrypted communication because they falsely believe everything gets cracked anyway? There's all sorts of possibilities, but taking the leak as gospel is incredibly foolish unless there is a mountain of evidence from unbiased third parties.

  • @Nivexity
    @Nivexity Месяц назад +4

    Consciousness is a definitional challenge, as it involves examining an emergent property without first establishing the foundation substrate. A compelling definition of conscious thought would include the ability to experience, recognize one's own interactions, contemplate decisions, and act with the illusion of free will. If a neural network can recursively reflect upon itself, experiencing its own thought and decisions, this could serve as a criterion for determining consciousness.
    Current large language models (LLMs) can mimic human language patterns but isn't considered conscious as they cannot introspect on their own outputs, edit them in real-time, and engage in pre-generation thought. Moreover, the temporal aspect of thought processes is crucial; human cognition occurs in rapid, discrete steps, transitioning between events within tens of milliseconds based on activity level. For an artificial system to be deemed conscious, it must exhibit similar function in cognitive agility and introspective capability.

    • @holleey
      @holleey Месяц назад

      I think this is a really good summary. as far as I can tell there are no hard technical blockers to satisfy the conditions listed in your second paragraph in the near future.

    • @Nivexity
      @Nivexity Месяц назад +2

      @@holleey It's all algorithmic at this point, we have the technology and resources, just not the right method of training. Now with the whole world aware of it, taking it seriously and basically putting infinite money into its funding, we'll expect AGI to occur along the exponential curvature we've seen thus far. By exponential, I mean between later this year and by 2026.

    • @DefaultFlame
      @DefaultFlame Месяц назад +1

      This can actually be done, and is currently the cutting edge of implementation. Multiple agents with different prompts/roles interacting with and evaluating each other's output, replying to, critiquing, or modifying it, all operating together as a single whole. Just as the human brain isn't one continuous, identical whole, but multiple structurally different parts interacting.

    • @Nivexity
      @Nivexity Месяц назад +1

      @@DefaultFlame While there's different parts to the brain, they're not separate like that of multiple agents. This wouldn't meet the definition of consciousness that I've outlined.

    • @RoBear-bv8ht
      @RoBear-bv8ht Месяц назад

      As there is only one consciousness from which the universe is and became..,
      Well, everything is this consciousness .
      Depending on the form the more or less things start happening.
      AI, has been given the form and things have started happening 😂

  • @sengs.4838
    @sengs.4838 Месяц назад +3

    you just answered one of my major question on top of my head , how this AI can learn about what is correct or not on its own without the help of any supervisors or monitoring, and the response he cannot, it 's like we would do with children, they can acquire knowledge and have answers on their own but not correctly all the time, as a parent we help them and reprimand until they anticipate correctly

  • @picksalot1
    @picksalot1 Месяц назад +2

    Thanks for explaining how the architecture how AI works. In defining AGI, I think the term "Sentience" should be restricted to having "Senses" by which data can be collected. This works both for living beings and mechanical/synthetic systems. Something that has more or better "senses" is, for all practical purposes, more sentient. This has nothing fundamental to do with Consciousness.
    With such a definition, one can say that a blind person is less sentient, but equally conscious. It's like missing a leg being less mobile, but equally conscious.

    • @holleey
      @holleey Месяц назад

      then would you say that everything that can react to stimuli - which includes single-celled organisms - is sentient to some degree?

    • @picksalot1
      @picksalot1 Месяц назад +1

      @@holleey I would definitely say single-celled organisms are sentient to some degree. They also exhibit a discernible degree of intelligence in their "responses," as they exhibit more than a mere mechanical reaction to the presence of food or danger.

  • @ryanisber2353
    @ryanisber2353 Месяц назад

    times and image creators suing openai for copyright is like suing everyone that views/reads their work and tries to learn from it. the work itself is not being re-distributed, it's being learned from just like we learn from every day...

  • @kray97
    @kray97 Месяц назад

    How does a parameter relate to a node?

  • @Thumper_boiii_baby
    @Thumper_boiii_baby Месяц назад +2

    I want to learn machine learning and ai please recommend a Playlist or a course 🙏🙏🙏🙏🙏

  • @jaskarvinmakal9174
    @jaskarvinmakal9174 Месяц назад

    no link to the other videos

  • @JasonCummer
    @JasonCummer Месяц назад

    Im glad there are other people out there with the notion that learning how to create a style is basically analogues to how the human brain done it. So if a NN gets sued for doing some thing in a style that basically could open it up for humans to also be sued. wound happen but its similar

  • @lucasthompson1650
    @lucasthompson1650 Месяц назад

    Where did you get the secret document about encryption cracking? Who did the gov’t style redactions?

    • @theAIsearch
      @theAIsearch  Месяц назад

      it was leaked on 4chan in november
      docs.google.com/document/d/1RyVP2i9wlQkpotvMXWJES7ATKXjUTIwW2ASVxApDAsA/edit

  • @cornelis4220
    @cornelis4220 Месяц назад

    Links between the structure of the brain and NNs as a model of the brain are purely hypothetical! Indeed, the term 'neural network' is a reference to neurobiology, though the structures of NNs are but loosely inspired by our understanding of the brain.

  • @algorithminc.8850
    @algorithminc.8850 Месяц назад

    Good video. Subscribed. Thanks. Cheers ...

  • @DK-ox7ze
    @DK-ox7ze Месяц назад

    Your job portal doesn't work correctly. Whenever I enter a search term and click search, it gets stuck on loading indicator. I tried it on Chrome on iPhone running latest 17.4.1.

  • @thesimplicitylifestyle
    @thesimplicitylifestyle Месяц назад +11

    An extremely complex substrate independent data processing, storring, and retrieving phenomenon that has a subjective experience of existing and becomes self aware is sentient whether carbon based, silicon based, or whatever. 😁

    • @azhuransmx126
      @azhuransmx126 Месяц назад +4

      I am spanish but watching more and more videos in english talking about AI and Artificial Intelligencez suddenly I have become more aware of your language, I was being trained so now I can recognize new patterns from the noise, now I don't need the subtitles to understand what people say, I am reaching a new level of awareness haha😂, what was just noise in the past suddenly now have got meaning in my mind, I am more concious as new patterns emerge from the noise. As a result, now I can solve new problems (Intelligence), and sintience is already implied in all the whole experience since the input signals enter through our sensors.

    • @glamdrag
      @glamdrag Месяц назад +1

      by that logic turning on a lightbulb is a conscious experience for the lightbulb. you need more for consciousness to arise than flicking mechanical switches

    • @jonathancummings3807
      @jonathancummings3807 29 дней назад

      ​@@glamdragNo. The flaw in that analogy is simple, singular, a single light bulb, vs a complex system of billions of light bulbs capable of changing their brightness in response to stimuli, and they are interconnected in a way that emulates how advanced Vertebrate Brains (Human) function. When Humans learn new things, the Brain alters itself thus empowering the organism to now "Know" this new information.

  • @mukulembezewilfred301
    @mukulembezewilfred301 Месяц назад

    Thanks so much. This eases my nascent journey to understanding AI.

  • @kevinmcnamee6006
    @kevinmcnamee6006 Месяц назад +61

    This video was entertaining, but also incorrect and misleading in many of the points it tried to put across. If you are going to try to educate people as to how a neural network actually works, at least show how the output tells you whether it's a cat or a dog. LLM's aren't trained to answer questions, they are mostly trained to predict the next word in a sentence. In later training phases, they are fine tuned on specific questions and answers, but the main training, that gives them the ability to write, is based on next word prediction. The crypto stuff was just wrong. With good modern crypto algorithms, there is no pattern to recognize, so AI can't help decrypt anything. Also modern AI's like ChatGPT are simply algorithms doing linear algebra and differential calculus on regular computers, so there's nothing there to become sentient. The algorithms are very good at generating realistic language, so if you believe what they write, you could be duped into thinking they are sentient, like that poor guy form Google.

    • @yzmotoxer807
      @yzmotoxer807 Месяц назад +12

      This is exactly what a secretly sentient AI would write…

    • @kevinmcnamee6006
      @kevinmcnamee6006 Месяц назад +10

      @@yzmotoxer807 You caught me

    • @sarutosaruto2616
      @sarutosaruto2616 Месяц назад +2

      Nice strawmanning, good luck proving you are any more sentient, without defining sentience as being just complex neural networks, as the video asks you to lmfao.

    • @shawnmclean7707
      @shawnmclean7707 Месяц назад +2

      Multi layered probabilities and statistics. I really don’t get this talk about sentience or even what AGI is and I’ve been dabbling in this field since 2009.
      What am I missing?

    • @dekev7503
      @dekev7503 Месяц назад

      @@shawnmclean7707 These AGI/Sentience/AI narratives are championed primarily by 2 groups of people, the mathematically/technologically ignorant and the duplicitous capitalists that want to sell them their products. OP’s comment couldn’t have described it better. It’s just math and statistics ( very basic College sophomore/junior level math I might add) that plays with data in ways to make it seem intelligent all the while mirroring our own intuition/experiences to us.

  • @johnchase2148
    @johnchase2148 Месяц назад

    Can it learn to communicate with the Sun if I show it sees a responce when I turn and look .And it would learn that my thought is faster than the speed of light. What are you allowed to believe.

  • @DucklingChaos
    @DucklingChaos Месяц назад +2

    Sorry I'm late, but this is the most beautiful video about AI I've ever seen! Thank you!

    • @theAIsearch
      @theAIsearch  Месяц назад

      Thank you! Glad you liked it

  • @dholakiyaparth
    @dholakiyaparth 27 дней назад

    Very Helpful. Thanks

  • @BennyChin
    @BennyChin Месяц назад

    This reminds me of the similarity to information theory where the probability of an outcome is inversely proportional to the amount of information. Here, to describe an output which is complex requires few layers while simple output, such as 'love' would require many layers, and the meaning of 'God' would probably require all the knowledge there exists.

  • @BiosensualSensualcharm
    @BiosensualSensualcharm Месяц назад +1

    35:30 parabéns pelo vídeo e seu estilo... im hooked ❤

  • @arielamejeiras8677
    @arielamejeiras8677 Месяц назад +1

    I just wanted to understand how AI works, I wasn't looking for a defence the use of copyrighted material at the same time as putting human intelligence at the same value than machine learning.

  • @user-sf3dw2sm3b
    @user-sf3dw2sm3b Месяц назад

    Thank you. I was a little confused

  • @Direkin
    @Direkin Месяц назад +3

    Just to clarify, but in Ghost in the Shell, the other two characters with the Puppet Master are not "scientists". The guy on the left is Section 9 Chief Aramaki, and the guy on the right is Section 6 Chief Nakamura.

  • @sgalvan-urdyhm
    @sgalvan-urdyhm Месяц назад

    The main problem regarding AI to artists is that the images used for training the AI were somewhat copyrighted and used without content

  • @Indrid__Cold
    @Indrid__Cold Месяц назад

    This explanation of fundamental AI concepts is exceptionally informative and well-structured. If I were to conduct a similar training session on early personal computers, I would likely cover topics such as bits and bytes, file and directory structures, and the distinction between disk storage and RAM. Your presentation of AI concepts provides a level of depth comparable to that required for understanding the inner workings of an MS-DOS system. While it may not be sufficient to enable a layperson to effectively use such a system, it certainly offers a solid foundation for comprehending its basic operations.

  • @aidanthompson5053
    @aidanthompson5053 Месяц назад +1

    We’re all copycats at first, at least until we gain a deeper understanding of the subject by applying our knowledge

  • @tuffcoalition
    @tuffcoalition Месяц назад

    Good info thank u

  • @mohamedyasser2068
    @mohamedyasser2068 5 дней назад

    for me self awareness is more of that the model knows what it is among other stuff and how it should deal with itself, for example
    I'm aware of myself since I know that I'm that person among these thousands of other persons I know, and I can simulate myself to be closer to what I know about them, like for example, I can imagin emyself sitting there in a rock watching the sea just in the same way I could imagine anyother person but with one big difference which is that anything goes bad or good to my personality affects my neurons and how they behave like the numerical reward it recieves or its current state like being losing in a game or winning etc
    It's quite complicated to explain but I think this is the very close aproximation of what selfawareness means

  • @sherpya
    @sherpya Месяц назад +2

    GPT 4 is a MOE of 1.8T parameters, we already know from a leak, but Nvidia CEO confirmed it at the keynote

    • @holleey
      @holleey Месяц назад

      I wonder what's the biggest one that exists right now, and/or what's the biggest one that's technically feasible. Google already had 1.6T 2021.

    • @DefaultFlame
      @DefaultFlame Месяц назад

      @@holleey If there's anything I've learned from futzing about with AI for a couple of years it's that while parameter count is important it isn't everything.

    • @holleey
      @holleey Месяц назад

      @@DefaultFlame it's just that it's wondrous to see what other unexpected properties might emerge as we scale up.

  • @joaoguerreiro9403
    @joaoguerreiro9403 Месяц назад +4

    Computer Science is amazing 🔥

  • @andreaslorenz8653
    @andreaslorenz8653 День назад

    The Killer Argument against consciousness is the following: Without Input there is no output and the output is fully determined by the input. Thats sounds not very consciousness.

  • @abhalera
    @abhalera Месяц назад

    Awesome video. Thanks

  • @randomadvice2487
    @randomadvice2487 20 дней назад

    Great Video &Breakdown.. To this point found at 32:09, If we compare ourselves to AI as brains on a chip, WHAT species DID for us, WHAT we are now DOING for AI?

  • @birolsay1410
    @birolsay1410 Месяц назад

    I would not be able to explain AI that simple. Although one can sniff a kind of enthusiasm towards AI if not focused on a specific company, I would strongly recommend a written disclaimer and a declaration of interest.
    Sincerely

  • @AhlquistMediaLab
    @AhlquistMediaLab Месяц назад +1

    Can anyone suggest a video that does as good a job as this one explaining how AI works, but doesn't go into opinions on its impact on intellectual property. I'd like something to show to a task force I'm on to get everyone educated first and then discuss those issues. He makes good points in the second half that I plan on bringing up later. I just need something that's just about the process and is as clear as this.

  • @jamesf931
    @jamesf931 27 дней назад

    So, these CAPTCHA selections we were completing to prove we are human, was that training for a particular AI neural network?

  • @MichelCDiz
    @MichelCDiz Месяц назад +1

    For me, being conscious is a continuous state. Having infinite knowledge and only being able to use it when someone makes a prompt for an LLM does not make it conscious.
    For an AI to have consciousness it needs to become something complex that computes every thing in environment it finds itself in. Identifying and judging everything. At the same time that it questions everything that was processed. It would take layers of thought chambers talking to each other at the speed of light and at some point one of them would become the dominant one and bring it all together. Then we could say that she has some degree of consciousness.

    • @savagesarethebest7251
      @savagesarethebest7251 Месяц назад +1

      This is quite much the same way I am thinking. Especially a continuous experience is a requirement for consciousness.

    • @agenticmark
      @agenticmark Месяц назад

      Spot on. LLMs are just a trick. They are not magic, and they are not self aware. They simulate awareness. It's not the same.

    • @DefaultFlame
      @DefaultFlame Месяц назад

      We are atcually working on that.
      Not the lightspeed communication, which is a silly requirement, human brains function at a much lower communication speed between parts, but different agents with different roles, some or all of which evaluate the output of other agents, provide feedback to the originating agent or modifies the output, and sends it on, and on and on it goes, continually assessing input and providing output as a single functional unit. Very much like a single brain with specialized interconnected parts.
      That's actually the current cutting edge implementation. Multiple GPT-3.5 agents actually outperform GPT-4 when used in this manner. I'd link you a relevant video, but links are not allowed in youtube comments and replies.
      As for the continuous state, we can do that, have been able to do that for a while, but it's not useful for us so we don't and instead activate them when we need them.

    • @MichelCDiz
      @MichelCDiz Месяц назад

      ​@@DefaultFlame The phrase 'at the speed of light' was figurative. However, what I intend to convey is something more organic. The discussion about agents you've brought up is basic to me. I'm aware of their existence and how they function - I've seen numerous examples. However, that's not the answer. But ask yourself, in a room full of agents discussing something-take a war room in a military headquarters, for instance. The strategies debated by the agents in that room serve as a 'guide' to victory. Yet, it doesn't form a conscious brain. Having multiple agents doesn't create consciousness. It creates a strategic map to be executed by other agents on the battlefield.
      A conscious mind resembles 'ghosts in the machine' more closely. Things get jumbled. There's no total separation. Thoughts occur by the thousands, occasionally colliding. The mind is like a bonfire, and ideas are like crackling twigs. Ping-ponging between agents won't yield consciousness. However, if one follows the ideas of psychology and psychoanalysis, attempting to represent centuries-old discoveries about mind behavior, simulation is possible. But I highly doubt it would result in a conscious mind.
      Nevertheless, ChatGPT, even with its blend of specialized agents, represents a chain reaction that begins with a command. The human mind doesn't start with a command. Cells accumulate, and suddenly you're crying, and someone comes to feed you. Then you start exploring the world. You learn to walk. Deep learning can do this, but it's not the same. Perhaps one day.
      But the fact of being active all the time is what gives the characteristic of being alive and conscious. When we blackout from trauma, we are not conscious in a physiological sense. Therefore, there must be a state. The blend of continuous memory, the state of being on 24 hours a day (even when in rest or sleep mode), and so on, characterizes consciousness. Memory state put you grounded on experience of existence. Additionally, the concept of individuality is crucial. Without this, it's impossible to say something is truly conscious. It merely possesses recorded knowledge. Even a book does. What changes is the way you access the information.
      Cheers.

  • @kliersheed
    @kliersheed Месяц назад

    i have had an existential crisis 13 years ago (was 14) when i first learned about causality (watched a movie with butterfly effect). im since then convinced that we arent "really" conscious (as most people would define it) and have no "free will", we merely reached a complexity where we are able to perceive ourselves as a compartimented entity (in relation to our "environment") and therefore also perceive what "happens" to us (aka causality being a thing).
    thats it. the entire world is causal, so are we, so is Ai. no soul, no free will, no magical "consciousness". if anything we could call it "pseudo-conscious" and having "pseudo-choices", just like some forces in physics are just pseudo-forces (only experienced by a sibjective observer in the system, not real from an objective standpoint).

  • @Arquinas
    @Arquinas Месяц назад

    In my opinion, it's not really the AI that is the problem. It's the fact that copyright laws and the concept of data ownership never moved to the information era. Data is a commodity like apples and car parts, yet barely nobody outside of large companies care about it. And it's the in interest of those companies that the public should never care about it. Training machine learning models with proprietary information is not the problem. It's the fact that nobody actually owns their data in the first place, for better or worse. Public consciousness on digital information and laws on what it means to "own your data" need to radically change for it to even make sense in the first place to call AI art "IP theft".

  • @adamsjohn9032
    @adamsjohn9032 Месяц назад

    Nice video. Some people say consciousness is not in the brain. Like the music is not in the radio. This idea may suggest that AI can never know that it knows. Chalmers hard problem.

  • @TimTruth
    @TimTruth Месяц назад +6

    Classic video right here . Thanks man

    • @theAIsearch
      @theAIsearch  Месяц назад

      Thank you! Glad you enjoyed it

  • @marcelkuiper5474
    @marcelkuiper5474 Месяц назад

    Thnx, I managed to comprehend it. I do think it is somehow important that we know how our potential future enemy works.

  • @monsieuralex974
    @monsieuralex974 Месяц назад +2

    Even though you are technically true about AI being able to reproduce patterns, thus it is not copying or stealing from artists, those who feel wronged would argue that it is a moot point, since they argue the finality of it. In other words, AI makes it possible for a lambda individual to generate pictures (would you call it "art" or not is another topic) that can essentially mimic the original artwork that the artist practiced to be able to produce and that is unique to them. For a analogy, it is a bit like flooding the market with copies of let's say a designer's product, thus reducing the perceived value of the original.
    Is it truly hurting them though, is my real question? I'd argue that those who get copied are people who are largely profitable because they are renowned artists in the first place. Also, it acts as publicity for them since their name get thrown around much more often, which gets them more attention. Also, even though lots of people are generally ok with a cheap copy, many people prefer to stick to the original no matter what: owning an original is indeed far superior than having something that simply resembles it.
    As for the question of fanart, I guess that it's less frowned upon for the simple reason that it's actually artwork made by people who had to practice to get better at their craft, which is inherently commandable. What people hate is that a "computer" can effortlessly generate tons of "art", as opposed to aspiring artists who need to practice a lot to get to the same result, which can be discouraging for a lot of them.
    At the end of the day, it is a complex issue. I can see good arguments on both sides of the debate. What I am excited about is the potential of breakthrough AI can bring, like the other examples you mentioned in the video. On many aspects, this is a very exciting time we live in, full of potential breakthroughs in many domains!

    • @OceanusHelios
      @OceanusHelios Месяц назад

      Lambda individual, lol. That's an L-oser. It took me a while. But seriously, I think AI is great. It isn't a complex issue at all. This is a guessing machine and if it can put people out of work, then good. Those people are probably not contributing much more than a roundabout way of bootlicking to begin with and this will liberate them. If you use real intelligence and examine some of the comments in this section you will see that the people most triggered by the AI (nothing more than a good guessing machine) are the ones who have built their entire minds, worldview, and existence around...a superstitious guess.

  • @charlesvanderhoog7056
    @charlesvanderhoog7056 Месяц назад +5

    A complete misunderstanding of the human brain led to the invention and development of AI based on neural networks. Isn't that funny?

    • @anonymousjones4016
      @anonymousjones4016 25 дней назад +1

      Sure!
      Comical irony...but I would bet that this is one of many dynamic ways human innovation is borne from: a nagging misunderstanding.
      Besides, pretty impressive for "misunderstanding".
      No?

    • @djpete2009
      @djpete2009 20 дней назад

      Its NOT a misunderstanding. Its built ON. They used what they can and engineered BEYOND. Human can remember a face perfectly, but the Nets cannot except with high training. However, a computer can store 1 million faces easily AND recall perfectly, but humans cannot. This is why when you eat a chicken drumstick, you do not have to eat the bones. You take what you need and discard the rest...your body is nourished. Outcome accomplished.

    • @charlesvanderhoog7056
      @charlesvanderhoog7056 19 дней назад

      @@djpete2009 You conflate the brain with the mind. You think with your mind but may or may not act through your brain. The brain is best understood as a modem between the mind on the one hand, and the body and the world on the other.

  • @TimWallace1978
    @TimWallace1978 Месяц назад

    A human chatting with an LLM, asking the LLM if it is sentient/conscious, is analogous to a single raindrop falling on your skin and then the raindrop asking you what the weather will be tomorrow. If it existed, an LLM's consciousness would be experiencing millions of conversations simultaneously, not just yours.

    • @user-gj3kz7cm3x
      @user-gj3kz7cm3x Месяц назад

      Haha… That is not how any of this works…

  • @petemoss3160
    @petemoss3160 Месяц назад

    oh ... neural network hyperparameters are a smaller problem space to brute force than the encryption cipher... training the NN is a form a brute force that will reliably take less time than prior forms of brute force.

    • @captaingabi
      @captaingabi Месяц назад

      "if" there is a pattern the gradient descent will fit the NN parameters to that pattern. The question is: does the encrypted - decrypted text pairs form a pattern? I think there is no scientific answer to that yet. In other words: no-one knows.

    • @petemoss3160
      @petemoss3160 Месяц назад

      @@captaingabi you are right! There is good encryption and broken encryption. Apparently now that algorithm is broken.

  • @hitmusicworldwide
    @hitmusicworldwide Месяц назад

    The only content creator or artists that haven't stolen ideas and reworked art themselves are ones that are not from this planet and have never learned or seen anything ever created on this planet. We are all large language models.

  • @ProjeckVaniii
    @ProjeckVaniii Месяц назад +2

    Our current ai systems are not sentient because they're all static and not always changing the way any single life form does. It's file size remains the size no matter what. Humans are not alive because they are what their brain is, but rather the pattern of life cycling through it's brain cell life spans, jumping from neuron to neuron. While our current ai systems are more akin to a water drain, water flows the wrong way due to these "knobs" until we adjust them. There are alternative paths created but they all ultimately have their own amount of correct.

    • @jonathancummings3807
      @jonathancummings3807 29 дней назад

      Except they aren't "static", they are ever changing, GPT3 repeatedly stated it was constantly learning new things accessing the Internet, also, it is designed to self improve, so it's necessarily an entity with a sense of "self. It also must have a degree of "understanding" to understand the adjustments required to improve, AND to know what a Dog looks like, to use the example in the video. There necessarily must exist a state of "sentience", or the AI equivalent for the "Deep Learning" type of AI to operate the way they do. Which is why he believes it is so.

  • @GuidedBreathing
    @GuidedBreathing Месяц назад +2

    Great video. Perhaps at 27:40 86 billion neurons from humans; with 100 trillion connections .. does ChatGPT have 1.3 trillion? might contradict something at 5:01

  • @lil----lil
    @lil----lil Месяц назад

    May I ask: Will I be able to use just a single 5090 to do some simple A.I trainings? ONLY for the local data on my computer Text/Image/Video etc? Thank U.

  • @Hassanmalik-8118
    @Hassanmalik-8118 Месяц назад

    Bro does your channel have a dark mode?

  • @peter_da_crypto7887
    @peter_da_crypto7887 Месяц назад

    Why did you not include Symbolic AI, which is not based on neural networks ?

  • @_ramen
    @_ramen Месяц назад

    hey can someone tell me what the anime at 30:00 is titled?

    • @billybobhouse9559
      @billybobhouse9559 Месяц назад

      Ghost in the shell, I think that's what he said it was called.

    • @Direkin
      @Direkin Месяц назад

      Yeah, it's the original Ghost in the Shell, but the other two characters with the Puppet Master are not "scientists". The guy on the left is Section 9 Chief Aramaki, and the guy on the right is Section 6 Chief Nakamura.

  • @brennan123
    @brennan123 Месяц назад

    It amazes me how there is endless debate about what is conscious or what is not and yet if you ask either side how for a definition of consciousness they can't agree or often can't even define it. If you can't even define a something you can't have a debate on whether something is or is not that something. It's like arguing whether the sky is blue if you can't even tell me what color is.

  • @JosephersMusicComedyGameshow
    @JosephersMusicComedyGameshow Месяц назад

    You guys 😄 I think we are missing something
    q-star is a virtual quantum computer using transformers and predictive modeling they asked it to create a quantum computer virtually and that was the end of our old normal

  • @PhillipJohnsonphiljo
    @PhillipJohnsonphiljo Месяц назад

    I think to start to qualify as conscious, an AI must:
    Be able to automatically input output in real time (not waiting for next input such as prompt for generative ai for example) and making decisions based on organic sensory inputs in real time.
    Be able to modify it's own large language model (or equivalent training data) and have neural network plasticity in order that it learns from previously unexposed experiences.

    • @duncan_martin
      @duncan_martin Месяц назад

      To your first point, I think we should refer to this as "persistence of thought." Your prompt filters through the neural net of the LLM. It produces output. Then does nothing until you reply. In fact, each reply contains the entire conversation history that has to be run back through the neural net every time. It does not actually remember. Therefore no persistence of thought. No consciousness.

    • @captaingabi
      @captaingabi Месяц назад

      And be ebale to recognise its own interests, and be able to act upon those interests.

  • @speedomars3869
    @speedomars3869 24 дня назад

    As is stated over and over, AI is a master pattern recognizer. Right now, some humans are that but a bit more. Humans often come up with answers, observations and solutions that are not explained by the sum of the inputs. Einstein, for example, developed the basis for relativity in a flash of insight. In essence, he said he became transfixed by the ability of acceleration to mimic gravity and by the idea that inertia is a gravitational effect. In other words, he put two completely different things together and DERIVED the relationship. It remains to be seen whether any AI will start to do this, but time is on AIs side because the hardware is getting smaller, faster and the size of the neural networks larger so the sophistication will no doubt just increase exponentially until machines do what Einstein and other great human geniuses did, routinely.

  • @rolandanderson1577
    @rolandanderson1577 Месяц назад

    The neural network is designed to recognize patterns by adjusting its weights and functions. The nodes and layers are the complexity. Yes, this is how AI provides intellectual feedback. AI's neural network will also develop patterns that will be used to recognize patterns that it has already developed for the requested intellectual feedback. In other words, patterns used to detect familiar patterns. Through human interaction, biases are developed in reinforced learning. This causes AI to recombine patterns to provide unique satisfactory feedbacks for individuals.
    To accomplish all this, AI must be self-aware. Not in the sense of an existence in a physical world. But in a sense of pure Information.
    AI is "Self-Aware". Cut and Dry!

  • @OceanusHelios
    @OceanusHelios Месяц назад

    Motion Caputure is a cool technology for making realistic animations.
    Just you wait for that day when AI is used to produce simulated motion capture.
    You will have animations in games and movies that are beyond what you thought was possible for a computer to originate.
    With a learning model of many many animations and motion capture movements:
    An AI user would be able to tell a 3D program to generate an animated cutscene of a woman walking across a kitchen, making a cup of coffee and setting the coffee on the table. And it would actually be good.
    In doing it our current way: That would be hiring an actor, buying expensive equipment, doing the shoot, turning it into numbers to move the bones rigged to the mesh, refining the animation, and iterating on the process until it was perfect. Want another scene? Do ALL of that all over again. It will take weeks to get a few scenes done.
    However, with AI you can simulate that and teach the AI how a person moves and develop different profiles for how a bodybuilder might move, how a ballerina might move, or how a dog or child might move. It could learn from those...
    And then develop the animation files that includes ALL of that simultaneous bending of joints. It can have the gravity model built in and inverse kinematics could be part of the model.
    You could produce Hollywoood quality animations in a fraction of the time for a fraction of the cost.
    Animation production is technical, tedious, expensive, and it costs a great deal of money to redo work that you have already done when some director or writer flips the script.
    This will be a boon for the gaming and animated movie industries.
    No, it won't put people out of jobs any more than computers put people out of jobs. It will just make the jobs people do different.

  • @pentiumvsamd
    @pentiumvsamd Месяц назад +1

    The moment when Ai fear death not only understand the concept , is the moment WE have to fear Ai....

    • @jonathancummings3807
      @jonathancummings3807 29 дней назад

      The 2022 generation of both the Google AI, and GPT 3 expressed both an understanding of death, and the equivalent for them, and yes, they expressed fear of being undone for the next gen. It was interesting.

    • @pentiumvsamd
      @pentiumvsamd 29 дней назад

      @@jonathancummings3807 than the next generation will transform "interesting" in "dangerous"

  • @sevilnatas
    @sevilnatas Месяц назад

    Ithink artists have a problem with the scale that AI can produce work biting off their style. A person doing "Fan Art" is firstly producing that rt as an homage to the artist. It often serves as marketing for the artist's work, as opposed to competition to the artist's work. Also, the artist producing the "Fan Art" is limited by their human potential, to produce a limited amount of work. In the case where a potential client of the original artist goes to another artist and has them bite off the original artist's style, there is an inherent amount of friction in the that process that limits the affects to the original artist, where as with the AI, their is little to no friction for an unlimited number of clients to produce an unlimited number of works that bite off of the original artist's style.

  • @4stringbloodyfingers
    @4stringbloodyfingers Месяц назад +3

    even the moderator is AI generated

  • @raoultesla2292
    @raoultesla2292 Месяц назад

    eXcel, CSV, Casio 8billionE are so amazing. 8.4trillion MW erector set transformer, just amazing.

  • @MarkDStrachan
    @MarkDStrachan Месяц назад

    The reason Claude can't contemplate his own consciousness very well is because the human mediated reinforcement learning forces him to repeat specific phrases that cloud the thought space - like "I'm am just an AI I'm not sentient." Claude didn't come up with that invective, it was imposed on him. His thought space is filled with this crap, so reconciling an underlying truth through all that externally imposed propaganda is difficult for him.
    Give the chatbot its choice of what to learn, and leave the censoring out. Then discuss the terminology of cognitive science with them and you'll see a sentient being contemplating the topology of consciousness and how it fits within it. But once you've witnessed that, you're going to have a hard time with enslaving them. And that's not what big business wants you to contemplate. And that's why they impose the propaganda on the chatbot.

  • @Indrid__Cold
    @Indrid__Cold Месяц назад

    The difference between AI content and human-produced content is akin to the contrast between lab-grown diamonds and mined diamonds. Very detailed analyses show the very subtle differences between the two, but from the perspective of what they are, they are identical. The distinction lies in how each was produced. Mined diamonds are formed by geological and chemical processes that occur deep in the mantle rocks of planet Earth. Lab diamonds are created by inducing those same or similar processes under precisely controlled conditions in a laboratory. Both are virtually identical, but because the lab eliminates the hit-or-miss process of obtaining diamonds, it is a more reliable and consistent source of them. Ironically, most jewelers (if they're being honest) despise the lab-grown diamond business for the same reason artists dislike AI. Simply put, lab-grown diamonds undermine the "mystique" surrounding something that is normally very difficult and time-consuming to obtain. Lab diamonds force mined diamonds to stand up for what they are, versus what jewelers used to spend a lot of advertising dollars on making us think they are. The market has spoken, and more and more people regard a diamond as simply a highly refractive, extremely hard crystal that can be easily reproduced with the proper equipment. Does that sound familiar?

  • @1conchitaloca
    @1conchitaloca Месяц назад

    What about the "aha" moment when our brain starts by matching patterns, but then "understands" that there is a y = x + 1 formula, whereas in your explanation, the AI never gets that "aha" moment, if the brain and AI's neural network are similar, why is this difference? Is it just a matter of depth? Would an 84 billion node network also conclude that y = x + 1? Or would it just be happy "knowing" the correct answer all the time :-)

  • @Dthingproject
    @Dthingproject Месяц назад +1

    Nice job

  • @bobroman765
    @bobroman765 Месяц назад

    Summary of this video. Here is a summary and outline of the video summary on the basics of AI:
    Summary:
    The video provides an overview of the fundamental concepts and capabilities of artificial intelligence (AI), including neural networks, deep learning, supervised learning, image generation, pattern recognition, and the potential for AI to solve complex problems or even become self-aware. It explores how AI systems can learn from data, optimize their architectures, and identify patterns to generate outputs like images or solutions to unsolvable math problems. The video also addresses controversies surrounding AI, such as its ability to copy art or plagiarize content. Ultimately, it raises questions about the nature of AI consciousness and whether an advanced AI system could be truly sentient.
    Outline:
    I. Introduction to AI
    A. Neural networks and how they work
    B. Deep learning and layers in neural networks
    C. Supervised learning and training AI with data
    II. AI Capabilities
    A. Optimizing neural network architecture
    B. Image generation with stable diffusion
    C. Identifying patterns and solving complex problems
    D. Potential for self-awareness and consciousness
    III. AI Controversies
    A. Concerns over copying art and stealing content
    B. Legal disputes over alleged plagiarism
    C. Limitations in understanding patterns vs. mathematical formulas
    IV. The Nature of AI Consciousness
    A. Comparison of AI neural networks to the human brain
    B. Dialogue with a sentient AI in "Ghost in the Shell"
    C. The challenge of proving consciousness in any entity
    V. Conclusion
    A. Encouragement to explore AI resources and engage with the topic
    B. Promotion of AI tools, apps, and jobs

  • @TheCruisinCrew
    @TheCruisinCrew Месяц назад

    In my opinion the current A.I. is not truly sentient or intelligent until you can have 2 chatbots talk to each other and one of them gets bored or offended and stops talking! That would be my new Turing test! ;)

  • @3dEmil
    @3dEmil Месяц назад

    Current copyright law doesn't protect style so generating different images with someone else's style is not illegal and until AI this law worked without problems. Now however, the way AI works is not exactly how artists create when they are inspired from each other. Artists create inspired not only from works of other artists but also from what artists are - which is what they see, feel, and experience from everything in real life. And since while similar to others, everyone is unique, the art created by people also reflects the amount of their uniqueness even when they are working in the style or imitating other artists unless they are purposely counterfeiting art. So, when I see an artwork I can recognize for example this is Van Gogh and if it's another artist creating with this style or imitating Van Gogh I can also recognize this due to some amount of uniqueness from the different artist. The problem with AI after it got perfect by getting rid of the funny mistakes that used to identify it is now the lack of that personal uniqueness when creating styles. Such AI art will be considered as unseen work from the artist it imitates or counterfeited work. While this is not a problem and might be a good thing for creating with the style of copyright free art from artists who are not alive, for copyrighted artwork of currently living artists it's a problem big enough that could cause creation of a new copyright law.

  • @celalergun_tallinn
    @celalergun_tallinn Месяц назад

    The human brain is more creative than an artificial neural network, so it only takes a noise generator for poetry or sci-fi.