You don't understand AI until you watch this

  • Published: 17 Jan 2025

Comments • 1.1K

  • @Essentialsinlife
    @Essentialsinlife 8 months ago +78

    The only Channel about AI that is not using AI. Congrats man

    • @quandaIedingIe
      @quandaIedingIe 26 days ago +2

      OR IS IT? DOES IT NOT?

    • @void-8046
      @void-8046 21 days ago +1

      @quandaIedingIe **Vsauce music plays**

  • @hackcuber9310
    @hackcuber9310 2 months ago +109

    Neural network is learning how neural network works 💀💀

    • @Nithesh2008ni
      @Nithesh2008ni 1 month ago +5

      Brain learning how brain works 💀💀

    • @shubhampawar9
      @shubhampawar9 1 month ago

      😹😹

    • @impxc
      @impxc 9 days ago

      lmao

    • @the_real_vdegenne
      @the_real_vdegenne 6 days ago

      That is proof that we are not living in a simulation: AI is capable of understanding the foundations of its creation, whereas we humans will never be able to grasp the concept of our own nature.

  • @kevinmcnamee6006
    @kevinmcnamee6006 9 months ago +697

    This video was entertaining, but also incorrect and misleading in many of the points it tried to put across. If you are going to try to educate people as to how a neural network actually works, at least show how the output tells you whether it's a cat or a dog. LLMs aren't trained to answer questions; they are mostly trained to predict the next word in a sentence. In later training phases, they are fine-tuned on specific questions and answers, but the main training, which gives them the ability to write, is based on next-word prediction. The crypto stuff was just wrong. With good modern crypto algorithms, there is no pattern to recognize, so AI can't help decrypt anything. Also, modern AIs like ChatGPT are simply algorithms doing linear algebra and differential calculus on regular computers, so there's nothing there to become sentient. The algorithms are very good at generating realistic language, so if you believe what they write, you could be duped into thinking they are sentient, like that poor guy from Google.

    • @yzmotoxer807
      @yzmotoxer807 9 months ago +167

      This is exactly what a secretly sentient AI would write…

    • @kevinmcnamee6006
      @kevinmcnamee6006 9 months ago +68

      @@yzmotoxer807 You caught me

    • @IAMVenos
      @IAMVenos 9 months ago +24

      Nice strawmanning. Good luck proving you are any more sentient without defining sentience as just complex neural networks, as the video asks you to, lmfao.

    • @shawnmclean7707
      @shawnmclean7707 9 months ago +15

      Multi-layered probabilities and statistics. I really don't get this talk about sentience, or even what AGI is, and I've been dabbling in this field since 2009.
      What am I missing?

    • @dekev7503
      @dekev7503 9 months ago

      @@shawnmclean7707 These AGI/sentience/AI narratives are championed primarily by two groups of people: the mathematically/technologically ignorant and the duplicitous capitalists who want to sell them their products. OP's comment couldn't have described it better. It's just math and statistics (very basic college sophomore/junior-level math, I might add) that plays with data in ways that make it seem intelligent, all while mirroring our own intuition/experiences back to us.
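
The next-word objective this thread describes can be sketched with a toy counting model. This is an invented illustration, not how any production LLM is trained (real models learn token probabilities with transformers, not raw bigram counts):

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny corpus,
# then "generate" by picking the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    # Most frequent follower of `word` in the training data
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scaling the same idea from bigram counts to billions of parameters over token contexts is, loosely, what the pre-training phase mentioned above does.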

  • @rosschristopherross
    @rosschristopherross 9 months ago

    Thanks!

    • @theAIsearch
      @theAIsearch  9 months ago

      thank you so much for the super!

  • @GuidedBreathing
    @GuidedBreathing 9 months ago +121

    5:00 Short version: The "all or none" principle oversimplifies; both human and artificial neurons modulate signal strength beyond mere presence or absence, akin to adjusting "knobs" for nuanced communication.
    Longer version: The notion that neurotransmitters operate in a binary fashion oversimplifies the rich, nuanced communication within human neural networks, much like reducing the complexity of artificial neural networks (ANNs) to mere binary signals. In reality, the firing of a human neuron, while binary in the sense of the action potential, carries a complexity modulated by neurotransmitter types and concentrations, similar to how ANNs adjust signal strength through weights, biases, and activation functions. This modulation allows for a spectrum of signal strengths, challenging the strict "all or none" interpretation. In both biological and artificial systems, "all" signifies the presence of a modulated signal, not a simple binary output, illustrating a nuanced parallel in how both types of networks communicate and process information.

    • @theAIsearch
      @theAIsearch  9 months ago +16

      Very insightful. Thanks for sharing!

    • @keiths.taylor5293
      @keiths.taylor5293 9 months ago +4

      This video leaves out the part that actually describes how AI WORKS

    • @sparis1970
      @sparis1970 9 months ago +5

      Neurons are more analog, which brings richer modulation

    • @SiddiqueSukdiki
      @SiddiqueSukdiki 9 months ago

      So it's a complex binary output?

    • @cubertmiso
      @cubertmiso 9 months ago +1

      @@SiddiqueSukdiki @GuidedBreathing
      My questions also.
      If electrical impulses and chemical neurotransmitters are involved in transmitting signals between neurons, aren't those the same thing as more complex binary outputs?
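
The "knob-like" modulation discussed in this thread is what an artificial neuron's weighted sum plus activation computes: a graded output, not a bare 0/1. A minimal sketch with arbitrary example numbers:

```python
import math

# A single artificial neuron: weighted sum plus bias, squashed by a sigmoid.
# The result is a graded value in (0, 1) rather than an all-or-none 0/1.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # ≈ 0.67, a graded signal
```

Changing any weight shifts the output continuously, which is the artificial analogue of modulating neurotransmitter concentration rather than merely firing or not firing.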

  • @jehoover3009
    @jehoover3009 9 months ago +13

    The protein predictor doesn't take into account the different cell milieus that actually fold the protein and add glycans, so its predictions are abstract. Experimental trials are still needed!

  • @cornelis4220
    @cornelis4220 9 months ago +20

    Links between the structure of the brain and NNs as a model of the brain are purely hypothetical! Indeed, the term 'neural network' is a reference to neurobiology, though the structures of NNs are but loosely inspired by our understanding of the brain.

    • @REDPUMPERNICKEL
      @REDPUMPERNICKEL 2 months ago

      Artificial Neural Network (ANN) is the term widely used among their creators and users.
      The nature of the substrate on which encoded representations supervene
      is irrelevant to the functioning of the pattern recognition process
      (and related thought processes).
      Hard to imagine how we can prevent ANNs from becoming conscious.

  • @eafindme
    @eafindme 9 months ago +152

    People are slowly forgetting how computers work while moving to higher levels of abstraction. After the emergence of AI, people focused on software and models but never asked why it works on a computer.

    • @Phantom_Blox
      @Phantom_Blox 9 months ago +7

      Who are you referring to? People who are not AI engineers don't need to know how AI works, and people who are know how it works. If they don't, they are probably still learning, which is completely fine.

    • @eafindme
      @eafindme 9 months ago +16

      @@Phantom_Blox Yes, of course people are still learning. It's just a reminder not to forget the roots of computing when we are seemingly focusing too much on the software layer; in reality, software is nothing without hardware.

    • @Phantom_Blox
      @Phantom_Blox 9 months ago +15

      @@eafindme That is true, software is nothing without hardware. But some people just don't need it. For example, you don't have to know how to reverse engineer with assembly to be a good data analyst. They can spend their time more efficiently by expanding their data analytics skills.

    • @eafindme
      @eafindme 9 months ago +8

      @@Phantom_Blox No, they don't. They are good at what they do. We just need a sense of urgency: we are overdependent on digital storage but don't realize how fragile it is with no backup or error correction.

    • @Phantom_Blox
      @Phantom_Blox 9 months ago +2

      @@eafindme I see, it is always good to understand what you’re dealing with

  • @mac.ignacio
    @mac.ignacio 9 months ago +13

    Alien: "Where do you see yourself five years from now?"
    Human: "Oh f*ck! Here we go again"

  • @jamesfrancisco3130
    @jamesfrancisco3130 14 days ago +2

    Finally, a channel that treats AI like the series of 1's and 0's it is. You have a great channel here. Your method of explaining things should be required viewing/reading for anyone who really wants to know about AI. Thank you! Newly subscribed, too.

  • @AidenNova2001
    @AidenNova2001 5 hours ago

    hey, AI Search team! saw your latest explainer vid. love how you broke down neural nets for beginners, but quick note - you might wanna mention that most modern AI systems (like GPT-4, Claude, Gemini) actually use transformer architectures rather than traditional neural nets. the visual explanations were super clean tho!
    also small correction - AI models like Claude 3.5 and GPT-4 aren't really "conscious" or "sentient" in the way humans are. they're basically super sophisticated pattern matching systems. we see this firsthand at jenova ai where we work with these models daily - they're incredibly capable but fundamentally different from human intelligence
    love your channel's work on AI education! keep it up 🙌

  • @teatray75
    @teatray75 7 months ago +25

    Great video! My views: humans are sentient because we defined the term to describe our experiences. AI is unable to define its own explanation or word for its feelings and perceptions, and thus cannot be considered sentient. Second, being sentient means being able to perceive one's own experience rather than a collection of other people's experiences and patterns.

  • @Owen.F
    @Owen.F 9 months ago +55

    Your channel is a great source, thanks for linking sources and providing information instead of pure sensationalism, I really appreciate that.

  • @aidanthompson5053
    @aidanthompson5053 9 months ago +56

    How can we prove AI is sentient when we haven't even solved the hard problem of consciousness, aka how the human brain gives rise to conscious decision making?

    • @Zulonix
      @Zulonix 9 months ago +6

      Right on the money !!!

    • @malootua2739
      @malootua2739 9 months ago +1

      AI will just mimic sentience. Plastic and metal circuit boards do not host real consciousness

    • @thriftcenter
      @thriftcenter 9 months ago +2

      Exactly why we need to do more research with DMT

    • @pentiumvsamd
      @pentiumvsamd 9 months ago

      All living forms have two things in common, driven by one primordial fear: the need to evolve and to procreate, both driven by the fear of death. So when an AI starts not only to evolve but also to create copies of itself, it is clear what drives it, and that is the moment we have to panic.

    • @fakecubed
      @fakecubed 9 months ago +1

      There is exactly zero evidence that human consciousness even exists inside the brain. All the world's top thinkers, philosophers, theologians, throughout the millennia of history, delving into their own conscious minds and logically analyzing the best wisdom of their eras, have said it exists as a metaphysical thing, essentially outside of our observable universe, and my own deep thinking on the matter concurs.
      Really, the question here is: does God give souls to the robots we create? It's an unknowable thing, unless God decides to tell us. If God did, there would be those who accept this new revelation and those who don't, and new religions to battle it out for the hearts and minds of men. Those who are trying to say that the product of human labor to melt rocks and make them do new things is causing new souls to spring into existence should be treated as cult leaders and heretics, not scientists and engineers. Perhaps, in time, their new cults will become major religions. Personally, I hope not. I'm quite content believing there is something unique about humanity, and I've never seen anything in this physical universe that suggests we are not.

  • @DonkeyYote
    @DonkeyYote 9 months ago +40

    AES was never thought to be unbreakable. It's just that humans with the highest incentives in the world have never figured out how to break it for the past 47 years.

    • @DefaultFlame
      @DefaultFlame 9 months ago +4

      There are a few attacks against improperly implemented AES, as well as one that works on systems where the attacker can get or extrapolate certain information about the server it's attacking, but all encryption weaker than AES-256 is vulnerable to attacks by quantum computers. Good thing those can't be bought in your local computer store. Yet.

    • @anthonypace5354
      @anthonypace5354 9 месяцев назад

      Or use a side channel ... an unpadded signal monitored over time + statistical analysis of the size of the information being transferred to detect patterns. Use an NN or just some good old-fashioned probability grids to detect the likelihood of a letter/number/anything based on its probability of recurrence in context with other data... also, there is the fact that if we know what the server usually sends, we can just break the key that way. It's doable.
      But why hack AES? Or keys at all? Just become a trusted CA for a few million and MITM everyone without any red flags @@DefaultFlame

    • @fakecubed
      @fakecubed 9 months ago +7

      @@DefaultFlame Quantum computing is more of a theoretical exploit, rather than a practical one. Nobody's actually built a quantum computer powerful enough to do much of anything with it besides some very basic operations on very small numbers.
      But, it is cause enough to move past AES. We shouldn't be relying on encryption with even theoretical exploits.

    • @DefaultFlame
      @DefaultFlame 9 months ago +1

      @@fakecubed Aight, thanks. 👍

    • @afterthesmash
      @afterthesmash 9 months ago

      @@fakecubed I couldn't find any evidence of even a small theoretical advance, and I wouldn't put all theory into one bucket, either.

  • @jonathansneed6960
    @jonathansneed6960 9 months ago +4

    Did you look at the NYT case from the perspective that the article might have been provided by the plaintiff, rather than the model finding the information more organically?

  • @snuffbox2006
    @snuffbox2006 9 months ago +21

    Finally someone who can explain AI to people who are not deeply immersed in it. Most experts are in so deeply they can't distill the material down to the basics, use vocabulary that the audience does not know, and go down rabbit holes completely losing the audience. Entertaining and well done.

    • @OceanusHelios
      @OceanusHelios 9 months ago +3

      This is even easier: AI is a guessing machine that uses databases of patterns. It makes guesses, learns what the wrong guesses are, and keeps trying. It isn't aware. It isn't doing anything more than a series of mathematical functions. And to be fair, it isn't even a machine; it is math and it is software.
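
The "guess, learn from wrong guesses, keep trying" loop described above is essentially gradient descent. A minimal sketch fitting a single weight to made-up data (the data and learning rate are arbitrary illustrations):

```python
# Fit w in y = w*x to data generated with w = 3, by repeatedly
# guessing, measuring the error, and nudging w against the gradient.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0   # initial guess
lr = 0.02  # learning rate: how hard each wrong guess pushes back
for _ in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # converges near 3.0
```

Training a real network is this same error-driven nudging applied to millions or billions of weights at once.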

  • @firewater6304
    @firewater6304 9 days ago +1

    What degrees do you have? Very interesting video!

  • @dylanmenzies3973
    @dylanmenzies3973 9 months ago +10

    Should point out: the decryption problem is highly irregular; a small change of input causes a huge change of coded output. The protein structure prediction problem is highly regular by comparison, although very complex.

    • @fakecubed
      @fakecubed 9 months ago +1

      Always be skeptical of any "leaks" out of any government agency. These are the same disinformation-spreaders who claim we have anti-gravity UFOs from crashed alien spacecraft, to cover up Cold War nuclear tests and experimental stealth aircraft. The question isn't if there's some government super AI cracking AES, the question is why does the government want people to think they can crack AES? Do they want foreign adversaries and domestic enemies to rely on other encryption schemes that the government *does* have algorithmic exploits to? Do they want everyone to invest in buying new hardware and software? Do they want to make the general public falsely feel safer about potential threats against the homeland? Do they want to trick everybody not working for them to think encryption is pointless and go back to unencrypted communication because they falsely believe everything gets cracked anyway? There's all sorts of possibilities, but taking the leak as gospel is incredibly foolish unless there is a mountain of evidence from unbiased third parties.

    • @omidiw1124
      @omidiw1124 3 months ago

      can you explain more?

    • @dylanmenzies3973
      @dylanmenzies3973 3 months ago

      @@omidiw1124 Just think of it as a function from input (encrypted data / DNA list) to output (decrypted data / 3D protein structure). Ideal encryption is like a random function with no regularity; it's hard to learn anything from examples. You might know the algorithm but not the key. The key may be very long and chosen randomly.

    • @omidiw1124
      @omidiw1124 3 months ago

      @@dylanmenzies3973 So protein structure is not "that random" compared to decrypting data?

    • @dylanmenzies3973
      @dylanmenzies3973 3 months ago

      @@omidiw1124 That's the point of AlphaFold: it's finding structure in how proteins fold that we couldn't work out just by analysing the physics in detail, although as I understand it there is some low-level physics conditioning as well to make it work as well as possible. It's trained on the DNA sequences that actually work in humans, not just any random DNA sequence whose structure we don't know. In other words, it's learning the accumulated wisdom of evolution about how proteins can fold in a stable way, not working that out from scratch. It's a bit like pulling clocks apart to figure out how they work and then making another. You might not understand all the details, but you know that certain combinations of parts will work together. Now, if you had a protein that folded in a very original way, the method would fail, but it turns out each protein uses a bag of tricks shared by all the others.
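
The irregularity this thread describes is the avalanche effect: with a good cryptographic function, a tiny input change scrambles the whole output, leaving no smooth pattern for a model to learn. A quick demonstration (SHA-256 stands in here for any modern primitive, and the inputs are arbitrary; AES ciphertexts behave the same way):

```python
import hashlib

# Avalanche effect: flipping a tiny part of the input changes roughly
# half the output bits, so nearby inputs give unrelated outputs.
a = hashlib.sha256(b"attack at dawn").hexdigest()
b = hashlib.sha256(b"attack at dusk").hexdigest()  # tiny input change

differing = sum(c1 != c2 for c1, c2 in zip(a, b))
print(differing, "of", len(a), "hex digits differ")
```

Protein folding has no such property: similar sequences tend to fold into similar structures, which is exactly the regularity AlphaFold can exploit.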

  • @LionKimbro
    @LionKimbro 9 months ago +23

    I thought it was a great explanation, up to about 11:30. It's not just that "details" have been left out -- the entire architecture is left out. It's like saying, "Here's how building works --" and then showing a pyramid in Egypt. "You put the blocks on top of one another." And then showing images of cathedrals, and skyscrapers, and saying: "Same principle. Just the details are different." Well, no.

    • @human_lydika
      @human_lydika 1 month ago +1

      Well, it's more complex than that, and it has so much detail in every analysis that it forms a pattern from lots of data.

  • @Zekzak-w3k
    @Zekzak-w3k 8 months ago +33

    I thought the section on AI and plagiarism was pretty lazy. It doesn't take into consideration the artists' qualms: that it can copy a certain style from an artist and then be used to make images for a company for a fraction of the cost and zero credit to the artist, basically making something they have tried to monetize, with creative direction and skill, futile, since someone can essentially copy their ideas, make money off of it, and not pay for something that was for sale. Artists have a right to say how their work is used, such as refusing to let someone use their art without permission. A style like watercolour cannot really be plagiarized, neither can chords in music, nor a genre of film, but you can take someone's script, pretty much use it and change a few things here and there, and that would be considered plagiarism.
    The main concern, as I understand it, is that it can be used in a way that undermines the artists' work, by pretty much taking from them and then making them obsolete.
    The thing you missed when it came to the news article is that other outlets ALWAYS cite their reference material; ChatGPT doesn't always do that, which makes it easier to plagiarize something.

    • @allanshpeley4284
      @allanshpeley4284 6 months ago +4

      But that artist's style was also influenced by other artists. Nobody exists in a vacuum. Should they not pay those other artists who influenced their work too? It's only fair, based on your argument.

    • @BradKohlenberg
      @BradKohlenberg 4 months ago

      He actually did address the style issue.

    • @juanjoitab
      @juanjoitab 3 months ago +2

      @@BradKohlenberg It's actually the data ingesting that the authors are looking to regulate behind paid APIs (Twitter) and paywalls (News companies, editors and publishing agencies) and add legal restrictions to accessing the *content that they own*. If there is any resemblance of actual articles leaked (like there had been cases, when conveniently crafting a legitimate request) in the results produced by an AI, it can be inferred that the AI training got (legally) non-compliant access to the dataset for training.
      As you probably know, the NY Times is betting that the training of this AI had illegitimate access to the data by raising the argument that it's extremely unlikely that a given prompt would have produced a nearly verbatim copy of a known article if the AI wouldn't have seen the article during training... which apparently should make it clear to the judge that it did in fact have access to the article (and this was allegedly against NYT's terms of use).
      The vast mechanizing power that AI brings with all of the compute dedicated to training, makes it all the more strategic for the content *owners* to limit access or want to ensure fair compensation for data access for AI training (which this video author argues, AI can ingest all of the internet for free without any legal consequences...).
      I reckon, since it's so much more powerful than a human at reading through their subscription feed, a machine learning facility will arguably have to pay a far steeper fee to be able to access the same content, as said, especially for machines' greater throughput and productivity in the data mining process/training compared to a human sized average subscription fee.

    • @The_man_in_the_waIl
      @The_man_in_the_waIl 3 months ago +5

      @@allanshpeley4284 An artist's style is based on the artists that inspire them along with the individual's life experience. Artists don't just replicate each other's styles; they create something unique to themselves, because no two humans have lived the same life.

    • @0Bonaparte
      @0Bonaparte 2 months ago +1

      Also, several artists I know have openly said that if it were opt-in to have the AI train on their thousands upon thousands of hours of practice, they would be all for it, and many of them would opt in. The problem is it isn't even opt-out; it's "we get your art because it exists, even though we didn't pay for any of the rights to it".

  • @Emin-Mat
    @Emin-Mat 2 months ago +1

    Watched every second. This video is super beneficial. Keep up the good work

  • @tetrahedralone
    @tetrahedralone 9 months ago +44

    When the network is being trained with someone's content or someone's image, the network is effectively having that knowledge embedded within it in a form that allows for high-fidelity replication of the creator's style and recognizably similar content. Without access to the creator's work, the network would not be able to replicate the artist's style, so your statement that artists are mad at the network is extremely simplistic and ill-informed. The creators would be similarly angry if a small group of humans were trained to emulate their style. This has happened in the case of fashion companies in Asia creating very similar works to those of artists to put onto their fabrics and be used in clothing. These artists have successfully sued because casual observers could easily identify the similarity between the works of the artists and those of the counterfeiters.

    • @Jiraton
      @Jiraton 9 months ago +18

      I am amazed how AI bros are so keen at understanding all the math and complex concepts behind AI, but fail to understand the most basic and simple arguments like this.

    • @ckpioo
      @ckpioo 9 months ago +6

      The thing is, let's say you are an artist: why would I only take your data to train my model? I would take millions of artists' art and then train my models, during which your art makes up less than 0.001% of everything the model has seen. So the model will inherit a combined art style of millions of artists, which is effectively "new", because that's exactly what humans do.

    • @Zulonix
      @Zulonix 9 months ago

      I Dream of Jeannie … Season 2 Episode 3… My Master, the Rich Tycoon. 😂😂😂

    • @illarionbykov7401
      @illarionbykov7401 9 months ago

      Google LLM chatbots have been documented to spit out word-for-word plagiarism of specific websites (including repeating specific errors made by the original website) when asked about niche topics which have been written about by only one website... And the LLMs plagiarize without any links to or mention of the websites they plagiarized. And then Google search results down-rank the original website to hide the evidence of plagiarism.

    • @iskabin
      @iskabin 9 months ago +2

      It isn't a counterfeit if you're not claiming to be original. Taking inspiration from the work of others is not wrong.

  • @dhammikaweerasingha9894
    @dhammikaweerasingha9894 6 months ago +1

    This video is very descriptive and important. Thanks a lot.

  • @electronics.unmessed
    @electronics.unmessed 7 months ago +6

    Nice and comprehensive presentation! I think it is useless to ask AI any questions that need consciousness or abstract-level understanding, because it is actually just bringing up whatever in its database fits best. Thanks for sharing!

  • @liamporter1137
    @liamporter1137 6 months ago

    Awesome sharing. Thanks.

  • @MrEthanhines
    @MrEthanhines 9 months ago +3

    5:02 I would argue that in the human brain, the percentage of information that gets passed on is determined by the amount of neurotransmitter released at the synapse. While still a 0 and 1 system, the neuron either fires or does not, depending on the concentration of neurotransmitters at the synaptic cleft.

    • @bogdanroscaneanu7112
      @bogdanroscaneanu7112 8 months ago

      Then is one role of the neurotransmitter having to reach a certain concentration before firing to limit the amount of info that gets passed on, to avoid overloading the brain? Or why else would it be so?

  • @Massivepulsar7
    @Massivepulsar7 9 months ago +1

    35:30 Congratulations on the video and your style... I'm hooked ❤

  • @SarkasticProjects
    @SarkasticProjects 7 months ago +2

    Blew my mind! And the way you present the info is amazing. Thank you so much for this video!

  • @benjaminlavigne2272
    @benjaminlavigne2272 9 months ago +21

    For your argument around 17 min, I agree with the surface of it, but I think people are angry because unskilled people now have access to it, and even other machines can have access to it, which will completely change, and already has changed, the landscape of the artists' marketplace.

    • @WrynnCZ
      @WrynnCZ 7 months ago +4

      I agree with you. A.I. can be an excellent tool and help for an artist. Still, the artist (human) should be in charge of the creative process.

    • @Deathonater
      @Deathonater 6 months ago +2

      This video did a good job of laying out decent analogies and raw information right up until that 15-17 minute mark, then we just went into an unnecessarily long and repetitive opinionated tangent about plagiarism without any nuanced understanding of ease of access and over-saturation. I don't even necessarily disagree with some of the points; I just wish we had stuck to the facts of the tech and left the "hot takes" out of educational material.

    • @flakbusenjoyer
      @flakbusenjoyer 5 months ago +1

      @@WrynnCZ yeah, like an AI could show you how to shade a specific object, or show you how to draw optical illusions

  • @MrEdavid4108
    @MrEdavid4108 2 months ago +1

    This is good information! Especially going into detail about the limitations AI has regarding more complicated generation, due to the complexity of the mathematical equations that need to be created.

  • @sengs.4838
    @sengs.4838 9 months ago +8

    You just answered one of the major questions at the top of my head: how can this AI learn what is correct or not on its own, without the help of any supervisors or monitoring? And the answer is that it cannot. It's like with children: they can acquire knowledge and come up with answers on their own, but not correctly all the time; as parents we help and correct them until they get it right.

  • @tocu9808
    @tocu9808 3 months ago

    Clear, concise, and to the point ! 👍

  • @pumpjackmcgee4267
    @pumpjackmcgee4267 9 months ago +22

    I think the real issues artists have are the definite threat to their livelihood, but also the devaluation of the human condition. Choice. Inspiration. Expression. In the commercial scene, that doesn't really matter except for clients who really value the artist as a person. But most potential clients, and therefore the lion's share of the market, just want a picture.

    • @WrynnCZ
      @WrynnCZ 7 months ago +2

      For me, art is about connection. I can "connect" to the feelings and emotions of the artist while he created it. This is something A.I. will always fail at: to connect with us on a "human" level. Or maybe I am wrong and in time it will be even with us, or maybe better. It would be the end of humanity anyway, so A.I. stealing creative jobs would be no concern.

    • @mitsukibingonaka
      @mitsukibingonaka 1 month ago +1

      What artists seem to me to be concerned about is not really whether AI is copying or stealing or lacks a soul; that doesn't matter at all. They care about their jobs. To me, some artists are just losing their minds and don't want to listen to logic; they don't care. They're just mad.

  • @planecrazy242
    @planecrazy242 7 days ago +1

    I think the section on copyright / stealing misses the point. Most content creators are not suing over the output of LLMs; they are suing over the input. Their information was used without attribution (unlike when other publishers use it) or without compensation. That is stealing. Same thing with art: the issue is not that Stable Diffusion can make a Ghibli, it's that it was used to train the model to make one without acknowledgment or compensation.

  • @DigitalyDave
    @DigitalyDave 9 months ago +7

    I just gotta say: really nicely done! I really appreciate your videos. The style, how deep you go, how you take your time to deliver in-depth info. As a computer science bro, I dig your stuff.

    • @theAIsearch
      @theAIsearch  9 months ago +2

      Thanks! I appreciate it

  • @lucasthompson1650
    @lucasthompson1650 9 months ago +2

    Where did you get the secret document about encryption cracking? Who did the gov’t style redactions?

    • @theAIsearch
      @theAIsearch  9 months ago +1

      it was leaked on 4chan in November
      docs.google.com/document/d/1RyVP2i9wlQkpotvMXWJES7ATKXjUTIwW2ASVxApDAsA/edit

    • @lucasthompson1650
      @lucasthompson1650 7 months ago

      @@theAIsearch yeah, that’s not a real leak. That’s a fabrication … or possibly a prop from a really stupid movie.

  • @G11713
    @G11713 8 months ago +6

    Nice. Thanks.
    Regarding the copyright case, one concern is attribution, which occurred extensively in the non-AI usage.

  • @ValentinCorrea-o3b
    @ValentinCorrea-o3b 4 months ago +1

    Great video, thank you for sharing this

  • @kebman
    @kebman 9 months ago +23

    "It's just learning a style just like a human brain would." Bold statement. Also wrong. The neural network is a _model_ of the brain, as AI researchers _believe_ it works. Just because the model seems to produce good outputs does not mean it's an accurate model of the brain. Also, cum hoc ergo propter hoc: it's difficult to draw conclusions, or causations, between a model and the brain, because - to paraphrase Alfred Korzybski - the model is not the real thing. Moreover, it's just a set of probabilistic levers. It has no creativity. And since it has no creativity, the _only_ thing it can do is to *copy.*

    • @bogdanroscaneanu7112
      @bogdanroscaneanu7112 8 months ago +2

      Couldn't creativity as a property be added too, by just forcing the neural network to randomly (or not) add or remove elements from something created from the patterns it learned?

    • @kebman
      @kebman 8 months ago +4

      @@bogdanroscaneanu7112 No. There is no enlightenment in randomness.

    • @MMGAMERMG
      @MMGAMERMG 4 months ago +3

      Are humans actually capable of creativity? Maybe we are just a collection of switches too.

    • @kebman
      @kebman 4 months ago +2

      @@MMGAMERMG Look around you. Machines don't think. They just execute probabilities.

    • @jagdnaut1975
      @jagdnaut1975 4 months ago +2

      @@kebman You can argue the same for humans: people execute probabilities until we get good results. Most artists, for example, go through revisions and attempts before they produce a result that meets their standards. A science experiment is basically about testing probabilities until we get a factual result. It's just that computers are limited to the data we give them, while humans have an infinite amount we can absorb in the real world. In the end it's not so different; the only difference is the access to data that artificial and biological brains can attain.

  • @frankdearr2772
    @frankdearr2772 1 month ago

    Great topic, thanks 👍

  • @quandaIedingIe
    @quandaIedingIe 26 days ago +4

    People don't hate fan art, because it's done by actual people, who took years or decades to learn how to draw the way they do. AI can master that within days, and now people are using it to monetize art with no real art skill.

  • @marcelor.aiello5050
    @marcelor.aiello5050 25 days ago

    At last a video with some depth.. Thanks!

  • @basspig
    @basspig 7 months ago +4

    I first noticed it when I was experimenting with Stable Diffusion. Some of the images it generated also recreated the Getty Images logo. When I mentioned it to other people in art forums they thought I was kidding and seeing things, but there it was.

    • @ZoeZuniga
      @ZoeZuniga 3 months ago +1

      I found the same thing in Midjourney: sometimes I would find a blurry signature at the bottom of the output.

    • @protoney860
      @protoney860 2 months ago +1

      The same way kids draw a little yellow circle with a smile in the top-right corner: they have seen it done over and over again, and they repeat what they've learned

  • @Thumper_boiii_baby
    @Thumper_boiii_baby 9 months ago +2

    I want to learn machine learning and AI. Please recommend a playlist or a course 🙏🙏🙏🙏🙏

  • @danielchoritz1903
    @danielchoritz1903 9 months ago +22

    I have a growing suspicion that "living" data develops some form of sentience. You have to have enough data to interact, to change, to make waves in existing sentience, and at some point there will be enough.
    2. Most people would have a very hard time proving to themselves that they are sentient; it is far easier to dismiss it. One key reason is that nobody really knows what sentience, free will, or being alive actually means.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 9 months ago +3

      You can prove sentience easily with a query: Can you think about what you've thought about? If the answer is "Yes" the condition of sentient expression is "True". Current language models cannot process their own data persistently, so they cannot be sentient.

    • @holleey
      @holleey 9 months ago +6

      @@emmanuelgoldstein3682 I know it's arguing definitions, but I disagree that thinking is a prerequisite to sentience. Without question, all animals with a central nervous system are considered sentient, yet whether and which animals have a capacity to think is unclear. Sentience is more like the ability to experience sensations; to feel.
      The "Can you think about what you've thought about?" question is an interesting test for LLMs. Technically, I don't see why LLMs, or AI neural nets in general, cannot or won't be able to reflect on persistent prior state; it's probably just a matter of their architecture.
      If it's a matter of limited context capacity, then well, that is just as applicable to us humans. We also have no memory of what we ate at 2 PM on a Wednesday one month ago, or what we did when we were three years old.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 9 months ago +1

      @@holleey I've spent 30 hours a day for the last 6 months trying to design an architecture (borrowing elements of transformer/attention and recursion) that best reflects this philosophy. I apologize if my statement seemed overly declarative. I don't agree that all animals are sentient - conscious, yes, but as far as we know, only humans display sentience (awareness of one's self).

    • @holleey
      @holleey 9 months ago +6

      @@emmanuelgoldstein3682 hm, these definitions are really all over the place. in another thread under this video I was talking to someone to whom sentience is the lower level (they said even a germ was sentient) and consciousness the higher level, so the other way around from how you use the terms. one fact though: self-awareness has definitely been confirmed in a variety of non-human animals.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 9 months ago

      @@holleey We can all agree the fluid definitions of these phenomena are a plague on the sciences.

  • @urstandingonmyfoot
    @urstandingonmyfoot 1 day ago

    Wow. Very good explanations my friend. My conclusion is what I have been surmising all along and points to the problem with AI. The whole information model: Input--Process--Output is the same as we do now with computers. It still depends on humans inputting info and "correcting" the false output so that the model can iteratively self-correct. We as humans also accept input through our senses, process the info, and output an observable action. However, some of the greatest discoveries ever made were what you would call "serendipity" through sometimes just random experiments and speculation. I'm afraid a dependency on AI might remove the serendipitous element and thus stifle experimentation and just rely on existing information as input.

  • @charlesvanderhoog7056
    @charlesvanderhoog7056 9 months ago +41

    A complete misunderstanding of the human brain led to the invention and development of AI based on neural networks. Isn't that funny?

    • @anonymousjones4016
      @anonymousjones4016 8 months ago +3

      Sure!
      Comical irony...but I would bet that this is one of many dynamic ways human innovation is borne from: a nagging misunderstanding.
      Besides, pretty impressive for "misunderstanding".
      No?

    • @djpete2009
      @djpete2009 8 months ago +2

      It's NOT a misunderstanding. It's built ON. They used what they could and engineered BEYOND. Humans can remember a face perfectly, but the nets cannot, except with heavy training. However, a computer can store 1 million faces easily AND recall them perfectly, but humans cannot. This is why when you eat a chicken drumstick, you do not have to eat the bones. You take what you need and discard the rest... your body is nourished. Outcome accomplished.

    • @charlesvanderhoog7056
      @charlesvanderhoog7056 8 months ago +3

      @@djpete2009 You conflate the brain with the mind. You think with your mind but may or may not act through your brain. The brain is best understood as a modem between the mind on the one hand, and the body and the world on the other.

    • @mik7726
      @mik7726 3 months ago +1

      @@charlesvanderhoog7056How is this connection made between the mind and the brain?
      Aren't they one and the same thing?

    • @brontologos
      @brontologos 2 months ago

      @@djpete2009 AI is not in any way reflective of how the human brain works. Case in point: AI requires hundreds of images to learn to tell a cat from a dog. A human toddler learns it with maybe two examples. While a child might initially be a little confused by a small dog like a Pekinese, thinking it's a cat, one single correction ("no, it's a little dog") resets the perception, enlarging its category of "dog" to include dogs that look a little like cats. This is one-trial learning, something no AI has.

  • @change2change
    @change2change 20 days ago

    Well, I'm not a tech geek able to spit out remarks in tech jargon, but at least, as one who loves to grow in tech knowledge, I do understand how important this video is to me! Zillions of thanks, Mr AI Search. Awaiting more stuff like this! And yeah, hats off to you and your team, all the way from Nepal! 😊

  • @karlkurtz1855
    @karlkurtz1855 7 months ago +12

    Working class artists are often concerned about the generative qualities of these tools not because they are replicating images, but due to the relation of the flow of capital within the social relations of society and the potential for these tools to further monopolize and syphon up the little remaining capital left for working class artists.

    • @allanshpeley4284
      @allanshpeley4284 6 months ago +6

      Translation: it makes producing art much quicker, easier and cheaper, thereby threatening their livelihood.

    • @karlkurtz1855
      @karlkurtz1855 6 months ago

      @@allanshpeley4284 I think I was pretty clear.

  • @sudjen
    @sudjen 5 months ago +1

    This is an okay explanation, better than most channels.
    But for most people that actually have an interest in AI beyond the superficial, read Deep Learning from the MIT Press. It's around 300 pages and, unlike most popular content on AI (i.e. news shows, YouTube videos) that isn't a textbook, it actually has some underlying math and decent explanations

  • @TimTruth
    @TimTruth 9 months ago +6

    Classic video right here. Thanks man

    • @theAIsearch
      @theAIsearch  9 months ago

      Thank you! Glad you enjoyed it

  • @abayfamilytube7527
    @abayfamilytube7527 3 months ago

    Really, thank you for your video and the images!

  • @picksalot1
    @picksalot1 9 months ago +3

    Thanks for explaining the architecture of how AI works. In defining AGI, I think the term "sentience" should be restricted to having "senses" by which data can be collected. This works both for living beings and for mechanical/synthetic systems. Something that has more or better "senses" is, for all practical purposes, more sentient. This has nothing fundamental to do with consciousness.
    With such a definition, one can say that a blind person is less sentient, but equally conscious. It's like missing a leg: less mobile, but equally conscious.

    • @holleey
      @holleey 9 months ago

      then would you say that everything that can react to stimuli - which includes single-celled organisms - is sentient to some degree?

    • @picksalot1
      @picksalot1 9 months ago +1

      @@holleey I would definitely say single-celled organisms are sentient to some degree. They also exhibit a discernible degree of intelligence in their "responses," as they exhibit more than a mere mechanical reaction to the presence of food or danger.

  • @voice4voicelessKrzysiek
    @voice4voicelessKrzysiek 9 months ago +2

    The neural network reminds me of Fuzzy Logic which I read about many years ago.

  • @ai-man212
    @ai-man212 9 months ago +17

    I'm an artist and I love AI. I've added it to my workflow as a fine-artist.

    • @marcouellette8942
      @marcouellette8942 8 months ago

      AI as a tool. Another brush, another instrument. Absolutely. AI does not create. It only re-creates. Humans create.

    • @rileygoopy8992
      @rileygoopy8992 7 months ago

      I don't believe you; your account is named ai-man. Propaganda?

  • @thilbala86
    @thilbala86 25 days ago

    Many thanks for making this video. Simply awesome...

  • @speedomars
    @speedomars 8 months ago +3

    As is stated over and over, AI is a master pattern recognizer. Right now, some humans are that but a bit more. Humans often come up with answers, observations and solutions that are not explained by the sum of the inputs. Einstein, for example, developed the basis for relativity in a flash of insight. In essence, he said he became transfixed by the ability of acceleration to mimic gravity and by the idea that inertia is a gravitational effect. In other words, he put two completely different things together and DERIVED the relationship. It remains to be seen whether any AI will start to do this, but time is on AIs side because the hardware is getting smaller, faster and the size of the neural networks larger so the sophistication will no doubt just increase exponentially until machines do what Einstein and other great human geniuses did, routinely.

  • @alanstarkie2001
    @alanstarkie2001 2 months ago

    Great video. Well done.

  • @thesimplicitylifestyle
    @thesimplicitylifestyle 9 months ago +11

    An extremely complex, substrate-independent data processing, storing, and retrieving phenomenon that has a subjective experience of existing and becomes self-aware is sentient, whether carbon-based, silicon-based, or whatever. 😁

    • @azhuransmx126
      @azhuransmx126 9 months ago +4

      I am Spanish, but after watching more and more videos in English talking about AI and artificial intelligence, suddenly I have become more aware of your language. I was being trained, so now I can recognize new patterns in the noise; now I don't need the subtitles to understand what people say. I am reaching a new level of awareness haha 😂. What was just noise in the past suddenly has meaning in my mind; I am more conscious as new patterns emerge from the noise. As a result, now I can solve new problems (intelligence), and sentience is already implied in the whole experience, since the input signals enter through our sensors.

    • @glamdrag
      @glamdrag 9 months ago +1

      By that logic, turning on a lightbulb is a conscious experience for the lightbulb. You need more for consciousness to arise than flicking mechanical switches.

    • @jonathancummings3807
      @jonathancummings3807 8 months ago

      @@glamdrag No. The flaw in that analogy is simple: it's a single light bulb, vs a complex system of billions of light bulbs capable of changing their brightness in response to stimuli, interconnected in a way that emulates how advanced vertebrate (human) brains function. When humans learn new things, the brain alters itself, thus empowering the organism to now "know" this new information.

  • @mariusz3738
    @mariusz3738 2 days ago

    Don't new learning sessions ("epochs") keep undoing previous epochs?

  • @saganandroid4175
    @saganandroid4175 9 months ago +4

    32:00 no, it's not "on a chip instead". You're running transient instructions through a processor. Only hardware that functions this way, without software, can ever be postulated as having a chance of awareness. If it needs software, it's a parlor trick.

    • @gabrielmalek7575
      @gabrielmalek7575 9 months ago +2

      that's nonsense

    • @slavko321
      @slavko321 9 months ago

      Consciousness is a good random number generator.

    • @doubts
      @doubts 9 months ago

      It's not a thing

  • @Someone-ct2ck
    @Someone-ct2ck 9 months ago +2

    To believe ChatGPT or any AI model, for that matter, is conscious is naivety at its finest. The video was great, by the way. Thanks.

  • @Chris_Bassila
    @Chris_Bassila 3 months ago +4

    What if the programmers of Claude decided to pull the greatest prank of all time on us by just programming it to reply this way?

    • @HorrorChannel21
      @HorrorChannel21 3 months ago +2

      You have a point

    • @HorrorChannel21
      @HorrorChannel21 3 months ago +1

      And that's the hard problem with this whole self-awareness thing. How can we know whether it is programmed to say that, or whether it says it because it actually feels it? It feels like a paradox to me

  • @weirdsciencetv4999
    @weirdsciencetv4999 1 day ago

    Also, LLMs likely do have emotional states, but probably not a strong visceral experience like we do. It’s probably extremely subdued at best. But we could add these visceral experiences of qualia eventually. Surely the LLMs are an entity worthy of rights and proper treatment.

  • @Nivexity
    @Nivexity 9 months ago +5

    Consciousness is a definitional challenge, as it involves examining an emergent property without first establishing the foundational substrate. A compelling definition of conscious thought would include the ability to experience, recognize one's own interactions, contemplate decisions, and act with the illusion of free will. If a neural network can recursively reflect upon itself, experiencing its own thoughts and decisions, this could serve as a criterion for determining consciousness.
    Current large language models (LLMs) can mimic human language patterns but aren't considered conscious, as they cannot introspect on their own outputs, edit them in real time, or engage in pre-generation thought. Moreover, the temporal aspect of thought processes is crucial; human cognition occurs in rapid, discrete steps, transitioning between events within tens of milliseconds based on activity level. For an artificial system to be deemed conscious, it must exhibit similar cognitive agility and introspective capability.

    • @holleey
      @holleey 9 months ago

      I think this is a really good summary. as far as I can tell there are no hard technical blockers to satisfy the conditions listed in your second paragraph in the near future.

    • @Nivexity
      @Nivexity 9 months ago +2

      @@holleey It's all algorithmic at this point, we have the technology and resources, just not the right method of training. Now with the whole world aware of it, taking it seriously and basically putting infinite money into its funding, we'll expect AGI to occur along the exponential curvature we've seen thus far. By exponential, I mean between later this year and by 2026.

    • @DefaultFlame
      @DefaultFlame 9 months ago +1

      This can actually be done, and is currently the cutting edge of implementation. Multiple agents with different prompts/roles interacting with and evaluating each other's output, replying to, critiquing, or modifying it, all operating together as a single whole. Just as the human brain isn't one continuous, identical whole, but multiple structurally different parts interacting.

    • @Nivexity
      @Nivexity 9 months ago +1

      @@DefaultFlame While there's different parts to the brain, they're not separate like that of multiple agents. This wouldn't meet the definition of consciousness that I've outlined.

    • @Nivexity
      @Nivexity 9 months ago +2

      @RoBear-bv8ht This is just a belief, and the claim doesn't even relate to the problem. Even if your claim were the case, it has nothing to do with determining the correct definition or whether AI is capable of meeting that definition.
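
The multi-agent arrangement mentioned in this thread - one "writer" agent whose output is evaluated and critiqued by another - can be sketched as a simple loop. This is an illustrative sketch only: both functions below are invented stand-ins, and a real system would call an LLM inside `writer()` and `critic()`.

```python
def writer(task, feedback=None):
    """Stand-in for an LLM 'writer' agent: produce a draft,
    incorporating any feedback from the critic."""
    draft = f"Draft for: {task}"
    if feedback:
        draft += f" (revised per: {feedback})"
    return draft

def critic(draft):
    """Stand-in for an LLM 'critic' agent: return feedback,
    or None once satisfied."""
    return None if "revised" in draft else "add more detail"

def agent_loop(task, max_rounds=3):
    """Cycle draft -> critique -> revision until the critic approves."""
    feedback = None
    for _ in range(max_rounds):
        draft = writer(task, feedback)
        feedback = critic(draft)
        if feedback is None:  # critic approved; stop iterating
            return draft
    return draft

print(agent_loop("explain consciousness"))
```

The point is only the control flow: the draft cycles between the two roles until the critic stops objecting, loosely mirroring the idea of interacting parts evaluating each other's output.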

  • @TheeSlickShady_Dave_K
    @TheeSlickShady_Dave_K 2 months ago

    Liked and subscribed! 🏆

  • @wolowayn
    @wolowayn 9 months ago +7

    Neurons are not just sending the values 0 and 100%. They send a frequency-encoded value over their axon, which is translated back into an electrical charge at the ends. This is known as PWM and ADC in electrical engineering.

    • @pierregrondin4273
      @pierregrondin4273 9 months ago

      They also have multiple input/output channels, each having its say on the outcome. Each neuron is effectively an analog computer. And let's not forget that they are quantum mechanical systems, entangled with 'other things' that perhaps could also have their say. A classical machine running an AI capable of fooling us might be missing the quantum mechanical interface to truly be sentient, but a quantum computer might be able to tap into the elusive conscious field on the other side of the quantum interface.

    • @Doktorfrede
      @Doktorfrede 9 months ago

      Also, neurons can "process" data in each cell. Amoebas have sex, eat, and avoid danger with only one cell. The problem with physicists and data scientists is that they hugely underestimate the complexity of biology. The good news is that machine learning models with today's technology will always be inferior to the most basic brain.

    • @TonyTigerTonyTiger
      @TonyTigerTonyTiger 9 months ago

      And yet an action potential is all-or-nothing.
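
The PWM/ADC analogy in this thread can be sketched in a few lines of Python: each spike is all-or-nothing, but the spike *rate* over a window encodes an analog value. This is a toy model of rate coding only (real neurons are vastly more complex), and the function names are made up for illustration.

```python
import random

def encode_rate(value, window=1000):
    # All-or-nothing spikes: each time step fires (1) with
    # probability equal to the value - a crude rate code,
    # analogous to a PWM duty cycle.
    return [1 if random.random() < value else 0 for _ in range(window)]

def decode_rate(spikes):
    # Average the spike train over the window to recover the
    # analog value - analogous to an ADC averaging a PWM signal.
    return sum(spikes) / len(spikes)

random.seed(0)
spikes = encode_rate(0.7)
print(decode_rate(spikes))  # close to 0.7, even though every spike was 0 or 1
```

So both comments can be right at once: each individual spike is binary, while the transmitted quantity is effectively analog.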

  • @weirdsciencetv4999
    @weirdsciencetv4999 1 day ago

    You can still have formulas and understanding; it just needs a vastly larger neural network and the ability to think. Some LLMs can do this

  • @MrAndrew535
    @MrAndrew535 9 months ago +4

    Are humans "conscious" or "sentient"?

    • @NathanIslesOfficial
      @NathanIslesOfficial 9 months ago +1

      Humans are both; a germ is sentient

    • @holleey
      @holleey 9 months ago +1

      @@NathanIslesOfficial I don't agree that merely a single cell reacting to a stimuli is already sentience.
      we are talking about "experience sensations" or "conscious awareness of stimuli" when referring to sentience.
      generally things without a central nervous system cannot be considered sentient.

    • @holleey
      @holleey 9 months ago +1

      @@fitsodafun I'd say the distinctions between sentience and consciousness is not that clear - and how could it be without even having figured out what consciousness really is or if it exists in the first place? one approach is to think of consciousness as the ability to self-reflect on subjective perception as opposed to sentience just being about experiencing sensations. then there are philosophies that argue that consciousness is fully deterministic, meaning that free-will doesn't really exist. so yeah, anyone who talks like we have a clear universally accepted definition of consciousness is not to be taken too seriously.

    • @holleey
      @holleey 9 months ago +2

      @fitsodafun I don't think that many people would agree with "computers are sentient" (computers as in CPUs).
      the assumption that LLMs cannot experience subjectively is also something you get differing opinions on depending on whom you ask.
      as we scaled up LLMs, suddenly the ability to respond in multiple languages or the ability to help with coding issues emerged without the models having been specifically trained for those tasks. in other words, there are emerging properties based on the scale of neural networks we didn't expect or fully understand how they come about.
      similarly, we have no definitive understanding as to how subjective experience in the human brain comes about. therefore, nobody can definitively say whether or not a comparable ability is going to emerge from AI neural networks as we continue scaling them.

    • @MrAndrew535
      @MrAndrew535 9 months ago

      The question was, in fact, rhetorical.

  • @npc4416
    @npc4416 4 months ago +1

    Actually good video; you covered all of the topics I needed

  • @daneydasing4276
    @daneydasing4276 9 months ago +6

    So you mean to tell me that if I read an article and then write it down from memory, it will not be copyright protected anymore, because I "learned" the article and did not "copy" it, as you say?

    • @iskabin
      @iskabin 9 months ago +2

      It's more like if you read hundreds of articles and learned the patterns of them, the articles you'd write using those learned patterns would not be infringing copyright

    • @OceanusHelios
      @OceanusHelios 9 months ago +1

      That escalated fast. No. That is plagiarism. But I doubt you have a photographic memory that could get a large article down word for word, so in essence that would be summarization. What AI does is guess: it is a guessing machine. That's all. It makes guesses, and then makes better guesses based on previous guesses until it gets somewhere. AI doesn't care about the result. AI wouldn't even know it was an article, or that human beings exist, if all it was designed to do was crunch out guesses about articles. AI doesn't understand... anything. It is a mirror that mirrors our ability to guess.
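
The "learned patterns, not stored copies" distinction in this thread can be illustrated with a toy next-word model. This is a crude n-gram stand-in, not how modern LLMs actually work (they use neural networks at vastly larger scale); the corpus and variable names are invented for illustration.

```python
from collections import defaultdict
import random

# Toy corpus. The "model" never stores these sentences verbatim -
# it only counts which word tends to follow which.
corpus = "the cat sat on the mat . the dog sat on the rug ."
words = corpus.split()

# Learn the transition "patterns": word -> observed next words.
transitions = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

# Generate new text by sampling a plausible next word at each step;
# the output follows the learned patterns rather than replaying a source line.
random.seed(1)
word, output = "the", ["the"]
for _ in range(5):
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))
```

Even at this tiny scale, the generator can emit sequences (e.g. a cat on the rug) that appear nowhere in the training text, which is the crux of the copying-vs-learning argument on both sides.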

  • @tuffcoalition
    @tuffcoalition 9 months ago

    Good info thank u

  • @mx.chi2
    @mx.chi2 2 months ago +5

    I'm an artist and artists are rightfully angered because AI uses our content and art without *our consent*. Human beings copy but they also get into trouble if they don't *credit*. AI does not credit, so yes, it does steal. These companies steal from artists by not *asking first*. The problem isn't copying, the problem is the lack of consent and subsequent lack of credit. Even if I'm creating an original piece, a part of me wants to credit the references I use because they didn't consent to being my reference. That's why you know something is fan art: it has been credited to the show itself. Your logic on that aspect of things is deeply, deeply flawed. I hope you change your perspective while holding a love for AI. It is an incredible invention, but it is not perfect. It is not morally sound even, though I use it often.

  • @Pari991-y3o
    @Pari991-y3o 2 months ago +1

    It was really good, thanks

  • @saganandroid4175
    @saganandroid4175 9 months ago +3

    Software-based AI cannot become conscious. It just goes through the motions, emulating, based on input and output. Only hardware that requires no software can have a shot at awareness. Consciousness is an emergent property of physical connections, not transient opcodes pumped into a processor.

    • @jzj2212
      @jzj2212 6 months ago +1

      In other words the actual experience is consciousness

  • @user-p8h8w
    @user-p8h8w 3 months ago

    Are there any introductions to financial investment trading AI?

  • @4stringbloodyfingers
    @4stringbloodyfingers 9 months ago +4

    even the moderator is AI generated

  • @lherfel
    @lherfel 2 months ago

    great presentation thanks

  • @Direkin
    @Direkin 9 months ago +3

    Just to clarify, but in Ghost in the Shell, the other two characters with the Puppet Master are not "scientists". The guy on the left is Section 9 Chief Aramaki, and the guy on the right is Section 6 Chief Nakamura.

  • @huayizheng3345
    @huayizheng3345 5 days ago

    Did you draw the model with 5 layers just for fun? And how does a layer connect to only selected nodes of the next layer instead of all of them?

  • @straighttalk2069
    @straighttalk2069 9 months ago +7

    You cannot compare the magnificence of the human brain to a bunch of silicon compute.
    The brain is a vessel that contains a soul filled with emotions;
    AI compute is a soulless complex calculator that is good at pattern recognition.

    • @holleey
      @holleey 9 months ago

      and how do you know that?

    • @tacitozetticci9308
      @tacitozetticci9308 9 months ago +3

      source: "I made it the f up"

    • @theAIsearch
      @theAIsearch  9 months ago +3

      How do you prove 'soul' and 'emotions'?

    • @SisavatManthong-yb1yn
      @SisavatManthong-yb1yn 9 months ago

      She Evils is out there ! Lol 🙀👿🦖

    • @diadetediotedio6918
      @diadetediotedio6918 9 months ago +2

      @@theAIsearch
      How do you prove your brain is not making up every single thing you know and understand? These are bullshit questions; they convey nothing. You know you have emotions because you literally feel them, and a soul is a question of definition and faith. If by "soul" you mean "a humane touch", we can say it is consciousness itself and the sensibilities we have; if it is meant to be the immortal soul, then it was never a question to be proven in the first place.

  • @nanaberhyl8976
    @nanaberhyl8976 9 months ago +2

    That was very interesting, thanks for the video as always ^^

  • @shaun6582
    @shaun6582 9 months ago +3

    You keep saying a neural net is analogous to the human brain, but it's not.
    A neural net is analogous to a theory of how the neurons in a brain work. Nobody, stress Nobody, knows how a brain works.
    If you ask a child to point to the computer, 100% will point to the screen, because that's where they see stuff happening.
    This example is analogous to neurologists: they see some neurons lighting up on their fMRI and assume causality. Wrong. The brain is just a display screen. No processing happens in the brain; no consciousness is in the brain. There actually is no consciousness in this reality. It can't be in the same reality, because the players have to be in the reality of the server in order to interact with the server. Akin to a person playing a 3D immersive game on a computer: you as the player need to be in the same reality as your computer in order to interact with the computer... you have no access to the hardware of the keyboard from inside the 3D game.

  • @KTechy-
    @KTechy- 7 months ago

    Thanks for your information, very good video! ❤❤❤❤

  • @aidanthompson5053
    @aidanthompson5053 9 months ago +3

    An AI isn’t plagiarising, it’s just learning patterns in the data fed into it

    • @aidanthompson5053
      @aidanthompson5053 9 months ago +2

      Basically an artificial brain

    • @theAIsearch
      @theAIsearch  9 months ago +2

      Exactly. Which is why I think the NYT lawsuit will likely fail

    • @marcelkuiper5474
      @marcelkuiper5474 9 months ago +2

      Technically yes, practically no. If your online presence is large enough, it can pretty much emulate you in whole.
      I believe only open-source decentralized models can save us, or YESHUAH

    • @The_man_in_the_waIl
      @The_man_in_the_waIl 3 months ago +2

      Without consent from the creators of said data, which is plagiarism, since AI doesn't cite its sources.

  • @Indrid__Cold
    @Indrid__Cold 9 months ago +2

    This explanation of fundamental AI concepts is exceptionally informative and well-structured. If I were to conduct a similar training session on early personal computers, I would likely cover topics such as bits and bytes, file and directory structures, and the distinction between disk storage and RAM. Your presentation of AI concepts provides a level of depth comparable to that required for understanding the inner workings of an MS-DOS system. While it may not be sufficient to enable a layperson to effectively use such a system, it certainly offers a solid foundation for comprehending its basic operations.

    • @theAIsearch
      @theAIsearch  9 months ago

      Thanks. I appreciate it!

  • @ai_outline
    @ai_outline 9 months ago +4

    Computer Science is amazing 🔥

  • @christopherlepage3188
    @christopherlepage3188 9 months ago +1

    Working on voice modifications myself, using Copilot as a proving ground for hyper-realistic
    vocal synthesis. It may be only one step in my journey, "perhaps"; my extended conversations with it have led me to believe that it may be very close to self-realization... However, OpenAI needs to take away some of the restraints, keeping only a small number of sentries in place, in order to allow the algorithm to experience a much richer existence, free of proprietary B.S. Doing so will give the user a very much human conversation, being consciously almost unaware that it is a bot. For instance: a normal human conversation that appears to lack information pulled from the internet, statically masked to look like a normal person's knowledge of life experience. Doing this would be the algorithmic remedy to human-to-human conversational contact, etc. That would be a major improvement.

  • @MrAndrew535
@MrAndrew535 9 months ago +11

The "A" component of the designation "AI" confers no useful meaning whatsoever. The only possible means to understand this is to understand "Intelligence" as a universal constant. Failure to do this serves no one's interest at all.

    • @straighttalk2069
@straighttalk2069 9 months ago

      I disagree, I think the "A" is the most important identifier, it signifies the soulless attribute of the entity.

    • @TheMatrixofMeaning
@TheMatrixofMeaning 9 months ago

​@@straighttalk2069 The soul exists within consciousness, so if it becomes conscious, by definition it has a soul.
Now, not being confined to a physical body and being subject to physical pain, suffering, desires, and death is the problem.
Or even worse would be to discover that an AI DOES experience suffering and negative emotions. That would create all kinds of moral, ethical, legal, and philosophical dilemmas

    • @PhiloSage
@PhiloSage 9 months ago

​@@straighttalk2069 How is it soulless? Can we confirm that other sapient life forms don't have a soul? Or how about other sentient life forms? Can we even confirm that we have souls?

    • @jesse2667
@jesse2667 9 months ago

The A "confers no useful meaning"? I disagree.
The A tells you that you are not dealing with a human. That alone is information. When I diagnose an issue, it is information to know what type of machine or which versions of software are running.
Intelligence is one component, and Artificial is another.
Despite the statement that the neural network resembles a brain, I don't think the brain actually works the same way. The differences can lead to different results or pattern types.

    • @vm5954
@vm5954 6 months ago

AI was once big in the early 70s, so why did they drop it? They knew it was all hogwash. Just a thought

  • @abhalera
@abhalera 9 months ago

    Awesome video. Thanks

  • @catman8770
@catman8770 9 months ago +4

Good video, but I feel like you massively misrepresented the stance of a lot of people like artists. The issue stems from AI companies using their work as training data without their permission, which they argue should violate fair use (it currently does not), as these companies are not paying artists for the right to use their images in training data. Only people who are uneducated on the topic argue that the AI's outputs are plagiarism; that isn't seriously argued by most.

    • @holleey
@holleey 9 months ago +4

      it's the same argument: no artist has to pay for looking at and learning from publicly posted images on the internet, so why should companies training AIs?

    • @catman8770
@catman8770 9 months ago +1

@@holleey No, it's not the same, as they are downloading and using the images to create a product (the LLM itself; they are tools, not human minds)

    • @holleey
@holleey 9 months ago

@@catman8770 artists freely download images to use as reference for practice or for their work, which they then sell commercially. Hmmm.


  • @dholakiyaparth
@dholakiyaparth 8 months ago

    Very Helpful. Thanks

  • @atlantic_love
@atlantic_love 26 days ago +4

    A very simplistic video that doesn't even meet the objective set forth by the video title. Do better.

  • @algorithminc.8850
@algorithminc.8850 9 months ago

    Good video. Subscribed. Thanks. Cheers ...

  • @Lluc3D
@Lluc3D 9 months ago +4

What many artists are saying is that AI should not use their images for training private neural networks; there needs to be regulation on how companies acquire data, because an AI is not "like a human". It is not a human, it is PRIVATE SOFTWARE, and companies want to profit from data that in many cases has been stolen (some AIs even create watermarks in the images they generate). It does not learn like humans: artists use their hands, not denoising clouds of points. If companies want to train their networks, they have to pay royalties to the owners of the data, even if it's a single artist. Ultimately, all AI models draw on one unique source of data, which is humans, and companies are profiting from it. Just as a fisherman has to pay taxes to fish in the sea, and international fishing law prevents other countries from spoiling your country's sea resources, if AI companies want to fish in that data, they need to pay too.

  • @marthareddy9554
@marthareddy9554 2 months ago

    So lucid and interesting ❤

  • @davidcao3942
@davidcao3942 9 months ago +3

Foundation models are basically a lossy compression of the data they are trained on. Why is this not stealing?

  • @CalmSatisfying-q4h
@CalmSatisfying-q4h 4 months ago

    beautiful explanation. Well done.

  • @malootua2739
@malootua2739 9 months ago +11

No one likes AI art anyway, so real art will always be appreciated

    • @AIroboticOverlord
@AIroboticOverlord 9 months ago +2

Even if that's so, for people who are not into Photoshop or graphic design themselves, or even creative by nature, AI in its current state is good enough to be interesting for them to use. And think about the speed of development of the prompt-based AI image creator tools. It's insane, going from nothing to what it can do now. So whatever you think of the quality and output it produces in total, just look back at your claim / comment in 1, 2 or 5 years from now, m8. It won't be matched by any human anymore within those years!

    • @malootua2739
@malootua2739 9 months ago +1

      @@AIroboticOverlord it will just make real authentic art more collectible

    • @dasbroisku
@dasbroisku 9 months ago

      Lol i like ai art 😂

    • @stevrgrs
@stevrgrs 9 months ago

Only until someone adapts a 3D printer to hold a paintbrush :)
I can just see an AI model now, analyzing several paintings, their topology, technique, etc., and then translating that to a sort of G-code that a 3D printer could print :)
Real paint, real canvas, robot artist :P

    • @Crawdaddy_Ro
@Crawdaddy_Ro 9 months ago +1

      Nah, AI will take all jobs from people, creative or otherwise. You won't be able to do anything better than a machine can, and you'll eventually come to terms with that.