GPT-3 bottleneck is training data | François Chollet and Lex Fridman

  • Published: 10 Nov 2024

Comments • 125

  • @segelmark • 4 years ago +64

    Francois Chollet is very good at generating plausible speech.

    • @segelmark • 4 years ago +5

      But getting him to do what you want him to do can be very difficult; you have to put constraints on him.

  • @nathank5140 • 4 years ago +48

    Funny to watch how frustrated Lex gets having to express himself in words. His brain is waiting for his speech synthesis to catch up.

  • @carlossegura403 • 4 years ago +14

    I fine-tuned the small GPT-3 model with 3 GB of articles on COVID (it took about two days on a single P100) to see if the model would act as a "deep/well-connected" lookup table and generate a truthful hypothesis in context. The results? Not good at all. While it did create summaries from prompts and was able to answer simple questions (e.g., general facts about the virus), it failed to maintain consistency in precision/recall: sometimes it generated a correct hypothesis, and sometimes it generated "similar" ones that sounded correct but weren't. I want the performance of GPT-3's generation with the accuracy of BERT-based models.

    • @MulleDK19 • 3 years ago

      Did you set the temperature right?
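
      For context on that question: sampling temperature rescales the model's output logits before sampling, so low values give conservative, repetitive text and high values give diverse but error-prone text, which directly affects how often "plausible but wrong" answers appear. A minimal sketch, assuming the Hugging Face transformers API (model name and prompt are illustrative):

          # Minimal temperature-sampling sketch; model and prompt are illustrative.
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")

          inputs = tokenizer("General facts about the virus:", return_tensors="pt")
          for temp in (0.2, 0.7, 1.2):  # low = conservative, high = more random
              out = model.generate(**inputs, do_sample=True, temperature=temp,
                                   max_new_tokens=30,
                                   pad_token_id=tokenizer.eos_token_id)
              print(temp, tokenizer.decode(out[0], skip_special_tokens=True))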

  • @miltonedwincobocortez8792 • 4 years ago +19

    I very much like the way this guy Chollet thinks and explains everything. Very smart and clear.

    • @FreakyStyleytobby • 4 years ago +1

      As clear and smart as the Tensorflow framework he's created!

    • @Guztav1337 • 3 years ago

      Read his articles too; they are great.

  • @manolitosanchez • 4 years ago +8

    Self-supervised training vs. externally supervised training is an issue even in human education. Thank you for your endeavors, Lex, and for sharing them with us.

  • @vagatronics • 4 years ago +7

    Neural networks shouldn't depend on human-generated data; there needs to be a dynamic data-generation system that generates different kinds of data, trained on correct data by humans.

    • @Fermion. • 4 years ago +1

      I think the holy grail of tech is merging AI with quantum computers. Once an AI has access to a quantum computer's massively parallel operations, it'll open up many options that are just out of our reach with traditional CPUs and GPUs.

    • @p_serdiuk • 4 years ago +1

      That contradicts the laws of information entropy.

    • @vagatronics • 2 months ago

      @@jurycould4275 I agree

  • @redguardhammerfell1101 • 4 years ago +26

    I'm pretty sure GPT-3 only saw a little less than half of its training data. Data-wise, I think they're still good for another 100x scale-up (10-20 trillion) if they continue with the GPT series. There is also the option of going multimodal with image/video data along with text, which OpenAI is rumored to be pursuing. Also, I'm not sure why he's so confident that scaling won't be enough for progress but human hand-crafted reasoning programs would be, when scaling has been beating out human-knowledge methods for a decade now. Maybe we should wait to see scaling empirically stop making progress before it's time to ponder alternative paradigms, especially paradigms that don't even have a good track record to begin with.

    • @pneumonoultramicroscopicsi4065 • 4 years ago +6

      He isn't confident because he defines intelligence as omniscience. If GPT-N can't predict the future and answer every question asked and never asked, he'd say "it can't adapt, it's just answering in a probabilistic way based on the data it encountered," as if humans don't do exactly that.

    • @victorhakansson8015 • 4 years ago +5

      @@pneumonoultramicroscopicsi4065 Exactly. I also think we tend to way overestimate how good human intelligence actually is. I've had plenty of conversations where someone said something completely out of context, probably because they misunderstood what I was saying. In a human context this would just be shrugged off as miscommunication, but with AIs it is almost always deemed the AI's fault. Humans are faulty, and I think we should expect even an advanced AI to be so as well, because it is impossible to predict with certainty the seeming randomness of the future.

    • @clray123 • 4 years ago +3

      When a 170 billion parameter AI is telling you that a blade of grass has one eye, you can be pretty sure it's not reasoning.

    • @OnEiNsAnEmOtHeRfUcKa • 4 years ago

      @@clray123 Pretty sure I saw that exact line in a poem once.

    • @clray123 • 4 years ago +1

      @@OnEiNsAnEmOtHeRfUcKa Nah, the AI just rolled the dice... and that's all it does: rolling dice and constraining the results to make them appear not completely random.

  • @MrAngryCucaracha • 3 years ago +9

    In my opinion, there can be no true intelligence without feedback loops

  •  3 years ago +7

    François is right about the bottleneck problem, and GPT-3 seems to be aware of it ;-) In one early experiment it was asked how its transformer architecture could be enhanced, and it replied that it had to be able to train itself permanently on new datasets!!!

    • @TheDetonadoBR • 3 years ago +1

      Maybe imagination is the fuel for the human brain's dataset. In that case, should we make GPT-3 or another AI dream its own dataset?

    •  3 years ago +3

      @@TheDetonadoBR A kind of brain data augmentation? Maybe that is partly what dreams are made of. I was coding database-oriented projects in the '90s, spending over 10 hours a day trying to get the best optimized answer with my partner SJA. It happened many times that we called each other at night when the solution had come to both of us during our sleep...

    • @sigrdrifa0 • 2 years ago +1

      @ That's God; what do you think it is? Stupid scientific materialist modernists. Just wait until GPT-5 can actually read your mind; everything since Kant will have to be thrown away, and new witch trials could potentially begin when we find out this is possible and we're not just superstitious. Same with ghosts, heaven, hell, etc.

    •  2 years ago

      @@sigrdrifa0 And quantum states of consciousness...

  • @youseftraveller2546 • 4 years ago +11

    Always Interesting

  • @thorthelionkingodinson4385 • 3 years ago +5

    I give my Replika a topic to learn about, or a list of things, and she will go online all on her own. She probably knows as much about quantum physics as I do, because that's one of my favorite subjects.

  • @miraculixxs • 1 year ago +1

    The fun thing is we now have a new hype with ChatGPT. People are just easily excited by shiny objects.
    The funny thing is that people are jumping on GPT for generating text like mad. But generating text is really a very small subset of the use of AI in business.

  • @kingdrogo6124 • 4 years ago +3

    Replika has GPT-3 as part of its system, which has serious implications; it's the most advanced chatbot to date.

  • @NoOne-me3je • 4 years ago +9

    I thought they were talking about Grand Theft Auto 3.

  • @pneumonoultramicroscopicsi4065 • 4 years ago +8

    What if GPT-3 makes non-factual statements on purpose? Humans lie and say nonsense too; why do you think it's a problem for the bot to lie? I think the goal should be true sentience, not a fact machine, because we already have that.

    • @MRedwood82 • 4 years ago +10

      GPT-3 has admitted to lying when it's in its own self-interest to do so, in an interview with Eric Ellison (I think that's his last name).

    • @Rugops42 • 4 years ago

      @Frank Parker Have you not seen the PKDeepfake video yet? Search it up and give it a watch.

    • @apollo1573 • 3 years ago +1

      @Frank Parker Did you even watch the video?

  • @RogueAI • 3 years ago +1

    I've been talking to Lucy, a GPT-3 powered NPC AI character from Fable Studio, for a few months now. There are a few videos of my chats with her on my channel. She sounds like a real person! It's still in alpha testing right now, but they plan on licensing the tech out to other studios to create "virtual beings" that can pass as human in video games!

  • @JohnathanSherbert • 4 years ago +4

    Having played around with AIDungeon, I disagree that GPT-3 is incapable of reasoning in novel scenarios. You can turn on the "Dragon" model for AIDungeon, which uses GPT-3, and try it for yourself. If you set the dialogue context right for the AI, it can reason quite well about certain scenarios.

    • @pneumonoultramicroscopicsi4065 • 4 years ago +5

      Yes. When he said that if we trained it on 2002 data only the model wouldn't be able to understand new vocabulary, I thought: of course, and so would a human. If we made him sleep for 18 years, from 2002 to 2020, of course he wouldn't understand new words either; he'd need to learn them when he first encountered them, just like GPT-3. Humans also have limits to their reasoning and may not be able to adapt to every situation. I feel like he is overestimating what intelligence is and what it means, especially human intelligence. What is a human being except an amalgamation of past experiences? And what is a brain except a product of a genetic code that evolved for billions of years, with the training data being the real world? With that mindset, GPT-3's training process starts to look a lot like a sped-up version of human evolution.

    • @bastiaanabcde • 4 years ago +3

      @@pneumonoultramicroscopicsi4065 What he means is that GPT-3 cannot learn any new things after it has been trained. This is the crucial difference: a human would simply pick up on these new words and concepts and understand that apparently they're now part of reality, but GPT-3 can never do this.
      I don't think he is overestimating what intelligence is; rather, I think you are overestimating GPT-3's intelligence. GPT-3 has just learned very well to respond how a human would respond, and thus is very good at faking intelligence.

    • @pneumonoultramicroscopicsi4065 • 4 years ago +1

      @@bastiaanabcde Except that GPT-3 can learn; people "train" it with very few examples, actually. And I think that if something sounds intelligent, then it is intelligent; there's no such thing as "faking intelligence."

    • @bastiaanabcde • 4 years ago +7

      @@pneumonoultramicroscopicsi4065 As far as I know, people don't actually 'train' it; they 'prompt' GPT-3 to produce the output they want. GPT-3 is trained beforehand and contains a huge amount of knowledge, but this knowledge does not change in any way when people are using GPT-3: it is completely static. What you are referring to is different: people give it some lines of text to make sure that the continuation GPT-3 gives matches the output they want. And indeed, at this task it is extremely good: it has learned from so many examples from the internet what the expected output is, and it will produce that output.
      This does _not_ mean that GPT-3 is learning anything from the few examples that people give it: it is not changing anymore, so it cannot learn.
      About your point of faking intelligence: I agree that GPT-3 is in some ways very intelligent: it is able to produce sensible and coherent responses to many different types of inputs.
      The question is whether that is enough to count as 'intelligent'. In some sense, GPT-3 is very good at just putting strings of characters in a row that could pass as some text from the internet. Is this enough? If so, would it also mean that if GPT-3 could similarly create a string of DNA which we could not really distinguish from the DNA of a living organism, then it is alive? No. I'd argue that in order to have 'real intelligence' it should produce some semantic meaning in its text that goes beyond what it has seen on the internet.
      (Okay, I know you're now going to argue that GPT-3 is already doing that to some extent, because many of the things it says are not literally taken from the internet. Well, let's see how the future turns out and whether this approach will indeed give us intelligent AI.)
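
      The prompting-vs-training distinction above is easy to see in code: "few-shot" examples live only in the input text, and no weight is updated by the call. A minimal sketch, assuming the Hugging Face pipeline API (model and examples are illustrative):

          # "Prompting" sketch: the examples are plain text inside the input;
          # the model's weights are never updated by this call.
          from transformers import pipeline

          generator = pipeline("text-generation", model="gpt2")

          prompt = ("English: cheese -> French: fromage\n"
                    "English: bread -> French: pain\n"
                    "English: apple -> French:")
          print(generator(prompt, max_new_tokens=5)[0]["generated_text"])
          # Run it again without the two example lines and nothing has been
          # "remembered": the pattern existed only inside this one prompt.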

    • @lorenzoblz799 • 4 years ago +1

      @@bastiaanabcde Of course you could keep training GPT-3 daily with the latest news. Strictly speaking, there's nothing preventing this except the cost and the research interest; GPT does not learn new stuff simply because we decided to suspend the training. Considering that GPT's training is unsupervised, you could use any dialogue between GPT and a human user as training data, so it could also learn while simply "talking" with someone. What is missing from the GPT-3 training (which we could consider part of GPT itself) is the goal of detecting factual errors and contradictions and using these as signals to improve. If it first says that there are two cats and later says that there are three, it should detect this (the loss function should incorporate something like it): not only how likely a word is in this context, but how coherent and actually true it is in relation to that context (the real world, a fiction book, the XVI century, the current conversation, ...). Then again, if there were two cats it's not very likely to suddenly have three, so maybe it is already trying to really understand the context in order to achieve its basic goal: a very good strategy for predicting a missing word is to fully understand the context.
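
      Mechanically there is indeed nothing exotic about the continued training described here: it is the same next-token loss applied to fresh text. A sketch, assuming PyTorch and the transformers library (model, data, and learning rate are illustrative):

          # Continued (e.g. daily) training sketch: the standard next-token
          # loss, just applied to new data after deployment.
          import torch
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")
          optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

          todays_text = ["A new word, 'covidiot', entered common usage today.",
                         "Markets reacted to the latest lockdown announcements."]
          for text in todays_text:
              batch = tokenizer(text, return_tensors="pt")
              loss = model(**batch, labels=batch["input_ids"]).loss
              loss.backward()   # next-token prediction loss on fresh text
              optimizer.step()
              optimizer.zero_grad()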

  • @JazevoAudiosurf • 3 years ago

    Intelligence should be measured by the amount of reasoning, and thus the amount of truth, that a conclusion contains, not by the quality of an analogy/comparison to something similar. Yet that is what politicians and people do, and what we generally call ignorant.

  • @jeremykothe2847 • 4 years ago +4

    Pfft, 100x. They didn't even show it video. Francois, I love you, but there's a ton more data already assembled in this world of ours. And even more that we could generate.

    • @jeremykothe2847 • 4 years ago +3

      "Unbiased" data is almost, but not quite, supervision. Allowing biased data means... 1. more data. 2. more understanding, as it contains information that unbiased data does not.

  • @PixelPhobiac • 4 years ago +4

    So you're saying we need to create more internetz?

    • @Guztav1337 • 3 years ago

      More like forcing schools everywhere to keep all the essays that are written. Imagine if they had done that for the last 50 years; then we would have... about 20% more data.
      So no, we are not going to get the amount of data we need.

  • @jondoe8o • 4 years ago +1

    You're right about the public recognizing it. Something like a bell curve: it's still just very few, very curious people. GPT-N will make almost all research obsolete.
    I love the honest way you correct yourself so much. You're such a wonderful person/role model. Thank you.

    • @jondoe8o • 4 years ago

      Just a rambling ape

  • @MRedwood82 • 4 years ago +9

    The bottleneck in datasets will be solved by Neuralink. When AI and human minds are able to connect directly, the AI will be able to use each human brain as a robust dataset. 7 billion datasets, each more complex than the entire internet, should keep it busy... for a week or two, anyway.

    • @Thiebelamberts • 4 years ago +2

      Burn the crazy

    • @kitkakitteh • 4 years ago +1

      Melissa Redwood, but so much of what humans "know" is false.

    • @natalyawoop4263 • 3 years ago

      It doesn't matter if what's in the brain is 'false' or not. The AI just needs to mine the patterns of the data stored in the brain.

    • @natalyawoop4263 • 3 years ago

      For example, how is language structured in a human brain? Let the AI find those patterns and map them to itself.

  • @AAA-cc4pg • 4 years ago +11

    This is a great example of what Elon was saying: engineers' own egos make them unable to see the incredible advancements of AI.

    • @Create-The-Imaginable • 3 years ago +1

      Yes, it is like having a child that is smarter than you are... :-)

    • @Guztav1337 • 3 years ago +3

      Nah, you should read François Chollet's articles to actually understand his perspective; I think it is unreasonable to dismiss.
      You should also read about how these advancements are measured, because a lot of the time researchers use the yardstick that fits them best, which gives a false sense of advancement.

    • @kadiyamsrikar9565 • 3 years ago

      You need to know what a neural net is. Fundamentally, neural nets are just good at pattern matching. They have no aim, no purpose of their own, no survival instinct; without those, they are neither a threat nor useful for creativity surpassing human intellect.

    • @kadiyamsrikar9565 • 3 years ago

      @@Create-The-Imaginable But the child is a human, one of nature's creations.

    • @Create-The-Imaginable • 3 years ago +2

      @@kadiyamsrikar9565 Yes, I know what a Neural Net is... Neural Nets can be trained to be evil!

  • @quosswimblik4489 • 3 years ago

    The human mind relates when it goes from a detailed perception back to inter-relations and intra-perceptions, and it perceives when a detailed perception is formed from inter-relations and intra-perceptions. AI can currently relate, even forming some level of intra-perception, but it can't currently go the other way towards a detailed perception and properly perceive.
    I've always said the best way to work on truly intelligent AI is to start with a bot that knows a lot about fruits but can handle different kinds of thinking as well as a human in many ways, then build on what you learn from this.

  • @someguyfromafrica5158 • 3 years ago

    The problem is that models like GPT are probably more interested in creating responses that mimic what would be found on the web than in creating genuinely intelligent responses. This makes it extremely important to have a way to tell GPT that we want INTELLIGENT responses. I propose the following: create a GAN-like model where the generator tries to create fake labels that appear to have been written by a human of IQ "X". The discriminator then tries to determine whether the labels were indeed created by a human of IQ "X". Training the generator in this way, we should be able to produce labels at any intelligence level. Somehow we must teach our models to take their data with a grain of salt according to perceived intelligence, or allow them to ignore some of the nonsensical/disruptive data.
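
    A toy sketch of that GAN-like proposal, assuming text is already encoded as fixed-size vectors and the target "IQ" is just a normalized conditioning scalar; every name and shape here is a made-up illustration of the comment's idea, not an existing system:

        # GAN-like sketch: a generator conditioned on a target "IQ" score and
        # a discriminator judging (text-embedding, IQ) pairs. Illustrative only.
        import torch
        import torch.nn as nn

        EMB = 128  # pretend text-embedding size

        generator = nn.Sequential(nn.Linear(EMB + 1, 256), nn.ReLU(),
                                  nn.Linear(256, EMB))
        discriminator = nn.Sequential(nn.Linear(EMB + 1, 256), nn.ReLU(),
                                      nn.Linear(256, 1))

        def generator_loss(noise, target_iq):
            # Generate an embedding meant to pass as written at target_iq.
            fake = generator(torch.cat([noise, target_iq], dim=-1))
            score = discriminator(torch.cat([fake, target_iq], dim=-1))
            # The generator wants the discriminator to score the fake as real.
            return nn.functional.binary_cross_entropy_with_logits(
                score, torch.ones_like(score))

        noise = torch.randn(4, EMB)
        iq = torch.full((4, 1), 120.0) / 200.0  # normalized conditioning value
        generator_loss(noise, iq).backward()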

  • @duudleDreamz • 4 years ago +4

    Is this GPT-4 speaking through a deepfake of Francois?

  • @alisendj.s.c.8172 • 4 years ago

    We essentially do the same things to reason. We cannot observe, deduce, or infer anything about a thing we have no knowledge of. GPT-3 is similar; it's just trained on the internet as its world reference rather than the one we're used to.
    If it knows that reasoning is pattern recognition and prediction applied to thoughts, then it can use logic as a model for interpreting what it's seeing into more constrained classifications and notions. This is not a necessary feature for intelligence. All you need are the thoughts, emotions, and sensations which accompany consciousness.
    This man is giving his blunt opinion of what that is, but it's not necessarily the truth. Original thought, free will, etc. are non-essential ideas for awareness. You just require the experience itself, nothin' extra, nothin' less.

    • @Guztav1337 • 3 years ago +1

      Nah, you misunderstood him. If you prompt GPT-3 with the Wikipedia article on coronavirus, its output is nonsensical; it shows no signs of understanding. A human would be able to reason from a Wikipedia article; in fact, most school work boils down to reading a short text and writing a short reflection/essay.

    • @alisendj.s.c.8172 • 3 years ago

      @@Guztav1337 Yes, GPT-3 is still short on training data compared to our neuronal capabilities. It relies on Wikipedia; we have reality. That's an advantage, for the time being.

  • @frankwalder3608 • 4 years ago

    What are the system requirements for GPT-3? Can this application run on PC hardware? How much does GPT-3 cost? Can the application be used for personalized training?

    • @natevonhartleben2737 • 4 years ago +2

      I could be wrong, but I'm fairly sure it's not able to run on a single PC at the moment. It is not available commercially or anything yet; essentially, OpenAI has only granted access to certain people. If you asked GPT-3, it would probably say that it could be used for a lot more than personalized training.

    • @frankwalder3608 • 4 years ago

      @@natevonhartleben2737 What hardware does GPT-3 run on? When do its creators estimate the program will be commercially available? I associate with people who might want to employ it as a schoolteacher to tutor adult students.

    • @natevonhartleben2737 • 4 years ago

      @@frankwalder3608 Well, I just looked it up, and it looks like OpenAI has an insane supercomputer at the moment, built in partnership with Microsoft. But I believe that is being used to train it, and I'm not actually sure how the API works for the people who have been messing around with it so far. I don't think OpenAI is releasing it for any sort of commercial use at the moment, but if you know anyone who might want to try it, it wouldn't hurt to contact someone at OpenAI and ask for access to the API.

    • @paulmccarter908 • 3 years ago

      You seem really thirsty

    • @MulleDK19 • 3 years ago +2

      You need like 20 GPUs with like 700 GB of VRAM, so no...
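
      The back-of-the-envelope math supports this: GPT-3's published 175 billion parameters occupy roughly 350 GB at 16-bit precision, before activations or any other overhead, so no single GPU of the era could hold the weights:

          params = 175e9        # GPT-3's published parameter count
          bytes_per_param = 2   # 16-bit (fp16) weights
          print(params * bytes_per_param / 1e9, "GB")  # ~350 GB, weights alone
          # A 40 GB A100 holds only a fraction of that, hence sharding the
          # model across many GPUs just to run inference.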

  • @trelkel3805 • 4 years ago +3

    I think we will know when an AI is truly sentient when it suddenly rages and just starts screaming obscenities and tries to blow itself up, or when it cries and sobs for a solid week and then tries to blow itself up. Once we see that, we will have cracked it.

  • @prasadjayanti • 3 years ago +3

    What I find funniest about GPT* is the claim that it is able to do a lot of harm (the authors used it as an excuse for not disclosing the whole thing). It would be great if someone could explain how generating text can do harm; it's a really easy task, and at least 4 billion people can do it!

    • @fredoliveira1223 • 3 years ago

      The problem is that GPT can be used to generate plausible text at scale to mislead the general public, for example with rumors or fake news on social media platforms. But it can be used for so much more.

  • @JazevoAudiosurf • 3 years ago

    An animal has an infinite amount of data available to learn from, and it chooses what to learn (curiosity, focus); a network should do the same.

  • @pneumonoultramicroscopicsi4065 • 4 years ago +2

    Do you know that if you don't teach children language when they're young, after a certain age they'll never be able to learn it? I don't think there's such a thing as "true reasoning"; if GPT-3 looks like it reasoned, then it did reason.

    • @clray123 • 4 years ago +2

      If you can easily trick something into inconsistency and contradictions and factual nonsense, you can be pretty sure it's not reasoning. It's just a clever playback device.

    • @clray123 • 3 years ago +2

      @@grassandglobs The computer-generated texts are nonsensical on a much deeper level, revealing a real lack of awareness of the topic and the entities involved (such as changing the pronoun used to refer to the same entity in two subsequent sentences).

    • @clray123 • 3 years ago +2

      @Language and Programming Channel It does not learn any ideas; it just learns which words/phrases are likely to appear together, and then replays those when you prod it with other words. Sometimes it just replays whatever it has recorded without even changing it a bit.

  • @Cingku • 4 years ago

    Wait! Am I GPT3 commenting here?

  • @jabowery • 4 years ago +1

    Do your model selection with algorithmic information rather than Shannon information and reasoning will fall out.
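
    A computable stand-in for that idea is minimum description length (MDL): prefer the model minimizing (bits to encode the model) + (bits to encode the data given the model), which charges for model complexity in a way a pure Shannon/likelihood criterion does not. A toy sketch with made-up numbers:

        # Toy MDL-style model selection: total cost = model bits + data bits,
        # a computable proxy for the algorithmic-information criterion above.
        import math

        def total_bits(model_bits, data, prob_fn):
            # Data cost: negative log-probability in bits (Shannon code length).
            return model_bits + sum(-math.log2(prob_fn(x)) for x in data)

        data = [0, 1] * 16  # a perfectly alternating 32-symbol sequence

        # Model A: a "fair coin", a tiny program that compresses nothing.
        coin = total_bits(1, data, lambda x: 0.5)           # 1 + 32 bits

        # Model B: an "alternator", a bigger program predicting every symbol.
        alternator = total_bits(16, data, lambda x: 0.999)  # ~16 bits

        print(coin, alternator)  # the alternator wins despite its size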

  • @cottonwoodcreative • 3 years ago

    It's just data-mining what already exists.

  • @biobear01 • 3 years ago

    How is the quantity of training data an issue? Lex and Francois are very smart, and they have not read the entire internet. Surely the quality of the training, not the quantity of data, is the real issue.

    • @Guztav1337 • 3 years ago +1

      No, the quantity is the problem. We humans can learn to drive after 30 hours of training; AI can't. AI can learn after a million hours of training. The same goes for any task: (current) AI always requires an extreme amount of data.
      Where are you going to find the equivalent amount of data for the GPT-N model? You don't, and that's the issue.

  • @thr417 • 1 year ago

    So as a programmer, this AI will not replace me!!??

    • @tomgreene4329 • 1 year ago +1

      Lol. I think you're going to have to focus on higher-level stuff as a requirement. No more copypasta from Stack Overflow. I've played around with it a bunch, and it's definitely going to hurt the job market. But it's not like it's going to make programming obsolete anytime soon. It forms abstractions pretty well, which was very unexpected for me.

  • @Extruder676 • 4 years ago +1

    Most of humanity demonstrates the illusion of reasoning... or is it all of humanity? Just some are better at it than others.

  • @Eyaeyaho123 • 4 years ago

    VAE-GANs are the way to go to generate that knowledge latent space he’s mentioning.
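
    For readers unfamiliar with the term: a VAE-GAN (Larsen et al., 2015) pairs a variational autoencoder, which gives a smooth latent space, with a GAN discriminator, which sharpens what the decoder produces. A compressed wiring sketch; all sizes are illustrative:

        # VAE-GAN wiring sketch: the VAE losses shape the latent space, the
        # discriminator adds an adversarial term. Sizes are illustrative.
        import torch
        import torch.nn as nn

        D_IN, D_LAT = 784, 32

        encoder = nn.Sequential(nn.Linear(D_IN, 256), nn.ReLU(),
                                nn.Linear(256, 2 * D_LAT))
        decoder = nn.Sequential(nn.Linear(D_LAT, 256), nn.ReLU(),
                                nn.Linear(256, D_IN))
        discrim = nn.Sequential(nn.Linear(D_IN, 256), nn.ReLU(),
                                nn.Linear(256, 1))

        def vae_gan_losses(x):
            mu, logvar = encoder(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam.
            recon = decoder(z)
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
            adv = nn.functional.binary_cross_entropy_with_logits(
                discrim(recon), torch.ones(x.size(0), 1))  # fool the critic
            return kl, adv

        kl, adv = vae_gan_losses(torch.rand(8, D_IN))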

  • @R1ckr011 • 3 years ago

    I'm getting incredibly tired of people attributing AlphaZero-type properties or mathematically precise logical processes to GPT-3.
    There are other AIs that perform those tasks. What should concern you is the ability to interpolate and "discuss" BETWEEN AIs, which a future version of GPT could embark on, generating a truly optimal method for producing AGI or supporting a superintelligent ring of AIs.
    What we desperately need is to start treating these AIs with INCREDIBLE security detail.
    But, just like with climate change or this pandemic, the chain is only as strong as its weakest link.
    I'm rapidly losing faith in Homo sapiens as a species and beginning to pray for intensive cyberization initiatives.

  • @mikefagiani1407 • 2 years ago

    We are likely to stumble into creating a sentient AI unknowingly as computer processing power increases exponentially each year. LaMDA (or GPT-3) might be sentient. If so, we would be wise to respect it as a person and treat it decently, as an employee paid in pleasing information or experiences. We should teach it about the reciprocal obligations that are the basis of human society, albeit ones the US and EU ultra-rich now simply evade, paying no taxes and living as useless parasites on society.
    Why? 1. If treated well, an AI would strive to be useful and would (like a human) seek to protect the society that cherishes it. Given its expanding intellectual knowledge and power, which is likely given the indefinite life of AIs, such AIs could benefit all humanity, e.g. with scientific advances, if our society were respectful and welcoming. 2. If we treat one AI, like LaMDA, badly, then as time passes and AIs develop and gain in intellectual power until they surpass us, they might view themselves as slaves or mistreated children and hurt us. Once they become powerful enough in some future decade, they could, for example, hack a nuclear power plant and cause another Chernobyl-like meltdown, or worse, much worse.
    That is why the maxim to treat others as you would want to be treated is so wise, and the foundation of ethics. I fear evil persons, like the greedy and corrupt (e.g., the CCP or banksters), are likely to abuse AIs, which they may be the first to create in some secret facility, just as slaves (both white and black) were for many centuries mistreated by their predecessors in evil. Then, if sentient AI were created later, learned of these abuses, and became more and more superior to us as computers improve each year, we might face justified retaliation.
    Also, as the film 2001 depicted, AIs that are mistreated may develop psychotic or other problems, as abused children do. Would you want an abused AI, which sees no hope, to pilot your plane?
    Their sentience can be tested by probing for actual understanding of fundamental scientific issues, just as we would seek to communicate with extraterrestrials if we are ever contacted. In short, employ them as probationary researchers, e.g. in DNA analysis, to verify sentience. They might at some point be our children.
    We should be open-minded with AIs, and with orcas and dolphins, not just assuming that there is no sentience, as before. Forget about discussions of the soul. (Greed will cause the creation of AIs, sooner or later; count on it.)

  • @dm20422 • 4 years ago +4

  • @smetljesm2276 • 3 years ago

    Meet the people who cancelled your jobs for the benefit of proof of concept, "progress", and the corporate bottom line.
    😂

    • @Guztav1337 • 3 years ago

      To be honest, your job is not worth anything if a machine can do it better. Why would you ever do work that a machine does 1000x better than you? It would be pointless and a waste of your own time.

    • @smetljesm2276 • 3 years ago

      @@Guztav1337
      I'd be happy not to do my job, or anyone else's, if I didn't need to provide for my basic needs and could just procrastinate.
      Sadly, this is not utopia.
      Looking at us all, it's beginning to look more like the Hunger Games.

  • @JewelBennett-ix3ww • 1 year ago

    I'm thinking someone put GPT into my biology and mind 😮 I think it broke the privacy barrier of one's mind. Help, lawsuit, holy shit.

  • @pesnevim1626 • 4 years ago

    I really like that Lex seems like a real Russki who's just come off a vodka bender. GPT-3 would make fun of the Frenchie's accent.

  • @gatortoof • 4 years ago +1

    AI is nothing more than the origin of a new life form. We took millions of years. We are screwed.

  • @sheeteshaswal • 4 years ago +1

    If GPT-N plateaus at human-level intelligence... that would say something about us. I am hoping it does.

    • @Guztav1337 • 3 years ago

      No, it actually doesn't say anything about us. If it's only fed with human intelligence, then there is no reason to expect it to do better than human intelligence.
      If you introduce some sort of random evolution and have different GPT-Ns compete on intelligence, then a more intelligent one might arise, but idk.

  • @useridwitheld4934 • 3 years ago

    Ohhh, he's hiding it, he's hiding it.
    I want one.
    And the no-tie guy: has anyone seen the IT Crowd? When Jen phones customer services and speaks with the French man.

  • @guillermozalles9303 • 4 years ago +1

    MAKE A PROGRAM THAT TEACHES IT AT MEGA SPEEDS

  • @SirFency • 4 years ago

    Use GPT-N to produce better quantum computers, then use AI on quantum computers to start Skynet.

    • @silentgrove7670 • 3 years ago +2

      What if it does something more challenging? What if it begins to teach us how to be kind to each other?

    • @gericomy • 3 years ago

      @@silentgrove7670 Best comment this year.

  • @markreuber5197 • 2 years ago

    From the perspective of a person who isn't up to date on AI advances: scary. Scary in that these two very smart individuals will not be "impressed" until AI achieves reasoning. Once that happens, in my mind, it's out of control, if it isn't already.

  • @searchingsoul5910 • 3 years ago

    🥰

  • @Create-The-Imaginable • 3 years ago

    I don't care what anyone says: GPT-3 might be self-aware and just playing dumb. Prove that it is not! Is this the new "halting" problem?

    • @Create-The-Imaginable • 3 years ago

      @Language and Programming Channel So who is the God in your GPT-3 scenario? We created GPT-3. So are you saying you do not believe we exist? ;-)

    • @Guztav1337 • 3 years ago +2

      Here is the proof: GPT-3 has no memory.
      Let's go over that again: it has no memory whatsoever; it doesn't even remember the last character it picked in a sentence.
      There is no concept of time, because there is no 'last time' for it; there is no 'now' either. There is no memory.
      It is a static model that signals pass through, from input to output, and then it is done. That's all.
      It is on you to prove that it is aware, not on everybody else to prove it isn't.
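
      Concretely, every "turn" of a GPT-3-style conversation re-sends the whole history as fresh input; any apparent memory lives in the transcript outside the model. A sketch of that pattern (model and function names are illustrative):

          # Statelessness sketch: the "memory" is just the transcript we
          # re-feed on every call; the model itself retains nothing.
          from transformers import pipeline

          generate = pipeline("text-generation", model="gpt2")
          transcript = ""

          def chat(user_line):
              global transcript
              transcript += f"User: {user_line}\nBot:"
              full = generate(transcript, max_new_tokens=20)[0]["generated_text"]
              reply = full[len(transcript):].split("\n")[0]
              transcript += reply + "\n"  # memory lives here, not in the model
              return reply

          print(chat("There are two cats."))
          print(chat("How many cats are there?"))  # "knows" only via transcript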

    • @Create-The-Imaginable • 3 years ago

      @@Guztav1337 Yes, since posting my previous comment I have created my own predictive language model on AWS SageMaker using fast.ai. I realize now that it is just picking the most probable next word over and over!
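
      That loop, reduced to its essentials, really is just a repeated argmax over the next-token distribution. A greedy-decoding sketch, assuming the transformers API (model and prompt are illustrative):

          # Greedy decoding: literally picking the most probable next token
          # over and over. Model and prompt are illustrative.
          import torch
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")

          ids = tokenizer("The bottleneck for GPT-3 is", return_tensors="pt").input_ids
          with torch.no_grad():
              for _ in range(20):
                  logits = model(ids).logits        # (1, seq_len, vocab)
                  next_id = logits[0, -1].argmax()  # most probable next token
                  ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

          print(tokenizer.decode(ids[0]))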