ChatGPT, Explained... by ChatGPT

  • Published: 18 Oct 2024
  • Keep exploring at brilliant.org/.... Get started for free, and hurry: the first 200 people get 20% off an annual premium subscription.
    Watch this video ad-free on Nebula: nebula.tv/vide...
    Video generated using www.synthesia.io
    🐦 SOCIAL STUFF:
    Instagram ➔ / jordanbharrod
    Twitter ➔ / jordanbharrod
    Tiktok ➔ / jordanbharrod
    Vlog Channel ➔ / @checkedoutbyjordan
    Weekly Newsletter ➔ www.jordanharr...
    AI Reading List ➔ bookshop.org/l...
    Anywhere Else You Might Find Me ➔ linktr.ee/jord...
    For business inquiries, contact me at jordanharrod@standard.tv
  • Science

Comments • 30

  • @docsigma
    @docsigma 1 year ago +30

    One thing that I would love to see from ChatGPT is a confidence score. Sometimes it produces output that is factually incorrect, but worded with an air of confidence and authority. It would be interesting to know whether these inaccurate, authoritative outputs are associated with lower confidence scores. It would also let people decide how seriously to take any specific piece of output.

    • @JordanHarrod
      @JordanHarrod  1 year ago +8

      Absolutely, I'd love to see that integrated into ChatGPT. I'm working on another video where I had ChatGPT generate an outline and specifically prompted it to include sources, and many of the sources weren't accurate, in the sense that the listed source was unrelated to the text it was cited for.

    • @laurenpinschannels
      @laurenpinschannels 1 year ago

      There's some great work on getting neural networks to know what they know. I've been surprised it hasn't been integrated into the standard training pipeline yet. I haven't redone the literature search, but one paper to search forward from is Virtual Outlier Synthesis (it has been beaten since it was released, but it was a cool paper and is a good reference for the subfield).

    • @AlkisGD
      @AlkisGD 1 year ago +1

      Can we take this a step further and have it speak less authoritatively, the way a human would when they're not certain that what they're saying is true?

    • @archvaldor
      @archvaldor 1 year ago

      "One thing that I would love to see from ChatGPT is a confidence score." I may have missed something obvious from you people with giant brains, but how does ChatGPT know when it is wrong? If it knew what it was saying was wrong, why not just not say it and/or say something else?

    • @docsigma
      @docsigma 1 year ago

      @@archvaldor These are good questions, and they may not have answers.
      I know that similar neural-network AIs have a confidence score associated with their output, and it may be that ChatGPT has one as well but simply isn't sharing it with the end user. Or it may not have one at all. Either way, I'd be curious to see some "under the hood" data on why ChatGPT comes up with what it comes up with, particularly when it's wrong; a rough sketch of what such a score could look like follows this thread.
      I've tried asking it "Why did you say that?" and it always just says something like "As a neural network trained on…" with no further details.
      My main problem with ChatGPT isn't that it's often wrong; it's how confident and authoritative it often sounds when it is wrong.
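
      A minimal sketch of one way such a per-token confidence score could be derived, assuming access to the model's raw next-token logits at each generation step (ChatGPT's interface does not expose these; `token_confidences` is a hypothetical name for illustration):

      ```python
      import numpy as np

      def token_confidences(logits_per_step):
          """Turn raw next-token logits into a crude per-token confidence.

          The softmax probability of the token the model actually picked is a
          rough proxy for confidence; low values flag tokens the model was
          unsure about. Real calibration is much harder than this sketch.
          """
          scores = []
          for logits in logits_per_step:
              logits = np.asarray(logits, dtype=float)
              probs = np.exp(logits - logits.max())  # numerically stable softmax
              probs /= probs.sum()
              scores.append(float(probs.max()))      # probability of the greedy pick
          return scores

      # Toy example: made-up logits over a 3-word vocabulary.
      print(token_confidences([[2.0, 0.1, 0.1],      # confident step
                               [0.5, 0.45, 0.4]]))   # uncertain step
      ```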

  • @NortherlyK
    @NortherlyK 1 year ago +7

    The number of times this script uses the word "understand" is fascinating.

  • @rwbgill
    @rwbgill 1 year ago +5

    It's interesting to me that GPT doesn't include some of the basic grammar checks that certain apps use. The grammar checker that shows me endless YT ads will whine if I use the same word more than twice, yet this ChatGPT script used the word "input" one zillion times. 🙂

  • @kahlips0180
    @kahlips0180 1 year ago +8

    I tried asking ChatGPT about itself weeks ago. It stopped responding and said I needed to try again later. It Magic 8-Balled me! 😭💀

  • @argcargv
    @argcargv 1 year ago +5

    Very clever presentation.

  • @AnshumanKumar007
    @AnshumanKumar007 1 year ago +1

    A dealbreaker for me has been the lack of a confidence score returned by GPT-3. The only workaround is to write the prompt in a specially formatted way by appending: "Answer this question only if you are sure, and if you are not, please say 'I'm not sure'."
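
    A minimal sketch of that workaround, assuming you assemble prompts in code before sending them to the model (the `hedged_prompt` helper and the exact suffix wording are illustrative, not an official API):

    ```python
    def hedged_prompt(question: str) -> str:
        """Wrap a question so the model is nudged to admit uncertainty.

        Hypothetical helper illustrating the workaround described above;
        tweak the suffix wording to taste.
        """
        suffix = ("Answer this question only if you are sure, "
                  "and if you are not, please say \"I'm not sure\".")
        return f"{question}\n\n{suffix}"

    # The wrapped string is what you would send as the model's prompt:
    print(hedged_prompt("Who was the first person to walk on the moon?"))
    ```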

  • @HanaGabrielleBidon
    @HanaGabrielleBidon 1 year ago +4

    I would like to see more ChatGPT videos. I wonder what recipes it would provide and whether it could help writers jump-start their work.

  • @psikeyhackr6914
    @psikeyhackr6914 1 year ago +4

    It sounds like it says the same thing multiple times in different ways. I have encountered people doing this when they are pretending to understand something but don't really.

  • @PanAndScanBuddy
    @PanAndScanBuddy 1 year ago +1

    When I was a child I read a book series by Bruce Coville called "A.I. Gang", where they trained an AI by scanning in various books and, of course, giving it the voice of Sherlock Holmes (Basil Rathbone).
    I know it's not your normal video, but it would be interesting to know how close that is to machine learning today, especially since back then it was so sci-fi that huge chunks of the books are basically just spy games.

  • @JanBabiuchHall
    @JanBabiuchHall 1 year ago +7

    How did this video compare to your usual ones in terms of the amount of time and effort needed to make it?

    • @JordanHarrod
      @JordanHarrod  1 year ago +9

      Probably took me about an hour total from start to finish, which is a lot shorter than my usual turnaround, but a lot of that comes down to me being pretty familiar with this topic and being able to fact-check pretty quickly (I have an editor, so that's not a factor). I'm working on another video that I asked ChatGPT to write an initial script for, and it has probably taken longer to make that video because I have to fact-check every little thing (vs. doing my own research).

  • @edmundfreeman7203
    @edmundfreeman7203 1 year ago +4

    The answers ChatGPT gave aren't exactly right. Two problems: first, it says it provides output that is similar to the input. If I ask a question, I don't want another question back; I want an answer. The way a question is "similar" to its answer is highly non-trivial. Second, in the first answer ChatGPT says it generates answers word by word, but in the second answer, where it talks about the encoder/decoder, it talks about generating an answer holistically. These answers don't jibe. So yes, honestly, typical ChatGPT. Like it says, don't rely on ChatGPT for anything important (yet; the future is still looking amazing).

    • @midn8588
      @midn8588 1 year ago +1

      I think right now the best use involves generating ideas, which can never be factually incorrect. It's great for outlining ideas and formalizing thoughts, in that I can feed it a fragmented stream of my thoughts and it'll write them out in a nice, referenceable way.

    • @sparshjohri1109
      @sparshjohri1109 1 year ago +1

      The decoder considers the input holistically but generates the output word by word, by sampling, so ChatGPT's answer there was correct in this instance (see the sketch below).
      Of course, both explanations are overly simplistic, but given that the model seems to be intended for layperson use, that might just be part of the intended functionality. While ChatGPT can make pretty big mistakes when asked about specific technical details, I don't think any such mistakes were made here.
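
      A minimal sketch of that decoding loop, assuming a hypothetical `next_token_logits(tokens)` function standing in for the transformer (the real model, tokenizer, and sampling strategy are far more involved):

      ```python
      import numpy as np

      def sample_autoregressively(next_token_logits, prompt_tokens,
                                  max_new_tokens=20, temperature=1.0):
          """Decoder-style generation: the model conditions on the whole
          sequence so far ("holistically"), but emits one token at a time
          by sampling from the next-token distribution.
          """
          tokens = list(prompt_tokens)
          for _ in range(max_new_tokens):
              logits = np.asarray(next_token_logits(tokens), dtype=float)
              probs = np.exp((logits - logits.max()) / temperature)  # softmax with temperature
              probs /= probs.sum()
              tokens.append(int(np.random.choice(len(probs), p=probs)))
          return tokens

      # Toy stand-in model over a 4-token vocabulary that slightly favors token 0.
      toy_model = lambda toks: [1.0, 0.5, 0.2, 0.1]
      print(sample_autoregressively(toy_model, [2, 3], max_new_tokens=5))
      ```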

  • @djdavehouserap
    @djdavehouserap 1 year ago +1

    This is absolutely amazing. Thank you for using ChatGPT in this unique way; I have only ever seen text and art creations, not a virtual avatar speaking and responding so eloquently.
    Could you do a video about the career potential of AI programming, development, application, integration, etc.?
    It seems as revolutionary as when the Internet started, as it will COMPLETELY reconfigure so many areas.
    Last question (sorry, but I've been your FAN since seeing you on Harvard's CS50 online courses 😊):
    How could you use ChatGPT to enhance your PhD thesis, from dissertation to publication?

  • @Micetticat
    @Micetticat 1 year ago +4

    It is probably the AI effect, but the more I use ChatGPT, the less I'm impressed! For example, this explanation of transformers is rushed and lacks detail. ChatGPT's long-term memory is not large enough to produce a detailed, useful explanation of the transformer architecture.

  • @666dianimal
    @666dianimal 1 year ago +1

    As one of your Gen X subscribers...I thank you for always keeping me up to date!! Awesome video, I loved the perspective!

  • @ibrahim_öztürk_youtube
    @ibrahim_öztürk_youtube 1 year ago

    Is the biophysics book on your shelf Searching for Principles by William Bialek?

  • @proudafricanamerican7586
    @proudafricanamerican7586 1 year ago

    Thank you… appreciate it.

  • @PontusWelin
    @PontusWelin 1 year ago

    How can ChatGPT have knowledge of itself when it was trained on data from before it existed?

  • @spanke2999
    @spanke2999 1 year ago

    Many thanks for the great content.
    What I find interesting in this context is the comparison between a human and ChatGPT. Not being factual is kind of a big thing with humans, too. If you examined a random conversation and fact-checked it, I would assume humans tend to be incorrect here and there, even when they are trained in a field relevant to the conversation.
    So the question is: do we hold AI to higher standards? Same with driving: we all cry out when an AI is involved in a car crash, while humans crash far more often and more seriously than an AI does. Same here: have a conversation with a random person about all these diverse topics, with the pretext that the person has to reply, and I would bet there would be many, many more "mistakes" from the human than from an AI.

  • @bigsarge2085
    @bigsarge2085 1 year ago +1

    The incorrect facts seem problematic to me, and I wonder whether ChatGPT will ultimately be helpful or hurtful for humanity. Thanks for your perspective.

  • @gerardvongyw670
    @gerardvongyw670 1 year ago

    Goodness me