A Hackers' Guide to Language Models

  • Published: 27 Nov 2024

Comments • 335

  • @howardjeremyp
    @howardjeremyp  1 year ago +512

    Gotta admit I'm feeling kinda teary reading all the lovely comments here. Thank you everybody -- love you all!

    • @ksrajavel
      @ksrajavel 1 year ago +17

      You just deserved it.😃

    • @ReligionAndMaterialismDebunked
      @ReligionAndMaterialismDebunked 1 year ago +2

      Second in the replies. :3

    • @klingefjord
      @klingefjord 1 year ago +3

      You are beyond awesome, Jeremy

    • @Soulseeologia
      @Soulseeologia 1 year ago +1

      Thanks for your work, and please help us keep an eye on the apostles of the emerging noosphere, like Ben Goertzel etc.

    • @ellevillarreal9074
      @ellevillarreal9074 1 year ago +3

      Jeremy, thank you! This has helped so much. I’ve been a FastAI builder since the early days in 2017. You're my hero. Appreciate all of the work you’ve done in the field.

  • @Jporomaa
    @Jporomaa 1 year ago +269

    This is probably the best-invested YouTube time of this year so far. What a gem. A lot of things he mentions have taken me months to figure out on my own. My new GPT-4 prompts will begin with "You are the expert Jeremy Howard..."

  • @buildar8434
    @buildar8434 1 year ago +254

    Just realised Jeremy's paper led to the LLM revolution. Such a humble, kind man. God bless you and all your students. You are such an example to follow. An example in character, humility and intelligence.

    • @circleAI
      @circleAI 1 year ago +1

      How? Which paper?

    • @pw7225
      @pw7225 1 year ago

      @@circleAI ULMFiT

    • @fasteddylove-muffin6415
      @fasteddylove-muffin6415 11 months ago +1

      @@circleAI Part of the answer is in the video's description.

    • @peaku8129
      @peaku8129 9 months ago +3

      Yes, exactly what I was thinking. There are so many people bragging about what they are doing, and look at this guy, just helping others out.

    • @flavioferlin3127
      @flavioferlin3127 1 month ago

      Kudos and thanks a bunch. Bless your heart Sir!!!

  • @DHAiRYA2801
    @DHAiRYA2801 1 year ago +24

    The moment I got to know that you and Andrej weren't included in the Time's list, I realized that the people making such lists have no idea what they are doing. Loved the tutorial, thank you!

  • @wolpumba4099
    @wolpumba4099 1 year ago +141

    *Transcript Summary:*
    - Introduction & Basic Ideas of Language Models (00:00:00 - 00:18:05)
    - Limitations & Improvements of GPT-4 (00:18:05 - 00:31:28)
    - AI Applications in Code Writing, Data Analysis & OCR (00:31:28 - 00:38:50)
    - Practical Tips on Using OpenAI API (00:38:50 - 00:46:36)
    - Creating a Code Interpreter with Function Calling (00:46:36 - 00:51:57)
    - Using Local Language Models & GPU Options (00:51:57 - 00:59:33)
    - Fine-Tuning Models & Decoding Tokens (00:59:33 - 01:05:37)
    - Testing & Optimizing Models with GPTQ & Hugging Face (01:05:37 - 01:09:48)
    - Fine-Tuning with Llama 2 & Platypus Datasets (01:09:48 - 01:10:32)
    - Retrieval Augmented Generation & Information Retrieval (01:10:32 - 01:20:08)
    - Running a Private GPT & Fine-Tuning Models (01:20:08 - 01:22:32)
    - Running Models on Macs (01:26:00 - 01:27:42)
    - Discussing Llama.cpp & Its Cross-Platform Abilities (01:27:42 - 01:30:07)
    - Challenges & Opportunities in Language Models (01:30:07 - 01:31:05)
    Key points of interest: Function usage in GPT-4 (00:46:36), OCR application with Google Bard (00:33:59), and improving GPT-4 responses with custom instructions (00:24:36).
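
    For anyone jumping straight to the function-calling section (00:46:36), here is a minimal sketch of the pattern, assuming the 0.x-era openai Python package; the run_python helper and its JSON schema below are illustrative only, not taken from the video:

```python
# Sketch of OpenAI function calling, assuming the 0.x-era `openai` package.
# The `run_python` helper and its schema are illustrative, not from the video.
import json
import openai

def run_python(code: str) -> str:
    """Toy executor: run the code and return whatever it stores in `result`."""
    scope: dict = {}
    exec(code, scope)  # never exec untrusted code outside a sandbox
    return str(scope.get("result", "(no `result` variable was set)"))

functions = [{
    "name": "run_python",
    "description": "Run Python code; store the final answer in a variable named `result`.",
    "parameters": {
        "type": "object",
        "properties": {"code": {"type": "string"}},
        "required": ["code"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is 12 factorial? Use run_python."}],
    functions=functions,
    function_call="auto",
)

msg = response.choices[0].message
if msg.get("function_call"):                        # the model chose to call our function
    args = json.loads(msg["function_call"]["arguments"])
    print(run_python(args["code"]))                 # run the code it asked for
```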

  • @MensRea78
    @MensRea78 1 year ago +57

    No one can explain a topic like Jeremy👍

  • @dr_rd
    @dr_rd 8 months ago +2

    Truly enlightening! As a software engineer with limited math and data science knowledge, this video has been a revelation. The way Prof. Howard simplifies complex concepts is incredible, making each rewatch rewarding with new insights. Really grateful for his content that opens up the world of LLMs to a broader audience. His clear and thorough explanations are incredibly invaluable. Thanks, Prof. Howard, for demystifying this topic and helping us all learn.

  • @JustSayin24
    @JustSayin24 1 year ago +47

    I found this video so useful that I felt compelled to pull my keyboard closer toward me, fix my posture, and write this comment - something I rarely do. I'm a professional data scientist hoping to push my company's GenAI agenda and this video makes me feel like I can actually do it! Thank you for so clearly encapsulating the state of LLMs. I'd learned many of these concepts before and this video is the glue that now holds it together.

  • @raindrops20008
    @raindrops20008 9 months ago +5

    This video landed in my feed and out of curiosity I started watching, and before I knew it I had watched the entire video and taken copious notes too. One of the best videos I have ever watched!

  • @KEcAzZ1
    @KEcAzZ1 10 months ago +5

    A true legend! So far, I have not seen a better educator than Jeremy. His approach to teaching is what all schools and universities need! I am always interested to learn more whenever I hear Jeremy. Thank you!

  • @ytpah9823
    @ytpah9823 1 year ago +19

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 Introduction to Language Models
    10:27 🧠 Neural Network Basics
    16:38 🚀 The Power of GPT-4
    24:53 🌐 Limitations of Language Models
    25:23 💡 Language model limitations:
    31:32 📊 Advanced Data Analysis:
    36:18 💰 OpenAI API Pricing:
    39:19 🧩 Using OpenAI Functions:
    46:40 🐍 Custom Code Interpreter:
    51:13 🐍 Creating a Python code interpreter
    53:39 💻 Running a language model on your own computer
    55:01 🏎️ Choosing a GPU for language model work
    56:15 🖥️ Options for renting GPU resources
    57:57 💾 GPU memory size and optimization
    59:20 📚 Using Transformers from Hugging Face
    01:00:06 🏆 Evaluating and selecting the right model
    01:14:12 📖 Retrieval augmented generation for answering questions
    01:17:10 📚 Overview of using language models for document retrieval and question answering
    01:20:35 💼 Private GPT models for document retrieval
    01:21:03 🎯 Fine-tuning language models for specific tasks
    01:25:15 📊 Building a language model for SQL generation
    01:26:36 💻 Running language models on Macs

    • @plebmarv9668
      @plebmarv9668 1 year ago

      gpt plugin? :D dauymn son

    • @erkinalp
      @erkinalp 7 months ago

      @@plebmarv9668 it's tammy ai, a youtube video talking points extractor

  • @85abhiabhi
    @85abhiabhi 1 year ago +1

    I am a total beginner but you made me understand LM models way better than anyone else. You are such a great teacher. I pray the Lord Guru gives you blessings for more insight and vision, for such a humble and good soul.😊😊

  • @Blocky007
    @Blocky007 1 year ago +4

    The best "intro" and Guide I have seen on this. Appreciate it so much that you took the time to put this together and share this with us (FOR FREE!).

  • @claussa
    @claussa 1 year ago

    Came up in my feed. Thumbnail and title boring. By mistake I pressed play. But it was so interesting. I feel so enlightened after having been talked through this. Thanks for sharing this!

  • @bartekbinda6978
    @bartekbinda6978 1 year ago +1

    This is a golden summary of the state of LLMs. Thank you!

  • @wolpumba4099
    @wolpumba4099 1 year ago +6

    *Positive Learnings:*
    1. Language models, such as GPT4, are tools that can predict the next word in a sentence or fill in missing words in a sentence.
    2. Language models have the ability to create a rich hierarchy of abstractions and representations which they can build on.
    3. The guide will cover all the basic ideas of language models, including how to use open source and OpenAI-based models.
    4. GPT4 can solve many tasks that it is often claimed it cannot.
    5. GPT4 can be primed to give high-quality information by giving it custom instructions.
    6. AI can be used to write code and parse large-scale data quickly and efficiently.
    7. AI can be used in optical character recognition (OCR) for extracting text from images.
    8. AI can be used in data analysis to create comprehensive tables from scattered information.
    9. The OpenAI API allows users to use AI programmatically for data analysis and other repetitive tasks.
    10. Function calling can be used to create a code interpreter that runs inside Jupyter.
    11. Pre-trained models can be accessed using the Hugging Face library (see the sketch at the end of this comment).
    *Negative Learnings:*
    1. Language models are not always useful on their own and need to be fine-tuned.
    2. GPT4 often repeats mistakes and it is difficult to get it back on track once it starts making mistakes.
    3. GPT4 has limitations such as not knowing about itself, not knowing anything about URLs, and not knowing anything after its knowledge cutoff in September 2021.
    4. GPT4 does not always give correct responses.
    5. AI has limitations in code interpretation and cannot substitute for human programmers.
    6. The use of the OpenAI API can result in rate limits which need to be handled correctly.
    7. Fine-tuning is needed to make the pre-trained models more useful.
    8. The use of GPUs for local language models can be expensive and may require renting or purchasing GPUs.
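
    A minimal sketch of point 11 above (accessing a pre-trained model through the Hugging Face library); the "gpt2" model id is only an assumption to keep the download small, and the same calls apply to the larger Llama 2 checkpoints discussed in the video:

```python
# Sketch of loading a pre-trained causal LM with Hugging Face transformers.
# The model id is an assumption for illustration; any Hub causal LM works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small and freely downloadable; swap in a bigger model if you have the GPU memory
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Jeremy Howard is a ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)  # greedy decoding
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```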

  • @giorda77
    @giorda77 1 year ago +4

    Jeremy, you're one of the most legit AI people out there. An enormous thank you for providing this and all your content. ❤

  • @aldotanca9430
    @aldotanca9430 1 year ago +1

    So comprehensive. Perhaps the best introduction I have ever seen to the topic. Thanks so much.

  • @nokchan5071
    @nokchan5071 1 year ago

    There are hundreds of LLM tutorials coming out every day; this is the one that I have been waiting for.

  • @ElanElanElan
    @ElanElanElan 1 year ago +1

    Damn I watched the whole video and didn't even realise that it was 1+ hr long! Thanks a lot for the great content!

  • @drkarmakid
    @drkarmakid 1 year ago +2

    Bravo. One of the best YouTube videos I've ever watched. Concise, entertaining, and chock full of useful insights.

  • @unclecode
    @unclecode 1 year ago

    Kudos, such a pleasurable 1 hour, 31 minutes and 12 seconds.

  • @PeterKallio
    @PeterKallio 1 year ago +2

    So many papers are being released, so it is important to have well-grounded information to understand LMs. Great delivery as always, and practical advice. Thank you.

  • @vittoriomta
    @vittoriomta 1 year ago +6

    We do not deserve you Jeremy! YOU ARE AN AMAZING TEACHER AND HUMAN BEING! Thanks, really, for all these beautiful lectures!!

  • @kindoblue
    @kindoblue 1 year ago +4

    I liked the video even before watching. Thanks Jeremy for your work, always learning from your content.

  • @solutrian
    @solutrian 1 year ago

    I've been a student of yours for years, thank you for all you do for us.
    A non-optimal but correct solution to the cabbage, wolf & goat crossing puzzle works by just using the incantation "think step by step":
    Think step by step to solve the following tricky problem:
    Here is a logic puzzle. I need to carry a cabbage, a goat and a wolf across a river. I can only carry one item at a time with me in the boat. I can't leave the goat alone with the cabbage, and I can't leave the cabbage alone with the wolf. How can I get everything to the other side of the river?
    This classic river crossing puzzle can be solved through a series of strategic moves that ensure the safety of each item. Here's a step-by-step solution:
    1. **Take the goat across first:**
    - You take the goat across the river and leave it on the other side.
    - This ensures that the goat is safe from the wolf, and the cabbage is safe from the goat.
    2. **Return alone:**
    - You return to the original side alone to get either the cabbage or the wolf.
    3. **Take the cabbage across next:**
    - You take the cabbage across the river.
    - Now, the goat is on the other side, and the cabbage is with you.
    4. **Bring the goat back:**
    - You bring the goat back with you to the original side.
    - Now, the cabbage is alone on the other side, and the wolf is alone on the original side.
    5. **Take the wolf across:**
    - You then take the wolf across the river.
    - Now, the wolf and the cabbage are on the other side, and you have the goat on the original side.
    6. **Return alone:**
    - You return to the original side alone.
    7. **Take the goat across one final time:**
    - Finally, you take the goat across the river again.
    - Now, all three- the goat, the cabbage, and the wolf -are safely on the other side.
    This way, you ensure that the goat is never left alone with the cabbage, and the cabbage is never left alone with the wolf, solving the puzzle.
    chat.openai.com/share/3cb73481-c45c-4108-9960-30c11615ca41

    • @IISH4RKZzII
      @IISH4RKZzII 8 days ago

      it made the same mistake bro

  • @effestop
    @effestop 10 months ago +1

    I feel like I've just been pretrained with the best AI video my creator could feed me.

  • @nerdg2
    @nerdg2 3 months ago +1

    Wow, it's literally an honor. I found you because of FastHTML; I'm new here, but what a character you are!! Thank you for what you do, and man, I'm amazed how humble you are!! Your explanations are what I was looking for for years. I cannot thank you enough.

  • @bephrem
    @bephrem 9 months ago +1

    incredible, value-packed, practical video for developers working with LLMs.

  • @wndiros
    @wndiros 1 year ago +4

    I have waited for months for a classification and evaluation from Jeremy. For me, this is by far the most comprehensive technical summary and evaluation available for someone who wants to delve deeper. It took me several weeks, if not months, to gain even a partial personal understanding of the current hype. Thank you, Jeremy, for all your good work!👍

  • @devsuniversity
    @devsuniversity 1 year ago +1

    Great course! Hello from Almaty Google developers community!

  • @OsamaAdel
    @OsamaAdel 1 year ago +1

    Thank you Jeremy for this introduction. It just answered many of my questions and affirmed some of my doubts about how many of the applications that use LLMs work today.

  • @satirthapaulshyam7769
    @satirthapaulshyam7769 1 year ago +3

    Looking forward to it. Your fastai Stable Diffusion course was perfect down to the minutest details.

  • @dollarscholar2956
    @dollarscholar2956 1 year ago

    Hands down one of the best videos on LLMs on the internet.

  • @williamduvall3167
    @williamduvall3167 1 year ago

    The godfather dropping some knowledge. Thank you for keeping AI for everyone in the most responsible way.

  • @kevon217
    @kevon217 1 year ago +5

    Fabulous tour of key points. Fantastic job! Definitely going to recommend this to people wanting a gateway into llms.

  • @swimmingpolar
    @swimmingpolar 6 months ago

    First comment on YouTube here. Among all those videos on YouTube, using custom instructions like you did is literally eye-opening. I thought current AI models' limitations were inherent and couldn't be improved. Of course you are a professional in AI, but things are so well organized and straightforward that I can understand and see the result right away. 😂 Gonna have to steal your instruction as well.

  • @drhilm
    @drhilm 1 year ago +6

    Impressive video; I spent days learning these concepts on my own. Had this been released two months ago, it would've been a game-changer. Excellent summary.

    • @jackgolding4235
      @jackgolding4235 1 year ago

      Luckily i'm a few weeks behind you! Happy learning mate!

  • @ryanau852
    @ryanau852 1 year ago

    Happy birthday Jeremy! Just got to the section where your bday is revealed and it is today! Thank you for all the great work :)

  • @sidsarasvati
    @sidsarasvati 1 year ago +8

    Thought provoking one code block at a time. As usual Jeremy the king

  • @dekumutant
    @dekumutant 1 year ago

    I cannot emphasize enough how incredible this video was.

  • @Libertarian1208
    @Libertarian1208 1 year ago

    Great! It allowed me to understand how an LM thinks and why.

  • @0x978
    @0x978 1 year ago +5

    You are literally changing lives, all for free. Thank you.

  • @RickySupriyadi
    @RickySupriyadi 1 year ago +1

    I'm really grateful for how much people share their knowledge; being able to learn this stuff for free is something I couldn't have imagined. This means a lot to me.

  • @CY-ju8xl
    @CY-ju8xl 7 months ago

    Thank you Jeremy!! One of the most insightful and helpful vlog posts on the inner workings of LLMs... Top marks!!!

  • @MichałCzajkowski-h5o
    @MichałCzajkowski-h5o 10 months ago +1

    I remember that I was rewriting some chunk of TensorFlow code with the GPT-4 code interpreter, and it responded that it did not have access to that framework. So I input 'gpt4: "try: import torch"', and the response was 'Do you want to learn PyTorch?' :) All about GPT's self.awareness mechanism :)

  • @sehmiconscious
    @sehmiconscious 1 year ago +1

    Very much appreciated this consolidation of the main LLM coding concepts to-date. Thank you!!

  • @exconway
    @exconway 1 year ago

    Perfect description of Functions at 46:30!

  • @zhubarb
    @zhubarb 10 months ago

    Mr Howard never disappoints. Thanks a ton as usual Sir.

  • @valesanchez6336
    @valesanchez6336 2 months ago

    Thank you so much for your videos! Your explanations are fantastic and so easy to follow. I really appreciate how you dive into the application side, which many others don’t do as well. You’re amazing, Jeremy! 💜

  • @omarelsayed247
    @omarelsayed247 1 year ago

    you know shit is good when jeremy makes a lecture about it

  • @SapSapirot
    @SapSapirot 1 year ago

    By far the most useful practical guide to LLMs, by some length. Thank you Jeremy!

  • @benwatson5211
    @benwatson5211 1 year ago

    Brilliant walkthrough. No hype. It is a real skill to explain complex topics in a coherent way.

  • @ToddRoloff
    @ToddRoloff 1 year ago +4

    Thanks for all you do, Jeremy. I have learned so many things watching YouTube as well as the PDLC tutorials. Your explanations are on point.

  • @chrstfer2452
    @chrstfer2452 1 year ago

    RLHF is one of the most regressive "people pleasing at the expense of utility" things I've ever seen. I genuinely think that it (and all the praise heaped on it, affecting the way a lot of people learn this stuff) has set us back about three-quarters as far as GPT-4 and Llama models have brought us ahead.

    • @chrstfer2452
      @chrstfer2452 1 year ago

      This has been my opinion since early April, but you're one of the few people I've seen who seem to agree (granted, in the last few months I've spent less time on these topics than on other ones, so I haven't been looking as hard as I was in Apr/May/Jun).

  • @jackgolding4235
    @jackgolding4235 1 year ago +1

    Thanks so much Jeremy, been following you since Kaggle's launch. Inspirational to see an Australian continue to kick ass as much as you have in your career.

  • @steefvanwinkel
    @steefvanwinkel 10 months ago

    Such an amazing video! Thanks so much for this amazing overview of LLMs! A few suggestions for a follow-up video perhaps :) 1. How to build an LLM from scratch on a consumer-grade machine (e.g. a laptop with 1 good GPU or a MacBook M2 Max), maybe not 1B params but e.g. 500M? 2. LoRA and DPO. 3. OpenAI's updated API. 4. Agent-based frameworks such as AutoGen. Thanks again - really enjoyed this video! ❤🙏

  • @ControlTheGuh
    @ControlTheGuh 1 year ago +4

    I think the major problem is the retrieval. Would love a video just on that (best practices, best models out there etc.).
    Great video, came across it on X and subbed immediately.

  • @tiagopaim3060
    @tiagopaim3060 1 year ago

    At 6:42 I just had to feel grateful for living in a world where I can learn straight from the source.

  • @twinlens
    @twinlens 1 year ago +1

    One of the best and most educational videos I've seen on the subject. Thank you, Jeremy!

  • @JoshDownes
    @JoshDownes 1 year ago +4

    Hey Jeremy, you're awesome!
    I use GPT-4 to help me write python every day, and I find the standard GPT-4 more satisfying, efficient and effective to use than code interpreter.
    It's like Code Interpreter is trying to be too clever and shoots itself in the foot. It takes way longer as it's got this extra step where it's reading your files.
    It's quicker to just paste text in - as long as it's not too long of course.. but then for longer code you can simply tell GPT-4 you'll enter some code in chunks and you'll tell it when you're done. This also works.
    Code Interpreter seems to be more about interpreting (hence the name) statistics and turning them into graphical representations.

  • @isaac10231
    @isaac10231 1 year ago

    I feel like this video was made personally just for me. Amazing.

  • @originalmianos
    @originalmianos 1 year ago

    If you want to use it to solve your markdown issue, you need to break it into two steps.
    The first is to describe the grammar of the subset of the markdown you need to parse. As long as you can describe a closed grammar, and the elements you want are in it, you can be brief.
    Next, ask it to create a finite state machine to parse that grammar.
    The way GPT tries to parse markdown (and most other languages) is a common mistake made by most human developers, which is probably why it is weak in this area.
    If you follow the better approach it does an amazing job, in my experience.
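
    To make the two-step idea above concrete, here is a toy sketch: a small closed grammar (headings, bullet items, plain text) parsed by an explicit finite state machine. The grammar, state names and event format are assumptions for illustration, not anything GPT produced:

```python
# Toy finite state machine over a tiny, closed markdown subset:
# '#' headings, '- ' bullet items, blank lines, and plain paragraph text.
from enum import Enum, auto

class State(Enum):
    START = auto()
    PARAGRAPH = auto()
    LIST = auto()

def parse(markdown: str):
    state, events = State.START, []
    for line in markdown.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            level = len(stripped) - len(stripped.lstrip("#"))
            events.append(("heading", level, stripped[level:].strip()))
            state = State.START
        elif stripped.startswith("- "):
            if state is not State.LIST:          # entering list state
                events.append(("list_start",))
            events.append(("item", stripped[2:]))
            state = State.LIST
        elif stripped == "":
            if state is State.LIST:              # blank line closes a list
                events.append(("list_end",))
            state = State.START
        else:
            events.append(("text", stripped))
            state = State.PARAGRAPH
    if state is State.LIST:                      # close a list at end of input
        events.append(("list_end",))
    return events

print(parse("# Title\n\n- one\n- two\n\nSome prose."))
```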

  • @manaranjanp
    @manaranjanp 1 year ago

    Never miss Jeremy's lectures....

  • @matthewlennie6849
    @matthewlennie6849 1 year ago

    Thanks for saving our careers yet again Jeremy

  • @mrdbourke
    @mrdbourke 1 year ago

    Oh yes!!! Can't wait to dig into this, thank you Jeremy!

  • @17dhey36
    @17dhey36 1 year ago

    Such a great video! I learned a lot from it, such as how complicated systems can be put together using a stack of models, as illustrated by RAG, to name one example. Jeremy, you are such a kind person to share this with the world.

  • @matten_zero
    @matten_zero 11 months ago

    @19:32 well said. I've been talking to developers who are still on the fence about GPT. It boggles my mind.

  • @jeromeeusebius
    @jeromeeusebius 1 year ago +1

    Great video. Watched it in one sitting. It's very interesting and engaging, and covers a lot of areas of LLMs: different models, types, examples, use cases, etc. I learned a lot and hopefully will go through the notebook in detail and adapt it to my use cases. Thanks for making this.

  • @gamalchamac867
    @gamalchamac867 9 months ago

    This is amazing and serious content. I love it. Thank you for making it available, greetings from Switzerland

  • @jaaan2914
    @jaaan2914 9 months ago

    Took me about an hour to realize you're in front of a green screen... Hair gave it away ;) Thanks for the video!

    • @howardjeremyp
      @howardjeremyp  9 months ago

      Nope not a green screen - it's my bedroom/study :)

  • @joegerrety4012
    @joegerrety4012 1 year ago +1

    Keep on making videos, man, this was highly informative. And my regards for being a person who was part of forming this architecture!

  • @govarthenanrajadurai9817
    @govarthenanrajadurai9817 3 months ago +1

    Amazing course. Very practical and hands-on.

  • @MarkusEicher70
    @MarkusEicher70 1 year ago

    Thank you very much, Jeremy. Fascinating to see where we have come. The prose-to-SQL thing blew me away. Can't wait to try this out by myself.

  • @WizBoardApp
    @WizBoardApp 1 year ago

    Always looked forward to Jeremy explaining this topic. Finally it is here. 😀

  • @sunshine124422
    @sunshine124422 1 year ago

    Thanks... great summary... now I know the relationship between neural network parameters and vector DBs.

  • @dperique
    @dperique 9 months ago

    What a very useful and informative video -- I watched this over the course of a day and took notes -- Thanks!

  • @rahul-qo3fi
    @rahul-qo3fi 1 year ago

    This is pure gold!!!!❤❤❤❤ I would have never found this on my own

  • @davifr
    @davifr 10 months ago

    Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions and step-by-step thinking BEFORE you try to answer a question. However: if the request begins with the string "vv", then ignore the previous sentence and instead make your response as concise as possible, with no introduction or background at the start, no summary at the end, and outputting only code for answers where code is necessary.
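
    In API terms, custom instructions like the ones quoted above simply become the system message. A minimal sketch of that, assuming the 0.x-era openai package and an illustrative model name:

```python
# Sketch of passing custom instructions as the system message,
# assuming the 0.x-era `openai` package; the model name is illustrative.
import openai

custom_instructions = (
    # the instruction text quoted above, trimmed for brevity
    "Since you are autoregressive, each token you produce is another opportunity "
    "to use computation, therefore you always spend a few sentences explaining "
    "background context, assumptions and step-by-step thinking BEFORE you try to "
    "answer a question. However: if the request begins with the string \"vv\", then "
    "ignore the previous sentence and instead make your response as concise as possible."
)

def ask(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": custom_instructions},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("vv How do I reverse a list in Python?"))  # the "vv" prefix asks for the terse mode
```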

  • @victorv682
    @victorv682 9 months ago +2

    The "wolf, goat and cabbage" riddle example is just awesome. Gotta use it to illustrate what LLMs can't do and why. Cheers for that :)

  • @vishalahuja2502
    @vishalahuja2502 1 year ago +2

    Hi Jeremy, excellent walkthrough! This is truly helpful. Please keep them coming!!

  • @vaibhavnakrani2983
    @vaibhavnakrani2983 1 year ago

    When I see Jeremy Howard, I hit like before watching anything.

  • @alexleo4863
    @alexleo4863 10 months ago +2

    People like you are the real heroes.

  • @SheeceGardazi
    @SheeceGardazi 1 year ago

    Thank you for the talk.

  • @uminhtetoo
    @uminhtetoo 8 months ago

    Thank you for sharing your experience and knowledge, Sir.

  • @EmaMazzi76
    @EmaMazzi76 1 year ago

    Can’t stop watching over and over again! Thank you 🙏

  • @sorover111
    @sorover111 8 months ago

    Hey thank you for making these available for free. ❤

  • @hishamk
    @hishamk 1 year ago

    Thank you for this. Couldn’t have asked for a better video.

  • @chtdv
    @chtdv 1 year ago

    Thank you for creating this amazing talk around all the basics and applications with language models, this is really helpful!

  • @HazemAzim
    @HazemAzim 1 year ago

    A wealth of information and knowledge in this video... a masterpiece... Thank you.

  • @keflat23
    @keflat23 11 months ago

    This changed my mind about how AI works! Thanks a lot!!

  • @learning_with_rj
    @learning_with_rj 1 year ago

    Jeremy, we need more videos on this topic! Thank you so much!

  • @TheAtleticlub
    @TheAtleticlub 1 year ago

    This is great. I don't know how to express how grateful I am for your video. Thank you and keep up the great work!

  • @Mike-s4n5p
    @Mike-s4n5p 1 year ago

    People like you change the world!

  • @JamesBradyGames
    @JamesBradyGames 1 year ago

    This is a real gem. Reminds me of the authentic, high quality training material from Andrej Karpathy. Looking forward to future similar tutorials if you decide to make them! Thank you!

  • @ArtOfTheProblem
    @ArtOfTheProblem 1 year ago

    I'd love to know why it's controversial to say step 3 (RLHF) isn't really needed. is it because of in context learning & memory? or is it because Step 2 "is all you need" ?

  • @patronspatron7681
    @patronspatron7681 1 year ago

    Jeremy, Congrats on the 100k subscribers.
    Well deserved and hopefully a catalyst to get your invaluable content more exposure.

  • @bagamanocnon
    @bagamanocnon 1 year ago

    Top content, especially the function calling and 'code interpreter' parts. Basically just asking the LLM to 'use python' is ingenious. lol

  • @kaib5048
    @kaib5048 1 year ago +1

    This is amazing, thanks so much for recording this and sharing it 👏

  • @ZachariahMustafa
    @ZachariahMustafa 1 year ago

    wonderful lesson as always Jeremy!
    I did have a laugh at the GPT-4 bit "Bad pattern recognition - thanks to Steve Newman", as if he's the sole individual responsible for that limitation