Decoder
  • Videos: 7
  • Views: 182,949
LangChain Fundamentals: Build your First Chain
LangChain is one of the most popular frameworks for coding complex LLM-powered logic. It provides the ability to batch and stream calls across different LLM providers, vector databases, 3rd party APIs, and much more. In this video, we explore the very basics of getting started with LangChain - understanding how to build a rudimentary chain complete with templating and an LLM call. Let's go!
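As a sketch of what that chain looks like under the hood, here is a toy, stdlib-only emulation of the "prompt | llm" pipe. The class names are illustrative, not LangChain's real API; in the real library the template and model come from langchain_core / langchain_community and compose the same way.

```python
# Toy sketch of the "prompt | llm" pipe idea; illustrative names only.

class PromptTemplate:
    """Fills {placeholders} in a template string."""
    def __init__(self, template):
        self.template = template

    def invoke(self, variables):
        return self.template.format(**variables)

    def __or__(self, other):
        # `prompt | llm` builds a two-step chain
        return Chain([self, other])

class FakeLLM:
    """Stand-in for an LLM call (e.g. Ollama); just echoes the prompt."""
    def invoke(self, prompt):
        return f"LLM response to: {prompt}"

class Chain:
    """Runs each step on the previous step's output."""
    def __init__(self, steps):
        self.steps = steps

    def invoke(self, value):
        for step in self.steps:
            value = step.invoke(value)
        return value

prompt = PromptTemplate("Tell me a joke about {topic}")
chain = prompt | FakeLLM()
print(chain.invoke({"topic": "bears"}))
# prints: LLM response to: Tell me a joke about bears
```

The pipe operator is the whole trick: each stage exposes the same `invoke` interface, so templating, model calls, and output parsing can be snapped together in any order.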
Links:
Code from video - decoder.sh/videos/langchain-fundamentals:-build-your-first-chain
LangChain - langchain.com
Ollama Integration - api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html
Prompts & Templates - python.langchain.com/v0.1/docs/modules/model_io/prompts/quick_sta...
Views: 6,285

Videos

Meta's Llama3 - The Mistral Killer?
1.9K views, 4 months ago
Meta's Llama3 family of models in 8B and 70B flavors was just released and is already making waves in the open source community. With a much larger tokenizer, GQA for all model sizes, and 7.7 million GPU hours spent training on 15 TRILLION tokens, Llama3 seems primed to overtake incumbent models like Mistral and Gemini. I review the most important parts of the announcement before testing the ne...
RAG from the Ground Up with Python and Ollama
31K views, 5 months ago
Retrieval Augmented Generation (RAG) is the de facto technique for giving LLMs the ability to interact with any document or dataset, regardless of its size. Follow along as I cover how to parse and manipulate documents, explore how embeddings are used to describe abstract concepts, implement a simple yet powerful way to surface the most relevant parts of a document to a given query, and ultimat...
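The retrieval step described above can be sketched with toy embeddings: score document chunks against a query by cosine similarity and surface the best match. The 3-dimensional vectors below are made up for illustration; real embeddings come from a model (e.g. via Ollama) and have hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend chunk embeddings (made up for the example)
chunks = {
    "Cats sleep most of the day.": [0.9, 0.1, 0.0],
    "The stock market closed higher.": [0.0, 0.2, 0.9],
}
# Pretend embedding of the query "tell me about cats"
query_embedding = [0.8, 0.2, 0.1]

# Surface the chunk most similar to the query
best = max(chunks, key=lambda c: cosine_similarity(chunks[c], query_embedding))
print(best)  # the cat chunk scores highest
```

The surfaced chunk would then be pasted into the LLM prompt as context, which is all "augmented generation" means at its core.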
LLM Chat App in Python w/ Ollama-py and Streamlit
8K views, 6 months ago
In this video I walk through the new Ollama Python library, and use it to build a chat app with UI powered by Streamlit. After reviewing some important methods from this library, I touch on Python generators as we construct our chat app, step by step. Check out my other Ollama videos - ruclips.net/p/PL4041kTesIWby5zznE5UySIsGPrGuEqdB Links: Code from video - decoder.sh/videos/llm-chat-app-in-py...
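The generator pattern mentioned above can be sketched without a running server. `fake_stream` below is a hypothetical stand-in for the chunks a real streaming call (e.g. `ollama.chat(..., stream=True)`) would yield, and `render` plays the role a UI loop like Streamlit's would.

```python
def fake_stream(text):
    """Yield a response word by word, like a streaming LLM API would."""
    for word in text.split():
        yield word + " "

def render(stream):
    """Consume the stream incrementally; a UI would redraw on each chunk."""
    full = ""
    for chunk in stream:
        full += chunk  # accumulate chunks as they arrive
    return full.strip()

reply = render(fake_stream("Hello there, how can I help?"))
print(reply)
# prints: Hello there, how can I help?
```

Because generators produce values lazily, the UI can show partial output the moment the first chunk arrives instead of waiting for the full response.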
Importing Open Source Models to Ollama
31K views, 7 months ago
Hugging Face is a machine learning platform that's home to nearly 500,000 open source models. In this video, I show you how to download, transform, and use them in your local Ollama setup. Get access to the latest and greatest without having to wait for it to be published to Ollama's model library. Let's go! Check out my other Ollama videos - ruclips.net/p/PL4041kTesIWby5zznE5UySIsGPrGuEqdB Lin...
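As a rough sketch of the import step described above: point a Modelfile at the downloaded GGUF weights and register it with `ollama create`. The filename below is a placeholder for whatever file you pulled from Hugging Face.

```
# Modelfile - points Ollama at a locally downloaded GGUF weights file
FROM ./downloaded-model.Q4_K_M.gguf

# then, from the same directory:
#   ollama create my-model -f Modelfile
#   ollama run my-model
```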
Use Your Self-Hosted LLM Anywhere with Ollama Web UI
72K views, 7 months ago
Take your self-hosted Ollama models to the next level with Ollama Web UI, which provides a beautiful interface and features like chat history, voice input, and user management. We'll also explore how to use this interface and the models that power it on your phone using the powerful Ngrok tool. Watch my other Ollama videos - ruclips.net/p/PL4041kTesIWby5zznE5UySIsGPrGuEqdB Links: Code from the ...
Installing Ollama to Customize My Own LLM
33K views, 8 months ago
Ollama is the easiest tool to get started running LLMs on your own hardware. In my first video, I explore how to use Ollama to download popular models like Phi and Mistral, chat with them directly in the terminal, use the API to respond to HTTP requests, and finally customize our own model based on Phi to be more fun to talk to. Watch my other Ollama videos - ruclips.net/p/PL4041kTesIWby5zznE5U...
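The "API to respond to HTTP requests" part can be sketched by building the JSON body Ollama's `/api/generate` endpoint expects. Actually sending it assumes `ollama serve` is running on the default port 11434, so this example only constructs the payload; the send step is left commented.

```python
import json

# Request body for POST http://localhost:11434/api/generate
payload = json.dumps({
    "model": "phi",
    "prompt": "Why is the sky blue?",
    "stream": False,  # return one complete response instead of chunks
})

print(payload)

# To actually send it with only the standard library (needs a running server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=payload.encode(), headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```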

Comments

  • @acan.official
    @acan.official 5 days ago

    Where do I put the first code? I'm a complete beginner.

    • @acan.official
      @acan.official 4 days ago

      Found it. But why does the web address change every time? Can I make it fixed or customize it somehow?

  • @swxin9
    @swxin9 9 days ago

    Dude just made my doubts clear before I finished my tea.

  • @UTubeSucksssss
    @UTubeSucksssss 11 days ago

    For the life of me I can't figure out how to connect Open WebUI to Ollama. 127.0.0.1:11434/ shows Ollama is running. I tried host.docker.internal:11434 and 127.0.0.1:11434/ as the Ollama API in Open WebUI, still unsuccessful. Went inside the Docker container (docker exec -it name) and curled 127.0.0.1:11434, still no good. I deleted Ollama, removed all Docker images, etc., still no good :( The regular container without port mapping works fine though. Great videos btw, I have watched all your videos and followed your tutorials. I hope you do more Ollama and LangChain videos.

  • @TimothyMusson
    @TimothyMusson 11 days ago

    I'm really impressed with the 27B version of Gemma2. It's working well for me as a usefully competent Russian language conversation partner/tutor, which is pretty amazing for something small enough to run locally. Mistral (7B) and Llama3 (8B) weren't quite sharp enough.

  • @nandinijampana528jampana3
    @nandinijampana528jampana3 11 days ago

    First of all, thank you for making this video! Can you also make a video on how to handle multiple text files? Thank you.

  • @TimothyMusson
    @TimothyMusson 12 days ago

    I really like the way you present this stuff so clearly and directly - you have a great teaching style. I'm going to keep tinkering with the chat app: it's fun! Glad I found this channel - thanks :)

  • @szebike
    @szebike 12 days ago

    Awesome structure and explanation !

  • @OgeIloanusi
    @OgeIloanusi 15 days ago

    This is a great video. You teach like a Professor. You're an expert and well talented! Your organization will indeed love working with you.

  • @OgeIloanusi
    @OgeIloanusi 16 days ago

    You're great!

  • @OgeIloanusi
    @OgeIloanusi 17 days ago

    Thank You!!

  • @OgeIloanusi
    @OgeIloanusi 17 days ago

    Thank you!

  • @mohammedrashad-n9p
    @mohammedrashad-n9p 20 days ago

    Very helpful, thank you

  • @DmitriZaitsev
    @DmitriZaitsev 21 days ago

    Getting the error on "from langchain_community.llms import Ollama": ModuleNotFoundError: No module named 'langchain_community'

  • @bigRat4335
    @bigRat4335 25 days ago

    Can't run the command at 8:01, permission denied? 😕

  • @samadislam4458
    @samadislam4458 28 days ago

    I can only see the bin file; where is the GGUF file?

  • @ISK_VAGR
    @ISK_VAGR a month ago

    Man, that is amazing. It took me 10 min to set this up and I am not a coder. Thanks, that is bonkers. I have 3 immediate questions: Is it safe in terms of the information that one loads there? Can one customize the logos and the appearance? Can one use it for personal and commercial purposes?

  • @robwin0072
    @robwin0072 a month ago

    Will one lose the private LLM when using ngrok? IOW, will the prompts and responses be exposed to external servers, the Internet?

  • @AnmollDwivedii
    @AnmollDwivedii a month ago

    Can you please add a video/short showing how to display the name I want in Open WebUI? I mean, when I write something in the Open WebUI localhost you showed, I want my message to appear as "Me: bla bla bla" and the LLM reply as "[the name I want]: bla bla bla". Please 🥺🥺

  • @RealEstate3D
    @RealEstate3D a month ago

    That was an interesting one. I saw some of your videos and subscribed instantly. You deserve more attention. Hope to see more from you in the future.

  • @张立昌-o4l
    @张立昌-o4l a month ago

    I have downloaded Ollama and stored it on my computer, but cannot open it. Why? How do I deal with this?

  • @maxlgemeinderat9202
    @maxlgemeinderat9202 a month ago

    How would you import non-quantized models?

  • @romabu2041
    @romabu2041 a month ago

    No way, I just found this in a 10-minute video. I wasted a whole day trying to set up nginx properly. Thank you so much for your concise and informative tutorial; now it works for me. P.S. I am in deep regret that some extraterrestrial force made me find out about nginx, but it is also overshadowed by the fact that it literally took me 15 minutes to set up the whole online LLM VM on GCP from zero to hero.

  • @Mr.Morgan.
    @Mr.Morgan. a month ago

    Thank you for the video! I have one issue: when I try to chat with my model through my phone using Open WebUI and ngrok, the model generates answers endlessly after one or maybe two completed answers. But if I do this from another PC, all works as it should. Does anyone know the solution for this?

  •  a month ago

    Thanks a lot for the detailed explanation in the video! I have a question regarding Ollama: Is it possible to use Ollama and the models available on it in a production environment? I would love to hear your thoughts or any experiences you might have with it.

  • @mahaltech
    @mahaltech a month ago

    Hello pro, it's a very good tutorial. I have a file containing some articles. It works well with small articles of 2-3 lines, but if an article is more than 20 lines, the response comes from nowhere. Can I increase the chunk size, or is there any other solution? Thank you in advance.

  • @NoHack_Know_How
    @NoHack_Know_How a month ago

    Hello, how can I run Ollama for my internal network? I don't really need outside access yet; can you explain or point me in that direction, please?

  • @matthewnohrden7209
    @matthewnohrden7209 a month ago

    THIS IS SO COOL. I've been looking for a way to do this for a couple of months now

  • @spsoni
    @spsoni a month ago

    You are doing an amazing job. Looks like you are not feeling motivated to make more videos; I see you're spending too much time editing, hence the turn-off. Just produce raw videos, people will excuse small mistakes. You have some great talent, spread the knowledge around. Can I request you to make a similar video on CrewAI? Thanks.

    • @decoder-sh
      @decoder-sh a month ago

      Hey, thanks for the comment! You rightfully noticed my absence; however, I've actually been spending that time moving to a new city and building a tool to help me edit much faster! It's not quite ready for prime time yet, but I am looking for alpha testers - it's matcha.video. Anyway, more videos coming soon; CrewAI and related tools are on my list. Thanks for watching :)

  • @skperera-g8l
    @skperera-g8l a month ago

    Fantastic video! The RAG example given is for a single document, but a repository usually contains dozens of documents. Is there a way to bulk-upload the documents at once to the LLM (for chunking and embedding)? Thanks.

  • @hugopristauz538
    @hugopristauz538 a month ago

    Nice, small-sized demo demonstrating the principles. Good job, I learnt a lot 🙂

  • @BobbyTV23
    @BobbyTV23 2 months ago

    Thanks for the video. Keep going. Your explanations are on point!

  • @Hotboy-q7n
    @Hotboy-q7n 2 months ago

    Hey man!!! Thankx You're the man You're the one who wakes the rooster up You don't wear a watch, you decide what time it is When you misspell a word, the dictionary updates You install Windows, and Microsoft agrees to your terms When you found the lamp, you gave the genie three wishes When you were born, you slapped the doctor The revolver sleeps under your pillow You ask the police for their documents When you turned 18, your parents moved out Ghosts gather around a campfire to tell stories about you hugs for brazil

    • @decoder-sh
      @decoder-sh 2 months ago

      Wow no one has ever written me lore before! I hope to live up to your impression of me 🫡

    • @Hotboy-q7n
      @Hotboy-q7n 2 months ago

      @@decoder-sh No need to try hard you already saved my life from an Indian villain who was holding me for more than 6 hours in a suicidal tutorial When you come to Brazil, you already have a house to stay in

    • @decoder-sh
      @decoder-sh a month ago

      Then it sounds like it's time to take this show on the road 😎

    • @Hotboy-q7n
      @Hotboy-q7n a month ago

      @@decoder-sh 😎😎😎😎

  • @richsadowsky8580
    @richsadowsky8580 2 months ago

    Fantastic video. Yes, I could explain embeddings. I already had a basic concept, but your simple file-based cache of the embeddings really highlighted the basic functionality without needing a vector database.

  • @CodingerdaLogician
    @CodingerdaLogician 2 months ago

    Same as adding a system prompt in curl, right? without having to create a new model.

  • @AIVisionaryLab
    @AIVisionaryLab 2 months ago

    Keep it up, brother! I love the way you teach with live demos. It's really effective and easy to understand.

    • @decoder-sh
      @decoder-sh 2 months ago

      Thank you for the support, I'm resuming filming soon!

  • @nat7352
    @nat7352 2 months ago

    Amazing video, super clear, helped me understand and debug my code! Thank you for sharing this.

  • @user-tk5ir1hg7l
    @user-tk5ir1hg7l 2 months ago

    would like to see a video on pretraining and fine-tuning models

  • @DihelsonMendonca
    @DihelsonMendonca 2 months ago

    💥 That's wonderful. I'm not a programmer and don't know Python, but I could install Open WebUI, and it has only Ollama models, and I love those Hugging Face GGUF models. So I need a way to run them on Open WebUI. Thanks! ❤❤❤

  • @nikwymyslonynapoczekaniu123
    @nikwymyslonynapoczekaniu123 2 months ago

    C:\Users mibe> ollama create arr-phi --file arr-modelfile gives: Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message". Can anyone help?

  • @VipulAnand751
    @VipulAnand751 2 months ago

    thanks man

  • @skatemore33
    @skatemore33 2 months ago

    Hey man great tutorial. For some reason on my phone, I can't access my chat history. I can only start a new chat each time. Do you know how to fix this?

  • @itsban
    @itsban 2 months ago

    If you are into the Apple ecosystem, you can use Enchanted as the UI. It is native and available on Mac.

    • @decoder-sh
      @decoder-sh 2 months ago

      I'll check that out, thanks for the tip

  • @parthwagh3607
    @parthwagh3607 2 months ago

    How do I run this on Windows, where the files are safetensors? Where do I create the Modelfile? I have multiple models in different directories of oobabooga/text-generation-webui that I have to use in Ollama.

  • @skylarksparrow932
    @skylarksparrow932 2 months ago

    Things worked for me on my localhost, up to the stage where it asked me to select a model. No model showed up. I have Phi installed. Can someone help?

    • @Kalsriv
      @Kalsriv 2 months ago

      Try changing the port from 3000 to 4000 when creating the container. Worked for me.

  • @manassingh5351
    @manassingh5351 2 months ago

    Great video! I have a question: after getting a link via ngrok, is the whole AI model still running offline, or is the data going to any other server? That is my main concern. Thanks again.

  • @MrOktony
    @MrOktony 2 months ago

    Probably one of the best beginner tutorials out there!

  • @its_sid_
    @its_sid_ 2 months ago

    How simply he explains the concepts of chaining and piping 👏 But I have a question: is it a RAG model that you've developed?

  • @dannish2000
    @dannish2000 3 months ago

    Are the commands the same if I am using Linux (Ubuntu, WSL)?

    • @decoder-sh
      @decoder-sh 3 months ago

      Linux and Mac are both Unix so I imagine they would be the same

  • @awakenwithoutcoffee
    @awakenwithoutcoffee 3 months ago

    I love that your video is up to date with the latest LangChain imports 👍👍 Are you planning a series on LangChain?

    • @decoder-sh
      @decoder-sh 3 months ago

      I would like to! A few videos on langchain, then a few videos on llamaindex