How To Connect Local LLMs to CrewAI [Ollama, Llama2, Mistral]

  • Published: 4 Feb 2025

Comments • 164

  • @wadejohnson4542
    @wadejohnson4542 11 months ago +10

    I asked and you delivered! I'm at a loss for words to describe you. Just know that you are one of the best in the biz. And from the other comments, you can see that your work is very much appreciated. Thank you.

  • @jdsharp2277
    @jdsharp2277 10 months ago +1

    You are a great teacher. Very easy to follow and cover the topics thoroughly. So glad I found your channel ! Thanks for all your hard work and dedication! 👍⭐⭐⭐⭐⭐👍

  • @RetiredVet1
    @RetiredVet1 11 months ago +17

    In this video Brandon mentions two other Crew AI videos he has created. I've taken the crash course video and it is about the best Crew AI video I have seen. There are other good ones on RUclips, but you should not miss Brandon's videos if you are interested in Crew AI.

    • @bhancock_ai
      @bhancock_ai  11 months ago +3

      Thanks Edward! I seriously appreciate you saying that. I put in a lot of work to get these tutorials just right so that means a lot to me!
      If there is anything specific that you'd like to see, please let me know! I'm always open to suggestions!
      CrewAI can do so much so I want to crank out a lot more videos for you guys!

    • @yellowboat8773
      @yellowboat8773 11 months ago

      @@bhancock_ai clearly an AI response bruh

    • @pakiking1993
      @pakiking1993 11 months ago

      Ok bot

  • @lalpremi
    @lalpremi 11 months ago +3

    Thank you for sharing, looking forward to testing Crewai on my local systems. Have a great day. :-)

  • @ReflectionOcean
    @ReflectionOcean 11 months ago

    - **Learn how to run local LLMs on your machine**: By the end of the video, viewers will know how to run LLMs like Llama 2 and Mistral locally and connect them to CrewAI for free.
    - **Access valuable source code for free**: Click the link in the description to access all the source code from the video at no cost.
    - **Cover the four core technologies**: The tutorial starts by recapping the four technologies used, namely Ollama, Llama 2, Mistral, and CrewAI, to ensure understanding before proceeding.
    - **Set up and run Llama 2 and Mistral on your machine**: Step-by-step guidance is given on setting up and running Llama 2 and Mistral using Ollama on your local machine.
    - **Modify and configure LLMs for CrewAI compatibility**: Learn how to customize LLMs by creating model files with specific parameters so they integrate seamlessly with CrewAI.
    - **Connect LLMs to a CrewAI example (Markdown validator)**: Connect local LLMs to CrewAI examples like the Markdown validator to demonstrate practical usage, such as analyzing Markdown files for errors and improvements.
    - **Update environment variables for localhost communication**: Update your environment variables to point to the localhost address where Ollama is running, enabling communication between CrewAI and the LLMs you've set up.
    - Point to the localhost address where Ollama is running at 14:53
    - Point your OpenAI model name to the newly configured large language model by setting environment variables at 15:00
    - Check the logs in the server folder to validate that the configuration is working properly at 15:34
    - Delete the .env file to show that the setup still functions, demonstrating an alternative method at 16:15
    - Create a new ChatOpenAI instance by providing the model name and base URL directly in the code for a more explicit approach at 16:23
    - Activate the crew by running python main.py after setting up the large language model at 17:02
    - Ensure the OpenAI key is specified to avoid errors at 17:11
    - Monitor the server logs in real time to validate the execution at 17:23
    - Connect CrewAI to a local LLM by specifying the LLM in the agents file for each agent at 20:57
    - Provide detailed context in the tasks to ensure meaningful results with local LLMs at 22:52
    - Be aware of limitations when using advanced features like asynchronous tasks with local language models at 24:00
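    The environment-variable steps summarized above boil down to something like this (a minimal sketch: the variable names follow the OpenAI-compatible convention CrewAI reads, port 11434 is Ollama's default, and the custom model name `crewai-llama2` is an assumed example, not necessarily the one used in the video):

```python
import os

# Point the OpenAI-compatible client that CrewAI uses at the local
# Ollama server instead of api.openai.com (11434 is Ollama's default port).
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"

# Name of the custom model built from the Modelfile (example name).
os.environ["OPENAI_MODEL_NAME"] = "crewai-llama2"

# A key must be present to avoid errors, but Ollama ignores its value.
os.environ["OPENAI_API_KEY"] = "NA"
```

    Setting these (or the equivalent lines in a .env file) is what makes the crew talk to the local server while the rest of the code stays unchanged.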

  • @johns.107
    @johns.107 11 months ago

    Thanks a ton for dropping this. I was literally working all afternoon on this very thing. The last part about which models support the different functionality was very applicable and a time saver versus banging my desk.
    I had also tried using a local LLM with LM Studio (which mirrors the environment setup) for my agents and GPT-4 for the manager_llm. Couldn't get that to work.
    I've been focused on making crews create a Python GUI for parsing videos as a PoC. I'd love to see you give it a whirl. Since we know that stand-alone chats are not great at full-fledged, more complex coding projects, I am trying to move past that hurdle by using a DevTeam crew. Thus far, even with GPT-4, I've been unsuccessful and usually end up with just the example output I provide it.

  • @RenatoCadecaro
    @RenatoCadecaro 11 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 *🚀 Video introduction and main objective*
    - Learn how to run CrewAI for free using Ollama
    - Run LLMs locally, such as Llama 2 and Mistral
    - Connect those LLMs to a CrewAI agent to run CrewAI for free
    00:55 *🛠️ The four core technologies in this tutorial*
    - Ollama: Tool for modifying, downloading, and running LLMs locally
    - Llama 2: Language model trained by Meta, with different model sizes and RAM requirements
    - Mistral: Large language model with notable performance compared to Llama 2
    - CrewAI: Framework for creating and managing AI agents to solve complex tasks
    04:12 *🚀 Setting up Ollama to run LLMs locally*
    - Download and install Ollama, moving it to the Applications folder
    - Set up Ollama in the terminal, then download and run the Llama 2 model locally
    - Set up the Mistral model the same way, preparing the LLMs for use with CrewAI
    08:53 *⚙️ Configuring custom models for CrewAI*
    - Create model files to customize LLM-specific settings
    - Run scripts to create and configure custom Llama 2 and Mistral models for use with CrewAI
    - Verify the list of installed models using the "ollama list" command
    11:00 *🚀 Practical example: connecting local LLMs to CrewAI*
    - Walkthrough of the example using a Markdown validator with CrewAI
    - Run the example by connecting the custom Llama 2 model to CrewAI
    - Show the output and feedback from the Markdown validation example using Llama 2
    Made with HARPA AI

  • @tobiaslim1
    @tobiaslim1 11 months ago +4

    Excellent! Presented exactly as an educator would! I've been through many tutorials and all of them were too difficult to follow! You got it right by providing the workflow at the beginning as well as the programs needed! Great job!

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Thanks man! I really appreciate you saying that! I always worry my videos are too long but there is just so much info that you have to know in order to use these technologies.

    • @tobiaslim1
      @tobiaslim1 11 months ago

      @@bhancock_ai videos can be long as long as there's an outline to follow and objectives to accomplish. I've been a teacher for 20 years and believe me your tutorials have been the best I've seen so far (for non techies like myself)

  • @AC-pr2si
    @AC-pr2si 11 months ago +1

    Great video Brandon!!! Thanks for taking the time to make it.

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Of course! I have a lot more CrewAI content in the works for you guys!
      If there is anything specific that you'd like me to add to the queue, please let me know!

  • @Taisen-oi1fp
    @Taisen-oi1fp 11 months ago +1

    OMG, you are the Hero!! Nice video Brandon!!!

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Thanks Taisen 😂 I appreciate it!

  • @LunaJLane
    @LunaJLane 10 months ago +1

    Not sure where my previous comments went to but I was able to work out all the issues I ran into.

    • @Dr_Tripper
      @Dr_Tripper 9 months ago

      ^C^CTerminate batch job (Y/N)? n
      Environment variables loaded from .env
      Prisma schema loaded from prisma\schema.prisma
      Datasource "db": PostgreSQL database "crew_ai_visualizer", schema "public" at "localhost:5432"
      Error: P1000: Authentication failed against database server at `localhost`, the provided database credentials for `postgres` are not valid.

  • @TailorJohnson-l5y
    @TailorJohnson-l5y 11 months ago +11

    I just subscribed. Please focus on local models when you can, this is fascinating! Thank you!

  • @denijane89
    @denijane89 8 months ago

    Thank you for the detailed walk-through. It took me the whole evening, two conda environments, Gemini, and finally ChatGPT's help to set it up, but yay me. In the end it worked, but damn, is it slow without CUDA. I don't know which of my previous local LLM experiments decided I don't need CUDA in my life, so now I'm waiting for my major KDE update to reinstall it. I think the internet search won't work by default; it will probably require an API key for a search engine, but as I said, it's slow.

  • @DannyBordwell82
    @DannyBordwell82 10 months ago

    Hey mate, first, thank you for making this video! It's the first video I've seen by you and I really like your teaching style. I went to your website and filled out the form but didn't get an email. I'll keep an eye on it but wanted to give you a heads up that you might have a bug. Thanks again for your contributions to all of this, and keep up the great work :)

  • @rickmurphy9613
    @rickmurphy9613 10 months ago +4

    Thanks for the detailed explanation for setting up local LLMs and Agent crews, really informative for beginners like myself!

    • @bhancock_ai
      @bhancock_ai  10 months ago +2

      Thanks Rick! Glad it was helpful!

  • @petlivematters8646
    @petlivematters8646 7 months ago +1

    Very nice content, thanks for the resource and the continued email updates 🎉🎉

  • @CuriousCattery
    @CuriousCattery 10 months ago +1

    Absolutely brilliant! Any chance you could do a video on deploying this to a server so it could be run remotely?

  • @johns.107
    @johns.107 11 months ago +1

    Got both examples working. One thing you didn't cover: While using a local LLM, can you define the Crew to use sequential or hierarchical processes?

  • @RealLexable
    @RealLexable 11 months ago +1

    Looking forward to more. Thx

    • @bhancock_ai
      @bhancock_ai  11 months ago +1

      I'm working on my next tutorial for you guys now!

  • @TopicTome
    @TopicTome 10 months ago +2

    Any reason why I am getting this error: "It seems we encountered an unexpected error while trying to use the tool. This was the error: can only concatenate str (not "dict") to str"? It seems like it is not reading the MarkdownTools file correctly.

  • @BobbyReedy-qy1wp
    @BobbyReedy-qy1wp 9 months ago

    "That's a little bit of a lot" is my favorite phrase. Thank you.

  • @ndungimusyoka3011
    @ndungimusyoka3011 6 months ago

    Thanks, I learnt a lot. Have you encountered the "Action don't exist" issue with local LLMs? If so, how did you resolve it?

  • @hassanahmed7326
    @hassanahmed7326 10 months ago

    Many thanks for the fascinating guide and tips, really appreciate it!!
    I have a question, or a favor to ask: could you make a guide on how to deploy and use CrewAI with an open-source LLM for production environments?

  • @Techonsapevole
    @Techonsapevole 11 months ago +1

    Epic! thank you

  • @arthuraquino8356
    @arthuraquino8356 11 months ago +3

    Hi Brandon! Do you intend to create a video, like others you've already made, building a CrewAI application, deploying it on Vercel or another platform, and generating an API or something like that to be used in production? That would help a lot.

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Hey! That is the end goal! There is just so much foundational information that I want to cover first before making a full stack video like that.
      From what I've seen on RUclips there isn't a course that covers how to use CrewAI in a production environment. Have you seen one yet?

    • @arthuraquino8356
      @arthuraquino8356 11 months ago

      @@bhancock_ai Nowhere, and I look all the time and can't find anything about using it in production. It would be amazing if you did it; you would be a pioneer here on RUclips in this regard.

  • @musumo1908
    @musumo1908 11 months ago +1

    Awesome..would love to see this running with openrouter…

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Is OpenRouter just like Ollama? Does it provide some additional features or maybe it's easier to use?

  • @Jack-RichardViri
    @Jack-RichardViri 7 months ago

    Can you do this with Google Colab for running the LLM and code? I appreciate ur videos man!

  • @renierdelacruz4652
    @renierdelacruz4652 11 months ago

    What a great video, thanks very much for sharing.

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Thanks Renier! If you liked this video, I think you'll love the new CrewAI tutorial video I just released too!
      ruclips.net/video/OumQe3zotGU/видео.html

    • @renierdelacruz4652
      @renierdelacruz4652 11 months ago

      @@bhancock_ai Of course, I will watch the video for sure

  • @SolutreanHypothesis
    @SolutreanHypothesis 11 months ago +1

    Awesome, thank you!

    • @bhancock_ai
      @bhancock_ai  11 months ago

      You bet! I have more CrewAI content coming out soon that I'm sure you'll love as well!

  • @TheAmazonExplorer731
    @TheAmazonExplorer731 8 months ago

    Thanks for the great video. Please make a video on a CrewAI multi-agent setup for computer vision.

  • @IdPreferNot1
    @IdPreferNot1 11 months ago

    Thx for this one. Think I'll have to try out Poetry more... seems better than venv or plain conda.

  • @kushagrajain2407
    @kushagrajain2407 8 months ago +1

    Hey Brandon,
    Excellent tutorial! Although I have one doubt: why do we need to create the script file and Modelfile? Can't we directly set the model to "ollama/mistral" etc.?
    Awaiting your reply, thank you for working hard on this tutorial :)

    • @bhancock_ai
      @bhancock_ai  8 months ago

      Hey! You could use the model directly, but your results wouldn't turn out as well. It's important to include the stop words that we set up in the Modelfile.
      Hope that clears things up!
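      For readers wondering what such a Modelfile looks like, a sketch along these lines (the base model, temperature, and stop values here are illustrative assumptions, not necessarily the video's exact file; stop sequences like "Result" and "Observation" are the ReAct-style tokens agent frameworks commonly halt on):

```
# Modelfile (illustrative sketch)
FROM llama2

# Lower temperature for more predictable agent output (example value)
PARAMETER temperature 0.1

# Stop sequences so generation halts where the agent loop expects
PARAMETER stop Result
PARAMETER stop Observation
```

      You would then build it with `ollama create crewai-llama2 -f ./Llama2ModelFile` (model and file names assumed) and reference that model name from CrewAI.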

  • @yosephsamuel726
    @yosephsamuel726 10 months ago +1

    thank you so much!!!

  • @youssefmahboub7981
    @youssefmahboub7981 11 months ago +1

    Thanks, that's amazing!
    I wonder if you could make a video showing how to use CrewAI as part of an API: you trigger the API to get CrewAI to do its magic, preferably using Flask to code the API.
    Thank you Brandon

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Hey Youssef! You've read my mind! I plan on doing a video on this in the upcoming weeks. There are a few more foundational videos I want to do before making a tutorial like this.

  • @lisboua06
    @lisboua06 8 months ago

    Can this approach be deployed to run online instead of on my computer?

  • @tarikrazine
    @tarikrazine 11 months ago

    Thanks Brandon. Are you going to build something more advanced in the future, like using Next.js and FastAPI with CrewAI?

  • @RakeshMakhija
    @RakeshMakhija 10 months ago

    Hi, my machine does not have much RAM. Can we connect to a Mistral model via API, like we do with OpenAI? If yes, can you share an example? Many thanks. 🙏

  • @changmatta
    @changmatta 10 months ago

    Your tutorials are excellent. How can I install Ollama on a Linux HPC cluster? I found pip install ollama works fine for me. I created a virtual environment and installed all packages. My question: how can I set up Ollama for local models on an HPC cluster after installing it through pip? Another question: can we use vLLM?

  • @נתנאלרותם-פ3ה
    @נתנאלרותם-פ3ה 11 months ago +1

    This is great! Can it work with Pydantic and Instructor also, for function calling?

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Hey! I have used Pydantic when defining tools for the Crew to use. Are you asking about something else?
      Also, I briefly talk about how to use Pydantic with CrewAI in the first CrewAI crash course that I did a little bit ago. I'm not sure where I mentioned it in the video but here's the link:
      ruclips.net/video/sPzc6hMg7So/видео.htmlfeature=shared

    • @נתנאלרותם-פ3ה
      @נתנאלרותם-פ3ה 11 months ago

      @@bhancock_ai I meant using it to define the response from the LLM with Instructor. Sometimes you enter an infinite loop when the response is not quite formatted correctly. It happens more with open-source LLMs.

  • @Frankvegastudio
    @Frankvegastudio 5 months ago +1

    Hey, How can we delete unnecessary LLMs duplicated during the install? Thanks

    • @bhancock_ai
      @bhancock_ai  5 months ago

      You’d have to check the ollama docs but I think it’s ollama remove

  • @by_westy
    @by_westy 9 months ago

    Many thanks for the amazing video!! I have a question though: why don't we access these open-source LLMs, such as Mistral and so on, using the Hugging Face API instead of downloading the model locally? Is there a specific reason?

  • @crazybigyo
    @crazybigyo 10 months ago

    You should show more examples of it actually working

  • @theccieguy
    @theccieguy 11 months ago

    New subscriber. Good job

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 10 months ago

    For the advanced example, did you need the .env file? I don't think it was mentioned.

  • @hiddenkirby
    @hiddenkirby 10 months ago

    Great video. One issue I'm having with CrewAI against Local LLMs is having them properly call functions. Curious if you've run into this, and have any tips/videos. I was able to use LM Studio to serve up Mistral, and still have it wrapped in an OpenAI API ... but even with the OpenAI interface I'm still having sub-par results. (I'd prefer to use Ollama so I can spin it out to a cloud docker container with bigger resources.)

  • @AIdevel
    @AIdevel 10 months ago

    Instead of building agents, can I use a LangChain agent like the CSV agent, ReAct, and so on? If yes, how do I get it done? Thanks in advance.

  • @KJ-yq5gm
    @KJ-yq5gm 9 months ago +1

    Can you do an updated one for Llama 3? I tried updating the script to llama3 but it didn't work; when I switch to llama3 the IDE doesn't recognize it. Any help would be awesome.

  • @maximoguerrero
    @maximoguerrero 11 months ago +1

    Is it possible to do a tutorial hooking it up to Streamlit or Chainlit?

    • @bhancock_ai
      @bhancock_ai  11 months ago

      I haven't gotten to use chainlit before. How is it different than the agents we are building using CrewAI?

    • @maximoguerrero
      @maximoguerrero 11 months ago

      @@bhancock_ai It's a chat web interface on top. Chainlit works easily on top of LangChain. But I haven't seen anyone do a tutorial on a web interface, only the terminal.

  • @TheDarrenS
    @TheDarrenS 11 months ago +1

    Firstly, great job on the video. I am just learning how to program, and with autism it gets a bit brain-addling.
    ERROR: "ModuleNotFoundError: No module named 'crewai'"
    No matter what I do to install it in the Visual Studio terminal (pip install crewai etc.), it fails. Any thoughts?

    • @salespusher
      @salespusher 11 months ago

      I'm facing the same issue

    • @TheDarrenS
      @TheDarrenS 11 months ago

      @@salespusher Did you fix the error?
      If you are running VS Code, I found that if I click the play button it does not work, but if I click the down arrow beside it and run the Python file, it works.

  • @andrewdarius4270
    @andrewdarius4270 11 months ago

    With crypto on the rise, can you show a crew example writing a newsletter about crypto, and/or a crypto coin research crew that searches across the web and Twitter?

    • @bhancock_ai
      @bhancock_ai  11 months ago

      If you want to see a Crew build out a newsletter, you'll definitely want to check out this video here:
      ruclips.net/video/Jl6BuoXcZPE/видео.html
      In that tutorial, I show you how to build an AI newsletter. All you need to do is change the topic in the code and you instantly have a crypto newsletter.
      Hope that helps!

  • @j_stach
    @j_stach 10 months ago

    Is it possible to use multiple Ollama servers, or am I limited to one ENV variable?
    It would be neat to diversify models within a crew, for example "command-r" for the manager agent and "codellama" for the codegen agent.

  • @whizzoknows5991
    @whizzoknows5991 11 months ago +1

    I'm in need of an adult! I am stuck at running the Llama 2 model file .sh; it says I don't have an extension for debugging 'shell script'. Which extension do I need? Any advice?

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Lol! You really did make me laugh with that, "I'm in need of an adult" 😂
      I actually just launched a free Skool community and it's way easier to provide support in there.
      Inside of Skool, you'll see I created a post about this video. Feel free to add a comment with the issue you're having and maybe a screenshot, and we'll all be able to better help you out!
      www.skool.com/ai-developer-accelerator/about

  • @JoeBurnett
    @JoeBurnett 10 months ago

    Good stuff! Thanks!

  • @cj-ip3zh
    @cj-ip3zh 8 months ago

    Why do we need a custom Modelfile for CrewAI?

  • @judge_li9947
    @judge_li9947 8 months ago

    So I see in Visual Studio you use Python, but for me only PowerShell is selectable, even after installing Python and adding it to the environment path. How can I add it so I can actually follow along with what you are doing?

  • @Wallstbillie
    @Wallstbillie 9 months ago

    Hey Brandon, thanks for the video. It's amazing to see what you're doing here with the channel; everything is so well put together and extremely helpful. Thank you!
    I have a question while following along; there is a strange error:
    "Traceback (most recent call last):
    File "/Users/billie/Documents/GitHub/crew-ai-local-llm-main/crewai-advanced-example/main.py", line 1, in
    from crewai import Crew
    ModuleNotFoundError: No module named 'crewai'"
    Basically it's saying it can't find 'crewai'. I have installed Poetry and done everything step by step. Couldn't wrap my head around it; am I missing a step, Brandon? Thanks again for the information.

  • @Enkumnu
    @Enkumnu 7 months ago

    Thank you very much!

  • @dankoyy42
    @dankoyy42 11 months ago +1

    thank you!

  • @mattlaw4633
    @mattlaw4633 7 months ago

    Is there a requirements.txt that you can share? I'm getting package incompatibility errors.

    • @mattlaw4633
      @mattlaw4633 7 months ago

      Never mind, I see that's what Poetry does for you.

  • @victorsilva-jb2sf
    @victorsilva-jb2sf 11 months ago

    First of all, I'd like to thank you for the tutorial! I followed it and tried to run with SerperDevTool, and it works with neither Llama 2 nor Mistral. Do you have any clue about that?

  • @muraliytm3316
    @muraliytm3316 9 months ago

    Your videos are very informative, sir, but I am running a Windows laptop and it is difficult to follow. Even after downloading the files from your GitHub, I am unable to run them properly; I am facing so many errors.

  • @adinathdesai6880
    @adinathdesai6880 8 months ago +1

    How to follow the first 10 minutes of the video in the Windows operating system?

    • @bhancock_ai
      @bhancock_ai  8 months ago

      I would check out the Ollama windows instructions on their site! I don’t have a windows machine but they looked pretty simple!
      The only gotcha is I think it’s still in beta

  • @travelingbutterfly4981
    @travelingbutterfly4981 9 months ago

    Can we run this on our custom data?

  • @dbwstein
    @dbwstein 10 months ago

    I'm not seeing the Modelfile configuration requirement in the CrewAI docs. Is making this adjustment still necessary?

  • @videosmaster6174
    @videosmaster6174 11 months ago

    Good video. Is it possible to create two agents using the same model, so that I only have to download the model once but they are shaped differently by the parameters of the Modelfile? If I make two Modelfiles with the same base model, does it download the same model twice, or does it download once with both agents making requests to the same model?

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Hey! I'm not 100% sure what you're asking. If you want to get more support with CrewAI and using local LLMs, I created a skool community for you guys to ask your questions and get more support:
      www.skool.com/ai-developer-accelerator/about

  • @jimmygohhanjie9011
    @jimmygohhanjie9011 10 months ago

    Thanks for your CrewAI tutorial; it has helped me greatly.
    Do you have any successful examples of using local Ollama openhermes with "from crewai_tools import CSVSearchTool"? My crew is able to run; however, during insertion of embeddings into ChromaDB, it encountered a "404 missing page" error.

  • @RyanJohnson
    @RyanJohnson 10 months ago

    It would be great to get a windows version of this.

  • @avelinecash
    @avelinecash 10 months ago

    Cool vid. Does this work with Windows?

  • @mikedelaconcepcion5910
    @mikedelaconcepcion5910 10 months ago +1

    Having a hard time following the code. Using Win11 here; I tried WSL already with Ubuntu installed. Just stopped at the Modelfile...

  • @M60studio
    @M60studio 10 months ago

    It fails to recognize/understand the tools and fails to use them correctly. What can I do?

  • @yashrawat1232
    @yashrawat1232 11 months ago +1

    Thanks man

  • @lucaknipfer4373
    @lucaknipfer4373 11 months ago +2

    Does this also work on a Windows machine?

    • @bhancock_ai
      @bhancock_ai  11 months ago +1

      Yes! If you go to Ollama's website, they have instructions on how to set up Ollama on your Windows machine.

  • @JNET_Reloaded
    @JNET_Reloaded 10 months ago +1

    404 page not found on the /v1 endpoint; how do I resolve that?

  • @JNET_Reloaded
    @JNET_Reloaded 10 months ago

    How do I enable auto action so I don't have to press Enter every minute?

  • @Srini-bz1lk
    @Srini-bz1lk 7 months ago +1

    Please update for llama3

  • @SaiBharadwajKakunuri
    @SaiBharadwajKakunuri 14 days ago

    Do these run with vLLM?

  • @rosemaryng7994
    @rosemaryng7994 10 months ago

    Halfway through the video I realised some parts of this video don't work for Windows. For example, the chmod command is for Linux, right?

  • @Cheng32290
    @Cheng32290 10 months ago

    Hi! I can make it work with OpenAI, but once I try to run it with Ollama, it starts to show me errors like:
    Action '' don't exist, these are the only available......
    Do you have any idea how to fix it?

  • @voxyloids8723
    @voxyloids8723 9 months ago

    How do I run it for Tavern?

  • @judge_li9947
    @judge_li9947 8 months ago

    On second note, I believe the problem is the chmod command. I have zero dev experience; can someone help please?

  • @travelingbutterfly4981
    @travelingbutterfly4981 9 months ago

    Can we deploy this as a chatbot? If yes, how?

  • @3barazi1
    @3barazi1 7 months ago

    How do I run this on a MacBook Air M2?

  • @R3alR00tux
    @R3alR00tux 1 month ago

    Does anyone have the commands to execute for a Windows setup? I'm getting hung up at every command.

  • @junktrash6725
    @junktrash6725 11 months ago

    Hey, this is unrelated to this video; it's a question about CrewAI. Do you have any knowledge of this error:
    "Failed to convert text into a pydantic model due to the following error: Unexpected message with type at the position 1."
    It's been a big roadblock for me trying to run my crew.

    • @bhancock_ai
      @bhancock_ai  11 months ago

      Based on what I've seen from similar errors when working with CrewAI, I usually get that error when I'm adding the @tool decorator to a python function that I want my crew to call.
      Is that where the issue is happening for you or is this happening somewhere else?

    • @junktrash6725
      @junktrash6725 11 months ago

      @@bhancock_ai So it's happening elsewhere. I talked to MrSentinel over on his channel and he mentioned that an older version of crewai doesn't produce that error. I went ahead and tested that myself and it did get rid of the error. I saw there was an update for crewai 4 days ago and another today; I will test again with the newest version. Thanks for the help though!

  • @JNET_Reloaded
    @JNET_Reloaded 10 months ago

    I don't have the endpoint /api/generate or /chat. How do I get that? I'm using open-webui and that works...?

  • @mauriciogomes3315
    @mauriciogomes3315 3 months ago

    Hello Brandon. I tried to register on your page but I didn't receive an email like your website says I would. Maybe something isn't working. I would like to be part of your training. If you can send the code I would be happy.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 10 months ago

    why is it called ChatOpenAI() if we are just using the local model and not OpenAI?

  • @Andrewlongron-k9i
    @Andrewlongron-k9i 7 days ago

    You need a simple version for those of us who are not experts in coding, because I can't get this to work.

  • @JuergenChia
    @JuergenChia 10 months ago

    My laptop "died" suddenly running this, maybe due to the heat. The laptop specs: Ryzen 7840, RTX 4050, 32 GB, Windows 11, Anaconda env. I noticed that the task had trouble getting some financial data. Any suggestion is appreciated. Thanks!

  • @yashkumarshukla4979
    @yashkumarshukla4979 10 months ago

    I'm trying to replace OpenAI with Llama 2.
    I'm doing a project on DanswerAI; I'm using Docker and cloned the Danswer repository.
    Can you help me out?

  • @jimlynch9390
    @jimlynch9390 10 months ago

    I'd say it didn't work; it gave me similar output but never finished.

  • @changmatta
    @changmatta 10 months ago

    How can we use the Python Ollama library for this?

  • @BhaumikSolanki-vu6wj
    @BhaumikSolanki-vu6wj 10 months ago

    How do I run chmod in Windows?

  • @DarkWolfislive
    @DarkWolfislive 10 months ago +1

    I'm not receiving the email with the source code

    • @bhancock_ai
      @bhancock_ai  10 months ago

      Did the search code email come through? If not, let me know and I'll make sure you get it!

  • @luizcamillo9933
    @luizcamillo9933 8 months ago

    Is there a Windows version?

    • @luizcamillo9933
      @luizcamillo9933 8 months ago

      This worked in Windows11:
      :: File name: create-mistral-model-file.bat
      @echo off
      :: Variables
      set model_name=mistral
      set custom_model_name=crewai-mistral
      :: Get the base model
      ollama pull %model_name%
      :: Create the model file
      ollama create %custom_model_name% -f .\MistralModelfile

  • @thmo_
    @thmo_ 11 months ago

    Thanks, Platano.

  • @RichardBond5566
    @RichardBond5566 11 months ago +1

    Is it free of censorship?

    • @bhancock_ai
      @bhancock_ai  11 months ago

      It's from Meta. I doubt it.

  • @bradydyson65
    @bradydyson65 10 months ago

    lmao "I'm gonna teach you how to run Crew AI using local LLMs so you don't rack up a huge Open AI bill like I just did."
    Yes, this is exactly why I'm here. 😂

  • @bakeee
    @bakeee 9 months ago

    The code does not work on Windows.

  • @jimlynch9390
    @jimlynch9390 10 months ago

    I tried to edit my comment but it gives me an error.