100% LOCAL AI Agents with CrewAI and Ollama

  • Published: 9 Jan 2025

Comments • 27

  • @jafarekrami 2 months ago +7

    from crewai import Task, Crew, Process, Agent, LLM
    # Initialize the local Llama 3.1 8B model served by Ollama
    llm = LLM(model="ollama/llama3.1:8b")

    • @NelitoCalixto 2 months ago

      this worked for me: llm = LLM(model="ollama/phi3.5:3.8b", base_url="localhost:11434")

    • @matthewchang-kit2885 27 days ago

      this worked! Thanks for this man! I've been stuck getting some stupid errors with the llm.py script when working off the video and other videos. Everywhere seems to be outdated!

    • @DailyBread777Devotionals 26 days ago

      Thank you for this; it still took me some time to figure out. The main issue was getting the correct name for my model instance. If anybody has that issue, run the following command: ollama list
      This will give you a list of all available models; copy the name as is. In my case it was llama3.2-vision:latest, which was a far cry from what I was using in my C# app: llama3.2:8b
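
      A minimal Python alternative to the ollama list command above, assuming Ollama is serving on its default port (11434); it queries the server's /api/tags endpoint, which returns the same model tags:

      import requests

      # Ask the local Ollama server which models are installed.
      resp = requests.get("http://localhost:11434/api/tags")
      resp.raise_for_status()

      # Print each tag exactly as CrewAI/LiteLLM expects it,
      # e.g. "llama3.2-vision:latest".
      for model in resp.json().get("models", []):
          print(model["name"])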

    • @todayispromised 12 days ago

      You are a godsend.

  • @apcwss 3 months ago +1

    Thank you so much. I was having an issue connecting CrewAI with Llama, but your template helped me.

  • @jarrod752 5 months ago +5

    _Paid Nothing..._
    Electricity: _Am I a joke to you?!_

  • @DihelsonMendonca 5 months ago +5

    Hello, please make a review of Open WebUI. It's getting a lot of hype right now. Open WebUI is a frontend for LLMs that lets users talk hands-free with excellent voices, using internal or external TTS APIs from Eleven Labs, Groq, etc. It has web search, RAG, and long-term memory; it's compatible with the OpenAI API; it can import GGUF models directly from Hugging Face and convert them for use with Ollama; it can fine-tune models for special needs; it can work with multimodal models, image and video; it can engage multiple models simultaneously, local and external at the same time; and it can accept new plugins, external tools, and functions... It's also open source. 🎉❤

  • @none-hr6zh 2 months ago

    Thank you so much. Why is there a need for an API if I am using local LLMs? Suppose I want to see the tokenization and detokenization, or any model-related information; how do I see that model file? I am using ollama pull to get a Llama model and then using it in CrewAI, but the response comes through an API. Please give a hint on how to see the local LLM's files.

    • @TylerReedAI 2 months ago +1

      Hmm, your response is still from OpenAI? So if you set your OpenAI API key to something like sk-1111, does it still work? You don't need an API key, but you do need a placeholder, so really any string.
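
      A minimal sketch of the placeholder-key workaround described above, assuming the same crewai LLM class from the snippet at the top of the thread; the key value is arbitrary and is never actually sent to OpenAI:

      import os
      from crewai import LLM

      # Some setups expect an OpenAI key to be present even when the
      # model is local; any non-empty string satisfies the check.
      os.environ["OPENAI_API_KEY"] = "sk-1111"  # placeholder only

      # All real inference happens against the local Ollama server.
      llm = LLM(model="ollama/llama3.1:8b", base_url="http://localhost:11434")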

    • @none-hr6zh 2 months ago

      @@TylerReedAI Thank you for the reply. My question is how to get at the details of local LLMs (how they tokenize, encode, and decode). If I have to change any method inside the model, I must have access to the model file. I am using Ollama (model="llama", base URL), and it uses a REST API to send data to the LLM and receive the response. I am not able to find how the message is preprocessed and encoded before being given to the LLM.

    • @none-hr6zh 2 months ago

      I have one doubt:
      from langchain.llms import Ollama
      llm = Ollama(model="llama3")
      The response goes through the REST API using the POST method. Since it is localhost, can we access the server-side code? I want to see the internal workings of how my input goes to the Llama model, and all the internal details of the model, in .py itself.
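
      Since Ollama is just a local HTTP server, one way to see exactly what goes over the wire is to bypass the framework and POST to the same endpoint the LangChain wrapper uses. A minimal sketch, assuming the default port and a pulled llama3 model; note that the tokenization itself happens inside Ollama's compiled model runner, not in any Python file:

      import requests

      # The same endpoint LangChain's Ollama wrapper calls under the hood.
      payload = {
          "model": "llama3",
          "prompt": "Why is the sky blue?",
          "stream": False,  # one JSON object instead of a token stream
      }
      resp = requests.post("http://localhost:11434/api/generate", json=payload)
      print(resp.json()["response"])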

  • @ZaneLing-t3m 2 months ago

    If I want to use Ollama on my server, not locally, how can I change my code? Just changing the URL to my server's Ollama URL doesn't seem to work.

    • @TylerReedAI 2 months ago

      So you have a separate server with Ollama? I guess if you have a server running, maybe create an API that you can call. With Ollama running on your own computer, yes, you would just change the base URL. It seems you have a different setup, so how would you normally connect to your own server?
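
      For the simple case, a minimal sketch of the base-URL change, with the hostname as a placeholder; for a remote machine, Ollama must also be configured to listen on a non-loopback address (e.g. via the OLLAMA_HOST environment variable) and the port must be reachable:

      from crewai import LLM

      # "my-server.example.com" is a hypothetical hostname;
      # 11434 is Ollama's default port.
      llm = LLM(
          model="ollama/llama3.1:8b",
          base_url="http://my-server.example.com:11434",
      )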

  • @DihelsonMendonca 5 months ago +1

    Now we can do it with GPT-4o mini. The prices are 90% cheaper! ❤

  • @NateGinn-u9m 5 months ago +2

    Now do this with Groq.
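
    Since CrewAI's LLM class routes through LiteLLM, pointing the same setup at Groq is plausibly just a different provider string plus an API key. A minimal sketch; the exact model name is an assumption to verify against Groq's current model list:

    import os
    from crewai import LLM

    # Groq is a hosted API, so unlike Ollama it needs a real key.
    os.environ["GROQ_API_KEY"] = "gsk_..."  # placeholder: your actual key

    # "groq/llama3-8b-8192" is an assumed LiteLLM provider/model string;
    # check Groq's docs for current model names.
    llm = LLM(model="groq/llama3-8b-8192")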

  • @mobilesales4696 5 months ago

    Can you write a script to which we can add an unlimited number of AIs that use an API system, with separate functions where we can store our offline LLMs (like all Ollama versions, or any AI system), and store those offline LLMs on GitHub so we can use them wherever we want by running a simple script? 😅😊

  • @ade7456 5 months ago

    Great! Can I create multiple agents? E.g. one to code, one to test, one to correct code, etc.?
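
    A minimal sketch of exactly that kind of code/test/review pipeline, reusing the local LLM setup from the top of the thread; the roles, goals, and task descriptions are illustrative:

    from crewai import Agent, Task, Crew, Process, LLM

    # Local model via Ollama, as in the snippet at the top of the thread.
    llm = LLM(model="ollama/llama3.1:8b")

    coder = Agent(role="Coder", goal="Write a Python function for the request",
                  backstory="A careful Python developer.", llm=llm)
    tester = Agent(role="Tester", goal="Write unit tests for the function",
                   backstory="A QA engineer who loves edge cases.", llm=llm)
    reviewer = Agent(role="Reviewer", goal="Fix any issues the tests reveal",
                     backstory="A senior engineer doing code review.", llm=llm)

    code_task = Task(description="Write a function that reverses a string.",
                     expected_output="A Python function.", agent=coder)
    test_task = Task(description="Write pytest tests for that function.",
                     expected_output="A pytest test module.", agent=tester)
    review_task = Task(description="Review the code and tests; fix any bugs.",
                       expected_output="Final corrected code and tests.",
                       agent=reviewer)

    # Process.sequential runs the tasks in order, passing context forward.
    crew = Crew(agents=[coder, tester, reviewer],
                tasks=[code_task, test_task, review_task],
                process=Process.sequential)
    print(crew.kickoff())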

  • @m.c.4458 3 months ago

    I only use Ollama now, and I don't use CrewAI but moa, and build the logic for it myself. I am done with frameworks... too many APIs and biased data.

  • @themax2go 4 months ago

    Would you still recommend this approach, CrewAI, over AutoGen Studio (ruclips.net/video/IjqAMWUI0r8/видео.html)?

  • @darleisonrodrigues3365 5 months ago +1

    🇧🇷🇧🇷🇧🇷👏👏👏

  • @JNET_Reloaded 4 months ago +1

    Nice