Complete AI Agent Tutorial with Ollama + AnythingLLM

  • Published: 13 Jan 2025

Comments • 24

  • @PetersonCharlesMONSTAH
    @PetersonCharlesMONSTAH 1 month ago +1

    I'm so happy I found this video, because Docker won't open on my Mac. Although I chose the correct model to install, it never worked, so thank you so much for the upload.

  • @parsival9603
    @parsival9603 11 days ago +3

    Wish I had an Nvidia 40-series card to be able to better play with this stuff. New goal unlocked!🔓

    • @sychrov81
      @sychrov81 1 day ago

      @@parsival9603 I'm running an 8B model on a 4070; so far it has been OK and fast.

  • @sychrov81
    @sychrov81 1 day ago

    Hello, nice video. I was just wondering: since the agent can do HTTP GET requests, it might be able to execute other things too. I'm looking for a more IT-admin-oriented executable agent, any tips? I was trying to create one yesterday, and at the end of the day I found out that the Ollama API doesn't remember individual chats; I have to send the whole conversation with every request. I wanted to create a multi-agent system to process helpdesk tasks... is there something "ready to go"?
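    The point raised here is correct: Ollama's `/api/chat` endpoint is stateless, so the client has to keep the conversation history and resend all of it on every call. A minimal sketch of that pattern, assuming the default local endpoint and using `llama3.1:8b` as an illustrative model tag (swap in whatever model you have pulled):

    ```python
    import json
    import urllib.request

    OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

    def build_chat_payload(model, history, user_message):
        """Append the new user turn and return the full request body.

        /api/chat is stateless: the model only sees what is in `messages`,
        so the entire conversation must be resent on every request.
        """
        history.append({"role": "user", "content": user_message})
        return {"model": model, "messages": history, "stream": False}

    def send_chat(payload):
        """POST the payload and return the decoded JSON reply (needs a running Ollama server)."""
        req = urllib.request.Request(
            OLLAMA_CHAT_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    # Build two turns without calling the server: the second payload carries
    # the whole history, which is what gives the model its "memory".
    history = []
    first = build_chat_payload("llama3.1:8b", history, "Summarise this helpdesk ticket.")
    history.append({"role": "assistant", "content": "(model reply goes here)"})
    second = build_chat_payload("llama3.1:8b", history, "Now draft a response to the user.")
    print(len(second["messages"]))  # 3: user, assistant, user
    ```

    After each real reply, append the returned `message` object to `history` before the next call; that resent list is the only conversational state the server ever sees.
    
    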

  • @NickyDiesel
    @NickyDiesel 18 days ago

    Good shit Kenny, haven’t seen ya since the independence village days 💪💪

  • @zandanshah
    @zandanshah 13 days ago

    Thanks, very well done.

  • @PetersonCharlesMONSTAH
    @PetersonCharlesMONSTAH 1 month ago

    How do you clear the chat in AnythingLLM?

  • @CynicalMournings
    @CynicalMournings 3 days ago

    Hoping for a lotto windfall so I can purchase 2 Nvidia Digits

  • @Six5Rider
    @Six5Rider 1 month ago

    🔥

  • @Charles-l2z1j
    @Charles-l2z1j 1 month ago +1

    Thanks for sharing such valuable information! I have a quick question: My OKX wallet holds some USDT, and I have the seed phrase. (alarm fetch churn bridge exercise tape speak race clerk couch crater letter). How can I transfer them to Binance?

  • @investmentanalyst779
    @investmentanalyst779 29 days ago

    This was just a giant Nvidia ad. Didn't even showcase why the cards are better than my M2 Max MacBook with highly efficient chips that can store an entire LLM in memory. Who is this guy?

    • @MrRubyGray
      @MrRubyGray 19 days ago

      1. How many CUDA cores and Tensor cores does your "highly efficient MacBook" have? 2. Run Llama 3.2-Vision 11B on your M2...

    • @investmentanalyst779
      @investmentanalyst779 19 days ago

      @@MrRubyGray I run Llama 3.2-Vision all the time in LM Studio LMFAOOOOOOOOOOOO that's the power of the M2 Max, my boy. In fact, my M4 Mac Mini matches the performance o_0 magical.

    • @simont.n.4229
      @simont.n.4229 17 days ago

      @@MrRubyGray Hey man, MacBooks might not pack the punch of an RTX 4090, but I've gotta say, the unified memory is a game changer for running local LLMs. I'm also on an M2 Max with 32GB RAM and currently running the QwQ 32B 4-bit model. Getting around 15 tokens/sec, not too shabby for a laptop, right?

  • @rahuldinesh2840
    @rahuldinesh2840 1 month ago +2

    Nvidia is advertising through influencers?

    • @HoldYourSeahorses
      @HoldYourSeahorses 1 month ago +2

      He's telling the truth. Other GPUs don't do well with AI.

    • @zandanshah
      @zandanshah 13 days ago

      @@HoldYourSeahorses I am using a Radeon RX 7900 XTX 24 GB and it's super fast. Best part: I did not have to sell my kidney.

  • @venuev
    @venuev 1 month ago +4

    It is not free. What was your initial hardware/software cost, and what is your weekly electricity cost?

  • @marclrx2495
    @marclrx2495 11 days ago

    I get that NVIDIA GPUs are probably the best for this kind of application, but comparing your laptop to a desktop and saying there is a night-and-day difference... OF COURSE THERE IS, but the difference IS NOT because of NVIDIA... How... there must be other talking points NVIDIA gives you (sorry, but this made me so mad).