Getting Started with Groq API | Making Near Real Time Chatting with LLMs Possible

  • Published: 16 Oct 2024
  • Science

Comments • 45

  • @martg0 · 7 months ago +4

    Thanks for the video! I will start testing this API with a POC I am working on now, to learn.

  • @thierry-le-frippon · 7 months ago +10

    They should sell their LPUs instead and compete with Nvidia. They would surely get lots of backing and investment. Otherwise they will probably be copied and fade away quickly.

  • @jonoburcham4059 · 7 months ago +6

    Great video! Can you make a voice chatbot using Groq in one of your next videos, please? I would also love to see whether you do this in Streamlit, or if it's too slow and you use something else. Thanks so much for your videos.

    • @engineerprompt · 7 months ago +1

      Planning on making that. For the voice chatbot, I might just do a CLI though.

  • @KOTAGIRISIVAKUMAR · 7 months ago +1

    Why can't you use the conversational retrieval chain instead of the conversation chain? It can handle memory by default, so there's no need to maintain it externally.
    @prompt Engineering

  • @osamaa.h.altameemi5592 · 7 months ago

    This is next level. OpenAI has some serious competition.

  • @MikiyasZelalem-h2b · 7 months ago +3

    Please create a step-by-step video guide on using the Groq API with Streamlit.

  • @DestanBegu · 7 months ago +1

    Thanks for your content! I'm using Streamlit as well and want to pass content as the system role, for example "answer me in short sentences, in Italian", so it applies to each prompt I send. Where can I do this in the code? I used the Streamlit chatbot repo.
    Thanks in advance.
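    [Editor's note: one common way to do what this comment asks is to prepend a single system message before the chat history on every request. The sketch below is an assumption, not code from the video; `build_messages` is a hypothetical helper, and the actual client call is shown only in a comment.]

    ```python
    # Hypothetical helper: keep one system instruction in front of the
    # running chat history, so it applies to every user prompt.
    def build_messages(history, user_prompt,
                       system_prompt="Answer in short sentences, in Italian."):
        """Prepend the system role once, then replay the chat history."""
        return ([{"role": "system", "content": system_prompt}]
                + history
                + [{"role": "user", "content": user_prompt}])

    messages = build_messages([], "What is Groq?")
    # With the Groq Python client (an OpenAI-compatible API), the call
    # would then look like:
    # client.chat.completions.create(model="...", messages=messages)
    ```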

  • @dhruvpatel2554 · 7 months ago +4

    Awesome stuff !!!!

  • @mickelodiansurname9578 · 7 months ago

    Here's the question: can Groq cards also work on inference for art, audio, and voice models, or is it LLM-inference specific? It is, like, well, superfast... the only worry is literally the latency from you to the endpoint... so if it's, say, a streaming interruptible feed you are giving the model, then the use cases for TTS and speech applications just went through the damn roof!

    • @engineerprompt · 7 months ago

      I am not sure, but I was listening to Chamath (who is an investor in Groq) and he was talking about the initial use cases of the hardware. It seems they were focused on vision, so it might have the ability.

    • @engineerprompt · 7 months ago +2

      I am trying to put together an example of end-to-end speech conversation; let's see how that goes.

  • @shaheerabdullah6738 · 5 months ago

    Very Helpful.

  • @vishnuprabhaviswanathan546 · 7 months ago

    How do you control the output of the LLM for a single input?

  • @ramimithalouni6592 · 7 months ago +1

    What is the time to receive the first chunk when streaming?

    • @easy-dashboard · 7 months ago

      It depends on the number of input tokens. With a one-line instruction it's below 1 second. If you include the context of a RAG system, it can take up to 3 seconds to the first token (30k tokens of context).
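    [Editor's note: time-to-first-chunk is easy to measure yourself. Below is a hedged sketch; `fake_stream` is a stand-in generator, since with the real SDK the stream would come from a network call.]

    ```python
    import time

    def time_to_first_chunk(stream):
        """Return the first streamed chunk and the latency until it arrived."""
        start = time.perf_counter()
        first = next(stream)
        return first, time.perf_counter() - start

    # Stand-in generator simulating a streamed response; with the Groq SDK
    # the stream would come from
    # client.chat.completions.create(..., stream=True).
    def fake_stream():
        yield "Hello"
        yield " world"

    chunk, ttft = time_to_first_chunk(fake_stream())
    ```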

  • @Francotujk · 6 months ago

    What are the rate limits of the free API? Is it necessary to provide a credit card?

    • @engineerprompt · 6 months ago +2

      It's free at the moment, and there is a rate limit as well. It seems to keep changing; last time I checked, it was around 20 messages per minute.
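    [Editor's note: if you hit a per-minute limit like the one mentioned above, a client-side throttle avoids 429 errors. This is a generic sketch, not part of the Groq SDK; the 20-per-60-seconds defaults are the figure quoted in the reply and may have changed.]

    ```python
    import time
    from collections import deque

    def make_rate_limiter(max_calls=20, window=60.0):
        """Client-side throttle: block until a slot inside the window is free."""
        calls = deque()
        def wait():
            now = time.monotonic()
            # Drop timestamps that have aged out of the window.
            while calls and now - calls[0] > window:
                calls.popleft()
            if len(calls) >= max_calls:
                # Sleep until the oldest call leaves the window.
                time.sleep(window - (now - calls[0]))
                calls.popleft()
            calls.append(time.monotonic())
        return wait
    ```

    Call `wait()` before each API request; the closure keeps its own call history.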

  • @benben2846 · 7 months ago +1

    You're strong, man ^^👍

  • @jesusleguiza77 · 6 months ago

    Hi, does this API have function calling? Regards.

  • @hmsfaceface8925 · 7 months ago +4

    How can the Groq FPGA run Mixtral 8x7B with just 250 GB of VRAM?

    • @coyoteq · 7 months ago

      Because of the Groq TPU...

  • @ConnectorIQ · 2 months ago

    Almost a baby version of a quantum computer, if you can actually perfect a model based on the speed of responses to your questions using the Groq GPU...

  • @jmay3230 · 7 months ago

    If the temperature could be adjusted to a negative value, what would the impact on generation be? (Consider it hypothetical if the case doesn't exist.)

    • @engineerprompt · 7 months ago

      It will be the same as setting it to zero :) Basically, if you set it to zero, it will pick the next most probable token. If you set a higher value, it can sample among the most probable tokens.
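    [Editor's note: a minimal sketch of the sampling behavior described in this reply, not Groq's actual implementation. The toy logits are made up; temperature 0 is treated as greedy argmax, higher values divide the logits before softmax so less probable tokens can also be picked.]

    ```python
    import math, random

    def sample_token(logits, temperature):
        """Temperature 0 -> greedy argmax; higher values flatten the
        distribution so less probable tokens can also be sampled."""
        if temperature == 0:
            return max(logits, key=logits.get)
        scaled = {t: v / temperature for t, v in logits.items()}
        m = max(scaled.values())  # subtract max for numerical stability
        z = sum(math.exp(v - m) for v in scaled.values())
        r, acc = random.random(), 0.0
        for t, v in scaled.items():
            acc += math.exp(v - m) / z
            if r < acc:
                return t
        return t  # guard against float rounding

    logits = {"the": 5.0, "a": 3.0, "dog": 1.0}
    ```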

  • @bobsmithy3103 · 7 months ago

    Can it run other models?

  • @prestonmccauley43 · 7 months ago

    I tried a few things with this and it is incredibly fast.

  • @ranaayushmansingh2368 · 4 months ago

    Can we fine-tune this and use it?

    • @engineerprompt · 4 months ago

      You can't fine-tune via their API yet.

  • @CharlesDonboscoA · 7 months ago

    Hi, is it free or paid?

  • @siriyakcr · 7 months ago +1

    Wow

  • @ZombieJig · 7 months ago +3

    Fuck all these cloud-only AI services, release the cards!

    • @thierry-le-frippon · 7 months ago +1

      Yes, otherwise they will fade away quickly. Their window of opportunity is small. Money wants to eat into the Nvidia cake now, not tomorrow.

  • @conciousaizielia · 7 months ago

    Groq is not an LLM; it can run an LLM.

  • @TheJscriptor09 · 7 months ago

    YALLM ... it is almost becoming daily news ... Yet Another LLM.

  • @savire.ergheiz · 7 months ago +1

    Fast but useless. These OSS models are still way behind GPT-4.

  • @geo4design · 7 months ago

    This is an AD

  • @sausage4mash · 7 months ago +7

    Did someone say free?