Groq Function Calling with Llama 3: How to Integrate a Custom API into an AI App?

  • Published: 28 May 2024
  • 🚀 Join me in this comprehensive tutorial where we explore the incredible capabilities of Llama 3 for creating and integrating AI applications with APIs. We'll start from scratch, setting up our environment, and proceed through building our own API and integrating it with a powerful AI app. By the end of this video, you'll learn how to create a seamless user interface that interacts dynamically with both AI and API to deliver real-time responses.
    🔍 What You Will Learn:
    Setting up a Python 3.11 virtual environment.
    Creating an API using Flask for handling data requests.
    Integrating an AI application with the API using Llama 3.
    Designing a user-friendly interface for real-time AI interactions.
    🛠️ Setup Steps:
    Initialise and activate the virtual environment.
    Install necessary packages like Flask, Gradio, Requests.
    Code along as we build the API and link it with our AI application.
    Launch the application and test it through a user interface.
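The Flask step above can be sketched as a minimal service. This is an illustrative assumption, not the exact code from the video: the `/product/<pid>` endpoint and the sample data are made up for the example.

```python
# Minimal Flask API sketch. The endpoint name and the data are
# illustrative assumptions, not the exact code from the video.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical structured data for the AI app to query.
PRODUCTS = {"101": {"name": "Laptop", "price": 999.0}}

@app.route("/product/<pid>")
def get_product(pid):
    # Return the matching record as JSON, or a 404 if the id is unknown.
    item = PRODUCTS.get(pid)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)
```

Launched with `flask run` (or `app.run()`), the AI application can then fetch data with `requests.get("http://127.0.0.1:5000/product/101")` from inside a tool function.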
    📺 Why Watch This Video?
    Understand the synergy between AI and APIs.
    Gain practical skills in setting up and integrating advanced software solutions.
    Witness real-time application functionality through a simple user interface.
    ✨ Timestamps:
    0:00 - Introduction to AI & API Integration
    0:47 - Setting up the Development Environment
    1:04 - Building the API with Flask
    2:31 - Integrating the AI Application
    3:50 - Creating the User Interface
    7:58 - Running the Complete System
    👍 Like, Share, and Subscribe for more videos on Artificial Intelligence tutorials. Hit the 🔔 to stay updated!
    🔗 Resources:
    Sponsor a Video: mer.vin/contact/
    Do a Demo of Your Product: mer.vin/contact/
    Patreon: / mervinpraison
    Ko-fi: ko-fi.com/mervinpraison
    Discord: / discord
    Twitter / X : / mervinpraison
    Code: mer.vin/2024/04/groq-tools-ll...
    #Groq #ToolCall #FunctionCalling #GroqFunctionCalling #GroqToolCall #GroqApiIntegration #GroqLlama3 #GroqLlama #LlamaGroq #Llama3 #Llama3Groq #Llama38bGroq #LlamaFunctionCalling #LlamaToolCall #Llama3ToolCalls #GroqToolCalls #GroqToolCalling #GroqFunction #GroqTool #GroqTools #Tools #Tool #Calling #FunctionCall #ToolCalls

Comments • 49

  • @d.d.z.
    @d.d.z. 1 month ago +1

    Short and concise. Thank you

  • @ryzikx
    @ryzikx 1 month ago +4

    With Groq's speed this could be extremely powerful if implemented properly. Thanks for the video!

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 month ago +9

    Thank you for sharing this!

  • @nexuslux
    @nexuslux 1 month ago +2

    Very cool! Need to see how to implement document embedding with this setup you shared.

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 month ago +3

    Gorgeous! Exactly what I was in need of! Thank you so much, Mervin! I wondered what the way is to ask Llama 3, and your video appeared. 😎 - And yes, AMAZING! I’ll use it with LM Studio instead of Groq.

  • @BradleyKieser
    @BradleyKieser 1 month ago +3

    Absolutely brilliant! Thank you

  • @ZRowton
    @ZRowton 1 month ago

    Very nice tutorial. Well done and thank you

  • @zamanganji1262
    @zamanganji1262 1 month ago

    Hi, Mervin. Thank you for your excellent presentation and tutorial. Could you please perform the procedures in Docker Compose?

  • @samfisher92sc
    @samfisher92sc 1 month ago

    Wow. This is what I was looking for. Thank you so much

  • @mernik5599
    @mernik5599 1 month ago +2

    Great video! Just wondering if similar function calling is possible through the Ollama web UI? I'm currently using Llama 3 with the web UI, serving it through ngrok to use on my phone, and I'm already loving it for most of my use cases, but I really want to implement function calling for particular use cases. Could you please let me know if something like that would be possible, or how to enable internet access for all queries, so that if I ask it for weather updates or today's news highlights it can do so?

  • @drisssa2603
    @drisssa2603 1 month ago

    Thanks for sharing Mervin! I need your help to implement such a tool in WordPress pages.

  • @lalamax3d
    @lalamax3d 1 month ago

    Hey Mervin, just wanna say, huge thanks. It's amazing.
    In future, if you manage to get some time, please create/share a video similar to this but change the last part from the web UI to a WhatsApp chat bot (with the intent of a support agent).
    I have a much longer list, but ideally if it can switch between multiple languages that would be super awesome.

  • @charagga
    @charagga 1 month ago

    Hi there, thanks for the video! I'm currently grappling with an issue in the Llama 3 model and could really use your expertise. We've set up a weather information function, but it's hit-or-miss when it comes to recognizing and activating in response to user prompts. While this function operates seamlessly with the gpt-4-turbo model, Llama 3 struggles, often providing inaccurate data or none at all. This is despite the function being clearly defined with all necessary parameters, suggesting that the model's trigger mechanism might be at fault. Could you shed some light on how to enhance Llama 3's consistency with function activation? Thanks!

  • @jarad4621
    @jarad4621 1 month ago

    Hi Mervin, the Groq rate limit is a big issue for my use case. Can I use this same method with another similarly hosted Llama 3 70B with CrewAI, like the OpenRouter API? Or can any API be used instead of Groq with your method, using web scraping?

  • @another_ahmed
    @another_ahmed 16 days ago

    If I give it a task that requires it to run multiple functions, what do you think is the most reliable way to achieve this? My current implementation is a loop: it calls one function each iteration, each time aware of which functions it has requested in previous iterations. When it is satisfied that no more functions need to be called, it calls an end function.
    So if I were to do that with this approach, I guess I would add the end function in the "tools" with the description "run this when no more functions need to be run". The user prompt would need to provide the functions called on previous iterations, and the system prompt should instruct it to only call one function at a time?
    Please let me know how you would achieve this, thanks.
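The loop described in this comment can be sketched with a stub in place of the real model. The `finish` end tool, the `get_weather` tool, and the dispatch logic here are assumptions for illustration, not code from the video.

```python
import json

def get_weather(city):
    # Stub tool: a real implementation would call a weather API.
    return {"city": city, "temp_c": 21}

TOOLS = {"get_weather": get_weather}

def fake_model(history):
    # Stand-in for the LLM: it requests one tool call per turn and,
    # once a tool result is in the history, signals completion with
    # the hypothetical "finish" end tool.
    if not any(m["role"] == "tool" for m in history):
        return {"name": "get_weather", "arguments": {"city": "London"}}
    return {"name": "finish", "arguments": {}}

def run_agent(user_prompt, model=fake_model, max_steps=5):
    history = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        call = model(history)          # model picks one tool per iteration
        if call["name"] == "finish":   # explicit end tool stops the loop
            break
        result = TOOLS[call["name"]](**call["arguments"])
        # Feed the result back so the next turn sees previous calls.
        history.append({"role": "tool", "name": call["name"],
                        "content": json.dumps(result)})
    return history
```

With a real client, `fake_model` would be replaced by a chat-completion call that passes the accumulated `history` as messages; the `max_steps` cap guards against the model never calling the end tool.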

  • @lalamax3d
    @lalamax3d 1 month ago

    Now I also have more questions after finishing watching...
    Q: Can this be done with a local Llama 3 8B installed via Ollama? If so, what steps do I need to change?

  • @cuteporo
    @cuteporo 1 month ago

    Great, short, concise video. Thanks for making it. Looking at the source code link, I noticed that ui.py is not listed. Could you update it? Thanks

    • @MervinPraison
      @MervinPraison  1 month ago

      Added now :) Thanks for the reminder

  • @mohamedfouad1309
    @mohamedfouad1309 1 month ago +2

    "this is amazing"🎉

  • @sanjeewarathnayake598
    @sanjeewarathnayake598 28 days ago

    I tried the same code with a locally running Llama 3, but it didn't trigger the function calling. Any idea why?

    • @MervinPraison
      @MervinPraison  18 days ago

      Llama 3 8B is not as effective as 70B with regard to function calling

  • @RobynLeSueur
    @RobynLeSueur 1 month ago +1

    Fantastic stuff. Going to give this a go myself!

  • @HyperUpscale
    @HyperUpscale 1 month ago +1

    This is not only stuff from the top of the AI wave, but the crème de la crème, Mervin

  • @BLAKE99msu
    @BLAKE99msu 18 days ago

    Any reason you don't put these on GitHub so people can run them and play with them?

    • @MervinPraison
      @MervinPraison  18 days ago +1

      It’s in the description

    • @BLAKE99msu
      @BLAKE99msu 17 days ago

      @@MervinPraison I see a link to your blog but no link to GitHub, even when I search this whole page?

  • @d.d.z.
    @d.d.z. 1 month ago

    Can you make a video using CrewAI?

  • @thecooler69
    @thecooler69 1 month ago

    Awesome stuff. Really useful.

  • @shivachaturvedhi8840
    @shivachaturvedhi8840 1 month ago

    Can't we use Groq without an API key?
    Please guide, anyone

  • @PoGGiE06
    @PoGGiE06 1 month ago

    This is cool. But (forgive the n00b questions) I’m unfamiliar with ‘groq’; I thought that was Elon Musk’s LLM? Presumably it’s a library for using ‘tools’, and this is an illustrative example; the ‘tool’ could be anything, an executable, for example. Otherwise, why not just use RAG for this particular example, given that LLMs are unreliable/unpredictable?

    • @MervinPraison
      @MervinPraison  1 month ago +1

      Groq is different from Grok.
      Grok is Elon Musk’s LLM.
      Here we are discussing Groq, which provides faster responses when we ask a question.
      RAG is generally used for unstructured text data. In this video tutorial the data is structured, and hence we are using tools instead of RAG, to be more efficient.
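For structured data like this, a tool is described to the model with a JSON schema in the OpenAI-style `tools` format that Groq's chat API accepts. The function name and parameters below are illustrative assumptions, not the exact tool from the video.

```python
# Illustrative OpenAI-style tool definition; the function name and
# parameters are assumptions for the example.
tools = [{
    "type": "function",
    "function": {
        "name": "get_product_price",
        "description": "Look up the price of a product by its id",
        "parameters": {
            "type": "object",
            "properties": {
                "product_id": {
                    "type": "string",
                    "description": "Product identifier",
                },
            },
            "required": ["product_id"],
        },
    },
}]
```

A definition like this would be passed as `tools=tools, tool_choice="auto"` in the chat-completion request; the model then returns structured tool calls with validated arguments instead of free text, which is why tools fit structured data better than RAG here.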

    • @PoGGiE06
      @PoGGiE06 1 month ago

      @@MervinPraison Thanks Mervin. BTW love your concise code snippets, they are *very* helpful. Ollama is gaining traction fast, but the documentation and other content providers were still focused on OpenAI, LangChain, and associated integrations until only a few weeks ago, so your coverage has been helpful.

  • @Armoredcody
    @Armoredcody 1 month ago

    Could you zoom out a little?

    • @MervinPraison
      @MervinPraison  1 month ago

      Any specific area of the video you wish to be zoomed out? I used various zoom levels for each part of the video, just so that next time it is easy for you to view.

  • @farexBaby-ur8ns
    @farexBaby-ur8ns 1 day ago

    Groq and Grok are very confusing.. the latter is Elon's.