Mistral AI
  • 3 videos
  • 35,153 views
Mistral Office Hour: three fine-tuning hackathon winning projects
We hosted three winners from our fine-tuning hackathon to discuss their projects:
00:00 intro
1:15 "Alplex" by Antoine Madrona, Antoine Masanet, Guillaume Raille: fine-tuned Mistral 7b for legal assistants
26:43 "Midistral" by Francois: fine-tuned Mistral Small to create music
40:40 "Le Chat Robot" by Johann Diep and Stefan Karmakov: fine-tuned Mistral 7b to control robotic arms
516 views

Videos

Advance RAG control flow with Mistral and LangChain: Corrective RAG, Self-RAG, Adaptive RAG
15K views · 6 months ago
github.com/mistralai/cookbook/tree/main/third_party/langchain
Function Calling with Mistral AI
19K views · 8 months ago
- Notebook: colab.research.google.com/github/mistralai/cookbook/blob/main/mistral/function_calling/function_calling.ipynb
- Docs: docs.mistral.ai/guides/function-calling/

Comments

  • @markus_EU_AT · 3 days ago

    How does the model know which function to run? I have to explain to it somehow what a function does.
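The model never sees your code; it only sees JSON-schema metadata passed in the `tools` parameter, and the `description` fields are what it reads to decide which function fits the user's request. A minimal sketch of one such tool definition (the `get_weather` tool here is a made-up example; the field layout follows the format used in the function-calling notebook linked above):

```python
# One tool definition as passed in the `tools` parameter. The model never
# sees your implementation, only this metadata; the "description" strings
# are how it learns what each function does and when to call it.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g. 'Paris'",
                },
            },
            "required": ["city"],
        },
    },
}
```

The chat call then receives this list, roughly as `tools=[get_weather_tool], tool_choice="auto"`, and the model answers with the chosen function name plus JSON arguments.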

  • @Readidno · 1 month ago

    very cool

  • @sergiovasquez7686 · 2 months ago

    Hey, can we implement it with all together?

  • @RUSHABHPARIKH-vy6ey · 2 months ago

    Does the structured output work with LLM calls using Bedrock?

  • @TheInternet81 · 3 months ago

    But it would be better if you could store several versions of a perspective and calculate the benefit of one perspective over another, because in the academic world there are several perspectives for solving a problem. What you build here only enhances a single perspective. Even so, we should all appreciate that this is a BIG STEP forward in the field of AI knowledge. Cheers...

  • @davidvukotic · 3 months ago

    The best 💯

  • @davidvukotic · 3 months ago

    Mistral will be a big one!

  • @awakenwithoutcoffee · 4 months ago

    Thank you for the wonderful insights into the latest RAG developments. Can someone explain in simple terms the benefit of implementing "LangGraph"? From what I understand, it allows for more accurate LLM executions by limiting the "routes" the output of a certain LLM flows through, improving its reliability in execution. But why can't we empower LangChain "Agents" with the same functionality? Wouldn't the ideal agent have LangGraph capabilities built in?

    • @awakenwithoutcoffee · 3 months ago

      After diving deep for 3 weeks, I know the answer to my own question (lol): LangGraph is an extension to LangChain that allows for managing "state", e.g. controlling what goes in and out of your LLM. By restricting what the LLM is able to do, we increase its reliability, allowing us to build better, safer & more complex LLM systems. Instead of making one massive prompt, we utilize the techniques of traditional system design to map out our software in a more traditional sense.
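The state idea in this reply can be made concrete without LangGraph itself. A plain-Python sketch of the pattern (node names and state keys are invented for illustration; this is not LangGraph's API):

```python
# Sketch of the LangGraph idea: each node is a function that reads a shared
# state dict and returns an updated copy, and the graph fixes which node may
# run next, so the LLM's possible "routes" are constrained by design.
def retrieve(state):
    # Pretend retrieval step: attach documents to the state.
    return {**state, "documents": ["doc about RAG"]}

def grade(state):
    # Grader node: decide the next route from document relevance.
    relevant = any("RAG" in d for d in state["documents"])
    return {**state, "route": "generate" if relevant else "web_search"}

def web_search(state):
    # Fallback node: add a web result, then route to generation.
    return {**state, "documents": state["documents"] + ["web result"],
            "route": "generate"}

def generate(state):
    return {**state, "answer": "grounded answer"}

NODES = {"generate": generate, "web_search": web_search}

def run(question):
    state = grade(retrieve({"question": question}))
    state = NODES[state["route"]](state)
    if "answer" not in state:  # we took the web_search detour first
        state = generate(state)
    return state
```

Restricting each transition to a named edge like this is exactly the "controlling what goes in and out" the comment describes.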

  • @aipt32 · 4 months ago

    What happens if the graph gets stuck in a loop? (Web search > not useful > web search > not useful > ...) Do I have to add a "tries" counter to my state and end after x tries to prevent an infinite loop?
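A retry counter in the state plus a conditional edge that gives up after N attempts is indeed a common guard for this. A plain-Python sketch of the pattern (MAX_TRIES and the node names are illustrative, not LangGraph API):

```python
MAX_TRIES = 3  # illustrative cap on retries

def web_search(state):
    # Pretend every search comes back unhelpful, to force the loop.
    state["tries"] = state.get("tries", 0) + 1
    state["useful"] = False
    return state

def route(state):
    # Conditional edge: retry until useful, but bail out after MAX_TRIES.
    if state["useful"]:
        return "generate"
    if state["tries"] >= MAX_TRIES:
        return "give_up"
    return "web_search"

def run():
    state = {}
    node = "web_search"
    while node == "web_search":
        state = web_search(state)
        node = route(state)
    return node, state["tries"]
```

The "give_up" branch could fall back to a direct answer or an "I don't know" response instead of looping forever.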

  • @hxxzxtf · 5 months ago

    🎯 Key points for quick navigation:
    00:00 - Advance RAG control flow with Mistral and LangChain
    00:12 - Combining small steps into comprehensive control flow for large language model applications
    00:25 - Flow engineering uses a flow diagram to check response intent and construct answer iteratively
    01:06 - Corrective RAG uses retrieval evaluator to assess document quality and trigger web search for additional information
    02:14 - Hallucination node checks answer support by document, and answer question node checks generated answer relevance
    10:31 - Bind MRAW to schema
    10:43 - Convert JSON output
    10:59 - Mock retrieval example
    11:12 - Grading documents relevance
    11:25 - Confirm binary score
    11:39 - Define RAG chain
    12:05 - Graph State explained
    21:11 - Adversarial Tax Routing
    21:52 - Hallucination Grader Defined
    22:18 - Router Conditional Edge
    22:47 - Web Search Fallback
    24:03 - Control Flow Implemented
    Made with HARPA AI

  • @pvp8349 · 5 months ago

    What other types of functions are there? Is there any good documentation?

  • @ryanfarran · 5 months ago

    Mistral rocks!

  • @kuldeepsinhjadeja3668 · 5 months ago

    In the last part, when the flow went to the web search tool twice, it basically searched the same query, so how did it produce a valid result the second time and not the first? How do we ensure it doesn't get stuck in a loop? It basically does the same thing again and again without changing anything, hoping to get a correct result.

  • @RajaSekharaReddyKaluri · 6 months ago

    Thank you Sophia and Lance!

  • @ayubsubhaniya6516 · 6 months ago

    How does a request get translated into LLM input? Are you using special tokens to denote function calls or response messages? Thanks for the help.

  • @nicolaspellerin2207 · 6 months ago

    Thanks for this! Learned a ton of good stuff, very well explained, will definitely be playing with your notebooks 😊 You're fantastic for sharing such high-quality work

  • @eddyjens4948 · 6 months ago

    nice

  • @chuanjiang6931 · 6 months ago

    Calling the API costs money, right?

  • @AlbertJinkuGu · 6 months ago

    Awesome job! Thank you for sharing! What's the best way to do RAG based on a relational database? We need to understand the question, go to the correct table of the database, and find the most relevant records. It looks like we should support both keyword search and semantic search. For the keyword search, we need to extract the parameters, like the keyword, the date of the question, the person who generated the record, etc.

  • @8eck · 6 months ago

    Glad to see you in Mistral AI! 🥰

  • @Taskade · 6 months ago

    Excited to bring Mistral into Taskade with our upcoming Multi-Agent update! 😊

  • @Taskade · 6 months ago

    Can't wait to incorporate Mistral into Taskade in our next Multi-Agent update :)

  • @luanorionbarauna8555 · 6 months ago

    What if the document is a CSV file? How can I do that?

    • @eeee8677 · 6 months ago

      It's impossible

  • @pabloe1802 · 6 months ago

    What is the purpose of the Mistral client? Can we replace it with a model run locally?

  • @deathdefier45 · 6 months ago

    You guys are amazing ❤❤

  • @NarendraChennamsetty · 6 months ago

    This is an amazing tutorial. So much valuable information packed into 30 minutes. Subscribed, thank you!

    • @bqmac43 · 6 months ago

      Lance's videos always have great insights. I'd recommend checking more of his videos out if you liked this one.

  • @ChrisSMurphy1 · 6 months ago

    Smokin hott

  • @RiteshKumar-xo3ll · 6 months ago

    I don't want to use it as an open-source LLM; instead I want it local, deployed in my own cloud service. If I deploy it in the Azure cloud, what are the CPU and GPU requirements? And can I use LangChain?

  • @nikitakuznetsov4592 · 6 months ago

    Guys, this is crazy good! Please don't stop your demos and explanations of concepts. If you read this: can you explain a little more about the concepts of action tools (usage, own implementations, and so on)? Thanks in advance!

    • @bqmac43 · 6 months ago

      Tools are functions that the agent can call. To decide which tool to use, an agent can send the available tools to the LLM and ask "which one should I use?" Once a tool has been selected, the LLM can then provide arguments to pass into the tool's function. The agent takes the information from the LLM to call the tool, and then goes back to the LLM to ask "What tool should I use now?"

      In the example shown here, the agent is given specific routes to take. This simplifies each step because the agent is focused on a specific outcome at each step, so at each step the available tools are scoped down to the task at hand. An alternative to this flow is ReAct agents. ReAct agents are given a set of tools and a task and can reason for themselves how to accomplish the task given the tools they have. Each type of flow has its place (as Lance points out nicely with his pros and cons). Personally, I start with ReAct agents because they're easier to set up, and if I find myself getting frustrated by the steps it takes, then I move to a more deterministic flow (i.e. LangGraph, what Lance does in the video).

      That's a long explanation, and hopefully it makes sense. You can read more on how to implement them with LangChain here: python.langchain.com/docs/modules/tools/custom_tools/
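The select-then-dispatch loop described in this reply can be sketched with a stubbed model call (the `fake_llm` stub and `get_weather` tool are invented for illustration; a real implementation would send the tool metadata to the chat API instead):

```python
import json

# Registry of callable tools; in a real agent, each tool's name and
# description would be sent to the model as JSON-schema metadata.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(messages, tools):
    # Stand-in for the real model call: given the available tools, the
    # model replies with which function to run and JSON arguments for it.
    return {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}

def agent_step(messages):
    call = fake_llm(messages, tools=list(TOOLS))
    args = json.loads(call["arguments"])
    return TOOLS[call["name"]](**args)  # dispatch to the chosen tool
```

A full agent would append the tool's result to `messages` and loop back to the model until it answers without requesting a tool.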

    • @choiswimmer · 6 months ago

      The LangChain channel has more

  • @AurobindoTripathy · 6 months ago

    There's a mention of tool execution on the "model" side? What's the use case for that?

  • @shackyalla · 7 months ago

    I tried to do this with the OpenAI client and base_url set to my local Mistral-7b endpoint, basically using Mistral 7b as a stand-in replacement for the OpenAI models. The tools format should be the same, right? It works with the GPT models but not with Mistral. Any idea why?

    • @MistralAIOfficial · 7 months ago

      Function calling is currently only available for mistral-small and mistral-large

  • @MaximoPower2024 · 7 months ago

    Are function calling and system prompts compatible features? With tool_choice set to "auto" but a use case demanding a function call, the model writes the JSON to call the function but includes it as part of the content, instead of using tool calls explicitly.

  • @ramikanimperador2286 · 8 months ago

    Mistral AI impressed me a lot because it gave me good code to train an AI without a GPU, using the technique of dividing the data.txt dataset into mini-batches and freeing the memory after training each mini-batch. It worked, but unfortunately ended with an "index out of range in self" error at the end of training the last mini-batch, which I could not solve at all. Still, I was very impressed that it gave me this good code in only about 3 attempts...

  • @derax3878 · 8 months ago

    Is there function calling support for TypeScript and Next.js, or is it only possible with Python?

    • @MistralAIOfficial · 8 months ago

      we have JS support: github.com/mistralai/client-js/blob/main/examples/function_calling.js

  • @parkersettle460 · 8 months ago

    Does Mixtral 8x7b have function calling or just the Large API model?

    • @MistralAIOfficial · 8 months ago

      Currently it's available for Mistral-small and Mistral-large

    • @parkersettle460 · 7 months ago

      @@MistralAIOfficial Can you speak to the plan for open-sourcing future models? Also, are there any thoughts on releasing datasets, such as a function-calling dataset, for the current open-source models?

  • @g0d182 · 8 months ago

    cool

  • @Techonsapevole · 8 months ago

    Great, is Mistral 7B capable of function calling?

    • @MistralAIOfficial · 8 months ago

      currently we only have function calling for Mistral-small and Mistral-large

    • @NouhaBelhajyoussef · 5 months ago

      @@MistralAIOfficial Is there bind_tool() for Mistral 7B v0.3? I can't get it to use the tool.

  • @anton9690 · 8 months ago

    I am getting this error at step 10, both on the colab and in my local interpreter. Any clue? ValidationError: 1 validation error for ChatCompletionResponse choices.0.finish_reason Input should be 'stop', 'length', 'error' or 'tool_calls' [type=enum, input_value='tool_call', input_type=str]

    • @MistralAIOfficial · 8 months ago

      Could you try it again? It should work now

    • @anton9690 · 8 months ago

      @@MistralAIOfficial fixed, thanks! :)

  • @andfanilo · 8 months ago

    Congratulations on the launch of the channel ☺ Great video, looking forward to the next ones!

  • @joelwalther5665 · 8 months ago

    Great! Can we change ENDPOINT = "localhost" (or base_url) and api_key="NONE"? That would be excellent!

    • @nbbhaskar3294 · 8 months ago

      I think Mistral Large is a proprietary model that cannot be run locally. You either have to use the Mistral API, or Microsoft Azure has this model in their AI Studio services; I am using the latter at work. But you could always run a 7B variant locally and use function calling as described here in this video: ruclips.net/video/MQmfSBdIfno/видео.html