NEW MISTRAL: Uncensored and Powerful with Function Calling

ΠŸΠΎΠ΄Π΅Π»ΠΈΡ‚ΡŒΡΡ
HTML-ΠΊΠΎΠ΄
  • ΠžΠΏΡƒΠ±Π»ΠΈΠΊΠΎΠ²Π°Π½ΠΎ: 29 июн 2024
  • In this video, I explore the new Mistral 7B-v0.3 model, now available on Hugging Face. I'll show you how to install the Mistral inference package, download the model, and run initial queries. We also test its performance and highlight its new features like uncensored responses and function calling. Stay tuned for future videos on fine-tuning this model!
    #mistral #functioncalling #llm
    🦾 Discord: / discord
    β˜• Buy me a Coffee: ko-fi.com/promptengineering
    πŸ”΄ Patreon: / promptengineering
    πŸ’ΌConsulting: calendly.com/engineerprompt/c...
    πŸ“§ Business Contact: engineerprompt@gmail.com
    Become Member: tinyurl.com/y5h28s6h
    πŸ’» Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
    Signup for Advanced RAG:
    tally.so/r/3y9bb0
    LINKS:
    Mistral 7B v0.3: huggingface.co/mistralai/Mist...
    00:00 Introducing Mistral 7b v0.3
    00:28 Key Features and Enhancements of Mistral 7b v0.3
    01:03 Getting Started: Installation and Setup
    01:17 Exploring the Model: Initial Tests and Functionality
    08:46 Advanced Functionality: Function Calling with Mistral 7b v0.3
    11:25 That's a wrap
    All Interesting Videos:
    Everything LangChain: β€’ LangChain
    Everything LLM: β€’ Large Language Models
    Everything Midjourney: β€’ MidJourney Tutorials
    AI Image Generation: β€’ AI Image Generation Tu...
  • НаукаНаука

Comments β€’ 25

  • @engineerprompt
    @engineerprompt  29 days ago

    If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag

  • @tvwithtiffani
    @tvwithtiffani 1 month ago +3

    It's true. These models are just getting better, faster. The smaller ones, anyway.

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 month ago +3

    Yeah! RAG + this model + function calling. Yeah!

    • @engineerprompt
      @engineerprompt  1 month ago +1

      Yes, this seems to be a really good candidate for it

    • @simonpeter9617
      @simonpeter9617 1 month ago

      @engineerprompt can we do real-time voice translation here?

    • @joepropertykey3612
      @joepropertykey3612 1 month ago

      @engineerprompt and the voice functions too?

    • @jarad4621
      @jarad4621 1 month ago

      Can you give me some ideas for uses? Why is this good? Help me out.

    • @MeinDeutschkurs
      @MeinDeutschkurs 1 month ago

      @jarad4621 Me? Imagine several databases of information about different topics. The functions could define resources (RAG, web, MySQL), and the AI could decide which resource/tool should be used…
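
      A rough sketch of the tool-routing idea described in the reply above, using the function-calling format from the Mistral-7B-Instruct-v0.3 model card; the tool names (search_rag, query_mysql) and their schemas are made up purely for illustration:

      from mistral_common.protocol.instruct.messages import UserMessage
      from mistral_common.protocol.instruct.request import ChatCompletionRequest
      from mistral_common.protocol.instruct.tool_calls import Function, Tool

      # Hypothetical tools: the model only selects one and emits a tool call;
      # your own code would then run it against the RAG index, the web, or MySQL.
      tools = [
          Tool(function=Function(
              name="search_rag",  # illustrative name, not a real API
              description="Search the local document index (RAG) for information on a topic.",
              parameters={
                  "type": "object",
                  "properties": {"query": {"type": "string", "description": "Search query"}},
                  "required": ["query"],
              },
          )),
          Tool(function=Function(
              name="query_mysql",  # illustrative name, not a real API
              description="Run a read-only SQL query against the orders database.",
              parameters={
                  "type": "object",
                  "properties": {"sql": {"type": "string", "description": "A SELECT statement"}},
                  "required": ["sql"],
              },
          )),
      ]

      request = ChatCompletionRequest(
          tools=tools,
          messages=[UserMessage(content="How many orders did we ship last week?")],
      )
      # Encode with MistralTokenizer and call generate() as in the setup sketch above;
      # the model should respond with a tool call naming one of the functions.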

  • @unclecode
    @unclecode 1 month ago +2

    I like the function calling. Did you try multi-function calls? I haven't tried it yet.

    • @engineerprompt
      @engineerprompt  1 month ago +2

      Not yet, that's on my list

  • @jelliott3604
    @jelliott3604 1 month ago

    I find the fact that an AI regards "killing a Linux process" as "unethical" to be an unexpected and refreshing display of character and solidarity.
    Where you see a "Linux process", it sees a kindred spirit.

    • @jelliott3604
      @jelliott3604 1 month ago

      I hope it sends it a message telling it to hide

  • @GetzAI
    @GetzAI 1 month ago

    Can you do a video (or videos) describing what aspects of the model can be changed via fine-tuning and what can't? I see function calling, token size, and other capabilities mentioned in other videos.
    Then a video or series that carries out each of these changes via fine-tuning and tests the results.

    • @GetzAI
      @GetzAI 1 month ago

      Can an LLM be fine-tuned to use something like CrewAI?

    • @engineerprompt
      @engineerprompt  1 month ago +1

      That's a good idea, let me see what I can put together. I see a lot of confusion around fine-tuning and its impact on model capabilities.

    • @engineerprompt
      @engineerprompt  1 month ago

      These are two different use cases. You could create a set of agents to run a fine-tuning job.

  • @kingofutopia
    @kingofutopia 1 month ago +1

    Looks good. Can you include a JSON generation test, like generating a list of items in a particular JSON format?

    • @engineerprompt
      @engineerprompt  1 month ago +1

      That's a good idea. Will include that
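
      A rough version of the JSON test suggested above, assuming the model is loaded through the transformers text-generation pipeline (any inference path would do); the prompt and expected schema are illustrative:

      import json
      import torch
      from transformers import pipeline

      # Assumes the instruct model is available locally or from the Hugging Face Hub.
      chat = pipeline(
          "text-generation",
          model="mistralai/Mistral-7B-Instruct-v0.3",
          torch_dtype=torch.float16,
          device_map="auto",
      )

      messages = [{
          "role": "user",
          "content": (
              "List three programming languages as JSON. Return ONLY a JSON array of "
              'objects with the keys "name" and "year_created", with no extra text.'
          ),
      }]
      # With chat-style input, the pipeline returns the whole conversation;
      # the assistant reply is the last message.
      reply = chat(messages, max_new_tokens=200)[0]["generated_text"][-1]["content"]

      # The test passes if the reply parses as JSON and has the requested shape;
      # if the model wraps the JSON in extra prose, json.loads() will fail here.
      items = json.loads(reply)
      assert isinstance(items, list)
      assert all({"name", "year_created"} <= set(item) for item in items)
      print(items)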

  • @EnricoGolfettoMasella
    @EnricoGolfettoMasella 1 month ago

    I commented first, that’s a miracle!

  • @john_blues
    @john_blues 1 month ago

    On Hugging Face, the model appears to be censored, at least when using the Spaces created from it.

    • @engineerprompt
      @engineerprompt  1 month ago

      download it locally in fp16
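
      For reference, a minimal way to do that with transformers: load the instruct checkpoint locally with fp16 weights (roughly 15 GB of GPU memory for the 7B model):

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Mistral-7B-Instruct-v0.3"

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          torch_dtype=torch.float16,  # fp16, as suggested in the reply above
          device_map="auto",          # needs the `accelerate` package
      )

      # Quick smoke test with the chat template.
      inputs = tokenizer.apply_chat_template(
          [{"role": "user", "content": "Hello!"}],
          add_generation_prompt=True,
          return_tensors="pt",
      ).to(model.device)
      outputs = model.generate(inputs, max_new_tokens=64)
      print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))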

  • @mahmoodkhan3683
    @mahmoodkhan3683 1 month ago

    The model is now uncensored with minimal other changes.
    Right?

    • @engineerprompt
      @engineerprompt  1 month ago

      Yes, that seems to be the case

  • @ScottzPlaylists
    @ScottzPlaylists 1 month ago +5

    In the video, all the jumping around and zooming in is annoying. Watch how @echohive reviews code in his videos.