NEW MISTRAL: Uncensored and Powerful with Function Calling
- Published: Jun 29, 2024
- In this video, I explore the new Mistral 7B-v0.3 model, now available on Hugging Face. I'll show you how to install the Mistral inference package, download the model, and run initial queries. We also test its performance and highlight its new features like uncensored responses and function calling. Stay tuned for future videos on fine-tuning this model!
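Before the heavy lifting of loading the model itself, it helps to see the prompt shape Mistral instruct models expect. The helper below is an illustrative sketch only (in practice the tokenizer's `apply_chat_template` builds this for you), and the `[INST]` wrapping shown is the general Mistral instruct convention, not verified against this exact release:

```python
# Minimal sketch of the [INST] chat format Mistral instruct models expect.
# Illustrative only; a real pipeline would use the tokenizer's
# apply_chat_template instead of hand-rolling the tags.

def build_prompt(messages):
    """Wrap alternating user/assistant turns in Mistral's [INST] tags."""
    parts = ["<s>"]
    for msg in messages:
        if msg["role"] == "user":
            parts.append(f"[INST] {msg['content']} [/INST]")
        elif msg["role"] == "assistant":
            parts.append(f"{msg['content']}</s>")
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "Name three Linux signals."}])
print(prompt)  # <s>[INST] Name three Linux signals. [/INST]
```

The string this produces is what gets tokenized and fed to the model for a single-turn query.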
#mistral #functioncalling #llm
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Patreon: / promptengineering
💼 Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Signup for Advanced RAG:
tally.so/r/3y9bb0
LINKS:
Mistral 7B v0.3: huggingface.co/mistralai/Mist...
00:00 Introducing Mistral 7b v0.3
00:28 Key Features and Enhancements of Mistral 7b v0.3
01:03 Getting Started: Installation and Setup
01:17 Exploring the Model: Initial Tests and Functionality
08:46 Advanced Functionality: Function Calling with Mistral 7b v0.3
11:25 That's a wrap
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...
If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
It's true. These models are just getting better, faster. The smaller ones, anyway.
Yeah! RAG + this model + function calling. Yeah!
Yes, this seems to be a really good candidate for it
@engineerprompt can we do real-time voice translation here?
@engineerprompt and the voice functions too?
Can you give me some ideas for uses? Why is this good? Help me out.
@jarad4621 Me? Imagine several databases of information about different topics. The functions could define resources (RAG, web, MySQL) and the AI could decide which resource/tool should be used…
I like the function calling. Did you try multi-function calls? I haven't tried it yet.
Not yet, that's on my list
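The multi-function case above mostly comes down to plumbing on the application side: the model emits a JSON list of tool calls, and the app runs each one. A rough sketch, where the payload shape and the `get_weather` tool are both made-up assumptions, not the exact wire format Mistral 7B v0.3 emits:

```python
import json

# Sketch of dispatching function calls a model might emit. The payload
# shape ({"name": ..., "arguments": ...}) and the get_weather tool are
# assumptions for illustration, not Mistral's exact output format.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub in place of a real API call

TOOLS = {"get_weather": get_weather}

def dispatch(tool_calls_json: str) -> list:
    """Run every call in a (possibly multi-call) tool-calls payload."""
    results = []
    for call in json.loads(tool_calls_json):
        fn = TOOLS[call["name"]]  # look up the registered tool by name
        results.append(fn(**call["arguments"]))
    return results

# A multi-function payload: two calls in one model turn.
payload = ('[{"name": "get_weather", "arguments": {"city": "Paris"}},'
           ' {"name": "get_weather", "arguments": {"city": "Oslo"}}]')
print(dispatch(payload))  # ['Sunny in Paris', 'Sunny in Oslo']
```

Each result would then be sent back to the model as a tool-result message so it can compose the final answer.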
I find the fact that an AI regards "killing a linux process" to be "unethical" to be an unexpected and refreshing display of character and solidarity.
Where you see a "Linux process" it sees a kindred spirit
I hope it sends it a message telling it to hide
Can you do a video (or series) describing what aspects of the model can be changed via fine-tuning and what can't? I see function calling, token size, and, in other videos, other capabilities.
Then a video or series conducting each of these changes via fine-tuning and testing the results.
Can an LLM be fine-tuned to use something like CrewAI?
that's a good idea, let me see what I can put together. I see a lot of confusion around fine-tuning and its impact on the model capabilities.
These are two different use cases. You could create a set of agents to run a fine-tuning job.
Looks good. Can you include a JSON generation test, like generating a list of items in a particular JSON format?
That's a good idea. Will include that
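A JSON-generation test like the one suggested above can be checked mechanically: parse the model's reply and verify it matches the requested schema. A minimal sketch, with the item schema (`name`/`quantity` keys) and the hard-coded reply both chosen for illustration:

```python
import json

# Sketch of scoring a JSON-generation test: the model is asked for a
# list of items in a fixed format, and we verify the reply parses and
# every item carries the required keys. The reply here is hard-coded.

REQUIRED_KEYS = {"name", "quantity"}

def validate_item_list(reply: str) -> bool:
    """True if reply is a JSON list of objects with the required keys."""
    try:
        items = json.loads(reply)
    except json.JSONDecodeError:
        return False
    return isinstance(items, list) and all(
        isinstance(it, dict) and REQUIRED_KEYS <= it.keys() for it in items
    )

reply = '[{"name": "apples", "quantity": 3}, {"name": "milk", "quantity": 1}]'
print(validate_item_list(reply))  # True
```

Running the same check over many sampled generations gives a pass rate, which is a simple way to compare models on structured output.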
I commented first, that's a miracle!
:)
On Huggingface, the model appears to be censored. At least using the Spaces created from it.
download it locally in fp16
The model is now uncensored with minimal other changes.
Right?
Yes, that seems to be the case
In the video, all the jumping around and zooming in is annoying. Watch how @echohive reviews code in his videos.