- Videos: 32
- Views: 131,650
VideotronicMaker
USA
Joined Sep 25, 2019
Learn with me as I explore the fascinating world of Arduino, single-board computers, artificial intelligence, 3D printing and electronics. Topics include LLMs, AI, Visuino, the Arduino IDE, Arduino starter kits, YouTube video creation and all things related to video production.
Digital assistant w/ Gemini 1.5 flash- App with UI using Google Gemini Advanced - AI assistant w/ UI
Using #Google #geminiadvanced I was able to create a custom Whisper transcription app and a locally hosted digital assistant (#aichatbot) app with a custom UI. Gemini Advanced was able to assist from scratch using only the #python script provided by #googlecloud and #googleaistudio. The digital assistant app uses the Gemini 1.5 Flash 001 #api, which is the Gemini AI API.
Gemini Advanced has definitely advanced over the last few months, and I have been using it frequently to build scripts and apps. I also show how, with the help of Gemini, I was able to create another custom #whispertranscription #app with a #ui .
The Whisper app with user interface enables you to add audio files and ...
Views: 336
Videos
Offline Hugging Face Model Inferencing without LM Studio, Llama.cpp, Ollama or Colab
700 views · 3 months ago
Trying to run a local Hugging Face model on an old 2012 MacBook Pro? In this video I will show you how to run an offline/local #HuggingFace base model. I will do this without using #LMStudio, #Ollama, #Llama.cpp, #JupyterNotebooks or #GoogleColab. Of course, the models will be no bigger than 7B parameters. This will be a foundation so that over time, we can add features like conversational loop...
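The description above mentions running a base model without any wrapper tools. A minimal sketch of what that typically looks like with the `transformers` library, assuming the model has already been downloaded to a local folder (the folder path and function names here are my own examples, not from the video):

```python
# Sketch of offline inference with the transformers library.
# Assumptions: `pip install transformers torch` has been run, and a small model
# (e.g. Phi-2) was downloaded ahead of time into ./models/phi-2.

def build_prompt(system: str, question: str) -> str:
    """Plain-text prompt format suitable for a base (non-chat) model."""
    return f"{system}\n\nQuestion: {question}\nAnswer:"

def generate(model_dir: str, prompt: str, max_new_tokens: int = 128) -> str:
    # Imported lazily so build_prompt works even without transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_dir)  # loads from the local folder
    model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float32)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Usage (requires the downloaded model):
# print(generate("./models/phi-2", build_prompt("You are helpful.", "What is an LLM?")))
```

On an old CPU-only machine like a 2012 MacBook Pro, `float32` is the safe dtype choice, and generation will be slow; that is the trade-off the video is working within.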
Google Gemma 2B on LM Studio Inference Server: Real Testing
3K views · 8 months ago
In my candid review of Google's Gemma 2B, I run the model on LM Studio's local inference server. In my sincere evaluation, I base my assessment on tasks that align with my typical usage of large language models (LLMs). Without relying on benchmarks or scores, I go through some personal use cases to determine the practical utility of this model. Join me as I provide insights into my exploration...
OpenAI's Sora Videos: Disrupting the Film and Television Industry
806 views · 8 months ago
We have seen AI video generation advancing quickly. With the release of the sample videos created with Sora, it appears that it is now official: video production has changed forever. For a good while there will still be a need for videography, drone photography and videography, non-linear editing, motion graphics and 3D animation, but it appears that a new way of gathering assets and assembling...
The Rise of Artificial Intelligence Compared with Other Historical Technological Innovations
64 views · 8 months ago
For the past year, I've wrestled with the decision of whether or not to embrace artificial intelligence (AI). It's been a journey of deliberation and discovery, filled with uncertainties and contemplation. But now, after diving deep into learning about AI, I've come to a realization, and this video reflects my newfound position. Join me on this exploration through history and AI as we delve int...
LM Studio-Local Inference Server-NLP Upgrade Using Free Google Text to Speech API w Code-Part 3
2.4K views · 8 months ago
In this video, I'm upgrading and exploring the implementation of the Google Voice API to infuse my LLMs with a more natural and expressive voice, for better natural language processing. All code and instructions are in the GitHub repo linked below. In this updated iteration I present the latest upgrades in deploying an open-source Large Language Model (LLM) on a local inference server, using LM...
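The TTS step described above can be sketched in a few lines. This is a hedged example, assuming the free `gTTS` package (`pip install gTTS`), which is one common way to reach Google's free text-to-speech endpoint; the function names and cleanup rules are my own, not from the video's repo:

```python
# Sketch: give an LLM's text reply a spoken voice via Google's free TTS.
# Assumption (mine): the gTTS package is the client being used.

def clean_for_speech(text: str) -> str:
    """Strip markdown-ish symbols that sound bad when read aloud."""
    return text.replace("*", "").replace("#", "").strip()

def speak(text: str, out_path: str = "reply.mp3") -> str:
    from gtts import gTTS  # imported lazily so the rest of the script runs without it
    gTTS(clean_for_speech(text), lang="en").save(out_path)  # writes an mp3 file
    return out_path
```

The saved mp3 can then be played back with whatever audio player the assistant script already uses.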
Google Gemini Advanced Review: Usefulness Over Benchmarks - A Google Enthusiast's Honest Insights
322 views · 8 months ago
Here is my honest review of Google's Gemini Ultra. As a long-standing Google enthusiast, my journey has been filled with highs and lows, from the excitement of new releases to my final conclusion about what I have seen so far. In this video, I share my firsthand experiences with Google's AI advancements, including the transition from Gemini Pro to the unveiling of Gemini Ultra, and my ventures ...
LM Studio-Local Inference Server-Voice Conversation-with Text Input Option and Code-Part 2
7K views · 8 months ago
Join me with LM Studio as I delve deeper into deploying an open-source LLM on a local inference server. In this tutorial, I showcase my progress in developing an AI assistant implementing voice conversation and an optional text-typing GUI. This video is tailored for non-coders and demonstrates how to guide GPT-4 to write the code. This tutorial builds upon the foundational concepts introduced in m...
LM Studio: How to Run a Local Inference Server-with Python code-Part 1
14K views · 8 months ago
Tutorial on how to use LM Studio without the chat UI, using a local server. Deploy an open-source LLM with LM Studio on your PC or Mac without an internet connection. The files in this video include the function that initiates a conversation with the local model and establishes the roles and where the instructions come from. The setup allows the script to dynamically read the system message from text fil...
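The pattern described above (an OpenAI-style chat request against a local server, with the system message read from a text file) can be sketched with only the standard library. This assumes LM Studio's server is running at `http://localhost:1234`, its default; the file name and helper names are illustrative, not copied from the repo:

```python
# Sketch: talk to an LM Studio local inference server without extra packages.
import json
import urllib.request
from pathlib import Path

def load_system_message(path: str, default: str = "You are a helpful assistant.") -> str:
    """Read the system message from a text file, falling back to a default."""
    p = Path(path)
    return p.read_text(encoding="utf-8").strip() if p.exists() else default

def build_payload(system_message: str, user_message: str) -> dict:
    """OpenAI-style chat payload, which LM Studio's server accepts."""
    return {
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

def chat(user_message: str, system_path: str = "system_message.txt") -> str:
    payload = build_payload(load_system_message(system_path), user_message)
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",  # LM Studio's default endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the system message lives in a plain text file, the assistant's personality can be changed without touching the code, which is the point of the setup in the video.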
How an LLM Generates a Response - Stable LM 3B on LM Studio
279 views · 9 months ago
How do LLMs generate responses? Take a one-minute look inside LM Studio, showcasing the Stable LM 3B model processing a response. See the process of AI response generation, word by word and line by line. This screen recording offers a glimpse into the inner workings of a large language model, revealing how it meticulously constructs sentences, one word at a time, before presenting the final...
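The word-by-word process shown on screen can be illustrated with a toy loop: at each step the model scores every candidate next token, one is chosen, and it is appended before the process repeats. Here the "model" is just a hardcoded bigram table, purely for demonstration; a real LLM computes these scores with a neural network:

```python
# Toy greedy-decoding loop illustrating token-by-token generation.
# The probability table below is invented for the example.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"model": 0.7, "cat": 0.3},
    "model": {"responds": 0.9, "<end>": 0.1},
    "responds": {"<end>": 1.0},
}

def generate_greedy(max_tokens: int = 10) -> list:
    """Repeatedly append the highest-scoring next token until <end>."""
    tokens, current = [], "<start>"
    for _ in range(max_tokens):
        candidates = BIGRAMS[current]
        nxt = max(candidates, key=candidates.get)  # greedy: take the top score
        if nxt == "<end>":
            break
        tokens.append(nxt)
        current = nxt
    return tokens

# generate_greedy() walks <start> -> "the" -> "model" -> "responds" -> <end>
```

Real models sample from the distribution instead of always taking the top token (that is what temperature and top-k/top-p control), but the one-token-at-a-time loop is the same.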
SpaceX Launch-Falcon 9 Rocket-January 3, 2024-Stuart, Florida
450 views · 9 months ago
Witness the breathtaking launch of SpaceX's Falcon 9 rocket, carrying a state-of-the-art geostationary communications satellite for Ovzon. Captured from the heart of Stuart, Florida, this video showcases the raw power and precision of modern space exploration. Filmed on January 3, 2024, along US 1 near the bustling downtown of Stuart, the footage offers a unique perspective of the rocket slicin...
Your Arduino and Raspberry Pi Classmate | VideotronicMaker
496 views · 1 year ago
How to use an Arduino Proto Shield and Elegoo Prototyping Shield-Part 2
8K views · 1 year ago
How to use an Arduino Proto Shield and Elegoo Prototyping Shield
35K views · 3 years ago
I Just Bought Electronics Components at Radio Shack-2021
161 views · 3 years ago
Arduino Tutorials for Beginners Website-Videotronicmaker
89 views · 4 years ago
How to Set Time Easily with Visuino-by Tishin Padilla-Arduino for Beginners
1.8K views · 4 years ago
Make videos with the tools you already have-Episode 2
202 views · 5 years ago
Arduino Time and Date Tutorial-DS3231 and I2C LCD with Code | Visuino-Tishin Padilla
11K views · 5 years ago
Arduino Project for Beginners with Code-Temperature and Humidity on I2C LCD-Visuino
7K views · 5 years ago
Arduino, Raspberry Pi, & Hobby Electronics | How I Got Hooked
420 views · 5 years ago
Arduino Proto Shield Tutorial-Soldering
28K views · 5 years ago
What is the difference between "User" and "Assistant" in the chat? Also, give an example for "Assistant" if you know.
You are the user. The AI is the assistant.
@@videotronicmaker lol... I know that. If you have LM Studio, make sure you upgrade to the most recent version and go to the chat; you will see that there are now "User" and "Assistant" modes and you can input with both. Previously there was the "System" role: use this field to provide background instructions to the model, such as a set of rules, constraints, or general requirements.
This is cool: there's a speech-to-speech app for LM Studio here. It recognizes speech in 90 languages and has over 1,400 voices to choose from: ruclips.net/video/l1uYTuZoB6Q/видео.html
I took a peek. I will look at it more later. Looks good. Thx.
I installed text-generation-webui-main + Bark. Responses became so slow that it is unusable. I have an RTX 3070, 64 GB RAM and an i7-12700K, but it works very slowly: 2-3 minutes per answer.
What version of the openai library do you have? Check by typing 'pip list' at your cmd.
openai==0.28.0 Here is the GitHub repo: github.com/VideotronicMaker/LM_Studio_Local_Server
Just to let you know: it is possible to use a powerful LLM without a GPU using the GPT4All interface, which is an LM Studio alternative. The main difference is that you can use a powerful open-source LLM with even the most modest computer, even if you have no GPU. The trick is to add the token count to the Python script.
👀
What do the labels mean, please?
Can anyone tell me?
I am not sure which labels you are referring to exactly. There are several. Here is the link to the Uno pinout diagram. Most of the labels are the same: docs.arduino.cc/retired/boards/arduino-uno-rev3-with-long-pins/#osh-schematics If you want the labels for the other sections of the shield, you can look at my other two videos that go deeper into the shields to get a better sense of how to use the shield and what each section of the shield may be. ruclips.net/video/gOo70ULv-yI/видео.html ruclips.net/video/czGsopX2CKI/видео.html I don't believe that Arduino made a diagram for this shield. That is why I originally made the video. I hope this helps a little. I was hoping someone else would jump in. Let me know exactly which labels you are referring to.
Thanks for the inspiration. I've been wanting a "Jarvis".
Are we supposed to download a library beforehand? Because I got the error "ModuleNotFoundError: No module named 'openai'".
Yes. It sounds like you haven't installed the OpenAI library. Two solutions: 1 - go to my GitHub repo and install the dependencies from the requirements.txt file. 2 - Alternatively, you can just use an AI assistant: whenever you get errors, just copy them and paste them into your AI assistant and it will tell you what to do.
That's wonderful. Thank you very much. 🎉❤
ur so funny lol you also sound like a teddy bear
Hey, just to let you know, I am currently programming this in Java Spring Boot, and it is very helpful to see. This is a sub from me and a like. Will update this comment when I upload the video on my account @JA_RON
How do you feel about Gemma 2 (9B and 27B)? They're way better than Gemma 2B and 7B, I haven't found a better LLM.
I recently broke my main PC and I have been preoccupied with figuring out how to use non quantized models on my old MacBook Pro. I have still been stuck on Gemma 2b and Stable LM Zephyr 1.6b. I tried to inference the base Gemma 9b with no success. I hope to inference it very soon. I will say that because I was very impressed with 2b, I am sure that 9b and 27b will be impressive. I am gpu poor these days so...In time. Also, Google has really started to impress me lately with Gemini. All good things! I will be sure to edit this in a few days after I get a chance to try them. I guess I will try it out on Vertex AI in the meantime. Thanks for the question. I now have a next project to get into!
But can you hold a conversation with this model? Does it speak Spanish? Can we download it?
The model used in this video is the Phi-2 model by Microsoft. This model only understands English. Here is the Hugging Face page with more information: huggingface.co/microsoft/phi-2 You would need a different model for Spanish. I am not sure which models of this size understand Spanish. However, I have seen models where other people have taken a base model and adapted it to a different language, so I would say check on Hugging Face and search for something like "Phi-3-sp".
That's great. I wish I had your knowledge to implement this in my LM Studio. But I would need an eternity to learn python and several other things. I already use voice mode on Chatgpt 4o for Android, but it's different than running our own local LLMs. 🎉❤
Thank you for commenting. I want to both encourage you and make something clear. I believe that all you need to implement or create this from scratch and on your own is to utilize something like ChatGPT 4 or Google Gemini Advanced. That is exactly how I did it. I am not a coder, mathematician, computer science major or anything like that. My profession is television production and I also do a little web design.

I was only able to do this by going back and forth with GPT-4. I gave it some code I found in another video and asked it to give me variations of that. Then, step by step, I asked it to add an additional feature. It gave me the code. I pasted the code and usually there was an error. I then copied and pasted that error back into GPT-4 until it gave me code that worked. I knew nothing about Python environments, Anaconda, libraries or dependencies. When those items came up I asked it to explain. It did, and I learned and moved to the next step.

I must say that as of the release of GPT-4o, I had a difficult time with GPT-4o's performance, so I stopped using it. I started using Gemini Advanced and, to my surprise, it has been great. I do not know how to code in Python. One thing I wanted people to get from this is that AI is amazing because it enables you to complete projects like this one. Use it. Contrary to what some say, AI is very useful beyond writing emails, planning trips and shopping.

Also, I am using Gemini Advanced to tutor me in Python. I ask it to start simple, take me step by step and be my instructor. It works. That is what this channel is about: I show people what I am learning as I learn it. It is amazing to me, so that is why I tell others. In this video, even though the AI is writing the code, I am learning little by little in the process. It may not be the most common approach but it is working for me. I hope this helps.
@@videotronicmaker Thank you for the excellent and lengthy answer that I will read with time, it's a true class what you did. I will follow your advice. I'm also a video producer and a musician. I know a little bit of computers and coding, but too little. Thanks. 🙏👍💥❤️
Hi, glad I found this video. Does this mean I can use LM Studio connected online like ChatGPT, using the local server, so it gives me answers from the realtime internet? Like "which party got the most votes, Democrats or Republicans, right now?"
LM Studio is for running an offline LLM. There is no connection to the Internet as far as I know: no realtime interaction or retrieval of info from the web. LM Studio has made many updates lately and added many new features, so I can't say for certain. That is a great idea, actually. I bet there is a Python library that could be used to accomplish that.
@@videotronicmaker Thanks for the clarity. Many YouTubers didn't talk about this except you, I think. I really wish they would integrate this feature. It would be a game changer once and for all!!
Hopefully Open WebUI will support this server interface in the future.
It needs more context.
KITT, is that car KITT?
Yes. Next will be Twiggy.
@@videotronicmaker 👍
I had fun watching! Great learning pal.
Glad you enjoyed it
I have been trying to find a Free way to add a local TTS to local LM Studio in such a way that I could walk around the house with a headset and converse with an LM Studio LLM. Something at least on the level of coqui tts voice quality.
Yes, so have I. That is one goal; I hope to get there soon. For now GPT 3.5 is the closest option. Maybe GPT-4o will be the next best. However, I haven't gotten the headset version working yet.
@@videotronicmaker Yeah, 4o looked pretty awesome, but I'm sure it will be a while before it's available to put on a home PC... if it's made public at all.
Very nice video. Comprehensive and easy to follow.
Bro make more content don't give up 😢
Omg this is f up
Grok's LLM is 334 GB, on an external SSD and an M2 Pro Mac. Can it be used with LM Studio?
I haven't tried it yet. I see some GGUF Grok models on Hugging Face. Give it a shot. But I would not use a file that size on an SD card. I would use a file smaller than 15 GB on an internal SSD drive; otherwise you will have to wait two days to get a response to "Hello."
@@videotronicmaker It's an external Thunderbolt 4 drive, newest version, with a throughput of 40 Gb/sec, and an external 2 TB NVMe at 20 Gb/sec... not an SD... LOL
Typo? If you have a 334 GB LLM then you'll need to run it on cloud hardware, because otherwise you'd need 334 GB of VRAM or RAM to run it locally. I tried a 60 GB model and it was generating one token per minute. Not per second, per minute 😂
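The arithmetic behind that comment is worth making explicit: the memory needed just to hold a model's weights is roughly the parameter count times the bytes per parameter. A back-of-the-envelope sketch (the numbers are common rules of thumb, not exact figures for any specific model):

```python
# Rough weight-memory estimate: params (billions) x bytes per parameter gives GB.
# fp16 uses ~2 bytes/param; 4-bit quantization ~0.5 bytes/param.
# This ignores the KV cache and runtime overhead, so real usage is higher.

def approx_weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GB needed just to hold the model weights."""
    return params_billions * bytes_per_param
```

So a 70B model needs on the order of 140 GB in fp16 but only about 35 GB at 4-bit, which is why quantized GGUF files are what people actually run locally.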
Do we have to install the openai package? I mean we are running locally, why do we need that?
As far as I know, yes. If you're using software that incorporates the OpenAI library to run a local inference server, even if you're not directly using OpenAI's models, there can still be benefits. For example, the software might leverage the OpenAI library for compatibility, ease of integration, or additional functionality such as optimized model loading, efficient inference processing, or support for various model architectures. In this scenario, using the OpenAI library within the software streamlines the setup and operation of the local inference server. You might also want to read the GitHub repo info for the openai library, as it contains more information regarding this: github.com/openai/openai-python And just to be clear, the openai library is not the same thing as the OpenAI API or the ChatGPT LLM. The library contains all the other things required to use an LLM. Everything is still local.
This is so helpful!! Thank you for sharing! :)
Glad it was helpful!
Getting an error that system_message.txt is not found.
Make sure you first create the file named system_message.txt. Second, make sure you put the correct path to that file in your code.
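One defensive way to handle both points in that reply is to create the file with a default if it is missing, and to resolve the path relative to the script rather than the working directory. A small sketch (the default message text is my own placeholder):

```python
# Avoid "system_message.txt not found": create the file if missing, then read it.
from pathlib import Path

DEFAULT_MESSAGE = "You are a helpful assistant."

def read_system_message(path: Path) -> str:
    """Create the file with a default message if it is missing, then return its text."""
    if not path.exists():
        path.write_text(DEFAULT_MESSAGE, encoding="utf-8")
    return path.read_text(encoding="utf-8").strip()

# Typical use inside a script: resolve relative to the script file, not the cwd.
# system_message = read_system_message(Path(__file__).parent / "system_message.txt")
```

Resolving via `Path(__file__).parent` means the script finds the file no matter which directory you launch Python from, which is the usual cause of this error.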
Is there the possibility of choosing in which language the model responds to me? greetings!
As far as I know there are two steps you can take to accomplish this. First, choose a model that speaks your preferred language and then speak to the model in that language. Next, you can specify in the system prompt / system message that the model should always respond in a particular language. I have not attempted this yet because I do not speak any language other than English; however, this would be my first approach if I wanted to do this.
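The system-prompt approach mentioned above is easy to sketch. The exact wording of the instruction is my own example; any model that saw enough of the target language in training should follow it reasonably well:

```python
# Sketch: bake a response-language rule into the system message.

def make_system_message(language: str, base: str = "You are a helpful assistant.") -> str:
    """Append an always-respond-in-<language> rule to the base system message."""
    return f"{base} Always respond in {language}, regardless of the language of the question."

# This string would go into system_message.txt or the "system" role field, e.g.:
spanish_system = make_system_message("Spanish")
```

Small models follow such instructions less reliably than large ones, so pairing this with a model known to handle the language remains the first step.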
Great video, man. I believe the fix for the issue you had with the original code is simply to install the openai third-party library using "pip install openai" in your VS Code terminal. That worked for me.
Thanks for the comment. I tried that. For some reason it didn't work. I may try one more time. I definitely have that library installed in my env.
Bro you are the best, keep making videos,
Amazing!!!
Thank you! OpenAI changed the way you interact with their key when using the API, so most videos around YouTube from 2-3 months ago are missing how to use your key, especially when using a local machine setup such as LM Studio. Great help!
I'm glad to know I'm not the only one that was having trouble with that. I'm starting to think it was just me.
Ask the Gemma 2B model this question: "Who is the first person to set foot on the moon?" You will get an amazing answer 😂
That was funny! Thanks for the laugh. Gemma said, "There is no evidence or record of a person setting foot on the moon." Well...at least it's consistent. 🤣
@@videotronicmaker Yeah 😂, maybe Google has begun to use data from an alternate universe to train these AI models. That's why Google's AI products have been behaving so weirdly recently.
No matter what I change in the Hugging Face model hyperparameters via LM Studio, it responds the same. So that is probably my error, because when I go on Vertex AI, choose the Gemma model and use these settings on the right where it says "Try out Gemma" (temperature = 0.8, top-p = 0.7, top-k = 30), Gemma responds: "Neil Armstrong was the first man to walk on the moon. So the answer is Neil Armstrong. I hope this helps! Please let me know if you have any further questions. Best regards, The Answer Guy. Additional information: Neil Armstrong was born in 1930 and was an American astronaut. He was the first human to set foot on the moon on July 20, 1969. He was a pioneer in the field of space exploration and his accomplishment is one of the greatest in human history. I hope this additional information is helpful." Here is the link to where I got that response: console.cloud.google.com/vertex-ai/publishers/google/model-garden/
@@videotronicmaker Thank you so much for the answer, I still haven't tried to use it on vertex AI, so I will try it now. Thanks!
Nope, I was wrong. I think I had the 7B selected in Vertex AI. I tried again to verify, and no matter what, it said "...no evidence...". For now, here is my conclusion. Google lists these use cases under content creation and communication: text generation (creative text formats such as poems, scripts, code, marketing copy and email drafts), chatbots and conversational AI (conversational interfaces for customer service, virtual assistants, or interactive applications), and text summarization (concise summaries of a text corpus, research papers, or reports). I don't think the 2B models are trained on enough data to know the answer. The 7B does. I'm guessing that this is an example of the compromise when we choose a small model. I am making a video of my user experience now.
OMG bro, I wouldn't have noticed such a thing between versions. Thanks a lot; I was really sad that I wasn't able to run the 2B model 💀 Now everything is okay.
I tried it yesterday on the Hugging Face playground with the 8K model, and the result was pretty bad.
I definitely understand. The way I look at it is that it's not going to perform at the level of GPT 3.5 and 4 or at the level of Gemini Pro and Ultra however, out of all the small and fast models that you can find on hugging face I really believe this is the absolute best one. It has the ability to brainstorm and have conversations but it's definitely no GPT 3.5. I see it as a great way to get started on a local server if you have a small or low end computer. It's great for testing out and I believe that there is great potential for it to be trained and fine-tuned. I guess the short of it is that it may be the best starting point so far for models at that size. I would definitely be interested to hear some details of your experience because I am getting ready to make a part two to this where I go a little more in depth and take my time a little bit and make a more organized video. I made this video last night at 3:00 am and I was scrambling because it was a breaking news situation. Today I want to give it a little more time and use the q8 version.
How to use it?
It's coming out soon. Not released yet.
Wow, everything in the video looked so real yet they were all created by Open AI, it's amazing. Can't wait for more content!!
Thanks for your dedication, your content is top level.
Pretty cool stuff, and maybe as we generate more options there will be a way to build an open-source TTS model for the right job. Thanks.
All instructions and files are in the GitHub repo linked in the description.
Is there a shield like this, with access to the digital pins, for the Mega 2560 that you know of?
ElectroCookie has this one for $15. It's a 2 pack! I made a video about ElectroCookie shields for the Uno. They make good shields. This seems to be the best answer: www.amazon.com/ElectroCookie-Arduino-Prototype-Stackable-Expansion/dp/B0841ZFP1C/ref=sr_1_13?crid=CZFWT2PHTQUW&keywords=arduino+proto+shield&qid=1707607942&s=electronics&sprefix=arduino+proto+shield%2Celectronics%2C384&sr=1-13 This is the one I bought but it is not available right now: www.amazon.com/gp/product/B0169WHVGS/ref=ppx_yo_dt_b_search_asin_image?ie=UTF8&psc=1 Elegoo has this but it's a kit and costs $65: www.amazon.com/EL-KIT-008-Project-Complete-Ultimate-TUTORIAL/dp/B01EWNUUUA/ref=sr_1_1_sspa?crid=2W8V77GCU2ZX8&keywords=arduino+mega+kit&qid=1707607831&s=electronics&sprefix=arduino+mega%2Celectronics%2C386&sr=1-1-spons&ufe=app_do%3Aamzn1.fos.006c50ae-5d4c-4777-9bc0-4513d670b6bc&sp_csd=d2lkZ2V0TmFtZT1zcF9hdGY&psc=1
Shield without headers soldered on. store-usa.arduino.cc/products/arduino-mega-proto-shield-rev3-pcb
I have been trying throughout the day to install the Gemini app and I just saw that they have removed it from the Play store. I am on the east coast USA.
Thank you.....Very helpful.
Glad it was helpful!
I like the approach of your content. Congrats!
I appreciate that!
Great video! This really helped me a lot especially since I've been looking for a more detailed explanation!
Glad to hear it!
Cool bro, please integrate it with an AI-style TTS instead of the local TTS. Thanks!!
Definitely. Coming shortly.
I found this video very helpful!!
Glad I could be of service ;-)
Here you go. Better natural language processing. Latest video with code: ruclips.net/video/lSnyTC3lh5o/видео.html
It came out almost the same as my recording. You were in Stuart and I live in Port St. Lucie.
Yes. I saw your video. It appears that my shot starts just after yours ended. It was a very unique thing to see in the sky. It was unlike anything I ever saw before.
Thank you very much, I'm just starting with Visuino. A much-needed tutorial.
You are very welcome.