The goat. Explaining in 10 minutes what others can't in 1 hour
That's the goal! Glad you like it! :)
this is the intro video to hugging face I was looking for. having limited knowledge on AI and ML this video helped me answer so many questions I had about hugging face. thank you. ♥
Just found your channel, completed the Matplotlib tutorial, and am now scrolling through the rest. Thank you for these amazing tutorials and videos ❤
Been here for ages. Welcome aboard. N9 has a way of taking the seemingly impossible to navigate and making it look super easy, barely an inconvenience.
I like that you're concise and informative ❤
Thank you! I like your way of explaining things. Just subscribed… keep up the good work.
@trancepriest Thanks! Happy to hear that!
Thank you, my understanding was very blurry, but with your help it seems very clear now.
Really helpful and detailed!
Thanks a lot, that's better than any other explanation video on Hugging Face lol. I did not understand what it was or how it could be useful, but now I understand, so thanks.
Thanks a lot. I see that HF also acts as a sort of host for the applications we create on it. For example, we can create a model, let's say for changing the contrast of an image, right? Now the question: since HF also hosts these models, how can we use them inside our own website when the Space we create is private?
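One way this is commonly done, sketched under the assumption that the private Space exposes a Gradio interface: authenticate with your own Hugging Face access token and call the Space from your backend via gradio_client. The Space name and input signature below are hypothetical.

```python
# Hedged sketch: calling a *private* Hugging Face Space from your own backend.
# "your-username/contrast-demo" and the single-image signature are hypothetical;
# replace them with your actual Space and its real inputs/outputs.
from gradio_client import Client, handle_file

client = Client(
    "your-username/contrast-demo",  # hypothetical private Space
    hf_token="hf_...",              # a token that has access to that Space
)

# The argument list depends entirely on how the Space's interface is defined;
# here we assume a single image input exposed under the default /predict route.
result = client.predict(handle_file("input.jpg"), api_name="/predict")
print(result)
```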
I heard about this and didn't know how to use it, super cool.
Nice video! Straight to the point! Thanks for this.
Thank you, it was a great learning experience.
Interesting video. If I have no need for StableDiffusionPipeline from the first example, how do I delete it from my local machine? In my case I use an M1 Mac. When you first run the Python code, the model gets downloaded to your machine.
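For anyone with the same question: the weights end up in the Hugging Face cache (usually ~/.cache/huggingface/hub). A minimal sketch for finding and deleting them with the huggingface_hub cache utilities:

```python
# Hedged sketch: inspect the local Hugging Face cache and delete cached revisions
# of a model you no longer need (e.g. the Stable Diffusion weights).
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()  # defaults to ~/.cache/huggingface/hub
for repo in cache.repos:
    print(repo.repo_id, f"{repo.size_on_disk / 1e9:.2f} GB")

# Collect the revision hashes of the repos you want to remove, then delete them.
to_delete = [
    rev.commit_hash
    for repo in cache.repos
    if "stable-diffusion" in repo.repo_id
    for rev in repo.revisions
]
strategy = cache.delete_revisions(*to_delete)
print("Will free:", strategy.expected_freed_size_str)
strategy.execute()
```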
This channel has the best intro music 🎶
MC_LOOPER - "Reaching" (inst.)
CUDA means you have to have an NVIDIA GPU. If you have an Intel GPU like the Arc A770 16GB, then you use Intel IPEX-LLM, and that requires coding, not just copy-paste off HF.
But the advantage of that is you can leverage your fast 7400 MB/s NVMe SSD and its 5 TB to run large models... RAM can also be employed, so you are not limited to GPU VRAM.
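For readers wondering how to handle this in code: a minimal sketch of picking whichever accelerator is available (CUDA on NVIDIA, MPS on an M1/M2 Mac, otherwise CPU) before moving a pipeline to it. The model ID is just an example.

```python
# Hedged sketch: choose an available device instead of hard-coding "cuda".
import torch
from diffusers import StableDiffusionPipeline

if torch.cuda.is_available():
    device = "cuda"  # NVIDIA GPU
elif torch.backends.mps.is_available():
    device = "mps"   # Apple Silicon (M1/M2)
else:
    device = "cpu"   # fallback; slow but works everywhere

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # example model ID
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)

image = pipe("an astronaut riding a horse").images[0]
image.save("output.png")
```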
Thank you very much for this very clear exploration.
I have a question about Microsoft's TTS model: given that it only supports certain languages, how can it be used for a language like Darija from Morocco? The words are written in the French alphabet but the pronunciation differs. Can you help me? Any references?
Thank you.
Does your vidstream Python module also have a way to mirror mouse movements and/or keystrokes when screen sharing? That would be awesome; if not, I'll have to make my own.
I want videos related to GenAI using Hugging Face open-source models. If possible I want to learn this from your channel. Please make content around this.
Hello. Can anyone recommend a text summarization model that is lightweight or fast?
Need one for a school project
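One possible answer, as a hedged sketch rather than a definitive recommendation: a distilled BART checkpoint such as sshleifer/distilbart-cnn-12-6 is relatively small and runs acceptably on CPU.

```python
# Hedged sketch: a lightweight summarization pipeline.
# sshleifer/distilbart-cnn-12-6 is a distilled BART checkpoint; smaller options
# exist too (e.g. t5-small with the same pipeline API).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "Hugging Face hosts thousands of pretrained models, datasets and demo apps. "
    "The transformers library lets you load most of them with a few lines of "
    "code and run them locally or in the cloud."
)
summary = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```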
Thank you. Just what I needed
Great information! Thank you!
Amazing vid! Does it make sense to try to "recreate" an LLM using an HF model running on our own hardware? Is that even possible, or are these models meant to be small components of an app?
Thank you, I am a newbie to all of this.
Does this download the model every time you execute the code, or is there any way I can provide a local directory and use the code?
You can download the model to a folder of your choice. I found using the CLI (command-line interface, aka the terminal) to be the most reliable way to download models from HF.
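To make that concrete, a minimal sketch: download the files once into a folder of your choice with snapshot_download, then point the pipeline at that folder. The repo ID and paths below are just examples.

```python
# Hedged sketch: download a model once into a folder of your choice,
# then load it from disk so the code doesn't re-download it on every run.
from huggingface_hub import snapshot_download
from transformers import pipeline

local_path = snapshot_download(
    repo_id="distilbert-base-uncased-finetuned-sst-2-english",  # example repo
    local_dir="./models/sst2",  # folder of your choice
)

clf = pipeline(
    "sentiment-analysis",
    model=local_path,  # load from the local folder instead of the Hub
)
print(clf("Hugging Face makes this pretty painless."))
```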
Man, how did you learn all this and coding? I want to be proficient and just know how to do most basic functions. Of course I'm using AI to get me through for the most part; I still don't exactly understand why or where it came from.
Getting into Stable Diffusion (via ComfyUI) and recently LLMs (via Ollama) introduced me to Hugging Face about a year ago, and I had downloaded several models from there, but until now the rest of it had me totally intimidated (it seemed so far above my meager programming chops and egghead power that I really didn't explore it any further than that).
Fear is the mind killer. Now that I grok how easy it really is... time to go break some eggs!
Thanks yet again. You continue to keep my mind perpetually blown, showing us potential we didn't even know existed.
🖖👽👍
Super helpful, thank you!
What machine do you have? I have a ThinkPad with an integrated Intel GPU. I also use WSL with 10 GB of memory. I can't even run the text-to-image code.
A desktop PC with a 3060 Ti and 32 GB of RAM. If you are working on a laptop, using cloud platforms might make more sense.
Very clear and useful. Thank you!
Shouldn't we install CUDA and cuDNN to run this on the GPU?
wow, incredible content. Subscribed
different video but much appreciated
I wonder if there is a free usage limit on these models? Or can we run them on our own GPU without any limits?
How can I deploy the models?
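One common route, sketched here under the assumption that a Gradio app is acceptable: wrap the model in a small app.py and push it to a Hugging Face Space, which then serves it over the web.

```python
# Hedged sketch: a minimal Gradio app around a transformers pipeline.
# Save as app.py and push it to a Hugging Face Space to deploy it.
import gradio as gr
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default model


def predict(text: str) -> str:
    # Run the classifier and format the top label with its confidence score.
    result = classifier(text)[0]
    return f"{result['label']} ({result['score']:.2f})"


demo = gr.Interface(fn=predict, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```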
Hey. My GPU is Intel. I installed accelerate, but I still need to use the CPU, which takes forever. Any idea how I can make use of my Intel GPU?
Super Helpful video
Clear and useful 🥰
Great video 💯 Thank you :)
This is amazing!!!!
Thank you, man!
Can we use those datasets for our portfolio project?
Yes. Most things on HF are openly licensed, but check the license on each dataset's page first.
Thanks that was informative
Whoa dude, you didn't create a virtual environment. You should probably tell folks to create a virtual env first so they don't clutter their machine and screw up other projects' dependencies, no?
Yes, he is not a serious coder....
So... being an ML beginner, should I create a virtual env?
@TomSaw_de Yes, to be on the safe side...
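For anyone following that advice, a minimal sketch using the standard-library venv module; the folder name .venv is just a convention, and the usual terminal route (python -m venv .venv, then activating it) does the same thing.

```python
# Hedged sketch: create an isolated environment with the standard-library venv
# module, so project dependencies don't leak into your global Python install.
# Terminal equivalent:  python -m venv .venv   then   source .venv/bin/activate
import venv

venv.create(".venv", with_pip=True)  # creates ./.venv with its own pip
```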
Great video thanks
Good overview
Thanks for teaching
Thank you very much! 👏
Thanks for the video 😊❤️
That repository, runwayml/stable-diffusion-v1-5, is gone.
Like a Play Store, but with code that you can work with.
Funny how this isn't basic enough for me...the search continues...
Thanks
yo
❤❤❤