This is the first tutorial I see that includes the "uninstall" part :D
what are you doing here
Hey it's very useful cause programs that require multiple parts to work are a pain to fully remove.
This has to be one of the most straightforward tutorials for deepseek ever released
I'd let the penguin stay 🐧❤
Never
@@NxVernxual whats wrong with linux?
He's just scared of typing @@KillerKannibal
@@KillerKannibal nothing, it's just not fully suitable for everyone and not everyone needs it
@@NxVernxual you do need it.. windows server is linux just like pretty much all servers
A "how to" video that also shows how to properly get rid of all the stuff you've installed. You have my respect.
i didn't expect bog to do this
Me too
those views are tempting to anyone
@@Troyless is it a bad thing he made a video on an ai model?
I was already waiting, true bog fan ❤
What is wrong with Bog doing this?
first time seeing 'how to ' video that good
0:49 AHH!
1:36 AHH!^4
2:24 AHH!^16
😂😂
Where can I find this actual video ?😂
Can someone tell me from which penguinZ0 video this is taken from?
@@xninja2369 The video called "This Is Actually Scary" by penguinz0 at 0 min 12 secs
An instructional guide that not only teaches but also ensures a clean slate by covering proper uninstallation. Kudos to you.
bro fr, upload more, you are the only youtube channel i love to watch and laugh at the same time. You are amazing brother
damn he's speedy
I have watched like 3-4 of your videos because you keep appearing in my feed, but this is the one that I was like okay -- why am I not subscribed to him yet?
This is the standard that every tutorial needs to be held to from here on out. Like actually. Whenever I want a tutorial from now on, I'm just going to e-mail you to ask if you can make one. Thank you. A million times thank you.
I was waiting for you to drop another video. I loved your Arch Linux video series. I also love your personality.
you are a blessing. didn't think of getting this in my homelab until now. thank you
finally YOU made a video on it im happy
I was waiting and you delivered
Really good video, straight to instructions for installation and uninstall of program.
Bro is gonna become a full time linux user and coder within a few months 🗿🗿
I love this, straight to the point
Threw it on my computer as soon as I heard about how easy it was to run. It's so cool to run something like this on my mid pc.
this is the most beautiful tutorial ever
i hope you made more
Our boy bog will soon be the largest youtube channel on the youtubes.
Another banger video from Bog, and the day gets better instantly
crisp and concise from start to end.
Charlie's Cameo really changed the video 😂
I am waiting for someone to install the 404-gig version of deepseek
someone already did by combining a bunch of macs together
I think you would need a few graphics cards for it.
I was waiting for an Indian to make this tutorial.
Love it, i am actually trying to make an LLM chatbot (open source) and it will help me learn too. Thanks!
This is beautiful work; I would love it if you showed us more things we can do once we have it offline.
3:43 I actually have this on because i like the penguin :)
Sir you are one of the best No bs creators out there.
I love the pacing and the style and information I get from your videos. Its just perfect for the kinda learning style I have.
straight to the point what a chad
Would you do "The blender experience" video?
That is so clear and concise!!
Great tutorial! Thank you :D
i really liked that you showed the steps to uninstall the programs
A tip for anyone seeking to download LLMs: They run purely off your VRAM. The more VRAM you have, the more parameters you can have. If the LLM is running sluggishly, then tone it down to a less powerful model.
You can run it on RAM too if you have enough of it, but it's going to be 50x slower
@@ogstringer It's both. More VRAM usually means more powerful gpu though.
@@ogstringer well, more VRAM naturally means a better GPU. Even if not, LLMs will run better on Nvidia GPUs than on AMD or Intel GPUs because of their CUDA support. Meaning even if two GPUs have equal VRAM, LLMs will run better on the Nvidia one
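To build on the tip above: a rough way to sanity-check whether a model fits in your VRAM is to multiply the parameter count by the bytes per weight of the quantization you download. This is a ballpark sketch, not an exact formula — the ~20% overhead factor for KV cache and activations is an assumption:

```python
def approx_vram_gb(params_billion: float, bits_per_weight: int = 4,
                   overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight size at the given quantization,
    plus ~20% headroom for KV cache and activations (ballpark only)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params @ 8-bit = 1 GB
    return round(weight_gb * overhead, 1)

# Common distilled DeepSeek-R1 sizes at a typical 4-bit quantization:
for b in (1.5, 7, 8, 14, 32, 70):
    print(f"{b}B -> ~{approx_vram_gb(b)} GB")
```

By this estimate the 7B model needs around 4 GB and the 32B around 19 GB, which matches the advice in this thread: if it runs sluggishly, step down to a smaller model.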
AHHH! You can't scare me like that with the AI jumpscares. 😖 spoopy
Nice tutorial, and penguin
i don't understand why anyone would dislike this video, AIIII!!
the "beautiful" at the end is dangerous
Nice tutorial! Should note however this isn't the full 600-something-B R1 model but a smaller model distilled from it. Running the full model at ~1 tok/s or more is challenging on consumer hardware.
it's a great time to start on an archlinux quest. many would love to join!
Finally a Bog video coming ❤ 😅 after waiting so long 😅😅...🎉
i miss those crazy hand movements
Could you do a Mac version?
on macOS you can use fullmoon, which is an Ollama wrapper
Hi!
It would be cool to see a video, where you struggle to learn assembly, trying to build a snake in it or something
Actually, only the 671B version is deepseek-r1; the rest are distilled models (you can't selfhost R1 on a non-server-spec PC)
is the 4 gig version good tho? like GPT 3.5? is it fast enough on a normal PC?
@@hashflame if u have 16 gigs of ram u can use the 8b model
@@hashflame In that case, the fewer parameters (B), the dumber the model. When I tested 7B, it was dumb as hell. But I didn't dig much. Test it yourself, it's free
Even the 32b model is quite bad
straight to the point
This is fairly easy. I was scared by the thought of running ai locally. I thought it would require much more technical knowledge
blender tutorial pls
I actually kind of needed this cause im dum an ai smart :D
Kinda crazy this is how I did it word for word before you even posted
Now THIS is tutorial-making
Love from iran❤❤❤
Good job bro
I didn't expect u to do this
Bog our man
Makes sense honestly
Bro just deported the poor penguin 😥
Gotta hate the penguin's in my file explorer man
happens to me all the time! like get outta here penguins!!1!
bros converting himself to a software developer
Getting blessed by a Bog upload ✨
I was waiting for this lol
Bro is giving tough competition to Indian youtubers by skipping the formal intro and outro and going straight to the content
thank you boggy woggy
@@landenmp4 please never speak again 😭
please speak all the time
Didn't expect you on windows after all those arch vids
you started to use windows again and I didn't even notice.
hey, would it be cool if you made a tutorial on how to setup immich? all the tutorials ive seen arent helpful to me and its seems confusing to setup. thanks!
You forgot to mention just one more thing
Another simpler option is LM Studio, which has full AMD GPU support, including ROCm
I was expecting bog to do this
Why not the LM Studio?
Dude thanks!! I wasn't able to get rid of ther penguin ❤.
So this is how Neo felt
This is one of the best guides I have seen so far.
It doesn't work for me with an AMD GPU though haha.
Try LM Studio instead, works on my 6900XT
why does ur computer look so different, and much better, you should make a video on how to make it look like urs
This is a great tutorial, but who exactly is this for?
and how does this differ than using their website or app interface (other than the reasons you mentioned in your video)
Bog why not play another rage inducing game called "getting over it"
I'm still recovering from Jump King
@@bogxd ptsd moment
So easy its almost criminal. Compare this to setting up models a few years ago with all the python libraries and incompatibilities.
Thanks
Ohhhh 👻spooky Chinese AI ohhhhhh 👻
this is such a good video to demystify the misinfo about deepseek.
would've been nice if you could've shown how to use deepseek with web search
Dear Mr. Bog, I love your videos and I'm interested in how you're able to know this kind of stuff about tech. I really want to build this kind of tech knowledge but I have no idea where to start.
If it's not too much, would you recommend where I should start🙏
Or, if you use a real operating system, you can just install the Alpaca flatpak.
I was so excited to see you do it on Arch 😅
How is this not a 40 mins long vid?
Could you do a video where you attempt a Linux From Scratch install? I want to do the same, and I think you would make for a good guide to follow.
Thanks I can now learn about what happened in 1989
bro, did u use some debloat for ur win11? it looks so clean and neat!
A bit late on the hype...
But it's Bog we're talking about 🔥
Noooo, the penguin!!!
the whale is the mascot of deepseek because it looks like Xi Jinping
note: No account needed for docker
Is there a way to also see the thinking process in web ui or does it just disregard it?
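On the question above: R1-style models emit their reasoning between `<think>…</think>` tags in the raw output, so if a frontend hides it you can split it out yourself. A minimal sketch — the tag format is what these models produce, but the helper name is made up:

```python
import re

def split_thinking(raw: str) -> tuple[str, str]:
    """Separate the <think>...</think> reasoning block from the final answer."""
    m = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    thinking = m.group(1).strip() if m else ""
    answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return thinking, answer

raw = "<think>The user asked 2+2. That is 4.</think>\n2 + 2 = 4."
thinking, answer = split_thinking(raw)
print(thinking)  # The user asked 2+2. That is 4.
print(answer)    # 2 + 2 = 4.
```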
Deepseek: having drama
Bog: lemmie test it
I never noticed that the Deepseek logo and the Chinese map looked so familiar
"dont worry we will do it part by part" got me
Thanks! Now does anyone have a similar tutorial for mac users?
The penguin 🐧 😢
You didn't select the same model in the web version (hosted by the Chinese); you have to toggle DeepThink, and it will be slow too! (and much better than what you run locally)
Isn't using a complete Docker container just to run a local webserver a bit much? A virtual/containerized machine taking gigabytes of disk space for that, lol.
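If the Docker container feels like overkill, you can skip the web UI entirely and talk to Ollama's local HTTP API directly (it listens on localhost:11434 by default, and `/api/generate` is its documented endpoint). A minimal sketch — the model tag is whichever distill you pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local API

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate (stream=False -> one JSON reply)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running `ollama serve` with the model pulled):
# print(ask("deepseek-r1:7b", "Why is the sky blue? One sentence."))
```

No gigabytes of container image needed — just the stdlib and the Ollama daemon the video already installs.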
im going to use deepseek to create art