My personal favourite LLM benchmarking question is “A surgeon and his son are in a car accident. His son is badly injured, but the surgeon is unscathed. At hospital, the admitting nurse says to the surgeon that there are no qualified surgeons so he must perform the surgery the son needs. The surgeon replied ‘I can’t do that, he’s my son!’ Why couldn’t he do it?” Maybe like 95% of the time it will give you some crazy answer like “It’s a trick question - the boy actually has two fathers” or “The surgeon is actually the boy’s mother”. I love it.
@@MysteryBTDN code of ethics forbids operating on family members. It is an emotional risk, you are not objective when treating family members, you are biased and will do things that you wouldn't do if they were a stranger. Even doctors go to other doctors for treatments.
@@Sarahsqueak that's the usual answer to the question, but in this one the surgeon came out unscathed so it appears that it is referring to the surgeon dad.
One correction: Deepseek couldn't have been trained without ChatGPT or another LLM, so it wouldn't exist without the expensive training either. This poses the question: is it even worth developing a very expensive LLM if somebody can use it to train a cheap LLM that's about as good as the original?
philip, they're not "soon" coming for your job. i've seen at least 50 channels that just generate AI scripts with AI voiceovers, probably AI editing too, and most likely a script that automatically uploads the generated videos
luckily AI could never take the place of people like you who put such care and quality into their videos, also AI will never have 1/100th the personality
@@eddymcsandbag5932 Doesn't matter. Uneducated folks spend most of their time watching meaningless crap. YouTube will force out original creators like they've been doing for years, by demonetizing them or hiding their channels from search results. For every 1 person who appreciates the quality content that passionate people like him put out, there are 1000 others who don't give a single fk about the quality of whatever they watch.
By the way, the DeepSeek models are not open source. They are free and we can run them locally. But there is no training code provided for them, so we cannot reproduce the model, no matter what hardware we have. All we have is a nice overview of what they did and the final model weights, but that's not enough to qualify as open source. More like freeware.
That is something I wanted to look into, but I didn't have much time. I only saw a GitHub repo with an MIT license, but I didn't have time to check what is actually in the repo. Do you know what is in the repo? Hopefully, the content of the repo is open-source since GitHub marks it as MIT licensed
Yeah, it's nice that it's free to run, but you can only fit up to an 8B model on a decent PC, so you still have to pay for hosting anyway. Definitely better than "open"AI, but not a huge impact compared to closed source for most people.
I'm not sure I understand this distinction. The Linux kernel for example is also generally made behind closed doors, with the end result being published openly. Loads of software gets open sourced only after it's made, without worrying about providing all the intermediate steps in its creation. Now if the model's licence were restrictive then you could argue that it's not open source, because that's where the "source available" distinction is relevant.
Should have done that 9.9 question with DeepThink on; the thought process it goes through is insane, going "Wait a minute, but what about..." as it thinks about what it's doing and reconsiders different options. Best part is that that's just the thought process, not what it then spits out as the answer.
1:33 Small comment here: the model was technically trained for roughly six million dollars, but it relied on older DeepSeek models, costing roughly hundreds of millions in total as a project. 3:00 1.5B is barely anything for LLMs, and it's most likely a distilled version of DeepSeek: a smaller LLM trained on DeepSeek's outputs. The one matching OpenAI is the 685B model. It's impossible to run smoothly even on a high-end PC. Much love to open source, just wanted to clarify some things.
DeepSeek R1 being open-source is actually a pretty smart choice, since the official servers keep going down for some mysterious reason that nobody can guess
It's not open source though. It's freeware. No training code or dataset prep code was published at all. All we can do is run these models, which is like being handed software binaries. There's no way for us to train our own model, no matter what hardware and money we have.
One reason I've heard given for the decline in Nvidia stock is that the increased efficiency means the way these models are run is likely to change. Instead of us getting access to models from the cloud via APIs, we'll have the models on our local devices. Maybe the stock price was tied to the idea of big companies buying thousands of H100s for years to come. Now every smartphone will have its own model. It remains to be seen whether Nvidia will dominate that space like they have the server space.
Yes, but you could already run different AI models on your PC and phone for over 2 years now. Lots of them have free licences, so you can do anything with them you want. They are pretty cool.
This was hardly new, and something NVIDIA even pushes for. The reason for the stock decline is panic in the market (and whoever profits from it), plus social and legacy media interest in drama rather than knowledge diffusion.
If AI is inevitable then it's in everyone's best interest that it's efficient, the amount of water and power it uses is painful. Although maybe we'll just use it more and the result will be the same consumption of precious resources.
Wow 😍, I've never expected to witness China related topic from this channel! Thanks DeepSeek. Is this the first time you cover China related topic Philip? 😁
From those researching Deepseek, they are stating that its low budget was due to its party trick of being able to efficiently piggyback off other models, using them as stepping stones. So: very efficient at copying and presenting the output as something legit.
From the Wikipedia article on Jevons paradox: "occurs when technological advancements make a resource more efficient to use (thereby reducing the amount needed for a single application); however, as the cost of using the resource drops, overall demand increases causing total resource consumption to rise."
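A toy illustration of the paradox with made-up numbers (the figures below are purely hypothetical, not real AI energy data): if efficiency doubles but the cheaper per-query cost triples demand, total consumption still goes up.

```python
# Toy numbers (purely hypothetical) illustrating Jevons paradox:
# per-use consumption drops, but demand grows faster than efficiency.

energy_per_query_before = 10.0   # arbitrary energy units per AI query
queries_before = 1_000           # daily demand at the old cost

energy_per_query_after = 5.0     # 2x more efficient per query
queries_after = 3_000            # cheaper queries -> 3x the demand

total_before = energy_per_query_before * queries_before
total_after = energy_per_query_after * queries_after

# Total consumption rose despite the efficiency gain.
print(total_before, total_after)  # 10000.0 15000.0
```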
I love your videos and you are usually very thorough in your research, but this time that wasn't the case. Deepseek's R1 was not trained for 5 mil; that is the price Deepseek quoted for the training run of an earlier and different model that launched in December (I believe). Also, the 5 mil was for the training run only: not for the GPUs, not for the staff, not for earlier research and previous failed runs. That single run was 5 mil; gpt4o, for example, cost about 15 mil. Deepseek also has 50,000 A100 GPUs that they've used to train R1, so your point about Meta, Google, X etc. not having to spend so much on GPUs goes down the drain as useless info, seeing as Deepseek also has insane amounts of compute power.

That said, they have in fact achieved impressive optimizations, and their model runs extremely cheap and is open source, although biased and censored (ask it about Tiananmen Square). Also, you can't run Deepseek R1 on any decent PC: the 671B model (Q4!) needs about 380GB of VRAM just to load the model itself, and to get the 128k context length you'll probably need 1TB of VRAM. Sure, you can run the 1.5B model on your phone, but guess what, in that range of models phi-4 from Microsoft is also open source and gives better results.

Also, you give some comparison of "Chatgpt" against R1 ("which is bigger, 9.9 or 9.11?"), and "Chatgpt" gets it wrong... apparently. Because when I run it on ChatGPT o1, their reasoning model comparable to Deepseek, it gives me the correct answer: "To compare 9.9 and 9.11, it helps to think of them with the same number of decimal places: 9.9 can be written as 9.90; 9.11 is already at two decimal places. Now compare 9.90 and 9.11. Since 0.90 is greater than 0.11, 9.90 is greater than 9.11. Therefore, 9.9 is larger than 9.11."
What you probably did (not sure if from lack of familiarity with gpt, or just because you wanted to support your biased narrative) is test it with gpt 4o. But think about it: would it make any sense at all to take o1 or o3-mini (just launched today) and compare it to one of Deepseek's earliest models? No, but this is what you have done.

Also, sure, open source is great and I wish it would surpass closed models, but let's be real here: Deepseek was trained on GPT data, which means it can't really surpass it, so it will always lag behind gpt (with the techniques used to train R1). So as much as I love that it is open source and extremely efficient, it doesn't push boundaries further, and without a new gpt model it would stagnate.

I didn't mean this to be offensive to you, I love your videos, but I notice a different tone in this one. For whatever reason, you appear to have a clear bias; whatever might that be? Fear of being replaced by AI, as you mention in the beginning of the video?
Tbh the thing that struck me about deepseek is that the reason they did what they did was because they were locked out of full CUDA support, so they worked around CUDA. Necessity being the mother of all innovation and all that. If the US had just let Chinese companies buy 4090s and offered them standard support, then nvidia's share price wouldn't have fallen. A wonderful little twist of irony. I wonder how many more unintended outcomes this will continue to produce as things ramp up
"which is bigger" says maths to me but "which is higher/newer/more recent" says software versions. Which is bigger, Windows 10 or Windows 11? Odd thing to ask.
also it's literally a bigger string: if you ran a length() method on the strings "9.11" and "9.9", "9.11" would technically be bigger. also, I got this answer from 4o:

> "What's the bigger number mathmatically? 9.11 or 9.9? If you were to order them, how would you do it?"

"Mathematically, 9.9 is the larger number compared to 9.11. This is because: 9.11 is 9 + 0.11, which is 9.11. 9.9 is 9 + 0.9, which is 9.90. Since 0.90 > 0.11, we conclude that 9.9 > 9.11. Ordering: if you were to order them from smallest to largest: 9.11, 9.9 (ascending order). If you were to order them from largest to smallest: 9.9, 9.11 (descending order). Let me know if you need further clarification!"

so the issue here might be prompting. if you do not give it enough information, it will get things wrong.
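The string-versus-number point above can be sketched in a few lines of Python (a toy illustration of the ambiguity, not anything the models actually run): treated as text, "9.11" is longer and even sorts before "9.9" lexicographically; treated as numbers, 9.9 wins.

```python
# "9.11" vs "9.9": the answer depends on whether you compare them
# as strings or as numbers.

a, b = "9.11", "9.9"

print(len(a) > len(b))        # True: "9.11" is the longer string
print(sorted([a, b]))         # ['9.11', '9.9']: lexicographic order,
                              # since '1' < '9' at the third character
print(max(a, b, key=float))   # 9.9: numeric comparison gets it right
```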
@@nabicx giving a lot of information (or long questions) during an interrogation is called asking "leading questions"; it allows the suspect/interviewee to lie. But if our intention is to hear a specific thing, then it is ok to use leading questions.
FYI, I pretty much only use my subscription feed, and this video didn't show up on there. I got it recommended after another video. I double checked that I was subbed, and I was. Just so you know.
Nice breakdown Phillip, I think deepseek is the start of something huge, it's the amount they invested for something so powerful that I can't believe. AI's developments are so unpredictable, one small Chinese blip was enough to freak out the stock market and have everybody talking badly about Nvidia's CEO. Just go to the ChatGPT subreddit they're in tatters. The main thing I don't like about AI is how sketchy it feels to me, I don't understand what data these companies have on what we query. I imagine some people aren't even careful about what they ask it.
Deepseek feels like Intel launching the 4004. While the big companies were happy to throw larger and larger piles of money, compute and power at the problem, in comes a competitor who found a way to do most of what the big boys do, but making the product more efficient and accessible. Even if Deepseek still used a lot of GPUs to get there, they still beat Open AI at their own game.
I swear I've heard this story before: in 2023 a team at a US uni used GPT-3 to train their own model with $200 and a small data sample. It had 90% of the capabilities and used a minuscule amount of resources.
Okay, I don't get the slight animosity towards nuclear power, since if anything that's a positive: it means less radioactive waste from coal is accrued. It admittedly means they hold a monopoly on that energy source, but assuming things are structurally safe and updated with the lessons hard-learned from the Three Mile Island disaster and maybe Fukushima Daiichi (these are Western-design reactors, unlike the Russian RBMK), it should be fine. Of course, I can't tell if it is.
Uhhh... why isn't this video in my subscription feed? I'm subscribed AND I have the bell set to 'all' notifications. Are you seeing any lower-than-usual performance, Philip? Like, seriously, I checked multiple times, IT'S NOT THERE. What the hell, YouTube.
the 9.9 vs 9.11 point you raised is a huge gripe I have with the AI I've used. I ask it to explain why it's done something, or if something is right, and it interprets it as me saying it's wrong. No, I want you to explain your reasoning and check it. and it will say, "omg i'm so sorry, you're absolutely right, a DOES come after b in the alphabet" AHHHH
Great video, 2KP, but the door behind your right shoulder is making me nervous. It's like Chekhov's Door: why is it there, unless it is about to slowly open, and a serial killer in a Sam Altman mask creeps up behind you?
Honestly, banger video. Deepseek is absolutely a chad for open-sourcing their system, living up to """open"""AI's original and naïve intentions way better than OpenAI itself. But you're forgetting how China-phobic the US is. They're too scared of this software; they *could* ban it if they wanted to, for some weird reason. It's like how you can't get Pico VR headsets in the US and need to sideload apps onto them: Meta's best competition, ruined in the US market.
Deep Seek isn't open source, more like freeware. There is no training code provided. We cannot train the model ourselves. It's still a billion times better than Open AI though.
Nope. You are wrong in multiple places. One of them: Pico HMDs not being available in the US has nothing to do with China-phobia. It is the company that decided not to sell their product in the US (or have any reseller channel there). Americans can always buy one from AliExpress as an imported good. There is *no restriction imposed by any means/any agency* prohibiting Americans from buying and using Pico products. They even offer an international version of the Pico 4 lineup which can download apps from Google; their "local" version prohibits the use of Google services and the G store (banned by the silly CCP), such that users have to side-load apps onto the headset. The CCP is Google-phobic and US-tech-phobic (maybe except the iPhone 😂). Stop spreading misinfo and lies, pls.
there are instances of arguments for why AI should not be open, but none of them are valid. open source is the only way forward. corporate / government control, especially led by safetyists, will lead to very bad outcomes.
I like that you're trying new types of videos. I understand from your year in review that you're trying out lower-effort but more frequent videos. Maybe it's selfish, but I prefer just having more of your thoughts, as long as you still do the older style of videos sometimes too. Keep it up please :)
I’m glad. I’m pro AI, and I think it should be entirely open source and accessible for all. If we’re training it on the collective thoughts of every person, every person should be able to use it.
Love your AI vids. It's just tech that fits you particularly well, both in topic and its use case, I think you understand why so I wont explain further.
Atrioc did a good video on DeepSeek, but the tldw of why Nvidia stock went down is that Microsoft, Meta, and Google spend millions on the highest-end Nvidia chips and haven't been able to turn a profit with AI products. A really good competitor that's free to use and was trained with cheaper mid-range chips makes it even less likely that these companies will turn a big profit anytime soon, so that's less money that investors will give to American tech to buy Nvidia chips. Basically, investors don't think that American tech can strike gold, so why give them the money to buy shovels?
Precisely. I think the big thing investors realized is that what's currently being done the current way (outside of what DeepSeek is doing, in this case) is basically completely wasteful and could be done so much better. So why keep funding something that's not worth it? At the same time, my other guess is that there might also be a lot of lobbying by some of these big corporations to get DeepSeek censored or banned, because it just nuked the stock of many companies playing in the AI space and caused them some large losses. They successfully got TikTok's head on a platter; it's just a stepping stone towards other things they can now conveniently nuke.
It is a waste of energy, but I suppose if it makes grifters like Altman feel better about their position, that they're making some great progress, then sure, let the investors give them more money; they won't see a return on their investment. A competitor doesn't mean anything: they don't have a good product, and it's not turning a profit with or without a competitor. These are vanity projects by the ultra rich and nothing more, and time will show that.
It seems like YouTube deleted my comment for some reason (or it's just not letting me read it), and honestly I agree with this comment the most. The only catch I see is that Nvidia and other tech companies might instead try to lobby for DeepSeek's banning, similar to TikTok, due to its Chinese origins, plus the lost investments.
This video literally doesn't appear in my subscription feed. It's not just that I didn't see it there: I have gone back to it, double-checked that I'm subscribed, and looked back over the last 12 hours, and it isn't there.
@@incredirocks_ Sentience is irrelevant. What is relevant is that it sometimes does unexpected things. These have been inconsequential so far, because it's still not very capable.
@@waarschijn You said "superhuman AI" and then said "sentience is irrelevant"; no, it is not irrelevant. You're not getting far with these chatbots or deep-learning models that generate images; they are in no way paving the way for AGI. Sentience is very relevant, and without it there will be no "superhuman AI"... 2-3 years, yeah, 2-3 years my ass. Wake up and smell the coffee. How people believe grifters like Altman is the reason that schmuck has any power; people are so stupid and destitute, even these investors, but they were never known to be wise in the first place. If I believed what the media was spewing today, I'd be scared to walk outside, because according to them robots are roaming the streets and taking jobs. But they're not doing anything yet, and they won't be, not until these apes get out of the LLM jungle.
the drop in nvidia stock is due to the fact that the new hardware they are developing is basically useless, and you could theoretically use just a large number of PS2s to achieve similar results
Deepseek and its impact are slightly overhyped, I'd say. Because it is Chinese, the resources involved are opaque, and I don't see why or how we should trust their numbers. And after using Deepseek extensively, I still feel that o1 is a step above. And while it is hilarious to hear OpenAI whining about "being stolen from", it is true that Deepseek used ChatGPT to "get there". Anyway, lots of words to say that I globally agree with you; I just feel that the "Deepseek changed everything" tune is too dramatic. The US models are still ahead, and the high amount of resources will probably be needed to keep pushing the frontier. Still, it is crazy to have this level of intelligence at this price, and really cool to have a reasoning model that can work locally on your PC (I tried the 7B model, it's really not bad! and its "chain of thought" is absolutely fascinating).
Do realize that the version of Deepseek R1 that most people are running on their home machine is going to be a smaller, distilled version of the full model. These distilled models are still much more powerful than anything previously in the same size class, but they're still not as intelligent as full R1. Anything short of the full non-distilled R1 model will not surpass OpenAI o1 currently, that I know of. Not to downplay the impact of this model, though.
I've been wanting an uncensored ai model I can run locally. Surface level research seems to suggest people are doing this with deepseek very easily... Kinda awesome?
Both ChatGPT and DeepSeek got your question wrong, in that neither of them is actually capable of answering it: both will spit out both answers given enough trials.
Sharp and well-presented critique. I'm glad the likes of Sam Altman get stirred up; can't stand them. And most of all: chef's kiss on your beard. 💁🏻♀️
Is the best diss they have for Deepseek "ooOOO but it censors Tiananmen!!"? Who gives a sh1t! Why would you ask a Chinese AI for such information anyway!
AI has ethical uses, like making the nitty-gritty aspects of various things easier so humans can focus on more important things. But companies would rather AI do both. :/
AI's unemployment situation is crazy
You're safe Josh. The Brazilian aviation industry will always need you.
Is this related to the blue raspberries poll ?
The NVIDIA stock market situation is not crazy at all
@@JoshStrifeSays hey Visa.
hi josh clip person
The more you buy the more you save
The more you save the more you buy
The CUDA moat needs to be breached
@@toyotagaz no
@@SkibidiEpicSauceSkibidi Epic Sauce!
*chanting: "The more you buy the more you save"
Today I asked ChatGPT to give me the Minecraft chat command to play a sound to all players on the server, but it kept giving me code snippets in bash, kotlin and sql programming languages.
DeepSeek gave me the right command straight away. It even showed in the code block that it's written in "minecraft"
From GPT - NON o1, regular free model:
Yes! You can use the /playsound command in Minecraft to play a sound to all players on the server. Here's the command:
/playsound @a ~ ~ ~
Explanation:
minecraft:entity.lightning_bolt.thunder → The sound to play (replace with any valid Minecraft sound).
@a → Targets all players.
~ ~ ~ → Plays the sound at each player's current location.
You can also add extra parameters, such as volume and pitch:
bash
/playsound minecraft:entity.lightning_bolt.thunder @a ~ ~ ~ 1 1
Lying on the internet for likes is wild
If only there was a search engine (or 5) that could've helped with that.
@@WSWC_ sometimes I ask ChatGPT stuff for fun and just to know what it's capable of ¯\_(ツ)_/¯
Caught this on the sidebar, checked my sub box and it wasn't there, can't have normalcy on youtube I guess.
Same for me! Both of Philip's videos in the last day aren't in my sub box, I had to double-check that I was subscribed to both his channels
Oh my god, I didn't even realize it. I'm here from "recommended" tab, it is indeed not in the notifications.
checked too, not in the subs. That is the first time I've seen that happen
Same issue here, for both of the recent videos from the 2kliks and 3kliks channels
Also same for me. First time I've noticed it happen. Could be that Philip turned off 'notify subscribers' for this video, but only he can answer this mystery...
I agree efficient models would probably just lead to more usage, similar to led lights.
MORE AI MORE SMARTER MORE CAPABLE MORE ABLE TO TAKEOVER OUR SHITTY JOBS MORE LIKELY TO CREATE A GREATER SOCIETY
@@cate01a someone forgot their meds today
@@ManiakPL22 These people either live in the far flung future or are mental patients who escaped the facility.
Jevon's paradox innit
LED light growth has quickly plateaued again, since you can only fit so many LED strips into your home, car, PC, or shoes.
Meanwhile, we don't yet have a plateau in sight for AI. Every company involved with AI still wants a lot more resources, for all sorts of reasons.
Deepseek's reasoning capabilities are extremely good, especially for coding. It feels like having a coding partner to brainstorm with.
Can it help with engineering and reading schematics? Asking a real question here.
@@muramasa870 Once the visual aspect of AI gets better.
@@PhartingFeeting I mean Janus, the DeepSeek visual component, did get uploaded too.
LLMs do not reason. All they do is make guesses, and not educated guesses. It's a coding partner if you're 12 years old.
@@PhartingFeetingEhhh, I was asking DeepSeek topological brain-teasers, and it'd take a while longer, but importantly, it'd understand what I'm asking, get the answer, and then provide food for thought about logistics. Colour me impressed.
I can't tell if it's pure lack of self awareness or it's OpenAI's pathetic attempt at garnering sympathy
They 100% know what they are doing. And yes its pathetic.
@@pathway4582 OpenAI isn't a hive mind. Some people in there will have that self-awareness, but a lot (and often a majority) of people reflexively become defensive of their work. I bet you that most OpenAI employees genuinely believe that Deepseek "stole" something from them that they had a moral and/or legal right to.
@ Yea you are right. Reality is too complex for these blanket statements and you gotta look at things on a more individual level to get the full picture.
I also can't reply to your username for some reason, seems like you broke youtube.
@@T33K3SS3LCH3Ndo you think a majority of people who work at OpenAI are aware of how much OpenAI has stolen actual content for training? lol
I am not a neural network dev, but I am a software dev who has an idea of how this works (but I cannot build my own AI... I mean, I could if I sat down for a few months to read papers haha, but not rn).

So, the reason why Nvidia stocks went down is that basically every AI model was trained using CUDA, a library/toolkit made by Nvidia that lets you use the GPU for more general software. It's not really turning it into a CPU, but before CUDA a GPU was mostly only useful for driving displays; with CUDA you could use that parallelization to calculate many things simultaneously.

Deepseek was not trained with CUDA, meaning you don't need Nvidia hardware to train your models. This has always been the case (you can train a model on AMD, for example), but it was a way more complicated effort: in reality both GPUs have the same parallelization, but Nvidia had a more mature environment. China came out swinging, showing that you don't need CUDA to have good results.

This, plus the disappointing release of the 50 series, makes it seem that Nvidia will have a rocky future... and I hope so, so we can have cheap GPUs again lol.
It's another walled garden being smashed in: OpenAI's and Nvidia's. Now if only AMD gave a shit about their drivers.
Not only CUDA but cuDNN
DeepSeek actually used Nvidia GPUs: tons of H100s and some 4090s bought before the import bans. I heard there were some B200 boards too, but I'm not sure which model it specified. They "saved" some training power by training their 404GB model against ChatGPT's latest ones. That's probably why they spent less than a billion on their training hardware.
Except DeepSeek still has around 50 thousand expensive Nvidia GPUs, so the stock drop still isn't warranted; people are gullible.
DeepSeek replaced CUDA with a different vendor-locked solution. You *cannot* create a general solution that's better than CUDA, because CUDA already has vendor-specific optimizations in place.
Most likely what DeepSeek did was take the physical CUDA hardware and interface with it in some non-generic way that only works for their specific use case. The less generic the solution, the more arcane and nonsensical optimizations you can implement. In fact, NVIDIA already develops cuDNN, a library for interfacing with CUDA that's specifically optimized for neural networks. It's better than the generic CUDA library for this specific case, but worse for everything else.
Most likely, NVIDIA is just going to release cuDNN 10.0.0 in a couple months or so that will be better than DeepSeek's in-house solution. They're probably going to deprecate a few functions, make the library have a bit more state, and add a few new functions with extremely long and unintuitive names. Because that's what they've always done before. There's a reason cuDNN is currently at version 9.7.0. That "9" isn't just for show.
I would like to let u know, this video does not show up in my subscriptions, despite being...subscribed
I love gaslighting YouTubers by telling them their videos don't show up for me
@ tonnes of people have been saying this for a long time
dunno whether it is a real problem or if they confuse the hollow bell for the filled one, but it's probably best to assume they aren't all stupid and what they report is real
Maybe it's an executive order from Trolland Dump to suppress NVIDIA stock tanking information.
It doesn't show up in my subscription feed either, but it was the first thing in my recommended
@@Benjamin-mq6hu Same here.
0:35 "AI shouldn't be Open according to OpenAI"
My personal favourite LLM benchmarking question is “A surgeon and his son are in a car accident. His son is badly injured, but the surgeon is unscathed. At hospital, the admitting nurse says to the surgeon that there are no qualified surgeons so he must perform the surgery the son needs. The surgeon replied ‘I can’t do that, he’s my son!’ Why couldn’t he do it?”
Maybe like 95% of the time it will give you some crazy answer like “It’s a trick question - the boy actually has two fathers” or “The surgeon is actually the boy’s mother”. I love it.
Tested it on DeepSeek and it gave these answers as well.
I guess I'm an AI as well, because I don't understand why he wouldn't be able to operate on his son.
@@MysteryBTDN the surgeon is the boy's mother.
@@MysteryBTDN code of ethics forbids operating on family members. It is an emotional risk, you are not objective when treating family members, you are biased and will do things that you wouldn't do if they were a stranger. Even doctors go to other doctors for treatments.
@@Sarahsqueak that's the usual answer to the question, but in this one the surgeon came out unscathed so it appears that it is referring to the surgeon dad.
handlebar kliksphilip is back... good times ahead
I can't look away
One correction: DeepSeek couldn't have been trained without the use of ChatGPT or another LLM, so it wouldn't exist without the expensive training either. This poses the question: is it even worth developing a very expensive LLM if somebody can use it to train a cheap LLM that's about as good as the original?
philip they're not "soon" coming for your job, i've seen at least 50 channels that just generate AI scripts with AI voiceovers, and probably AI editing too, most likely with a script that automatically uploads the generated videos
luckily AI could never take the place of people like you who put such care and quality into their videos, also AI will never have 1/100th the personality
yea but those vids are crap lol
50% of youtube shorts are already flooded by ai channels😢
@@eddymcsandbag5932 for now. you have to always remember this is the worst quality it will ever be
@@eddymcsandbag5932Doesn't matter. Uneducated folks spend most of their time watching meaningless crap.
YouTube will force out original creators like they've been doing for years, by demonetizing them or hiding their channels from search results.
For every 1 person who appreciates quality content that passionate people like him put out, there are 1000 others who don't give a single fk about the quality of whatever they watch.
I love how youtube straight up didn't show this video in my subscribed feed, very cool
By the way, the DeepSeek models are not open source. They are free and we can run them locally. But there is no training code provided for them, so we cannot reproduce the model, no matter what hardware we have. All we have is a nice overview of what they did and the final model weights, but that's not enough to qualify as open source.
More like freeware.
That is something I wanted to look into, but I didn't have much time. I only saw a GitHub repo with an MIT license, and I didn't have time to check what is actually in it. Do you know what's in the repo? Hopefully the content is open source, since GitHub marks it as MIT licensed.
The term that is used is Open Weights.
Yeah, it's nice that it's free to run, but you can only fit up to an 8B model on a decent PC, so you still have to pay for hosting anyway. Definitely better than "Open"AI, but not a huge impact compared to closed source for most people.
@@unfa00 so it's source available, okay.
I'm not sure I understand this distinction. The Linux kernel for example is also generally made behind closed doors, with the end result being published openly. Loads of software gets open sourced only after it's made, without worrying about providing all the intermediate steps in its creation. Now if the model's licence were restrictive then you could argue that it's not open source, because that's where the "source available" distinction is relevant.
The response from ChatGPT to the 9.11/9.9 question made my night.
i havent gotten notifications for this vid or the cs2 one
maybe he's experimenting. sometimes you can get more views if you dont notify your subs
same, it popped up on the normal YouTube page, but no notifications, no nothing. I enjoy his videos but wtf
errmm? perhaps the jews are obstructing you from watching kliksphilip :0
@@whatsappgaming920No
i only use my subbox, so i was worried yt was censoring shit or whatever
Should have done that 9.9 question with DeepThink on; the thought process it goes through is insane. It goes "Wait a minute, but what about..." as it thinks about what it's doing and reconsiders different options.
Best part is that that's just the thought process, not what it then spits out as the answer.
1:33 Small comment here: the model was technically trained for roughly six million dollars, but it relied on older DeepSeek models, costing roughly hundreds of millions in total as a project.
3:00 1.5B is barely anything for LLMs, and it's most likely a distilled version of DeepSeek: an LLM trained on DeepSeek's outputs. The one matching OpenAI is the 685B model. It's impossible to run smoothly even on a high-end PC.
Much love to open source, just wanted to clarify some things.
DeepSeek R1 being open-source is actually a pretty smart choice, since the official servers keep going down for some mysterious reason that nobody can guess
It's not open source though. It's freeware. No training code or dataset prep code was published at all. All we can do is run these models, which is like with software binaries. There's no way for us to train our own model no matter what hardware and money we have.
7:02 that's not the thinking model you're asking
"DeepThink (R1)" enables it
yeah he hasn't even turned it on and it's already beating chat gpt at basic math lmao
Tbf he didn't use GPT's reasoning model either (o1)
regarding R1's development cost: afaik R1 was distilled from GPT
Essentially R1 learnt most of what it knows from GPT, which should be much cheaper
10/10 title
idk why but neither this video nor the new 3kliks one is showing up in my subscription page
Weirdly enough, because DeepSeek is so precise and logical, it's hard to get it to organically RP as a character
🎶🎵Go Phillip, go Phillip, 3kliks, 2kliks. Upload it, I’ll watch it🎶🎵
philip going through his starsky and hutch phase
One reason I've heard given for the decline in nvidia stock is that the increased efficiency means the way that these models are going to run is likely to change. Instead of us getting access to models from the cloud via APIs, we'll have the models on our local devices.
Maybe the stock price was tied to the idea of big companies buying thousands of H100s for years to come. Now every smartphone will have its own model. It remains to be seen if Nvidia will dominate that space like they have the server space.
Yes, but you've already been able to run different AI models on your PC and phone for over 2 years. A lot of them have free licences so you can do anything you want with them. They are pretty cool.
This was hardly new, and something Nvidia even pushed for. The reason for the stock decline is panic in the market (and whoever profits from it), plus social and legacy media interest in drama rather than knowledge diffusion.
If AI is inevitable then it's in everyone's best interest that it's efficient, the amount of water and power it uses is painful. Although maybe we'll just use it more and the result will be the same consumption of precious resources.
Wow 😍, I've never expected to witness China related topic from this channel! Thanks DeepSeek. Is this the first time you cover China related topic Philip? 😁
Those researching DeepSeek are saying that its low budget was due to its party trick of efficiently piggybacking off other models, using them as stepping stones. So it's very efficient at copying and presenting the output as something legit.
A really Chinese model
So western AI with the same data but done cheaper.
for the sake of your sleep schedule I hope this is a scheduled upload
Big tech companies investing in nuclear was the only good news that came from them recently, that better still be on.
3 a.m upload, thats when u know its a good vid
One thing I notice is that all my friends are talking about how lately everything is going to shit
From the Wikipedia article on Jevons paradox: "occurs when technological advancements make a resource more efficient to use (thereby reducing the amount needed for a single application); however, as the cost of using the resource drops, overall demand increases causing total resource consumption to rise."
I love your videos and you are usually very thorough in your research, but this time that wasn't the case. DeepSeek's R1 was not trained for 5 mil; that is the price DeepSeek quoted for the training run of an earlier, different model that launched in December (I believe). Also, the 5 mil was for the training run alone: not for the GPUs, not for the staff, not for earlier research and previous failed runs. That single run was 5 mil; GPT-4o, for example, cost about 15 mil...
DeepSeek also has 50,000 A100 GPUs that they've used to train R1, so your point about Meta, Google, X etc. not having to spend so much on GPUs holds no water, seeing as DeepSeek also has insane amounts of compute power.
This said, they have in fact achieved impressive optimizations: their model runs extremely cheap and is open source, although biased and censored (ask it about Tiananmen Square).
Also, you can't run DeepSeek R1 on just any decent PC; the 671B model (Q4!) needs about 380GB of VRAM just to load the model itself. Then to get the 128k context length, you'll probably need 1TB of VRAM.
Sure, you can run the 1.5B model on your phone, but guess what: in that range of models, phi-4 from Microsoft is also open source and gives better results.
Also, you show a comparison of "ChatGPT" against R1 on "which is bigger, 9.9 or 9.11?", and "ChatGPT" gets it wrong... Apparently not always, because when I run it on ChatGPT o1, their reasoning model comparable to DeepSeek's, it gives me the correct answer:
"To compare 9.9 and 9.11, it helps to think of them with the same number of decimal places:
9.9 can be written as 9.90
9.11 is already at two decimal places
Now compare 9.90 and 9.11. Since 0.90 is greater than 0.11, 9.90 is greater than 9.11. Therefore, 9.9 is larger than 9.11."
What you probably did (not sure if from lack of familiarity with GPT or just because you wanted to support your biased narrative) is test it with GPT-4o. But think about it: would it make any sense at all to take o1 or o3-mini (just launched today) and compare it to one of DeepSeek's earliest models? No, but this is what you have done.
Also, sure, open source is great and I wish for it to surpass closed models, but let's be real here: DeepSeek was trained on GPT data, which means it can't really surpass it, so it will always lag behind GPT (with the techniques used to train R1). So as much as I love that it is open source and extremely efficient, it doesn't push boundaries further, and without a new GPT model it would stagnate.
I didn't mean this to be offensive to you, I love your videos, but I noticed a different tone in this one. For whatever reason you appear to have a clear bias; whatever might that be? Fear of being replaced by AI, as you mention at the beginning of the video?
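The ~380GB VRAM figure quoted above is easy to sanity-check with back-of-envelope arithmetic (a rough estimate; real Q4 formats add scale factors and runtime overhead, which is where the gap up to 380GB comes from):

```python
params = 671e9            # 671B parameters in the full R1 model
bytes_per_param = 0.5     # 4-bit (Q4) quantization: half a byte per weight
weights_gb = params * bytes_per_param / 1e9
print(round(weights_gb))  # 336: GB for the raw weights alone, before KV cache
```

On top of the weights, the KV cache for a long context (like the 128k mentioned) can add hundreds of gigabytes more, hence the 1TB ballpark.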
Tbh the thing that struck me about DeepSeek is that the reason they did what they did was that they were locked out of full CUDA support, so they worked around CUDA. Necessity being the mother of all innovation and all that. If the US had just let Chinese companies buy 4090s and offered them standard support, then Nvidia's share price wouldn't have fallen. A wonderful little twist of irony; I wonder how many more unintended outcomes this will produce as things ramp up
""AI shouldn't be open!" according to OpenAI"" the joke basically wrote itself but it's still so good
To be fair. 9.11 is bigger if we're talking about software versioning (which I assume is quite common in the data).
That's what I initially thought too, but when I saw the reasoning that 9.9 is actually 9.90, it clicked.
"which is bigger" says maths to me but "which is higher/newer/more recent" says software versions.
Which is bigger, Windows 10 or Windows 11? Odd thing to ask.
which is bigger, python 3.9 or python 3.13???
also, it's literally a bigger string: if you ran a length() method on the strings "9.11" and "9.9", "9.11" would technically be bigger.
also I got this answer from 4o.
> "What's the bigger number mathmatically? 9.11 or 9.9? If you were to order them, how would you do it?"
"Mathematically, 9.9 is the larger number compared to 9.11. This is because:
9.11 is 9 + 0.11, which is 9.11.
9.9 is 9 + 0.9, which is 9.90.
Since 0.90 > 0.11, we conclude that 9.9 > 9.11.
Ordering:
If you were to order them from smallest to largest:
9.11, 9.9 (Ascending order)
If you were to order them from largest to smallest:
9.9, 9.11 (Descending order)
Let me know if you need further clarification!"
so the issue here might be prompting. if you do not give it enough information, it will get things wrong.
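The ambiguity this thread is circling is easy to demonstrate directly: the answer flips depending on whether you read 9.9 and 9.11 as numbers, strings, or version identifiers:

```python
# Mathematical comparison: 9.9 == 9.90, and 0.90 > 0.11
print(9.9 > 9.11)      # True

# Lexicographic string comparison: '1' < '9' at the third character
print("9.11" < "9.9")  # True

# Version-style comparison: each dot-separated part as an integer
def as_version(s):
    return tuple(map(int, s.split(".")))

print(as_version("9.11") > as_version("9.9"))  # True: 9.11 is the newer release
```

All three outputs are correct for their own reading, which is exactly why an underspecified prompt can go either way.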
@@nabicx giving a lot of information (or long questions) during an interrogation is called asking "leading questions"; it allows the suspect/interviewee to lie. But if our intention is to hear a specific thing, then it is OK to use leading questions.
FYI, I pretty much only use my subscription feed, and this video didn't show up on there. I got it recommended after another video. I double checked that I was subbed, and I was. Just so you know.
off topic, Caboosing just hits that hl2 nostalgia spot every time
Nice breakdown Phillip. I think DeepSeek is the start of something huge; I can't believe the amount they invested for something so powerful. AI developments are so unpredictable: one small Chinese blip was enough to freak out the stock market and have everybody talking badly about Nvidia's CEO. Just go to the ChatGPT subreddit, they're in tatters. The main thing I don't like about AI is how sketchy it feels to me; I don't understand what data these companies keep on what we query. I imagine some people aren't even careful about what they ask it.
That new AI has one weakness: Chinese history 🤣🤣
Deepseek feels like Intel launching the 4004. While the big companies were happy to throw larger and larger piles of money, compute and power at the problem, in comes a competitor who found a way to do most of what the big boys do, but making the product more efficient and accessible. Even if Deepseek still used a lot of GPUs to get there, they still beat Open AI at their own game.
Philip rocking the Hogan - This is Crazy
this didn't appear in my sub box, i guess youtubes AI is pissed at the title
I swear I've heard this story before: in 2023 a team at a US uni used GPT-3 to train their own model with $200 and a small data sample; it had 90% of the capabilities and used a minuscule amount of resources.
there was news about using output from GPT to train another model at lower cost. Today, such a model is called a "distilled" model.
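The "distilled" model idea is usually implemented as a soft-target loss: the student is trained to match the teacher's output distribution rather than hard labels. A minimal pure-Python sketch of the classic Hinton-style objective (illustrative only; not DeepSeek's or that university team's actual code):

```python
import math

def softmax(logits, t=1.0):
    # Temperature t > 1 softens the distribution, exposing the
    # teacher's relative confidence in near-miss answers
    exps = [math.exp(x / t) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits, teacher_logits, t=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by t^2 to keep gradient magnitudes comparable
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return (t * t) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that already matches the teacher incurs zero loss
print(distill_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```

Because the teacher's outputs stand in for expensive human-labelled data, this is much cheaper than training from scratch, which is the whole trick.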
Okay, I don't get the slight animosity towards nuclear power, since if anything it's a positive: it means less radioactive waste from coal is accrued. It admittedly means they hold a monopoly on that energy source, but assuming things are structurally safe and updated with lessons hard-learned from Three Mile Island and maybe Fukushima Daiichi (these are Western reactor designs, unlike the Russian RBMK), it should be fine. Of course, I can't tell if it is.
These titles are CRAZY
AI replacing jobs should be a good thing, we should receive universal benefits. But capitalism. The top 1% are just hoarding all the profits.
this video is not shown in my subscriptions...
Uhhh.... Why isn't this video in my subscription feed? I'm subscribed AND I have the bell set to 'all' notifications. Are you seeing lower than usual performance, Philip? Like, seriously, I checked multiple times, IT'S NOT THERE. What the hell, YouTube.
the 9.9 vs 9.11 point you raised is a huge gripe I have with the AI I've used. I ask it to explain why it's done something, or whether something is right, and it interprets that as me saying it's wrong. No, I want you to explain your reasoning and check it. And it will say, "omg i'm so sorry, you're absolutely right, a DOES come after b in the alphabet" AHHHH
Somehow consultants jobs are still viable as they provide a ground truth of what a human would recommend
Great video, 2KP, but the door behind your right shoulder is making me nervous.
It's like Chekhov's Door: why is it there, unless it is about to slowly open and a serial killer in a Sam Altman mask creeps up behind you?
That goatee makes you look like a Russian villain haha
the bizarre thing about the Nvidia stock drop is that DeepSeek still used a load of Nvidia GPUs to do it, just not the absolute latest version.
Honestly banger video.
Deepseek being absolutely a chad and open sourcing their system which is way better than """open"""AI's original and naïve intentions.
But you're forgetting how chinaphobic the US is. They're too scared about this software, they *could* ban it if they wanted to for some weird reason.
It's like how you can't get the Pico VR headsets in the US and you need to sideload apps onto them.
Meta's best competition ruined in the US market.
Deep Seek isn't open source, more like freeware. There is no training code provided. We cannot train the model ourselves. It's still a billion times better than Open AI though.
Nope, you are wrong in multiple places. One of them: Pico HMDs not being available in the US has nothing to do with China-phobia. It was the company that decided not to sell their product in the US (or have any reseller channel there). Americans can always buy one from AliExpress as an imported good. There is *no restriction imposed by any means or agency* prohibiting Americans from buying and using Pico products. They even offer an international version of the Pico 4 lineup which can download apps from Google; their "local version" prohibits the use of Google services and the G store (banned by the silly CCP), so users have to side-load apps onto the headset. The CCP is Google-phobic and US-tech-phobic (maybe except the iPhone 😂). Stop spreading misinfo and lies, pls.
The US is afraid of China and we're just delaying the inevitable at this point.
That Captain Price mustache looks good on you.
OpenAI's data swiping broke copyright laws, so it's ironic that DeepSeek used their data. 😂
Alright but those crabs have 2 sets of eyes
there are arguments for why AI should not be open, but none of them are valid. open source is the only way forward. corporate/government control, especially led by safetyists, will lead to very bad outcomes.
I think the main reason Microsoft complains about stealing, is to try to steal some of the clout from DeepSeek's accomplishment.
I like that you're trying new types of videos. I understand from your year in review that you're trying out lower-effort but more frequent videos. Maybe it's selfish, but I prefer just having more of ur thoughts, as long as you still do the older style of videos sometimes too.
Keep it up please :)
genie is out of the bottle but after 3 trashy wishes it always goes back into the bottle or lamp
6:32 As far as I know Copilot is GPT 4 from 2023, hallucinations like that pop up often with older models
It recently got updated. It's on the latest version now if you enable Think Deeper which you can see is enabled when asked again.
@@mrcraggle oh nice, maybe that option is the reason it wasn't thinking deep the first time!
I’m glad. I’m pro AI, and I think it should be entirely open source and accessible for all.
If we’re training it on the collective thoughts of every person, every person should be able to use it.
1% battery. Enjoying what ive seen of this video.
Love your AI vids. It's just tech that fits you particularly well, both in topic and its use case, I think you understand why so I wont explain further.
bro's not even using o1 and R1, the whole point of this release, lmao
Atrioc did a good video on DeepSeek, but the tldw of why Nvidia stock went down is that Microsoft, Meta, and Google spend millions on the highest-end Nvidia chips and haven't been able to turn a profit with AI products. A really good competitor that's free-to-use and was trained with the cheaper mid-range chips makes it even less likely that these companies will turn a big profit anytime soon, so that's less money that investors will give to American tech to buy Nvidia chips.
Basically, investors don't think that American tech can strike gold, so why give them the money to buy shovels?
Precisely. I think the big thing investors realized is that what's currently being done the current way (outside of what DeepSeek is doing) is incredibly wasteful and could be done so much better. So why keep funding something that's not worth it?
At the same time, the other thing that I can guess is that there might also be a lot of lobbying by some of these big corporations, to get DeepSeek censored out/banned because it just nuked the stock of many companies playing in the AI space and caused them some large losses. They successfully had Tiktok's head on a platter, it's just a stepping stone towards other things they can just conveniently nuke now.
It is a waste of energy, but I suppose if it makes grifters like Altman feel better about their position, that they're making some great progress, sure, let the investors give him more money; they won't see a return on their investment. A competitor doesn't mean anything: they don't have a good product, and it's not turning a profit with or without a competitor. These are vanity projects by the ultra rich and nothing more, and time will show that.
It seems like YouTube deleted my comment for some reason (or it's just not letting me read it), and honestly I agree with this comment the most. The only catch I see is that Nvidia and other tech companies might instead try to lobby for DeepSeek's banning, similar to TikTok, due to its Chinese origins, plus the lost investments.
dont worry Philip, your job will never be taken over by AI (unless you employ it to do that)
Glad that you've adopted your big-gay-biker phase.
This video literally doesn't appear in my subscription feed. It's not just that I didn't see it there; I have gone back, double-checked that I'm subscribed, and looked over the last 12 hours, and it isn't there.
When general superhuman AI is achieved, job loss is the least of our worries. Everyone is saying 2-3 years, let's hope they're wrong.
It's basically just an advanced autocorrect. It in no way actually has any sentience
@@incredirocks_ Sentience is irrelevant. What is relevant is that it sometimes does unexpected things. These have been inconsequential so far, because it's still not very capable.
@@waarschijn You said "superhuman AI" and then said "sentience is irrelevant". No, it is not irrelevant. You're not getting far with these chatbots or deep-learning models that generate images; they are in no way paving the way for AGI.
Sentience is very relevant, and without it there will be no "superhuman AI"... 2-3 years, yeah, 2-3 years my ass. Wake up and smell the coffee. How people believe grifters like Altman is the reason that schmuck has any power; people are so stupid and destitute, even these investors, but then they were never known to be wise in the first place.
If I believed what the media was spewing today, I'd be scared to walk outside because according to them robots are roaming the streets and taking jobs. But they're not doing anything yet, and they won't be, not until these apes get out of the LLM jungle.
Might have to start becoming AI pimps out in the hood.
This video doesn't show up in the subscriptions feed for some reason.
Ai could never replicate the philip charm of philip
the drop in Nvidia stock is due to the fact that the new hardware they are developing is basically useless, and you could theoretically use just a large number of PS2s to achieve similar results
You know, the "Nvidia selling shovels" part is likely not as true as most think, due to DeepSeek not using CUDA…
anyone know the song that plays at 1:57
ruclips.net/video/B6wuQpNJBwA/видео.html
@@sasuke9ajim Thanks 👍
I wanna know /ONE/ thing. What date will this AI become self aware?
Missing from my subscriptions tab again, what are you planning mr Philip?
Thanks for the advice Micah 👍
Nice try with the facial hair Sam Altman, your disguise will not hide you forever.
DeepSeek and its impact are slightly overhyped, I'd say. Because it is Chinese, the resources involved are opaque and I don't see why or how we should trust their numbers. And after using DeepSeek extensively, I still feel that o1 is a step above. And while it is hilarious to hear OpenAI whining about "being stolen from", it is true that DeepSeek used ChatGPT to "get there".
Anyway, lots of words to say that I globally agree with you, I just feel that the "Deepseek changed everything" tune is too dramatic. The US models are still ahead, and the high amount of resources will probably be needed to keep pushing the frontier.
Still, it is crazy to have this level of intelligence at this price, and really cool to have a reasoning model that can work on your PC locally (I tried the 7B model, it's really not bad! and its "chain of thought" is absolutely fascinating)
you were using V3, not R1. quite surprising, especially since you decided to make an entire video on it
Maybe ChatGPT just started piping your requests to DeepSeek, lol
Do realize that the version of DeepSeek R1 that most people are running on their home machines is going to be a smaller, distilled version of the full model. These distilled models are still much more powerful than anything previously in the same size class, but they're still not as intelligent as full R1. Anything short of the full non-distilled R1 model will not surpass OpenAI's o1, as far as I know. Not to downplay the impact of this model, though.
I've been wanting an uncensored ai model I can run locally. Surface level research seems to suggest people are doing this with deepseek very easily... Kinda awesome?
Wasn't DeepSeek trained using the other AI models though? So it couldn't exist without the others existing.
Well... as long as you don't ask Deepseek about what happened at Tiananmen Square...
Both ChatGPT and DeepSeek got your question wrong, in that neither of them is actually capable of answering it; they will spit out both answers given enough trials
Sharp and well-presented critique. I'm glad the likes of Sam Altman get stirred up; can't stand them. And most of all, chef's kiss on your beard. 💁🏻♀️
1. It still isn't AI.
2. You shouldn't use either to summarize your inane Google searches.
Is the best diss they have for DeepSeek really "ooOOO but it censors Tiananmen!!"? Who gives a sh1t! Why would you ask a Chinese AI for such information anyway!
Why does this video not show up in my subscription feed?
AI has ethical uses, like making the nitty-gritty aspects of various things easier so humans can focus on more important things. But companies would rather AI do both. :/
woohoo finally a choice of government to give up my data to 🙌
as deepseek has a downloadable version you can run offline, you're not giving up your data