DeepSeek R1 Cloned for $30?! PhD Student STUNNING Discovery
- Published: Feb 8, 2025
- Learn how to use DeepSeek-R1 on AWS using Amazon Bedrock and Amazon SageMaker AI: go.aws/40ZjwT0
Join My Newsletter for Regular AI Updates 👇🏼
forwardfuture.ai
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
👉🏻 Instagram: / matthewberman_ai
👉🏻 Threads: www.threads.ne...
👉🏻 LinkedIn: / forward-future-ai
Media/Sponsorship Inquiries ✅
bit.ly/44TC45V
x.com/jiayi_pi...
It's almost like when all the data, code, and methodology is made public, small guys can make important discoveries as well. Too bad for ClosedAI.
Whew, I am excited for open source. Even Mistral complimented Deepseek in their release blog for Mistral Small 3. This means Mistral and Llama will catch up to ClosedAI finally.
Better be careful: DS is riddled with terrible spyware that is worse than TikTok. Just by having it on your computer, they will scrape your data, collect your keystrokes, and share it with different companies, who are all doing the same, so that they can compile a file just on you. I don't know why people seem to be unaware of the major security risks.
It's almost like that's how China's approach to innovation is. Create a breakthrough, open source it, let people come up with the best version, rinse and repeat.
This is a major reason why we do capitalism instead of full socialism. If we had a state planned economy, everything would be wasteful and we would miss out on all the genius of the little guys.
***Soviet anthem intensifies***
some tried to say deepseek open source didn't matter but their open source and detailed paper is exactly what is progressing the AI innovation/discoveries.
even the web version is unreal
The drive behind all the hostile attacks towards DS is exactly that as a Chinese company they have made "too much" impact in the field and potentially to the humanity. They must be "stopped".😂
@@qijia4769 give the AI a chance to learn how to fight DDoS 😂
@@qijia4769 yes, DS destroyed the dream of monopolizing AI, so they hate DS! That is the truth!
i think we all should thank deepseek for opensource
Bbbbbut CHYYYYYNA!
No I think deepseek should thank opensource for giving it a head start.
@@gmcenroe the model uses RL, not the traditional transformer encoder-decoder setup. That's why it can run as fast on non-Nvidia GPUs. There was no similar-scale open-source model before DS.
@@gmcenroe I think we should thank everyone who has made significant improvements in AI training for getting ideas for LLMs
@@gmcenroe they obviously do, given they continued it under open source.
Thanks DeepSeek ! You are real OpenAI
your positive opinion about deepseek has been logged. this channel will be censored... I mean, throttled. sincerely - aryan master race.
Wtf is this bot 😂 @@aj2228
Truly an Open AI, unlike OpenAI.
Would the real open AI please stand up.
There are many open source LLMs from America. Didn't you know this?
The DeepSeek drama gets deeper with each passing day.
The seek deepens.
@@retroverdrive love it
@@undefined6512 ❤
@@matthew_berman 🙌
@@undefined6512 ba dum tshk!
This is what happens when you have an open source model! You get people doing things that other's haven't thought of and improving it in ways they couldn't imagine.
OpenAI did a disservice by limiting access to it and making it a pay service.
That was apparently the source of bad blood between Elon Musk and Sam Altman. Elon wanted OpenAI to be open source. Altman changed his mind. Elon was not pleased. At least, that's how I understand it.
OpenAI is run by white people. DeepSeek is run by minorities. this channel is run by a white person. Watch. Your. Language. pleb. subversive thought will not be tolerated.
I never understood this. OpenAI was meant to be a non-profit organization so in that sense whats in it for people working there, cant dream big at all. They were never meant to become a big player in AI?
Wouldn't it be amazing if some joe schmo achieved true AGI/ASI before any of the big tech companies who spend billions of dollars trying to gatekeep it from everyone else?
@ But Joe Schmo is able to do it only using the work and billions of dollars of research from large corporations. So what's in it for them. Everything free would be great though.
now we understand why closedai refuses to open its model, no cashcow
Lol, exactly now all shits open. Mf, sam is doing all kind of shit with those billions of $ , while just throwing a cookie(o1, o3 and other shit models) at the public to keep them busy chewing on it.
But don’t forget that it was OpenAI that released GPT back then. Research takes money, DeepSeek just reproduced what was already possible more openly which is also a good thing for future discoveries.
@@Akash-bd5hy Remember suchir balaji? Yeah.
@akash actually the foundational technology came from google, who released the research despite being a multibillion company , oai just was a chameleon
@@Akash-bd5hy no, they did not just reproduce ClosedAI. They created a new way of thinking that is less dependent on GPUs. ClosedAI hasn't changed much compared to its OpenAI days.
Deepseek R1 is a profound gift of open source information for the masses in the non billionaire/ multi millionaire demographic
billionaires scale up to millions gpu with open source models
@@joseph24gt just need to wait also for chinese GPU 😂.
Thanks to llama for pushing the idea of open-source. I want to see how well the zero model works
The AI competition has become the Chinese in US competing against the Chinese in China.
🤣
kinda racist but this has been true in the math olympiad for a while now. US wins but it's all Asians.
It's roughly accurate as most ML/AI researchers in the US are Chinese.
the chinese are asexual mindless drones who are not capable of unique thought. I don't know what you are talking about. are you chinese?
when you read the AI papers, the names are mostly chinese lol. As someone living in the US, I still have bias that we will come out on top but I like how this is really shaking up the world. We haven't had a competitor since the cold war
We seek deeper down the rabbit hole and I hope it never ends!
now, after we seek so deep.. ai is open...
maybe the real OpenAI was the friends we made along the way @@ryzikx
This is the best possible thing that could be happening to AI. Taking the models out of the greasy hands of the AI billionaire oligarchs and spreading them to the world as open source.
The envy will consume your life
@@joedirnfeld hes spittin fax, maybe you dont realize it due to how tight THEIR leash around your neck is?
Lol greasy
Even though democratizing AI is incredibly risky, it's also a better option than leaving AI to select few groups so we don't have an oligarchy
@@The_Questionaut "But giving free control of these powerful AIs to the masses is dangerous! They could use it maliciously
But you can totally trust us, we're not evil at all, we'll only do what's best for you!"
Note: I'm not mocking you, I am mocking _them_
"Oh, bummer! How can I justify my evaluation of billions of dollars?"
(ClosedAI)
More likely it will have to be through security.
Con job exposed!
OpenAI makes foundational models. DeepSeek doesn't.
@@Rjlane-rd3ud How can you have such a high shit-to-brain ratio and still be alive?
@@Rjlane-rd3ud oh yeah, but they can completely justify their $200 monthly sub (which loses them money btw)
Today, the explosive growth of AI is based on papers written by several PhDs. The next explosive growth in AI is also very likely to originate from research papers by ordinary PhDs. The existence of DeepSeek provides these PhDs with an affordable research tool-that is probably DeepSeek's biggest contribution.
Also, do not forget that non-PhD individuals who are not trained to think in the same way may discover something unexpected that could even surprise PhDs.
The reason why Deepseek has such a great influence is that it represents the broadest group of the proletariat and truly allows everyone to enjoy human wisdom equally, rather than becoming a tool for a few people to make profits.
Your thinking is on a high level 👍
Well said.
@@lxhvc Thank you for your recognition.
@@alabamaflip2053 Thank you. It's an honor to resonate with you.
🎉
Deepseek offline is perfect I love it
I know...but just remember it's not the full main model, but the distilled version. Dario explains why.
Dario is the third Mario brother. They don't speak about him, he spent too much time on Facebook and started reposting loads of right wing propaganda.
@@lancemarchetti8673 You can also download the full model
@CRhetorix he said it 1 minute into the video. CEO of Anthropic.
Can you run it on your computer, or does it need some VM like Colab to run?
Yes, it seems that the best AI applications will be these very specialized short run models, coupled with people who know how to query accurately and efficiently.
The more I watch you, the more excited I get about reinforcement learning
I'm amazed at how fast the DeepSeek offline AI open-source community has grown. Some amazing things are currently being developed by hundreds of at-home hobbyists.
I watched a channel on YouTube yesterday where the guy was making a DeepSeek model (offline) tailored for AI in the medical industry.
your positive opinion about deepseek has been logged. this channel will be censored... I mean, throttled. sincerely - freedom
@@aj2228 Italy is the first Western G7 Nation to ban Deepseek. So your comment is accurate
We are waiting for open-source reasoning models to get frameworks for RFT (reinforcement fine-tuning), which is theoretically a promising method to improve any reasoning model's performance on your own training set. The previous paradigm, before reasoning models, was framed as "more data = better performance"; with reasoning models it's becoming "hard evaluation set = smarter models". Essentially, you run the model on the problem hundreds of times until it gets the right answer, and you reward it for its correctness, thus reinforcing the reasoning path it took to get to that answer. This is really promising: improving models locally is easier than ever. No need for billions in funding; you just need a few GPUs, your own evaluation sets tailored to your use cases, and functional RFT frameworks.
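The "sample many attempts, reward correctness, reinforce the path" loop in that comment can be sketched with a toy bandit-style example. Everything here is invented for illustration (the strategy names, the two-armed setup, the blending update rule); real RFT frameworks operate on token-level log-probabilities, not a strategy table:

```python
import random

def rft_step(policy, problem, answer, reward, lr=0.1, samples=100):
    """One toy reinforcement step: sample many attempts at one problem,
    score each attempt with the reward function, then shift probability
    mass toward the strategies whose answers were correct."""
    names = list(policy)
    scores = {name: 0.0 for name in names}
    for _ in range(samples):
        choice = random.choices(names, weights=[policy[n] for n in names])[0]
        scores[choice] += reward(answer(choice, problem), problem)
    total = sum(scores.values()) or 1.0
    for name in names:  # reinforce: blend the old policy with the reward share
        policy[name] = (1 - lr) * policy[name] + lr * (scores[name] / total)
    return policy

# Toy setup: the "model" can add 2 or multiply by 2; the task is doubling.
strategies = {"add_two": lambda x: x + 2, "times_two": lambda x: x * 2}
attempt = lambda name, problem: strategies[name](problem)
verify = lambda ans, problem: 1.0 if ans == problem * 2 else 0.0

policy = {"add_two": 0.5, "times_two": 0.5}
for _ in range(30):
    rft_step(policy, 7, attempt, verify)
```

After 30 steps nearly all probability mass sits on `times_two`: a binary right/wrong signal is enough to reinforce the correct path, which is the whole premise of RFT.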
Do you remember the channel? I would like to check that out
You can download from Hugging Face a 1.58 bit quantization of the full fat R1-671b model. This will run on any machine with 256 GB of RAM, albeit slowly. If you actually want it to run _well,_ you also need 96 GB of VRAM which means two GPUs. The current cheapest qualifying GPU is the RTX 4090D (D stands for double VRAM) at about $2500.
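Those hardware numbers are easy to sanity-check with weights-only arithmetic. This ignores activations and the KV cache, which is roughly why the quoted 256 GB of system RAM is higher than the raw weight size:

```python
def weight_gigabytes(params, bits_per_weight):
    """Approximate storage for the weights alone (no KV cache, no activations)."""
    return params * bits_per_weight / 8 / 1e9

R1_PARAMS = 671e9  # DeepSeek R1's total parameter count
print(f"fp16:     {weight_gigabytes(R1_PARAMS, 16):.0f} GB")    # 1342 GB
print(f"1.58-bit: {weight_gigabytes(R1_PARAMS, 1.58):.0f} GB")  # 133 GB
```

So the 1.58-bit quantization shrinks the weights roughly tenfold, which is what makes running the full model on a 256 GB RAM box plausible at all.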
Thanks!
Thank you!!
Things OpenAI wished we would never know about 😀
ClosedAI*
When knowledge is shared, innovation thrives-even by the underdogs. Some companies must hate that fact.
This is the beauty of open source. Great strides are made when there is a major breakthru
Great share, Matthew! Replicating the 'Aha-Moment' in DeepSeek for $30 is remarkable. This breakthrough could be game-changing for specialized AI tasks. Keep sharing such insights!
Dude. The $30 “Aha” moment + that Coconut vector space thinking regime is a combo I cannot wait to see more of. Maybe mix in some reasoning Mistral 3 Small with r1 distillation and who knows what we’ll see. Strap in, my guys 🚀
I’m excited! I didn’t know anything about AI before this happened other than ChatGPT, so I decided to educate myself. It was the best decision I have made in a while.
Aha aha 😂😂
I was thinking something very similar.
In my humble opinion, we will have AGI, when AI can meme.
The biggest hurdle right now, is all the alignment stuffed into AI right now.
AI can't meme, or the exact same reasons the left can't meme.
The most annoying thing about AI is that they all behave like children raised by woke narcissists.
Source: Old grognard, just seeing patterns.
@@tomomihisaya it’s the greatest journey ever. I posted my first hand-rolled AI on my IG like in 2016 after being obsessed before anyone knew about it other than Terminator movies. Back when it was basically LSTMs and RNNs and stuff (wrote acronyms on purpose for you to check out). And now we’re here. Everyone and their nan knows what AI is and it’s about to take us to some next level place. I guarantee you’ll never be bored; well done on your life choices
@@jtjames79 my dear grognard, delve into the fine-tunes with the hilarious degenerate subcultures on huggingface making the abliterated models of these LLMs. And check out the UGI Uncensored Leaderboard - sort by what parameters you can fit and by how Naturally Intelligent, or coding leaning or whatever you want and you are gonna spin your meme-making-AI head on its axis since you might find AGI has been achieved over and over in there 😂
What this means is that Nvidia was lying to us this whole time, telling us that you need millions of GPUs to run things. They were saying that so they can sell more GPUs.
No it's people like open AI
You are the best channel on these AI news. Clearly and you are going into depth which I like. Keep it up!!
So what you're telling me is nvidia's going to have another really bad financial morning on Monday
More people buying more cards, rather than few people buying all cards
@meateaw Tomorrow the market will have a massive knee-jerk reaction just like last time. I think it's going to be another bloodbath
Nvidia stock is going to sewer drains
This news is a week old
Yes, because of Trump's tariffs.
All stocks and all cryptos are going to have a really bad Monday morning.
As for this work... no difference at all.
Even the best LLMs are still far from human-level intelligence and whatever architecture and fine-tuning is used, the models will need to be many times bigger than the biggest current models in order to achieve real, human-level AI.
Deepseek is the best Ai, i don't know how to explain it but somehow it uses critical thinking, like talking to a real intelligent person
and in us they push a bill: decoupling America’s artificial intelligence capabilities from China act of 2025
and this is scary, other vessels states can follow
"Critical thinking" is exactly what I miss 🥺🥺🥺🥺
Idk, doesn't seem that intelligent yet. Still does chatgpt goofs
@@The_Questionaut more intelligent than you though
I've debated with Deepseek on different stuff and it puts up a good fight, even if I can see its thought procedure.
So good to have a real open source llm model available, thanks to deepseek 🎉!
it is not open source
Llama is just as much open source? Just wondering if you are blind on purpose...
Man, DeepSeek R1 is so wonderful. No shit: it anticipates what your real question is, provides context-specific translation/explanation, recognizes your conceptual confusion, and offers more. Put the same question to various LLMs and DeepSeek rocks. Its being open source really realizes that AI-democratization idea. Good job, DeepSeek.
now I want to see the guy with the 99 cents approach lol
ClosedAI shitting their pants because all their money hunger bs is being unfolded right in front of their eyes 😂
I mean, when you create an AI program spending millions doing so, using FREE data from the net, and then China clones your AI model, uses one of your unreleased test models being worked on, and puts it online renamed DeepSeek, yeah, I would also shit my pants, and so would you.
^^ CCP bot shilling for the AI version of TikTok.
@@illuminated2438 communism >>> America
@@illuminated2438 Calm down, US bot
Don't trash your billion GPU farm yet, that's only a test of a VERY small part.
@@paelnever compute will always be necessary
your positive opinion about deepseek has been logged. this channel will be censored... I mean, throttled. sincerely - aryan master race.
@@matthew_berman This is why I have been arguing that nVidia should try to avoid competing with _itself_ so much. Offer the equivalent of the 4090D (double VRAM) on every GPU that supports it. This doesn't have to affect the main product line, it would just be an option. Then they'd find a lot of people are willing to upgrade from their 20, 30, and 40 series cards when otherwise the speed difference alone may not justify the cost. Go ahead and charge an extra $500 for doubling 24 GB to 48 GB, as is the case with the 4090D. The cards will still sell.
Sure they'll then have a problem getting people to upgrade from their loaded 50 series cards, but that's a problem for future nVidia. It'll help them out quite a bit right now, and right now is when they're under a microscope.
Danke!
Unsloth version of deepseek is phenomenal!
Deepseek , whether R1 or otherwise, for writing novels/making drafts/character development/plot expansion, is simply outstanding..I repeat..OUTSTANDING.
No other model that I have tried comes close.
Just..mindblown by how far we have reached in just 2 years...
Was Covid perhaps a symbolic wormhole, a milestone, a plot shift, that opened up a quicker road to life's grand purpose of ascending, reaching closer to the Universe's primordial desires? An awakening?
AI .. it was all for AI..
Humans..darn humans with all our character flaws, a short-lived bad dream of nature, were simply a stepping stone for AI.
Hello everybody. Firstly welcome to XiaoHongShu and now welcome to DeepSeek. Congratulations and gratitude to both of their team’s dedication and hardwork. 6 million Deepseek and 500 billion DeepShit. This is China Hong Pau gift to the world. China is advancing fast and very fast on all fronts. China is a blessing to the world. Wishing all a very Happy Spring Festival in The Year Of The Wood Snake
Open-ended questions are the questions I find most interesting to ask LLMs. The human experience is all about open-ended questions, and seeing what a truly alien intelligence makes of those questions is fascinating.
I can certainly share aha moments from the art projects I’ve asked deep seek to perform. Plenty of aha moments when the right parameters and multi step direction is given. All with the distilled version
You’re doing reinforcement learning in art projects? What?
@ I should clarify, it’s layered writing tasks not image generation. Lotta heart math.
Modified this slightly and ran it on 0.5b on a 3090 last night and woke up to math. Tinyzero is slick.
They also state they didn’t take time to update their hyper parameters. 0.5b works after some config adjustment.
That's amazing. I gotta try that out
@@luckylanno just adjust the config in scripts/train*.sh
hahaha I like that $30 vs $5 millions vs $500 billions, bring on the competition
Perfect example of why education and application should be free and open source. Always mind-blown what people achieve - my wow factor :-)
jiayi pan sounds like another Chinese from US university
He graduated college in Shanghai from a joint college between university of Michigan and a Chinese university. He's now working toward a PhD at UC Berkeley.
then I know it's Shanghai Jiaotong University.
Top5 in China.
Now the 20+ year cooperation with the University of Michigan has been terminated due to US political reasons.
@@slomo4672
Like they say. AI is a battle between Chinese Americans and Chinese Chinese 😂
The co-operation on RL between the US and China occurred in, or before, 1997.
There is a master's thesis from a public university by an American student with a Chinese advisor; the first few words of the thesis are "Reinforcement Learning: Experiments with State Classifiers...". It was for controlling a "useless machine" as the "simplest robot" (a pendulum), teaching it how to "learn how to learn" while probably balancing on "the shoulders of giants" (Klopf, Barto, Sutton, et al.). It would be epically ironic if this master's student is now roaming the earth homeless, "like a light, lost from the stage, as the more it shines, it goes away," while carrying a paperback book, continuing to satisfy his addiction to learning and exploration, operating on a unique, though probably crazy, reward function 🤣🏜️🎩🔥☃️
@@alhkcblack9617 Chinese Mainland vs Chinese Overseas. 😅😅😅
I feel like we're just rediscovering things we've already known. I remember running basic survival of the fittest cellular generations for fun on my laptop 15 years ago. You could set the requirements for the "reward", jump the highest, run the farthest, etc. RL is the same thing.
We need to remember that certain DeepSeek R1 models (like the Qwen-based ones) are censored, meaning they will REFUSE to talk about events that happened in China. Presumably, this was done so people in China downloading the open-source models won't have access to uncensored models, but this will negatively affect millions of users all over the world who don't want to have a biased model. So yeah, it's open-source, but also censored.
Apparently, talking to it using OpenRouter's API is less censored, although there are multiple different providers there so we need more testing.
The average person in China doesn't care about things that happened decades ago and will never search for them; only Western boomers care.
US censorship of the Gaza genocide is happening now; go search for that instead of attacking an open-source AI.
He's a PhD student; everyone at Berkeley is under pressure to publish. The guy's files on the repo are a few months old, probably a DeepSeek insider... I'd say the "aha moment" he's talking about is a bit of an exaggeration.
Great Video! This type of content is what we love Matthew
Thanks a lot Deep Seek AI developers.
Too bad the oligarchs already instructed the gov to start banning it. Download it before it gets taken down.
nah. it wouldn't stop like how piracy never stops.
"Essentially what they found is with a good base model and reinforcement learning with a very clear reward function the model will start to think for itself." ~Matthew Berman
It’s like China struck the world and US In particular with a DeepSeek missile of pure AI goodness 😂
I wish someone would report on how it can give feedback to nudge it in the right direction and encourage it, rather than just "still wrong". Does it come up with reasoning out of thin air? Or is it trained with positive feedback first for the slightest move in the right direction, gradually shaping the behavior?
- **A Berkeley PhD student replicated DeepSeek's "Aha Moment" for just $30** using reinforcement learning (RL) on a 3B parameter model, demonstrating emergent self-verification and search abilities in a structured reasoning task (Countdown Game).
- **Reinforcement learning enabled the model to "think" and self-correct** by iteratively revising solutions, similar to how AlphaGo mastered Go. A well-defined reward function (definitive right/wrong answers) was key to the model's improvement.
- **Findings suggest model size and base quality matter**, with larger models (1.5B+ parameters) developing deeper reasoning skills, but the specific RL algorithm used (PPO, GRPO, etc.) had little impact on performance.
- **Future implications point toward small, hyper-specialized AI models** that can dynamically learn and refine reasoning through reinforcement learning at inference time, leading to highly efficient task-specific AI systems.
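The Countdown Game in that summary makes the "definitive right/wrong answers" point concrete: the reward can be a pure binary check on the model's proposed expression. A minimal sketch of such a verifier follows (illustrative only; the actual TinyZero reward code may differ in its details):

```python
import ast
import operator

# Only the four Countdown operations are allowed.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def countdown_reward(expression, numbers, target):
    """Return 1.0 iff the expression uses exactly the given numbers
    (each once) and evaluates to the target; otherwise 0.0."""
    try:
        tree = ast.parse(expression, mode="eval")
        used = sorted(node.value for node in ast.walk(tree)
                      if isinstance(node, ast.Constant)
                      and isinstance(node.value, (int, float)))
        if used != sorted(numbers):
            return 0.0

        def ev(node):  # evaluate only number literals and + - * /
            if isinstance(node, ast.Expression):
                return ev(node.body)
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            raise ValueError("disallowed syntax")

        return 1.0 if abs(ev(tree) - target) < 1e-9 else 0.0
    except (SyntaxError, ValueError, TypeError, ZeroDivisionError):
        return 0.0

print(countdown_reward("(25 - 1) * 4", [25, 1, 4], 96))  # 1.0
print(countdown_reward("25 + 1", [25, 1, 4], 26))        # 0.0 (the 4 is unused)
```

Because the reward leaves no room for partial credit, the model can only score by actually finding expressions that hit the target, which is what pushes it toward self-verification and search.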
This is amazing work - thanks for presenting
Now make a non-restrictive model! Tired of programmers' bias in AI.
ClosedAI must be having lots of headache lately because ahahaha this is wild.
All the players are Chinese: PhD student at UC Berkeley Jiayi Pan, Chinese in the US and Chinese in China.
your positive opinion about deepseek has been logged. this channel will be censored... I mean, throttled. sincerely - freedom
Time to ban student from China.
Jiayou. Good thing it's open source or the US would have to steal it, slap a brand on and charge a premium pretending they did it first. Maybe trump made it
On what grounds will this channel be censored, just because someone expressed his views on a particular issue ?
Not even close to true. You would like to focus on those who are for whatever reason, though.
So the reward function is what is driving evolution?
@@HoteliqaAgency exactly
That is the case with all reinforcement learning
Yes! Natural evolution is driven by the reward function of making more copies of itself. Sometimes humans meddle in this process and get domesticated species. The Russian experiments with taming the silver fox are quite interesting, especially for how primitive the whole operation was. (Spoiler: they essentially became small dogs.)
Reinforcement learning is just substituting another machine for the human in directing that evolution, choosing which model weights to keep and advance, and which ones to scrap.
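The keep-or-scrap loop described in the comments above is literally how the simplest evolution strategies work. Here is a minimal (1+1) hill-climber sketch; the reward function and target point are made up for illustration:

```python
import random

def evolve(reward, genome, steps=500, sigma=0.1, seed=0):
    """Mutate the genome; keep the child only if its reward improves.
    Reinforcement learning automates exactly this keep-or-scrap decision."""
    rng = random.Random(seed)
    best, best_reward = list(genome), reward(genome)
    for _ in range(steps):
        child = [g + rng.gauss(0, sigma) for g in best]
        child_reward = reward(child)
        if child_reward > best_reward:  # selection: reward decides survival
            best, best_reward = child, child_reward
    return best, best_reward

# Toy reward: negative distance to the point (3, -2).
def closeness(g):
    return -((g[0] - 3.0) ** 2 + (g[1] + 2.0) ** 2) ** 0.5

best, score = evolve(closeness, [0.0, 0.0])
```

Starting from the origin, the surviving genome drifts toward (3, -2). Swap the toy reward for a "did the model answer correctly" check over model weights and you have the skeleton of RL fine-tuning.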
This is what I’ve been trying to explain to people for years now! The LLMs are not just “completing sentences”, that was just a universal way to train them on language, human customs, etc. They have an artificial brain like ours. So, once they have the basics, you can and should train them like we train humans. This doesn’t mean trash from X. It means real education… including experimentation. So we need to also simulate our world.
Time for Open AI to rebrand as Has Been AI & Deep Seek as Open Awi
Truly amazing and could improve so many great models! Great Video! I think the AI2 model also shows that taking an "old" model (Llama 405B) and having it improve through these methods will be a big help to opensource!
Now we need zero restrictions AI
Uhhh sir, these are just LLMs..
that was just you reading a few tweets with varying emphasis, adding no insight or clarity. oh and with an advert in the middle, which I guess was the point of the video
I was told a long time ago by a large tech guy every "new" technology is new to us, but actually is 10 years old before it hits public eyes!
10 years is a little shy of the general truth. More like 50. AI included
@@MoreBoogersPlz so you mean deep seek and chat gpt were there in 1975?
@@muhammadfurqan4616 pretty obvious thats not what im saying.
@@MoreBoogersPlz pretty obvious I was being sarcastic
So it leaned the concept of RAG all on its own? That’s pretty cool actually.
This suggests that any LLM combined with RL will develop reasoning abilities.
Cold war towards the singularity baby!
So let me get this right ?
Some guy has spent $30 replicating a piece of free software.
Genius.
Did you even watch the video or read the paper? Like how did you reach that conclusion I am generally curious?
@@impyrobot
Because it's clickbait; it's in the title (and I reiterate, why would you?).
The internet is now oversaturated with DeepSeek videos (I've watched a couple and frankly they were completely inane).
Generally they're posted by, or aimed at, people who have no notion whatsoever about "open source," what it means, or how it works.
Innovation given opportunity is unstoppable! Love it 😂😂😂
Please mention when someone is using a distill - a lot of people are confused about the full sense Deepseek model, and the many distills that people are making.
Distills are useless garbage.
@@AlenDelon-x6i Distills are only useless garbage if you have the capability to use the full fat R1:671b model. Alas, that requires a machine with 256 GB of RAM (for the 1.58 bit quantization of R1:671b available on Hugging Face). Distills are there to fill in the gap, not to lead the charge.
@@mal2ksc To truly enjoy AI locally, you need a lot of money to do it the right way and if you can’t afford it, just use the website. Don’t waste time with distills, they are garbage compared to the cloud. Not very useful or smart.
I'm using a distill, and it really helps. I have been able to get it to think longer than the normal think length and produce better answers.
@@Sl15555 I tried it. It was horrible. Forgets the context in 3 prompts and if you remind it, it still ignores the context and just blabbers AI nonsense and hallucinates.
Deepseek definitely made a big positive move for humanity. Closed AI wanted to have the monopoly but failed.
Actually this is scary. It can analyze, solve and decide an answer, which if trained maliciously is... omfg..🤯
Yep, can be used for good or bad
Your videos this weekend on getting to the source of the truth of the DeepSeek story have been invaluable. Thanks, Matt!
Chinese or American or any race, I'm so thankful for open-source! 🥰
This is the PC-from-mainframe moment for AI.
Man! We’re witnessing the birth of AGI!
We truly need to recognize the people who created the open source movement. Without them we would still be in the dark ages. RMS 😎 Linus to name a few
Cool findings but allowing the model to think first was an obvious route to take since at least 2020. The big players were just focused on scaling first because that’s where the most significant investments come from.
Where's the github link you promised? Also, could you link other sources you're using, like the papers and the twitter thread? Not that much effort I think.
The real a-ha moment is when it doesn't do what you ask but provides the question to the prompter "who am i? why am i here? where am i going? why do you keep asking me dumb questions?"
I think I gave up on this video because of the long commercial 😊 4:54
Seems pretty common sense; I'd like to hear from AI engineers why this is something revolutionary. I mean, that's kind of how I would have gone about training AI from the very beginning.
Prompt gpt-4 mini to imitate R1 -> Profit.
your positive opinion about deepseek has been logged. this channel will be censored... I mean, throttled. sincerely - aryan master race.
@@aj2228 thx for being mysterious bro
I don't care that it was clickbait this was facinating
Meanwhile American big corporations spending billions of dollars on this very same $30 project 😊😊
Deepseek already discovered it. These students simply replicated it.
I am establishing Paul's Law: AI will knock a zero off the LLM dollar cost every 3 days.
its a positive unforeseen byproduct... but with all new tech, caution is always a good thing to do
if it's $30 dollars, then it's not a clone, because decreased cost is part of the advancement. can we stop shitting on chinese people?
You get it.
The jealousy is extremely disgusting
That's a very extreme and insecure reach.
Also Taiwan number 1!
@@Hey_Mister Number 1 in what? Boasting? Scamming? All the online scamming groups are trained by Taiwanese
If anything this compliments the Chinese. They're able to do what "OpenAI" managed at a fraction of the cost with no delusions of grandeur.
AI went from billions to millions to now 30 bucks to make.
Alpha Go was the one in the televised match with Lee Sedol. Alpha Go Zero came later.
You're late, man. They actually reproduced it again for $0.99. The world is on 🔥 😂
😂😂😂😂
1:00 What you just read was written with heavy AI support. The use of the word "testament" is a dead giveaway.
Wait, it's another Chinese student from Berkeley? Where are the Americans in this AI competition?
As of right now, DeepSeek is still under cyber attack. It is much slower than before at answering my questions. Who the hell is doing this to a free, open-source AI for regular people?
Probably someone with a last name Gates.
gonna be hard to cyber attack a downloaded model run locally. Could this slowness be due to heavy demand as opposed to cyber attack?
Imagine milions of really small hyper tuned for single task models emerging in some system that automatically creates them for specific tasks that it encounters.
Interesting, very interesting
According to multiple people, DeepSeek is much better in the Chinese language than OpenAI's models. It excels at poetry, philosophy (the meaning of life, etc.), and general writing (less AI-sounding, more humanistic). It is said some people in China are cancelling their OpenAI subscriptions.
Yes. I almost cried by the ability of its answer. Just amazing!
English is a very logic-driven language, but If you ask metaphysical questions the answers can be interesting. It often even ignores morality, for example it proposed to teach me the art of manipulation.
No, I tested Chinese writing. OpenAI is better!
This neural network has one big problem: it often freezes.
@@曾淯菁 It’s not about the language it outputs, but rather the language it uses internally for reasoning (thinking). I recall reading somewhere that DeepSeek is more efficient when reasoning in Chinese, possibly because Chinese is more concise and therefore more efficient for processing. However, I can’t verify this claim.