Chapters (Powered by ChapterMe) -
0:00 Intro
1:15 The intelligence age
4:18 YC o1 hackathon
12:09 4 orders of magnitude
14:42 The architecture of o1
21:52 Getting that final 10-15% of accuracy
32:06 The companies/ideas that should pivot because of o1
34:44 Outro
@chapterme great marketing
are they not allowed to talk about claude or something?
They mentioned Claude several times in the last episode, genius
@brucebain7340 Yeah, they did, but I think they are quite biased towards OpenAI. I have used the premium versions of Gemini, OpenAI, and Claude. Claude is still neck and neck with OpenAI, if not ahead. Also, YC needs to be more objective in their POV. I think it happens because most of them talk to founders and read articles etc. rather than using the models extensively themselves to build stuff. That makes a lot of difference in how one perceives technology.
Of course they are biased. They're human.
Whoever expects them to be impartial will be disappointed.
@brucebain7340 The bias is not about what to order for dinner. The bias is that company 'A' is going to unlock AGI, that company 'A's models are going to be orders of magnitude better, because Mr. Sam told us so. This bias can lead to misjudgements on YC's part, which can impact its own investments, which run into millions. I mean, wouldn't it be sad if someone could simply walk into your office and skew your own vision of something?
@brucebain7340 The bias is not about what to order for dinner. The bias is also not from a colleague of mine who thinks that Apple built Apple Intelligence (AI)!
The bias is from a group of people running one of the topmost accelerators in the world.
The bias is that company 'A' is going to unlock AGI, that company 'A's models are going to be orders of magnitude better, because Mr. S told us so. This bias can lead to misjudgements on YC's part, which can impact its own investments, which run into millions.
4:30 Diode Computers is doing PCB (printed circuit board) design, not chip design (like NVIDIA)
The timing is a little off here, just as the plateau conversations and supporting data keep coming up
LOL. What is Garry's obsession with 'raw dogging'? He's said this in multiple videos. Bro, stay safe out there 😂
those hackathon results were really impressive 🤯
Given all these developments, I need a remote job. PhD since 2022.
Advice to everyone building a product: AI is for scaling your solution and your insight in a particular domain. So work on insight building, and use an AI model to scale that solution to solve the problem at a deeper level for your target user.
Bro omg, are we hitting a wall or not
The people in the chat against OpenAI have no idea what's going on. It's too late to stop AI; it's not a fad, and it is not going away. There is no new fad. The only thing that kept humanity together was intelligence, and now things are going to accelerate. But the benefits will only reach a few, of course.
It's more that an LLM cannot truly reason. The reasoning in o1 is chain of thought using the same LLM backend. This will likely not scale to AGI. I'm here for the ride; let's see.
Who’s saying it’s a fad ?
LOL. Nobody needs to "stop AI" since there is NO AI at all. There is no intelligence in LLMs or in what OpenAI does. Yes, it's a nice tool that can process a lot of data and structure or recombine it one way or another, but it has many, many limitations that are impossible to overcome, since they require a totally different technology. It's not even close to anything that could be called AI.
Can you show the info you are looking at....
Thanks a lot for the shout out and for adopting us, Garry 😂
I love the girl. She's so authentic. When she speaks, it feels like you're right there with her, having a conversation as if you're sitting together in a cozy coffee shop where everyone feels like friends.
Very inspiring and exciting podcast; it makes me look forward to the future.
The AI bot beating Dota 2 pros at The International put OpenAI on the map for me.
Yeah, AI for customer support is solving something like 20-35% of most customers' support tickets. Those are usually the easier tickets that come up most frequently.
This could be more than 35%. The most common issues often account for 70% to 80% of tickets.
Please share a link to Sam Altman's essay referred to early in the video.
Bruh, Google it.
google The Intelligence Age
can you not google it? Is it that hard?
Feels like a lot of Kool-Aid is being chugged back here. I am personally becoming more skeptical by the day. Microsoft has probably invested the most money and time in this space, and their AI-driven products are underwhelming. I am still waiting to see a compelling AI/LLM-driven productivity tool. All I am seeing is demos that include a high degree of deception: for example, the first Gemini duck demo, the Devin Upwork demo, Tesla robots with human controllers, etc. I am at the stage where I need to actually have access to the product to believe any claims.
Kinda agree..
Same
Hmm, there are already products using AI. Just look at Google already generating 25% of their code with AI, think of all the AI features in Adobe Lightroom and other Adobe products, AI features in phones, self-driving cars, AI finding 0-day exploits. I even used Copilot this week at work to generate some reports, and this is just the beginning.
you must be living under a rock then
@winnerswritethestory3370 Name a successful commercial LLM product that isn't a base model.
Interesting
You want AGI? Here's how (toy sketch below):
1. Make an LLM good enough to create and implement highly accurate environments for use in RL. This should work with any arbitrary task.
2. Train a good policy
3. Profit?
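A toy sketch of that recipe, under loud assumptions: the GeneratedEnv class below is a hard-coded stand-in for whatever environment an LLM would actually generate, and "train a good policy" is just plain tabular Q-learning on a 5-state corridor, nothing remotely close to AGI.

```python
# Toy sketch of the recipe above: step 1's LLM-generated environment is faked
# by a hard-coded 5-state corridor; step 2 is ordinary tabular Q-learning.
import random

class GeneratedEnv:
    """Stand-in for an LLM-generated environment: walk right to reach the reward."""
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = left, 1 = right
        self.state = max(0, min(4, self.state + (1 if action == 1 else -1)))
        done = self.state == 4
        return self.state, (1.0 if done else 0.0), done

env = GeneratedEnv()
q = [[0.0, 0.0] for _ in range(5)]          # Q-table: 5 states x 2 actions
for _ in range(500):                        # "train a good policy"
    s, done = env.reset(), False
    while not done:
        if random.random() < 0.2:           # explore
            a = random.randrange(2)
        else:                               # exploit, ties broken at random
            best = max(q[s])
            a = random.choice([i for i in (0, 1) if q[s][i] == best])
        s2, r, done = env.step(a)
        q[s][a] += 0.5 * (r + 0.9 * max(q[s2]) - q[s][a])
        s = s2

print("learned policy:", ["right" if q[s][1] >= q[s][0] else "left" for s in range(4)])
```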
The mind expands, not stacks.
Let's be honest, current models and APIs can already handle the scaling use cases.
Garry is obsessed with evals 😊
I would like to see more Indian startups funded by YC. Please consider opening a YC India to invest in Indian startups.
Indian startups have some of the worst ROI. They're usually not internationally trusted, so many VCs won't fund them. You're better off looking for domestic investors.
I get confused when Garry uses 'eval' to refer to testing, as it means something different in Python. Terms like 'metrics' or 'benchmarks' are more common in the LLM context and feel more precise.
I see "eval" more commonly used than "metrics".
"If an LLM task is hallucinating, it’s likely doing too much. Break it down into steps"
The mistake people make when using these tools is that they are not specific enough. Treat it like a human or a team of humans and your results improve.
@pin65371 Treat it like a child; adults are smart and can understand ambiguity and ask for clarity.
It usually means there are missing parameters that it has to guess. Instead, it should recognize ambiguity and be more interactive: say "I don't know" or "it depends" and ask for what it needs to answer the request.
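A minimal sketch of that "ask instead of guess" idea, assuming a hypothetical call_llm helper rather than any particular vendor API:

```python
# Sketch of "ask instead of guess": surface missing parameters before answering.
# call_llm is a hypothetical helper; swap in a real model call to use it.

def call_llm(prompt: str) -> str:
    return "NONE"  # dummy reply so the sketch runs end to end

def answer_with_clarification(user_request: str) -> str:
    # Step 1: ask only for missing information, not for an answer.
    gaps = call_llm(
        "List any parameters you would need before answering this request, "
        "or reply 'NONE' if it is fully specified:\n" + user_request
    )
    if gaps.strip().upper() != "NONE":
        # Step 2: hand the questions back to the user instead of guessing.
        return "Before I answer, I need to know:\n" + gaps
    # Step 3: answer only once the request is unambiguous.
    return call_llm("Answer this fully specified request:\n" + user_request)

print(answer_with_clarification("Book me a flight to Tokyo."))
```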
I do not believe that AI will be able to do chip design better than humans. PCBs maybe, chips no. It took the brightest people around the globe (literally any good chip you see today touches at least decades of research and IP from Japan, Europe, and the US). There are so many specialties in chip design, including testing, production readiness, simulation, RF, documentation, security, and compilers, that are beyond the capabilities of a closed-loop LLM.
If LLMs' PCB design capability transferred to SoCs, China would already have built something that beats Apple and NVIDIA.
The first half is absolutely make-believe: solve physics, nuclear power, climate... seriously? AI will solve societal issues? I find it hard to believe that next-gen data centers for AI will need nuclear power.
The second half I agree with more, on the real progress of AI: run tests, create a moat by building agents, and accumulate proprietary data.
Speculation is dangerous; I hope people can think for themselves.
AI should eliminate the need for customer support, so an AI customer support solution seems destined to fail
We keep getting 101-level takes from OpenAI, and that's the problem.
Amazing!
I hope you guys are not blinded by OpenAI
"thousands of days" maybe just say years
more biomimicry --> intelligence on tap
didn't Orion disappoint? I don't think we can just assume it will keep scaling. I am excited for the o series.
shoutout atopile
All this advancement is amazing but what I do not understand is, how is most of this advancement actually helping humanity? How is it helping the majority of humanity and not the one percent of investors?
There's no law of physics to ensure that
That's not what companies are for
You can now talk to one of these LLMs and learn almost anything / ask questions about anything. Every kid with access to the internet now has a personal tutor for every subject, for free or $20/month. I'd say humanity is being helped.
You know, when companies are more efficient, that means cheaper products, services, and other goods.
Capitalism ... is for capitalists. 😉 It's in the name.
4 people agreeing on everything is boring. Get a homeless guy in there or something.
I can't wait for them to replace all the customer service agents with AI, because I'll make an AI call center that calls all their customer service centers and tries to manipulate them.
Adoption isn't going to be super quick, because having an attack surface that massive is a huge liability.
If AI reasons then people will start believing in God sooner.
First
I was so close!
hahahah
lol
Diana Hu beautiful
third
Second
You should listen to the Marvin Minsky conversations from the '80s AI Winter. Please stop talking about AGI until then. You're clueless!!!
Still not seeing these tools achieve anything that humans are not capable of. Some efficiency gains, maybe, and they're useful as a learning tool. I think it's a limitation of the stochastic-parrot, learn-from-training-data approach; it's never going to be creative and bring new innovations. That will need a new approach entirely.
No way. I can't put bugs all over my code base nearly as fast as Cursor can.
@AlexWilkinsonYYC You surprised a lol out of me.
Fifth
AGI means you have solved the horizontal scalability problem, which means having the ability to access any company's database, public or private.
So can one of you geniuses explain how exactly that is going to happen??
My guess is that all of you are being duped by Altman, because he has to say s**t like that to keep his investors satisfied and feeling good about themselves lol 😂