There should be some online VM desktop you could run computer use on. It would reduce risk and give more people a way to use it safely.
Used the new Sonnet 3.5 today for work (coding). It's def a solid improvement. I'd say it's on par with o1-preview or o1-mini but much faster.
Haven't had a chance yet to try it with very long instructions, but Claude models are typically super strong on instruction following. Can't wait to keep building with it tomorrow!
🎯 Key points for quick navigation:
00:00:00 *🚀 Introduction and New Model Overview*
- Announcement of two new Claude models: 3.5 Sonnet and 3.5 Haiku.
- Overview of how the new models fit into the existing Claude lineup.
- Mention of Opus 3.5, which is anticipated but not yet available.
00:01:00 *📊 Performance and Benchmark Comparisons*
- 3.5 Sonnet outperforms previous models on most benchmarks.
- Benchmarked against GPT-4o, Gemini 1.5 Pro, and others.
- Highlight of SWE-bench score improvement from 33.4% to 49%.
- Focus on agentic tool use and coding enhancements.
00:03:27 *⚡ Haiku Model Details and Future Potential*
- Haiku 3.5 expected to outperform Claude 3 Opus.
- Limitations: initially released as text-only, with image input support to follow.
- Potential for fast and affordable performance in many tasks.
00:04:23 *🖥️ API Development and Computer Interaction*
- Introduction of an API that enables Claude models to interact directly with computers.
- Allows searches and task execution through a browser autonomously.
- Benchmarked on OSWorld; possible risks highlighted.
00:06:20 *🧪 Demonstrations and Precautions*
- Demo videos showcase model abilities like filling Google Sheets and performing searches.
- Identified risks include errors during testing and potential misuse.
- Suggested using a separate computer for safety when testing the API.
00:08:25 *📋 Conclusion and Summary*
- Summary of the benefits of using Sonnet for coding and Haiku for fast tasks.
- Speculation about the release of Opus 3.5.
- Invitation for viewer feedback and future exploration of API usage.
Made with HARPA AI
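For anyone who wants to poke at the computer-use piece mentioned above, here's a minimal sketch of what a request looks like, assuming the beta flag, tool type, and model ID from Anthropic's announcement docs (double-check against the current documentation):

```python
# Minimal sketch of a computer-use beta request. The tool type, beta flag,
# and model ID are taken from Anthropic's docs at release time — verify them.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",  # built-in computer-use tool type
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }
    ],
    messages=[{"role": "user", "content": "Open a browser and search for Yellowstone photos."}],
    betas=["computer-use-2024-10-22"],
)
print(response.content)  # the model replies with actions (clicks, keys, screenshots)
```

Note that the API only returns proposed actions; your own loop has to execute them on the machine and send screenshots back.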
Exciting! Thank you!!
Computer use has a big use case for software QA specifically. Really excited
Yeah, it's one of the biggest missing pieces for a mostly autonomous SWE. If we can automatically feed console errors back into the prompt (easy) and have the agent actually test various aspects of the app (hard), then that's really all you need: set a coding agent up with a list of product requirements, leave it alone for a while, and come back the next day to see what it's managed to build iteratively.
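The "easy" half of that loop might look something like this minimal sketch; the test command, model ID, and iteration cap are all placeholder assumptions, and a real agent would apply the proposed fixes rather than just printing them:

```python
# Hypothetical feedback loop: run the app's checks, capture console errors,
# and feed them back to the model. `npm test` is a placeholder for whatever
# your project actually runs.
import subprocess
import anthropic

client = anthropic.Anthropic()

def run_and_capture() -> str:
    """Run the app's checks and return anything written to stderr."""
    result = subprocess.run(["npm", "test"], capture_output=True, text=True)
    return result.stderr

for attempt in range(5):  # cap iterations so it can't loop forever
    errors = run_and_capture()
    if not errors:
        break  # clean run — done
    reply = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": f"These console errors appeared on the last run. Propose fixes:\n\n{errors}",
        }],
    )
    print(reply.content[0].text)  # a real agent would apply the fixes here
```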
Thanks, very informative
Computer use is going to be a game changer
Yeah, this is exactly what I talked about in the Agent-S video yesterday; just didn't expect it to be here so quickly
Is this like RPA on steroids?
A very small thing, but one of my 'bots' that was using Sonnet 3.5 now seems to be automatically aware of the tool/function calls it has available. As in, it'll mention them in its response as 'something you might want to ask me to do'. Not sure if it's just a quirk, but I never had previous models seem user-facing 'aware' of their available tools. Its responses also seem to take a more nuanced view of the system prompt. Looking forward to trying Haiku!
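For context, the model only sees the tools declared on each request, so the 'awareness' would be it volunteering those declarations back to the user. A minimal sketch, with a made-up get_weather tool as the assumption:

```python
# Sketch: tools are declared per request — the model only "knows" about the
# tools listed here. get_weather is a made-up example tool.
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=512,
    tools=tools,
    messages=[{"role": "user", "content": "What can you help me with?"}],
)
# The new Sonnet will often surface get_weather here as something
# you might want to ask it to do.
print(response.content[0].text)
```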
*excitement intensifies!*
Why did they not change the name to Claude 4, or at the very least 3.6? Isn't that what those numbers are for?
Agree. I almost called it 3.6 in the thumbnail to show it was new
My assumption is that they're using the same architecture as 3.5 v1.
I think it isn't the architecture but the foundation-model weights that are the same (i.e., the weights may change due to fine-tuning, quantization, etc., but they're based on the same training). If you mean architecture as in the model architecture, I agree 😉
@@toadlguy In my understanding, the major version number is the architecture and the decimals are the weight tuning.. but that's just pure intuition
OpenAI does the same annoying thing. Why designate 15 different versions of GPT-4 by date instead of just bumping the version number like a normal person?
Looking forward to comparing GPT-4o-mini and the new Haiku, as they definitely have their place. And to trying the new Sonnet ASAP, obviously (assuming the price is the same..)
That's why he was saying AGI by 2026.. the new era of autonomous machines
The next question is how to make all these agents work together and check/verify each other within one company. Maybe beyond one company.
Computer use goes beyond the over-hyped LangChain agents; we need powerful OCR and a powerful LLM to replicate this.
Amazing.
How do I use the previous model? I want to use it, but there's no option shown to select it.
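If you're on the API (the claude.ai app doesn't expose the choice), you should be able to pin the older snapshot by its dated model ID; a sketch assuming the 20240620 ID from Anthropic's model docs:

```python
# Sketch: pin the pre-update 3.5 Sonnet snapshot via its dated model ID.
# claude-3-5-sonnet-20240620 was the original 3.5 Sonnet per Anthropic's docs.
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # the pre-update snapshot
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.content[0].text)
```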
Bring it on 😁
Funny, about 4 hours ago I had one very unfortunate session with Claude in which it basically forgot LaTeX. I wonder if it has something to do with the update, because it looked VERY odd (like writing pi as a symbol and not as \pi, etc.).
Interesting. I wonder if that was during the swap-over.
@@samwitteveenai Very likely, because I've never seen Claude be so stupid. But after a few prompts it normalized.
An interesting update :)
LMAO! 😂 Yellowstone is quite beautiful ❤️
AGI just wants to see people's nice pics
Please do more
Can you make a video on how to use the computer-use model to do an action 🙂
Just released it!
Computer use will be great ONCE IT RUNS LOCALLY. I don't trust cloud machines owned by others to be using my computer; that makes it not my computer anymore, and it's a pain making a VM each time.
Hard agree
I've been waiting for a model that can use Blender efficiently. I describe the scene I want, and then it gets to work building the scene in Blender (something like the bpy sketch after this thread).
It looks like Adobe is working on something like that with Project Scenic
@@justtiredthings Oh, I'll check it out. I have been exploring OpenUSD as an alternative: first, using a huge library of 3D assets that an LLM can compose into an OpenUSD scene, and with that I've successfully created a 3D environment. For a more dynamic approach, I've been exploring the use of the Kolmogorov-Arnold theorem to create continuous functions that project 2D Gaussian splats onto a 3D plane. Most projects have been focusing on 3D generation models when tools already exist that an LLM should be able to use to produce any 3D scene.
@@marilynlucas5128 yeah, we really need more tooling like that for consistency in AI filmmaking
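For the curious, this is the kind of bpy script an LLM could emit from a one-line scene description ("a sphere on a plane, lit from above"); illustrative only, and it has to run inside Blender (e.g. `blender --background --python scene.py`):

```python
# Illustrative Blender Python (bpy) script of the sort an LLM might generate
# from a text description. Uses only standard bpy operators.
import bpy

# clear the default scene
bpy.ops.object.select_all(action="SELECT")
bpy.ops.object.delete()

# ground plane and a sphere sitting on it
bpy.ops.mesh.primitive_plane_add(size=10, location=(0, 0, 0))
bpy.ops.mesh.primitive_uv_sphere_add(radius=1, location=(0, 0, 1))

# a light overhead and a camera looking at the sphere
bpy.ops.object.light_add(type="SUN", location=(0, 0, 5))
bpy.ops.object.camera_add(location=(6, -6, 4), rotation=(1.1, 0, 0.785))
bpy.context.scene.camera = bpy.context.object  # camera_add leaves it active
```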
Chapters?
Version numbers are kind of useless if vendors don't increase them when they actually upgrade the functionality. I don't know why they wouldn't call the new model Claude 3.6 or so.
It's the Playwright framework or something similar, with an LLM interacting with it; it's not new.
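Right, the pattern is easy to sketch with Playwright's Python bindings: browser state goes to the LLM, which picks the next step. This toy version passes only the page title instead of a screenshot, and the action parsing is hand-waved:

```python
# Toy sketch of the Playwright-plus-LLM pattern. A real agent would send the
# screenshot bytes (base64-encoded) and parse the model's reply into actions.
import anthropic
from playwright.sync_api import sync_playwright

client = anthropic.Anthropic()

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    screenshot = page.screenshot()  # bytes; unused in this simplified sketch

    # Ask the model what to do next given the page state (simplified to text).
    reply = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=256,
        messages=[{
            "role": "user",
            "content": f"The page title is {page.title()!r}. What should I click next?",
        }],
    )
    print(reply.content[0].text)
    browser.close()
```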
Software services should provide APIs and SDKs. The idea of an agent clicking around a screen like a person is so unbelievably dumb and inefficient.
The assumption is it can be integrated into any computer and system, instead of relying on an API for every part of the computer. Can you imagine trying to write an API just to open a window on Windows, Mac, and Linux? Multiply that by the thousands of different functions required. Just train the AI to use the PC like we do; it's way more adaptable in the long term.
@@yellowboat8773 You wouldn't use an API to open a window; you'd use it to get the data directly from the application backend in a reliable way. Front ends are point solutions to the inefficiencies of having a human agent. Why replicate this inefficient and buggy layer?
Saying you need another computer makes no sense; just don't use an admin role, and don't provide passwords to sensitive content/services.
Yeah, computer use will not pass security audits
On a Mac (or, I suppose, a Linux box) you could sandbox all app interactions under a user with diminished privileges to protect both your machine and your data (rough sketch below). It will be interesting to see which approach will prevail: Apple's very complete restrictions, Anthropic's (as I suggest) sandboxed restrictions, or Google's (and perhaps MS's) lack of restrictions.
Really good point
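A rough sketch of that sandboxing idea on Linux/macOS: launch the agent loop as a low-privilege account. "claude-agent" and agent_loop.py are hypothetical names, and switching users via subprocess needs Python 3.9+ plus sufficient privileges:

```python
# Sketch: run the computer-use driver as a restricted user so it can't touch
# your data. "claude-agent" is a hypothetical account you'd create yourself;
# the calling process needs permission to switch users (typically root).
import subprocess

subprocess.run(
    ["python3", "agent_loop.py"],  # your computer-use driver script
    user="claude-agent",           # drop to the restricted account (Python 3.9+)
    cwd="/home/claude-agent",
    check=True,
)
```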