Atty and Michelle, your presentation was tight and your delivery was spot on! From the perspective of a dev who needs convincing that this is worth using, Michelle's explanation at 25:00 is really key. FWIW, consider front-loading that in future explanations.
Guys, we developers are the first to see the future, what an honor OpenAI has given us. Let's do it
This was so valuable. I have a feeling this work was fundamentally required for o1, and we are now all benefiting 😊 This innovation potentially has the ability to upgrade every UX to an interoperable, lightweight, and fast API, bringing context to all the data moving around, with agents able to exchange a cache of how to structurally interact with what they see. The fact that the index is a tree means that selecting the appropriate index cache can also be an inference step, opening up the ability to scale down the effort to compile novel versions, since most UX is derivative. I'm very impressed, and love the philosophy surrounding this work effort. The ending comments about our collective ability to see and make the future were lovely ❤
19:20 - not too far from agentic workflow that’s fully automated, this is promising.
25:28 - I just realised she's incredibly pleasant to listen to; her presentation is both professional and friendly.
It would be a huge improvement if we had Sam Altman replaced with her for all the important presentations.
36:07 - 🧐 cumbersome but I’ll wait until after I actually test it.
39:34 - the agentic improvement is seriously impressive.
40:19 - AGI is certainly shaping up to be positive if this is how we are going to get there.
40:39 - thank you 🙏🏻
This is amazing. I had goosebumps watching this video. It's such a powerful tool, elegant design, and research.
The idea of token masking is so cool. Nicely explained too. Thank you OpenAI.
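For anyone who wants to see the masking idea in a few lines, here's a toy sketch. This is NOT OpenAI's actual implementation: a hand-written state machine stands in for the compiled JSON-Schema grammar, and whole strings stand in for real tokens, but the core trick is the same: at each step, mask out every vocabulary token that would break the format.

```python
# Toy sketch of constrained decoding via token masking (illustrative,
# not OpenAI's implementation). The "format" here only ever allows the
# JSON shape {"name": <name>}.

VOCAB = ['{', '"name"', ':', '"Ada"', '"Bob"', '}', 'oops', '42']

# state -> set of tokens allowed next (everything else gets masked)
GRAMMAR = {
    'start':  {'{'},
    '{':      {'"name"'},
    '"name"': {':'},
    ':':      {'"Ada"', '"Bob"'},
    'value':  {'}'},
}

def allowed_mask(state):
    """Boolean mask over VOCAB: True where sampling the token is legal."""
    return [tok in GRAMMAR[state] for tok in VOCAB]

def step(state, token):
    """Advance the state machine after emitting `token`."""
    assert token in GRAMMAR[state], f"{token!r} is masked out in {state!r}"
    if state == ':':
        return 'value'            # just consumed a value token
    return token if token in GRAMMAR else 'done'

# Greedy walk: pretend the model always picks the first unmasked token.
state, out = 'start', []
while state != 'done':
    token = next(t for t, ok in zip(VOCAB, allowed_mask(state)) if ok)
    out.append(token)
    state = step(state, token)

print(''.join(out))  # {"name":"Ada"}
```

The real system compiles an arbitrary JSON Schema into a grammar over actual BPE tokens, but the mask-then-sample loop is the same shape.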
13:20
16:13 - Can use function calls to control the client UI
39:29 - Agentic flows can work 100% of the time
Please open-source that fictitious Convex app, as I'd like to see how you made the generative UI work without pre-building components with that schema, or point me to documentation that describes the concept.
This is awesome and much needed. Great work
Great work, love structured outputs
This sounds trivial (compared to o3 or Sora), but it is really useful. 100% accuracy! Amazing! And I can't wait to update my code to implement this!
Thank you for this great work; as a developer I'm super excited to try them all :D
They literally said "you will help us to reach AGI, thank you for building us."
Does anyone know if some version of structured outputs is available for Llama models?
Pretty much all current models work most of the time; sometimes they have some JSON formatting errors, but that's it. There are a lot of tools to extract JSON. I also normally use them to output multiple code blocks at the same time. The number of times I have triggered a format error with my JSON schema validator (I use ajv) can be counted on my fingers 😅. Most of the errors I see are missing quotation marks around the keys.
That's exactly my sentiment. Even 3B models can output JSON, which can easily be cleaned using a simple Python function.
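A rough sketch of what such a "simple cleanup function" might look like, stdlib only. This covers the two failure modes mentioned in this thread (markdown fences, unquoted keys); it's a heuristic, not a full JSON repair tool, and would e.g. mangle braces or colons inside string values.

```python
# Heuristic cleanup for almost-JSON model output: strip markdown fences
# and quote bare object keys before parsing. A sketch, not a full
# repair tool.

import json
import re

def clean_model_json(text: str) -> dict:
    # 1. Drop surrounding ```json ... ``` fences if present.
    text = re.sub(r'^```(?:json)?\s*|\s*```$', '', text.strip())
    # 2. Quote bare keys:  {name: ...} -> {"name": ...}
    text = re.sub(r'([{,]\s*)([A-Za-z_]\w*)(\s*:)', r'\1"\2"\3', text)
    return json.loads(text)

messy = '```json\n{name: "Ada", age: 36}\n```'
print(clean_model_json(messy))  # {'name': 'Ada', 'age': 36}
```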
Any ideas on how to build a useful LLM like Sonnet 3.5?
31:10 Some regular-expression implementations support balancing expressions, which would work, but regex alone still wouldn't be the right solution.
Very well explained
It's just wonderful, amazing news
Please, can we get access to the app and repo? 12:19
amazing presentation
Is this a new vid or a recap of Dev Day in Oct?
It's from Oct.
What is her name?
This is super cool!
Great presentation! Good to know the insights. Thank you.
explaining basic properties of regular expressions was not something I expected from an OpenAI video
Am I getting something wrong? I thought these were already in place; what's new with this?
I've said it before, I'll say it again: structured outputs move the needle on GDP
Can you talk about the Indian who died (murdered), who worked at OpenAI and spoke up about privacy?
Why would we ever need the strict property to be false?
you wouldn't - but we needed to keep the option there for backwards compatibility with functions that existed before structured outputs was launched.
@nikunj-openai I had the same question, and I think this is a reasonable answer.
@nikunj-openai So as to not overfit it down the wrong thinking path with your prompts and assumptions, or so it can be flexible and creative, etc.
There are also some features of JSON schema that strict mode doesn’t support e.g. min length, extra properties, optional properties, and so on. If those are important to you, you can still use response formats (and expose your schema to the model in the way it was trained) but not force those limitations during constrained decoding.
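A sketch of the pattern described above: keep the unsupported keyword (here minLength) in the schema the model sees, set strict to false so constrained decoding doesn't have to enforce it, and validate client-side. The payload shape follows the public response_format docs, but treat the details as illustrative.

```python
# Pattern: expose a schema with keywords strict mode can't enforce
# (e.g. minLength), opt out of strict constrained decoding, and check
# those constraints yourself on the parsed reply. Illustrative sketch.

import json

schema = {
    "type": "object",
    "properties": {
        "username": {"type": "string", "minLength": 3},  # not enforced by strict mode
    },
    "required": ["username"],
}

request_fragment = {
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "signup", "strict": False, "schema": schema},
    }
}

def check_min_lengths(obj: dict, schema: dict) -> list:
    """Client-side pass for the one keyword this sketch cares about."""
    errors = []
    for key, sub in schema.get("properties", {}).items():
        min_len = sub.get("minLength")
        if min_len is not None and len(obj.get(key, "")) < min_len:
            errors.append(f"{key}: shorter than minLength={min_len}")
    return errors

reply = json.loads('{"username": "ab"}')   # pretend this came from the model
print(check_min_lengths(reply, schema))    # ['username: shorter than minLength=3']
```

In practice you'd hand the failing reply back to the model or retry; a full JSON Schema validator (e.g. the jsonschema package) replaces the toy checker.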
Amazing! Can I get the example code?
22:42 If tokens are made up of fragmented words which map to full language content, then tokens should help improve the general guessing of LLM models. I wonder what else will be better than such a structured design (token design). 💭🤔💭🤔💭🤔
Amazing
Justice for Suchir Balaji...
How is function calling implemented? Token masking?
Yes
Justice for OpenAI whistleblower Suchir Balaji
I don't see why someone would use function calling over a response format. Function calling seems like a subset of response formats. If I get a response in the format I wanted, I can then use the response to call a function or for any other use case.
Choosing from a set of well defined and modular functions is often easier for the model than handling highly variable context-dependent outputs, even with a fixed schema imposed.
The model has been fine-tuned on the concept of tools / functions, so the quality of function choice and function bodies will be better than using plain response formats. Like all features of LLMs, what’s in the training data affects the model quality, so always good to align your usage with the data!
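For anyone comparing the two options in this thread, here are rough request shapes side by side. Function calling gives the model a choice among named tools (or prose); a response format forces every reply into one schema. The field names follow the public API docs, but treat this as a sketch, not a reference.

```python
# Two ways to get structured output: tools (model chooses whether/what
# to call) vs. a response format (every reply must match one schema).
# Sketch of the request payloads only; no API call is made here.

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "strict": True,
    },
}

# Option A: the model may answer in prose OR call get_weather.
with_tools = {"model": "gpt-4o", "messages": [], "tools": [weather_tool]}

# Option B: every reply must match this one schema; no choosing.
with_format = {
    "model": "gpt-4o",
    "messages": [],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "weather", "strict": True,
                        "schema": weather_tool["function"]["parameters"]},
    },
}
```

The reply above is the key point: the model was fine-tuned on the tools framing, so choosing among well-scoped functions tends to work better than shoehorning a decision into one response schema.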
Could you share the code for the recruiting app?
People missed this: OpenAI has not only showcased groundbreaking development in AI but also its talent from all walks of life. Ethnicity, gender, age; all balanced. I love how smart you are. Keep this burning 💥
Why didn't you guys edit out the alarm that was going off at the beginning? It's really hard to concentrate.
Justice for Suchir Balaji
wow
Presentation geniuses... you have to try really hard not to fall asleep
Please share the resume app code
I did say it last night: "Why the hell are you not doing what I'm telling you to do!!"
ChatGPT was launched on November 30, 2022 as a prototype by OpenAI, an artificial intelligence research company.
*Justice for Suchir Balaji*
2024 was all about chatbots and 2025 is all about AI Agents
You could almost do the same talk today that you did yesterday and have it already be obsolete...
damn she kinda bad tho
What do you mean 😢
Killer 😢😢😢 Ban OpenAI in India..
nb
Crazy’s
Alright clickbait.
Good. We can now get rid of the CEOs. They cost too much.
I guess this guy is Indian
Is she AI?
First again 😅😅😅😅😮
Nobody cares
Omg so cool
What a boring presentation, get to the point.
Can we do without the curry English?
I would say function calling & structured outputs are the best features of LLMs ever.
Amazing