OpenAI SHIPS! 🚀 GPT4 Turbo, Agents, Multi-Modal, 128k Context Size, and more! (Dev Day Breakdown)
- Published: 2 Oct 2024
- In this video, we take a look at all the huge announcements from today’s OpenAI Dev Day. They announced so many new things, including GPT-4 Turbo, better pricing, agents, multi-modal capabilities, and even an agent marketplace.
Enjoy :)
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewber...
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew...
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
Full Dev Day Video - • OpenAI DevDay: Opening...
You my friend are ON THE BALL. So glad I found this channel.
Same
+1 👏🏼
I’m cheering and working hard so that open source AI will be as good as the closed ones. It’s not comfortable to see so much power in the hands of just one company. I’m impressed with the advancements of OpenAI, but far from comfortable. This is different; this tech is so powerful that those who have access to it become superhumans, while others become marginalised. It’s the dystopian sci-fi happening right now in front of our own eyes, and people are not understanding this… I stopped supporting closed source companies at this point. Being excluded gives us the feeling of urgency to reach the level of those companies and unleash this power to the masses.
Thank you for working hard to make open source AI better!
I'm with you, brother. Seeing OpenAI grow and improve is a bitter sweet experience. I'll keep working on improving Open Source AI in the small ways I can, as well.
@@amandamate9117 I stopped using it actually. Instead of prompting within their platform, I’m using this time to improve open source tech…
@@VioFax I totally get your point, really do! After this first dev presentation of Sam’s, I think the AI community in general, be it open or closed, had a good nightmarish week! It’s mind blowing 🤯 what you can do now within the OpenAI environment, and there is no competition, period. This keynote was a divider of epochs. From now on there are people with super powers (the OpenAI crowd), those with access to open source stuff, which seems like riding a bicycle against a Ferrari, and those totally excluded who really do not understand what is going on but soon will feel the punch 🥊… I think the main point here is: at some point OpenAI will unleash the real beast; we are in a preparatory stage. But once the real beast is unleashed, it will be so capable that everyone becomes replaceable. Thing is, only one corporation is controlling this beast. It would be great if the community could have some beasts too, so as not to be hostage to just one company…
I'm always amazed at how clear and concise your videos are. When update finishes rolling out you should make a video testing out all these new features!
2:30 "Raise my windows and turn my radio on" -- If that isn't foreshadowing I don't know what is.
Is that from Eli Young Band?
thanks for breaking it down the original was too long
yeah the editing was excellent
I just want to search my damn chats.
Why not?
there are some browser extensions that do this, if that's all you want to do
Prices for the API are still expensive and did not go down that much.
- GPT-4 was super expensive. Now it is just expensive.
- GPT-3.5-turbo has basically the same price for 4k; just the window is increased. It is 16k for the price of 4k, so your prompts are just a bit cheaper.
Many use cases are not possible with those prices. Luckily there are more and more free alternatives, even if they do not work as well for many tasks.
They don't want to overwhelm people with features and it helps build up hype. Example is multiple auto GPT agents. Start first with one.
That's intentional on their part.
Dear Matt, your last question re AutoGen/"Teams of Agents" - did you look at the Assistants-API Demo (34:12 - 39:30 in the keynote youtube video) with the "enhanced function calling" - when functions become more and more complex/powerful --> they could render AutoGen / Agent-Teams possibly useless ... what's your take on that?
Yes: RAG, MemGPT, and all similar startups will disappear, but new ones will come, as it's clear OpenAI wants you to develop mini ChatGPTs and put them on the store.
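On the "enhanced function calling" question above: GPT-4 Turbo can return several tool calls in a single response (parallel function calling). A minimal sketch of the request shape, with the tool names purely illustrative and the schema keys as I understand the chat completions API:

```python
def build_tool(name: str, description: str, params: dict) -> dict:
    # JSON-schema style tool definition for the chat completions API.
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {"type": "object", "properties": params},
        },
    }

# Hypothetical tools, echoing the Dev Day demo phrasing:
tools = [
    build_tool("raise_windows", "Raise the car windows", {}),
    build_tool("turn_radio_on", "Turn the radio on", {}),
]
# With parallel function calling, one message like "raise my windows
# and turn my radio on" can yield BOTH tool calls in a single response,
# instead of one round-trip per function.
assert len(tools) == 2
```

Whether this makes agent teams redundant probably depends on how much orchestration you need beyond a single model deciding which functions to call.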
Seed is not a prompt but a number used to get exactly the same result. So it's not like temperature; it is the number taken as input by the randomizer. Knowing the seed that produced a certain outcome, and being able to input it again, is what makes results reproducible rather than random every time in response to the same prompt.
Yea sorry I misspoke at that part
@@matthew_berman that's ok, anyways, your videos are priceless
Please research and explain this :)
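Since a few comments ask for this: the seed here is the classic PRNG seed. A tiny local illustration of why fixing it makes sampling reproducible (plain Python, no API call; in an actual chat completions request it would be the `seed` parameter announced at Dev Day):

```python
import random

def sample_tokens(seed: int, n: int = 5) -> list[int]:
    # A fixed seed fixes the generator's starting state, so every
    # "random" draw replays identically on the next run.
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

# Same seed -> same sequence, run after run.
assert sample_tokens(42) == sample_tokens(42)
```

Temperature still shapes the token distribution; the seed only pins down which sample you draw from it, which is why the two are independent knobs.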
Anybody noticed that they deleted the system prompt option in the GPT-4 alpha web UI?
Great breakdown. I hope that we will be able to access GPTs by API. Then it would be possible to use them in multi-agent frameworks like AutoGen. I just tested their RAG in the OpenAI Playground and it really seems to work well. You did not really mention the threads feature of the assistants demoed with the Wanderlust app. I wonder if this is just a persistent conversation that is put into the context window or if it is similar to MemGPT. At least it can use the RAG feature, so a thread might not be limited to the context window.
How'd you set it up and test it? I'm unfamiliar with the setup and would like any helpful tips!
It's always so hard to write URLs in RUclips comments. Replace www with platform and then navigate to playground. There you can create an assistant.
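For anyone wanting to script that Playground experiment instead, a hedged sketch of the Assistants setup; the field names follow the Dev Day Assistants API as I understand it (a built-in retrieval tool plus server-side threads), so double-check them against the official docs:

```python
def build_assistant_config(name: str, instructions: str,
                           retrieval: bool = True) -> dict:
    # "retrieval" is the built-in RAG tool demoed with the Wanderlust
    # app; threads then carry the conversation state server-side.
    tools = [{"type": "retrieval"}] if retrieval else []
    return {
        "model": "gpt-4-1106-preview",   # the new 128k-context model
        "name": name,
        "instructions": instructions,
        "tools": tools,
    }

cfg = build_assistant_config("travel-helper", "Plan trips from uploaded PDFs.")
# With the openai client this dict would map onto something like:
#   client.beta.assistants.create(**cfg)
#   thread = client.beta.threads.create()
```

The interesting open question from the comment stands: whether a thread is just context-window stuffing or something MemGPT-like is not visible from the API surface.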
Feels like Autogen will quickly become dated
autogen is open source and can be used with all instruct models with function calling. This is a proprietary solution at a high cost.
@@jjrrmm I agree that AutoGen won't be replaced by the latest OpenAI products, but as of now OSS models are not on par with GPT-4 regarding function calling, coding and reasoning. At the moment most open source models are not fine-tuned for function calling, and therefore it doesn't work as well as with OpenAI. Sure, OpenAI models have their price tag, but in a professional context the money is well spent.
Now they just have to change their name from OpenAI to something like ClosedAI or maybe just call it what it is MicrosoftAI
Not gonna be satisfied until we get open-source GPT 3.5. (BTW. Anthropic is still king. They are free!)
Absolutely brilliant commentary, Matthew. Really well done.
0:34 How in the world does a Microsoft-owned OpenAI put an Apple Mac in the background as a free ad... I mean, even for trillions of $ Apple would never put a Surface on a stand.
I think personally my main concern is data privacy, especially if we start transitioning into using openai for everything in our daily lives.
Your data is all over the internet and companies already know everything about you. "Data privacy" is a farce at this day and age.
In a year we will have GPT-4-level open source models, and in max three years we will be able to run them on consumer hardware.
I just can't stop thinking about, what will this look like at the end of 2024?, because the framework is already so much better than it was only months ago.
Assistant API in combination with gpt actions is how you get teams of agents.
I see no change to 3.5 or the interface. I guess all this is behind the plus paywall? I might sub for the 3rd time if I see people saying it's worth it.
I don't understand 3/4 of what he said. Someone plz recommend me a beginner type video or intro to AI
I am really keen to hear your thoughts on the sustainability of continued improvement of open source models, when a lot of them (all of them?) are being built by companies that ultimately aim to monetise them. Are there organisations out there making and improving cutting-edge open source models, and if so how are they sustainable?
Fixed seed numbers will not be about temperature. It will be like graphics models, where it uses a random seed and you get a different result every time. But if you put in a fixed number, you will always get the same responses to the same conversations.
Great news about tts and whisper. Whisper has been very finicky to get working properly with LLMs local or gpt. Really looking forward to being able to get shit done while driving
Ur nuts, do you actually have gpt running while you drive?
@@ikillwithyourtruthholdagai2000maybe he's driving an excavator?
Maybe he's driving a Tesla?
Maybe he's playing golf?
@@ikillwithyourtruthholdagai2000 how is that any different to talking to a passenger while driving
Whisper is an excellent tool; you can use a single command to ask it to transcribe an entire voice file!
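For reference, the single command alluded to here, wrapped in a tiny helper (assumes the open-source `openai-whisper` package; the file name is just an example):

```python
import shlex

def whisper_cli(audio_path: str, model: str = "base") -> str:
    # Builds the one-liner for the open-source Whisper CLI,
    # e.g. `whisper audio.mp3 --model base`.
    return f"whisper {shlex.quote(audio_path)} --model {model}"

print(whisper_cli("audio.mp3"))
# Equivalent Python usage (pip install openai-whisper, ffmpeg on PATH):
#   import whisper
#   text = whisper.load_model("base").transcribe("audio.mp3")["text"]
```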
Great video Matt, really enjoyed the points you brought up along with the news
Glad to hear it!
Discovering @pipsai's Dev Day video was the highlight of my day. They captured every aspect brilliantly!
Stop spamming every other AI video with this comment.
I think that you have got to invest in the Nvidia Jetson dev kit; it's built for AI, it's 500 bucks, and it's a full-stack Linux box.
Some open source models have 128k tokens.
I wanted to ask you what you think of how autogen fits in to the frame in the context of Assistant API...
I just got access :)
Aaaaaand... EPIC fail :O :D I'm waiting for corrections.
The old top menu is gone... together with the good models :D
Now I must ask 4 times for a list of 10 words, because I got 3, 3, 2 and 2 words :D
DALL-E is gone from GPT-4 model (i got a corrupted PNG inline image).
In DALL-E model i can't send a prompt - either with ENTER nor with a mouse click.
Anybody has a similar results?
*GPT's response:* "I can't create an illustration, but I can help in other ways!"
*other example:* "I do not have the ability to create an illustration, but I can help in other ways."
*another one:* "And, I understand! I will use DALL-E to create an illustration. One moment.
[Assistant generates an illustration using DALL-E]
[Image: A collage of unique artistic symbols, each representing one of the ten words. The symbols vary in shape, color, and texture, emphasizing the creativity and uniqueness of each term.]
I hope it's okay now!"
*and another one:* [PNG: inline hex64 iVBORw0KGgoAAAANSUhEUgAAAUAAAADwCAY ... *5 minutes later* HER HER HER HREEHR HRERE HRERE HRERE...]
I wonder if there will be an Agent Smith ;)
My only note would be on the Elon Grok comments. Note when the “live data” demo was shown for Grok, they used “/web” instead of “/grok” and GPT4 has a web plug-in too (wide release two months ago) for live data access. I’ll wait and see how the differences in the two flesh out.
Not yet implemented on my account. I'm super impatient now... auuughhh...
Why not agents? Because that's Microsoft's set of announcements.. I feel there is going to be a big announcement from Autogen
Best review I have seen so far on RUclips of the Openai event. Thank you Matthew
I still see the advanced data analysis, browse the web, etc. selectors, and I don't see any way to choose GPT-4 Turbo, just GPT-4.
I think something is going over your head. The GPTs are the precursors to the agents. People will upload and share fine-tuned models. Imagine the best marketing GPTs. You grab one from the GPT store, then you grab another one that is also considered the best. You buy these all, and then you say: OK, great, now that we have extremely good GPTs that have been trained and shared on other people's money, let's make it possible for them to talk to each other and work. And now people will have super productivity powers.
This is largely misinformation. GPTs are quite literally more advanced "custom instructions" with a few extra bells and whistles, such as file upload for RAG.
The Seed parameter sounds like what is already available in open-source models, and seems to be used in the same sense it's used generally for PRNG algorithms in general; a value that defines the starting condition for the pseudo-random number generator, resulting in a repeatable sequence of apparently random values. Broadly speaking, Temperature controls the amount of random imprecision in the selection of tokens; in theory zero would result in always the same result, but maybe they have additional steps in the process that introduce additional randomness outside of what Temperature affects, and so only by controlling the Seed you can make all temperatures, even zero, always produce the same output for the same input (but if Temperature controls all randomness, then zero Temperature makes the Seed parameter essentially be ignored).
Setting temperature to 0 removes most of the randomness but not quite all. They batch the GPU processing, and even with temperature 0 the output can change slightly each time because it's batched with different things (which affects it for GPU reasons which I don't know).
Apparently the seed parameter helps with that in some way, although they did say (somewhere) that while it helps, it isn't perfect.
So using the same seed with the same prompt helps to further reduce the tiny bit of randomness still present at temperature 0, but doesn't get rid of it completely (still nice though!)
Great video Matthew, hilarious commentary try at 14:03 xd
That 128k is still kinda bad as it covers input but not output. OpenAI: "It’s more capable, has an updated knowledge cutoff of April 2023 and introduces a 128k context window (the equivalent of 300 pages of text in a single prompt). The model is also 3X cheaper for input tokens and 2X cheaper for output tokens compared to the original GPT-4 model. The maximum number of output tokens for this model is 4096."
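Putting the quoted numbers together: with the Dev Day prices (as announced, $0.01 per 1K input tokens and $0.03 per 1K output tokens for GPT-4 Turbo; treat this as a snapshot, since pricing changes), a maxed-out call costs roughly:

```python
# GPT-4 Turbo Dev Day pricing, USD per 1K tokens (snapshot).
INPUT_PER_1K = 0.01    # 3x cheaper than original GPT-4's $0.03
OUTPUT_PER_1K = 0.03   # 2x cheaper than original GPT-4's $0.06

def turbo_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000 * INPUT_PER_1K
            + output_tokens / 1000 * OUTPUT_PER_1K)

# A full 128K-token prompt with the maximum 4,096-token reply:
cost = turbo_cost(128_000, 4_096)   # about $1.40 for the single call
```

So the asymmetry the comment points out is real: you can feed in 300 pages, but the reply per call is capped at 4,096 tokens.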
HAAAA!!! Great still shot of his smile after saying 128K tokens. That is a big deal. I'd be smiling too if I was announcing it. Tokens don't mean much if the model isn't good. They have a great model too... I'm guessing.
I am a person who is suffering from dysphonia (I lost my voice due to a trauma). I praise OpenAI for allowing me to communicate again with the help of AI voices! When will GPT-4 be able to convert text to speech?
this is very cool.
If AutoGen is still the king of AI agent teams, then please do a video on how to create custom AI agents and assemble a team of them.
I am not a Python guy and don't care about creating Python projects. I have been trying to figure out how to create a team for full-stack development (MERN).
I mean.. Sure you could ask him to make a video... or you could ask chatgpt..
If nothing else it definitely knows python.
Thank you for sharing! OpenAI is set to launch new features on December 11. Will the old GPT-3.5 still be usable after that date, or will it expire? 😢
Finally! I was stuck at a wall because of the token limit of GPT-4, whoohoo, I'll be able to finish my project xD. Maybe JSON mode is something we can alter, so if we adjust the JSON it will respond a certain way, like the custom instructions.
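On the JSON mode idea in the comment above: as announced, setting `response_format` to `json_object` guarantees syntactically valid JSON, while the *shape* of the reply is still steered by the prompt, much like custom instructions. A hedged sketch of the request payload (model and field names per the Dev Day docs as I understand them):

```python
import json

def build_json_mode_request(user_prompt: str) -> dict:
    # OpenAI's guidance: when JSON mode is on, the messages should
    # still explicitly ask for JSON, or the model may ramble.
    return {
        "model": "gpt-4-1106-preview",
        "response_format": {"type": "json_object"},
        "messages": [
            {"role": "system", "content": "Reply only with a JSON object."},
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_json_mode_request('Give 10 words as {"words": [...]}')
# JSON mode guarantees the reply parses; the schema is still up to
# your prompt, so validate the keys you expect:
reply = json.loads('{"words": ["a", "b"]}')   # stand-in for the API reply
assert "words" in reply
```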
I'm curious, even if ChatGPT is powered by the latest GPT-4-Turbo, is it still limited to 50 messages per 3 hours?
I'm seeing a knowledge cutoff of January 2022. It says it therefore cannot provide any events, updates, or information that occurred after January 2022. This is something I cannot stand; I didn't use or pay for it for a while because of this. They need it to have current data.
Teams of Agents...work in progress no doubt as there are many things to iron out like for master data management, duplicates, common namespaces, etc. But yeah, distributing the work load over collaborative agents is like nirvana. Who needs people anymore?
GPT-4 doesn’t have extended context length for me anyone else?
did you try turbo? not sure it's available to everyone yet.
You have to use the *new* model gpt-4-1106-preview to unlock 128k context lengths. If you can see the model in the OpenAI playground, then you already have API access to it.
I'm just talking about the ChatGPT GPT-4 interface @@MultiMojo
Sam: "AI is Dangerous, AI is Dangerous!!!!" and now this.....
when is the moratorium on ai research 🤣🤣
He said these things are rolling out today but they still aren’t available
@Mathew i don't think you mentioned A* (or did i miss it).
I'm not sure you can speculate about what Q* is without A* for context.
What's the difference between this file upload functionality and the one that was already available within the Advanced Data Analysis ChatGPT feature?
I don't think OpenAI killed RAG startups. The main asset of those companies is data, which they have anyway.
nice
They did great.. but what a tame audience in his presentation. When Jobs & Musk announce big stuff there is mad cheering. Maybe OpenAi needs some hired guns. or free drinks or something.
Interestingly the multi modal capability is not enabled for me yet.. and I have to choose dall-e/browsing manually from the list. I wonder when it will be available for everyone..
Temperature brings rng to the output, seeds make rng behave always the same.
Ok, so openAI will become a consulting firm for banks. Nice.
As a developer, I'd be very happy if any framework or programming language have its own gpt.
Wow, you're fast! Thx for the info
You bet!
The seed parameter sounds to me just like the seed in random number generators: with the same seed, a random number generator will always generate the same sequence of random numbers, so the output of a model is always the same. This is different from temperature: if you want fewer fluctuations in the generation you need a low temperature, but the "creativity" also decreases. With the seed you can have a model still be very "creative" yet reply with the same text to the same prompt every time.
We waited. We knew they would implement the stuff that was the hot thing yesterday. They were just looking at the data on how OpenAI is used, and are now giving developers the best value. We can now focus more on user experience and find the real solutions the user needs.
This technology is moving very fast. So exciting.
I said it once and I'll say it again. What a time to be alive!!!
The difference between seed and temperature is pretty fundamental. I like your videos a lot, but you scared me a bit when you said the two sounded like the same thing
GPT: Helping people not do work and get dumber faster.
“Log probs” at 3:20 - I *think* he meant “logit probabilities”, not “logs” in the conventional sense. So you can get the probability of a particular token appearing next. Gotta keep ya honest! Love your work, Matt!
Oh damn, I totally misheard him! Thanks for clarifying that.
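For readers wondering what log probabilities actually give you: the API reports ln(p) for each generated token, so `exp` recovers the probability, and summing logprobs gives the likelihood of a whole completion. Pure math, no API call:

```python
import math

def to_probability(logprob: float) -> float:
    # The API reports natural-log probabilities; exp() recovers p.
    return math.exp(logprob)

def sequence_probability(logprobs: list[float]) -> float:
    # Token probabilities multiply, so their logs add.
    return math.exp(sum(logprobs))

assert abs(to_probability(0.0) - 1.0) < 1e-12            # ln(1) = 0
assert abs(to_probability(math.log(0.5)) - 0.5) < 1e-12
```

Working in log space avoids numerical underflow when you multiply hundreds of tiny per-token probabilities, which is why APIs expose logprobs rather than raw probabilities.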
OpenAi: Can people stop using the Apple presentation speech pattern and just talk like a human being who is alive and interested in stuff please? haha
did you use an ai to skip the blank parts in the guys speech? or did you do it yourself?
will the context windows for the standard chatgpt 4(on the website, not the api one) also increase??
Everyone better hold on tight because this is gonna be a very shaky ride. AGI is gonna ultimately be very useful but in the short term things might get a little rough. I wonder where we'll be by next year. 😅
Adorable that you're such a positive person :)
What are the 30% of the population that can't get a job going to do? ✌✌
@@Macatho Well, AGI really is going to be a very useful tool, you can't deny it. It's not about "positive". This is a straight fact. And he also mentions, that we are going to go through rough times. Is it "positive"? Definitely not. And the most important part. The industrial revolution has caused many people to lose their jobs. Do you really think that we should destroy all machines we have and return to hand-made production? I don't think so... Well, in case you do, at least you are a coherent person...
Do work that cannot be automated by AI @@Macatho
@@mooonatyeah5308 Back to carpentry
They are not going to use our inputs in GPT Enterprise. That doesn't apply to you, free user and Plus user.
Matthew, please PLEASE research and explain SEED.
If this is like the MJ / SD seeds, it means we save ton$ on input prompts, including system and/or custom instructions, and we can FINALLY get perfectly consistent formatted (JSON) outputs.
This is one of the major announcements, alongside the other 20+ they made :p
Also, are these new features killers for AutoGen and memGPT? with assistants and retrieval!
Honestly, startups cannot rely on OpenAI; they destroyed a lot of business cases for startups with that release.
Hi! What differences are there between GPTs and Assistants? When is it adequate to use one or another?
I see that a lot of what has to do with development is lost due to automation (which is the goal), but isn't a lot of customization lost too? In what use cases is development still relevant?
thanks for the update. what about sound? can it listen too?
Brutal! The speed of development in the area and the (potential for) penetration in so many other areas - it's literally an explosion!
Even most professionals are in awe. The exponential growth is something we can comprehend by education but our brain looks at the world linearly. So every time we feel it, we would be surprised.
And I think this blind spot is the biggest danger humanity is facing and we won't see until it's too late
@@ArashArfaee , yeah - I definitely agree with the huge lagging of education behind this explosion. It was dramatically behind even before - now it is catastrophic.
However, human brain doesn't work linearly (no process in nature is linear) - it is either exponential, or - more often - logarithmic. And I don't see it as such a huge danger. AI is penetrating hugely in clerical areas. And they were over-bloated the last few decades anyway. Tasks that require real creativity are still quite safe. And creativity is what defines us, humans.
@@jonan.gueorguiev I didn't say our brain works linearly. I said we see the world and feel changes linearly.
Learned a lot 😍
thanks for amazing video 👍
When does GPT-4 Turbo come out? I don't have it.
🎯 Key Takeaways for quick navigation:
00:00 🚀 Introduction to OpenAI Dev Day Announcements
- Introduction to OpenAI Dev Day and the major announcements.
00:12 ⚡ GPT-4 Turbo: Faster and More Affordable
- GPT-4 Turbo overview, including increased speed, higher rate limits, and affordability.
- Support for up to 128,000 tokens of context.
01:10 📚 Longer Context and Improved Accuracy
- Discussion of the benefits of longer context in GPT-4 Turbo.
- Improved model accuracy over long contexts.
01:37 🎮 More Control and Reproducible Outputs
- Introduction of JSON mode for more developer control.
- Enhanced function calling capabilities and reproducible outputs.
03:03 🌍 Better World Knowledge and Retrieval
- Introduction of retrieval feature to access external knowledge.
- Updates on the model's knowledge, up to April 2023.
04:13 🎨 New Modalities and Whisper V3
- Integration of DALL-E 3, GPT-4 Turbo with vision, and Text-to-Speech into the API.
- Announcement of Whisper V3, an open-source speech recognition model.
05:22 💡 Fine-Tuning and Custom Models
- Expansion of fine-tuning options for GPT-4 and GPT-3.5.
- Introduction of the Custom Models program for tailored AI models.
06:35 ⏱️ Higher Rate Limits and Copyright Shield
- Doubling of tokens per minute for GPT-4 customers.
- Introduction of Copyright Shield to protect against copyright claims.
08:01 💲 Affordable Pricing
- Reduction in pricing for GPT-4 Turbo, making it more cost-effective.
- Pricing details for input and output tokens.
11:17 🤖 Introduction of GPT Agents
- Announcement of GPT Agents (GPTs) designed for specific use cases.
- Features and capabilities of GPT Agents.
15:57 📦 GPT Store and Revenue Sharing
- Introduction of the GPT Store for sharing and distributing GPT creations.
- Revenue sharing with creators of useful and popular GPTs.
Made with HARPA AI
This phenomenon of OpenAI simply incorporating whatever useful applications are built on their API, just thinking ahead, it’s hard to imagine what can be built that won’t simply be gobbled up. And that is up to and including basically everything I do at a computer. I tried integrating it into a terminal early on, but that was quickly taken up by LangChain, and then (effectively for my use case) ChatGPT with its Python interpreter. From that point I basically just put my pencil down and picked up the popcorn (and invested heavily in the leading tech companies developing these). Just watching, waiting for this to hit some sort of long-term roadblock that actually gives me something to work on… just so hard to imagine what that might be. Kinda afraid the roadblock isn’t coming and it’ll sail straight past me. Past everyone.
Also just want to add: I can’t believe where we’re at right now. Just taking a step back and looking at it.
The seed parameter is completely different to temperature. So you could have a high temperature which leads to more creative responses but with the seed parameter, you can effectively have it produce the same output each time i.e. you don't need to have a very low temperature (which may give less desirable responses) to have it be deterministic/reproducible. Basically, the same as the seed parameter in Midjourney.
I now see someone else mentioned this below so disregard my comment :)
No one should be sad. This brings us closer to AGI which brings us closer to longevity and abundance.
16:41 Teams of models... honestly I think they are already doing this under the hood, which is why they aren't exposing it as an option. This is why OpenAI is so much better than other models, IMHO. We are comparing one shots in open source models to multi-model responses from GPT, hidden behind their API. I have some evidence to support this claim as well, but I'm curious what everyone else thinks.
so this sounds like it's growing up, and becoming more useful !! legally prohibiting A.I. will only inhibit freedom!
As you said, once you can connect these GPT agents together, damn.... For now, AutoGen.
The dropdown to choose your model is still there.
Seed prompts sound like an AutoGen feature.