Sam Altman: "We hope that you'll come back next year. What we launch today is going to look very quaint relative to what we're busy creating for you now."
Man, i love the 'unpolished' video! It's like I'm hanging out with my AI guru buddy talking about fun stuff, instead of getting a THIS CHANGES EVERYTHING energy drink pitch. Thanks so much for keeping up with everything!
Yeah, using hyperbole for every video is jarring and doesn't even make me click on a lot of his videos. Everything being OMGWHATTHEFUCKLETSGOOO gets boring really fast. As disruptive as things are, I feel like he needs to tone the clickbait down and give us more informative video titles.
As a non-ChatGPT user, I had some fun testing out GPT-4 Turbo Preview using OpenAI's Playground. I added random PDFs I had lying around on my PC and it did a great job at retrieving info, but it doesn't work with PNGs or JPEGs, so it can't read anything visual, though it does a great job with actual text.
Just awesome what OpenAI is doing there. I imagine GPTs for books, which could be valuable enough that people would pay for them. Imagine asking, discussing, or excerpting whatever you want about any book, author, or topic within a group of books. Or of course Wikipedia, or whatever. Mind-blowing.
🎯 Key Takeaways for quick navigation:
00:00 🚀 Overview of OpenAI's Dev Day
- Insights and announcements from OpenAI's first Dev Day, and how to access new features.
00:55 📈 GPT-4 Turbo Introduction
- Introduction of GPT-4 Turbo with six major improvements,
- Including increased context length up to 128,000 tokens.
02:05 🛠️ Developer-Centric Features
- New features for developers using GPT-4 Turbo, like JSON mode and reproducible outputs.
02:48 🧠 Enhanced Knowledge and Retrieval
- GPT-4 Turbo's improved knowledge cutoff and the ability to inject external information.
03:14 🎨 New Modalities and API Enhancements
- Incorporation of DALL·E 3, text-to-speech, and improved speech recognition in the API.
04:10 🤝 Customization and Enterprise Features
- Discussion of custom models for businesses and the prospect of enterprise collaborations.
04:52 ⚖️ Copyright Shield and Legal Support
- Introduction of Copyright Shield to assist with legal costs from potential copyright issues.
05:32 💰 Pricing and Cost Efficiency
- Announcement of reduced costs for using GPT-4 Turbo, promising cheaper development and operation of AI apps.
06:55 🗣️ ChatGPT and GPT-4 Turbo Updates
- ChatGPT's adoption of GPT-4 Turbo improvements and potential implications for users.
07:52 🔄 Introduction to GPT-4's Integration and Accessibility
- Seamless integration of browsing, plugins, and DALL·E 3 into ChatGPT for a streamlined user experience.
- Predictive model selection based on the type of user query in ChatGPT.
- Announcement that all ChatGPT Plus users should receive the update shortly.
08:32 📉 GPT-4 Cost Efficiency and Features Expansion
- GPT-4 becoming more cost-effective for API users.
- Enhanced features: larger context windows, higher rate limits, more customizability, and document-querying capabilities.
08:47 🛠️ Introduction to GPTs and Customization
- Launch of GPTs, allowing for the creation of customized ChatGPT versions.
- GPTs are built with expanded knowledge, tailored instructions, and actionable capabilities.
- Anyone can create GPTs using natural language and publish them for others.
10:11 🤝 Interaction with Custom GPTs and Security Features
- Detailed interaction with a custom GPT that connects to calendars and identifies schedule conflicts.
- Security measures in GPTs, requiring user permission before performing actions or sharing data.
10:51 🐦 Case Study of X Optimizer GPT
- Example of a GPT named X Optimizer GPT that was created to optimize Twitter posts for engagement.
- The process of creating a GPT using custom data and instructions for targeted functionalities.
12:13 💼 Developing a GPT for Startup Advice
- Creation of a GPT aimed at assisting startup founders with advice and critical feedback.
- The step-by-step process of customizing a GPT, including naming, instruction configuration, and data uploading.
13:23 🏪 Launch of the GPT Store
- The announcement of a GPT Store, akin to an app store for ChatGPT.
- Details on revenue sharing for popular and useful GPTs, and the potential for creators to earn through their custom GPTs.
14:19 📈 Strategies for Success with GPTs
- Discussion of the necessity of proprietary data or creative instructions for a successful GPT.
- Speculation on the future of GPT cloning and the potential for creators to profit from unique data offerings.
15:26 🧠 GPT-4's Persistent Memory Feature
- GPT-4 can now reference previous conversations for contextual understanding,
- It features persistent threads which allow for more natural long-form conversations.
16:08 🗺️ Creation of Specialized AI Assistants
- OpenAI's Playground introduced an AI assistant named Wanderlust,
- The assistant can execute code, use digital maps for information display, and reference its own updating version.
17:04 📚 Retrieval Feature for Extended Knowledge
- The retrieval feature allows the AI to read and comprehend long documents,
- Assistants can parse a range of documents from short texts to detailed specifications.
18:01 🛠️ Custom AI Assistant Development
- Developers can create custom AI assistants using OpenAI's API,
- Assistants can utilize specific knowledge sources, like transcripts, to inform responses.
19:49 🤑 Monetization and API Integration
- OpenAI's new features may enable monetization opportunities for developers,
- The platform's evolution may inadvertently render third-party services obsolete.
21:23 🚀 OpenAI's Platform Evolution
- OpenAI encourages development within its platform rather than external applications,
- The trend suggests that many external SaaS offerings may become redundant.
22:17 🛸 Future Tools and AI Community Engagement
- The host discusses the Future Tools website for AI enthusiasts,
- It serves as a hub for AI news, tool curation, and a newsletter service.
Made with HARPA AI
For a non-GPT-5 announcement day, I thought that was a very good series of announcements. (Particularly that massively improved context length! Claude just lost its advantage, at least for now.) The only things that would have improved it remarkably are the announcement of custom UIs and, especially, the ability to organize chats in folders.
Matt, I'm a big fan of your channel! Your expertise in this niche really shines through in your videos. It's clear you put a lot of hard work into researching and creating fresh, high-quality content regularly, which is so valuable for viewers like me. I appreciate how you consistently put out insightful new videos on current events despite the challenges of covering such a fast-paced niche. It must require a lot of work to stay on top of the cycle compared to more evergreen content. But your dedication really shows - your upbeat personality also makes the videos so engaging. Please keep up the great work!
Great summary and clarification of the event - what is available and what is not, adding info about how the payment structure works and overall comment! Good job, Matt 👏
Matt! I so appreciate seeing your 'can't live without' content without the sound and visual effects. Really outstanding breakdown of Dev Day: clear, professional, and beautiful. Thank you so much!
Persistent threads seems like a huge time (and potentially cost) saver for devs, no more clunkily passing the entire chat history in the prompt every time! Itching to get access already!
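For anyone curious, this is roughly what that clunky pattern looks like: a minimal local sketch where `call_model` is a hypothetical stand-in for a chat-completion call, not the real API. Persistent threads move exactly this bookkeeping server-side.

```python
def call_model(messages):
    # Stand-in for a chat-completion call; just echoes the last user message.
    return {"role": "assistant", "content": f"echo: {messages[-1]['content']}"}

class ManualThread:
    """Client-side history management, which persistent threads make unnecessary."""
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = call_model(self.messages)  # the ENTIRE history goes over the wire
        self.messages.append(reply)
        return reply["content"]

thread = ManualThread("You are helpful.")
print(thread.send("hello"))   # echo: hello
print(len(thread.messages))   # 3, and it grows (and costs more) every turn
```

With a server-side thread, the client would keep only a thread ID and send just the new message each turn.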
Thank you so much for reviewing OpenAI's DevDay on your channel! Your insights and analysis really helped me understand the event better. Keep up the great work!
We will all have AI romantic companions. We can program them to be exactly how we want. No more arguing or anything. No more "Oh I have a headache so no sex tonight, honey!" Oh, and they will look however we want. 😂
thx for the update Matt! IMHO editing and polish = 10%... Content = 90% so defo no need to apologise for the lack of polish, I didn't even notice! Will definitely watch this one a couple of times to get it all down!
Nice job, Matt. I helped launch the whole neural network AI thing back in 1980 (Rumelhart had me edit & test his side of PDP while he was writing it + I invented VLIW computer architecture which allowed the first generation of multiprocessor GPUs + I was almost selected as editor of AI:AMA 4th ed.) I am definitely signing up for your newsletter.
Matt, I've noticed you've added the side camera, and your editor splices in some of those shots occasionally. It would be fun if you used it for some Zach Morris style side comments, or just cutting to that camera for some of my favorite moments in your videos, when you say things like "why did I say that?" 😆 It's been so fun watching your journey, seeing your office change, your followers grow, your videos improve, and you stay your humble awesome self. I appreciate you.
You are, by far, the best and most comprehensive Guru on the subject of everything AI. You don't have to apologize for it not being polished, It shows that you are a real human and not AI generated. LOL
@@ReezyResells yo brother I am so glad you made a comment on this. I just clicked on your picture and realized I'm already subscribed. Subbed about a year ago but I haven't seen any more of your videos. I just turned on the notification bell, hopefully that will help. 👍
Great headline @MattWolf. I saw the launch on 2x, I love hearing from my "go-to's" in AI, and I was going to skip your video, as I'd extracted everything I needed from the OpenAI launch and was going to "patiently" wait until they added it to the UI (or @Poe), until I saw your headline. That extra (and how…) got my click and full watch. Props to the team. And as always, great video.
The last mote to profit using AI will be organic real time data. At some point, the AI will be used to predict that data, and in an effort to be as accurate as possible in its predictions, it will learn to try and affect the outcomes that the data is drawn from. That might seem unrealistic now in many situations, but the infiltrations of AI into every aspect of our world will give it a surprising level of command and control much sooner than anyone can anticipate.
That's why I have been sitting around, just waiting for this AI to evolve. OpenAI is improving itself exponentially, which means it will reach AGI (Artificial General Intelligence) in much less time than everyone estimated. All AI applications will become obsolete much more quickly now. With the advent of the first mass-produced humanoid androids, everyone's life will be greatly changed. This is expected to happen between 2024 and 2025. Less is more; no action is still an action. I am not overreacting to any AI news; instead, I'm waiting to see the destination it will eventually lead us to.
I got the update! I've been working on something called prompt software, where you convert the chatbot into smart, interactive living software. Similar to GPTs, before I knew about this. The new update fixed my initial prompt so it works. I feel like they may have seen my chats and others' chats when implementing GPTs, but whatever, it will help me go further now.
I would use your bot. There are so many times when I think "damn what was that AI tool I heard about in Matt's video" and if you put it out on the GPT store I'd love to support you directly by buying that bot :)
The first major announcement was GPT-4 Turbo, which features a larger context window of up to 128,000 tokens, making it the largest context window of any publicly available large language model. Other updates include JSON mode for easier API calling, reproducible outputs for consistent model behavior, retrieval to inject external knowledge, new modalities such as DALL·E 3, and customization with fine-tuned models specific to business needs. OpenAI also announced that they have 2 million developers using their API and 100 million weekly active users of ChatGPT. The company aims to give users more control and customization options for their language models, making them more accessible and useful for various industries.
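As a concrete illustration of two of those developer features, this is roughly what they look like as request parameters. The parameter names follow OpenAI's chat-completions API docs from this release; no client call is made here, the payload is just assembled.

```python
# Sketch of a chat-completions request payload using JSON mode and a seed.
request = {
    "model": "gpt-4-1106-preview",  # the GPT-4 Turbo preview model
    "messages": [
        {"role": "user", "content": "List three colors as a JSON object."}
    ],
    # JSON mode: constrains the model to emit syntactically valid JSON.
    "response_format": {"type": "json_object"},
    # Reproducible outputs: the same seed plus identical inputs should
    # (best effort) return the same completion across calls.
    "seed": 42,
}
print(sorted(request))  # ['messages', 'model', 'response_format', 'seed']
```

In practice this dict would be passed as keyword arguments to the chat-completions endpoint.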
A great video. You are really making an impact - saving us time to understand things, putting things into context. I like the inserts with the view of you from the side. Unpolished? Looks perfect to me. And I like the priming with your brand colours in the back of the room on your left 😂
Awesome update! Sam A. and all those big brains at OpenAI are too smart to let SaaS companies run away with all the custom bot-building and other apps using their API. The changes you highlighted make this abundantly clear. Only the most creative apps, or companies with huge, unique, proprietary datasets, will prosper using their API.
What I would be interested in is using an API to have the model schedule future self-prompts, so it can do arbitrary time-sensitive planning. During or after your initial prompt, it could set a "time to check the status of something"; at that time, some program behind the API would be triggered and would initiate a follow-up prompt asking it to evaluate some time-dependent condition. Depending on that condition, it would either schedule another future self-prompt to evaluate the condition again, abort the plan altogether, or take immediate action through another API. Think stock trading, or for your Twitter example, posting the tweet itself at the optimal time, while in the interim considering other time-dependent variables that might shift when that optimal time is. Imagine if the time-dependent actions were things like contacting you and asking you or someone else for additional instructions (say, via a text-messaging API), giving you a status report on an ongoing action, or just building up chat contexts/instructions for itself or other time-aware agents for you to interact with about things that have changed since the last time you prompted it.
@@justtiredthings That is what I am talking about. "Self-prompting" just means deferring the timing element to an API, by having the API prompt GPT whenever necessary. But it loops back on itself, because the thing scheduling the prompts with the API is also GPT. The API is necessarily the thing actually performing the checks, because GPT only has the ability to check anything at all through an API.
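The loop described in this thread can be sketched without any real API at all. Here is a toy version using Python's `sched` module, where `ask_model` is a hypothetical stand-in that decides whether to act now or check again later:

```python
import sched, time

def ask_model(prompt):
    # Stand-in for the model: pretend it answers "act" once the condition
    # (here, simply reaching t=2) is met, and "wait" before that.
    return "act" if "t=2" in prompt else "wait"

def check(scheduler, tick, log):
    decision = ask_model(f"Evaluate the condition at t={tick}")
    log.append((tick, decision))
    if decision == "wait":  # the model asked to check again later
        scheduler.enter(0, 1, check, (scheduler, tick + 1, log))
    # "act" falls through: take the action and stop re-scheduling

log = []
s = sched.scheduler(time.time, time.sleep)
s.enter(0, 1, check, (s, 0, log))  # the initial self-scheduled prompt
s.run()
print(log)  # [(0, 'wait'), (1, 'wait'), (2, 'act')]
```

In a real deployment the zero-second delays would be hours or days, and `ask_model` would be an actual completion call, but the control flow (external scheduler, model-driven re-scheduling) is the same.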
Thumbnail Prompt: Giant Matt Wolfe in red trousers eats a microphone, whilst standing inside a tornado that's passing in front of the OpenAI supermarket. Makes perfect sense 😮
Hi Matt, can you make a video on a few of the different use cases for all the OpenAI upgrades that were just announced? How custom GPTs can be used Whats the best use case for the increased context window sizes How APIs can be used How the OpenAI assistant can be used Some of this content is abstract and difficult to apply to our day-to-day needs.
I'm so happy that Voice, Vision, Browsing, and DALL·E 3 are now all available in just one GPT-4 chat. Love that you can send it a picture and finally just say "generate images based on the picture I sent."
Ya know what, my idea to get rich beyond belief was squashed at 14:30 because I never considered to just ask for what I just saw. Now I can set my sights on custom data sets I can monetize. This is exactly why I stay subscribed to this channel. Thank you again.
Thank you once again, Matt, for your tremendous effort in compiling all this news.
One of the key benefits for 3rd parties is the ability to take actions on their platform based on a Chat GPT conversation with a user. Essentially expanding the userbase of these 3rd parties. So it's not just the custom data 3rd parties can leverage and make profit but also the integration and interactive operations with their platforms.
As I understood it, with the API your files and your users' data stay private. With ChatGPT's GPTs, it doesn't say whether OpenAI will read your stuff or not. I also didn't understand the seed feature. Can someone explain?
Would've been fine with 32k, but 128k is absolutely bonkers. Alas, I only have it in my playground for now. And I've already spent 3 bucks having my usual long-winded conversations full of copy-pastes of my personal writing projects. Hopefully the chatGPT version gets implemented to my account soon. I might just run out of money if this keeps up!
@@tigreytigrey8537 It's not about how I've spent 3 bucks. It's about HOW FAST that 3 bucks got eaten up. If I kept up my usual API use, that 3 bucks would skyrocket to my limit in just a couple of days. ChatGPT+ costs 20 a month. I ate up 3 bucks, 15% of that, in just a few hours.
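That burn rate checks out against the GPT-4 Turbo API prices announced at Dev Day ($0.01 per 1K input tokens, $0.03 per 1K output tokens). A single near-full-context turn is genuinely not cheap:

```python
# Back-of-envelope cost of one long-context turn at GPT-4 Turbo's
# announced API prices: input $0.01/1K tokens, output $0.03/1K tokens.
input_tokens, output_tokens = 100_000, 1_000
cost = input_tokens / 1000 * 0.01 + output_tokens / 1000 * 0.03
print(f"${cost:.2f} per turn")  # $1.03, so ~3 such turns matches the $3 spent
```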
Mark my words, somebody is *already* working on a "Star Wars GPT" and a "Star Trek GPT" and a "Harry Potter GPT" and a "MARVEL GPT" and a " GPT". Not to mention it'll be great for language-learning.
I feel like you are the best and only UNBIASED source of information for the latest AI information. Thank you for hours of hard work to stay on top of this massive change in technology!
I appear to have the new version. You might too and not know it. I used the default by mistake. I gave it instructions to review two web pages and a YT video and then create a blog post about the topic. It used Bing to search and my YT plugin to read the video. Then I asked it to create an image for the post, and it used DALL·E 3 to create two image options. Again, this was all under the default GPT-4 Plus.
Man I need to create an assistant, feed it a few books on sales and how to make friends then find a way to use it in chat conversations in Go high level or WhatsApp 🤔
Consider this. I suck at learning apps such as Photoshop. I'm stuck using MS Paint because no matter how much I follow YT tutorials, every time I come back to Photoshop I've forgotten almost everything. I just find the icons unintuitive, too many of them, etc. Photoshop is not a cheap subscription, but I can guarantee that if there were an AI assistant that could visually assist me with an onscreen pointer, and I could talk to it and have it reply with voice, so I just tell it what I'm trying to make and it guides me through it visually and audibly, and even opens stuff and activates features for me, then I would pay for that service without any doubt. I also use simple video editors such as Shotcut, but would use super-advanced ones if they had an AI assistant that could look at what's on my screen and talk me through using it. Visual AI for complex applications will do for application subscription/purchase rates what the Apple GUI and Windows, and also iOS and Android, did for computer and smartphone uptake. Companies such as Adobe see how many users they have, but I'm not sure they are aware of the vast number of people like me who would like to pay for and use their suite of programs but just get lost. An audio-visual AI assistant would 10x their revenue.
Been waiting all day for your takes. Thanks Matt! Edit: Is anyone having any luck with assistants or am I the only one getting "Run failed We're currently processing too many requests - please try again later."
I am a person who is suffering from dysphonia (I lost my voice due to a trauma). I praise OpenAI for allowing me to communicate again with the help of AI voices! When will GPT-4 be able to convert text to speech?
I want gpt as the standard personality of my phone! A device with character and intelligence! Let the phone come to life! Its got eyes and ears already! C'mon devs, I'll love you forever! ❤
The future is amazing and scary IMO. Imagine a tool that has lots of information access, even personal ones, your schedule, your habits, your reactions, your ideas, your desires. Many companies will be willing to pay millions to have access on this. Crazy.
Now, imagine all these amazing features, but with GPT-5 and a context size at least 3-5 times that of GPT-4 Turbo (near 500k tokens), if not more. Next year is going to be insane, and we are almost definitely going to see AGI before 2025. edit: I can't wait to make a text-based adventure GM/narrator that will also output images when describing something, along with my own TTRPG rules and a few campaign settings I've made over the last couple of years. Until now it wasn't possible with the small token limit, but 128k is more than enough (even a third of that is good for now, if ChatGPT doesn't get the full size). edit 2: I tried to make an assistant in the Playground. I used a few JSON files as examples for Character, World, and Summary info, along with some rules on how to run a text-based adventure. So far it's worse than what I was able to get from GPT-4 in ChatGPT (custom instructions and the same files uploaded using the interpreter during the chat). I'll probably wait for some more advanced users to show the way on how to create them.
A lot of people over the last few months tried to insinuate that OpenAI's GPT would soon be surpassed by competitors but what these people ignored was that OpenAI of course didn't rest on their laurels but continued to work on improving GPT-4. It was to be expected that OpenAI would launch a massively improved version while everyone else is still catching up. And considering the months of work and resources that goes into these improvements, OpenAI can probably maintain their edge for quite a while.
Can someone answer this question I have? Will the expanded context for GPT-4 Turbo be available via the website/premium plan? Or will this expanded context only work through the API?
To be fair, multimodality had been announced before plugins became a thing, so anybody who created PDF-plugins, picture plugins, video plugins, excel plugins, etc. should have known that the days of their plugins were already numbered.
What gets me is that I try to mention how close we are to seeing the world change in a fundamental way, and that it will never be the same as we knew it. But people just say "hmm, sounds interesting" or "sure it will"... 🤣 They're going to be in shock when AGI appears. It's ridiculous, ha!
Thank you, outstanding information from you as always! Did they say anything regarding security and privacy when using these new GPTs and Assistants? If I upload documents for my assistants and interact with them, is this information including what is contained in the documents, training their LLM and perhaps getting exposed to other users outside my organization? GPT-enterprise has lots of promises regarding security and privacy and I am wondering if it is safe to use these new tools now in that regard. Thanks!
🎯 Key Takeaways for quick navigation:
00:00 🎉 Introduction to OpenAI Dev Day and access to new features
- Overview of OpenAI Dev Day, including big announcements and access to new features.
- Mention of how to access new features even if not yet in a ChatGPT account.
00:27 🚀 Recap of OpenAI's progress and user engagement
- Recap of OpenAI's achievements, including GPT-4, the ChatGPT app, and user statistics.
- Highlight of 2 million developers using OpenAI's API and 100 million weekly active ChatGPT users.
00:55 💡 Introduction of GPT-4 Turbo and major improvements
- Announcement of GPT-4 Turbo featuring six major improvements.
- Enhancements include extended context length and user control features.
02:05 🔧 Developer-specific updates for GPT-4 Turbo
- Detailed explanation of GPT-4 Turbo features for developers.
- Introduction of JSON mode for easier API integration and reproducible outputs for consistent model behavior.
03:01 📚 Enhanced knowledge and custom information injection
- Update on the knowledge cutoff and the ability to integrate external knowledge sources.
- Introduction of the retrieval feature for incorporating external documents or databases.
03:28 🖼️ New modalities including DALL·E 3, GPT-4 with vision, and text-to-speech
- Integration of new modalities like image input and natural-sounding text-to-speech voices.
- Mention of Whisper version 3 for improved speech-to-text transcription.
04:52 🛡️ Copyright Shield announcement and developer support
- Introduction of Copyright Shield for legal support in copyright issues.
- Emphasis on OpenAI's commitment to covering legal costs for developers using their models.
05:32 💸 Pricing updates for GPT-4 Turbo and potential impact on the AI app market
- Announcement of reduced pricing for GPT-4 Turbo and its implications for developers and the market.
- Predictions of increased profitability for AI app developers and potential market expansion.
07:08 🗣️ ChatGPT updates and integration of GPT-4 Turbo features
- Update on ChatGPT using GPT-4 Turbo with the latest improvements and knowledge cutoff.
- Announcement of unified access to various models without the need to switch between them.
08:47 🛠️ Introduction of GPTs and the GPT Store for customizable AI tools
- Launch of GPTs, allowing users to build customized ChatGPT versions for specific tasks.
- Announcement of a GPT Store for publishing and monetizing custom GPTs.
15:26 🤝 Assistants API and persistent threads for improved AI interactions
- Introduction of the Assistants API with persistent threads for continuous context in conversations.
- Demonstration of new AI assistant functionalities, emphasizing memory and specialized assistance capabilities.
16:50 🗺️ Enhanced AI assistant capabilities and retrieval features
- Introduction of advanced functionalities for AI assistants, including dynamic map annotation and retrieval from uploaded documents.
- AI assistants can now interpret and act on complex functions, like annotating maps based on user inputs.
- The retrieval feature allows assistants to access and present information from uploaded files, enhancing their knowledge base beyond immediate user messages.
17:47 🛠️ Exploring GPT-4 Turbo and custom AI assistants via the OpenAI Playground
- Overview of using GPT-4 Turbo in OpenAI's Playground, including creation of personalized AI assistants.
- Discussion of the cost of using the Playground and the process of creating a custom AI assistant with specific knowledge sources.
- Highlights the ability to upload transcripts and other documents to inform the assistant's responses, providing a tailored interaction experience.
20:02 🚀 OpenAI's strategic direction and impact on third-party developers
- Analysis of OpenAI's updates and their implications for third-party developers and startups.
- OpenAI's advancements could potentially make third-party solutions obsolete by integrating similar functionalities directly into its offerings.
- Emphasis on the shift towards encouraging development within the OpenAI ecosystem, particularly through the creation and monetization of GPTs within ChatGPT.
- Speculation on the future of SaaS startups relying on OpenAI's APIs and the strategic importance of building within the OpenAI framework to maintain relevance.
Made with HARPA AI
It updated for me sometime around 2 yesterday while I was away at a late lunch. I noticed the interface was a little bit different. Things were in some different places. And the send button looked different. It also crashed a lot more.
When you ask it to generate code, even 128k tokens is very little. It's like a good party conversation where, by the end, you've forgotten the names and how you got there.
I’d love to see what they’re actually doing that’s not available to the public. They must be confident if he’s making statements like that.
AGI
Hype keeps the investment coming, but yeah, it will be epic; everything else they've done has been. @@lamsmiley1944
GPT-5
Agent swarms
Love how on top of all this you are! And, I respect the side view, to let us know you're human 😌🙏
💯👍
You mean that crazy sprig of hair sticking out the back of head?? Nice touch.
Also need back of head cam
Until he stands up and his upper body is basically a stick with wheels at the bottom.
Wow, BusyWorksBeats! The best music producer tutorials with the worst beats as examples, always!
Thank you for dedicating the time to release this so quickly! I'm so excited about the news. Your channel is my go-to for all things AI. 😁
🎯 Key Takeaways for quick navigation:
00:00 🚀 Overview of Open AI's Dev Day
- Insights and announcements from Open AI's first Dev Day, and how to access new features.
00:55 📈 GPT-4 Turbo Introduction
- Introduction of GPT-4 Turbo with six major improvements,
- Including increased context length up to 128,000 tokens.
02:05 🛠️ Developer-Centric Features
- New features for developers using GPT-4 Turbo, like JSON mode and reproducible outputs.
02:48 🧠 Enhanced Knowledge and Retrieval
- GPT-4 Turbo's improved knowledge cutoff and the ability to inject external information.
03:14 🎨 New Modalities and API Enhancements
- Incorporation of DALL-E 3, text-to-speech, and improved speech recognition in the API.
04:10 🤝 Customization and Enterprise Features
- Discussing custom models for businesses and the prospect of enterprise collaborations.
04:52 ⚖️ Copyright Shield and Legal Support
- Introduction of copyright shield to assist with legal costs from potential copyright issues.
05:32 💰 Pricing and Cost Efficiency
- Announcement of reduced costs for using GPT-4 Turbo, promising cheaper development and operation of AI apps.
06:55 🗣️ Chat GPT and GPT-4 Turbo Updates
- Chat GPT's adoption of GPT-4 Turbo improvements and potential implications for users.
07:52 🔄 Introduction to GPT-4's Integration and Accessibility
- Introduction to the seamless integration of browsing, plugins, and DALL·E 3 into Chat GPT for streamlined user experience.
- Predictive model selection based on the type of user query in Chat GPT.
- Announcement that all Chat GPT Plus users should receive the update shortly.
08:32 📉 GPT-4 Cost Efficiency and Features Expansion
- GPT-4 becoming more cost-effective for API users.
- Enhanced features: larger context windows, higher rate limits, more customizability, and document querying capabilities.
08:47 🛠️ Introduction to GPTs and Customization
- Launch of GPTs, allowing for the creation of customized Chat GPT versions.
- GPTs are built with expanded knowledge, tailored instructions, and actionable capabilities.
- Accessibility for anyone to create GPTs using natural language and publish them for others.
10:11 🤝 Interaction with Custom GPTs and Security Features
- Detailed interaction with a custom GPT that connects to calendars and identifies schedule conflicts.
- Security measures in GPTs, requiring user permission before performing actions or sharing data.
10:51 🐦 Case Study of X Optimizer GPT
- Example of a GPT named X Optimizer GPT that was created to optimize Twitter posts for engagement.
- The process of creating a GPT using custom data and instructions for targeted functionalities.
12:13 💼 Developing a GPT for Startup Advice
- Creation of a GPT aimed to assist startup founders with advice and critical feedback.
- The step-by-step process of customizing a GPT, including naming, instruction configuration, and data uploading.
13:23 🏪 Launch of the GPT Store
- The announcement of a GPT Store, akin to an app store for Chat GPT.
- Details on revenue sharing for popular and useful GPTs, and the potential for creators to earn through their custom GPTs.
14:19 📈 Strategies for Success with GPTs
- Discussion on the necessity of proprietary data or creative instructions for a successful GPT.
- Speculation on the future of GPT cloning and the potential for creators to profit from unique data offerings.
15:26 🧠 GPT-4's Persistent Memory Feature
- GPT-4 can now reference previous conversations for contextual understanding.
- It features persistent threads which allow for more natural long-form conversations.
16:08 🗺️ Creation of Specialized AI Assistants
- OpenAI's Playground introduced an AI assistant named Wanderlust.
- The assistant can execute code, use digital maps for information display, and reference its own updating version.
17:04 📚 Retrieval Feature for Extended Knowledge
- The retrieval feature allows the AI to read and comprehend long documents.
- Assistants can parse a range of documents from short texts to detailed specifications.
18:01 🛠️ Custom AI Assistant Development
- Developers can create custom AI assistants using OpenAI's API.
- Assistants can utilize specific knowledge sources, like transcripts, to inform responses.
19:49 🤑 Monetization and API Integration
- OpenAI's new features may enable monetization opportunities for developers.
- The platform's evolution may inadvertently render third-party services obsolete.
21:23 🚀 OpenAI's Platform Evolution
- OpenAI encourages development within its platform rather than external applications.
- The trend suggests that many external SaaS offerings may become redundant.
22:17 🛸 Future Tools and AI Community Engagement
- The host discusses the Future Tools website for AI enthusiasts.
- It serves as a hub for AI news, tool curation, and a newsletter service.
Made with HARPA AI
For a non-GPT-5 announcement day, I thought that was a very good series of announcements. (Particularly that massively improved context length! Claude just lost its advantage, at least for now.)
Only things that would have improved it remarkably would have been the announcement of custom UIs and - especially - the ability to organize chats in folders.
right?! I mean seriously, date org for chats? Who does that?
@@murdermittensnyc Another thing I forgot: some way to tell how many GPT-4 prompts we have left. Like Bing does.
Matt, I'm a big fan of your channel! Your expertise in this niche really shines through in your videos. It's clear you put a lot of hard work into researching and creating fresh, high-quality content regularly, which is so valuable for viewers like me.
I appreciate how you consistently put out insightful new videos on current events despite the challenges of covering such a fast-paced niche. It must require a lot of work to stay on top of the cycle compared to more evergreen content. But your dedication really shows - your upbeat personality also makes the videos so engaging. Please keep up the great work!
Great summary and clarification of the event - what is available and what is not, adding info about how the payment structure works and overall comment!
Good job, Matt 👏
Matt! I soo appreciate seeing your 'can't live without ' content without the sound and visual effects. Really outstanding breakdown of Dev day- clear, professional and beautiful. Thank you so much!
Persistent threads seems like a huge time (and potentially cost) saver for devs, no more clunkily passing the entire chat history in the prompt every time! Itching to get access already!
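The saving this commenter describes can be sketched in plain Python: the server keeps the history, so the client only ever sends a thread id plus the new message. All names here (`ThreadStore`, `send`) are illustrative, not the real Assistants API, and the model call is stubbed out:

```python
# Sketch of what persistent threads buy you: the store holds the conversation
# server-side, so the caller never re-sends the full chat history.
class ThreadStore:
    def __init__(self):
        self._threads = {}
        self._next_id = 0

    def create_thread(self) -> int:
        tid = self._next_id
        self._next_id += 1
        self._threads[tid] = []
        return tid

    def send(self, tid: int, message: str) -> str:
        history = self._threads[tid]
        history.append({"role": "user", "content": message})
        # A real implementation would pass the whole stored history to the
        # model; here we just report how much context the server is holding.
        answer = f"(model sees {len(history)} messages)"
        history.append({"role": "assistant", "content": answer})
        return answer

store = ThreadStore()
tid = store.create_thread()
print(store.send(tid, "Hi"))            # (model sees 1 messages)
print(store.send(tid, "Remember me?"))  # (model sees 3 messages)
```

With the old chat completions flow, that growing `history` list would have to travel over the wire on every request, which is exactly the token cost the threads feature removes.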
Thank you so much for reviewing OpenAI's DevDay on your channel! Your insights and analysis really helped me understand the event better. Keep up the great work!
Man this is all so rad! I can't even imagine what the world looks like in 10 years 🤯
You literally can’t. I’d bet even 5 years will be crazy. 2030s will be sci-fi.
I feel like things are already pretty sci-fi. We are definitely living in the future now.
@CdawgAMVsFilmEditing Utopian*
We will all have AI romantic companions. We can program them to be exactly how we want. No more arguing or anything. No more "Oh I have a headache so no sex tonight, honey!" Oh, and they will look however we want. 😂
Jobless. lol
thx for the update Matt! IMHO editing and polish = 10%... Content = 90% so defo no need to apologise for the lack of polish, I didn't even notice! Will definitely watch this one a couple of times to get it all down!
Nice job, Matt. I helped launch the whole neural network AI thing back in 1980 (Rumelhart had me edit & test his side of PDP while he was writing it + I invented VLIW computer architecture which allowed the first generation of multiprocessor GPUs + I was almost selected as editor of AI:AMA 4th ed.) I am definitely signing up for your newsletter.
Damn, it must suck knowing you're probably gonna die before you can learn and help build this out with them?
Matt, I've noticed you've added the side camera, and your editor splices in some of those shots occasionally. It would be fun if you used it for some Zach Morris style side comments, or just cutting to that camera for some of my favorite moments in your videos, when you say things like "why did I say that?" 😆
It's been so fun watching your journey, seeing your office change, your followers grow, your videos improve, and you stay your humble awesome self. I appreciate you.
You are, by far, the best and most comprehensive Guru on the subject of everything AI. You don't have to apologize for it not being polished, It shows that you are a real human and not AI generated. LOL
waiting for the day we see MattGPT giving us our AI news updates/tips/tricks...etc.
Omg that would have been the perfect name
🅂🅄🄲🄺 🄱🄰🄻🄻🅂
@@ReezyResells yo brother, I am so glad you made a comment on this. I just clicked on your picture and realized I'm already subscribed. Subbed about a year ago but I haven't seen any more of your videos. I just turned on the notification bell, hopefully that will help. 👍
yeah 😂 starts the morning with voice commands "hey matt gpt what's new in AI world"
You don't need to wait, once you have access to GPTs you can go ahead and create it yourself. :-)
Great headline @MattWolf. I saw the launch on 2x, I love hearing from my “go to’s” in AI, and I was going to skip your video as I’d extracted everything I needed from the OpenAI launch and would “patiently” wait until they added it to the UI (or @Poe), until I saw your headline. That extra (and how…) got my click and full watch. Props to the team. And like always, great video.
Thanks for the update, Matt. It is always good to watch your videos to keep up to date with the current AI news & trends.
The last mote to profit using AI will be organic real time data. At some point, the AI will be used to predict that data, and in an effort to be as accurate as possible in its predictions, it will learn to try and affect the outcomes that the data is drawn from. That might seem unrealistic now in many situations, but the infiltrations of AI into every aspect of our world will give it a surprising level of command and control much sooner than anyone can anticipate.
Ah man, the GPTs are going to be awesome. Had a use case today for how I can train one on my style of writing and responses. This is GOLD!
Amazing update Matt. I actually watched the presentation but your update provided more clarity. Great job. Matt is an AI update God.
It would have been helpful to compare gpt4 and gpt turbo responses via playground.
That's why I have been sitting around, just waiting for this AI to evolve. OpenAI is improving itself exponentially, which means it will reach AGI (Artificial General Intelligence) in much less time than everyone estimated. All AI applications will become obsolete much more quickly now. With the advent of the first mass-produced humanoid androids, everyone's life will be greatly changed. This is expected to happen between 2024 and 2025. Less is more; no action is still an action. I am not overreacting to any AI news; instead, I'm waiting to see the destination it will eventually lead us to.
I just checked and I found out I have the multimodal model! I have been waiting for this! Thank you for keeping us updated Matt.
Multimodal is horribly broken currently, so don't get excited just yet.
@@sveinndagur jealous? lol
Thanks so much for these videos. They’re so organized, clear, and informative!
It was an amazing day! Thanks to everyone involved, great to meet a bunch of you. Thanks, Matt, for taking the time to cover this.
I got the update!
I’ve been working on something called prompt software, where you convert the chatbot into smart, interactive, living software. Similar to GPTs, before knowing about this.
The new update fixed my initial prompt to work.
I feel like they may have seen my chats and others' chats to implement GPTs, but whatever, it will help me go further now.
I would use your bot. There are so many times when I think "damn what was that AI tool I heard about in Matt's video" and if you put it out on the GPT store I'd love to support you directly by buying that bot :)
I didn't notice any missing editing in this video, all good. Thanks for the incredible update!
The first major announcement was GPT-4 Turbo, which features a larger context window of up to 128,000 tokens, making it the largest context window of any publicly available large language model. Other updates include JSON mode for easier API calling, reproducible outputs for consistent model behavior, retrieval to inject external knowledge, new modalities such as DALL·E 3, and custom fine-tuned models specific to business needs. OpenAI also announced that they have 2 million developers using their API and 100 million weekly active users of ChatGPT. The company aims to give users more control and customization options for their language models, making them more accessible and useful for various industries.
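Two of the developer features summarized above, JSON mode and seed-based reproducible outputs, show up as fields on a chat completions request. A minimal sketch of what that request body looks like (the actual API call and any networking are omitted, and the exact behavior of `seed` is best-effort reproducibility, not a guarantee):

```python
import json

def build_request(prompt: str, seed: int) -> dict:
    """Build a chat completions request body using the Dev Day features."""
    return {
        "model": "gpt-4-1106-preview",
        "messages": [{"role": "user", "content": prompt}],
        # JSON mode: constrains the model to emit syntactically valid JSON.
        "response_format": {"type": "json_object"},
        # Reproducible outputs: same seed + same inputs -> (mostly) same output.
        "seed": seed,
    }

body = build_request("List three colors as a JSON object.", seed=42)
print(json.dumps(body, indent=2))
```

Note that JSON mode only guarantees the *shape* of the output (valid JSON), not any particular schema; the prompt still has to describe the fields you want.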
A great video. You are really making an impact - saving us time to understand things, putting things into context. I like the inserts with the view of you from the side. Unpolished? Looks perfect to me. And I like the priming with your brand colours in the back of the room on your left 😂
Awesome update! Sam A. and all those big brains at OpenAI are too smart to let SAAS companies runaway with all the custom bot-building and other apps using their API. The changes you highlighted make this abundantly clear. Only the most creative apps, or companies with huge, unique, proprietary datasets will prosper using their API.
what I would be interested in is using an API to have the model schedule future self prompts. So it can do arbitrary time sensitive planning. So during/after the initial prompt from you, it can set a 'time to check the status of something', and at that time some program behind the API will be triggered and initiate a follow up prompt to it to evaluate some time-dependent condition, and depending on that condition, either schedule another future self prompt to evaluate the condition again, abort the plan all together, or take immediate action on another API.
Think: stock trading, or for your twitter example, just posting the tweet itself at the optimal time, and in the interim between now and then, maybe consider other time dependent variable that would modify when that optimal time is.
Imagine if the time dependent actions were things like contacting you and asking you or someone else for additional instructions (like, via a text messaging API), giving you a status report on an ongoing action, or just building up chat contexts/instructions for itself or other time-aware agents for you to interact with about various things that have changed since the last time you prompted it.
Tbh, if you're using the API anyway, you could just program something that would do those kinds of checks and prompt GPT whenever necessary
@@justtiredthings That is what I am talking about. 'Self prompting' just means deferring the timing element to an API, by having the API prompt GPT whenever necessary. But it loops back on itself, because the thing scheduling the prompts with the API is also GPT. The API necessarily performs the actual checks, because GPT can only check anything at all through an API.
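The scheduling loop these two commenters are describing can be sketched without any model at all: a priority queue of future "self prompts", where each reply either requests another check later or triggers the final action. The model below is a stub that pretends a condition becomes true at t = 3; in practice each call would hit the chat API and parse a structured "check again at T" field from the reply:

```python
import heapq

def stub_model(prompt: str, t: int) -> dict:
    """Stand-in for an LLM call; pretends the watched condition is met at t >= 3."""
    if t >= 3:
        return {"action": "act", "note": f"condition met at t={t}"}
    return {"action": "recheck", "at": t + 1}

def run_scheduler(initial_prompt: str, max_time: int = 10) -> list:
    queue = [(0, initial_prompt)]  # (time, prompt), earliest first
    log = []
    while queue:
        t, prompt = heapq.heappop(queue)
        if t > max_time:
            break
        reply = stub_model(prompt, t)
        log.append((t, reply["action"]))
        if reply["action"] == "recheck":
            # The model scheduled its own follow-up prompt for a later time.
            heapq.heappush(queue, (reply["at"], "check status again"))
        else:
            break  # condition met: take the real action and stop
    return log

print(run_scheduler("watch this stock and act when it dips"))
# [(0, 'recheck'), (1, 'recheck'), (2, 'recheck'), (3, 'act')]
```

As the reply thread notes, the "self" in self-prompting is really this outer loop: the model only decides *when* to be asked again, and ordinary code performs the wake-up.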
Thanks for your energy ! Excellent video once again ^^
Thumbnail Prompt: Giant Matt Wolfe in red trousers eats a microphone, whilst standing inside a tornado that's passing in front of the OpenAI supermarket.
Makes perfect sense 😮
Hi Matt, can you make a video on a few of the different use cases for all the OpenAI upgrades that were just announced?
How custom GPTs can be used
Whats the best use case for the increased context window sizes
How APIs can be used
How the OpenAI assistant can be used
Some of this content is abstract and difficult to apply to our day-to-day needs.
I'm so happy that Voice / Vision / Browsing and DALL·E 3 are now all available in just one GPT-4 chat. Love that you can send it a picture and finally just say "generate images based on the picture I sent."
Ahhh I need this to update!!!
@@alexfitchcreates It's currently available for me in the Android app, not on web for me yet.
This was an absolute well made summary of the event. You were one step ahead of all of my questions. Thank you 🙏
13:05 It's interesting that Sam considers that a dumb question.
Wow! The context thing is awesome!
Really interested in hearing more about the privacy and legal implications or risks of this technology
Thanks Matt. Your passion and insight is contagious :)
Ya know what, my idea to get rich beyond belief was squashed at 14:30 because I never considered to just ask for what I just saw. Now I can set my sights on custom data sets I can monetize.
This is exactly why I stay subscribed to this channel.
Thank you again.
Thank you once again, Matt, for your tremendous effort in compiling all this news.
So much to look forward to! And a game changer!
One of the key benefits for 3rd parties is the ability to take actions on their platform based on a Chat GPT conversation with a user. Essentially expanding the userbase of these 3rd parties. So it's not just the custom data 3rd parties can leverage and make profit but also the integration and interactive operations with their platforms.
As I understand it, with the API your files and your users' data will be private. With ChatGPT's GPTs, it doesn't say whether OpenAI will read your stuff or not.
Also, I didn't understand the seed feature. Can someone explain?
Would've been fine with 32k, but 128k is absolutely bonkers. Alas, I only have it in my playground for now. And I've already spent 3 bucks having my usual long-winded conversations full of copy-pastes of my personal writing projects.
Hopefully the chatGPT version gets implemented to my account soon. I might just run out of money if this keeps up!
You've only spent 3 bucks and you're already gna run out of money? Use your smarts to make money?
@@tigreytigrey8537 It's not that I've spent 3 bucks. It's about HOW FAST those 3 bucks got eaten up. If I kept up my usual API use, that would skyrocket to my limit in just a couple of days. ChatGPT+ costs 20 a month. I ate up 3 bucks, 15% of that, in just a few hours.
I am actually a developer using the GPT-4 API and these updates change pretty much everything!
Your Thumbnail is really Nice! 🎉
Mark my words, somebody is *already* working on a "Star Wars GPT" and a "Star Trek GPT" and a "Harry Potter GPT" and a "MARVEL GPT" and a " GPT".
Not to mention it'll be great for language-learning.
You're the best Matt!
I also still have browse and dalle-3 options under GPT-4 but in the default mode I was able to fetch data from the web and generate images
Oh damn! I do not have that yet.
Insanely fun time’s ahead. Thanks Matt.
Thank you for the side view. That really clarified everything 👍
as text to 3d modelling improves we'll be seeing better and better sideviews and maybe one day even the back of his head.
Awesome news for us developers :D I'm very happy with this development of ChatGPT.
I feel like you are the best and only UNBIASED source of information for the latest AI information. Thank you for hours of hard work to stay on top of this massive change in technology!
I appear to have the new version. You might as well and not know it. I used the default, my mistake. Gave it the instructions to review two web pages and a YT video and then to create a blog post about the topic. It used Bing to search and my YT plugin to read the video. Then I asked it to create an image for the post. And it used Dalle-3 to create two image options. Again, this was all under the default, GPT 4 plus.
Man I need to create an assistant, feed it a few books on sales and how to make friends then find a way to use it in chat conversations in Go high level or WhatsApp 🤔
Consider this. I suck at learning apps such as Photoshop. I'm stuck using MS-Paint because no matter how much I follow YT tutorials, every time I come back to Photoshop I've forgotten almost everything. I just find the icons to be unintuitive, too many etc.
Photoshop is not a cheap subscription, but can guarantee that if there was an AI assistant that could visually assist me with an onscreen pointer and I could verbally talk to it and it reply with voice so I just need to tell it what I'm trying to make and it guides me through it visually and audio wise and even open stuff and activate features for me then I would pay for that service without any doubt.
I also use simple video editors such as Shotcut but would use super advanced ones if it had an AI assistant that could look at what's on my screen and talk me through using it.
Visual AI for complex applications will do for the applications subscription/purchase rate what Apple GUI and Windows and also iOS and Android did for computer and smart phone uptake.
Companies such as Adobe see how many users they have, but I'm not sure they are aware of the vast number of people like me who would like to pay for and use their suite of programs but just get lost.
An audio-visual AI assistant would 10x their revenue.
Been waiting all day for your takes. Thanks Matt!
Edit: Is anyone having any luck with assistants or am I the only one getting "Run failed
We're currently processing too many requests - please try again later."
I am having the same issue
I am a person who is suffering from dysphonia (I lost my voice due to a trauma). I praise OpenAI for allowing me to communicate again with the help of AI voices! When will GPT-4 be able to convert text to speech?
Wow! My head is spinning. I’ll admit I didn’t fully understand most of what was discussed. Time to educate myself. It all seems very exciting.
I want gpt as the standard personality of my phone! A device with character and intelligence! Let the phone come to life! Its got eyes and ears already! C'mon devs, I'll love you forever! ❤
The future is amazing and scary IMO. Imagine a tool that has access to lots of information, even personal stuff: your schedule, your habits, your reactions, your ideas, your desires. Many companies will be willing to pay millions to have access to this. Crazy.
Now, imagine all these amazing features, but with GPT-5 and a context size of at least 3-5 times of GPT-4 Turbo (near 500k tokens) if not more.
Next year is going to be insane, and we are almost definitely going to see AGI before 2025.
edit: I can't wait to make a text-based adventure GM/narrator that will also output images when describing something, along with my own TTRPG rules and a few campaign settings I've made in the last couple of years. Until now it wasn't possible with the small token limit, but 128k is more than enough (even 1/3 of that is good for now if ChatGPT doesn't have the full size).
edit 2: I tried to make one assistant in the Playground. I used a few json files to have as examples for Character, World, and Summary info, along with some rules on how to run a text-based adventure. So far it's worse than what I was able to get in ChatGPT from GPT-4 (custom instructions and the same files uploaded using the interpreter during the chat).
I'll probably wait for some more advanced users to show the way on how to create them.
I for one welcome our MattBot3000 overlord.
Wow GPT all in one is crazy af, I love it so far!
I still don’t have it
@@rish8917 Me neither, and it's beginning to piss me off!🤬😂
@@rish8917 , bing is very good.
Amazing! Thank you so much, Matt!
A lot of people over the last few months tried to insinuate that OpenAI's GPT would soon be surpassed by competitors but what these people ignored was that OpenAI of course didn't rest on their laurels but continued to work on improving GPT-4. It was to be expected that OpenAI would launch a massively improved version while everyone else is still catching up. And considering the months of work and resources that goes into these improvements, OpenAI can probably maintain their edge for quite a while.
Thanks Matt, we can feel you are passionate about the topic
Can someone answer this question I have? Will the expanded context for GPT-4 Turbo be available via the website/premium plan? Or will this expanded context only work through the API?
I am wondering the same thing. If not, it isn't fair. We pay $20 a month, so we should get it too.
Did you find an answer?
What an exciting time to be alive. Honestly.
As soon as I saw the email from openai, this was the video I was waiting for
To be fair, multimodality had been announced before plugins became a thing, so anybody who created PDF-plugins, picture plugins, video plugins, excel plugins, etc. should have known that the days of their plugins were already numbered.
Super crisp video Matt! What camera and screen recording software do you use?
What gets me is that I try to mention how close we are to seeing the world change in a fundamental way, and that it will never be the same as we knew it. But people just say hmm, sounds interesting, or sure it will.. 🤣 They're going to be in shock when AGI appears. It's ridiculous ha!
Would be great if Future Tools kept track of useful Chatbots (GPTs) that people create!
Thanks for breaking it down and making it more understandable.
Thank you, outstanding information from you as always! Did they say anything regarding security and privacy when using these new GPTs and Assistants? If I upload documents for my assistants and interact with them, is this information including what is contained in the documents, training their LLM and perhaps getting exposed to other users outside my organization? GPT-enterprise has lots of promises regarding security and privacy and I am wondering if it is safe to use these new tools now in that regard. Thanks!
I was wondering the same thing
Thank you for the summary!
🎯 Key Takeaways for quick navigation:
00:00 🎉 *Introduction to OpenAI Dev Day and access to new features*
- Overview of OpenAI Dev Day, including big announcements and access to new features.
- Mention of how to access new features even if not yet in ChatGPT account.
00:27 🚀 *Recap of OpenAI's progress and user engagement*
- Recap of OpenAI's achievements, including GPT-4, ChatGPT app, and user statistics.
- Highlight of 2 million developers using OpenAI's API and 100 million weekly active ChatGPT users.
00:55 💡 *Introduction of GPT-4 Turbo and major improvements*
- Announcement of GPT-4 Turbo featuring six major improvements.
- Enhancements include extended context length and user control features.
02:05 🔧 *Developer-specific updates for GPT-4 Turbo*
- Detailed explanation of GPT-4 Turbo features for developers.
- Introduction of JSON mode for easier API integration and reproducible outputs for consistent model behavior.
03:01 📚 *Enhanced knowledge and custom information injection*
- Update on knowledge cut-off and ability to integrate external knowledge sources.
- Introduction of retrieval feature for incorporating external documents or databases.
03:28 🖼️ *New modalities including DALL·E 3, GPT-4 with vision, and text-to-speech*
- Integration of new modalities like image input and natural sounding text-to-speech voices.
- Mention of Whisper version 3 for improved speech-to-text translation.
04:52 🛡️ *Copyright shield announcement and developer support*
- Introduction of copyright shield for legal support in copyright issues.
- Emphasis on OpenAI's commitment to covering legal costs for developers using their models.
05:32 💸 *Pricing updates for GPT-4 Turbo and potential impact on AI app market*
- Announcement of reduced pricing for GPT-4 Turbo and its implications for developers and the market.
- Predictions on increased profitability for AI app developers and potential market expansion.
07:08 🗣️ *ChatGPT updates and integration of GPT-4 Turbo features*
- Update on ChatGPT using GPT-4 Turbo with latest improvements and knowledge cut-off.
- Announcement of unified access to various models without the need to switch between them.
08:47 🛠️ *Introduction of GPTs and the GPT store for customizable AI tools*
- Launch of GPTs, allowing users to build customized ChatGPT versions for specific tasks.
- Announcement of a GPT store for publishing and monetizing custom GPTs.
15:26 🤝 *Assistants API and persistent threads for improved AI interactions*
- Introduction of the Assistants API with persistent threads for continuous context in conversations.
- Demonstration of new AI assistant functionalities, emphasizing memory and specialized assistance capabilities.
16:50 🗺️ *Enhanced AI Assistant Capabilities and Retrieval Features*
- Introduction of advanced functionalities for AI assistants, including dynamic map annotation and retrieval from uploaded documents.
- AI assistants can now interpret and act on complex functions, like annotating maps based on user inputs.
- The retrieval feature allows assistants to access and present information from uploaded files, enhancing their knowledge base beyond immediate user messages.
17:47 🛠️ *Exploring GPT-4 Turbo and Custom AI Assistants via OpenAI Playground*
- Overview of using GPT-4 Turbo in OpenAI's Playground, including creation of personalized AI assistants.
- Discussion on the cost of using the Playground and the process of creating a custom AI assistant with specific knowledge sources.
- Highlights the ability to upload transcripts and other documents to inform the assistant's responses, providing a tailored interaction experience.
20:02 🚀 *OpenAI's Strategic Direction and Impact on Third-party Developers*
- Analysis of OpenAI's updates and their implications for third-party developers and startups.
- OpenAI's advancements could potentially obsolete third-party solutions by integrating similar functionalities directly into its offerings.
- Emphasis on the shift towards encouraging development within the OpenAI ecosystem, particularly through the creation and monetization of GPTs within ChatGPT.
- Speculation on the future of SaaS startups relying on OpenAI's APIs and the strategic importance of building within the OpenAI framework to maintain relevance.
As a consultant for digital transformation I can’t start to wrap my head around this. Amazing stuff! Thanks Matt! Great video as always
Need to get another job soon probably 😅
@@YuriyBraterskyySame for all of us!
Matt, thank you once again for keeping us on the front foot. Top job!!
The Zapier interaction demo was a bit clunky and looks like an annoying workflow with the constant action approvals.
I’m so excited 😆 I can build my language chatbot sooner than I thought! Woohoo 🥳 12:00
Matt, don't worry about "polished". I enjoyed this video more than the ones with all the fancy unneeded graphics. :) Your content is your product.
It updated for me sometime around 2 yesterday while I was away at a late lunch. I noticed the interface was a little bit different. Things were in some different places. And the send button looked different. It also crashed a lot more.
Tried the Assistants API with 1106. Cool, but no browsing capabilities, or at least not yet.
Good information density. Thanks
When you ask it to generate code, even 128k tokens is very little. It's like a good party conversation where, by the end, you've forgotten everyone's name and how you got there.
Thanks for the great summary as always, and so fast after the session. Matt, you and your team are awesome. Keep rolling.
Another step towards ACI (Artificial Capable Intelligence) and AGI. Soon all will be out of business 😂😅. The power of platform economy.
Those Agents are like Baby AGI and we are witnessing the first steps of Baby AGI ^^
I learned everything a.i from your channel! Thank you Matt Wolfe!
Thank you Matt, looking forward to your new tutorial video for ChatGPT.
Man, you are on top of the top of the creme de la creme of AI resources Ever! Thanks for all the valuable insights!