@01:08 First time
@06:27 Second time
@11:40 Third time
@13:44 Fourth time
@15:24 Fifth time
@15:49 Sixth time
Thoughts?
Sam feeding him lines through a hidden earpiece
@@phillaysheo8 But Sam or GPT-5?
@@reza2kn If it were GPT-5, I would have expected fewer BS answers from him. Sam, on the other hand, is mostly BS. So I went with the latter.
@@phillaysheo8 can't argue with that!
feels like signs of deception
15:45 What a dishonest person evading direct simple question like that. He thinks we are all stupid. I like how she called him out on it. This company can't be trusted.
In what reality was he going to give the answer you were looking for? You wanted him to straight up incriminate the company during an interview?
@@Baxtyr He could say he refuses to answer. Or tell the truth. And if the truth incriminates them, it kinda proves my point.
@@tomas.bednar From what I've seen, none of the companies training AI models like to answer questions or go into specifics about how they've trained their models or what data they've used. OpenAI is not the only one doing that.
It's clear that all of them use all the data they can get.
Personally, I don't care about it so much. As long as AIs don't straight up copy original material 1 to 1, who cares.
According to ChatGPT, when I asked it to evaluate the whole discussion (hint: ChatGPT kinda agrees on the corpo b.s.):
Corporate Vagueness and Jargon: There is considerable use of generalized, optimistic language about AI's future capabilities and OpenAI's mission, which can be seen as somewhat vague. Terms like "transformative," "mission-oriented company," and "iterative approach" are typical of corporate discourse that aims to be inspiring but lacks specificity. This can be seen as necessary from a marketing perspective but may feel like "bullshitting" to someone looking for concrete details.
Discussion on Ethical Concerns and AGI: Brad addresses the ambitious goal of developing AGI (Artificial General Intelligence) and acknowledges public concerns about such advanced AI. His response here is a mix of acknowledging the complexity and potential dangers while reaffirming the company's commitment to safety and ethical considerations. This is somewhat reassuring but still leaves room for skepticism about the practical handling of such issues.
Future Predictions and Capabilities: There's a typical corporate forward-looking optimism about AI's capabilities and its impact on society. Brad's comments on how current AI systems will seem "laughably bad" in comparison to future developments and his discussions about the potential societal benefits are optimistic. This could be viewed as an overstatement, typical of corporate attempts to drum up excitement and confidence in the technology's future.
Product and Market Focus: Brad's discussions about how OpenAI's products are used in enterprises and his description of partnerships and applications in various sectors (like media and Hollywood) are relevant and provide some substantive details. However, the discussion often veers into broad statements about potential impacts without substantial backing at the moment, which can feel like over-promising.
Apparently they beeped that out. Didn’t catch the live stream today. Curious
Alter ego
What did they beep out??
Alter ego
.....Thought Lightcap was some kind of new app being built by OpenAI.
sam altman's what?
Secret weapon
@@calliped1 Thanks
Thx
A masterclass in the art of corporate bullshitting. Absolutely magnificent! She asks a very clear and easy-to-understand question at 15:45 and just look at Brad go! A stallion in bullshitting! Nothing but complete bullshit coming from his mouth! Take notes, people: what's really important is that bit at the end. The best of the best corporate bullshitters always end with a statement of goodwill, something that says "hey, we're all on the same team, we're all good people striving for the same thing". So he ends with "If you have any ideas, we'll take 'em!". How can you not love it? He's my friend! He's your friend! He's so open minded!
10/10 he's honestly better than Adam Neumann 🫡
Will ML make milk and bread cheaper?
I mean it's not impossible.
@@reza2kn Nice. I will keep hope alive.
@02:23: "When the Transformers came out"? They came out in 2017, but you are describing some time AFTER 2018. How? Did you guys invent a time machine too?
I want my bread and milk to get cheaper. Can OpenAI fix that?
No, but you can use it to make more money, so the bread and milk doesn’t seem as expensive
This is the biggest issue with technology today. It creates virtual smoke and mirrors on top of the physical issues impacting people in reality.
Any takeaway from this interview about the "Business Applications of AI"? Trying so hard not to share anything new.
We need a “Like” counter
Prompt engineering happens all the time with adults. That's why some people are considered great communicators.
Most adults can understand any level of communication. Babies can't.
I think they like each other
Maybe you could set up an AI to ask different humans questions every day on a schedule?
They are downplaying so much of what they have created, so that they have wiggle room and can take advantage of their lead.
It's called humblebragging. It's like: sure, I scored 99 points, but I didn't play my best game. Next time I'll focus more.
I learned nothing. Not very insightful.
How did Sam Altman get this job if he always comes off like he has no idea what is going on? Or is that just a play dumb strategy?
They're being sued by the people whose data they stole. They can't answer the question without screwing up their case.
What is it with people saying "like" all the time?!?
I asked AI to summarize this interview. Result: bla bla... corporate wind.
Would you ask stupid questions?
Overhyped technology.
Corporate farting