Great podcast guys. I hate to bring it up, but you DO say ‘GBT’. I’m a New Zealander and if I hear it like that, I wouldn’t expect better from an AI atm. Anyway, looking forward to the next ep.
Thanks!
App Intents have been a huge part of Apple since the launch of Siri in 2011
Chris’s comments (rant?) on how hard it is to use AI models, given the nonexistent support for developers, are very important. I would be happy to use a clip of it in my LinkedIn post on the poor business models of leading AI vendors. Would you allow that?
Yes, you can use it.
Also, being an Android/Windows user, the Apple AI proposal is intriguing. How far can we trust them, I wonder? I WANT to trust them, because what they envision sounds great, including those new AirPods with head gestures. Imagine if Google came out with exactly the same model and presentation - who would trust Google that "your data is yours and we will never view it"?
And that is the future of learning. People I talk to are generally still stuck in an old paradigm of thinking that tries to glue chat onto existing modes. But that's misguided, because this is a whole new mode. Teachers are still trying to teach and evaluate as if personal ability is the point. It's not. It's how well you can use chat to augment yourself. That's what we need to be teaching and learning, so that your ability to use chat to do things is what gets evaluated. It looks like specialisms, and identity around what you are and what you can do, can change and are changing. What does this mean for workers? The generic skills will be understanding LLMs, critically evaluating output, and double-checking that output for correctness. Maybe once reasoning gets good enough, agents will be able to do that effectively. But I see a future where people with these skills replace entire departments of workers.
It's the first thing that's seriously made me consider switching
A lot of text-to-image models seem to have big problems with people in odd orientations - they have a strong bias toward people standing or sitting upright, and asking for someone lying down at an odd angle, or doing a cartwheel or something equally unusual, often has horrifying results. I'm sure it's how they're trained, but it does seem like some equivalent of RLHF could be used to give them the ability to rotate bodies far better.
I've tried Dream Machine - IMO it doesn't follow instructions very well
Backdoors?
Judging by how well Google has been doing in this AI war, your frustrating experience isn't surprising at all, tbh.
No, I wouldn't swap from Android to Apple for their AI, I would just wait for Google to replicate the Apple AI integration.
I'd rather run an open-source OS on my phone, running an open-source model.
Apple did open-source some AI models