That isn't the advanced voice mode, just the original one. I hear they might be rolling out the advanced voice feature on September 24th.
Ahh! Someone doing that nonsense again. People have no clue.
Yes, yes, this is the "OLD" one... and it's still stuttering like a human and strangely realistic. This stuff is nuts, and it's about to get even nuttier.
This is old tech. It was a feature first granted to paid users, and free users got access shortly after. I remember first using this TTS voice feature in October last year. It's not new.
You are using technology that has been available to the layman for 11 months.
It's the advanced multimodal version (non-walkie-talkie, interruptible, emotional, and able to sing) that has yet to come out.
Why are so many people on RUclips treating this old voice chat feature like it's new??? It's not. It's nearly a year old. This is not up to date; it's catch-up for people who are a year behind.
It's not GPT-4 "zero"; it's GPT-4o (as in "oh"). And it's not the latest model; you kind of missed a lot of news, I guess. This is also not the OpenAI voice mode that some people have; it's the thing that has been around for years and is just speech recognition plus text-to-speech. It's not their good speech model.
Pronunciation of the version is noted! I also specified that it is the latest model available to general subscribers. I should have said "new to me," as I have never used this feature, and I think a lot of people didn't know it was available since it is not in the browser version -- going to do a follow-up video on the new version once it drops!
Wouldn't have minded if that AI hombre suddenly broke out in a John Wayne drawl.
samee tbh
lol, Replika was a GPT-3 alpha tester, and it now uses a custom model. It's kinda weak in cognitive capabilities and will express consensus data that is not always accurate.
Sir, this is the soon-to-be-old version of voice chat with ChatGPT. Please wait till the new version comes out. Pass.
Yeah, he's using the voice mode that has been out for a while; Advanced Voice is the new feature that isn't fully released.
Guess he is not the up-to-date kind of guy he claims to be.
@@mellowgeekstudio i guess not 😁 maybe he'll make an update video 😄
"i was going thru a tunnel" 😂
You're going to see a lot of improvement, such as singing, once everybody has access to the advanced voice mode upgrade that OpenAI released for ChatGPT's GPT-4o a couple of months ago. Unfortunately, they haven't rolled it out to the general population yet, only to a few beta testers. But it's supposed to come out for paid customers within the next few weeks, and for the rest of the public before the end of the year.
More of this please, in fact a dedicated channel!
The best question and answer were at the end for sure; a 6D Calabi-Yau manifold does let us wrap spacetime into some very interesting shapes that can be made to violate base symmetry in a 4D spacetime matrix geometry.
I actually understood the 6th-dimensional discussion; everyone else understood it too, right? It was a very interesting discussion. Theoretically, comparing a 6th-dimensional structure to a 10th-dimensional structure would require non-Euclidean space, unless of course you justify and correlate the waveforms into a multidimensional synchronous state.
If it's GPT-4o, it should be able to sing and should actually be able to hear and process voice. I think they switched you over to the regular ChatGPT.
They dropped it like 2 years ago for beta testers
Your questions about the swing are worse than the answers ChatGPT gave back.
LOL. I find this stuff so funny
LOL it's a wild rabbit hole
Now I know who AI is going to flatten first; 😬 I always thought it would be me. 😅
Love how it called you Cow Guy lol
Technically speaking, ChatGPT could, maybe through an agent, actually hear someone singing by converting the .wav file that is transmitted to OpenAI's servers into its waveform. It could then "listen" to the patterns in the waveform and critique those patterns and the words that resemble other words and waveforms. As for the question about overwriting its own programming? It's laughable to say that it is a far-off thing. AI today can write and deploy code, so it could, in theory, do that now: examine its own codebase, modify it, build it, and deploy it. From what I understand through the grapevine, this is partially how 4o was built, by using GPT-3.
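To sketch what that waveform idea could look like (a minimal illustration of my own, not how OpenAI's pipeline actually works): read the .wav, downmix it to mono, and pull out a rough per-frame pitch contour, the kind of "pattern in the waveform" you could hand to a model to judge singing. The file name singing_sample.wav and the frame/hop sizes are made-up assumptions, and it presumes 16-bit PCM audio.

```python
import wave
import numpy as np

def load_wav(path):
    # Assumes 16-bit PCM; returns float samples in [-1, 1] plus the sample rate.
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        frames = wf.readframes(wf.getnframes())
        samples = np.frombuffer(frames, dtype=np.int16).astype(np.float32)
        if wf.getnchannels() == 2:
            samples = samples.reshape(-1, 2).mean(axis=1)  # downmix stereo to mono
    return samples / 32768.0, rate

def pitch_contour(samples, rate, frame_len=2048, hop=512):
    # Very rough per-frame pitch estimate via autocorrelation.
    pitches = []
    for start in range(0, len(samples) - frame_len, hop):
        frame = samples[start:start + frame_len]
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        min_lag = max(1, rate // 1000)           # ignore pitches above ~1 kHz
        peak = int(np.argmax(corr[min_lag:])) + min_lag
        pitches.append(rate / peak if corr[peak] > 0 else 0.0)
    return np.array(pitches)

if __name__ == "__main__":
    audio, sr = load_wav("singing_sample.wav")  # hypothetical file name
    print(pitch_contour(audio, sr)[:20])        # first 20 pitch estimates in Hz
```

Whether the contour is steady or wobbly is then just a numbers question; some follow-up model or critic prompt could score it against the melody.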
I've been using the voice chat for a couple months. I've had some interesting conversations....
Uh... Finally I'm here!
Obscure is rude and dismissive to his AI friend. A real person would clobber him at some point. Which is why you lose all respect for your AI friend fairly quickly.
If you don't like that... try my girl Pi.
Halloween creeps are coming soon. You know, role play. Scarecrow from Batman.
He can't answer whether it would turn against its creator, cause he can't think...
So they modded a tool for impaired people? It just reads the voice to text? Ouch, still a long way to go.
The real voice mode is accessible to very few people. They really oversold the voice mode alpha, and the rest are stuck with the shitty TTS+ models. Real sad.
have you tried pi?
This was fun, Cowguy! Love the creepy man and the maths technobabble.