Hey, I have a really important question I want to ask about your PhD... Before you got your PhD in physics or whatever, how did you manage distractions and mind wandering? Especially in your teenage years, how did you avoid distractions? What's your motto for motivation, and if you couldn't stick to your motivation, how did you get back on track??
@@RealfrontMAN_001 Hmm, it was partly intrinsic self-motivation to be independent, but mostly discipline. You can’t expect to be motivated 100% of the time to do what needs to be done. There are days where you just don’t feel motivated but still need to show up and perform, whether that’s at school or a job. If you can master discipline, then motivation is just a plus on top of that when it comes.
@KyleKabasares_PhD Thanks. So from your response, you mean we basically figure out how to do what WE NEED to do without getting distracted, it's a gradual process, and having the desire to do it no matter what is basically how success comes?
@ Essentially, yes. What also helps is having a goal in mind of what it is you want out of life. Not just in terms of a career even, but ask: what will make me proud of myself when I look back on my life? Then, when you have an idea of what that looks like, make a rough plan of how you will get there. If you have that, then it’s there to fall back on when things get difficult and you forget why you are where you are.
@@KyleKabasares_PhD Thanks... I will take these words to heart... Thank you for your advice... I really needed a nudge from someone smart, and you became the person to help. Sending well wishes. Take care. See you again if we ever cross paths.
Perceptual time dilation: welcome to getting old.
Couldn't agree more with your time statement! Thank you for the content, always interested in your perspective.
From the 3 benchmarks they showed, wouldn’t o3 be enough for recursive self-improvement, and thus the singularity, even if initially at $30k apiece for reasoning?
that's the dream. if and when that happens, govts will likely get involved. so we might not find out until it's polished
You guys do realize that Sam Altman just announced that he's releasing o3 within 2 weeks after he got the researcher feedback. I think it was just after this video. Bad timing.
lol whoops
Check out the paper from Google with the Titans model.
3:10 Our dev has definitely sped up the simulation clock.
They could hit some red-teaming problems at any time with o3-mini and o3, so late January is probably the optimal or earliest release date.
o3 will be like talking to a whole company 😂😂
Tbh, when o3-mini comes out, it's just going to be another tiny hype thing. Then everyone will just say it's not reasoning again, and we'll all wait for o3, then o4, etc. I'm beginning to think it's all just hype again to make OpenAI rich, like people have been saying.
I've been saying this since 4o. It's starting to look like a scam to me. If this is the case, Sam will be sharing a cell with the other Sam (Bankman-Fried) :)).
Peace!
You guys seem to not have been paying attention. Look around.
I mean, yeah, capabilities won't be improved; it's the price that's improved, which is the crazy thing. But people don't take note of that nearly as much, so obviously it isn't going to get massive headlines, but it's a great sign for the trajectory.
@@lordnikon6809 Right? Do these people actually try using these things that they're commenting about?
@mgscheue Exactly. We're about to have robots everywhere in a couple of years, hundreds of thousands of jobs have been lost to AI, and it's ridiculous that these people imply this is just hype.
Please consider giving DeepSeek R1 the same or similar problems that you have tested o1 with.
There are claims that it is an "open-source o1," but in coding, at least, it just doesn't seem anything close to o1.
I heard it would come out late January.
Let's all sign the form, and if one of us gets access, let's share prompts with Kyle!!!
The more I think about it, I think there's a good chance AI fizzles out. Hope I'm wrong, but I'm seeing better ability at solving math problems and puzzles, but not anything I would call human-level intelligence.
yup, end of January is still the plan
Give us your reaction to o3. You want to react, we know it.
28th Jan for mini
Meanwhile, there is a GPT-5 coming out. There is still some juice left in conventional silicon for the next decade, which is set to run out around 2030-2035. So some bigger models are definitely coming, until of course a paradigm change happens in both software and hardware.
gpt-5 is not coming out
@holykim4352 Yes, I know, that's what they say! I don't completely believe it. I think they're still trying to make it work.
I agree, AGI that I understand is cognitive + motor... but OAI is just focused on cognitive AGI
Just a small interesting point: the passing of time increasing in speed is a well-known phenomenon to us Muslims, due to us being at the end of times. Whoever knows the many, many prophecies which are felt right now knows them. And whoever is ignorant of them is ignorant of them.
Ahmad narrated (10560) that Abu Hurayrah said: The Messenger of Allaah (peace and blessings of Allaah be upon him) said: “The Hour will not begin until time passes quickly, so a year will be like a month, and a month will be like a week, and a week will be like a day, and a day will be like an hour, and an hour will be like the burning of a braid of palm leaves.”
Or... as one gets older, each year is a smaller and smaller percentage of the time that one has already experienced. One year is 10% of a 10 year-old's life and 2% of a 50 year-old's.
Things that don't exist: God, o3.
It's just hype from OpenAI. They are slowly falling behind the others. Altman is panicking; he is cherry-picking stuff to show us because he needs investors to pour more money into it. This year OpenAI's bubble will burst. Save a screenshot of this comment, you will see.
Anyways, it's always a pleasure to watch your videos, Dr. Kabasares.
Peace and love from Romania!
People have been saying this about every technology since forever ago.
@stickman1695 Just follow Internet of Bugs, and read recent research papers, like Google's Titans. Maybe you'll understand.
It's a gimmick. The real usage I find is in research, but people are hyped by AI videos, AI images... gimmicks.
@@stickman1695 It's like crypto. I use a lot of AI, but not from OpenAI. I used their products for a year, and I started to notice that the newer models are just 4o with better-constructed reasoning instructions in the model's meta-prompt. It takes forever for o1 to "reason," while other models like Gemini 2.0 Flash or regular 2.0 do it in seconds. Just watch Dr. Kabasares's previous videos.
@@iuliusRO82 I think everyone who has followed OpenAI for a while knows that, if nothing has changed, o3 will be just a minor improvement over o1. So far Sam has been overhyping every model, so I don't think this time will be any different. "Strawberry," aka o1, was supposed to be mind-blowing, and it's just 4o with extra steps. Same thing with GPT-4 and 4o. I wouldn't be surprised if the whole industry soon hits a wall. For AGI you need a model that is fundamentally creative and can think outside the box, not imitate it like current models do.
Given the advancements OpenAI has made in the past year, this doesn't seem to jibe much with reality.
These language models will argue in such a way that "it's not a trash can, it's a bucket in which you store the junk you don't want," lol. Those systems are bad at definitional consistency and equality tracking, and they use sequences of words that break algebraic rules (orthogonal vector comparisons without proportionality factors relative to a specific end-point equality comparison). It's like when Richard Dawkins tries to scapegoat trans people for views by smearing social preferences or social identity as biological identity, where the vectors are orthogonal to each other.
ruclips.net/video/eVeiiJgtwJI/видео.htmlsi=rHVpfaOXJiI4PlWx - Internet Of Bugs
love that guy ... he's a breath of fresh air in an internet polluted by madness and stupidity ... and bugs