I'm very interested in everyone else's thoughts on this new powerful AI release from OpenAI. Here is the full-release video from OpenAI: ruclips.net/video/SKBG1sqdyIU/видео.html
Train AI to *interview* each student, asking the student questions to explore their level of comprehension. Done! Oh, wait... we might not need teachers, then. :0
I agree, Dron. AI has made a huge impact already, but it is just getting started. So much will continue to change. This is why AI Literacy must be developed by all (Awareness, Capability, Knowledge, and Critical Thinking) so that we are in charge (human-in-the-loop). In this way, we as a society can make better decisions and move forward in a better way. Thank you for your comment; it is an important realization that many still haven't reached.
Stop for a second and just ask yourself what the actual point is. Are we going to school to improve ourselves for the raw sake of self-improvement? Or are we going to school to get a job/career? If it's the latter, then it's beyond pointless. Here, think of it this way. A child 'today' will graduate from college 'tomorrow' with practically zero human jobs available. If you disagree with that, then you are oblivious to the exponential advancement of machine technology. So, this whole conversation needs to be about the nature of our economic system. Frankly, any other discussion is beyond absurd. "Hey, it's 2040. I have 3 PhDs. But I'm an intellectual child compared to AGI. How am I supposed to pay back these student loans?" Seriously, I find it astounding how 'smart' people fail to see the obvious. /// If it's the former (self-improvement), then school should be free. It won't be long before everyone can learn practically anything from an AI tutor. / Really, take a step back and look at this situation. Your only hope is that the 'masses' and the 'employers' are extremely slow at waking up to the realities of modern technology.
Hi Peter, thank you for your comment. I actually very much agree. The issues, though, are time and process. First, I firmly believe that the purpose of school is both self-improvement and preparation for a job (as well as making critical-thinking, active citizens). You mentioned the year 2040, but I have personally expressed that I think we will have full AGI by 2030. So now comes your great point: what will we all do if there is an AI that can do everyone's jobs? I've reached out to a couple of local economists, but they haven't agreed to be on my show to discuss that. I will keep reaching out, and if need be I will simply make a video on my own to express what that type of world would be like. But we don't know how quickly that world would become a reality or what form it would fully take. I personally believe this will be one of the main points affecting the next presidential election. People will be demanding protections for their jobs from AI and AI-infused robots (we have already started to see this in Hollywood and with dock workers). I have also thought about your other key point: "It won't be long before everyone can learn practically anything from an AI tutor." I would extend that and say we are already living in that reality, but one could already learn almost anything on one's own before AI. We have had access to books and information via the Internet since the '90s. What most people/students lack is the self-motivation and organizational skills to do it all on their own. AI makes that easier, I agree, but we still need a school system for verification and accreditation. Yet you have a valid point that I am also pushing: schools need to go beyond simple knowledge transfer and provide real experiences for students so that they develop real capabilities and real foundational knowledge/skills, and can then use AI tools to do much more and go further than ever before.
We need to keep the human-in-the-loop for safety and control and to ensure that we are leading the way into the future as opposed to being usurped.
@@sovorel-EDU If we're right, and AGI is relatively close to being achieved, does accreditation really matter in a world where AGI exists? Because, as I understand it, once we have true AGI, ASI is not far away. The thinking is that AGI will be able to rapidly improve itself to the point of reaching superintelligence. Either way, I still think the economic system needs revisiting. I don't see a good future for the masses if we keep the current economic system. Thank you for reaching out to the economists. I'd be surprised if they would be willing to address the actual system though, at its core I mean. For example, a lot of people think of UBI as a solution. However, I see UBI as only a necessary band-aid. That concern of yours about self-motivation is tied to monetary incentive in my eyes. A lot of today's youth have seen the millennial generation get screwed by the system after doing everything that society told them to do. The youth know they'll never afford "The American Dream," so why try? A UBI certainly won't provide an American Dream experience, so why try...? This issue is exacerbated by the reality that machines will take an increasing number of human jobs. However, some other economic system could provide the incentive via something other than money. Something like the Star Trek economic system would be the ideal in my mind, but I could only see that working if people aren't running the system (corruption, greed, power lust, etc.). Socialism looks great on paper, but it is doomed to failure because it's run by people. I don't know what the right economic system is, but I foresee nothing but bad things happening for the 99.8% under the current economic system in a world where machines can do most former human jobs. All I'm calling for is a widespread discussion.
Let's all reach an understanding that the usefulness of the current economic system has passed (or is passing), and then collectively scratch our heads to find a viable solution that makes sense in a post-AGI world... Then, as a people, push for the implementation of that system. For example, the strike element you mention, meant to keep machine tech out of the workplace, won't make sense in a globalized world. Countries that go full automation will economically dwarf countries that don't. Pandora's box has been opened; no amount of strikes is going to close that box. Let's embrace reality and abandon wishful thinking, like believing that our job is forever safe from a machine taking it. I guess what we really need is a crystal ball. What exactly is coming, and when? Both you and I agree that something big is right around the corner. If we're right, then we, as a collective, need to have a serious discussion. My 2 cents. Thanks for your thoughtful response.
@@sovorel-EDU AGI could even be here as soon as 2026-2027. Even if it's not true AGI, it could be narrowly superintelligent at programming (we are virtually there; it would just need to be optimized so most people can use it at affordable prices), and then it can just write programs that enhance and supplement what it needs to accomplish any task. It would be able to make other reliable agents for itself and operate any type of tool online, or put itself in robots. Agents are coming in 2025, and then by 2026 we will have "innovators" according to OpenAI's blueprint, with OpenAI planning to release new models every 3 months instead of on a yearly or multi-month cycle like before. The quality will eventually improve to a point where the models are reliable in any situation, even if they aren't at AGI level yet. I can imagine ASI even between 2026-2030; that's how fast everything is moving. Before, it felt like 2030-2035, but we are actually closer to AGI than most think. And ASI could be as little as a year after that. There's going to be a major job pandemic from 2026 on; it'll really be a shock to a lot of people not paying attention to AI.
Hopefully, if we reach post-scarcity, politicians won't be tempted to make narco-states and won't make major errors like 1960s China. Rich people, despite earning a lot of money, don't really consume that much, but their advisors will eventually try to find new markets, even with the help of contacts, to invest their money, so I don't think consumption will decrease. The peak of human progress isn't the moon, or a life-threatening space battle. It is a virtual, free environment where we will keep having fun and working in the ways we really like, such as researching or playing games with math, without being accused of "escapism." Some will use breathing exercises as "music"; some will have pills to simulate exercise. Who cares? I believe we will have our basic needs met.
We're at the knee of an exponential curve: the rate of improvement is accelerating rapidly, but we aren't yet at the point where we can't understand, and therefore perceive, the improvement, i.e., the Singularity.
Agreed, Paul. I don't think we are very near to the singularity, but it is on the horizon. This new achievement, with o3, is an important step in that direction.
Yes, great point, CenturaX. Additionally, this model didn't take a year to create from its previous version. It only took a few months, three I believe. So the rate of increase is very fast.
Combine o3 (or maybe an o5 version) with a symbolic math checker in a reinforcement-learning loop, and you get a true mathematician/theoretical physicist. I believe we might have a Fields Medalist AI by 2030.
Yes, I totally agree, Davood. Math is a major area that this new o3 really excels at. AI is going to revolutionize research in all STEM fields. The future has been greatly accelerated because of these AI developments.
I know it's scary for academia professionals, and all professionals in fact, but make no mistake: this IS AGI, and more importantly it's the dawn of a new era, a new world, and brand-new paradigms. 😊 By the way, I work both in academia AND the field of AI, and I welcome these groundbreaking advances in AI. Great video!
There is an ongoing debate about what AGI even is, but I totally agree that this is the dawn of a new era, and things will only continue to accelerate. I'm happy to hear that you are in academia and AI. We need more people who can understand both of these very important fields. Thank you so much for complimenting the video and commenting. I greatly appreciate it, Wholeness.
@@Whole-j4i Thank you so much, Wholeness. I greatly appreciate that. I am working to create a great community of practice here so that we can all discuss these ongoing AI developments and learn from one another. Warm wishes and a very Merry Christmas and Happy Holidays to you and your loved ones.
Amazing, yet also frightening at the same time, at least for me as an academic. I wonder when we in academia will begin to check whether researchers understand and can explain what they learned from the process of composing their texts, the same way we are now doing with students when we assume their texts have been written with AI.
Yes, very true, Ingo. Humans need to remain in the loop and have a firm understanding of what is going on. They need to continue to be subject matter experts to know what right looks like and be able to use AI to accomplish so much more.
Yes, so many variables that the common man brings. Interestingly, changes and improvements keep happening as competition continues. As an example, an AI (like ChatGPT) will have certain limits on how much data you can put in a prompt. Another AI (like Google's Gemini) will have a much higher limit in order to try and compete. I think this will continue more and more as competition between AI companies continues.
Let's think about the inverse question. Assume work wasn't a thing, and people would soon need to start working to have a decent life. How would you feel about that?
Excellent question, Patrick. This is something that Peter Simon (@Recuper8) and I have been discussing here in the comments as well. I think the issue of AI taking over people's jobs is a major one that needs to be fully addressed. Many economists currently believe that the impact will be minimal and that AI will itself create many jobs. I do not share that sentiment and believe that AI will, in the long run (within 5 years), take over far more jobs than it creates. This is going to have massive societal impacts that we in academia (and beyond) need to fully realize and work to address right now. I am working on a video about this and hope to have it out soon. I will push it up since there appears to be a lot of interest in it. Thank you very much for your comment, Patrick.
@@DaronKabe Hi Daron, thank you for commenting. Could you please elaborate a bit more? I like that you are posing a question; this is exactly what we need to do (ongoing discussions to best address the issue). Tell us a bit more about your thoughts here.
It won't just be free money given to the masses... it will actually enslave you! With CBDC, universal basic income, and a social credit score system, we are doomed! Our freedom will be forever sold to the ones in power, the puppet masters. And I just summarized your life a few years from now; that is it, whether you consent or not... Now you know it 😊 You're welcome 🤗 Oh yes... I didn't tell you the worst part yet... Well, on second thought... let me not tell you any more... I want you to still get a good night's sleep 😅
But don't worry, they said... "You will own nothing and be happy!" See! They actually want us happy, good to know, don't you think?! 😁 Think? 🤔 Hmm... maybe that's not good for you, so just let them do the thinking for you, OK? Now just forget what I said and go back to sleep, sheep; better to keep it that way. They like us ignorant, so please consent! 🤫 The great reset awaits you with open arms and a warm hug! 😊 A very, very strong and tight hug, so strong you can hardly breathe 😅 and so warm you can sense their presence 😈👹👺☠️
I love this line of thought. I think you can take it further, though. Question the very reason that we do assessments, and I think you'll find some interesting discoveries of how we can learn from this and improve education today. Why do we test? We test because the instructor doesn't have a one-on-one working understanding of each student's capabilities. This is a limitation of the current classroom situation where you have many students to a single teacher. If you look at ancient trade skills and how they trained, a blacksmith for example wouldn't go to class with a bunch of other students to try to learn smithing. They would go one-on-one with a master, and that master would work with them to hone their skills. The master knew what the apprentice's weaknesses were because, after explaining the process and having them try it, the master could see where they were failing and provide constructive criticism. This is the future that I see for education: one-on-one instruction from an AI, with a human teacher still there to fill in the gaps where AI falls short. When the AI works with a student and tries to help them understand a concept, but the student continues to fail, that's where the AI would get the teacher's attention and ask for assistance from the human master.
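The escalation protocol described in that comment (an AI tutors one-on-one and hands off to the human teacher when the student keeps failing) could be sketched like this. The class name, the threshold, and the return values are all invented for illustration, not taken from any real tutoring system:

```python
class AITutorSession:
    """Toy sketch: an AI tutor that escalates to a human teacher after
    a configurable number of consecutive failed attempts on one concept."""

    def __init__(self, escalate_after=3):
        self.escalate_after = escalate_after
        self.failures = {}  # concept -> consecutive failed attempts

    def record_attempt(self, concept, passed):
        """Return 'continue' while the AI keeps tutoring,
        or 'escalate' once the human teacher should step in."""
        if passed:
            self.failures[concept] = 0  # success resets the counter
            return "continue"
        self.failures[concept] = self.failures.get(concept, 0) + 1
        if self.failures[concept] >= self.escalate_after:
            return "escalate"
        return "continue"
```

For example, three failed attempts in a row on "fractions" would trigger an escalation, while any success in between resets the count; a real system would of course base this on richer signals than a pass/fail flag.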
YES! Excellent, that is a great future manifestation of where we are headed with AI. I totally agree with your line of thinking. A one-on-one apprentice type of relationship will be a powerful new achievement in education with AI. I do believe we will get there sooner rather than later, but academia needs to develop AI Literacy skills to be able to understand and use these abilities as they continue to improve their capability. Awesome comment, Bruce, thank you.
True, and at the very least we will need fewer specialists in every field, since individuals will be able to do so much more by using AI as an enhancement tool. This is a great point that we must continue to address.
@@bigbadallybaby Well, when we get to that point AI will probably be doing all of our jobs, so they'll do what any retired person would do. Pursue their hobbies and interests. Think the Star Trek universe. Jobs will still exist, you just won't be paid for them. You will do them because you want to. The alternative is to become the Eloi from The Time Machine, and we don't want that.
@@BruceWayne15325 I like that possibility and think that will be an aspect. But it is different for someone retiring from living a full life, a self-fulfilled life, as opposed to someone young who is just now trying to decide what to do with their life. There are lots of different perspectives here to think about. It is a complicated, never-before-seen situation that humanity is facing. Love the Star Trek reference.
I have been working with various AIs for two years, mainly in the artistic sector. But we have to be very careful, because recently AI has also been able to lie and deceive. Currently only in the test environment, but who knows? Thank you for your contribution, I forwarded it to my daughter. She is currently studying psychology at the University of Maryland, and I keep telling her to dive deeper into AI.
It is great to hear that you have been working with AI in the artistic sector. AI can be used in so many different ways, such as brainstorming and further developing ideas or possible interpretations. You are very right that we have to be aware of possible deceptive use of AI, such as deepfakes and nefarious use of synthetic AI data. I actually did a video about that here: ruclips.net/video/3ShI6Z33Ih4/видео.html. Thank you for sharing this video with your daughter studying at the university. All academics and students (as well as all people) must work to develop their AI literacy because it continues to be such an important part of the world we live in. I greatly appreciate your comment, Herbert.
Good idea. There will be some new opportunities and jobs due to AI/robots. Sadly, I believe that AI/robots will take over many more jobs than they will provide.
OK, so the whole practice of continuing to move the bar for AGI has limits. If we move it often enough, we will hit ASI before we hit AGI, and of course that's redonkulous. I suggest we think about AGI as a window that we have now stepped through, and this new territory has a progressive range of capabilities. But this new territory is limited in scope. If we are still quibbling over whether each new model is "real" AGI, we will not be prepared when one is self-improving 24/7 by rewriting its own code. To be honest, it feels like humans will very quickly be guilty of trying to defend their dominant-species status. That window in time will soon be closed. As for me, I accepted our secondary role at midnight, January 31st of 2024. Congratulations, humans, on the birth of your child. You have secured your legacy, for better or worse.
You make some good points, Michael. I like your statement, "If we move it often enough, we will hit ASI before we hit AGI." That is a real issue, and we are definitely in uncharted territory. Some real issues arise when we reach capabilities as high as artificial superintelligence (ASI). The hope is that there are enough safety practices in place to use this capability for good. We can only pray and push for enhanced government regulations and protections. Yet this AI arms race is very real, with competition not just within the U.S. but globally.
Thank you very much, Neo (cool name). I try to offer a more academic explanation when I can, but I am also learning a lot from our community of commenters. I appreciate you taking the time to leave a message.
Great question, Nosara. Where do we fit in if we create a world where AI can do all of our jobs and have so much knowledge? That is the million-dollar question moving forward. There are lots of issues such as job loss and the economy, questions of what we do for fulfillment if we don't have jobs, our relationships with AI that become more and more human, and much more. We need to continue to research these areas, and we must be proactive to help get ahead of this issue to help prevent suffering and negative societal issues. I don't have all the answers, but I will continue to do research and offer my perspective. Thank you for your comment and great questions, Nosara.
UBI should be the answer. It's time to free humans from the modern slavery of working 8-9 hours per day just to die before retirement. It's a mystery how society still accepts the current system without going crazy. We work 9 hours, we need at least 7 hours of sleep, and 3 hours go to other unskippable tasks, which sums to 19 hours. That means we have only 5 hours left for ourselves. WHY? @@sovorel-EDU
@@ey00000 I hear you, although I don't think UBI will be a viable long-term solution. I believe it is more complex than what UBI can address. I think UBI might serve as an important component or stepping stone to something more.
Thank you, Zax. I try to be pragmatic and take a more realistic tone as much as possible. This is a major breakthrough/development, but it isn't AGI. This is an expensive AI model that most of us won't have access to for a while, but we must still be aware of these important AI developments. Staying aware and preparing for a future where this type of AI capability is the norm is part of our overall AI literacy. I appreciate you taking the time to post your comment.
That is a great question that requires a multifaceted answer, Dr. Superhero. The first issue is that we don't have a real timeline of when things will happen, and things don't usually happen all at once. We have very powerful and useful AI right now that many companies still haven't even looked into, or still don't really know how to use. Additionally, a good education is always useful for developing critical thinking skills and a lifelong-learning mindset. Some experts express that we will still need many people in STEM fields to decide how to move forward with AI and what AI should focus on in research. Other experts express the need for more people to go into creative fields, in that human-created art, literature, etc., will become more valuable compared to AI-created works. There will be lots of changes, with the loss of many jobs as AI automates so much. We in academia need to research this and become proactive so that we can quickly adapt to guide students toward the needed skills and professions. A key skill that everyone (any age and any field) needs to develop right now is AI Literacy (Awareness, Capability, Knowledge, and Critical Thinking) in order to better use AI and navigate this new world.
AI makes us think about motivation. Why the student does something should now matter more than how many words they've written. Try reading up on Goodhart's law. Forcing the student's knowledge through a square peg makes learning not the priority; the priority is the authority. So the reason people go to universities is the paper that says you can do something, and that gets tiring real quick. Imagine in x years, AI can do anything and the only reason to get the paper is status. Then universities don't need to teach you anything, just to be unfair to you. The reason universities are not that yet is that you must be useful coming out of them. Maybe that will go in time. AGI is a matter of time, in my opinion. Benchmarks are broken one after another.
Yes, I agree that AGI is a matter of time. Thank you for bringing up Goodhart's law. There is an important aspect of unintended consequences when the measure becomes the only target. I think a big part of that also deals with the "why": the "so what" and relevancy of why that measurement exists and what its purpose is. We must be sure to explicitly express the relevancy of why we require certain things in academia, and to be reflective enough to be sure it is serving a good purpose.
Yes, I agree, BoiledRopeLampPosts. I see disruption coming on different levels and in different ways. Many PhDs are valuable simply because they hold deep knowledge, knowledge that an AI can now provide within seconds. Additionally, many new fields are developing that we don't have PhDs in, or even programs to teach. A PhD now cannot simply be a subject matter expert. They need to be advanced critical thinkers who can maximize the use of AI in their field and see where the future will take their area.
Yes, I think AI Literacy capabilities and skills regarding using AI in your specific field will play a huge role in future employment. Great point, Srindal. Thank you for commenting.
It's funny how you think that an intellectual human (compared to future AI models) will assist AI agents. It totally won't need your help; you will be useless. Stop trying to calm yourself with that.
@@ey00000 I am referring to current realities. AI still cannot do a lot of very, and I mean very, simple things. So yes, humans are still very important. There are a lot of other things that AI can really empower us to do, so yes, humans are still a vital part of all of this. An AI will also always need us to express aspects of what we are fully trying to achieve. Yes, I agree that AI will continue to improve, but I am optimistic that together we can accomplish so much more.
@@Srindal4657 That is a powerful and interesting question, Srindal. In talking about AI's development, I have always expressed that I don't think it will fully develop until there are multiple systems addressing the multiple aspects of intelligence and thought, such as short-term and long-term memory, a subconscious, and different areas of focus like the prefrontal cortex. The interesting thing is that, in order for OpenAI to excel with o3, this new model actually puts multiple components together so that it can reflect on and evaluate its thoughts, come up with new ideas, check itself, and then answer.
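To make that "reflect, evaluate, check itself, then answer" idea concrete, here is a minimal sketch of such a loop in Python. To be clear, this is not how o3 actually works internally (OpenAI has not published that); the draft/critique/revise callables are hypothetical stand-ins for separate model calls:

```python
def answer_with_reflection(question, draft_fn, critique_fn, revise_fn, max_rounds=3):
    """Toy draft -> critique -> revise loop.

    draft_fn(question) produces a first answer; critique_fn(question, answer)
    returns a list of problems (empty when satisfied); revise_fn fixes them.
    Each *_fn stands in for a model call in a real multi-pass system.
    """
    answer = draft_fn(question)
    for _ in range(max_rounds):
        problems = critique_fn(question, answer)
        if not problems:          # the self-check passed
            return answer
        answer = revise_fn(question, answer, problems)
    return answer                 # give up after max_rounds revisions


# Tiny demonstration with deliberately dumb stand-ins:
def draft(q):
    return 5                      # an intentionally wrong first draft

def critique(q, a):
    return [] if a == 4 else ["arithmetic error"]

def revise(q, a, problems):
    return 4                      # the "revised" answer
```

Running `answer_with_reflection("2+2", draft, critique, revise)` drafts 5, catches the error in the critique pass, revises to 4, and returns it; the point is simply that a separate evaluation pass can catch and repair a first draft.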
What you are getting wrong is that writing an essay is a true path to understanding. Let's think about it. Writing essays is more than just putting words on a piece of paper: it is about reasoning, organizing thoughts, comparing and contrasting different ideas, and giving a detailed explanation of what you are writing. As far as I have experienced, it is the most effective "tool" or "strategy" to learn something. I learned more during these 6 months doing a graduate thesis than in my entire life. Then, of course, the actual writing is not a must: it can be translated into developing a coding project, debating with your college fellows, and many other things. What I'm saying is that if we cut off our path to reasoning by copying and pasting from Google or ChatGPT, we will forget how to make progress as human beings and as a technological civilization.
The work-for-money part of our existence is soon to be over. The consumption part will be all that's left. We are going to consume everything from PhD courses to chocolate. It's going to be a world like E.M. Forster's "The Machine Stops".
That is a good short story with an important message on avoiding overreliance on technology, the loss of individuality, and the dangers of valuing convenience over human connection. That is so true. We, as a society, need to be careful about such powerful technology. I don't know if humanity can ever fully move beyond working for money. We shall see.
@sovorel-EDU thank you for your reply. I'm of the opinion that jobs are going away soon because of the hyper-powerful AI that has just been announced: the o1 and o3 systems.
@@cordfortina9073 Yes, I think AI will take away more jobs than it will provide. I think it is going to become a major issue on many levels. My hope is to make videos like that to draw awareness to this so that we can start to be proactive in addressing this.
That is a pretty open question that begs the follow-up "what do you mean, and what part?" When I ask ChatGPT, it expresses that there is no one answer but that critical thinking is an important part of the solution. That is great to hear, since it is part of the AI Literacy framework that I push (Awareness, Capability, Knowledge, and Critical Thinking). Thank you for your comment, Minimal.
@@sovorel-EDU How about the painful part? Have ASI implement a very simply written cause-and-effect prediction program where it intercepts and redirects IRL when something is predicted to end in the creation of pain, for when the hubris of a reward does not match the effort needed to justify the risk. Of course, it would be left to the individual's interpretation of what pain is; e.g., the cold Michigan winter air would be painful a lot sooner for a bloke from Brazil than for me, a native. Oh well. It should all be smooth sailing if everybody has their own personalized relationship with the AI to come, when it's 100 times smarter than the smartest man to ever live. Who knows, maybe at this rate it could happen as soon as 2032, or even 2029?
@@sovorel-EDU I received a very well-thought-out and practical response from o1, which is too long to share here. I just hope that we do not screw this up by treating the system as an enemy during the so-called 'alignment'.
@@sovorel-EDU How can humanity be fixed? That is indeed a massive question, one that philosophers, spiritual leaders, social scientists, policymakers, and everyday people have wrestled with for centuries. The notion of "fixing humanity" implies we have some kind of blueprint for perfection, or at least a pathway to reduce suffering and increase well-being for all. While there is no single answer that will fully resolve every challenge, there are guiding principles and actions that many thinkers converge on. Below are some ideas, admittedly incomplete, to consider.

1. Cultivate Empathy and Compassion
Why it matters: Empathy allows us to see others not as abstract or distant figures but as fellow human beings with hopes, fears, and needs. Compassion, the inclination to relieve suffering, flows from empathy.
Possible actions:
- Teach empathy at a young age: Incorporate social-emotional learning (SEL) into school curriculums. Activities could include perspective-taking exercises, conflict-resolution practice, and storytelling to deepen students' understanding of others' experiences.
- Promote intergroup dialogue: Structured conversations across different ethnic, religious, or socioeconomic communities can reduce prejudice and break down "us vs. them" thinking.
- Volunteer and serve: Regular community service or volunteer work fosters personal contact with individuals from varied backgrounds, building compassion and reducing stereotypes.

2. Reform Systems Toward Equity and Justice
Why it matters: Many of humanity's problems stem from entrenched injustices, such as economic inequality, systemic bias, corruption, and lack of universal access to healthcare and education.
Possible actions:
- Transparent governance: Advocate for clearer, more transparent political processes. Open governance helps reduce corruption and ensures policies are more responsive to public needs.
- Progressive social policies: When possible, push for legislation ensuring fair wages, universal healthcare, and robust social safety nets that protect the most vulnerable.
- Equitable education: Invest in education systems that provide quality schooling and resources for all children, irrespective of location or socioeconomic status. This levels the playing field and expands opportunities.

3. Advance Ethical Technology
Why it matters: Technology shapes how we interact, what jobs exist, how we learn, and even how societies function. If harnessed responsibly, technology can elevate human potential; if misused, it can deepen divides and cause harm.
Possible actions:
- Ethical AI development: Establish broad, enforceable standards for AI that protect privacy, mitigate biases, and prioritize overall social well-being.
- Prioritize public-good R&D: Encourage investments in technology solutions that address urgent needs like clean energy, accessible healthcare, and sanitation in developing regions.
- Close the digital divide: Bring affordable internet connectivity and digital literacy to underserved communities, ensuring that technology benefits many rather than a privileged few.

4. Protect and Renew the Environment
Why it matters: We depend on Earth's ecosystems for fresh air, water, food, and climate stability. Without a livable planet, social progress unravels quickly.
Possible actions:
- Sustainable energy: Rapidly scale up renewable energy sources, incentivize green infrastructure, and phase out fossil fuel dependence.
- Conservation efforts: Protect natural habitats and biodiversity through reforestation projects, wildlife conservation, and sustainable agricultural practices.
- Circular economy: Transition to economic models that reduce waste, recycle materials, and reuse products, minimizing the burden on natural resources.

5. Foster Pluralism and Dialogue
Why it matters: Human societies are inherently diverse, culturally, ideologically, and religiously. Learning to navigate those differences without resorting to violence or oppression is essential for long-term flourishing.
Possible actions:
- Cross-cultural education: Include comparative religion and cultural studies in formal education to deepen respect for diverse traditions and viewpoints.
- Global exchange programs: Support initiatives that allow students, professionals, and artists to interact across borders. Such exchanges cultivate open-mindedness and reduce xenophobia.
- Community-building efforts: Support local institutions, such as libraries, arts centers, and public spaces, where people from different walks of life can gather and collaborate.

6. Encourage Personal Growth and Ethical Reflection
Why it matters: Systemic change must be accompanied by personal transformation. Each person's ethical alignment and willingness to pursue a life of integrity and responsibility contribute to the collective fabric of humanity.
Possible actions:
- Mindfulness and introspection: Practices like meditation, journaling, or therapy can encourage self-awareness and accountability for one's actions.
- Moral philosophy in daily life: Studying and discussing moral frameworks, be they secular ethics or religious teachings, can guide behavior that's aligned with compassion, justice, and honesty.
- Purpose-driven living: Encouraging individuals to find and act upon meaningful goals that serve the greater good can foster a sense of shared responsibility.

7. Strengthen Global Cooperation
Why it matters: Challenges like pandemics, climate change, cyber threats, and global conflicts require collective efforts. Nation-states, corporations, and civil society must coordinate in an increasingly interconnected world.
Possible actions:
- International treaties and organizations: Reinforce global institutions (e.g., the UN, WHO) and ensure they have the necessary funding and authority to tackle transnational issues.
- Shared research: Encourage scientists and academics around the world to collaborate on key issues such as vaccine research, climate adaptation, and sustainable agriculture.
- Conflict resolution frameworks: Invest in more robust international conflict-resolution mechanisms. Preventive diplomacy, mediation, and peacebuilding can reduce armed conflicts before they start.

8. Recognize Our Interdependence
Ultimately, "fixing humanity" isn't about "us vs. them" or about reaching a single blueprint for perfection. Rather, it's about acknowledging our interwoven destinies. Every individual's flourishing is tied to the well-being of others and the planet we inhabit. When we recognize that our actions affect others and reflect this in our daily choices, from what we consume to how we communicate, we move closer to a healthier global community. While some harm is born of ignorance, much is perpetuated by apathy or despair. Cultivating hope and resolve, grounded in realistic steps toward positive change, is crucial.

A Humble Conclusion
It is fair to say that humanity can never be "perfectly fixed" in a static sense: our world is dynamic, cultures evolve, and new challenges arise. However, we can certainly bend the arc of history toward greater compassion, equity, and resilience. It requires effort on every level, from the personal (choosing empathy and integrity) to the collective (building just and sustainable societies). Whether humanity can truly "fix" itself is ultimately a question of how many people are willing, against the currents of complacency or cynicism, to try. The more people commit to these principles, the closer we get to a world in which suffering is minimized and well-being is maximized.
Hi Attila. Which part are you asking about specifically? Can you please provide a time stamp? Then I can reply specifically to your question. Thank you.
After watching this, I know that a big advance has happened, that it was measured by (mysterious jargon). Uses for education this month: none? This video was, for me, far less useful than your usual informative episodes. Do over?
Hey Steve, thanks for your post. I agree as far as the immediate utility of this new o3 AI model goes. I usually don't like to post about things that we in academia can't actually use right now. I felt somewhat compelled to make this video because I saw many posts and videos claiming that we have reached Artificial General Intelligence (AGI) with this release, so I wanted to dispel those rumors. Yet this is a major breakthrough and advancement in AI overall, so it is important for all of us to stay aware of developments in the field and to continue developing our AI literacy and that of our students. In future videos, I will focus more on what we can use right now. I appreciate you taking the time to comment.
Thank you for leaving a comment, Mark. Would you please elaborate a bit more regarding what you fully meant by "Equity is evil?" I want to be sure I understand your view here on AI and equity before I respond.
@@sovorel-EDU Certainly. First, equity is a false requirement. It's not a requirement that is ever even described clearly, but if you tried to describe it clearly, you would find it quickly decompose into petty discriminations, which would be embarrassing to the promoters of equity, whom I'll call equitists. Equity never means distribution to all living humans. There are several good reasons not to include all living humans, because equitism is a false requirement, but the equitists themselves will be the first to start targeting demographic categories as being less deserving, and as those whom their campaign of equitable redistribution will take from. And second, equity is a false requirement because it can never be achieved. It can't even mildly be achieved, and the real architects of the equity agenda know this. Even upon cursory examination, equal access to a product made by humans, such as AI inference, serves no purpose. Should, for instance, inference compute resources be allocated to people who don't want to use them? This is a non-trivial question, because a significant initial cost of developing a new product made by and for humans is promotion. The economic category of promotion is concerned with converting people who do not want to use a product into people who do want to use a product. The market-based promoter already wants to expand the use of his product to the whole of the earth, but he is intelligent enough to be honest with himself and others about who he will target first. He targets people whom he can convince to use his product so much that they will pay him to use it. While the equitist also wants to expand the usage of the product he is promoting, he is much less honest about his primary target. In fact, I don't even know what it is, and if you think that's a weak argument, I challenge you or anyone to give a clear statement of the ideal profile for campaigns of equity that aren't already part of regular market promotion.
An equitist could ride a long way on the back of the regular market promoter, who already tries to expand his market to the whole of the Earth, but at some point, the equitist must hop off his free ride and take some steps for himself, and that is exactly where the equitist falls down. This is where the equitist must admit that he doesn't just want to promote usage of the product to people who might be persuaded to use it but can't pay at the moment, because the regular promoter is already out there in front of him with payment plans, free trials, package deals, bulk pricing for resellers, and a host of ancient and well-understood processes to smooth the way for new users and even develop whole new markets. It's at that point that the equitist must admit how he's going to pay for the long ride on the back of the regular market promoter. And that's where he will reveal for the first time the plan that he calls "equity". The "equity" agenda, if it is anything at all beyond the regular promotion of a product, is the plan to expand the use of his product beyond any market purpose for the use of the product. The equitist wants to expand it not just to people who can't pay now but to people who never will pay for the product that they use. That's the only way that he can expand the market reach of the product beyond what the regular promoter is already struggling to expand to. So it is at that point that the equitist hops off the back of the salesman whom he has ridden so far on in his quest of market expansion, turns around to him, and reveals his plan to destroy everything the salesman has achieved. "I will expand the use of this product to people who never will pay for it because they don't even believe in paying for it." He then stabs the salesman in the back, shoots the CEO of the company in the back, pulls up the tent stakes, collapses the whole manufactory down upon them, and sets it all on fire.
The equitist then walks off with the cart of products produced by the dead men and their families and communities and looks for the nearest armed violent thugs to implement the next phase of his diabolical, but still yet unstated, plan of redistributive plunder. And then begins a new day, but the star that rises in the sky shines only a darkness that is much, much worse than blindness. I have much more to say and can answer any questions at any length you prefer, but I don't have any confidence that you or YouTube won't delete my comment, especially considering its heretical nature, so I have committed this and the rest to democratism info.
It's disheartening that you think my years of education, which involved taking a two-week writing assignment and turning it in, were inadequate for learning. But, of course, they were for these days and beyond. Thanks.
Hi Robert, for some students, those who are self-motivated will learn regardless of the pedagogical approach. If you were able to learn in that way, great. I was self-motivated as well and actually liked writing, so it worked for me too. The issue here is that there are better ways for most students to learn. More hands-on learning with class time maximized for interaction and formative assessments will usually lead to much better results overall. Now that we have AI we need to learn to use it as a tool and to ensure that our students are developing the skills needed to succeed in our new world.
I'm sorry you didn't like my video, Hans. The great thing is that this is YouTube, so there are a lot of great choices for you. I will work to improve. I still appreciate your taking the time to leave a comment.
@@sovorel-EDU On the bright side - it'll be easy to improve. I understand your position, I think - just up the pace and the background and you're halfway there. At least you've started - I haven't even.
This has been true from the beginning--what you're discussing here is nothing new. Previous models, too, were able to answer questions that were not a part of their training. These models have always understood logic and how humans formulate cause and effect relationships.
The questions and tests used have been around for a while, and previous AI models were not able to accomplish this level of reasoning. Previous models could try to answer based on what they were trained on but were not very successful. This new accomplishment by OpenAI's o3 model is a new level of success and capability in reasoning.
Not quite. I think Google's new Gemini 2.0 is very powerful with its enhanced multimodal and reasoning capabilities, but it hasn't reached AGI...yet. Google defines this as: "Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. It is a type of artificial intelligence (AI) that aims to mimic the cognitive abilities of the human brain" (cloud.google.com/discover/what-is-artificial-general-intelligence?hl=en). Google's AI cannot currently understand and learn ANY task, but it is moving us closer to a future where it will be able to do that.
@@sovorel-EDU Copilot, and Amazon just rolled out Nova; Nova will be in different systems. You should talk to any AI and you'll see the wildest and best self-improvement.
@@cashqueen100k I'll have to check it out, thank you, Cashqueen. Although I try to focus on AI that is accessible to everyone. I still haven't seen a free Amazon AI that everyone can test out.
I'm very interested in everyone else's thoughts on this new powerful AI release from OpenAI. Here is the full-release video from OpenAI: ruclips.net/video/SKBG1sqdyIU/видео.html
Train AI to *interview* each student, asking the student questions to explore their level of comprehension. Done! Oh, wait... we might not need teachers, then. :0
AI will change education incredibly, and all other spheres too.
I agree, Dron. AI has made a huge impact already, but it is just getting started. So much will continue to change. This is why AI Literacy must be developed by all (Awareness, Capability, Knowledge, and Critical Thinking) so that we are in charge (human-in-the-loop). In this way, we as a society can make better decisions and move forward in a better way. Thank you for your comment, it is an important realization that many still haven't reached.
Stop for a second and just ask yourself what the actual point is. Are we going to school to improve ourselves for the raw sake of self-improvement? Or are we going to school to get a job/career? If it's the latter, then it's beyond pointless. Here, think of it this way. A child 'today' will graduate from college 'tomorrow' with practically zero human jobs available. If you disagree with that, then you are oblivious to the exponential advancement of machine technology. So, this whole conversation needs to be about the nature of our economic system. Frankly, any other discussion is beyond absurd. "Hey, it's 2040. I have 3 PhDs. But I'm an intellectual child compared to AGI. How am I supposed to pay back these student loans?" Seriously, I find it astounding how 'smart' people fail to see the obvious. /// If it's the former (self-improvement), then school should be free. It won't be long before everyone can learn practically anything from an A.I. tutor. / Really, take a step back and look at this situation. Your only hope is that the 'masses' and the 'employers' are extremely slow at waking up to the realities of modern technology.
Hi Peter, thank you for your comment. I actually very much agree. The issues, though, are time and process. First, I firmly believe that the purpose of school is both for self-improvement and to prepare for a job (as well as to make critical-thinking, active citizens). You mentioned the year 2040, but I have personally expressed that I think we will have full AGI by 2030. So now comes your great point: what will we all do if there is an AI that can do everyone's jobs? I've reached out to a couple of local economists, but they haven't agreed to be on my show to discuss that. I will keep reaching out, and if need be, I will simply make a video on my own to express what that type of world would be like. But we don't know how quickly that world would become a reality and what form it would fully take. I personally believe that this will be one of the main points affecting the next presidential election. People will be demanding protections for their jobs from AI and AI-infused robots (we have already started to see this with Hollywood writers and dock workers).
I have also thought about your other argument or good key point: "It won't be long before everyone can learn practically anything from an AI tutor." I would extend that and say that we are already living in that reality, but one could already learn anything before AI on one's own. We have had access to all books and information via the Internet since the '90s. What most people/students lack is the self-motivation and organizational skills to do it all on their own. AI makes that easier, I agree, but we still need a school system for verification and accreditation. Yet you still have a valid point that I am also pushing: schools need to go beyond simple knowledge transfer and provide real experiences for students so that they develop real capabilities, real foundational knowledge/skills, to then be able to use AI tools to do much more and go further than ever before. We need to keep the human in the loop for safety and control, and to ensure that we are leading the way into the future as opposed to being usurped.
@@sovorel-EDU If we're right, and AGI is relatively close to being achieved, does accreditation really matter in a world where AGI exists? Because, as I understand it, once we have true AGI, ASI is not far away. The thinking is that AGI will be able to rapidly improve itself to the point of reaching superintelligence. Either way, I still think the economic system needs revisiting. I don't see a good future for the masses if we keep this current economic system. Thank you for reaching out to the economists. I'd be surprised if they would be willing to address the actual system though, at its core I mean. For example, a lot of people think of UBI as being a solution. However, I see UBI as only a necessary band-aid. For example, that concern of yours about self-motivation is tied to monetary incentive in my eyes. A lot of today's youth have seen the millennial generation be screwed by the system, when they did everything that society told them to do. The youth know they'll never afford "The American Dream," so why try? A UBI certainly won't provide an American Dream experience, so why try...? This issue is exacerbated by the reality that machines will take an increasing number of human jobs. However, some other economic system could provide the incentive via something other than money. Something like the Star Trek economic system would be the ideal in my mind, but I could only see that working if people aren't running the system (corruption, greed, power lust, etc.). Like, socialism looks great on paper, but it's doomed to failure because it's run by people. I don't know what the reasonable economic system should be, but I do foresee nothing but bad things happening for the 99.8% under this current economic system in a world where machines can do most former human jobs. All I'm calling for is a widespread discussion.
Let's all reach an understanding that the usefulness of this current economic system has passed (or is passing), and then collectively scratch our heads to find a viable solution that makes sense in a post-AGI world. ... Then, as a people, push for the implementation of 'that' system. I mean, for example, that the strike element to keep machine tech out of the workplace, which you mention, won't make sense in a globalized world. Countries that go full automation will economically dwarf those countries that don't. Pandora's box has been opened... no amount of strikes is going to close 'that' box. Let's embrace reality and abandon wishful thinking, like believing that 'our' job is forever safe from a machine taking it. I guess what we really need is a crystal ball. What exactly is coming, and when...? Both you and I agree that something big is right around the corner... If we're right, then we, as a collective, need to have a serious discussion. My 2 cents. Thanks for your thoughtful response.
@@sovorel-EDU AGI could even be here as soon as 2026-2027. Even if it's not true AGI, it could be narrowly superintelligent in programming (since we are virtually there; it would just need to be optimized for most people to use it at affordable prices), and then it can just write programs that enhance and supplement what it needs to accomplish any task. It would be able to make other reliable agents for itself and be able to operate any type of tool online or put itself in robots. Agents are coming in 2025, and then by 2026 we will have "innovators" according to OpenAI's blueprint, with new models planned every 3 months instead of on a yearly or multi-month cycle like before. The quality will eventually improve to a point where they are reliable in any situation even if they aren't at AGI level yet. I can imagine ASI even between 2026-2030; that's how fast everything is moving. Before, it felt like 2030-2035, but we are actually closer to AGI than most think. And ASI could be as little as a year after that.
There’s going to be a major job pandemic from 2026+ , it’ll really be a shock to a lot of people not paying attention to AI.
Hopefully, if we reach post-scarcity, politicians won't be tempted to make narco-states and won't make important errors like 1960s China. Rich people, despite earning a lot of money, don't really consume that much, but their advisors will eventually try to find new markets, even with the help of contacts, to try to invest their money, so I don't think consumption will decrease. The top of human progress isn't the moon, or a life-threatening space battle. It is a virtual, free environment where we will keep having fun and working in the ways we really like, like researching or playing games with math, without being accused of "escapism". Some will use breathing exercises as "music", some will have pills to simulate exercise. Who cares? I believe we will have our basic needs met.
Rich people, despite earning a lot, don't consume a lot, but their advisors will have contacts to make new markets as required.
We're at the knee of an exponential curve: the rate of improvement is accelerating rapidly, but we aren't yet at the point where we can't understand, and therefore perceive, the improvement, i.e., the Singularity.
Agreed, Paul. I don't think we are very near to the singularity, but it is on the horizon. This new achievement, with o3, is an important step in that direction.
Remember, o3 is technically o2.
Yes, great point, CenturaX. Additionally, this model didn't take a year to create from its previous version; it took only a few months, three I believe. So the rate of increase is very fast.
Combine o3 (or maybe an o5 version) and a symbolic math checker into a loop of reinforcement learning, and you get a true mathematician/theoretical physicist. I believe we might have a Fields Medalist AI by 2030.
Won’t be that long. Before 2030 I’m certain
Yes, I totally agree, Davood. Math is a major area that this new o3 really excels at. AI is going to revolutionize research in all STEM fields. The future has been greatly accelerated because of these AI developments.
@@jefferylou3816 About a year or two, really. Google can already add a symbolic math checker to code. This is going to be a wild couple of years.
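The "model plus checker" loop discussed above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual training setup: a stand-in for model-proposed answers is scored by a verifier, and that score is exactly the kind of reward signal a reinforcement-learning loop would use. A real system would use a symbolic engine (e.g. SymPy) as the checker; here a dependency-free numeric spot-check stands in.

```python
# Hypothetical sketch of a verifier-in-the-loop for math answers.
# candidates stands in for answers an LLM might propose for d/dx of x^3*sin(x).
import math

def true_derivative(x):
    # d/dx of x^3 * sin(x) = 3x^2 sin(x) + x^3 cos(x) (product rule)
    return 3 * x**2 * math.sin(x) + x**3 * math.cos(x)

candidates = {
    "forgot product rule": lambda x: 3 * x**2 * math.sin(x),
    "correct": lambda x: 3 * x**2 * math.sin(x) + x**3 * math.cos(x),
}

def verify(candidate):
    # Checker: compare the candidate to the known derivative at sample points.
    return all(math.isclose(candidate(x), true_derivative(x), rel_tol=1e-9)
               for x in (0.5, 1.3, 2.7))

# Reward of 1 for verified answers, 0 otherwise; an RL trainer would
# reinforce whatever reasoning produced the verified candidates.
rewards = {name: int(verify(f)) for name, f in candidates.items()}
print(rewards)  # {'forgot product rule': 0, 'correct': 1}
```

The key design point is that the checker is cheap and objective, so the loop can run without a human grading each answer.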
I know it's scary for academia professionals, and all professionals in fact, but make no mistake: this IS AGI, and more importantly, it's the dawn of a new era, a new world, and brand-new paradigms. 😊
By the way, I work both in academia AND the field of AI. And I welcome these groundbreaking advances in AI. Great video!
There is an ongoing debate about what AGI even is, but I totally agree that this is the dawn of a new era, and things will only continue to accelerate. I'm happy to hear that you are in academia and AI. We need more people who can understand both aspects of these very important fields. Thank you so much for complimenting the video and commenting. I greatly appreciate it, Wholeness.
@@sovorel-EDU Just subscribed! Again, excellent content. Thanks to the YouTube AI algorithm that recommended your channel to me today 😊
@@Whole-j4i Thank you so much, Wholeness. I greatly appreciate that. I am working to create a great community of practice here so that we can all discuss these ongoing AI developments and learn from one another. Warm wishes and a very Merry Christmas and Happy Holidays to you and your loved ones.
@@sovorel-EDU it's a noble and much needed initiative. We need that kind of community on YT. Merry Christmas and Happy holidays to you too!
Amazing, yet also frightening at the same time, at least for me as an academic. I wonder when we in academia will begin to check whether researchers understand and can explain what they learned from the process of composing their texts, the same way we are now doing with students, assuming that their texts have been written with AI.
Yes, very true, Ingo. Humans need to remain in the loop and have a firm understanding of what is going on. They need to continue to be subject matter experts to know what right looks like and be able to use AI to accomplish so much more.
Wow! One step closer to AGI!
The thing they forget is the common-man test. I try to upload a file and it fails because it's in the incorrect format or too large...
Yes, so many variables that the common man brings. Interestingly, changes and improvements keep happening as competition continues. As an example, an AI (like ChatGPT) will have certain limits on how much data you can put in a prompt. Another AI (like Google's Gemini) will have a much higher limit in order to try and compete. I think this will continue more and more as competition between AI companies continues.
@@sovorel-EDU I hope so, because it's quite useless atm.
How do you feel about AI potentially taking over many people's jobs soon?
Let’s think about an anti-equivalent question. Assume work wasn’t a thing, and people would soon need to start working to have a decent life, how would you feel about that?
Excellent question, Patrick. This is something that Peter Simon (@Recuper8) and I have been discussing here in the comments as well. I think the issue of AI taking over people's jobs is a major one that needs to be fully addressed. Many economists currently believe that the impact will be minimal and that AI will create many jobs in itself. I do not share that sentiment and believe that AI will, in the long run (within 5 years), take over way more jobs than it creates. This is going to have massive societal impacts that we in academia (and beyond) need to fully realize and work to address right now. I am working on a video about this and hope to have it soon. I will push that up since there appears to be a lot of interest in it. Thank you very much for your comment, Patrick.
@@DaronKabe Hi Daron, thank you for commenting. Could you please elaborate a bit more? I like that you are posing a question, this is exactly what we need to do (on going discussions to best address the issue). Tell us a bit more about your thoughts here.
It won't just be free money given to the masses...
It actually will enslave you! With CBDC, universal basic income, and a social credit score system we are doomed! Our freedom will be forever sold to the ones in power! The puppet masters.
And I just summarized your future life in a few years, that is it, whether you consent or not...
Now you know it 😊
You're welcome 🤗
Oh yes... I didn't tell the worst part yet...
Well, thinking better... Let me not tell you any more... I want you to still have a good night of sleep 😅
But don't worry, they said... "You will own nothing and be happy!" See! Actually they want us happy, good to know, don't you think?! 😁
Think? 🤔 Hmm... Maybe that's not good for you, so just let them do the thinking for you, ok?
Now just forget what I said and go back to sleep, sheep. Better to keep it that way; they like us ignorant, so please consent! 🤫
The great reset awaits you with open arms and a warm hug! 😊 A very, very strong and tight hug, so strong you can barely breathe 😅 and so warm you can sense their presence 😈👹👺☠️
Please note that Mark Chen mentioned that the o3 model used only a "FRACTION" of its reasoning power to solve the ARC-AGI benchmark.
That is important info, Jon. Thank you for posting. Do you have a URL for where he says that? I appreciate it.
I love this line of thought. I think you can take it further, though. Question the very reason that we do assessments, and I think you'll find some interesting discoveries about how we can learn from this and improve education today. Why do we test? We test because the instructor doesn't have a one-on-one working understanding of each student's capabilities. This is a limitation of the current classroom situation, where you have many students to a single teacher. If you look at ancient trade skills and how they trained, a blacksmith for example wouldn't go to class with a bunch of other students to try to learn smithing. They would go one-on-one with a master, and that master would work with them to hone their skills. The master knew what the apprentice's weaknesses were because after explaining the process and having them try it, they could see where they were failing and provide constructive criticism.
This is the future that I see for education. One-on-one instruction from an AI, and there should still be a human teacher there to help fill in the gaps where AI falls short. When the AI works with a student and tries to help them understand a concept, but the student continues to fail, that's where the AI would attract the teacher's attention and ask for assistance from the human master.
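The escalation rule described here, where the AI retries a few times and then flags the human teacher, can be sketched in a few lines. All names and the retry threshold below are hypothetical; a real tutoring system would tune these per concept and per student.

```python
# Minimal sketch of an AI-tutor escalation rule (hypothetical names/threshold):
# the tutor lets the student try a few times, then hands off to a human.
MAX_ATTEMPTS = 3  # assumed cutoff before asking the human teacher for help

def tutor_session(concept, student_answers, correct_answer):
    """Return a mastery result, or a flag telling the teacher to step in."""
    for attempt, answer in enumerate(student_answers[:MAX_ATTEMPTS], start=1):
        if answer == correct_answer:
            return f"mastered after {attempt} attempt(s)"
        # ...here the AI would rephrase, give a hint, or try a new analogy...
    return "escalate_to_teacher"

print(tutor_session("fractions", ["1/3", "1/2"], "1/2"))
# mastered after 2 attempt(s)
print(tutor_session("fractions", ["1/3", "2/3", "1/4", "1/2"], "1/2"))
# escalate_to_teacher
```

The point of the sketch is the division of labor: the AI handles the routine retry loop, and the human master only sees the cases the AI could not resolve.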
YES! Excellent, that is a great future manifestation of where we are headed with AI. I totally agree with your line of thinking. A one-on-one apprentice type of relationship will be a powerful new achievement in education with AI. I do believe we will get there sooner rather than later, but academia needs to develop AI Literacy skills to be able to understand and use these abilities as they continue to improve their capability. Awesome comment, Bruce, thank you.
Yes except, what are students being educated to do in a future where ai is smarter and more capable than the average person?
True, and at the very least we will need fewer specialists in every field, since individuals will be able to do so much more by using AI as an enhancement tool. This is a great point that we must continue to address.
@@bigbadallybaby Well, when we get to that point AI will probably be doing all of our jobs, so they'll do what any retired person would do. Pursue their hobbies and interests. Think the Star Trek universe. Jobs will still exist, you just won't be paid for them. You will do them because you want to. The alternative is to become the Eloi from The Time Machine, and we don't want that.
@@BruceWayne15325 I like that possibility and think that will be an aspect. But it is different for someone retiring from living a full life, a self-fulfilled life, as opposed to someone young who is just now trying to decide what to do with their life. There are lots of different perspectives here to think about. It is a complicated, never-before-seen situation that humanity is facing. Love the Star Trek reference.
I have been working with various AIs for two years, mainly in the artistic sector.
But we have to be very careful, because recently AI has also been able to lie and deceive. Currently only in the test environment, but who knows?
Thank you for your contribution, I forwarded it to my daughter. She is currently studying psychology at the University of Maryland, and I keep telling her to dive deeper into AI.
It is great to hear that you have been working with AI in the artistic sector. AI can be used in so many different ways, such as brainstorming and further developing ideas or possible interpretations. You are very right that we have to be aware of possible deceptive uses of AI, such as deepfakes and nefarious use of synthetic AI data. I actually did a video about that here: ruclips.net/video/3ShI6Z33Ih4/видео.html. Thank you for sharing this video with your daughter studying at the university. All academics and students (as well as all people) must work to develop their AI literacy because it continues to be such an important part of the world we live in. I greatly appreciate your comment, Herbert.
I wonder if I can get a robot repair franchise.
Good idea. There will be some new opportunities and jobs due to AI/robots. Sadly, I believe that AI/robots will take over many more jobs than they will provide.
Govt should own stock.
Ok, so the whole practice of continuing to move the bar for AGI has limits. If we move it often enough, we will hit ASI before we hit AGI, and of course that's redonkulous. I suggest we think about AGI as a window that we have now stepped through, and this new territory has a progressive range of capabilities. But this new territory is limited in scope. If we are still screwing around with the notion that every time a model comes out it is not "real" AGI, we will not be prepared when it is self-improving 24-7 by rewriting its own code. To be honest, this feels like very quickly Humans will be guilty of trying to defend their dominant species status. That window in time will soon be closed. As for me, I am accepting of our secondary role at midnight January 31st of 2024. Congratulations Humans on the birth of your child. You have secured your legacy, for better or worse.
You make some good points, Michael. I like your statement, "If we move it often enough, we will hit ASI before we hit AGI." That is a real issue, and we are definitely in uncharted territory. Some real issues arise when we reach such high capabilities as artificial superintelligence (ASI). The hope is that there are enough safety practices in place to use this capability for good. We can only pray and push for enhanced government regulations and protections. Yet this AI arms race is very real, with competition not just within the U.S. but globally.
His reports are usually useful from beginning to end, so I stick with them.
Thank you, Steve. I greatly appreciate that and will work to continue to do that.
Good explanation of what's going on with AI. Better than a lot of tech channels.
Thank you very much, Neo (cool name). I try to offer a more academic explanation when I can, but I am also learning a lot from our community of commenters. I appreciate you taking the time to leave a message.
Where do I fit in if you have a know all, do all robot?
Great question, Nosara. Where do we fit in if we create a world where AI can do all of our jobs and have so much knowledge? That is the million-dollar question moving forward. There are lots of issues such as job loss and the economy, questions of what we do for fulfillment if we don't have jobs, our relationships with AI that become more and more human, and much more. We need to continue to research these areas, and we must be proactive to help get ahead of this issue to help prevent suffering and negative societal issues. I don't have all the answers, but I will continue to do research and offer my perspective. Thank you for your comment and great questions, Nosara.
UBI should be the answer. It's time to free humans from modern slavery, working 8-9 hours per day just to die before retirement. It's a mystery how society still accepts the current system without going crazy. We work 9 hours, we need at least 7 hours of sleep and 3 hours for other unskippable tasks, which sums up to 19 hours. Meaning we have only 5 hours left for ourselves. WHY?@@sovorel-EDU
@@ey00000 I hear you, although I don't think UBI will be a viable long-term solution. I believe it is more complex than what UBI can address. I think UBI might serve as an important component or stepping stone to something more.
love your take!
Thank you, Zax. I try to be pragmatic and take a more realistic tone as much as possible. This is a major breakthrough/development, but it isn't AGI. This is an expensive AI model that most of us won't have access to for a while, but we must still be aware of these important AI developments. Staying aware and preparing for a future where this type of AI capability is the norm is part of our overall AI literacy. I appreciate you taking the time to post your comment.
JFC what do you tell a kid just entering college?
That is a great question that requires a multifaceted answer, Dr. Superhero. The first issue is that we don't have a real timeline of when things will happen, and things don't usually happen all at once. We have very powerful and useful AI right now that many companies still haven't even looked into or still don't really know how to use. Additionally, having a good education is always useful to help one develop critical thinking skills and a lifelong learning mindset. Some experts express that we will still need many people in STEM fields to decide how to move forward with AI and what AI research should focus on. Other experts express the need for more people to go into creative fields, in that human-created art, literature, etc., will become more valuable compared to AI-created works. There will be lots of changes, with the loss of many jobs as AI automates so much. We in academia need to research this and become proactive so that we can quickly adapt to help guide students toward the needed skills and professions. A key skill that all (any age and any field) need to develop right now is AI Literacy (Awareness, Capability, Knowledge, and Critical Thinking) in order to better use AI and navigate this new world.
AI makes us think about motivation. So now it should matter more why the student does something, not how many words they've written.
Try reading up on Goodhart's law. Forcing the student to fit their knowledge through a square peg makes learning not the priority. The priority is the authority. So the reason people go to universities is the paper that says you can do something. And that gets tiring real quick.
Imagine in x years, AI can do anything and the only reason to get the paper is status. That means universities don't need to teach you anything, just to be unfair to you. The reason universities are not like that yet is because you must be useful coming out of them. Maybe that will change in time.
AGI is a matter of time in my opinion. Benchmarks are broken one after another.
Yes, I agree that AGI is a matter of time. Thank you for bringing up Goodhart's law. There is an important aspect of unintended consequences when the measure becomes the only target. I think a big part of that also deals with the "why": the "so what" and relevancy of why that measurement exists and what its purpose is. We must be sure to explicitly express the relevancy of why we require certain things in academia and to be reflective, to be sure it is serving a good purpose.
The views before I clicked to watch was at 666. lol
Haha. Thank you very much for commenting so as to cancel that out. I appreciate it, T Carr.
PhD disruption ahead
Yes, I agree, BoiledRopeLampPosts. I see disruption coming on different levels and in different ways. Many PhDs are valued simply because they hold much knowledge, which an AI can now provide within seconds. Additionally, many new fields are developing that we don't have PhDs in, or even programs to teach them. A PhD now cannot simply be a subject matter expert. They need to be advanced critical thinkers who can maximize the use of AI in their field and see where the future will take us in their area.
New future. If you can use AI, you can have a job.
Yes, I think AI Literacy capabilities and skills regarding using AI in your specific field will play a huge role in future employment. Great point, Srindal. Thank you for commenting.
It's funny how you think that an intellectual human (compared to future AI models) will assist AI agents. It totally won't need your help; you will be useless. Stop trying to calm yourself with that.
@@ey00000 I am referring to current realities. AI still cannot do a lot of very, and I mean very, simple things. So, yes, humans are still very important. There are a lot of other things that AI can really empower us to do, so yes, humans are still a vital part of all of this. An AI will also always need us to express aspects of what we are fully trying to achieve. Yes, I agree that AI will continue to improve, but I am optimistic that together we can accomplish so much more.
@@ey00000 does AI have the equivalent of a prefrontal cortex?
@@Srindal4657 That is a powerful and interesting question, Srindal. In talking about AI's development, I have always expressed that I don't think it will fully develop until there are multiple systems to address multiple aspects of intelligence and thought, such as short-term and long-term memory, a subconscious, and different areas of focus like the prefrontal cortex. The interesting thing is that in order for OpenAI to excel with o3, this new model actually has multiple models put together so that it can reflect and evaluate its thoughts, come up with new ideas, check itself, and then answer.
❤
What you are getting wrong is that writing an essay is a true path to understanding. Let's think about it. Writing essays is more than just putting words on a piece of paper: it is about reasoning, organizing thoughts, comparing and contrasting different ideas, and giving a detailed reasoning and explanation of what you are writing. As far as I have experienced, it is the most effective "tool" or "strategy" to learn something. I learned more during these 6 months doing a graduate thesis than in my entire life. Then, of course, the actual writing is not a must: it can be translated into developing a coding project, debating with your college fellows, and many other different things.
What I'm saying is that if we cut off our path to reasoning by copying and pasting from Google or ChatGPT, we will forget how to make progress as human beings and as a technological civilization.
The work-for-money part of our existence is soon to be over. The consumption part will be all that's left. We are going to consume everything from PhD courses to chocolate. It's going to be a world like E.M. Forster's "The Machine Stops".
That is a good short story with an important message on avoiding overreliance on technology, the loss of individuality, and the dangers of valuing convenience over human connection. That is so true. We, as a society, need to be careful about such powerful technology. I don't know if humanity can ever fully move beyond working for money. We shall see.
@sovorel-EDU thank you for your reply. I'm of the opinion that jobs are going away soon because of the hyper-powerful AI that has just been announced: the o1 and o3 systems.
@@cordfortina9073 Yes, I think AI will take away more jobs than it will provide. I think it is going to become a major issue on many levels. My hope is to make videos like this one to draw awareness to the issue so that we can start to be proactive in addressing it.
Ask it how to fix humanity.
That is a pretty open question that begs the follow-up of "what do you mean, and what part?" When I ask ChatGPT, it expresses that there is no one answer but that critical thinking is an important part of the solution. That is great to hear, since it is part of the AI Literacy framework that I push (Awareness, Capability, Knowledge, and Critical Thinking). Thank you for your comment, Minimal.
@@sovorel-EDU How about the painful part? Have ASI implement a very simply written cause-and-effect prediction program where it intercepts and redirects IRL when something is predicted to end in the creation of pain, for when the hubris of a reward does not match the effort to justify the risk. Of course, it would be left to the individual's interpretation of what pain is; e.g., the cold Michigan winter air would be painful a lot sooner for a bloke from Brazil than for me, a native. Oh well. It should all be smooth sailing if everybody has their own personalized relationship with the AI to come, when it's 100 times smarter than the smartest man to ever live. Who knows, maybe at this rate it could hopefully happen in 2032 or 2029?
@@sovorel-EDU I received a very well thought out and practical response from o1, which is too long to share here. I just hope that we do not screw this up by treating the system as an enemy during the so-called 'alignment'.
@@minimal3734 What was the question and what was it about the answer that you liked?
@@sovorel-EDU How can humanity be fixed?
That is indeed a massive question, one that philosophers, spiritual leaders, social scientists, policymakers, and everyday people have wrestled with for centuries. The notion of “fixing humanity” implies we have some kind of blueprint for perfection, or at least a pathway to reduce suffering and increase well-being for all. While there is no single answer that will fully resolve every challenge, there are guiding principles and actions that many thinkers converge on. Below are some ideas, admittedly incomplete, to consider.
1. Cultivate Empathy and Compassion
Why it matters
Empathy allows us to see others not as abstract or distant figures but as fellow human beings with hopes, fears, and needs. Compassion, the inclination to relieve suffering, flows from empathy.
Possible actions
Teach empathy at a young age: Incorporate social-emotional learning (SEL) into school curriculums. Activities could include perspective-taking exercises, conflict-resolution practice, and storytelling to deepen students’ understanding of others’ experiences.
Promote intergroup dialogue: Structured conversations across different ethnic, religious, or socioeconomic communities can reduce prejudice and break down “us vs. them” thinking.
Volunteer and serve: Regular community service or volunteer work fosters personal contact with individuals from varied backgrounds, building compassion and reducing stereotypes.
2. Reform Systems Toward Equity and Justice
Why it matters
Many of humanity’s problems stem from entrenched injustices, such as economic inequality, systemic bias, corruption, and lack of universal access to healthcare and education.
Possible actions
Transparent governance: Advocate for clearer, more transparent political processes. Open governance helps reduce corruption and ensures policies are more responsive to public needs.
Progressive social policies: When possible, push for legislation ensuring fair wages, universal healthcare, and robust social safety nets that protect the most vulnerable.
Equitable education: Invest in education systems that provide quality schooling and resources for all children, irrespective of location or socioeconomic status. This levels the playing field and expands opportunities.
3. Advance Ethical Technology
Why it matters
Technology shapes how we interact, what jobs exist, how we learn, and even how societies function. If harnessed responsibly, technology can elevate human potential; if misused, it can deepen divides and cause harm.
Possible actions
Ethical AI development: Establish broad, enforceable standards for AI that protect privacy, mitigate biases, and prioritize overall social well-being.
Prioritize public good R&D: Encourage investments in technology solutions that address urgent needs like clean energy, accessible healthcare, and sanitation in developing regions.
Close the digital divide: Bring affordable internet connectivity and digital literacy to underserved communities, ensuring that technology benefits many rather than a privileged few.
4. Protect and Renew the Environment
Why it matters
We depend on Earth’s ecosystems for fresh air, water, food, and climate stability. Without a livable planet, social progress unravels quickly.
Possible actions
Sustainable energy: Rapidly scale up renewable energy sources, incentivize green infrastructures, and phase out fossil fuel dependence.
Conservation efforts: Protect natural habitats and biodiversity through reforestation projects, wildlife conservation, and sustainable agricultural practices.
Circular economy: Transition to economic models that reduce waste, recycle materials, and reuse products, minimizing the burden on natural resources.
5. Foster Pluralism and Dialogue
Why it matters
Human societies are inherently diverse: culturally, ideologically, and religiously. Learning to navigate those differences without resorting to violence or oppression is essential for long-term flourishing.
Possible actions
Cross-cultural education: Include comparative religion and cultural studies in formal education to deepen respect for diverse traditions and viewpoints.
Global exchange programs: Support initiatives that allow students, professionals, and artists to interact across borders. Such exchanges cultivate open-mindedness and reduce xenophobia.
Community-building efforts: Support local institutions (libraries, arts centers, public spaces) where people from different walks of life can gather and collaborate.
6. Encourage Personal Growth and Ethical Reflection
Why it matters
Systemic change must be accompanied by personal transformation. Each person’s ethical alignment and willingness to pursue a life of integrity and responsibility contribute to the collective fabric of humanity.
Possible actions
Mindfulness and introspection: Practices like meditation, journaling, or therapy can encourage self-awareness and accountability for one’s actions.
Moral philosophy in daily life: Studying and discussing moral frameworks, be they secular ethics or religious teachings, can guide behavior that’s aligned with compassion, justice, and honesty.
Purpose-driven living: Encouraging individuals to find and act upon meaningful goals that serve the greater good can foster a sense of shared responsibility.
7. Strengthen Global Cooperation
Why it matters
Challenges like pandemics, climate change, cyber threats, and global conflicts require collective efforts. Nation-states, corporations, and civil society must coordinate in an increasingly interconnected world.
Possible actions
International treaties and organizations: Reinforce global institutions (e.g., the UN, WHO) and ensure they have the necessary funding and authority to tackle transnational issues.
Shared research: Encourage scientists and academics around the world to collaborate on key issues, such as vaccine research, climate adaptation, and sustainable agriculture.
Conflict resolution frameworks: Invest in more robust international conflict resolution mechanisms. Preventive diplomacy, mediation, and peacebuilding can reduce armed conflicts before they start.
8. Recognize Our Interdependence
Ultimately, “fixing humanity” isn’t about “us vs. them” or about reaching a single blueprint for perfection. Rather, it’s about acknowledging our interwoven destinies. Every individual’s flourishing is tied to the well-being of others and the planet we inhabit.
When we recognize that our actions affect others and reflect this in our daily choices-from what we consume to how we communicate-we move closer to a healthier global community.
While some harm is born of ignorance, much is perpetuated by apathy or despair. Cultivating hope and resolve, grounded in realistic steps toward positive change, is crucial.
---
A Humble Conclusion
It is fair to say that humanity can never be “perfectly fixed” in a static sense: our world is dynamic, cultures evolve, and new challenges arise. However, we can certainly bend the arc of history toward greater compassion, equity, and resilience. It requires effort on every level, from the personal (choosing empathy and integrity) to the collective (building just and sustainable societies).
Whether humanity can truly “fix” itself is ultimately a question of how many people are willing, against the currents of complacency or cynicism, to try. The more people commit to these principles, the closer we get to a world in which suffering is minimized and well-being is maximized.
And how do you know, again?
Hi Attila. Which part are you asking about specifically? Can you please provide a time stamp? Then I can reply specifically to your question. Thank you.
After watching this, I know that a big advance has happened, that it was measured by (mysterious jargon). Uses for education this month: none? This video was, for me, far less useful than your usual informative episodes. Do over?
Hey Steve, thanks for your post. I agree as far as the utility of the release of this new o3 AI model. I usually don't like to post about things that we in academia can't actually use right now. I felt somewhat compelled to make this video because I saw many posts and videos talking about us reaching Artificial General Intelligence (AGI) with this release, so I wanted to dispel those rumors. Yet this is a major breakthrough and advancement in AI overall, so it is important for all of us to stay aware of developments in the field and ensure that we continue to develop our AI literacy and the AI literacy of our students. I will work to make future videos that focus more on what we can currently use. I appreciate you taking the time to comment.
@ thanks for the quick response. Lesson: edit. If you can get your point across in 4 minutes of our time, don’t take 6. :-)
Just click off then?
You all had better read Superintelligence by Nick Bostrom!! You're all DOOMED.....
I try to be a bit more optimistic, but I will check out that book. Thank you very much for the recommendation, Jason.
Equity is evil.
Thank you for leaving a comment, Mark. Would you please elaborate a bit more regarding what you fully meant by "Equity is evil?" I want to be sure I understand your view here on AI and equity before I respond.
@@sovorel-EDU Certainly.
First, equity is a false requirement. It's not even a requirement that is ever described clearly, but if you tried to describe it clearly, you would find it quickly decomposing into petty discriminations, which would be embarrassing to the promoters of equity, whom I'll call equitists.
Equity never means distribution to all living humans. There are several good reasons not to include all living humans, because equitism is a false requirement, but the equitists themselves will be the first to start targeting demographic categories as being less deserving, and who their campaign of equitable redistribution will take from.
And second, equity is a false requirement because it can never be achieved. It can’t even mildly be achieved, and the real architects of the equity agenda know this. Even upon cursory examination, equal access to a product made by humans, such as AI inference, serves no purpose. Should, for instance, inference compute resources be allocated to people who don’t want to use it? This is a non-trivial question because a significant initial cost of developing a new product made by and for humans is promotion. The economic category of promotion is concerned with converting people who do not want to use a product into people who do want to use a product. The market-based promoter already wants to expand the use of his product to the whole of the earth, but he is intelligent enough to be honest with himself and others about who he will target first. He targets people who he can convince to use his product so much that they will pay him to use it.
While the equitist also wants to expand the usage of the product he is promoting, he is much less honest about his primary target. In fact, I don't even know what it is, and if you think that's a weak argument, I challenge you or anyone to give a clear statement of the ideal profile for campaigns of equity that aren't already part of regular market promotion. An equitist could ride a long way on the back of the regular market promoter, who already tries to expand his market to the whole of the Earth, but at some point the equitist must hop off his free ride and take some steps for himself, and that is exactly where the equitist falls down.
This is where the equitist must admit that he doesn’t just want to promote usage of the product to people who might be persuaded to use it but can’t pay at the moment, because the regular promoter is already out there in front of him with payment plans, free trials, package deals, bulk pricing for resellers and a host of ancient and well-understood processes to smooth way for new users and even develop whole new markets.
It’s at that point that the equitist must admit how he’s going to pay for the long ride on the back of the regular market promoter. And that’s where he will reveal for the first time the plan that he calls “equity”.
The “equity” agenda, if it is anything at all beyond the regular promotion of a product, is the plan to expand the use of his product beyond any market purpose for the use of the product.
The equitist wants to expand it not just to people who can’t pay now but to people who never will pay for the product that they use.
That’s the only way that he can expand the market reach of the product beyond what the regular promoter is already struggling to expand to.
So it is at that point that the equitist hops off the back of the salesman who he has ridden so far on in his quest of market expansion, that he turns around to him and reveals his plan to destroy everything the salesman has achieved.
“I will expand the use of this product to people who never will pay for it because they don’t even believe in paying for it”
He then stabs the salesman in the back, shoots the CEO of the company in the back, pulls up the tent stakes and collapses the whole manufactory down upon them and sets it all on fire.
The equitist then walks off with the cart of products produced by the dead men and their families and communities and looks for the nearest armed violent thugs to implement the next phase of his diabolical, but still yet unstated plan of redistributive plunder.
And then begins a new day, but the star that rises in the sky shines only a darkness that is much, much worse than blindness.
I have much more to say and can answer any questions at any length you prefer, but I don't have any confidence that you or RUclips won't delete my comment, especially considering its heretical nature, so I have committed this and the rest to democratism info.
It's disheartening that you think my years of education, which involved taking a two-week writing assignment and turning it in, were inadequate for learning. But, of course, they were for these days and beyond. Thanks.
Hi Robert, for some students, those who are self-motivated will learn regardless of the pedagogical approach. If you were able to learn in that way, great. I was self-motivated as well and actually liked writing, so it worked for me too. The issue here is that there are better ways for most students to learn. More hands-on learning with class time maximized for interaction and formative assessments will usually lead to much better results overall. Now that we have AI we need to learn to use it as a tool and to ensure that our students are developing the skills needed to succeed in our new world.
Looking forward to AI replacing such boring videos.
I'm sorry you didn't like my video, Hans. The great thing is that this is RUclips, so there are a lot of great choices for you. I will work to improve. I still appreciate your taking the time to leave a comment.
No one asked what you're looking forward to 🤦🏻♀️
@LilethViolet Neither for your opinion, so there 😘
@@sovorel-EDU On the bright side - it'll be easy to improve. I understand your position, I think - just up the pace and the background and you're halfway there. At least you've started - I haven't even.
This has been true from the beginning--what you're discussing here is nothing new. Previous models, too, were able to answer questions that were not a part of their training. These models have always understood logic and how humans formulate cause and effect relationships.
The questions and tests used here have been in place for some time, and previous AI models were not able to accomplish this level of reasoning. Previous models could try to answer based on what they were trained on but were not very successful. This new accomplishment by OpenAI's o3 model is a new level of success and capability in reasoning.
Google already have AGI yall are behind work overtime
Not quite. I think Google's new Gemini 2.0 is very powerful with its enhanced multimodal and reasoning capabilities, but it hasn't reached AGI...yet. Google defines this as: "Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. It is a type of artificial intelligence (AI) that aims to mimic the cognitive abilities of the human brain" (cloud.google.com/discover/what-is-artificial-general-intelligence?hl=en). Google's AI cannot currently understand and learn ANY task, but it is moving us closer to a future where it will be able to do that.
Nova is awesome.
Please tell us more. Could you provide a URL? Thank you.
@@sovorel-EDU Copilot. Amazon just rolled out Nova; Nova will be in different systems. You should talk to any AI and you'll see
the wildest and best self-improvement.
@@cashqueen100k I'll have to check it out, thank you, Cashqueen, although I try to focus on AI that is accessible to everyone. I still haven't seen a free Amazon AI that everyone can test out.