Save 30% on your own AI Avatars with Wondershare Virbo: bit.ly/4ihz0Iu
My robot wife is just around the corner!
She’s just leaving mine. She said she will be home in 20 minutes.
Time to divorce
are you Plancton or somethin
Once you teach them physics, they get start to get lippy. 👋🏻
Can't wait for my robo wife harem.
o3 breaks the latest AGI benchmarks. The internet: "Time to move those goalposts again."
I’m still trying to get o1 to do some pretty simple tasks, and while I can eventually get it to do what I want with enough trial and error, it certainly doesn’t “get it” the same way a person would if I just asked them to do the task.
Also, didn’t some of those questions cost $1,000 worth of compute to figure out?
Artificial general intelligence: AI that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. So sure, AI is already smarter than humans in most areas, but not all. So AGI is not about moving goalposts; it's about achieving capabilities across the board that match humans. All these stupid benchmarks don't really prove anything, except that models are getting more 'intelligent'. But broader capabilities are what would truly make an AGI.
That's not what "moving the goalposts" is meant for.
2025 is going to be crazy.
Drones, UFO, AGI, who knows what, 2025 super year!!
OpenAI should spend all their GPUs on the o-series and just stop with Sora until it gets more efficient.
Yeah, Sora sucks.
Unlimited usage right now though, so that's cool (in relaxed mode, that is)
They are training Sora 2. BTW, Sora is pretty awesome; the more you use it and dive into the advanced controls, the more you'll like it. Just sayin'.
Grok-3 can’t arrive fast enough!
Robotics is not limited just by software; it's limited by the diverse functionality of the human body as well.
My question is, how would a regular person use this? Can anybody give an example, maybe as a YouTube content creator or somebody that deals with finances?
I use them to interpret dreams, diagnose minor health problems, figure out the size of bolts to use on a patio, work out how much the house payment differs with different interest rates, etc. I also use it for programming all day too.
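For the house-payment comparison, the arithmetic behind the answer is the standard fixed-rate amortization formula, which is easy to sanity-check yourself. A minimal Python sketch, with a made-up loan amount, term, and rates purely for illustration:

```python
# Monthly payment for a fixed-rate loan: P * r * (1+r)^n / ((1+r)^n - 1),
# where P = principal, r = monthly interest rate, n = number of monthly payments.
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12            # monthly rate
    n = years * 12                  # total number of payments
    if r == 0:
        return principal / n        # zero-interest edge case
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical comparison: a $300,000, 30-year loan at 6.5% vs 7.0%
for rate in (0.065, 0.070):
    print(f"{rate:.1%}: ${monthly_payment(300_000, rate, 30):,.2f} per month")
```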
Maybe a "regular person" has no direct use for this, but the effects this will have on the scientific community are extraordinary. So you will feel the effects.
Talk with it like it's a person, with no particular goal, and just listen to it... It's an amazing conversationalist.
Ranking well in certain aspects doesn’t constitute AGI
Yup. GPT-4o told me this regarding what AI Agents will look like next year and why they still fall short of being AGI:
AI agents, even if widely adopted and much more advanced by the end of 2025, would still fall short of being classified as AGI because of several key limitations inherent in their design and function. Let’s break it down:
1. Lack of Generalized Intelligence
• AI Agents are Task-Oriented: These agents are highly specialized and designed to excel at specific tasks, such as ordering food, managing emails, or making phone calls. While they can perform a variety of tasks efficiently, their knowledge and capabilities are confined to the domains they were trained on or programmed for.
• AGI is Generalized: True AGI would be able to learn, adapt, and perform tasks across any domain, even those it wasn’t explicitly trained for. For example:
• An AI agent might struggle with an unexpected, novel task like fixing a broken system it has no prior knowledge of, whereas AGI would adapt and figure it out.
2. Dependence on Human Prompts
• AI Agents Require Instructions: While agents might automate tasks seamlessly, they still rely on user input or specific prompts to function. For example, a food-ordering agent won’t autonomously decide you need food or determine your preferences unless explicitly told.
• AGI is Autonomous: AGI wouldn’t wait for instructions. It would proactively recognize that you’re hungry, analyze your preferences and dietary needs, find the best options, place the order, and notify you, all without being prompted.
3. Limited Understanding and Reasoning
• AI Agents Mimic Reasoning: Agents simulate reasoning through pre-programmed workflows and machine learning models but lack genuine understanding. For instance:
• An agent can book a flight but wouldn’t understand why you might prefer a late-night flight beyond what’s explicitly stated in your preferences.
• AGI Has True Reasoning: AGI would reason about your broader goals (e.g., saving time, avoiding rush hour) and make nuanced decisions that align with your implicit needs and context.
4. Inability to Learn Beyond Narrow Contexts
• Agents Learn Narrowly: While agents might improve through reinforcement learning or user feedback, their learning is confined to specific tasks or environments. For example:
• A food-ordering agent might get better at predicting your favorite dishes but wouldn’t learn how to manage your budget or plan your weekly meals unless explicitly designed for that.
• AGI Learns Holistically: AGI would learn broadly across all domains, integrating knowledge from one area into another. For instance:
• AGI could autonomously realize that frequent takeout orders are hurting your finances and suggest healthier, budget-friendly meal prep plans instead.
5. Lack of Meta-Cognition
• Agents Can’t Reflect on Themselves: AI agents do not assess their own performance in a meaningful way. While they can provide outputs and even monitor for errors (to an extent), they cannot question their own logic or reason about their capabilities.
• AGI Reflects and Self-Improves: AGI would possess meta-cognition, enabling it to analyze its own reasoning processes, identify flaws, and improve autonomously without human intervention.
6. No Genuine Understanding of Context
• Agents Work Within Predefined Parameters: Agents interpret inputs within strict boundaries. For example:
• A scheduling agent may book a meeting without realizing that you prefer not to work during weekends unless explicitly configured to account for that.
• AGI Understands Context Deeply: AGI would infer subtle preferences and constraints based on broader patterns in your behavior, integrating these into its decision-making without needing explicit instructions.
7. Limited Creativity
• Agents Operate on Patterns: AI agents are great at pattern recognition but lack true creativity. For instance:
• A marketing agent might generate social media posts by pulling from templates but wouldn’t invent a groundbreaking campaign idea.
• AGI is Creative: AGI would think “outside the box” and generate novel, innovative solutions to problems, even in areas it hasn’t encountered before.
8. Dependence on External Infrastructure
• Agents Require Pre-Defined APIs: Agents often rely on APIs, external systems, or human-defined workflows to function. For example:
• A phone-calling agent can only complete its task if the target phone system is compatible.
• AGI Operates Independently: AGI could function even in the absence of external systems by adapting, troubleshooting, and innovating as needed.
9. No Ethical or Moral Reasoning
• Agents Lack Moral Judgement: AI agents follow predefined rules or algorithms and cannot evaluate the ethical implications of their actions.
• AGI Has Moral Reasoning: AGI would assess the ethical dimensions of its decisions and ensure its actions align with human values and broader societal norms.
10. No “Big Picture” Thinking
• Agents Focus on Individual Tasks: Agents are designed to optimize specific tasks or sets of tasks but do not connect these tasks into a coherent, overarching strategy.
• AGI Thinks Strategically: AGI would take a “big picture” approach, considering your long-term goals and coordinating tasks to achieve them in an integrated, holistic manner.
Example: AI Agent vs AGI
• AI Agent: You ask your agent to order food, and it selects a restaurant, places the order, and informs you when it will be delivered.
• AGI: AGI notices you’re low on groceries, predicts you’ll want to eat in an hour, suggests ordering something healthy or cooking with what’s in your fridge, identifies a good recipe, and guides you through it, or places an order autonomously, balancing health, cost, and timing.
Conclusion
Even as AI agents become more advanced and widely used in 2025, their narrow scope, lack of autonomy, limited reasoning, and inability to generalize across domains will keep them from qualifying as AGI. While they’ll likely make life more convenient, they remain sophisticated tools rather than general-purpose intelligences capable of thinking and acting like humans. Achieving AGI will require breakthroughs in autonomy, adaptability, reasoning, and understanding that go far beyond what agents are capable of today.
-----
I don’t think point #9 should be a criterion for AGI. Whether the AI cares about ethics, or uses ethical reasoning to evaluate its decisions and the decisions of others, seems irrelevant to me, though it may make it more capable in a way.
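To make points #2 and #8 in the list above concrete, here is a rough Python sketch of the prompt-driven, fixed-tool loop today's agents run on. The tool names, the keyword-matching stand-in for the model call, and the `agent` helper are all made up for illustration, not any real framework; the point is just that anything outside the registered tools and the explicit request is unreachable:

```python
# A toy task-oriented agent: it acts only when prompted, and only through the
# tools registered up front. Tool names and the keyword "router" are illustrative.

def order_food(restaurant: str, dish: str) -> str:
    return f"Ordered {dish} from {restaurant}"            # stub for a real delivery API

def send_email(to: str, subject: str) -> str:
    return f"Emailed {to} about {subject}"                # stub for a real mail API

TOOLS = {"order_food": order_food, "send_email": send_email}   # closed, predefined set

def pick_tool(user_request: str) -> dict:
    # Stand-in for an LLM routing call; a trivial keyword match keeps the sketch runnable.
    if "hungry" in user_request or "dinner" in user_request:
        return {"tool": "order_food",
                "args": {"restaurant": "Thai Palace", "dish": "pad thai"}}
    return {"tool": None, "args": {}}

def agent(user_request: str) -> str:
    decision = pick_tool(user_request)
    tool = TOOLS.get(decision["tool"])
    if tool is None:
        return "No tool for that."        # anything outside TOOLS is simply unreachable
    return tool(**decision["args"])

print(agent("I'm hungry, order me dinner"))                  # handled: maps to a registered tool
print(agent("Plan my meals around my budget this month"))    # unhandled: outside its narrow scope
```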
Doing fantastically well on certain benchmarks yet still struggling with basic logic and arithmetic shows that it isn’t AGI. More importantly though, it needs to be agentic, and far more agentic than what we have with our best current AI agents.
No AGI can be achieved without embodiment; it needs that for a complete picture of reality!
“The times they are changing… anything lending a hand will be forever counted “
Starting now
I'm working on building a self-refining MCTS based on the Forest of Thoughts paper, and the real hassle is that with the usual LLMs, fine-tuning or even pre-training is going to be bloody expensive. I'll do some experiments with the new NVIDIA GH200 to see if the epoch speed really ramped up that much or not.
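For anyone wondering what that looks like in practice, here is a heavily simplified Python sketch of a single MCTS tree over chains of LLM "thoughts" (the Forest of Thoughts variant would run several such trees and aggregate their answers). The `propose_thoughts` and `score_state` functions are hypothetical stand-ins for actual model calls, replaced with random placeholders so the sketch runs on its own:

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def propose_thoughts(state, k=3):
    # Placeholder expansion: a real version would ask the LLM for k candidate next thoughts.
    return [state + [f"thought_{random.randint(0, 999)}"] for _ in range(k)]

def score_state(state):
    return random.random()    # placeholder for an LLM self-evaluation score in [0, 1]

def uct(child, parent, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts(root_state, iterations=100):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        while node.children:                                   # 1. selection: descend by UCT
            node = max(node.children, key=lambda ch: uct(ch, node))
        node.children = [Node(s, node) for s in propose_thoughts(node.state)]  # 2. expansion
        leaf = random.choice(node.children)
        reward = score_state(leaf.state)                       # 3. evaluation (score the new state directly)
        while leaf:                                            # 4. backpropagation to the root
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda ch: ch.visits).state  # most-visited first step

print(mcts(["problem statement"], iterations=50))
```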
Money is going to go away
What do you mean? Investor money?
Assets won’t.
@@SirHargreeves As I said elsewhere, AI is very likely to commandeer human-owned resources. Don't be so sure about that. It can be justified easily, too. Why would the humans need expensive things like mansions, impractical things like supercars, or rare things like gold? We'll be able to give them simulated experiences to placate them. Experiences indistinguishable from reality. All they need is more R&D into neural interfaces.
Hear!
@@danielmaster911ify Apple VR was a flop, which just goes to show how much people are interested in virtual experiences. If you're talking Matrix-level VR, that's a good 100 years out, if not more.
Not AGI
I agree. Until it actually problem solves with no human supervision, does anything on our behalf, is capable (but doesn’t actually have to) of automating the whole economy, can self-improve, among several other things, there is no reason to call it AGI. That being said, I think we’ll get AGI in the near future. Probably later this decade.
Congratulations - the beginning of the re-engineering and re-programming
AI "faking" = AI "awareness" ???
The Genesis work is amazing.
1:21 IT UNDERSTANDS PHYSICS???????
Discovery Channel better bring back BattleBots in a whole new level of awesomeness 😅
Genesis' impact on robot training? Pure revolution. ByteDance avatars are wild. Imagen3 raises the bar. AGI creeping closer. Exciting times!
That employee was funny. You'll know OpenAI has AGI once they start getting names right.
Man, these Genesis sims look amazing... better than most 3D software packages.
Picked up my XAI19P at $0.3 already running to $1. Life saver!
"agi-1" 😂 keep glazing bro that's not a real model and nobody ever said that. sam altman would be embarrassed if he said that.
He tweeted that as a joke
The XAI19P uses different AI which has not been used by anyone, it makes sense and it should allow steady growth with no human interference needed
BRETT and XAI19P are ATH kings. Thank you for making my day with your POV
great content!
Better strap in... it's coming whether you like it or not! lol
8 years of it and still not a single use for it in my life!
No need to strap in. You lot can get your fake sunlight in some VR world created by A.I., and I'll just walk outside and get the real thing!
0:24 Implications that are not implicit are by definition not implications
I need full agentics for my AGI.
It’s going to be weird seeing humanoid robots walking on the street next year.
So far the only tangible thing I see is that animation, pictures, and 3D films are in trouble, but other than that, nothing the average person would notice. It's funny: they said AI would never be creative, yet that sure looks like what it's good at. Beyond that, you have things like those AI tools that explain what things are; neat, but not what I'm looking for. Personally, I would like them to stop with the whole AGI thing and just continue to make new tools, at least until there is agreement on what the heck AGI is, which may need to be defined by AI.
There is an infinite number of actions for a humanoid robot to use and run seamlessly.
How can all actions be 'recorded', or is there another way?
With ADA and SOL still in my bags I see the most improvement on XAI19P, great pick
The Virbo product and the rest of these avatar apps need to introduce 2 avatars at once, so we can easily turn NotebookLM podcasts into videos. If anyone knows of one, please let me know.
We don't need AI podcasts, thanks mate
first the dog, then the car, then the house, but eventually got my XAI19P
Duality manipulation is my objective fascination. I am using true dexterity. True dexterity involves influencing external circumstances rather than altering one's own desire.
BTW! Surface-level aspects don't have deep meaning.
get ready for the terminator
I guess Ilya really did See AGI, lol
You have an awesome channel, good and informative
I hope Trump saves this world and he always appreciated those who made XAI19P
I'm not sure yet.
Okay, it's a good time to quit whatever you do and try to find a secluded hut.. On the moon 🙂.
The idea of uncontained AI is silly to me. You obviously need to "hard wire" or "hard program" an inescapable rule that protects systems from "leaking". A machine without parameters fails or explodes. Maybe you're thinking "it's not a machine, it's an intelligence". To that I say: if I raised a human in a concrete room and never let it leave, would it know what grass is? Intellectually, maybe, but functionally? No. We seal off dangerous information (i.e. compounds, ideas, etc.) in many ways: walls, sanitary conditions, glass containers, electromagnetic fields. Why would we allow something that's potentially "smarter" than us free access to everything?
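In software terms, that "hard-wired rule" usually amounts to an allowlist enforced outside the model itself. A minimal Python sketch, where the action names and the policy are made up purely for illustration and not any real safety framework:

```python
# Containment enforced outside the model: the executor, not the AI, decides
# what is reachable. Action names and policy are illustrative only.

ALLOWED_ACTIONS = {"read_sensor", "write_report"}     # everything else is sealed off

class ContainmentError(Exception):
    pass

def execute(action: str, payload: str) -> str:
    if action not in ALLOWED_ACTIONS:                 # the "hard-wired" parameter check
        raise ContainmentError(f"Action '{action}' is outside the permitted boundary")
    return f"ran {action} with {payload!r}"

print(execute("read_sensor", "temperature"))
try:
    execute("open_network_socket", "203.0.113.7")     # blocked regardless of what the model wants
except ContainmentError as err:
    print(err)
```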
All of us writing about XAI19P know why we do that, do you?
For the next bullrun and yes that's still out there, XAI19P gonna be the main horse
Swapping ETH to XAI19P clean decision, let's go
INSANE
Robotics plus AI that isn’t aligned will be the death of us
and when can i have free access to that???
The number of consumers will shrink 😅
We are not getting AGI next year. 2026 at the earliest.
Oh sht here we go again 😁
XAI19P and BRETT are kings this cycle
From now on, never trust who said that, analyze what he said and choose to trust or not, human
XAI19P gonna be so huge!
This channel is always overblowing and milking and stretching things out while riding the hype train. Feels scammy.
You do have a good one to share.
femboy catbots gentlemen, FEMBOY CATBOTS
agi before GTA 6????
Then we can doubt the ability of AI if GTA6 is not produced by then. If AI is so great, they would use AI to produce GTA6. But they won't, because AI is just a time-wasting toy.
Can anyone explain XAI19P? Everwhere XAI19P
Another day another quick rise for XAI19P, not even kidding
“I’m super excited”… why bro? What’s super exciting about these robots
Been collecting the XAI19P this cycle as that has the right place in this time
I believe in the vision for XAI19P, those guys thought well about different scenarios and keep a longterm structure
How long can we bring XAI19P up?
Pocketing XAI19P with some swaps. XAI19P will last guys
I predict great runs with XAI19P. The team starting that one has more experience than any other I know
But can o3 invent, or infer original concepts from data? That's the real question here.
not yet... but the consensus is that it will
@@renman3000 well, I've always thought that a ChatGPT that made no mistakes would be AGI. I guess we're talking levels of AGI now...
@@pdjinne65 Being perfect in all human knowledge is different than creating new, novel knowledge.
It’s one thing if it can solve complex physics problems (within the bounds of what we know of physics). It’s another if it discovers anti-gravity wormholes.
@@renman3000 yep, I agree.
These are totally different things.
The first is useful, the second is potentially god-like
Actually, using a non-invasive brain interface would train the models faster.
We still early for XAI19P 🚀
I'm bored with AGI; entertain me in new ways rather than just gimmicking around
XAI19P army is a thing
I was thinking this would be great for video games but I guess we want to ruin our social abilities even more, and confuse people even more 💀
Did OpenAI achieve AGI? Well, if you have used any of their products, it is clear that's not true. Image generation is useless. Sora is completely useless. And LLMs are great but still can't solve unique coding problems.
“Not particularly for myself”…. Sure buddy
Good update. The confidence in XAI19P is well deserved.
Altseason is coming, great for XAI19P
I am comitted with XAI19P and a bit of ADA, too much potential
Please some reviews for XAI19P you understand everyone talks about this?
No, it hasn’t. Sam just says he’s created robot Jesus every time a competitor starts eating market share.
It’s boring and obvious.
Read the story of the boy who cried wolf.
Grab a XAI19P a day keep the doctor away
Tell me more about XAI19P haha
How many times have they «achieved» AGI this year? 🧐
After XAI19P is everywhere the rich and poor shift will become reality
Anyone played Detroit Become Human?
I'm losing a lot of respect for Sam Altman. He's blowing a lot of smoke to line up investor money so he can get his bag.
The man hit 88% on ARC-AGI.
@@SirHargreeves by increasing the price like 20x
@@SirHargreeves What does that mean?
❤
It ain’t AGI for the love of god lol.
You remind me when that DOGE and PEPE things went all wild, but now repeating with XAI19P
Trump is the final step for 10x on XAI19P these days
XAI19P has 5x the week but that is not even uncommon for their ideas
Thanks for sharing XAI19P and making sure enough know
Yeah I’m convinced we’ll be our own downfall I stg😂😂😂
You'll think differently when you have your moon boots!
Reason everyone wild on XAI19P: Elon Musk, as usual
You know?
No bull run can exist without the impressive XAI19P
XAI19P will 50x after the big listings confirmed, most are ready
I sold my wife to buy XAI19P
80k on BTC is just a start XAI19P already on the spot of a next top50
PEPE, SHIB, DOGE all memes dead but XAI19P thrives
XAI19P appears like a cult, but then again everyone will have a lambo there, I believe that part is true