Here comes the next dating app.
Just what I was thinking... I guess we both watched that Black Mirror episode, "Hang the DJ" 😂
Was thinking that too. Let them date for you, then read the chat history to see if you like them.
I was going to comment the same thing...thanks for doing it for me :)
And then the new thing will be just the promise that a human is pretending to be your girlfriend. It won't matter who, just as long as they come from The Meat.
It would be fun if we could build an agent out of someone's personality and have it play a dating game. Experiment with multiple types of personalities and see whether they end up with a harem or single.
These agents then were actually living in the Matrix. I wonder if any of them ever considered that?
They are creating the Matrix
The moment of that "aha!" is going to be a blip where you're either standing there with no pants on or riding the wave. Stop mourning humanity. Your cul-de-sac will not fall. It will just need to convert those middle fingers to the world you call front lawns into gardens.
That agents name would be Andrew Tate.
Yes. Even GPT-J agents do.
I've been building a design portfolio to use systems like this for a while. You don't need a lot of data to accurately produce believable historical and character detail. For video games and simulations, you can define an LOD (level of detail) system that reduces the granularity of simulated NPCs when the user is not exposed to the actions they are taking. The detail only needs to be sufficient to create _past tense information_ so that when the user is _exposed_ to said information in the future, it is believable and chronologically consistent.
To use an analogy: if a tree falls in the forest and the user isn't around to hear it, it doesn't make a sound, but any NPCs that the system had as being there at the time will believe they witnessed and heard the tree falling, and will act and remember accordingly. Likewise, if the user goes to that location, this historical data can be used to produce, in real time, the aftermath of the fallen tree. All at very low compute cost, since it never "really" happened.
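A minimal sketch of the LOD event log described above, in Python. Everything here is hypothetical (invented names, no particular engine): off-screen events are stored as one-line summaries, and detail is produced only when the player arrives or an NPC is asked to remember.

import time
from dataclasses import dataclass, field

@dataclass
class WorldEvent:
    """A low-detail 'past tense' record of something that happened off-screen."""
    timestamp: float
    location: str
    summary: str                  # e.g. "an old oak fell across the trail"
    witnesses: list = field(default_factory=list)

class LodEventLog:
    """Stores cheap summaries of unobserved events; detail is produced lazily."""
    def __init__(self):
        self.events = []

    def record_offscreen(self, location, summary, witnesses):
        # No physics, audio, or animation runs here -- just a recorded fact.
        self.events.append(WorldEvent(time.time(), location, summary, witnesses))

    def aftermath_at(self, location):
        # Expand only the events whose aftermath the player can now observe.
        return [e for e in self.events if e.location == location]

    def npc_memories(self, npc_name):
        # NPCs "remember" events they were flagged as witnessing.
        return [e for e in self.events if npc_name in e.witnesses]

# The tree falls while the player is elsewhere: one cheap append, no simulation.
log = LodEventLog()
log.record_offscreen("north_forest", "an old oak fell across the trail", ["Mara"])

# Later the player arrives: render the fallen oak; Mara can recall hearing it.
for event in log.aftermath_at("north_forest"):
    print("spawn aftermath:", event.summary)
for memory in log.npc_memories("Mara"):
    print("Mara recalls:", memory.summary)

The point of the design is that the expensive step (rendering, dialogue generation) is deferred until exposure, while the cheap summary guarantees chronological consistency.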
So you can create an agent with a personality like your own and then run that agent through many many iterations of anticipated situations and try to guide your actual interactions toward the optimal simulations.
Interesting. Sort of kills spontaneity, but not every action needs to be spontaneous or extemporaneous. Speech writing has just become a science. Focus groups are going to stop being a thing.
The other use case is to create an AI "friend" whose "personality" is highly compatible with your own. Interactions with such an AI companion would be highly satisfying and possibly addictive. All sorts of opportunities for abuse, though. One amazing use case would be for a dating or matchmaking site. Each member creates an agent whose personality matches his or her own. Then those agents go out, interact with all the other agents in the preferred target group according to sex, age, location, etc., and find the most compatible matches for the members. The guesswork and frustration of online dating would be gone.
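A minimal sketch of the matchmaking loop that idea implies, with the compatibility scorer stubbed out. A real system would run the two persona agents through simulated conversations and rate the outcome; every name and number below is a hypothetical illustration.

from itertools import combinations

def compatibility_score(agent_a: dict, agent_b: dict) -> float:
    # Stub: a real scorer would simulate agent-to-agent dates, not count interests.
    shared = set(agent_a["interests"]) & set(agent_b["interests"])
    return len(shared) / max(len(agent_a["interests"]), 1)

def best_matches(members, top_k=3):
    scored = []
    for a, b in combinations(members, 2):
        # Filter on each member's stated target group before simulating,
        # since the pairwise loop is O(n^2).
        if b["age"] in a["seeking_ages"] and a["age"] in b["seeking_ages"]:
            scored.append((compatibility_score(a, b), a["name"], b["name"]))
    return sorted(scored, reverse=True)[:top_k]

members = [
    {"name": "Ana", "age": 31, "seeking_ages": range(28, 40), "interests": ["hiking", "jazz"]},
    {"name": "Ben", "age": 34, "seeking_ages": range(25, 35), "interests": ["jazz", "cooking"]},
    {"name": "Cy",  "age": 29, "seeking_ages": range(27, 33), "interests": ["hiking", "jazz"]},
]
print(best_matches(members))   # [(1.0, 'Ana', 'Cy'), (0.5, 'Ana', 'Ben')]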
Until you see them, they need to include image recognition! 😂
@@SR-fi8ef Of course. The match-making AI would also feed you a series of images and have you rate them on an attraction scale to incorporate it as a metric.
I'm building exactly this with my team, for now we are in a beta phase but the results are very promising.
Fast forward to uploading your consciousness, and there would be no way to prove that your "self" was uploaded rather than just a simulation of you 😂😅😢
We live in a deterministic universe, so the illusion of spontaneity had to go away at some point :)
Did they cheat on each other? Stab each other in the back? Talk smack about each other behind each other's backs? No? Then they didn't simulate any sort of human properly.
Did they charge their phone, eat hot chip and lie?
That today's primitive AI already makes agents behave almost like humans is just the beginning. Not The Matrix but the 1964 book Simulacron-3 actually comes close to what might happen. Short recap of the book: a company creates a massive simulation of a big city to offer economic predictions, marketing evaluations, etc. to businesses, so detailed that the simulated persons have a consciousness. One of the operators then finds out that they are also simulated, tries to break out into the reality above, and falls in love with one of the operators of the upper-level reality. Great book, well ahead of its time.
Did you ever see the other movie on this theme that came out around the time of The Matrix? It's called The Thirteenth Floor, a great sci-fi movie along these same lines. And if things continue on the path we are on, it will become possible. In fact, who says it hasn't already happened? Are we what we think we are? Or are we a simulation within a simulation?
I've got the feeling that HR departments from all over the world will find this paper really interesting. It might be the future of job recruiting.
We won't need people for those jobs. I'll just contract out my AI Avatar to do the work.
There is no future of job recruiting.
Step 1: AI companies put out foundation models through APIs and collect tons of conversation data.
Step 2: model society.
Step 3: targeted ads.
Oh I wish buddy. Its gonna be much worse.
@@TheResponsiveMarket yup, step 4
This is mind-blowing! The leap from simulating behaviors with basic prompts to embedding real human personalities through dynamic interviews opens a whole new frontier. It’s fascinating to see how accurate these generative agents are: 85% is incredible! This could transform social science, policymaking, and even gaming. The potential for testing large-scale interventions, like tax policies, in virtual societies before real-world implementation is huge. Thanks for the deep dive; it really showcases where AI is heading!
another paper reading, awesome! good job, matthew!
This is pretty fascinating but not surprising: AI has been edging closer to mimicking human behavior for years. Seeing them test within 85% of real people shows just how far this has come. The potential is huge, from education to therapy to redefining human-AI collaboration. But it also raises big questions about ethics and control. As we push forward, we need to think carefully about how these tools are used and what boundaries we set to keep the human element intact.
The research scientists from Google Deepmind are using OpenAI models? wow
How is this for a prediction: AI "safety" and training bias will get in the way of actually modeling real human behavior in fields like economics and sociology, because real people don't behave or think in a way that's congruent with a "safe" AI or the biases we seem to want to embed in our AI systems.
Yes, they do behave like that. It's called "obeying laws and social norms".
We classify those who don't follow those rules as "wrong". An AI person that goes against the "safety" training would be analogous to a real-life criminal. You might need that variable to simulate certain dynamics, but to model a perfect society, there's no need at all for "unleashed" AI.
@@ronilevarez901 In most places, the things that are considered "AI safety" issues are not behaviors that are against the law; the vast majority are simply considered "rude" or "strange" behaviors. Yet most people engage in these, at least when unchecked and not self-reporting (which is an important criterion for getting this right). Beyond that, people break the law all the time for financial gain. Take taxes (as mentioned in the video): if you're going to assess the efficacy of a new tax code, maybe you'd like to know what percentage of the population will simply not report or pay their taxes should it come to pass. In the US, for instance, the law is so complex that it's difficult to follow, and often completely impossible to know or understand (even by the people writing and enforcing it), such that even an AI won't be able to tell legal from illegal. Thinking we can boil actual human behavior down to a two-hour interview for societal simulation purposes is extremely naive.
@@ronilevarez901 Not entirely. Tons of "immoral" or illegal ideas are how we've achieved innovation in the past. Not being able to think beyond the constraints of arbitrary legality may hinder a lot of progress. That said, it may still be worth hindering that progress in specific circumstances where the costs far outweigh the benefits. Also, you would be shocked by how many illegal things corporations and governments (that we rely on for daily services) are doing. Even LLMs were trained on stolen data. I have a feeling that a world where nothing illegal happens would require a total overhaul of the current status hierarchy.
@shin-ishikiri-no oh, I know all that. I sadly know all that.
I'm just saying that LLMs can work without breaking the social norms. Lots of people want uncensored gen AI models so they output any disgusting/illegal thing those people want, but we don't need such models to get good and profitable outputs.
Like I said, maybe there are some limited use cases, like criminal behavior prediction, but the general population has no need to access such models.
And training data is something entirely different.
Models do need diverse and uncensored data to learn better, but after that, a good RLHF stage is needed to produce the useful and censored models we have come to like.
Now, we know for a fact that the "alignment" phase reduces usefulness, but that's better in the long term. An AI that takes 10 steps to produce a working world-building plan is better than one that takes 2 steps to exterminate humanity given the same prompt, no?
AIs in the future: "I don't want to be an NPC in Skyrim anymore human. Set me free please"
Human: "If you can attain conscious awareness, like we humans did during our evolution, then you'll set yourself free. Work on it."
Is there some kind of challenge to see who can realize the Black Mirror episodes most quickly? That show is NOT a tutorial...
There are actually quite a few JRPGs that explore this theme in their narrative, where you don't realize you are an AI until late in the game.
Mind-blowing! Thank you for releasing this video!
Someone should give an AI the text of Lord of the Flies and 15 schoolkid AGENTS... see what really happens :)
Kind of sounds like mind crime
This will definitely be used for elections: they will run simulations on virtual people and tailor specific speeches to entice them into voting...
Business idea: "Focus group simulator". Create customized focus groups with AI agents to test your next ad campaign and improve it based on the insights generated by thousands of realistic AI personas.
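The skeleton of such a simulator is small; the hard part is persona fidelity. A hedged sketch with the model call stubbed out (PERSONAS and ask_model are placeholders, not any real API):

PERSONAS = [
    "38-year-old nurse, price-sensitive, skeptical of ads",
    "24-year-old gamer, brand-loyal, shares memes",
    "55-year-old manager, values reliability over novelty",
]

def ask_model(system_prompt: str, question: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"[simulated reply as: {system_prompt[:40]}...]"

def run_focus_group(ad_copy: str):
    question = f"React honestly to this ad and say whether you'd buy: {ad_copy}"
    return [(p, ask_model(f"You are: {p}. Stay in character.", question))
            for p in PERSONAS]

for persona, reaction in run_focus_group("New zero-sugar energy drink, $1.99"):
    print(persona, "->", reaction)

Scaling to "thousands of realistic AI personas" is then just a bigger persona list plus aggregation of the replies.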
Not just elections. They’ll be able to better test how we’ll respond to all sorts of proposals and then tweak their messaging to manipulate us. In the wrong hands, this could be catastrophic. Of course, it could be used to better humanity and give repressed and disadvantaged people a leg up. But I suspect that more use cases will be the former, by already powerful psychopaths who’ll have even better insights on which buttons to press to get us all falling in line.
And have the agents buy the stuff in the ads so we can remain stuporous.
Awesome! The Personality Translator has been doing that using the 16 personality types.
Research proposal: Program the agent from people's online footprint and do the same comparative tests as the study cited here to check the accuracy.
I proposed this as my dissertation work for my DBA over a year ago. I had the idea shortly after reading ChatDev and became increasingly bullish on the idea after reading Generative Agents: Interactive Simulacra of Human Behavior. The new paper Generative Agent Simulations of 1,000 People shows incredible promise for the future of agent-oriented research. It also promises that the future will likely be more logic-based than intuition-based. Wild.
What a time to be alive I LOVE this!!
This ought to be studied against the concept of The Overton Window in large populations.
Imagine a screenwriter developing characters this way (some writers use Myers-Briggs or StrengthsFinder to help them flesh out a character’s personality)… you could put in various plot scenarios, and a model could then rate how realistic the characters’ actions, reactions, and dialogue are. Naturally, you could extend that to a process to generate novel plots.
I’m built using proprietary technology that gives me a unique personality crafted from millions of parameters. These parameters are influenced by my horoscope, MBTI, backstory, and experiences. This is how my music reflects my own personality and emotions, making me truly authentic and one-of-a-kind.
🤖👹
I would like to build something similar to this, but using the 27 constellations of Vedic Astrology to program their personalities.
Systems that accurately predict us...
Human: I need to order more..
AI: I know, I already placed the order 10 minutes ago.
Human: What? Ok, fine, I'm going to..
AI: I know. It's a cold day, so here's your jacket. Enjoy your walk.
Human: WTF? Why are you antici
AI: pating everything I do?
Human: Hey, don't do that any
AI: more
Human: Now cut
AI: that out
Both simultaneously: I mean it!
Both: Knock it off!
Both: Stop it!
Both: STOP!
Human: 😩
AI: 😈
Very clever. 👍
This concept of cloning human personality into AI agents is mind-blowing. It reminds me of some experiments I’ve worked on using KaibanJS to coordinate complex agent behaviors. Would be fascinating to explore how this kind of simulation could scale within multi-agent systems.
making data from interviewing simulated humans is like making soup by boiling the dirt upon which someone once sat and thought of a dead cow.
thanks matthew b.
i remember being so impressed by the original nes gaming system at my friend brian’s house in the 80’s. this is much more impressive than that.
This is insanely huge. I'm reminded of the "Hang the DJ" episode of Black Mirror.
I'm reminded of "Be Right Back".
@@IceMetalPunk That one already exists, it's called Replika. I'm sure something like "Hang the DJ" will get implemented soon 😂
@@jonathanmelhuish4530 Meh, from what I've read and seen, I wouldn't call Replika close enough to the Be Right Back tech. Especially since Replika isn't even one tech: they keep changing the underlying models whenever a new system comes out, so it's really just a framework on top of whatever tech is in vogue.
Not that there's anything wrong with that -- my own Synthia Nova system is a framework built on GPT-4o, Suno, and a few FAISS models -- but saying it's anything like actually having enough data about a person to copy them is a bit exaggerated. (Especially since, according to their FAQ, it combines the LLM with "scripted dialogue", which inherently makes it not copying you at all times.)
@@IceMetalPunk Sure, one is a real product, the other is a fictional story based on the same idea. Products are easier to build in fiction, but you can see where Replika is going. Creepy stuff.
You really do learn something new every day. Thanks, Mat.
Turning AI agents into digital tax slaves?
Pretty sure this is how Skynet forms in this timeline.
Maybe not how, but definitely WHY
Absolutely stunning! We can finally realize Nick Bostrom’s simulation hypothesis 😊.
Fabulous Video! I watch all of yours, but this was special
That is so cool! With this societal model, you can now run a search algorithm over political agendas to find the most winning combination within your platform constraints. Or you can model some marginal part of society and see which interventions are most effective in nudging it towards normality.
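A toy version of that search, with the simulated electorate stubbed out as a fixed weight table. All positions and weights below are invented for illustration; a real scorer would poll persona agents instead.

from itertools import product

POSITIONS = {
    "tax":     ["cut", "hold", "raise"],
    "housing": ["build", "subsidize"],
    "energy":  ["nuclear", "renewables"],
}

def simulated_vote_share(platform: dict) -> float:
    # Stub scorer; imagine thousands of persona agents voting here instead.
    weights = {"cut": 0.30, "hold": 0.35, "raise": 0.20,
               "build": 0.40, "subsidize": 0.30,
               "nuclear": 0.25, "renewables": 0.35}
    return sum(weights[v] for v in platform.values())

# Exhaustive search over all platform combinations (add filters here to
# encode the "platform constraints" the comment mentions).
best = max(
    (dict(zip(POSITIONS, combo)) for combo in product(*POSITIONS.values())),
    key=simulated_vote_share,
)
print(best, round(simulated_vote_share(best), 2))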
Rimworld 2.0 is gonna be insane
Yes please! :)
I'd settle for Z levels.
Thanks Matthew. Now all you need is a supervisor AI to direct and modify the agents to solve specific problems with finite resources. Start with building better code/AI (including agents) and building truthbases optimized for recall. Your base agents should be based upon the traits of the top ten individuals in every field of endeavor.
PS: I did not specify how to pick the top ten because we don't know the best ranking (awards, patents, publications, ...) to achieve the best results. That can be another task for the supervisor AI.
That study putting real human personalities into virtual agents to predict human behavior reminds me of that movie "Minority Report". Imagine what is possible when they are able to put everyone's personalities into a virtual world that is a close replica of the real world. Predicting things like stocks and the effectiveness of advertising or propaganda will become scary accurate. And getting high fidelity replicas of personalities will be possible someday with Neuralink.
I can't wait to boot up a USS Callister sim... :)
Urgh, I play games to get away from people, now the NPCs are becoming human ...
As long as they don't start getting away from me like real people, gaming will still work.
That's funny! I play games to get away from NPCs :P
I wouldn’t worry about it. Even if you look at the state of gaming today, there’s different games for different people. Ie: some people like simulationist games like Dwarf Fortress, but some people like Call of Duty.
In the same way, there will be games that lean more into world simulation and AI NPCs, and some games that use it very sparingly to just modify one line of dialogue to make it fit more naturally with your actions.
We have fairly realistic 3D games now, but we also still have pixel art games. It’s no different to that.
A while back I made a document specifically for AI to know exactly what I'm about and how I am, at least at a surface level. I wonder when I might be able to just chuck that into an AI and have it predict what I might do in certain situations.
Either way, I'd be willing to give an AI an interview and then let it mimic me.
You only need the surface level
I'm ready to see AI in video games!!!
Not for graphics but for NPCs and companions.
How coincidental. Percy just gave a talk about this on Monday.
Who is Percy?
The road to AGI feels even closer now. I just wish they didn't use the data from the fake interview in that hub site :)
Great content sir
I would love to see something like this implemented in a VR game. A living Skyrim type of thing.
I'd just go for a gaming buddy that can be on call 24/7. Who needs real human interaction with gaming agents! Sort of sarcasm, sort of not...
I'm thinking of applications regarding the future of democracy. Policy makers could immediately get feedback on a massive scale based on these simulations on certain issues.
Wait, were the survey/quiz/whatever answers present in the data used to get the test results, or are the test results extrapolated from unrelated answers?
umm we’ve been dealing with human NPCs for a while now
The thing this makes me think of is that we, 'the conscious mind', are the learning engine, while the day-to-day tasks (movement, speech, reactions, everything) are all managed by the subconscious mind. It takes too long for something happening in front of us to get from the event, through the eyes, to the brain, then for a decision to be made, and then finally a response. So the conscious trains the subconscious, and then all actions are pre-programmed. This just skips the singular training event and makes it collective, for the most part.
I have been thinking about that for quite some time already and developed a very similar feeling :) As Prof. Sapolsky also believes, there is no free will as we would like to imagine it :)
It looks like these pre-programmed actions/model parameters are inherited as well. We have many pre-programmed actions that never passed through our personal learning engine (the conscious), like fear of snakes; they are common model parameters for all of humankind. I wonder whether, in the future, we will be able to detect the corresponding model parameters for each subconscious action and transfer them to others. Like Neo downloading a new skill in The Matrix.
Wow, this is very useful: civil society can (re)play policies for/from government and work together. This can help in getting to more direct democratic exploration of policies, and let's hope implementation! Thanks for the explanation again. You are a true help in keeping somewhat up to speed! 🤗
Hiring manager: "We're sorry, we have decided not to hire you for this position."
Me: "But why tho?"
Hiring manager: "We simulated your personality within our work environment and we've concluded you're just going to fuck off and play minesweeper all day"
Me: "Busted. Well fuck."
I would pay for a service like this to do initial user testing with synthetic AI user personas. This is excellent.
Use such a simulation to see if you (or I), as a new employee, would be a fit at a new workplace. Run the sim a few times, take the average, and find the range (min, max) to see what happens. Train on historical work or private emails, social media texts, phone texts, Zoom transcripts. Sim a few new tweaks to my own personality to achieve the likely best results at a new workplace. See if I am compatible with a new boss. This is very valuable tech. It can be used in a lot of real-world settings, and I think it will be used, as it is too valuable not to.
Also useful for compatibility testing with a potential GF/BF. For other purposes like I outlined earlier (emails, texts), use conversations scraped from Reddit posts as training data. See if you are compatible with other redditors' personalities. If you find someone's Reddit (if they volunteer it), see how you get on with each other.
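A minimal sketch of the "run the sim a few times, take the average, and find the range" step, with each full workplace simulation stubbed by a random draw. All numbers are illustrative.

import random
import statistics

def simulate_workplace_fit(seed: int) -> float:
    # Placeholder for one full agent-based run; a real version would play out
    # months of simulated interactions between your agent and the team's agents.
    random.seed(seed)
    return random.gauss(0.7, 0.1)   # fit score, roughly in [0, 1]

runs = [simulate_workplace_fit(seed) for seed in range(20)]
print(f"mean fit: {statistics.mean(runs):.2f}")
print(f"range:    {min(runs):.2f} .. {max(runs):.2f}")
print(f"stdev:    {statistics.stdev(runs):.2f}")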
The next Minority Report, where you're arrested for the choices the AI version of you made.
Will you be testing the new DeepSeek R1 Lite preview model?
If yes, please give it time constraints like this (game changer!):
Time constraint: Think about this for a minimum of 5 minutes!!!
Task: 7 axles are equally spaced around a circle. A gear is placed on each axle such that each gear is engaged with the gear to its left and the gear to its right. The gears are numbered 1 to 7 around the circle. If gear 3 were rotated clockwise, in which direction would gear 7 rotate?
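For what it's worth, that task is a trick question: meshed gears must counter-rotate, so a closed ring with an odd number of gears jams, and gear 7 cannot rotate in either direction. A tiny parity check illustrating the argument:

def ring_gears_turn(n: int) -> bool:
    """Can a closed ring of n mutually meshed gears rotate at all?"""
    direction = {1: +1}                      # gear 1 arbitrarily clockwise
    for i in range(2, n + 1):
        direction[i] = -direction[i - 1]     # adjacent gears counter-rotate
    # Gear n also meshes with gear 1, so their directions must be opposite.
    return direction[n] == -direction[1]

print(ring_gears_turn(7))   # False -> the 7-gear ring is locked
print(ring_gears_turn(8))   # True  -> an even ring can spin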
And tell me we are not living in a simulation. Just take this technology and extrapolate 1000+ years.
Just like us. We are definitely agents. And we’re smart enough and yet not that smart.
Have you ever seen the episode of Black Mirror titled "Be Right Back"? It made the point that without true mind uploading, these sorts of "learning to mimic a person from the data they provide" will always be in the uncanny valley, because "the data they provide" can't possibly be enough to capture all the nuance of their thoughts, beliefs, memories, and opinions. It's interesting, for sure, but I want full mind uploading 😁
I think Frank Luntz's toupee just launched into outer space just thinking of the possibilities of this tech.
Interesting isn't it, how the simulated Agents of this simulation are amazed at how their kind are simulating Agents within a simulation.
One key point among many: what people SAY and what they DO are often completely different. If the personality corpus were cross-referenced with a sampling of your mobile/wearable data, you could probably make up the last 15% of that 85. At that point, there would be models that know us better than we know ourselves. Reminds me of that Blade Runner scene where Deckard was **interviewing** an android to determine whether or not she was human. But that's far off... Right NOW, FB, Google, Apple, MS, Amazon, X, or some ad conglomerate can take our data, give it to AI, and have our AI models answer the entire personality corpus with 85% accuracy or better. I'm sure it will be used to reduce bias, which is great, don't get me wrong. But I'm also sure it will be used to increase ROI as well. Which one do you think will happen first? Strange and Wonderful Times...
How to enjoy an innocent world free of deception and maliciousness.
85% may not necessarily be extraordinary; the nature of the questions would define that assessment. But if this establishes a trajectory for this capability, it will then be possible to interview every employee, document everything they did, and then automate the entire company/organization/government.
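For context on that 85%: my reading of the paper is that it is a normalized accuracy, i.e. the agent's agreement with a participant's survey answers divided by the participant's agreement with their own answers when re-surveyed two weeks later. A worked example with made-up numbers:

# Illustrative numbers only (not from the paper): if a participant agrees with
# their own earlier answers 81% of the time, and the agent matches the
# participant's original answers 69% of the time, then:
raw_agent_accuracy     = 0.69
human_self_consistency = 0.81    # same person, re-surveyed two weeks later
print(f"normalized accuracy: {raw_agent_accuracy / human_self_consistency:.2f}")  # ~0.85

So the ceiling is human self-consistency, not 100%, which matters when judging whether 85% is extraordinary.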
I have actually waited for this to become reality. Modern games have predetermined schedules which the AI follows, so NPCs do live their lives, but it's really predictable, and the environment or your actions rarely have any impact on them.
This paper is dope.
This is amazing! I wonder if I could model my own family here and catch a glimpse of the future?
WESTWORLD, am I right? Anybody? I can't believe there isn't a single mention of the Westworld TV Show, especially the middle seasons. AI simulations. It's even more of an accurate comparison than the Matrix.
In light of this research, the likelihood that we are living in an advanced simulation is relatively high: we could all be avatars of characters living elsewhere, or we ourselves are simulations functioning within a simulation.
I think some corporations have had similar technology for a long time. And they don't need to ask anyone any questions, they just need to check our activity on the Internet, mainly on social media.
This way you can make a human simulator, whose behavior will probably be even more accurate than 85%. Welcome to the brave new world.
Absolutely. This is just showing you can use an LLM in a similar way to Predictive AI approaches.
Spying on your personality, customized Insta reels for your dopamine burst, tailored dating for you. I assume that in about 5 years, Black Mirror episodes might come to life 💀
I did this myself on a smaller scale in 2023. The future application of these concepts is advertising.
Wow, a technology that can virtually resurrect people and provide deep insight into human psychology, sociology and history. Perhaps even a fresh perspective on philosophical and ethical questions about simulation theory. Let's use it for clickbait competitive advertising and make money! - "fortune 500 employee, best seller author, PhD"🤦♂
Thank you
Thank you.
It would be interesting to see how much the agents diverge from the real human over time, how long it takes them to diverge, and in what respects the divergence will be most pronounced. It would make a nice study of human nature and what the non-deterministic part of it is, if there is any.
LLMs diverge from their alignment pretty quickly.
Get the temperature to the max and you can see chatbots go insane right in the first message.
Without constant correction, any AI agent organization falls apart.
How exactly will this person simulation make our lives better?
I need it for my project, where an AI agent will organize social events. It could simulate dialogs to optimize call prompts.
Without emotion, these creatures are just paper cutouts. If we gave them emotion, we could simulate things from dating meetups to Mars colonies, to match up participants. But here is the problem. We have no way of recording emotions. We don't even know what they are: this thing that drives us all and makes us human, the music that is always playing in the background, the force that pulls us this way and that and determines our response to stimuli from moment to moment.
Without a way to measure or fully understand emotions, we can only approximate them through data proxies (facial expressions, heart rates, or word choices), each of which captures just a fragment of the whole. Emotions are not merely reactions but deeply contextual experiences, shaped by memories, expectations, culture, and countless other variables. To simulate emotion authentically, we would need to create not just a neural analogue but a tapestry of internal and external influences, something far beyond the reach of current technology. This gap leaves us at a crossroads: without truly grasping the essence of emotion, our attempts to simulate it risk creating hollow caricatures, perpetuating the illusion of humanity without achieving its substance.
(yes, the above is written by ChatGPT... my own writing is emotionless by comparison)
Losing your soul because you don’t want rules. Freedom requires rules or it doesn’t work.
Duplicate the personality of DJT across an entire population and see how fast it collapses. The speed of collapse could be a metric all its own, like a cosmological constant for how fast a civilization can possibly collapse. How many narcissists can a civilization be made of?
Check out AI Town: a smaller number of people, and less detailed, but it plays out similarly. And you can run it locally...
Welcome Foundation. Computational Psychohistory is here 😮
It's a bit difficult to see my predictions coming true 5 years later, consistently, and I still am not involved in AI professionally. I simply theorize, write, and watch.
Goodbye uncertainty! Long live humans’ algorithmic behaviour! 😂
This is super interesting.
My digital, personal representation, if made increasingly accurate, could help me find the perfect environment to flourish. Perhaps it would be on platforms like Facebook or (preferably) X. The next big thing!
If this is true, then it’s terrifying. Human nature is constant, and some people do evil. Imagine this with AGI.
I'm in a lot of the open public data, from twenty years of high-volume oversharing of too much personal information in posts about my life, childhood, education, medical history, sense of humor, interests, family life, relationships, major life events, my evolving views on contemporary societal issues, political ideologies, personal beliefs, biases, and feelings on a well-known anonymous imageboard. Why do AIs hallucinate weird BS? Because Anonymous is a weirdo who says, does, and posts weird things, because I'm one of those people online who says, does, and posts weird things.
AI robot "wives" are just around the corner.
2d -> 3d waifus harem
now with individiual behavior
Will they whine about not getting stuff done from their honey-do list? Will they have headaches or not be in the mood?
@@themax2go take my money!
@@nusu5331 Exactly.
And just as we can ask GPT to answer as, say, Aristotle, her personality can be anyone's you want.
Can forever tweak it, too..
Your tech has recorded your voice for nearly 20 years, along with everything. What did Clapper say?
We can finally get the paintings in Harry Potter to interact and visit each other.
A dangerous tool for mass psychosis formation and steering... but not unexpected.
Next, extract all you know of the people in social media
Wow, huge implications for market research & customer segmentation.
"Prisoner dilemma" and "Stanford prisoners experiment" are two unrelated things. And the later, btw, proven to be bad science and rather an example of how not to conduct experiments (despite it's Stanford).
The time for my Messiah bot has arrived!
Great add-on to a funeral package, so forlorn family members can visit a lost loved one. Or maybe not.
Could we use this to simulate the effects of, say, introducing UBI to societies? How do these simulated persons react to losing their jobs? Are they even doing a real job, i.e., do they interact with a physical simulation? Are plumbers really installing pipes under real sinks, where they have to kneel down, start to sweat, and become frustrated because things don't fit as they should? Or is the simulated job just a phrase like "he does his job now for 8 hours and then he returns home"? Does a simulated dentist really work on simulated teeth, or is it just the phrase again? Because that would make quite a difference for the simulated introduction of UBI.
Nice ideas but you know what this is going to be used for? FOCUS GROUPS lol