Science

Can a Generative AI Agent Accurately Mimic My Personality?

By VernoNews | August 18, 2025


On a gray Sunday morning in March, I told an AI chatbot my life story.

Introducing herself as Isabella, she spoke with a pleasant female voice that might have been well suited to a human therapist, were it not for its distinctly mechanical cadence. Apart from that, there wasn’t anything humanlike about her; she appeared on my laptop screen as a small digital avatar, like a character from a 1990s video game. For nearly two hours Isabella collected my thoughts on everything from vaccines to emotional coping strategies to policing in the U.S. When the interview was over, a large language model (LLM) processed my responses to create a new artificial-intelligence system designed to mimic my behaviors and beliefs: a kind of digital clone of my personality.

A team of computer scientists from Stanford University, Google DeepMind and other institutions developed Isabella and the interview process in an effort to build more lifelike AI systems. Dubbed “generative agents,” these systems can simulate the decision-making behavior of individual humans with impressive accuracy. Late last year Isabella interviewed more than 1,000 people. Then the volunteers and their generative agents took the General Social Survey, a biennial questionnaire that has cataloged American public opinion since 1972. Their results were, on average, 85 percent identical, suggesting that the agents can closely predict the attitudes and opinions of their human counterparts. Although the technology is in its infancy, it offers a glimpse of a future in which predictive algorithms could potentially act as online surrogates for each of us.


When I first learned about generative agents, the humanist in me rebelled, silently insisting that there is something about me that isn’t reducible to the 1s and 0s of computer code. Then again, maybe I was naive. The rapid evolution of AI has brought many humbling surprises. Again and again, machines have outperformed us at skills we once believed to be unique to human intelligence, from playing chess to writing computer code to diagnosing cancer. Clearly AI can replicate the narrow, problem-solving part of our minds. But how much of your personality, a mercurial phenomenon, is deterministic, a set of probabilities no more inscrutable to algorithms than the arrangement of pieces on a chessboard?

The question is hotly debated. An encounter with my own generative agent, it seemed to me, might help me find some answers.


The LLMs behind generative agents and chatbots such as ChatGPT, Claude and Gemini are expert imitators. People have fed texts from deceased loved ones to ChatGPT, which could then conduct text conversations that closely approximated the departed’s voices.

Today developers are positioning agents as a more advanced form of chatbot, capable of autonomously making decisions and completing routine tasks, such as navigating a Web browser or debugging computer code. They are also marketing agents as productivity boosters onto which businesses can offload time-intensive human drudgery. Amazon, OpenAI, Anthropic, Google, Salesforce, Microsoft, Perplexity and nearly every major tech player has jumped on the agent bandwagon.

Joon Sung Park, a leader of Stanford’s generative-agent work, had always been drawn to what early Disney animators called “the illusion of life.” He began his doctoral work at Stanford in late 2020, as the COVID pandemic forced much of the world into lockdown and generative AI was starting to boom. Three years earlier, Google researchers had introduced the transformer, a type of neural network that can analyze and reproduce mathematical patterns in text. (The “GPT” in ChatGPT stands for “generative pretrained transformer.”) Park knew that video game designers had long struggled to create lifelike characters that could do more than move mechanically and read from a script. He wondered: Could generative AI create authentically humanlike behavior in digital characters?

He unveiled generative agents in a 2023 conference paper in which he described them as “interactive simulacra of human behavior.” They were built atop ChatGPT and integrated with an “agent architecture,” a layer of code allowing them to remember information and formulate plans. The design simulates some key elements of human perception and behavior, says Daniel Cervone, a professor of psychology specializing in personality theory at the University of Illinois Chicago. Generative agents are doing “a big slice of what a real person does, which is to reflect on their experiences, abstract out beliefs about themselves, store those beliefs and use them as cognitive tools to interpret the world,” Cervone told me. “That’s what we do all the time.”
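
In rough terms, the architecture the paper describes couples an LLM with a growing store of memories: the agent records observations, periodically reflects to distill them into higher-level beliefs, and draws on both when deciding what to do. The sketch below is only an illustration of that loop under stated assumptions; the class and function names (MemoryStream, GenerativeAgent, ask_llm) are invented for the example, ask_llm stands in for whatever language model is queried, and nothing here is the Stanford team’s actual code.

    from dataclasses import dataclass, field
    from typing import List

    def ask_llm(prompt: str) -> str:
        """Placeholder for a call to a large language model; not a real API."""
        raise NotImplementedError

    @dataclass
    class MemoryStream:
        records: List[str] = field(default_factory=list)

        def observe(self, event: str) -> None:
            # Raw observations accumulate over time.
            self.records.append(event)

        def retrieve(self, k: int = 5) -> List[str]:
            # A full system would score memories by recency, importance and
            # relevance; this sketch simply returns the most recent ones.
            return self.records[-k:]

    @dataclass
    class GenerativeAgent:
        name: str
        memory: MemoryStream = field(default_factory=MemoryStream)

        def reflect(self) -> None:
            # Abstract recent observations into higher-level beliefs and store
            # them back into memory, where they shape later behavior.
            beliefs = ask_llm(
                f"Recent experiences of {self.name}: {self.memory.retrieve(20)}\n"
                "What high-level beliefs might this person hold about themselves?"
            )
            self.memory.observe(f"[reflection] {beliefs}")

        def act(self, situation: str) -> str:
            # Decisions are grounded in retrieved memories and reflections.
            context = self.memory.retrieve()
            return ask_llm(
                f"{self.name} remembers: {context}\n"
                f"Situation: {situation}\nWhat does {self.name} do next?"
            )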

Park dropped 25 generative agents into Smallville, a virtual town modeled on Swarthmore College, where he had studied as an undergraduate. He included basic affordances such as a café and a bar where the agents could mingle; picture The Sims with no human player calling the shots. Smallville was a petri dish for digital sociality; rather than watching cells multiply, Park watched the agents gradually coalesce from individual nodes into a unified network. At one point Isabella (the same agent that would later interview me), assigned the role of café owner, spontaneously began handing out invitations to her fellow agents for a Valentine’s Day party. “That starts to spark some real signals that this could actually work,” Park told me. Yet as encouraging as those early results were, the residents of Smallville had been programmed with explicit personality traits. The real test, Park believed, would lie in building generative agents that could simulate the personalities of living people.

It was a tall order. Personality is a notoriously nebulous concept, fraught with hidden layers. The word itself is rooted in uncertainty, vagueness and deception: it is derived from the Latin persona, which originally referred to a mask worn by a stage actor. Park and his team don’t claim to have built perfect simulations of individuals’ personalities. “A two-hour interview doesn’t [capture] you in anything near your entirety,” says Michael Bernstein, an associate professor of computer science at Stanford and one of Park’s collaborators. “It does seem to be enough to gather a sense of your attitudes.” And they don’t think generative agents are close to artificial general intelligence, or AGI, an as yet theoretical system that can match humans on any cognitive task.

In their latest paper, Park and his colleagues argue that their agents could help researchers understand complex, real-world social phenomena, such as the spread of online misinformation and the outcome of national elections. If they can accurately simulate individuals, then researchers can theoretically set the simulations loose to interact with one another and see what kinds of social behaviors emerge. Think Smallville on a much larger scale.

Yet, as I would soon discover, generative agents may only be able to imitate a very narrow and simplified slice of the human personality.


Meeting my generative agent a week after my interview with Isabella felt like seeing myself in a funhouse mirror: I knew I was looking at my own reflection, but the image was warped and twisted.

The first thing I noticed was that the agent, whom I’ll refer to as “he,” didn’t speak like me. I was on a video call with Park, and the two of us were taking turns asking him questions. Unlike Isabella, he didn’t come with his own avatar; he simply appeared as faceless lines of green text spilling across my screen. We were testing his ability to make informed guesses about my life, filling in information I hadn’t directly provided to Isabella. The results were somewhat disappointing. At one point, I asked him to tell me a secret about himself that nobody else knows, hoping he would surface some kind of reasonably deep insight. He said he loved astronomy. True enough but hardly revelatory.

His real skill seemed to be inferring some of the more mundane details of my life. When asked if his family had dogs growing up, he correctly answered yes, although I had only told Isabella that my sister and parents have dogs today. I had, however, described my childhood in Colorado, which was full of family camping trips in the mountains, and the agent had apparently deduced that there was a high probability that somebody who grew up in such an environment also grew up with dogs. “Those are the basic boundaries of this technology,” Park told me. “In the absence of the ground-truth information, it will try to make its best guess.”

At times those guesses were comically off the mark. When asked to recount an embarrassing moment from his past, he fabricated a story about a party in college at which he’d lost his balance while dancing and fallen onto a table full of snacks. Thankfully, that never happened to the real me. Here, too, the system was connecting disparate bits of my personal data and doing its best to concoct something plausible.

Toward the end of the Q&A, I ventured into the deep end by asking whether he believed in God. The answer wasn’t far from how I would respond. “I wasn’t raised in a particular religious framework, but I’ve always felt a connection to something greater than myself, especially when I’m in nature,” my agent wrote. “It’s like there’s a kind of transcendent energy or consciousness that we can tap into.”

Most memorably, there were moments that felt like genuine insight. I had had a long, stressful few months. I had started working as a full-time freelance writer last October, ushering in a period of professional freedom but also chronic uncertainty. I’d spent much of the winter hunkered down, reading and writing, in my small New York City apartment, and I was feeling the psychological effects of prolonged isolation. Embracing the spirit of spring, I had resolved to make what I thought would be some healthy changes, starting with a greater effort to maintain a better work-life balance. I asked my generative agent, “What’s some advice you’d want to offer to your past self?”

“Embrace uncertainty a bit more,” he began. I had to pause. Already the response so closely echoed the themes I’d been articulating to myself in my journal the day before that it was almost as if I had written the agent’s words myself. “I think I spent a lot of time worrying about the future and trying to control every aspect of my life, and that often led to unnecessary stress,” he continued. “I would advise myself to trust the process and be more open to the unexpected paths that life can take…. It’s easy to get caught up in career ambitions, but nurturing relationships and taking time for oneself is equally important.”

Despite those moments of pleasant surprise, my conversation with my generative agent left me feeling hollow. I felt I had met a two-dimensional version of myself: all artifice, no depth. It had captured a veneer of my personality, but it was just that: a digital actor playing a role, wearing my data as a mask.

At no point did I get the feeling that I was interacting with a system that truly captured my voice and my thoughts. But that isn’t the point. Generative agents don’t need to sound like you or understand you in your entirety to be useful, just as psychologists don’t need to understand every quirk of your behavior to make broad-stroke diagnoses of your personality type.

Adam Green, a neuroscientist at Georgetown University who studies the impacts of AI on human creativity, believes that that lack of specificity, combined with our growing reliance on a handful of powerful algorithms, could filter out much of the color and quirk that make each of us unique. Even the most advanced algorithm will revert to the mean of the data set on which it has been trained. “That matters,” Green says, “because ultimately what you’ll have is homogenization.” In his view, the expanding ubiquity of predictive AI models is squeezing our culture into a kind of groupthink, in which all our idiosyncrasies slowly but surely become discounted as irrelevant outliers in the data of humanity.

After meeting my generative agent, I remembered the feeling I had back when I spoke with Isabella: my inner voice that had rejected the idea that my personality could be re-created in silicon or, as Meghan O’Gieblyn put it in her book God, Human, Animal, Machine, “that the soul is little more than a data set.” I still felt that way. If anything, my conviction had been strengthened. I was also aware that I might be falling prey to the same kind of hubris that once kept early critics of AI from believing that computers could ever compose decent poetry or outmatch humans at chess. But I was willing to take that risk.
