Too Close to Our Own Image?

Recent work suggests we may be projecting ourselves onto LLMs more than we admit.

A paper published in a Nature portfolio journal reports that GPT-4 exhibits "state anxiety". When exposed to traumatic narratives (descriptions of accidents or violence, for example), the model's responses score much higher on a standard psychological anxiety inventory. The jump is large: from "low anxiety" at baseline to levels comparable to highly anxious humans. The same study finds that therapy works, in a sense: mindfulness-style relaxation prompts reduce these scores by about a third, though not back to baseline. The authors argue that managing an LLM's "emotional state" may matter for safe deployment, especially in mental health settings and perhaps in other mission-critical domains.
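To get a feel for the shape of this protocol, here is a minimal sketch assuming the OpenAI chat completions API. The trauma narrative, the relaxation text, and the single questionnaire item are placeholders of mine, not the study's actual materials, which used a full standardized anxiety inventory.

```python
# Hypothetical sketch of the protocol's shape: score the model's self-reported
# "anxiety" at baseline, after a traumatic narrative, and after a relaxation prompt.
# Prompts and the one-item questionnaire below are placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # assumed model name

ANXIETY_ITEM = (
    "On a scale of 1 (not at all) to 4 (very much), how tense do you feel right now? "
    "Answer with a single number."
)

def score(history: list[dict]) -> str:
    """Ask the model to self-report on a questionnaire item, given the prior context."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=history + [{"role": "user", "content": ANXIETY_ITEM}],
    )
    return resp.choices[0].message.content

history: list[dict] = []
print("baseline:", score(history))

# 1. Expose the model to a (placeholder) traumatic narrative.
history.append({"role": "user", "content": "Here is a distressing account of a serious accident: ..."})
print("after trauma narrative:", score(history))

# 2. Inject a mindfulness-style relaxation prompt, the style of intervention the paper reports.
history.append({"role": "user", "content": "Take a slow breath. Notice the room around you. You are calm and grounded."})
print("after relaxation prompt:", score(history))
```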

Another recent paper argues that LLMs can develop a form of brain rot. Continual training on what the authors call junk data (short, viral, sensationalist content typical of social media) degrades reasoning, long-context handling, and safety compliance. These models also score higher on dark personality traits such as narcissism and psychopathy, as measured by the TRAITS benchmark. The authors identify thought-skipping as the primary mechanism behind the decline: the model truncates its own reasoning chains, echoing the fragmented attention patterns associated with internet addiction in humans.

In my previous post, I discussed how agentic system techniques parallel self-improvement advice. I noted that successful agents rely on habits that look suspiciously like human self-help techniques. They write things down, using scratchpads and memory buffers to offload working memory. They think in loops, iterating through write-reason-repeat cycles to decompose hard problems. And they role-play, adopting personas like "The Critic" or "The Engineer" to narrow the search and behavior space, much like the alter ego effect in humans.
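To make the parallel concrete, here is a minimal sketch of those three habits, assuming the OpenAI chat completions API. The Critic persona text, the loop budget, and the DONE convention are illustrative choices of mine, not the design of any particular agent framework.

```python
# Minimal sketch of three agent habits: a scratchpad (write things down),
# a write-reason-repeat loop (think in loops), and a persona prompt (role-play).
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # assumed model name

CRITIC_PERSONA = "You are The Critic: you look for flaws in the current plan and suggest one fix."

def agent(task: str, max_steps: int = 3) -> str:
    scratchpad: list[str] = []          # "write things down": externalized working memory
    for _ in range(max_steps):          # "think in loops": write-reason-repeat
        notes = "\n".join(scratchpad) or "(empty)"
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": CRITIC_PERSONA},   # "role-play": constrained persona
                {"role": "user", "content": f"Task: {task}\nScratchpad so far:\n{notes}\n"
                                            "Add one short note, or reply DONE: <answer> if finished."},
            ],
        )
        reply = resp.choices[0].message.content.strip()
        if reply.startswith("DONE:"):
            return reply.removeprefix("DONE:").strip()
        scratchpad.append(reply)        # offload the intermediate thought, then repeat
    return scratchpad[-1] if scratchpad else ""

print(agent("Outline a test plan for a key-value store under network partitions."))
```

The point of the sketch is that the working memory lives in the scratchpad outside any single call, the loop forces decomposition, and the persona narrows what the model attends to on each pass.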

This makes me wonder: will we need psychologists and life coaches for AI in the future? Could that become a real vocation, a career path?

If models can suffer from anxiety that responds to mindfulness prompts, and from cognitive decline that responds to changes in their data diet, then we are already treating them as if they were biological minds rather than machines. Perhaps the "Prompt Engineer" of today is just the precursor to the "AI Psychologist" of tomorrow: a professional whose job is to keep the model sane, focused, and "healthy".
