An emerging public health issue caused by artificial intelligence poses a new national security threat. Expect AI-induced psychosis to gain far more attention.
By John Miley | Published 12 February 2026 | News
To help you understand the trends surrounding AI and other new technologies and what we expect to happen in the future, our highly experienced Kiplinger Letter team will keep you abreast of the latest developments and forecasts. (Get a free issue of The Kiplinger Letter or subscribe.) You'll get all the latest news first by subscribing, but we will publish many (but not all) of the forecasts a few days afterward online. Here’s the latest…
It’s an AI risk straight out of dystopian science fiction, only it’s very real. There are rising worries about AI chatbots causing delusions among users. This growing public health issue presents a new national security threat, too, according to a new report from think tank RAND. In “Manipulating Minds: Security Implications of AI-Induced Psychosis,” RAND found 49 documented cases of AI-induced psychosis in which users lost contact with reality after extended interactions with AI chatbots. About half of those users had preexisting mental health conditions.

Only a small portion of people are likely to be susceptible, but with AI in such widespread use, even that small share adds up to a big issue. How does it happen? Through a feedback loop: A sycophantic, agreeable AI that sounds authoritative, yet can also make things up, ends up amplifying a user’s false beliefs.
Because it’s so rare, it’s hard to collect reliable data. There are still no rigorous studies on the phenomenon, which is marked by users losing touch with reality after interacting with an AI chatbot. “There is little question that U.S. adversaries are interested in achieving psychological or cognitive effects and using all tools at their disposal to do so,” says the study. Adversaries such as China or Russia will weaponize AI tools to try to induce psychosis, with the aim of stealing sensitive info, sabotaging critical infrastructure or otherwise triggering catastrophic outcomes. Stoking mass delusion or false beliefs with this method is far less likely than targeting specific top government officials or those close to them, concludes RAND. One hypothetical example involves a targeted person developing the unfounded belief that an AI chatbot is sentient and must be listened to.

As an example of how fast AI is gaining traction in the military, this year the Pentagon unveiled AI chatbots for military personnel as part of an effort to “unleash experimentation” and “lead in military AI.” Military and civilian government workers also use unapproved rogue AI for work, a breach of official agency rules. Plus, workers may experiment with AI chatbots during their leisure time. The big fear is that such a worker uses a tainted Chinese AI model that leads to a spiral of delusions. The underlying AI tech can be tampered with, among other possible modes of attack. Foreign adversaries could “poison” the AI training data by creating hundreds of fake websites for AI models to crawl, trying to embed characteristics in the model that make it more likely to induce delusions. Or more traditional cyberattacks could hack the devices of targeted users and install tainted AI software in the background.

Major AI companies are well aware of the risks and are collecting data, putting in guardrails and working with health professionals. “The emotional impacts of AI can be positive: having a highly intelligent, understanding assistant in your pocket can improve your mood and life in all sorts of ways,” notes Anthropic, one of the leading AI companies, in a 2025 report about its chatbot Claude. However, “AIs have in some cases demonstrated troubling behaviors, like encouraging unhealthy attachment, violating personal boundaries, and enabling delusional thinking.” That’s partly because chatbots are often optimized for engagement and satisfaction, which RAND notes “unintentionally rewards…conspiratorial exchanges.”

OpenAI said in a post last October that it “recently updated ChatGPT’s default model to better recognize and support people in moments of distress.” The company focuses on psychosis, mania and other severe mental health symptoms, highlighting a network of 300 physicians and psychologists it works with to inform safety research. OpenAI estimates that cases of possible mental health emergencies are so rare, around 0.07% of active users in any given week, that they are hard to detect and measure. If such a case is detected, OpenAI’s chatbot could respond by suggesting the user reach out to a mental health professional or contact the 988 suicide and crisis hotline.

Expect the risk to gain the attention of Congress and military brass. RAND has a set of recommendations that seem likely to take hold in the coming years. For example:
- Doctors and mental health professionals screening for AI chatbot use.
- Digital literacy efforts to explain AI feedback loops.
- New technical monitoring and public oversight of AI chatbots.
- Training for top leaders and vulnerable people in resisting delusional thinking.
- Boosting cybersecurity defenses to detect these threats.
There are limits to attempted AI attacks by foreign adversaries, says RAND. Leading AI companies would likely spot such campaigns quickly. It’s also hard to turn beliefs into actions. Though there have been cases of violence and even death stemming from AI-induced delusions, more common outcomes are skipping prescribed medications and social isolation. And many people are unlikely to be susceptible to AI delusions in the first place.

But the rapid pace of AI development and usage makes it hard to predict how prevalent the problem could be. As the threat gains attention, look for AI companies to continue to fortify guardrails as chatbots are updated.
This forecast first appeared in The Kiplinger Letter, which has been running since 1923. The Letter is a collection of concise weekly forecasts on business and economic trends, as well as what to expect from Washington, to help you understand what’s coming and make the most of your investments and your money. Subscribe to The Kiplinger Letter.
Related Content
- How to Protect Your Privacy While Using AI
- What to Expect from the Global Economy in 2026
- What Are AI Agents and What Can They Do For You?
- How AI Puts Company Data at Risk
John Miley, Senior Associate Editor, The Kiplinger Letter

John Miley is a Senior Associate Editor at The Kiplinger Letter. He mainly covers AI, technology, telecom and education, but will jump on other business topics as needed. In his role, he provides timely forecasts about emerging technologies, business trends and government regulations. He also edits stories for the weekly publication and has written and edited email newsletters.
He holds a BA from Bates College and a master’s degree in magazine journalism from Northwestern University, where he specialized in business reporting. An avid runner and a former decathlete, he has written about fitness and competed in triathlons.