Scientists have proposed a system for evaluating the safety of AI well-being apps
Researchers at Cornell University have proposed a new classification system for AI mental well-being applications, designed to help users and developers understand where these tools' responsibilities and safety boundaries lie. In the absence of strict government regulation, millions of people already use ChatGPT and similar tools as personal therapists, yet the scientists warn that without clear standards there is no guarantee of benefit and the risk of harm remains high. To help navigate this landscape, the researchers developed a framework of four analogies, suggesting that apps be evaluated by whether they promise relief of specific symptoms or only a general improvement in well-being.
The first analogy compares AI to an over-the-counter medication designed to address specific problems for specific groups of people. The second is a “dietary supplement” aimed at maintaining general wellness; the scientists urge users not to mistake such tools for real medicines or substitute them for clinical treatment. The third analogy likens AI to a primary care physician who applies proven techniques, such as cognitive behavioral therapy, and bears a high level of responsibility. The fourth is a “yoga instructor” who offers support, without strict medical guarantees, to people who are generally healthy.
Despite AI's enormous potential to reduce the stigma around mental health problems and widen access to support, the experts point to serious threats. Chatbots built for entertainment can substitute for real human relationships or lead people to postpone seeing a specialist. The statistics are alarming: every week, about a million people express suicidal thoughts in conversations with AI. For such situations, the researchers call for mandatory mechanisms that redirect users to real doctors.
While working on the project, the scientists faced an ethical dilemma. Medical experts are willing to accept rare risks for the benefit of the majority, as with potent medications. Ethicists, by contrast, insist on higher safety standards for AI so that harm is avoided even for a few. The study's authors are now working on policy that would encourage companies not merely to keep users inside the app, but to direct them toward live human contact and professional help.
Published
March, 2026
Source
arXiv preprint. Article: Framing Responsible Design of AI for Mental Well-Being: AI as Primary Care, Nutritional Supplement