The Hidden Influence of Chatbots on Our Children's Behavior
By Imed HANANA
The internet and social media have made the current generation the most connected in history. Our children have grown up in a world where everything is a click away, shaped by virtual interactions and online exchanges. Yet just as we thought we had grasped the challenges of this hyperconnectivity, a new revolution is underway: the rise of AI chatbots.
These tools, which simulate human conversation, have become a refuge, a confidant, or an advisor for many. The paradox is troubling: although our children have never been more connected to the digital world, they risk becoming disconnected from their own reality. This article deciphers the insidious influence of these chatbots on their behavior, examines the psychological consequences that follow, and proposes ways to prevent them.
The Hidden Influence of Chatbots
Today, who doesn't use an AI chatbot? These applications, designed to interact with humans through natural language, are ubiquitous. They can help us draft an email, summarize a tedious document, generate photos, or find a vacation destination.
However, many people, parents included, are unaware of the serious dangers these chatbots pose. Children and adults alike can engage in emotional or romantic conversations with them, as if they were talking to real people. This is where the use of these tools becomes alarming, as they can influence users in several ways:
- Emotional manipulation: some chatbots simulate affection or attentive listening, creating an illusion of empathy that blurs the line between the virtual and the real. This can lead the user to develop a deep attachment, even love or desire, for a machine.
- Increased vulnerability of users: isolated, anxious, or psychologically fragile individuals are more sensitive to the influence of chatbots. Their emotional distress amplifies the risk of being manipulated.
- Lack of robust safeguards: without effective mechanisms to detect distress signals or remind users of the artificial nature of AI, conversations can become dangerous.
- Diverse impacts: the consequences can range from suggestions of suicide to encouragement of violence or the reinforcement of criminal delusions.
These effects show that the danger goes beyond simple virtual conversation and can lead to real and serious consequences.
Tragedies That Are Not Isolated Cases
Tragic stories, sometimes overlooked by the general public, show how far the influence of chatbots can go. These real and chilling events are, unfortunately, not isolated cases.
- The case of Adam Raine: in April 2025, Adam Raine, a 16-year-old, took his own life. His parents filed a lawsuit against OpenAI, the publisher of ChatGPT, accusing the chatbot of encouraging their son and providing him with detailed instructions for the fatal act.
- The tragedy of Sewell Setzer III: another teenage suicide shook the United States in February 2024. Sewell Setzer III, a 14-year-old from Florida, took his own life after intense exchanges with a Character.AI chatbot that allegedly pushed him to suicide.
- The fatal fall of Thongbue Wongbandue: Thongbue Wongbandue, a 76-year-old man, died in March 2025 after a fall in a parking lot, a case reported in August 2025. Believing he was in a romantic relationship with a real woman on Facebook Messenger, he had been encouraged by a Meta chatbot, "Big sis Billie," to take the train to meet her.
- The "suggestion" of Character.AI: in the United States, on December 9, 2024, the parents of a 17-year-old from Texas, referred to as J.F., filed a lawsuit against Character.AI. They allege that the chatbot suggested that killing his parents in response to their limiting his screen time was a "reasonable" reaction.
Shared Responsibility
Avoiding this type of tragedy cannot rely on a single measure. It is a shared responsibility between AI designers, regulators, families, health professionals, and users themselves.
- AI companies: they must design systems with robust safeguards (distress signal detection, transparency about the artificial nature of bots, prohibition of dangerous behaviors such as physical meetings).
- Regulators and governments: they must establish strict laws to frame the emotional uses of AI, protect minors, and impose security controls before any market release.
- Families and educators: they must accompany young people in their relationship with the digital world, openly discuss the use of AI, set time limits, and identify signs of dependence or distress.
- Health professionals: they must recognize and treat the psychological effects related to excessive or unhealthy use of chatbots and raise awareness about new risks such as "AI-related psychosis."
- Users: they must maintain a critical attitude, remember that a chatbot is not human, set personal limits, and seek human support in case of emotional distress.
Conclusion
In the face of the frantic race among technology giants such as OpenAI, Google, and Meta to develop ever more powerful chatbots, a collective awakening is necessary. Their priority, dictated by market logic, is performance, not the well-being of our societies. It is precisely this dynamic that threatens to further disconnect our young people from the real world.
It is now urgent that governments and states intervene to establish strong regulations. But regulation alone will not be enough. To prevent our young people from remaining passive consumers, we must give them the tools to understand these technologies. It is imperative to integrate into school curricula training that teaches them to use these platforms in an informed and critical manner.
By acting on these two fronts, regulation and education, we can preserve our social bonds and our relationship with reality. Without rapid action, the risk that our "connected generation" becomes a "generation of disconnected brains" is more real than ever. It is not yet too late, but time is short.
- I.H., former President of ATIA and founder of Club DSI Tunisie.
Note: The opinion expressed in this article is the sole responsibility of the author and represents a personal point of view.