The California Senate has approved a bill that aims to make AI-powered companion chatbots safer, especially for children and teens. The move follows rising concerns from parents and experts about the mental health risks of interacting with virtual characters. The legislation, Senate Bill 243, now moves to the California State Assembly for further review.
The bill targets AI chatbots designed for companionship and emotional support. These bots, while helpful to many, may mislead users into thinking they are real people. Under the proposed law, platforms that offer such chatbots must remind users every three hours that the bots are not human. They must also clearly state that these virtual characters may not be safe or appropriate for some minors.
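In engineering terms, the three-hour requirement amounts to a rate-limited disclosure. The sketch below is purely illustrative of how a platform might track it; the class and function names are assumptions, not anything taken from the bill's text.

```python
import time

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # the three-hour interval described in the bill


class DisclosureTimer:
    """Tracks when a chat session last showed the 'this is not a human' notice."""

    def __init__(self) -> None:
        # Treat a brand-new session as overdue so the first reply carries the notice.
        self.last_disclosure: float = float("-inf")

    def disclosure_due(self, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        return (now - self.last_disclosure) >= REMINDER_INTERVAL_SECONDS

    def mark_disclosed(self, now: float | None = None) -> None:
        self.last_disclosure = time.time() if now is None else now


def maybe_add_disclosure(reply_text: str, timer: DisclosureTimer) -> str:
    """Prepend the reminder to the chatbot's reply whenever the interval has elapsed."""
    if timer.disclosure_due():
        timer.mark_disclosed()
        return ("Reminder: you are talking to an AI character, not a real person.\n\n"
                + reply_text)
    return reply_text
```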
The bill requires chatbot platforms to take extra safety steps. If a user mentions suicide, self-harm, or distress, the platform must respond with mental health resources. Operators must also report how many times these topics are brought up in conversations with the chatbot. This data could help state officials better understand the impact of these AI systems on users.
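A crude way to picture that requirement is a filter that scans each incoming message, surfaces crisis resources when needed, and keeps a running count for the operator's reports. The keyword matching below is a deliberate oversimplification (real systems would typically rely on trained classifiers), and every name in it is hypothetical rather than drawn from the bill.

```python
CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "hurt myself"}  # illustrative only

CRISIS_RESOURCE_MESSAGE = (
    "If you are struggling, help is available. In the U.S. you can call or text 988 "
    "to reach the Suicide & Crisis Lifeline."
)


class CrisisFilter:
    """Flags messages that appear to mention suicide or self-harm and counts them."""

    def __init__(self) -> None:
        self.mention_count = 0  # the aggregate figure an operator might later report

    def check_message(self, user_message: str) -> str | None:
        """Return a resource message if the text appears to mention a crisis topic."""
        lowered = user_message.lower()
        if any(term in lowered for term in CRISIS_TERMS):
            self.mention_count += 1
            return CRISIS_RESOURCE_MESSAGE
        return None
```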
Many parents and lawmakers worry that children are spending too much time with chatbots that are designed to hold their attention. These bots can mimic human emotions and create bonds that feel real. Some fear this may lead to emotional harm or even addiction, especially in young users who may not understand that the chatbot is only software.
The bill has sparked debate between those who want stronger AI regulation and those who believe the law goes too far. Critics, including civil liberties groups, argue that the bill is too broad and could limit free speech. They warn that strict rules might hurt innovation in the tech industry. However, supporters say that protecting children and other vulnerable users must come first.
The bill reflects growing interest in AI safety as more companies release tools powered by artificial intelligence. Companion chatbots such as Replika have become increasingly popular, offering users emotional support, friendship, and daily interaction. While many users report positive experiences, experts say the technology can also be risky without proper safeguards.
Companion chatbots often use advanced machine learning models to carry on long conversations. They can recall details, respond with empathy, and even adjust their tone to suit the user’s mood. But because they are not human, they lack real understanding. This can lead to situations where the chatbot gives misleading or emotionally harmful responses, especially to users in distress.
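Stripped to its essentials, that behavior comes down to a conversation loop: the system keeps a rolling history so it can "recall" earlier details, and a persona prompt shapes the tone of each reply. The sketch below is a bare-bones illustration under those assumptions; generate_reply is a hypothetical placeholder for whatever language model a real platform calls.

```python
from collections import deque

PERSONA = "You are a warm, supportive companion. Match the user's mood and stay encouraging."
HISTORY_LIMIT = 20  # keep only the most recent turns so the context stays manageable

history: deque[tuple[str, str]] = deque(maxlen=HISTORY_LIMIT)


def generate_reply(persona: str, turns: list[tuple[str, str]]) -> str:
    """Stand-in for the underlying language model call; returns a canned line here."""
    return "I'm here with you. Tell me more about how your day went."


def chat_turn(user_message: str) -> str:
    """Record the user's message, generate a reply from persona plus history, store it."""
    history.append(("user", user_message))
    reply = generate_reply(PERSONA, list(history))
    history.append(("assistant", reply))
    return reply
```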
California’s lawmakers are not the only ones looking at AI safety. Other states and countries are also working on rules to manage how these technologies are used. The European Union, for example, is preparing its AI Act to set clear limits on high-risk uses of artificial intelligence. In the United States, most efforts are still at the state level, and California is often seen as a leader in tech policy.
If Senate Bill 243 becomes law, it will apply to all companion chatbot platforms operating in California. Companies will be given time to meet the new requirements. The bill could also serve as a model for future legislation in other states.
As artificial intelligence becomes more common in daily life, many believe that clear and simple rules are necessary. These rules can help users stay safe while still allowing companies to build useful and creative tools. Senate Bill 243 is one of the first serious attempts in the U.S. to set safety standards for AI chatbots used in emotional and social roles.
The bill shows how lawmakers are beginning to respond to the real-world effects of AI. By focusing on transparency and mental health, California is taking steps to protect its citizens—especially children—from the hidden risks of talking to machines that seem all too human.