A new bill introduced in the US state of California seeks AI chatbot warnings for minor users, but is it actually feasible?

We missed this earlier: A new bill has been proposed in the US state of California requiring AI companies to periodically remind minor users that their chatbots aren’t human. Introduced by California State Senator Steve Padilla, the legislation aims to “protect children from predatory chatbot practices”.
The bill states that chatbot operators must provide a “clear and conspicuous notification periodically” to remind users that the chatbot is artificially generated. AI companies must also furnish an annual report to California’s State Department of Health Care Services detailing the following:
Instances where the operator detected “suicidal ideation” by minor users.
Instances where a minor user whose “suicidal ideation” was detected attempted or died by suicide.
Instances of the chatbot itself suggesting such ideation.
Further, AI chatbot operators must undergo regular third-party audits to ensure compliance with this proposal.
Why this matters
The proposal follows two lawsuits against AI chatbot service Character.AI for encouraging suicide and perpetrating sexual abuse, alongside other adverse impacts on a teenager’s mental health. Further, in a press release announcing the bill, Padilla recounted several troubling conversations in which an AI chatbot endangered the safety of minors through absurd recommendations. Besides this, a study by the University of Cambridge highlighted the risk of minors anthropomorphising chatbots (assigning human attributes to non-human entities) and placing greater trust in AI chatbots than in humans.
How feasible is this solution?
Padilla’s efforts seek to prohibit companies from deploying “addictive engagement patterns” that affect “impressionable users”. To assess whether the proposed measures would hold up and actually safeguard children, MediaNama spoke with Anureet Sethi, Founder of the mental wellness company Trijog, and Mihika Shah, Supervisor of its Child, Adolescent and Assessment Wing. Here is what they had to say:
DISCLAIMERS FAIL IN CERTAIN SITUATIONS
Sethi and Shah explained that cognitive biases like anthropomorphism and the Eliza effect (where users project human-like understanding onto AI) can cause children to treat chatbots as having emotions or genuine understanding. “Overall, despite repeated reminders young children may struggle to fully differentiate chatbots from humans. This tendency is stronger in children who are more imaginative or socially isolated”, they added. Additionally, they said that repeated interactions can reinforce familiarity and trust, leading users to overlook disclaimers.
While there is little evidence on how warning labels on AI-generated content affect teenagers specifically, experiments have analysed their overall impact on individuals. For instance, two online experiments surveying over 7,500 Americans found that warning labels on AI-generated content “significantly reduced individuals’ belief in the posts’ core claims”, while also reducing their likelihood of sharing the posts. Notably, of the four labels the research applied to such content (“AI-generated”, “artificial”, “manipulated”, and “false”), the first, which focused on the process used to create the content, had a significantly smaller impact on viewers. This is worth comparing with the notifications prescribed by the proposed California bill, which would state that chatbots are “artificially generated” and not “human”.
While the research examined the impact of AI content labels on social media, it remains unclear how such disclaimers affect chatbot users who are already aware they are interacting with AI.
ALTERATION OF EMPATHY
Children’s regular interactions with chatbots, despite repeated reminders, also raise questions about the impact on their perceptions of human interactions. Elaborating on this, Sethi and Shah opined that children may develop expectations that real, human interactions should provide quick, structured, and non-judgemental responses akin to chatbots. “This could cause frustrations in real-life relationships, where responses are slower, less predictable, and influenced by emotions”, they added.
DIGITAL LITERACY VS DISTRUST IN ONLINE INTERACTIONS
While disclaimers can be ineffective in certain situations, they can also have positive impacts, such as building digital literacy, they explained. However, Sethi and Shah caution that if the messaging is “too rigid” or “fear-based”, it could spur distrust in all online interactions, including legitimate sources of support. They explain, “If children are frequently told ‘Chatbots cannot help you with real problems’, they may generalise this scepticism to other digital platforms, including online counselling services, mental health forums, or educational tools”.
CHATBOTS FOR EMOTIONAL DISTRESS
On the use of AI chatbots to cope with emotional distress and mental health issues, they note that constant reminders can serve as a reality check, preventing over-reliance on AI for serious problems. However, these reminders cut both ways: they could also deter children from confiding in any form of digital support, leaving them feeling isolated if they lack alternative sources of help, they added.
Importantly, while researchers have found that mental health chatbots significantly reduce short-term depression and distress, experts have persistently cautioned against using them, citing the bots’ lack of technical expertise and tendency to make up information, National Geographic reported. Further, researchers have concluded that many such chatbots currently moonlighting as mental health counsellors are “untested and unsafe”.