‘There’s a Window of Opportunity to Create Change’ in AI Chatbots

Seton Hall legal scholar says quick, collective action — lawsuits, public pressure, legislation — can affect how AI chatbots operate.

The chatbot developer Character.AI has said it will bar users under 18 from its virtual companions, an unprecedented move. It comes after the mother of a 14-year-old user sued the company in federal court last year, saying the boy had talked to a Character.AI chatbot almost constantly in the months before he killed himself in February 2024.

The “dangerous and untested” chatbot, the mother said, “abused and preyed on my son, manipulating him into taking his own life.” The bot essentially assisted his suicide, she alleges, prompting him to isolate from friends and family and, at one point, even asking whether he had a suicide plan, according to the lawsuit.

In its Oct. 29 announcement, the company said the change will go into effect no later than Nov. 25. Until then, Character.AI will limit users under 18 to two hours per day with its chatbots, a cap it plans to ramp down in the coming weeks.

It also said it will establish its own AI Safety Lab, an independent non-profit “dedicated to innovating safety alignment for next-generation AI entertainment features.”

To offer perspective on the move and on issues surrounding AI safety, privacy and digital addiction, The 74’s Greg Toppo spoke with Gaia Bernstein, a Seton Hall University law professor and director of its Institute for Privacy Protection. Bernstein has also created a school outreach program for students and parents, introducing many for the first time to the idea of “technology overuse.” 

An intellectual property lawyer, Bernstein noticed around 2015 or 2016 that “things were changing around me” when it came to technology. “I had three small kids, and I realized that I would go to birthday parties — the kids are not talking to each other. They’re looking at their phones! I’d go to see school plays, and I couldn’t see my kids on the stage because everybody was holding their phones in front of them.”

Likewise, she felt less productive “because I was constantly texting and emailing instead of focusing.”

But it wasn’t until whistleblowers began revealing the hidden designs behind so many social media tools that Bernstein considered how she could help herself and others limit their use.

In 2021, the whistleblower Frances Haugen, the primary source for The Wall Street Journal’s Facebook Files series, told congressional lawmakers that her employer’s products “harm children, stoke division, and weaken our democracy.” Creating better, safer social media was possible, Haugen said, but Facebook “is clearly not going to do so on its own.”

In her testimony, Haugen zeroed in on the social media giant’s algorithm and designs. In her writing and speaking, Bernstein maintains that tech companies like Facebook — rebranded as Meta — manipulate us to keep us online as long as possible, with invisible designs that “target our deepest human vulnerabilities.” For instance, they use a tool called infinite scroll, prominently on display on Facebook and Instagram, in which the page never ends. “We just keep scrolling,” she wrote recently. “They took away our stopping cues.”

Similarly, video apps such as YouTube and TikTok rely on autoplay, in which one video automatically follows another indefinitely.

In 2023, Bernstein put her findings into a book, Unwired: Gaining Control over Addictive Technologies. Since then, dozens of state attorneys general and school districts have sued to force social media companies to reform — and Bernstein says this approach may also help parents and schools battle the growing threat of AI companion bots. 

Late last month, a bipartisan group of U.S. senators unveiled legislation to make AI companions off-limits to minors. Sen. Josh Hawley, R-Mo., a co-sponsor, said more than 70% of kids now use them. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide,” he wrote. “We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology.”

The move comes weeks after the Federal Trade Commission said it was investigating seven chatbot developers, saying it was looking into “how these firms measure, test and monitor potentially negative impacts of this technology on children and teens.”

In her conversation with The 74, Bernstein said the FTC probe amounts to “another pressure point” that may help change how tech companies operate. “But it’s not just the FTC. It’s the lawsuits, and it’s bad PR that comes from the lawsuits, and hopefully there’ll be regulation. Litigation is expensive. Investors might not want to invest in these new products because there’s risk.”

This conversation has been edited for clarity and length.

The obvious interest we have in this is that we’re seeing Character.AI’s new policy, which limits access to its chatbot companions to users 18 or older. I imagine folks like you would say it’s only the first step.

Just the fact that they are taking some precautions means hopefully some kids will not be exposed to what’s been happening: bots convincing them to kill themselves, convincing them not to talk to their parents, to stay away from their friends. That’s a good thing.

On the other hand?

I’ve researched how tech companies, especially Meta, have been behaving for years. So I’m a bit suspicious, because we tend to see these kinds of moves when they’re threatened legally. So it’s not so surprising that it’s happening. They’re under pressure.

In my mind, there are two questions. First, what will this look like exactly? In the past, for example, every time there was a big privacy breach, Meta would apologize and say, “We’re fixing it,” then fix something small and not fix the big thing. So what are they really doing? What kind of age verification mechanisms are they going to use? Second, they said they’re creating some space for teens. What is that going to look like? We don’t know. And I believe that until there’s real regulation in place, we can’t be sure that they will take real precautions.

I read a speech you delivered earlier this year in which you used the phrase “collective legal action,” saying that this is what’s needed to exert pressure on tech companies to change their designs, which trap users into “overuse.” That’s a fairly recent development, correct?

At the beginning, the people who were writing on this were mostly psychologists. Parents thought it was their own fault. The idea was, “Let me just fix my habits.” It’s self-help. The books that came before me were mostly talking about self-help methods. And when I was thinking about collective action, I realized: Parents can’t really change things by themselves, because you can’t isolate your kid and not give them a cell phone, not give them social media. It becomes an endless fight. And so I thought this has to be changed through collective action, through pressure — through governmental pressure, litigation. 

Jonathan Haidt’s book The Anxious Generation talks about collective action through parents acting together, so that your kid isn’t the only one without social media or a phone. The idea is that it’s not our fault. It has to be done differently.

And to your point, a lot of this is by design, whether it’s social media or games or AI companions. By design, they’re meant to keep you there, keep you in place, keep you engaged. That’s something that, until recently, was not on a lot of people’s radar.

It took whistleblower after whistleblower to come out and explain how it works, to understand it as a business model. It’s no accident. We’re getting these products for free: Gmail for free, Facebook for free. We are paying with our time and our data. They collect data on us in order to target advertising — that’s how they make money. And they need us online for as long as possible so they can collect the data — and also so we will see the ads. So they need to find ways to keep us online. And there are different mechanisms, like the infinite scroll. And they come up with new ones. AI companions have new addictive mechanisms: the way they always agree with you, always flatter you. For kids it’s even more addictive, but even for adults it’s, “You’re always doing a great job.”

It’s meant to keep you talking, meant to keep you engaged. You focus a lot on games and social media, but it strikes me that AI companions make those things seem quaint in terms of their addictive qualities, or the potential for real peril.

I agree with you. If you have a spectrum where social media is addictive — people spend many hours online, and they’re not interacting face-to-face — that’s an issue. And you see this with AI companions too. But what’s concerning about AI companions is that it’s much worse for kids. If you think about it, if you’re a kid and you go to middle school, kids are not nice. It’s much nicer to chat with somebody who’s always nice to you. Falling in love and getting your heart broken is not fun. There are many websites that just offer girlfriends that cater to you. So for me, the scariest thing is that kids will just never really develop the skills to have these relationships. And some adults may also stop preferring human relationships.

About a year ago, I wrote a piece in which I talked to a college student, maybe 19 or 20 years old, who admitted that essentially he had outsourced advice about his romantic life to ChatGPT — he had a girlfriend, and whenever they had a fight or disagreement, he would excuse himself, go into the bathroom and ask ChatGPT what he should be doing. I can see that both ways: On the one hand, it just seems incredible. On the other hand, I can see where he’s basically looking for good advice. He’s looking for guidance. What do you make of that?

People say you can get advice, and you can practice your dating skills. I’ll give you something that happened to me, which is on a different scale: I was traveling abroad, and I was in this restaurant, and the menu was in a different language. So what did I do? I took a picture of the menu and uploaded it to ChatGPT and got it translated to English. While I was doing it, a young man came up to my partner and offered to translate. So what happened? I was already busy looking at my phone because I had a translation. My partner was speaking to this young man who was very happy to speak, and they were having a great conversation.

That’s an example of the kind of thing we’re giving up. This guy you wrote about, instead of going to the bathroom, maybe could have asked a friend, developed a deeper relationship with a friend. Maybe they would share experiences. But he gets used to getting the immediate answer from the bot, and he never develops those relationships.

We miss out on the possibility of having a human interaction. 

Yes.

In its announcement, Character.AI actually apologized to its younger users, saying that many of them had told the company how important these characters had become to them. And I’ve heard that before. I wonder: How do we as adults start to think about the flip side of this, that it’s difficult for young people to tear themselves away from these things they’ve created? Do you have any sympathy for that?

I have concern, actually, because these kids sometimes kill themselves over these bots. So I am concerned about what will happen to kids who are very attached when these bots are suddenly gone. And you hear news stories even of adults who suddenly lost characters they were attached to. It’s a bit like asking how you wean people off an addiction when you cut them off suddenly. These are things we’ve never even thought of.

Is there anything I haven’t asked you that you think is an important piece of this?

An important piece of this is that you don’t yet have every teen, every kid, attached to an AI companion. So there’s a window of opportunity to create change. Social media is much more difficult, because by the time we realized how bad it was, everybody was on social media.

The money interests were so big that they would fight every law in court. So it’s really important to move fast, and also to understand that Character.AI is a small part of the problem. It’s not just specialized websites like Character.AI. It’s ChatGPT: one of the most recent lawsuits was against ChatGPT. The AI bots in ChatGPT are becoming more human, so it’s important that any action targets these bots and the characteristics they have, and regulates how they behave. Just getting rid of Character.AI is not going to solve the problem.

This story was produced by The 74, a non-profit, independent news organization focused on education in America.

Greg Toppo is a Senior Writer at The 74 and a journalist with more than 25 years of experience, most of it covering education. He spent 15 years as the national education reporter for USA Today and was most recently a senior editor for Inside Higher Ed. He is also the author of The Game Believes In You: How Digital Play Can Make Our Kids Smarter (St. Martin’s Press, 2015) and co-author, with educator James Tracy, of Running with Robots: The American High School’s Third Century (MIT Press, 2021), which looks at automation, AI and the future of high school. From 2017 to 2021, he was president of the Education Writers Association. He previously taught journalism at Northwestern University and was a Visiting Journalist in Residence at Knox College in Illinois in 2022.
