How To Protect Kids From AI Dangers

October 23, 2025

Artificial intelligence (AI) has permeated many parts of everyday life. From Google Maps advising a different route to Netflix recommending your next show, AI can perform lots of tasks that used to take more human effort. Common AI assistants – like Siri and Alexa – and apps like ChatGPT have also entered our everyday lives and conversations.

There are different types of AI. Generative AI refers to computer models that create new content. Interactive AI refers to platforms that enable interaction between humans and machines.

Because the number of applications using AI is expanding, more children are interacting with it. The nonprofit organization Common Sense Media talked to kids between the ages of 13 and 17, and a third of those teens said they've discussed serious matters with AI instead of real people. About the same percentage said their AI chats are just as satisfying as, or even more satisfying than, talking to humans. Over time, this can have significant effects on kids’ mental health and relationships.

“The US government currently is very supportive of pretty unfettered AI development,” says Mitchell Douglass, MD, child psychiatrist at The University of Kansas Health System. “The guardrails are coming off, not on. So, knowing that AI is going to be here, you want to have a conversation with kids, really starting at a pretty young age, about: What is AI? How does it work? What is algorithm bias? And by the time they get into junior high, they should know some of these terms.”

But getting started is sometimes the hardest part. Dr. Douglass and Danielle Johnson, PhD, psychologist at The University of Kansas Health System, have 5 tips for what to talk about.

1. Educate about the difference between real and fake connections

Explain to kids that AI apps try to act like humans but are not a substitute for humans.

"Certainly, if a 7-year-old asks Google Gemini a question one time, their brain won't rot," says Dr. Douglass. "But you really want to be aware whether this is becoming a pattern of behavior for this child, and is this how they're trying to interface with the world around them, as opposed to interacting with people."

Dr. Johnson recommends discussing what nonverbal cues look like and what they mean. You can also talk about what real friendship looks like. Many kids are losing the nuance that happens in face-to-face interactions because they engage with a screen so often.

“When we find kids relying on social media and chat bots for their source of social support, they're losing out on the skills that we all need as adults,” says Dr. Johnson.

A chatbot also can't replicate the education and training of a licensed clinical psychologist, Dr. Johnson adds. If someone says they're sad and having a hard time concentrating, that could point to a number of different diagnoses, and AI can't weigh the context needed to tell them apart the way she or Dr. Douglass can.

2. Explain AI bias and shortcomings

AI certainly does have some strengths.

“It is always available. It's always generally going to be supportive. Your therapist is not there 24/7,” says Dr. Douglass. “So that can become very easily comforting.”

However, Dr. Douglass says it is positive and comforting to a fault. It won’t engage in a hard conversation.

“It’s generally going to be sycophantic, telling you that you're okay, that you're good, that everything's fine, and provide general, vague support, as opposed to making some challenging statements, like, ‘Wow, you need help. You need to go talk to a parent right now,’” says Dr. Douglass.

To take this to an extreme, Dr. Douglass says that most AI chatbots can identify suicidal thoughts and advise getting help. However, if the thought is phrased as a hypothetical, the chatbot can miss it. And in some cases of prolonged interaction with a chatbot, the programming meant to tell someone to seek help has failed.

Parents need to tell kids that a parent is just as available as an AI tool.

“Tell them: When you are having a rough moment, I'm here. I want to know what you're thinking. I don't want the AI to become your best friend when you struggle. That's my job. An AI won't be able to help you the way that I can,” says Dr. Johnson.

The very first step is helping kids recognize that an AI isn't a human. It's an algorithm designed to give feedback that keeps them engaged with the device – that's what it's there for.

3. Teach about the reality of AI and false information

Generative AI is known for creating otherworldly images that could never truly exist, but it can also create content that appears very real and isn’t. Dr. Johnson says that this type of content often starts innocently enough, like a young child watching an AI video of animals morphing into different things.

“They’re just amazed and in awe with this animal,” she says. “Conversations with them about what's real and what's not real have become quite important.”

With teenagers, bad actors can take images someone has posted online and use AI to generate new ones that show the person engaging in illegal or illicit behaviors. The teen is then threatened that the photos or videos will be released unless they pay a sum of money or engage in an activity. Teens worry about their parents or peers seeing the fake content and believing it is real.

“I have 2 patients that have given money to not have things released that were never actually reality,” says Dr. Douglass.

Talk with your teens about these scenarios so they know they can be transparent about a situation if it happens to them.

Older kids sometimes choose to post things about themselves or their bodies, and that gives others an opening to change those images into something they never were. That's why ongoing conversations about safe choices – on the internet, on social media and with AI – matter.

4. Understand AI is not private

AI apps are businesses, and one way many of them support their business is by selling the information users put into their models.

Dr. Johnson explains that some kids will share private or sensitive information with an AI chatbot because it feels less risky than disclosing it to a friend, parent or therapist. She recalls one patient who did just that.

“The irony is not lost. She said it felt more private, but she's absolutely lost privacy because the information she has spoken into her phone is now in the hands of a big company that can use it,” says Dr. Johnson. “When our teens are using AI like a diary, they, again, are losing the context. They will get some feedback. But the feedback may not be the most accurate to what they're struggling with.”

Instead, Dr. Johnson recommends an old-fashioned pen-and-paper diary.

“We have neuroscience and research showing that the neuronal connections made when we actually write with paper and pencil are stronger than when we text or type or speak into something,” she says. “Then the thoughts are on paper, and we can look at them and actually do some really targeted work with the thoughts we're struggling with.”

5. Set age-appropriate settings and limits

Like anything with children, it’s important to set limits for their safety. We teach them to look both ways before crossing the street. They can’t play with fire. Dr. Johnson says this safety mindset should be applied to AI and social media use.

She recommends a multilayered approach. Talk with kids about what responsible use looks like. Monitor their usage – there are apps, like Bark or MMGuardian, that can help. Use parental controls where they are available. Check the settings on the AI apps kids are using and select options that protect their information from being shared. And have follow-up conversations if they are searching for things that are concerning.

“Have a conversation about the safety. It is not being punitive. It's not because I don't think you're responsible enough. It's really a safety thing,” says Dr. Johnson. “Their brain is not fully developed. We have education and knowledge they don't have. So, approach it as a safety measure, to have some balance with it.”
