Artificial intelligence (AI) is quickly becoming part of everyday life, putting instant answers to almost any question at your fingertips. It’s no surprise, then, that many people are turning to tools like ChatGPT for health information. A recent poll found that about one in three adults nationwide used AI for health-related questions in the past year.
But is that a good idea? And how much is too much?
What chatbots can do well
First, it’s okay to try it. AI tools can feel personal and are easy to use. Chatbots can break down complex health topics into language that’s easier to understand, helping you learn about conditions, symptoms, and possible treatments. They can even help you develop effective workout routines and tailored dietary plans.
Chatbots can also improve your next doctor’s visit. When you come in with a better understanding of your condition, you’re more prepared to ask detailed questions and work with your provider to decide which options are best for you.
Where chatbots fall short
Chatbots may be convenient, but they’re not a substitute for professional medical advice. These tools are only as good as the information you enter and the information that the chatbot finds online. Leave out a key symptom or detail, and the response you get may be incomplete, misleading, or fail to reflect the seriousness of your condition. That’s where a human provider makes a difference. They’re trained to ask follow-up questions about symptoms that you may not have mentioned, didn’t notice, or dismissed as minor.
Chatbots and your mental health
You might turn to a chatbot when dealing with a personal or emotional concern. While that can feel helpful in the moment, it’s important not to rely on it too heavily. There are a few signs to watch out for:
- Overuse: You find yourself turning to a chatbot repeatedly and overanalyzing your thoughts or situations.
- Decision dependence: You rely on it to make even simple choices, such as weekend plans.
- Catastrophizing: You take responses at face value and begin to assume the worst about your situation.
Teenagers and AI chatbots
With millions of teens using AI chatbots, there’s a good chance your child has tried one. For some, it’s just entertainment. For others, it can fill time when they’re bored or feeling lonely. Over time, these tools can start to feel like companions, always available and responsive.
If your child begins relying on AI for emotional support or reassurance, or struggles to make decisions without it, it might be time to step in. Learning to make healthy decisions and exercise their own creativity are important milestones you want them to reach. Keep in mind, the goal isn’t to take technology away completely, but to help your child build a healthy balance.
- Start with an open conversation: Ask how they’re using AI tools and what they like about them. Keeping the tone curious rather than critical makes it more likely they’ll be honest.
- Set reasonable boundaries: Create limits around when and how often AI can be used, especially during times meant for schoolwork, family, or sleep.
- Encourage real-world connections: Help your child stay engaged with friends, hobbies, and activities that build social skills and confidence outside of a screen.
- Teach them to think critically about AI: Remind them that chatbots don’t truly understand them and can be wrong. Encourage them to question responses rather than accept them as fact.
Help is here if you need it
If you or your child feel withdrawn, anxious, or overly dependent on AI, consider reaching out to a counselor or mental health professional. Luminis Health is here to help. Our Behavioral Health Urgent Care is open Monday through Friday, 7:30 am to 7:00 pm, and on Saturdays, 8:00 am to 1:00 pm. No appointment is necessary. The center is located on the campus of Luminis Health Doctors Community Medical Center in Lanham.
Danny Watkins is Luminis Health’s senior director of Behavioral Health Nursing and Operations.