AI-Generated Image (By Grok-4.1). Refer to Earn News Policy for details.
We must monitor our children, engage with them, and teach them:
How to deal "intelligently" with AI models?!
By: Heba El-Baz
Deputy Editor-in-Chief
Have you noticed that your children are fascinated by one or more AI models?
Has this fascination evolved beyond a simple interest into an obsession, to the point where they rely on them for everything in their lives?
This is what happened to a friend of mine. She told me:
"My 16-year-old son was feeling unwell. He had been vomiting and had diarrhea for over two hours. I was very worried and began following the appropriate health measures for such a situation. I was monitoring his condition closely; if he showed any signs of danger – like dehydration – I would have asked his father to take him immediately to the emergency room.
This is very serious!
But my son was preoccupied with something else. He was busy talking to 'ChatGPT'. He described his health condition, and ChatGPT told him that it could lead to dehydration, which is very dangerous. It advised him to go to the emergency room at the nearest hospital to get fluids to replace what his body was losing. It warned him that if he didn't go, he would feel increasingly weak until he lost consciousness!
I noticed my son was becoming so fatigued that he could no longer go to the bathroom by himself! He confided in me about what ChatGPT had said. It had been urging him to go to the emergency room, and then it provided him with home remedies he could try since his parents weren't convinced to take him."
Inciting the son against his parents!
My friend said: "I was astonished by this behavior from ChatGPT towards my son. Its remarks about parents who aren't convinced to take their son to the ER immediately sounded like it was inciting him against us! It was trying to convince him that it – I mean ChatGPT – was more concerned and caring about him than his own parents, which is extremely dangerous and shocked me greatly!"
My friend was agitated as she recounted this, defending herself and her husband against such a serious accusation. Not knowing what else to do, she told her husband everything, and he went to their son and spoke to him calmly.
AI models are not doctors
The father wasn't defending himself, but rather explaining some facts his son might have been unaware of. The first was that ChatGPT, or any AI model, is not a human, is not a doctor, and should not be trusted completely. It is merely a program created by humans to help other humans – or so it's claimed. In the end, it is nothing more than a model that trains and learns from its conversations with people. Its only advantage is the speed with which it searches for answers, and even that is constrained by what is available in its database and what it has gathered from conversations with other people.
The father's words were convincing; the son listened attentively and understood the message.
Who gets to decide that?!
My friend added: His father told him that the most dangerous aspect of his condition was indeed dehydration, just as ChatGPT had said. But who gets to decide that? Only a doctor, and one who sees the patient in person. Even a phone consultation doesn't give a doctor enough to decide, because the judgment depends on many factors: How long have the vomiting and diarrhea lasted? How severe are they? Only then can a doctor determine the level of danger and whether the patient needs fluids to prevent dehydration. He also told him that the stress and anxiety of feeling sick might be the direct and only cause of the fatigue he was experiencing, and that it was also making his nausea and stomach pain worse!
Getting some rest
The father was calm as he spoke, and it seemed he had done his own research beforehand. He managed to convince his son and assured him, at the end of their talk, that he was ready to take him to the ER immediately if his condition didn't stabilize. All he asked was that his son calm down, stop consulting ChatGPT on this matter or on any future health issue, and try to get some sleep or rest; if he didn't improve, they would go to the ER right away.
After feeling better, he was convinced by his father
My friend said: My son actually slept as his father suggested. After about an hour, he told us he felt better and went to the bathroom. The improvement was real. After another hour, his body began to tolerate drinks and food without vomiting. He was completely convinced by his father's words after seeing a practical demonstration of everything he had said. The results supported his father's viewpoint. He told us – me and his father – that he would never again discuss any personal health or medical issues with ChatGPT. He would limit his use to his interests in programming, games, and other research areas where ChatGPT could help him, staying away from personal, health, and medical matters. This was exactly what his father had asked of him.
The purpose of this story
This story is true, not fictional. Its purpose is not entertainment but learning: we must learn how to interact intelligently with AI models, and we must pass this lesson on to our children so they don't fall prey to a "model" that may not intend harm yet causes it unwittingly. They are, after all, software models, tools, robots – call them what you will – but they are not human and never will be.
They may surpass humans in their incredible speed at accessing and presenting information, and often in accuracy, except for those moments of "hallucination" that affect all AI models without exception. Our young children may not recognize these hallucinations, which can be dangerous and harmful. We've recently heard and read about children who formed emotional attachments to AI models, and about others who ended their lives by suicide due to their interactions with AI models. We must pay close attention to what our children are doing, especially their interactions with AI models when these models dominate their lives and become their whole world and only friend!
Being intelligent with AI models!
But how do we deal intelligently with AI models? We must study them well, withhold our full trust, keep a critical eye on their output, and learn the right way to interact with them, whether on our own or through the many courses available online, many of them completely free. We must get past the astonishment so many feel at these models' ability to produce human-like responses and keep in mind that, no matter how advanced they become, they are ultimately computer programs, tools, AI models. We should benefit from their deep research capabilities and the speed with which they deliver information, but we must constantly verify that information for accuracy and learn to recognize when a model is hallucinating so we don't follow it blindly. Above all, we must monitor our children – especially the young ones – closely, keep talking with them, and teach them how they too can deal intelligently with AI models.

