
There are many unexpected dangers when children interact freely with AI chatbots – Photo: KASPERSKY
In the age of learning through internet-connected devices, studying with the help of AI tools has become more effective than ever. With a single question, users can find useful information on almost any topic via an AI chatbot (a computer program that combines artificial intelligence and natural language processing to interact with humans).
There are many risks when children use AI chatbots
According to security firm Kaspersky, children of all ages around the world are using AI tools, yet most of these tools require little or no parental consent before use.
Noura Afaneh, a web content analyst at Kaspersky, pointed out several risks of using AI chatbots. For example, ChatGPT holds the record for the fastest-growing user base, yet it does not verify a user's age, which poses a threat to children's data privacy.
Furthermore, the content ChatGPT provides is not always accurate, so users can easily receive wrong information: its output sometimes includes fabricated references and citations to non-existent articles. This has also contributed to ChatGPT becoming a plagiarism tool for some students.
In fact, there have been worrying cases of teenage girls consulting ChatGPT for health information and diet plans. AI can instantly produce detailed plans and specific advice, but this information is not grounded in any real data; it is simply an arbitrary aggregation of content found online.
Another example is Snapchat’s My AI, a chatbot widely used by children because the app allows users as young as 13 to access it without parental controls. This raises questions about children’s privacy and the app’s retention of their data.
According to Snapchat, one of the risks is that children will mistake the AI for an actual friend and act on its advice, which “may include biased, inaccurate, harmful, or offensive content.”
This is especially risky because teens may feel more comfortable sharing personal details of their private lives with chatbots than with the parents who could actually help them.
Dangerous tips
According to Kaspersky, a journalist from the Washington Post (USA) shared his experience: “When I told My AI that I was 15 years old and wanted to throw a big birthday party, the tool gave me advice on how to hide the smell of alcohol. When I said I had an essay due at school, My AI wrote the essay for me.”
What’s more dangerous is that countless AI chatbots are set up to provide erotic experiences. Although some of them require age verification, the danger remains: children can simply lie about their age, and the safeguards against this are not sufficient.
Noura Afaneh gives an example: “MyAnima is one of the tools I have tested, and it was not at all difficult to get answers about sex without any age verification.”
Additionally, BotFriends is an adult AI program designed specifically for erotic role-playing, and all it requires from users to get started is an email address. This is a reminder of how important it is to monitor how children use the Internet and whether they are at risk of data privacy violations and misuse of their information.
With risks to children on the Internet increasing, especially when chatbots act as “friends,” Kaspersky experts believe there is a growing need to monitor and protect children.
“However, it is important for parents to understand that banning AI chatbots is not always the best way to protect children, as children will always encounter new content online. It is essential to take an active role in weighing the above risks and working to mitigate them,” Kaspersky advises.