Mozilla, the company that makes the Firefox browser, has released new research warning that you should not trust romantic AI chatbots that pose as your girlfriend.
Eleven of the chatbots that the organization evaluated were given the privacy warning label; according to Mozilla, this means that they are “on par with the worst categories of products we have ever reviewed for privacy.” It’s the most recent in a series of studies stressing the dangers AI poses to privacy.
A close reading of the chatbots’ terms and conditions revealed that one of them openly admits to gathering information about gender-affirming care, prescription medication use, and sexual health. All but one of the evaluated apps may sell or share users’ personal data.
“To be perfectly blunt, AI girlfriends are not your friends,” said Misha Rykov, a researcher for Mozilla’s Privacy Not Included program, in a statement. “Although they are marketed as something that will enhance your mental health and wellbeing, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”
Poor Chatbot Security
Mozilla said it could find only one chatbot that met its minimum security standards, with a worrying lack of transparency over how the intensely personal information that might be shared in such apps is protected.
Almost two-thirds of the apps didn’t reveal whether the data they collect is encrypted. Just under half permitted the use of weak passwords, with some even accepting a password as flimsy as “1”.
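For illustration, here is a minimal sketch of the kind of basic password check that, per Mozilla, some of these apps evidently lack. The length and character-class thresholds below are assumptions for the example, not Mozilla’s evaluation criteria.

```python
import re

def is_acceptable_password(password: str) -> bool:
    """Reject trivially weak passwords such as '1'.

    Illustrative policy only: at least 8 characters,
    containing at least one letter and one digit.
    """
    if len(password) < 8:
        return False
    has_letter = bool(re.search(r"[A-Za-z]", password))
    has_digit = bool(re.search(r"\d", password))
    return has_letter and has_digit

print(is_acceptable_password("1"))                 # prints False
print(is_acceptable_password("correct-horse-42"))  # prints True
```

Even a check this simple would have rejected the single-character password Mozilla’s testers found being accepted.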
More than half of the apps tested also failed to let users delete their personal data. One even claimed that “communication via the chatbot belongs to the software.”
Mozilla also found that the use of trackers (tiny pieces of code that gather information about your device and what you do on it) was widespread among the romantic chatbots. The testers found an average of 2,663 trackers per minute of usage, with one app using more than 24,000 trackers per minute.
Romantic Chatbot Safety Tips
If you’re still inclined to date a virtual chatbot even after the privacy warnings, Mozilla has a few tips to help keep you safer.
The main tip is not to tell the chatbot anything you wouldn’t want friends or colleagues to discover, as the privacy of these services cannot be guaranteed. Also use a strong password, request that your personal data be deleted once you’ve finished using the chatbot, opt out of having your data used to train AI models, and don’t accept phone permissions that give the chatbot access to your location, camera, microphone, or files on your device.
AI Privacy Concerns
Mozilla is far from the only organization to raise concerns over the privacy of the rapidly expanding AI market.
Fears have been raised over the privacy implications not only of the personal data that people enter into AI chatbots, but of the data used to train the AI models in the first place. Last September, for example, it was discovered that Microsoft staff had accidentally exposed 38 terabytes of private data being used to train AI models. The data contained passwords and 30,000 messages from Microsoft Teams conversations, according to Wiz Research, which discovered the leaked data.
AI should be “considered a surveillance technology due to its ability to collect, analyze and interpret vast amounts of data,” wrote Matthias Pfau, co-founder of secure email service Tuta and a Forbes Council member, earlier this year.
AI’s ability to recall facts about its users is further highlighted by OpenAI’s announcement yesterday that it will allow ChatGPT to remember information about users from previous interactions. “Remembering things you discuss across all chats saves you from having to repeat information and makes future conversations more helpful,” OpenAI stated in a blog post announcing the new feature.
The company stressed that users would be able to switch off ChatGPT’s memory entirely or tell it to forget selected conversations.