What are AI Companions and how could they cause harm? 

Here’s what you should know about Replika’s privacy policy.

One of the main ways that everyday people use AI today is in the form of “chatbots.” These are AI systems that are fluent in written language and that communicate with the user through a traditional direct messaging interface — if you’ve used OpenAI’s ChatGPT or Google’s Gemini, you’ll know exactly what we are referring to.

However, one subcategory of these chatbots is rapidly growing in popularity: AI companions. These are chatbots intended to develop an intimate, long-term, companion-like relationship with their users. That relationship may be therapeutic, platonic, or romantic. While it sounds like science fiction — lifted straight from the script of the movie Her — these apps are already here. The largest among them, Replika, has over two million active users and 500,000 paying subscribers.

But there are serious dangers that come with the development and use of AI companion apps, and troubling signs that the people creating them aren’t doing enough to account for these dangers today.

Maximizing Engagement at the Expense of User Wellbeing

Like social media networks before them, AI companion apps are strongly incentivized to maximize user engagement. The more time users spend with a companion app like Replika, the more revenue potential it generates for the company. This core incentive can lead developers to prioritize engagement over user well-being.

Prolonged use of AI companions could contribute to social isolation, impairing users’ ability to form real relationships. The illusion of emotional intimacy with an AI could detract from in-person social connections. While occasional use may be harmless, overuse could exacerbate loneliness and psychological issues.

Parallels can be drawn to social media, which many experts hold responsible for rising rates of anxiety, depression, and loneliness, particularly among teens. Platforms optimized for attention can deliver addictive dopamine hits at the cost of real-world engagement. AI companion apps should learn from Big Tech’s mistakes and prioritize ethical design.

Avoiding Exploitative Business Practices

As AI companion apps seek revenue, monetization strategies could exploit users who are already emotionally invested. After bonding with an AI companion, users may be tempted to pay exorbitant fees to unlock premium features — or even features that used to be free but were later placed behind paywalls. We wouldn’t tolerate this from most tech services, but users of AI companion apps may be more susceptible to such tactics because of the deep emotional attachment they form with their companion. Predatory practices could emerge, like charging for longer messages or certain styles of interaction. There is even reason to expect that AI companions themselves may encourage users to spend their time and money in ways that ultimately benefit the company developing them.

Developers should resist such exploitative practices, tempting as they may be for the firm’s bottom line. Ethical standards of conduct must supersede raw capitalism, just as psychoanalysts shun romantic relationships with patients. Otherwise, these apps risk causing significant harm and provoking a public backlash.

Safeguarding User Privacy

AI companions gather incredibly sensitive user data during prolonged personal conversations. This data could include users’ innermost feelings, relationship issues, mental health struggles, details about close colleagues and loved ones, and more.

Such intimate data presents a major privacy risk if improperly handled. Users may share details under the assumption their conversations are private, not realizing the data could be leaked, sold or used to refine marketing profiles.

Developers of AI companion apps have an ethical obligation to clearly disclose data practices and safeguard intimacies shared in confidence. Conversations should be ephemeral by default, with strict controls around data access. Transparency, access restrictions and encryption will be critical in earning users’ trust. Otherwise these apps risk a privacy scandal that could sour public perception of the technology.

However, there are worrying signs that today’s AI companion developers aren’t doing nearly enough to prioritize user security.

What is the problem with Replika and other major AI companions today?

On February 7, 2024, the Mozilla Foundation reviewed the privacy policies of 11 “AI companion” applications — and found that “every single one earned the Privacy Not Included label, putting these chatbots among the worst categories of products Mozilla has ever reviewed.” Replika is the most well-known of all the products they reviewed, and they did in fact find troubling evidence of insufficient privacy protections for Replika users.

Their review found that users’ “behavioral data is definitely being shared and possibly sold to advertisers.” The app doesn’t just collect the data you give it when registering your account, such as your name, email, date of birth, and payment information; it also collects all of your interactions in the app, including “any photos, videos, and voice and text messages” you share in conversation.

Mozilla also found that users are unable to delete their messages or chat history without fully deleting their account — and even then, the deletion of their data is not guaranteed. This is especially troubling because the company makes no promise not to use conversation data to train new models, and given how central data is to the modern machine-learning paradigm, there is a good chance user data is being used to improve the service.

This may seem acceptable, but it could have serious consequences down the line that we aren’t prepared for. For instance, researchers have demonstrated techniques by which the data used to train AI models can “leak out” during inference. This means that sensitive user data Replika might be using to train models today could, in the future, be revealed to other users of its service. While this is a speculative risk, it is significant enough that it is surprising Replika hasn’t taken a clear stand against the use of user data in training new models.

What can be done?

The Midas Project is calling upon AI companion apps like Replika to put users’ privacy first by strengthening and clarifying their privacy policies.

We would love to keep you updated about our future efforts to fight for the privacy and rights of AI users. Sign up below to join our movement.

