
Mental health app privacy language opens up holes for user data


In the world of mental health apps, privacy scandals have become almost routine. Every few months, reporting or research uncovers seemingly unscrupulous data-sharing practices at apps like Crisis Text Line, Talkspace, BetterHelp, and others: people gave information to those apps in hopes of feeling better, only to find their data was used in ways that help companies make money (and don't help the users).


It seems to me like a twisted game of whack-a-mole. When under scrutiny, the apps often change or adjust their policies — and then new apps or problems pop up. It isn’t just me: Mozilla researchers said this week that mental health apps have some of the worst privacy protections of any app category.


Watching the cycle over the past few years got me interested in how, exactly, that keeps happening. The terms of service and privacy policies on the apps are supposed to govern what companies are allowed to do with user data. But most people barely read them before hitting accept, and even those who do often find them so complex that their implications are hard to grasp at a glance.


“That makes it completely unknown to the consumer about what it means to even say yes,” says David Grande, an associate professor of medicine at the University of Pennsylvania School of Medicine who studies digital health privacy.


So what does it mean to say yes? I took a look at the fine print on a few apps to get an idea of what’s happening under the hood. “Mental health app” is a broad category, and it can cover anything from peer-to-peer counseling hotlines to AI chatbots to one-on-one connections with actual therapists. The policies, protections, and regulations vary across these categories. But I found two features common to many privacy policies that made me wonder what the point of having a policy was in the first place.

