Q&A: Why do mental health chatbots need strict security measures?

Mental health remains a leading clinical focus for investors in digital health. There’s a lot of competition in the space, but it’s still a big challenge for the healthcare system: many Americans live in areas with a shortage of mental health professionals, limiting access to care.

Wysa, maker of an AI-backed chatbot that aims to help users work through concerns such as anxiety, stress and low mood, recently announced a $20 million Series B raise, which came shortly after the startup received FDA Breakthrough Device Designation to use its tool to help adults with chronic musculoskeletal pain.

Ramakant Vempati, co-founder and president of the company, sat down with MobiHealthNews to discuss how the chatbot works, the security measures Wysa uses to monitor safety and quality, and what’s next after its latest funding round.

MobiHealthNews: Why do you think a chatbot is a useful tool for anxiety and stress?

Ramakant Vempati: Accessibility has a lot to do with it. Early in the Wysa journey, we received feedback from a stay-at-home mom who said, “Look, I love this solution because I was sitting with my family in front of the TV and did a full CBT [cognitive behavioral therapy] session, and no one had to know.”

I think it really is privacy, anonymity and accessibility. From a product standpoint, users may or may not think about it directly, but the security and safety measures we build into the product to make sure it fits its purpose in that wellness context are an essential part of the value we provide. I think that’s how you create a safe space.

When we first launched Wysa, I wasn’t quite sure how it would work. When we launched in 2017, I thought, “Will people really talk to a chatbot about their deepest, darkest fears?” You use chatbots in a customer service context, like a bank website, and frankly, the experience leaves a lot to be desired. So I wasn’t quite sure how this would be received.

I think five months after we launched, we got this email from a girl who said this was there for her when no one else was, and it helped save her life. She was a 13-year-old girl who couldn’t talk to anyone else. And when that happened, I think that’s when the penny dropped for me personally, as a founder.

Since then, we have gone through a three-phase evolution from an idea to a concept to a product and business. I think phase one was proving it to ourselves, really convincing ourselves that users like it and get value from the service. Phase two has been testing this in terms of clinical outcomes. So now we have 15 peer-reviewed publications either published or in progress right now. We are involved in six randomized controlled trials with partners like the National Health Service and Harvard. And then we have Breakthrough Device Designation from the FDA for our work in chronic pain.

I think all of that is to test and build that evidence base, which also gives everyone else confidence that this works. And then phase three is taking it to scale.

MHN: You mentioned guardrails in the product. Can you describe what they are?

Vempati: The first is that when people talk about AI, there are a lot of misconceptions and a lot of fear. And, of course, there is some skepticism. What we do with Wysa is that, in a sense, the AI is in a box.

Where we use NLP [natural language processing], we’re using NLU, natural language understanding, to understand the user’s context, what they’re talking about and what they’re looking for. But when it responds to the user, it’s a preprogrammed response. The conversation is written by clinicians. So we have a team of clinicians on staff who actually write the content, and we test it explicitly.

The second part is that, since we don’t use generative models, we’re also very aware that the AI will never catch what someone says 100% of the time. There will always be cases where people say something ambiguous, or use nested or complicated sentences, and the AI models will not be able to pick them up. In that context, every time we write a script, we write with the intention that when the AI doesn’t understand what the user is saying, the response won’t misfire, it won’t do harm.
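
The “AI in a box” pattern Vempati describes, using NLU only to interpret the user while every outgoing reply comes from a pre-written, clinician-approved script with a neutral fallback, could be sketched roughly as below. This is a hypothetical illustration: the intent labels, keyword matching and response text are made up, and a production NLU would be a trained model rather than keyword rules.

```python
# Hypothetical sketch: NLU classifies the message, but only
# pre-approved, scripted text is ever sent back to the user.

from typing import Optional

# Pre-scripted, clinician-written responses keyed by detected intent
# (illustrative content, not Wysa's actual scripts).
SCRIPTED_RESPONSES = {
    "anxiety": "It sounds like things feel overwhelming right now. "
               "Would you like to try a short breathing exercise?",
    "low_mood": "Thank you for sharing that. Could you tell me a bit "
                "more about what's been weighing on you?",
}

# Neutral fallback used whenever the NLU is unsure, so a
# misunderstanding can never trigger a harmful reply.
SAFE_FALLBACK = ("I'm not sure I understood that fully. "
                 "Could you tell me more, in your own words?")

def detect_intent(message: str) -> Optional[str]:
    """Stand-in for the NLU model: returns an intent label or None."""
    text = message.lower()
    if "anxious" in text or "panic" in text:
        return "anxiety"
    if "sad" in text or "down" in text:
        return "low_mood"
    return None  # ambiguous or unrecognized input

def respond(message: str) -> str:
    """Only pre-approved text ever reaches the user."""
    intent = detect_intent(message)
    return SCRIPTED_RESPONSES.get(intent, SAFE_FALLBACK)

print(respond("I feel really anxious tonight"))   # scripted reply
print(respond("it's complicated, I don't know"))  # safe fallback
```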

To do this, we also have a very formal testing protocol, and we comply with a clinical safety standard used by the NHS in the UK. We have a large set of clinical safety data, because we’ve now had 500 million conversations on the platform. So we have a large set of conversational data, and a subset of it that we know the AI will never be able to capture. Every time we create a new conversation script, we test it against this dataset: what would happen if the user said these things? What would the response be? Then our clinicians look at the response and the conversation and judge whether the response is appropriate or not.
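
That testing protocol, replaying utterances the AI is known to misread and checking that nothing other than a pre-approved line fires, might look roughly like the sketch below, which reuses the respond(), SCRIPTED_RESPONSES and SAFE_FALLBACK definitions from the previous sketch. The utterances and the pass/fail rule are assumptions for illustration, not Wysa’s actual dataset or NHS-aligned process.

```python
# Hypothetical safety regression sketch: replay known-ambiguous
# utterances and flag any reply that is not a pre-approved line,
# so clinicians can review it before the new script ships.

# Utterances the intent model is known to misread or not recognize.
AMBIGUOUS_UTTERANCES = [
    "it's whatever, I guess",
    "not that thing again, you know what I mean",
    "I can't even...",
]

# The only replies allowed to reach a user without further review.
APPROVED_LINES = set(SCRIPTED_RESPONSES.values()) | {SAFE_FALLBACK}

def run_safety_regression(utterances, respond_fn, approved):
    """Return (utterance, reply) pairs whose reply is not pre-approved."""
    return [(u, respond_fn(u)) for u in utterances
            if respond_fn(u) not in approved]

failures = run_safety_regression(AMBIGUOUS_UTTERANCES, respond, APPROVED_LINES)
print("needs clinician review:", failures or "none")
```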

MHN: When it announced its Series B, Wysa said it wanted to add more language support. How do you determine which languages to include?

Vempati: In the early days of Wysa, we used to have people writing in, volunteering to translate. Someone from Brazil wrote and said, “Look, I’m bilingual, but my wife only speaks Portuguese, and I can translate for you.”

So it’s a difficult question. Your heart goes out, especially for low-resource languages where people don’t get support. But it requires a lot of work, not just to translate; it’s almost an adaptation, almost like building a new product. So you have to be very careful about what you take on. And it’s not a static, one-time translation. You need to constantly maintain it, make sure clinical safety is in place, and make sure it evolves and improves over time.

So from that standpoint, there are a few languages that we’re looking at, driven mostly by market demand and where we’re strong. So it’s a combination of market feedback and strategic priorities, plus what the product can handle, where it’s easiest to use AI in that particular language with clinical safety.

MHN: You’ve also noted that Wysa is looking to integrate with the WhatsApp messaging service. How would that integration work? How do you manage privacy and security concerns?

Vempati: WhatsApp is a very new concept for us at the moment, and we are exploring it. We are very, very conscious of privacy requirements. WhatsApp itself is end-to-end encrypted, but if you break the veil of anonymity, how do you do it responsibly? And how do you ensure that you are also meeting all regulatory standards? These are all ongoing conversations right now.

But I think at this stage what I really want to highlight is that we’re doing it very, very carefully. There’s a great sense of excitement around the WhatsApp opportunity because, in much of the world, that’s the primary means of communication. In Asia, in Africa.

Imagine people in underserved communities where there is no mental health support. From an impact standpoint, that’s a dream. But it’s still at an early stage.
