
Blog: Me, you, us, and a ‘chat’ about Mental Health and Suicide Prevention


Pete Trainor

As statistics swirl and change on an almost daily basis, the role technology plays in human life is becoming an increasingly urgent question. It’s the thing people turn to for answers (even if those answers are often wrong and can lead people into more problems), and it’s also the vessel we’re filling up with the knowledge, via data, to learn about ourselves and others.

Access to faster communication and information sharing has also allowed cross-discipline knowledge transfer, and as the world has shrunk information-wise, the rate of technological advance has increased almost exponentially. These shifts aren’t just changing society; they’re changing what it means to be human. We went from having no World Wide Web to a full-blown World Wide Web in 20 or 25 years, which is astonishing when you consider how much the internet has changed human life. The telephone, by contrast, took many decades to spread and become as ubiquitous as it is today. The significant impacts of social media and mobile technology will likely pale in comparison to the potential revolutions coming in artificial intelligence, where the data we produce can be used to fine-tune and automate everything.

I’ve been vocal for a long time about the role this relatively new technology must play in the provision of support for people when they find themselves vulnerable; for people who find themselves at the end, with nowhere else to go, or with the perception that there is nobody to speak to. In a world awash with technological innovation, it is now more important than ever to assess the psychological impact of technologies like smartphones and social media, and to decide what role they play in supporting people in ways that can be life-changing, and sometimes life-saving, rather than simply spirit-eroding.

When I looked around today, I saw glass everywhere, in everyone’s hands. Smartphones seem to be rapidly pushing us away from the world; we’re losing our ability to be IN the moment in a way that isn’t mediated by some electronic appendage. So we had better use that to our advantage, because it’s not going away.

I’m classified as a Mental Health Activist rather than an expert, mainly because my work involves not just designing technology to combat the problem, but also lobbying, challenging, and campaigning for social change in the way we talk about mental health, the way it’s viewed, the way services are funded, and the channels through which support is made available. It’s also why I’ve invested so much time, money, and resources in looking at the role technology plays in helping to solve this crisis.

We have a long way to go, and a lot of work to do.

In the following article I’ll lay out the work I’m doing, and the way I’m framing technology to support people in this space. I’m using my super-power as best I can. Everyone has a super-power. For some, it’s found very quickly on the sports field or in the classroom. For others, it needs to be hunted for and nurtured. My super-power is Design, and more importantly the fusion of technology with design-thinking. I own an Ai company, but that does not necessarily make me a Data Scientist; what I do know is how to steer technology, and the makers of it, well enough to fix a problem. You just need to know which issues to fix. Ai in particular can answer a lot of questions, and my job is making sure we know what questions to ask.

Silent but deadly

One of the greatest deceptions of mental illness lies in its uncanny ability to mask its deep-rooted, insidious impacts. We all have mental health, every single one of us, but for some, life and the ever-changing world eat away at our mental resilience; the armour around our psyche rusts as more is thrown at us. The depression brought on by life’s challenges is, in many cases, neither sadness nor crying, nor wearing black and retreating, but a paralysing numbness in our emotions and a perennial desire to be alone even when we feel lonely. That’s the first phenomenon of mental illness that I spotted many years ago. In this pandemic of loneliness, people have turned towards the online realm, either to immerse themselves in content they find a pervasive comfort in (blogs, social media, pornography, news, views, shopping and so on) or to join communities of the like-minded where they can swap content and stories.

Many now argue that the internet is partially responsible for this wave of unhappiness washing over so many. But I think that’s too simplistic a view. It’s the classic causation vs correlation debate: causation indicates that one event is the result of the other, that there is a cause-and-effect relationship between the two. In reality the two are probably so intricately related that we’ll never really know. It would be too reductive for me to state that if we took away technology people would be happy, any more than I might suggest that adding it into people’s lives has given them a safe space to go and discover more about their innermost complicated feelings.

Mapping the role of Ai-based technology

In Hippo, I laid out a framework for how I think we should classify technology.

It’s a useful mechanism for trying to map out the various types of Ai-based technologies that are here, and those that are emerging. It’s also useful as a way of framing ethics. The band cutting across the middle of my grid indicates where a lot of work is being done, moving from the bottom-left to the top-right.

At the bottom-left, we have information-centric (Supportive) services focused on the interaction between human and machine to get insight and/or gather information. This is where Google focuses the majority of its efforts, aggregating the world’s knowledge. In the context of mental health, we’re talking here about simple chatbots that give answers to basic questions when somebody is scared, afraid, or just wants an answer.

Above that, we have the action-centric space (Service), where the interaction between a human and a machine drives actions and is typically dependent on a particular device or command. This is exemplified by Amazon and their Alexa assistant. I can ask technology to ‘do’ things for me, or do things with me. In the context of mental-health-focused tech, it would be a service I can use to help me find people online when I need someone to talk to: “Please go off and find me like-minded souls who are also sitting up at 2am lonely, scared, and in need of a chat”.

In the bottom-right, things start to get a little more emotionally involved. Conversion-centric (Predictive) technology looks at historical data to actively predict when someone might want to do something new or repetitive, without them explicitly asking or requesting it. This already plays out passively in a lot of banking systems, which offer you products that might benefit you in the future. In the context of mental-health support, the opportunity here is enormous, but also incredibly controversial. The concept of predicting whether someone might, say, be approaching a space where they could harm themselves by observing their previous conduct, and then administering support proactively, starts to stray into surveillance territory.

Finally, and most complicated to get right, is the conversation-centric (Perceptual) space in the top-right. Perceptual computing is a general advancement in technology where computers are better able to sense or analyse the environment around them and respond accordingly, and it has a lot of potential to change the interface through which humans interact with computers. Here the aim is extracting value from person-to-person or person-to-machine conversations to learn and act proactively on someone’s behalf. A good example would be a virtual assistant that’s also linked to a wearable and can learn from biometrics as well as static data cues. Its possibilities are profound, but so too are the margins of error and the intrusion into people’s private space.

The concept of Emotional Analytics in this space, while fascinating and hugely exciting, does worry me, because we need to give people enough time to adjust to that shift in how we interface with technology. Dealing with people who may already be feeling paranoid and afraid needs to be done very carefully. What is all this doing to our habits, to our cultural sense of who we are? When these things happened more slowly in previous eras, we had more time to assess the impacts and adjust. That is simply not true anymore, and we should be far more worried about this than we are.

Where do the kinds of services currently being produced or explored sit? I’ve attempted to put just a handful of paradigms into context on the grid below.

Because technology is developing much, much faster than our culture and our institutions, and because the gap between them can only grow so far before society becomes dangerously unstable, we need to tread carefully about which path through my grid we take, and how quickly we attempt to make progress.

Is there a safe space (for now) where we can create the most meaningful impact whilst balancing risk? A place where various types of data, Ai and interface converge to create mass support in a time of great need, without wading into the complicated zone? It’s pretty apparent: it sits slap-bang in the centre, where all things converge. Tools that learn from us, passively detect when things aren’t going well or when we’re off-kilter, give us proactive support, and also service our needs responsibly.

Help before we know we need it.

That’s the logical thing to conclude. But there is another way of looking at this. In my opinion, the glue for making anything genuinely game-changing in the provision of everyday mental-health support starts in the bottom-right: the predictive space. It’s also a controversial space. Get it wrong, and the dangers are apparent. But the tech is there to do some genuinely ground-breaking work. Facebook came in for criticism in 2017 after leaked documents revealed that it had told advertisers it could identify and predict, in real time and through posts and photos, when teenagers feel ‘insecure’, ‘worthless’ and ‘need a confidence boost’, presumably so the advertiser could intervene in real time to sell them something. Cynical. But isn’t it time that kind of opportunity was used for something more worthwhile? Something potentially life-saving?

We’ve already done a lot of work analysing conversations to identify words and phrases, and combinations of words and phrases, that indicate someone might be in a bad space mentally. When conversations display large volumes of absolutist thinking and sentiments like burdensomeness, that can often be an indicator that something is not where it should be.
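As a purely illustrative sketch of what that surface-level flagging could look like, the snippet below counts absolutist words and burden-related phrases in a message. The wordlists and the rate it returns are placeholders I have invented for the example, not clinically validated lexicons, and nothing here resembles a diagnostic tool.

```python
import re

# Purely illustrative wordlists; a real system would use clinically validated
# lexicons and far more context than simple keyword matching.
ABSOLUTIST_WORDS = {"always", "never", "completely", "totally", "nothing",
                    "everyone", "nobody", "entirely"}
BURDEN_PHRASES = ["a burden", "better off without me", "can't go on"]

def flag_message(text: str) -> dict:
    """Return rough counts of absolutist words and burden-related phrases."""
    lowered = text.lower()
    tokens = re.findall(r"[a-z']+", lowered)
    absolutist = [t for t in tokens if t in ABSOLUTIST_WORDS]
    burdens = [p for p in BURDEN_PHRASES if p in lowered]
    return {
        "absolutist_rate": len(absolutist) / max(len(tokens), 1),
        "burden_phrases": burdens,
    }

print(flag_message("I always feel like a burden and nothing ever changes."))
```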

Driving bots from the bottom-right rather than the bottom-left seems to be the key to making the promise of Ai a true game-changer for all of us who consume vast amounts of this new technology-driven world, and who also experience moments where we struggle to keep pace with an ever-changing one. If the data we generate contains subtle clues about our current, past, or even future intentions, then, fed enough input, a robust stochastic model would be able to infer whether something might occur and respond accordingly.
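To make that concrete, here is a minimal, hedged sketch of the kind of probabilistic text model the paragraph above gestures at, using scikit-learn. The four training sentences and their labels are invented placeholders; a real model would need a large, ethically sourced, clinician-reviewed corpus, and its output is only a probability used to route someone towards support, never a diagnosis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, invented examples standing in for a properly labelled corpus.
texts = [
    "had a great day at work, feeling good",
    "i always feel like a burden, nothing helps",
    "looking forward to the weekend with friends",
    "i can't go on like this, nobody would notice",
]
labels = [0, 1, 0, 1]  # 1 = flag for a follow-up, 0 = no action

# Bag-of-words features feeding a simple probabilistic classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# A probability, not a verdict: above some threshold, the bot would gently
# offer signposting or a human hand-off rather than "diagnose" anything.
print(model.predict_proba(["i feel like such a burden lately"])[0][1])
```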

Take the example of the ‘risk stratification’ tools currently used across the NHS to help identify which general-practice patients are at highest risk of being admitted to hospital. The potential benefits to the NHS are that patients are prevented from experiencing an adverse event and emergency-care costs are avoided. Applying that same theory to people in a vulnerable position because of mental illness offers a massive opportunity for society. Mental illness is a spectrum: some people may only require simple answers or a signpost to a support service, whereas others may say things that instantly trigger a warning that they are displaying signs of being a risk to themselves or others.
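To illustrate only the shape of that idea, below is a toy stratification rule that combines a few signals into a tier of response. Every signal name, weight and threshold is an assumption made up for the sketch; a real tool would be designed, validated and governed with clinicians.

```python
from dataclasses import dataclass

@dataclass
class CheckIn:
    absolutist_rate: float   # from language analysis of recent messages
    missed_check_ins: int    # engagement signal over the past week
    self_reported_mood: int  # 1 (very low) to 5 (very good)

def stratify(c: CheckIn) -> str:
    """Toy scoring rule; thresholds are illustrative, not clinically derived."""
    score = 0
    score += 2 if c.absolutist_rate > 0.05 else 0
    score += 1 if c.missed_check_ins >= 3 else 0
    score += 2 if c.self_reported_mood <= 2 else 0
    if score >= 4:
        return "signpost to urgent human support"
    if score >= 2:
        return "offer a supportive check-in conversation"
    return "no action needed"

print(stratify(CheckIn(absolutist_rate=0.08, missed_check_ins=1, self_reported_mood=2)))
```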

A big challenge, of course, is in finding the training datasets for predictive models in precision mental-health support. They must be built from diverse population samples; algorithms developed using datasets that are narrow in terms of age or ethnicity will have little generalisable predictive power for the wider population. So this is our first hurdle. But that’s not to say there aren’t things we can start to do without crowd-sourcing vast amounts of public profile data.

Looking in the places that we dare not go

One hypothesis I have is that machine learning algorithms trained on suicide notes, working alongside mental health professionals and psychiatric physicians, could be an excellent way to learn about language. Indeed, in early work we’ve already done, we spotted language patterns and themes in the notes of people who threatened suicide but did not go through with the threat, versus the notes of people who not only threatened but unfortunately also followed through. Why is this important? Because the reality is that only a third of people who suffer from suicidal ideation write a note in the first place. A troubling statistic, but perhaps there are services and cues we can learn from that ultimately help everyone, and specifically the two-thirds who don’t write notes.
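The analytical shape of that comparison can be sketched quite simply: given two sets of documents, find the terms proportionally more common in one than the other. The code below is only that sketch; the corpora are not shown, the smoothing is naive, and any real study of this kind would use proper statistics and sit inside a clinical and ethical framework.

```python
from collections import Counter
import re

def word_frequencies(docs):
    """Relative word frequencies across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    total = sum(counts.values()) or 1
    return {word: count / total for word, count in counts.items()}

def distinctive_terms(group_a, group_b, top_n=10):
    """Terms proportionally more common in group_a than in group_b."""
    freq_a, freq_b = word_frequencies(group_a), word_frequencies(group_b)
    ratios = {w: f / (freq_b.get(w, 0.0) + 1e-6) for w, f in freq_a.items()}
    return sorted(ratios, key=ratios.get, reverse=True)[:top_n]

# Hypothetical usage: group_a and group_b would be the two sets of notes.
# print(distinctive_terms(group_a, group_b))
```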

Analysing the content left behind by the friends, family members, colleagues and loved ones who are no longer with us is a critical step in developing an evidence-based predictor of a potential or repeated suicide attempt.

The next big challenge in attempting to design software or systems (Ai or otherwise) is deploying to the right channel, and in language that is relevant and appropriate. Let’s be frank and honest: a lot of the problem here is that people don’t present themselves to professionals because of shame or fear, or professionals are over-worked and cannot deal with the growing stack of cases and potential patients. There are several different channels through which data can be collected and better understood. Self-reporting tools are flawed, difficult to obtain, and compliance rates are not always high, but they can prove useful for getting labelled data.

Raw text, on the other hand, is a far more abundant source. What you write and say is who you are, and recent research has revealed several things about what people tell us about themselves through their writing, particularly around function words such as pronouns. Pronouns (such as I, you, they), articles (a, an, the), prepositions (to, of, for) and auxiliary verbs (is, am, have) have very little meaning on their own, and the English language has fewer than 500 function words. However, they account for more than half of the words we speak, hear, and read every day, and by analysing their use, we begin to learn how people are connecting with their lives, their friends, their conversational topics, and themselves. It’s why conversational agents offer a much bigger opportunity to support people across all types of mental health than perhaps we realise. Chatbots aren’t just an opportunity, they are THE opportunity to have a conversation with people. The big problem is that, at the moment, they’re not very good.
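As a rough illustration of that kind of function-word analysis, the snippet below computes how much of a message is made up of first-person pronouns and a small set of other function words. The word sets are tiny placeholders; research tools in this area (LIWC-style dictionaries, for example) use far richer categories.

```python
import re

# Tiny placeholder sets; real psycholinguistic dictionaries are far larger.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
FUNCTION_WORDS = FIRST_PERSON | {"you", "they", "a", "an", "the", "to", "of",
                                 "for", "is", "am", "are", "have", "was", "it"}

def function_word_profile(text: str) -> dict:
    """Share of tokens that are first-person pronouns or function words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        "first_person_rate": sum(t in FIRST_PERSON for t in tokens) / n,
        "function_word_rate": sum(t in FUNCTION_WORDS for t in tokens) / n,
    }

print(function_word_profile("I feel like I am always letting everyone down."))
```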

Let’s take the example of a bot that converses with a patient once a day to find out how they are feeling, or how they’re getting on at work. Heavy use of ‘I’ in replies, combined with low levels of conscientiousness (the tendency to act in an organised or helpful way), correlates strongly with depression; in some studies, people with low conscientiousness scores had a 75% chance of being diagnosed with depression. While this doesn’t necessarily show causation, it does show the two are tied, and the same goes for extraversion. These findings are not especially surprising, yet not much is currently being done to put filters or monitors in place in popular chat or social media tools, or even in self-contained apps, to support people with their mental health. Maybe it’s time that changed?
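Below is a hedged sketch of how such a daily check-in could react to a single signal like the first-person pronoun rate. The 0.12 threshold and the canned responses are inventions for illustration only; a real agent would track trends over weeks, combine many signals, and be designed with clinicians rather than reacting to one message.

```python
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    """Fraction of tokens in a reply that are first-person pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in FIRST_PERSON for t in tokens) / max(len(tokens), 1)

def daily_check_in(reply: str) -> str:
    """Toy check-in logic; the threshold is illustrative, not a clinical cut-off."""
    if first_person_rate(reply) > 0.12:
        return ("Thanks for telling me. That sounds heavy. "
                "Would it help if I found someone for you to talk to?")
    return "Glad to hear it. Shall we check in again tomorrow?"

print(daily_check_in("I just feel like I'm letting everyone down and I can't fix it."))
```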

A Qualified Friend

Finally, let’s wrap up my thinking with the opportunity: a qualified friend. What do I mean by that? Well, let’s consider the hypothesis that many people won’t, can’t, or simply never get to a professional. Perhaps because they’re scared, or maybe because they just don’t consider themselves worthy of clinical intervention. In fact, in a 2018 study from healthtech startup Mynurva, two-thirds of Brits polled didn’t think GPs would have the time or training to effectively treat mental health problems, and 34 per cent of UK workers with mental health issues felt their condition worsened in 2018.

We know they’ll talk, but perhaps not to another person, or to a professional. Or another person isn’t available when they want to talk. More often than not we turn to a friend. A friend is someone who, if they know us well enough, can sense when we’re not well or not acting like ourselves. A friend gives us support unconditionally, and will quite often drop everything to pick us up when we fall down. I’m sure many people don’t feel their lives have that kind of person in them, and so they turn to technology to fill that gap.

That’s my case for providing technology that has been trained on enough evidence-based data to spot the patterns and cues that might represent harm. Whilst many companies and well-meaning people are rapidly trying to drive innovation from the bottom-left to the top-right (straight into the high-risk ethics space), perhaps it’s more important for businesses trying to create tools for mental health to come at the problem from the bottom-right, across support, into service, with just a light touch of perception, because if you go hard into that space you are basically spying on people who are already very vulnerable.

I call it the qualified friend, but ultimately that’s what we all need, regardless of where we are on the mental health spectrum: a friend. Not always a professional, and never a complicated tool that tries to diagnose us as if we’re under a microscope. When we’re lonely and feeling worthless, we just need something or someone that can detect that and offer us direction.

Conclusion

What’s most striking about us as humans is that we are unpredictable in very basic ways. We’re more complex than we can fathom, and there’s something about us that is the opposite of artificial, the opposite of something made. We need machines to learn that side of us, so they can detect and support us more like a friend would than like a clinician. That’s the big opportunity.

Source: Artificial Intelligence on Medium
