A.I. Helps Predict and Prevent Suicides

Editor: Vlad Rothstein | Tactical Investor

Suicide is a growing public health concern. In Canada, suicide claims some 4,000 lives each year, about 10 every day.

For every suicide death, there are five hospitalizations following self-injury, 25 to 30 suicide attempts, and seven to 10 people affected by the tragedy, according to an analysis by the Public Health Agency of Canada.

Suicide rates are highest among certain groups — such as Indigenous peoples, immigrants and refugees, prisoners and the lesbian, gay, bisexual, transgender, intersex (LGBTI) community — and are on the rise.

The impacts of suicide are felt widely. The Toronto Transit Commission (TTC) recently reported an increase in transit suicides at the end of 2017, with eight attempts in December alone, and a corresponding rise in stress leave among TTC employees because of the toll on staff.

Could artificial intelligence (AI), the intelligence demonstrated by machines, help prevent these deaths?

AI predicts suicide rates

Early in 2018, the Public Health Agency of Canada announced a pilot project with Advanced Symbolics, an Ottawa-based AI company that successfully predicted Brexit, Trump’s presidency and the results of the 2015 Canadian election.

A new approach to “therapy” involves conversational bots (chatbots): computer programs designed to simulate human-like conversation using voice or text responses.

Chatbots can deliver psychological interventions for depression and anxiety based on cognitive behavioural therapy (CBT). Because a chatbot responds to the dialogue presented to it, it can tailor interventions to a patient’s emotional state and clinical needs. These tools are considered quite user-friendly, and their user-adapted responses have been well reviewed.
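To make the tailoring idea concrete, here is a minimal rule-based sketch in Python. The mood keywords and canned responses are invented for illustration only; real CBT chatbots use far richer models of the conversation.

```python
# Toy sketch of a chatbot tailoring a CBT-style reply to the user's
# apparent emotional state. Keywords and responses are illustrative
# placeholders, not content from any real therapeutic product.

MOOD_KEYWORDS = {
    "anxious": {"worried", "anxious", "panic", "nervous"},
    "depressed": {"hopeless", "worthless", "empty", "sad"},
}

CBT_RESPONSES = {
    "anxious": "Let's try a grounding exercise: name five things you can see right now.",
    "depressed": "That thought sounds painful. What evidence supports it, and what speaks against it?",
    "neutral": "Tell me more about how your day has been going.",
}

def detect_mood(message: str) -> str:
    """Return the first mood whose keywords overlap the message."""
    words = set(message.lower().split())
    for mood, keywords in MOOD_KEYWORDS.items():
        if words & keywords:
            return mood
    return "neutral"

def chatbot_reply(message: str) -> str:
    """Pick the CBT-style response matching the detected mood."""
    return CBT_RESPONSES[detect_mood(message)]

print(chatbot_reply("I feel hopeless and empty today"))
```

A production system would replace the keyword lookup with a learned sentiment model, but the control flow (detect state, then select an intervention) is the same.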

Suicide is influenced by a variety of psychosocial, biological, environmental, economic and cultural factors. AI can be used to explore the association between these factors and suicide outcomes. Full Story

AI Is Learning To Predict & Prevent Suicide

For years, Facebook has been investing in artificial intelligence fields like machine learning and deep neural nets to build its core business—selling you things better than anyone else in the world. But earlier this month, the company began turning some of those AI tools to a more noble goal: stopping people from taking their own lives. Admittedly, this isn’t entirely altruistic. Having people broadcast their suicides from Facebook Live isn’t good for the brand.

But artificial intelligence offers the possibility of identifying suicide-prone people more accurately, creating opportunities to intervene long before thoughts turn to action. A study to be published later this month used machine learning to predict, with 80 to 90 per cent accuracy, whether someone will attempt suicide as far as two years in the future. Using anonymized electronic health records from 2 million patients in Tennessee, researchers at Florida State University trained algorithms to learn which combination of factors, from pain medication prescriptions to the number of ER visits each year, best predicted an attempt on one’s own life.
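The general shape of that approach, combining many record-level factors into one risk estimate, can be sketched as a logistic model. The feature names, weights and bias below are invented for this illustration; the actual study learned its parameters from the anonymized records.

```python
import math

# Illustrative logistic risk score combining record-level factors.
# Feature names, weights and bias are made up for this sketch; a real
# system would learn them from training data.

WEIGHTS = {
    "opioid_prescriptions": 0.8,  # pain-medication prescriptions on file
    "er_visits_per_year": 0.5,    # emergency-room visits per year
    "prior_self_harm": 2.0,       # documented prior self-injury
}
BIAS = -4.0

def attempt_risk(record: dict) -> float:
    """Map feature values to a probability in (0, 1) via a sigmoid."""
    z = BIAS + sum(w * record.get(name, 0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

print(round(attempt_risk({"er_visits_per_year": 1}), 3))   # low-risk record
print(round(attempt_risk({"opioid_prescriptions": 3,
                          "er_visits_per_year": 4,
                          "prior_self_harm": 1}), 3))      # high-risk record
```

The point of the sketch is the structure: each factor contributes a weighted amount of evidence, and the sigmoid turns the total into a probability that can be thresholded for intervention.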

Their technique is similar to the text mining Facebook is using on its wall posts. The social network already has a system that lets users report posts suggesting someone is at risk of self-harm. Using those reports, Facebook trained an algorithm to recognize similar posts, which it is now testing in the US. Once the algorithm flags a post, Facebook makes the option to report the post for “suicide or self-injury” more prominent on the display. In a personal post, Mark Zuckerberg described how the company is integrating the pilot with other suicide prevention measures, like the ability to reach out to someone during a live video stream. Full Story
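A heavily simplified sketch of that training loop: count which words appear more often in reported posts than in ordinary ones, then flag new posts that contain enough of them. All example posts below are invented, and Facebook’s production models are far more sophisticated.

```python
from collections import Counter

# Toy version of learning a flagging rule from user reports: words that
# appear more often in reported posts than in ordinary ones raise a
# post's score. All example posts here are invented.

reported = ["i want to hurt myself", "i cant go on anymore"]
ordinary = ["great game last night", "what a beautiful morning"]

def word_counts(posts):
    counts = Counter()
    for post in posts:
        counts.update(post.lower().split())
    return counts

risky, normal = word_counts(reported), word_counts(ordinary)

def flag_score(post: str) -> int:
    """Count the words in `post` that skew toward reported posts."""
    return sum(1 for w in post.lower().split() if risky[w] > normal[w])

def should_flag(post: str, threshold: int = 2) -> bool:
    """Queue the post for human review if its score reaches the threshold."""
    return flag_score(post) >= threshold

print(should_flag("i want to hurt myself"))  # → True
```

As in the real pilot, the model only prioritizes posts for human reviewers; it does not act on its own.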

Google Artificial Intelligence Predicts People’s Death Dates with 95% Accuracy

AI opens a new frontier for suicide prevention

In the early hours of the morning, a distraught teen posts on social media about wanting to hurt herself. Her friends and family are sleeping, but an algorithm answers, providing links to 24/7 help.

Artificial intelligence already sorts what you see on social media, but increasingly, it’s being harnessed to monitor and respond to mental health crises.

Canada is at the cutting edge of the development. The federal government recently tapped an Ottawa-based AI company to screen social media posts for warning signs of suicide. According to a contract, Advanced Symbolics will work with the government to define “suicide-related behavior” — from thoughts to threats to attempts — and conduct market research to identify related patterns of online behavior. For example, do people who self-harm tweet about it?
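One way to make such definitions operational is a small set of keyword patterns, one per category, running from thoughts to threats to attempts. The regular expressions below are invented placeholders; the definitions actually agreed between Advanced Symbolics and the government are not public.

```python
import re

# Illustrative category definitions for "suicide-related behavior".
# The patterns are invented placeholders; the real definitions built
# under the contract are not public.

CATEGORIES = [
    ("attempt", re.compile(r"\b(tried to|attempted)\b.*\b(end|kill)\b")),
    ("threat",  re.compile(r"\b(going to|will)\b.*\b(end it|kill myself)\b")),
    ("thought", re.compile(r"\b(wish i|thinking about)\b.*\b(gone|dying)\b")),
]

def classify(post: str) -> str:
    """Return the first matching category, or "none"."""
    text = post.lower()
    for label, pattern in CATEGORIES:
        if pattern.search(text):
            return label
    return "none"

print(classify("I am going to end it tonight"))  # → threat
```

Ordering the categories from most to least severe means the classifier reports the most serious behavior a post matches.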

Based on the findings, the company will conduct a three-month pilot monitoring online discussions about suicide, after which the Public Health Agency of Canada “will determine if future work would be useful for ongoing suicide surveillance.”

The project will use public data and won’t identify individuals. According to Advanced Symbolics chief scientist Kenton White, the goal is to identify “hot spots” of suicide risk so the government can provide resources to communities before tragedy strikes. The company previously used the same technology to forecast the outcomes of elections in Canada, the United States and Europe, accurately predicting the breakdown of the popular vote in the 2016 US presidential election within 0.7 percentage points.
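The “hot spot” idea reduces to aggregation: count flagged, anonymized public posts by region and surface regions whose counts exceed a baseline, without ever identifying an individual. A minimal sketch, with invented region data and an invented threshold:

```python
from collections import Counter

# Sketch of regional "hot spot" detection over anonymized, flagged posts.
# Region names, counts and the threshold are invented for illustration.

flagged_posts = [
    {"region": "Region A"}, {"region": "Region A"}, {"region": "Region A"},
    {"region": "Region B"},
]

def hot_spots(posts, threshold=3):
    """Return regions whose flagged-post count meets the threshold."""
    counts = Counter(p["region"] for p in posts)
    return sorted(region for region, n in counts.items() if n >= threshold)

print(hot_spots(flagged_posts))  # → ['Region A']
```

Because only aggregate counts per region leave the pipeline, no individual post or author needs to be exposed to decision-makers.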

In November, Facebook rolled out an AI program that scans posts and live videos for threats of suicide and self-harm, and alerts a team of human reviewers, who can contact emergency responders if needed. “In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times,” Facebook founder Mark Zuckerberg wrote. The company also reported that its AI has been able to identify and remove 99% of posts related to the terrorist groups ISIS and Al Qaeda before users flagged the content, “and in some cases, before it goes live on the site.” Full Story

Other Articles of Interest


Good Time To Buy IBM or Should You Wait? (Mar 15)

Is the Bitcoin Bull Market dead or just taking a breather? (Mar 8)

Is this the end for Bitcoin or is this a buying opportunity? (Jan 24)

Stock Market Insanity Trend is Gathering Momentum (Jan 10)

Is value investing Dead (Jan 9)

Irrational markets and Foolish Investor: perfect recipe for disaster (Jan 5)

Stock market Crash Myths and Realities (Jan 3)

Bull-Bear Markets & Arrogance (Jan 1)