The Brave New World of A.I.-Powered Self-Harm Alerts
Dec 09, 2024
Dawn was still hours away when Angel Cholka was awakened by the beams of a police flashlight through the window. At the door was an officer, who asked if someone named Madi lived there. He said he needed to check on her. Ms. Cholka ran to her 16-year-old’s bedroom, confused and, suddenly, terrified.
Ms. Cholka did not know that A.I.-powered software operated by the local school district in Neosho, Mo., had been tracking what Madi was typing on her school-issued Chromebook.
While her family slept, Madi had texted a friend that she planned to overdose on her anxiety medication. That information shot to the school’s head counselor, who sent it to the police. When Ms. Cholka and the officer reached Madi, she had already taken about 15 pills. They pulled her out of bed and rushed her to the hospital.
More than a thousand miles away, at around midnight, a call came in on the landline of a mother and father in Fairfield County, Conn.; they could not reach it in time to answer. Fifteen minutes later, the doorbell rang. Three officers were on the stoop asking to see their 17-year-old daughter, who had been flagged by monitoring software as at urgent risk for self-harm.
The girl’s parents woke her and brought her downstairs so the police could quiz her on something she had typed on her school laptop. It took only a few minutes to conclude that it was a false alarm — the language was from a poem she wrote years earlier — but the visit left the girl profoundly shaken.
“It was one of the worst experiences of her life,” said the girl’s mother, who requested anonymity to discuss an experience “traumatizing” to her daughter.
In the array of artificial intelligence technologies entering American classrooms, few carry higher stakes than software that tries to detect self-harm and suicidal ideation. These systems spread quickly during Covid shutdowns as more schools began sending laptops home with students.
A federal law required that these devices be fitted with filters to ensure safe internet use, but educational technology companies — GoGuardian, Gaggle, Lightspeed, Bark and Securly are the big ones — also saw a way to address rising rates of suicidal behavior and self-harm. They began offering tools that scan what students type, alerting school staff members when a student appears to be contemplating hurting themselves.
Millions of American schoolchildren — close to one-half, according to some industry estimates — are now subject to this kind of surveillance, whose details are disclosed to parents in a yearly technology agreement. Most systems flag keywords or phrases, using algorithms or human review to determine which ones are serious. During the day, students may be pulled out of class and screened; outside school hours, if parents cannot be reached by phone, law enforcement officers may visit students’ homes to check on them.
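A rough sketch of the flagging step, the part that matches typed text against lists of concerning phrases before an algorithm or a human reviewer judges severity, might look something like this; the phrases, severity tiers and matching rules here are illustrative assumptions, not any company's actual list.

```python
# Rough, hypothetical sketch of keyword flagging. Real products use far larger
# proprietary phrase lists, algorithmic scoring and human review.
import re

# Illustrative phrases and severity tiers only (assumed, not a vendor list).
SEVERITY_TIERS = {
    r"\bhow to die\b": "active_planning",
    r"\bkill myself\b|\bkms\b": "active_planning",
    r"\bwant to die\b": "questionable",
    r"\bsuicide\b": "questionable",
}

def flag(text: str) -> list[tuple[str, str]]:
    """Return (matched phrase, severity tier) pairs found in the text."""
    hits = []
    for pattern, tier in SEVERITY_TIERS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((match.group(0), tier))
    return hits

# A phrase from the Neosho example trips the highest tier ...
print(flag("how to die"))
# [('how to die', 'active_planning')]

# ... and so does innocuous classroom research, one source of false positives.
print(flag("Our health class is researching suicide prevention."))
# [('suicide', 'questionable')]
```

Simple phrase matching explains both the reach of these tools and the false positives that follow, the health-class research project and the Oscar Wilde play alike.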
It is impossible to say how accurate these tools are, or to measure their benefits or harms, because data on the alerts remain in the hands of the private technology companies that created them, while data on the interventions that follow, and on their outcomes, are generally held by school districts.
Interviews with parents and school staff members suggest that the alerts have, at times, allowed schools to intervene at critical moments. More often, they connect troubled students with counseling before they are at imminent risk.
However, the alerts have unintended consequences, some of them harmful. Rights organizations have warned of privacy and equity risks, especially when schools monitor the online activity of L.G.B.T. students. Civil rights groups charge that surveillance technologies unnecessarily put students into contact with the police.
As a mental health tool, the filters get mixed reviews. There are many false positives, which can be time-consuming for staff and unsettling for students. And in some districts, after-hours visits to students’ homes have proven so controversial that officials are scaling back their ambitions, limiting interventions to the school day.
Nevertheless, many counselors say monitoring software is helping them achieve an elusive goal: identifying children who are struggling silently, and reaching them in time. Talmage Clubbs, the Neosho school district’s director of counseling services, said he was hesitant to suspend the alerts, even over summer vacation, for moral reasons.
“It is hard to switch it off,” he said. “I mean, the consequences of switching it off is that somebody can die.”
A New World of Vigilance
On a recent Thursday, Mr. Clubbs arrived at work to find a small blizzard of alerts from GoGuardian Beacon, software that scans what students type on their school devices.
Twenty-six were false alarms — a health class was researching suicide — but there was also an “active planning” alert, set off at 9:30 a.m., when a student, typing on a Chromebook in a classroom, searched the phrase “how to die.”
Mr. Clubbs is a former police officer, and active planning alerts flip him into a familiar state of intense, blinkered focus. Straight away, the student would be pulled out of class and administered the Columbia Protocol, a suicide screening tool that begins, “Have you wished you were dead or wished you could go to sleep and not wake up?”
“When we get an active planning alert, everything drops,” Mr. Clubbs said. “We seek out that student and we talk to that student.”
Neosho is a small, conservative town on the western edge of the Ozarks, the kind of place where much of the population shows up for high school football games. It is also a place where children have struggled terribly. Between 2014 and 2018, Neosho had eight student suicides, an alarming number in a district of just under 5,000.
Often, it would emerge that the children had told friends of their plans, Mr. Clubbs said, but the friends hadn’t reached out to adults. “We were relying on parents and students to come forward themselves and say, Hey, we’re struggling, we need help,” he said. “And that wasn’t coming in.”
Neosho was one of GoGuardian’s first customers, beta-testing the software in return for a discount. (The company declined to answer questions about pricing; GovSpend, a database that tracks local contracts, said Neosho paid the company $46,000 for the 2022-23 school year.)
Two weeks into the school’s trial of Beacon, Tracy Clements, who was then the director of counseling, received a call from GoGuardian telling her a female student was searching, “How much Tylenol does it take to die?”
Unable to reach the girl’s parents, Ms. Clements drove to her house, knocked on the door and found her home alone. “She recognized me from school,” Ms. Clements said. “I said, ‘Do you know why I’m here?’ She said, ‘Because I’m about to kill myself.’” The girl was hospitalized, and Ms. Clements became a believer, eventually leaving the school system to work at GoGuardian.
Dramatic stories like that are unusual, though. Every day, Mr. Clubbs’s team sifts through and responds to the alerts, a task that occupies about a quarter of his work hours, and a third of his counselors’. He could not say how accurate the system was. “We’re not keeping any data like that,” he said. “We’re just responding to the alerts as they come in.”
Mr. Clubbs eventually concluded that the 9:30 active planning alert had been a prank. Students said false positives were common; in interviews, they recalled being flagged for text messages about hunting trips, historical research into the Ku Klux Klan and, in one case, the Oscar Wilde play “The Importance of Being Earnest.”
But some said alerts had helped them get care. One 17-year-old recalled being called in to see a counselor, who held up a printout of an email in which the girl had told a friend she was thinking of killing herself. “She’s like a mom figure now,” the girl said.
Roughly three times a year, a school police officer is sent to a student’s home to intervene if a suicide attempt appears to be imminent, said Ryan West, chief of the school’s police department. Generally, he said, these visits take place between 11 p.m. and 2 a.m. Parents tend to be “really unsettled” and say they had not realized such a visit was even possible, though “they all sign a tech agreement that spells it all out.”
The officer tells the parent what the child was typing, and in most cases the parent agrees to take the child for treatment the next day. Chief West said he “absolutely” believed the visits had saved students’ lives. “There are a lot of false alerts,” he said. “But if we can save one kid, it’s worth a lot of false alerts.”
Late-Night Police Visits
Identifying people at risk for suicide is a needle-in-a-haystack problem.
Thoughts of suicide are common; in the Centers for Disease Control and Prevention’s most recent survey, one in five high school students — around three million people — reported considering suicide in the last year. Half as many, or 1.5 million, reported making an attempt. Deaths by suicide in people under 24 are rarer, numbering around 7,000 a year.
The challenge for an algorithm is accuracy: How often does it miss children who are suicidal? When it flags a student as suicidal, how often does it turn out to be true? Then there is the question of follow-up: What care can schools provide for children in crisis? And does it help?
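The survey figures above hint at why the second question is so hard to answer well: genuine, imminent crises are rare relative to the enormous volume of student text being scanned, so even a detector with seemingly strong accuracy can produce mostly false alarms. A back-of-the-envelope Bayes calculation, using entirely hypothetical accuracy figures, illustrates the arithmetic.

```latex
% Hypothetical illustration only; no sensitivity or specificity figures have
% been published for these products.
\[
\mathrm{PPV} \;=\; \frac{p \cdot \mathrm{sens}}{p \cdot \mathrm{sens} + (1 - p)(1 - \mathrm{spec})}
\]
% Suppose 1 in 1{,}000 scanned messages reflects a genuine crisis (p = 0.001),
% the detector catches 90 percent of those (sens = 0.9), and it wrongly flags
% only 1 percent of everything else (spec = 0.99). Then
\[
\mathrm{PPV} \;=\; \frac{0.001 \times 0.9}{0.001 \times 0.9 + 0.999 \times 0.01} \;\approx\; 0.08,
\]
% that is, roughly one genuine alert for every eleven false ones, even under
% generous assumptions about the detector's accuracy.
```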
Since the technology companies providing the software to schools have not published their own findings, or submitted their data to independent researchers, it is difficult to answer those central questions, researchers and experts in mental health said.
“At this point, we don’t have enough data to understand how effective these technologies are,” said Dr. Laura Erickson-Schroth, the chief medical officer of the Jed Foundation, a nonprofit organization focused on preventing youth suicide.
“The evidence isn’t there,” said Jonathan B. Singer, co-author of “Suicide in Schools” and former president of the American Association of Suicidology. He added, however, that he saw promise in the technology.
“It takes time to train the algorithms to do the right thing,” he said. “Just because there’s not evidence yet, or just because we’re in the early days, doesn’t mean that it doesn’t work. It just means that it still needs time.”
A spokeswoman for GoGuardian said the company was “currently partnering with a reputable research institution” to publish a research study on the tool’s efficacy.
False positives become consequential when they prompt dramatic interventions, as in the case of police visits. The parents in Fairfield County said it was difficult to move past the visit from the police, which had been prompted by a poem that the girl had written years earlier. (She “needed something that rhymed with ‘pie,’” her mother said.)
The girl’s mother said she told the school about the visit, and a guidance counselor explained that she felt she had no choice but to call police; if she had failed to act, and the girl had harmed herself, she could not have forgiven herself. The family opted not to complain, fearing a confrontation would make things worse for their daughter.
But they were, they said, very angry. “There were people with guns coming to our house in the middle of the night,” the girl’s father said. “It’s not like they sent a social worker.”
Worries about police involvement have prompted some districts to revisit their use of self-harm alerts. Baltimore City Public Schools, which serves 75,000 children, introduced GoGuardian Beacon in 2021. Over the next seven months, the district sent the police to children’s homes 12 times, school officials told The Washington Post.
Dr. Leticia Ryan, the director of pediatric emergency medicine at Johns Hopkins Children’s Center, said she had treated around 10 students flagged by the software, and was “very impressed” by its accuracy. “What was also striking to me was that the majority of cases that I saw were serious or concerning enough to merit hospitalization, and that in the majority of cases the parents were not aware of the child’s suicidality,” she said.
But parents’ groups expressed discomfort about police visits. Ryan Dorsey, a Baltimore city councilman who has warned against involving police in student mental health services, said he had pressed school officials for data about how the alerts were being used, without success.
“Given the total lack of information on outcomes, it’s not really possible for me to evaluate the system’s usage,” Mr. Dorsey said. “I think it’s terribly misguided to send police — especially knowing what I know and believe of school police in general — to children’s homes.”
This fall, the district announced that interventions would be scaled back to school hours. Sherry Christian, a district spokeswoman, said the decision was driven by the large number of false positives, often resulting from student research for school projects or video games. She said the district would assess the effectiveness of more limited monitoring at the end of the year.
Called to the Principal’s Office
Parents described daytime alerts as less alarming, and some were grateful for them.
Ann Greene of Austin, Texas, said a counselor had called to report disturbing things her 11-year-old had typed on her Chromebook: “OMG it’s not even funny how much I want to kms,” the girl wrote, using an abbreviation for “kill myself” intended to slip past the algorithms. “I’m such a mistake. I just want to die.”
Ms. Greene said this did not come as a surprise; her daughter was in treatment for a mood disorder, and the night before they had argued over screen time. Her daughter protested that she had not meant it, but Ms. Greene looped in the girl’s pediatrician and it led to “one of our best conversations,” she said.
She also met with the school counselor and left feeling reassured. “Now the counselor is very clued in,” she said. “We have a person in the school who understands what we’re going through.”
But other parents said alerts had prompted schools to overreact. Jill Clark, who lives outside Boston, was alarmed to receive a phone call from her fifth-grader’s school after he had searched on the internet for a term related to suicide.
By the next year, though, she knew he was struggling: He had spent time in a hospital and been treated for suicidal ideation. Still, the boy was flagged by the software and pulled out of the classroom in sixth grade, something Ms. Clark said had “social and emotional repercussions.”
“It turned into much more of a negative thing because we’re getting information that, in some ways, we already have,” Ms. Clark said. “Unfortunately, this is really normal for this kid to experience distressing thoughts.”
She said she had begun to suspect that the alerts had been designed with the school’s legal liability in mind. “A kid gets pulled out of class because they feel they have to intervene, to say they have done all the things they could possibly do,” she said.
Students in some communities have protested monitoring software on privacy grounds. High school journalists in Lawrence, Kan., campaigned for the removal of the software Gaggle from school laptops, arguing that it interfered with schoolwork and student journalism.
The campaign was only partially successful; the school agreed to exempt the student journalists’ files from monitoring. Often, when the students made their case, administrators responded that the benefit of preventing student suicides outweighed any harm, said Jack Tell, 18, who was one of the newspaper’s editors.
Mr. Tell, who is now a first-year student at Westmont College in Santa Barbara, Calif., said he expected this kind of monitoring to become pervasive. “It certainly feels like the future,” he said. “It’s a technology that hasn’t been available in the past, and it feels like we’re moving towards a world where, if we can do it, we will do it.”
Breaking Down a Culture of Silence
In Neosho, people credited the alerts, in combination with other changes like on-site therapy, with breaking down a culture of silence around suicide. Nearly four years passed without a student suicide; one student died in 2022, and another this year. Jim Cummins, Neosho’s former superintendent, said he had no doubt that the technology had contributed.
“There is no way we can quantify if we saved a life, 20 lives, no lives, but the statistics don’t seem to lie,” he said. “Even if somebody were to come back six years later and say, ‘You can’t prove you saved a single life,’ my answer would be, ‘No we can’t.’ But I know we did everything we could to try not to lose one.”
The student who died in 2022 was Madi Cholka, the same girl who was dramatically saved by a police visit to her home in 2020.
During those years, Madi cycled through hospitalizations, and her mother, Angel, took elaborate measures to protect her, securing medications and weapons in a lockbox.
That night, though, her mother was fast asleep when Madi texted a friend on her school Chromebook, saying she was planning to overdose. The alert allowed Ms. Cholka to rush Madi to an emergency room, and from there to a psychiatric hospital an hour’s drive north.
That hospitalization did not solve Madi’s problems. After she was released, she kept trying to harm herself, though now she was careful not to type about her plans on her Chromebook. She died at 17, leaving behind a suitcase packed for another hospitalization.
“I’m sorry,” she wrote in a text to her mother.
Still, Ms. Cholka said, she was grateful for the Beacon alerts, which lifted some of the burden from her during those years of vigilance. She has heard the arguments about students’ privacy, and about the intrusion on families, and she brushes them away.
“I know for a fact — that alone kept my daughter here a little longer,” she said.
If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.