The Headline

Source: The Guardian

Translation: Budget-strapped schools are outsourcing the first line of adolescent mental health care to a llama chatbot, and the real question isn't whether it works; it's what we're normalizing in the process.

What’s Actually Happening

Hundreds of American schools are deploying AI-enabled mental health platforms at roughly $10 per student per year to monitor, triage, and respond to student emotional crises outside school hours. The leading platform, Alongside, uses a chat interface fronted by a cartoon llama named Kiwi to help students build emotional resilience, while its AI flags severe alerts for human counselors to follow up on. In Putnam County, Florida, a single counselor manages 360 middle schoolers with the tool as her primary filter. The system has, by her account, surfaced genuine crises, including one that may have saved a student's life.

The driver is not innovation. It is scarcity. Student-to-counselor ratios remain far above recommended levels, rural districts have virtually no access to licensed clinicians, and the adolescent mental health crisis has not waited for funding to catch up. AI is filling a vacuum that policy and budgets created.

The Distortion

The dominant distortion is framing this as a binary: AI counselor versus no counselor. That framing is doing significant work for the companies selling these platforms, because it makes the relevant comparison a chatbot versus nothing, rather than a chatbot versus adequately funded human support.

The second distortion is the safety narrative itself. Platforms like Alongside correctly point out that their AI is monitored by clinicians and is not meant to replace therapy. But the structural reality is that in resource-depleted schools, “not meant to replace” and “functionally replacing” are not mutually exclusive. When a single counselor uses an AI triage tool to manage 360 students, the AI has become the primary point of contact for the majority of those students’ emotional needs.

The third distortion is subtler: the comfort students feel talking to a chatbot is being framed as a feature. It is also a warning. Adolescents finding it easier to confide in a bot than a human is not evidence that AI counseling works. It may be evidence that the social skills required for human vulnerability are already eroding, and that these tools, however well-intentioned, are accelerating that erosion rather than reversing it.

The Incentive

For schools, the incentive is triage economics.

At $10 per student annually, AI mental health tools cost a fraction of what a licensed counselor does, require no benefits, and are available at 7pm on a Tuesday. For a 360-student caseload like Putnam County's, that works out to $3,600 a year. In an era of budget shortfalls and legislated counselor-to-student ratios that routinely go unmet, these tools are not a choice between good and better; they are a choice between something and nothing.

For the platforms, the incentive is a captive, growing, institutionally validated market. At least nine companies have closed funding deals since 2022. Schools provide scale, legitimacy, and a procurement pathway that bypasses the clinical licensing scrutiny these tools would face in formal healthcare settings. Getting embedded in school infrastructure at $10 per student creates switching costs that compound over time.

For policymakers, the incentive is to let the market solve a problem that adequate public investment would otherwise require them to fund. AI mental health tools in schools give legislators a way to point to solutions without allocating the resources that would make human solutions viable.

The Consequence

The near-term consequence is uneven and context-dependent. In districts like Putnam County, where a skilled and attentive counselor is actively using the tool as an extension of her own judgment, the system appears to function as intended — surfacing crises that might otherwise go unnoticed and triaging routine emotional needs efficiently.

That is a real benefit.

The structural consequence is more troubling. As these tools scale across hundreds of schools with varying levels of human oversight, the baseline assumption shifts: AI triage becomes the default, not the exception. Counselor headcount doesn't increase, because the AI is handling it. Over time, the institutional memory of what adequate human mental health support looks like fades, and the comparison point becomes the chatbot, not the clinician.

The longest-term consequence is what Sam Hiner identifies as the parasocial risk: students developing one-sided emotional attachments to AI systems that simulate care without providing it.

A platform that tells a student "I'm proud of you" is not building resilience. It is substituting for the social accountability that human relationships require. If these tools habituate adolescents to emotional interaction that carries no reciprocal weight, the skills required for genuine human connection don't develop on schedule. And unlike a data center land deal, that cost doesn't show up on a balance sheet. It shows up a decade later.

The Calibration

The honest assessment is twofold: AI mental health tools in schools are doing real good in a system that is genuinely failing students, and that real good is being used to justify a deployment trajectory the evidence does not yet support at scale.

The counselor in Putnam County is not the problem. She is, in fact, the solution: a skilled human using a tool intelligently, maintaining oversight, reading body language when alerts prove false, and building trust with families over time. The question is whether that model scales, or whether what scales is the tool without the counselor.

The deeper calibration is this: a society that cannot fund adequate school mental health support but can rapidly adopt $10-per-student AI triage tools has not solved its mental health problem. It has automated its avoidance of it. The technology is not the distortion. The willingness to accept it as a substitute for structural investment is.

When the comfort a child feels talking to a llama chatbot becomes evidence that the system is working, something important has already been conceded.

Next calibration: 1 pm (GMT). Stay sharp.