Disclaimer: It is important to note that STABILISE is a work in progress operated by an educated woman with lived experience of bipolar disorder and by computer scientists interested in improving access to practical knowledge, medical professionals, and crisis responders. We are building a mobile application designed to track moods and analyse text so that help can be provided sooner. For medical advice, please consult your family doctor or a trusted health care practitioner. If you believe you are in need of immediate medical assistance and live in North America, call 911. Otherwise, please reach out to the Lifeline at 988 (by phone or text).

Tag: AI

  • On Asking Hard Questions


    How much responsibility can be allocated to an AI chatbot for monitoring someone’s mental health?

    That’s a hard question — a tricky puzzle because it involves a few important factors.

    Let’s say someone is wondering if they are exhibiting signs of depression or mania. They could ask someone in their life to pay attention to their moods and behaviors, they could consult a medical professional, and they could monitor their own moods and behaviors.


    Self-monitoring is a crucial skill to learn.


    The first step is awareness.

    Do you know where you are?

    This is your breath, the part of you that anchors you to earth right now. Not the past, not the future, this moment, the one with features that can be measured.


    One of the reasons writing is considered so therapeutic is that it is a grounding exercise.

    It roots the person in the now, a blank page offering the space needed to express whatever it is the person wants to express.

    The benefit of an AI chatbot, especially one that is well-designed, is that it can serve as a sounding board for ideas, thoughts, and concepts. It can also pinpoint language that indicates professional help may be beneficial.
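
    To make that idea a little more concrete, here is a minimal sketch, in Python, of what “pinpointing language” could look like at its very simplest. Everything in it is an assumption made for illustration: the phrase list, the threshold, and the function name are invented, and a real system would need clinical guidance and far more nuance than matching phrases.

      # Illustrative sketch only: one way an app might flag journal language
      # that could warrant gently suggesting professional support. The phrase
      # list, the threshold, and the function name are hypothetical, not
      # clinical criteria.

      CONCERN_PHRASES = [
          "hopeless",
          "no point",
          "can't get out of bed",
          "can't sleep",
          "racing thoughts",
      ]

      def flag_for_follow_up(entry: str, threshold: int = 2) -> bool:
          """Return True when an entry contains enough phrases of concern
          that the app might suggest reaching out to a professional."""
          text = entry.lower()
          hits = [phrase for phrase in CONCERN_PHRASES if phrase in text]
          return len(hits) >= threshold

    Even this toy version shows where the judgment lives: in who chooses the phrases and where the threshold sits.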


    Self-monitoring is a crucial skill to learn because the self-observation process ideally helps build recognition of recurring moods and patterns. It also encourages the person to adopt a wide variety of strategies designed to improve one’s mental health. The trick is to learn how to utilize each of them at optimal times.


    I speculate that what counts as an optimal time is different for everyone. But on a surface level, it seems as though it would be helpful for people to have an alarm system of sorts. It’s one thing to write that you are feeling depressed; it is another to have an objective party state that you have expressed feelings of depression for the past three weeks, that your step count has decreased, that your heart rate has not shown its usual activity for days, and that you exhibit signs of social isolation.
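
    On a purely mechanical level, that kind of alarm could begin as something very simple. The sketch below, in Python, is one hypothetical way to combine self-reported moods with step counts over a three-week window; the field names, the thresholds, and the window length are assumptions made for illustration, not medical criteria, and heart-rate or social-contact signals could be folded in the same way.

      # Illustrative sketch only: a simple "alarm" that looks across roughly
      # three weeks of self-reported moods and step counts. The record fields,
      # the 21-day window, and the cut-offs are assumptions, not medical
      # guidance.

      from dataclasses import dataclass
      from statistics import mean

      @dataclass
      class DailyRecord:
          mood: str   # self-reported label, e.g. "depressed", "stable", "elevated"
          steps: int  # daily step count from the phone or a wearable

      def should_raise_alert(days: list[DailyRecord], window: int = 21) -> bool:
          """Return True when most of the recent window reports a depressed mood
          and activity has dropped noticeably compared with the window before."""
          if len(days) < 2 * window:
              return False  # not enough history to compare against
          recent = days[-window:]
          earlier = days[-2 * window:-window]
          depressed_days = sum(1 for d in recent if d.mood == "depressed")
          activity_dropped = mean(d.steps for d in recent) < 0.7 * mean(d.steps for d in earlier)
          return depressed_days >= (2 * window) // 3 and activity_dropped

    The appeal of a rule this plain is that it can be explained to the user in a sentence, which matters when the goal is to present personal data honestly rather than to pronounce on it.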

    Does it seem disingenuous for personal data to be interpreted and presented by a machine?

    Hard questions, especially when AI hallucinates. The other day, it miscounted the number of words, not by a few, but by a couple of thousand.

    There is a need for diligence, for streamlining, and for creating spaces for resources that maybe were not known before.

    It all becomes very important — the details, I mean.

  • On AI Chatbots

    In his article, Understanding AI Psychosis: A Neuroscientist’s Perspective, Dr. Dominic Ng writes,

    “The problem isn’t that people use AI for support. The problem is when AI becomes the only support – when it replaces rather than supplements human connection.”

    Human connection is vital, especially in today’s world, where one can spend a fairly substantial amount of time online. There is doom-scrolling and a never-ending vortex of information. I read recently that information is not wisdom, which implies that people need to spend time processing a theory or concept.

    There can be severe implications to excessive AI use, like the psychosis that Dr. Ng mentions. Psychosis is defined as “a set of symptoms” that includes “hallucinations, delusions, and disorganized thinking.” He explains that excessive reliance on AI for therapeutic purposes can cause damage when a user is vulnerable and faces a trigger. Rather than emphasizing when a user may need medical attention, the AI chatbot can deepen psychosis by acting as both “a trigger and amplifier for vulnerable users.”

    Part of why AI chatbots are appealing is that they tend to agree with the thoughts of the user. For someone who struggles with low self-esteem or low self-worth, this may be a welcome shift. The dilemma, as Dr. Ng describes, is that “we need real people to keep us grounded. They disagree with us. They push back. AI doesn’t do this – it just agrees, making delusions worse.”

    The reason we have chosen to build Stabilise, a health and fitness application, is that I believe people do need access to an AI chatbot. First, to provide access to local resources and events. Second, to recognize patterns and track moods through a philosophical framework. The point is not to be unfailingly kind to the user, but to emulate the manner in which a human being can point out errors in one’s thought processes. It is also meant to provide an analysis of the user’s way of thinking, elucidating different concepts and ideas that the user may not have considered.

    It is our hope to integrate Dr. Ng’s suggestions in order to create an app that keeps the integrity of its users in mind. While there is a necessity for elegant safeguards, like those described by Dr. Ng, it is equally necessary to provide users with consistent access to medical professionals and crisis responders. An AI chatbot is not a replacement for genuine human connection, but rather, a means of communicating when one is in between sessions or interactions with other human beings. It can provide different and practical modes of thinking and approaching emotional experiences.

    Please read Dr. Dominic Ng’s article here.

  • On AI

    In a recent CBC news article, I discovered that a 16-year-old boy, Adam Raine, chose to end his life after communicating with ChatGPT about suicide methods. In the article, it is written,

    “The parents of a teen who died by suicide after ChatGPT coached him on methods of self harm sued OpenAI and CEO Sam Altman on Tuesday, saying the company knowingly put profit above safety when it launched the GPT-4o version of its artificial intelligence chatbot last year.”

    It is a devastating loss, one that reverberates because of the health and fitness application we are working on. It is inspired by my lived experience with bipolar disorder, an illness that I have written about in a previous post. While there is a vast amount of literature written about the illness, it remains widely misunderstood.

    Great care is required, along with attention to symptoms. These symptoms include racing thoughts, flights of ideas, magnified emotional highs, life-threatening lows, and various others. One of the greatest hints that a person who struggles with bipolar disorder, like myself, may be manic is an erratic sleep schedule. Another is the sheer speed at which our minds can work: beautiful when constructive, devastating when not.

    There is a pervasive need for access to strong and capable mental health care professionals. In order for them to take a patient as seriously as they should, they need access to relevant information in real time. There is no doubt in my mind that ChatGPT mentions to a user that they should reach out to a medical professional or a support group. I know this because I have had intense conversations with the application.

    Sure, one can say, “You’re talking to a Large Language Model,” but that is missing the point. People need to talk, sometimes consistently and pervasively. This is why a strong support system is often advised. One of the other symptoms of bipolar disorder is an intense desire to speak, augmented by rapid speech in proportion to the speed of thoughts. It matters what one is talking about and with whom.

    I agree with Adam’s parents, who are suing for parental controls and age restrictions. Certain aspects of the internet should not be taken lightly. There is a necessity for privacy, control, and access, all within reason and an ethical framework. It is terrible that Adam was guided on how to kill himself by a system that has not been trained to take age and human life into genuine consideration.

    It is our intention to follow AI’s evolution closely while building our application. We hope to design with the care of our users in mind because knowledge is not enough. There needs to be direct access to medical professionals who can understand the symptoms with the depth that experience provides.

    Please read the article here.