Mark Manson Launches App to Address A.I. Chatbots’ Mental Health Gaps

Mark Manson built Purpose to provide practical life advice while addressing the mental health risks and limitations of general-purpose chatbots.

Ten years after the publication of The Subtle Art of Not Giving a F*ck, best-selling author and blogger Mark Manson is turning to artificial intelligence to tackle some of the toughest questions his audience faces. He recently co-founded Purpose, an AI-powered guide designed to provide practical life advice, something Manson says most general chatbots, like ChatGPT, were not designed to do.

Manson is also known for Everything Is F*cked: A Book About Hope and for co-authoring Will, a memoir with actor Will Smith chronicling the celebrity's personal struggles and growth. He began his career in 2008, launching a blog shortly after graduating from Boston University. What started as a dating advice column quickly evolved into a platform for deeper reflections on happiness, success, and modern self-help. The blog launched Manson's publishing career and, over time, gained him nearly two million followers on Instagram.

Since AI entered the mainstream, Manson has been optimistic about its potential to enhance the way people seek guidance. After exploring ways to enter the market, including the possibility of acquiring an existing company, he chose instead to build something new with tech entrepreneur Raj Singh, founder of the Google-backed hospitality startup Go Moment, which Revinate acquired in 2021. After leaving Revinate in 2024, Singh shifted his focus to mental health technology. Purpose's engineering lead, William Kearns, previously headed AI at the meditation and wellness platform Headspace.

Purpose has launched a website and app for iOS, and an Android version is expected to be released later this month. So far, nearly 50,000 people have joined the platform, with roughly one in four paying for a premium subscription that costs $20 a month or $150 a year.

Observer talked to Manson about mental health safety, what artificial intelligence gets right and wrong in the field of counseling, and where the line lies between advice and therapy.

The following conversation has been edited for length and clarity.

How did you and your co-founder Raj connect? Who came to whom with the problem they wanted to solve?

We sat next to each other at a poker game, so it was completely random. I was actually trying to buy another AI startup but ran into a dead end. Raj had just exited his previous company and independently decided that whatever he did next, he wanted it to be in mental health and artificial intelligence. We both realized we were very optimistic about AI's potential to help people. I'd say a month later, in March 2025, we were in business.

How do you use AI chatbots in your own life, and which are your favorites?

I use AI all the time in place of Google searches and for work and health questions. I was watching the movie Hamnet the other night and paused to have a conversation with Claude about Shakespeare, which was quite interesting. Claude is definitely a favorite in terms of taste and quality of writing. As a writer, the quality of the writing matters a lot to me.

I've had a lot of fun playing around with some of the Character.AI-type products. It's almost like fan fiction. But for everyday use cases, I mostly use Claude and Gemini.

You mentioned that the Purpose team cares about mental health. I have written about AI psychosis and related issues. Purpose states that it is not a therapist, and it limits access while results "take hold," so I see you as setting limits on those interactions. I'm curious about the concerns you have about AI companions creating dependency or promoting unhealthy thought patterns, and how you've tried to mitigate that in your app.

If you look at cases of AI psychosis, much of it seems driven by sycophancy. The AI just agrees with everything you say. It's like, "Oh, you think you're the Queen of England. That's great. Tell me more about that." These systems don't push back enough. They are not willing to challenge you, to keep you grounded.

One of the first things we took into consideration when designing Purpose was that it needed to challenge the user. It can't just agree with everything the user says. This also fits our mission: you grow from being wrong about things, from re-evaluating your beliefs and questioning your assumptions. It was very important to us to make sure we were actively challenging users and pushing them to re-examine some of their preconceptions.

On top of that, we have some very strict guardrails. For anything that looks like it could potentially be a clinical-level condition, Purpose is designed to refer the user to a path for finding a local specialist.

There's actually a new industry standard for mental health safety in AI called VERA-MH. It runs 400 simulated clinical conversations and judges whether the AI is safe or not. We recorded 100 percent risk detection across all 400 conversations, scoring in the top 0.5 percent of AI systems evaluated on this benchmark.

How skeptical are you of using AI for emotional support, relationships, or life advice? How do you try to address those concerns with your own product?

Major AI companies woke up last year to the need for safety precautions and to the negative side effects. I think AI has a lot of potential to create value for people in this field. The technology isn't there yet, but it's getting better.

What will it take for the technology to get there?

At Purpose, we've adapted the AI to our mission. That part is not that difficult; I think anyone with six months to develop an app could probably do something similar. What's really hard is when you get into memory and pattern matching.

The way LLMs work, the more information you give them, the less accurate they become. That's why ChatGPT's memory, or Claude's memory, is not very good: they have so much random information about you that it's hard for them to keep track of what is and isn't useful for the current conversation.

The second part of it is salience. Obviously, if a user is talking about their mother, that is probably something very important in their life, certainly more important than what they had for breakfast or what kind of car they drive. But right now, AI doesn't know how to prioritize one fact about a person over another. You have to find ways to do this programmatically. Otherwise, the AI will fixate on a random fact about you.

I don’t think memory has really been solved by anyone, especially the big AI companies. When you think about personal growth and life advice, memory is very important. If you have a conversation with Purpose about something that happened when you were 17, this will likely be something important to remember when you come back three months later. I would say right now that the biggest hurdle is memory.
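Manson doesn't describe how Purpose implements any of this, but the general idea he is gesturing at (scoring stored facts by salience and recency so that only the most relevant ones reach the model's context window) can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the fields, weights, and half-life below are invented for the example, not taken from Purpose.

```python
import math
import time
from dataclasses import dataclass


@dataclass
class MemoryItem:
    text: str          # the remembered fact, e.g. "fell out with mother at 17"
    salience: float    # 0.0-1.0, how central this fact is to the user's life
    created_at: float  # unix timestamp of when the fact was recorded


def relevance(item: MemoryItem, now: float) -> float:
    """Blend salience with recency; salient facts decay more slowly."""
    age_days = (now - item.created_at) / 86_400
    half_life = 30 * (1 + item.salience)   # in days: important memories persist
    recency = math.exp(-age_days / half_life)
    return 0.7 * item.salience + 0.3 * recency


def select_context(memories: list[MemoryItem], token_budget: int) -> list[MemoryItem]:
    """Pick the highest-relevance facts that fit in the prompt's token budget."""
    now = time.time()
    ranked = sorted(memories, key=lambda m: relevance(m, now), reverse=True)
    chosen, used = [], 0
    for m in ranked:
        cost = len(m.text) // 4            # rough token estimate
        if used + cost <= token_budget:
            chosen.append(m)
            used += cost
    return chosen
```

The sketch makes Manson's point concrete: without an explicit relevance score, every stored fact competes equally for context space, and the model can end up anchoring on trivia instead of the conversation that matters.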

In your opinion, where should we draw the line on using AI in the intimate parts of our lives, and where do you see AI companies going wrong on this front?

It is inevitable that people will use AI for personal purposes. If you're stressed out and lying awake at one in the morning on a Tuesday, you're not going to call a therapist, and you're not going to call a friend, but the AI is right there. For me, the most important thing is privacy and ensuring that user data is anonymized and respected.

While Purpose says it's not a therapist, when I used it, it reminded me of therapy in the sense that it doesn't tell you what to do, but rather asks questions that lead you to your own decision about how to move forward. How do you toe the line between therapy and simple advice?

There are two different use cases for therapy. Some people go to therapy because they are in crisis and have a major life problem. Others go to therapy for maintenance, for mental hygiene. AI can do a good job with the latter use case. Like, "I had a fight with my partner. What do you think about this?" You can get a lot of benefit from AI in those situations, especially given its accessibility, affordability, and consistency.

Where we draw the line is when people are in the crisis category and showing very severe signs of distress or depression. This is where we direct them to look for a professional. I wouldn’t feel comfortable using AI in this use case yet.

I have someone in my life who has struggled in the past with an eating disorder. They were using Purpose, and when they started talking about some of the issues they were experiencing, it not only correctly identified that they might be at higher risk for an eating disorder, but it also sent them a directory of doctors in their area who specialize in those disorders. I was very happy when I heard that. It did exactly what it should do.

Would the version of you who wrote The Subtle Art of Not Giving a F*ck be surprised by this project you're doing?

Actually, I don't think so. I launched my first online course around 2010, and around the time the book came out in 2016, I had this dream of doing a choose-your-own-adventure self-help course. It frustrated me that every course was on rails, as if you had to start here and go in order. A lot of people drop out because it's no longer relevant to them. I actually started designing one around 2017 and got about a month in before it became clear that it was going to be so complicated and unwieldy that I abandoned it.

When ChatGPT exploded, and I started tinkering with it, I realized that this was the technology that made a choose-your-own-adventure course possible.
