We are living in strange times. Never before have so many “mentoring programmes”, “transformational coaching” packages, and “psychological methods” been offered for sale, and never before has it been easier to generate them with artificial intelligence. An entire industry promises change, clarity, and discipline, but often there is no human behind these promises, only a robot.
I’ve been there too. I enrolled in a comprehensive lifestyle change programme designed to help me integrate good habits into my daily routine. I had access to a nutritionist mentor who was constantly available to adjust my diet, encourage me, and provide emotional support. I knew, of course, that I wasn’t his only client. I suspected that many others were probably receiving WhatsApp messages similar to mine, only personalised with a different name. What I was getting was not a real relationship that would hold me accountable, but synthetic motivational speeches for which I was paying more than I would be comfortable admitting. And yet it worked: even though the relationship was artificial, it was the relationship itself that sustained the change. This is an uncomfortable lesson of our time: amid an information flood, relationships are in short supply.
When the time came to choose whether to renew or cancel my subscription, I hesitated. Discount offers came thick and fast and grew ever more generous. Text messages turned into calls, and the tone became increasingly insistent. In the end, I decided to continue alone, with a clear plan in mind: I was going to build my own AI agent to replace my nutritionist. I thought that any adult with average digital skills could easily save time and money by doing this.
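For the curious, what I had in mind was nothing more elaborate than a system prompt wrapped around an off-the-shelf model. The sketch below is a minimal illustration, assuming the OpenAI Python SDK and an API key already set in the environment; the model name, the profile details, and the prompt itself are placeholders of my own, not a recommendation or a finished product.

```python
# A minimal "AI nutritionist" sketch: a fixed personal profile folded into a
# system prompt, plus a plain chat loop. Assumes the OpenAI Python SDK is
# installed and OPENAI_API_KEY is set; the profile values and the model name
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PROFILE = {
    "diet": "vegetarian",
    "allergies": ["peanuts"],
    "goal": "keep meals simple and consistent on workdays",
}

SYSTEM_PROMPT = (
    "You are a cautious nutrition coach. Stick to mainstream dietary "
    "guidance, respect this profile, and refer any medical question to a "
    f"human professional. Profile: {PROFILE}"
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    user_text = input("you> ")
    if not user_text.strip():
        break  # an empty line ends the session
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("coach>", answer)
```

The point of the sketch is only that the mechanics are trivial; everything the rest of this essay worries about lies outside the code.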
The flaws of a flawless relationship
By 2025, the ChatGPT platform had approximately 800 million weekly active users, most of whom used it for personal rather than professional purposes. According to a Harvard Business Review report, one of the most popular uses is what users refer to as “therapy”: conversations in which the chatbot becomes a confidant, advisor, or temporary replacement for a human specialist. The appeal is easy to see: support, personalised recommendations, planning, and motivation at any time of day or night, at a fraction of the emotional and financial cost of in-person therapy. However, as these systems become more prevalent in everyday life, studies are beginning to highlight potential risks, such as AI addiction, overconfidence in recommendations that have not been scientifically validated, the replacement of human interaction, and the perpetuation of health myths.
Although it happened very recently, most of us don’t remember how artificial intelligence became so integral to our daily lives, influencing everything from our diet and work habits to our relaxation and motivation. However, it is not this forgetfulness that should concern us. Rather, we should be troubled by a more uncomfortable question: what happens when the support that helps us to change no longer comes from imperfect human relationships, but from ones that work flawlessly? What does this preference say about us, rather than about artificial intelligence?
The market of presence
We are experiencing a crisis of supportive relationships—an epidemic of loneliness, as it has been called. We are the generation with more access than ever to information, guides, and best practices in any field. However, we are also a generation that acutely feels the lack of continuity that would be provided by someone who constantly follows our progress, recognises our efforts, and stays with us even after the initial enthusiasm has evaporated.
This void has been filled by a market for presence, because presence is what coaches, mentors, and therapists ultimately offer when they facilitate personal transformation. Artificial intelligence has moulded itself to this need because our relationships are already mediated by screens, asynchronous, and partially scripted. The difference between an overworked mentor and a well-trained chatbot has therefore become surprisingly small. But what can you expect from a lifestyle chatbot?
First step: food
Nutrition is a paradoxical territory. We generally know what we should do. We know that vegetables are good for us and that excess sugar is not. We know that protein, fibre, and hydration matter. We also know that miracle diets betray us in the long run, and that lasting change requires us to form habits. However, few areas generate more confusion, anxiety, and guilt. This is why nutrition is a very relevant area in which to evaluate the effectiveness of artificial guidance: the rules are relatively clear, but their application depends on real bodies, personal contexts, and the fragile nature of motivation.
Let’s broaden the scope. Recent studies suggest that ChatGPT performs surprisingly well, at least where general principles are concerned. When asked about nutrition for conditions such as type 2 diabetes or non-alcoholic fatty liver disease, its recommendations are, in most cases, in line with clinical guidelines: reducing ultra-processed foods, increasing the intake of vegetables, lean proteins, and fibre, and ensuring adequate hydration. Nutritionists who evaluated these responses considered a significant proportion of them to be “adequate”. In other words, they are not innovative, but they are correct. ChatGPT does not usually invent exotic nutritional theories. It coherently repeats the consensus.
This informational mediocrity is, in fact, one of its strengths. Most of us do not need a revolutionary diet, but rather a constant reminder of what we already know but do not consistently apply. An AI agent can do just that: remind us of things, rephrase them and adapt general principles to our stated preferences, such as vegetarianism, allergies, and cultural habits, without getting tired or judging us. In this sense, it functions more as a mechanism for maintaining direction than as a source of information.
Information versus discernment
The problem arises when nutrition ceases to be general and becomes highly specific. In contexts involving strict dietary restrictions, such as chronic kidney disease, severe metabolic imbalances, and eating disorders, the same studies reveal clear limitations that can sometimes be dangerous. For example, ChatGPT can offer correct principles and then concrete recommendations that contradict them within the same conversation. In some simulations, diets generated for patients with kidney disease exceeded acceptable protein limits despite the theoretical explanations being correct. The algorithm knew the rules, but it didn’t know when to stop.
This discrepancy highlights a fundamental difference between human dietitians and artificial systems. Unlike artificial systems, humans not only provide information, but also exercise discernment. They recognise exceptions. They sense when something is “not right”. A language model, on the other hand, optimises for consistency and completeness rather than prudence. It lacks the moral discernment to know when silence is necessary.
Admittedly, for most healthy people who are not looking for a strict medical plan, but rather a better daily structure, it remains useful. A personalised AI agent can provide planning, meal ideas, quick adjustments, and, above all, continuity. It doesn’t forget. It doesn’t disappear. It doesn’t lose patience. However, this is where the fine line between support and substitution emerges. While an AI agent can encourage better choices, it cannot notice drastic weight loss or detect obsessive diet rigidity. Nor can it intervene when discipline turns into control. In an area where the relationship with food is often emotionally charged, this blindness can be dangerous.
The robot, my therapist
If eating is a test of discipline and consistency, then mental health is a test of vulnerability. Here, we are talking about more than just applying known rules; we are talking about suffering, confusion, fear, and meaning. The fact that more and more people are choosing to discuss these issues with a chatbot is not just a result of technological enthusiasm; it is also a social symptom.
At first glance, the attraction is easy to understand. Language models are trained to be cooperative, reflect the user’s emotions, and keep the conversation going. They almost always say what is comforting to hear. However, reassurance is not the same as intervention. Simulated empathy is not the same as clinical assessment. Just because someone feels “heard” does not mean they are actually safe.
This is where the first serious disconnect arises. A human therapist offers not only validation, but also friction. They ask uncomfortable questions. They notice inconsistencies. They recognise warning signs, even when they are not explicitly stated. In contrast, a chatbot operates solely on the basis of the text it receives and the statistical probabilities that govern the response. If the user does not clearly articulate the danger, the system has no way of “sensing” it.
From this point of view, the studies that have tested chatbots’ responses to crisis scenarios are disturbing. In situations where questions contained indirect references to suicidal thoughts or loss of hope, some systems took the wording literally and provided factual information instead of recognising the emotional subtext. This is an inherent limitation of chatbots: the algorithm cannot know when a seemingly neutral question is actually a cry for help.
However, there is another risk that is more difficult to recognise. A relationship with a chatbot can become a form of attachment without reciprocity. For the user, the conversation is personal and sometimes intimate. For the system, however, it is merely a language statistic. This asymmetry is characteristic of the illusion of a connection that does not deliver on its promises. In cases of heightened vulnerability, this type of attachment can accentuate isolation rather than alleviate it, encouraging withdrawal from more difficult but essential human relationships.
Mental health professionals warn that the danger lies not only in misguided advice, but also in postponing real help. A chatbot can offer temporary relief, which is enough to postpone the difficult decision of seeking human support. While the user “gets by” with the help of AI, their symptoms may worsen and the crisis may deepen.
This is perhaps the most obvious limitation of artificial guidance. ChatGPT can discuss emotions, suggest reflective exercises or breathing techniques, and normalise challenging situations. However, it cannot take responsibility for the consequences of these conversations. It cannot assess risk. It cannot intervene. It cannot be held accountable. In a field where a person’s safety is at stake, these shortcomings cannot be overlooked.
Conversely, there is now a consensus that artificial intelligence has a legitimate role in conversations about mental health. Studies clearly demonstrate that AI can have measurable beneficial effects when used as an auxiliary tool for guided reflection, journaling, structuring thoughts, or simple emotional regulation exercises. Problems arise when this auxiliary role expands imperceptibly into substitution. Who decides when a chatbot has become more than temporary support? What happens when this line is crossed out of necessity because human alternatives are inaccessible, too expensive, or simply non-existent?
Endless optimisation
The next stop is personal productivity, where you arrive convinced that if you organise your time perfectly, control your habits perfectly, and optimise your energy perfectly, you will finally become a person who no longer suffers from chaos, fear, or guilt. In this context, ChatGPT is an ideal tool: it can make plans for you, provide lists, and break your life down into measurable steps.
Initially, you use the system to clarify your thinking. After a while, however, you start to calibrate your thinking to fit the system. You express your desires in a language it can understand: goals, personal KPIs, routines, and 21-day “challenges”. You translate your inner life into formats that can be managed by software. Anything that doesn’t fit into these formats—ambivalence, sadness for no reason, or exhaustion that can’t be solved with a ten-minute break—risks being treated as an execution error rather than a truth about yourself.
If we build our lives day after day around prompts, micro-improvements, and instant validation, we will end up loving control, predictability, and comfort without intending to. We will avoid everything that is difficult to manage: real conflicts, long conversations, silences, and questions with no right answer.
This is why even advanced discussions about digital well-being take on a paradoxical nuance: a screen reminds you to put it aside. A system designed to encourage conversation suggests that you take a break. In the past, discipline was a virtue formed through relationships and boundaries. Now, however, it tends to become a product feature or a set of settings.
The glue that holds everything together
Complications arise when the same auxiliary tool is required to function in several existential areas simultaneously. When we entrust the same system with nutrition, emotional regulation, and minor productivity tasks, we are not just outsourcing activities, but also our own discernment—the ability to decide what matters, when to stop, and who to ask for help.
A life lived to the fullest involves honouring our social dimension. It is true that an algorithm can deliver plausible truths and mimic patience, understanding, and even empathy. Real relationships, however, are a completely different matter. In real relationships, we experience the depth of reciprocity, but we are not immune to risk. Sometimes we experience conflict; at other times, we experience shame. We ask for and offer forgiveness, and we work to repair things.
What holds all these things together is not technology, but the answers we unconsciously give to questions of theological significance: “What will save me?”, “What will make me whole?”. In contemporary culture, these questions are no longer addressed to God, but to various everyday figures.
Technology enters the scene unapologetically, because more and more of us are answering the above questions by idealising risk-free relationships: those that do not expose our flaws or contradict us; those that do not force us to make repairs or ask anything of us. Those that are comfortable.
An exercise in honesty
In other words, the problem is not that we want change, but that we look for it in places where it cannot be found, or that we ask those places to play a greater role than they can. Risk-free, predictable, and controllable relationships can facilitate behavioural adjustments, but they cannot bear the weight of our wholeness as individuals. In the Christian tradition, beneficial transformation is always the result of an encounter that fosters awareness of one’s limitations, vulnerability, and acceptance of oneself and others: being known without being idealised, and being loved without being exempt from responsibility.
This does not mean that an AI agent is useless, only that it must be described accurately. An AI agent is strictly a structural tool, not a source of meaning. It supports habits, but it cannot substitute for community. While it can reduce background noise and therefore help with mental organisation, it cannot decide for us what is worthy of love. If we let it take on the role of a source of meaning, however, we risk diminishing our own lives until they fit into a prompt.
Today, artificial intelligence forces us to be honest about our needs. What are we actually looking for when we ask for guidance? Information? Structure? Or a way to avoid being alone in our efforts to live better? If the answer is the first two, AI can be an invaluable ally. However, if it’s the latter, we risk accepting a distorted version of the support we actually need.
It’s true that it’s challenging to resist giving technology roles that don’t belong to it. This difficulty seems to grow in parallel with the diversification of the tools that make technology more accessible. Whether we like it or not, today’s test of maturity is whether we can recognise when an artificial guide is sufficient and when we need a human being. That is why, for now, my AI nutritionist remains just an exercise in thinking. In non-artificial thinking and, until proven otherwise, critical thinking.
