“AI is replacing human tasks faster than you think.” “Wall Street Job Losses May Top 200,000 as AI Replaces Roles.” “AI Set to Replace Workers Across 41% of Companies in the Next Five Years.” And more recently: “Bill Gates Predicts Only Three Jobs Will Survive the AI Takeover—Here’s Why.” These headlines are fueling growing anxiety among those unfamiliar with artificial intelligence or those who see it as an insurmountable threat.
But if we were to turn our attention away from alarm and toward inquiry, recognising that fear is incompatible with learning, we might begin to ask the questions that truly matter in today’s climate: What exactly is this artificial intelligence that seems poised to replace us? How does it work, and how intelligent is it really?
It’s worth taking a closer look at one of the most talked-about branches of AI—large language models—to better understand both their potential and their limitations.
Large language models (LLMs), like ChatGPT, have captivated public attention much like storytellers once did—weaving narratives that mesmerised their listeners. Today’s storytellers, however, are vast neural networks loosely inspired by the structure of the human brain, which allows them to “understand” how we write and think, and to respond with phrases that sound remarkably human. But how, exactly, do they manage this? After all, a computer program doesn’t “know” English in the same way a person does, right?
From words to tokens…
For an LLM to make sense of the text in which we ask it a question, it first breaks the input down into smaller parts called tokens. These tokens can be full words or just fragments of words. The model then assigns a probability to every possible next token, effectively asking: “What is the most likely next word (or word fragment)?” Once the next token is selected, it is appended to the context, so the model knows what it has already “said” and can “imagine” the most suitable continuation. The process is similar to the auto-complete function found on smartphones, only on a vastly larger scale.
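To make this concrete, here is a minimal sketch in Python, assuming the openly available GPT-2 model from the Hugging Face transformers library as a small stand-in for today’s much larger commercial models. It splits a short prompt into tokens and prints the probabilities the model assigns to the most likely next tokens; the specific model, prompt, and library are illustrative choices, not a description of how ChatGPT itself is built.

```python
# Sketch: tokenisation and next-token probabilities with GPT-2
# (an illustrative stand-in, not the model behind ChatGPT).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# The prompt is split into tokens: whole words or word fragments.
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))

with torch.no_grad():
    logits = model(**inputs).logits

# The model assigns a probability to every possible next token;
# here we look at the five most likely continuations.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p.item():.3f}")
```

Running it shows the prompt split into a handful of tokens, followed by a short list of plausible continuations with their probabilities, which is exactly the “most likely next word” guessing game described above.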
AI, GPT, ChatGPT, Generative
You may have come across the term GPT (Generative Pre-trained Transformer), as in ChatGPT. GPT refers to the algorithm that enables the generation of new, human-like text. “Generative” means the model creates original content, “pre-trained” indicates it has already been trained before you use it, and “transformer” refers to the neural network architecture that powers the entire system. When you interact with GPT, you provide a prompt—an input text—and receive a response based on what the model has learned from a massive dataset. In fact, every time you give GPT a prompt, it treats it as a new question or an extension of the existing context.
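As a rough illustration of that prompt-and-response loop, the sketch below again uses the small GPT-2 model from the transformers library: the prompt becomes the context, and the pre-trained model generates a continuation token by token. The prompt and sampling settings are arbitrary choices made for the example, not the settings any particular product uses.

```python
# Sketch: a prompt is treated as context, and the pre-trained model
# generates a continuation one token at a time.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each new prompt starts a fresh context or extends an existing one.
context = "Q: What is a large language model?\nA:"
inputs = tokenizer(context, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,                       # sample from the probability distribution
    top_p=0.9,                            # keep only the most likely tokens
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated padding token
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```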
Mechanical kindness and built-in limits
As fascinating as they may be, LLMs don’t possess true “will” or “consciousness.” Even when they respond fluently—even when they seem to “think”—these systems do not understand the world the way we do. It’s essential not to mistake them for some kind of all-knowing entity. Yes, they can make mistakes. They can present false or misleading claims with confidence, especially when their knowledge contains blind spots—gaps that stem from the point at which their training data stopped. Their creativity is the result of highly sophisticated statistics, not personal experience.
The question, “How can it write so well without being a person?” is a fair one. The answer, though both simple and hard to grasp, lies in the nature of their training: LLMs rely on billions of patterns and correlations—far more than any human could memorise. These models analyse massive libraries of books, articles, websites, and user comments, learning the patterns of language and reproducing them in new combinations when prompted. So, there’s no magic, no mysticism—just an extraordinary amount of data organised in ways that can generate what looks like creative output.
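To show how far plain statistics can go, here is a deliberately tiny toy in Python: it counts which word tends to follow which in a few invented sentences, then generates text from those counts. Real LLMs learn vastly richer patterns over billions of examples, but the principle of reproducing learned patterns in new combinations is the same; the miniature corpus exists purely for illustration.

```python
# Toy illustration of "patterns in, patterns out": a bigram model that
# learns which word follows which and generates text from those counts.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for every word, which words have followed it in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

# Generate by repeatedly picking a continuation seen in the data.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"
```

Everything this toy produces is recombined from patterns it has seen, which is, in vastly more sophisticated form, what an LLM does with its training data.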
AI: an unusual conversation partner
It might feel odd to treat an LLM as a conversation partner, but in practice, that’s exactly what’s happening: we engage with it like a kind of virtual collaborator. Still, no matter how polished the response may sound, the human role remains essential—whether that’s checking facts, adding original insights, or correcting potential errors.
Despite their impressive capabilities, LLMs have clear limitations. For instance, when a conversation exceeds a certain number of tokens, the model can start to “forget” earlier parts of the exchange—a phenomenon users may notice when revisiting a topic discussed several hundred lines earlier. It’s a bit like losing track of where a long conversation began. But that doesn’t mean the information is lost forever, nor does it suggest that the LLM absorbs everything like a sponge.
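A rough sketch of why this “forgetting” happens is shown below: a model can only attend to a fixed window of tokens, so older turns are dropped once the budget is exhausted. The window size and the word-per-token counting here are simplifications of our own; real systems count sub-word tokens and use far larger windows.

```python
# Sketch: a fixed context window means older turns get trimmed away.
CONTEXT_WINDOW = 50  # token budget of our hypothetical model


def count_tokens(text: str) -> int:
    # Crude stand-in: treat each whitespace-separated word as one token.
    return len(text.split())


def trim_history(turns: list[str], budget: int = CONTEXT_WINDOW) -> list[str]:
    """Keep only the most recent turns that fit inside the token budget."""
    kept, used = [], 0
    for turn in reversed(turns):       # walk backwards from the newest turn
        cost = count_tokens(turn)
        if used + cost > budget:
            break                      # everything older is "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))


conversation = [f"Turn {i}: " + "word " * 10 for i in range(1, 11)]
print(trim_history(conversation))      # only the last few turns survive
```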
Privacy and how your data is used in training
A common concern among users is: “If I ask questions about my project, will that information stay in the model and become visible to everyone?” The model’s core knowledge is fixed during training; the information you type during a chat is processed temporarily, only to generate a response in that moment.
According to the companies developing these models, the training data doesn’t instantly update based on what you just typed. Conversations may eventually help improve the system, but not in a way that would recreate your exact dialogue. Instead, the process involves filtering: sensitive or irrelevant data is discarded, and only the remaining content—if useful—is annotated and potentially included in future training rounds. Anything beyond that would be incredibly complex and expensive to manage in real time, with every exchange.
However, major companies do acknowledge that they may—at times—use snippets of conversations, once they’ve been anonymised and filtered, to help refine their products. While it’s not yet entirely transparent which data is used or how, one thing is becoming increasingly clear: a large language model is not just a passive tool. That’s why, when discussing highly sensitive information, caution remains the wisest approach.
AI: fear or hope?
In essence, LLMs function like interactive libraries with vast vocabularies, capable of processing information in a fraction of a second—tasks that might take us hours. Still, it’s important to remember that we are working with language machines, not conscious beings. Recognising this allows us to harness their benefits while staying mindful of their limitations.
At a time when job displacement by AI looms large, the real choice isn’t between fear and hope—but between using these tools superficially or thoughtfully, based on critical thinking. The latter requires us to educate ourselves, to learn how these systems work so we can navigate conversations with them effectively. That means neither demonising nor idealising artificial intelligence.
Achieving this balance begins with the first conversation we need to master—the one with ourselves.