As AI becomes part of daily life, experts warn that overreliance on large language models may affect mental health, critical thinking, creativity, and human connection.
House-cleaning robots, self-driving cars, cameras linked to complex security systems – it seems that the cyberpunk world of our favourite 90s sci-fi movies is already becoming our reality.
And let’s not forget the cornerstone of many of these advancements: artificial intelligence.
AI might be the biggest buzzword of today. It is talked about everywhere – in how it is disrupting what was once a stable career path for hopeful accountants, in how university students use it to complete assignments, in how it competes with artists in creative expression.
Even in how it can play chess better than a human.
It seems almost inescapable.
With its digital hand reaching far and wide, it's only natural that when we talk about health, AI is present there too.
It has certainly proven to have numerous transformative uses in healthcare, for both professionals and everyday people. However, we also can’t deny that its downsides are becoming ever clearer – and much more alarming.
The Issues We Often Think About – And Don’t
AI has never played a bigger role in our lives, which means we need to take a hard look at exactly what it’s doing to us – especially our health.
AI can broadly be described as technology that simulates or mimics intelligent behaviour and augments reasoning and judgement. Discussions of AI ethics once revolved mainly around privacy, transparency, and algorithmic bias.
The topic has expanded since, in worrying ways.
Our current employment of AI – specifically generative AI – in daily life has already been normalised. We use it in everything, whether that’s to help us in our studies or work assignments, or to plan outings or trips. We’ve even poured our hearts out to it at 2am when a problem has been bugging us to the point of insomnia. We’ve directly experienced its many benefits, ranging from productivity to emotional comfort.
Yet why is the news being inundated with the mental health consequences of AI dependence?
The Pitfalls of AI Dependence
As individuals, we already use a fair bit of intelligent tech; common examples include devices such as the Roomba, or programs like ChatGPT.
They’re both AI-based. They both make life more convenient for us.
But what makes us fear the large language models (LLMs) more than the vacuum cleaner?
When We Outsource Cognition

One eases the physical load, while the other eases the cognitive one. However, while a physical activity like cleaning can be completely outsourced with little to no consequence, it’s a mistake to surmise that our own mental processes deserve the same treatment.
ChatGPT and similar GenAI models can be a wonderful partner for obtaining specific information, or for bouncing ideas around. But what happens if you always use it for your creating, your brainstorming, your own thinking?
There’s value in being intellectually stuck, in getting your neurons firing and brain juices flowing as you struggle to solve a problem. The trajectory you follow from incomplete idea to final solution can be intensely satisfying precisely because of the organic effort it took to get there.
When we use ChatGPT, however, that trajectory barely exists – the shortcut bypasses the journey of thought almost entirely.
Hence, the real concern arises when we rely on this tool too heavily.
Our capacity for innovative solutions and creative expression lies in how we consistently manage information, make judgements, and arrive at decisions. When we ask someone else – or something else – to do all this work for us, our critical thinking can erode over time.
Furthermore, getting immediate answers from AI can foster complacency, where we lose the desire to imagine beyond what a machine can manufacture for us. But our creativity – an intuition built up through education and experience – requires a more intricate formula than the blunt, aggregative nature of GenAI allows us to explore.
It’s not just limiting – it’s crippling.
When We Take the Human Out of Human Relations

With loneliness on the rise, it’s not unusual for anyone to turn to chatbots for some company. Even for those of us with fulfilling social lives, the odd talk with ChatGPT can still help with the brain dump.
But it’s important to keep in mind that – with the current generation of AI, at least – you don’t get the meaningful two-way conversation you would experience between two humans, especially if you plan to confide in the chatbot. You talk at it, and it simply gives you whatever you asked for in return.
Because that is what it is designed to do.
“It makes us think that these interactions are deeply personal and we feel understood, but unfortunately, this is actually AI just generating language patterns that they’ve learned rather than actually expressing real empathy,” said Dr Annabelle Chow, principal clinical psychologist at Annabelle Psychology, speaking to CNA.
“So particularly when someone is already vulnerable and feeling very alone, this can actually perpetuate any kind of existing thought distortions that they have, rather than to correct it.”
At most, it can only imitate complex human responses. Still, our interactions with this tool can be a dangerous influence on our mental wellbeing.
Moreover, while it can inspire you to feel an emotional connection to it, becoming overly dependent on it for social engagement can disincentivise you from reaching out to actual humans, while diminishing essential people skills. When you extrapolate the way you interact with a chatbot to the outside world, it can have a real, adverse impact on how you perceive and associate with your physical, social environment.
While some may consider the notion that AI threatens individual autonomy an overreaction, there is some truth to it. But does that mean we should avoid AI entirely?
Reducing AI Dependence, Regaining Autonomy
Whether we like it or not, AI may eventually become an indelible part of life.
So if we don’t want it inadvertently taking over our minds, it’s time to build boundaries.
On a societal and governmental level, it’s crucial for us to understand the strengths and risks of interacting with AI. However, according to Dr Chow, AI literacy is still very much an underdeveloped field.
“There’s insufficient education about this at the moment,” she stated.
This makes it all the more important to find ways to manage our AI use on an individual level, especially if you find you’ve been a little too preoccupied with it. If you’re developing an overreliance on ChatGPT, the warning signs include:
- Using it longer than you intended to.
- Thinking about it a lot and feeling a strong impulse to use it.
- Turning to it when you feel stressed, upset or uncertain.
- Using it to escape from your problems.
- Experiencing focus or productivity challenges, or issues engaging with daily life because of your use.
Some methods for limiting your use involve:
Identifying the Job or Task You’re Asking AI to Do
Be specific.
For instance, before you prompt the AI, write down: “I am using AI to [brainstorm ideas for my project / help me plan my vacation / help calm my nerves before a presentation].” Keep your focus on the assignment, not the activity of using AI.
Reconnecting With Your Own Judgement
When the AI responds to your prompt, don’t just take its answer. Give yourself a moment to consider what you would have done had you not seen its output, and think about what your next step should be.
Scheduling “Consulting Hours”
Try giving yourself no more than 20 minutes a day to consult a chatbot. If questions come up outside that window, note them down for the next day – if you even still need the answer by then. You’ll be happy to find that many of your problems can be resolved once you give your brain the time to work through them.
Making the First Draft Your Own
It’s easy to fall into the trap of getting your first thoughts from AI. Instead, try to get your initial impressions and considerations down without dwelling on writing them well, and remind yourself of your own resourcefulness and skills by recalling past examples. You’ll feel more assured handling tasks without artificial guidance.
Breaking Reassurance-seeking Habits
If you notice yourself constantly asking AI whether you’re doing something right, or what the best thing to do is, then you need to step back. Don’t let it make emotional decisions for you. Instead, ask it to give you three viable options, then stop there so the choice is still yours to make.
Practicing “Productive Friction”
It’s important to develop the capacity to learn and work on your own. Your cognitive processes are muscles, built and maintained by grappling with problems. So, at least once a week, choose a highly challenging (even frustrating), attention-heavy activity to do without AI.
Just Deleting the AI App
This option exists, if you’re willing. At the very least, reducing access means you won’t find yourself reaching for AI purely out of boredom.
Remember, our thoughts and opinions are processes, not products. It’s okay to slow down and wrestle with a little inconvenience and uncertainty. It can teach us to live and learn more freely than we ever will confined under the glittering metal hand of AI.
External References
- Chimakonam, A. E. (2024). The ethics at the intersection of artificial intelligence and transhumanism: a personhood-based approach. Data & Policy, 6, e61. Retrieved from: https://www.cambridge.org/core/journals/data-and-policy/article/ethics-at-the-intersection-of-artificial-intelligence-and-transhumanism-a-personhoodbased-approach/90EF77356BB3DFF3357530F6C4E67EC0
- Chok, I., & Yang, C. (2026, March 12). Mental health professionals warn of risks from relying on AI chatbots for emotional support. CNA. Retrieved from: https://www.channelnewsasia.com/singapore/mental-health-professionals-risks-artificial-intelligence-ai-chatbots-emotional-support-5986726
- Glassman, S. (2026, January 16). 8 Tips for Managing AI Dependence. Psychology Today. Retrieved from: https://www.psychologytoday.com/us/blog/living-your-best-life/202601/8-tips-for-managing-ai-dependence
- Maral, S., Naycı, N., Bilmez, H. et al. (2025). Problematic ChatGPT Use Scale: AI-Human Collaboration or Unraveling the Dark Side of ChatGPT. International Journal of Mental Health and Addiction. Retrieved from: https://link.springer.com/article/10.1007/s11469-025-01509-y
- Nosta, J. (2026, March 10). You Complete Me? How AI Hijacks the Journey of Thought. Psychology Today. Retrieved from: https://www.psychologytoday.com/sg/blog/the-digital-self/202603/you-complete-me-how-ai-hijacks-the-journey-of-thought
- Paulaonuoha. (2025, August 30). Beyond the Hype: The Dark Side of AI Dependency. Medium. Retrieved from: https://medium.com/@paulaonuoha07/beyond-the-hype-the-dark-side-of-ai-dependency-cb349d6e4e0
- Wei, M. (2024, October 7). Spending Too Much Time With AI Could Worsen Social Skills. Psychology Today. Retrieved from: https://www.psychologytoday.com/us/blog/urban-survival/202410/spending-too-much-time-with-ai-could-worsen-social-skills
