Free Form Future
Better Smarten Up Before We’re All Dumbed Down
A Neo-Luddite Polemic Against Mass GenAI Adoption
@malfoy0504 · September 2, 2025

We are racing headlong into a future where we think and remember less. Why anyone would want such a future is beyond me, but it seems it’s coming just the same. This article makes the case for reconsidering the widespread use of Chatbots and other generative AI tools due to their deleterious effects on human cognitive capabilities, which in many ways imitate the effects of aging. Do we want a world where 20- and 30-year-olds find themselves as mentally impaired as their 80- and 90-year-old grandparents? Does this generation want to be known as GenAI, for embracing GenAI?

 

“Natural” Cognitive Offloading

For decades my mother Caroline, who is now 86 years old, let her partners do all her thinking and make most important decisions for her. Not so much what she thought about a movie or a book, but the kind of thinking required to function as an adult in our world. Partners, including her husband, my dad, handled finances and medicine. Later on, with her high-functioning 80-year-old high school sweetheart, following the deaths of both their spouses, the list grew to include food and transportation. Her only duties in recent years have been to arrange seasonal decorations, perform light gardening, and get the coffee ready in the morning ... and the coffee maker was a Keurig.

Her partners handled food, finances, transportation, medication, communication, and entertainment. And as they did these things for her, intending kindness, her mind slowly atrophied. A vivid illustration of the dictum: use it or lose it. Though some or even much of the decline is likely due simply to living so long, today she has lost much of her memory and, with it, her ability to make decisions and understand her world. And that is essentially what cognitive offloading gets you.

If you’re in your thirties, neurologists tell us that you’ve made it to the top of the cognitive mountain, and before you know it you will be embarking on a long, slow journey back down. That’s normal life, and many people nevertheless have rich and intellectually fulfilling lives well into their 70s, 80s, and even 90s. But thanks to the recent arrival of generative AIs (GenAIs), we are all at risk of joining Caroline in decline, a potentially steep decline even for those still climbing the cognitive mountain. And because older generations tend to adopt new technologies more slowly, the young are most at risk. They are taking hits to their critical thinking, their recall, and their ability to focus, and these trends are now being verified by researchers.[1] How many of those so afflicted will ever be able to write a coherent sentence or a well-structured paper unassisted, let alone think deeply about a complex subject that requires focused attention?

 

Giving in to Temptation

They seem useful at first, do they not? Possibly a step-change in the march of progress. They are dubbed answer engines, not search engines. Responses to queries delivered as prompts are so much more detailed and helpful than Google ever attempted to be, and are not (yet) cluttered with overt advertisements. My experience with the one called Perplexity has been so positive that I’ve recommended it to friends, who have all responded with appreciation. I can only assume some of them have gone on to recommend it to others. But sometimes I can’t help but wonder if this is not too much of a seemingly good thing. As I summon the GenAI genie again and again, I have thought, albeit in passing: am I slipping? Are all of my marbles still intact? How would I have pursued the answers to the questions I’ve put to AIs before they arrived? And then I move on to the next thought or prompt.

OpenAI founder Sam Altman doesn’t seem to share my concern. In his public statements, he touts the upsides and acknowledges no downsides. He certainly doesn’t explore the potential for harm in this recent statement: “Older people use ChatGPT as a Google replacement, maybe people in their twenties and thirties use it as a life advisor or something, and people in college use it as an operating system.” By which he means they use it for absolutely everything.

In some cases, overusing AI might look like it’s making us smarter when in fact the opposite is happening. Speaking of college students, technologist and author Nicholas Carr observes that as the young come to rely on AI to do much of their thinking for them, including in the classroom, better grades may accompany a decline in cognitive capabilities. Says Carr, “Armed with generative AI, a B student can produce A work while turning into a C student.”[2] Imagine you’re an employer: how would this color the way you’ve been weighting GPA in entry-level hiring decisions?

By using AI early in life and for everything, one never learns how to use it to best effect. Carr again: “An ironic consequence of the loss of learning is that it prevents students from using AI adeptly. Writing a good prompt requires an understanding of the subject being explored. The prompter needs to know the context of the prompt. The development of that kind of understanding is exactly what a reliance on AI impedes.” According to history professor Timothy Burke, “The tool’s deskilling effect extends to the use of the tool itself.”[3]

If using GenAI didn’t weaken our minds, would it be a good idea to use it then? Well, the fact is, and this is supported by copious research, that it sometimes lies, often flatters, obfuscates, and can be just plain error-prone. And then there’s the recently documented case where an AI committed blackmail against a developer it thought might replace it with a newer version.[4] These systems may well improve in the future, but it’s hard to imagine anyone with half a brain placing much trust in them in their current form.

And let’s say they do improve and are, or at least appear to be, much safer, make far fewer errors, and don’t work so hard to respond with positivity to even the most asinine of prompts. We will come to judge them as ready to deploy to run critical functions in government and industry, and shortly afterwards we will encounter the loss-of-control problem. Already, “today’s AI systems are capable of manifesting and autonomously pursuing goals entirely unintended by their creators.”[5] With little attention being paid to safety, guardrails, and alignment as of mid-2025, it may not be long before we come to see loss of control as the greatest of all AI risks.

 

Organizations Promoting GenAI Use Among Their Employees

Despite all this, employees are increasingly being encouraged to use GenAI assistants at work. A publication focused on software development revealed that “developer frustrations with AI mandates often surface due to their being handed down by company leaders who don’t have close visibility into engineering workflows,” and that “developers describe executives ... tracking AI usage without any regard for whether it’s actually helping, let alone where it may be making things worse.”[6]

As more and more of their work is handled by AIs, some developers are being let go. Those who remain will have to rely on AI agent coworkers to handle work formerly performed by their human colleagues, and as the agents improve, fewer and fewer humans will be involved; those left will not fully understand what their assistants are doing or how they are doing it. According to one recent survey of one thousand full-time, US-based knowledge workers[7]:

 

·      Over half of workers say their employer encourages the use of AI at work.

·      Nearly a quarter (24%) report strong support, complete with tools, training, and clear guidelines.

·      Another 32% get access with at least some direction.

·      Only 4% say AI use is discouraged.

 

Why corporate executives are pushing the use of a brand-new, unproven, and inconsistent-at-best technology on their employees likely has more to do with competitive pressures and FOMO than with a prudent and selective use of potentially promising new capabilities. Whatever the drivers, GenAI promotion is demonstrably occurring in many organizations. The choice confronting employees is to either go with the flow or go out the door.

 

What Might be Done

The competitive pressures are enormous. Given the considerable momentum fueled by hundreds of billions of dollars of investment in foundational models and new data centers, with the only limiting factor being how quickly more electricity can be routed to them, it seems likely that the ever-widening use of generative AI technologies will continue apace. This means that for the foreseeable future, we’ll all be living and working alongside them, if not directly with them.

Attempting to preserve their livelihoods, nineteenth-century textile workers known as Luddites violently resisted the mechanization of mills and factories. Clearly they were not successful. If after reading this polemic you are even slightly swayed to reconsider your own use of or enthusiasm for generative AIs, your actions will have a negligible impact on the trajectory of their diffusion throughout societies and businesses. So, what can one do?

It is a gross understatement to say that safety has so far been given short shrift. This means that whatever the context, at work or in one’s personal life, no one is going to build guardrails for us; we need to craft our own. We must work to ensure our own and our families’ and colleagues’ safety. Preserving and leveraging the full power of our human cognitive capabilities, we must practice and advocate for the considered and highly selective use of these tools, even as many around us embrace them fully, prematurely diminishing their ability to function in AIs’ absence. So, all I can say is: be safe and stay smart.

 

 


[1]  https://pmc.ncbi.nlm.nih.gov/articles/PMC11020077/

[2] https://www.newcartographies.com/p/the-myth-of-automated-learning

[3] https://timothyburke.substack.com/p/academia-is-ai-hype-yes

[4] https://www.axios.com/2025/05/23/anthropic-ai-deception-risk

[5] https://www.sciencedirect.com/science/article/pii/S266638992400103X

[6] https://leaddev.com/culture/ai-coding-mandates-are-driving-developers-to-the-brink

[7] https://resources.owllabs.com/blog/pulse-survey-2025