Generative AI is unethical and counterproductive
AI is pervasive; you have to make a concerted effort not to use it these days. And while many people have jumped on the bandwagon willingly, others are being taken along for the ride without thinking about the implications.
Those implications are critical and I want you to at least consider them. I’m against the use of generative AI for two main reasons[i].
- It’s theft.
The large language models (LLMs) that form the basis of current generative AI tools have been trained on the pirated work of professional authors. I’m talking about all the big technologies: ChatGPT as well as the models developed by Google, Meta, Amazon and Microsoft.
We suspected this was happening, because if you asked ChatGPT to write something in the style of a published author, it would be able to do so. Maybe not very well, but with enough recognisable traits that you could see the resemblance.
But we now have proof, following a recent US court case where Meta was forced to reveal their sources. A groundbreaking article published by The Atlantic[ii] last month linked to the pirate database Meta used to steal 7.5 million books. It also revealed that Meta had investigated the option of signing licensing agreements with authors and publishers but decided that would be “incredibly slow” and “unreasonably expensive”.
Authors have not given permission for their work to be used for this purpose, nor have they been compensated.
Yes, I have skin in the game here. My novel THOROUGHLY DISENCHANTED is in the LibGen database that was used by Meta, and I’m furious about it. These are some of the world’s richest technology companies, and they have elected to steal what they need rather than pay for it. That’s unethical and illegal and just plain wrong.
- It won’t help in the long run
Generative AI is a shortcut. I love shortcuts when the end goal is more important than the journey. For example, if I could have clean sheets every day without ever having to wash them, I’m there. There’s no intrinsic value in me changing my sheets and putting them through the washing machine, and in fact it’s bad for the environment[iii].
However, many of the ways I see people using Generative AI are in moments where you will lose something by not doing it yourself. For example:
- Designers analysing research findings in a design project. See my previous post about synthesis[iv] where I argue it’s the act of sorting, not what categories you sort the data into, that’s important. The whole purpose of synthesis is to get your brain working so that you start to think about something in a way you haven’t thought about it before. Getting a computer to do that analysis for you won’t help your brain process the information, and won’t get you thinking in the way you need to in order to move forward meaningfully.
- Authors writing a synopsis, or plotting their novel, or actually doing anything. Bastardising the words of Chuck Jones[v] to apply to writers: every writer has thousands of bad stories in them, and the only way to get rid of them is to write them out. You don’t learn anything by getting someone (something) else to do the hard work for you. Please see this excellent blog post by KJ Charles[vi] for a better description of why using AI to shortcut your writing process is counterproductive.
Absolutely you can get a quicker output for a particular task by using AI, but the outcome will be worse and you will have failed to learn anything from the process. That means you stop developing, stop getting better, stop learning how to improve.
I have never used ChatGPT or any of the other generative AI tools. I downgraded my Microsoft subscription so it doesn’t include Copilot, and I switched my search engine to Ecosia to avoid the automatic AI results that Google puts at the top of every search. I am also removing myself from all Meta social media sites.
This is a key ethical dilemma for our time. Please think about the implications next time you go to use ChatGPT (or any of the others) as a shortcut.
Photo credit: Adrian Pope https://fusionchat.ai/news/meta-faces-backlash-authors-protest-in-london
[i] There’s also the fact that the outputs from generative AI aren’t very good (factually inaccurate, often incoherent and usually quite obviously not created by a human), but that’s not an argument that will stand the test of time. They will get better as they learn; it’s what they’re designed to do.
[ii] https://www.theatlantic.com/technology/archive/2025/03/libgen-meta-openai/682093/
[iii] AI also has an environmental problem, but I’m not confident enough about the facts to put that as my third reason not to use it. Here’s one article about that side of things: https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about
[iv] https://alexandraalmond.com.au/synthesis-it-doesnt-matter-how-you-categorise-your-post-it-notes/
[v] https://quotefancy.com/chuck-jones-quotes
[vi] https://kjcharleswriter.com/2024/08/19/weak-ankles-and-ai-an-extended-metaphor/