
The Free Lancer Podcast: Surviving Publishing Without Burning Out or Selling Out
The Free Lancer is a show discussing all things publishing through a queer, social justice lens. It’s for authors and editors navigating the industry in a heart-centered way—one that prioritizes care, relationship-building, and sustainable work practices over the relentless grind of capitalism, tech-bro culture, and AI promises. It explores how author and editor businesses can survive and thrive while also transforming the industry to fight for a better world.
Season Two kicks off on September 4, with new episodes on the first Thursday of the month. Subscribe now and join the conversation.
Into the Generative AI Trenches
Is generative AI making the publishing world a better place? Or is it leaving a bad taste?
So far, it's pretty clear to me there's a lot of hype, stolen data, angry authors, some benefits, and a truckload of problems. But these tools do have, and will continue to have, some important uses!
In this episode, I take you through my AI decision-making and policies, and narrate the damage (and limited positives) I see happening at the moment.
Links:
Draftsmith: https://draftsmith.ai/
The Mechanical Turk: https://www.mturk.com/
Human-in-the-loop: https://levity.ai/blog/human-in-the-loop
Expert-in-the-lead: https://www.transl8r.eu/xitl/
Gartner Hype Cycles: https://en.wikipedia.org/wiki/Gartner_hype_cycle
💎 Need a human-edited transcript? Here you go: https://thefreelancer.buzzsprout.com/
🥰 Check out my newsletter for stories, opinions, and tips on how to survive publishing without exploiting yourself or others: https://thenarrativecraft.kit.com/e8debf1dd5
💎 My Learning Center: https://www.thenarrativecraft.com/learning-center
🥰 And my website: https://www.thenarrativecraft.com/
Hello, and welcome, folks, to The Free Lancer. This podcast discusses all things publishing through a social justice lens. My name is Andy Hodges. I'm a cultural anthropologist, fiction writer, and editor, and I own a book editing business called The Narrative Craft. Please subscribe to the show so you never miss an episode, and sign up for my fortnightly newsletter in the show notes if you want more regular updates from me.
Now, today's episode is all about generative AI, and how it's showing up in my little corner of publishing. That's the editorial world. I'm going to offer you a critical perspective on generative AI technologies in editing. Now, it's fair to say, some version of these technologies will clearly have a big role in publishing. But until they're regulated, it's a Wild West out there.
With technologies such as ChatGPT, if I input clients' manuscripts into these technologies, I'd be putting myself and my clients at risk. And even if I weren't, using technologies like ChatGPT would be an ethical issue for me because these technologies are putting artists, writers, translators, and editors out of business while also delivering a subpar product. So I'm gonna start off with my position statement, and it goes something a little bit like this.
[Position statement]: Generative AI technologies trained on datasets acquired without the authors' consent are theft. If you use technologies that do this to generate or edit text, or to generate artwork for your books or courses, then I will not work with you, and I will not take your course.
And this actually makes me a little bit sad, as it's one reason why I haven't taken some of the courses available in the editing world right now. Before I go and dig into this in lots more detail, I also want to share with you a skeleton version of my AI policy. And I'm sharing this so you can consider it in your own work as an author or an editor.
[Skeleton AI Policy]: So, point one: AI-generated texts. I do not work on text generated by AI. I work only on human-generated texts.
Point number two: AI-assisted texts. I do not work on texts that have used ethically dubious products trained on stolen datasets to assist with copy editing or proofreading. Now, I currently accept texts that have used responsible AI products. One example is Draftsmith.
I'll put a link to that in the show notes. But if you're using these products for copy editing or proofreading, I advise caution, because using such tools can flatten voice, and it can lead to incorrect or unfaithful edits—edits that don't match the original intention. And I'll talk a little bit more about this later on. And by choosing not to work with AI-generated text, I'm aligning myself with literary agents and most gatekeepers in traditional publishing, be it in academic or fiction publishing.
I'm fully aware that some self-publishing authors are taking a more AI-positive position right now, as are some editors, but I'm not interested in working with those authors.
And a few more points while we're at it. What AI policy do I have for my own content? It's pretty simple.
[My AI content policy:] No individual or business is allowed to use any of my website content or confidentially shared content, including contracts, to train AI models. And this works both ways.
So for client content, I promise I will never put any of your text into a generative AI product, not even a responsible product such as Draftsmith, at least for now.
And finally, AI and the environment. So in my spare time, I will personally experiment with tools like ChatGPT because it's important to keep up with what's happening in publishing and how these tools are changing publishing. But I will not use these tools frequently because of the environmental damage linked to them, and I will definitely not use these tools to create images.
Now I want to talk a little bit about hype-mongers because there's a lot of hype surrounding these technologies.
So a few editors, and many people who work in tech, have been heavily promoting or encouraging use of generative AI technologies. And one of the things that I've noticed quite often amongst these people is this all-or-nothing thinking. If you're not on the hype train with them, then you're a Luddite or you're scared.
Now let me give you one example. I recently bought an electric car. Didn't realize it at the time, but I'm one of the first four percent or so of people in the UK to have one. And the infrastructure isn't quite there yet. So here, I'm a fairly early adopter— not so with ChatGPT. I've experimented with it, but I've found hardly any use for it in my business. And I'll tell you what use I have found for it later on.
Point two. In this kind of hype environment (it's called a hype cycle, and I'll put a link to that in the show notes as well), editors, writers, and translators do not have the same interests as the companies promoting these technologies. In their current form, I think tools like ChatGPT are an assault on critical thinking and on creatives. And I would link these tools to the rise of an oligarchy and to big shifts in society and the tech industry.
Here's one example. Big companies often promote human-in-the-loop as a concept. This is the idea that a human and generative AI tool will work together in some kind of workflow, and the human will edit or work with the content that the machine, the generative AI produces. So this has been a useful concept in lots of fields, but what's going on there with it? It's really emphasizing the technology.
Like, the human is part of some kind of cybernetic loop here rather than taking the lead. This reminds me of concepts like the Mechanical Turk. If you don't know what that is, Google it. You can find lots of sources on how big businesses such as Amazon have been using people to do almost mechanical tasks over the past twenty years, often tasks that relate to language. So if we take something like human-in-the-loop, can we flip it around?
Can we talk about machine-in-the-loop, where the human takes the lead, holds the power, and chooses to use these technologies in their own time and in their own way? And one translator has gone beyond that and come up with the idea of the expert-in-the-lead. That's saying that human expertise should be foregrounded, not the machine's editing. And I'll put a link to that article in the show notes as well. These are all important conversations, and I think it's interesting.
A lot of these conversations are happening or have happened in translation over the past few years, a field that seems to be one step ahead of editing with AI hype and exploitation. But some of these conversations are now seeping into editing as well, so watch out for them. I think we editors have a lot to learn from our translator friends.
Point three. Generative AI output often flatters to deceive.
It's currently not very good at editing. It might be one day. Who knows? But not everybody is aware of this. If you don't know what good editing looks like, then it could look convincing enough.
And I think that's interesting here. I noticed, through conversations with editor friends and discussions on editorial forums and so on, that the bottom half of the market for self-published copy editing and academic editing (things like thesis editing) disappeared overnight. But I genuinely believe that this is a really good thing, because lots of those people didn't value editing very highly. If somebody doesn't value editing very highly and doesn't know what good editing looks like, then they may well be happy with generative AI outputs. But these kinds of people would be the ones more likely to argue over edits, insist on incorrect edits, and so on.
So I think it's a really good thing that they're not in the pool anymore, and this matches my experience over the last couple of years. When ChatGPT came out, I did notice a slight decline in the number of requests for editing at first. But since then, there's been a rebound, and the average quality of the inquiries I receive has increased. So I feel pretty good about that.
Now I want to talk a little bit about how generative AI is showing up in my client work.
First things first, I've changed my onboarding procedure. I make sure I ask about the use of generative AI tools. It's so important. And through those conversations, some authors have disclosed using these tools for things such as copy editing and plot summarizing, because these tools can be quite good at summarizing. Now, you have to be really careful here if you're an author using a tool like ChatGPT, especially if ChatGPT is then using your input to train its models.
There's a real copyright gray area here because it's not regulated properly yet. It's like the Wild West, and this can cause friction with traditional publishers. If you're self-publishing, you can do a lot of these things if you want. There is still a small chance that you will have issues further down the line, for instance with plagiarism. But if we just look at traditional publishing, generally speaking, editors, gatekeepers, etc., won't touch AI-doctored texts that have used services like ChatGPT.
So the main way it's shown up in my client work is through conversations about it with clients, through finding out whether and how they're using it. And then sense-checking that against my AI policy and making sure that I'm working with the kinds of authors, the kinds of people who are on the same page as me about this.
And there's been one other way in which it's shown up in my client work, and this was a one off. I sent a contract to a person. Fifteen minutes later, I received a detailed critique of that contract.
And it was clear to me as soon as I read it that this person had put the contract through something like ChatGPT and asked for feedback. In an instant, any trust that had been built up with that client was gone. So that just meant, oof, I cannot work with this client. That's only happened once. It's a one-off, and there's all sorts of reasons why a person might have done that.
They might have been distrusting, unsure, new to the whole publishing industry, and so on. So I don't have any bad feeling towards that person for having done that. But it was an interesting experience, and it did make me think about what guidelines I need to have in place, because on my website, I state that you cannot take text from my website or my emails and use it for that kind of purpose. So if these tools are not that great at the moment for writing or editing in fiction or the creative humanities, which are the two areas I work in, then what are they good for?
And what are the specific problems that come up in these areas? Well, fiction, especially, depends on unique voice. These generative AI tools give you a mash-up of what's already out there. ChatGPT, for example, could in principle create a carbon copy of, say, a completely new kind of novel that becomes a new trend. But I do not believe it can do the human creative work of creating something radically new that strikes a chord with people.
And there's one translator who uses this lovely phrase: "It takes a human to reach a human." And I firmly believe that.
Next up, it can create texts that look good on the surface. So it could be used for tidying up texts. For instance, if you write in a language that isn't your first one.
And this is the main way in which I am using it in my business right now: not for any forward-facing work, but for tidying up emails I write to clients in German or Croatian, which are the two main languages I speak apart from English. But tools like ChatGPT make mistakes here as well. So if you don't already have a knowledge of those languages, this use wouldn't actually be that useful, because you wouldn't know where it's slipping up and where it isn't. Now, in the creative humanities, and this comes out of discussions with academic colleagues and friends, I believe that the use of ChatGPT and similar technologies could lead to a two-tier system.
If you look at any humanities text, each one has a communicative aspect to it.
It's about communicating research, often to other academics, sometimes to the public as well. And it has a symbolic aspect to it. The symbolic aspect is all about author style and voice and these other considerations. And when I'm discussing this symbolic aspect, I always use the example of Pierre Bourdieu. He's a really famous sociologist.
I don't know if you know him, but if you've read any single text by him, you will know it was written by Pierre Bourdieu, just because he writes in a really, really unique way. Now, these ChatGPT-style tools can often help with the communicative aspect, but they tend to flatten voice. And it's fairly easy for me to notice when a text has been put through ChatGPT, because there's this kind of bland voice to it and the use of certain words and phrases. There's been a big discussion about it defaulting to US English and em dashes and stuff like that. And I find those conversations, which are nonstop on social media, quite boring now, but that's another example.
It's a homogenizer, and homogenizing isn't always good. And it's not good if you're trying to create, like, a unique style. So I believe, and I think I see this happening already, you could end up with a two-tier situation where some users, especially those who aren't fluent in a language (say, they're using academic English as a second or third language), are using these technologies to tidy up their texts.
Whereas you have first-language English speakers and those from more elite university backgrounds really heavily invested in their own writing and in the symbolic aspects of what they're doing as well as the communicative ones. My observation is that these kinds of elite writers are flat-out refusing to use ChatGPT and similar technologies. I've noticed this across my whole academic network, and I believe you could end up with kind of a two-tier situation here, which isn't good at the end of the day. So that's a kind of snapshot of what I see going on in both fiction and academia right now. I could talk about this a lot more, but I know that I will be coming back to this topic in future episodes.
So I'll just end with a brief summary of what I think is happening. First of all, one thing that I think is missing is a much more critical stance among some people in the editorial community towards the use of these tools. The courses I have seen so far, and the conversations I have seen taking place, generally seem to be either on the fence or mildly positive towards the use of generative AI. I think there's a space there, an important need for a more critical perspective, and that's what I'm trying to achieve here. How do I think these tools are going to transform editing?
Well, I've already talked a little bit about two-tier systems, and I think that will probably happen in fiction as well. I know that some publishers are even experimenting with AI-generated novels. And there's an argument that these tools are more of a stylist than an expert; they're not very good at providing expertise. There's the so-called ChatGPT-splaining, like mansplaining.
It tends to confidently give the prompter an answer to a question, but it doesn't necessarily give the right answer. However, it's quite good at, you know, recasting texts in a certain style. So there's an argument that it will be used for certain kinds of fiction. And, you know, romance is the genre that sells the most, and in some ways, it often has a more prescriptive story structure.
The more prescriptive the story structure, the more algorithmic it is, and the greater the potential use of algorithmic technologies like ChatGPT and so on. I believe that in the editing world, writers will still be looking for human judgment, and there'll be a shift towards more book coaching, consulting, and developmental editing services. And there'll still be room for, you know, heavy copy editing and stylistic editing, which is incredibly human and subjective. I've already noticed this a little bit in my own work as well. I've seen increased demand for Zoom sessions and for book coaching over developmental editing services where you just deliver a report.
More widely, I think generative AI will lead to a modest decrease in the number of editors. In adjacent fields like translation, that has been happening already. And I think it will lead to a substantial decrease specifically in the number of proofreaders, because significant parts of copy editing and proofreading will likely become automated. But I think that's a good thing, because it frees editors up to focus on higher-level issues with a text. And this has happened several times before in publishing, with innovations like word processors, macros, etcetera.
And that's it for now. Those are the main things that I think will happen in the editorial world. I'm really keen to hear your perspective too. So let me know what you think about my AI policy and the decisions I've made. Are you much more AI positive?
Are you using AI in your business in new and exciting ways? Or have you taken a more critical and distanced perspective on the technology like me?
So thank you, folks, for listening. Don't forget to take a look at the show notes again for all the useful links. If you want to keep in touch, sign up for my newsletter.
And please, please, please subscribe, rate, and review. Until next time.