The Question I Get Asked Most (And My Honest Answer)
On AI ethics, mob mentality, and keeping your eyes on your own paper
I spoke at the Women in Publishing Summit this week, and about three minutes into the Q&A, someone asked The Question. You know the one. It comes up at nearly every conference, every panel, every workshop I teach. Sometimes it’s phrased politely, sometimes it’s loaded with suspicion, but it always boils down to the same thing:
“What’s your stance on using AI ethically?”
I took a breath. Not because I don’t have an answer—I have a lot of thoughts on this—but because the question deserves more than a soundbite. It deserves honesty, nuance, and the acknowledgment that reasonable people can disagree.
So here’s what I said, more or less. And here’s what I want you to hear.
I’m Not Here to Persuade You
Let me be clear about something right up front: it’s not my job to convince you to use AI. That’s never been what Author Automations is about.
My job is to educate. To show you what’s possible, explain how things work, and give you enough information to make your own informed decision. Whether you decide AI is a useful tool for your author business or you decide it’s not for you—both of those are valid choices. I’ll respect either one.
What I won’t do is pretend there aren’t real concerns. There are. And we, as humans and entrepreneurs and members of creative communities, need to pay attention to them.
Environmental impact. Training large language models requires massive computational resources, which means massive energy consumption. Data centers aren’t running on good vibes and optimism. The environmental footprint of AI is real, and it’s worth understanding what you’re contributing to when you use these tools. Some companies are more transparent about this than others. Some are investing heavily in renewable energy; some are not. This matters.
Ethical considerations. How were these models trained? What data did they learn from? Were creators compensated—or even informed—when their work was used? These aren’t hypothetical philosophy questions. They’re active legal battles and ongoing industry debates. The answers aren’t settled, and they vary significantly between different AI providers and models.
Labor displacement. Are these tools replacing jobs? In some cases, yes. In some cases, they’re augmenting jobs or shifting what work looks like. The impact isn’t uniform across industries or roles, and pretending otherwise doesn’t help anyone.
I’m not going to stand here (or sit here, typing in my pajamas with my third cup of coffee) and tell you these concerns don’t matter. They do. And you get to weigh them against the potential benefits and decide what’s right for you, your business, and your values.
Eyes on Your Own Paper
Now here’s where I’m going to get a little spicy.
I’ve watched colleagues—people I respect, people with real platforms and influence—use their social media presence to incite mobs against authors they suspect of using generative AI. One-star review brigades. Public callouts. Accusations based on vibes and writing style analysis. See that em-dash up there? Down there? I’ve prayed at the altars of the em-dash since the second grade, friends.
This mob mentality horrifies me.
Not because I think authors owe the world a detailed inventory of their creative process. They don’t. There’s a difference between secret—which carries the whiff of shame, like you’re hiding something wrong—and private, which just means how I run my business isn’t yours to audit. I don’t ask other authors which dictation software they use, whether they hire ghostwriters, or how much their developmental editor rewrote. Those are private business decisions, not public confessions.
What horrifies me is that we’ve created an environment where suspicion alone is enough to tank someone’s career. Where the court of public opinion moves faster than anyone can defend themselves. Where the energy we could be spending on our own work is instead being poured into policing other people’s creative processes.
Here’s a radical thought: the readers get to decide.
Readers are smart. They know what they like. They can tell when something feels off, when a book doesn’t resonate, when the voice feels hollow. They vote with their wallets and their reviews and their recommendations to friends. That’s how it’s always worked. That’s how it should work.
If a book written with AI assistance delights readers, who exactly are we protecting by destroying its rating? If a book written entirely by human hands doesn’t connect with its audience, no amount of “authentically human” marketing is going to save it.
The market will sort this out. It always does. And in the meantime, I’d rather see authors focused on writing great books than building dossiers on their competitors.
Keep your eyes on your own paper. Your business, your craft, your readers. That’s where your energy belongs.
AI Is Always Your Choice
In this newsletter, in the courses I teach, in any software I build—AI is optional. Always.
I want to make this crystal clear because I know some of you are here specifically for the non-AI automation content. Good news: most of what I teach doesn’t require AI at all. The vast majority of automations in Make.com, Zapier, and n8n work beautifully without a single AI module in sight.
You can automate your newsletter subscriber tagging without AI. You can sync your sales data across platforms without AI. You can create task management workflows and calendar integrations and file organization systems—all without touching anything that resembles artificial intelligence.
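To make the subscriber-tagging example concrete: that kind of automation is just deterministic rules, no AI anywhere. Here's a toy sketch in Python (the field names and tag values are invented for illustration, not from any real platform):

```python
# Rule-based subscriber tagging: plain if/then logic, no AI involved.
# A visual builder like Make.com, Zapier, or n8n expresses the same
# thing with filters and router branches instead of code.
def tag_subscriber(signup_source: str, purchases: list[str]) -> list[str]:
    tags = []
    if signup_source == "free-magnet":
        tags.append("lead")          # came in through a freebie
    if purchases:
        tags.append("customer")      # has bought anything at all
    if "automation-course" in purchases:
        tags.append("course-student")  # bought a specific product
    return tags
```

A subscriber who grabbed the freebie and later bought the course would come back tagged `["lead", "customer", "course-student"]` — every branch is a rule you wrote, nothing generated.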
That said, there are decided advantages when you do incorporate AI into certain workflows. Speed, flexibility, the ability to handle unstructured data, personalization at scale. I’ll always be honest about when AI makes a workflow dramatically better and when it’s just adding complexity for the sake of being fancy.
But the choice is yours. Every single time.
Do Your Homework (But Skip the Hysteria)
If you’re trying to figure out where you stand on AI, here’s my advice: research, learn, understand, then decide.
Read the Terms of Service. Actually read them. I know, I know—they’re long and boring and written by lawyers who get paid by the word. But this is where you’ll find out what happens to your data, whether your inputs are used for training, and what rights you’re granting when you use a tool. Different platforms have wildly different policies. Assuming they’re all the same is a mistake.
For example, some AI tools explicitly state they won’t train on your inputs. Others reserve that right unless you opt out. Some have enterprise tiers with different data handling than their free versions. Some have changed their policies multiple times in the past year alone. The only way to know what you’re agreeing to is to actually look. I keep a running document of the tools I use and their relevant policy details—boring but necessary.
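If you like keeping that running document structured, here's one lightweight way to do it. Everything in this sketch is made up — the tool names, the policy values, the 90-day review window — it only shows the shape of the record, which is: what did the ToS say, and when did you last actually read it?

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ToolPolicy:
    """One entry in a personal log of AI-tool data policies."""
    name: str                 # hypothetical tool name
    trains_on_inputs: bool    # does the ToS allow training on your inputs?
    opt_out_available: bool   # can you opt out of training?
    last_checked: date        # policies change; note when you last read the ToS

# Entirely made-up entries, just to illustrate the record.
policy_log = [
    ToolPolicy("ExampleDraftBot", trains_on_inputs=True,
               opt_out_available=True, last_checked=date(2024, 5, 1)),
    ToolPolicy("ExampleSummarizer", trains_on_inputs=False,
               opt_out_available=False, last_checked=date(2024, 3, 12)),
]

def needs_review(entry: ToolPolicy, today: date, max_age_days: int = 90) -> bool:
    """Flag entries whose ToS hasn't been re-read recently."""
    return (today - entry.last_checked).days > max_age_days

stale = [e.name for e in policy_log if needs_review(e, date(2024, 9, 1))]
```

The `needs_review` check is the part that matters: a policy you read a year ago may not be the policy you're operating under today.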
Subscribe to newsletters that educate. (Hi, hello, you’re already doing this one.) Find sources that explain how things work without pushing you toward a predetermined conclusion. Be suspicious of anyone who’s absolutely certain they have all the answers—the landscape is changing too fast for that kind of confidence.
Avoid the hype vortex. There are people who want you to believe AI will solve every problem and revolutionize your entire existence. There are people who want you to believe AI is a moral catastrophe that will destroy creativity forever. Both camps are selling something, and neither is giving you the full picture. The truth is messier and more boring: it’s a tool. It does some things well and some things poorly. The end.
Make your decisions from a place of information, not fear. Not hype. Not whatever your Facebook group is panicking about this week.
Humans in the Loop, Always
I’m always going to be an advocate for humans doing human jobs. That’s not a contradiction to using AI—it’s how I think AI should be used.
Here’s a real example from my own business. We used to have a team member spending significant hours every week creating social media posts. Researching content, writing captions, formatting for different platforms, scheduling—the whole production. It was tedious, time-consuming work.
We automated the creation process. AI helps draft posts, pulls from our content library, formats appropriately for each platform. The workflow handles the mechanical parts that used to eat up hours.
Did we eliminate that team member’s job? Nope. We shifted their hours to engagement. Now instead of writing posts, they’re answering questions from real humans. They’re proactively chatting with our community. They’re building relationships and solving problems and doing the work that actually requires a human being.
The posts still get reviewed by a human before they go out, by the way. We’re not just firing content into the void and hoping for the best. The automation creates the draft, suggests optimal posting times, handles the formatting—but a real person makes the final call. That’s the loop. That’s where the human judgment lives.
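That review gate is the whole design. A minimal sketch of the idea, with stand-in functions (none of these names come from a real platform, and the "AI step" here is just a placeholder):

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A social post produced by the automated part of the workflow."""
    text: str
    platform: str
    approved: bool = False   # flipped only by a human reviewer

def generate_draft(topic: str, platform: str) -> Draft:
    # Stand-in for the automated step: in a real workflow this would
    # call a drafting service and apply per-platform formatting.
    return Draft(text=f"[{platform}] draft about {topic}", platform=platform)

def human_review(draft: Draft, approve: bool) -> Draft:
    """The human-in-the-loop gate: a person makes the final call."""
    draft.approved = approve
    return draft

def publish(draft: Draft) -> str:
    # Structurally impossible to skip the human: unreviewed drafts
    # never go out.
    if not draft.approved:
        raise ValueError("Refusing to publish an unreviewed draft")
    return f"published to {draft.platform}: {draft.text}"
```

The point of structuring it this way: `publish` literally cannot run without a human having set `approved`. The machine drafts; the person decides.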
That’s the goal. Automate the mechanical so humans can do the meaningful.
Nobody wants AI slop—bots talking to bots, engagement pods where no actual person is reading anything, comments generated by machines responding to posts generated by other machines. That’s not a community; that’s a very sad theater production with no audience.
The human stays in the loop. The human does the human work. The robot handles the robot work. Everybody’s happier.
Why I’m Not Worried About AI Taking My Job
People ask me this too, usually with genuine concern in their voices. Aren’t you worried AI will make what you do obsolete?
Honestly? No.
Can AI build automations? Sort of, sometimes, with a lot of hand-holding and cleanup. Can AI write tutorials? Technically, though the results tend toward the bland and generic. Can AI teach workshops and answer questions and help authors figure out what they actually need?
Can AI drink four cups of coffee before noon and be strategically snarky about technology and make Taylor Swift references that actually land?
No. No, it cannot.
The things that make my work mine—the voice, the perspective, the ability to read a room and adjust, the relationships built over years of showing up—those aren’t replicable. And the same is true for you. Your voice. Your stories. Your particular way of seeing the world and translating it into words.
AI is a tool. I use it. In some cases, I specifically don’t use it because of my own ethical stances on certain applications. But at the end of the day, it exists so I can get back to the thing I actually care about: storytelling.
Let the bots do the boring, so I can do the brilliant. That sounds like an excellent tagline, if I do say so myself.
The Bottom Line
Use AI or don’t. That’s your call, and I’ll support you either way.
But whatever you decide, make it an informed decision. Read the fine print. Understand what you’re trading and what you’re gaining. Stay curious, stay critical, and for the love of all that is holy, stop one-star reviewing people based on suspicion.
We’re all just trying to tell stories and build sustainable businesses. Let’s act like it.
Got thoughts on this? Disagree with me? Agree enthusiastically? Hit reply—I read every response, and this is exactly the kind of conversation I want to be having.
Chelle


