Should I start writing with AI?
Why I'm so resistant to merging myself with the machine
Last week, my friend (and arch nemesis on twitter) Stew Fortier released a new AI-first text editor called Type. Frankly, it looks dope. I have zero doubt that if I started using this, it would save me oodles of time and make me more prolific as a writer. From there, my business would probably grow faster as well. Pretty compelling. On a related note, I've officially entered the stage of daily publishing where it feels like a grind. If any of this appears easy or convenient from an outside vantage, please let me assure you it’s been the opposite. This shit is extremely difficult. And part of me is saying, "Bro, be compassionate to yourself. Make your life easier. Use the AI."
But look, I gotta be real with you. This stuff scares the shit outta me. It feels spiritually perilous in a way that's hard to articulate. Not Stew's new app, necessarily, but this entire world of LLMs and AI writing and chatbots that are eerily veering into the territory of pseudo-sentience. I have a pit in my stomach just thinking about it all. Yet there's still that other inner voice, the one saying I'm crazy not to use this stuff, and that I'll be left behind if I don't.
Dan Shipper, another twitter homie who co-founded a cool writing collective and publication called Every, recently put out an essay about how he uses AI as a writing companion. From research to brainstorming to revision, along with multiple methods of getting unstuck, Dan's been pioneering this whole field of humans augmenting their writing capabilities with AI tools. It's been cool to watch his journey and see his output grow.
Dan's essay even has a clever opening gambit about how everyone freaked the fuck out when typewriters first hit the scene. "They'll make us less human, more impersonal," those old-timey rubes complained. And then, as always, humans and culture adapted. But, as rhetorically compelling as that argument is, AI is not analogous to typewriters. It's just not. We're entering territory that goes far beyond the introduction of new tools.
This would be a good time for me to confess something. I've been a fan of Every since its early days, and I read a decent chunk of their writing. However, for the last few months, ever since they've started publicly leaning on AI, I can't help but view every new essay with a tad more suspicion than I otherwise would. Maybe it's just me, but knowing that AI is involved plants a seed of doubt in my mind. It makes me slightly less trusting. It makes me wonder, which of these words came from a human, and which came from machines? Even though my mind says that distinction should not matter, my gut and intuition tell me it does.
One of the big themes for me over the past two years has been learning to trust myself and my intuition more. I'm learning to get out of my head and into my body. I'm learning that life tends to go a lot better when I trust my intuition over my brainy rationalizations and plans. And while my head is making all sorts of compelling arguments for why I should jump on this AI bandwagon early, my intuition and body are like "whoa, easy there tiger, this is some dangerous shit and you're not ready to grapple with the deeper implications."
What's funny is that I've been happily using AI image generators like Midjourney for months, and they don't set off my internal alarm system at all. Midjourney feels like a tool to me, one that makes my life easier and my work richer. But these AI writing tools feel categorically different and existentially perilous.
I suspect it's because I identify so strongly as a writer, and I have long-professed my desire for an internet that feels more human. Personally, I'm not worried that AI will "replace me" or make my work irrelevant. In fact, I suspect the opposite is true. The proliferation of generic writing will make my work even more valuable. The Ungated philosophy not only pulls together threads from many disparate fields, but increasingly it's drawn from my direct experience of the world. AI won't be able to replicate any of that, because it has not lived my life.
No, my fear is something much deeper. I'm scared that if I begin merging myself with the machine, letting my primary means of sensemaking and communication be influenced by something I don't understand, I'm opening Pandora's box. I know how malleable and fluid my thinking and identity are, and I don't feel comfortable injecting AI into a critical part of my system for understanding myself and the world around me. I suspect it would make me less and less human over time.
A few years ago, back when I was mired in a material reductionist worldview, I wouldn't have had any problem with that. I was constantly at war with myself and aggravated by my own humanity. I would have happily changed myself at a core level so that I could be more productive, eat less, worry less, etc. But these past few years, as I've delved deeper into emotional and spiritual work, I've increasingly come to see my own humanity, and the human condition, messy as it all is, as sacred and beautiful.
One of my vows this year is to fully tell the truth. In both my writing and my relationships, I’m working on saying the scary thing or the shameful thing. I’m also trying to be honest about what my gut is telling me, and not rationalizing away those signals with my intellect. And despite my intellect telling me to dive into AI and use it for substantial personal gain, my gut says something is very, very wrong here. This feels like a slippery slope to hell.
Rob's Daily Invitation
Hello there. It's me, Rob, your friendly neighborhood non-AI internet friend, here to tell you about The Frontier, a membership for creatives who want to navigate this strange new world in the most human and soulful ways they can. Would you care to learn more?