This AI Writing Tool Changed The Way I Think

It's been a year since I started writing my first book, The Five Vices of Founders. In that time, I've written over 150,000 words, outlined the book four times, and cycled through three different working titles. And although I'm proud of the progress I've made on this project over the past year, I sorely underestimated just how long it takes to write a book.

Or at least, how long it would take me to write a book.

I have a very inefficient writing process. I don't go into each piece with a plan. I don't often even have a story in mind. All I have is a vague thought, feeling, or question that gnaws at me and compels me to write.

I'm an external processor, and writing is the way that I think. Through writing, I break down abstract questions into more concrete ones. Concrete questions lead to research. Research often leads to more abstract questions...which requires more writing...which requires more research. And I move through this cycle of writing and research until I feel I have an answer – or simply a point of view – on my original question.

And if I'm lucky, the wild journey I went on to reach this conclusion for myself – the characters I met, things I learned, internal struggles I dealt with – is an interesting story. The easy part is then telling that story.

For shorter pieces like this, it usually takes me a week or two to move through that entire process of discovery and synthesis. But with the book – because every chapter is so interconnected – these cycles of divergent and convergent thought often feel like I'm trying to untangle an endless knot. Which, many times this year, has left me feeling like I have an abundance of words, but no story.

It's in those moments of darkness – when the answer to the question "is this anything?" leans towards "no" – that writers are told to "trust the process". But it's also in these moments when the allure of new tools and processes is most potent.

NotebookLM

A few months ago, Google launched NotebookLM: an AI writing product that allows users to upload documents, sources, notes, even full books into the context of a large language model and prompt it for help synthesizing information, making connections between topics, finding interesting quotes to cite, or writing entire passages for their pieces.

The product's goal is to speed up the writing process – especially for non-fiction authors, academics, and journalists. Steven Johnson, an author I admire and the writer of How We Got to Now, worked with Google to develop the application. In his blog post about NotebookLM's launch, Johnson describes the product as a "personalized AI grounded in the information you trust, designed to help you do your best thinking."

With such a glowing endorsement from a successful author whose work I respect – combined with the craving for some productive procrastination – I decided to try out the product.

Prescriptive vs. holistic technology

I started by simply pasting in my highlights from one of my favorite books, The Real World of Technology by Ursula Franklin. Instantly, NotebookLM summarized my mess of highlights and put this summary at the top of the document:

In the provided excerpts from "The Real World of Technology," Ursula Franklin, a physicist and social commentator, argues that technology is not a neutral force, but a powerful social construct that shapes society in profound ways. She explores the concept of "technology as practice" and distinguishes between holistic technologies which allow individual expression and prescriptive technologies which enforce conformity and control. Franklin examines the impact of technology on work, governance, and communication, ultimately urging for a more just and humane approach to technological development.

Along the bottom of the window was an input box that I could start using to prompt the document. Above the blank input, it had autogenerated some suggested prompts based on the contents of the document. I clicked the one that said, "What are the key distinctions between holistic and prescriptive technologies?"

NotebookLM provided me with a robust outline of both classifications. Here's an excerpt from what it spat out:

In holistic technologies, the artisan or worker controls the entire work process from beginning to end. They make situational decisions as the work progresses, drawing on their own experience and applying it to the unique situation at hand. The products of holistic technologies are one-of-a-kind and reflect the judgment and skill of the individual worker. For example, a potter making a pot will make decisions about the thickness, shape, and design as they work, relying on their experience and expertise to create a unique object.

Prescriptive technologies involve a division of labor where the work is broken down into a sequence of steps, each carried out by a different person. Control over the work process shifts from the individual worker to the one who organizes and prescribes the steps, such as a manager or boss. The work becomes standardized and the individual worker's input is limited to the execution of their specific task.

Implicit learning

I was intrigued, so I asked NotebookLM to "Write a short section of a blog post explaining the concept of implicit learning as Ursula Franklin describes it." The following is an edited version of its output:

"Implicit learning" is a type of learning that occurs organically as a byproduct of engaging in a shared activity, often alongside more explicit learning goals. This form of learning encompasses the social teaching and subtle skills acquired through observation, participation, and collaboration within a specific context. Implicit learning, while often overlooked, is crucial for developing a wide range of social, ethical, and practical skills.

In her book, The Real World of Technology, Ursula Franklin uses the analogy of learning to ski to illustrate this concept. While ski lifts efficiently transport skiers uphill, they also eliminate the opportunity to learn valuable skills through the act of climbing:

Although I had skied before coming to Canada, I was not familiar with ski lifts. It had been my experience that once one managed to get up a hillside, one had acquired enough skill to get down reasonably safely. The ski lift removes the “by the way” opportunity to learn how to climb, fall, and get up again, as well as the ongoing reality check on fitness and resources.

This "by the way" learning, acquired through the embodied experience of navigating the slopes, encompasses not only physical skills but also an understanding of one's own capabilities, limits, and resilience.

Franklin argues that production models of education and manufacturing, with their emphasis on efficiency and measurable outcomes, often fail to recognize and account for the importance of implicit learning. This oversight can have significant consequences for the development of individuals and society as a whole. As technology increasingly automates tasks and isolates individuals, we risk losing valuable opportunities for implicit learning and the development of essential social and ethical skills.

Reciprocity

Honestly, at this point I was impressed with NotebookLM. Its ability to articulate and explain concepts from this book, working from just a few pasted highlights, at what Steven Johnson describes as "the speed of thought" was almost inspirational.

So, in full earnest this time, I asked NotebookLM to "explain to me what Ursula Franklin means when she talks about reciprocity in the context of technology". Reciprocity is one of the more complex topics from The Real World of Technology, one I've been meditating on for over a year. Maybe NotebookLM could finally help me reach whatever closure my mind was yearning for.

In a long passage of nearly 600 words, NotebookLM summarized reciprocity as the "genuine, interactive give-and-take" between people that can sometimes be "disrupted, reduced or eliminated by the introduction of technology."

It gave me the example Ursula uses in the text to explain this concept: how the television, as a one-way communications device, doesn't provide a form of feedback between the speaker and the audience. And it summarized Ursula's points about the difference between reciprocity and feedback saying:

Franklin contrasts reciprocity with feedback, a technique of systems adjustment designed to improve specific performance within a given design. While feedback can enhance efficiency, it operates within pre-determined parameters and does not allow for the transformative potential of reciprocity.

In contrast to feedback's predetermined nature, reciprocity is situationally based and unpredictable. It allows for the possibility of altering initial assumptions, negotiating meaning, and generating new and unforeseen developments through a genuine exchange between participants.

None of NotebookLM's output, however, satisfied my gnawing sense that there was something deeper there. So I began prompting it more directly, trying to articulate the questions left unanswered in my mind.

Do all technologies remove reciprocity?

While Ursula Franklin argues that many technologies remove reciprocity by interposing distance between people and prioritizing one-way communication, the sources do not explicitly state that all technologies inherently remove reciprocity.

Technologies that enable people to work together on projects, share ideas, and co-create could potentially foster reciprocity. For example, collaborative writing platforms like Google Docs allow multiple users to contribute to a document simultaneously, fostering a sense of shared ownership and a back-and-forth exchange of ideas.

Of course it defended Google Docs. I pressed on, growing frustrated with the chatbot:

But in the absence of Google Docs, wouldn't writers be more likely to make time to get together in person to collaborate? Doesn't Ursula describe in-person, physical space as the ultimate environment for reciprocal interactions?

Over the course of the next 25 minutes, NotebookLM and I had a long Socratic seminar in which I would read its 500-word essays answering my questions, find an inconsistency in its argument, and prompt it with another question. In the end, it said this:

While Google Docs can facilitate real-time collaboration among writers, this discussion raises important questions about whether it might actually hinder the kind of in-person interaction that could foster deeper reciprocity.

The key is not necessarily to abandon tools like Google Docs altogether but rather to use them intentionally and mindfully, recognizing their limitations and potential to hinder deeper forms of connection.

Ultimately, the most enriching collaborative experiences are likely to involve a balance of synchronous and asynchronous modes of communication, leveraging technology to facilitate connection while also prioritizing the value of shared physical space and genuine human interaction.

What is lost when time is saved

NotebookLM is a powerful piece of technology. And I think its ability to synthesize, outline, reiterate, and search material would dramatically increase my productivity as a writer.

But I won't be adding it to my writing practice.

Because the increased efficiency that comes with using tools like NotebookLM comes at a cost. And I'm not just referring to the costs to our privacy and copyright protections that come with uploading all of our notes and writing to Google's servers.

Today, in our technocratic and entrepreneurial culture, saving time is seen as a virtue. Time, after all, is simply a proxy for money. The more time saved, the more money made. But this assumption, this obsession with temporal efficiency, makes us blind to the fact that there is a cost to saving time.

In the case of AI and NotebookLM, using the product would likely shorten the time it takes me to write my book. But in the process, I'd sacrifice many of the things Ursula Franklin urges me to protect.

Tools like NotebookLM make the writing process more prescriptive. In my first interaction with the technology, I unconsciously chose to ask an auto-generated question rather than forming one myself. And rather than writing answers to my own questions to explore my own thoughts – which I normally do when researching – I simply read back NotebookLM's output.

Although seemingly subtle and inconsequential, it's in these moments that I give up pieces of control over the writing process to something external. Which makes the process more prescriptive, manufactured, and systematic than the holistic, artisanal, and expressive exercise I want writing to be.

NotebookLM also robs me of the implicit learnings that come from unassisted writing. As much as I complain about feeling battered by waves as I search seas of material for stories, I would never want to give up that adventure. Because it's navigating those waters that makes me a better writer, a deeper thinker, and a more resilient person.

Finally, using NotebookLM would, I feel, remove reciprocity from my life. Explaining my thoughts to a computer that's better read than anyone I will ever meet is cool, but it takes away time I could spend doing the same thing with my wife or my friends. Although they're often less interested in my ideas than NotebookLM is, that's actually a bonus.

I always know I'm close to breaking a story when I can get my non-technical, non-founder friends, whose initial reactions to my diatribes are usually "here we go again," engaged. And talking to them about the things I'm interested in usually leads us to talk about the things they're interested in – which helps me learn not only about the things I care about, but also about the people I care about.

For professional writers, journalists, and academics, I think NotebookLM could help combat the increasingly aggressive and demanding publishing culture they operate in.

But, right now, I don't have those pressures. And I'm not interested in saving time if it means sacrificing the beautiful, inefficient moments while writing that make me feel human.

In defense of efficiency

I want to be clear that I’m not suggesting we pack our bags, move to the countryside, and join the Amish. Technology has its place, and it’s undeniable that tools designed to save us time are part of what makes us human. In fact, the pursuit of efficiency feels deeply human.

Trying to save time is an act of respect for the resource. It’s our way of acknowledging the thing that many of us fear the most — our own mortality. There’s nothing more human than the quiet hum of awareness that our time here is limited. The clock is always ticking, and every one of us, no matter how much we save or how fast we move, will eventually run out of time.

My point is simply that the pursuit of efficiency isn’t a virtue in itself. If we don’t pause to think about how we’re spending the time we’ve “saved,” we risk using it in ways that pull us further from what makes us human — connection, creativity, presence.

If the technology we use helps us reclaim moments to pursue those virtues, great. But if it doesn’t, if the cost of saving time is more of our humanity than we care to give up, maybe the time is better spent being slow.

Until next time,
Drew