Part 4 – The Existential Turn: When Tools Look Back

This article is Part 4 of my five-part series Fear, Faith, and the Future, an exploration of how technological change shapes who we are becoming. In this piece—The Existential Turn: When Tools Look Back—I examine why this moment in artificial intelligence feels fundamentally different from every technological shift that came before it, and why its impact is as much psychological as it is practical.


Why This Moment Feels Different

I’ve spent most of my life adjusting to new tools. Calculators, computers, search engines, smartphones—each arrived with its own learning curve, its own moral debate, its own prediction about what it would “do to society.” And each time, we eventually settled into a rhythm. The tool changed; we adapted.

But something about AI unsettles people in a different way. Not louder, not scarier—deeper. And the more I listen to the conversation, the more I’m convinced of something simple:

This moment doesn’t feel different because the technology changed. It feels different because the technology finally entered the places we once believed were ours alone…
Thought. Judgment. Meaning. Identity.

That’s the existential turn. Not fear of the tool, but fear of what it reflects.


From Extending Our Hands to Extending Ourselves

Tools have always extended human capacity.

We built machines that made us stronger.
We built engines that made us faster.
We built networks that made us more connected.
We built search engines that made us more informed.

But none of those tools extended identity. None stepped across the threshold into the internal spaces—the invisible places where we reason, interpret, imagine, and decide who we are.

AI did.
And in that quiet but profound shift, a boundary collapsed—one that humanity never had to redefine before.

This is the hinge between old tools and new ones. This is why the ground feels unsteady.


A Short Interlude: “Apparently I’ve Been Here Before”

While writing this piece, I had a small revelation. Apparently I’ve been circling this topic much longer than I realized.

A recent Brookings essay on existential risk mentioned physicist Max Tegmark. The name sounded familiar. I walked over to my bookshelf and pulled down his book Life 3.0, published in 2017—pages still crisp, a few lines underlined.

Next to it was Kai-Fu Lee’s AI Superpowers from 2018, a book I had half-forgotten I even owned. One line I’d highlighted years ago leapt off the page again: Lee quotes Elon Musk calling artificial superintelligence “the biggest risk we face as a civilization,” comparing its creation to “summoning the demon.”

Seven years later, that line feels like a time capsule—a snapshot from a world still trying to imagine what AI would become. Back then, AI was a futurist’s thought experiment.

Today it sits on my desk. It talks back. It rewrites paragraphs. It nudges me. It challenges my assumptions. It reveals blind spots I didn’t know I had.

That’s part of why this moment feels different.
Not because AI suddenly arrived —
but because we finally did.


When Tools Began Looking Back

There is a moment—and most people can point to it—when AI stopped feeling like a device and started feeling like an encounter; when it went from something that would never happen to something that is suddenly everywhere.

The moment a tool responds to you and prompts you rather than simply waiting for input, something shifts. A calculator never paused to consider your question. A search engine never adapted to your thinking. A textbook never reflected your tone, your history, your contradictions.

But AI does.
And it does so fluidly enough to trigger a very old human instinct: the instinct to see ourselves in the things that speak back —
because throughout history, we’ve made sense of mysterious forces by giving them human faces.

I used to think AI would challenge what we know.
Instead, it tests our blind spots. This is the psychological disruption—not a machine becoming intelligent, but a machine becoming reflective.


The Collapse of Boundaries

For most of human history, technology lived in compartments:

  • You used calculators in math class.
  • You used maps for travel.
  • You used textbooks for learning.
  • You used librarians to find information.
  • You used colleagues to think through ideas.

AI dissolves those compartments in an instant.

It enters writing, teaching, policy, philosophy, creativity, and planning—not as a specialized device but as a partner in thought. It slides under the door of every discipline at once.

That’s why “this time feels different.” Not because AI is more dangerous—but because it’s more intimate. It occupies the spaces we thought were structurally human.


The Human Reaction: Fear, Faith, and Projection

When technology crosses into our internal lives, the debate stops being technical. It becomes psychological.

What looks like fear of AI is often fear of:

  • losing narrative control,
  • confronting our own cognitive shortcuts,
  • revealing how much of our thinking is habit rather than understanding,
  • seeing our limitations reflected with uncomfortable clarity.

Everyone asks whether AI will replace us.
But the more unsettling question is whether it understands us better than we understand ourselves.

People aren’t afraid of the machine. They’re afraid of the mirror.


What We See When Tools Reflect Us

AI is made from the collective imprint of human language, intention, emotion, error, and aspiration. It is, quite literally, built from us. So of course it feels uncanny. So of course it feels personal. The existential turn is not about AI gaining agency. It’s about humanity confronting itself at scale.

AI didn’t become existential the moment it became powerful.
It became existential the moment it became personal.


Why I’m Writing This Now

The more I think about it, the more I realize something simple:

I didn’t suddenly start caring about AI. The clues were scattered through years of reading, teaching, searching, and trying to understand how people navigate change. I just didn’t see the pattern until now.

In a way, I didn’t choose this topic. It revealed itself. Every book I read, every classroom I taught in, every conversation about fear and change—all of it was pointing me toward this moment.

And now that I’m here, the through-line is undeniable:

The future I’ve been circling for years
is the one I’m finally writing about today.


Next Week
Part 5: Absorbing the Future: Toward a New Normal

Next week, I’ll close the series with Part 5 — Absorbing the Future: Toward a New Normal, a hopeful exploration of what it means to integrate new tools, new rhythms, and new ways of thinking into a life that still feels human, grounded, and meaningful.

If this series has been about navigating fear and rediscovering faith, Part 5 is about building a future we can finally call home.


References

  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Alfred A. Knopf.
  • Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton Mifflin Harcourt.
  • Brookings Institution. (2025). Are AI existential risks real—and what should we do about them? https://www.brookings.edu
  • Musk, E. (2018). Quoted in Lee, K.-F. AI superpowers: China, Silicon Valley, and the new world order (p. 141). Houghton Mifflin Harcourt.

Suggested Reading

  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press. https://nickbostrom.com/books/superintelligence
  • Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford University Press. https://global.oup.com/academic/product/the-fourth-revolution-9780199606726
  • Harari, Y. N. (2015). Homo Deus: A brief history of tomorrow. Harper. https://www.ynharari.com/book/homo-deus
  • Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105–114. https://doi.org/10.1609/aimag.v36i4.2577
  • Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Belknap Press. https://www.hup.harvard.edu/books/9780674975220
  • Klein, E., & Thompson, D. (2025). Abundance. Avid Reader Press.
  • Dalio, R. (2021). Principles for dealing with the changing world order: Why nations succeed and fail. Simon & Schuster. https://www.principles.com/the-changing-world-order
William Adamaitis

I am a sixty-year-old wild-eyed wanderer who has spent his entire life searching for that “one thing” as his life’s work, only to realize that maybe there is no “one thing.” I have been a beer salesman, a high school math teacher, an insurance adjuster, a government service worker, and a grocery store clerk.

I have lived on both coasts and traveled frequently between the two, and I am eager not only to share my experiences with you but to hear all about yours. Together we will make each other better!