Johannes Kleske

Decoding and Shaping Futures

Hands-on instead of hype cycle: The practical way out of the AI discourse dilemma


How concrete experience creates the basis for productive debates

The AI balancing act: between philosophy and practice

“How do we protect ourselves from the existential threat posed by AI?” asks an expert at a specialist conference. Meanwhile, millions of people talk to ChatGPT and Claude every day about their mental health (therapy is the top use case for generative AI in 2025) or have the tools generate lessons to help them study. This juxtaposition of philosophical debates on fundamental principles and pragmatic everyday use characterizes the current approach to artificial intelligence and creates a remarkable divide.

On the one hand, there is a high-flying, abstract theoretical discourse that has been circulating for years on conference stages, in feature pages and political arenas. The same existential questions dominate:

  • Will humanity be replaced?
  • How is work changing?
  • What ethical boundaries do we need to draw?

On the other hand, especially since the ChatGPT moment at the end of 2022, we are experiencing a rapidly growing practical application of these technologies in everyday life. People without special technical skills are pragmatically experimenting with AI tools, integrating them into their workflows, and sharing their experiences. According to recent data from McKinsey, more than 53% of C-level executives now use at least one AI tool regularly in their day-to-day work (despite all the justified criticism of such self-report studies).

This divide becomes a real problem as soon as you try to orient yourself in the field: the entrenched abstract theoretical discourse actively hinders practice-oriented debate without making any progress itself. The result is confusion, polarization, and a massive barrier to entry for people who want to engage constructively with the topic.

My thesis is that we are currently failing in both dimensions: the abstract theoretical discourse on AI is making no substantial progress, and as long as fundamental theoretical debates consistently eclipse concrete practical applications, we can neither fully tap their potential nor address the actual problems that come with them.

Over a decade on the future of work and AI

Book title: Race Against The Machine

My personal contact with the AI discourse began in 2012-2013, when I prepared and gave my first talk at re:publica. The key question: What would happen if machines were to take over mental work in the future? Until then, the discussion of work and automation had focused on machines taking over physical work. Now a new discourse began, catalyzed by the book “Race Against The Machine” by Erik Brynjolfsson and Andrew McAfee. Technologies from the field of artificial intelligence, above all machine learning at the time, suggested that machines would soon be able to take over cognitive work as well.

The discussion revolved around the question of how we want to live if a large part of mental and knowledge work is also automated. Proponents propagated the vision of a “digital Athens”: the idea that if machines took over more and more routine work, people would have more time for art, discourse, philosophy, and politics. The debate surrounding universal basic income also received more attention.

In 2015, I then gave a second talk at re:publica. This was after the publication of the even more influential book “The Second Machine Age” by the same authors, which further fueled these hopes and raised them to an even broader stage.

In my research, however, I discovered another side: behind many supposedly “magical” AI functions was actually cheap human labor. The researcher Lilly Irani had coined the apt term “data janitors” for this: clickworkers who, for minimal wages, make sure that Google Maps stays tidy or that social media feeds remain free of problematic content.

The endless loop of theory

What has sobered me since around 2015 is the observation of how little this abstract theoretical discourse has actually developed since then. The same lines of argument are rehashed again and again, the examples merely slightly updated.

The pattern remains remarkably consistent: a technological breakthrough – be it AlphaGo’s victory over the world’s best Go players or the release of ChatGPT – is immediately extrapolated in two opposite directions. The optimists proclaim the dawn of a new era, while the pessimists paint apocalyptic scenarios.

After a few months, a phase of disillusionment follows, in which it turns out that neither the utopian nor the dystopian scenarios have materialized. And then the cycle starts all over again, without any substantial evolution of the core arguments.

Screenshot of the Economist website with the article "There is a vast hidden workforce behind AI."

Every couple of months, the same sensationalist article appears saying that machines are “really” taking over everything. Meanwhile, the latest issue of The Economist once again reports on how clickworkers in the global South are doing the dirty work for the latest AI systems (the true meaning of “human in the loop”).

This cyclical nature of the discourse—breakthrough, hype, apocalyptic warnings, disillusionment, and repetition—prevents the debate from developing substantially. The result is the impression of a sham battle that, although rhetorically brilliant, ultimately stagnates in its basic assumptions.

The dynamic can be condensed – with a slight touch of irony – into five phases:

| Phase | Typical narrative | Duration |
| --- | --- | --- |
| Breakthrough | “It’s crazy what the new model can do!” | 1-2 weeks |
| Hype | “The AI revolution is here!” | 1-3 months |
| Warning | “Existential risks for humanity!” | parallel to the hype |
| Disillusionment | “Not as revolutionary as we thought.” | 3-6 months |
| Repetition | New breakthrough, same pattern | endless |

The catalyst: How ChatGPT made theory and practice collide

ChatGPT burst into this cyclical theoretical discourse at the end of 2022 and created a caesura. For the first time, there was a broad overlap between what people understood by “artificial intelligence” (characterized by science fiction and pop culture) and what they could actually experience and use themselves.

Before this moment, most people had only experienced AI systems indirectly, for example, through traffic systems or purchase recommendations. None of this was usually consciously perceived as AI.

ChatGPT changed this fundamentally: suddenly, anyone who had ever sent a text message could interact with a system that at least superficially corresponded to the image that many had of “artificial intelligence” in their heads.

This moment intensified the abstract theoretical discourse and opened up a broad space for concrete practical applications and experiments for the first time. The systematic problem: these two levels constantly intermingle and hinder each other without the participants being aware of it.

A typical example: you want to understand how a certain AI tool works, so you search on YouTube. But all you find are videos proclaiming in superlatives that superintelligence has arrived and everything is about to change. Just try understanding AI agents with nothing but YouTube. Finding the sober instructions in between is a challenge.

Or: You ask a very specific technical question about a language model in a forum, and instead of an answer, you get an emotional tirade about job destruction or wasting energy.

This mixture of both levels – lofty theoretical discourse and pragmatic application – has not dissolved with the ChatGPT moment but intensified.

Out of the theory, into the tool: A plea for the concrete

What I am proposing in this article may sound counterintuitive: it is worth consciously ignoring the abstract theoretical discourse for now and putting it on the back burner.

Instead, I advocate taking concrete practical experiences seriously in detail: remaining critical, questioning, and decoding, but on a more concrete level than the abstract-philosophical one.

Mini method box: The practical introduction

  1. Define a specific application: Choose a manageable task from your day-to-day work (e.g., summarizing texts, feedback on ideas, etc.).
  2. Systematic comparison: Test at least two different models with identical prompts (e.g., ChatGPT and Gemini).
  3. Structured reflection: Evaluate not only the results but also the process, your interaction, and any necessary adjustments.

What specific processes from your day-to-day work could you test tomorrow with the language models?
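The comparison step above can also be sketched in a few lines of code. This is a minimal sketch, not a finished tool: the `ask_model_a` and `ask_model_b` functions are hypothetical stand-ins that you would replace with real calls to ChatGPT, Gemini, or a local model.

```python
# Step 2 (systematic comparison) as a sketch: send the SAME prompt to
# several models and collect the answers side by side.
# The ask_* functions are placeholders -- swap in real API calls.

def ask_model_a(prompt: str) -> str:
    # Placeholder for a real API call (e.g., ChatGPT).
    return f"[Model A] answer to: {prompt[:40]}"

def ask_model_b(prompt: str) -> str:
    # Placeholder for a real API call (e.g., Gemini).
    return f"[Model B] answer to: {prompt[:40]}"

def compare_models(prompt: str, models: dict) -> dict:
    """Run one identical prompt against every model and return all answers."""
    return {name: ask(prompt) for name, ask in models.items()}

prompt = "Summarize the key argument of this article in two sentences."
results = compare_models(prompt, {"model-a": ask_model_a, "model-b": ask_model_b})

for name, answer in results.items():
    print(f"{name}: {answer}")
```

Keeping the prompt identical across models is the whole point: only then do differences in the answers tell you something about the models rather than about your wording.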

My own experiments illustrate the value of this approach:

Example 1: Language model comparisons

I am constantly comparing different models and different prompting variants to understand how language models actually work. These experiments give me a sense of language models as statistical tools that can capture and reproduce text patterns but not “understand” them.

Example 2: Local AI models

I regularly experiment with AI models running locally on my own MacBook. This experience gives me an immediate feel for the processor intensity and energy consumption of these systems.

When I watch my computer struggle with ten documents instead of two, I understand much more tangibly what it means when millions of requests are processed in data centers.
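That felt difference can also be turned into numbers. Here is a small sketch that times each document individually; `run_local_model` is a hypothetical stand-in, since the real call depends on your local setup (llama.cpp, Ollama, or similar).

```python
# Make the resource cost of local inference visible: time each document
# instead of just "feeling" the fan spin up.
import time

def run_local_model(document: str) -> str:
    # Placeholder: a real call would run the document through a local LLM.
    return document.upper()  # simulated "processing"

def timed_batch(documents: list) -> dict:
    """Process each document and record how long it took, in seconds."""
    timings = {}
    for i, doc in enumerate(documents):
        start = time.perf_counter()
        run_local_model(doc)
        timings[f"doc-{i}"] = time.perf_counter() - start
    return timings

timings = timed_batch(["first document", "second document"])
total = sum(timings.values())
print(f"processed {len(timings)} documents in {total:.4f}s")
```

Even rough per-document timings make it easier to reason about what the same workload means when scaled up to millions of requests in a data center.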

This kind of concrete experience creates a quality of reflection that is often lacking in the abstract theoretical discourse. In my experience, people who have never engaged with these tools in depth struggle to discuss their use constructively.

Equipped with this practical experience, we can now return to the level of theoretical discussion—albeit from a significantly different perspective.

The synthesis: How practice enriches theory

My thesis is that we can only return to the abstract theoretical discourse in a meaningful way from concrete practical application. Equipped with empirically based experience, we can then discuss much more productively and break down entrenched positions.

Max Read gets right to the heart of this idea:

“The more people use A.I. with some regularity, the more broad familiarity they’ll develop with its specific and consistent shortcomings; the more people understand how LLMs work from practical experience, the more they can recognize A.I. as an impressive but flawed technology, rather than as some inevitable and unchallengeable godhead.”

This experience-based demystification is exactly the key we need. The more people work with AI systems on a regular basis, the better they understand their specific and consistent weaknesses. Hands-on experience makes it possible to recognize AI for what it is: an impressive but flawed technology, not an inevitable, untouchable machine god.

If we understand how these systems actually work, where their strengths and weaknesses lie, and how they fit into our everyday lives or not, then we can talk about them in a much more nuanced way. Is “automation” even the right term? Or are we not dealing with something more complex that cannot be captured with simple buzzwords?

Experimenting instead of speculating: The path to productive debate

As someone who has been observing the abstract theoretical discourse for a long time, I have to admit: constantly repeating the same arguments obviously doesn’t get us anywhere. It sometimes feels as if every new article on artificial intelligence could be generated with a language model: the core arguments are always the same, only slightly adapted and illustrated with different examples.

That’s not how we make progress. The really fascinating developments take place in the area of concrete practical application and reflection. This is where new questions, insights, and perspectives arise that could enrich the abstract theoretical discourse if we were to allow them.

My plea is therefore: let’s pause the abstract theoretical discourse for a moment and focus on concrete practical applications. Then, armed with new insights, we can engage in a more differentiated and productive debate.

Three steps to a well-founded AI discussion:

  • Experiment instead of speculating: Gather your own experience with different AI applications.
  • First observe, then evaluate: Systematically document potentials and limitations in your specific context.
  • Connect instead of separate: Use your practical insights to enrich the overarching discourse.

Do you have questions about the AI discourse, or would you like some tips on the practical use of AI tools in your context? I offer free office hours every Wednesday. Simply book a 15-minute slot.
