krzysztof.przychodzki

AI Between Progress and Responsibility - 3 Lessons from the World AI Summit

Disclaimer: Written by Human Intelligence. Human Intelligence can make mistakes, including about people, so double-check it.

World AI Summit: A Different Perspective on AI #

I went to the 9th World AI Summit in Amsterdam (October 8–9, 2025), expecting to hear about how to push AI even deeper into every aspect of my life. The reality turned out to be quite different. Instead of more recipes for using LLMs, I heard about responsibility, sovereignty and the human cost of progress.

World AI Summit - seconds before opening keynote

Unlike developer conferences, where the main topics are code, architecture and performance, this conference focused on technological, ethical and social awareness. Speakers discussed how to remain human in a world that seems to be designed by algorithms.

Many of them emphasised the need for Europe to develop its own AI models, created locally and in compliance with European law and data-privacy regulations.

Although it seems obvious, the need to become independent of the global AI leaders (such as the USA and China) is often overlooked. In this busy world, we should ask ourselves: where are we heading?

Here are three lessons from the World AI Summit that, in my opinion, should change how each of us thinks about AI and technology.

Lesson one: The Empires of AI and the Cost of “Reliability” #

Talk title: “Empire of AI: How Silicon Valley is reshaping the world” by Karen Hao.

Karen Hao, author of the book Empire of AI, said it directly: behind our “technological miracles” lies a harsh reality. AI models are not just code and GPUs; they are also part of a global chain of dependencies in which someone (or something) pays the price for our progress.

What is the real price? #

Human cost:

  • AI systems rely on vast amounts of labelled data, much of which is created by low-paid workers in the Global South. These workers spend days moderating and tagging toxic content so that our models can ‘learn ethics’. For many, this work results in lasting psychological trauma.

Environmental cost:

  • Training and running large AI models takes a heavy environmental toll. Each new model demands enormous amounts of energy and water. Data centres consume millions of litres of drinking water every year for cooling and require as much energy as a small city. Forecasts for future consumption are not optimistic.

Social and ethical cost:

  • AI companies are gaining influence over the media, the economy and even politics. Like ancient kingdoms, they are growing in power by exploiting cheap, poorly paid labour and profiting from various raw materials such as land, minerals, art and data.

The takeaway #

Every “clean” model has a human and environmental cost. Before you deploy another AI agent, ask yourself: what data and resources does my model rely on? Where possible, use a model trained on your own data or one specialised for your problem; smaller, targeted models can be just as effective while consuming far less energy and fewer resources (see the sketch below). That’s our real contribution to more responsible, sustainable AI.
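
To make the idea concrete, here is a minimal sketch, assuming Python with the Hugging Face transformers library installed; the checkpoint name is only an illustrative example of a compact, task-specific model and was not discussed at the summit.

```python
# A minimal sketch: prefer a small, task-specific model over a
# general-purpose LLM when the problem is narrow (here: sentiment analysis).
# Assumes the Hugging Face `transformers` library; the checkpoint below is
# an illustrative example of a compact fine-tuned model (~67M parameters).
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The keynote on sovereign AI was genuinely inspiring."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```

A model of this size runs comfortably on a laptop or a single modest GPU, which for many narrow production tasks is all that is needed, without the energy footprint of a frontier model.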

Lesson two: Time Worth Wasting – The New Enemy: Efficiency #

Talk title: “Time worth wasting: Preserving meaning in a predictive world” by Jason Snyder.

Jason Snyder, futurist and philosopher of technology, asked a question that stays in your mind for a long time:

Will a world optimized by AI become a world without meaning?

In “Time Worth Wasting”, he explained that in the age of AI everything becomes predictable, measurable, and optimized, and is therefore stripped of randomness, risk, and meaning. It’s precisely the “wasted time” — aimless conversations, wrong decisions, or getting lost — that gives human life its sense and direction.

At the end of his talk, Snyder left five simple rules — five truths — for a world drowning in models:

  • GUARD THE REFLECTION: Truth isn’t created by machines. It’s distorted by them if we’re not careful.
  • PROTECT YOUR AGENCY: The machine can act. Only you can decide.
  • CHOOSE WHAT’S WORTH DOING: Prediction is cheap. Pursuit is sacred.
  • DON’T OUTSOURCE JUDGMENT: The real risk isn’t in the algorithm, it’s in our willingness to surrender.
  • REMEMBER: AI IS THE MIRROR: What we feed it is what we become.

The takeaway #

AI won’t replace your responsibility or curiosity. Build systems that not only work but also reflect the values you want to see in the world.

Lesson three: Does Your Technology Serve People? #

Talk title: “AI in the city: How Amsterdam uses AI for urban well-being” by Swaan Dekker.

Amid all the warnings and reflections, there was also a ray of hope: the example of the city of Amsterdam, which implements AI in a human-centred way. Swaan Dekker showed how the city uses AI to genuinely improve residents’ lives, from handling citizen reports to intelligent infrastructure maintenance. This is AI serving public well-being, not corporate power. Their philosophy rests on three simple principles:

  • Human-centric – focused on people
  • Reliable – trustworthy and transparent
  • Future-proof – built to last

But there was something even deeper in that message, though perhaps this is my own interpretation: Amsterdam tries to see the human being in every step of the technological process. The goal is not just to make AI smarter, but to make institutions more responsive to the people they serve.

This approach has huge potential if applied more broadly in offices, hospitals, and courts. In places ruled by bureaucracy and impersonal procedures, AI can become a bridge between citizens and institutions, a tool that simplifies contact, speeds up decisions, and restores people’s sense of agency. This isn’t automation for efficiency, it’s automation for humanity. And perhaps that’s the true meaning of sovereign AI: not only technical independence but also ensuring that technology serves the common good.

Conclusion #

Each lesson from the World AI Summit — from Hao, through Snyder, to Dekker — was about one thing: regaining human agency in the age of automation. It’s not technology that should set our pace — we should decide the direction.

Perhaps the real test of our technological maturity isn’t how quickly we can automate, but how carefully we can choose what remains human. Not everything worth doing is worth optimizing. Not every problem worth solving requires an algorithm.
