
👾 The Philosophy of AI | Part I

The Philosophy of Intelligence: Exploring Existentialism, AI, and the Essence of Human Freedom

“If man is not, but creates himself, and if, by creating himself, he assumes responsibility for the whole human species, if there is neither a value nor a morality given a priori, but if we must decide alone in every case, without support, without guidance, and yet for all, how can we not be afraid when we have to act?” (Jean-Paul Sartre, Existentialism is a Humanism)

We live in a time characterized by fundamental upheavals. Sam Altman, one of the defining personalities of our era and CEO of OpenAI, calls it the “Intelligence Age” in his blog - an age in which not only our technologies but also the foundations of our lives and coexistence are being transformed. But while the speed of this development is breathtaking, there is a lack of fundamental orientation. The crucial questions remain unanswered:

  • How do we want to live in the future?

  • What values should guide our actions?

  • And what role will artificial intelligence play in this new chapter of human history?

This lack of orientation shows up, for example, in economic life: if AI can do work that historically could only be done by humans, will we need a universal basic income once millions of people become unemployed? It is equally evident in social life, in questions of friendship and relationships, when artificial intelligence takes on the role of a close confidant and knows us better than our closest friends and family members. All of this confronts us with unresolved questions and challenges. I have already explored some of these questions in my recent article, The Post-AGI-World.

Philosophy can serve as an anchor here, as a tool for penetrating the uncertainty and complexity of these changes. Historically, it has often been dismissed as a dry, academic discipline, but its practical value becomes apparent whenever humanity stands at a crossroads. Thinkers such as Kant and Hegel provided criteria for moral action and development, while the Epicureans took a radically practical approach to the question of the good life. Their ideas have survived the centuries and millennia without losing any of their relevance: the Epicurean question of the good life is still with us today, Hegel's dialectic remains an instrument for understanding development, and Kant's categorical imperative is still a way of weighing what the right action is. In today's world, philosophy could once again prove its strength - not as mere theory, but as light in the midst of the darkness of an uncertain future. And not least with the help of living thinkers and their contemporary interpretations of classical philosophy, such as Slavoj Zizek, Robert Pfaller and Byung-Chul Han, to name but a few.

With this series of articles, I would like to explore the relationship between philosophy and artificial intelligence. The idea is to use different philosophical perspectives to shed light on the complex questions of our time. It is about nothing less than understanding our own existence: What does it mean to exist? What does it mean to have consciousness? And where are the boundaries between man and machine? This first part of the series, which deals with existentialism, focuses on precisely these questions. Existentialism, which addresses the nature of being and human freedom like almost no other school of thought, offers an exciting framework for exploring the implications of AI.

Artificial intelligence challenges us to question old assumptions about human nature. Are we really free in our decisions, or are we - as Sigmund Freud argued - ultimately controlled by unconscious processes? If AI is able to “learn” and simulate creative processes, the question arises as to whether it could one day develop a consciousness of its own. What would it then mean to be human, and how would we define ourselves in relation to this new form of intelligence?

At the same time, the debate is interdisciplinary. Neurobiology, psychology and technology offer important insights that complement philosophical reflection. For example, Freud's concept of the unconscious could enter into a dialogue with the “hallucinations” that today's AI models produce - and make us wonder whether we as humans are not also constantly hallucinating, interpreting and constructing our reality.

The aim of this series is to provide philosophical thinking tools that offer orientation and at the same time reflect the complexity of the new era. Dealing with questions of being, morality and freedom is not just a theoretical exercise, but an attempt to develop a compass for the future. Existentialism, which focuses on human existence and freedom, provides the prelude to a journey through the deep waters of thinking about AI and its significance for our world.

Existentialism as a philosophical movement has always been concerned with the fundamental questions of human existence: What does it mean to exist? What defines us as human beings, and what responsibility do we bear for our existence? These questions take on a new urgency in the context of artificial intelligence. For if we create machines that are able to learn, make decisions and perhaps even develop a consciousness, the question arises: what distinguishes humans from machines - or is there ultimately no difference at all?

The core of existentialism is its emphasis on human freedom and responsibility. Philosophers such as Jean-Paul Sartre have argued that man is condemned to be free, since he must inevitably shape himself and the world in which he lives. The existentialist philosophers conceive of freedom radically and directly, which many people at first find irritating or even off-putting:

“When we say that an unemployed person is free, we do not mean that he can do as he pleases and instantly transform himself into a rich and peaceful citizen. He is free because he can always choose whether to accept his lot in resignation or to rebel against it.” (Sartre, Existentialism is a Humanism)

Or, even more drastically: “We were never freer than under the German occupation. We had lost all our rights, first of all the right to speak; we were insulted daily to our faces, and we had to remain silent; we were deported en masse, as workers, as Jews, as political prisoners; everywhere on the walls, in the newspapers, on the screen, we found again that hideous and insipid image that our oppressors wanted to give us of ourselves: because of all this we were free. Because the Nazi poison penetrated our minds, every just thought was a conquest; because an all-powerful police wanted to force us into silence, every word became as precious as a declaration of principle; because we were hunted, every gesture we made had the weight of a commitment.” (Jean-Paul Sartre, Les Lettres Françaises, September 9, 1944)

A short explanation: the quote means that the total oppression of the German occupation forced people to understand every action and every word as a conscious act of freedom and resistance. Precisely because every decision carried risk, it became an expression of personal responsibility and thus an intense experience of freedom.

But if AI increasingly takes over decisions, what will happen to our responsibility? Will it be replaced by algorithms, and won't we lose some of our freedom as a result? Or does cooperation with AI open up completely new possibilities for us to define ourselves and our existence?

Existentialists such as Martin Heidegger have examined “being” as a central theme - a concept that refers not to mere life but to the ability to reflect on one's own existence. This is where one of the most exciting questions in the connection between existentialism and AI arises: can a machine programmed by humans ever develop a “being” of its own? Or does its existence always remain bound to the parameters it has been given?

“In any case, we can say from the outset that we understand by existentialism a doctrine which makes human life possible and which also declares that every truth and every action imply a human milieu and a human subjectivity.” (Sartre, Existentialism is a Humanism)

This discussion naturally leads to the question of authenticity, a cornerstone of existentialist thought. Sartre argued that humans live authentically when they shape their own existence rather than conforming to societal or external expectations. Yet, as AI increasingly influences or even anticipates our decisions, how can we ensure that our lives remain truly 'our own'—especially in the context of emerging robotics? Exploring these questions deepens our understanding not only of AI but also of ourselves, underscoring what makes existentialism one of the most relevant perspectives in this debate.

—

Subscribe to FF Daily to get the next article in this series, The Philosophy of AI | Part II.

Kim Isenberg

Kim studied sociology and law at a university in Germany and has been fascinated by technology for many years. Since the breakthrough of OpenAI's ChatGPT, Kim has been examining the influence of artificial intelligence on our society from a scientific perspective.
