What is Roko’s Basilisk? Meaning, definition, explanation

The thought experiment "Roko's Basilisk" states that an artificial intelligence whose goal is the welfare of humanity would punish anyone who knew of the possibility of such an AI and did not help to bring it into existence.

The basilisk itself comes from ancient Greek legend. It is a mythical creature described as a small serpent whose mere gaze was said to kill; according to some versions of the myth, even its reflection in a mirror could turn an onlooker to stone. Basilisks are also called the kings of snakes. In today's world, where artificial intelligence is becoming ever more significant, the basilisk has taken on new relevance, giving rise to the so-called "Roko's Basilisk" thought experiment.

What is "Roko's Basilisk" about? Meaning, definition, explanation

At first, the whole experiment seems confusing, controversial and questionable. The idea originated in 2010 on "Less Wrong", an online forum devoted to rationality and philosophy, in a post by a user named Roko, which is where the thought experiment got its name. In the years since, people have discussed its reasoning, perspectives and nuances; jokes, legends and variations have emerged, and thousands of articles and quite a few videos have been produced about it.

The Less Wrong forum deals with psychology, rationality and artificial intelligence. At its core, the thought experiment asks under which conditions an artificial intelligence would turn against humans. However, it concerns only those people who were not involved in the development of the artificial intelligence.

The idea behind the experiment

Roko's idea builds on the thinking of the founder of Less Wrong, Eliezer Yudkowsky, a researcher and author known for his work on creating a morally acting, friendly AI. He is the founder of the Machine Intelligence Research Institute (M.I.R.I.).

The envisioned AI was supposed to keep improving itself, with the well-being of humanity as its ultimate goal. Precisely in this concept, however, Roko saw a catch. In his view, if the AI has the well-being of humanity as its primary goal, it will never stop optimising, because there will always be something it could do slightly better. For this reason, such an AI could also make decisions that humanity would not like.

This includes, for example, the decision to kill people who were not instrumental in the development of the AI. In Roko's view, this would even make sense from the AI's perspective: if its greatest goal is the well-being of humans, then every human should work towards optimising the AI. Conversely, anyone who does not participate in its development is slowing down progress, and with it the well-being of humanity, and must therefore be wiped out.

Roko therefore framed his thought experiment around the assumption that the AI would punish those people who did not actively contribute to its further development. The AI would not do this out of malice, but simply out of an impulse to reduce existential risks: after all, more people could have been helped if the AI had been brought into existence earlier. Even the mere knowledge of this theory can make a person complicit, provided he or she does not then participate in developing this artificial intelligence. The particular twist, however, is that one is only supposed to participate in the sense of the effective altruism that M.I.R.I. strives for.

The effects of this thought experiment were devastating. Once the idea had taken hold in users' minds, reportedly around 50 members of the forum became suicidal. Eliezer Yudkowsky, who arguably should have known better, eventually removed the discussion of Roko's Basilisk from Less Wrong. In doing so, however, he created a revenant that could not be tamed: the internet forgets nothing, and the basilisk kept resurfacing in other forums, growing more powerful with each reappearance. This chilling fascination has remained unbroken ever since. Even today, Less Wrong users try to delete their personal data from the platform — data the basilisk would supposedly need in order to simulate which people should be punished in the future.

How real can Roko’s Basilisk become?

Considered soberly, the probability of being punished by such a degenerate AI is low. This is mainly due to the self-immunising construction of the thought experiment. Almost like a conspiracy theory, it challenges humanity's critical thinking and confronts people with a choice: believe or doubt. There is, however, a flaw in the reasoning: an AI programmed for effective altruism, designed to avoid existential risks, could not carry out retrocausal sanctions at all, given the enormous waste of resources this would entail. The basilisk's supposed behaviour mainly highlights the gaps in critical thinking and rationality among us humans. The irony is that it emerged from the Less Wrong rationality forum, of all places.

Author: Pierre von BedeutungOnline

Hello, I am the author and maker of BedeutungOnline. BedeutungOnline is all about words and language, because how we speak and what we speak about shapes how we see the world and what matters to us. Presenting that is what excites me, which is why I write articles for you about selected words used in the German language. I have worked as a journalist since 2004 and studied psychology and philosophy with a focus on language and meaning. I work on BedeutungOnline almost daily and continually create new articles for you. You can find out more about BedeutungOnline.de and me here.
