Beyond rationality: When math and logic fail?

Imagine that you are the mayor of a small town of 1000 people. Right now you are facing a crisis – there has been an outbreak of a lethal disease and the whole town is under quarantine.


As the person in charge of the safety and lives of all citizens, you have two options. The first is to wait for an experimental drug that may cure the disease. Unfortunately, there is only a 10% probability that it will work. If it works, everyone will recover; if not, all 1000 people will die. Hence the second option: right now you can evacuate 100 people who are definitely not ill. That means the 900 people left in the town will get no medical help and will certainly succumb to the illness and die.

It’s high time you made a decision. You can take a chance and try to save everyone with a 90% probability of failure, or save 100 individuals out of the 1000. What would you do?

A cold calculation

If you think that one of these options makes more sense than the other – congratulations, you are probably human. From a mathematical, rational point of view, it does not matter which one you choose: both options are exactly equivalent. The expected utility, however heartless it may sound in this context, of treating 1000 people with a 10% probability of success equals 100 people saved – exactly the number of people saved when you choose the second option.
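The equivalence above is just an expected-value calculation. A minimal sketch (variable names are my own, not from the post):

```python
# The mayor's dilemma: both options save the same number of
# people *in expectation*.
p_success = 0.10        # chance the experimental drug works
town_population = 1000  # people under quarantine
evacuees = 100          # people saved for certain by evacuation

# Option A: gamble on the drug – expected number of lives saved
expected_saved_drug = p_success * town_population  # 0.10 * 1000 = 100.0

# Option B: evacuate the healthy – lives saved with certainty
saved_evacuation = evacuees  # 100

# Mathematically, the two options are equivalent
assert expected_saved_drug == saved_evacuation
```

The math sees no difference; only the humans deciding do.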

This approach, when used to make decisions about human lives, may make you feel a little uncomfortable. That is perfectly fine. We, as people, always add something to this equation – our values, ideals, preconceptions, feelings and experiences. And, it has to be repeated, this is perfectly fine, because these are the very elements that make us human.

The sharing dilemma

But how is this story connected with games? Let us consider another example. Imagine that you play a game similar to A Common Dilemma, Forest Rules or Laudato Si. You and the other participants act as a community extracting common-pool resources. Let us assume that you want to work together for the benefit of the whole group and decide to exploit the reserves in a sustainable way. In the context of these games, such a solution is described as the social optimum. But then you may face a dilemma: how will you divide the payoff? Equally? According to each participant’s contribution? Or maybe in a way that benefits “the poor” of the game’s world?


These are all excellent questions that, from the mathematical point of view, do not matter at all. In the common-pool resource dilemma, any distribution of the payoff is socially optimal as long as the group’s total decision maximizes the long-term yield of the given resource. So if one person gets everything and all the rest live in in-game poverty, this solution is still 100% socially optimal. Again, if it makes you feel uneasy, that is perfectly fine.
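To make the point concrete, here is a small sketch (the yield figure and player counts are assumptions for illustration, not taken from any of the games mentioned): social optimality depends only on the group’s total extraction, not on how the payoff is split.

```python
# Assumed maximum long-term (sustainable) yield per round
SUSTAINABLE_YIELD = 120

def is_socially_optimal(payoffs):
    """A payoff split is socially optimal iff the group as a whole
    takes exactly the sustainable yield; the split itself is irrelevant."""
    return sum(payoffs) == SUSTAINABLE_YIELD

equal_split    = [30, 30, 30, 30]   # everyone gets the same
one_takes_all  = [120, 0, 0, 0]     # one player gets everything
overexploiting = [50, 50, 50, 50]   # group exceeds the sustainable yield

print(is_socially_optimal(equal_split))    # True
print(is_socially_optimal(one_takes_all))  # True – still "optimal"
print(is_socially_optimal(overexploiting)) # False
```

Both the egalitarian split and the winner-takes-all split pass the optimality check – which is exactly why the check alone cannot settle the fairness question.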

This uneasiness is what distinguishes us from robots and decision-making algorithms. And it lies at the core of using games as social simulations. Even if some kind of optimal solution exists, and even if that solution is known to the players, it may not be so obvious in play. Not because of our fallibility or our tendency to use error-prone heuristics, but because even in a conscious decision-making process we still apply our broadly defined values and experiences. Individual differences in these areas may cause tensions: one person may tend towards rewarding participants according to their input, while another favours an unequal distribution that benefits the “less privileged”. Such tensions, as shown above, can emerge even if both parties propose an optimal solution.

Learning from decisions

And that is why adding the word “social” to simulations is so important. Artificial worlds may incorporate strategies or actions that are objectively best. But once they are inhabited by real humans, these solutions may become fuzzy. Therefore we, as participants, learn not only about the specific simulated slice of reality, but also about ourselves – the drivers behind our decisions and the decisions of other players. These lessons may be extremely valuable for experts who already know the “optimal” solutions, as they can learn why others do not regard them as such. For everyone else, such games provide a safe environment to experience and reflect on how norms, values, emotions and ideologies shape the world we live in. And they do, in fact, shape the world around us, because we are human.

 

If you find the issue interesting, browse more games on the topic in the Gamepedia, and share your experience in comments below the post!
