Logic is a human affair
Logic is fundamentally a human affair. There's a prevalent notion, especially in literature and film, that logic stands in opposition to our human nature, which is perceived as more emotional, animalistic, and thus irrational and illogical. The famous television series Star Trek exemplifies this by creating an entire species of logical beings, the Vulcans, and the character Spock, who acts logically, without emotion. Yet Spock is not entirely logical, because he is half-human: his hybrid nature gives him some human emotions, letting him occasionally act contrary to pure logic, much as humans do. This portrayal, I believe, gets it fundamentally wrong. Far from being opposed to our nature, logic is inherently human.
Consider this hypothesis: humans evolved to use logic because of their limitations. Our brains, though powerful, are relatively small, processing only a limited amount of information at any given time. We are constrained by our senses and intellect, accessing just a fraction of the world around us. Faced with limited information, limited processing capacity, time pressure, and harsh environmental conditions, we turn to logic as a vital tool. It allows us to overlook minutiae and seek simplified, general patterns, creating shortcuts in decision-making and reasoning. It is precisely because of our human limitations that we developed logic.
In my earlier research, I focused on using logic to understand how we revise our beliefs and preferences when faced with new, conflicting information. The goal was to uncover universal logical principles that could predict rational behavior, with an eye towards their potential application in artificial intelligence (AI). The idea was to give AI systems logical guidelines to aid in decision-making. A classic example from fiction is Isaac Asimov's three laws of robotics: 1) a robot may not harm a human or, through inaction, allow a human to come to harm; 2) a robot must obey human orders unless they conflict with the First Law; and 3) a robot must protect its own existence unless doing so conflicts with the first two laws.
Philosophers, however, recognize the inherent contradictions in these laws, illustrated by the trolley problem (for an introduction, see "The Good Place," Season 2, Episode 5, "The Trolley Problem"). In scenarios akin to the trolley problem, known as plurality problems, giving specific instructions to a robot governed by Asimov's laws can indeed lead to contradictions. If you direct the robot to pull a lever to save five people, it complies with the First Law by preventing harm to the larger group, but the same action harms the individual on the other track, breaching that very law. Conversely, inaction results in harm to the five people, again violating the First Law. Whatever the robot decides, it breaches the First Law: a genuine paradox. Such scenarios underscore how hard it is to instill AI systems with the capability for ethical decision-making, especially in situations where harm is inescapable.
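To see the contradiction concretely, here is a minimal sketch in Python. All names and numbers are illustrative assumptions, not any real robotics API; it simply enumerates the robot's options in the trolley scenario and checks each against the First Law:

```python
# A minimal sketch of the trolley-style conflict under Asimov's First Law.
# Everything here is an illustrative assumption, not a real robotics API.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    humans_harmed: int  # humans harmed as a direct or indirect result


def violates_first_law(action: Action) -> bool:
    """First Law: a robot may not harm a human or, through inaction,
    allow a human to come to harm. Any nonzero harm is a violation."""
    return action.humans_harmed > 0


# The two options available in the classic trolley scenario.
options = [
    Action(name="pull the lever", humans_harmed=1),  # diverts the trolley onto one person
    Action(name="do nothing", humans_harmed=5),      # the trolley hits five people
]

permissible = [a for a in options if not violates_first_law(a)]

for a in options:
    print(f"{a.name}: violates First Law -> {violates_first_law(a)}")

print(f"permissible actions: {permissible}")  # empty: every option breaches the law
```

Running the sketch prints an empty list of permissible actions: the law, applied literally, leaves the robot with no lawful move.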
Ethical decision-making thus cannot be reduced to a set of simple, general laws. Recent advances in AI have taught us that computers do not rely on logical laws to outperform humans at games like chess or Go. Even advanced AI models like ChatGPT, which have far surpassed what we expected a decade ago, do not operate on pure logical reasoning. Unlike humans, AI is not limited by biology and thus does not need a simplified logical system to navigate the world. Humans do, however, and that is why we created logic. Logic is a human affair, designed for the complexities of our world. It's time we embraced it once more, recognizing its indispensable role in our lives.