Celebrated anthropologist Margaret Mead once said in one of her lectures that a healed femur was the first sign of civilization. The fact that someone survived a near-fatal injury and received enough care to heal marks the start of human civilization. Our ability to reason is almost as old as our natural instinct to care for our fellow humans. In fact, for centuries, it was the ability to reason that set humans apart from other animals and machines. But now, with the rise of reasoning in AI, that distinction is being challenged.
What is Reasoning in AI?
Reasoning is the psychological process of drawing logical conclusions and forecasting outcomes based on all available knowledge, facts, and beliefs. It is concerned with obtaining accurate information or insight from current facts. Reasoning is essential in AI because it enables machines to draw conclusions in a way that approximates human thought and, by extension, to act more like humans.
In the development of AI, the ability to reason is crucial. In this context, reasoning means using prior knowledge to make inferences, form hypotheses, or develop strategies for addressing a problem. To model how the human brain thinks and draws conclusions about particular things, we need the aid of reasoning, which is why it is so central to AI.
Types of Reasoning in AI
AI divides reasoning into the following categories:
- Deductive Reasoning: This is figuring out new information from known information that is logically tied to it. It is a type of valid reasoning, which means that if the premises are true, the result must also be true.
Example: If a = b and b = c, then a = c. Likewise, every number that ends in 0 or 5 is divisible by 5; since 35 ends in 5, it must be divisible by 5.
- Inductive Reasoning: This generalizes from specific facts or limited data to a broader assertion or conclusion.
Example: I have seen only white cats. Therefore, the majority of cats are likely white.
- Abductive Reasoning: It starts with one or more observations and finds the most likely explanation or conclusion.
Example: When I walked outside this morning, the grass was completely covered with dew. Presumably, it rained last night.
- Common Sense Reasoning: This is a form of informal reasoning acquired through life experiences.
Example: Touching a stove – Common sense tells you not to touch a hot stove to avoid getting burned.
- Monotonic Reasoning: When using monotonic reasoning, the conclusion remains the same even if we add new facts to our knowledge base.
Example: The Sun rises in the East and sets in the West.
- Non-monotonic Reasoning: In non-monotonic reasoning, if we find out something new, it might make some of our findings wrong.
Example: Consider a bowl of water. If we place it on the stove and turn on the flame, we conclude that it will boil. But if we then learn that the flame has been switched off, we must revise that conclusion and expect the water to gradually cool down instead.
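The deductive pattern above (if a = b and b = c, then a = c) can be sketched in code. This is a minimal illustration, not from any library; `deduce_equalities` is a hypothetical helper that repeatedly applies the transitivity rule to a set of known facts until no new conclusions appear.

```python
def deduce_equalities(facts):
    """Given pairs like ("a", "b") meaning a = b, deduce all equalities
    that follow by symmetry (a = b implies b = a) and transitivity
    (a = b and b = c imply a = c)."""
    equal = set(facts) | {(b, a) for a, b in facts}  # add symmetric pairs
    changed = True
    while changed:  # keep applying the transitivity rule until nothing new
        changed = False
        for x, y in list(equal):
            for y2, z in list(equal):
                if y == y2 and x != z and (x, z) not in equal:
                    equal.add((x, z))
                    changed = True
    return equal

facts = [("a", "b"), ("b", "c")]
print(("a", "c") in deduce_equalities(facts))  # True: deduced, never stated
```

Because deduction is monotonic, adding more facts can only add conclusions; it never invalidates `("a", "c")` once derived.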
How do Machines Think?
There is skepticism regarding the reliability of machine learning because of the mystery surrounding the processes through which AI arrives at its conclusions. In a recent study, researchers unveiled a technique for quickly analyzing the behavior of AI software by evaluating how well its reasoning aligns with human reasoning.
As machine-learning conclusions are applied in the real world more and more, it is crucial to understand how a model arrives at its findings and whether its process is sound. For instance, an AI program might correctly identify a skin lesion as malignant, yet do so by concentrating on a background blotch unrelated to the lesion itself.
ALSO READ: Human Intelligence vs. Artificial Intelligence: What are the Top Differences?
Can We Trust AI Reasoning?
The Harvard Machine Learning Foundations Group is conducting a seminar called “Teach Language Models to Reason.” It is one of many workshops that the group puts on.
The researchers said that their method is made up of four parts:
- Chain-of-thought, which means generating intermediate reasoning steps before coming up with a final answer.
- Self-consistency, which means taking multiple samples and choosing the most common solution.
- Least-to-most, which means breaking problems up into smaller parts and solving each one separately.
- Instruction finetuning, which means fine-tuning a model on instruction-style examples so it can handle new problems without task-specific training.
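Of these four parts, self-consistency is the easiest to sketch in isolation. The example below is a hedged illustration: the hard-coded sample answers stand in for the multiple reasoning paths a language model would actually generate, and `self_consistency` is an illustrative name, not a real API.

```python
from collections import Counter

def self_consistency(samples):
    """Majority vote: return the most common final answer among
    several independently sampled reasoning paths."""
    return Counter(samples).most_common(1)[0][0]

# Five imagined chain-of-thought samples for "What is 15% of 80?"
samples = ["12", "12", "8", "12", "1.2"]
print(self_consistency(samples))  # → 12: three of the five paths agree
```

The idea is that individual reasoning chains may go wrong in different ways, but correct chains tend to converge on the same answer, so the majority vote is more reliable than any single sample.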
The researchers talked about their hopes for Large Language Models (LLMs) that can think for themselves and what they could do to help people. They also said, “Larger models will make our world more efficient.”
Likewise, according to research conducted by UCLA's faculty and staff, the popular AI-powered tool GPT-3 can reason as well as college undergraduates. The UCLA team tested GPT-3 with reasoning questions like those that appear on IQ tests and standardized exams such as the SAT. The AI model was given the tasks of solving SAT analogy questions and predicting the next shape in a complex arrangement, and it performed exceptionally well on both.
Can AI Reason Like Humans?
Have you ever wondered why some sites ask you to click a box that says, "I am not a robot"? What is really stopping a robot from clicking that checkbox? Actually, it's not the clicking of the box but your mouse movement that the site tracks. Robots make a linear movement to the box, while humans take a more fluid path. Similarly, reasoning in AI is different from how humans reason.
Humans tend to put themselves in other people's shoes, so we immediately attribute human traits to AI. But Artificial General Intelligence (AGI), if or when it arrives, may not look as human as we expect. In AI, reasoning is assumed to reduce to a mathematical combination of inputs that delivers the required outcome. Human decisions, however, are often unique to the individual and hard to capture in such formulas. As a result, using AI in computers and robotics to solve complex problems with several alternative answers remains highly challenging.
Some experts say we are seeing the start of "true" or "strong" AI, while others say this will not happen for a long time. Some experts even argue that the tests used to measure how human-like an AI is are flawed because they only look at specific types of intelligence.
ALSO READ: What is AI Singularity: Hope or Threat?
The AI Expert’s Verdict
In the movie The Imitation Game, Alan Turing, a founding figure of modern computing, breaks a critical German code during the Second World War but still allows the ship in question to sink. Why? Because if he hadn't made this 'sacrifice,' the Nazis would have known that he had cracked their master coding machine, Enigma, leading to even greater loss of life. Can AI make the same decisions? The simple answer is: not yet. It's pertinent to remember that machines don't fully understand the human situations that would help them rule in favor of the greater good. They can help you think more critically but can't replace your natural moral judgment.
Write to us at [email protected]