Introduction to Cindicator (CND) Part 5

Introduction to artificial intelligence


Artificial intelligence is a broad term that describes the ability of machines to perform “smart” tasks. In a way, artificial intelligence is an umbrella concept that includes machine learning, deep learning, and other types of applications that enable machines to adjust their own algorithms based on previous task performance.


Overview of AI over the years

The term “artificial intelligence” is not new. Even in the myths of ancient Greece there were mechanical men that could perform tasks in a way that mimicked the behavior of humans. In Greek mythology, Talos was a giant man made of bronze who protected the island of Crete from pirates and other attackers. According to one version of the myth, Hephaestus, the god of blacksmithing, created Talos and gave him as a gift to the king of Crete. Talos had just one vein, sealed by a single nail, and the blood in that vein came from the gods.

In the twentieth century, even though early computers could perform only basic mathematical operations, they still did so much faster than humans, which is why engineers thought they were creating “mechanical brains” and “artificial intelligence” that could outperform people. For example, one of the first such machines was the calculator. A calculator can perform a very limited number of mathematical operations, yet the first calculators were so impressive that people thought of them as “smart.”


Introduction of the term “Artificial Intelligence”

John McCarthy was a computer scientist who studied and worked on both the West Coast and the East Coast. He first taught himself mathematics using textbooks from the California Institute of Technology, which later allowed him to skip the first two years of math classes at Caltech. McCarthy then received a Ph.D. from Princeton University and worked at Princeton, Dartmouth, the Massachusetts Institute of Technology and Stanford, where he was a full professor from 1962 until 2000.

McCarthy first introduced the term “artificial intelligence” in 1956, when he invited a group of scientists from various disciplines to Dartmouth, where he was working on a research project. McCarthy believed the term was neutral compared to something like “thinking machines,” because the word “thinking” could lead to controversy.

During the twentieth century, technology was not the only field of knowledge that was changing and seeing incredible breakthroughs. Psychology, biology and other sciences also produced many discoveries and new concepts, in particular about the human brain and how it works. As technology and the other sciences changed, so did the idea of what could actually constitute “artificial intelligence.”

The issue with early machines, including calculators and computers, is that they could only perform operations programmed by people. They could not adjust the algorithms they were using or draw on the history of past operations to improve their performance. Today, a machine that simply follows a given set of instructions can hardly be considered “smart.”
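The contrast between a fixed-instruction machine and one that improves from past performance can be sketched in a few lines of code. This is a toy illustration only (the class name and learning-rate value are invented for the example, not taken from any real AI system): the fixed function behaves identically forever, while the adaptive predictor nudges its internal state toward each observed outcome.

```python
def fixed_add(a, b):
    # A "calculator"-style machine: it always follows the same
    # instructions and never changes, no matter how often it runs.
    return a + b


class AdaptivePredictor:
    """Toy learner: predicts the next value in a stream and nudges
    its estimate toward each observed outcome after the fact."""

    def __init__(self, learning_rate=0.5):
        self.estimate = 0.0
        self.learning_rate = learning_rate

    def predict(self):
        return self.estimate

    def update(self, observed):
        # Adjust internal state based on past error, so that
        # future predictions improve -- a minimal form of "learning".
        error = observed - self.estimate
        self.estimate += self.learning_rate * error


predictor = AdaptivePredictor()
for value in [10, 10, 10, 10]:
    predictor.update(value)

print(fixed_add(2, 3))          # always 5, forever
print(predictor.predict())      # estimate has moved toward 10
```

The adaptive predictor starts at 0.0 and, after seeing the value 10 four times, has moved most of the way toward it, which is the behavior early calculators and computers lacked entirely.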

Rather than simply defining “artificial intelligence” by the growing number of calculations and computations that machines can perform, today the term is more about having machines make decisions and process information the way humans do.

Typically, scientists who work in the field of artificial intelligence try to accomplish one of the following three goals.

First, they work on building systems that mimic the way humans think as closely as possible. This field is known as strong AI. Second, they build systems that simply accomplish results, without paying much attention to whether the systems act and think like humans. These systems are known as weak AI. Finally, some scientists use human reasoning as a model when programming algorithms, but have no goal of making machines act and think like humans at all.


Two types of artificial intelligence devices

“Artificial intelligence” is the name of a science, but also a name for devices. AI devices are typically broken down into two categories: applied and general.

Applied artificial intelligence is much more common than general AI. Applied AI devices typically perform a very narrow set of tasks. For example, there are robots that trade on stock and cryptocurrency exchanges and adjust their algorithms based on their past performance, but these robots can’t perform any other tasks. There are also artificial intelligence systems that make autonomous vehicles accelerate, slow down and make turns, but can’t trade stocks. These are examples of applied artificial intelligence.

General artificial intelligence refers to systems that can handle a wide variety of tasks across a number of fields. Such systems are very rare, but this is where some of the biggest breakthroughs are occurring today.