A Brief History of Artificial Intelligence

The concept of artificial intelligence (AI) covers not only the technologies used to build intelligent machines, including computer programs, but also a distinct area of scientific thought.

Artificial Intelligence - Definition

Intelligence is the mental faculty of a person that comprises the following abilities:

  • adaptation to new situations;
  • learning through the accumulation of experience and knowledge;
  • applying knowledge and skills to manage the environment.

Intelligence combines all of a person's capacities for cognizing reality: with it, a person thinks, memorizes new information, perceives the environment, and so on.

Artificial intelligence is the area of information technology concerned with the study and development of systems (machines) endowed with capabilities of human intelligence: the ability to learn, to reason logically, and so on.

At present, work on artificial intelligence proceeds by creating programs and algorithms that solve problems the way a person would.

Because the definition of AI evolves as the field develops, it is worth mentioning the AI effect: as soon as artificial intelligence makes progress at some task, critics immediately object that this success does not indicate genuine thinking in the machine.

Today, the development of artificial intelligence goes in two independent directions:

  • neurocybernetics;
  • logical approach.

The first direction involves the study of neural networks and evolutionary computing from the point of view of biology. The logical approach involves the development of systems that mimic high-level intellectual processes: thinking, speech, and so on.

The first work in the field of AI began in the middle of the last century. The pioneer of research in this direction was Alan Turing, although philosophers and mathematicians had expressed related ideas as far back as the Middle Ages. By the beginning of the 20th century, a mechanical device capable of solving chess problems had even been demonstrated.

Still, the field really took shape by the middle of the last century. Work on AI was preceded by research into human nature, ways of knowing the surrounding world, and the possibilities of thought, among other areas. By that time the first computers and algorithms had appeared; that is, the foundation had been laid on which the new field of research was born.

In 1950, Alan Turing published an article asking whether future machines could think and whether they could surpass humans in this respect. It was Turing who devised the procedure later named after him: the Turing test.

Turing's publications spurred new research in AI. In his view, only a machine that cannot be distinguished from a person in conversation can be recognized as thinking. Around the same time, the concept of the child machine was born: rather than building an adult mind outright, one would create a machine whose thought processes first form at the level of a child's and are then gradually improved.

The term "artificial intelligence" was born later. In 1956, a group of scientists, including Turing, met at the American University of Dartmund to discuss issues related to AI. After that meeting, the active development of machines with the capabilities of artificial intelligence began.

A special role in the creation of new technologies in the field of AI was played by the military departments, which actively funded this area of ​​research. Subsequently, work in the field of artificial intelligence began to attract large companies.

Modern life poses more complex challenges for researchers. Therefore, the development of AI is carried out in fundamentally different conditions, if we compare them with what happened during the period of the emergence of artificial intelligence. The processes of globalization, the actions of intruders in the digital sphere, the development of the Internet and other problems - all this poses complex tasks for scientists, the solution of which lies in the field of AI.

Despite the successes achieved in this area in recent years (for example, the emergence of autonomous technology), there are still voices of skeptics who do not believe in the creation of a truly artificial intelligence, and not a very capable program. A number of critics fear that the active development of AI will soon lead to a situation where machines will completely replace people.

Research directions

Philosophers have not yet reached a consensus on the nature and status of human intelligence. Accordingly, scientific works on AI offer many different views of what tasks artificial intelligence should solve, and there is no common understanding of what kind of machine may be considered intelligent.

Today, the development of artificial intelligence technologies goes in two directions:

  1. Top-down (semiotic). This involves the development of systems and knowledge bases that imitate high-level mental processes such as thinking, speech, and the expression of emotions.
  2. Bottom-up (biological). This approach involves research on neural networks, which model intelligent behavior in terms of biological processes; neurocomputers are built on this basis.

The Turing test determines whether an artificial intelligence (a machine) can think the way a person does. In the general sense, this approach calls for creating an AI whose behavior in ordinary situations does not differ from human behavior. The test assumes that a machine is intelligent only if, while communicating with it, one cannot tell whether one is talking to a mechanism or to a living person.

Science fiction offers a different way of assessing the capabilities of AI: artificial intelligence becomes real once it can feel and create. This approach to a definition does not hold up in practice. Machines are already being built that respond to changes in the environment (cold, heat, and so on), yet they cannot feel the way a person does.

Symbolic approach

Success in solving problems is largely determined by the ability to approach a situation flexibly. Machines, unlike people, interpret the data they receive in a uniform way, so a person must still take part in solving problems. A machine performs operations according to pre-written algorithms that exclude switching between several models of abstraction; flexibility can be approximated only by increasing the resources devoted to the problem.

These shortcomings are typical of the symbolic approach used in AI development. Nevertheless, this approach makes it possible to derive new rules in the course of computation, and the problems that arise within it can be addressed by logical methods.

Logical approach

This approach involves the creation of models that imitate the process of reasoning. It is based on the principles of logic and does not rely on rigid algorithms that lead to a predetermined result.

Agent-based approach

This approach centers on intelligent agents. It assumes that intelligence is the computational part of the ability to achieve goals. The machine plays the role of an intelligent agent: it perceives the environment through sensors and acts on it through actuators.

The agent-based approach focuses on algorithms and methods that keep machines operational across a variety of situations.
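
A minimal sketch of the sense-decide-act loop behind this idea; the Environment class, thermostat rule, and numbers are invented for illustration:

    import random

    class Environment:
        """A toy world: the agent senses a temperature and acts on it."""
        def __init__(self):
            self.temperature = 15.0

        def sense(self):                  # the agent's "sensor"
            return self.temperature + random.uniform(-0.5, 0.5)

        def act(self, heater_on):         # the agent's "actuator"
            self.temperature += 0.8 if heater_on else -0.3

    def agent_policy(reading, target=20.0):
        # The computational part that turns perception into action.
        return reading < target

    env = Environment()
    for step in range(30):
        env.act(agent_policy(env.sense()))
    print(round(env.temperature, 1))      # settles near the 20-degree target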

Hybrid approach

This approach integrates neural and symbolic models, thereby covering problems associated with both thinking and computing. For example, neural networks can suggest the direction in which a machine's reasoning should move, while statistical learning provides the basis on which problems are solved.

According to experts at Gartner, by the beginning of the 2020s virtually all new software products will use artificial intelligence technologies. The same experts suggest that about 30% of investment in the digital sphere will go to AI.

According to Gartner analysts, artificial intelligence opens up new opportunities for cooperation between people and machines. At the same time, the displacement of humans by AI cannot be stopped and will only accelerate in the future.

Experts at PwC believe that by 2030 the world's gross domestic product will grow by about 14% thanks to the rapid introduction of new technologies. Roughly half of that increase will come from more efficient production processes; the other half will be additional profit from embedding AI in products.

Initially the United States will benefit most from artificial intelligence, since it has created the best conditions for operating AI systems. Later it will be overtaken by China, which will extract the maximum profit by introducing such technologies into products and their production.

Experts at Salesforce claim that AI will increase small-business revenue by about $1.1 trillion, and that this will happen by 2021. Part of this gain will come from AI-driven solutions in customer-communication systems; at the same time, the efficiency of production processes will improve through automation.

The introduction of new technologies will also create an additional 800,000 jobs; experts note that this figure offsets the vacancies lost to process automation. Based on a survey of companies, analysts predict that spending on factory automation will rise to about $46 billion by the early 2020s.

Work on AI is also under way in Russia. Over 10 years the state has financed more than 1,300 projects in this area, and most of the investment went to developments unrelated to commercial activity. This shows that the Russian business community is not yet interested in adopting artificial intelligence technologies.

In total, about 23 billion rubles have been invested in Russia for these purposes. The volume of government subsidies falls short of the AI funding seen in other countries: in the United States, about $200 million is allocated for such purposes every year.

In Russia, funds for AI development are allocated mainly from the state budget and go to the transport sector, the defense industry, and security-related projects. This indicates that investment here favors areas that promise a quick return on the invested funds.

The same study showed that Russia now has high potential for training specialists in AI development: over the past 5 years, about 200 thousand people have been trained in AI-related fields.

AI technologies are developing in the following directions:

  • solving problems that bring the capabilities of AI closer to human ones and finding ways to integrate it into everyday life;
  • developing a full-fledged artificial mind through which the tasks facing humanity could be solved.

At the moment, researchers are focused on developing technologies that solve practical problems. So far, scientists have not come close to creating a full-fledged artificial intelligence.

Many companies are developing AI technologies. Yandex has been using them in its search engine for years. Since 2016 the Russian IT company has been researching neural networks, which are changing how search engines work: a neural network maps the user's query to a vector that captures the meaning of the request as fully as possible. In other words, the search is conducted not by the literal words but by the meaning of the information the person requested.
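
A minimal sketch of meaning-based (vector) search; the three-dimensional "embeddings" and the embed() lookup are invented stand-ins for a real trained network:

    import math

    # Toy vectors standing in for learned embeddings of whole phrases.
    EMBEDDINGS = {
        "how to cook pasta":      (0.9, 0.1, 0.0),
        "recipe for spaghetti":   (0.8, 0.2, 0.1),
        "football match results": (0.0, 0.9, 0.4),
    }

    def embed(text):
        return EMBEDDINGS[text]  # a real system would run a neural network

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.dist(a, (0, 0, 0)) * math.dist(b, (0, 0, 0)))

    query = embed("how to cook pasta")
    docs = ["recipe for spaghetti", "football match results"]
    print(max(docs, key=lambda d: cosine(query, embed(d))))  # the recipe wins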

In 2016 "Yandex" launched the service "Zen", which analyzes user preferences.

ABBYY recently introduced its Compreno system, which can understand text written in natural language. Other systems based on artificial intelligence technologies have also entered the market relatively recently:

  1. Findo. The system recognizes human speech and searches for information in various documents and files using complex queries.
  2. Gamalon. This company introduced a system capable of self-learning.
  3. Watson. An IBM computer that uses a large number of algorithms to search for information.
  4. ViaVoice. A human speech recognition system.

Large commercial companies are not ignoring advances in artificial intelligence. Banks are actively introducing such technologies into their operations: AI-based systems trade on exchanges, manage assets, and perform other operations.

The defense industry, medicine, and other fields are adopting object recognition technologies, and game development companies use AI to create their next products.

Over the past few years, a group of American scientists has been working on the NEIL project, in which researchers ask a computer to recognize what is shown in a photograph. The experts suggest that in this way they will be able to create a system that learns without external intervention.

VisionLabs introduced its LUNA platform, which recognizes faces in real time by picking them out of huge collections of images and video. The technology is now used by large banks and retail chains: with LUNA one can match people's preferences and offer them relevant products and services.

The Russian company N-Tech Lab is working on similar technologies, building a face recognition system based on neural networks. According to the latest data, the Russian system copes with these tasks better than a person does.

According to Stephen Hawking, the development of artificial intelligence technologies will eventually lead to the demise of mankind. The scientist noted that people will gradually degrade as AI is introduced, and in conditions of natural evolution, where a person must constantly fight to survive, this process will inevitably lead to extinction.

In Russia, the introduction of AI is viewed positively. Alexei Kudrin once said that using such technologies could cut the cost of maintaining the state apparatus by about 0.3% of GDP. Dmitry Medvedev predicts the disappearance of a number of professions due to AI, but stresses that these technologies will spur the rapid development of other industries.

According to experts from the World Economic Forum, by the beginning of the 2020s, about 7 million people in the world will lose their jobs due to the automation of production. The introduction of AI is highly likely to cause the transformation of the economy and the disappearance of a number of professions related to data processing.

McKinsey experts state that the automation of production will proceed most actively in Russia, China, and India. In these countries, up to 50% of workers may soon lose their jobs to AI, their places taken by computerized systems and robots.

According to McKinsey, artificial intelligence will replace jobs that involve physical labor and information processing: retail, hotel staff, and so on.

By the middle of this century, according to the American company's experts, the number of jobs worldwide will shrink by about 50%. People will be replaced by machines that carry out the same operations with equal or higher efficiency, and the experts do not rule out that this forecast will come true even earlier.

Other analysts point to the harm robots can cause. For example, McKinsey experts note that robots, unlike humans, do not pay taxes; with lower budget revenues, the state will be unable to maintain infrastructure at its former level. For this reason Bill Gates proposed a new tax on robotic equipment.

AI technologies increase companies' efficiency by reducing the number of errors and allow operations to be performed at speeds unattainable by humans.

We can assume that the history of artificial intelligence begins with the creation of the first computers in the 1940s. With the advent of electronic computers of high (for that time) performance, the first questions in the field arose: is it possible to create a machine whose intellectual capabilities would be identical to a person's, or even exceed them?

The next stage in the history of artificial intelligence is the 1950s, when researchers tried to build intelligent machines by imitating the brain. These attempts failed because both hardware and software were wholly inadequate. In 1956, a seminar was held at Dartmouth College (USA), where the term "artificial intelligence" was first proposed.

The 1960s were marked by attempts to find general methods for solving a broad class of problems by modeling the complex process of thinking. Developing universal programs proved too difficult and fruitless: the wider the class of problems a single program can solve, the poorer its ability to solve any specific problem. In this period heuristic programming emerged.

A heuristic is a rule that is not theoretically justified but reduces the amount of search in the search space.

Heuristic programming is the development of a strategy of action based on analogy or precedent. Overall, the 1950s and 1960s in the history of artificial intelligence can be described as the time of the search for a universal thinking algorithm.

A significant breakthrough in the practical applications of artificial intelligence came in the 1970s, when the search for a universal thinking algorithm gave way to the idea of modeling the specific knowledge of experts. The first commercial knowledge-based systems, or expert systems, appeared in the United States. A new approach to AI problems had arrived: knowledge representation. MYCIN and DENDRAL, now classic expert systems for medicine and chemistry, were created in this period. Both can be called diagnostic in a certain sense: MYCIN determines a disease from a set of symptoms (makes a diagnosis), while DENDRAL determines a chemical compound from a set of properties. This stage in the history of artificial intelligence can be called the birth of expert systems.

The next significant period is the 1980s, when artificial intelligence experienced a rebirth. Its great potential was widely recognized both in research and in industry, and the first commercial software products appeared as part of the new technology. Machine learning began to develop at this time: until then, transferring an expert's knowledge into a computer program had been a tedious and lengthy procedure, so the creation of systems that automatically improve and expand their stock of heuristic (informal, intuition-based) rules became the most important milestone of those years. At the beginning of the decade, the largest national and international research projects in the history of data processing were launched in various countries, aimed at "fifth-generation intelligent computing systems."

The current state of research in this area can be characterized by the words of one of the well-known experts in the field of artificial intelligence, Professor N.G. Zagoruiko:

"Discussions on the topic 'Can a machine think?' have long since left the pages of newspapers and magazines. Skeptics are tired of waiting for the enthusiasts' promises to come true, while the enthusiasts, without further ado, keep moving in small steps toward the horizon, beyond which they hope to see an artificial brother in mind."





JOINT INSTITUTE FOR NUCLEAR RESEARCH

EDUCATIONAL AND SCIENTIFIC CENTER

ESSAY

in History and Philosophy of Science

on the topic:

HISTORY OF DEVELOPMENT OF ARTIFICIAL INTELLIGENCE

Completed:

Pelevanyuk I.S.

Dubna

2014

Introduction
Before the advent of science
The very first ideas
Three Laws of Robotics
First scientific steps
Turing test
Dartmouth Seminar
1956-1960: a time of great hopes
1970s: Knowledge-Based Systems
Fight on the chessboard
Use of artificial intelligence for commercial purposes
Paradigm shift
Data mining
Conclusion
References

Introduction

The term intellect (Latin intellectus) means mind, reason, the ability to think and to know rationally. It usually denotes the ability to acquire, remember, apply, and transform knowledge in order to solve problems. Thanks to these qualities, the human brain can solve a wide variety of tasks, including those for which no solution methods are known in advance.

The term artificial intelligence arose relatively recently, yet it is already almost impossible to imagine a world without it. People usually do not notice its presence, but if it suddenly disappeared, our lives would change radically. The areas where artificial intelligence technologies are used keep multiplying: once it was chess-playing programs, then robotic vacuum cleaners, and now algorithms trade on exchanges by themselves.

This field formed around the assertion that human intelligence can be described in detail and subsequently imitated by a machine. Artificial intelligence inspired great optimism, but soon revealed a staggering complexity of implementation.

The main areas of artificial intelligence development include reasoning, knowledge, planning, learning, language communication, perception, and the ability to move and manipulate objects. General artificial intelligence (or "strong AI") is still on the horizon. Currently popular approaches include statistical methods, computational intelligence, and traditional symbolic AI. A great number of tools make use of artificial intelligence: various search algorithms, mathematical optimization algorithms, logics, probability-based methods, and many others.

In this essay I have tried to collect the events that, from my point of view, most influenced the development of the technology and theory of artificial intelligence, along with the main achievements and their prerequisites.

Before the advent of science

The very first ideas

"They call us 'madman' and 'fantast',
But, having escaped this sad dependence,
With the years the thinker's skillful brain
Will create a thinker artificially."

Goethe, Faust

The idea that someone other than a human could do hard work for humans originated in the Stone Age, when man domesticated the dog. The dog was ideally suited to the role of watchman and performed this task far better than a person. Of course, this example cannot be considered a demonstration of artificial intelligence, because a dog is a living creature, already endowed with the ability to recognize images and orient itself in space, and predisposed to the basic training needed to tell friend from foe. But it shows the direction of human thought.

Another example is the myth of Talos. According to legend, Talos was a huge bronze knight whom Zeus gave to Europa to protect the island of Crete. His job was to keep outsiders away: if they approached, Talos threw stones at them; if they managed to land, he made himself red-hot and burned the enemies in his embrace.

Why is Talos so remarkable? Built from the most durable material of the time, able to detect strangers, virtually invulnerable, and needing no rest. This is how the ancient Greeks imagined a creation of the gods, and what was most valuable in this creation is what we now call artificial intelligence.

Another interesting example comes from Jewish tradition: the legends of golems. A golem is a clay creature of human shape which, according to legend, rabbis could create to protect the Jewish people. In Prague a folk legend arose about a golem created by the city's chief rabbi to perform various menial or simply difficult jobs. Other golems are also known, created according to popular tradition by various authoritative rabbis, innovators of religious thought.

In this legend, folk fantasy justifies resistance to social evil through the violence of the golem: the idea of an intensified struggle against evil that oversteps the bounds of religious law is thereby legitimized. Not for nothing can the golem of legend exceed its mandate and declare its own will, contrary to the will of its creator: the golem is able to do what is criminal for a person under the law.

And finally, there is Mary Shelley's novel Frankenstein, or the Modern Prometheus, which can be called the ancestor of science fiction literature. It describes the life and work of Dr. Victor Frankenstein, who brings to life a being assembled from the body parts of dead people. Seeing that his creation has turned out ugly and monstrous, the doctor renounces it and leaves the city where he lived. The nameless creature, hated by people for its appearance, soon begins to pursue its creator.

Here again the question is raised of the responsibility a person bears for his creations. In the early 19th century the novel posed several questions about the pair of creator and creation: how ethical was it to create such a being, and who answers for its actions? These questions are closely related to ideas about artificial intelligence.

There are many similar examples related in one way or another to the creation of artificial intelligence. To people it seems a holy grail that can solve many of their problems and free them from any manifestation of want and inequality.

Three Laws of Robotics

Since Frankenstein, artificial intelligence has appeared in literature constantly; the idea has become fertile ground for writers and philosophers. One of them, Isaac Asimov, will forever be remembered by us. In 1942, in his short story Runaround, he described three laws that robots must follow:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Before Asimov, stories about artificial intelligence and robots retained the spirit of Mary Shelley's Frankenstein. As Asimov himself said, this theme became one of the most popular in science fiction in the 1920s and 1930s, when many stories were written about robots that rebelled and destroyed their creators.

But not all science fiction writers followed this pattern, of course. In 1938, for example, Lester del Rey wrote the short story Helen O'Loy, about a robot woman who fell in love with her creator and later became his ideal wife. The story closely resembles that of Pygmalion, who carved an ivory statue of a girl so beautiful that he fell in love with her himself; touched by such love, Aphrodite brought the statue to life, and it became Pygmalion's wife.

In fact, the Three Laws emerged gradually. The two earliest robot stories, "Robbie" (1940) and "Reason" (1941), did not describe the laws explicitly, but they already implied that robots must have certain internal limitations. In the next story, "Liar!" (1941), the First Law was voiced for the first time. All three laws appeared in full only in "Runaround" (1942).

Although robotics today is developing like never before, researchers in artificial intelligence do not attach great importance to the laws of robotics: after all, the laws essentially coincide with the basic principles of humanity. Yet the more complex robots become, the more obvious the need for basic principles and safety measures for them.

There are even claims that the Laws are unlikely ever to be fully implemented in all robots, because there will always be those who want to use robots for destruction and murder. The science fiction writer Robert Sawyer summed these claims up:

"AI development is a business, and business is notoriously uninterested in fundamental safeguards, especially philosophical ones. A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. None of them was told at the outset that serious safety measures were necessary, all of them resisted externally imposed restrictions, and none of them adopted an absolute edict against harming people."

First scientific steps

The history of the development of artificial intelligence as a science can be traced back to early philosophical works such as Discourse on the Method (Rene Descartes, 1637) and Human Nature (Thomas Hobbes, 1640). From the technical side, one can mention the first mechanical digital calculating machine, which appeared even earlier: in 1623, thanks to the efforts of Wilhelm Schickard. However, I would like to turn to the 1950s, when Alan Turing, in his article "Computing Machinery and Intelligence", set out to determine whether a machine is capable of thinking.

Turing test

A person converses with another person and with a computer program, asking questions through a terminal; the task is to determine which interlocutor is the program and which is the person, while the program's goal is to mislead the questioner into the wrong choice. Under the original conditions, the exchange had to proceed at fixed intervals: computers were then slow, and their response time would otherwise have given the program away at once. Now the rule is needed for the opposite reason: a computer can answer much faster than a human.

In fact, there are several versions of the formulation. In one of them, for example, the computer had to convince a panel of several people that they were communicating with another human. In one of the very first versions, Turing suggests that the participant play chess against a person and against a computer and then determine which is which. By the time he wrote the article, Alan Turing had been working on artificial intelligence for about ten years.

In the subsequent history of the Turing test, several key events can be singled out. The main one happened in 1966, when Joseph Weizenbaum created the Eliza program, which could parody a conversation with a psychotherapist. This allowed it to answer a question with a question in conditions of uncertainty. Here is an example of how a conversation with it might go:

Human: Who is your favorite composer?

Eliza: Are you interested in this topic? (answers a question with a question)

Human: My head hurts.

Eliza: Why do you say your head hurts? (condition of uncertainty)

Human: My father hates me.

Eliza: Who else in the family hates you? (reacts to the keyword "father")

Eliza will forever remain an important milestone in the development of artificial intelligence. It was the first program to go beyond the human/machine communication paradigm and to imitate human/human communication.
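
A minimal sketch of Eliza-style keyword rules; the patterns and replies below are illustrative, not Weizenbaum's originals:

    import re

    # Each rule pairs a pattern with a reply template; text captured from
    # the user's sentence is substituted into the reply.
    RULES = [
        (r"my (\w+) hates me", "Who else in the family hates you?"),
        (r"my (\w+) hurts",    "Why do you say your {0} hurts?"),
        (r"i am (.+)",         "How long have you been {0}?"),
    ]

    # Stock replies keep the conversation going under uncertainty.
    FALLBACKS = ["Are you interested in this topic?", "Please go on."]

    def reply(sentence, turn=0):
        for pattern, template in RULES:
            match = re.search(pattern, sentence.lower())
            if match:
                return template.format(*match.groups())
        return FALLBACKS[turn % len(FALLBACKS)]

    print(reply("My head hurts"))       # Why do you say your head hurts?
    print(reply("My father hates me"))  # Who else in the family hates you?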

Dartmouth Seminar

Thanks to the explosive growth in computer speed, researchers came to believe that creating artificial intelligence on a computer would not be difficult. At that time there were two areas of research: neurocybernetics and, a little later, "black box" cybernetics.

Neurocybernetics rested on the principle that the only object capable of thinking is the human being, so a thinking device should model the human's structure. Scientists tried to create elements that worked like the neurons of the brain, and thanks to this the first neural networks appeared in the late 1950s. They were created by the American scientists W. McCulloch and F. Rosenblatt, who were trying to build a system that could simulate the working of the human eye. They called their device the perceptron; it could recognize handwritten letters. Today the main application area of neural networks is pattern recognition.
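
A minimal sketch of a single perceptron unit with the classic error-correction update; the tiny dataset (logical AND) and learning rate are purely illustrative:

    # Train one neuron to separate the inputs of logical AND.
    def train_perceptron(samples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out          # -1, 0 or +1
                w[0] += lr * err * x1       # nudge the weights toward the answer
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(AND)
    print(w, b)  # a separating line: both weights positive, bias negative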

"Black box" cybernetics rested on the principle that it does not matter how a thinking machine is arranged inside; what matters is that it reacts to a given set of inputs in the same way as a person. Researchers in this area began building their own models, since it turned out that none of the existing sciences (psychology, philosophy, neurophysiology, linguistics) could shed light on the algorithm of the brain.

The development of "black box" cybernetics began in 1956, when the Dartmouth Seminar was held, with John McCarthy among its main organizers. By then it had become clear that neither the theoretical knowledge nor the technical base was sufficient to realize the principles of neurocybernetics, but computer science researchers believed that a joint effort could produce a new approach to creating artificial intelligence. Through the efforts of some of the most prominent scientists in the field, a seminar was organized under the name Dartmouth Summer Research Project on Artificial Intelligence. It was attended by 10 people, many of whom would later receive the Turing Award, the most prestigious award in informatics. The opening statement reads:

"We propose that a 2-month, 10-person study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College, Hanover, New Hampshire.

The study is to proceed on the assumption that any aspect of learning or any other property of intelligence can in principle be described so precisely that a machine can be made to simulate it. We will try to understand how to teach machines to use natural languages, form abstractions and concepts, solve problems now possible only for humans, and improve themselves.

We believe that significant progress on one or more of these problems is quite possible if a specially selected group of scientists works on it over the summer."

It was perhaps the most ambitious grant application in history. It was at this conference that the new field of science received its name: "artificial intelligence". Perhaps nothing specific was discovered or developed there, but thanks to this event some of the most prominent researchers got to know one another and began moving in the same direction.

1956-1960: a time of great hopes

At that time it seemed that the solution was very close: despite all the difficulties, humanity would soon be able to create a full-fledged artificial intelligence capable of bringing real benefit. There were already programs capable of creating something intellectual; the classic example is the Logic Theorist program.

In 1913, Whitehead and Bertrand Russell published their Principia Mathematica. Their aim was to show that with a minimal set of logical means, such as axioms and rules of inference, all mathematical truths can be recreated. The work is considered one of the most influential books ever written, after Aristotle's Organon.

The Logic Theorist program was able to recreate most of Principia Mathematica on its own, in places producing proofs even more elegant than the authors'.

Logic Theorist introduced several ideas that have become central to artificial intelligence research:

1. Reasoning as search. The program in effect walked a search tree. The root of the tree was the initial statements; each branch grew by applying a rule of logic; and somewhere in the tree sat the result, a statement the program managed to prove. The path from the root statements to the target one was called the proof.

2. Heuristics. The authors of the program realized that the tree would grow exponentially and that it would have to be pruned somehow, "by eye". The rules by which they discarded unneeded branches they called "heuristic", using the term introduced by George Polya in his book How to Solve It. Heuristics became an important component of artificial intelligence research, and it remains an important method for taming complex combinatorial problems, the so-called "combinatorial explosion" (examples: the traveling salesman problem, the enumeration of chess moves). A sketch of the first two ideas follows this list.

3. Processing of the "list" structure. To implement the program on a computer, the IPL (Information Processing Language) programming language was created, which used the same form of lists that John McCarthy later used to create the Lisp language (for which he received a Turing Award), still used by artificial intelligence researchers.
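
A minimal sketch of reasoning-as-search with a heuristic cutoff; the state space (reach a target number by applying +3 and *2) and the scoring rule are invented for illustration:

    import heapq

    def heuristic(state, goal):
        # A cheap, theoretically unjustified estimate of remaining distance.
        return abs(goal - state)

    def search(start, goal):
        frontier = [(heuristic(start, goal), start, [start])]
        seen = set()
        while frontier:
            _, state, path = heapq.heappop(frontier)
            if state == goal:
                return path  # the "proof": a path from the root to the target
            if state in seen or state > 4 * goal:  # heuristic pruning of branches
                continue
            seen.add(state)
            for nxt in (state + 3, state * 2):
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt, path + [nxt]))
        return None

    print(search(1, 22))  # [1, 4, 8, 16, 19, 22]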

1970s: Knowledge-Based Systems

Knowledge-based systems are computer programs that use knowledge bases to solve complex problems. Such systems are subdivided into several classes; what they have in common is that they all try to represent knowledge through tools such as ontologies and rules rather than only through program code. They always consist of at least one subsystem, and more often of two at once: a knowledge base and an inference engine. The knowledge base contains facts about the world; the inference engine contains logical rules, usually represented as IF-THEN rules. Knowledge-based systems were first created by artificial intelligence researchers.
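
A minimal sketch of the idea: a fact base plus IF-THEN rules applied by forward chaining; the facts and rules are invented and far simpler than those of a real expert system:

    facts = {"fever", "stiff_neck"}
    rules = [
        ({"fever", "stiff_neck"}, "suspect_meningitis"),  # IF both THEN ...
        ({"suspect_meningitis"}, "order_lumbar_puncture"),
    ]

    changed = True
    while changed:  # keep firing rules until no new facts appear
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now also contains the two derived conclusions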

The first working knowledge-based system was the Mycin program, created to diagnose dangerous bacterial infections and select the most appropriate treatment for the patient. The program operated with 600 rules, asked the doctor many yes/no questions, and produced a list of possible bacteria sorted by probability; it also supplied a confidence interval and could recommend a course of treatment.

A Stanford study found that Mycin proposed an acceptable course of treatment in 69% of cases, which was better than the experts evaluated by the same criteria. The study is often cited to show how much medical experts disagree with one another and with the system when there is no gold standard for the "correct" treatment.

Unfortunately, Mycin was never tested in practice. Ethical and legal questions were raised about the use of such programs: it was unclear who should be held responsible if the program's recommendation turned out to be wrong. Another problem was technological: in those days there were no personal computers, a single session took more than half an hour, and that was unacceptable for a busy doctor.

The program's main achievement was that the world saw the power of knowledge-based systems, and of artificial intelligence in general. Later, in the 1980s, other programs using the same approach began to appear, and the E-Mycin shell was created to allow new expert systems to be built with less effort. The unforeseen difficulty the developers ran into was extracting knowledge from the experts' experience, for obvious reasons.

It is important to mention that it was at this time that the Soviet scientist Dmitry Alexandrovich Pospelov began his work in the field of artificial intelligence.

Fight on the chessboard

The confrontation between man and artificial intelligence on the chessboard deserves separate consideration. This story began long ago, in 1769 in Vienna, when Wolfgang von Kempelen built a chess machine. It was a big wooden box with a chessboard on its lid, behind which sat a waxen Turk in appropriate dress (for which the machine is sometimes called simply "the Turk"). Before a performance, the doors of the box were opened and the audience could see the many details of some mechanism; then the doors were closed, and the machine was wound with a special key, like a clock. After that, anyone who wished came up and made moves.

This machine was a huge success and managed to tour all over Europe, losing only a few games to strong chess players. In reality, a person was hidden inside the box: through a system of mirrors he could observe the state of the game, and through a system of levers he controlled the arm of the "Turk". Nor was it the last machine concealing a living chess player; such machines enjoyed success until the beginning of the twentieth century.

With the advent of computers, the possibility of creating an artificial chess player became tangible. Alan Turing developed the first program capable of playing chess, but owing to technical limitations it took about half an hour to make one move. There is even a record of a game the program played against Alick Glennie, Turing's colleague, which the program lost.

The idea of creating such programs on computers caused a resonance in the scientific world. Many questions were asked; an excellent example is the article "Digital Computers Applied to Games". It raises six questions:

1. Is it possible to create a machine that would follow the rules of chess, could produce a random legal move, or check whether a given move is legal?

2. Is it possible to create a machine capable of solving chess problems, for example, saying how to deliver checkmate in three moves?

3. Is it possible to create a machine that would play a good game, one that, faced with an ordinary arrangement of pieces, could after two or three minutes of calculation produce a good legal move?

4. Is it possible to create a machine that, by playing chess, learns and improves its game over and over again?

This question leads to two more, which are probably already on the reader's tongue:

5. Is it possible to create a machine that can answer a question in such a way that its answer cannot be distinguished from a person's?

6. Is it possible to create a machine that would feel the way you or I do?

The article's main emphasis was on question number 3. The answers to questions 1 and 2 are strictly positive; the answer to question 3 involves the use of more complex algorithms. Regarding questions 4 and 5, the author says he sees no convincing arguments refuting such a possibility. And to question 6: "I will never even know whether you feel everything the same way I do."

Even if such studies were perhaps of no great practical interest in themselves, they were very interesting theoretically, and there was hope that solving these problems would spur the solution of other problems of a similar nature and greater importance.

The ability to play chess has long been counted among the standard test tasks demonstrating that artificial intelligence can cope with a task not through "brute force", which in this context means the exhaustive enumeration of possible moves, but with the help of "...something else," as Mikhail Botvinnik, one of the pioneers in the development of chess programs, once put it. In his day he managed to secure official funding for the "artificial chess master" project: the PIONEER software package, created under his leadership at the All-Union Research Institute for the Electric Power Industry. Botvinnik repeatedly reported to the Presidium of the USSR Academy of Sciences on the possibilities of applying the basic principles of PIONEER to optimizing the management of the national economy.

The ex-world champion formulated the basic idea of his project in an interview in 1975: "For more than a dozen years I have been working on the problem of recognizing how a chess master thinks: how does he find a move without exhaustive enumeration? It can now be said that this method has in essence been uncovered... There are three main stages in creating the program: the machine must be able to find the trajectory of a piece's movement, then it must learn to form the playing area, the zone of local battle on the chessboard, and then to form a set of such zones. The first part of the work was done long ago. The zone-formation subroutine has now been completed, and debugging will begin in the coming days. If it is successful, there will be full confidence that the third stage will also succeed and the machine will begin to play."

The PIONEER project remained unfinished. Botvinnik worked on it from 1958 to 1995, and in that time he managed to build an algorithmic model of the chess game based on searching a "tree of options" and successively achieving "inexact goals", such as gaining material.

In 1974, the Soviet computer program Kaissa won the First World Computer Chess Championship, defeating the other chess machines in all four games and playing, according to chess players, at about the level of the third category. Soviet scientists introduced many innovations for chess machines: an opening book, which avoided calculating moves at the very beginning of the game, and a special data structure, the bitboard, which is still used in chess engines.
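
A minimal sketch of the bitboard idea; the encoding below (bit i stands for square i, a1 = 0 through h8 = 63) is a common convention, not Kaissa's original code:

    def square(file, rank):  # file 0..7 = a..h, rank 0..7 = ranks 1..8
        return rank * 8 + file

    # One 64-bit integer holds the whole set of white pawns at once.
    white_pawns = 0
    for f in range(8):       # put a pawn on every square of rank 2
        white_pawns |= 1 << square(f, 1)

    # Advancing all eight pawns one square is a single shift instruction.
    one_step = (white_pawns << 8) & (2**64 - 1)

    print(bin(white_pawns).count("1"))      # 8 pawns
    print(hex(white_pawns), hex(one_step))  # 0xff00 0xff0000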

The question arose whether a program could beat a human. In 1968, chess player David Levy made a £1,250 bet that no machine would beat him during the next 10 years. In 1977 he played a game against Kaissa and won, after which the match was not continued. In 1978 he won a match against Chess 4.7, the best chess program of the time, after which he admitted that little time remained before programs would be able to defeat titled chess players.

Games between a human and a computer deserve particular attention. The very first was the previously mentioned game between Alick Glennie and Turing's program. The next step was the Los Alamos program, created in 1956. It played on a 6x6 board (without bishops), and the test was carried out in two stages. The first stage was a game against a strong chess player, which the human won after 10 hours of play. The second was a game against a young woman who had been taught to play chess shortly before the test; the result was a victory for the program on the 23rd move, an undoubted achievement at the time.

It was not until 1989 that Deep Thought managed to beat an international grandmaster, Bent Larsen. In the same year the program played a match against Garry Kasparov, which Kasparov won easily. After the match he stated:

"If a computer can beat the best of the best at chess, that will mean it can compose the best music and write the best books. I cannot believe that. If a computer with a rating of 2800, that is, equal to mine, is ever created, I will consider it my duty to challenge it to a match to protect the human race."

In 1996, the Deep Blue computer lost a match to Kasparov but, for the first time in history, won a game against a reigning world champion. And in 1997, for the first time in history, a computer won a match against the world champion, with a score of 3.5:2.5.

After the Kasparov matches, many FIDE officials repeatedly expressed the view that holding mixed matches (a person against a computer program) is inappropriate for many reasons. Supporting this position, Garry Kasparov explained: "Yes, the computer does not know what winning or losing is. But what about me?.. How will I feel about the game after a sleepless night, after blunders in a previous game? It is all emotions. They place a huge burden on the human player, and the most unpleasant thing is knowing that your opponent is not subject to fatigue or any other emotions."

And if in chess the advantage is now on the side of computers, then in games such as Go the computer is fit only to play beginners or intermediate players. The reason is that in Go it is difficult to evaluate the state of the board: a single move can turn an unambiguously losing position into a winning one. On top of that, exhaustive enumeration is practically impossible: without a heuristic approach, a complete enumeration of just the first four moves (two on each side) would require evaluating 361 × 360 × 359 × 358, almost 17 billion, possible arrangements.

The game of poker is of similar interest. The difficulty here is that the state is not fully observable, unlike Go and chess, where both players see the entire board. In poker an opponent may fold without showing his cards, which complicates the analysis.

In any case, mind games matter to artificial intelligence developers the way fruit flies matter to geneticists: a convenient field for testing and for research, both theoretical and practical, and also an indicator of the development of the science of artificial intelligence.

Use of artificial intelligence for commercial purposes

In the 1980s, inspired by the advances in artificial intelligence, many companies decided to try the new technologies, although only the largest companies could afford such experimental steps.

One of the earliest adopters was DEC (Digital Equipment Corp), which deployed the XSEL expert system to help configure equipment and select alternatives for clients. As a result, a three-hour task was reduced to 15 minutes, and the error rate fell from 30% to 1%. According to company representatives, the XSEL system brought in $70 million.

American Express used an expert system to decide whether to issue credit to a client. The system offered credit a third more often than the experts did and is said to have earned $27 million a year.

The payoff from intelligent systems was often overwhelming, like going from walking to driving, or from driving to flying.

However, not everything was so simple in integrating artificial intelligence. First, not every task could be formalized to a level artificial intelligence could handle. Second, development itself was very expensive. Third, the systems were new and people were unaccustomed to using computers: some were skeptical, and some even hostile.

An interesting example is DuPont, which managed to spend $10,000 and one month to build a small auxiliary system that ran on a personal computer and brought in an additional $100,000 in profit.

Not all companies implemented artificial intelligence technologies successfully, which showed that applying them requires a large theoretical base and abundant resources: intellectual, temporal, and material. But where it succeeded, the costs paid off handsomely.

Paradigm shift

In the mid-1980s, mankind saw that computers and artificial intelligence could cope with difficult tasks no worse than humans, and in many respects better. At hand were examples of successful commercial use, advances in the game industry, and progress in decision-support systems. People came to believe that at some point computers and artificial intelligence would handle everyday problems better than humans themselves, a belief traceable since ancient times, or more precisely, since the creation of the three laws of robotics. At some point this belief moved to a new level, and as proof one can cite one more law of robotics, which Isaac Asimov in 1986 preferred to call the "zeroth":

"0. A robot may not harm humanity or, through inaction, allow humanity to come to harm."

This is a huge shift in the vision of artificial intelligence's place in human life. Initially, machines were assigned the place of a will-less servant, the cattle of a new age. But having seen their prospects and possibilities, people began to ask whether artificial intelligence could manage human life better than humans themselves. Tireless, fair, selfless, immune to envy and desire, it could perhaps arrange people's lives differently. The idea is not really new: it appeared in 1952 in Kurt Vonnegut's novel Player Piano (also published as Utopia 14). But then it was fantasy; now it has become a possible prospect.

Data mining

The history of data mining began in 1989, after a seminar by Gregory Piatetsky-Shapiro. He wondered whether useful knowledge could be extracted from a long sequence of seemingly unremarkable data, for example an archive of database queries. If, by examining such an archive, one could identify patterns, the database could be sped up. Example: every morning between 7:50 and 8:10, a resource-intensive query is initiated to build a report for the previous day; in that case the report can be generated in advance, in between other queries, so that the database load is spread more evenly. But imagine that this query is initiated by an employee only after he enters new information. Then the rule must change: as soon as that particular employee has entered the information, report preparation can begin in the background. The example is extremely simple, but it shows both the benefits of data mining and the difficulties associated with it.
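
A toy illustration of the query-log idea above: given the timestamps at which a recurring report query fires, we check whether they cluster in a narrow daily window, so the report could be precomputed. The log data are invented:

    from datetime import datetime

    log = ["2014-03-01 07:52:00", "2014-03-02 07:55:00",
           "2014-03-03 07:51:00", "2014-03-04 08:06:00",
           "2014-03-05 07:58:00"]

    # Reduce each firing to minutes since midnight.
    minutes = [t.hour * 60 + t.minute
               for t in (datetime.strptime(s, "%Y-%m-%d %H:%M:%S") for s in log)]

    first, last = min(minutes), max(minutes)
    if last - first <= 20:  # all firings fall in a 20-minute daily window
        print("pattern: precompute the report before %02d:%02d"
              % (first // 60, first % 60))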

The term datamining has no official translation into Russian. It can be translated as “data mining”, and “mining” is akin to that carried out in mines: having a lot of raw material, you can find a valuable object. In fact, a similar term existed back in the 1960s: Data Fishing or Data Dredging. It was used by statisticians, signifying the recognized bad practice of finding patterns in the absence of a priori hypotheses. In fact, the term could be more correctly called Database mining, but this name turned out to be a trademark. Himself, Grigory Pyatetsky-Shapiro, proposed the term “Knowledge Discovery in Databases”, but in the business environment and the press the name “Data mining” was fixed.

The idea that using a certain database of some facts, you can predict the existence of new facts appeared a long time ago and constantly developed in accordance with the state of the art: 1700s - Bayes' theorem, 1800s - regression analysis, 1930s - cluster analysis, 1940s - neural networks, 1950s - genetic algorithms, 1960s - decision trees. The term Data mining united them not according to the principle of how they work, but according to what their goal is: having a certain set of known data, they can predict what data should turn out next.

The goal of data mining is to find "hidden knowledge". Let's take a closer look at what that means. First, it must be new knowledge, for example, that on weekends the number of goods sold in a supermarket increases. Second, the knowledge must not be trivial, not reducible to computing a mathematical expectation and variance. Third, the knowledge must be useful. Fourth, it must be easy to interpret.

For a long time, people believed that computers could predict everything: stock prices, server loads, the amount of resources needed. They believed there was a universal algorithm that, like a black box, could absorb some large volume of data and start making predictions. However, it turned out that extracting information from a data dump is often very difficult, and in each specific case the algorithm has to be tuned, unless it is just some kind of regression.

Despite all the limitations, tools that facilitate data mining improve from year to year, and since 2007 Rexer Analytics has published the results of an annual survey of experts about existing tools. The 2007 survey consisted of 27 questions and involved 314 participants from 35 countries. By 2013, the survey had grown to 68 questions, and 1,259 specialists from 75 countries took part in it.

Data mining is still considered a promising direction, and, again, its use raises new ethical questions. A simple example is the use of data mining tools to analyze and predict crimes. Studies of this kind have been carried out by various universities since 2006. Human rights activists object, arguing that knowledge obtained in this way can lead to searches based not on facts but on assumptions.

Recommender systems are by far the most tangible result of the development of artificial intelligence; we encounter them in any popular online store. The task of a recommender system is to use observable features, for example, the list of products a specific user has viewed, to determine which products will interest that user most.

The task of producing recommendations, like data mining, comes down to a machine learning task. The history of recommender systems is believed to begin with the Tapestry system, introduced by David Goldberg at the Xerox Palo Alto Research Center in 1992. Its purpose was to filter corporate mail, and it became a kind of progenitor of recommender systems.

At the moment there are two main kinds of recommender systems. David Goldberg proposed a system based on collaborative filtering: to make a recommendation, the system looks at how other users similar to the target user rated a certain object. From this information the system can estimate how highly the target user will rate a particular object (a product, a film, and so on).
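A minimal sketch of this idea in Python (the users, items, and ratings are invented for illustration; a real system works with large sparse matrices and more careful normalization):

    from math import sqrt

    # Hypothetical ratings: user -> {item: rating}.
    ratings = {
        "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
        "bob":   {"film_a": 5, "film_b": 3, "film_d": 5},
        "carol": {"film_a": 1, "film_b": 5, "film_c": 2},
    }

    def similarity(u, v):
        """Cosine similarity over the items both users have rated."""
        common = set(ratings[u]) & set(ratings[v])
        if not common:
            return 0.0
        dot = sum(ratings[u][i] * ratings[v][i] for i in common)
        nu = sqrt(sum(ratings[u][i] ** 2 for i in common))
        nv = sqrt(sum(ratings[v][i] ** 2 for i in common))
        return dot / (nu * nv)

    def predict(user, item):
        """Similarity-weighted average of other users' ratings for the item."""
        pairs = [(similarity(user, v), r[item])
                 for v, r in ratings.items() if v != user and item in r]
        total = sum(s for s, _ in pairs)
        return sum(s * r for s, r in pairs) / total if total else None

    print(predict("alice", "film_d"))  # estimated from users similar to alice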

Content filters are the other kind of recommender system. A necessary condition for a content filter is a database that stores metrics for all objects. After a few user actions, the system can determine what type of objects the user likes and, based on the stored metrics, pick new objects that are similar in some way to those already viewed. The disadvantage of such a system is that a large database of metrics has to be built first, and constructing the metric itself can be a challenge.
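A matching sketch of a content filter, in which the "metrics" are feature vectors stored for each object in advance (the features and weights below are assumptions made for illustration):

    # Hypothetical metric database: object -> feature vector (genre weights).
    features = {
        "film_a": {"action": 0.9, "comedy": 0.1},
        "film_b": {"action": 0.2, "comedy": 0.8},
        "film_c": {"action": 0.7, "comedy": 0.3},
    }

    def profile(viewed):
        """Average the feature vectors of the objects the user already liked."""
        keys = {k for obj in viewed for k in features[obj]}
        return {k: sum(features[obj].get(k, 0.0) for obj in viewed) / len(viewed)
                for k in keys}

    def recommend(viewed):
        """Rank unseen objects by dot product with the user profile."""
        p = profile(viewed)
        unseen = [o for o in features if o not in viewed]
        score = lambda o: sum(features[o].get(k, 0.0) * w for k, w in p.items())
        return sorted(unseen, key=score, reverse=True)

    print(recommend(["film_a"]))  # suggests film_c before film_b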

Again, the question arises whether the use of such systems violates privacy. There are two approaches here. The first is explicit data collection, which gathers data exclusively within the framework in which the recommender system operates. For example, a recommender system for an online store will offer to rate a product, sort products in order of interest, or create a list of favorites. With this type everything is simple: the system receives no information about the user's activity outside its boundaries; all it knows is what the user himself provided. The second type is implicit data collection. It includes techniques such as using information from other, similar resources, keeping a record of user behavior, and inspecting the contents of the user's computer. It is this type of information gathering for recommender systems that is troubling.

However, in this direction the use of private information causes less and less controversy. For example, in 2013, at Yandex's YaC (Yet another Conference), the creation of the Atom system was announced. Its purpose is to provide website owners with the information they may need to create recommendations; that information is initially collected by Yandex services, that is, by implicit data collection. Example: a person uses a search service to find the most interesting places in Paris. Some time later, the person visits a travel agency's website. Without Atom, the agency could only show the most popular tours. With Atom, the site could be advised to show the user a tour to Paris first and to offer a personal discount on that particular tour to distinguish it from the others. The confidential information never leaves the Atom service, the site knows what to suggest to the client, and the client is happy to have quickly found what he was looking for.

Today, recommender systems are the clearest example of what artificial intelligence technologies can achieve: a single such system does work that even an army of analysts could not handle.

Conclusion

Everything must have a beginning, to speak in Sanchean phrase; and that beginning must be linked to something that went before. The Hindoos give the world an elephant to support it, but they make the elephant stand upon a tortoise. Invention, it must be humbly admitted, does not consist in creating out of void, but out of chaos; the materials must, in the first place, be afforded...

Mary Shelley, Frankenstein

The development of artificial intelligence as a science and a technology for creating machines began a little more than half a century ago, and the achievements so far are stunning. They surround people almost everywhere. Artificial intelligence technologies have a peculiarity: a person considers them intelligent only at first; then he gets used to them and they seem natural.

It is important to remember that the science of artificial intelligence is closely tied to mathematics, combinatorics, statistics, and other sciences. And the influence is mutual: the development of artificial intelligence allows one to take a fresh look at what has already been created, as happened with the Logic Theorist program.

An important role in the development of artificial intelligence technologies has been played by the development of computers. One can hardly imagine a serious data mining program that could make do with 100 kilobytes of RAM. Computers let the technologies develop extensively, while theoretical research served as the prerequisite for intensive development. One may say that the development of the science of artificial intelligence was a consequence of the development of computers.

The history of artificial intelligence is not over; it is being written right now. Technologies are constantly improved, new algorithms are created, and new areas of application open up. Time constantly presents researchers with new opportunities and new questions.

This abstract deliberately does not focus on the countries in which particular studies were conducted. The whole world has contributed, bit by bit, to the field that we now call the science of artificial intelligence.

Bibliography

Myths of the Peoples of the World. Moscow, 1991-92. In 2 vols. Vol. 2, p. 491.

Idel, Moshe (1990). Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. Albany, New York: State University of New York Press. ISBN 0-7914-0160-X. P. 296.

Asimov, Isaac. Essay No. 6. The Laws of Robotics // Robot Dreams. Moscow: Eksmo, 2004. Pp. 781-784. ISBN 5-699-00842-X.

See Nonnus. Dionysiaca XXXII 212; Clement. Protrepticus 57, 3 (with a reference to Philostephanus).

Robert J. Sawyer. On Asimov's Three Laws of Robotics (1991).

Turing, Alan (October 1950). "Computing Machinery and Intelligence". Mind LIX (236): 433-460.

McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

Crevier 1993, pp. 46-48.

Smith, Reid (May 8, 1985). "Knowledge-Based Systems: Concepts, Techniques, Examples".

Turing, Alan. "Digital Computers Applied to Games". AMT's contribution to Faster Than Thought, ed. B. V. Bowden. London: Pitman Publishing, 1953.

Kaissa - World Champion. Science and Life, January 1975, pp. 118-124.

Gik, E. Grandmaster "Deep Thought" // Science and Life. Moscow, 1990. No. 5. Pp. 129-130.

F. Hayes-Roth, N. Jacobstein. The State of Knowledge-Based Systems. Communications of the ACM, March 1994, vol. 37, no. 3, pp. 27-39.

Karl Rexer, Paul Gearan, & Heather Allen (2007). 2007 Data Miner Survey Summary, presented at SPSS Directions Conference, Oct. 2007, and Oracle BIWA Summit, Oct. 2007.

Karl Rexer, Heather Allen, & Paul Gearan (2013). 2013 Data Miner Survey Summary, presented at Predictive Analytics World, Oct. 2013.

Shyam Varan Nath (2006). "Crime Pattern Detection Using Data Mining". WI-IATW '06: Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 41-44.

David Goldberg, David Nichols, Brian M. Oki and Douglas Terry (1992). "Using Collaborative Filtering to Weave an Information Tapestry". Communications of the ACM, Dec 1992, vol. 35, no. 12, pp. 61-71.


Lecture 1

Introduction. The concepts of information system and technology and of the intelligent information system (IIS). Historical aspects of the development of methods for representing and processing signals and of constructing signal processing systems, and their intellectualization. How an IIS differs from traditional information systems. Types and characteristics of intelligent systems. The concept and types of intelligent control. Approaches to constructing intelligent information systems. The main classes of IIS and the distinguishing features of each class.

In today's world, growth in a programmer's productivity is achieved, in practice, only where computers take over part of the intellectual load. One way to achieve maximum progress in this area is "artificial intelligence": the computer not only takes on repetitive operations of the same type but can also learn. Moreover, the creation of a full-fledged "artificial intelligence" opens up new horizons of development for humanity.

Before considering how to build efficient intelligent information systems, let us turn to some definitions and basic concepts of the topic.

Information is data about the objects, phenomena, events, and processes of the surrounding world, transmitted orally, in writing, or by other means, that reduces the uncertainty of knowledge about them.

Information must be reliable, complete, and adequate, that is, it must have a certain level of relevance; it must also be concise, clear and understandable, timely and valuable.

A system is a set of elements united by links between them and possessing a certain integrity. In other words, a system is a set of interacting, interconnected elements united by a certain goal and by common (purposeful) rules of relationships.

Automatic information systems perform all information processing operations without human intervention.

Automated information systems involve both humans and technical means in processing information, with the computer playing the main role. In the modern interpretation, the term "information system" necessarily includes the concept of an automated system. The concepts of information system and information technology must be distinguished.

Information technology is the set of techniques and methods of using computer technology to perform the functions of collecting, storing, processing, and using data (according to GOST 34.003-90).

An information system is an organizationally ordered set of documents and information technologies, including computer and communication technology, that implement information processes.

Understood this way, an information system uses computers and communication facilities as its main technical means for processing information, implementing information processes, and delivering the information needed for decision-making in any field.

An information system is thus an environment whose constituent elements are computers, computer networks, software products, databases, people, various technical and software communication means, and so on. Although the very idea of information systems and some principles of their organization arose long before the advent of computers, computerization has increased the efficiency of information systems by tens and hundreds of times and has expanded the scope of their application.

under the term " system" is understood as an object that is simultaneously considered both as a single whole and as a set of interconnected heterogeneous elements that are united in the interests of achieving the goals set, working as a single whole. Systems differ significantly from each other both in composition and in main goals. This whole acquires some property that is absent from the elements separately.

Systemhood is characterized by three principles:

    External integrity: the system is isolated, or relatively isolated, from the surrounding world;

    Internal integrity: the properties of the system depend on the properties of its elements and the relationships between them; a violation of these relationships can leave the system unable to perform its functions;

    Hierarchy: various subsystems can be distinguished within the system, while the system itself can in turn be a subsystem of another, larger system or subsystem.

In computer science, the concept of a "system" is widespread and has many meanings. Most often it is applied to a set of technical means and programs. The hardware part of a computer can be called a system; so can a set of programs for solving specific applied problems, supplemented by procedures for maintaining documentation and managing calculations.

Depending on the specific field of application, information systems can vary greatly in their functions, architecture, and implementation. However, several main properties common to all ISs can be singled out:

    the structure of the IS and its functional purpose must correspond to the set goals;

    an IS uses networks to transfer data;

    since any IS is designed to collect, store, and process information, it is based on an environment for storing and accessing data. Because the task of an IS is to produce trustworthy, reliable, timely, and systematized information using databases, expert systems, and knowledge bases, it must provide the level of storage reliability and access efficiency that its field of application requires;

    an IS must be controllable by people and understood and used in accordance with basic principles implemented as an enterprise standard or other standard for information systems. The IS user interface should be easy to understand intuitively.

The main tasks of information systems and IS developers:

    Search, processing, and storage of information that accumulates over a long time and whose loss would be irreparable. Computerized ISs are designed to process information faster and more reliably, so that people do not waste time, to avoid accidental human errors, to save costs, and to make people's lives more comfortable;

    Storage of data of different structures. There is no developed IS that works with a single homogeneous data file. Moreover, a reasonable requirement for an information system is that it can evolve: new functions may appear that require additional data with a new structure, and all previously accumulated information must remain intact. In theory, this problem can be solved by using several external memory files, each storing data with a fixed structure. Depending on how the file management system is organized, this structure can be the structure of a file record, or it can be supported by a separate library function written specifically for the given IS. There are examples of actually functioning ISs in which data storage was planned to be file-based; as most of these systems developed, a separate component emerged in them that is, in effect, a database management system (DBMS);

    Analysis and forecasting of the information flows of various kinds and types moving through society. The flows are studied with the aim of minimizing and standardizing them and adapting them for efficient processing on computers, as are the features of the flows passing through the various channels of information dissemination;

    Investigation of ways to represent and store information; creation of special languages for the formal description of information of various kinds; development of special techniques for compressing and encoding information, for annotating voluminous documents, and for summarizing them. Within this direction, work is under way to create large-scale data banks that store information from various fields of knowledge in a form accessible to computers;

    Construction of procedures, and technical means for implementing them, with whose help it is possible to automate the extraction of information from documents not intended for computers but oriented toward human perception;

    Creation of information retrieval systems capable of handling requests to information repositories formulated in natural language, as well as special query languages for systems of this type;

    Creation of networks for information storage, processing, and transmission, which include information data banks, terminals, processing centers, and communication facilities.

The specific tasks an information system must solve depend on the application area for which it is intended. The areas of application are diverse: banking, production management, medicine, transport, education, and so on. Let us introduce the concept of a "subject area": a fragment singled out from the surrounding world is called the area of expertise, or subject area. There are also many tasks and problems to be solved using the entities and relationships of this subject area, so a broader concept is used: the problem environment, that is, the subject area plus the tasks to be solved.

We will take a closer look at two types of information systems: expert systems and intelligent systems.

Expert systems are information systems for consulting and/or decision-making based on structured, often poorly formalized procedures that use experience and intuition, that is, intellectual features that support or model the work of experts; such systems are used in both long-term and short-term operational forecasting and management.

Intelligent systems, or knowledge-based systems, are systems supporting decision-making tasks in complex systems where knowledge must be used over a fairly wide range, especially in poorly formalized and poorly structured systems and in fuzzy systems with fuzzy decision criteria. These systems are the most effective; they are used to reduce problems of long-term, strategic management to problems of a tactical and short-term nature and to improve manageability, especially in a multi-criteria environment. Unlike expert systems, knowledge-based systems should more often avoid expert and heuristic procedures and resort to cognitive procedures in order to minimize risk. Here the professionalism of the personnel matters more, because developing such systems requires the cooperation and mutual understanding not only of developers but also of users and managers, and the development process itself, as a rule, proceeds iteratively, with step-by-step improvements and a gradual transformation of procedural knowledge (how to do) into non-procedural, declarative knowledge (what to do).
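As a toy sketch of the rule-based core that such systems share (the rules and facts are invented for illustration and are not taken from any real system), a forward-chaining engine can be written in a few lines of Python:

    # Each rule: (set of condition facts, concluded fact).
    rules = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
    ]

    def forward_chain(facts, rules):
        """Repeatedly fire rules whose conditions hold until a fixed point."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "high_risk_patient"}, rules))
    # -> includes 'flu_suspected' and 'refer_to_doctor'

A real expert system adds an explanation facility and certainty handling on top of such a loop, but the derivation of new facts from rules is the same in spirit.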

Let us now consider the question of intelligence of information systems.

The term intelligence comes from the Latin intellectus, meaning "mind, reason; the mental abilities of a person". Accordingly, artificial intelligence (AI) is usually interpreted as the property of automatic systems to take on individual functions of human intelligence, for example, to choose and make optimal decisions based on previously gained experience and rational analysis of external influences. Intelligence, one may say, is the ability of the brain to solve (intellectual) problems by acquiring, remembering, and purposefully transforming knowledge in the course of learning from experience and adapting to varied circumstances. The term "artificial intelligence" itself was proposed in 1956 at a seminar at Dartmouth College (USA). Here "intelligence" means "the ability to reason", not "intellect" in the sense of erudition.

In 1950, the British mathematician Alan Turing published the paper "Computing Machinery and Intelligence" in the journal Mind, in which he described a test for checking a program for intelligence. He proposed placing the researcher and the program in different rooms and, until the researcher determines who is behind the wall, a person or a program, considering the program's behavior reasonable. This was one of the first definitions of intelligence: A. Turing proposed calling a program's behavior intelligent if it simulates the reasonable behavior of a human. Since then, many definitions of intelligent systems (InS) and artificial intelligence (AI) have emerged. Here are some of them. 1. AI is defined as the field of computer science concerned with the study and automation of intelligent behavior. 2. Another definition: AI is an area of computer science whose purpose is to develop hardware and software tools that allow a non-programmer user to set and solve his own, traditionally considered intellectual, tasks by communicating with a computer in a limited subset of natural language. 3. An InS is an adaptive system that allows programs of expedient activity to be built for solving the tasks assigned to it, based on the specific situation currently taking shape in its environment. An adaptive system, in turn, is a system that remains operational under unforeseen changes in the properties of the controlled object, the control goals, or the environment, by changing its algorithm of functioning or program of behavior, or by searching for optimal (in some cases simply effective) solutions and states. Traditionally, by the method of adaptation, self-adjusting, self-learning, and self-organizing systems are distinguished.

So, using intelligent systems, a person solves intellectual problems. To distinguish a simple task from an intellectual one, we need the concept of an algorithm. An algorithm is a precise prescription for executing, in a definite order, a system of operations for solving any problem from a given class (set) of problems. The term "algorithm" comes from the name of the mathematician Al-Khwarizmi, who in the 9th century proposed the simplest arithmetic algorithms. In mathematics and cybernetics, a class of problems of a certain type is considered solved when an algorithm for its solution has been established, and finding algorithms is a natural human goal in solving various classes of problems. Finding an algorithm for problems of a given type involves subtle and complex reasoning, requiring great ingenuity and high skill; it is generally accepted that this kind of activity requires the participation of human intellect. Problems connected with finding an algorithm for solving a class of problems of a certain type will be called intellectual. That is, intellectual tasks are complex, poorly formalized tasks that require constructing an original solution algorithm depending on the specific situation, which may be characterized by uncertainty and by the dynamism of the initial data and knowledge.

Different researchers define artificial intelligence as a science differently, depending on their view of it, and work on creating systems that:

    think like people;

    think rationally;

    act like people;

    act rationally.

Recreating reasonable reasoning and action runs into certain difficulties. First, in most cases a person is not aware of how he performs an action: no exact way, method, or algorithm is known for understanding text, recognizing faces, proving theorems, solving problems, writing poetry, and so on. Second, at its present level of development, the computer is too far from the human level of competence and works on different principles.

Artificial intelligence has always been an interdisciplinary science, at once science and art, technology and psychology. Its methods are diverse; they are actively borrowed from other sciences and adapted and modified to the task at hand. Creating an intelligent system requires specialists from the applied field, so linguists, neurophysiologists, psychologists, economists, computer scientists, programmers, and others cooperate within artificial intelligence.

History of the development of artificial intelligence

The idea of creating an artificial likeness of a human to solve complex problems and simulate the human mind has been in the air since ancient times. In ancient Egypt, a mechanical statue of the god Amun that appeared to come alive was created; in Homer's Iliad, the god Hephaestus forged humanoid creatures.

Artificial intelligence is, in a sense, a science of the future, in which there is no rigid division into areas and the connections between individual disciplines, each reflecting only one facet of knowledge, are clearly visible.

The exact set of laws governing the rational part of thinking was formulated by Aristotle (384-322 BC). However, the forefather of artificial intelligence is considered to be the medieval Spanish philosopher, mathematician, and poet Ramon Llull, who, back in the 13th century, tried to create a mechanical machine for solving various problems based on a general classification of concepts he had developed. In the 17th century, Leibniz and Descartes independently continued this idea, proposing universal languages for classifying all the sciences. These works can be considered the first theoretical works in the field of artificial intelligence. Game theory and decision theory, data about the brain, and cognitive psychology all became building material for artificial intelligence. But the final birth of artificial intelligence as a scientific direction occurred only after the creation of computers in the 1940s and the publication by Norbert Wiener of his fundamental works on the new science of cybernetics.

The formation of artificial intelligence as a science took place in 1956, when J. McCarthy, M. Minsky, C. Shannon, and N. Rochester organized a two-month seminar at Dartmouth for American researchers working on automata theory, neural networks, and the study of intelligence. Although research in this area was already actively under way, it was at this seminar that the term and a separate science, artificial intelligence, appeared.

One of the founders of the theory of artificial intelligence is the famous English scientist Alan Turing, who in 1950 published the article "Computing Machinery and Intelligence" (published in Russian under the title "Can a Machine Think?"). It was there that the classic "Turing test" was described, which evaluates the "intelligence" of a computer by its ability to hold a meaningful dialogue with a human.

The first decades of the development of artificial intelligence (1952-1969) were full of success and enthusiasm. A. Newell, J. Shaw, and G. Simon created a chess-playing program based on the method proposed in 1950 by C. Shannon, formalized by A. Turing, and modeled by him by hand. A group of Dutch psychologists led by A. de Groot, who studied the playing styles of outstanding chess players, was involved in the work. In 1956 this team created the programming language IPL1, practically the first symbolic list-processing language, and wrote the first program, Logic Theorist, designed to prove theorems of the propositional calculus automatically. This program can be counted among the first achievements in the field of artificial intelligence.

In 1960, the same group wrote GPS (General Problem Solver), a universal problem solver. It could solve a number of puzzles, compute indefinite integrals, and tackle some other problems. The results attracted the attention of computing specialists, and programs appeared for automatically proving theorems of planimetry and solving algebraic problems.

Starting in 1952, A. Samuel wrote a series of checkers-playing programs that played at the level of a well-trained amateur; one of them learned to play better than its creator.

In 1958, J. McCarthy defined a new high-level language, Lisp, which became the dominant language of artificial intelligence.

The first neural networks appeared in the late 1950s. In 1957, F. Rosenblatt attempted to create a system that simulates the human eye and its interaction with the brain: the perceptron.

The first International Joint Conference on Artificial Intelligence (IJCAI) was held in 1969 in Washington, DC.

In 1965, J. Robinson published a method of automatic theorem proving called the "resolution principle", and on the basis of this method the logic programming language Prolog was later created.

In the United States, the first commercial knowledge-based systems, expert systems, appeared. Artificial intelligence was being commercialized: annual investment grew, industrial expert systems were created, interest in self-learning systems rose, and methods of knowledge representation were developed.

The first expert system was created by E. Feigenbaum in 1965, but commercial profit was still far off. In 1986 alone, DEC's first commercial system, R1, saved the company approximately $40 million; by 1988, DEC had deployed 40 expert systems. Du Pont used 100 systems and saved about $10 million a year.

In 1981, Japan, as part of a ten-year plan to develop intelligent computers based on Prolog, began developing a 5th-generation knowledge-based computer. 1986 saw a resurgence of interest in neural networks.

In 1991, Japan stopped funding the 5th-generation computer project and started a project to create a 6th-generation computer: a neurocomputer.

In 1997, the Deep Blue computer defeated world champion G. Kasparov at chess, demonstrating that artificial intelligence can equal or surpass humans in a number of intellectual tasks (albeit under limited conditions).

A huge role in the struggle for the recognition of artificial intelligence in the USSR was played by academicians A. I. Berg and G. S. Pospelov.

In 1954-1964, individual programs were created and research was carried out on finding solutions to logical problems. The ALPEV LOMI program, which automatically proves theorems, was created; it is based on Maslov's original inverse method, similar to Robinson's resolution method. Among the most significant results obtained by Soviet scientists in the 1960s is M. M. Bongard's "Cortex" algorithm, which simulates the activity of the human brain in pattern recognition. The outstanding scientists M. L. Tsetlin, V. N. Pushkin, and M. A. Gavrilov, whose students became the pioneers of this science in Russia, made a great contribution to the development of the Russian school of artificial intelligence.

In 1964, a method for automatically searching for proofs of theorems in the predicate calculus was proposed, called "Maslov's inverse method".

In 1965-1980, a new direction was born: situational control (corresponding, in Western terminology, to knowledge representation). Professor D. A. Pospelov founded this scientific school.

In 1968, at Moscow State University, V. F. Turchin created Refal, a language for symbolic data processing.

1. Literature review.


  1. A brief history of the development of artificial intelligence.

Artificial intelligence (AI) is an area of research at the intersection of sciences. Specialists working in this field try to understand what behavior is considered reasonable (analysis) and to create working models of that behavior (synthesis). The practical goal is to create methods and techniques for programming "intelligence" and transferring it to computing machines (VM), and through them to all kinds of systems and tools.

In the 1950s, AI researchers tried to build intelligent machines by mimicking the brain. These attempts failed because both hardware and software were completely inadequate.

In the 1960s, attempts were made to find general methods for solving a wide class of problems by simulating the complex process of thinking. Developing universal programs turned out to be too difficult and fruitless: the wider the class of problems one program can solve, the poorer its capabilities on any specific problem.

In the early 1970s, AI specialists focused on developing programming methods and techniques suitable for more specialized problems: representation methods (ways to formulate a problem so that it can be solved with computing technology) and search methods (ways to control the course of a solution so that it does not demand too much memory and time).

Only in the late 1970s was a fundamentally new concept adopted: to create an intelligent program, it must be provided with a large amount of high-quality, specialized knowledge about a particular subject area. The development of this direction led to the creation of expert systems (ES).

In the 1980s, AI experienced a rebirth. Its great potential was widely recognized both in research and in production development. The first commercial software products appeared as part of the new technology, and the field of machine learning began to develop. Until then, transferring an expert's knowledge to a computer program had been a tedious and lengthy procedure, so the creation of systems that automatically improve and expand their stock of heuristic rules (informal rules based on intuitive considerations) was the most important step of those years. At the beginning of the decade, the largest national and international research projects in the history of data processing, aimed at "fifth-generation intelligent VMs", were launched in various countries.

AI research is usually classified by field of application rather than by theories and schools. Each of these fields has developed its own programming methods and formalisms over the decades; each has its own traditions, which may differ markedly from those of a neighboring field. Currently, AI is applied in the following areas:


  1. natural language processing;

  2. expert systems (ES);

  3. symbolic and algebraic calculations;

  4. proofs and logic programming;

  5. game programming;

  6. signal processing and pattern recognition;

  7. etc.

1.2 AI programming languages.

1.2.1 Classification of programming languages and styles.
All programming languages can be divided into procedural and declarative. The vast majority of languages in current use (C, Pascal, BASIC, etc.) are procedural. The most significant classes of declarative languages are functional (Lisp, Logo, APL, etc.) and logic (Prolog, Planner, Conniver, etc.) languages (Fig. 1).

In practice, programming languages are not purely procedural, functional, or logical; they contain features of several types. In a procedural language it is often possible to write a functional program or part of one, and vice versa. It might be more accurate to speak of a programming style or method rather than a type of language; naturally, different languages support different styles to different degrees.

A procedural program consists of a sequence of statements and clauses controlling the order of their execution. Typical statements are assignments, control-transfer statements, I/O statements, and special clauses for organizing loops. From these, program fragments and subroutines can be composed. Procedural programming is based on taking the value of some variable, performing an operation on it, storing the new value via an assignment statement, and so on until the desired final value is obtained (and perhaps printed).
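For instance, a minimal procedural fragment in Python (used here purely for illustration) sums a list in exactly this manner:

    # Procedural style: mutate a variable step by step until the result is ready.
    values = [3, 1, 4, 1, 5]
    total = 0                # assignment
    for v in values:         # loop clause controlling execution order
        total = total + v    # take a value, operate on it, store it back
    print(total)             # 14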

Fig. 1. Classification of programming languages: programming languages are divided into procedural languages (Pascal, C, Fortran, ...) and declarative languages; the declarative languages in turn divide into logic languages (Prolog, Planner, ...) and functional languages (Lisp, Logo, APL, ...).
Logic programming is an approach to computer science that uses first-order predicate logic, in the form of Horn clauses, as a high-level language. First-order predicate logic is a universal abstract language intended for representing knowledge and solving problems; it can be seen as a general theory of relations. Logic programming is based on a subset of first-order predicate logic, yet it is equally broad in scope. Logic programming lets the programmer describe a situation with formulas of predicate logic and then, to draw conclusions from those formulas, apply an automatic problem solver (that is, a certain procedure). When using a logic programming language, attention is focused on describing the structure of the applied problem rather than on telling the computer what to do. Other computer science concepts, from fields such as relational database theory, software engineering, and knowledge representation, can also be described (and hence implemented) with logic programs.

A functional program consists of a collection of function definitions. Functions, in turn, call other functions and use statements that control the sequence of calls. A computation begins with a call to some function, which in turn calls the functions in its definition, and so on, in accordance with the hierarchy of definitions and the structure of conditional clauses. Functions often call themselves, directly or indirectly.

Each call returns a value to the function that called it, whose evaluation then continues; this process repeats until the function that started the computation returns the final result to the user.

"Pure" functional programming does not recognize assignments and control transfers. The branching of calculations is based on the mechanism of processing the arguments of the conditional sentence. Repeated calculations are carried out through recursion, which is the main means of functional programming.


  1. Comparative characteristics of AI languages.

At the first stage of AI development (in the late 1950s and early 1960s), there were no languages or systems oriented specifically toward knowledge domains. The universal programming languages that had appeared by then seemed a suitable tool for creating any systems, including intelligent ones, since they can express both declarative and procedural components. It seemed that any models and systems of knowledge representation could be implemented on this basis. But the complexity and laboriousness of such implementations turned out to be so great that applied systems never reached realization. Studies have shown that a programmer's productivity remains roughly constant regardless of the level of the instrumental language he works in, and that the ratio between the lengths of the source and resulting programs is about 1:10. Thus, using an adequate instrumental language increases a system developer's productivity by an order of magnitude, and that is with single-stage translation; languages intended for programming intelligent systems involve hierarchical (multilevel) translation and can raise productivity a hundredfold. All this confirms the importance of adequate tools.


  1. Symbolic information processing languages.

The Lisp language was developed under the direction of J. McCarthy at MIT in the late 1950s. According to the original plans, it was to include, along with all the capabilities of Fortran, tools for working with matrices, with pointers and structures built of pointers, and so on. But there were not enough funds for such a project. The principles that finally took shape at the foundation of Lisp were: the use of a single list representation for programs and data; the use of expressions to define functions; and the bracket syntax of the language.

Lisp is a low-level language; it can be regarded as an assembler oriented toward list structures. Throughout the language's existence there have therefore been many attempts to improve it by introducing additional basic primitives and control structures. As a rule, these changes did not become independent languages: in its new editions, Lisp quickly absorbed all the valuable inventions of its competitors.

After the powerful Lisp systems MacLisp and Interlisp were created in the early 1970s, attempts to build AI languages different from Lisp but on the same foundation came to nothing. The further development of the language proceeded, on the one hand, along the path of standardization (Standard Lisp, Franz Lisp, Common Lisp) and, on the other, toward conceptually new languages for representing and manipulating knowledge within the Lisp environment. Lisp is now implemented on all classes of computers, from PCs to high-performance computing systems.

Lisp is not the only language used for AI tasks. As early as the mid-1960s, languages offering different conceptual foundations were being developed. The most important of them for symbolic information processing are SNOBOL and Refal.


SNOBOL.

SNOBOL is a string-processing language in which the concept of pattern matching first appeared and was implemented to a fairly full extent. It was one of the first practical implementations of an advanced production system. The best-known and most interesting version of the language is SNOBOL4, whose technique for specifying patterns and working with them significantly outran the needs of practice. It essentially remained a niche programming language, although the concepts of SNOBOL certainly influenced Lisp and other languages for programming AI tasks.


Refal.

Refal is an algorithmic language of recursive functions. It was created by Turchin as a metalanguage for describing various languages, including algorithmic ones, and various kinds of processing of such languages; this included using Refal as a metalanguage over itself. For the user, it is a language for processing symbolic information, and besides describing the semantics of algorithmic languages it has found other applications: cumbersome analytic calculations in theoretical physics and applied mathematics, interpretation and compilation of programming languages, theorem proving, modeling of purposeful behavior, and, more recently, AI tasks. Common to all these applications are complex transformations of objects defined in formalized languages.

Refal is based on the notion of a recursive function defined on a set of arbitrary symbolic expressions. The basic data structure of the language is the list, but not singly linked, as in Lisp, rather bidirectional. Symbol processing is closer to the production paradigm, while the concept of pattern matching, characteristic of SNOBOL, is actively used.

A program written in Refal defines a set of functions, each of which takes one argument. A function call is enclosed in function brackets.

It is often necessary to call programs written in other languages from Refal programs. This is simple because, from Refal's point of view, primary functions (functions not described in Refal but nevertheless callable from Refal programs) are just functions external to the program; when calling a function, one may not even know that it is a primary function.

The semantics of a Refal program are described in terms of an abstract Refal machine. The Refal machine has a memory field and a field of view. The program is placed in the memory field, and the data it is to process is placed in the field of view; that is, before the machine starts, a description of the set of functions is entered into the memory field and the expression to be processed is entered into the field of view.

It is often convenient to break a Refal program into parts that the Refal compiler can process independently. The smallest part of a Refal program that can be compiled independently is called a module. Compiling a source module in Refal yields an object module, which, before the program is executed, must be combined with other modules compiled from Refal or from other languages; this combination is performed by a link editor and loaders. The details depend on the operating system used.

Thus, Refal absorbed the best features of the most interesting symbol-processing languages of the 1960s. Today, Refal is used to automate the construction of translators and systems of analytic transformations and also, like Lisp, as a tool environment for implementing knowledge representation languages.


Prolog.

In the early 1970s, a new language appeared that competed with Lisp in the implementation of knowledge-oriented systems: Prolog. It does not provide new, super-powerful programming tools compared with Lisp, but it supports a different model of organizing computation. Its practical attraction is that, just as Lisp hid the computer's memory organization from the programmer, Prolog freed the programmer from caring about the flow of control in the program.

Prolog is a European language, developed at the University of Marseille in 1971. It began to gain popularity only in the early 1980s, for two reasons: first, the logical basis of the language had been justified; second, in the Japanese fifth-generation computing project it was chosen as the basis for one of the central components, the inference engine.

The Prolog language is based on a limited set of mechanisms: pattern matching, tree representation of data structures, and automatic backtracking. Prolog is especially well suited to problems that involve objects and relationships between them.

Prolog has powerful tools for extracting information from databases, and its data search methods differ fundamentally from traditional ones. The power and flexibility of Prolog databases, and the ease with which they can be extended and modified, make the language very suitable for commercial applications.

Prolog has been used successfully in such areas as relational databases (the language is especially useful for building relational database interfaces for users), automatic problem solving, natural language understanding, implementation of programming languages, knowledge representation, expert systems, and other AI tasks.

The theoretical basis of Prolog is the predicate calculus. Prolog has a number of properties that traditional programming languages lack: an inference mechanism with search and backtracking, and a built-in pattern matching mechanism. Prolog is notable for the uniformity of programs and data, which are merely different points of view on Prolog objects. The language has no pointers, assignment operators, or unconditional jumps; the natural programming method is recursion.

A Prolog program consists of two parts: a database (a set of axioms) and a sequence of goal statements that together describe the negation of the theorem being proved. The fundamental difference between the interpretation of a Prolog program and the procedure of proving a theorem in the first-order predicate calculus is that the axioms in the database are ordered, and their order is highly significant, since the algorithm implemented by the Prolog program rests on it. Another significant restriction of Prolog is that the logical axioms are formulas of a limited class, the so-called Horn clauses; for many practical problems, however, this is sufficient for an adequate representation of knowledge. In a Horn clause, a single conclusion is accompanied by zero or more conditions.

The search for "useful" formulas for proving is a combinatorial task, and as the number of axioms increases, the number of derivation steps grows catastrophically fast. Therefore, in real systems, various strategies are used to limit blind enumeration. The Prolog language implements the strategy of linear resolution, which suggests using at each step the negation of a theorem or its "descendant" as one of the compared formulas, and one of the axioms as the other. At the same time, the choice of one or another axiom for comparison can immediately or after several steps lead to a "dead end". This forces you to return to the point at which the choice was made in order to try out a new alternative, and so on. The order in which alternative axioms are looked up is not arbitrary - it is set by the programmer, placing the axioms in the database in a certain order. In addition, Prolog provides quite convenient "built-in" means to prohibit returning to one or another point, depending on the fulfillment of certain conditions. Thus, the proof process in Prolog is simpler and more focused than in the classical resolution method.

The meaning of a Prolog program can be understood either from the standpoint of a declarative approach or from the standpoint of a procedural approach.

The declarative meaning of a program determines whether a given goal is true (achievable) and, if so, for what values of the variables it is achieved. It emphasizes the static existence of relations; the order of the subgoals in a rule does not affect the rule's declarative meaning. The declarative model is closer to the semantics of predicate logic, which makes Prolog an effective knowledge representation language. However, the declarative model cannot adequately represent clauses in which the order of the subgoals matters; to explain the meaning of such clauses, the procedural model is needed.

A procedural treatment of a Prolog program defines not only the logical connections between the head of a clause and the goals in its body but also the order in which those goals are processed. But the procedural model is unsuitable for explaining the meaning of clauses that cause control side effects, such as stopping the execution of a query or removing a clause from the program.

To solve real AI problems, machines are needed whose speed far exceeds what sequential architectures can provide, and that is possible only in parallel systems. Sequential implementations should therefore be regarded as workstations for creating software for future high-performance parallel systems capable of performing hundreds of millions of inferences per second. There are currently dozens of models for the parallel execution of logic programs in general and Prolog programs in particular. Often these models take the traditional approach to organizing parallel computation: a set of concurrently running, interacting processes. More recently, much attention has been paid to dataflow (streaming) schemes of parallel computation. Models of parallel execution start from traditional Prolog and the sources of parallelism inherent in it.

Prolog's efficiency is strongly affected by its demands on time and memory, a consequence of how poorly traditional computer architecture suits Prolog's method of executing programs, namely achieving the goals on a given list. Whether this causes difficulties in practice depends on the problem. The time factor hardly matters if a Prolog program run a few times a day takes one second while the corresponding program in another language takes 0.1 seconds; the difference in efficiency becomes significant when the two programs take 50 and 5 minutes, respectively.

On the other hand, in many areas where Prolog is used it can significantly reduce program development time. Prolog programs are easier to write, understand, and debug than programs in traditional languages; Prolog is attractive in its simplicity. A Prolog program is easy to read, which improves both programming productivity and maintainability. Because Prolog is based on Horn clauses, its source code is far less affected by machine-specific features than source code in other languages. There is, moreover, a tendency toward uniformity across versions of Prolog, so a program written for one version is easily converted to another, and the language itself is easy to learn.

When Prolog was chosen as the base programming language of the Japanese fifth-generation computer systems project, the lack of a mature programming environment and Prolog's unsuitability for building large software systems were noted among its shortcomings. The situation has since changed somewhat, although it is still premature to speak of a truly logic-oriented programming environment.

Among the languages whose appearance brought new ideas to the implementation of intelligent systems, those oriented toward programming search problems deserve particular mention.


  1. Programming languages for intelligent solvers.

The group of languages that may be called intelligent-solver languages is oriented mainly toward the AI subfield of problem solving, which is characterized on the one hand by fairly simple, well-formalizable problem models and on the other by sophisticated methods of searching for their solutions. The emphasis in these languages is therefore on powerful control structures rather than on ways of representing knowledge. These are languages such as Planner, Conniver, QA-4, and QLISP.


Planner.

This language gave impetus to the creation of a whole series of AI languages. It was developed at the Massachusetts Institute of Technology in 1967-1971. Initially it was an extension of Lisp, and in this form it was implemented on top of Maclisp under the name Micro-Planner; later Planner was substantially extended and became an independent language. In the USSR it was implemented under the names Planner-BESM and Planner-Elbrus. The language introduced many new ideas into programming languages: automatic backtracking, pattern-directed search, pattern-directed procedure invocation, a deductive mechanism, and others.

As a subset, Planner contains virtually all of Lisp (with some modifications) and retains many of its characteristic features. The data structures (expressions, atoms, and lists), the syntax of programs, and the rules for evaluating them in Planner are similar to those of Lisp. Data are processed mostly with the same tools as in Lisp: recursive and block functions. Almost all of Lisp's built-in functions, including EVAL, are present in Planner, and new functions are defined in the same way. As in Lisp, property lists can be associated with atoms.

There are differences between Lisp and Planner, however. In Lisp a variable is referred to simply by its name, for example X, while the atom itself, taken as data, is written 'X. Planner uses the opposite convention: atoms stand for themselves, and a reference to a variable is written with a prefix before its name; the prefix specifies how the variable is to be used. The syntax of function calls also differs from Lisp: in Planner a call is written as a list in square brackets rather than round ones.

Planner uses not only functions to process data, but also patterns and matchers.

Patterns describe rules for analyzing and decomposing data, so using them makes programs easier to write and shorter.

Matchers are defined in the same way as functions, except that their defining expression begins with a different keyword and their body is given as a pattern. Executing a matcher computes no value; it checks whether the expression matched against it possesses a certain property.
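
A rough analogue can be sketched in Prolog: a predicate whose head is a pattern and whose execution is a success-or-failure test rather than a value computation (the names are illustrative; last/2 is the standard list predicate):

    % The head is a pattern; "running" the predicate only tests
    % a property of the term matched against it.
    starts_with_zero([0 | _]).                 % first element is 0
    same_ends([X | Rest]) :- last(Rest, X).    % first = last element

    % ?- starts_with_zero([0, 1, 2]).   succeeds
    % ?- same_ends([a, b, a]).          succeeds
    % ?- same_ends([a, b, c]).          fails; no value is computed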

This subset of Planner can be used independently of the language's other parts: it is a powerful programming language in its own right, convenient for implementing various symbol-processing systems. The remaining parts of Planner orient it toward AI, providing means for describing problems (initial situations, admissible operations, goals) whose solutions the AI system implemented in Planner is to find, together with tools that simplify the implementation of solution-search procedures.

In Planner one can program by describing what is given and what must be obtained, without explicitly specifying how to do it. Responsibility for finding a solution to the described problem is assumed by the deductive mechanism built into the language (the mechanism of automatic goal achievement), which rests on pattern-directed invocation of theorems. Pattern-directed invocation alone, however, is not enough: a search mechanism is also required, and such a mechanism, the backtracking mode, is built into the language.
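
The same style of programming survives in Prolog, where a rough analogue of pattern-directed theorem invocation can be sketched as follows (the blocks-world names are illustrative):

    % "What is given": facts about a toy blocks world.
    on(a, table).
    on(b, a).

    % "Theorems" invoked by pattern: above(X, Y) holds if X rests
    % on Y directly or through intermediate blocks.
    above(X, Y) :- on(X, Y).
    above(X, Y) :- on(X, Z), above(Z, Y).

    % "What must be obtained" is simply stated as a goal; the
    % search for a proof, with backtracking, is left to the language:
    % ?- above(b, table).   succeeds with no explicit search code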

Executing a program in backtracking mode is convenient for its author in that the language takes on the job of remembering choice points and the untried alternatives at each of them, of returning to them, and of restoring the program's previous state; all of this happens automatically. But such automatism is not always a blessing: in the general case it leads to "blind" enumeration, and it may turn out that the most suitable theorem is invoked last, even though the program's author knew of its merits in advance. With this in mind, Planner provides means for controlling the backtracking mode.


Conniver.

The Conniver language was developed in 1972 and implemented on top of Maclisp. Its authors criticized some of the ideas of Planner, above all the automatic backtracking mode, which in general produces inefficient and hard-to-control programs, especially in the hands of unskilled users. The authors of Conniver abandoned automatic backtracking, holding that a language should not have fixed control disciplines built into it (beyond the simplest ones: loops and recursion) and that the author of a program should organize the control disciplines he needs himself; to that end the language should open its control structure to the user and provide tools for working with it. This concept was realized in Conniver as follows.

When a procedure is called, memory is allocated to hold the information necessary for its operation: in particular, the procedure's local variables, an access pointer (a reference to the procedure whose variables are visible from this one), and a return pointer (a reference to the procedure to which control must return). Usually this information is hidden from the user, but in Conniver such a memory area, a frame, is open: the user can inspect and modify its contents. Frames form a special data type in the language and are accessed through pointers.

The drawback of the language is that, although the user gains flexible control facilities, he is also burdened with difficult, painstaking work that demands high qualification. Conniver is suited not so much to implementing complex systems directly as to serving as a base on which skilled programmers prepare the control mechanisms needed by other users.

Given the complexity of implementing control disciplines, the language's authors were forced to include a number of fixed control mechanisms, analogues of the choice points and theorems of Planner. But unlike Planner, where the gap between choosing an alternative at a choice point and analyzing it, and, if necessary, raising a failure, can be arbitrarily large, in Conniver this gap is minimized. Conniver thereby avoids the harmful consequences of global failure-driven backtracking, in which the previous work of almost the entire program must be undone.