General remarks on optimal systems. Automatic optimal systems

Automatic control systems are usually designed based on the requirements for ensuring certain quality indicators. In many cases, the necessary increase in dynamic accuracy and improvement of transient processes of automatic systems is achieved with the help of corrective devices.

Particularly wide opportunities for improving quality indicators are provided by introducing into the circuit of an automatic system open compensation channels and differential connections, synthesized from one or another condition of error invariance with respect to the driving or disturbing actions. However, the effect of corrective devices, open compensation channels and equivalent differential connections on the performance indicators of an automatic system depends on the level at which signals are limited by the nonlinear elements of the system. The output signals of differentiating devices, usually short in duration and large in amplitude, are clipped by the elements of the system and do not lead to an improvement in the quality of the automatic system, in particular its speed of response. The best results in improving the quality indicators of automatic systems in the presence of signal constraints are obtained by so-called optimal control.

In a broad sense, the word "optimal" means best in the sense of some efficiency criterion. With this interpretation, any scientifically substantiated technical or economic system is optimal, since choosing a system implies that it is better than others in some respect. The criteria by which the choice is made (optimality criteria) can differ: the quality of the dynamics of the control processes, the reliability of the system, its energy consumption, weight and dimensions, cost, etc., or a combination of these criteria with some weighting factors.

The problem of synthesizing optimal systems was rigorously formulated relatively recently, when the concept of an optimality criterion was defined. Depending on the purpose of control, various technical or economic indicators of the controlled process can be chosen as the optimality criterion. In optimal automatic systems, what is ensured is not merely some improvement in one or another technical or economic quality indicator, but the attainment of its minimum or maximum possible value.

Optimal control is control that is carried out in the best way according to given indicators. Systems that implement optimal control are called optimal. The organization of optimal control is based on identifying and realizing the limiting capabilities of systems.

When developing optimal control systems, one of the most important steps is the formulation of the optimality criterion, understood as the main indicator that defines the optimization problem; it is with respect to this criterion that the optimal system must function in the best possible way.

Various technical and techno-economic indicators are used as optimality criteria, expressing technical or economic benefit or, conversely, losses. Because the requirements imposed on automatic control systems conflict with one another, the choice of the optimality criterion usually turns into a complex problem with no unambiguous solution. For example, optimizing an automatic system for reliability may entail an increase in its cost and complexity; on the other hand, simplifying the system will degrade a number of its other indicators. In addition, not every theoretically synthesized optimal solution can be implemented in practice at the current state of the art.

In the theory of automatic control, functionals are used that characterize individual quality indicators. Most often, therefore, optimal automatic systems are synthesized as optimal according to one main criterion, while the remaining indicators that determine the quality of operation of the system are constrained to lie within permissible ranges. This simplifies and makes more concrete the task of finding optimal solutions when developing optimal systems.

At the same time, the task of choosing competing system options becomes more complicated, since they are compared according to various criteria, and the system assessment does not have an unambiguous answer. Indeed, without a thorough analysis of many conflicting, often non-formalizable factors, it is difficult to answer, for example, the question of which system is better: more reliable or less expensive?

If the optimality criterion expresses technical and economic losses (automatic system errors, transient time, consumption of energy, funds, cost, etc.), then the optimal control is the one that provides the minimum of the optimality criterion. If it expresses profitability (efficiency, productivity, profit, missile flight range, etc.), then the optimal control should provide the maximum of the optimality criterion.

A typical problem of determining an optimal automatic system is the synthesis of the optimal parameters of a system whose input receives a driving action and noise that are stationary random signals; the root-mean-square error is then taken as the optimality criterion. The conditions for increasing the accuracy of reproduction of the useful signal (the driving action) and for suppressing the noise are contradictory, and therefore the problem arises of choosing the (optimal) system parameters for which the root-mean-square error takes its smallest value.
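As a rough numerical illustration of this trade-off (a sketch under assumed signal models, not an example from the source), the fragment below tracks a slowly drifting useful signal observed in white noise with a first-order filter and sweeps the filter time constant: a small constant passes the noise, a large one distorts the signal, and the root-mean-square error has a minimum in between.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 20000

# Assumed models: useful signal = slow random walk, noise = white.
signal = np.cumsum(rng.normal(0.0, 0.05, n))
measured = signal + rng.normal(0.0, 1.0, n)

def rms_error(T):
    """Track `measured` with a first-order filter of time constant T
    and return the RMS error with respect to the useful signal."""
    x, err2 = 0.0, 0.0
    for k in range(n):
        x += (dt / T) * (measured[k] - x)   # dx/dt = (input - x) / T
        err2 += (x - signal[k]) ** 2
    return np.sqrt(err2 / n)

# Small T reproduces the signal but passes the noise; large T suppresses
# the noise but lags the signal -- hence an optimum in between.
errors = {T: rms_error(T) for T in (0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0)}
print(errors, "best T:", min(errors, key=errors.get))
```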

Synthesis of an optimal system under the root-mean-square optimality criterion is a particular problem. General methods for the synthesis of optimal systems are based on the calculus of variations. However, the classical methods of the calculus of variations are in many cases unsuitable for solving modern practical problems, which require restrictions to be taken into account. The most convenient methods for synthesizing optimal automatic control systems are Bellman's dynamic programming method and Pontryagin's maximum principle.

In the general process of designing technical systems, two types of problems can be distinguished.
1 Design of a control system aimed at achieving the set task (formation of trajectories and modes, choice of control methods that implement the trajectories, etc.). This range of problems can be called motion design.
2 Design of structural and strength schemes (selection of geometric, aerodynamic, structural and other parameters) that ensure the realization of the general characteristics and specific modes of operation. This range of design problems is associated with the choice of the resources needed to carry out the set tasks.

Motion design (the changing of technological parameters) is closely related to the group of problems of the second type, since the information obtained in designing motions is the initial, and largely determining, information for solving those problems. But even when a ready-made technical system exists (i.e., the available resources are fixed), optimization techniques can be applied in the course of its modification.

Problems of the first type are currently solved most effectively and rigorously on the basis of the general methods of the mathematical theory of optimal control processes. The significance of this theory lies in the fact that it provides a unified methodology for solving a very wide range of optimal design and control problems, eliminates the inertia and insufficient generality of earlier particular methods, and incorporates valuable results and methods obtained in related fields.

The theory of optimal processes makes it possible to solve a wide range of practical problems in a rather general setting, taking into account most of the technical restrictions imposed on the feasibility of technological processes. The role of the methods of the theory of optimal processes has grown especially in recent years in connection with the widespread introduction of computers into the design process.

Thus, along with the problem of improving various indicators of the quality of operation of an automatic system, there arises the task of constructing optimal automatic systems, in which one or another technical or economic quality indicator attains an extreme value.

The development and implementation of optimal automatic control systems helps to increase the efficiency of the use of production units, increase labor productivity, improve product quality, save electricity, fuel, raw materials, etc.

Optimal systems are classified on various grounds. Let's note some of them.
Depending on the implemented optimality criterion, there are:
1) systems that are optimal in terms of speed. They implement the criterion for the minimum time of transients;
2) systems that are optimal in terms of accuracy. They are formed according to the criterion of minimum deviation of variables during transient processes or according to the criterion of minimum root-mean-square error;
3) systems that are optimal in terms of fuel consumption, energy, etc., realizing the criterion of minimum consumption;
4) systems that are optimal in terms of invariance. They are synthesized according to the criterion of independence of output variables from external perturbations or from other variables;
5) optimal extremum-seeking systems, realizing the criterion of minimum deviation of the quality indicator from its extreme value.

Depending on the characteristics of objects, optimal systems are divided into:
1) linear systems;
2) nonlinear systems;
3) continuous systems;
4) discrete systems;
5) additive systems;
6) parametric systems.

These categories, except for the last two, need no explanation. In additive systems, the actions applied to an object do not change its characteristics. If the actions change the coefficients of the object's equations, such systems are called parametric.

Depending on the type of optimality criterion, optimal systems are divided into the following:
1) uniformly optimal, in which each individual process proceeds optimally;
2) statistically optimal, realizing an optimality criterion that has a statistical character owing to random actions on the system. In these systems, the best behavior is ensured not in each individual process but only on average over the set of processes; statistically optimal systems can therefore be called optimal on average;
3) minimax optimal, which are synthesized from the condition of a minimax criterion that provides the best worst-case result compared with the worst-case result of any other automatic system.

According to the degree of completeness of information about the object, optimal systems are divided into systems with complete and with incomplete information. Information about an object includes the following:
1) the relationship between the input and output variables of the object;
2) the state of the object;
3) the driving action that determines the required mode of operation of the system;
4) the goal of control, i.e., the functional expressing the optimality criterion;
5) the nature of the disturbances.

Information about the object is in fact always incomplete, but in many cases this does not have a significant impact on the functioning of the system according to the chosen optimality criterion. In some cases, the incompleteness of information is so significant that the solution of optimal control problems requires the use of statistical methods.

Depending on the completeness of information about the control object, the optimality criterion can be chosen either as "rigid" (when the information is sufficiently complete) or as "adaptable", i.e., changing as the information changes. On this basis, optimal systems are divided into systems with fixed tuning and adaptive systems. Adaptive systems include extremum-seeking, self-tuning and learning systems; they most fully meet modern requirements for optimal control systems.

The solution of the optimal system synthesis problem consists in the development of a control system that meets the given requirements, i.e., in the creation of a system that implements the chosen optimality criterion. Depending on the amount of information about the structure of the automatic control system, the synthesis problem is posed in one of the following two formulations.

The first formulation covers cases when the structure of the automatic system is known. In such cases, the object and the controller can be described by the corresponding transfer functions, and the synthesis problem reduces to determining the optimal values of the numerical parameters of all elements of the system, i.e., the parameter values that ensure the realization of the chosen optimality criterion.

In the second formulation, the synthesis problem is posed with the structure of the system unknown. In this case, it is required to determine a structure and parameters of the system that make it optimal according to the accepted quality criterion. In engineering practice, the synthesis problem in this formulation is rare. Most often, the control object is either specified as a physical device or described mathematically, and the synthesis problem reduces to the synthesis of an optimal controller. It should be emphasized that in this case, too, a systems approach to the synthesis of the optimal control system is needed: when synthesizing the controller, the entire system (controller and object) is considered as a single whole.

At the initial stage of the synthesis of the optimal controller, the problem is reduced to its analytical design, i.e., to the definition of its mathematical description. In this case, the same mathematical model of the controller can be implemented by various physical devices. The choice of a specific physical implementation of an analytically defined controller is carried out taking into account the operating conditions of a particular automatic control system. Thus, the problem of designing an optimal controller is ambiguous and can be solved in various ways.

When synthesizing an optimal control system, it is very important to create an object model that is as adequate as possible to the real object. In control theory, as in other modern fields of science, the main types of object models are mathematical models in the form of the equations of statics and dynamics of objects.

When solving problems of synthesis of an optimal system, the control object is usually described by a unified mathematical model in the form of state equations. The state of an automatic control system at each moment of time is understood as the minimum set of variables (state variables) that contains the amount of information sufficient to determine the coordinates of the system in its current and future states. The initial plant equations are usually nonlinear; to bring them to the form of state equations, linearizing transformations of the original equations are widely used.
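A minimal sketch of what bringing a plant to state-equation form can look like in practice (the pendulum plant and all numerical values are assumptions chosen for illustration):

```python
import numpy as np

# Assumed nonlinear plant: theta'' = -(g/l)*sin(theta) + u (a pendulum).
# State vector x = [theta, theta_dot].
g, l = 9.81, 1.0

def f(x, u):
    """Nonlinear state equations dx/dt = f(x, u)."""
    return np.array([x[1], -(g / l) * np.sin(x[0]) + u])

# Linearization about the equilibrium x = 0, u = 0 (sin(theta) ~ theta)
# yields the standard linear state model dx/dt = A x + B u.
A = np.array([[0.0, 1.0],
              [-(g / l), 0.0]])
B = np.array([[0.0],
              [1.0]])
print(A, B, sep="\n")
```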

The statement of the main problem of optimal control in the form of a time program, for an automatic system with a given optimality criterion and boundary conditions, is formulated as follows.

Among all program controls u = u(t) admissible on the given segment, and all admissible control parameters a, that transfer the point (t0, x0) to the point (t1, x1), find those for which the functional on the solutions of the system of equations takes the smallest (greatest) value while the imposed conditions are fulfilled.

The control u(t) that solves this problem is called the optimal (program) control, and the vector a* is called the optimal parameter. If the pair (u*(t), a*) delivers the absolute minimum of the functional I on the solutions of the system, then the relation I[u*, a*] ≤ I[u, a] holds for every admissible pair (u, a).

The main problem of optimal coordinate control is known in the theory of optimal processes as the problem of synthesizing the optimal control law, and in some problems as the problem of the optimal behavior law.

The problem of synthesizing an optimal control law for a system with a criterion and boundary conditions, where for simplicity it is assumed that the functions f0, f, h, g do not depend on the vector a, is formulated as follows.

Among all admissible control laws v(x, t), find one such that, for any initial conditions (t0, x0), when this law is substituted, the given transition occurs and the quality criterion I[u] takes the smallest (greatest) value.

The motion trajectory of the automatic system corresponding to the optimal control u*(t) or the optimal law v*(x, t) is called the optimal trajectory. The set of optimal trajectories x*(t) and optimal control u*(t) forms an optimal controlled process (x*(t), u*(t)).

Since the optimal control law v*(x, t) has the form of a feedback control law, it remains optimal for any values of the initial conditions (t0, x0) and any coordinates x. Unlike the law v*(x, t), the program optimal control u*(t) is optimal only for those initial conditions for which it was calculated; when the initial conditions change, the function u*(t) changes as well. This is an important difference, from the point of view of the practical implementation of an automatic control system, between the optimal control law v*(x, t) and the program optimal control u*(t), since in practice the initial conditions can never be set absolutely accurately.

Any part of the optimal trajectory (optimal control) is also in turn an optimal trajectory (optimal control). This property is mathematically formulated as follows.

Let u*(t), t0 ≤ t ≤ t1, be the optimal control for the chosen functional I[u], corresponding to the transition from the state (t0, x0) to the state (t1, x1) along the optimal trajectory x*(t). The numbers t0, t1 and the vector x0 are fixed, while the vector x1 is, generally speaking, free. On the optimal trajectory x*(t), choose points x*(τ0) and x*(τ1) corresponding to intermediate times τ0 and τ1. Then the control u*(t) on the segment [τ0, τ1] is the optimal control for the transition from the state x*(τ0) to the state x*(τ1), and the corresponding arc of x*(t) is an optimal trajectory.

Thus, if the initial state of the system is x*(τ0) at the initial time t = τ0, then no matter how the system arrived at this state, its optimal subsequent motion is the arc of the trajectory x*(t), τ0 ≤ t ≤ t1, which is part of the optimal trajectory between the points (t0, x0) and (t1, x1). This condition is a necessary and sufficient property of the optimality of the process and serves as the basis of dynamic programming.

Mathematical description. The problem of transferring a controlled object (process) from one state to another is characterized by n phase coordinates x1, x2, x3, …, xn. In this case, r control actions u1, u2, u3, …, ur can be applied to the object of automatic control.

It is convenient to regard the control actions u1(t), u2(t), u3(t), …, ur(t) as the coordinates of a vector u = (u1, u2, u3, …, ur), called the control vector. The phase coordinates (state variables) of the controlled object x1, x2, x3, …, xn can likewise be viewed as the coordinates of a vector or of a point x = (x1, x2, x3, …, xn) in n-dimensional state space. This point is called the phase state of the object, and the n-dimensional space in which the phase states are represented as points is called the phase space (state space) of the object under consideration. Using vector notation, the controlled object can be depicted as shown in the figure. Under the action of the control u = (u1, u2, u3, …, ur), the phase point x = (x1, x2, x3, …, xn) moves, describing a certain line in the phase space, called the phase trajectory of the considered motion of the controlled object.

Knowing the control action u(t) = (u1(t), u2(t), u3(t), …, ur(t)), in the absence of disturbances it is possible to determine uniquely the motion of the controlled object for t > t0 if its initial state at t = t0 is known. If the control u(t) is changed, the point will move along a different trajectory, i.e., different controls give different trajectories emanating from the same point. Therefore, the transition of an object from the initial phase state xH to the final state xK can take place along different phase trajectories, depending on the control. Among this set of trajectories there is one that is best in a certain sense, i.e., an optimal trajectory. For example, if the task is minimum fuel consumption over an interval of locomotive motion, then the choice of the control and of the corresponding trajectory should be approached from this point of view: the specific fuel consumption g depends on the thrust developed, i.e., on the control action u(t), so that g = g(u(t)). The optimality criterion is usually represented as a certain functional.
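The fact that different admissible controls carry the same initial point along different phase trajectories, each with its own value of the consumption functional, can be sketched numerically; the double-integrator plant and both control programs below are assumptions made for illustration only.

```python
import numpy as np

dt, n = 0.01, 500   # 5-second horizon

def run(u_of_t):
    """Integrate the double integrator x1' = x2, x2' = u from x = (0, 0);
    return the phase trajectory and the fuel functional I = integral |u| dt."""
    x, traj, fuel = np.zeros(2), [], 0.0
    for k in range(n):
        u = u_of_t(k * dt)
        x = x + dt * np.array([x[1], u])
        traj.append(x.copy())
        fuel += abs(u) * dt
    return np.array(traj), fuel

# Two admissible control programs starting from the same phase point:
traj_a, fuel_a = run(lambda t: 1.0)                       # constant thrust
traj_b, fuel_b = run(lambda t: 2.0 if t < 1.0 else 0.0)   # thrust, then coast
print("final states:", traj_a[-1], traj_b[-1])
print("fuel functionals:", fuel_a, fuel_b)
```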

An important step in setting and solving a general control problem is the choice of an optimality criterion. This choice is an informal act; it cannot be prescribed by any theory, but is entirely determined by the content of the task. In some cases, the formal expression of understanding the optimality of the system allows several equivalent (or almost equivalent) formulations.

In such cases, the success and simplicity of the resulting solution are largely determined by the chosen form of the optimality criterion (provided that in all cases it expresses the requirements imposed on the system sufficiently fully). Once a mathematical model of the control process has been constructed, its further study and optimization are carried out by mathematical methods. The optimal behavior or state of the automatic system is achieved when the functional reaches its extremum, I = extr, a maximum or a minimum depending on the physical meaning of the variables.

In the practice of developing and studying dynamical systems, two tasks are most often encountered:
1) synthesis of a system that is optimal in terms of speed;
2) synthesis of a system that is optimal in terms of accuracy.

In the first case, it is necessary to ensure the minimum transient time; in the second, the minimum of the mean-square error (of the deviations Δyi(t) of the coordinates from their set values) under given or random actions.

A functional in this case can be defined as a quantity whose arguments are themselves functions of the variables and whose value serves as the optimality criterion. The total fuel consumption of interest to us, the main quality indicator of locomotive motion control systems in this example, is determined by an integral functional.

The integral functional that characterizes the main quality indicator of an automatic system (fuel consumption in the example under consideration) is called the optimality criterion. Each control u(t), and hence each trajectory of the locomotive, has its own numerical value of the optimality criterion. The problem arises of choosing the control u(t) and the trajectory x(t) for which the minimum value of the optimality criterion is achieved.

Typically, optimality criteria are used whose value is determined not by the current state of the object (in the example under consideration, the specific fuel consumption) but by its change during the entire control process. Therefore, to determine the optimality criterion it is required, as in the above example, to integrate some function whose value in the general case depends on the current values of the phase coordinates x of the object and of the control action u, i.e., such an optimality criterion is an integral functional of the form

I = ∫_{t0}^{t1} f0[x(t), u(t), t] dt.

In cases where the phase coordinates of the object are stationary random functions, the optimality criterion is an integral functional not in the time domain but in the frequency domain. Such optimality criteria are used in solving the problem of optimizing systems by minimizing the error variance. In the simplest cases, the optimality criterion may be not an integral functional but simply a function.

In the theory of automatic control, so-called minimax optimality criteria are also used, which characterize the best operation of a system under the worst operating conditions. An example of using the minimax criterion is choosing, on its basis, the variant of an automatic control system that has the minimum value of the maximum overshoot. Any optimality criterion is realized in the presence of restrictions imposed on the variables and on the indicators of control quality. In automatic control systems, the restrictions imposed on the control coordinates can be divided into natural and conditional.

In many cases, conflicting requirements are imposed on an automatic system (for example, the requirements of minimum fuel consumption and maximum train speed). When a control is chosen to satisfy one requirement (the criterion of minimum fuel consumption), the other requirements (maximum speed of movement) will not be satisfied. Therefore, among all the requirements, one main requirement is singled out, which must be satisfied in the best way, while the other requirements are taken into account in the form of restrictions on their values. For example, if the requirement of minimum fuel consumption is met, the minimum value of the speed of movement is bounded. If there are several equally important quality indicators that cannot be combined into a single composite indicator, then choosing the optimal controls corresponding to each of these indicators separately, while constraining the rest, gives solutions that can (during design) help in choosing an optimal compromise variant.

When choosing a control action u, it should be borne in mind that it cannot take arbitrary values, since real restrictions determined by the technical conditions are imposed on it. For example, the control voltage applied to a motor is limited by its maximum permissible value, determined by the operating conditions of the motor.

Optimal control can be achieved only if the object is controllable, i.e., if there exists at least one admissible control that transfers the object from the initial state to the given final state. The requirement of minimizing the optimality criterion can formally be replaced by the requirement of minimizing the final value of one of the coordinates of the control object.
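For a linear plant dx/dt = Ax + Bu, controllability can be checked by the Kalman rank condition; the sketch below (the double-integrator matrices are an assumed example) tests whether an admissible control connecting two states exists at all.

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank test: (A, B) is controllable iff the controllability
    matrix [B, AB, ..., A^(n-1) B] has full rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # double integrator (assumed example)
B = np.array([[0.0],
              [1.0]])
print(is_controllable(A, B))    # True: a transferring control exists
```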

If the boundary conditions in the optimal control problem are specified by the initial and final points of the trajectory, then we have a problem with fixed ends. When one or both boundary conditions are specified not by a point but by a finite region, or are not specified at all, we have a problem with free ends or with one free end. An example of a problem with one free end is the problem of eliminating deviations in an automatic control system caused by an abrupt change in the driving or disturbing action.

An important special case of optimal control is the time-optimal problem: among all admissible controls u(t) under whose action the control object passes from the initial phase state xH to the given final state xK, find the one for which this transition is accomplished in the shortest time.
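For the double integrator with a bounded control, the time-optimal solution is the classical bang-bang control with a single switching; the simulation below is a sketch (plant, bound and initial state are assumed for illustration).

```python
import numpy as np

# Time-optimal transfer of x1' = x2, x2' = u, |u| <= 1 from (4, 0) to the
# origin: bang-bang control switching on the curve x1 = -x2*|x2|/2.
dt = 1e-3

def u_opt(x1, x2):
    s = x1 + 0.5 * x2 * abs(x2)          # switching function
    return -np.sign(s) if abs(s) > 1e-9 else -np.sign(x2)

x, t = np.array([4.0, 0.0]), 0.0
while np.linalg.norm(x) > 1e-2 and t < 10.0:
    x = x + dt * np.array([x[1], u_opt(*x)])
    t += dt
print("transfer time ~", round(t, 3), "s (theory: 2*sqrt(4) = 4 s)")
```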

The theory of optimal processes is the basis of a unified methodology for designing optimal motions and optimal technical, economic and information systems. Applying the methods of the theory of optimal processes to the design of various systems can yield:
1) time programs for changing the control actions that are optimal by one criterion or another, and optimal values of constant control (design, tuning) parameters, taking into account various kinds of restrictions on their values;
2) optimal trajectories and modes, taking into account restrictions on the region where they may lie;
3) optimal control laws in the form of feedback, which determine the structure of the control loop (the solution of the control synthesis problem);
4) limiting values for a number of characteristics or other quality criteria, which can then be used as a benchmark for comparison with other systems;
5) solutions of boundary value problems of passing from one point of the phase space to another, in particular the problem of hitting a given region;
6) optimal strategies for hitting a moving region.

Methods for solving optimal control problems in essence reduce to a directed search: the process is computed repeatedly while the control action is varied.

The complexity of the problems of optimal control theory required a broader mathematical base for its construction. The theory uses the calculus of variations, the theory of differential equations and the theory of matrices. The development of optimal control on this basis led to the revision of many sections of the theory of automatic control, and the theory of optimal control is therefore sometimes called the modern theory of control. Although this exaggerates the role of just one of its sections, the development of the theory of automatic control in recent decades has indeed been determined largely by the development of this section.

To date, a mathematical theory of optimal control has been constructed. On its basis, methods have been developed for constructing time-optimal systems and procedures for the analytical design of optimal controllers. The analytical design of controllers, together with the theory of optimal observers (optimal filters), forms a set of methods that are widely used in the design of modern complex control systems.

The initial information for solving optimal control problems is contained in the problem statement. The control task can be formulated in meaningful (informal) terms, which are often somewhat vague. For the application of mathematical methods, a clear and rigorous formulation of problems is required, which would eliminate possible uncertainties and ambiguities and at the same time make the problem mathematically correct. For this purpose, a general problem requires a mathematical formulation adequate to it, called a mathematical model of the optimization problem.

A mathematical model is a sufficiently complete mathematical description of the dynamic system and the control process within the chosen degree of approximation and detail. The mathematical model maps the original problem onto some mathematical scheme and, ultimately, onto some system of numbers. On the one hand, it explicitly indicates (lists) all the information without which an analytical or numerical study of the problem cannot begin; on the other hand, it adds the information that follows from the essence of the problem and reflects particular requirements on its characteristics.

The complete mathematical model of the general control optimization problem consists of a number of particular models:
of the controlled process of motion;
of the available resources and technical constraints;
of the quality indicator of the control process;
of the control actions.

Thus, the mathematical model of a general control problem is characterized by a set of certain mathematical relationships between its elements (differential equations, constraints in the form of equalities and inequalities, quality functionals, initial and boundary conditions, etc.). In the theory of optimal control, general conditions are established that the elements of a mathematical model must satisfy in order for the corresponding mathematical optimization problem to be:
clearly defined;
meaningful, i.e., free of conditions that lead to the absence of a solution.

Note that the formulation of the problem and its mathematical model do not remain unchanged in the course of the study; they interact with each other. Usually the original formulation and its mathematical model undergo significant changes by the end of the study. The construction of an adequate mathematical model is thus an iterative process, during which both the formulation of the general problem and the formulation of the mathematical model are refined. It is important to emphasize that the mathematical model of a given problem may not be unique (different coordinate systems, etc.). Therefore, one should search for a variant of the mathematical model for which the solution and analysis of the problem are as simple as possible.

The following mathematical methods are widely used in the theory of optimal control:
- dynamic programming;
- the maximum principle;
- calculus of variations;
- mathematical programming.

Each of these methods has its own characteristics and, therefore, its own scope.

The dynamic programming method offers great possibilities. However, for systems of high order (above the fourth), using the method is very difficult. With several control variables, implementing the dynamic programming method on a computer requires an amount of memory that sometimes exceeds the capabilities of modern machines.

The maximum principle makes it relatively easy to take into account restrictions on the control actions applied to the control object. The method is most effective in the synthesis of time-optimal systems. However, implementing the method, even with the use of a computer, is considerably more difficult.

The calculus of variations is used in the absence of restrictions on state variables and control variables. Obtaining a numerical solution based on the methods of the calculus of variations is difficult. The method is used, as a rule, for some very simple cases.

Methods of mathematical programming (linear, nonlinear, etc.) are widely used for solving problems of optimal control in both automatic and automated systems. The general idea of these methods is to find the extremum of a function in the space of many variables subject to restrictions in the form of a system of equalities and inequalities. The methods make it possible to find numerical solutions to a wide range of optimal control problems. Their advantages are the comparative ease of taking into account restrictions on the controls and state variables, and usually acceptable memory requirements.
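An illustrative sketch with scipy.optimize.linprog (the cost coefficients and constraints are made-up numbers): a linear cost is minimized under a linear inequality constraint and simple bounds on the controls.

```python
from scipy.optimize import linprog

# Minimize fuel cost 2*u1 + 3*u2 subject to the demand u1 + u2 >= 10
# (rewritten as -u1 - u2 <= -10) and actuator limits 0 <= u <= 8.
res = linprog(c=[2.0, 3.0],
              A_ub=[[-1.0, -1.0]],
              b_ub=[-10.0],
              bounds=[(0, 8), (0, 8)])
print(res.x, res.fun)   # expected: u = (8, 2), cost = 22
```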

Bellman's dynamic programming method is based on solving variational problems according to the principle that a section of the optimal trajectory from any of its intermediate points to the end point is itself an optimal trajectory between those points.

Let us explain the essence of the dynamic programming method with the following example. Suppose it is required to transfer some object from a starting point to an end point in n steps, at each of which there are several possible variants. From the set of possible variants at each step, the one giving the extreme value of the functional is selected, and this procedure is repeated at every optimization step. Ultimately we obtain the optimal trajectory of the transition from the initial state to the final state under the optimization conditions.

Suppose, for example, that it is required to choose the operating mode of a locomotive passing through given points at which the minimum fuel consumption or travel time is achieved. The optimal solution could be found by enumerating the possible variants on a computer; however, for the large values of n and l encountered in most real problems, this would require an extremely large amount of computation. The solution of this problem is simplified by using the dynamic programming method.

For the mathematical formulation of the dynamic programming problem, we assume that the steps in solving the problem represent fixed time intervals, i.e., time is quantized. It is required to find, subject to a number of restrictions, the control law u[n] that transfers the object from the point x[0] of the phase space to the point x[N] while ensuring the minimum of the optimality criterion.

Owing to this simplification, the dynamic programming method makes it possible to solve optimal control problems that cannot be solved by direct optimization of the original functional using the classical methods of the calculus of variations. The dynamic programming method is essentially a method of compiling a program for the numerical solution of the problem on digital computers; only in the simplest cases does it yield an analytical expression for the desired solution and allow that solution to be studied. The dynamic programming method can be used to solve not only optimal control problems but also multi-step optimization problems from various fields of technology.

The method is widely used to study optimal control in both dynamic (technical) and economic systems. To apply the dynamic programming method, the relationships in the system between the output variables, the controls and the optimality criterion can be specified both as analytical dependences and as tables of numerical data, experimental graphs, etc.
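A minimal backward-induction sketch in the spirit of the locomotive example (the states, speeds and fuel costs are all invented for illustration):

```python
# At each of n steps a speed v from {0, 1, 2, 3} is chosen; it costs
# fuel[v] and advances the distance by v. After n steps the distance
# must equal `target`, and the total fuel is minimized.
n, target = 5, 8
fuel = {0: 0.0, 1: 1.0, 2: 2.5, 3: 4.5}
INF = float("inf")

# J[k][d]: minimal fuel-to-go from distance d with k steps already made.
J = [[INF] * (target + 1) for _ in range(n + 1)]
J[n][target] = 0.0
policy = {}
for k in range(n - 1, -1, -1):
    for d in range(target + 1):
        for v, c in fuel.items():
            if d + v <= target and c + J[k + 1][d + v] < J[k][d]:
                J[k][d] = c + J[k + 1][d + v]
                policy[(k, d)] = v

# The optimal speed program is recovered forward from the stored policy.
d, plan = 0, []
for k in range(n):
    plan.append(policy[(k, d)])
    d += plan[-1]
print("optimal speeds:", plan, "total fuel:", J[0][0])
```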

The Pontryagin maximum principle can be explained with the example of the time-optimal problem. Suppose it is required to transfer the representative point from an initial position in the phase space to a final position in minimum time. For each point of the phase space there is an optimal phase trajectory and a corresponding minimum time of transition to the end point. Around the end point one can construct isochrone surfaces, the loci of points with the same minimum transition time to that point. The time-optimal trajectory from the starting point to the end point should ideally coincide with the normals to the isochrones (motion along an isochrone consumes time without reducing the time remaining to the end point). In practice, the restrictions imposed on the coordinates of the object do not always allow the ideal time-optimal trajectory to be realized; the optimal trajectory is therefore the one that comes as close to the normals to the isochrones as the constraints permit. Mathematically, this condition means that over the entire trajectory the scalar product of the velocity vector of the representative point and the vector opposite in direction to the gradient of the transition time to the end point must be maximal:

Σ f_i V_i → max,

where f_i and V_i are the coordinates of the corresponding vectors.

Since the scalar product of two vectors equals the product of their moduli by the cosine of the angle between them, this optimality condition is the maximum of the projection of the velocity vector V onto the direction f. This optimality condition is the Pontryagin maximum principle.

Thus, when the maximum principle is used, the variational problem of finding a function u that extremizes the functional I is replaced by the simpler problem of determining the control u that maximizes the auxiliary Hamiltonian function H. Hence the name of the method, the maximum principle.

The main difficulty in applying the maximum principle is that the initial values f(0) of the auxiliary function f are not known. Usually, arbitrary initial values f(0) are assigned, the object equations and the adjoint equations are solved jointly, and the resulting optimal trajectory, as a rule, passes by the specified end point. By the method of successive approximations, assigning different initial values f(0), one finds the optimal trajectory that passes through the given end point.
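The procedure just described, guessing the initial values of the auxiliary (adjoint) variable, integrating the plant and adjoint equations jointly, and correcting the guess until the trajectory hits the end point, can be sketched on a toy problem whose answer is known analytically (the plant, criterion and all numbers below are assumptions):

```python
from scipy.optimize import brentq

# Toy problem: minimize I = integral of u^2/2 over [0, T],
# subject to x' = u, x(0) = 1, x(T) = 0.
# Maximizing H = -u^2/2 + psi*u over u gives u* = psi, and the adjoint
# equation is psi' = -dH/dx = 0, so psi stays constant along the motion.
T, dt = 2.0, 1e-3

def terminal_miss(psi0):
    x, psi = 1.0, psi0
    for _ in range(int(T / dt)):
        u = psi          # control that maximizes the Hamiltonian
        x += dt * u      # plant equation x' = u
        # adjoint equation psi' = 0: psi is left unchanged
    return x             # miss distance from the target x(T) = 0

psi0 = brentq(terminal_miss, -10.0, 10.0)   # correct the initial guess
print("psi(0) =", psi0)                     # analytic value: -1/T = -0.5
```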

The maximum principle is a necessary and sufficient condition of optimality only for linear objects. For nonlinear objects it is only a necessary condition; in this case it yields a narrowed set of admissible controls, among which, for example by enumeration, the optimal control is found, if it exists at all.

Mathematical programming. Strictly linear models, which rely on proportionality, linearity and additivity, are far from adequate in many real situations. In reality, dependences such as those of total costs or output on the production plan are nonlinear.

Nevertheless, the application of linear programming models under nonlinear conditions is often successful. It is therefore necessary to determine in which cases the linearized version of the problem is an adequate representation of a nonlinear phenomenon.

The method of mathematical programming consists in finding the extremum of a function of many variables under known constraints in the form of a system of equalities and inequalities. The advantages of the mathematical programming method include the following:
complex restrictions on the state and control variables are taken into account quite simply;
the amount of computer memory required can be significantly smaller than with other methods of investigation.

If there is information about the admissible range of values of the variables in the optimal solution, then, as a rule, it is possible to construct appropriate constraints and obtain a fairly reliable linear approximation. In cases where there is a wide range of admissible solutions and no information about the nature of the optimal solution, a sufficiently good linear approximation cannot be constructed. The importance of nonlinear programming and its use are constantly growing.

Often, nonlinearities in models stem from empirical observations of relationships, such as disproportionate changes in costs, output or quality measures, or from structurally derived relationships, which include postulated physical phenomena as well as mathematically deduced rules of behavior or rules of management.

Many different circumstances lead to a nonlinear formulation of constraints or objective functions. With a small number of non-linearities, or if the non-linearities are not significant, the increase in the amount of computation may be negligible.

It is always necessary to analyze the dimensionality and complexity of the model and to evaluate the impact of linearization on the decision being made. A two-stage approach is often used: first a nonlinear model of small dimension is built and a region containing its optimal solution is found; then a more detailed linear programming model of higher dimension is used, the approximation of whose parameters is based on the solution obtained from the nonlinear model.

For problems described by nonlinear models, there is no universal solution method comparable to the simplex method for linear programming problems. A given method of nonlinear programming can be very effective for solving problems of one type and completely unsuitable for others.

Most non-linear programming methods do not always ensure convergence in a finite number of iterations. Some methods provide a monotonous improvement in the value of the objective function when moving from one iteration to another.

The problem of optimal speed of response is always relevant. Reducing the duration of the transient processes of servo systems makes it possible to work out the driving actions in a shorter time. Reducing the duration of the transient processes of control systems for technical objects, robots and technological processes leads to an increase in labor productivity.

In linear automatic control systems, an increase in speed can be achieved with the help of corrective devices. For example, the influence of the time constant of an aperiodic link with the transfer function k/(Tp + 1) on the transient process can be reduced by including in series a differentiating device with the transfer function k1(T1p + 1)/(T2p + 1). Effective methods of increasing the speed of servo systems are the suppression of the initial values of the slowly damped components of the transient process and the minimization of quadratic integral estimates subject to constraints on the driving action. However, the effect of improving the transient process in real systems depends on the degree to which the coordinates of the system are limited by its nonlinearities. Derivatives of external actions, usually large in magnitude and short in duration, are clipped by the elements of the system and do not produce the desired forcing effect in the transient mode. The best results in increasing the speed of automatic systems in the presence of restrictions are given by time-optimal control.
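A sketch of the compensation described above, using scipy.signal and assumed numerical values: a series device k1(T1p + 1)/(T2p + 1) with T1 = T cancels the slow pole of k/(Tp + 1), and the step response settles an order of magnitude faster.

```python
import numpy as np
from scipy import signal

k, T = 1.0, 1.0             # aperiodic link k/(T s + 1)
k1, T1, T2 = 1.0, 1.0, 0.1  # differentiating device k1 (T1 s + 1)/(T2 s + 1)

plant = signal.TransferFunction([k], [T, 1.0])
corrected = signal.TransferFunction(
    np.polymul([k1 * T1, k1], [k]),        # series connection: numerators
    np.polymul([T2, 1.0], [T, 1.0]))       # and denominators multiply

t = np.linspace(0.0, 5.0, 501)
_, y_plant = signal.step(plant, T=t)
_, y_corr = signal.step(corrected, T=t)
print("time to 95% of steady state:",
      t[np.argmax(y_plant >= 0.95)], "s (plant) vs",
      t[np.argmax(y_corr >= 0.95)], "s (corrected)")
```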

The time-optimal problem was historically the first problem in the theory of optimal control; it played a large role in the discovery of one of the main methods of the theory, the maximum principle. This problem, a special case of the optimal control problem, consists in determining an admissible control action under which the controlled object (process) passes from the initial phase state to the final state in minimum time. The optimality criterion in this problem is time.

Necessary conditions of optimal control for various types of optimization problems are obtained on the basis of analytical indirect optimization methods and form a set of functional relationships that an extremal solution must satisfy.

In deriving them, an assumption essential for their subsequent application was made: that an optimal control (optimal solution) exists. In other words, if an optimal solution exists, then it necessarily satisfies the stated (necessary) conditions. However, other solutions that are not optimal may also satisfy the same necessary conditions (just as the necessary condition for a minimum of a function of one variable is also satisfied, for example, by its maximum points and inflection points). Therefore, the fact that a found solution satisfies the necessary optimality conditions does not by itself mean that the solution is optimal.

Using only the necessary conditions makes it possible, in principle, to find all solutions that satisfy them and then to select among them those that are really optimal. In practice, however, finding all solutions that satisfy the necessary conditions is most often impossible because of the high complexity of such a process. Therefore, once any solution satisfying the necessary conditions has been found, it is advisable to check whether it is really optimal in the sense of the original problem statement.

Analytical conditions whose fulfillment on an obtained solution guarantees its optimality are called sufficient conditions. The formulation of these conditions, and especially their practical (for example, computational) verification, often proves to be a very laborious task.

In the general case, the application of the necessary optimality conditions would be more justified if, for the problem under consideration, it was possible to establish the existence or existence and uniqueness of an optimal control. This question is mathematically very complex.

The problem of the existence and uniqueness of an optimal control consists of two questions.
1 The existence of an admissible control (i.e., a control belonging to the given class of functions) that satisfies the given constraints and takes the system from the given initial state to the given final state. Sometimes the boundary conditions of the problem are set in such a way that the system, because of its limited energy (financial, informational) resources, cannot satisfy them. In this case, the optimization problem has no solution.
2 The existence, within the class of admissible controls, of an optimal control, and its uniqueness.

For nonlinear systems of general form, these questions have not yet been resolved with a completeness sufficient for applications. The problem is further complicated by the fact that the uniqueness of the optimal control does not imply the uniqueness of the control satisfying the necessary conditions. Moreover, usually only one of the most important necessary conditions (most often the maximum principle) is actually verified.

Verification of further necessary conditions can be quite cumbersome. This shows the importance of any information about the uniqueness of controls that satisfy the necessary optimality conditions, as well as about the specific properties of such controls.

One must be cautioned against inferring the existence of an optimal control from the fact that a "physical" problem is being solved. In fact, when the methods of the theory of optimal control are applied, one deals with a mathematical model. A necessary condition for a mathematical model to describe a physical process adequately is precisely the existence of a solution for the mathematical model. Since simplifications of various kinds are introduced when a mathematical model is formed, and their influence on the existence of solutions is difficult to predict, the proof of existence is a separate mathematical problem.

Thus:
the existence of an optimal control implies the existence of at least one control that satisfies the necessary optimality conditions; the existence of an optimal control does not follow from the existence of a control that satisfies the necessary optimality conditions;
from the existence of an optimal control and the uniqueness of the control satisfying the necessary conditions, the uniqueness of the optimal control follows; the existence and uniqueness of an optimal control do not imply the uniqueness of the control satisfying the necessary optimality conditions.

It is rational to apply control optimization methods:
1) in complex technical and economic systems, where finding acceptable solutions from experience is difficult. Experience shows that optimizing small subsystems separately can lead to large losses in the quality criteria of the combined system. It is better to solve the problem of optimizing the system as a whole approximately (albeit in a simplified formulation) than to solve it exactly for a separate subsystem;
2) in new problems, for which there is no experience in shaping satisfactory characteristics of the control process. In such cases, formulating the optimal problem often makes it possible to establish the qualitative nature of the control;
3) at as early a stage of design as possible, when there is greater freedom of choice. Once a large number of design decisions have been fixed, the system becomes insufficiently flexible, and subsequent optimization may not yield a significant gain.

When necessary, one determines the direction of change of the controls and parameters that gives the greatest change in the quality criterion (the quality gradient). It should be noted that for well-studied systems with long operating experience, optimization methods may give only a small gain, since the practical solutions found from experience are usually close to the optimal ones.

In some practical problems a certain "roughness" of the optimal controls and parameters is observed, i.e., small changes in the quality criterion correspond to large local changes in the controls and parameters. This sometimes gives rise to the assertion that rigorous optimization methods are never needed in practice.

In fact, the "roughness" of the control is observed only in cases where the optimal control corresponds to a stationary point of the quality criterion. In this case, a small change in the control leads to a deviation of the quality criterion that is of the second order of smallness.

For controls lying on the boundary of the admissible region, this roughness may be absent, so the property should be investigated for each problem individually. Moreover, in some problems even small improvements in the quality criterion achieved through optimization can be significant. Complex control optimization problems often place excessive demands on the characteristics of the computers used in their solution.



Optimal control

Voronov A. A., Titov V. K., Novogranov B. N. Fundamentals of the Theory of Automatic Regulation and Control. Moscow: Vysshaya Shkola, 1977. 519 p. (pp. 477–491).

Optimal ACS are systems in which control is carried out in such a way that the required optimality criterion attains an extreme value.

Examples of optimal control of objects:

1. Controlling the motion of a rocket so as to reach a given altitude or range with minimum fuel consumption;
2. Controlling the motion of a mechanism driven by a motor so that the energy expenditure is minimal;
3. Controlling a nuclear reactor so that its productivity is maximal.

The optimal control problem is formulated as follows:

“Find a law of variation in time of the control u(t) under which the system, subject to the given constraints, will pass from one given state to another in an optimal way, in the sense that the functional I, which expresses the quality of the process, attains an extreme value under the control found.”

To solve the optimal control problem, one needs to know:

1. A mathematical description of the object and of the environment, linking the values of all coordinates of the process under study with the control and disturbing actions;

2. Restrictions of a physical nature on the coordinates and on the control law, expressed mathematically;

3. Boundary conditions determining the initial and the required final states of the system (the technological goal of the system);

4. The objective function (the quality functional, i.e., the mathematical goal of control).

Mathematically, the optimality criterion is most often represented in the form

I = ∫_{t_n}^{t_k} f_0[y(t), u(t), f(t), t] dt + φ[y(t_k), t_k],   (1)

where the first term characterizes the quality of control over the entire interval (t_n, t_k) and is called the integral component, and the second term characterizes the accuracy at the final (terminal) moment of time t_k.

Expression (1) is called a functional, since I depends on the choice of the function u(t) and the resulting y(t).

The Lagrange problem. It minimizes the functional

I = ∫_{t_n}^{t_k} f_0 dt.

It is posed in cases where the average deviation over a certain time interval is of particular interest, and the task of the control system is to ensure the minimum of this integral (deterioration of product quality, losses, etc.).

Examples of functionals:

I = \int x^2(t)\,dt \to \min is the criterion of minimum error in the steady state, where x(t) is the deviation of the controlled parameter from the set value;

I = \int_{t_1}^{t_2} dt = t_2 - t_1 \to \min is the criterion of maximum speed of the ACS;

I = \int f_0\,dt \to \min is the criterion of optimal profitability.

The Mayer problem. In this case the functional to be minimized is defined only by its terminal part, i.e.,

I = \varphi \to \min.

For example, for an aircraft control system described by the equation

\dot{x} = f_0(x, u, t),

the following problem can be posed: determine the control u(t), t_n \le t \le t_k, so that in the given flight time the maximum range is reached, provided that at the final moment t_k the aircraft lands, i.e., x(t_k) = 0.

The Bolza problem reduces to minimizing the combined criterion (1), which contains both the integral and the terminal terms.

The basic methods for solving optimal control problems are:

1. The classical calculus of variations (Euler's theorem and equation);

2. The maximum principle of L. S. Pontryagin;

3. Dynamic programming of R. Bellman.

Euler's equation and theorem

Let the functional

I = \int_{t_n}^{t_k} f_0(x_i, \dot{x}_i, t)\,dt

be given, where f_0 is a twice differentiable function. Among the admissible functions x_i(t) it is required to find those (the extremals) that satisfy the given boundary conditions x_i(t_n), x_i(t_k) and minimize the functional.

Extremals are found among the solutions of the Euler equation

\frac{\partial f_0}{\partial x_i} - \frac{d}{dt}\,\frac{\partial f_0}{\partial \dot{x}_i} = 0.

To establish that the functional is actually minimized, one must verify that the Legendre condition

\frac{\partial^2 f_0}{\partial \dot{x}_i^2} \ge 0

is satisfied along the extremals; it is analogous to the requirement that the second derivative be positive at the minimum point of an ordinary function.

Euler's theorem: “If the extremum of the functional I exists and is attained among smooth curves, then it can only be attained on extremals.”
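For a concrete integrand, the Euler equation can be formed and solved symbolically; SymPy provides euler_equations for this. A minimal sketch, with the assumed illustrative integrand f0 = \dot{x}^2 + x^2 and boundary conditions x(0) = 0, x(1) = 1 (not from the text):

    import sympy as sp
    from sympy.calculus.euler import euler_equations

    t = sp.symbols('t')
    x = sp.Function('x')

    # Assumed illustrative integrand f0(x, x', t) = x'^2 + x^2.
    f0 = x(t).diff(t)**2 + x(t)**2

    # Euler equation: df0/dx - d/dt(df0/dx') = 0.
    eq = euler_equations(f0, [x(t)], [t])[0]
    print(eq)          # Eq(2*x(t) - 2*Derivative(x(t), (t, 2)), 0)

    # The extremal satisfying the boundary conditions x(0) = 0, x(1) = 1
    # (equivalent to x(t) = sinh(t)/sinh(1)).
    sol = sp.dsolve(eq, x(t), ics={x(0): 0, x(1): 1})
    print(sol)

    # Legendre condition: d^2 f0 / dx'^2 = 2 > 0, so this extremal is a minimum.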

THE MAXIMUM PRINCIPLE OF L. S. PONTRYAGIN

The school of L.S. Pontryagin formulated a theorem on the necessary condition for optimality, the essence of which is as follows.

Let us assume that the differential equations of the plant, together with the invariant part of the control device, are given in the general form

\dot{x}_i = f_i(x_1, \dots, x_n, u_1, \dots, u_m, t), \quad i = 1, \dots, n.

Constraints may be imposed on the controls u_j, for example in the form of the inequalities

|u_j| \le U_j, \quad j = 1, \dots, m.

The goal of control is to transfer the object from the initial state x(t_n) to the final state x(t_k). The end time of the process t_k may be fixed or free.

Let the optimality criterion be the minimum of the functional

I = \int_{t_n}^{t_k} f_0\,dt.

We introduce the auxiliary variables \psi_0, \psi_1, \dots, \psi_n and form the function

H = \psi_0 f_0(x, u, t) + \psi_1 f_1(x, u, t) + \dots + \psi_n f_n(x, u, t).

The maximum principle states: for the system to be optimal, i.e., for the functional to attain its minimum, there must exist non-zero continuous functions \psi_0(t), \psi_1(t), \dots, \psi_n(t) satisfying the adjoint equations

\dot{\psi}_i = -\frac{\partial H}{\partial x_i},

such that for any t in the given interval t_n \le t \le t_k the value of H, as a function of the admissible control, reaches a maximum.

The maximum of the function H is determined from the conditions \partial H / \partial u_j = 0 if the control does not reach the boundary of the admissible region, and as the least upper bound of H otherwise.
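The classical illustration of the maximum principle is time-optimal control of a double integrator \dot{x}_1 = x_2, \dot{x}_2 = u with |u| \le 1: H is linear in u, so the optimal control is bang-bang, u = ±1, with at most one switching on the curve x_1 = -x_2|x_2|/2. A minimal simulation sketch (the plant and the numbers are the standard textbook example, not taken from this text):

    import numpy as np

    # Double integrator: x1' = x2, x2' = u, |u| <= 1.
    # The maximum principle gives u = -sign(s), where
    # s = x1 + x2*|x2|/2 is the switching function (switching curve).
    def u_opt(x1, x2):
        s = x1 + 0.5 * x2 * abs(x2)
        if s > 0:
            return -1.0
        if s < 0:
            return 1.0
        return -np.sign(x2)        # on the curve: slide along it to the origin

    dt, x1, x2, t = 1e-3, 1.0, 0.0, 0.0
    while np.hypot(x1, x2) > 1e-2 and t < 10.0:
        u = u_opt(x1, x2)
        x1 += dt * x2
        x2 += dt * u
        t += dt

    print(f"reached the origin in t = {t:.3f} s (theory: 2*sqrt(x10) = 2.000)")

The simulated control switches once, from u = -1 to u = +1, exactly as the relay implementation discussed later in this text requires.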

Dynamic programming by R. Bellman

R. Bellman's principle of optimality:

“Optimal behavior has the property that, whatever the initial state and the initial decision, subsequent decisions must constitute an optimal behavior relative to the state resulting from the first decision.”

The "behavior" of the system should be understood as the motion of the system, and the term "decision" refers to the choice of the law of variation of the control forces in time.

In dynamic programming, the search for an extremal is divided into n steps, whereas in the classical calculus of variations the extremal is sought as a whole.

The extremal search process is based on the following premises of R. Bellman's optimality principle:

  1. Each segment of an optimal trajectory is itself an optimal trajectory;
  2. The optimal process at each stage does not depend on its prehistory;
  3. The optimal control (optimal trajectory) is sought by backward motion [from y(T) to y(T - ∆), where ∆ = T/N and N is the number of trajectory segments, etc.].

The Bellman equations for the corresponding problem statements are derived heuristically for both continuous and discrete systems.
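The backward motion of point 3 is easiest to see on a discrete problem. In the sketch below (the grid, the dynamics x_{k+1} = x_k + u_k and the costs are illustrative assumptions), the Bellman recursion V_k(x) = min_u [g(x, u) + V_{k+1}(x + u)] is computed from the terminal step backwards, and the optimal trajectory is then reconstructed by a forward pass:

    states = list(range(-5, 6))            # admissible states -5 .. 5
    controls = [-1, 0, 1]                  # admissible controls
    N = 6                                  # number of trajectory segments

    def step_cost(x, u):
        return x**2 + u**2                 # assumed running cost g(x, u)

    V = {x: float(x**2) for x in states}   # terminal cost V_N(x) = x^2
    policy = []                            # optimal control at each step

    for k in range(N - 1, -1, -1):         # backward motion: k = N-1 .. 0
        V_new, pol = {}, {}
        for x in states:
            best_cost, best_u = None, None
            for u in controls:
                x_next = x + u
                if x_next not in V:        # keep the trajectory on the grid
                    continue
                c = step_cost(x, u) + V[x_next]
                if best_cost is None or c < best_cost:
                    best_cost, best_u = c, u
            V_new[x], pol[x] = best_cost, best_u
        V, policy = V_new, [pol] + policy

    # Forward pass: reconstruct the optimal trajectory from x_0 = 4.
    x, traj = 4, [4]
    for k in range(N):
        x += policy[k][x]
        traj.append(x)
    print("optimal trajectory:", traj)     # e.g. [4, 3, 2, 1, 0, 0, 0]

Note that each suffix of the computed trajectory is itself optimal for its own starting state, which is exactly premise 1 above.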

Adaptive control

Andrievsky B.R., Fradkov A.L. Selected Chapters of the Theory of Automatic Control with Examples in the MATLAB Language. SPb.: Nauka, 1999. 467 p. Chapter 12.

Voronov A.A., Titov V.K., Novogranov B.N. Fundamentals of the Theory of Automatic Regulation and Control. M.: Higher School, 1977. 519 p. Pp. 491-499.

Anhimyuk V.L., Opeyko O.F., Mikheev N.N. Theory of Automatic Control. Mn.: Design PRO, 2000. 352 p. Pp. 328-340.

The need for adaptive control systems arises from the significant complication of the control problems being solved, whose specific feature is the lack of any practical possibility of studying and describing in detail the processes occurring in the controlled object.

An example is modern high-speed aircraft, for which accurate a priori data on the characteristics under all operating conditions cannot be obtained, because of the significant scatter of atmospheric parameters, the large ranges of variation of flight speeds, ranges and altitudes, and the presence of a wide range of parametric and external disturbances.

Some control objects (aircraft and missiles, technological processes and power plants) are distinguished by the fact that their static and dynamic characteristics change over a wide range in an unforeseen way. Optimal management of such objects is possible with the help of systems in which the missing information is automatically replenished by the system itself in the process of work.

Adaptive (from the Latin adaptio, adaptation) systems are those that, when the parameters of the object or the characteristics of the external influences change during operation, independently, without human intervention, change the parameters of the controller, its structure, its tuning or the regulating actions so as to maintain the optimal mode of operation of the object.

Adaptive control systems are created for fundamentally different conditions: adaptive methods should ensure high quality of control when sufficiently complete a priori information about the characteristics of the controlled process is absent, i.e., under conditions of uncertainty.

Classification of adaptive systems (structural scheme of AS classification according to the nature of the adaptation process):

Self-adjusting (adaptive) control systems are divided into:

1. Self-tuning systems: search systems (extremal) and searchless systems (analytic);

2. Self-learning systems: learning with reinforcement (reward) and learning without reinforcement;

3. Systems with adaptation in special phase states: relay self-oscillating systems and adaptive systems with variable structure.

Self-adjusting systems (SNS) are systems in which adaptation to changing operating conditions is carried out by changing parameters and control actions.

Self-organizing systems are those in which adaptation is carried out by changing not only the parameters and control actions but also the structure.

Self-learning systems are automatic control systems in which the optimal mode of operation of the controlled object is determined by means of a control device whose algorithm is automatically and purposefully improved in the learning process by automatic search. The search is performed with the help of a second control device, which is an organic part of the self-learning system.

In search systems, the parameters of the control device or the control action are changed as a result of searching for the conditions of the extremum of the quality indicators. The search for the extremum conditions in systems of this type is carried out by means of trial actions and evaluation of the results obtained.

In searchless systems, the parameters of the control device or the control actions are determined on the basis of an analytical determination of the conditions that ensure the specified quality of control, without the use of special search signals.

Systems with adaptation in special phase states use special modes or properties of nonlinear systems (self-oscillation modes, sliding modes) to organize controlled changes in the dynamic properties of the control system. Specially organized special modes in such systems either serve as an additional source of working information about the changing operating conditions of the system, or endow the control system with new properties, due to which the dynamic characteristics of the controlled process are kept within the desired limits regardless of the changes that occur during operation.
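The sliding mode mentioned here can be illustrated on a second-order plant with relay control u = -U·sign(s), s = c·x1 + x2: after a reaching phase the state chatters along s = 0 and from then on obeys approximately \dot{x}_1 = -c·x_1, regardless of moderate drift of the plant parameters. All numbers in the sketch are assumptions for the example:

    import numpy as np

    # Assumed plant: x1' = x2, x2' = a*x2 + b*u, with uncertain a, b.
    a, b, U, c = -0.3, 1.4, 2.0, 1.0       # assumed parameters and relay level

    def control(x1, x2):
        s = c * x1 + x2                    # switching function
        return -U * np.sign(s)             # relay control

    dt = 1e-3
    x1, x2 = 1.5, 0.0
    s_log = []
    for _ in range(8000):                  # simulate 8 seconds
        u = control(x1, x2)
        x1 += dt * x2
        x2 += dt * (a * x2 + b * u)
        s_log.append(c * x1 + x2)

    # After the reaching phase, s(t) chatters around zero: the sliding mode.
    print(f"final state: x1 = {x1:.3f}, x2 = {x2:.3f}, |s| = {abs(s_log[-1]):.3f}")

The self-oscillation (chatter) around s = 0 is precisely the special mode that carries the working information in such systems.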

When using adaptive systems, the following main tasks are solved:

1. During the operation of the control system, when its parameters, structure and external influences change, control is provided that preserves the specified dynamic and static properties of the system;

2. During design and commissioning, in the initial absence of complete information about the parameters and structure of the control object and about the external influences, the system is automatically tuned in accordance with the specified dynamic and static properties.
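The first task can be illustrated by the simplest searchless adaptation scheme, the gradient (MIT) rule of model-reference adaptive control: the adjustable controller gain θ is driven by dθ/dt = -γ·e·y_m, where e = y - y_m is the deviation from the reference model. The first-order plant with unknown gain k and all numbers below are illustrative assumptions, not from the text:

    # Plant with unknown gain k:   y'  = -y  + k*u      (k unknown to the system)
    # Reference model (desired):   ym' = -ym + uc
    # Control law u = theta*uc; MIT rule: theta' = -gamma*e*ym, e = y - ym.
    k_true, gamma, dt, T = 2.0, 0.5, 1e-3, 60.0
    y, ym, theta, t = 0.0, 0.0, 0.0, 0.0

    while t < T:
        uc = 1.0 if (t % 20.0) < 10.0 else -1.0     # square-wave setpoint
        u = theta * uc
        e = y - ym
        y += dt * (-y + k_true * u)
        ym += dt * (-ym + uc)
        theta += dt * (-gamma * e * ym)             # gradient (MIT) adaptation
        t += dt

    print(f"theta = {theta:.3f} (ideal value 1/k = {1 / k_true:.3f})")

As the adaptation converges, the plant with its unknown gain behaves like the reference model, which is exactly the preservation of the specified dynamic properties required by task 1.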

Example 1 . Adaptive system for stabilizing the angular position of the aircraft.

Fig. 1. Adaptive aircraft stabilization system: the sensors D1, D2, D3 measure the environment disturbances f1(t), f2(t), f3(t); the computing devices VU1, VU2, VU3 retune the controller W1(p) in the closed loop u(t) → W1(p) → W0(p) → y(t), to whose input the disturbance f(t) is also applied.

When flight conditions change, the transfer function W0(p) of the aircraft changes, and with it the dynamic characteristic of the entire stabilization system:

\Phi(p) = \frac{W_1(p)\,W_0(p)}{1 + W_1(p)\,W_0(p)}.   (1)

Disturbances from the external environment f1(t), f2(t), f3(t), which lead to monitored changes in the system parameters, are applied at different points of the object.

The disturbing influence f(t), applied directly to the input of the control object, does not change its parameters, in contrast to f1(t), f2(t), f3(t). Therefore, during the operation of the system only f1(t), f2(t), f3(t) need to be measured.

In accordance with the feedback principle and expression (1), uncontrolled changes in the characteristic W0(p) due to disturbances and interference cause relatively small changes in Φ(p).

If the task is posed of compensating the monitored changes more completely, so that the transfer function Φ(p) of the aircraft stabilization system remains practically unchanged, then the characteristic of the controller W1(p) must be changed accordingly. This is done in the adaptive ACS built according to the scheme of Fig. 1. The environment parameters characterized by the signals f1(t), f2(t), f3(t), for example the dynamic pressure P_H(t), the ambient air temperature T_0(t) and the flight speed υ(t), are continuously measured by the sensors D1, D2, D3, and the current values of these parameters are fed to the computing devices VU1, VU2, VU3, which generate the signals used to adjust the characteristic W1(p) so as to compensate the changes in the characteristic W0(p).

However, in adaptive ACS of this type (with an open tuning cycle) there is no self-monitoring of the effectiveness of the tuning changes they implement.
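The open tuning cycle of Fig. 1 is, in essence, gain scheduling: the measured environment parameters directly recompute the controller so that the loop gain stays constant. A minimal sketch; the dependence K0 = α·P_H of the object gain on dynamic pressure and all numbers are assumed purely for illustration:

    # Open-loop adaptation (gain scheduling), as in Fig. 1: the controller
    # gain K1 is recomputed from the measured dynamic pressure P_H so that
    # the loop gain K1*K0 stays constant. The model K0 = ALPHA*P_H is an
    # illustrative assumption, not a real aircraft characteristic.
    ALPHA = 0.04           # assumed object gain per unit dynamic pressure
    LOOP_GAIN = 2.0        # desired constant loop gain K1*K0

    def object_gain(p_dyn):
        return ALPHA * p_dyn

    def scheduled_controller_gain(p_dyn_measured):
        return LOOP_GAIN / object_gain(p_dyn_measured)

    for p in (10.0, 50.0, 200.0):          # measured dynamic pressure values
        k1 = scheduled_controller_gain(p)
        print(f"P_H = {p:6.1f}  ->  K1 = {k1:7.3f}, "
              f"loop gain = {k1 * object_gain(p):.2f}")

Because the scheme is open, a wrong assumed model of K0(P_H) goes undetected, which is exactly the lack of self-monitoring noted above.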

Example 2. Extremal aircraft flight-speed control system.

Fig. 2. Functional diagram of the extremal aircraft flight-speed control system: an automatic extremum-seeking device forms the setpoint x0; the error x3 = x0 - x2 passes through an amplifying-converting device (output x4) and an executive device (output x5) to the controlled object (output x1), which is subject to the disturbing influence Z; a measuring device closes the loop with the signal x2.

The extremal system determines the most advantageous program, i.e., the value of x1 (the required aircraft speed) that must be maintained at the given moment so that the fuel consumption per unit of path length is minimal.

Here Z is the disturbing influence acting on the object, and x0 is the control action on the system.

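The search performed by the extremal controller of Fig. 2 can be sketched as the simplest step-type extremum search (perturb and observe): the speed setpoint is stepped, the fuel consumption per unit path is compared before and after the step, and the step is reversed and halved when the criterion gets worse. The consumption curve q(v), with a minimum at v = 240, is an assumed illustrative characteristic unknown to the controller:

    # Step-type extremum search for the speed x1 minimizing fuel
    # consumption per unit path length. q(v) is an assumed illustrative
    # characteristic; the real dependence is unknown to the system.
    def q(v):
        return 1.0 + 1e-4 * (v - 240.0) ** 2

    v, step = 180.0, 8.0                   # initial speed setpoint and step
    q_prev = q(v)
    for _ in range(40):
        v += step                          # trial action
        q_now = q(v)                       # evaluate the result
        if q_now > q_prev:                 # got worse: reverse, shrink step
            step = -0.5 * step
        q_prev = q_now

    print(f"found speed ~ {v:.1f} (assumed optimum 240.0)")

The trial steps and the evaluation of their results are exactly the mechanism of search systems described earlier.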

Self-organizing systems


In the general case, the automatic control system consists of a control object OS with working parameter Y, a controller R and a programmer (setpoint device) P (Fig. 6.3), which generates the setting action (the program) needed to achieve the control goal while meeting the qualitative and quantitative requirements. The programmer takes into account the totality of external information (signal I).

Fig. 6.3. Optimal control structure

The task of creating an optimal system is to synthesize a controller and a programmer for a given control object that best solve the required control goal.
In the theory of automatic control, two related problems are considered: the synthesis of an optimal programmer and the synthesis of an optimal controller. Mathematically, they are formulated in the same way and solved by the same methods. At the same time, the tasks have specific features that require a differentiated approach at a certain stage.

A system with an optimal programmer (optimal program control) is called optimal with respect to the control mode. A system with an optimal controller is called optimal with respect to the transient process. An automatic control system is called optimal if both the controller and the programmer are optimal.
In some cases, it is considered that the programmer is given and it is required to determine only the optimal controller.

The problem of synthesis of optimal systems is formulated as a variational problem or as a problem of mathematical programming. In this case, in addition to the transfer function of the control object, constraints on the control actions and the operating parameters of the control object, boundary conditions and the optimality criterion are specified. The boundary conditions determine the state of the object at the initial and final moments of time. The optimality criterion, a numerical indicator of the quality of the system, is usually specified as a functional

J=J[u(t),y(t)],

where u(t) are the control actions and y(t) are the parameters of the control object.

The optimal control problem is formulated as follows: for a given control object, constraints, and boundary conditions, find a control (programmer or controller) for which the optimality criterion takes the minimum (or maximum) value.


OPTIMAL SYSTEM

OPTIMAL SYSTEM: an automatic control system that ensures the best (optimal) functioning of the controlled object from a certain point of view. The characteristics of the object and the external disturbing influences can change in an unforeseen way, but, as a rule, within certain limits. The best functioning of the control system is characterized by the so-called optimal control criterion (optimality criterion, objective function), a quantity that determines the effectiveness of achieving the control goal and depends on the variation of the system's coordinates and parameters in time or in space. The optimality criterion can be various technical and economic indicators of the functioning of the object: efficiency, speed, the average or maximum deviation of the system parameters from the specified values, production cost, individual indicators of product quality or a generalized quality indicator, etc. The optimality criterion can refer to the transient process, to the steady-state process, or to both. Regular and statistical optimality criteria are distinguished. The former depend on regular parameters and on the coordinates of the controlled and controlling systems. The latter are used when the input signals are random functions or (and) it is necessary to take into account random perturbations generated by individual elements of the system. In its mathematical description, the optimality criterion can be either a function of a finite number of parameters and coordinates of the controlled process, which takes an extreme value when the system functions optimally, or a functional of the function describing the control law; in this case, the form of this function for which the functional takes an extreme value is determined. Optimal systems are calculated using the Pontryagin maximum principle or the theory of dynamic programming.

The optimal functioning of complex objects is achieved by using self-adapting (adaptive) control systems, which can automatically change the control algorithm, their characteristics or their structure during operation in order to keep the optimality criterion unchanged under arbitrarily changing system parameters and operating conditions. Therefore, in the general case, an optimal system consists of two parts: a constant (invariant) part, which includes the control object and some elements of the control system, and a variable (changeable) part combining the remaining elements. See also Optimal control. M. M. Meisel.

To design an optimal ACS, complete information about the OS, the disturbing and setting influences, and the initial and final states of the OS is required. Next, an optimality criterion must be chosen. One of the system quality indicators can be used as such a criterion, but the requirements for individual quality indicators are, as a rule, contradictory (for example, an increase in system accuracy is achieved at the expense of the stability margin). Furthermore, the optimal system should have the minimum possible error not only when processing a particular control action, but during the entire operating time of the system. It should also be borne in mind that the solution of the optimal control problem depends not only on the structure of the system, but also on the parameters of its constituent elements.

Achieving optimal functioning of an ACS is largely determined by how the control is carried out in time, i.e., by the program or control algorithm. In this regard, integral criteria are used to assess the optimality of systems, calculated as the sum of the values of the system quality parameter of interest to the designers over the entire time of the control process.

Depending on the accepted optimality criterion, the following types of optimal systems are considered.

1. Systems optimal in speed, which provide the minimum time for transferring the OS from one state to another. In this case the optimality criterion is

I = \int_{t_n}^{t_k} dt = t_k - t_n \to \min,

where t_n and t_k are the moments of the beginning and end of the control process.

In such systems, the duration of the control process is minimal. The simplest example is an engine control system that provides the minimum time of acceleration to a given speed, taking into account all existing restrictions.

2. Systems optimal in resource consumption, which guarantee the minimum of the criterion

I = k \int_{t_n}^{t_k} |U(t)|\,dt \to \min,

where k is a coefficient of proportionality and U(t) is the control action.

Such an engine control system ensures, for example, the minimum fuel consumption over the entire driving time.

3. Systems optimal with respect to control losses (or accuracy), which provide the minimum control errors on the basis of the criterion

I = \int_{t_n}^{t_k} e^2(t)\,dt \to \min,

where e(t) is the dynamic error.

In principle, the problem of designing an optimal ACS can be solved by the simplest method: enumeration of all possible variants. This method requires a great deal of time, but modern computers make it usable in some cases. To solve optimization problems, special methods of the calculus of variations (the maximum principle method, the dynamic programming method, etc.) have been developed that make it possible to take into account all the restrictions of real systems.
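The enumeration method can be shown literally: discretize time and the admissible control values, then evaluate the criterion for every control sequence. In this assumed toy problem (a double integrator, N = 6 steps, u ∈ {-1, 0, 1}) the search already requires 3^6 = 729 simulations, which shows why the method scales poorly:

    import itertools

    # Brute-force enumeration of all control sequences for a discretized
    # double integrator x1' = x2, x2' = u. Assumed illustrative problem:
    # bring (x1, x2) from (1, 0) as close to (0, 0) as possible in N steps.
    N, dt = 6, 0.5
    best_cost, best_seq = float("inf"), None

    for seq in itertools.product((-1.0, 0.0, 1.0), repeat=N):   # 3^N variants
        x1, x2 = 1.0, 0.0
        for u in seq:
            x1 += dt * x2
            x2 += dt * u
        cost = x1**2 + x2**2            # terminal criterion
        if cost < best_cost:
            best_cost, best_seq = cost, seq

    print("best sequence:", best_seq)
    print(f"terminal cost: {best_cost:.4f}")

Dynamic programming reaches the same answer while examining far fewer variants, which is the practical motivation for the special methods mentioned above.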

As an example, consider what the optimal speed control of a DC electric motor should be if the voltage applied to it is limited by the limiting value U_pr and the motor itself can be represented as a second-order aperiodic link (Fig. 13.9, a).

The maximum principle method makes it possible to calculate the law of variation u(t) that ensures the minimum time of acceleration of the motor to a given speed (Fig. 13.9, b). The control process of this motor must consist of two intervals, in each of which the voltage u(t) takes its maximum admissible value: in the interval 0 - t_1, u(t) = +U_pr; in the interval t_1 - t_2, u(t) = -U_pr. To implement such control, a relay element must be included in the system.

Like conventional systems, optimal systems may be open-loop, closed-loop or combined. If the optimal control that transfers the CO from the initial state to the final one does not depend, or depends only weakly, on the disturbing influences, it can be specified as a function of time U = U(t); an open-loop program control system is then obtained (Fig. 13.10, a).

The optimal program P, designed to achieve the extremum of the accepted optimality criterion, is entered into the program device PU. Control according to this scheme is used in numerically controlled machine tools and simple robots, in launching rockets into orbit, etc.

Fig. 13.9: a) with a common control device; b) with a two-level control device.

Fig. 13.10. Schemes of optimal systems: a) open-loop; b) combined.

The most advanced, though also the most complex, are combined optimal systems (Fig. 13.10, b). In such systems, the open loop carries out optimal control according to the given program, while the closed loop, optimized for minimum error, works off the deviations of the output parameters. Thanks to the disturbance-measuring circuit f, the system becomes invariant with respect to the entire set of setting and disturbing influences.

To implement such a perfect control system, all disturbing influences must be measured accurately and quickly. However, this is not always possible; much more often only averaged statistical data about the disturbances are known. In many cases, especially in telecontrol systems, even the setting action enters the system together with interference. And since interference is, in the general case, a random process, only a statistically optimal system can be synthesized. Such a system will not be optimal for each specific realization of the control process, but it will be on average the best for the entire set of realizations.

For statistically optimal systems, averaged probabilistic estimates are used as optimality criteria. For example, for a tracking system optimized for minimum error, the statistical optimality criterion is the mathematical expectation of the squared deviation of the output action from the specified value, i.e., the variance:

I = M\{[y(t) - g(t)]^2\} \to \min.
Other probabilistic criteria are also used. For example, in a target detection system, where only the presence or absence of a target is important, the probability of an erroneous decision P_err is used as the optimality criterion:

P_err = P_mt + P_fd \to \min,

where P_mt is the probability of missing the target and P_fd is the probability of false detection.
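A statistical criterion of this kind can only be estimated over an ensemble of realizations. The Monte-Carlo sketch below estimates the variance criterion I = M{[y - g]^2} for two candidate gains and shows which is better on average; the first-order tracking loop, the noise level and the gains are all assumptions for illustration:

    import random

    # Monte-Carlo estimate of the statistical criterion I = M{[y(t) - g]^2}
    # for an assumed first-order tracking loop dy = K*(g - y)*dt + noise.
    def mean_sq_error(K, runs=200, steps=400, dt=0.01, sigma=0.5):
        rng = random.Random(1)                 # fixed seed: reproducible
        total = 0.0
        for _ in range(runs):
            y, g, acc = 0.0, 1.0, 0.0          # setpoint g = 1
            for _ in range(steps):
                y += dt * K * (g - y) + sigma * dt**0.5 * rng.gauss(0.0, 1.0)
                acc += (y - g) ** 2
            total += acc / steps               # time average of one run
        return total / runs                    # ensemble average

    for K in (1.0, 5.0):
        print(f"K = {K}: estimated criterion I = {mean_sq_error(K):.4f}")

No single realization is optimal by itself; the comparison is meaningful only for the averaged criterion, exactly as stated above.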

In many cases, the computed optimal ACS turn out to be practically unrealizable because of their complexity. As a rule, exact values of high-order derivatives of the input actions are required, which is technically very difficult to implement. Often even a theoretically exact synthesis of an optimal system is impossible. However, optimal design methods make it possible to build quasi-optimal systems, simplified to some extent, yet still achieving values of the adopted optimality criterion close to the extreme ones.