The Story of the Principles of Automatic Control
The document discusses the principles of automatic control, highlighting the differences between DCS programming and traditional programming, the challenges of human-machine interfaces, and the importance of effective evaluation of control-loop performance. It emphasizes that control engineers must understand both the technical and the human aspects of their work, including the dynamics of the processes they manage. The text also covers simulation, real-time optimization, and the need to collaborate with operators and process engineers to build successful control systems.
When I was a child I loved to read whatever books I could find, and in those ten years there was not much to read. But two phrases stuck with me: "mechanization" and "automation." As a kid I could not figure out the difference. Doesn't a machine move by itself either way? Growing up, I finally understood a little: mechanization replaces human physical labor with machines, but the machines still have to be operated, or they will not know what to do and what not to do; automation replaces repetitive human mental labor and is used to manage the machines. In other words, automation is in charge of mechanization, so studying automation means being in charge of those who study machinery...
Oh no, no, that's not it at all!
Some people point to examples of automation in antiquity, but automatic control in the modern sense began with Watt's steam engine. It is said that Newcomen invented a steam engine before Watt, but he never solved the problem of speed control: the speed would climb out of control, which at best damaged the machine and at worst caused a serious accident. Watt mounted a small rod on the shaft of the steam engine. One end of the rod carried a small weight, the other end was linked to the steam release valve, and somewhere in the middle the rod pivoted on a fulcrum attached to the shaft. As the shaft turns, centrifugal force swings the weighted rod outward. When the speed is too high, the rod swings high and presses the release valve open, and the speed drops; when the speed is too low, the rod swings low, the release valve closes, and the speed picks up again. In this way the steam engine automatically maintains a stable speed, which is both safe and convenient. Because of this tiny speed governor, Watt's name is linked to the Industrial Revolution, while Newcomen's name is found only in history books.
There are many similar examples among mechanical systems; the indispensable household flush toilet is another. After flushing, the water level in the tank falls, the float drops with it, and the inlet valve opens. As the water level rises, the inlet valve gradually closes, until the level reaches the specified height, the valve closes completely, and the tank is just ready for the next use. It is a simple but very clever level control system, a classic design, although, as an aside, not one that is easy to analyze with classical control theory.
These mechanical systems are cleverly designed, reliable, and exquisite. But in practice, if every problem demanded that kind of creative flash, it would be exhausting. Far better to have a systematic method that could solve "all" automatic control problems. That is the origin of control theory.
Adults taught us to watch the road while walking. Why? If you don't watch the road, you drift off course without knowing it, and end up bumping into things left and right. And if you do watch the road? The moment you drift, you see it, quickly adjust your steps, and walk back onto the right path. Here is the first important concept in automatic control: feedback.
Feedback is a process:
- Set a goal. For the child walking, this is the direction to go.
- Measure the state. The child's eyes watching the road are measuring the direction of travel.
- Compare the measured state with the goal: compare the direction the eyes see with the direction in mind, and judge whether it is correct; if not, by how much it is off.
- Decide the adjustment. Based on the deviation between the actual direction and the target, determine how much to adjust.
- Execute: actually take the steps that bring you back onto the right path.
Throughout the walk, this feedback process repeats again and again, so the child does not wander off. But there is a catch: if everything happened simultaneously, in an instant, feedback could not work. For feedback to work, there must be some reaction time. Fortunately, everything in the world takes time to happen, and that process buys the time feedback needs.
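The five steps above can be sketched as a generic loop. This is only an illustration; all the names (`feedback_loop`, the half-the-error controller) are made up for the example:

```python
def feedback_loop(setpoint, measure, actuate, controller, steps):
    """Generic feedback loop: measure, compare, decide, act -- repeatedly."""
    for _ in range(steps):
        measurement = measure()            # 2. measure the state
        error = setpoint - measurement     # 3. compare with the goal (step 1)
        adjustment = controller(error)     # 4. decide the adjustment
        actuate(adjustment)                # 5. actually execute it

# Toy walk: heading starts 10 degrees off course; each step corrects half the error.
state = {"heading": 10.0}
feedback_loop(
    setpoint=0.0,
    measure=lambda: state["heading"],
    actuate=lambda u: state.update(heading=state["heading"] + u),
    controller=lambda e: 0.5 * e,
    steps=20,
)
```

After twenty corrections the heading is essentially back on course; notice the loop never needs to know the process in advance, only to keep measuring it.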
When I was a child, my mother steamed food in a pot. Taking things out after steaming was always a bother: you needed a rag as a pad to avoid scalding, but the gap between the bowl and the pot was small, and getting a rag in there was awkward. Not knowing any better, I often volunteered to pull the hot bowl out with my bare hands. As long as you move fast enough, hand up and bowl down, you don't get burned. Of course, if you kept holding the hot bowl, your hand would eventually reach the same temperature as the bowl, and your palm and fingers would surely be scalded. From the moment of contact until the skin reaches the bowl's surface temperature, there is a gradual heating process: this is the dynamic process.
Two things matter here: how fast the temperature rises, and how high it will finally go. If you know these two parameters, and know how much temperature your hand can tolerate, you can in theory calculate how long you can hold a hot bowl without getting burned. The feedback process is also called a closed-loop process; and since there is a closed loop, there is also an open loop.
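For a first-order process, the "how fast" and "how high" of the hand-on-bowl story are the time constant and the steady-state gain. A minimal sketch; the numbers (a 60-degree final rise, a 5-second time constant) are invented for illustration:

```python
import math

def first_order_response(t, gain, tau):
    """Step response of a first-order process: y(t) = gain * (1 - exp(-t/tau))."""
    return gain * (1.0 - math.exp(-t / tau))

# Illustrative numbers: skin warms toward a 60-degree rise with a 5 s time constant.
tau, rise = 5.0, 60.0
for t in (0, 5, 15, 30):
    print(t, round(first_order_response(t, rise, tau), 1))
```

After one time constant the response has covered about 63% of the final rise; after three, about 95%. Knowing `tau` and `rise` is exactly what lets you compute how long the bowl can be held.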
An open loop is a control process without feedback: the control action is set once and then executed, with no correction against actual measurements. Open-loop control works only for simple processes. A washer or dryer running on a timer is an example: how clean or dry the clothes come out depends entirely on the initial setting. For problems like washing and drying, spending a little extra time wastes a little, but guarantees the result.
An air conditioner is different: you cannot ignore the room temperature and simply run fixed 10-minutes-on, 5-minutes-off cycles. There must be closed-loop control based on the actual temperature, or who knows where the room temperature will end up. I remember that in the 1980s reportage literature was very popular. Xu Chi wrote "Goldbach's Conjecture" and the whole country wanted to become scientists; novelists wanted to write about scientists too, and modest achievements would not do, so someone, determined to astonish, wrote a story about "feedback-free fast tracking." At the time I was grinding through my college textbooks and was very curious about this new scientific discovery, but from beginning to end I never saw how the thing tracked quickly without feedback. Looking back, a novel is just a novel, but that writer went too far. Tracking without feedback, not looking at the target, not looking at your own motion: what do you track with? It is about as plausible as a perpetual-motion machine. Why not pick a better topic, like cold fusion, which is at least theoretically possible? But the topic is beside the point.
In mathematics, a dynamic process is described by differential equations. Feedback establishes a relation between the input and output terms of the differential equation describing the dynamic process, and that relation changes the nature of the equation. Automatic control lives in this interplay of feedback and dynamics. Air conditioning a room is a simple control problem. A single room, that is; controlling all the rooms of a high-rise from a central plant is a rather complex problem, which we will not discuss here.
In summer, suppose the indoor temperature is set at 28 degrees. If the actual temperature is above 28 degrees, the air conditioner starts and cools the room; if it is below 28 degrees, the air conditioner shuts off and the room warms naturally toward the ambient temperature. With such simple on-off control, the indoor temperature should be held around 28 degrees. But there is a problem: if the air conditioner starts the instant the temperature creeps above 28 degrees and stops the instant it dips below, and if the sensor and the switch are sensitive enough, the switching frequency becomes arbitrarily high. The unit chatters on and off until it breaks, which is bad for the machine and in fact unnecessary. The solution is a "dead band": turn on when the temperature rises above 29 degrees, and turn off when it falls below 27. Just be careful not to wire it the other way around, or the controller will have a nervous breakdown.
With a dead band, the indoor temperature can no longer be held strictly at 28 degrees; instead it "wanders" between 27 and 29. If the ambient temperature is fixed, the cooling capacity of the air conditioner is fixed, and the room's heating/cooling dynamics are known, the period of this temperature "swing" can be calculated. But since this is a story, we won't bother.
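The dead-band switch described above can be sketched as follows. The thresholds match the story (27/29 degrees), but the crude thermal model and its rates are invented for illustration:

```python
def hysteresis_step(temp, cooling_on, low=27.0, high=29.0):
    """On-off control with a dead band: switch only at the band edges."""
    if temp > high:
        cooling_on = True
    elif temp < low:
        cooling_on = False
    return cooling_on      # between the edges, keep the previous state

# Crude thermal model: room warms when the unit is off, cools when it runs.
temp, on = 30.0, False
trace = []
for _ in range(100):
    on = hysteresis_step(temp, on)
    temp += -0.5 if on else 0.3    # illustrative cooling / heating rates
    trace.append(temp)
```

After a short transient the temperature settles into a sustained "swing" around the 27-29 band (overshooting the edges slightly because the model steps in discrete chunks), exactly the wandering behavior described above.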
This on-off control looks crude, but it has real benefits. For most processes its accuracy is not high, but it is stable in the sense that the system output is "bounded": the measured value stays within a fixed range and cannot diverge without limit. This is different from the asymptotic stability emphasized in general control theory; it is so-called BIBO stability. The former requires the output to eventually converge to the set point; the latter only requires the output to remain bounded under bounded input. BIBO stands for "bounded input, bounded output."
For simple processes with modest accuracy requirements, this switch control (also called relay control, because it was first implemented with relays, i.e., electromagnetic switches) is sufficient. But in many cases such rough-and-ready control will not do. Picture a car on the highway with the speed set in cruise control: drift a few kilometers per hour low and the car feels sluggish; drift a few high and the police hand you a ticket. Who wants that?
Switch control is discontinuous: the control action is either "full dose" or nothing, with no in-between. If the air conditioner had three settings (small, medium, large) and chose among them according to the gap between room temperature and set point, the control accuracy would improve considerably; in other words, the amplitude of the temperature "swing" would shrink. So if the air conditioner had still more settings, from small through medium to large, would the accuracy be higher still? Yes. In that case, why not a stepless, continuously adjustable air conditioner? Wouldn't that control room temperature even more precisely? Yes. A continuously adjustable air conditioner can control temperature accurately, but then switch control can no longer be used.
Among household air conditioners, continuously adjustable ones are not the majority, but a hot shower is a classic continuous control problem, because the faucet can adjust the water flow continuously. In the shower, assume the cold water tap is left alone and only the hot water is adjusted: if the water is too hot, close the hot tap a little; too cold, open it a little. In other words, the control action changes in the direction that reduces the control deviation, which is called negative feedback. With the direction settled, the question becomes the amount: when the temperature is 1 degree too high, how much should the hot water be closed?
Experience says that for a given faucet and water pressure, a temperature 1 degree too high calls for closing the hot tap by a certain amount, say one small notch. In other words, the control amount is proportional to the control deviation. This is the classic proportional control law: control amount = proportional gain × control deviation; the greater the deviation, the greater the control action. The control deviation is the difference between the actual measured value and the set value, or target. Under proportional control, when the deviation reverses sign, the control action reverses too: if the shower set point is 40 degrees, then when the actual water temperature is above 40 degrees the hot tap moves toward closed, and when it is below 40 degrees the hot tap moves toward open.
However, the proportional control law cannot guarantee that the water temperature lands exactly on 40 degrees. In real life, people then fine-tune the hot tap: as long as the temperature is not quite right, they keep nudging it, bit by bit, until it is. This law of gradual fine-tuning, which keeps adjusting as long as the deviation has not disappeared, is called the integral control law, because the control amount is proportional to the accumulation of the control deviation over time; its scale factor is the integral gain. In industry, the reciprocal of the integral gain is called the integral time constant; its physical meaning is the time required for the integral action to double the control amount when the deviation is held constant. Note that the control deviation can be positive or negative, depending on whether the measured value is above or below the set value, so as long as the control loop is stable (that is, the measured value eventually settles at the set value), the accumulated deviation does not grow without bound. To repeat: the basic function of integral control is to eliminate the residual steady-state deviation, also called the offset.
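A minimal sketch of why integral action removes the offset that pure proportional control leaves behind. The process model and the gains here are invented for illustration, not taken from any real loop:

```python
def simulate(kp, ki, setpoint=40.0, steps=400, dt=0.1):
    """Control a first-order process y' = -y + u with P or PI (ki=0 -> pure P)."""
    y, integral = 20.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt             # accumulated deviation over time
        u = kp * error + ki * integral     # proportional + integral action
        y += dt * (-y + u)                 # illustrative process model
    return y

p_only = simulate(kp=2.0, ki=0.0)    # settles well short of 40: residual offset
with_pi = simulate(kp=2.0, ki=1.0)   # integral action removes the offset
```

With proportional control alone, the loop settles where the proportional action exactly balances the process (about 26.7 here, not 40): the offset. Adding the integral term keeps pushing as long as any deviation remains, so the measurement ends up at the set point.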
Proportional and integral control together handle a large class of control problems, but there is still room for improvement. If the incoming water temperature changes rapidly, people adjust the hot tap according to the rate of change: if the temperature is rising, move the tap toward closed, and the faster the rise, the more you close; if it is falling, move toward open, and the faster the fall, the more you open. This is the so-called differential (derivative) control law, because the control amount is proportional to the rate of change of the measured value; its scale factor is the differential gain, in industry also called the differential time constant. The differential time constant has no specific physical meaning; the integral constant is called a time constant, and the differential one simply follows suit. Derivative control cares not about the specific value of the measurement but about the direction and speed of its change. It has many advantages in theory and practice, but its limitations are just as obvious: if the measurement signal is not very "clean," with small "burrs" or disturbances from time to time, derivative control will chase them and produce a lot of unnecessary, even wrong, control action. So derivative control is used very cautiously.
The proportional-integral-derivative control law is the most commonly used in industry, generally known by its English abbreviation, PID control. Even today, when more advanced control laws are widely available, various forms of PID still account for more than 85% of all control loops.
Within PID control, the characteristic of integral action is: as long as there is offset (residual control deviation), it keeps ratcheting up the control action, step by step, until the offset disappears. Its effect is therefore relatively slow, and except in special cases it is no good in an emergency as the primary control action. The characteristic of derivative action is that even while the measured value is still below the set point, a rapid rise should be suppressed as early as possible; waiting until the value overshoots the set point is reacting too late, and this is where derivative control shows its skill. But as a primary control action, derivative control looks only at the trend, not the value, so at best it steadies the measurement, and where it steadies is up to luck; it cannot serve as the basic control action. Proportional control has none of these problems: it responds fast and stabilizes well, and it is the most basic control action, the "skin," while integral and derivative action, which enhance it and are rarely used alone, are the "hair." In practice, proportional and integral are usually used together, proportional doing the main work and integral eliminating the offset. Derivative is added only when the controlled process responds slowly and needs early compensation. Proportional-plus-derivative alone is rare.
The accuracy of continuous control is beyond what switch control can match, but the high precision has a cost: the stability problem. The control gain determines how sensitively the control action responds to the deviation. If gain determines sensitivity, isn't more sensitive always better? Not so. Take cruise control again: speed a little low, add a little throttle; lower still, add more; too high, the reverse. But if a slightly low speed provokes a huge jab of throttle, and a lower one a bigger jab still, the speed will not settle at the set value and may even run out of control. That is instability. So setting the control gain is a delicate matter. Life offers similar examples: an overheated national economy needs adjustment, but over-adjust and you get a "hard landing" and recession; over-stimulate and you get "false prosperity." A "soft landing" requires the adjustment to be just right. That too is a stability question for an economic dynamic system.
In practice, how much gain is best? Theory offers many calculation methods, but practitioners generally rely on experience and on-line trial to find the best gain; the industry jargon for this is parameter tuning. If the system response lags and drags behind the control action, with slow large-amplitude oscillation, the integral is usually too strong; if the response is jumpy and twitchy, showing high-frequency small-amplitude oscillation, there is usually a bit too much derivative. Oscillation at intermediate frequency is, of course, a matter of the proportional gain. But every system's frequencies are different, and what counts as high or low frequency cannot be settled in a few words; as Chairman Mao said, "analyze each specific situation specifically," so let me leave it at that.
More concretely, there are two ways to tune the parameters. One is to first adjust the proportional gain until the loop is basically stable, then add just enough integral to eliminate the offset, and only in the most necessary cases, such as a slow temperature process or a large liquid-level process where measurement noise is very low, add a little derivative. This is the "academic" approach, and it works in most cases. But industry also has a "crooked path": use a very small proportional gain and greatly strengthen the integral action. This runs completely against control-theoretic analysis, yet in practice it works. The reason is that when measurement noise is serious, or the process response is sluggish, the gentle integral control law does not easily excite the unstable factors, especially the high-frequency part of the model uncertainty. This is "stability above all."
In many cases, once the initial PID parameters are tuned, they are left alone as long as the loop is not unstable and performance has not clearly degraded. But what if the loop does go unstable? Most real processes are open-loop stable: hold the control action constant and the response settles at some value, though not necessarily the set value. So the first move against instability is to cut the proportional gain, by a third, a half, or more as the situation demands, while increasing the integral time constant, often by multiples, and reducing or even removing the derivative. If there is feedforward control, moderately reducing the feedforward gain also helps. In practice, performance does not inexplicably go bad on its own; this kind of "firefighting" retuning is usually temporary, and once the mechanical or raw-material problems in the process are fixed, the parameters should be set back to their original values, or the loop will be left too "lazy."
For a new plant, the system has not yet run, and the parameters cannot be tuned against the actual response. The usual practice is to estimate initial parameters first and then tune the loops one by one at startup. My own experience: for an ordinary flow loop, a proportional gain of about 0.5, an integral time of about 1 minute, and zero derivative rarely cause big trouble. A temperature loop can start from 2, 5, and 0.05; a liquid-level loop from 5, 10, and 0; a gas-phase pressure loop from 10, 20, and 0. These are empirical estimates, of course; each specific situation needs its own analysis, and no numbers "apply everywhere."
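Those rules of thumb can be collected into a small lookup table. The values are the empirical starting points from the paragraph above (proportional gain, integral time in minutes, derivative time), to be retuned on the live loop:

```python
# Empirical starting PID parameters by loop type, from the text above:
# (proportional gain, integral time in minutes, derivative time)
INITIAL_PID = {
    "flow":        (0.5,  1.0, 0.0),
    "temperature": (2.0,  5.0, 0.05),
    "level":       (5.0, 10.0, 0.0),
    "pressure":    (10.0, 20.0, 0.0),   # gas-phase pressure
}

def initial_tuning(loop_type):
    """Return a starting (Kp, Ti, Td) guess; retune on the actual loop."""
    return INITIAL_PID[loop_type]
```

A table like this only seeds the commissioning work; it does not replace the loop-by-loop tuning described above.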
Derivative action is usually reserved for slow systems, but there are exceptions. I once ran into a small condensate drum, only two feet in diameter and five feet long, with a flow of 8-12 tons per hour. At the slightest disturbance the level changed very quickly, and no amount of proportional or integral tuning would steady it; by the time the control valve began to react, the level had already hit the top or the bottom. In the end a derivative of 0.05 was added, so the valve began to act the moment the level started to move, and the loop stabilized. This runs counter to the conventional tuning path, but in this case it was the "only" option, because measurement lag and control-valve saturation had become the main threat to stability.
A few more words on integral-dominant control in industry. Academically, the stability that counts is asymptotic stability; BIBO stability cannot prove asymptotic stability and is considered not quite presentable. But industry cares about two kinds of stability that look similar and differ in substance: one is asymptotic stability; the other is steadiness, which does not necessarily require convergence to the set point. Steadiness matters more than convergence: the requirement is that the variable sit still at some value and not wander, while whether it is exactly at the set point matters less, as long as it is not too far off. Examples abound. Reactor pressure is an important parameter: if the pressure is unsteady, the feed ratio and catalyst feed are unsteady, and the reaction is unsteady; but whether the pressure sits at 10 atmospheres or 12 matters little, as long as it moves slowly but steadily toward the set value. This is a situation control theory says little about, and it is an important reason integral-dominant control is so common in industry.
Earlier I mentioned the frequency of a system, meaning the frequency of its sustained oscillation. There are three tribes in the control field: one comes from electromechanical dynamic systems (the electrical engineers, including aviation, robotics, and so on); one from continuous processes (chemical, metallurgical, paper-making); and one from applied mathematicians who characterize stability through differential equations. In the days of Watt and the flush toilet, well water did not intrude on river water, and all was peaceful. But once control rose from art to theory, someone always wants "unification." The electricians struck first, and a perfectly good control theory was stuffed into the electricians' frequency domain. Mind you, that is no ordinary frequency, that is... complex frequency. Since that eccentric crowd (here comes a kick at them) could conjure up reactive power, they could conjure up complex frequency too; their self-torment I could forgive, but we innocents are dragged along to suffer the same mental torture.
The root of all this is system stability. As noted, PID can go unstable if the parameters are set badly. Apart from groping, is there a way to calculate suitable PID parameters theoretically? As noted, dynamic processes are described by differential equations, but at the PID stage this is a very narrow class: single-variable linear ordinary differential equations with constant coefficients. If you remember your calculus, you remember that besides separation of variables, the most common method is to substitute exp(λt) into the equation, which turns it into an algebraic characteristic equation in λ. The roots may be real or complex, and if complex, they get expanded with trigonometric functions (feel the nightmare coming back?). As long as every root's real part is negative, the differential equation is stable, because the negative exponential terms eventually decay to zero; the imaginary parts do not affect stability. But analysis along these lines is not easy, and it does not get beyond "analyze each specific situation specifically"; it is hard to draw general conclusions.
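The root test above is easy to mechanize for a quadratic characteristic equation, using nothing but the quadratic formula; the polynomials below are examples, not drawn from any particular loop:

```python
import cmath

def is_stable_quadratic(a, b, c):
    """Stability of a*l^2 + b*l + c = 0: both roots need negative real part."""
    disc = cmath.sqrt(b * b - 4 * a * c)       # complex sqrt handles b^2 < 4ac
    roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
    return all(r.real < 0 for r in roots)

# l^2 + 3l + 1 = 0: roots about -0.38 and -2.62, both negative -> stable.
print(is_stable_quadratic(1, 3, 1))    # True
# l^2 - l + 1 = 0: roots 0.5 +/- j0.87, positive real part -> unstable.
print(is_stable_quadratic(1, -1, 1))   # False
```

For higher-order polynomials the same idea applies to all the roots, which is precisely what the Routh-Hurwitz criterion and the graphical methods below let people check without computing the roots at all.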
The French are famous for flair and fine food, but once they get going they do not behave. One of them, Laplace, invented the Laplace transform and turned ordinary differential equations into polynomials in s. Then the electricians, as if addicted to self-torment, substituted s = jω (there is your frequency) and built a whole perverse apparatus of frequency analysis for studying system stability. To be fair, calling it perverse is not entirely just: in the days before computers, charts of every kind were the most effective analysis tools, also called "geometric analysis," and frequency analysis was no exception.
The Yankee Evans came up with the root locus, and the idea is genuinely interesting. Using the gain as the independent variable, he plots the roots of the system (real and imaginary parts together) on the complex plane. If the locus stays in the left half-plane, the real parts are negative and the system is stable. Going further, the critical frequency of the system response can also be read off. The biggest advantage is that for common systems there is a fixed set of plotting rules: a practiced hand, old bull or young calf, can sketch the root locus at a glance and then tell you at what gain change the system starts to oscillate, at what gain it goes unstable, and so on.
The root locus is still relatively polite; the Nyquist, Bode, and Nichols methods are more perverse still, as extreme as you can imagine. All the electricians' doing. Today computer analysis is everywhere, yet the classical graphical methods have enduring charm, because a chart not only tells you whether the system is stable and gives other dynamic response parameters, it also tells you qualitatively how the closed-loop behavior shifts when the gain, or even the system parameters, change. What, didn't I just call these people perverse? Well, perversity has its own charm, doesn't it? Ha-ha.
Control theory featuring frequency analysis (also called frequency-domain analysis) is known as classical control theory. Classical control theory can analyze the stability of a system, but with two preconditions: first, the mathematical model of the controlled process must be known, which is not easy to obtain in practice; second, that model must not change or drift, which is even harder to guarantee. For a simple process a differential equation can be established, but controlling a simple process is no trouble anyway; empirical tuning settles it. The troublesome loops are precisely the ones whose models are too hard to build, or whose models carry so much uncertainty that theoretical analysis loses its meaning. Classical control theory has been applied very successfully in machinery, aviation, and electric motors: starting from F = ma, the dynamics of "all" mechanical systems can be modeled, a lump of iron does not change its weight inexplicably, and the major environmental parameters can all be measured. But classical control theory is far less successful in chemical process control. Take a 50-tray distillation column: one gas-phase feed, one liquid-phase feed, a side draw plus products from the top and the bottom of the column, an air-cooled overhead condenser, a base reboiler plus an intermediate reboiler. Model it slowly if you like; by the time the model is built, the air-cooled condenser is being buffeted by wind, frost, rain, and snow, the high-pressure steam to the reboiler is disturbed by neighboring units, the temperature and saturation of the gas-phase feed have been altered upstream, and the composition of the liquid-phase feed has been altered upstream, yet composition cannot be measured in time (the online analyzer takes 45 minutes). The dynamic characteristics have all changed.
Old Goethe said two hundred years ago: theory is gray, and the tree of life is evergreen. We know his Lotte preferred gold and silver, or at least red, yet had to make do with green. In practice, PID has a lot of cousins who help the big cousin conquer the world.
The characteristic of proportional control is: the larger the deviation, the larger the control action. But in practice that is sometimes not enough. When the deviation is large, make the proportional gain large as well, strengthening the correction and hauling the system back toward the set point as early as possible; when the deviation is small there is no need to roar, so take it gently, use a smaller gain, and favor stability. This is the origin of dual-gain PID (also called dual-mode PID). Think of an anti-aircraft gun tracking an enemy plane, which is a control problem: if the barrel is still pointing far from the target, swing it toward the target angle as fast as possible; once it is close, take slow, careful aim. Industry is full of similar problems. A special case of dual-gain PID is dead-band PID (PID with dead band): the gain at small deviations is zero, that is, when the measurement is close enough to the set point, leave it alone and apply no control at all.
This is used a lot in level control of large buffer vessels. The whole point of a buffer vessel is to buffer flow changes; where exactly the level sits is not important, as long as it is neither too high nor too low. The flow from the buffer vessel to the downstream unit, however, should be as steady as possible, otherwise the downstream unit suffers unnecessary disturbances. Dead-band PID is just right for this kind of problem. But there is no free lunch. Dead-band PID assumes that the level will, in general, settle "by itself" inside the band. If the band is set improperly, or the system is frequently hit by large disturbances, the "uncontrolled" dead band lets the level march unchecked toward the band boundary, where it finally enters the "controlled" region; the control then fires hard, and the level marches unchecked in the opposite direction. The end result is a level that forever oscillates between the two edges and never settles; industry calls this hunting (hunting? What, are we chasing deer now?). Dual-gain PID has the same problem, though it is better off than dead-band PID: after all, it only has "strong control" and "weak control" regions, never an "uncontrolled" one. In practice, an inner/outer gain ratio below 2:1 makes little difference, while above 5:1 you should watch for the sustained oscillation, or hunting, described above.
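As a minimal sketch (the gains, band width, and function names are all illustrative, not from any real DCS), the proportional action of these two variants might look like:

```python
def dual_gain_p(error, inner_gain, outer_gain, band):
    """Dual-gain P action: low gain for small deviations, high gain for
    large ones. Note the gain switch at |error| == band is discontinuous."""
    gain = inner_gain if abs(error) <= band else outer_gain
    return gain * error

def deadband_p(error, gain, band):
    """Dead-band P action: the special case where the inner gain is zero,
    so small deviations draw no control action at all."""
    return 0.0 if abs(error) <= band else gain * error
```

With a band of 1.0, an error of 0.5 draws only the weak (or zero) response, while an error of 2.0 draws the strong one.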
The problem with dual-gain or dead-band PID is that the gain changes discontinuously: the control action jumps at the band boundary, which easily provokes an adverse response from the system. Error-squared PID does not have this problem. Once the error is squared, the control action becomes a parabola in the error, which likewise achieves "small deviation, small gain; large deviation, large gain," without any sudden gain jump. But error-squared has two problems of its own: first, when the error is close to zero the gain is close to zero too, and we are back at the dead-band PID above; second, it is hard to shape the parabola, that is, hard to choose where the gain should start to turn up.
For the first problem, a basic linear PID term can be added alongside the error-squared term, so that the gain at zero error is not zero; for the second, a continuously varying gain can be computed by a separate module. The details are straightforward: feed the deviation into a piecewise-linearization (i.e., broken-line) computing block, and send its output to the PID controller as the proportional gain. The horizontal segments of the broken line give the different gains, and the sloped segments connecting them give the continuous transition between gains. By placing the breakpoints of the horizontal and sloped segments, you can shape the variable-gain curve almost arbitrarily. If your ambitions run bigger, a few more computing blocks give you an asymmetric gain, say a lower gain while heating and a higher gain while cooling, to deal with the heats-fast, cools-slow problem so common in thermal processes.
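A sketch of both ideas, under assumed shapes (the breakpoints and gains below are made up for illustration): an error-squared gain with a linear floor, and a broken-line gain schedule like the segmented-linearization block just described:

```python
def error_squared_p(error, k2, k0):
    """Error-squared P with a linear base term: effective gain is
    k0 + k2*|error|, so the gain never falls to zero near the setpoint."""
    return (k0 + k2 * abs(error)) * error

def scheduled_gain(abs_error, breakpoints, gains):
    """Broken-line gain schedule: gain values at ascending breakpoints of
    |error|, joined by sloped (linearly interpolated) transitions,
    flat beyond the end breakpoints."""
    if abs_error <= breakpoints[0]:
        return gains[0]
    if abs_error >= breakpoints[-1]:
        return gains[-1]
    for x0, x1, g0, g1 in zip(breakpoints, breakpoints[1:], gains, gains[1:]):
        if abs_error <= x1:
            return g0 + (g1 - g0) * (abs_error - x0) / (x1 - x0)
```

The scheduled gain would then be written into the PID block's proportional gain each scan.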
Dual gain and error-squared are written into the proportional gain; the same tricks can be applied to the integral and derivative terms. A more extreme variant is integral-separation PID. The idea is this: proportional control is stable and fast, so while the deviation is large, switch the integral term off; once the deviation is small, the main job is fine adjustment and eliminating the residual offset, so the proportional action is weakened or even switched off while the integral action takes over. The concept is nice, but the implementation raises plenty of bumpless-transfer problems.
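A minimal sketch of the switching idea (the tunings are illustrative, and a real implementation needs the bumpless-transfer care just mentioned):

```python
def make_integral_separation_pi(kp, ki, dt, threshold):
    """PI with integral separation: the integral accumulates only while
    |error| <= threshold; for big errors the controller is pure P."""
    integral = [0.0]
    def step(error):
        if abs(error) <= threshold:
            integral[0] += ki * error * dt
        return kp * error + integral[0]
    return step
```

On a big deviation the controller responds with proportional action only; once inside the threshold, the integral starts grinding away the residual offset.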
These unorthodox PIDs are hard to analyze theoretically for stability, but they solve many stubborn problems in practice. A small boast: I have used every one of these PIDs in practice. In war, if the enemy is too stubborn, you either bring up bigger guns and blast him down, or use cleverer tactics and run him dizzy. Control is the same. Problems that a single PID loop cannot crack can often be solved with a cleverer loop structure.
A single PID loop can of course suppress disturbances, but if the main disturbance enters inside the loop and is well identified, adding an inner loop to help is a good idea. Remember the hot shower example? If the hot water pressure is unsteady and you have to keep fiddling with the hot water tap because of it, that is a nuisance. But if someone else takes charge of adjusting the hot water flow according to the hot water pressure, holding the flow steady at whatever value you ask for, then your shower temperature becomes easy to control: you just tell that person how much hot water flow you want, and you never worry about the effect of pressure on flow. The loop in charge of the hot water flow is the inner loop, also called the secondary (slave) loop; the loop in charge of the shower temperature is the outer loop, also called the primary (master) loop. Naturally it is the master loop commanding the slave loop, just like automation commanding mechanization, and those who study automation commanding those who study machinery... stop, wander off again and we will earn a deer kick, or a horse kick, an ox kick, a donkey kick...
This structure of a master loop feeding a secondary loop is called cascade control, and it was once the first "advanced process control" in industry after single-loop PID. Cascade control is now used everywhere, and nobody calls it "advanced process control" anymore. Its main job is to suppress disturbances inside the inner loop and improve overall control performance. But cascades cannot be thrown around carelessly either. If the master and slave loops respond at about the same speed, or the master responds even faster than the slave (which perverse tuning can achieve), the cascade is in trouble. In theory you could analyze this with resonance frequencies and the like, but don't bother: you can figure it out with your kneecap. A jumpy boss commanding sluggish subordinates can only end with everyone exhausted and nothing done; a calm boss commanding quick subordinates, on the other hand, is bound to do well.
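Structurally, a cascade is just two controllers in series, the master's output becoming the slave's setpoint. A minimal sketch (the PI form and the tunings are illustrative):

```python
class PI:
    """Textbook positional PI controller."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0
    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += self.ki * error * self.dt
        return self.kp * error + self.integral

# slow master (shower temperature), fast slave (hot water flow)
master = PI(kp=2.0, ki=0.1, dt=1.0)
slave = PI(kp=0.5, ki=1.0, dt=1.0)

def cascade_step(temp_sp, temp_pv, flow_pv):
    flow_sp = master.step(temp_sp, temp_pv)  # master output = slave setpoint
    valve = slave.step(flow_sp, flow_pv)     # only the slave touches the valve
    return flow_sp, valve
```

The fast slave loop absorbs the hot-water-pressure disturbance before the slow master ever sees it.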
If the main disturbance lies outside the loop but can be measured ahead of time, there is another way: the feedforward this red deer mentioned earlier. Take the hot shower example again. If the cold water pipe shares its supply with the toilet in the same washroom, then while you are showering and somebody flushes, you turn into a boiled lobster (I wanted to say a monkey's red bottom, but that would be indecent, and we are supposed to be talking civilized, aren't we?). The fix is to have the flusher warn you the moment he presses the handle; you work out the timing and the amount, cut the hot water sharply, and the temperature stays roughly level. This is feedforward control (feed-forward control). Feedforward control hinges on two things:
One is the quantitative effect of the disturbance on the controlled variable, called the feedforward gain; the other is the disturbance dynamics: when someone flushes, the shower does not turn scalding instantly, there is a lag. If you know both of these exactly, feedforward can completely compensate a measurable disturbance. In reality, nothing is known exactly, and counting on feedforward to compensate fully is a sure way to shoot yourself in the foot. So feedforward is normally used together with feedback, that is, a feedforward term added onto the PID loop. Usually only static feedforward is used, compensating the steady-state effect of the disturbance on the controlled variable while ignoring the disturbance dynamics, mainly because static feedforward already captures about 80% of the benefit, while dynamic feedforward is both complicated and unreliable and is rarely seen in PID loops. In principle, feedforward can either add a corrective term onto the PID output or multiply the PID output by a corrective factor. Multiplicative feedforward is too potent; I have never used it, addition does the job. In implementation, the feedforward action is made proportional to the change (i.e., the increment) of the disturbance, so that once the disturbance holds steady the feedforward action disappears; otherwise a fixed feedforward contribution would itself disturb the main PID loop. The feedforward gain can come from a rough calculation, for example how much the temperature will drop and how much hot water flow must move to hold it, which a heat balance gives without much trouble. If even that is too much bother, it can be regressed from historical data. After calculating the gain, derate it to 70% or even 50%: leave a safety margin, and never overcorrect.
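A sketch of the additive static feedforward described here, acting on disturbance increments and derated (all numbers illustrative): its output is an extra valve move to add on top of the PID move, and it falls silent as soon as the disturbance stops changing.

```python
class StaticFeedforward:
    """Additive static feedforward on disturbance *changes*: returns the
    extra control move per step, zero once the disturbance holds steady."""
    def __init__(self, gain, derate=0.7):
        self.k = gain * derate   # derate the calculated gain: don't overcorrect
        self.prev = None
    def step(self, disturbance):
        if self.prev is None:    # first sample: nothing to compare against
            self.prev = disturbance
            return 0.0
        move = self.k * (disturbance - self.prev)
        self.prev = disturbance
        return move
```

Because only increments are passed through, a steady disturbance contributes nothing and never fights the PID's integral action.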
Feedforward is generally an auxiliary, but in special cases it can take the lead as a "preload" (pre-loading). For example, during the startup of a high-pressure system the pressure must climb quickly from near-ambient to very high. A high-pressure system does not tolerate large valve movements, so the control gain is kept low; but then, during the pressurization, the pressure control responds sluggishly and overpressure is easy to cause. Here, using the compressor speed or the high-pressure feed flow as a feedforward "preloads" the pressure control valve to roughly the right position, and feedback then trims slowly around it. Problem solved.
As mentioned earlier, it is hard for a single valve to control flow over a very wide range; this is a very practical problem. An industrial valve typically has a turndown ratio of only about 10:1. That is, if the maximum flow through a valve is 100 tons/hour, it cannot control much below 10 tons/hour, and above about 90 tons/hour it is nearly out of authority as well. So to control 0-100 precisely, a large valve and a small valve are put in parallel: the small valve handles precise control at small flows, the large valve at large flows. This is so-called split-range control. In split-range control the small valve opens first; beyond the small valve's maximum flow, the small valve is parked fully open while the large valve opens and takes over. This is open-open split range. There is also close-open split range, for example reactor jacket temperature control: as the temperature rises, the cooling water is gradually shut, and only when the cooling water is completely off does the heating steam begin to open. Split-range control is of course not limited to two sections; three or more are possible, and the reasoning is the same. The weak spot of split-range control is the handoff point between valves. At a very small opening a valve controls very insensitively; that is the 10:1 turndown again. So in practice, open-open split range usually builds in an overlap near the handoff: the large valve starts to move before the small valve is quite fully open, so that by the time the small valve is parked wide open and out of play, the large valve is already inside its effective control range. Close-open split range usually puts a dead zone at the handoff instead, to avoid both valves cracking open at once. Placing the split point takes a little care and should follow the valve sizes. For example, if valve A is twice the size of valve B, the split point should sit at 1/3 of the range, where valve B runs out, rather than at the lazy 1/2.
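A sketch of the open-open mapping with overlap (the split point and overlap width are illustrative; here the big valve is twice the small one, so the split sits near 1/3 as prescribed above). One controller output u in 0-100% drives both valve openings:

```python
def clamp(x, lo=0.0, hi=100.0):
    return max(lo, min(hi, x))

def split_range(u, split=33.3, overlap=5.0):
    """Open-open split range with overlap: the small valve spans
    0..(split+overlap)% of controller output, the big valve starts
    at (split-overlap)%, so both are active around the handoff."""
    small = clamp(u / (split + overlap) * 100.0)
    big = clamp((u - (split - overlap)) / (100.0 - (split - overlap)) * 100.0)
    return small, big
```

Below the overlap zone only the small valve works; inside it both move; above it the small valve is parked fully open and the big valve controls.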
Many process parameters can be measured directly; many others cannot. If a variable that needs to be controlled can be calculated indirectly from other, measurable process variables, that is so-called inferential control. For example, the purity of the product at the top of a distillation column can be measured by gas chromatography (gas chromatograph, GC), but the result takes 40 minutes to arrive; for real-time control, by then the soup is long cold. Inferential control is closely tied to the concept of the "soft sensor." For the overhead-purity example, you can build a mathematical model relating purity to the overhead temperature and pressure, then compute the purity indirectly from the temperature and pressure you can measure. Now that computer control is everywhere, this is easy to implement, yet in many places inferential control is still treated as something deeply mysterious. Sad.
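A soft sensor can be as humble as a regression. The sketch below assumes a purely hypothetical linear model; the coefficients and reference conditions would in reality be fitted to historical GC lab results, and are invented here for illustration:

```python
def overhead_purity(temp_c, pressure_bar,
                    a=-0.8, b=2.5, base=99.0,
                    temp_ref=85.0, press_ref=1.2):
    """Hypothetical linear soft sensor for distillation overhead purity:
    inferred every scan from temperature and pressure, instead of
    waiting 40 minutes for the chromatograph."""
    return base + a * (temp_c - temp_ref) + b * (pressure_bar - press_ref)
```

The inferred value feeds the controller in real time, while the periodic GC result is used offline to re-fit the coefficients.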
Sometimes there is more than one handle for controlling the same variable. An air cooler, for example, has an adjustable fan speed and an adjustable louver opening. Fan speed acts fast and controls accurately; the louver's effect is harder to master but better for energy saving. So the temperature is controlled by the fast-responding fan, while the louver opening is used, slowly, to push the fan speed back toward its most economical setting, acting on the fan indirectly through the temperature loop. The louver loop must of course be much slower than the fan-speed loop, generally a slow pure-integral controller, otherwise the two will fight. Because this amounts to controlling the "valve position" of the fan speed, industry calls it valve position control. There is also a variant: when the fan speed rises above a certain value (say 80% of maximum), open the louvers wider, and keep opening them while the speed stays high; when the fan speed falls below a certain value (say 20% of maximum), close the louvers down. This is a one-way integral action, different in two respects:
- There are two set points, one for when the fan speed is high and one for when it is low
- The integral acts only while the fan speed is outside the two "limits"; inside them, the louver opening stays put.
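A sketch of this one-way integral variant (the limits and integral speed are illustrative): the louver moves only while the fan speed is outside the band, and stays put inside it.

```python
def make_louver_vpc(lo=20.0, hi=80.0, ki=0.5, dt=1.0):
    """One-way integral 'valve position control' for an air cooler:
    nudge the louver open when fan speed is above hi, closed when
    below lo, and leave it alone in between."""
    louver = [50.0]   # starting opening, percent
    def step(fan_speed):
        if fan_speed > hi:
            louver[0] += ki * (fan_speed - hi) * dt
        elif fan_speed < lo:
            louver[0] -= ki * (lo - fan_speed) * dt
        louver[0] = max(0.0, min(100.0, louver[0]))
        return louver[0]
    return step
```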
This way, the fan speed does not have to return to one specific value but may float inside a range. A different situation, where two controllers "compete" for one control valve, is selective control (override control or selective control).
For example, a furnace temperature is controlled by the fuel flow. When the temperature is high, the fuel flow is cut back; but if the fuel flow drops so low that the fuel line pressure falls below the furnace pressure, there is a danger of flashback. At that point the fuel line pressure must take over the control, at the expense of the furnace temperature. In other words, normally the furnace temperature controller does the work, and when the fuel line pressure falls below a certain value, the pressure controller takes over. In implementation, the outputs of the furnace temperature controller and of the fuel line pressure controller are both wired into a high selector, and the selector output drives the actual fuel valve. The concept is clear enough, yet people meeting selective control for the first time are routinely confused by high-select versus low-select: the pressure is too low, so why a high selector? Just remember that high or low select is seen from the valve's end, and has nothing to do with whether the temperature or pressure itself is high or low. If, when the "overriding" variable passes its limit, you want the valve more open, it is a high select; if you want the valve more closed, it is a low select.
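In code, the whole trick is one line, the selector; the sketch below fakes two controller outputs just to show the wiring (a real system would feed it live PID outputs):

```python
def fuel_valve(temp_ctrl_out, press_ctrl_out):
    """High selector: whichever controller wants the valve MORE open wins.
    Normally the temperature controller rules; when the fuel line pressure
    sags, its controller's output climbs above and overrides."""
    return max(temp_ctrl_out, press_ctrl_out)
```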
PID came into wide industrial use in the 1920s and 30s; after decades of variations on the same trick, it was time for a new act. PID is a child of classical control theory, and in the 1950s and 60s everything was going modernist: architecture moved from classical columns, proportion, and ornament to the steel-and-glass box of "form follows function"; the car went from a machine-drawn carriage to streamlined steel sculpture; so control theory had to follow fashion and modernize too. And sure enough, the Yankee Kalman grandly rolled out... modern control theory. Have you ever seen a dragon dance? A fluttering dragon head chases a big embroidered ball while the body twists and leaps behind it. A Chinese Spring Festival without a dragon dance is as unthinkable as a Western Christmas without Santa Claus. Now imagine a blind dragon: you can only stand behind the tail and command the tail-holder, passing instructions person by person along the dragon's body, until finally the head bites the ball. This is a dynamic system; the longer the body, the more people, the slower the dynamic response. If you look only at the position of the head and act only on the tail, ignoring the dynamics of the body in between, that is the so-called input-output system.
Classical control theory is built on the input-output view, and for many common applications that is enough. But Kalman was not content with "enough." Yes, you must watch the head and work the tail, but why ignore the body? Wouldn't it be better to watch the body too, even to issue commands to the body directly, rather than only to the tail? This is the concept of state space: splitting a system into inputs, outputs, and states. The output is itself a state, or a combination of states. Mathematically, Kalman's state-space method decomposes a higher-order differential equation into a system of coupled first-order differential equations, which opens up the whole toolbox of linear algebra, with a notation that is compact and clean.
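As a textbook illustration (a generic mass-spring-damper, not any system from the text): the second-order equation $m\ddot{y} + c\dot{y} + k y = u$ becomes, with the states $x_1 = y$ and $x_2 = \dot{y}$, a first-order system:

```latex
\dot{x} =
\begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix} x
+ \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u,
\qquad
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x
```

Input $u$, output $y$, and the state vector $x$ in between: the dragon's tail, head, and body.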
Kalman was a control theorist, and theorists think differently from engineers. The first thing in an engineer's head is: "How do I control this thing? What should the gain be? What does the controller structure look like?" The theorist ponders existence, uniqueness, and other such "-ities." But don't sneer at theorists. More than once an engineer hacks away on imagination and elbow grease, toils for ages, finds the result makes no sense at all, and only then remembers those "-ities" (no dirty thoughts, please): existence and uniqueness turn out to be useful after all.
Back to the dragon. Now we want to see the head, the tail, and the body, and not just see them: we want to command every holder from head to tail directly. But the dragon will not always let us see, or dance as told. As for "seeing," few states can be measured or observed directly; "seeing" really means estimating. If you know how many sections the body has (that is, how many people hold it up) and how springy the body is, then by shaking the tail and watching where the head goes you can estimate the position of every section: this is called state observation. Now, if a few kids in the middle of the dragon aren't gripping properly, then shaking the tail all you like tells you nothing about what lies beyond them, and part of the system's state is unobservable. And if some of the kids are deaf to instructions, those states are uncontrollable. Kalman mathematically derived the conditions for controllability and observability, fundamentally settling the question of when one is simply wasting one's effort. This was a major milestone in control theory.
Look at the dragon once more. If you want to judge whether the dragon is dancing in good order, one viewpoint shows it clearly; if you want to count the people and watch each one's movements, another viewpoint shows that clearly. But however you look, it is still the same dragon, only seen from a different angle. Spring Festival dragon dances were not yet common in Western cities back then, so who knows whether Kalman ever saw one; in any case, he carried linear transformations and linear-space theory into control. From then on, people armed with these tools could view one and the same system from whichever angle they pleased, because no matter how you look, the essence of the system is unchanged. Different angles serve different purposes: some make controller design easier, some make stability analysis easier, and so on. Control theory calls these "canonical forms." This was another milestone in control theory.
The point of observing states is, in the end, to control. Feedback using only the output is called output feedback, and the feedback of classical control theory all reduces to output feedback. Feedback using the states is called state feedback. Output feedback already works well for common systems, but state feedback is far more powerful. Just think: every state of the system held firmly in your sights, and every state obediently following your dispatch. Now that is authority!
Although everyone who studies control studies modern control theory, most people remember Kalman for the Kalman filter (Kalman Filter). A filter it is called, but it is really a state observer (state observer), used to "reconstruct" the system's states from the inputs and outputs. Reconstruction sounds mysterious but isn't. We have a mathematical model of the system, don't we? If the model is accurate, feed it the same input as the real system and it will obediently compute the system's states. Wait, though: the solution of a differential equation is determined not only by the equation itself but also by the initial conditions. Get the initial conditions wrong, and the solution has the right shape but forever the wrong values. So Kalman hangs a tail on the model's differential equation: compare the actual system output with the model's theoretical output, multiply the discrepancy by a gain, and feed it back as a correction, gradually squeezing out the reconstruction error and disposing of wrong initial conditions and other systematic errors. The most exquisite part of the Kalman filter is that Kalman derived a systematic method that accounts for both the measurement noise and the system's own random noise, setting the size of that gain according to the signal-to-noise ratio.
This configuration is not Kalman's alone: Luenberger arrived at a similar structure, but with the gain chosen from the standpoint of the observer's stability. The same "predict-correct" structure appears in all sorts of model schemes and sees plenty of industrial use. For instance, the molecular weight distribution in a polymerization reactor can be calculated indirectly from the reactor temperature, feed ratio, catalyst, and so on, but never very accurately, since not every unmeasurable disturbance can be written into the model; so the lab's periodic true values are used to correct it. That combines the timeliness of the model with the accuracy of the lab results and meets the needs of real-time control: call it a static Kalman filter. The earliest famous application of the Kalman filter was radar: so-called track-while-scan uses the Kalman filter to estimate the enemy aircraft's position between sweeps, then corrects the estimate with each actual scan. A typical practical problem: sometimes several measurements of the same variable are available, some direct but inaccurate, some indirect estimates, some badly lagged but highly accurate; a Kalman filter can weight data from these different sources by their signal-to-noise ratios and "integrate" them, a civilian version of "sensor fusion" (sensor fusion).
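A scalar predict-correct loop shows the skeleton (the noise figures q and r are invented for illustration; a real filter carries the full matrix form): predict with the model, then correct with the measurement, the gain set by the ratio of model noise to measurement noise.

```python
def scalar_kalman(measurements, a=1.0, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Minimal one-dimensional Kalman filter for the model x' = a*x + noise,
    z = x + noise. Returns the sequence of state estimates."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # predict: propagate the state and its uncertainty through the model
        x = a * x
        p = a * p * a + q
        # correct: the gain k weighs the model against the measurement noise
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Fed a steady measurement of 1.0 from a wrong initial guess of 0.0, the estimate homes in on the true value within a few steps.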
Beyond the Kalman filter, Kalman's theory does not see much direct use in practice, but it established a superb theoretical framework for understanding and studying control problems. By the way, Kalman's theory is confined to linear systems: ten dollars buys one bag of rice, twenty buys two, everything in proportion. Plenty of real systems are nonlinear: 2,000 dollars may still buy 200 bags of rice, but with 20 million you must ask whether the warehouse has stock and whether the market will move the price; more money no longer buys proportionally more rice, and there is your nonlinearity. Nonlinear problems are far harder to study. Real systems have other quirks too. Some are time-varying, like a rocket, whose mass falls as fuel burns and whose characteristics change accordingly. Many problems are multivariable, like steering a car: not just the wheel, but the throttle and brake are inputs as well. The state-space theory, though, offers a unified framework for linear and nonlinear, multivariable, time-varying and time-invariant systems, and that is Kalman's greatest contribution.
As mentioned before, three crowds work on control: electrical engineers, chemical engineers, and applied mathematicians. Before Kalman, the electrical-engineering background dominated, the mathematicians toiled in the ivory tower, and control theory stayed "practical." After Kalman, hordes of mathematically minded people wielded their familiarity with the tools to storm control theory, and for a while the mathematization of control seemed to be "the great trend of the world: those who follow it flourish, those who resist it perish." Within the state-space framework, multivariable systems as such leave few problems to chew on, so optimization became the new fashion of control theory.
Given a curve, find the point where the first derivative is zero: that is an extremum of the curve; if the second derivative there is greater than zero it is a minimum, if less than zero a maximum. Newton understood this stuff in his day, and nowadays every high-school senior or college freshman learns it. A dynamic system, however, is a differential equation, and setting its "first derivative" to zero leads to the calculus of variations and the so-called Euler equation. This machinery is not convenient to use, and practical optimal control rarely applies variational calculus directly.
Russia is a strange place. The old boys there are either wilting or raving. A Russian tragedy makes you want to do away with yourself; and a Russian comedy? Then you either go mad or get driven mad. Such a place, besides literary giants without number, the Tolstoys, Tchaikovskys, Pushkins, and Turgenevs, is also rich in mathematicians, two of whom are Pontryagin and the ever-missed Lyapunov.
Pontryagin's maximum principle sounds scary but is really quite simple. See the mountain? The summit is the highest point (duh, you call that a principle?). See the slope? If a line is drawn across the mountainside and you climb from the foot, then even though the slope keeps rising past the line, you may not cross it, and that 38th parallel on the mountainside is your highest point (duh, again?). That is Pontryagin's maximum principle. Of course, Pontryagin expressed it in refined, abstruse mathematical language, or he could hardly have made his name in mathematics; but that is what it means.
A classic application of Pontryagin's maximum principle is the so-called time-optimal control problem (time optimal control): put simply, given maximum engine power and maximum braking power, how do you drive a car from point A to point B in the least time (corners, downhills, traffic lights? Away with such tiresome trivia, no fun at all!). You can grind through beautiful but tedious mathematics, or just use your kneecap: the fastest way is full throttle from the start, then full brakes to the stop. With a little imagination: "bang," the accelerator slams to the floor; then "bang," the brake pedal slams to the floor, and the control task is complete. So time-optimal control is also called "bang-bang" control (Bang-Bang control).
Time-optimal control is a delightful problem in theory, with a simple and elegant solution, but examples of direct use in practice are rare. Usually one starts with "bang-bang," or rather a steady ramp up to maximum control to soften the jolt, and near the end switches to closed-loop PID fine-tuning, to cover bang-bang's weakness of being very sensitive to model error. Elevator control is one such example. From the first floor to the fourth, the motor ramps quickly to top speed; past the third floor it drops to a lower speed and then, according to the actual positions of the elevator and the target floor, decelerates until it stops. If the control parameters are well tuned, it stops in one clean motion; if not, it bobs up and down a few times before settling.
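The profile is easy to mimic on a pure double integrator (an idealized, friction-free inertia model; all numbers illustrative): full thrust to the halfway point, full brake after, which for a unit move at unit acceleration should take about 2 seconds.

```python
def bang_bang_move(distance, u_max, dt=0.001):
    """Time-optimal move of a frictionless double integrator:
    accelerate flat out to the midpoint, then brake flat out.
    Returns (final position, final speed, elapsed time)."""
    x, v, t = 0.0, 0.0, 0.0
    while x < distance / 2.0:          # "bang": pedal to the floor
        v += u_max * dt
        x += v * dt
        t += dt
    while v > 0.0:                     # "bang": brake to the floor
        v -= u_max * dt
        x += v * dt
        t += dt
    return x, v, t
```

The simulation lands at the target, stopped, in essentially the theoretical minimum time; any model error, of course, would land it somewhere else, which is why the PID trim near the end matters.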
The time-optimal problem, one of the earliest optimal control problems, offers a fascinating idea, but this tree never blossomed much. The other branch of optimal control, by contrast, flourished: linear quadratic optimal control (linear quadratic control). Mathematics is wonderful, but mathematics is also blind. To mathematics, an optimization problem is the problem of finding the bump on a surface; as long as a physical problem can be described as a surface, mathematics does not care what its surname is. Here, the accumulation over time of the squared control deviation is the natural choice, the "quadratic form" being linear algebra's word for the square. For a linear system, the squared deviation gives a steamed-bun hill with no cliffs, no gullies, easy to climb, and with only one summit, so there is no fear of climbing the wrong peak. The hill, however, cannot be built from the control deviation alone; the control action must be included too, for three reasons:
- If the control action is not included, the optimal control solution is meaningless, because an infinite control action can minimize the accumulated squared deviation, and an infinite control action is unrealistic.
- The size of the control action is usually tied to the consumption of energy and material. The real control problem is generally "least energy with highest control accuracy," so it is natural to build both the control deviation and the control action into the "hill."
- The system model always has errors, and the errors stand out most under high-frequency, large-amplitude control action. To reduce the system's sensitivity to model error, the size of the control action must be limited too.
So the "objective function" of linear quadratic optimal control (the mathematical description of the hill's shape) is an integral of the weighted sum of the squared control deviation and the squared control action. The integral is, of course, the "accumulation over time"; the weighted sum simply multiplies the squared-deviation term and the squared-control term by scale factors before adding them up, and the relative size of the two factors decides which matters more. With matrix calculus and linear algebra tools, it is not hard to derive the linear quadratic control law: a basic state-feedback control law, except that the feedback gain matrix is computed from the optimization requirement.
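For the discrete-time case, the whole derivation collapses into iterating the Riccati equation to a fixed point and reading off the feedback gain. A sketch on a double integrator (the weights Q and R are illustrative; the result K is ordinary state feedback u = -Kx):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR: iterate the Riccati difference equation
    P <- Q + A'P(A - BK) with K = (R + B'PB)^(-1) B'PA until it settles,
    then return the state-feedback gain K and cost matrix P."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # double integrator, unit sample time
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                  # weight on the squared deviation
R = np.array([[1.0]])          # weight on the squared control action
K, P = dlqr(A, B, Q, R)
```

Raising R relative to Q buys a gentler controller at the cost of slower deviation removal: the two scale factors deciding "which matters more," in matrix form.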
Linear quadratic optimal control opened up a whole new field, which soon stepped out of the state space, entered other areas, and prospered; this branch is the main body of optimal control in application today. Linear quadratic control has many virtues, but it does not answer the most basic question of control: is the closed-loop system stable? Here our beloved Comrade Lyapunov enters the stage.
Lyapunov thought along strange lines. More than a hundred years ago he played with differential equations to the point of addiction and produced two stability (or convergence) theorems. The first is nothing too remarkable: linearize the nonlinear system, approximating the curve with short straight segments, and judge stability from the straight lines. The second is a little uncanny. Lyapunov figured out a theorem saying that for any system, if you can find a self-dissipating energy function (in mathematics, a positive definite function) that is always positive but tends to zero over time, or whose time derivative is always negative, then the system is stable. The proof of the theorem is said to be the masterpiece of a genius; I can only nod along. But it makes sense: once the system's energy has all dissipated, does the system not settle down? Stable, of course.
Lyapunov was even more of a mathematician than Kalman. His theorem only says "if there exists..."; how to find this self-dissipating energy function he did not say, and what form the function generally takes he did not say either. That does not stump the toiling masses of automatic control. You want a positive definite function? With no restriction on its form? Then use the square of the control deviation. Once this is done, something delightful appears: the squared deviation (the quadratic form), the Riccati equation, and the whole derivation come out exactly the same way as for linear quadratic optimal control. In other words, linear quadratic control is always stable. This is an important contribution of linear quadratic control: it brought optimality and stability together.
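The second theorem can be illustrated numerically in a few lines. This sketch uses a made-up stable closed-loop matrix: solve the discrete Lyapunov equation for a positive definite P, then watch the "energy" V(x) = x'Px shrink along a trajectory:

```python
import numpy as np

# Sketch of Lyapunov's second theorem on a made-up discrete linear
# system x[k+1] = A x[k]. Solve A'PA - P = -Q for P by summing the
# convergent series P = sum_k (A')^k Q A^k; then V(x) = x'Px is a
# "self-dissipating energy function": always positive, shrinking
# toward zero along every trajectory.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])   # stable: all eigenvalues inside unit circle
Q = np.eye(2)

P = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(500):         # truncated series solution of the equation
    P += Ak.T @ Q @ Ak
    Ak = Ak @ A

x = np.array([5.0, -3.0])    # arbitrary initial state
V = [x @ P @ x]
for _ in range(50):
    x = A @ x
    V.append(x @ P @ x)      # V decreases at every step
```

The decrement at each step is exactly the dissipated "energy" x'Qx, which is positive whenever the state is nonzero.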
Lyapunov's second theorem is very powerful, but it is a bit like an odd sledgehammer, and people are still looking for the right nails to hit with it. Linear quadratic control is one of the few known nails; another is variable structure control, which can also be handled with the Lyapunov method, but that is an aside.
It is said that after Watt's steam engine, the computer is the invention with the greatest influence on human progress, and of course the computer has also profoundly influenced automatic control. As mentioned earlier, control theory revolves around differential equations, so it is "essentially" continuous. But digital computers are discrete: the digital controller's eyes are not fixed on the controlled object at all times, they blink; the digital controller's "hands and feet" do not act continuously, but in fits and starts. This is the nature of the digital computer. So traditional control theory was wholesale "translated" into the discrete-time domain, differential equations becoming difference equations; every method and conclusion now has a continuous version and a discrete version, different in places, but much the same.
If digital control were nothing but discretizing continuous systems, computer control would not be so great. Discrete control brings new capabilities that continuous control cannot have, namely: the difference equation describes the dynamic process as a relation between clearly defined moments in time. Back to the hot bath example. Suppose the hot water tap is not in front of you but in the little boiler room in the village, and you can only operate it remotely by telephone. The water temperature can then be written as: next minute's water temperature = 0.7 x current water temperature + 0.2 x water temperature one minute ago + 0.1 x water temperature two minutes ago + 0.4 x (boiler-room tap opening 5 minutes ago - boiler-room tap opening 6 minutes ago). Obviously the next minute's temperature is influenced more by the current temperature than by the temperature one or two minutes ago; and if the tap opening has not changed and the temperature has been steady, the next minute's temperature stays the same as the current temperature. Why the tap opening of 5 minutes ago? Because the hot water takes time to flow from the village to the bathroom; that is a lag. Push the time frame forward, and the current tap opening will affect the water temperature five minutes from now. This illustrates an important trait of the discrete model: predictive ability. All forecasting models are built on this power of the discrete model, whether in weather forecasting, economic forecasting, or the control of processes with lag in automatic control.
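The difference equation above can be simulated directly. In this minimal sketch the coefficients are the ones in the text; the scenario (temperatures as deviations from the starting value, valve stepped open one unit at minute 5) is made up for illustration:

```python
# Simulate the bath-water difference equation from the text:
#   T[k+1] = 0.7*T[k] + 0.2*T[k-1] + 0.1*T[k-2]
#            + 0.4*(u[k-5] - u[k-6])
# T is water temperature (deviation from its starting value),
# u is the boiler-room tap opening. The 5-minute shift on u is the
# transport lag from the boiler room to the bathroom.
N = 60
T = [0.0] * (N + 1)
u = [0.0] * N
for k in range(5, N):      # open the tap one unit at minute 5
    u[k] = 1.0

for k in range(6, N):
    T[k + 1] = (0.7 * T[k] + 0.2 * T[k - 1] + 0.1 * T[k - 2]
                + 0.4 * (u[k - 5] - u[k - 6]))
```

The step at minute 5 shows up in the temperature only at minute 11, the transport lag in action; it then settles at 0.4/1.4, about 0.29 of the step size, which is exactly the kind of look-ahead the discrete model makes possible.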
Another trait of digital control is the ability to implement control laws that cannot be realized in continuous time. In industry, the change in the control quantity often needs to be tied to its current actual value. For example, across different products the conversion rate of a reactor generally stays between 88 and 92%, without much variation, but the catalyst dosage can range between 0.5 and 35 ppm. With conventional PID, the gain is very hard to tune: what suits one situation does not suit another. So the catalyst should be adjusted by a percentage rate of change rather than simply in proportion to the deviation. For instance, for a 1% deviation in conversion, a dosage of 0.5 ppm should be adjusted by about 0.05 ppm, but a dosage of 15 ppm by about 1.5 ppm. The control law can thus be written as: current control quantity = previous control quantity x (set point / current measured value). That is, if the measured value is 10% off the set point, the control quantity also changes by about 10%; when the measured value equals the set point, the control quantity stays put. In actual use, which of the two goes in the numerator depends on whether raising the control quantity raises or lowers the measured value, and the law is slightly modified to: current control quantity = previous control quantity x (current measured value / set point) ^ k. The power k adjusts the law's sensitivity to the "deviation" (which is no longer a difference but a ratio; strictly speaking, should it be called a "deviation ratio"?) and plays the role of a proportional gain. This control law is equivalent to pure integral control in log space; for those interested, it works rather well on many common nonlinear processes and is simple to implement. But it is an essentially discrete control law, impossible to realize in continuous time.
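Here is a minimal sketch of this multiplicative law in closed loop. The static "plant" mapping catalyst dosage to conversion is entirely made up for illustration; the law itself is the one in the text, with the ratio direction chosen so that more catalyst means more conversion:

```python
import math

# Sketch of the multiplicative ("ratio") control law from the text:
#   u[k] = u[k-1] * (SP / PV[k]) ** kk
# which is pure integral control in log space. The static plant below
# (conversion % as a function of catalyst ppm) is invented for the demo.
def plant(u):
    return 80.0 + 4.0 * math.log(u)   # hypothetical conversion model

sp = 90.0          # conversion set point, %
u = 5.0            # initial catalyst dosage, ppm
kk = 0.5           # sensitivity exponent, plays the role of a gain
for _ in range(500):
    pv = plant(u)
    u *= (sp / pv) ** kk   # multiplicative, not additive, correction
```

Note the fixed point: when pv equals sp, the ratio is 1 and the dosage stops moving, just as the text describes.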
Discrete control's ability to "see one step ahead" is hard for continuous control to imitate and is extremely useful in practice. Every kind of control theory is helpless without a mathematical model of the process. The bath-water temperature above is a mathematical model. That model was made up, so all its parameters could conveniently be handed over; in practice, model parameters do not fall from the sky. How many scientists have devoted their lives to building the mathematical model of one particular physical, biological, or chemical process; for many processes even the basic mechanism is hard to establish, let alone the deeper mechanisms. So while it is possible in principle to derive a mathematical model of the controlled process from first principles, it is not practical for everyday control problems.
This is where another branch of control theory, identification, shows its stuff. A model is a mathematical formula: give it a set of input data, and it computes the corresponding output data. For example, given the model y = 2 * x + 1 and inputs x = 1, 2, 3, 4, the outputs are y = 3, 5, 7, 9; very simple. Identification turns the problem around: given a model structure, here y = a * x + b, and known input-output data, say y = 3, 5 when x = 1, 2, find a and b. That is a pair of linear equations in two unknowns that anyone can solve. In practice the observed input-output data contain measurement noise, which is bad for the accuracy of the parameter estimates; but usually the amount of observed data far exceeds the number of unknown parameters, and this "redundant" data is exactly what helps overcome the measurement noise; the key is how to use it. One way is to pair up the data points, solve many small two-unknown systems, and average the resulting values of a and b. Another is the famous least squares method: bluntly put, treat a and b as the "control quantities" of an optimization and minimize the accumulated squared error between the model output and the actual observations.
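The least squares idea for the y = a*x + b example can be sketched in a few lines. The noisy data here is synthetic, generated from an assumed true model, purely for illustration:

```python
import numpy as np

# Least-squares identification sketch for y = a*x + b: with noisy
# observations and far more data points than unknowns, stack one
# equation per observation and minimize the accumulated squared error.
rng = np.random.default_rng(0)
x = np.arange(1.0, 21.0)                         # 20 input points
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)   # true a=2, b=1, plus noise

X = np.column_stack([x, np.ones_like(x)])        # one row per observation
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)   # estimates of a and b
```

With only two noisy points the estimate would be poor; the twenty "redundant" equations are what average the noise away.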
Most real industrial processes have many years of operating experience behind them, so a large amount of data is no problem. For most common processes, the basic structure and qualitative properties of the model can also be guessed. With such a powerful mathematical sledgehammer, surely every hard walnut of modeling can be cracked? Wait. There is no real magic bullet: solve one problem, and another, equally difficult, appears. For identification, there are several such problems.
The first problem is the closed-loop nature of industrial data. Most important parameters are under closed-loop control; if a parameter has no closed loop around it, either the process dynamics are too complex for a simple loop to handle, or the parameter is unimportant and nobody cares. But once the loop is closed, the system's input and output become correlated in both directions. The output is related to the input through the controlled process itself; that part is good, since identification is precisely about measuring this correlation, and if the output were unrelated to the input there would be nothing to control or identify. But the input is also related to the output through the feedback, so input and output form a closed system with the causality hopelessly tangled. One can prove the same thing with any number of theorems and methods: without fresh causation, closed-loop identification is impossible, unless some "fresh" excitation is injected, such as deliberately moving the set point hard, or adding an excitation signal independent of the loop's input and output, such as "inexplicably" wiggling the valve a few times. How much industrial data is actually usable therefore has no simple answer. Some processes run steadily all year round, like ethylene plants, with only small-range fine-tuning.
This is not because people are lazy or unmotivated, but because these units have been highly optimized and run extremely close to their limits all year, with a single feedstock and a single product, so the operating conditions barely change. Closed-loop data from such a system is hard to use, and some open-loop testing is usually required. Other processes frequently switch between operating states (transitions), either because of varying feedstock, like a refinery that "eats" a very mixed diet, or because of varying products, like a polyethylene plant. That is in effect "moving the set point hard", a fresh excitation, so the closed-loop data from such a system is usable; but it has other problems, described below.
The second problem is dynamics versus steady state. A dynamic model tells you two things: how much time the output needs to reach a certain value, and what value the output finally reaches. In the stock market you likewise need to know two things: where the stock will end up, and how long it takes to get there; knowing only one of them is of little use. (To simplify, assume the stock moves one way, up or down, rather than churning.) This requires the input-output data to contain sufficient dynamic and sufficient steady-state information; leaning too far to one side hurts the other. A process that runs stably for years may yield plenty of steady-state data but too little dynamics; a perennially unsettled process may yield plenty of dynamic data but too little steady state. Take PID control as an example: accurate steady-state data helps set the right proportional gain, accurate dynamic data helps set the right integral and derivative gains, and it is usually more important to get the proportional gain right.
To pin down an accurate steady state, identification often has to open the loop and wait for the process to settle before taking the next step. The trouble is that real processes can have very long time constants: with several distillation columns in series, calling the time constant a few hours is being polite; one or two days is entirely possible. At that rate, a modest model with a dozen variables means one or two weeks of open-loop testing. And if a unit can run open-loop for two weeks, it hardly needs control at all.
The third problem is the signal-to-noise ratio of the excitation. Human activity is said to be the main source of carbon dioxide and the greenhouse effect; but if you light a bonfire and then fly to the upper atmosphere to measure its effect on carbon dioxide, you will find nothing. Why? Not because the bonfire has no effect, but because natural variation in the environment far outweighs it; in other words, the noise far exceeds the signal. Industrial testing is the same: the signal must have a certain strength, otherwise the test is a waste of time. Ideally the excitation should be strong enough to shake the process over a wide range, even to the edge of serious upset, so that the model is accurate across that range and the controller can not only work in "calm" conditions but also stabilize the system in a "storm". But a factory's business is production, and in today's penny-pinching climate, the product losses from such wide-range testing, and the possible harm to equipment, are the last things a factory wants to see.
Theorists designed the pseudo-random signal: a train of square waves of varying width used as the excitation input. In theory the process average does not stray far from the set point; but ISO9000 demands not only a good average for product quality but also consistency of product quality. Besides, the pulse widths of a pseudo-random signal are hard to choose: too narrow, and there is not enough steady-state data; too wide, and it is no different from a conventional step test. So pseudo-random signals are rarely used in practice.
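For the curious, the classic way to generate such a pseudo-random binary test signal is a maximal-length shift-register sequence. This is a minimal sketch; the register length and tap positions are standard textbook choices, not from the text:

```python
# Sketch of a pseudo-random binary sequence (PRBS) of the kind the
# text describes: a maximal-length sequence from a linear feedback
# shift register (LFSR), giving square pulses of varying width that
# average out around the set point.
def prbs(n_bits=5, taps=(5, 3), length=62):
    """+/-1 sequence from an n-bit LFSR; taps (5, 3) give a
    maximal-length sequence of period 2**5 - 1 = 31."""
    state = [1] * n_bits          # any nonzero initial state works
    out = []
    for _ in range(length):
        out.append(1 if state[-1] else -1)
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]  # feedback bit
        state = [fb] + state[:-1]                     # shift register
    return out

seq = prbs()   # two full periods of the 31-step sequence
```

Within one period the sequence contains 16 highs and 15 lows, so the excitation is nearly balanced; the run lengths vary, which is what supplies both fast and slow information to the identification.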
The fourth problem is correlation among the inputs. By the time a real industrial process turns to identification for its model, it can no longer be treated loop by loop; it is a multivariable process. In theory, several input variables may change simultaneously: as long as the changes are independent of one another, identification can still recover the model correctly. But when historical data from a real process is used, the inputs are often not independent. Take chocolate: vanilla chocolate is "bitter", or at least not very sweet, while milk chocolate is sweet. When making milk chocolate you add not only sugar but also milk (obviously; without milk, what milk chocolate?). Because the two always appear together, a sweetness model cannot tell whether the sweetness came from the added sugar or the added milk. Sometimes, using our understanding of the specific process, we can constrain the identification by hand to eliminate such effects. Sometimes that is not easy, and we have to abandon the historical data and run dedicated experiments, using inputs that we make independent, to identify the model.
The fifth problem is the model structure. This has two aspects: one is the order of the model, the other is weeding out physically impossible models. The identified model is ultimately a difference equation, which raises the question of how to preset its order. There are many before-the-fact and after-the-fact statistical tests. In industry, people take a shortcut and use a non-parametric model, that is, a model given as a response curve rather than as an equation, which sidesteps the question of order altogether. Removing unrealistic models, however, remains manual work: each model must be examined carefully to judge whether the dynamic relationship it describes is physically reasonable. Mathematical methods alone are still not reliable enough here.
Among model builders one often hears of black, white, and grey boxes. The black-box model ignores the physical and chemical nature of the actual process and starts from pure mathematics: assume a model structure, then use various mathematical methods to find the best model within it. The white box goes the opposite way and builds a mechanistic model from physical and chemical principles. The advantage of the black box is that it is "universal" and needs no deep understanding of the specific process. It is a kind of cutting the foot to fit the shoe, but if the shoe is well made, with considerable flexibility and room, no actual cutting is needed. Because the structure can be assumed freely, the black-box model is convenient to build and use; but it is empirical, and it cannot predict situations not covered by the data. The white-box model is "tailor-made": it reflects the physics and chemistry of the process, depends little on data from the actual process, and can reliably predict situations the data never covered. But deriving the white-box structure is laborious and problem-specific, and the resulting model is not necessarily easy to use. In practice, people often assume a model structure that greatly simplifies the process mechanism, so the structure is not pulled out of thin air but roughly captures the basic character of the process; then the black-box "data meat grinder" is used to fill in the details that the simplified structure cannot capture on its own.
Such a model combines the traits of the black and white boxes, so it is called a grey box. In practical modeling there are few pure black-box or pure white-box successes; the grey box succeeds far more often. Whatever the box, in the end there is the question of how to identify the actual process. The benefits of closed-loop identification need no repeating; the question is how to get a useful model out of it. Industry has a way, without an official name, that amounts to an iterated open-loop-test-plus-feedback procedure. It goes like this: first build a simple multivariable controller from rough process knowledge; its job is not to control the process accurately but to keep the controlled variables within limits, acting to "herd" them back whenever they approach or cross a limit. As long as the variables stay within limits, apply step disturbances and test the process characteristics. The test results are then used to improve the controller's model, and the cycle repeats. After a few rounds (usually two are enough), the model accuracy is very good. This method neatly reconciles the competing demands of identification accuracy and process stability.
The most beautiful fight in Journey to the West is Sun Wukong's battle with the god Erlang. However Sun Wukong transformed, Erlang matched him, "as the enemy changes, I change", and in hot pursuit finally captured the lawless monkey. From the viewpoint of control theory, this ability to change as the enemy changes is adaptive control: the controller's structure and parameters automatically adjust and re-optimize as the controlled process changes.
Adaptive control has two basic ideas: one is so-called model-tracking control, the other so-called self-tuning control. Model-tracking control, also called model-reference control, is not an unfamiliar concept. In those years, role models of every kind were constantly held up: when the Party issued a call, everyone adjusted their own behavior to bring it closer to the role model's. That is exactly the basic idea of model-reference control. It is used more in aerospace and electromechanical systems and rarely in process control. The idea of self-tuning control is closer to everyday intuition about adaptation: a two-step process that first identifies the controlled process in real time, then, on the basis of the identified model, rebuilds the controller in real time. The idea is simple and clear, and the implementation is not complicated, yet after the initial cheers, self-tuning control never achieved widespread industrial success. Why?
One reason is closed-loop identification. Although self-tuning control keeps changing the controller parameters, which to some extent breaks the fixed-gain feedback relationship between input and output, the causal link through the feedback still exists and is still quite strong, and it degrades the quality of the identified model.
The second reason is the so-called "covariance explosion". The mathematics is rigorous, but put simply: the purpose of the self-tuning controller is, of course, stability; yet as the loop becomes more and more stable, the identifier sees less and less excitation, and the self-tuner's sensitivity to deviations and disturbances climbs higher and higher. When the loop is finally "quiet", the sensitivity can in theory reach infinity; and then, if a real disturbance arrives, the controller is at a loss.
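The effect can be shown in a few lines with the standard recursive-least-squares covariance update (the numbers here are made up): keep the regressor constant, as in a "quiet" loop, and the covariance blows up in the unexcited direction while the forgetting factor keeps discounting old data:

```python
import numpy as np

# Sketch of "covariance explosion" in recursive least squares with a
# forgetting factor: when the loop is quiet and the regressor stops
# exciting some direction, the parameter covariance P grows without
# bound in that direction, and the estimator becomes hypersensitive.
lam = 0.95                      # forgetting factor (< 1 discounts old data)
P = np.eye(2)                   # parameter covariance matrix
x = np.array([1.0, 0.0])        # constant regressor: zero excitation
                                # in the second direction
for _ in range(200):
    denom = lam + x @ P @ x
    P = (P - np.outer(P @ x, P @ x) / denom) / lam   # standard RLS update
```

The excited direction settles near 1 - lam, but the unexcited one grows like (1/lam)^k at every step; one strong disturbance then produces a wild parameter jump.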
The third reason is the complexity of the actual process. In identifying a real process, the most important step is not the "mathematical meat grinder" that follows, but the screening of the data: all kinds of abnormal data must be removed first, otherwise it is garbage in, garbage out. But removing abnormal data in real time, automatically, is anything but trivial, and it makes the design and operation of a self-tuning controller that much more troublesome. This is the biggest reason there are so few successful examples of self-tuning control in practice.
Automatic control was dominated by the electromechanical school from the very beginning. From the late 1960s into the 1970s, the chemical school began to show its budding tip above the water. Self-tuning control already carried a strong chemical-industry flavor, but the chemical school's formal entrance was model predictive control (MPC). This is an umbrella term whose representative work is dynamic matrix control (DMC). DMC was Charles Cutler's PhD work; it was first used at Shell Oil, after which Cutler founded the DMC company, now part of Aspen Technology.
Mathematical control theory is beautiful and universally applicable, but it is like a tiger: imposing to look at, yet useless for pulling a plow, for which you still need the old ox. The success of DMC lay in applied eclecticism: it threw together some originally unrelated mathematical tools, draped a gorgeous tiger skin over an honest old ox, scared off the onlookers, and quietly got the work done.
DMC put a non-parametric model (here, a truncated step-response curve) into the framework of linear quadratic optimal control and successfully solved the problems of multivariable control, lag compensation, and constraint handling. The multivariable part is self-explanatory, and lag is unsurprising: under a discrete dynamic model, lag is easy to predict. What is curious is DMC's "rustic" method for constraint control. Constraints exist in every practical control problem: flooring the accelerator is a constraint, and not one more horsepower will come out. Pontryagin's maximum principle can in theory handle constrained control, but usable solutions are hard to extract, and time-optimal control is one of its few special cases.
So how does DMC handle constraints? When a control quantity hits its limit, it is pinned at the limit value; it is no longer a variable but a known quantity. Substitute it in, strike out the corresponding rows and columns of the control matrix, rearrange, and solve for the rest. That much is not so unusual. The headache is how to handle output constraints. DMC welded linear programming onto the control problem, using the LP to handle the output constraints and solve the static optimization at the same time, killing two birds with one stone, and it was a great success in industry. This was the first mass-produced "modern control technology" since Kalman. Cutler made a great deal of money on DMC, selling the company to Aspen Technology before the tech bubble burst. His son-in-law, a medical doctor, never practiced medicine either; he switched to process control and followed Cutler.
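Stripped of the constraint handling, the core of DMC fits in a few lines. This sketch (the horizons and the step-response curve are invented for illustration) builds the dynamic matrix from a truncated step response and solves for the future control moves by least squares:

```python
import numpy as np

# Minimal unconstrained DMC sketch: the model is nothing but a
# truncated step-response curve (a non-parametric model). The dynamic
# matrix maps future control moves to future outputs; the moves are
# the least-squares solution pulling the prediction to the set point.
s = 1.0 - np.exp(-0.3 * np.arange(1, 31))   # made-up step response
Np, Nc = 10, 3                              # prediction / control horizons

G = np.zeros((Np, Nc))                      # the dynamic matrix
for i in range(Np):
    for j in range(Nc):
        if i >= j:
            G[i, j] = s[i - j]              # shifted step-response columns

sp = 1.0                                    # set point
y_free = np.zeros(Np)                       # prediction with no new moves
e = sp - y_free                             # predicted deviation
du, *_ = np.linalg.lstsq(G, e, rcond=None)  # future moves; in practice
                                            # only du[0] is applied, then
                                            # the whole thing is repeated
```

A control move pinned at a limit would simply become a known quantity here: its column of G is dropped, its contribution moves into y_free, and the smaller problem is re-solved, which is exactly the "rustic" recipe described above.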
From the start, DMC proceeded from practical needs and did not insist on theoretical rigor or purity: ginseng, ephedra, mercurochrome, dog-skin plaster, anything went, as long as it worked. For a long time the stability of DMC could not be analyzed, but it worked. Practical people easily grasp DMC's crooked logic; theoretical people find DMC a serious headache.
After DMC broke the ground, imitators swarmed up, but when the dust settled only three were left on stage. Honeywell's RMPCT (Robust Multivariable Predictive Control Technology) was created by a Chinese compatriot; its distinctive feature is the concept of the "funnel". Most control problems share a characteristic: right after a disturbance, a little control deviation is tolerable, but as time passes the deviation should be driven out. The tolerance band is wide now and narrows later, like a funnel lying on its side along the time axis. This concept is very useful for tuning MPC on complex processes, and it has since appeared in other companies' products as well.
The third is Perfecter, from the up-and-coming Pavilion Technologies. American companies have a bad habit of giving good products awful names. Perfecter's feature is the combination of neural-net technology with MPC, so it can handle nonlinear processes. There is no mystery to a neural-net model: bluntly put, it is a regression model of a certain elaborate form, though it is less well suited to interpolation and extrapolation than an ordinary regression model. DMC also claims to handle nonlinearity, because even if the step-response curve twists and turns, DMC swallows it whole and still computes a control output; that is the benefit of a non-parametric model. The problem is that DMC's structural framework is linear after all, and the very concept of a step response does not fit a nonlinear process, whose response depends on the absolute value, the relative change, even the direction of change of the input, or worse. So DMC's claim to handle nonlinearity is firing blanks: if the process nonlinearity is mild, it can be ignored anyway; if it is strong, DMC is blind to it. Does using neural nets, then, make Perfecter invincible? Also no.
Perfecter inherits DMC's fine tradition of not asking theoretical questions, only practical ones; but its basic skeleton is still linear MPC, with a static neural-net model supplying updated gains from time to time. Theoretically inelegant, yet in actual practice it, too, works.
As mentioned earlier, PID accounts for at least 85% of process control today; MPC takes up most of the rest, say 14.5%.
If the computer's influence on automatic control were limited to discrete control theory, it would hardly deserve the name computer control. New chemical plants built since the 1980s basically all use computer control. Technology more advanced than PID is available, but the vast majority still use PID, plus sequential control to execute series of actions. So what, then, are the benefits of computer control?
The instruments of process control were at first installed directly in the field. Then came the pneumatic unit instrument, whose compressed-air signal lines could be run from the field to a central control room, letting operators observe and control the whole plant from one place; once the explosion-proofing problem of electronic unit instruments was solved, central control spread further. The operator sat in front of a panel, knowing the state of his own section. But as plants grew and processes became more complex, the panels grew longer and longer: a large chemical plant can have thousands of basic control loops and thousands of monitoring and alarm points, and the panel would have to be hundreds of meters long, which is impossible. The high integration of modern production processes demands that one or two people control the whole plant, not only to reduce labor, but to cut communication links and keep comprehensive command of the overall situation. So the computer display is not just cool; it is a must. Computer control also makes self-diagnosis of field instruments (valves, measurement transmitters, and so on) possible, greatly improving system reliability. Computer control, in short, is not showing off; it is a necessity.
Computer control went from early centralized control (with an IBM mainframe) to today's decentralized control (the so-called Distributed Control System, DCS) through a spiral ascent. The weakness of centralized control is concentration of risk: if the one big machine hangs, the whole plant goes out of control. Decentralized control divides the plant into blocks and controls them with a microprocessor-based local network. The main subsystems run in real-time redundancy: on a fault, operation switches to the standby system at once, and in normal times the main and standby systems cross-check and switch periodically to guarantee readiness. Decentralization greatly improves the reliability of the computers themselves. However, the field instruments and terminals (field terminal assembly, FTA) are not redundant, so the overall reliability chain still has weak links. Moreover, the length of the control network's coaxial cable is physically limited, as is the distance from FTA to DCS, so in the end "distributed" control is not very distributed at all: everything sits near the central control room or in the basement. But the geographical concentration of a DCS does not negate its logical dispersion; as long as the DCS room itself does not burn down, component failures remain well isolated in small areas.
Since a DCS is a local network, there is the question of a communication protocol. DCS uses two types: polling and interrupt. In polling, the central unit queries every subsystem in turn for data updates, then starts over again, so the communication load is high but constant no matter what is happening. Interrupt mode is the reverse: each subsystem checks itself, stays silent while its data is unchanged, and only "says hello" on the network when something changes. Its usual traffic is low, so the bandwidth demand appears low; but when the process goes abnormal, a flood of alarm data swarms in, and if the bandwidth is insufficient, the network jams. So in the end, interrupt mode needs just as much bandwidth as polling, because nobody can bear the consequences of a communication jam at exactly the moment the production process goes abnormal.
Twenty years ago Honeywell was the first to "eat the crab" with DCS, and today Honeywell is still the industry leader, despite equipment so expensive that it earned the nickname Moneywell. Early DCS was full of proprietary, tailor-made hardware and software. Today, in the wave of "open systems" (open architecture), DCS vendors are shifting consoles, computing units, and network control units onto generic WINTEL or UNIX platforms, and concentrating instead on integrating the industrial control hardware (such as the basic controllers, including I/O) with the system software. But this raises new problems. The reliability of general-purpose commercial hardware and software often falls short of 24-hours-a-day, 365-days-a-year continuous operation. For most IT uses, a broken machine swapped out within two hours counts as fast; for a production process, that is intolerable. The open architecture also connects the DCS to the operations, management, and office networks, which greatly improves the speed, depth, and breadth of information exchange, but brings network security problems with it; so firewalls go up in front of the DCS, and data sharing and remote control are cut back to the minimum. On top of that, WINTEL changes by the day, so hardware and software versions never stay stable for long, and upgrading becomes a perpetual headache. This is the second spiral turn of DCS, though so far it has been more hovering than rising.
The territory of computer control keeps expanding, and USB-like technology is now reaching digital instrumentation. In the past, every instrument had to run its own signal wires to a marshaling panel and from there to the FTA, so even a device 10 meters away needed its own parallel cable run, which is wasteful. With a fieldbus (similar in spirit to USB), each instrument simply "hangs" on the bus, and one bus runs back to the DCS, greatly saving cost and time; expanding the system (say, adding a transmitter or a control valve) also becomes very convenient.
The greatest advantage of DCS is that it is programmable. This is not the simple ladder logic of a PLC (programmable logic controller, mostly used for electromechanical control), but "regular" programming in the manner of C or FORTRAN.
I have never worked in IT, so I can only compare against the computer language courses and course projects from school, but DCS programming has some distinctive features compared with ordinary programming. First, a DCS program is re-entrant: it runs periodically, over and over, rather than once from start to finish. A DCS program can therefore store data in memory during one run and pick it up again on the next, forming a so-called "recursive" mode of operation. This is both a strength and a weakness: if someone else changes your intermediate data between runs, you are in trouble, and it is not easy to find out who to blame. DCS programs are also real-time, so their execution depends critically on the sequence of events in time; get the timing wrong, and the old hen turns into a duck. On top of that, decentralized control wants things as decentralized as possible, not only for reliability but also for scheduling system resources, since spreading things out evens the computing load across the system. As a result, an application package often splits one huge program into many small ones, and their timing and interconnections have to be handled very carefully.
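The re-entrant, state-carrying style described above can be sketched as a controller object whose scan() is called once per cycle; the integral term is the "recursive" data that survives between runs. The PI form, tuning numbers, and toy process below are all illustrative.

```python
class PIController:
    """A DCS-style re-entrant routine: scan() runs once per scheduled cycle
    and keeps its state (the integral term) in memory between runs."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0   # the "recursive" data; if anything else
                              # overwrites it between scans, the loop misbehaves

    def scan(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Repeated scheduled runs, not one start-to-finish program:
pi = PIController(kp=2.0, ki=0.5, dt=1.0)
pv = 0.0
for _ in range(50):
    out = pi.scan(setpoint=10.0, measurement=pv)
    pv += 0.1 * (out - pv)    # crude stand-in for the process response
print(round(pv, 2))           # the loop settles at the setpoint
```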
Perhaps the biggest difference from academic control programs lies in the handling of abnormal situations. In a multivariable control problem, some variables are often left on manual while the rest run on automatic. This is a nuisance in theory and a nightmare in practice: you must consider not only all the permutations and combinations, but also the bumpless cut-in, cut-out, and switching between the different modes. You must also consider how the automatic control safely and automatically sheds itself under abnormal conditions and hands control back to the operator. Sometimes one sentence in the operating procedure turns into a full page of code; if the operating procedure says "act according to the situation," it is even worse. In a typical control program, the control calculation itself is usually no more than 30%, with some 20% for the human-machine interface and 50% for exception handling.
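One concrete piece of that exception handling, the bumpless ("smooth") cut-in, can be sketched as back-calculating the integral at the moment of switching, so the first automatic output matches the last manual output. The class and the numbers are illustrative, not any DCS's actual logic.

```python
class Loop:
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.mode = "MANUAL"
        self.integral = 0.0
        self.output = 0.0

    def to_auto(self, setpoint, pv):
        """Back-calculate the integral so switching to AUTO causes no bump."""
        error = setpoint - pv
        self.integral = (self.output - self.kp * error) / self.ki
        self.mode = "AUTO"

    def scan(self, setpoint, pv, manual_output=None):
        if self.mode == "MANUAL":
            self.output = manual_output       # operator sets the output directly
        else:
            error = setpoint - pv
            self.integral += error
            self.output = self.kp * error + self.ki * self.integral
        return self.output

loop = Loop(kp=1.0, ki=0.1)
loop.scan(setpoint=5.0, pv=4.0, manual_output=3.0)   # operator holds 3.0
loop.to_auto(setpoint=5.0, pv=4.0)
first_auto = loop.scan(setpoint=5.0, pv=4.0)
# 3.1: differs from the manual 3.0 only by one normal integral step.
# Without back-initialization (integral left at 0) it would jump to 1.1.
print(first_auto)
```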
Computer control did not automatically come with a more advanced and effective human-machine interface; from the very beginning, the interface has faced a tricky problem. A CRT screen is only so big, and there is no way to show all the process information at a single glance. The computer can keep changing the display, showing other equipment and plant sections page by page, but every section and device lives in its own frame, and without effective organization nothing is easy to find, like dumping hundreds of documents into one directory. The hierarchical menu is the traditional solution, but climbing up level by level and back down again is time-consuming; in an emergency you are often too late. Shortcut keys on a big keyboard can jump anywhere with one press, but they must be memorized by rote, and we are talking not about a dozen frames but hundreds or more. For a long time, how to navigate among frames effectively, reaching the needed frame in the shortest time with the fewest clicks and no rote memorization, has been a standing headache. Another problem in human-machine interface design is color. Remember WordStar, back in the DOS 2.0 days?
It was green text on a black background. CRTs then were short on brightness and short on life; a black background prolonged the tube's life, and green text boosted contrast and helped reading. The machine room was dark anyway, so the black background was easy on the eyes. By the time of WordPerfect 5.0, it was white text on a blue background: the harsh contrast between text and background was much reduced, and the blue background suited use in a bright room. By the era of Word, there were no more dark machine rooms, and displays became black text on a white ground, like paper; go back to green-on-black now and it hurts the eyes.
Control-room displays went through a similar journey. Early DCS displays were green on black, and in the WINTEL or UNIX era many people kept the green-on-black out of habit. But modern ergonomic research shows that a light background greatly reduces eye fatigue and reflects less glare under bright room lighting, so control-room displays have been evolving toward a light gray background. Ergonomics research also found that color itself can carry process information: in normal operation, everything should be in the most inconspicuous gray, with all graphics and data in different shades of gray, and color used only when a process parameter goes abnormal or an alarm comes in. That way the operator's attention is drawn immediately to exactly where it is needed.
Out of habit, however, many places still use a riot of colors to distinguish equipment states and parameters, even when everything is normal. It looks splendid, but in an abnormal situation it is like trying to pick out the enemy general from an army of ten thousand. The layout of the displays is also an art: too few will not do, but more is not better either; an operator's field of vision covers only a certain range, and the console's colors, structure, and lighting cannot be left to chance. This is not pampering for its own sake, but a requirement for keeping the operator in the most effective control of the production process. Traditionally, a control loop's performance was acceptable as long as the operator did not complain, and unless you wanted to improve things you did not go around retuning loops. Nowadays, in the pursuit of economic performance, process conditions are pushed to their limits, which poses a great challenge to control performance: every loop must be in top condition. And with the number of control loops growing rapidly, it is impossible to keep track of them all by manual observation alone. Hence the rise of control loop performance assessment.
In theory, an optimal controller can be designed for a process; one such is minimum variance control. It is a linear-quadratic optimal control with rather aggressive action, but it gives the theoretical limit: no controller can make the output variance any smaller. In the 1990s, the academic community proposed a method that uses closed-loop identification to determine this theoretical minimum variance directly from operating data, without identifying a process model; comparing the actual variance against the theoretical minimum then tells you whether the loop needs retuning. This was the first control loop performance assessment method, but in practice it is hard to separate out confounding effects, so it has not been used much.
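This closed-loop benchmark is usually called the Harris index. A rough sketch of the idea, assuming a simple AR noise model and a known (here, invented) dead time, might look like this; it is a teaching sketch, not production code.

```python
import numpy as np

def harris_index(y, delay, ar_order=10):
    """Fraction of output variance that is theoretically unavoidable given
    the dead time, estimated from closed-loop data alone (no process model)."""
    y = np.asarray(y, float) - np.mean(y)
    n = len(y)
    # Fit an AR model y_t = a1*y_{t-1} + ... + e_t by least squares.
    X = np.column_stack([y[ar_order - i:n - i] for i in range(1, ar_order + 1)])
    Y = y[ar_order:]
    a, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = np.var(Y - X @ a)
    # First `delay` impulse-response terms of 1/(1 - a1*q^-1 - ...).
    psi = [1.0]
    for j in range(1, delay):
        psi.append(sum(a[i - 1] * psi[j - i] for i in range(1, min(j, ar_order) + 1)))
    min_var = sigma2 * sum(p * p for p in psi)
    return min_var / np.var(y)   # near 1 = already minimum variance; near 0 = poor

rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
sluggish = np.zeros(5000)
for t in range(1, 5000):
    sluggish[t] = 0.9 * sluggish[t - 1] + e[t]   # a sluggish loop's output
print(round(harris_index(e, delay=1), 2))        # white noise: close to 1
print(round(harris_index(sluggish, delay=1), 2)) # far below 1: retune it
```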
But if you compare not against the theoretical optimum but against a practical ideal, many of the awkward theoretical problems can be bypassed. For example, a flow loop ought to settle within a minute, so one minute is its ideal value. The fast Fourier transform and frequency-domain analysis can compare ideal and actual performance and quickly determine the current state of a loop. Best of all, the computer can collect the data and run the calculations automatically, delivering a report every morning (or whenever you like); the control engineer can see at a glance which loops need retuning and which are fine, and act accordingly. Real-time frequency-domain analysis can also list all the loops oscillating at similar frequencies, and the control engineer can then follow the map to track down the bad apple.
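The oscillation-grouping idea can be sketched with an FFT: estimate each loop's dominant period and group loops whose peaks coincide. The tag names and signals below are invented.

```python
import numpy as np

def dominant_period(samples, dt):
    """Period (seconds) of the strongest spectral peak, ignoring the DC bin."""
    x = np.asarray(samples, float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    peak = np.argmax(spectrum[1:]) + 1      # skip bin 0 (the mean)
    return 1.0 / freqs[peak]

t = np.arange(0, 600, 1.0)                  # ten minutes of 1-second samples
loops = {
    "FC-101": np.sin(2 * np.pi * t / 60),            # 60 s oscillation
    "TC-205": 2 * np.sin(2 * np.pi * t / 60 + 1.0),  # same period, phase-shifted
    "PC-310": np.sin(2 * np.pi * t / 7),             # unrelated 7 s cycle
}
for tag, y in loops.items():
    print(tag, round(dominant_period(y, dt=1.0)))    # FC-101 and TC-205 match:
                                                     # likely one root cause
```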
The next step beyond performance assessment is, of course, automatic tuning. This amounts to a simplified, intermittently self-tuning PID controller; the theory is fine, but many practical reliability problems remain unsolved. There are plenty of products, few of them truly practical. A step further still is fault diagnosis of the production process. A fault is an abnormal situation, and an abnormality is whatever differs from normal, so the core of fault detection is how to detect that "difference."
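One widely used flavor of automatic tuning (not named in the text, so take it as an illustrative stand-in) is the relay auto-tuner: kick the loop into a small limit cycle with an on/off relay, read off the oscillation period Pu and amplitude a, and estimate the ultimate gain as Ku = 4d/(pi*a). A sketch against an invented first-order-plus-dead-time process:

```python
import math

def relay_test(d=1.0, dt=0.01, steps=20000):
    """Ideal relay of amplitude d driving tau*y' = -y + u(t - delay)."""
    tau, delay = 5.0, 1.0             # invented process parameters
    u_hist = [0.0] * int(delay / dt)  # transport-delay queue
    y, ys = 0.0, []
    for _ in range(steps):
        u = d if y < 0 else -d        # relay pushes against the error sign
        u_hist.append(u)
        y += dt * (-y + u_hist.pop(0)) / tau   # Euler integration
        ys.append(y)
    return ys

dt = 0.01
ys = relay_test(dt=dt)
tail = ys[len(ys) // 2:]              # keep only the settled limit cycle
a = (max(tail) - min(tail)) / 2       # oscillation amplitude
rising = [i for i in range(1, len(tail)) if tail[i - 1] < 0 <= tail[i]]
Pu = (rising[-1] - rising[0]) / (len(rising) - 1) * dt   # ultimate period
Ku = 4 * 1.0 / (math.pi * a)          # describing-function estimate
print(round(Pu, 2), round(Ku, 1))
kp, ti = 0.45 * Ku, Pu / 1.2          # a Ziegler-Nichols-style PI from the test
```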
The clues to a fault are always there; the problem is that industrial processes produce far too much data, and by the time you fish the clue out of the sea, the situation has often changed. In data analysis, PLS (which actually stands for Projection to Latent Structure, not Partial Least Squares) and Principal Component Analysis (PCA) are very popular. PLS and PCA fold a large number of correlated variables into a few "synthetic" variables, so a large system with many variables reduces to a small one, turning a needle in a haystack into a needle in a bowl. The needle retrieved is no longer a single variable but a combination of variables, which matches the fact that the early sign of a fault is often a combination of several variables, invisible in any one or two variables alone.
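A minimal PCA sketch (via SVD, with invented data standing in for correlated process variables such as several temperatures along one column):

```python
import numpy as np

def pca_fit(X, n_components):
    """Fold many correlated variables into a few synthetic ones."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal directions come from the SVD of the centered data,
    # which is numerically sturdier than eigendecomposing the covariance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)
    return mean, Vt[:n_components], explained[:n_components]

rng = np.random.default_rng(1)
# 200 samples of 6 process variables all driven by one hidden factor plus noise.
factor = rng.standard_normal(200)
X = np.outer(factor, [1.0, 0.8, 1.2, 0.9, 1.1, 0.7]) \
    + 0.1 * rng.standard_normal((200, 6))
mean, components, explained = pca_fit(X, 1)
print(round(explained[0], 2))   # one synthetic variable captures almost
                                # all the variance of six raw ones
```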
PLS and PCA also combine well with graphical methods. For example, divide each synthetic variable by its normal value, so that every synthetic variable nominally equals 1, and draw them all on a "spider chart," each leg representing one synthetic variable. Since every nominal value is 1, the spider chart is roughly a circle; if any leg changes, the spider goes out of round, the abnormality is spotted at a glance, and you can go looking for the early sign of a fault.
An alternative graphical approach to data analysis is so-called parallel coordinates. This is something IBM worked out in its early years. There is next to no theory in it; it just requires a change of perspective, like taking one step back. Ordinarily, data points in more than three dimensions cannot be drawn. But if you draw all the coordinate axes of three-dimensional space as parallel lines, instead of the usual rectangular axes, then a point in three-dimensional space becomes a broken line crossing three parallel axes. If that were all, it would be a simple but silly mathematical game. The beauty of the parallel coordinate system is that you can draw as many parallel lines as you like: 5, 20, or even 3,000 dimensions, and as long as the paper is big enough, you can draw it and actually see it, not just imagine it. Parallel coordinates have only one drawback: they can only express discrete points, and continuous lines or surfaces are hard to represent. But that is no problem for computer-collected data, which is discrete points anyway. Draw masses of data points as clusters of broken lines in parallel coordinates, and you can see the patterns in the data intuitively.
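The construction is almost embarrassingly simple, which is the point. A sketch with no plotting library, producing just the polyline vertices (the variables and their ranges are invented):

```python
def to_polyline(point, lows, highs):
    """Map one n-dimensional point to (axis_index, height) polyline vertices,
    scaling each variable to [0, 1] on its own parallel axis."""
    return [(i, (v - lo) / (hi - lo))
            for i, (v, lo, hi) in enumerate(zip(point, lows, highs))]

# Three "operating points" of a hypothetical 5-variable process.
lows  = [0, 50, 1.0, 300, 10]
highs = [100, 150, 5.0, 400, 90]
normal_a = [50, 100, 3.0, 350, 50]
normal_b = [55, 105, 3.1, 345, 52]
abnormal = [52, 102, 4.8, 352, 51]   # third variable far out of the family

for pt in (normal_a, normal_b, abnormal):
    # Plotted, the two normal polylines travel together; the abnormal one
    # breaks away visibly at the third axis.
    print([round(h, 2) for _, h in to_polyline(pt, lows, highs)])
```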
Another idea for fault detection is to identify the whole process. The identified model describes the system's behavior, and a fault is, of course, a change in behavior; so by comparing a model identified in real time against the normal model, you can judge whether the system is abnormal or faulty.
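A sketch of the idea with an invented first-order process: identify the model parameters from normal data, then watch the identified parameters drift when the process changes (say, a valve loses gain).

```python
import numpy as np

def identify_first_order(u, y):
    """Least-squares fit of y[t] = a*y[t-1] + b*u[t-1] from logged data."""
    X = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta   # (a, b)

def simulate(a, b, u, noise):
    y = np.zeros(len(u))
    for t in range(1, len(u)):
        y[t] = a * y[t - 1] + b * u[t - 1] + noise[t]
    return y

rng = np.random.default_rng(2)
u = rng.standard_normal(2000)
noise = 0.05 * rng.standard_normal(2000)
y_normal = simulate(0.9, 0.5, u, noise)    # healthy process
y_fault  = simulate(0.9, 0.2, u, noise)    # e.g. a fouled valve: gain dropped

a0, b0 = identify_first_order(u, y_normal)
a1, b1 = identify_first_order(u, y_fault)
print(round(b0, 2), round(b1, 2))   # the identified gain exposes the change
```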
Another use of computers and models is simulation. (In Chinese, simulation shares a name with "analog," as in analog circuits, so a separate term is now preferred to avoid confusion.) Given a sufficiently accurate model of the actual process, a computer can imitate the behavior of the real system. There is static simulation and dynamic simulation. Static simulation solves a huge system of simultaneous nonlinear equations, with the differential equations describing spatial distributions discretized by finite elements. Modern static simulation can be quite accurate, but only on the strength of models "run in" over years against actual process data. Static simulation is widely used in the design and rating of process equipment, but it is of limited use for the real behavior of the actual process, because simulating the whole production process must account for the timing of every piece of equipment and the influence of the control loops, none of which static simulation can reflect. Dynamic simulation solves the same huge system, now of simultaneous differential equations; because it must run in real time or faster, it generally has to be greatly simplified, or the computation cannot keep up.
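A toy dynamic simulation in the spirit described above: a tank level integrated with Euler steps. The model and all its parameters are invented for illustration.

```python
import math

def simulate_tank(h0, f_in, area, k, dt, steps):
    """Euler integration of dh/dt = (F_in - k*sqrt(h)) / A for a draining tank."""
    h = h0
    history = [h]
    for _ in range(steps):
        outflow = k * math.sqrt(max(h, 0.0))   # gravity-driven outflow
        h += dt * (f_in - outflow) / area
        history.append(h)
    return history

levels = simulate_tank(h0=1.0, f_in=2.0, area=5.0, k=1.0, dt=0.5, steps=400)
# Analytic steady state: f_in = k*sqrt(h)  =>  h = (f_in/k)^2 = 4.0
print(round(levels[-1], 2))
```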
Simulation is very useful in industry. Modern plants are ever more stable and safe, and many operators go their whole careers without meeting real danger. But never having met it does not mean never meeting it: operators must get enough training so that when a dangerous situation does arrive, they can first identify the fault promptly and correctly, and then respond promptly and correctly. That depends on simulator training. Modern plants also keep pushing the limits of their process parameters, which calls for all kinds of tests; with simulation, the idea behind a test, and the handling of any emergency, can be verified in advance. Simulation is the control engineer's good helper: a new control loop is first tried out on the simulator to get initial tuning parameters and verify its handling of abnormal situations, and only then put on the real thing, which avoids a lot of unnecessary surprises.
A distant relative of simulation is real-time optimization (RTO). For a modern industry that haggles over every ounce, real-time optimization is naturally attractive: treat the whole production process as one big real-time simulation and compute the optimal operating point in real time (in practice, every hour or so). The idea is good; the difficulties are many. First, a huge system of equations does not converge easily, so it is divided into many blocks, each solved separately and then stitched together. The problem is the stitching: what if the boundary conditions do not close? The model is always heavily simplified, and while some parameters correspond to actual measurements, others do not; these are the "fudge factors." The fudge factors are there to clean up the mess: if the boundaries do not close, adjust the factors until they do. The trouble is that much of the time this trick does not work, so RTO's horn is blown very loudly while the real use is very little, and more than a few plants have spent a great deal of money only to give it up in the end.
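A stripped-down sketch of the RTO cycle just described: first reconcile the model to a plant measurement via a "fudge factor," then re-optimize the operating point on the corrected model. The model, numbers, and grid search are all invented for illustration.

```python
def model_yield(temp, efficiency):
    """Toy steady-state model: yield peaks somewhere inside the temp range."""
    return efficiency * (1.0 - 0.0004 * (temp - 350.0) ** 2)

def reconcile(measured_yield, current_temp):
    """Back-calculate the efficiency (the fudge factor) so the model
    reproduces what the plant actually measured."""
    base = 1.0 - 0.0004 * (current_temp - 350.0) ** 2
    return measured_yield / base

def optimize(efficiency, lo=300.0, hi=400.0, step=0.5):
    """Grid search for the temperature maximizing the reconciled model."""
    best = max((model_yield(t, efficiency), t)
               for t in [lo + i * step for i in range(int((hi - lo) / step) + 1)])
    return best[1]

# The plant currently runs at 330 degC and measures a yield of 0.80.
eff = reconcile(measured_yield=0.80, current_temp=330.0)
print(round(optimize(eff), 1))   # recommended setpoint: the model's peak
```

Note the weak spot the text complains about: the reconciliation step silently blames every mismatch on one coefficient, so if the model's shape is wrong, the "optimum" is wrong too, however well the boundaries close.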
As in war, it is the soldier, not the weapon, that decides the battle. The key to a successful control loop is the control engineer, not the expensive computer or some "universal" mathematical control theory. In Canada, the control "major" within a chemical engineering department must take the full chemical engineering credits and then add the control credits on top, so the requirement is a bit higher than for the average chemical engineer. Taking the full chemical credits matters: without standing in chemical engineering, you cannot do chemical process control well, and this is what was lacking in China (at least when I was at university more than twenty years ago). It is just like being a doctor: only with a deep understanding of physiology and pathology, and of the particular patient, can one reliably judge the illness and reliably write the prescription. Someone who only fills prescriptions off a list is not a doctor but a pharmacist. In practice, a control engineer should understand the dynamic behavior of the process at least as well as the process engineer and the operator do.
Much of the time, the control engineer's mission is to codify and automate the experience and knowledge of process engineers and operators. If you do not understand it yourself, how can you do that? A good control engineer can take a shift at the console when the operator is away, and make process decisions when the process engineer is away. But the control engineer is neither a process engineer nor an operator. A control engineer should master all the areas touched on above, from mathematical control theory to computer networking to ergonomics to process and instrumentation knowledge. This is demanding, but not unrealistic; these are the diamond drills you need before taking on the porcelain job. It is also why industry is now keen to hire control graduates with master's degrees: there is simply too much necessary knowledge to learn. Comparing us to doctors may smack of aiming high and falling short; in any case, professional knowledge is only half the battle, and a control engineer must also be good with people. Process engineers are the easier case, having a similar background; but the operator is the key to the success or failure of a control system. If you do not win the operator's trust and cooperation, for you personally and for your control system, the system will likely be switched off permanently: the operator would rather run on manual, the blame lands on "the unreliable control system," and you end up in the wrong with everyone. With the operators' trust and cooperation, everything runs the other way: they will volunteer improvement suggestions and new ideas, look for chances to help you test new functions, and push the performance limits of the control system for you. If the customer is god, then the operator, not the department head, is the control engineer's god.
A control engineer must also be good at dealing with the leadership; after all, projects and money still come from above. Presentations, reports, project control and management, and dealing with suppliers are all essential skills.
Process engineers are engineers too, but in the army of engineering they are the conventional troops, deployed in large formations. Control engineers are more like special forces: few in number and odd in their ways (at least to the process people, who never quite understand what the control people are doing or how), handling everything from planning to implementation to maintenance. The development of control theory has been a search for a "magic bullet" that is "universally applicable": the ultimate goal is a unified mathematical tool that can "solve" any specific control problem without a deep understanding of the physics, chemistry, and other characteristics of the specific process. Every major advance in control theory brings the hope that "this time we have finally found it." But every hope brings new disappointment: the new methods and new tools solve the old problems, yet bring new limitations, sometimes even circling back to where we started. The new limitations are often harder than the old problems, and demand more understanding of the process, not less. The spear and the shield battle on, in a spiral rise.
But reality often runs counter to what people know. In the tide of commercialization, companies selling advanced control algorithms guarantee that their "universal" mathematical control tools will solve all control problems; after all, the belief that technology can do anything prevails not only in the American military but throughout North American corporate culture. Until one day people find that the perpetual motion machine is still a myth, that no one can yet walk on water, and remember that there is no such free lunch in this world. But that is an aside.
I knew from the start that this series would run long and rambling, but having begun, I had to finish it. I hope it has not taken up too much bandwidth or wasted everyone's time; I hope it offers friends interested in automatic control a bit of an introduction, and passes along some experience from practice. For the many places where I am wrong, please forgive me, and thank you for reading.