Adaptive control



This article explains concepts related to adaptive control in control systems.

Introduction

Adaptive control is the control method used by a controller which must adapt to a controlled system whose parameters vary or are initially uncertain. For example, as an aircraft flies, its mass slowly decreases as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control differs from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters: robust control guarantees that the control law need not be changed if the changes stay within given bounds, whereas adaptive control is concerned with control laws that change themselves.

Parameter estimation

The foundation of adaptive control is parameter estimation. Common methods of estimation include recursive least squares and gradient descent. Both of these methods provide update laws that are used to modify estimates in real time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and to establish convergence criteria (typically persistent excitation). Projection and normalization are commonly used to improve the robustness of estimation algorithms.
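
As a minimal illustration of such an update law, the sketch below (Python) estimates the parameters of a model that is linear in the parameters, y = θᵀφ, using a normalized gradient update and a simple projection onto known bounds; the "true" parameters, adaptation gain, and bounds are illustrative assumptions.

    import numpy as np

    # Minimal sketch of a normalized gradient estimator for a model that is
    # linear in the parameters, y = theta^T * phi.  The "true" parameters,
    # adaptation gain, and bounds below are illustrative assumptions.
    rng = np.random.default_rng(0)
    theta_true = np.array([2.0, -1.0])        # unknown plant parameters
    theta_hat = np.zeros(2)                   # initial estimate
    gamma = 0.5                               # adaptation gain (assumed)
    lo, hi = -5.0, 5.0                        # known parameter bounds for projection

    for t in range(200):
        phi = rng.uniform(-1.0, 1.0, size=2)  # regressor; a rich input gives persistent excitation
        y = theta_true @ phi                  # measured output
        e = y - theta_hat @ phi               # prediction error
        # normalized gradient update law
        theta_hat = theta_hat + gamma * e * phi / (1.0 + phi @ phi)
        # projection keeps the estimate inside the known bounds (robustness)
        theta_hat = np.clip(theta_hat, lo, hi)

    print("estimate:", theta_hat, "true:", theta_true)

Recursive least squares follows the same pattern but replaces the scalar gain with a covariance matrix that is updated alongside the estimate.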

Classification of adaptive control techniques

In general one should distinguish between:

  1. Feedforward Adaptive Control
  2. Feedback Adaptive Control

as well as between

  1. Direct Methods and
  2. Indirect Methods

Direct methods are those in which the estimated parameters are used directly in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate the required controller parameters, as in the sketch below.
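
As an illustration of the indirect route, the following sketch (all plant values, gains, and the reference-model pole are assumed for the example) identifies a first-order plant with a normalized gradient estimator and recomputes the controller gains from the current estimates at every step, i.e. by certainty equivalence.

    import numpy as np

    # Indirect adaptive control sketch for an assumed first-order plant
    #   x[k+1] = a*x[k] + b*u[k]
    # Step 1: estimate a and b online; step 2: compute the controller gains
    # from the estimates (certainty equivalence) so that the closed loop
    # matches the reference model x_m[k+1] = a_m*x_m[k] + b_m*r[k].
    a_true, b_true = 0.7, 0.5          # unknown plant parameters
    a_m, b_m = 0.5, 0.5                # desired closed-loop dynamics
    a_hat, b_hat = 0.0, 1.0            # initial estimates
    gamma = 0.8                        # adaptation gain (assumed)

    x = 0.0
    rng = np.random.default_rng(1)
    for k in range(500):
        r = rng.choice([-1.0, 1.0])                     # exciting reference signal
        k_x = (a_m - a_hat) / b_hat                     # gains computed from the *estimates*
        k_r = b_m / b_hat
        u = k_x * x + k_r * r
        x_next = a_true * x + b_true * u                # plant response
        phi = np.array([x, u])
        e = x_next - (a_hat * x + b_hat * u)            # prediction error
        a_hat, b_hat = np.array([a_hat, b_hat]) + gamma * e * phi / (1.0 + phi @ phi)
        b_hat = max(b_hat, 0.1)   # projection: keep b_hat away from zero (sign of b assumed known)
        x = x_next

    print(f"a_hat={a_hat:.3f} (true {a_true}), b_hat={b_hat:.3f} (true {b_true})")

A direct method would instead adapt the controller gains k_x and k_r themselves, without forming explicit estimates of the plant parameters.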

There are several broad categories of feedback adaptive control (classification can vary):

  • Dual Adaptive Controllers [based on dual control theory]
    1. Optimal Dual Controllers [difficult to design]
    2. Suboptimal Dual Controllers
  • Nondual Adaptive Controllers
    1. Adaptive Pole Placement
    2. Extremum Seeking Controllers
    3. Iterative learning control
    4. Gain scheduling
    5. Model Reference Adaptive Controllers (MRACs) [incorporate a reference model defining desired closed-loop performance]
       • Gradient Optimization MRACs [use a local rule for adjusting parameters when performance differs from the reference, e.g. the "MIT rule"; a sketch follows the figures below]
       • Stability Optimized MRACs
    6. Model Identification Adaptive Controllers (MIACs) [perform system identification while the system is running]
       • Cautious Adaptive Controllers [use current SI to modify the control law, allowing for SI uncertainty]
       • Certainty Equivalent Adaptive Controllers [take current SI to be the true system, assuming no uncertainty]
         • Nonparametric Adaptive Controllers
         • Parametric Adaptive Controllers
           1. Explicit Parameter Adaptive Controllers
           2. Implicit Parameter Adaptive Controllers

Figure 1 – MRAC

Figure 2 – MIAC
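
As a concrete example of a gradient-optimization MRAC (Figure 1), the sketch below adapts a single feedforward gain with the MIT rule so that an assumed first-order plant with unknown gain tracks its reference model; a normalized form of the MIT rule is used here to keep the step size well behaved, and all numerical values are illustrative assumptions.

    import numpy as np

    # MRAC sketch: adapt a feedforward gain theta (u = theta * r) with the
    # MIT rule so the plant output y tracks the reference-model output y_m.
    # The discrete first-order plant/model and all gains are assumptions.
    a = 0.8            # shared plant/model pole
    k_p = 2.0          # unknown plant gain
    k_m = 1.0          # reference-model gain (defines desired behaviour)
    gamma = 0.02       # adaptation gain (assumed)

    theta = 0.0
    y = y_m = 0.0
    for k in range(2000):
        r = 1.0 if np.sin(0.05 * k) >= 0 else -1.0   # square-wave reference
        u = theta * r                                # adjustable controller
        y = a * y + k_p * u                          # plant
        y_m = a * y_m + k_m * r                      # reference model
        e = y - y_m                                  # tracking error
        # MIT rule: descend the gradient of 0.5*e^2, with y_m standing in for
        # the sensitivity derivative; normalization keeps the step bounded.
        theta = theta - gamma * e * y_m / (1.0 + y_m * y_m)

    print(f"theta={theta:.3f} (ideal k_m/k_p = {k_m / k_p:.2f})")

With an exciting reference and a small adaptation gain, theta settles near k_m/k_p, at which point the closed loop reproduces the reference model.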
