
A Modified Adaptive Data-Enabled Policy Optimization Control to Resolve State Perturbations

De Iuliis, Vittorio;
2025-01-01

Abstract

This paper proposes modifications to the data-enabled policy optimization (DeePO) algorithm to mitigate state perturbations. DeePO is an adaptive, data-driven approach that iteratively computes a feedback gain equivalent to the certainty-equivalence LQR gain. Like other data-driven approaches based on Willems’ fundamental lemma, DeePO requires persistently exciting input signals, which linear state-feedback LQR gains cannot inherently produce. Conventionally, probing noise is therefore added to the control signal to ensure persistent excitation, but this added noise may induce undesirable state perturbations. We first identify two key issues that jeopardize the desired performance of DeePO when probing noise is omitted: the convergence of the states to the equilibrium point, and the convergence of the controller to its optimal value. To address these challenges without relying on probing noise, we propose Perturbation-Free DeePO (PFDeePO), built on two fundamental principles. First, the algorithm pauses the DeePO control gain update when the system states are near the equilibrium point. Second, once the controller has converged, it applies multiplicative noise with mean value 1 as a gain on the control signal. This approach minimizes the impact of noise as the system approaches equilibrium while preserving stability. We demonstrate the effectiveness of PFDeePO through simulations, showcasing its ability to eliminate state perturbations while maintaining system performance and stability.
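The two principles described in the abstract can be sketched as a single control step. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, the norm-based equilibrium test, the threshold `eps_state`, and the Gaussian mean-1 multiplicative gain are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def pfdeepo_step(x, K, eps_state=1e-3, noise_std=0.1, controller_converged=False):
    """One hypothetical PFDeePO control step (illustrative only).

    Principle 1: pause the DeePO gain update when the state is near equilibrium.
    Principle 2: once the controller has converged, excite the system with a
    multiplicative gain of mean 1 on the control signal, instead of additive
    probing noise, so the excitation vanishes as the state approaches zero.
    """
    # Principle 1: only flag a gain update when the state is away from equilibrium.
    update_gain = bool(np.linalg.norm(x) > eps_state)

    # Nominal linear state feedback, as in an LQR design.
    u = K @ x

    # Principle 2: mean-1 multiplicative excitation after controller convergence.
    # Because it multiplies u = Kx, the perturbation scales down with the state.
    if controller_converged:
        u = (1.0 + noise_std * rng.standard_normal()) * u

    return u, update_gain
```

Note the contrast with additive probing noise u = Kx + e, which keeps perturbing the system even at the origin: here the excitation is proportional to the control signal itself, so it decays to zero together with the state.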


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12078/32946
