This thesis focuses on the unconstrained and constrained minimum time problems, in particular on regularity, numerical approximation, feedback and synthesis aspects. We first consider the problem of small-time local controllability for nonlinear, finite-dimensional, time-continuous control systems in the presence of state constraints. More precisely, given a nonlinear control system subject to state constraints and a closed set S, we provide sufficient conditions to steer every point of a suitable neighborhood of S to S along admissible trajectories of the system that respect the constraints, and we also give an upper estimate of the minimum time needed for each point to reach the target.
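For orientation, the constrained problem just described can be written in a standard form (a generic sketch: the dynamics f, control set U and constraint set K are placeholder symbols, not notation taken from the thesis):

\[
\dot y(t) = f\bigl(y(t),u(t)\bigr),\quad u(t)\in U,\qquad y(0)=x,
\]
\[
T_K(x) := \inf\bigl\{\, t\ge 0 \;:\; \text{there is an admissible } u(\cdot) \text{ with } y(s;x,u)\in K \text{ for } s\in[0,t] \text{ and } y(t;x,u)\in S \,\bigr\},
\]

with T_K(x) = +\infty when no admissible constrained trajectory reaches S. Small-time local controllability near S then amounts to T_K being finite on a neighborhood of S inside the constraint set, and the upper estimate mentioned above bounds T_K(x), typically in terms of the distance from x to S.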
Then, in the framework of control-affine nonlinear systems, we provide sufficient conditions to reach a target for a suitable discretization of a given dynamics. We use an approach based on Hamilton-Jacobi theory to prove the convergence of the solution of a fully discrete scheme to the (true) minimum time function, together with error estimates. We also design an approximate suboptimal discrete feedback and provide an error estimate for the time needed to reach the target through the discrete dynamics generated by this feedback. We next propose a new formulation of the minimum time problem in which we employ the signed minimum time function, positive outside the target, negative in its interior and zero on its boundary. Under standard assumptions, we prove the so-called Bridge Dynamic Programming Principle (BDPP), a relation between the value functions defined on the complement of the target and in its interior. Owing to the BDPP, we then obtain error estimates for a semi-Lagrangian discretization of the resulting Hamilton-Jacobi-Bellman equation. The remainder of the thesis is devoted to an approach for computing the approximate minimum time function of control problems based on reachable set approximation. The theoretical justification of the proposed approach is restricted to a class of linear control systems and uses arithmetic operations for convex sets. An error estimate for the fully discrete reachable set is provided in the Hausdorff distance, and the detailed procedure for solving the corresponding discrete problem is described. Under standard assumptions, by means of convex analysis and the known regularity of the true minimum time function, we estimate the error of its approximation. Finally, we reconstruct discrete suboptimal trajectories that reach a set of supporting points from a given target for a class of linear control problems, and we prove the convergence of the discrete optimal controls by means of nonsmooth and variational analysis.
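As an illustration of the semi-Lagrangian machinery referred to above, here is a standard sketch under the usual assumptions; the Kruzhkov rescaling and the interpolation operator are generic choices and not necessarily the exact ones used in the thesis. Setting v(x) = 1 - e^{-T(x)}, where T is the minimum time function, v is characterized (in the viscosity sense) by the Hamilton-Jacobi-Bellman equation

\[
v(x) + \sup_{u\in U}\bigl\{ -f(x,u)\cdot \nabla v(x) \bigr\} = 1 \quad \text{outside the target } S, \qquad v = 0 \ \text{on } \partial S,
\]

and a fully discrete semi-Lagrangian scheme on a grid \{x_i\} with time step h reads

\[
V_i = \min_{u\in U}\Bigl\{ e^{-h}\, I[V]\bigl(x_i + h\, f(x_i,u)\bigr) \Bigr\} + 1 - e^{-h}, \qquad V_i = 0 \ \text{for } x_i \in S,
\]

where I[V] denotes interpolation of the grid values. The fixed point V_h of this scheme, computable by value iteration because the right-hand side is a contraction with factor e^{-h}, approximates v, and the minimum time is recovered as T \approx -\ln(1-V_h); the error estimates and the discrete feedback mentioned above are stated at this level, also when T fails to be Lipschitz continuous.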

The thesis is devoted to finite-dimensional minimum time problems, both with and without state constraints, with particular attention to regularity, numerical approximation, and related synthesis aspects. We first consider the problem of small-time local controllability with state constraints: sufficient conditions are given for steering a trajectory of the given system to a target in finite time without violating the constraints, together with an estimate of the time required. In the setting of control-affine problems, sufficient conditions are given for controllability with respect to a particular discretization of the dynamics. This result is motivated by an approach to the approximation of the minimum time T based on its characterization through a Hamilton-Jacobi equation. The contribution of this part of the thesis consists of a theoretical result extending the existing theory to the case in which T is not Lipschitz (that is, under weak controllability assumptions) and of the construction of an approximate feedback with the corresponding error estimate. A new formulation of the minimum time problem is also proposed, in which a negative time is used once the trajectory has entered the interior of the target, with the aim of reducing the approximation error near the boundary. A new version of the dynamic programming principle (the "Bridge Principle") is proved, establishing a relation between the minimum time inside and outside the target. A discretization of the corresponding Hamilton-Jacobi equation is then studied and error estimates are provided. The final part of the thesis introduces a new approach to the approximate computation of T, valid for linear systems, based on the approximation of reachable sets through the arithmetic of convex sets. Error estimates in the Hausdorff distance are given for the reachable sets and for the minimum time. Discrete suboptimal trajectories are also constructed, and the convergence of the corresponding controls to the optimal control is proved.
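To make the reachable set approach concrete, the following is a small illustrative Python sketch, not code from the thesis: for a linear system \dot y = Ay + Bu with a symmetric control interval and the origin as target, the support function of the set of states steerable to the origin within time t is approximated by a simple quadrature, and the approximate minimum time at a point x is the first grid time at which x passes the support function test along finitely many directions. The double integrator data, the direction grid, the time grid and the trapezoidal quadrature are all hypothetical choices; the thesis works with set-valued Runge-Kutta and quadrature methods and proves error estimates in the Hausdorff distance.

import numpy as np
from scipy.linalg import expm

# Hypothetical example: double integrator y1' = y2, y2' = u, |u| <= 1, target = {0}.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def support_U(w):
    # Support function of the control set U = [-1, 1]: delta*(w, U) = |w|.
    return float(np.abs(w).sum())

def support_backward_reachable(l, t, n_sub=200):
    # Support function, in direction l, of the convex set of states steerable to
    # the origin within time t:  delta*(l) = int_0^t delta*(B^T e^{-s A^T} l, U) ds,
    # approximated here by a trapezoidal quadrature.
    s = np.linspace(0.0, t, n_sub)
    vals = np.array([support_U(B.T @ expm(-s_k * A.T) @ l) for s_k in s])
    return float(np.trapz(vals, s))

def approx_min_time(x, directions, t_grid):
    # First grid time t at which <x, l> <= delta*(l, t) holds for every chosen
    # direction l (an outer test, since only finitely many directions are used).
    for t in t_grid:
        if all(x @ l <= support_backward_reachable(l, t) + 1e-12 for l in directions):
            return t
    return np.inf

# Usage with coarse grids of directions and times.
angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
directions = [np.array([np.cos(a), np.sin(a)]) for a in angles]
t_grid = np.linspace(0.0, 4.0, 81)
# Expected to print a time close to the exact value 2*sqrt(0.5) = 1.414... .
print(approx_min_time(np.array([0.5, 0.0]), directions, t_grid))

Because only finitely many directions are tested, the computed time can slightly underestimate the true minimum time; refining the direction grid, the time grid and the quadrature tightens the approximation, in the spirit of the Hausdorff distance error estimates mentioned above.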

Results on controllability and numerical approximation of the minimum time function / Le, Thi Thien Thuy. - (2016 Jan 20).

Results on controllability and numerical approximation of the minimum time function

Le, Thi Thien Thuy
2016

Higher order scheme, Lie brackets, error estimates, approximate feedback, minimum time function, dynamic programming, controllability, Hamilton-Jacobi, reachable sets, linear control problems, set-valued Runge-Kutta methods, set-valued quadrature methods, support functions
File: Le_Thi_Thien_Thuy_thesis.pdf (open access). Type: doctoral thesis. License: not specified. Size: 5.23 MB. Format: Adobe PDF.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3424319