Posts
Learn Like a Kindergartener
This is a Bilibili series designed to make advanced topics accessible to everyone. Each episode is 6-10 minutes long and covers topics such as control, estimation, and optimization. The series explores the mathematical beauty of classical theories from perspectives beyond standard textbooks. I hope you enjoy it!
PHR Conic Augmented Lagrangian Method
Slide: Starting from the penalty method, we extend to the augmented Lagrangian method for improved stability. By introducing a slack variable \(s\), symmetric cone constraints are incorporated, forming a unified framework for solving constrained optimization problems iteratively. Inspired by Dr. Zhepei Wang's Lecture "Numerical Optimization for Robotics".
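To make the penalty-to-ALM step concrete, here is a minimal sketch of the classical augmented Lagrangian loop on an equality-constrained toy problem (the objective, constraint, and all step sizes are illustrative assumptions, not taken from the slides):

```python
import numpy as np

# Augmented Lagrangian sketch for the illustrative toy problem
#   min x1^2 + 2*x2^2   s.t.   x1 + x2 - 1 = 0.
# Unlike a pure penalty method, the multiplier update lets rho stay
# moderate while the constraint violation is driven to zero.
f_grad = lambda x: np.array([2 * x[0], 4 * x[1]])   # gradient of objective
h      = lambda x: x[0] + x[1] - 1.0                # equality constraint
h_grad = np.array([1.0, 1.0])                       # constraint gradient

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(10):                        # outer ALM iterations
    for _ in range(200):                   # inner minimization (plain GD)
        g = f_grad(x) + (lam + rho * h(x)) * h_grad
        x -= 0.05 * g
    lam += rho * h(x)                      # multiplier update
print(x)   # → approaches the analytic solution (2/3, 1/3)
```

The analytic optimum is \(x^* = (2/3, 1/3)\) with multiplier \(\lambda^* = -4/3\); the outer loop converges there without driving \(\rho \to \infty\).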
EKF, UKF & Particle Filter
Slide: Evolving from the classic Kalman Filter, the EKF, UKF, and Particle Filter address nonlinear estimation through local linearization, deterministic sampling, and stochastic sampling, respectively, forming the cornerstone of modern state estimation.
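The stochastic-sampling idea behind the particle filter can be sketched in a few lines. This is a minimal bootstrap particle filter on an assumed 1-D model (the measurement function, noise levels, and data below are illustrative, not from the slides):

```python
import numpy as np

# Minimal bootstrap particle filter: propagate by sampling the process
# model, weight by the measurement likelihood, then resample.
# Assumed model: random-walk state, measurement y = x + 0.1*x**3 + noise.
rng = np.random.default_rng(0)

def pf_step(particles, y, q=0.1, r=0.5):
    # 1. Propagate each particle through the stochastic process model
    particles = particles + rng.normal(0.0, np.sqrt(q), size=particles.shape)
    # 2. Weight by the Gaussian likelihood of the nonlinear measurement
    h = particles + 0.1 * particles**3
    w = np.exp(-0.5 * (y - h) ** 2 / r)
    w /= w.sum()
    # 3. Multinomial resampling to avoid weight degeneracy
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

particles = rng.normal(1.0, 1.0, size=500)   # samples from the prior
for y in [1.1, 0.9, 1.0]:                    # synthetic measurements
    particles = pf_step(particles, y)
print(particles.mean())                      # posterior mean estimate
```

Where the EKF linearizes \(h\) and the UKF pushes a fixed set of sigma points through it, the particle filter simply evaluates \(h\) at every random sample, which is why it handles severe nonlinearity and multimodality at a higher computational cost.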
Factor Graph for Pose Estimation
Slide: Overview of factor graphs in pose estimation, emphasizing their benefits over Kalman Filters for handling complex dependencies. Covers dynamic Bayesian networks, information forms, and smoothing for efficient state estimation.
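The information-form idea can be shown on a tiny 1-D pose graph. A prior factor and two odometry factors are stacked as linear residuals, and the MAP estimate solves the normal equations (the numbers are illustrative assumptions):

```python
import numpy as np

# Tiny 1-D pose chain p0, p1, p2 in information form.
# Factors: prior p0 ~ 0, odometry p1 - p0 ~ 1.0, odometry p2 - p1 ~ 1.1.
# Each factor contributes one row of the linear system A p = b.
A = np.array([[ 1.0, 0.0, 0.0],   # prior on p0
              [-1.0, 1.0, 0.0],   # odometry factor p1 - p0
              [ 0.0,-1.0, 1.0]])  # odometry factor p2 - p1
b = np.array([0.0, 1.0, 1.1])

Lam = A.T @ A            # information matrix: sparse, mirrors graph structure
eta = A.T @ b            # information vector
p = np.linalg.solve(Lam, eta)
print(p)                 # → [0.0, 1.0, 2.1]
```

The information matrix `Lam` is sparse with fill-in only between poses that share a factor, which is exactly the structure smoothing exploits and a Kalman Filter's dense marginal cannot.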
Kalman Filter in 3 Ways
Slide: Discover the Kalman Filter through the Geometric perspective of orthogonal projection, the Probabilistic perspective of Bayesian filtering, and the Optimization perspective of weighted least squares. Inspired by Ling Shi's 2024-25 Spring lecture "Networked Sensing, Estimation and Control".
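One predict/update cycle of a scalar Kalman filter makes the three perspectives tangible: the gain below is simultaneously an orthogonal projection coefficient, a Bayesian posterior update, and the solution of a weighted least-squares problem. The model and numbers are illustrative assumptions:

```python
# Scalar Kalman filter step for the assumed model
#   x_{k+1} = a x_k + w,  y_k = c x_k + v,  w ~ N(0,Q),  v ~ N(0,R).
def kf_step(x, P, y, a=1.0, c=1.0, Q=0.01, R=0.1):
    # Predict: push mean and variance through the linear dynamics
    x_pred = a * x
    P_pred = a * P * a + Q
    # Update: the gain K weights prior confidence against measurement noise
    K = P_pred * c / (c * P_pred * c + R)
    x_new = x_pred + K * (y - c * x_pred)
    P_new = (1 - K * c) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                 # diffuse prior
x, P = kf_step(x, P, y=1.0)
print(x, P)                     # mean moves toward y, variance shrinks
```

With a confident measurement (R small relative to P), K is close to 1 and the posterior mean lands near y; the same arithmetic falls out of minimizing the weighted least-squares cost \((x - x_{\text{pred}})^2/P_{\text{pred}} + (y - cx)^2/R\).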
KKT Conditions and Optimization Techniques
Slide: This lecture begins with an introduction to the Karush-Kuhn-Tucker (KKT) conditions, which are fundamental in constrained optimization. We will explore their geometric interpretation, derive the necessary conditions for optimality, and discuss their applications in solving optimization problems.
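The four KKT conditions (stationarity, primal feasibility, dual feasibility, complementary slackness) can be checked numerically on a toy QP. The problem and its known solution below are illustrative assumptions:

```python
import numpy as np

# Toy QP:  min x1^2 + x2^2   s.t.   x1 + x2 >= 1,
# written in standard form with g(x) = 1 - x1 - x2 <= 0.
# Known solution: x* = (0.5, 0.5) with multiplier mu* = 1.
x, mu = np.array([0.5, 0.5]), 1.0

grad_f = 2 * x                        # gradient of the objective
g      = 1.0 - x.sum()                # g(x) <= 0 at a feasible point
grad_g = np.array([-1.0, -1.0])       # gradient of the constraint

stationarity = grad_f + mu * grad_g   # should vanish at the optimum
print(stationarity, g, mu * g)        # ≈ 0, feasible, complementary slack
```

All four conditions hold: the objective gradient is balanced by the active constraint's gradient, the constraint is tight (g = 0), the multiplier is nonnegative, and the product mu·g vanishes.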
Limited-memory BFGS Method
Slide: Starting with the classical Newton's method, we will examine its limitations and the improvements introduced by quasi-Newton methods. Finally, we will delve into the L-BFGS algorithm, which is particularly suited for large-scale problems. Inspired by Dr. Zhepei Wang's Lecture "Numerical Optimization for Robotics".
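The heart of L-BFGS is the two-loop recursion, which computes an approximate Newton direction from a handful of curvature pairs \((s_i, y_i)\) without ever storing a matrix. A sketch of the standard recursion; the quadratic test problem is an illustrative assumption:

```python
import numpy as np

# L-BFGS two-loop recursion: approximates H_k^{-1} g using only the
# stored pairs s_i = x_{i+1} - x_i and y_i = grad_{i+1} - grad_i.
def two_loop(g, s_list, y_list):
    q = g.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    # Scale the initial Hessian guess by gamma = s'y / y'y (standard choice)
    s, y = s_list[-1], y_list[-1]
    r = (s @ y) / (y @ y) * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        b = rho * (y @ r)
        r = r + (a - b) * s
    return r    # quasi-Newton direction ≈ H^{-1} g

# Sanity check: on a quadratic f(x) = 0.5 x'Ax, two independent pairs
# with y = A s reproduce the exact Newton direction A^{-1} g.
A = np.diag([1.0, 10.0])
s_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
y_list = [A @ s for s in s_list]
g = np.array([1.0, 1.0])
d = two_loop(g, s_list, y_list)
print(d)   # → [1.0, 0.1], i.e. A^{-1} g
```

Memory is O(mn) for m stored pairs instead of the O(n²) a dense quasi-Newton matrix would need, which is why L-BFGS scales to large problems.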
The Duality and the Failure of LQG Control
Slide: Explore the duality between state observers and feedback controllers, focusing on the Kalman Filter (KF) and the Linear Quadratic Regulator (LQR). Understand why combining the "optimal observer" with the "optimal controller" might fail. Inspired by Dominikus Noll's page "A generalization of the Linear Quadratic Gaussian Loop Transfer Recovery procedure (LQG/LTR)".
Linear Quadratic Regulator in 3 Ways
Slide: Explore the Linear Quadratic Regulator (LQR) through the Indirect Shooting Method (Pontryagin's Principle), the Optimization Approach (Quadratic Programming), and the Recursive Solution (Riccati Equation). Inspired by Zachary Manchester's Spring 2024-25 lecture "Optimal Control and Reinforcement Learning".
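The recursive solution is short enough to sketch in full: sweep the Riccati equation backward to get the time-varying gains, then roll the closed loop forward. The double-integrator model and cost weights are illustrative assumptions:

```python
import numpy as np

# Finite-horizon discrete-time LQR via the backward Riccati recursion,
# on an assumed double-integrator model x_{k+1} = A x_k + B u_k.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                 # state cost (also used as terminal cost here)
R = np.array([[1.0]])         # input cost
N = 50                        # horizon length

P = Q.copy()                  # terminal cost-to-go P_N
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain K_k
    P = Q + A.T @ P @ (A - B @ K)                        # Riccati update
    gains.append(K)
gains.reverse()               # gains[k] is the feedback at time step k

# Closed loop u_k = -K_k x_k drives the state to the origin
x = np.array([[1.0], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
print(np.linalg.norm(x))      # final state norm is tiny
```

The same gains would fall out of stacking the dynamics into one large QP or out of Pontryagin's costate equations; the Riccati sweep is simply the cheapest of the three routes, O(N) in the horizon length.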