Posts

Java 面向对象远征 (Java Object-Oriented Expedition)

This is a video series on object-oriented programming in Java, covering everything from basic concepts to advanced applications. Each video runs about 10 minutes and aims to help both beginners and experienced developers understand the core principles and practical techniques of Java OOP, laying a solid foundation for building large software systems. I hope you enjoy it! A short Java sketch of the functional-programming topics follows the outline below.

Classes and Objects: The Nature of Classes · Instantiating Objects · Method Overloading
The Three Pillars: Encapsulation · Inheritance · Polymorphism
Abstraction and Interfaces: Abstract Classes · Interfaces · Interfaces vs. Abstract Classes
Java Core Classes: The Object Class · The Enum Class · The Record Class
Functional Programming: Behavior Parameterization · Functional Interfaces · Lambda Expressions
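
As a taste of the functional-programming topics in the outline (behavior parameterization, functional interfaces, lambda expressions), here is a minimal sketch; the class and method names are illustrative and not taken from the videos:

```java
import java.util.List;
import java.util.function.Predicate;

public class FilterDemo {
    // Behavior parameterization: the filtering logic is passed in as a
    // Predicate (a built-in functional interface) instead of being hard-coded.
    static <T> List<T> filter(List<T> items, Predicate<T> condition) {
        return items.stream().filter(condition).toList();
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6);
        // The lambda expression supplies the behavior; swapping it changes
        // what filter() keeps without touching filter() itself.
        List<Integer> evens = filter(numbers, n -> n % 2 == 0);
        System.out.println(evens); // prints [2, 4, 6]
    }
}
```

(Stream.toList() requires Java 16 or later; on older versions, collect(Collectors.toList()) does the same job.)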

Learn Like a Kindergartener

This is a Bilibili series designed to make advanced topics accessible to everyone. Each episode is 6-10 minutes long and covers topics such as control, estimation, and optimization. The series explores the mathematical beauty of classical theories from perspectives beyond standard textbooks. I hope you enjoy it!

PHR Conic Augmented Lagrangian Method

Slide: Starting from the penalty method, we extend to the augmented Lagrangian method for improved stability. By introducing a slack variable \(s\), symmetric cone constraints are integrated, forming a unified framework for solving constrained optimization problems iteratively. Inspired by Dr. Zhepei Wang's lecture "Numerical Optimization for Robotics".
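
The slides themselves are not reproduced here; as a pointer, for an equality-constrained problem \(\min_x f(x)\ \text{s.t.}\ h(x)=0\) the PHR augmented Lagrangian and its multiplier update take the standard textbook form (my notation)

\[
\mathcal{L}_\rho(x,\lambda)=f(x)+\frac{\rho}{2}\left\|h(x)+\frac{\lambda}{\rho}\right\|^2,
\qquad
x^{k+1}\approx\arg\min_x \mathcal{L}_\rho\!\left(x,\lambda^{k}\right),
\qquad
\lambda^{k+1}=\lambda^{k}+\rho\,h\!\left(x^{k+1}\right),
\]

and the conic case follows the same pattern once the slack variable and a projection onto the cone are introduced. Unlike the pure penalty method, the multiplier update lets the iterates reach feasibility without driving \(\rho\) to infinity.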

EKF, UKF & Particle Filter

Slide: Evolving from the classic Kalman Filter, the EKF, UKF, and Particle Filter address nonlinear estimation through local linearization, deterministic sampling, and stochastic sampling, respectively, forming the cornerstone of modern state estimation.
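
As a pointer (standard textbook notation, not the slides'): for a model \(x_{k+1}=f(x_k)+w_k\), \(z_k=h(x_k)+v_k\), the EKF's local linearization amounts to evaluating the Jacobians at the current estimate,

\[
F_k=\left.\frac{\partial f}{\partial x}\right|_{x=\hat{x}_{k|k}},
\qquad
H_{k+1}=\left.\frac{\partial h}{\partial x}\right|_{x=\hat{x}_{k+1|k}},
\]

and then running the ordinary Kalman Filter recursion with \(F_k\) and \(H_{k+1}\) in place of the linear system matrices; the UKF replaces this step with deterministically chosen sigma points, and the Particle Filter with a weighted set of random samples.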

Factor Graph for Pose Estimation

Slide: Overview of factor graphs in pose estimation, emphasizing their benefits over Kalman Filters for handling complex dependencies. Covers dynamic Bayesian networks, information forms, and smoothing for efficient state estimation.
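
As a pointer (a standard formulation, not necessarily the slides' notation): a factor graph writes the posterior as a product of local factors, so that under Gaussian noise the MAP estimate becomes a sparse nonlinear least-squares problem,

\[
\hat{X}=\arg\max_X \prod_i \phi_i(X_i)
=\arg\min_X \sum_i \left\|h_i(X_i)-z_i\right\|_{\Sigma_i}^2,
\]

where each factor \(\phi_i\) touches only a small subset of variables \(X_i\), \(z_i\) is its measurement, and \(\|e\|_{\Sigma}^2=e^\top\Sigma^{-1}e\). This sparsity is what keeps smoothing over a whole trajectory tractable.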

Kalman Filter in 3 Ways

Slide: Discover the Kalman Filter through the Geometric perspective of orthogonal projection, the Probabilistic perspective of Bayesian filtering, and the Optimization perspective of weighted least squares. Inspired by Ling Shi's 2024-25 Spring lecture "Networked Sensing, Estimation and Control".
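
For reference, the recursion all three perspectives arrive at, written here for \(x_{k+1}=Ax_k+w_k\), \(y_k=Cx_k+v_k\) with noise covariances \(Q\) and \(R\) (standard notation, not necessarily the slides'):

\[
\hat{x}_{k|k-1}=A\,\hat{x}_{k-1|k-1},
\qquad
P_{k|k-1}=A P_{k-1|k-1} A^\top + Q,
\]
\[
K_k=P_{k|k-1}C^\top\!\left(C P_{k|k-1} C^\top + R\right)^{-1},
\qquad
\hat{x}_{k|k}=\hat{x}_{k|k-1}+K_k\!\left(y_k-C\,\hat{x}_{k|k-1}\right),
\qquad
P_{k|k}=\left(I-K_k C\right)P_{k|k-1}.
\]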

KKT Conditions and Optimization Techniques

Slide: This lecture begins with an introduction to the Karush-Kuhn-Tucker (KKT) conditions, which are fundamental in constrained optimization. We will explore their geometric interpretation, derive the necessary conditions for optimality, and discuss their applications in solving optimization problems.
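
For reference, the conditions themselves, for \(\min_x f(x)\) subject to \(g_i(x)\le 0\) and \(h_j(x)=0\) (standard form; the multipliers \(\mu_i,\lambda_j\) are my notation):

\[
\nabla f(x^\star)+\sum_i \mu_i\,\nabla g_i(x^\star)+\sum_j \lambda_j\,\nabla h_j(x^\star)=0,
\]
\[
g_i(x^\star)\le 0,\qquad h_j(x^\star)=0,\qquad \mu_i\ge 0,\qquad \mu_i\,g_i(x^\star)=0,
\]

i.e. stationarity, primal feasibility, dual feasibility, and complementary slackness.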

Limited-memory BFGS Method

Slide: Starting with the classical Newton's method, we will examine its limitations and the improvements introduced by quasi-Newton methods. Finally, we will delve into the L-BFGS algorithm, which is particularly suited for large-scale problems. Inspired by Dr. Zhepei Wang's lecture "Numerical Optimization for Robotics".
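
For reference (standard notation, not necessarily the slides'): with \(s_k=x_{k+1}-x_k\), \(y_k=\nabla f(x_{k+1})-\nabla f(x_k)\), and \(\rho_k=1/(y_k^\top s_k)\), the BFGS update of the inverse Hessian approximation is

\[
H_{k+1}=\left(I-\rho_k s_k y_k^\top\right)H_k\left(I-\rho_k y_k s_k^\top\right)+\rho_k s_k s_k^\top .
\]

L-BFGS never stores \(H_k\) explicitly: it keeps only the last \(m\) pairs \((s_i,y_i)\) and reconstructs the search direction \(-H_k\nabla f(x_k)\) on the fly via the two-loop recursion, which is what makes it practical at large scale.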

The Duality and the Failure of LQG Control

Slide: Explore the duality between state observers and feedback controllers, focusing on KF and LQR. Understand why combining the "optimal observer" with the "optimal controller" might fail. Inspired by Dominikus Noll's page "A generalization of the Linear Quadratic Gaussian Loop Transfer Recovery procedure (LQG/LTR)".
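
One common way to state the duality (discrete-time, standard notation, not taken from the slides): the steady-state LQR and Kalman Filter gains both come from an algebraic Riccati equation, and the two equations coincide under the substitution \(A\leftrightarrow A^\top\), \(B\leftrightarrow C^\top\), \(Q\leftrightarrow W\), \(R\leftrightarrow V\),

\[
P=Q+A^\top P A-A^\top P B\left(R+B^\top P B\right)^{-1}B^\top P A,
\qquad
\Sigma=W+A\Sigma A^\top-A\Sigma C^\top\left(V+C\Sigma C^\top\right)^{-1}C\Sigma A^\top,
\]

where \(Q, R\) are the LQR state and input weights and \(W, V\) are the process and measurement noise covariances.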

Linear Quadratic Regulator in 3 Ways

Slide: Explore the Linear Quadratic Regulator (LQR) through the Indirect Shooting Method (Pontryagin's Principle), the Optimization Approach (Quadratic Programming), and the Recursive Solution (Riccati Equation). Inspired by Zachary Manchester's Spring 2024-25 lecture "Optimal Control and Reinforcement Learning".
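
For reference, the recursive solution mentioned above, in its finite-horizon discrete-time form (standard notation, not necessarily the slides'):

\[
P_N=Q_N,
\qquad
K_k=\left(R+B^\top P_{k+1}B\right)^{-1}B^\top P_{k+1}A,
\qquad
P_k=Q+A^\top P_{k+1}\left(A-BK_k\right),
\]

run backward in time, yielding the optimal feedback \(u_k=-K_k x_k\); the indirect shooting and QP views recover the same gains from Pontryagin's conditions and from a large sparse quadratic program, respectively.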