Many applications require repeatedly solving a certain optimization problem, each time with new but similar data. "Learning to optimize" (L2O) is an approach to developing algorithms that solve these similar problems very efficiently. L2O-generated algorithms have achieved significant success in signal processing and inverse-problem applications. This talk introduces the motivation for L2O and gives a quick overview of different types of L2O approaches for continuous optimization. Then, we will introduce Fixed Point Networks (FPNs), which incorporate fixed-point iterations into deep neural networks and enable capabilities such as physics-based inversion, data-driven regularization, encoding of hard constraints, and effectively infinite depth. FPNs are easy to train with a new Jacobian-free backpropagation (JFB) scheme.
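To make the fixed-point idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the forward pass of a fixed-point network layer in NumPy. The map `f(z) = tanh(W z + U x + b)` is scaled so its spectral norm is below one, making it a contraction, so the iteration converges to a fixed point `z*` regardless of how many steps are run; this is the sense in which such networks have "infinite depth." All names (`W`, `U`, `b`, `f`) are illustrative assumptions.

```python
import numpy as np

# Illustrative fixed-point layer: z_{k+1} = f(z_k) = tanh(W z_k + U x + b).
# Scaling W so its spectral norm is 0.5 makes f a contraction (tanh is
# 1-Lipschitz), guaranteeing convergence to a unique fixed point z*.
rng = np.random.default_rng(0)
d, p = 8, 4                               # hidden and input dimensions (arbitrary)
W = rng.standard_normal((d, d))
W *= 0.5 / np.linalg.norm(W, 2)           # enforce spectral norm 0.5
U = rng.standard_normal((d, p))
b = rng.standard_normal(d)
x = rng.standard_normal(p)                # the "data" fed into every iteration

def f(z):
    return np.tanh(W @ z + U @ x + b)

# "Infinite depth": iterate until the residual is tiny rather than
# running a fixed number of layers.
z = np.zeros(d)
for _ in range(200):
    z_next = f(z)
    if np.linalg.norm(z_next - z) < 1e-12:
        z = z_next
        break
    z = z_next

residual = np.linalg.norm(f(z) - z)
print(residual)  # essentially zero: z is (numerically) a fixed point
```

The JFB training idea, in these terms: run the iteration above without tracking gradients, then backpropagate through only one final application of `f` at the fixed point, treating `z*` itself as a constant. This avoids differentiating through the (arbitrarily long) chain of iterations.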