Deep learning has had a transformative impact on many fields, including computer vision, computational biology, and dynamics, by allowing us to learn functions directly from data. However, there remain many domains in which learning is difficult due to poor model generalization or limited training data. We will explore two applications of representation theory to neural networks that help address these issues. First, consider the case in which the data are samples of a $G$-equivariant function. Here we can restrict attention to spaces of equivariant neural networks, which may be fit to the data more easily using gradient descent. Second, we can also consider symmetries of the parameter space itself. Exploiting these symmetries can lead to models with fewer free parameters, faster convergence, and more stable optimization.
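As a concrete illustration of the equivariance condition $f(g \cdot x) = g \cdot f(x)$, consider the simplest interesting case: $G = S_n$ acting on $\mathbb{R}^n$ by permuting coordinates. The following minimal sketch (the function name and the specific parameterization $f(x) = a\,x + b\,\bar{x}$, in the style of DeepSets, are illustrative choices, not taken from the text above) checks equivariance numerically:

```python
import numpy as np

def equivariant_linear(x, a, b):
    """A permutation-equivariant linear map f(x) = a*x + b*mean(x).

    Since mean(x) is invariant under permutations of x, permuting the
    input coordinates permutes the output coordinates the same way:
    f(g.x) = g.f(x) for every permutation g.
    """
    return a * x + b * x.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=5)
perm = rng.permutation(5)          # a random group element g in S_5

# Check equivariance: acting by g then applying f equals applying f then acting by g.
perm_then_f = equivariant_linear(x[perm], 2.0, -0.5)
f_then_perm = equivariant_linear(x, 2.0, -0.5)[perm]
assert np.allclose(perm_then_f, f_then_perm)
```

Layers of this form can be composed (with pointwise nonlinearities, which also commute with permutations) to build networks that are $S_n$-equivariant by construction, so gradient descent only searches over functions consistent with the symmetry.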