In this talk, we consider the problem of learning sparse representations of data under the constraint that the representation satisfies a desired invariance. Learning sparse representations is typically referred to as dictionary learning in the literature, and the specific task we consider can be viewed as an extension of the convolutional dictionary learning problem.
Building on ideas from group representation theory, harmonic analysis, and convex geometry, we describe an end-to-end recipe for learning such data representations that are invariant to a fairly broad family of symmetries, and in particular, continuous ones. Our techniques draw connections between our learning problem and the geometric problem of fitting appropriately parameterized orbitopes to data. Spectrahedral descriptions of certain orbitopes based on Toeplitz positive semidefinite matrices feature prominently in our work.
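As a small illustration of the kind of spectrahedral structure mentioned above (a sketch, not taken from the talk itself): a classical fact behind Toeplitz-based descriptions is the Carathéodory–Toeplitz connection, under which the autocorrelation sequence of any signal gives rise to a positive semidefinite Toeplitz matrix. The snippet below, using NumPy, constructs such a matrix and checks its positive semidefiniteness numerically; the signal, lag count, and tolerance are illustrative choices.

```python
import numpy as np

# Illustrative sketch: the (zero-padded) autocorrelation sequence of a
# signal x yields a symmetric Toeplitz matrix T that is positive
# semidefinite, since T = X^T X for the convolution matrix X of x.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)

# Autocorrelation r[k] = sum_t x[t] * x[t + k], for lags k = 0..n-1
n = 8
r = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(n)])

# Symmetric Toeplitz matrix with T[i, j] = r[|i - j|]
T = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])

# Check PSD-ness up to numerical tolerance
eigs = np.linalg.eigvalsh(T)
print(eigs.min() >= -1e-8)
```

Membership in sets cut out by such positive semidefiniteness constraints on Toeplitz matrices is a semidefinite (spectrahedral) condition, which is what makes these descriptions amenable to convex optimization.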