Soledad Villar (Johns Hopkins)
Title:
Learning Structured Representations with Equivariant Contrastive Learning
Abstract:
Self-supervised learning converts raw perceptual data into a compact embedding space in which Euclidean distances measure meaningful variations in the data. In this work, we enrich the embedding space by enforcing that transformations of the input space correspond to simple (i.e., linear) transformations of the embedding space. Specifically, in the contrastive learning setting, we introduce an equivariance objective and both theoretically prove and empirically demonstrate that its minima force augmentations of the inputs to correspond to orthogonal transformations of the spherical embedding space. Our method, CARE (Contrastive Augmentation-induced Rotational Equivariance), improves performance on downstream tasks by endowing the feature space with an algebraic structure.
This is based on joint work with Sharut Gupta, Joshua Robinson, Derek Lim, and Stefanie Jegelka.
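
To give a flavor of the kind of objective described in the abstract, below is a minimal PyTorch-style sketch of an equivariance penalty in a contrastive setup. This is an illustration under assumed names (the encoder f, the augment function, and orthogonal_procrustes are all hypothetical), not the authors' CARE implementation; in particular, the paper ties a single rotation to a shared augmentation across the batch, which the sketch simplifies by fitting one orthogonal map between two independently augmented views via the classical orthogonal Procrustes solution and penalizing the residual.

```python
import torch
import torch.nn.functional as F

def orthogonal_procrustes(Z1, Z2):
    """Best orthogonal R minimizing ||Z1 @ R - Z2||_F (classical SVD solution)."""
    U, _, Vt = torch.linalg.svd(Z1.T @ Z2)
    return U @ Vt

def equivariance_penalty(f, x, augment):
    """Hypothetical sketch: encourage an input-space augmentation to act as a
    single orthogonal transformation on the spherical embedding space."""
    z1 = F.normalize(f(augment(x)), dim=1)  # view 1, projected to the unit sphere
    z2 = F.normalize(f(augment(x)), dim=1)  # view 2, an independent augmentation
    R = orthogonal_procrustes(z1.detach(), z2.detach())  # fit R; no backprop through SVD
    return ((z1 @ R - z2) ** 2).sum(dim=1).mean()  # residual after the best rotation
```

In practice a term of this kind would be combined with a standard contrastive loss such as InfoNCE; see the paper for the actual objective and its theoretical guarantees.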