Autoencoders are a family of models that learn to encode and decode data samples. Although the approach applies to any data type (images, audio, video, point clouds, ...), it is most often used for images in combination with deep neural networks. The goal of an autoencoder is to compress the information in the original data so that the representation uses less memory than the original sample. Autoencoders generally consist of an encoder, which maps a data sample to its representation, and a decoder, which reconstructs the sample from that representation. Using the simple training objective that the reconstruction should match the original input, no manually labeled or annotated data are required to train such a model.
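The encoder/decoder idea can be illustrated with a minimal sketch: a purely linear autoencoder trained with plain gradient descent in NumPy. This is an assumption-laden toy (linear layers, synthetic data on a low-dimensional subspace, hand-derived gradients), not the setup from the notebook below, but it shows the bottleneck representation and the reconstruction objective concretely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that actually live on a
# 2-dimensional subspace, so a 2-unit bottleneck can capture them.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 8))
x = latent @ basis

# Linear autoencoder: encoder W_enc (8 -> 2), decoder W_dec (2 -> 8).
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

def reconstruct(x):
    # Encode to the compressed representation, then decode back.
    return (x @ W_enc) @ W_dec

lr = 0.01
initial_loss = np.mean((reconstruct(x) - x) ** 2)
for _ in range(500):
    z = x @ W_enc        # encode: 2-dimensional representation
    x_hat = z @ W_dec    # decode: reconstruction in 8 dimensions
    err = x_hat - x      # reconstruction error
    # Gradients of the mean squared error w.r.t. both weight matrices.
    grad_dec = z.T @ err / len(x)
    grad_enc = x.T @ (err @ W_dec.T) / len(x)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_loss = np.mean((reconstruct(x) - x) ** 2)
print(final_loss < initial_loss)  # training reduces the reconstruction error
```

The same structure carries over to deep autoencoders: replace the two matrices with multi-layer networks and the hand-written gradients with automatic differentiation, and the objective (match the input) stays unchanged.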
A closely related family of models are the variational autoencoders, which add the constraint that the representation space follow a Gaussian distribution.
- [notebook] Autoencoder for image compression