A few examples:
- Long Short-Term Memory Networks: LSTMs for short, these variants of Recurrent Neural Networks (RNNs) attempt to mimic the brain's ability to retain only significant information by incorporating a "forget" gate that discards stored state predicted to hold little value. Note that LSTMs have been around for more than a decade but have only recently gained popularity.
- Spike and Slab Restricted Boltzmann Machines: This variant of the older Restricted Boltzmann Machine (RBM) associates both a real-valued vector and a binary vector with its hidden units, in contrast to the standard RBM, which maintains only binary hidden units.
- Tensor Deep Stacking Networks: This variant of the deep stacking network (DSN) splits each hidden layer into two distinct sets of units and connects them through a bilinear mapping, allowing the model to capture higher-order covariance statistics that the DSN's linear mappings miss.
- Deep Q-Networks: Introduced very recently by Google DeepMind, Deep Q-Networks apply the classic reinforcement learning technique of Q-learning to train convolutional neural networks. Applied to playing Atari games, Deep Q-Networks managed to outperform human players on several titles.
- Neural Turing Machines: Another Google DeepMind invention, these nascent neural networks are essentially differentiable versions of Turing machines that can be trained with gradient descent.
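The LSTM's "forgetting" mechanism described above is just a learned gate that scales down the old cell state before new information is mixed in. A minimal sketch of one LSTM step, using NumPy and hypothetical toy weights (the function name and weight layout are illustrative, not from any particular library):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step over input x, previous hidden state h_prev,
    and previous cell state c_prev. W stacks the four gate weight
    matrices row-wise; b stacks the four gate biases."""
    n = len(h_prev)
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0:n])        # forget gate: how much old memory to keep
    i = sigmoid(z[n:2*n])      # input gate: how much new info to admit
    o = sigmoid(z[2*n:3*n])    # output gate
    g = np.tanh(z[3*n:4*n])    # candidate cell update
    c = f * c_prev + i * g     # f near 0 "forgets" the old cell state
    h = o * np.tanh(c)
    return h, c

# Toy demo with random weights: 2-dim input, 3-dim hidden state.
rng = np.random.default_rng(0)
n, d = 3, 2
W = rng.standard_normal((4 * n, d + n))
b = np.zeros(4 * n)
h, c = lstm_step(rng.standard_normal(d), np.zeros(n), np.zeros(n), W, b)
```

The key design point is that the forget gate `f` is itself a function of the current input and hidden state, so the network learns *when* to discard memory rather than following a fixed decay schedule.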
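The Q-learning rule underlying Deep Q-Networks is easiest to see in its tabular form. A minimal sketch on a hypothetical 2-state, 2-action problem (the MDP and reward values are made up for illustration); a DQN keeps exactly this update but replaces the lookup table with a convolutional network that approximates Q:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step, moving Q[s, a] toward the target
    r + gamma * max_a' Q(s_next, a')."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# Toy dynamics: taking action 1 in state 0 yields reward 1.0 and
# lands in state 1, where all action values stay at zero.
Q = np.zeros((2, 2))
for _ in range(100):
    q_update(Q, s=0, a=1, r=1.0, s_next=1)
```

After repeated updates the estimate `Q[0, 1]` converges toward the true return of 1.0, while untouched entries stay at zero.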
For a longer list, see Wikipedia's enumeration of deep learning architectures.