Blog: Feature Detection Using Deep Belief Networks (DBN)


Introduction

A deep belief network (DBN) can be viewed as a stack of RBMs, where the hidden layer of one RBM serves as the visible layer of the one “above” it. DBNs were first introduced by Geoffrey Hinton at the University of Toronto in 2006. In terms of network structure, a DBN is identical to a multilayer perceptron (MLP), but when it comes to training, the two are entirely different. In fact, the difference in training methods is the key factor that enables DBNs to outperform their shallow counterparts.

You can read about RBMs in my previous post here, and about their applications here and here.

Deep Belief Networks in Detail

Like RBMs, DBNs can learn the underlying structure of their input and probabilistically reconstruct it; in other words, DBNs, like RBMs, are generative models. In a DBN, connections exist only between successive layers; there are no connections between units within the same layer.
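
As a rough illustration of this generative, reconstructive behavior (my own sketch, not the post’s notebooks), the snippet below trains a single RBM, the building block of a DBN, on scikit-learn’s digits dataset and draws a probabilistic reconstruction of one input with a single Gibbs step. The dataset, layer size, and hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM

X = load_digits().data / 16.0   # pixel intensities scaled to [0, 1]

# Train one RBM (the building block of a DBN) on the raw pixels.
rbm = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X)

v = X[:1]               # one original image
v_new = rbm.gibbs(v)    # one Gibbs step v -> h -> v': a stochastic (binary) reconstruction
print(v_new.shape)      # (1, 64)
```

In a full DBN, the same idea extends across the stack: the input is propagated up through the hidden layers and then sampled back down to the visible layer.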

A DBN is trained as follows (a minimal code sketch appears after the list):

  • In the DBN, one layer is trained at a time. The first RBM is trained to reconstruct its input as accurately as possible.
  • The hidden layer of the first RBM is treated as the visible layer for the second RBM, which is trained using the outputs of the first.
  • This process continues until all the layers of the DBN are trained. Except for the first and final layers, each layer in the DBN serves as both the hidden layer of one RBM and the visible layer of another.
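
The following is a minimal sketch of this greedy layer-wise procedure, again using scikit-learn’s BernoulliRBM rather than the post’s notebooks; the dataset, layer sizes, and hyperparameters are arbitrary choices for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM

X = load_digits().data / 16.0            # unlabeled training data in [0, 1]
layer_sizes = [256, 128, 64]             # hidden sizes chosen arbitrarily

rbm_stack, layer_input = [], X
for n_hidden in layer_sizes:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=20, random_state=0)
    rbm.fit(layer_input)                      # train this layer to reconstruct its input
    layer_input = rbm.transform(layer_input)  # hidden activations become the next RBM's "visible" data
    rbm_stack.append(rbm)

# After pre-training, propagating data through the whole stack yields the
# DBN's top-level feature representation.
features = X
for rbm in rbm_stack:
    features = rbm.transform(features)
print(features.shape)                    # (n_samples, 64)
```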

An important point about a DBN is that each RBM layer learns the entire input. In other kinds of models, such as convolutional nets, early layers detect simple patterns and later layers recombine them: in a facial recognition example, the early layers would detect edges in the image, and later layers would use these results to form facial features. A DBN, on the other hand, works globally, fine-tuning the entire input in succession as the model slowly improves.

The DBN is a hierarchy of representations and, like all neural networks, is a form of representation learning. Note that the DBN does not use any labels. Instead, the DBN is learning the underlying structure in the input data one layer at a time.

Labels can be used to fine-tune the last few layers of the DBN with supervised learning, but only after the initial unsupervised learning has been completed. For example, if we want the DBN to be a classifier, we would perform unsupervised learning first (a process known as pre-training) and then use labels to fine-tune the DBN (a process called fine-tuning).

To do this, you need only a small set of labeled samples so that the learned features and patterns can be associated with names. The weights and biases are altered slightly, resulting in a small change in the net’s perception of the patterns and often a small increase in overall accuracy. Fortunately, the labeled set can be small relative to the original data set, which is extremely helpful in real-world applications.
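
A hedged sketch of this two-stage recipe is below: the RBM stack is pre-trained without labels, and then a small labeled subset is used to train a classifier on top of the DBN’s features. Note that this only trains the classifier head; a full fine-tuning pass would also adjust the pre-trained weights, for example by initializing an MLP with them and backpropagating. The dataset, subset size, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM

digits = load_digits()
X, y = digits.data / 16.0, digits.target

# Stage 1: unsupervised pre-training of the RBM stack (labels are ignored).
rbm_stack, layer_input = [], X
for n_hidden in (128, 64):
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=20, random_state=0)
    rbm.fit(layer_input)
    layer_input = rbm.transform(layer_input)
    rbm_stack.append(rbm)

# Stage 2: supervised step using only a small labeled subset (200 of ~1800 samples).
labeled_idx = np.random.RandomState(0).choice(len(X), size=200, replace=False)
features = X[labeled_idx]
for rbm in rbm_stack:
    features = rbm.transform(features)

clf = LogisticRegression(max_iter=1000).fit(features, y[labeled_idx])
print("accuracy on the labeled subset:", clf.score(features, y[labeled_idx]))
```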

The accompanying Jupyter notebooks for this post can be found here and here.

Conclusion

RBMs can extract features and reconstruct inputs, but a single RBM cannot capture the structure of complex data such as images, sound, and text, whereas DBNs can. DBNs have been used to recognize and cluster images, video, sound, and text.

I hope this article helped you gain a good understanding of Deep Belief Networks (DBNs) and how they can be used as a feature extraction system.

Source: Artificial Intelligence on Medium
