On Training Deep Boltzmann Machines

Guillaume Desjardins, Aaron Courville, Yoshua Bengio

The deep Boltzmann machine (DBM) has been an important development in the quest for powerful "deep" probabilistic models. To date, simultaneous or joint training of all layers of the DBM has been largely unsuccessful with existing training methods. We introduce a simple regularization scheme that encourages the weight vectors associated with each hidden unit to have similar norms. We demonstrate that this regularization can be easily combined with standard stochastic maximum likelihood to yield an effective training strategy for the simultaneous training of all layers of the deep Boltzmann machine.

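The abstract does not spell out the exact form of the regularizer, but the idea of encouraging the weight vectors of all hidden units in a layer to have similar norms can be illustrated with a minimal NumPy sketch. The penalty below, which pushes each hidden unit's weight-vector norm toward the layer's mean norm, is an assumption for illustration: the function name `norm_equalizing_penalty` and the hyperparameter `coeff` are hypothetical and not taken from the paper.

```python
import numpy as np

def norm_equalizing_penalty(W, coeff=1e-3):
    """Hypothetical norm-equalizing regularizer (illustrative sketch only).

    W     : (n_visible, n_hidden) weight matrix; column j holds the weight
            vector of hidden unit j.
    coeff : regularization strength (hypothetical hyperparameter).

    Returns the scalar penalty and its gradient with respect to W, so it can
    be added to whatever gradient the main training procedure (e.g. stochastic
    maximum likelihood) produces.
    """
    norms = np.sqrt((W ** 2).sum(axis=0))          # per-hidden-unit weight norms
    mean_norm = norms.mean()                       # layer-wide target norm
    penalty = coeff * ((norms - mean_norm) ** 2).sum()

    # Gradient w.r.t. W. The dependence of mean_norm on W contributes nothing
    # here because the deviations (norms - mean_norm) sum to zero.
    grad = 2.0 * coeff * ((norms - mean_norm) / np.maximum(norms, 1e-8)) * W
    return penalty, grad

# Example of folding the penalty into a gradient step (likelihood gradient
# `dW_sml` is assumed to come from elsewhere, e.g. an SML/PCD estimator):
# penalty, dW_reg = norm_equalizing_penalty(W)
# W += learning_rate * (dW_sml - dW_reg)
```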