Decreasing Information

During the training of several neural networks for my bachelor’s thesis (more on that later, maybe!) I noticed something fun. The networks’ weights (in this case, the parameters of a classification function) are initialized with numbers drawn from the standard normal distribution, so the initial network state is random. Such randomness has, by its very nature, no real structure, and therefore high entropy. This means that compressing the weights before saving them to disk is less effective than it would be on more structured data. Initially, saving one such network’s weights required approximately 19.5MB of disk space.
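To get a feel for why, here is a rough sketch (not the thesis code; the array size is simply chosen to land near 20MB of raw float32 data): freshly drawn standard-normal weights barely compress under a general-purpose lossless compressor.

```python
import zlib
import numpy as np

# Rough sketch, not the actual thesis code: draw "weights" from the standard
# normal distribution and see how well they compress losslessly.
rng = np.random.default_rng(0)
weights = rng.standard_normal(5_000_000).astype(np.float32)  # ~20MB of raw bytes

raw = weights.tobytes()
compressed = zlib.compress(raw, level=9)

print(f"raw:        {len(raw) / 1e6:.1f} MB")
print(f"compressed: {len(compressed) / 1e6:.1f} MB")
# Near-random bits carry little redundancy, so the savings are small.
```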

As the networks’ training progressed, the file sizes shrank! After a day of training, the space required for saving this network’s weights had decreased to 18.0MB, a 1.5MB drop from the original value. I hadn’t thought about it before, but once I noticed it I soon realized what was happening. The whole point of training a network is to find structure in data. A neural network does this by learning some representation of the data, continuously updating its weights as it trains. In other words, the weights become more structured as the network gets smarter! As the weights’ structure increases, their entropy decreases, making compression more effective and our disks happier. Or unhappier, perhaps.
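For contrast, here is an equally rough sketch of the second half of the story. I am not reproducing any training here; I simply inject some redundancy into the same random weights (by rounding them to two decimals, which is nothing like what training actually does to them) so that the effect of structure on the compressed size becomes visible.

```python
import zlib
import numpy as np

# Illustration only: real training shapes the weights in far subtler ways,
# but any added redundancy shows up in the compressed size in the same way.
rng = np.random.default_rng(0)
random_weights = rng.standard_normal(5_000_000).astype(np.float32)
structured_weights = np.round(random_weights, 2)  # crude stand-in for "structure"

for name, w in [("random", random_weights), ("structured", structured_weights)]:
    compressed = zlib.compress(w.tobytes(), level=9)
    print(f"{name:>10}: {w.nbytes / 1e6:.1f} MB raw -> {len(compressed) / 1e6:.1f} MB compressed")
```

The second array contains only a few hundred distinct values, so the compressor finds plenty of repeated byte patterns and the file comes out noticeably smaller, which is the same effect I saw with the trained weights.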
