Wednesday, 8 April 2015

Experiments

Experimental setup

As a reference model I used a comparatively small network:
Layer   Structure         Dimension
1       Convolution       10 filters 3x3
2       Convolution       20 filters 3x3, pooling 2x2
3       Convolution       64 filters 3x3, pooling 2x2
4       Convolution       64 filters 3x3, pooling 2x2
5       Convolution       128 filters 3x3, pooling 2x2
6       Fully connected   256
7       Fully connected   256
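To make the architecture concrete, here is a minimal sketch of it in PyTorch. The post does not specify the framework, input size, padding, activations or number of output classes, so the 32x32 RGB input, same-padding, ReLU activations and 10 classes below are all assumptions, not the original setup.

import torch
import torch.nn as nn


class ReferenceNet(nn.Module):
    """Sketch of the reference network described in the table above."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 10, 3, padding=1), nn.ReLU(),                     # layer 1
            nn.Conv2d(10, 20, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # layer 2
            nn.Conv2d(20, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # layer 3
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # layer 4
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # layer 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 2 * 2, 256), nn.ReLU(),  # layer 6 (for 32x32 inputs)
            nn.Linear(256, 256), nn.ReLU(),          # layer 7
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


model = ReferenceNet()
print(model(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])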

Then I trained six models: the reference model, then a model with the first layer fixed, then one with the first and second layers fixed, and so on. The idea is illustrated in the picture below (grey colour means that the weights are fixed).
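Below is a hedged sketch of how such variants could be built, assuming the ReferenceNet class from the sketch above. The post does not say whether the fixed weights stay at their random initialization or are copied from elsewhere; this sketch simply freezes the first k convolutional layers at initialization by turning off their gradients.

import torch.nn as nn


def freeze_first_k_conv_layers(model, k):
    """Turn off gradient updates for the first k Conv2d layers of the model."""
    frozen = 0
    for module in model.features:
        if isinstance(module, nn.Conv2d):
            if frozen == k:
                break
            for p in module.parameters():
                p.requires_grad = False
            frozen += 1
    return model


# Six variants: k = 0 is the reference model, k = 1..5 fix progressively
# more of the lower (convolutional) layers.
variants = [freeze_first_k_conv_layers(ReferenceNet(), k) for k in range(6)]

# During training, only the trainable parameters should be handed to the
# optimizer, e.g.
# torch.optim.SGD((p for p in m.parameters() if p.requires_grad), lr=0.01)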

Results

I plotted training and validation error during training. 
We can see that the models with fixed parameters still produce reasonable results. It is also interesting that fixing weights sometimes has a regularizing effect (as with the blue, magenta and yellow curves). This is not too surprising, since we are reducing the capacity of the model.
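For completeness, here is a sketch of the kind of bookkeeping behind such plots: record training and validation error after every epoch and plot both curves. The data below is synthetic random noise purely to keep the sketch runnable; the dataset, optimizer settings and number of epochs used in the original experiments are not stated in the post. The same loop would be run once per variant.

import matplotlib.pyplot as plt
import torch
from torch.utils.data import DataLoader, TensorDataset


def error_rate(model, loader):
    """Fraction of misclassified examples over a data loader."""
    model.eval()
    wrong = total = 0
    with torch.no_grad():
        for x, y in loader:
            wrong += (model(x).argmax(dim=1) != y).sum().item()
            total += y.numel()
    return wrong / total


# Synthetic stand-in data: 32x32 RGB "images" with 10 fake classes.
train_set = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
val_set = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

model = ReferenceNet()  # from the sketch above, possibly with frozen layers
optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

train_err, val_err = [], []
for epoch in range(5):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    train_err.append(error_rate(model, train_loader))
    val_err.append(error_rate(model, val_loader))

plt.plot(train_err, label="training error")
plt.plot(val_err, label="validation error")
plt.xlabel("epoch")
plt.ylabel("error")
plt.legend()
plt.show()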
