The main purpose of this project is to speed up the architecture selection process. To make my experiments comparable, I ran all the models on the same GPU with the same optimizer (RMSProp) and the same hyperparameters.
The model I used for this task is quite small, and most of its parameters belong to the fully connected layer. Nevertheless, keeping the weights of the first layers fixed at their random initialization helps to speed up the training procedure.
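A minimal sketch of this setup in PyTorch, assuming a toy convolutional net (the layer names and sizes are illustrative, not the actual model from these experiments): the first `n_fixed` parameterized layers are kept at their random initialization by disabling gradients, and only the remaining parameters are handed to RMSProp.

```python
import torch
import torch.nn as nn

# Hypothetical small net for 28x28 single-channel inputs; the fully
# connected layer holds the vast majority of the parameters.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3), nn.ReLU(),
    nn.Conv2d(8, 16, 3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 24 * 24, 10),
)

def freeze_first_layers(model: nn.Sequential, n_fixed: int) -> None:
    """Keep the first n_fixed parameterized layers at their random init."""
    frozen = 0
    for module in model:
        params = list(module.parameters())
        if not params:
            continue  # ReLU/Flatten have no parameters, so they don't count
        if frozen < n_fixed:
            for p in params:
                p.requires_grad_(False)
            frozen += 1

freeze_first_layers(model, n_fixed=1)

# Only the still-trainable parameters go to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.RMSprop(trainable, lr=1e-3)
```

Passing only the trainable parameters to the optimizer (rather than filtering inside the training loop) also avoids RMSProp allocating state buffers for the frozen weights.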
I first conducted experiments with 2-5 fixed layers, and about a week later I ran the reference model and a model with 1 fixed layer. Something had apparently changed during that week, because the two later models ran much faster. I concluded that for models this small the running time depends on other processes sharing the machine. The table below lists the number of trained parameters and the time per epoch for each model.
|Fixed layers|Trained parameters|One epoch time, s|
|---|---|---|