<pre>
1/1 [==============================] - 0s 214ms/step - loss: 1.3166 - acc: 0.7083
</pre>
The model is loaded and trained once again, resuming from its previous state, after a pruning schedule has been applied. As training proceeds, the pruning routine executes at the scheduled steps, eliminating (i.e. setting to zero) the weights with the lowest magnitudes (i.e. those closest to zero) until the current sparsity target is reached. Each time the pruning routine runs, the current sparsity target is recalculated, starting from 0% and growing until it reaches the final target sparsity at the end of the pruning schedule. After the end step, training continues in order to regain the lost accuracy, while the achieved level of sparsity no longer changes.
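As an illustration, the following snippet sketches how such a schedule could recalculate the current sparsity target at each scheduled step, assuming a polynomial decay from an initial to a final sparsity (as provided, for example, by the TensorFlow Model Optimization Toolkit); the function and parameter names are illustrative, not taken from the article's actual code.
<pre>
# Illustrative sketch (assumed, not the toolkit's actual code): how the
# current sparsity target can be recalculated at each scheduled step.
def current_sparsity_target(step, begin_step, end_step,
                            initial_sparsity, final_sparsity, power=3.0):
    if step < begin_step:
        return 0.0              # before the schedule starts, nothing is pruned
    if step >= end_step:
        return final_sparsity   # after the end step, the target is frozen
    # Polynomial ramp from the initial to the final sparsity target
    progress = (step - begin_step) / float(end_step - begin_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** power
</pre>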
In this particular case, a good compromise between compression and accuracy drop is to prune only the two dense layers of the model, which contain a high number of parameters, with a pruning schedule that starts at epoch 0 and ends at 1/3 of the total number of epochs (i.e. 100 epochs), beginning with an initial sparsity of 50% and ending with a final sparsity of 80%, with a pruning frequency of 5 steps (i.e. the model is pruned every 5 steps during the training phase), as sketched below.
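A minimal sketch of this configuration, assuming the TensorFlow Model Optimization Toolkit (tensorflow_model_optimization) is used, could look as follows; <code>model</code>, <code>steps_per_epoch</code> and <code>end_epoch</code> are placeholders, not identifiers from the article's code.
<pre>
# Minimal sketch, assuming the TensorFlow Model Optimization Toolkit is used;
# `model`, `steps_per_epoch` and `end_epoch` are placeholders, not identifiers
# taken from the article's code.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

end_step = end_epoch * steps_per_epoch  # end_epoch = 1/3 of the training epochs

pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.50,   # 50% of the weights pruned at the begin step
    final_sparsity=0.80,     # 80% of the weights pruned at the end step
    begin_step=0,
    end_step=end_step,
    frequency=5)             # prune every 5 training steps

# Wrap only the dense layers with the pruning wrapper, leaving the
# other layers of the model untouched.
def prune_dense(layer):
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.sparsity.keras.prune_low_magnitude(
            layer, pruning_schedule=pruning_schedule)
    return layer

pruned_model = tf.keras.models.clone_model(model, clone_function=prune_dense)

# The model must be recompiled and the UpdatePruningStep callback passed to
# fit(), otherwise the pruning routine is never executed during training:
#   pruned_model.compile(...)
#   pruned_model.fit(..., callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
</pre>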