*increasing the network size gradually is more computationally efficient than the classic approach of using all the layers from the start (a model with fewer layers has fewer parameters and is therefore faster to train).
To implement a ProGAN, one possible solution is to pre-define all models prior to training, using the TensorFlow Keras functional API to ensure that layers are shared across the models. This approach requires defining two models for each resolution of the discriminator and the generator: one named ''straight-through'' and the other named ''fade-in''. The latter, as the name suggests, implements the fade-in mechanism and is therefore used to transition from a lower-resolution ''straight-through'' model to a higher-resolution one. The ''straight-through'' version has a plain architecture and its purpose is to fine-tune all the layers at a given resolution (see the sketch after the figure).
[[File:Fade-in progression.png|center|thumb|500x500px|ProGAN: growing progression of the model during training]]
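The following is a minimal sketch of this idea for the generator side only, not the paper's full architecture: it builds a 4×4 base block, then an 8×8 ''straight-through'' model and an 8×8 ''fade-in'' model that share the same convolutional layers. The <code>WeightedSum</code> layer, the latent size, and the filter counts are illustrative assumptions.

<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow.keras import layers, Model


class WeightedSum(layers.Layer):
    """Blends two inputs: (1 - alpha) * old + alpha * new, with alpha growing 0 -> 1."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.alpha = tf.Variable(0.0, trainable=False)

    def call(self, inputs):
        old, new = inputs
        return (1.0 - self.alpha) * old + self.alpha * new


latent = layers.Input(shape=(128,))

# 4x4 base block, reused by every model defined below (shared weights).
x = layers.Dense(4 * 4 * 128)(latent)
x = layers.Reshape((4, 4, 128))(x)
base = layers.Conv2D(128, 3, padding="same", activation=tf.nn.leaky_relu)(x)
to_rgb_4 = layers.Conv2D(3, 1, padding="same")        # 4x4 RGB output head
gen_4 = Model(latent, to_rgb_4(base))                 # straight-through, 4x4

# New 8x8 block, shared by the 8x8 fade-in and straight-through models.
up = layers.UpSampling2D()(base)
block_8 = layers.Conv2D(128, 3, padding="same", activation=tf.nn.leaky_relu)(up)
to_rgb_8 = layers.Conv2D(3, 1, padding="same")        # 8x8 RGB output head

# Fade-in model: blend the upsampled old 4x4 output with the new 8x8 output.
old_rgb = layers.UpSampling2D()(to_rgb_4(base))       # old path, upsampled to 8x8
new_rgb = to_rgb_8(block_8)                           # new path
gen_8_fade = Model(latent, WeightedSum()([old_rgb, new_rgb]))

# Straight-through model: new block only, used once the fade-in is complete.
gen_8 = Model(latent, new_rgb)
</syntaxhighlight>

While training the ''fade-in'' model, <code>alpha</code> would be increased linearly from 0 to 1 over the transition phase (e.g. by assigning to the layer's variable once per batch); after that, training continues on the ''straight-through'' model, which reuses the same layer weights. The discriminator would be grown symmetrically with the same two-model pattern.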