You can use the fixed weighted sums of a fast transform matrix together with parametric (adjustable) activation functions to create very fast neural networks. They also work very well as autoencoders, as they are highly statistical in nature.
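For anyone curious what that looks like concretely, here's a minimal sketch assuming a Walsh-Hadamard transform as the fixed transform and a per-dimension two-slope function as the parametric activation. The names `fwht` and `FastTransformLayer` are illustrative, not from any particular implementation:

```python
import torch

def fwht(x):
    # Fast Walsh-Hadamard transform along the last dimension.
    # Assumes the last dimension is a power of two.
    shape = x.shape
    n = shape[-1]
    x = x.reshape(-1, n)
    h = 1
    while h < n:
        # Butterfly step: pair up elements h apart within blocks of 2h.
        x = x.reshape(-1, n // (2 * h), 2, h)
        a, b = x[:, :, 0, :], x[:, :, 1, :]
        x = torch.stack((a + b, a - b), dim=2).reshape(-1, n)
        h *= 2
    return (x / n ** 0.5).reshape(shape)  # orthonormal scaling

class FastTransformLayer(torch.nn.Module):
    # The transform supplies the fixed weighted sums; the only trainable
    # parameters are two slopes per dimension (a parametric activation).
    def __init__(self, n):
        super().__init__()
        self.pos = torch.nn.Parameter(torch.ones(n))
        self.neg = torch.nn.Parameter(torch.ones(n))

    def forward(self, x):
        y = fwht(x)
        return torch.where(y > 0, y * self.pos, y * self.neg)

layer = FastTransformLayer(256)
out = layer(torch.randn(8, 256))
```

Since the transform itself has no trainable weights, a forward pass costs O(n log n) instead of the O(n^2) of a dense layer, and only the 2n slope parameters are learned.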
Does the original paper use glorot uniform for initialization? I've personally seen better results when using random normal initialization for image translation, but I haven't implemented this architecture, so I would love to see the reasoning behind it.
It isn't actually specified in the paper or in the official PyTorch implementation. PyTorch's default initializer for Conv2d is Kaiming (He) uniform, so I think it's safe to assume they used that. I haven't experimented with random normal initialization for this model, but I'll run a test and see how it goes. If it's better, I'll update the repo to reflect that! I've simply found glorot uniform to be fairly reliable as far as initializers go. If you know of any papers on random normal initialization and image translation quality, I'd love to read them!
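For reference, here's a small sketch of how one might swap initializers on the conv layers in PyTorch to run that comparison. The `init_weights` helper is illustrative, and `std=0.02` is just the normal-init value commonly used in image-translation codebases such as pix2pix/CycleGAN, not something specified by the paper:

```python
import torch.nn as nn
import torch.nn.init as init

def init_weights(module, scheme="glorot"):
    # xavier_uniform_ is Glorot uniform; PyTorch's Conv2d default is
    # already kaiming_uniform_ (He uniform), so "he" leaves it as-is.
    if isinstance(module, nn.Conv2d):
        if scheme == "glorot":
            init.xavier_uniform_(module.weight)
        elif scheme == "normal":
            init.normal_(module.weight, mean=0.0, std=0.02)
        if module.bias is not None:
            init.zeros_(module.bias)

model = nn.Sequential(nn.Conv2d(3, 64, 3), nn.ReLU(), nn.Conv2d(64, 3, 3))
model.apply(lambda m: init_weights(m, scheme="normal"))
```

Running the same training twice with `scheme="glorot"` and `scheme="normal"` would be a straightforward way to check which holds up better here.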
Thank you for the video! Very clear explanation, super informative!
thank you!
thanks!