keras.optimizers.SGD(lr=0.01, momentum=0.0, decay=0.0, nesterov=False)
keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=None, decay=0.0)
keras.optimizers.Adagrad(lr=0.01, epsilon=None, decay=0.0)
keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=None, decay=0.0)
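To see what the momentum argument of SGD controls, here is a minimal sketch in plain Python (an illustration of the standard momentum update rule, not Keras's actual implementation) for a single scalar parameter:

```python
def sgd_momentum_step(param, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update for a single scalar parameter."""
    velocity = momentum * velocity - lr * grad  # accumulate a velocity term
    param = param + velocity                    # move along the velocity
    return param, velocity

# Example: one step from param=1.0 with gradient 0.5 and zero initial velocity.
# velocity = 0.9*0.0 - 0.01*0.5 = -0.005, so param = 1.0 - 0.005 = 0.995
p, v = sgd_momentum_step(1.0, 0.5, 0.0, lr=0.01, momentum=0.9)
```

With momentum=0.0 (the default above) this reduces to vanilla gradient descent; setting nesterov=True instead applies the gradient at a look-ahead position.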
keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None,
                      decay=0.0, amsgrad=False)
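The beta_1 and beta_2 arguments are the exponential decay rates of Adam's two moment estimates. A minimal plain-Python sketch of the update rule for one scalar parameter (an illustration, not Keras's code; epsilon=None above means the backend default, assumed here to be a small constant like 1e-7):

```python
import math

def adam_step(param, grad, m, v, t,
              lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    """One Adam update for a scalar parameter; t is the 1-based step count."""
    m = beta_1 * m + (1 - beta_1) * grad       # first-moment (mean) estimate
    v = beta_2 * v + (1 - beta_2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta_1 ** t)              # bias correction for the
    v_hat = v / (1 - beta_2 ** t)              # zero-initialized moments
    param = param - lr * m_hat / (math.sqrt(v_hat) + epsilon)
    return param, m, v

# First step from param=1.0 with gradient 0.5: after bias correction the
# update is roughly -lr * sign(grad), i.e. param moves to about 0.999.
p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
```

The amsgrad=False flag switches to the AMSGrad variant when enabled, which keeps the running maximum of v_hat instead of v_hat itself.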
keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None,
                        decay=0.0)
keras.optimizers.Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None,
                       schedule_decay=0.004)
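Most of the signatures above share a decay argument. A one-line sketch of how Keras 2-style decay shrinks the effective learning rate as training progresses (an assumption about the standard 1/(1 + decay*t) schedule, not a copy of the library code):

```python
def decayed_lr(lr, decay, iteration):
    """Effective learning rate after `iteration` updates (1/t-style decay)."""
    return lr * (1.0 / (1.0 + decay * iteration))

# With lr=0.01 and decay=0.001, the step size halves after 1000 updates:
# iteration 0 -> 0.01, iteration 1000 -> 0.005
```

With the default decay=0.0 the learning rate stays constant; Nadam uses its own schedule_decay parameter instead.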
Reference: keras.io
TensorFlow Keras Optimizers
Wednesday, 16 January 2019