layer_activation_leaky_relu.Rd
Allows a small gradient when the unit is not active: f(x) = alpha * x for x < 0, f(x) = x for x >= 0.
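To make the rule concrete, here is a minimal sketch of the element-wise function in plain R (illustration only, not the layer itself; the layer applies the same function to its input tensor):

```r
# Leaky ReLU as a plain R function: pass positive values through,
# scale negative values by alpha (default 0.3, matching the layer).
leaky_relu <- function(x, alpha = 0.3) ifelse(x < 0, alpha * x, x)

leaky_relu(c(-2, -0.5, 0, 1.5))
#> [1] -0.60 -0.15  0.00  1.50
```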
layer_activation_leaky_relu(object, alpha = 0.3, input_shape = NULL,
  batch_input_shape = NULL, batch_size = NULL, dtype = NULL,
  name = NULL, trainable = NULL, weights = NULL)
| Argument | Description |
|---|---|
| object | Model or layer object. |
| alpha | Float >= 0. Negative slope coefficient. |
| input_shape | Input shape (list of integers, does not include the samples axis), required when using this layer as the first layer in a model. |
| batch_input_shape | Shape, including the batch size. For instance, `batch_input_shape = c(10, 32)` indicates that the expected input will be batches of 10 32-dimensional vectors. |
| batch_size | Fixed batch size for the layer. |
| dtype | The data type expected by the input, as a string (e.g. `"float32"`, `"float64"`, `"int32"`). |
| name | An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if not provided. |
| trainable | Whether the layer weights will be updated during training. |
| weights | Initial weights for the layer. |
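For reference, a brief usage sketch (assumes the keras R package is installed and a backend is configured; the surrounding dense layers are illustrative):

```r
library(keras)

# Use LeakyReLU as a standalone activation layer after a dense layer.
model <- keras_model_sequential() %>%
  layer_dense(units = 64, input_shape = c(100)) %>%
  layer_activation_leaky_relu(alpha = 0.3) %>%
  layer_dense(units = 10, activation = "softmax")
```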