layer_activation_parametric_relu.Rd
Parametric Rectified Linear Unit. It follows: `f(x) = alpha * x` for `x < 0`, `f(x) = x` for `x >= 0`, where `alpha` is a learned array with the same shape as `x`.
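To make the elementwise rule concrete, here is a minimal R sketch of the PReLU function itself; `alpha` is a plain scalar here for illustration, whereas the layer learns it as an array during training:

```r
# PReLU as a plain R function; `alpha` is fixed here, whereas the
# layer learns it.
prelu <- function(x, alpha) ifelse(x < 0, alpha * x, x)
prelu(c(-2, -0.5, 0, 1, 3), alpha = 0.1)
#> [1] -0.20 -0.05  0.00  1.00  3.00
```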
layer_activation_parametric_relu(
  object,
  alpha_initializer = "zeros",
  alpha_regularizer = NULL,
  alpha_constraint = NULL,
  shared_axes = NULL,
  input_shape = NULL,
  batch_input_shape = NULL,
  batch_size = NULL,
  dtype = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)
Argument | Description
---|---
object | Model or layer object. |
alpha_initializer | Initializer function for the weights. |
alpha_regularizer | Regularizer for the weights. |
alpha_constraint | Constraint for the weights. |
shared_axes | The axes along which to share learnable parameters for the activation function. For example, if the incoming feature maps are from a 2D convolution with output shape (batch, height, width, channels), and you wish to share parameters across space so that each filter has only one set of parameters, set shared_axes = c(1, 2) (see the usage sketch after this table). |
input_shape | Input shape (list of integers, does not include the samples axis) which is required when using this layer as the first layer in a model. |
batch_input_shape | Shapes, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors. |
batch_size | Fixed batch size for the layer. |
dtype | The data type expected by the input, as a string (e.g. "float32", "float64", "int32"). |
name | An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. |
trainable | Whether the layer weights will be updated during training. |
weights | Initial weights for the layer. |
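A brief usage sketch, assuming the keras R package with a working backend. It shows the `shared_axes` pattern described in the table above, where `alpha` is shared across the spatial axes so that each convolution filter learns a single slope:

```r
library(keras)

# The convolution output has shape (batch, height, width, channels);
# shared_axes = c(1, 2) shares alpha across height and width, so each
# filter (channel) gets one learned slope.
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 16, kernel_size = c(3, 3),
                input_shape = c(28, 28, 1)) %>%
  layer_activation_parametric_relu(shared_axes = c(1, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 10, activation = "softmax")
```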