>>> model.add(tf.keras.layers.Dense(4))
>>> model.build((None, 16))
>>> len(model.weights)
4
# Note that when using the delayed-build pattern (no input shape specified),
# the model gets built the first time you call `fit`, `eval`, or `predict`,
# or the first time you call the model on some input data.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer='sgd', loss='mse')
# This builds the model for the first time:
model.fit(x, y, batch_size=32, epochs=10)
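The `x` and `y` above are assumed to be training arrays supplied by the user. As a minimal sketch (assuming NumPy inputs of width 16, with batch size and layer sizes chosen only for illustration), calling the model on a batch of data has the same building effect as `fit`:

# Sketch: calling the model on input data also triggers the delayed build.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(1))

x = np.random.random((32, 16)).astype("float32")
_ = model(x)               # first call on data builds the model
print(len(model.weights))  # 4: kernel and bias for each Dense layer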
add method
Sequential.add(layer)
Adds a layer instance on top of the layer stack.
Arguments
layer: layer instance.
Raises
TypeError: If layer is not a layer instance.
ValueError: In case the layer argument does not know its input shape.
ValueError: In case the layer argument has multiple output tensors, or is already connected somewhere else (forbidden in Sequential models).
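A hedged sketch of `add` in use, including the TypeError raised when the argument is not a layer instance (layer sizes are arbitrary illustration choices):

import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
model.add(tf.keras.layers.Dense(4))   # stacked on top of the previous layer

try:
    model.add("not a layer")          # not a layer instance
except TypeError as e:
    print(e)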
pop method
Sequential.pop()
Removes the last layer in the model.
Raises
TypeError: if there are no layers in the model.
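A short sketch (layer sizes chosen arbitrarily) showing that `pop` drops the most recently added layer:

import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
model.add(tf.keras.layers.Dense(4))
print(len(model.layers))  # 2

model.pop()               # removes the Dense(4) layer
print(len(model.layers))  # 1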
SGD
SGD class
tf.keras.optimizers.SGD(
    learning_rate=0.01, momentum=0.0, nesterov=False, name="SGD", **kwargs
)
Gradient descent (with momentum) optimizer.
Update rule for parameter w with gradient g when momentum is 0:
w = w - learning_rate * g
Update rule when momentum is larger than 0:
velocity = momentum * velocity - learning_rate * g
w = w + velocity
When nesterov=True, this rule becomes:
velocity = momentum * velocity - learning_rate * g
w = w + momentum * velocity - learning_rate * g
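A minimal NumPy sketch that mirrors the three update rules above (it is not the optimizer's actual implementation); the learning rate and momentum values are arbitrary:

import numpy as np

def sgd_step(w, g, velocity, learning_rate=0.01, momentum=0.0, nesterov=False):
    # Plain gradient descent when momentum is 0.
    if momentum == 0.0:
        return w - learning_rate * g, velocity
    # Momentum update; optionally apply the Nesterov correction.
    velocity = momentum * velocity - learning_rate * g
    if nesterov:
        return w + momentum * velocity - learning_rate * g, velocity
    return w + velocity, velocity

# One momentum step on w = 1.0 with gradient g = w:
w, v = sgd_step(np.float32(1.0), np.float32(1.0), np.float32(0.0),
                learning_rate=0.1, momentum=0.9)
print(w)  # 0.9, matching the first step in the usage example below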
Arguments |
learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.01. |
momentum: float hyperparameter >= 0 that accelerates gradient descent in the relevant direction and dampens oscillations. Defaults to 0, i.e., vanilla gradient descent. |
nesterov: boolean. Whether to apply Nesterov momentum. Defaults to False. |
name: Optional name prefix for the operations created when applying gradients. Defaults to "SGD". |
**kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. |
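A hedged sketch of the arguments above in combination: a LearningRateSchedule passed as the learning rate, plus clipnorm to cap the gradient norm. The schedule parameters and clipping threshold are arbitrary illustration choices:

import tensorflow as tf

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.96)
opt = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9,
                              nesterov=True, clipnorm=1.0)

var = tf.Variable(1.0)
loss = lambda: (var ** 2) / 2.0  # d(loss)/d(var) = var
opt.minimize(loss, [var])        # one Nesterov-momentum step with clipping
print(var.numpy())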
Usage:
>>> opt = tf.keras.optimizers.SGD(learning_rate=0.1)
>>> var = tf.Variable(1.0)
>>> loss = lambda: (var ** 2) / 2.0  # d(loss)/d(var) = var
>>> step_count = opt.minimize(loss, [var]).numpy()
>>> # Step is `- learning_rate * grad`
>>> var.numpy()
0.9
>>> opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
>>> var = tf.Variable(1.0)
>>> val0 = var.value()
>>> loss = lambda: (var ** 2) / 2.0  # d(loss)/d(var) = var
>>> # First step is `- learning_rate * grad`
>>> step_count = opt.minimize(loss, [var]).numpy()
>>> val1 = var.value()
>>> (val0 - val1).numpy()
0.1
>>> # On later steps, the step size increases because of momentum
>>> step_count = opt.minimize(loss, [var]).numpy()
>>> val2 = var.value()
>>> (val1 - val2).numpy()
0.18
Reference
For nesterov=True, see Sutskever et al., 2013.
Adadelta
Adadelta class
tf.keras.optimizers.Adadelta(
    learning_rate=0.001, rho=0.95, epsilon=1e-07, name="Adadelta", **kwargs
)
Optimizer that implements the Adadelta algorithm.
Adadelta optimization is a stochastic gradient descent method that is based on adaptive learning rate per dimension to address two drawbacks:
The continual decay of learning rates throughout training.
The need for a manually selected global learning rate.
Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done. Compared to Adagrad, in the original version of Adadelta you don't have to set an initial learning rate. In this version, the initial learning rate can be set, as in most other Keras optimizers.
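A hedged sketch of Adadelta in use with a small model; the model architecture, toy data, and the learning rate of 1.0 are arbitrary illustration choices, not recommendations:

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1),
])
# Initial learning rate can be set in this version, as noted above.
opt = tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95, epsilon=1e-7)
model.compile(optimizer=opt, loss="mse")

x = np.random.random((32, 16)).astype("float32")
y = np.random.random((32, 1)).astype("float32")
model.fit(x, y, batch_size=8, epochs=1, verbose=0)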