        )
    model = keras.models.Sequential(
        [
            lstm_layer,
            keras.layers.BatchNormalization(),
            keras.layers.Dense(output_size),
        ]
    )
    return model
Let's load the MNIST dataset:

mnist = keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
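As a quick sanity check (a minimal sketch, not part of the original walkthrough), we can inspect the data shapes: each 28x28 image is fed to the LSTM as a sequence of 28 timesteps with 28 features per step.

# Each 28x28 image is treated as a sequence of 28 timesteps, 28 features each.
print(x_train.shape)  # (60000, 28, 28)
print(x_test.shape)   # (10000, 28, 28)
print(sample.shape)   # (28, 28): a single input sequence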
Let's create a model instance and train it.

We choose sparse_categorical_crossentropy as the loss function for the model. The output of the model has a shape of [batch_size, 10]. The target for the model is an integer vector, with each integer in the range of 0 to 9.
model = build_model(allow_cudnn_kernel=True)

model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer="sgd",
    metrics=["accuracy"],
)

model.fit(
    x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
938/938 [==============================] - 12s 11ms/step - loss: 1.3152 - accuracy: 0.5698 - val_loss: 0.5888 - val_accuracy: 0.8086

<tensorflow.python.keras.callbacks.History at 0x154f3e950>
Now, let's compare to a model that does not use the CuDNN kernel:
noncudnn_model = build_model(allow_cudnn_kernel=False)
noncudnn_model.set_weights(model.get_weights())
noncudnn_model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer="sgd",
    metrics=["accuracy"],
)
noncudnn_model.fit(
    x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
938/938 [==============================] - 14s 14ms/step - loss: 0.4382 - accuracy: 0.8669 - val_loss: 0.3223 - val_accuracy: 0.8955

<tensorflow.python.keras.callbacks.History at 0x154ce1a10>
When running on a machine with an NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.
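If you want to quantify the difference on your own machine, a minimal sketch like the following times one extra epoch of each variant (this is not part of the original walkthrough, it trains each model a little further, and the exact numbers depend entirely on your hardware):

import time

for name, m in [("cudnn", model), ("non_cudnn", noncudnn_model)]:
    start = time.time()
    # One additional silent epoch, just to measure wall-clock time per epoch.
    m.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
    print("%s: %.1f seconds per epoch" % (name, time.time() - start))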
The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The tf.device annotation below is just forcing the device placement. The model will run on CPU by default if no GPU is available.

You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?
import matplotlib.pyplot as plt

with tf.device("CPU:0"):
    cpu_model = build_model(allow_cudnn_kernel=True)
    cpu_model.set_weights(model.get_weights())
    result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
    print(
        "Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label)
    )
    plt.imshow(sample, cmap=plt.get_cmap("gray"))
Predicted result is: [3], target result is: 5
(Image: the sample digit plotted in grayscale.)
RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single timestep. For example, a single timestep of video could carry both image and audio data at the same time. The data shape in this case could be:

[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]

In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be:

[batch, timestep, {"location": [x, y], "pressure": [force]}]
The following code provides an example of how to build a custom RNN cell that accepts such structured inputs.
Define a custom cell that supports nested input/output

See Making new Layers & Models via subclassing for details on writing your own layers.
class NestedCell(keras.layers.Layer):
    def __init__(self, unit_1, unit_2, unit_3, **kwargs):
        self.unit_1 = unit_1
        self.unit_2 = unit_2
        self.unit_3 = unit_3
        self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
        self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
        super(NestedCell, self).__init__(**kwargs)

    def build(self, input_shapes):
        # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
        i1 = input_shapes[0][1]
        i2 = input_shapes[1][1]
        i3 = input_shapes[1][2]

        self.kernel_1 = self.add_weight(
            shape=(i1, self.unit_1), initializer="uniform", name="kernel_1"