resnet = ResNet()
dataset = ...
resnet.fit(dataset, epochs=10)
resnet.save(filepath)
Putting it all together: an end-to-end example
Here's what you've learned so far:
A Layer encapsulates state (created in __init__() or build()) and some computation (defined in call()).
Layers can be recursively nested to create new, bigger computation blocks.
Layers can create and track losses (typically regularization losses) as well as metrics, via add_loss() and add_metric() (a toy sketch follows this list).
The outer container, the thing you want to train, is a Model. A Model is just like a Layer, but with added training and serialization utilities.
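As a quick, hypothetical illustration of these points, here is a toy layer that registers a regularization loss and a metric from inside call(). The ActivityTracking name and the penalty factor are made up for this sketch, and it assumes the tf and layers imports shown in the example further below:

class ActivityTracking(layers.Layer):
    """Toy layer: passes inputs through while tracking a loss and a metric."""

    def call(self, inputs):
        # Register a small L2 activity penalty as a regularization loss.
        self.add_loss(1e-3 * tf.reduce_sum(tf.square(inputs)))
        # Track the mean activation as a metric.
        self.add_metric(tf.reduce_mean(inputs), name="mean_activation")
        return inputs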
Let's put all of these things together into an end-to-end example: we're going to implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.
Our VAE will be a subclass of Model, built as a nested composition of layers that subclass Layer. It will feature a regularization loss (KL divergence).
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
    """Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""

    def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
        super(Encoder, self).__init__(name=name, **kwargs)
        self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
        self.dense_mean = layers.Dense(latent_dim)
        self.dense_log_var = layers.Dense(latent_dim)
        self.sampling = Sampling()

    def call(self, inputs):
        x = self.dense_proj(inputs)
        z_mean = self.dense_mean(x)
        z_log_var = self.dense_log_var(x)
        z = self.sampling((z_mean, z_log_var))
        return z_mean, z_log_var, z
class Decoder(layers.Layer):
    """Converts z, the encoded digit vector, back into a readable digit."""

    def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
        super(Decoder, self).__init__(name=name, **kwargs)
        self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
        self.dense_output = layers.Dense(original_dim, activation="sigmoid")

    def call(self, inputs):
        x = self.dense_proj(inputs)
        return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
    """Combines the encoder and decoder into an end-to-end model for training."""

    def __init__(
        self,
        original_dim,
        intermediate_dim=64,
        latent_dim=32,
        name="autoencoder",
        **kwargs
    ):
        super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
        self.original_dim = original_dim
        self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
        self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)

    def call(self, inputs):
        z_mean, z_log_var, z = self.encoder(inputs)
        reconstructed = self.decoder(z)
        # Add KL divergence regularization loss.
        kl_loss = -0.5 * tf.reduce_mean(
            z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
        )
        self.add_loss(kl_loss)
        return reconstructed
Let's write a simple training loop on MNIST:
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255

train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
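With the data pipeline in place, here is a minimal sketch of the loop itself: for each batch, run the model under a GradientTape, compute the MSE reconstruction loss, add the KL loss that call() registered via add_loss() (exposed through vae.losses), and apply the gradients. The epoch count and logging interval are arbitrary choices for illustration.

epochs = 2

# Iterate over epochs.
for epoch in range(epochs):
    print("Start of epoch %d" % (epoch,))

    # Iterate over the batches of the dataset.
    for step, x_batch_train in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            reconstructed = vae(x_batch_train)
            # Reconstruction loss between inputs and outputs.
            loss = mse_loss_fn(x_batch_train, reconstructed)
            # Add the KL divergence loss registered in call() via add_loss().
            loss += sum(vae.losses)

        grads = tape.gradient(loss, vae.trainable_weights)
        optimizer.apply_gradients(zip(grads, vae.trainable_weights))

        loss_metric(loss)

        if step % 100 == 0:
            print("step %d: mean loss = %.4f" % (step, loss_metric.result()))

Because VariationalAutoEncoder subclasses Model, the same model could also be trained with the built-in compile()/fit() workflow; losses added through add_loss() are folded into the training objective automatically.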