```python
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
```

```
{'units': 64}
```
Note that the __init__() method of the base Layer class takes some keyword arguments, in particular a name and a dtype. It's good practice to pass these arguments to the parent class in __init__() and to include them in the layer config:
```python
import tensorflow as tf
from tensorflow import keras


class Linear(keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super(Linear, self).__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

    def get_config(self):
        config = super(Linear, self).get_config()
        config.update({"units": self.units})
        return config


layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
```

```
{'name': 'linear_8', 'trainable': True, 'dtype': 'float32', 'units': 64}
```
If you need more flexibility when deserializing the layer from its config, you can also override the from_config() class method. This is the base implementation of from_config():

```python
def from_config(cls, config):
    return cls(**config)
```
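For example, a layer that takes a non-primitive constructor argument (such as an initializer object) can serialize it in get_config() and rebuild it in from_config() before calling the constructor. Here is a minimal sketch; the LinearFlexible class and its initializer argument are illustrative, not part of the guide's running example:

```python
class LinearFlexible(keras.layers.Layer):
    def __init__(self, units=32, initializer="random_normal", **kwargs):
        super(LinearFlexible, self).__init__(**kwargs)
        self.units = units
        self.initializer = keras.initializers.get(initializer)

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer=self.initializer,
            trainable=True,
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w)

    def get_config(self):
        config = super(LinearFlexible, self).get_config()
        # Store the initializer in serialized (JSON-friendly) form.
        config.update(
            {
                "units": self.units,
                "initializer": keras.initializers.serialize(self.initializer),
            }
        )
        return config

    @classmethod
    def from_config(cls, config):
        # Rebuild the initializer object before passing the config to __init__().
        config["initializer"] = keras.initializers.deserialize(config["initializer"])
        return cls(**config)
```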
To learn more about serialization and saving, see the complete guide to saving and serializing models.
## Privileged training argument in the call() method
Some layers, in particular the BatchNormalization layer and the Dropout layer, have different behaviors during training and inference. For such layers, it is standard practice to expose a training (boolean) argument in the call() method.

By exposing this argument in call(), you enable the built-in training and evaluation loops (e.g. fit()) to correctly use the layer in training and inference.
```python
class CustomDropout(keras.layers.Layer):
    def __init__(self, rate, **kwargs):
        super(CustomDropout, self).__init__(**kwargs)
        self.rate = rate

    def call(self, inputs, training=None):
        if training:
            return tf.nn.dropout(inputs, rate=self.rate)
        return inputs
```
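A quick way to see the effect (a hypothetical usage sketch, not from the guide):

```python
x = tf.ones((2, 4))
dropout = CustomDropout(rate=0.5)
y_train = dropout(x, training=True)   # some entries randomly zeroed (rest rescaled)
y_infer = dropout(x, training=False)  # identity: inputs pass through unchanged
```

When the layer is used inside a model, fit() calls it with training=True, while evaluate() and predict() call it with training=False.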
## Privileged mask argument in the call() method
The other privileged argument supported by call() is the mask argument.

You will find it in all Keras RNN layers. A mask is a boolean tensor (one boolean value per timestep in the input) used to skip certain input timesteps when processing timeseries data.

Keras will automatically pass the correct mask argument to __call__() for layers that support it, when a mask is generated by a prior layer. Mask-generating layers are the Embedding layer configured with mask_zero=True, and the Masking layer.

To learn more about masking and how to write masking-enabled layers, please check out the guide "understanding padding and masking".
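As a minimal sketch (the vocabulary size and layer dimensions are illustrative), an Embedding layer with mask_zero=True generates a mask that Keras propagates automatically to downstream layers that consume it:

```python
inputs = keras.Input(shape=(None,), dtype="int32")
# Token id 0 is treated as padding and masked out.
x = keras.layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
# The LSTM receives the mask automatically and skips the padded timesteps.
outputs = keras.layers.LSTM(32)(x)
```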
## The Model class
In general, you will use the Layer class to define inner computation blocks, and will use the Model class to define the outer model -- the object you will train.

For instance, in a ResNet50 model, you would have several ResNet blocks subclassing Layer, and a single Model encompassing the entire ResNet50 network.
The Model class has the same API as Layer, with the following differences:

- It exposes built-in training, evaluation, and prediction loops (model.fit(), model.evaluate(), model.predict()).
- It exposes the list of its inner layers, via the model.layers property.
- It exposes saving and serialization APIs (save(), save_weights(), ...).
Effectively, the Layer class corresponds to what we refer to in the literature as a "layer" (as in "convolution layer" or "recurrent layer") or as a "block" (as in "ResNet block" or "Inception block").

Meanwhile, the Model class corresponds to what is referred to in the literature as a "model" (as in "deep learning model") or as a "network" (as in "deep neural network").

So if you're wondering, "should I use the Layer class or the Model class?", ask yourself: will I need to call fit() on it? Will I need to call save() on it? If so, go with Model. If not (either because your class is just a block in a bigger system, or because you are writing training & saving code yourself), use Layer.
For instance, we could take our mini-resnet example above, and use it to build a Model that we could train with fit(), and that we could save with save_weights():

```python
from tensorflow.keras import layers


class ResNet(tf.keras.Model):
    def __init__(self, num_classes=1000):
        super(ResNet, self).__init__()
        # ResNetBlock is the Layer subclass defined earlier in the guide.
        self.block_1 = ResNetBlock()
        self.block_2 = ResNetBlock()
        self.global_pool = layers.GlobalAveragePooling2D()
        self.classifier = layers.Dense(num_classes)

    def call(self, inputs):
        x = self.block_1(inputs)
        x = self.block_2(x)
        x = self.global_pool(x)
        return self.classifier(x)
```
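A hedged usage sketch: here `dataset` stands in for any tf.data.Dataset yielding (images, labels) batches, and the weights filepath is illustrative:

```python
resnet = ResNet(num_classes=1000)
# The classifier outputs raw logits, so the loss is configured accordingly.
resnet.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
resnet.fit(dataset, epochs=10)       # built-in training loop
resnet.save_weights("resnet_ckpt")   # built-in weight saving
```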