This model will have three inputs:
the title of the ticket (text input),
the text body of the ticket (text input), and
any tags added by the user (categorical input).
This model will have two outputs:
the priority score between 0 and 1 (scalar sigmoid output), and
the department that should handle the ticket (softmax output over the set of departments).
You can build this model in a few lines with the functional API:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_tags = 12  # Number of unique issue tags
num_words = 10000  # Size of vocabulary obtained when preprocessing text data
num_departments = 4  # Number of departments for predictions

title_input = keras.Input(
    shape=(None,), name="title"
)  # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body")  # Variable-length sequence of ints
tags_input = keras.Input(
    shape=(num_tags,), name="tags"
)  # Binary vectors of size `num_tags`

# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the body into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)

# Reduce the sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce the sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)

# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])

# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)

# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
    inputs=[title_input, body_input, tags_input],
    outputs=[priority_pred, department_pred],
)
Now plot the model:

keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)

[Model plot: multi_input_and_output_model.png]
When compiling this model, you can assign different losses to each output. You can even assign different weights to each loss -- to modulate their contribution to the total training loss. |
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[
        keras.losses.BinaryCrossentropy(from_logits=True),
        keras.losses.CategoricalCrossentropy(from_logits=True),
    ],
    loss_weights=[1.0, 0.2],
)
Since the output layers have different names, you could also specify the loss like this:

model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={
        "priority": keras.losses.BinaryCrossentropy(from_logits=True),
        "department": keras.losses.CategoricalCrossentropy(from_logits=True),
    },
    loss_weights=[1.0, 0.2],
)
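You can pass per-output metrics the same way, as a dict keyed by output name. A minimal sketch, assuming you also want accuracy on both heads (the metric choices here are illustrative and not part of the original example):

model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={
        "priority": keras.losses.BinaryCrossentropy(from_logits=True),
        "department": keras.losses.CategoricalCrossentropy(from_logits=True),
    },
    loss_weights=[1.0, 0.2],
    metrics={
        # threshold=0.0 because the "priority" output is a raw logit, not a sigmoid probability
        "priority": [keras.metrics.BinaryAccuracy(threshold=0.0)],
        # CategoricalAccuracy takes the argmax, so it works on logits directly
        "department": [keras.metrics.CategoricalAccuracy()],
    },
)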
Train the model by passing NumPy arrays of inputs and targets; since the inputs and outputs are named, you can pass them as dicts keyed by those names:
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype("float32")

# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))

model.fit(
    {"title": title_data, "body": body_data, "tags": tags_data},
    {"priority": priority_targets, "department": dept_targets},
    epochs=2,
    batch_size=32,
)
Epoch 1/2
40/40 [==============================] - 3s 21ms/step - loss: 1.2713 - priority_loss: 0.7000 - department_loss: 2.8567
Epoch 2/2
40/40 [==============================] - 1s 22ms/step - loss: 1.2947 - priority_loss: 0.6990 - department_loss: 2.9786
<tensorflow.python.keras.callbacks.History at 0x156dbce10>
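Once trained, the model can be called on new data keyed the same way. A minimal sketch of inference on a few of the dummy samples above:

priority_logits, department_logits = model.predict(
    {"title": title_data[:3], "body": body_data[:3], "tags": tags_data[:3]}
)
# Both outputs are raw logits here, since neither Dense output layer has an activation;
# apply a sigmoid / softmax if you need probabilities.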
When calling fit with a Dataset object, the Dataset should yield either a tuple of lists like ([title_data, body_data, tags_data], [priority_targets, dept_targets]) or a tuple of dictionaries like ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}).
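For example, here is a minimal sketch that wraps the dummy arrays above into a Dataset of dictionaries (assuming the TensorFlow backend, where tf.data is available):

import tensorflow as tf

# Build a Dataset yielding ({input_name: data}, {output_name: target}) tuples
train_dataset = tf.data.Dataset.from_tensor_slices(
    (
        {"title": title_data, "body": body_data, "tags": tags_data},
        {"priority": priority_targets, "department": dept_targets},
    )
)
train_dataset = train_dataset.shuffle(buffer_size=1280).batch(32)

model.fit(train_dataset, epochs=1)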
For a more detailed explanation, refer to the training and evaluation guide.
A toy ResNet model |
In addition to models with multiple inputs and outputs, the functional API makes it easy to manipulate non-linear connectivity topologies -- these are models with layers that are not connected sequentially, which the Sequential API cannot handle. |
A common use case for this is residual connections. Let's build a toy ResNet model for CIFAR10 to demonstrate this:
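The code below is a minimal sketch of such a model with a single residual block; the exact layer sizes are illustrative choices rather than a definitive architecture (CIFAR10 images are 32x32 RGB, hence the input shape):

# Inputs are 32x32 RGB images
inputs = keras.Input(shape=(32, 32, 3), name="img")
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.Conv2D(64, 3, activation="relu")(x)
block_1_output = layers.MaxPooling2D(3)(x)

# A residual block: two convolutions whose output is added back to the block input
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_1_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_2_output = layers.add([x, block_1_output])

# Classification head over the 10 CIFAR10 classes (logits, as above)
x = layers.Conv2D(64, 3, activation="relu")(block_2_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)

model = keras.Model(inputs, outputs, name="toy_resnet")
model.summary()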