Epoch 34/100
53/53 [==============================] - 61s 1s/step - loss: 0.0533 - accuracy: 0.9828 - val_loss: 0.0161 - val_accuracy: 0.9947
Epoch 35/100
53/53 [==============================] - 61s 1s/step - loss: 0.0258 - accuracy: 0.9911 - val_loss: 0.0277 - val_accuracy: 0.9867
Epoch 36/100
53/53 [==============================] - 60s 1s/step - loss: 0.0261 - accuracy: 0.9901 - val_loss: 0.0542 - val_accuracy: 0.9787
Epoch 37/100
53/53 [==============================] - 60s 1s/step - loss: 0.0368 - accuracy: 0.9877 - val_loss: 0.0699 - val_accuracy: 0.9813
Epoch 38/100
53/53 [==============================] - 63s 1s/step - loss: 0.0251 - accuracy: 0.9890 - val_loss: 0.0206 - val_accuracy: 0.9907
Epoch 39/100
53/53 [==============================] - 62s 1s/step - loss: 0.0220 - accuracy: 0.9913 - val_loss: 0.0211 - val_accuracy: 0.9947
Evaluation

print(model.evaluate(valid_ds))

24/24 [==============================] - 6s 244ms/step - loss: 0.0146 - accuracy: 0.9947
[0.014629718847572803, 0.9946666955947876]
We get ~99% validation accuracy.
Demonstration

Let's take some samples and:

- Predict the speaker
- Compare the prediction with the real speaker
- Listen to the audio to see that despite the samples being noisy, the model is still pretty accurate
import numpy as np
from IPython.display import Audio, display

SAMPLES_TO_DISPLAY = 10

test_ds = paths_and_labels_to_dataset(valid_audio_paths, valid_labels)
test_ds = test_ds.shuffle(buffer_size=BATCH_SIZE * 8, seed=SHUFFLE_SEED).batch(
    BATCH_SIZE
)
test_ds = test_ds.map(lambda x, y: (add_noise(x, noises, scale=SCALE), y))

for audios, labels in test_ds.take(1):
    # Get the signal FFT
    ffts = audio_to_fft(audios)
    # Predict
    y_pred = model.predict(ffts)
    # Take random samples
    rnd = np.random.randint(0, BATCH_SIZE, SAMPLES_TO_DISPLAY)
    audios = audios.numpy()[rnd, :, :]
    labels = labels.numpy()[rnd]
    y_pred = np.argmax(y_pred, axis=-1)[rnd]
    for index in range(SAMPLES_TO_DISPLAY):
        # For every sample, print the true and predicted label
        # as well as play back the noisy audio
        print(
            "Speaker: {} - Predicted: {}".format(
                class_names[labels[index]],
                class_names[y_pred[index]],
            )
        )
        display(Audio(audios[index, :, :].squeeze(), rate=SAMPLING_RATE))
Train a 3D convolutional neural network to predict the presence of pneumonia.
Introduction |
This example will show the steps needed to build a 3D convolutional neural network (CNN) to predict the presence of viral pneumonia in computed tomography (CT) scans. 2D CNNs are commonly used to process RGB images (3 channels). A 3D CNN is simply the 3D equivalent: it takes as input a 3D volume or a sequence of 2D frames (e.g. slices in a CT scan). 3D CNNs are a powerful model for learning representations of volumetric data.
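To make the extra dimension concrete, here is a minimal sketch (not part of the example's code; the shapes are illustrative assumptions) of a single Conv3D layer consuming a volume:

import tensorflow as tf
from tensorflow.keras import layers

# One dummy volume: (batch, depth, height, width, channels).
volume = tf.random.normal((1, 64, 128, 128, 1))
# A 3x3x3 kernel slides along depth as well as height and width.
conv = layers.Conv3D(filters=8, kernel_size=3, activation="relu")
print(conv(volume).shape)  # (1, 62, 126, 126, 8)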
References |
A survey on Deep Learning Advances on Different 3D Data Representations
VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition |
FusionNet: 3D Object Classification Using Multiple Data Representations
Uniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction |
Setup |
import os |
import zipfile |
import numpy as np |
import tensorflow as tf |
from tensorflow import keras |
from tensorflow.keras import layers |
Downloading the MosMedData: Chest CT Scans with COVID-19 Related Findings |
In this example, we use a subset of the MosMedData: Chest CT Scans with COVID-19 Related Findings. This dataset consists of lung CT scans with COVID-19 related findings, as well as scans without such findings.
We will be using the associated radiological findings of the CT scans as labels to build a classifier to predict the presence of viral pneumonia. Hence, the task is a binary classification problem.
# Download url of normal CT scans.
url = "https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip"
filename = os.path.join(os.getcwd(), "CT-0.zip")
keras.utils.get_file(filename, url)

# Download url of abnormal CT scans.
url = "https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip"
filename = os.path.join(os.getcwd(), "CT-23.zip")
keras.utils.get_file(filename, url)

# Make a directory to store the data.
os.makedirs("MosMedData")

# Unzip data in the newly created directory.
with zipfile.ZipFile("CT-0.zip", "r") as z_fp:
    z_fp.extractall("./MosMedData/")

with zipfile.ZipFile("CT-23.zip", "r") as z_fp:
    z_fp.extractall("./MosMedData/")
Downloading data from https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip |
1065476096/1065471431 [==============================] - 236s 0us/step |
Downloading data from https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip |
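To make the binary labels concrete, here is a hedged sketch (these helper lists are an assumption for illustration, not the example's own loading code) of how the two extracted folders map to the two classes:

import os

# Assumed mapping: CT-0 holds scans without findings (label 0),
# CT-23 holds scans with viral pneumonia findings (label 1).
normal_paths = [
    os.path.join("MosMedData", "CT-0", f)
    for f in os.listdir(os.path.join("MosMedData", "CT-0"))
]
abnormal_paths = [
    os.path.join("MosMedData", "CT-23", f)
    for f in os.listdir(os.path.join("MosMedData", "CT-23"))
]
labels = [0] * len(normal_paths) + [1] * len(abnormal_paths)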