    ax[i].set_title("Model {}".format(metric))
    ax[i].set_xlabel("epochs")
    ax[i].set_ylabel(metric)
    ax[i].legend(["train", "val"])
Make predictions on a single CT scan |
# Load best weights.
model.load_weights("3d_image_classification.h5")
prediction = model.predict(np.expand_dims(x_val[0], axis=0))[0]
# The model outputs a single sigmoid probability for the "abnormal" class.
scores = [1 - prediction[0], prediction[0]]

class_names = ["normal", "abnormal"]
for score, name in zip(scores, class_names):
    print(
        "This model is %.2f percent confident that CT scan is %s"
        % ((100 * score), name)
    )
This model is 26.60 percent confident that CT scan is normal |
This model is 73.40 percent confident that CT scan is abnormal |
Minimal implementation of volumetric rendering as shown in NeRF. |
Introduction |
In this example, we present a minimal implementation of the research paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by Ben Mildenhall et al. The authors propose an ingenious way to synthesize novel views of a scene by modelling the volumetric scene function through a neural network.
To help you understand this intuitively, let's start with the following question: would it be possible to give a neural network the position of a pixel in an image, and ask the network to predict the color at that position?
Figure 1: A neural network being given coordinates of an image as input and asked to predict the color at the coordinates.
The neural network would hypothetically memorize (overfit on) the image. This means that our neural network would have encoded the entire image in its weights. We could query the neural network with each position, and it would eventually reconstruct the entire image. |
Figure 2: The trained neural network recreates the image from scratch.
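To make this intuition concrete, here is a minimal sketch of such a coordinate-to-color model in Keras. It is an illustration, not part of the original example; the layer sizes and the normalized (x, y) input are assumptions.

from tensorflow import keras
from tensorflow.keras import layers

# Toy model: map a normalized (x, y) pixel coordinate to an RGB color.
coordinate_to_color = keras.Sequential(
    [
        keras.Input(shape=(2,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(3, activation="sigmoid"),  # RGB values in [0, 1]
    ]
)
coordinate_to_color.compile(optimizer="adam", loss="mse")
# Training on (coordinate, color) pairs from a single image makes the network
# memorize that image; querying every coordinate then reconstructs it.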
A question now arises: how do we extend this idea to learn a 3D volumetric scene? Implementing a process similar to the one above would require knowledge of every voxel (volume pixel). It turns out that this is quite a challenging task.
The authors of the paper propose a minimal and elegant way to learn a 3D scene using a few images of the scene. They discard the use of voxels for training. The network learns to model the volumetric scene, thus generating novel views (images) of the 3D scene that the model was not shown at training time. |
There are a few prerequisites one needs to understand to fully appreciate the process. We structure the example in such a way that you will have all the required knowledge before starting the implementation. |
Setup |
# Setting random seed to obtain reproducible results.
import tensorflow as tf

tf.random.set_seed(42)

import os
import glob
import imageio
import numpy as np
from tqdm import tqdm
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
# Initialize global variables.
AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 5
NUM_SAMPLES = 32  # Number of points sampled along each ray.
POS_ENCODE_DIMS = 16  # Number of frequencies used for positional encoding.
EPOCHS = 20
Download and load the data |
The npz data file contains images, camera poses, and a focal length. The images are taken from multiple camera angles as shown in Figure 3. |
Figure 3: Multiple camera angles
Source: NeRF
To understand camera poses in this context, we first have to think of a camera as a mapping between the real world and the 2-D image.
Figure 4: 3-D world to 2-D image mapping through a camera
Source: Mathworks
Consider the following equation:

x = PX

where x is the 2-D image point, X is the 3-D world point, and P is the camera matrix. P is a 3 x 4 matrix that plays the crucial role of mapping a real-world object onto the image plane.
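As a quick illustration (a sketch, not code from the example; the numbers and names are arbitrary), projecting a 3-D point in homogeneous coordinates with a 3 x 4 camera matrix looks like this:

import numpy as np

P = np.random.rand(3, 4)                  # Placeholder 3 x 4 camera matrix
X_world = np.array([1.0, 2.0, 3.0, 1.0])  # 3-D world point in homogeneous coordinates
x_homogeneous = P @ X_world               # x = PX
x_image = x_homogeneous[:2] / x_homogeneous[2]  # Perspective divide gives the 2-D image point
print(x_image)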
The camera-matrix is an affine transform matrix that is concatenated with a 3 x 1 column [image height, image width, focal length] to produce the pose matrix. This matrix is of dimensions 3 x 5 where the first 3 x 3 block is in the camera’s point of view. The axes are [down, right, backwards] or [-y, x, z] where the camera is facing forwards -z. |
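Assuming a pose stored in this 3 x 5 layout (a sketch for illustration only; the variable names are hypothetical), the blocks can be sliced apart as follows:

import numpy as np

pose = np.zeros((3, 5))      # Placeholder 3 x 5 pose matrix
camera_matrix = pose[:, :4]  # The 3 x 4 affine transform [R | t]
rotation = pose[:, :3]       # 3 x 3 block giving the camera's orientation
translation = pose[:, 3]     # Camera position (translation column)
hwf = pose[:, 4]             # [image height, image width, focal length]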
Figure 5: The affine transformation.
The COLMAP frame is [right, down, forwards] or [x, -y, -z]. Read more about COLMAP here. |
# Download the data if it does not already exist.
file_name = "tiny_nerf_data.npz"
url = "https://people.eecs.berkeley.edu/~bmild/nerf/tiny_nerf_data.npz"
if not os.path.exists(file_name):
    file_name = keras.utils.get_file(fname=file_name, origin=url)

data = np.load(file_name)
images = data["images"]
im_shape = images.shape
(num_images, H, W, _) = images.shape
(poses, focal) = (data["poses"], data["focal"])

# Plot a random image from the dataset for visualization.
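# A minimal sketch of such a plot (one randomly chosen training image):
plt.imshow(images[np.random.randint(low=0, high=num_images)])
plt.show()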