relative_path | section | filename | text
---|---|---|---
TensorFlow/Segmentation/UNet_Industrial | UNet_Industrial | README | # UNet Industrial Defect Segmentation for TensorFlow
This repository provides a script and recipe to train UNet Industrial to achieve state-of-the-art
accuracy on the DAGM2007 dataset. It is tested and maintained by NVIDIA.
The UNet model for TensorFlow1 is no longer maintained and will soon become unavailable. Please consider PyTorch or TensorFlow2 models as substitutes for your requirements.
## Table of Contents
- [Model overview](#model-overview)
* [Model architecture](#model-architecture)
* [Default configuration](#default-configuration)
* [Feature support matrix](#feature-support-matrix)
* [Features](#features)
* [Mixed precision training](#mixed-precision-training)
* [Enabling mixed precision](#enabling-mixed-precision)
* [Enabling TF32](#enabling-tf32)
- [Setup](#setup)
* [Requirements](#requirements)
- [Quick Start Guide](#quick-start-guide)
- [Advanced](#advanced)
* [Scripts and sample code](#scripts-and-sample-code)
* [Parameters](#parameters)
* [Command-line options](#command-line-options)
* [Getting the data](#getting-the-data)
* [Dataset guidelines](#dataset-guidelines)
* [Multi-dataset](#multi-dataset)
* [Training process](#training-process)
* [Inference process](#inference-process)
- [Performance](#performance)
* [Benchmarking](#benchmarking)
* [Training performance benchmark](#training-performance-benchmark)
* [Inference performance benchmark](#inference-performance-benchmark)
* [Results](#results)
* [Training accuracy results](#training-accuracy-results)
* [Training accuracy: NVIDIA DGX A100 (8x A100 40GB)](#training-accuracy-nvidia-dgx-a100-8x-a100-40gb)
* [Training accuracy: NVIDIA DGX-1 (8x V100 16GB)](#training-accuracy-nvidia-dgx-1-8x-v100-16gb)
* [Training stability results](#training-stability-results)
* [Training stability: NVIDIA DGX A100 (8x A100 40GB)](#training-stability-nvidia-dgx-a100-8x-a100-40gb)
* [Training stability: NVIDIA DGX-1 (8x V100 16GB)](#training-stability-nvidia-dgx-1-8x-v100-16gb)
* [Training performance results](#training-performance-results)
* [Training performance: NVIDIA DGX A100 (8x A100 40GB)](#training-performance-nvidia-dgx-a100-8x-a100-40gb)
* [Training performance: NVIDIA DGX-1 (8x V100 16GB)](#training-performance-nvidia-dgx-1-8x-v100-16gb)
* [Inference performance results](#inference-performance-results)
* [Inference performance: NVIDIA DGX A100 (1x A100 40GB)](#inference-performance-nvidia-dgx-a100-1x-a100-40gb)
* [Inference performance: NVIDIA DGX-1 (1x V100 16GB)](#inference-performance-nvidia-dgx-1-1x-v100-16gb)
- [Release notes](#release-notes)
* [Changelog](#changelog)
* [Known issues](#known-issues)
## Model overview
This UNet model is adapted from the original version of the [UNet model](https://arxiv.org/abs/1505.04597) which is
a convolutional auto-encoder for 2D image segmentation. UNet was first introduced by
Olaf Ronneberger, Philipp Fischer, and Thomas Brox in the paper:
[UNet: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597).
This work proposes a modified version of UNet, called `TinyUNet` which performs efficiently and with very high accuracy
on the industrial anomaly dataset [DAGM2007](https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html).
*TinyUNet*, like the original *UNet*, is composed of two parts:
- an encoding sub-network (left side)
- a decoding sub-network (right side)
The encoding sub-network repeatedly applies 3 downsampling blocks, each composed of two 2D convolutions followed by a 2D max-pooling
layer. In the decoding sub-network, 3 upsampling blocks are composed of an upsample2D
layer followed by a 2D convolution, a concatenation operation with the residual connection, and two 2D convolutions.
`TinyUNet` was introduced to reduce the model capacity, which was leading to a high degree of over-fitting on a
small dataset like DAGM2007. The complete architecture is presented in the figure below:

Figure 1. Architecture of the UNet Industrial
### Default Configuration
This model trains for 2500 iterations, under the following setup:
* Global Batch Size: 16
* Optimizer RMSProp:
    * decay: 0.9
    * momentum: 0.8
    * centered: True
* Learning Rate Schedule: Exponential Step Decay (see the sketch after this list)
    * decay: 0.8
    * steps: 500
    * initial learning rate: 1e-4
* Weight Initialization: He Uniform Distribution (introduced by [Kaiming He et al. in 2015](https://arxiv.org/abs/1502.01852) to address issues related to ReLU activations in deep neural networks)
* Loss Function: Adaptive
    * When DICE Loss < 0.3, Loss = Binary Cross Entropy
    * Else, Loss = DICE Loss
* Data Augmentation
    * Random Horizontal Flip (50% chance)
    * Random Rotation 90°
* Activation Functions:
    * ReLU is used for all layers
    * Sigmoid is used at the output to ensure that the outputs are in the range [0, 1]
* Weight decay: 1e-5
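As a point of reference, the learning-rate schedule and optimizer above can be written with the standard TF1 API as in the minimal sketch below. This is illustrative only and not necessarily how `main.py` constructs them (variable names here are hypothetical), but the decay factor, decay steps, initial rate, and RMSProp parameters mirror the configuration listed above.
```python
import tensorflow as tf

# Sketch: exponential step decay (x0.8 every 500 steps, starting at 1e-4)
# feeding a centered RMSProp optimizer, as listed in the default configuration.
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
    learning_rate=1e-4,     # initial learning rate
    global_step=global_step,
    decay_steps=500,        # decay every 500 steps
    decay_rate=0.8,         # multiply the learning rate by 0.8 at each decay
    staircase=True,         # step decay rather than a smooth exponential
)
optimizer = tf.train.RMSPropOptimizer(
    learning_rate, decay=0.9, momentum=0.8, centered=True
)
```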
### Feature support matrix
The following features are supported by this model.
| **Feature** | **UNet Industrial** |
|---------------------------------|-----|
| Automatic mixed precision (AMP) | Yes |
| Horovod Multi-GPU (NCCL) | Yes |
| Accelerated Linear Algebra (XLA)| Yes |
#### Features
**Automatic Mixed Precision (AMP)**
This implementation of UNet uses AMP to implement mixed precision training. It allows us to use FP16 training with FP32 master weights by modifying just a few lines of code.
**Horovod**
Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, see the [Horovod: Official repository](https://github.com/horovod/horovod).
**Multi-GPU training with Horovod**
Our model uses Horovod to implement efficient multi-GPU training with NCCL. For details, see example sources in this repository or see the [TensorFlow tutorial](https://github.com/horovod/horovod/#usage).
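As a rough illustration of the pattern (not the exact wiring used in this repository), Horovod-based multi-GPU training in TF1 generally follows the sketch below; names such as `opt` and `hooks` are placeholders.
```python
import horovod.tensorflow as hvd
import tensorflow as tf

# Sketch of the standard Horovod TF1 pattern.
hvd.init()

# Pin each process to a single GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Scale the learning rate with the number of workers and wrap the optimizer
# so that gradients are averaged across GPUs (NCCL under the hood).
opt = tf.train.RMSPropOptimizer(1e-4 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

# Broadcast the initial variable states from rank 0 to all other processes.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]
```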
**XLA support (experimental)**
XLA is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. The results are improvements in speed and memory usage: most internal benchmarks run ~1.1-1.5x faster after XLA is enabled.
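For reference, XLA auto-clustering can be requested through the TF1 session configuration as in the sketch below; in this repository it is exposed through the `--xla` flag instead.
```python
import tensorflow as tf

# Sketch: turn on XLA auto-clustering for a TF1 session.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1
)
# Pass `config` to tf.Session(config=config) or to the Estimator's RunConfig.
```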
### Mixed precision training
Mixed precision is the combined use of different numerical precisions in a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740) training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of [Tensor Cores](https://developer.nvidia.com/tensor-cores) in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using [mixed precision training](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html) previously required two steps:
1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
This can now be achieved using Automatic Mixed Precision (AMP) for TensorFlow to enable the full [mixed precision methodology](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#tensorflow) in your existing TensorFlow model code. AMP enables mixed precision training on Volta and Turing GPUs automatically. The TensorFlow framework code makes all necessary model changes internally.
In TF-AMP, the computational graph is optimized to use as few casts as necessary and maximize the use of FP16, and the loss scaling is automatically applied inside of supported optimizers. AMP can be configured to work with the existing tf.contrib loss scaling manager by disabling the AMP scaling with a single environment variable to perform only the automatic mixed-precision optimization. It accomplishes this by automatically rewriting all computation graphs with the necessary operations to enable mixed precision training and automatic loss scaling.
- How to train using mixed precision, see the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html) documentation.
- Techniques used for mixed precision training, see the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog.
- How to access and enable AMP for TensorFlow, see [Using TF-AMP](https://docs.nvidia.com/deeplearning/dgx/tensorflow-user-guide/index.html#tfamp) from the TensorFlow User Guide.
#### Enabling mixed precision
This implementation exploits the TensorFlow Automatic Mixed Precision feature. To enable mixed precision training, the following environment variable must be defined with the correct value before the training starts:
```
TF_ENABLE_AUTO_MIXED_PRECISION=1
```
Exporting this variable ensures that loss scaling is performed correctly and automatically.
By supplying the `--amp` flag to the `main.py` script while training in FP32, the following variable is set to its correct value for mixed precision training:
```
if params.use_amp:
    os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'
```
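For completeness, recent TF1 releases (1.14+) also expose the same graph rewrite through an explicit optimizer wrapper. The sketch below is an alternative to the environment variable and is not the mechanism used by this repository's scripts.
```python
import tensorflow as tf

# Sketch: enable automatic mixed precision via the graph-rewrite API (TF 1.14+).
# Loss scaling is inserted automatically by the rewrite.
optimizer = tf.train.RMSPropOptimizer(1e-4, decay=0.9, momentum=0.8, centered=True)
optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer)
```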
#### Enabling TF32
TensorFloat-32 (TF32) is the new math mode in [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.
TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations.
For more information, refer to the [TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
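If you need to rule TF32 out when debugging numerical differences, NVIDIA libraries honor an override environment variable. The sketch below is a debugging aid under that assumption, not something this repository's scripts do.
```python
import os

# Sketch: force strict FP32 math on Ampere by disabling TF32 in NVIDIA
# libraries. Must be set before the framework initializes the CUDA libraries.
os.environ["NVIDIA_TF32_OVERRIDE"] = "0"
```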
## Setup
The following section lists the requirements in order to start training the UNet Industrial model.
### Requirements
This repository contains a Dockerfile that extends the TensorFlow NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:
- [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
- TensorFlow 20.06-tf1-py3 [NGC container](https://ngc.nvidia.com/registry/nvidia-tensorflow)
- GPU-based architecture:
- [NVIDIA Volta](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/)
- [NVIDIA Turing](https://www.nvidia.com/en-us/geforce/turing/)
- [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
For more information about how to get started with NGC containers, see the following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning Documentation:
- [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html)
- [Accessing And Pulling From The NGC container registry](https://docs.nvidia.com/deeplearning/dgx/user-guide/index.html#accessing_registry)
- [Running TensorFlow](https://docs.nvidia.com/deeplearning/dgx/tensorflow-release-notes/running.html#running)
For those unable to use the TensorFlow NGC container, to set up the required environment or create your own container, see the versioned [NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html).
## Quick Start Guide
To train your model using mixed precision with Tensor Cores or using FP32, perform the following steps using the default parameters of the UNet model on the [DAGM2007 dataset](https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html). These steps enable you to build the UNet TensorFlow NGC container, train and evaluate your model, and generate predictions on the test data. Furthermore, you can then choose to:
* compare your evaluation accuracy with our [Training accuracy results](#training-accuracy-results),
* compare your training performance with our [Training performance benchmark](#training-performance-benchmark),
* compare your inference performance with our [Inference performance benchmark](#inference-performance-benchmark).
For the specifics concerning training and inference, see the [Advanced](#advanced) section.
1. Clone the repository.
```bash
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial
```
2. Build the UNet TensorFlow NGC container.
```bash
# Build the docker container
docker build . --rm -t unet_industrial:latest
```
3. Start an interactive session in the NGC container to run preprocessing/training/inference.
```bash
# make a directory for the dataset, for example ./dataset
mkdir <path/to/dataset/directory>
# make a directory for results, for example ./results
mkdir <path/to/results/directory>
# start the container with nvidia-docker
nvidia-docker run -it --rm --gpus all \
--shm-size=2g --ulimit memlock=-1 --ulimit stack=67108864 \
-v <path/to/dataset/directory>:/data/ \
-v <path/to/result/directory>:/results \
unet_industrial:latest
```
4. Download and preprocess the dataset: DAGM2007
To download the dataset, you can execute the following:
```bash
./download_and_preprocess_dagm2007.sh /data
```
**Important Information:** Some files of the dataset require an account to be downloaded; the script will invite you to download them manually and place them in the correct directory.
5. Start training.
To run training for a default configuration (as described in Default configuration, for example 1/4/8 GPUs,
FP32/TF-AMP), launch one of the scripts in the `./scripts` directory called
`./scripts/UNet{_AMP}_{1, 4, 8}GPU.sh`
Each of the scripts requires three parameters:
* path to the results directory of the model as the first argument
* path to the dataset as a second argument
* class ID from DAGM used (between 1-10)
For example, for class 1:
```bash
cd scripts/
./UNet_1GPU.sh /results /data 1
```
6. Run evaluation
Model evaluation on a checkpoint can be launched by running one of the scripts in the `./scripts` directory
called `./scripts/UNet{_AMP}_EVAL.sh`.
Each of the scripts requires three parameters:
* path to the results directory of the model as the first argument
* path to the dataset as a second argument
* class ID from DAGM used (between 1-10)
For example, for class 1:
```bash
cd scripts/
./UNet_EVAL.sh /results /data 1
```
## Advanced
The following sections provide greater details of the dataset, running training and inference, and the training results.
### Command line options
To see the full list of available options and their descriptions, use the -h or --help command line option, for example:
```bash
python main.py --help
```
The following mandatory flags must be used to tune different aspects of the training:
general
-------
`--exec_mode=train_and_evaluate` Which execution mode to run the model in.
`--iter_unit=batch` Will the model be run for X batches or X epochs?
`--num_iter=2500` Number of iterations to run.
`--batch_size=16` Size of each minibatch per GPU.
`--results_dir=/path/to/results` Directory in which to write training logs, summaries and checkpoints.
`--data_dir=/path/to/dataset` Directory which contains the DAGM2007 dataset.
`--dataset_name=DAGM2007` Name of the dataset used in this run (only DAGM2007 is supported at the moment).
`--dataset_classID=1` ClassID to consider to train or evaluate the network (used for DAGM).
model
-----
`--amp` Enable Automatic Mixed Precision to speed up FP32 computation using Tensor Cores.
`--xla` Enable TensorFlow XLA to maximise performance.
`--use_auto_loss_scaling` Use AutoLossScaling in TF-AMP
#### Dataset guidelines
The UNet model was trained with the [Weakly Supervised Learning for Industrial Optical Inspection (DAGM 2007)](https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html) dataset.
> The competition is inspired by problems from industrial image processing. In order to satisfy their customers' needs, companies have to guarantee the quality of their products, which can often be achieved only by inspection of the finished product. Automatic visual defect detection has the potential to reduce the cost of quality assurance significantly.
>
> The competitors have to design a stand-alone algorithm which is able to detect miscellaneous defects on various background textures.
>
> The particular challenge of this contest is that the algorithm must learn, without human intervention, to discern defects automatically from a weakly labeled (i.e., labels are not exact to the pixel level) training set, the exact characteristics of which are unknown at development time. During the competition, the programs have to be trained on new data without any human guidance.
**Source:** https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html
> The provided data is artificially generated, but similar to real world problems. It consists of multiple data sets, each consisting of 1000 images showing the background texture without defects, and of 150 images with one labeled defect each on the background texture. The images in a single data set are very similar, but each data set is generated by a different texture model and defect model.
> Not all deviations from the texture are necessarily defects. The algorithm will need to use the weak labels provided during the training phase to learn the properties that characterize a defect.
> Below are two sample images from two data sets. In both examples, the left images are without defects; the right ones contain a scratch-shaped defect which appears as a thin dark line, and a diffuse darker area, respectively. The defects are weakly labeled by a surrounding ellipse, shown in red.

The DAGM2007 challenge comes with two different sets:
- A development set: public and available for download from
[here](https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html).
The number of classes and sub-challenges for the development set is 6.
- A competition set: which requires an account to be downloaded from [here](https://hci.iwr.uni-heidelberg.de/node/3616).
The number of classes and sub-challenges for the competition set is 10.
The challenge consists of designing a single model with a set of predefined hyper-parameters that does not change
across the 10 different classes or sub-challenges of the competition set.
Performance shall be measured on the competition set, which is normalized and more complex than the public dataset,
and therefore offers the most unbiased evaluation method.
### Training Process
*Laplace Smoothing*
We use this technique in the DICE loss to improve training efficiency. It consists of replacing the epsilon
parameter (a very small value, around 1e-7, normally used to avoid division by zero) with 1. You can find more information at:
[https://en.wikipedia.org/wiki/Additive_smoothing](https://en.wikipedia.org/wiki/Additive_smoothing)
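A minimal sketch of a soft DICE loss with this Laplace smoothing (smoothing term of 1 instead of a tiny epsilon) is shown below; the repository's actual implementation may differ in reduction axes or weighting.
```python
import tensorflow as tf

def dice_loss(y_pred, y_true, smooth=1.0):
    """Soft DICE loss with Laplace smoothing (smooth=1 rather than ~1e-7)."""
    y_pred = tf.reshape(y_pred, [tf.shape(y_pred)[0], -1])
    y_true = tf.reshape(y_true, [tf.shape(y_true)[0], -1])
    intersection = tf.reduce_sum(y_pred * y_true, axis=1)
    union = tf.reduce_sum(y_pred, axis=1) + tf.reduce_sum(y_true, axis=1)
    dice_coefficient = (2.0 * intersection + smooth) / (union + smooth)
    return 1.0 - tf.reduce_mean(dice_coefficient)
```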
*Adaptive Loss*
The DICE loss is not able to provide a meaningful gradient at initialisation, which leads to instability and
often pushes the model to diverge. Nonetheless, once the model starts to converge, the DICE loss is able to
fully train the model very efficiently. Therefore, we implemented an *adaptive loss* composed of two sub-losses:
- Binary Cross-Entropy (BCE)
- DICE Loss
The model is trained with the BCE loss until the DICE loss reaches an experimentally defined threshold (0.3).
Thereafter, the DICE loss is used to finish training, as sketched below.
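The switching behaviour can be sketched as follows, reusing the `dice_loss` from the previous example; the names are illustrative and the authoritative logic lives in the repository's loss implementation.
```python
import tensorflow as tf

def adaptive_loss(y_pred, y_true, switch_threshold=0.3):
    """BCE while the DICE loss is still uninformative, DICE afterwards (sketch)."""
    dice = dice_loss(y_pred, y_true)  # see the Laplace-smoothing sketch above
    bce = tf.reduce_mean(tf.keras.backend.binary_crossentropy(y_true, y_pred))
    # Early in training the DICE loss sits above the threshold, so BCE is used;
    # once it drops below the threshold, training continues with the DICE loss.
    return tf.cond(dice < switch_threshold,
                   true_fn=lambda: dice,
                   false_fn=lambda: bce)
```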
*Weak Labelling*
This dataset is referred to as weakly labelled, meaning that the segmentation labels are not given at the pixel level
but rather in an approximate fashion.
## Performance
The performance measurements in this document were conducted at the time of publication and may not reflect the performance achieved from NVIDIA’s latest software release. For the most up-to-date performance measurements, go to [NVIDIA Data Center Deep Learning Product Performance](https://developer.nvidia.com/deep-learning-performance-training-inference).
### Benchmarking
The following sections show how to run benchmarks measuring the model performance in training and inference modes.
#### Training performance benchmark
To benchmark the training performance, you can run one of the scripts in the `./scripts/benchmarking/` directory
called `./scripts/benchmarking/UNet_trainbench{_AMP}_{1, 4, 8}GPU.sh`.
Each of the scripts requires two parameters:
* path to the dataset as the first argument
* class ID from DAGM used (between 1-10)
For example:
```bash
cd scripts/benchmarking/
./UNet_trainbench_1GPU.sh /data 1
```
#### Inference performance benchmark
To benchmark the inference performance, you can run one of the scripts in the `./scripts/benchmarking/` directory
called `./scripts/benchmarking/UNet_evalbench{_AMP}.sh`.
Each of the scripts requires two parameters:
* path to the dataset as the first argument
* class ID from DAGM used (between 1-10)
For example:
```bash
cd scripts/benchmarking/
./UNet_evalbench_AMP.sh /data 1
```
### Results
The following sections provide details on the achieved results in training accuracy, performance and inference performance.
#### Training accuracy results
##### Training accuracy: NVIDIA DGX A100 (8x A100 40GB)
Our results were obtained by running the `./scripts/UNet{_AMP}_{1, 8}GPU.sh` training
script in the `tensorflow:20.06-tf1-py3` NGC container on NVIDIA DGX A100 (8x A100 40GB) GPUs.
| GPUs | Batch size / GPU | Accuracy - TF32 | Accuracy - mixed precision | Time to train - TF32 [min] | Time to train - mixed precision [min] | Time to train speedup (TF32 to mixed precision) |
|:----:|:----------------:|:---------------:|:--------------------------:|:--------------------:|:-------------------------------:|:-----------------------------------------------:|
| 1 | 16 | 0.9717 | 0.9726 | 3.6 | 2.3 | 1.57 |
| 8 | 2 | 0.9733 | 0.9683 | 4.3 | 3.5 | 1.23 |
##### Training accuracy: NVIDIA DGX-1 (8x V100 16GB)
Our results were obtained by running the `./scripts/UNet{_AMP}_{1, 8}GPU.sh` training
script in the `tensorflow:20.06-tf1-py3` NGC container on NVIDIA DGX-1 (8x V100 16GB) GPUs.
| GPUs | Batch size / GPU | Accuracy - FP32 | Accuracy - mixed precision | Time to train - FP32 [min] | Time to train - mixed precision [min] | Time to train speedup (FP32 to mixed precision) |
|:----:|:----------------:|:---------------:|:--------------------------:|:--------------------:|:-------------------------------:|:-----------------------------------------------:|
| 1 | 16 | 0.9643 | 0.9653 | 10 | 8 | 1.25 |
| 8 | 2 | 0.9637 | 0.9655 | 2.5 | 2.5 | 1.00 |
#### Training performance results
##### Training performance: NVIDIA DGX A100 (8x A100 40GB)
Our results were obtained by running the
`./scripts/benchmarking/UNet_trainbench{_AMP}_{1, 4, 8}GPU.sh` training scripts in the
TensorFlow `20.06-tf1-py3` NGC container on NVIDIA DGX A100 (8x A100 40GB) GPUs.
| GPUs | Batch size / GPU | Throughput - TF32 [img/s] | Throughput - mixed precision [img/s] | Throughput speedup (TF32 - mixed precision) | Strong scaling - TF32 | Strong scaling - mixed precision |
|:----:|:----------------:|:-------------------------:|:------------------------------------:|:-------------------------------------------:|:---------------------:|:--------------------------------:|
| 1 | 16 | 135.95 | 255.26 | 1.88 | - | - |
| 4 | 4 | 420.2 | 691.19 | 1.64 | 3.09 | 2.71 |
| 8 | 2 | 655.05 | 665.66 | 1.02 | 4.82 | 2.61 |
##### Training performance: NVIDIA DGX-1 (8x V100 16GB)
Our results were obtained by running the
`./scripts/benchmarking/UNet_trainbench{_AMP}_{1, 4, 8}GPU.sh` training scripts in the
TensorFlow `20.06-tf1-py3` NGC container on NVIDIA DGX-1 (8x V100 16GB) GPUs.
| GPUs | Batch size / GPU | Throughput - FP32 [img/s] | Throughput - mixed precision [img/s] | Throughput speedup (FP32 - mixed precision) | Strong scaling - FP32 | Strong scaling - mixed precision |
|:----:|:----------------:|:-------------------------:|:------------------------------------:|:-------------------------------------------:|:---------------------:|:--------------------------------:|
| 1 | 16 | 86.95 | 168.54 | 1.94 | - | - |
| 4 | 4 | 287.01 | 459.07 | 1.60 | 3.30 | 2.72 |
| 8 | 2 | 474.77 | 444.13 | 0.94 | 5.46 | 2.64 |
To achieve these same results, follow the [Quick Start Guide](#quick-start-guide) outlined above.
#### Inference performance results
##### Inference performance: NVIDIA DGX A100 (1x A100 40GB)
Our results were obtained by running the `./scripts/benchmarking/UNet_evalbench{_AMP}.sh`
evaluation scripts in the `20.06-tf1-py3` NGC container on NVIDIA DGX A100 (1x A100 40GB) GPUs.
FP16
| Batch size | Resolution | Throughput Avg [img/s] |
|:----------:|:----------:|:----------------------:|
| 1 | 512x512x1 | 247.83 |
| 8 | 512x512x1 | 761.41 |
| 16 | 512x512x1 | 823.46 |
TF32
| Batch size | Resolution | Throughput Avg [img/s] |
|:----------:|:----------:|:----------------------:|
| 1 | 512x512x1 | 227.97 |
| 8 | 512x512x1 | 419.70 |
| 16 | 512x512x1 | 424.57 |
To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide).
##### Inference performance: NVIDIA DGX-1 (1x V100 16GB)
Our results were obtained by running the `./scripts/benchmarking/UNet_evalbench{_AMP}.sh`
evaluation scripts in the `20.06-tf1-py3` NGC container on NVIDIA DGX-1 (1x V100 16GB) GPUs.
FP16
| Batch size | Resolution | Throughput Avg [img/s] |
|:----------:|:----------:|:----------------------:|
| 1 | 512x512x1 | 157.91 |
| 8 | 512x512x1 | 438.00 |
| 16 | 512x512x1 | 469.27 |
FP32
| Batch size | Resolution | Throughput Avg [img/s] |
|:----------:|:----------:|:----------------------:|
| 1 | 512x512x1 | 159.65 |
| 8 | 512x512x1 | 243.99 |
| 16 | 512x512x1 | 250.23 |
To achieve these same results, follow the [Quick Start Guide](#quick-start-guide) outlined above.
## Release notes
### Changelog
April 2023
* Ceased maintenance of this model
June 2020
* Updated training and inference accuracy with A100 results
* Updated training and inference performance with A100 results
October 2019
* Jupyter notebooks added
March 2019
* Initial release
### Known issues
There are no known issues with this model.
|
TensorFlow2/Recommendation/SIM/sim/layers | layers | item_item_interaction | # Copyright (c) 2022 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
class DotItemItemInteraction(tf.keras.layers.Layer):
@tf.function
def call(self, inputs):
item1, item2 = inputs
return tf.reduce_sum(item1 * item2, axis=-1)
class DINActivationUnit(tf.keras.layers.Layer):
def __init__(self):
super(DINActivationUnit, self).__init__()
self.dense1 = tf.keras.layers.Dense(80, activation="sigmoid")
self.dense2 = tf.keras.layers.Dense(40, activation="sigmoid")
self.linear = tf.keras.layers.Dense(1)
@tf.function
def call(self, inputs):
targets, item = inputs
items = tf.tile(item, [1, targets.shape[1], 1])
combined = tf.concat(
[items, targets, items - targets, items * targets], axis=-1
)
output = self.dense1(combined)
output = self.dense2(output)
output = self.linear(output)
# (B, T, 1) -> (B, T)
output = tf.squeeze(output)
return output
class DIENAttentionUnit(tf.keras.layers.Layer):
def __init__(self, embedding_dim):
"""
NOTE(alexo): this looks very similar to DINActivationUnit.
Besides the input item adaptation, the remaining part stays the same.
"""
super(DIENAttentionUnit, self).__init__()
# Adaptation of input item
self.item_dense = tf.keras.layers.Dense(embedding_dim)
self.item_prelu = tf.keras.layers.PReLU(
alpha_initializer=tf.keras.initializers.Constant(0.1)
)
#
self.dense1 = tf.keras.layers.Dense(80, activation="sigmoid")
self.dense2 = tf.keras.layers.Dense(40, activation="sigmoid")
self.linear = tf.keras.layers.Dense(1)
@tf.function
def call(self, inputs):
targets, item = inputs
item = self.item_dense(item)
item = self.item_prelu(item)
items = tf.tile(item, [1, targets.shape[1], 1])
combined = tf.concat(
[items, targets, items - targets, items * targets], axis=-1
)
output = self.dense1(combined)
output = self.dense2(output)
output = self.linear(output) # unnormalized scores
# (B, T, 1) -> (B, T)
output = tf.squeeze(output)
return output
|
Tools/PyTorch/TimeSeriesPredictionPlatform/models/tft_pyt/triton/deployment_toolkit | deployment_toolkit | args | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import inspect
import logging
from typing import Callable, Dict, Optional, Union
from model_navigator.utils.cli import is_dict_generic, is_list_generic, is_optional_generic
from .core import GET_ARGPARSER_FN_NAME, load_from_file
LOGGER = logging.getLogger(__name__)
def str2bool(v):
if isinstance(v, bool):
return v
if v.lower() in ("yes", "true", "t", "y", "1"):
return True
elif v.lower() in ("no", "false", "f", "n", "0"):
return False
else:
raise argparse.ArgumentTypeError("Boolean value expected.")
def filter_fn_args(args: Union[dict, argparse.Namespace], fn: Callable) -> dict:
signature = inspect.signature(fn)
parameters_names = list(signature.parameters)
if isinstance(args, argparse.Namespace):
args = vars(args)
args = {k: v for k, v in args.items() if k in parameters_names}
return args
def add_args_for_fn_signature(parser, fn) -> argparse.ArgumentParser:
parser.conflict_handler = "resolve"
signature = inspect.signature(fn)
for parameter in signature.parameters.values():
if parameter.name in ["self", "args", "kwargs"]:
continue
argument_kwargs = {}
if parameter.annotation != inspect.Parameter.empty:
is_optional = is_optional_generic(parameter.annotation)
if is_optional:
annotation = parameter.annotation.__args__[0] # Optional[cls] will be changed into Union[cls, None]
else:
annotation = parameter.annotation
is_list = is_list_generic(annotation)
is_dict = is_dict_generic(annotation)
if parameter.annotation == bool:
argument_kwargs["type"] = str2bool
argument_kwargs["choices"] = [0, 1]
elif is_list:
argument_kwargs["type"] = annotation.__args__[0] # List[cls] -> cls
elif is_dict:
raise RuntimeError(
f"Could not prepare argument parser for {parameter.name}: {parameter.annotation} in {fn}"
)
else:
argument_kwargs["type"] = annotation
if parameter.default != inspect.Parameter.empty:
if parameter.annotation == bool:
argument_kwargs["default"] = str2bool(parameter.default)
else:
argument_kwargs["default"] = parameter.default
else:
argument_kwargs["required"] = True
name = parameter.name.replace("_", "-")
LOGGER.debug(f"Adding argument {name} with {argument_kwargs}")
parser.add_argument(f"--{name}", **argument_kwargs)
return parser
class ArgParserGenerator:
def __init__(self, cls_or_fn, module_path: Optional[str] = None):
self._cls_or_fn = cls_or_fn
init_method_name = "__init__"
self._handle = cls_or_fn if inspect.isfunction(cls_or_fn) else getattr(cls_or_fn, init_method_name, None)
input_is_python_file = module_path and module_path.endswith(".py")
self._input_path = module_path if input_is_python_file else None
self._required_fn_name_for_signature_parsing = getattr(
cls_or_fn, "required_fn_name_for_signature_parsing", None
)
def update_argparser(self, parser):
name = self._handle.__name__
group_parser = parser.add_argument_group(name)
add_args_for_fn_signature(group_parser, fn=self._handle)
self._update_argparser(group_parser)
def get_args(self, args: argparse.Namespace):
filtered_args = filter_fn_args(args, fn=self._handle)
tmp_parser = argparse.ArgumentParser(allow_abbrev=False)
self._update_argparser(tmp_parser)
custom_names = [
p.dest.replace("-", "_") for p in tmp_parser._actions if not isinstance(p, argparse._HelpAction)
]
custom_params = {n: getattr(args, n) for n in custom_names}
filtered_args = {**filtered_args, **custom_params}
return filtered_args
def from_args(self, args: Union[argparse.Namespace, Dict]):
args = self.get_args(args)
LOGGER.info(f"Initializing {self._cls_or_fn.__name__}({args})")
return self._cls_or_fn(**args)
def _update_argparser(self, parser):
label = "argparser_update"
if self._input_path:
update_argparser_handle = load_from_file(self._input_path, label=label, target=GET_ARGPARSER_FN_NAME)
if update_argparser_handle:
update_argparser_handle(parser)
elif self._required_fn_name_for_signature_parsing:
fn_handle = load_from_file(
self._input_path, label=label, target=self._required_fn_name_for_signature_parsing
)
if fn_handle:
add_args_for_fn_signature(parser, fn_handle)
|
TensorFlow/Segmentation/VNet/utils | utils | var_storage | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import tensorflow as tf
__all__ = ['model_variable_scope']
def model_variable_scope(name, reuse=False, dtype=tf.float32, debug_mode=False, *args, **kwargs):
"""Returns a variable scope that the model should be created under.
If self.dtype is a castable type, model variable will be created in fp32
then cast to self.dtype before being used.
Returns:
A variable scope for the model.
"""
def _custom_dtype_getter(getter, name, shape=None, dtype=None, trainable=True, regularizer=None, *args, **kwargs):
"""Creates variables in fp32, then casts to fp16 if necessary.
This function is a custom getter. A custom getter is a function with the
same signature as tf.get_variable, except it has an additional getter
parameter. Custom getters can be passed as the `custom_getter` parameter of
tf.variable_scope. Then, tf.get_variable will call the custom getter,
instead of directly getting a variable itself. This can be used to change
the types of variables that are retrieved with tf.get_variable.
The `getter` parameter is the underlying variable getter, that would have
been called if no custom getter was used. Custom getters typically get a
variable with `getter`, then modify it in some way.
This custom getter will create an fp32 variable. If a low precision
(e.g. float16) variable was requested it will then cast the variable to the
requested dtype. The reason we do not directly create variables in low
precision dtypes is that applying small gradients to such variables may
cause the variable not to change.
Args:
getter: The underlying variable getter, that has the same signature as
tf.get_variable and returns a variable.
name: The name of the variable to get.
shape: The shape of the variable to get.
*args: Additional arguments to pass unmodified to getter.
**kwargs: Additional keyword arguments to pass unmodified to getter.
Returns:
A variable which is cast to fp16 if necessary.
"""
storage_dtype = tf.float32 if dtype in [tf.float32, tf.float16] else dtype
variable = getter(
name,
shape,
dtype=storage_dtype,
trainable=trainable,
regularizer=(
regularizer if
(trainable and not any(l_name.lower() in name.lower()
for l_name in ['batchnorm', 'batch_norm'])) else None
),
*args,
**kwargs
)
if dtype != tf.float32:
cast_name = name + '/fp16_cast'
try:
cast_variable = tf.get_default_graph().get_tensor_by_name(cast_name + ':0')
except KeyError:
cast_variable = tf.cast(variable, dtype, name=cast_name)
cast_variable._ref = variable._ref
variable = cast_variable
return variable
return tf.variable_scope(name, reuse=reuse, dtype=dtype, custom_getter=_custom_dtype_getter, *args, **kwargs)
|
TensorFlow2/Recommendation/WideAndDeep/triton/runner/maintainer/docker | docker | container | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import pathlib
import docker
from docker.models.containers import ExecResult
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from ..container import Container
class DockerContainer(Container):
def __init__(self, name: str):
super().__init__(name)
self._container = None
self._docker_client = docker.from_env()
self._docker_api_client = docker.APIClient()
@abc.abstractmethod
def start(self):
"""
Start container
"""
pass
@abc.abstractmethod
def stop(self):
"""
Stop container
"""
@abc.abstractmethod
def run(self, command: str) -> ExecResult:
"""
Run command inside container
Args:
command: command to execute
Returns:
ExecResult
"""
pass
|
TensorFlow/Segmentation/UNet_Industrial/scripts/benchmarking | benchmarking | UNet_trainbench_1GPU | #!/usr/bin/env bash
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script launches UNet training benchmark in FP32/TF32 on 1 GPU using 16 batch size (16 per GPU)
# Usage ./UNet_trainbench_1GPU.sh <path to dataset> <dagm classID (1-10)>
BASEDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
export TF_CPP_MIN_LOG_LEVEL=3
# Cleaning up for benchmark
RESULT_DIR="/tmp"
rm -rf "${RESULT_DIR}"
python "${BASEDIR}/../../main.py" \
--unet_variant='tinyUNet' \
--activation_fn='relu' \
--exec_mode='training_benchmark' \
--iter_unit='batch' \
--num_iter=1500 \
--batch_size=16 \
--warmup_step=500 \
--results_dir="${RESULT_DIR}" \
--data_dir="${1}" \
--dataset_name='DAGM2007' \
--dataset_classID="${2}" \
--data_format='NCHW' \
--use_auto_loss_scaling \
--noamp \
--xla \
--learning_rate=1e-4 \
--learning_rate_decay_factor=0.8 \
--learning_rate_decay_steps=500 \
--rmsprop_decay=0.9 \
--rmsprop_momentum=0.8 \
--loss_fn_name='adaptive_loss' \
--weight_decay=1e-5 \
--weight_init_method='he_uniform' \
--augment_data \
--display_every=250 \
--debug_verbosity=0
|
PyTorch/SpeechSynthesis/HiFiGAN/hifigan | hifigan | logging | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
from collections import OrderedDict
from copy import copy
from itertools import product
from pathlib import Path
import dllogger
import numpy as np
import torch.distributed as dist
import torch
from dllogger import StdOutBackend, JSONStreamBackend, Verbosity
from common import tb_dllogger
from common.tb_dllogger import (stdout_metric_format, stdout_step_format,
unique_log_fpath, TBLogger)
def init_logger(output_dir, log_file, ema_decay=0.0):
local_rank = 0 if not dist.is_initialized() else dist.get_rank()
if local_rank == 0:
Path(output_dir).mkdir(parents=False, exist_ok=True)
log_fpath = log_file or Path(output_dir, 'nvlog.json')
dllogger.init(backends=[
JSONStreamBackend(Verbosity.DEFAULT, log_fpath, append=True),
JSONStreamBackend(Verbosity.DEFAULT, unique_log_fpath(log_fpath)),
StdOutBackend(Verbosity.VERBOSE, step_format=stdout_step_format,
metric_format=stdout_metric_format)
])
init_train_metadata()
else:
dllogger.init(backends=[])
tb_train = ['train']
tb_val = ['val']
tb_ema = [k + '_ema' for k in tb_val] if ema_decay > 0.0 else []
tb_dllogger.tb_loggers = {
s: TBLogger(enabled=(local_rank == 0), log_dir=output_dir, name=s)
for s in tb_train + tb_val + tb_ema}
def init_train_metadata():
dllogger.metadata("train_lrate_gen",
{"name": "g lr", "unit": None, "format": ":>3.2e"})
dllogger.metadata("train_lrate_discrim",
{"name": "d lr", "unit": None, "format": ":>3.2e"})
dllogger.metadata("train_avg_lrate_gen",
{"name": "avg g lr", "unit": None, "format": ":>3.2e"})
dllogger.metadata("train_avg_lrate_discrim",
{"name": "avg d lr", "unit": None, "format": ":>3.2e"})
for id_, pref in [('train', ''), ('train_avg', 'avg train '),
('val', ' avg val '), ('val_ema', ' EMA val ')]:
dllogger.metadata(f"{id_}_loss_gen",
{"name": f"{pref}g loss", "unit": None, "format": ":>6.3f"})
dllogger.metadata(f"{id_}_loss_discrim",
{"name": f"{pref}d loss", "unit": None, "format": ":>6.3f"})
dllogger.metadata(f"{id_}_loss_mel",
{"name": f"{pref}mel loss", "unit": None, "format": ":>6.3f"})
dllogger.metadata(f"{id_}_frames/s",
{"name": None, "unit": "frames/s", "format": ":>8.2f"})
dllogger.metadata(f"{id_}_took",
{"name": "took", "unit": "s", "format": ":>3.2f"})
def init_inference_metadata(batch_size=None):
modalities = [('latency', 's', ':>10.5f'), ('RTF', 'x', ':>10.2f'),
('frames/s', 'frames/s', ':>10.2f'), ('samples/s', 'samples/s', ':>10.2f'),
('letters/s', 'letters/s', ':>10.2f'), ('tokens/s', 'tokens/s', ':>10.2f'),
('mel-loss', None, ':>10.5f')]
if batch_size is not None:
modalities.append((f'RTF@{batch_size}', 'x', ':>10.2f'))
percs = ['', 'avg', '90%', '95%', '99%']
models = ['', 'fastpitch', 'waveglow', 'hifigan']
for perc, model, (mod, unit, fmt) in product(percs, models, modalities):
name = f'{perc} {model} {mod}'.strip().replace(' ', ' ')
dllogger.metadata(name.replace(' ', '_'),
{'name': f'{name: <26}', 'unit': unit, 'format': fmt})
class defaultdict(OrderedDict):
"""A simple, ordered defaultdict."""
def __init__(self, type_, *args, **kwargs):
self.type_ = type_
super().__init__(*args, **kwargs)
def __getitem__(self, key):
if key not in self:
self.__setitem__(key, self.type_())
return super().__getitem__(key)
def __copy__(self):
return defaultdict(self.type_, self)
class Metrics(dict):
def __init__(self, scopes=['train', 'train_avg'],
dll_keys=['loss_gen', 'loss_discrim', 'loss_mel',
'frames/s', 'took', 'lrate_gen', 'lrate_discrim'],
benchmark_epochs=0, cuda=True):
super().__init__()
self.dll_keys = dll_keys
self.metrics = {scope: defaultdict(float) for scope in scopes}
self.metric_counts = {scope: defaultdict(int) for scope in scopes}
self.start_time = {scope: None for scope in scopes}
self.benchmark_epochs = benchmark_epochs
if benchmark_epochs > 0:
self.metrics['train_benchmark'] = defaultdict(list)
self.cuda = cuda
def __setitem__(self, key, val):
if type(val) is dict:
for k, v in val.items():
super().__setitem__(k, v)
else:
super().__setitem__(key, val)
def __getitem__(self, key):
if key not in self:
self.__setitem__(key, 0.0)
return super().__getitem__(key)
def start_accumulating(self, step, start_timer=True, scope='train'):
del step # unused
self.clear()
self.metrics[scope].clear()
self.metric_counts[scope].clear()
if start_timer:
self.start_time[scope] = time.time()
def accumulate(self, scopes=['train', 'train_avg']):
for scope in scopes:
for k, v in self.items():
self.metrics[scope][k] += v
self.metric_counts[scope][k] += 1
self.clear()
def finish_accumulating(self, stop_timer=True, scope='train'):
metr = self.metrics[scope]
counts = self.metric_counts[scope]
for k, v in metr.items():
if type(v) is torch.Tensor:
v = v.item()
metr[k] = v / counts[k]
if stop_timer:
took = time.time() - self.start_time[scope]
if 'frames' in metr:
metr['frames/s'] = metr.pop('frames') * counts['frames'] / took
metr['took'] = took
def start_iter(self, iter, start_timer=True):
self.start_accumulating(iter, start_timer, 'train')
def start_epoch(self, epoch, start_timer=True):
if self.cuda:
torch.cuda.synchronize()
self.start_accumulating(epoch, start_timer, 'train_avg')
def start_val(self, start_timer=True):
if self.cuda:
torch.cuda.synchronize()
self.start_accumulating(None, start_timer, 'val')
def finish_iter(self, stop_timer=True):
self.finish_accumulating(stop_timer, 'train')
def finish_epoch(self, stop_timer=True):
if self.cuda:
torch.cuda.synchronize()
self.finish_accumulating(stop_timer, 'train_avg')
metr = self.metrics['train_benchmark']
for k in ('took', 'frames/s', 'loss_gen', 'loss_discrim', 'loss_mel'):
metr[k].append(self.metrics['train_avg'][k])
if len(metr[k]) > self.benchmark_epochs:
metr[k].pop(0)
def finish_val(self, stop_timer=True):
if self.cuda:
torch.cuda.synchronize()
self.finish_accumulating(stop_timer, 'val')
def get_metrics(self, scope='train', target='dll'):
if scope == 'train_benchmark':
metr = self.metrics[scope]
ret = {'train_' + k: np.mean(v) for k, v in metr.items()}
ret['benchmark_epochs_num'] = len(list(metr.values())[0])
return ret
ret = copy(self.metrics[scope])
if scope == 'train':
ret.update(self)
if target == 'dll':
ret = {f'{scope}_{k}': v
for k, v in ret.items() if k in self.dll_keys}
elif target == 'tb':
# Rename keys so they would group nicely inside TensorBoard
def split_key(k):
pos = k.rfind('_')
return k[:pos] + '/' + k[pos+1:] if pos >= 0 else k
ret = {split_key(k): v for k, v in ret.items()}
return ret
|
PyTorch/SpeechRecognition/Jasper/triton/model_repo_configs/fp16/feature-extractor-ts-trace | feature-extractor-ts-trace | config | name: "feature-extractor-ts-trace"
platform: "pytorch_libtorch"
default_model_filename: "model.pt"
max_batch_size: 64
input [
{
name: "input__0"
data_type: TYPE_FP16
dims: [ -1 ]
},
{
name: "input__1"
data_type: TYPE_INT32
dims: [ 1 ]
reshape { shape: [] }
}
]
output [
{
name: "output__0"
data_type: TYPE_FP16
dims: [64, -1]
},
{
name: "output__1"
data_type: TYPE_INT32
dims: [ 1 ]
reshape: { shape: [] }
}
]
|
PyTorch/LanguageModeling/BERT/triton/runner | runner | logger | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pathlib
import coloredlogs
class Logger(logging.Logger):
def __init__(self, name, level=logging.NOTSET):
super().__init__(name, level=level)
self._file_path = None
def initialize(self, file_path: pathlib.Path):
self._file_path = file_path
def write(self, log: str):
if not self._file_path:
return
with open(self._file_path, "+a") as file:
file.write(log)
LOGGER = Logger("runner")
log_format = "%(asctime)s %(levelname)s %(name)s %(message)s"
logging.basicConfig(format=log_format)
coloredlogs.install(
level=logging.INFO,
fmt=log_format,
logger=LOGGER,
field_styles={
"asctime": {"color": "green"},
"hostname": {"color": "magenta"},
"levelname": {"bold": True, "color": "blue"},
"name": {"color": "blue"},
"programname": {"color": "cyan"},
"username": {"color": "yellow"},
},
reconfigure=True,
)
|
PyTorch/LanguageModeling/BART | BART | training_base | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import argparse
import logging
import os
from pathlib import Path
from typing import Any, Dict
import time
from bart.configuration.configuration_bart import BartConfig
from bart.tokenization.tokenization_bart import BartTokenizer
from bart.modeling.modeling_bart import *
from utils.optimization import (
AdamW,
Adafactor,
get_cosine_schedule_with_warmup,
get_cosine_with_hard_restarts_schedule_with_warmup,
get_linear_schedule_with_warmup,
get_polynomial_decay_schedule_with_warmup,
)
from utils.gpu_affinity import set_affinity
from utils.distributed_utils import get_rank, get_device_count, get_world_size
from utils.utils import get_readable_time, Mean
from apex.optimizers import FusedAdam, FusedMixedPrecisionLamb
import dllogger
logger = logging.getLogger(__name__)
MODEL_MODES = {
"question-answering": BartForQuestionAnswering,
"pretraining": PretrainedBartModel,
"token-classification": BartForSequenceClassification,
"language-modeling": BartModel,
"summarization": BartForConditionalGeneration,
"translation": BartForConditionalGeneration,
}
# update this and the import above to support new schedulers from transformers.optimization
arg_to_scheduler = {
"linear": get_linear_schedule_with_warmup,
"cosine": get_cosine_schedule_with_warmup,
"cosine_w_restarts": get_cosine_with_hard_restarts_schedule_with_warmup,
"polynomial": get_polynomial_decay_schedule_with_warmup,
# '': get_constant_schedule, # not supported for now
# '': get_constant_schedule_with_warmup, # not supported for now
}
arg_to_scheduler_choices = sorted(arg_to_scheduler.keys())
arg_to_scheduler_metavar = "{" + ", ".join(arg_to_scheduler_choices) + "}"
class BaseTransformer():
def __init__(
self,
hparams: argparse.Namespace,
num_labels=None,
mode="base",
config=None,
tokenizer=None,
model=None,
**config_kwargs
):
"""Initialize a model, tokenizer and config."""
super().__init__()
self.step_count = 0
self.hparams = hparams
self.output_dir = Path(self.hparams.output_dir)
cache_dir = self.hparams.cache_dir if self.hparams.cache_dir else None
if config is None:
self.config = AutoConfig.from_pretrained(
self.hparams.config_name if self.hparams.config_name else self.hparams.model_name_or_path,
**({"num_labels": num_labels} if num_labels is not None else {}),
cache_dir=cache_dir,
**config_kwargs,
)
else:
self.config: BartConfig = config
extra_model_params = ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout")
for p in extra_model_params:
if getattr(self.hparams, p, None):
assert hasattr(self.config, p), f"model config doesn't have a `{p}` attribute"
setattr(self.config, p, getattr(self.hparams, p))
if tokenizer is None:
self.tokenizer = AutoTokenizer.from_pretrained(
self.hparams.tokenizer_name if self.hparams.tokenizer_name else self.hparams.model_name_or_path,
cache_dir=cache_dir,
)
else:
self.tokenizer: BartTokenizer = tokenizer
# self.model_type = MODEL_MODES[mode]
if model is None:
self.model = self.model_type.from_pretrained(
self.hparams.model_name_or_path,
from_tf=bool(".ckpt" in self.hparams.model_name_or_path),
config=self.config,
cache_dir=cache_dir,
)
else:
self.model = model
def __call__(self, input_ids, **kwargs):
return self.forward(input_ids, **kwargs)
def load_hf_checkpoint(self, *args, **kwargs):
self.model = self.model_type.from_pretrained(*args, **kwargs)
def get_lr_scheduler(self):
get_schedule_func = arg_to_scheduler[self.hparams.lr_scheduler]
scheduler = get_schedule_func(
self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=self.total_steps
)
scheduler = {"scheduler": scheduler, "interval": "step", "frequency": 1}
return scheduler
def configure_optimizers(self):
"""Prepare optimizer and schedule (linear warmup and decay)"""
model = self.model
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": self.hparams.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
if self.hparams.lamb:
optimizer_reduced_precision_type = self.config.dtype if self.hparams.allreduce_post_accumulation_half_precision else None
optimizer = FusedMixedPrecisionLamb(
optimizer_grouped_parameters,
lr=self.hparams.learning_rate,
eps=self.hparams.adam_epsilon,
max_grad_norm=self.hparams.gradient_clip_val,
reduced_precision_dtype=optimizer_reduced_precision_type)
elif self.hparams.allreduce_post_accumulation_half_precision:
raise ValueError("--allreduce_post_accumulation_half_precision is only supported on LAMB optimizer")
elif self.hparams.adafactor:
optimizer = Adafactor(
optimizer_grouped_parameters, lr=self.hparams.learning_rate, scale_parameter=False, relative_step=False
)
else:
optimizer = FusedAdam(
optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon)
self.opt = optimizer
scheduler = self.get_lr_scheduler()
return [optimizer], [scheduler]
def test_step(self, batch, batch_nb):
return self.validation_step(batch, batch_nb)
def test_epoch_end(self, outputs):
return self.validation_end(outputs)
@property
def total_steps(self) -> int:
"""The number of total training steps that will be run. Used for lr scheduler purposes."""
if self.hparams.max_steps:
return self.hparams.max_steps
else:
assert self.hparams.max_epochs is not None
num_devices = max(1, self.hparams.gpus * self.hparams.num_nodes) # TODO: consider num_tpu_cores
effective_batch_size = self.hparams.train_batch_size * self.hparams.accumulate_grad_batches * num_devices
dataset_size = len(self.train_loader.dataset)
return (dataset_size / effective_batch_size) * self.hparams.max_epochs
def get_dataloader(self, type_path, batch_size, shuffle=False):
raise NotImplementedError("You must implement this for your task")
def train_dataloader(self):
return self.get_dataloader("train", self.hparams.train_batch_size, shuffle=True)
def val_dataloader(self):
return self.get_dataloader("dev", self.hparams.eval_batch_size, shuffle=False)
def test_dataloader(self):
return self.get_dataloader("test", self.hparams.eval_batch_size, shuffle=False)
def _feature_file(self, mode):
return os.path.join(
self.hparams.data_dir,
"cached_{}_{}_{}".format(
mode,
list(filter(None, self.hparams.model_name_or_path.split("/"))).pop(),
str(self.hparams.max_seq_length),
),
)
def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
save_path = self.output_dir.joinpath("best_tfmr")
self.model.config.save_step = self.step_count
self.model.save_pretrained(save_path)
self.tokenizer.save_pretrained(save_path)
@staticmethod
def add_model_specific_args(parser, root_dir):
parser.add_argument("--config_path", default="config.json", type=str, help="Config File for Bart model")
parser.add_argument(
"--cache_dir",
default="",
type=str,
help="Where do you want to store the pre-trained models downloaded from s3",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
help="""Path/URL of the checkpoint from which training is resumed. If there is no checkpoint file at
the path, start from scratch. If resuming from mid-epoch checkpoint, training will start from
the beginning of the next epoch.""",
)
parser.add_argument(
"--encoder_layerdrop",
type=float,
help="Encoder layer dropout probability (Optional). Goes into model.config",
)
parser.add_argument(
"--decoder_layerdrop",
type=float,
help="Decoder layer dropout probability (Optional). Goes into model.config",
)
parser.add_argument(
"--dropout",
type=float,
help="Dropout probability (Optional). Goes into model.config",
)
parser.add_argument(
"--attention_dropout",
type=float,
help="Attention dropout probability (Optional). Goes into model.config",
)
parser.add_argument("--learning_rate", default=5e-5, type=float, help="The initial learning rate for Adam.")
parser.add_argument(
"--lr_scheduler",
default="linear",
choices=arg_to_scheduler_choices,
metavar=arg_to_scheduler_metavar,
type=str,
help="Learning rate scheduler",
)
parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight decay if we apply some.")
parser.add_argument("--gradient_clip_val", default=0.5, type=float, help="The value at which to clip gradients.")
parser.add_argument("--adam_epsilon", default=1e-8, type=float, help="Epsilon for Adam optimizer.")
parser.add_argument("--max_steps", default=10, type=int, help="Stop training after this number of steps.")
parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup over warmup_steps.")
parser.add_argument("--num_workers", default=4, type=int, help="kwarg passed to DataLoader")
parser.add_argument("--min_num_train_epochs", dest="min_epochs", default=0, type=int)
parser.add_argument("--train_batch_size", default=32, type=int)
parser.add_argument("--eval_batch_size", default=32, type=int)
parser.add_argument("--adafactor", action="store_true")
parser.add_argument("--lamb", action="store_true")
parser.add_argument('--affinity', type=str,
default='socket_unique_interleaved',
choices=['socket', 'single', 'single_unique',
'socket_unique_interleaved',
'socket_unique_continuous',
'disabled'],
help='type of CPU affinity')
parser.add_argument('--allreduce_post_accumulation_half_precision',
default=False,
action='store_true',
help="Whether to do fp16/bf16 allreduce post accumulation.")
def add_generic_args(parser, root_dir) -> None:
parser.add_argument(
"--output_dir",
default=None,
type=str,
required=True,
help="The output directory where the model predictions and checkpoints will be written.",
)
parser.add_argument(
"--fp16",
action="store_true",
help="Whether to use 16-bit (mixed) precision instead of 32-bit",
)
parser.add_argument(
"--bf16",
action="store_true",
help="Whether to use BFloat 16 mixed precision instead of 32-bit",
)
parser.add_argument("--n_tpu_cores", dest="tpu_cores", type=int)
parser.add_argument("--max_grad_norm", dest="gradient_clip_val", default=1.0, type=float, help="Max gradient norm")
parser.add_argument("--do_train", action="store_true", help="Whether to run training.")
parser.add_argument("--do_predict", action="store_true", help="Whether to run predictions on the test set.")
parser.add_argument(
"--gradient_accumulation_steps",
dest="accumulate_grad_batches",
type=int,
default=1,
help="Number of updates steps to accumulate before performing a backward/update pass.",
)
parser.add_argument("--seed", type=int, default=42, help="random seed for initialization")
parser.add_argument(
"--data_dir",
default=None,
type=str,
required=True,
help="The input data dir. Should contain the training files for the CoNLL-2003 NER task.",
)
parser.add_argument("--log_freq", type=int, default=100, help="Log every X updates steps.")
parser.add_argument("--save_checkpoint_steps", type=int, default=100, required=False, help="How many checkpoints to save")
parser.add_argument(
"--profile",
action="store_true",
)
parser.add_argument("--pre_ln",
default=True,
action='store_true',
help="Whether to use Pre-LN architecture."
)
def save_checkpoint(args, checkpoints, model, optimizer, scaler, step):
output_filename = os.path.join(args.output_dir, "_step{}.ckpt".format(step))
if get_rank() == 0:
model_to_save = model
while(hasattr(model_to_save, "module")):
model_to_save = model_to_save.module
torch.save({"model": model_to_save.state_dict(),
"optimizer": optimizer.state_dict(),
"scaler": scaler.state_dict()},
output_filename)
def train_one_step(args, trainer, optimizer, scheduler, features, local_step, scaler):
if args.fp16:
cast_dtype = torch.float16
elif args.bf16:
cast_dtype = torch.bfloat16
else:
cast_dtype = None
with torch.cuda.amp.autocast(dtype=cast_dtype, enabled=(args.fp16 or args.bf16) and not args.allreduce_post_accumulation_half_precision):
result = trainer.training_step(features)
total_loss = result["loss"]
loss = total_loss
if args.accumulate_grad_batches > 1:
total_loss = total_loss / args.accumulate_grad_batches
if local_step % args.accumulate_grad_batches == 0:
scaler.scale(total_loss).backward()
if not args.lamb:
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(trainer.model.parameters(), args.gradient_clip_val)
scheduler.step() # Update learning rate schedule
scaler.step(optimizer)
optimizer.zero_grad()
skip_optimizer_step = scaler._found_inf_per_device(optimizer)[args.device] if scaler.is_enabled() else 0
result["log"]["skip_optimizer_step"] = int(skip_optimizer_step)
scaler.update()
else:
with trainer.model.no_sync():
scaler.scale(total_loss).backward()
return loss, result["log"]
def generic_train(
args,
trainer,
optimizer,
scheduler,
scaler,
checkpoints,
step,
**extra_train_kwargs
):
device = args.device
# Set up dataset
dataloader = trainer.train_dataloader()
# Set up metrics
metrics = {}
metrics["avg_train_throughput"] = Mean(name="train_perf")
metrics["total_loss"] = Mean(name="total_loss")
trainer.model.train()
local_step = 0
train_start, start_step = time.time(), step - 1
resume_step = step
skipped_optimizer_steps = 0
if get_rank() == 0:
dllogger.metadata("avg_train_time", {"unit": "s"})
dllogger.metadata("avg_train_throughput", {"unit": "seq/s"})
while step <= args.max_steps:
for batch in dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
local_step += 1
torch.cuda.synchronize()
iter_start = time.time()
total_loss, logs = train_one_step(args, trainer, optimizer, scheduler, batch, local_step, scaler)
torch.cuda.synchronize()
train_perf = logs["bs"] * get_world_size() / (time.time() - iter_start)
metrics["total_loss"].update(total_loss)
metrics["avg_train_throughput"].update(train_perf)
if local_step % args.accumulate_grad_batches == 0:
static_optimizer_step = local_step // args.accumulate_grad_batches
skipped_optimizer_steps += logs["skip_optimizer_step"]
opt_step = static_optimizer_step - skipped_optimizer_steps + resume_step
if args.log_freq > 0 and step != opt_step and (
step % args.log_freq == 0 or step == args.max_steps):
log_info_dict = {k:v.result() for k, v in metrics.items()}
if get_rank() == 0:
dllogger.log(step=(step,), data=log_info_dict, verbosity=0)
print(
'Step:{step:6d}, Loss:{total_loss:10.6f}, Perf:{train_perf:4.2f}, Loss Scaler: {loss_scale}, '
'Elapsed: {elapsed}, ETA: {eta}'.format(
step=step,
total_loss=total_loss,
train_perf=train_perf,
loss_scale=scaler.get_scale(),
elapsed=get_readable_time(time.time() - train_start),
eta=get_readable_time(
(time.time() - train_start) / (step - start_step) * (args.max_steps - step))),
flush=True
)
if step == args.max_steps:
final_metrics = {}
log_info_dict['avg_train_time'] = time.time() - train_start
for key, v in log_info_dict.items():
val = torch.tensor(v, device=device)
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= get_world_size()
final_metrics[key] = val.item()
if get_rank() == 0:
dllogger.log(step=(), data=log_info_dict, verbosity=0)
logger.info('<FINAL STEP METRICS> Step:{step:6d}, Loss:{total_loss:10.6f}, Perf:{avg_train_throughput:4.2f}, Train time:{avg_train_time}s'.format(
step=step, **final_metrics))
for key, m in metrics.items():
if key != 'avg_train_throughput':
m.reset()
if get_rank() == 0:
dllogger.flush()
if args.save_checkpoint_steps > 0 and step != opt_step and \
((step % args.save_checkpoint_steps == 0 and step > 0) or step == args.max_steps):
save_checkpoint(args, checkpoints, trainer.model, optimizer, scaler, step)
logger.info(f" ** Saved model checkpoint for step {step}")
step = opt_step
if step > args.max_steps:
break
def generic_test(
args,
trainer
):
device = args.device
# Set up dataset
dataloader = trainer.test_dataloader()
metrics = {k: Mean(name=k) for k in trainer.loss_names + trainer.metric_names}
for batch in dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
result_metric = trainer.test_step(batch)
        for k, v in result_metric.items():
metrics[k].update(v)
log_info_dict = {k:v.result() for k, v in metrics.items()}
final_metrics = {}
for key, v in log_info_dict.items():
val = torch.tensor(v, device=device)
torch.distributed.all_reduce(val, op=torch.distributed.ReduceOp.SUM)
val /= get_world_size()
final_metrics[key] = val.item()
if get_rank() == 0:
dllogger.log(step=(), data=log_info_dict, verbosity=0)
print(final_metrics)
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/waveglow | waveglow | waveGlowInstance | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef TT2I_WAVEGLOWINSTANCE_H
#define TT2I_WAVEGLOWINSTANCE_H
#include "timedObject.h"
#include "trtPtr.h"
#include "waveGlowStreamingInstance.h"
namespace nvinfer1
{
class ICudaEngine;
}
namespace tts
{
class WaveGlowInstance : public TimedObject
{
public:
static constexpr const char* const ENGINE_NAME = "waveglow_chunk160_fp16";
/**
* @brief Create a new WaveGlowInstance from a deserialied engine.
*
* @param engine The deserialized engine.
*/
WaveGlowInstance(TRTPtr<nvinfer1::ICudaEngine> engine);
// disable copying
WaveGlowInstance(const WaveGlowInstance& other) = delete;
WaveGlowInstance& operator=(const WaveGlowInstance& other) = delete;
/**
* @brief Perform inference on a set of mel-scale spectrograms.
*
* @param batchSize The number of items in the batch.
   * @param mels The mel-scale spectrograms in batch, sequence, channel
* order.
* @param melSpacing The offset from the start of subsequent spectrograms.
   * @param numMels The number of spectrograms per batch item (must be less
   * than or equal to melSpacing).
* @param numMaxSamples The maximum number of samples to generate per batch
* item.
* @param samples The location to output samples to (each will start at item
* ID x numMaxSamples).
* @param numSamples The number of samples for each output.
*/
void infer(const int batchSize, const float* mels, const int melSpacing, const int* numMels,
const int numMaxSamples, float* samples, int* numSamples);
/**
* @brief Get the number of samples that will be generated per mel-scale
* spectrogram.
*
* @return The number of samples.
*/
int getNumberOfSamplesPerFrame() const;
/**
* @brief Get the frequency of the generated audio.
*
* @return The frequency.
*/
int getFrequency() const;
/**
* @brief Get the maximum batch size this object can perform inference with.
*
* @return The maximum batch size.
*/
int getMaxBatchSize() const;
private:
WaveGlowStreamingInstance mStreamingInstance;
int mFrequency;
int mNumOverlap;
int mIndependentChunkSize;
int mIndependentChunkSampleSize;
std::vector<int> mNumChunkMels;
std::vector<int> mNumChunkSamples;
CudaMemory<float> mInputFrame;
CudaMemory<float> mOutputFrame;
};
} // namespace tts
#endif
|
TensorFlow/Detection/SSD/examples | examples | SSD320_FP32_4GPU_BENCHMARK | # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
CKPT_DIR=${1:-"/results/SSD320_FP32_4GPU"}
PIPELINE_CONFIG_PATH=${2:-"/workdir/models/research/configs"}"/ssd320_bench.config"
GPUS=4
TENSOR_OPS=0
export TF_ENABLE_CUBLAS_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
export TF_ENABLE_CUDNN_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
export TF_ENABLE_CUDNN_RNN_TENSOR_OP_MATH_FP32=${TENSOR_OPS}
TRAIN_LOG=$(mpirun --allow-run-as-root \
-np $GPUS \
-H localhost:$GPUS \
-bind-to none \
-map-by slot \
-x NCCL_DEBUG=INFO \
-x LD_LIBRARY_PATH \
-x PATH \
-mca pml ob1 \
-mca btl ^openib \
python -u ./object_detection/model_main.py \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--model_dir=${CKPT_DIR} \
        --alsologtostderr \
"${@:3}" 2>&1)
PERF=$(echo "$TRAIN_LOG" | sed -n 's|.*global_step/sec: \(\S\+\).*|\1|p' | python -c "import sys; x = sys.stdin.readlines(); x = [float(a) for a in x[int(len(x)*3/4):]]; print(32*$GPUS*sum(x)/len(x), 'img/s')")
mkdir -p $CKPT_DIR
echo "$GPUS GPUs single precision training performance: $PERF" | tee $CKPT_DIR/train_log
echo "$TRAIN_LOG" >> $CKPT_DIR/train_log
|
PyTorch/Segmentation/nnUNet/data_preprocessing | data_preprocessing | configs | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
task = {
"01": "Task01_BrainTumour",
"02": "Task02_Heart",
"03": "Task03_Liver",
"04": "Task04_Hippocampus",
"05": "Task05_Prostate",
"06": "Task06_Lung",
"07": "Task07_Pancreas",
"08": "Task08_HepaticVessel",
"09": "Task09_Spleen",
"10": "Task10_Colon",
"11": "BraTS2021_train",
"12": "BraTS2021_val",
}
patch_size = {
"01_3d": [128, 128, 128],
"02_3d": [80, 192, 160],
"03_3d": [128, 128, 128],
"04_3d": [40, 56, 40],
"05_3d": [20, 320, 256],
"06_3d": [80, 192, 160],
"07_3d": [40, 224, 224],
"08_3d": [64, 192, 192],
"09_3d": [64, 192, 160],
"10_3d": [56, 192, 160],
"11_3d": [128, 128, 128],
"12_3d": [128, 128, 128],
"01_2d": [192, 160],
"02_2d": [320, 256],
"03_2d": [512, 512],
"04_2d": [56, 40],
"05_2d": [320, 320],
"06_2d": [512, 512],
"07_2d": [512, 512],
"08_2d": [512, 512],
"09_2d": [512, 512],
"10_2d": [512, 512],
}
spacings = {
"01_3d": [1.0, 1.0, 1.0],
"02_3d": [1.37, 1.25, 1.25],
"03_3d": [1, 0.7676, 0.7676],
"04_3d": [1.0, 1.0, 1.0],
"05_3d": [3.6, 0.62, 0.62],
"06_3d": [1.24, 0.79, 0.79],
"07_3d": [2.5, 0.8, 0.8],
"08_3d": [1.5, 0.8, 0.8],
"09_3d": [1.6, 0.79, 0.79],
"10_3d": [3, 0.78, 0.78],
"11_3d": [1.0, 1.0, 1.0],
"12_3d": [1.0, 1.0, 1.0],
"01_2d": [1.0, 1.0],
"02_2d": [1.25, 1.25],
"03_2d": [0.7676, 0.7676],
"04_2d": [1.0, 1.0],
"05_2d": [0.62, 0.62],
"06_2d": [0.79, 0.79],
"07_2d": [0.8, 0.8],
"08_2d": [0.8, 0.8],
"09_2d": [0.79, 0.79],
"10_2d": [0.78, 0.78],
}
ct_min = {
"03": -17,
"06": -1024,
"07": -96,
"08": -3,
"09": -41,
"10": -30,
}
ct_max = {
"03": 201,
"06": 325,
"07": 215,
"08": 243,
"09": 176,
"10": 165.82,
}
ct_mean = {"03": 99.4, "06": -158.58, "07": 77.9, "08": 104.37, "09": 99.29, "10": 62.18}
ct_std = {"03": 39.36, "06": 324.7, "07": 75.4, "08": 52.62, "09": 39.47, "10": 32.65}
|
PyTorch/Detection/Efficientdet/effdet/csrc/nms | nms | vision | // Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "nms.h"
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("nms", &nms, "non-maximum suppression");
}
|
CUDA-Optimized/FastSpeech/scripts/docker | docker | interactive | #!/bin/bash
nvidia-docker run -it --rm --shm-size=16g -v $PWD:/workspace/fastspeech/ fastspeech bash
# nvidia-docker run -it -u $(id -u):$(id -g) --rm --shm-size=16g -v $PWD:/workspace/fastspeech/ fastspeech bash
# --ulimit memlock=-1 --ulimit stack=67108864 --ipc=host
|
PyTorch/Segmentation/nnUNet/triton | triton | run_inference_on_fw | #!/usr/bin/env python3
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
To infer the model on framework runtime, you can use `run_inference_on_fw.py` script.
It infers data obtained from pointed data loader locally and saves received data into npz files.
Those files are stored in directory pointed by `--output-dir` argument.
Example call:
```shell script
python ./triton/run_inference_on_fw.py \
--input-path /models/exported/model.onnx \
--input-type onnx \
--dataloader triton/dataloader.py \
--data-dir /data/imagenet \
--batch-size 32 \
--output-dir /results/dump_local \
--dump-labels
```
"""
import argparse
import logging
import os
from pathlib import Path
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
os.environ["TF_ENABLE_DEPRECATION_WARNINGS"] = "0"
from tqdm import tqdm
# method from PEP-366 to support relative import in executed modules
if __package__ is None:
__package__ = Path(__file__).parent.name
from .deployment_toolkit.args import ArgParserGenerator
from .deployment_toolkit.core import DATALOADER_FN_NAME, BaseLoader, BaseRunner, Format, load_from_file
from .deployment_toolkit.dump import NpzWriter
from .deployment_toolkit.extensions import loaders, runners
LOGGER = logging.getLogger("run_inference_on_fw")
def _verify_and_format_dump(args, ids, x, y_pred, y_real):
data = {"outputs": y_pred, "ids": {"ids": ids}}
if args.dump_inputs:
data["inputs"] = x
if args.dump_labels:
if not y_real:
raise ValueError(
"Found empty label values. Please provide labels in dataloader_fn or do not use --dump-labels argument"
)
data["labels"] = y_real
return data
def _parse_and_validate_args():
supported_inputs = set(runners.supported_extensions) & set(loaders.supported_extensions)
parser = argparse.ArgumentParser(description="Dump local inference output of given model", allow_abbrev=False)
parser.add_argument("--input-path", help="Path to input model", required=True)
parser.add_argument("--input-type", help="Input model type", choices=supported_inputs, required=True)
parser.add_argument("--dataloader", help="Path to python file containing dataloader.", required=True)
parser.add_argument("--output-dir", help="Path to dir where output files will be stored", required=True)
parser.add_argument("--dump-labels", help="Dump labels to output dir", action="store_true", default=False)
parser.add_argument("--dump-inputs", help="Dump inputs to output dir", action="store_true", default=False)
parser.add_argument("-v", "--verbose", help="Verbose logs", action="store_true", default=False)
args, *_ = parser.parse_known_args()
get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME)
ArgParserGenerator(get_dataloader_fn).update_argparser(parser)
Loader: BaseLoader = loaders.get(args.input_type)
ArgParserGenerator(Loader, module_path=args.input_path).update_argparser(parser)
Runner: BaseRunner = runners.get(args.input_type)
ArgParserGenerator(Runner).update_argparser(parser)
args = parser.parse_args()
types_requiring_io_params = []
if args.input_type in types_requiring_io_params and not all(p for p in [args.inputs, args.outputs]):
parser.error(f"For {args.input_type} input provide --inputs and --outputs parameters")
return args
def main():
args = _parse_and_validate_args()
log_level = logging.INFO if not args.verbose else logging.DEBUG
log_format = "%(asctime)s %(levelname)s %(name)s %(message)s"
logging.basicConfig(level=log_level, format=log_format)
LOGGER.info(f"args:")
for key, value in vars(args).items():
LOGGER.info(f" {key} = {value}")
Loader: BaseLoader = loaders.get(args.input_type)
Runner: BaseRunner = runners.get(args.input_type)
loader = ArgParserGenerator(Loader, module_path=args.input_path).from_args(args)
runner = ArgParserGenerator(Runner).from_args(args)
LOGGER.info(f"Loading {args.input_path}")
model = loader.load(args.input_path)
with runner.init_inference(model=model) as runner_session, NpzWriter(args.output_dir) as writer:
get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME)
dataloader_fn = ArgParserGenerator(get_dataloader_fn).from_args(args)
LOGGER.info(f"Data loader initialized; Running inference")
for ids, x, y_real in tqdm(dataloader_fn(), unit="batch", mininterval=10):
y_pred = runner_session(x)
data = _verify_and_format_dump(args, ids=ids, x=x, y_pred=y_pred, y_real=y_real)
writer.write(**data)
LOGGER.info(f"Inference finished")
if __name__ == "__main__":
main()
|
PyTorch/Segmentation/MaskRCNN/pytorch/maskrcnn_benchmark/layers | layers | batch_norm | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
import torch
from torch import nn
class FrozenBatchNorm2d(nn.Module):
"""
BatchNorm2d where the batch statistics and the affine parameters
are fixed
"""
def __init__(self, n):
super(FrozenBatchNorm2d, self).__init__()
self.register_buffer("weight", torch.ones(n))
self.register_buffer("bias", torch.zeros(n))
self.register_buffer("running_mean", torch.zeros(n))
self.register_buffer("running_var", torch.ones(n))
def forward(self, x):
# Cast all fixed parameters to half() if necessary
if x.dtype == torch.float16:
self.weight = self.weight.half()
self.bias = self.bias.half()
self.running_mean = self.running_mean.half()
self.running_var = self.running_var.half()
scale = self.weight * self.running_var.rsqrt()
bias = self.bias - self.running_mean * scale
scale = scale.reshape(1, -1, 1, 1)
bias = bias.reshape(1, -1, 1, 1)
return x * scale + bias
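# Example usage (illustrative only): a drop-in replacement for nn.BatchNorm2d with
# frozen statistics, e.g. inside a pretrained backbone:
#   bn = FrozenBatchNorm2d(256)
#   out = bn(torch.randn(4, 256, 14, 14))  # output has the same shape as the input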
|
PyTorch/SpeechRecognition/Jasper/triton | triton | README | # Deploying the Jasper Inference model using Triton Inference Server
This subfolder of the Jasper for PyTorch repository contains scripts for deployment of high-performance inference on NVIDIA Triton Inference Server as well as detailed performance analysis. It offers different options for the inference model pipeline.
## Table Of Contents
- [Solution overview](#solution-overview)
- [Inference Pipeline in Triton Inference Server](#inference-pipeline-in-triton-inference-server)
- [Setup](#setup)
- [Quick Start Guide](#quick-start-guide)
- [Advanced](#advanced)
* [Scripts and sample code](#scripts-and-sample-code)
- [Performance](#performance)
* [Inference Benchmarking in Triton Inference Server](#inference-benchmarking-in-triton-inference-server)
* [Results](#results)
* [Performance Analysis for Triton Inference Server: NVIDIA T4](#performance-analysis-for-triton-inference-server-nvidia-t4)
* [Maximum batch size](#maximum-batch-size)
        * [Batching techniques: Static Batching](#batching-techniques-static-batching)
        * [Batching techniques: Dynamic Batching](#batching-techniques-dynamic-batching)
* [TensorRT, ONNXRT-CUDA, and PyTorch JIT comparisons](#tensorrt-onnxrt-cuda-and-pytorch-jit-comparisons)
- [Release Notes](#release-notes)
    * [Changelog](#changelog)
* [Known issues](#known-issues)
## Solution Overview
The [NVIDIA Triton Inference Server](https://github.com/NVIDIA/triton-inference-server) provides a datacenter and cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or gRPC endpoint, allowing remote clients to request inferencing for any number of GPU or CPU models being managed by the server.
This folder contains detailed performance analysis as well as scripts to run Jasper inference using Triton Inference Server.
A typical Triton Inference Server pipeline can be broken down into the following steps:
1. The client serializes the inference request into a message and sends it to the server (Client Send).
2. The message travels over the network from the client to the server (Network).
3. The message arrives at the server, and is deserialized (Server Receive).
4. The request is placed on the queue (Server Queue).
5. The request is removed from the queue and computed (Server Compute).
6. The completed request is serialized in a message and sent back to the client (Server Send).
7. The completed message then travels over the network from the server to the client (Network).
8. The completed message is deserialized by the client and processed as a completed inference request (Client Receive).
Generally, for local clients, steps 1-4 and 6-8 will only occupy a small fraction of time, compared to step 5. As backend deep learning systems like Jasper are rarely exposed directly to end users, but instead only interface with local front-end servers, for the sake of Jasper we can consider that all clients are local.
In this section, we will go over how to launch both the Triton Inference Server and the client and get the best performance solution that fits your specific application needs.
More information on how to perform inference using NVIDIA Triton Inference Server can be found in [triton/README.md](https://github.com/triton-inference-server/server/blob/master/README.md).
## Inference Pipeline in Triton Inference Server
The Jasper model pipeline consists of 3 components, where each part can be customized to be a different backend:
**Data preprocessor**
The data processor transforms an input raw audio file into a spectrogram. By default the pipeline uses mel filter banks as spectrogram features. This part does not have any learnable weights.
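As a rough illustration of this step, a log-mel spectrogram can be computed with `torchaudio`; the parameters below are assumptions for illustration only and do not necessarily match the exact preprocessor settings used in this repository:
```python
# Hedged sketch of mel filter bank feature extraction.
# All numeric parameters below are assumptions, not the repository's exact settings.
import torch
import torchaudio
def log_mel_features(waveform: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate,
        n_fft=512,        # assumed FFT size
        win_length=320,   # assumed 20 ms window at 16 kHz
        hop_length=160,   # assumed 10 ms hop at 16 kHz
        n_mels=64,        # assumed number of mel filter banks
    )(waveform)
    return torch.log(mel.clamp(min=1e-9))  # log compression with a small floor
# Example: features = log_mel_features(torch.randn(1, 16000))  # -> (1, 64, frames)
```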
**Acoustic model**
The acoustic model takes in the spectrogram and outputs a probability over a list of characters. This part is the most compute intensive, taking more than 90% of the entire end-to-end pipeline. The acoustic model is the only component with learnable parameters and what differentiates Jasper from other end-to-end neural speech recognition models. In the original paper, the acoustic model contains a masking operation for training (More details in [Jasper PyTorch README](../README.md)). We do not use masking for inference.
**Greedy decoder**
The decoder takes the probabilities over the list of characters and outputs the final transcription. Greedy decoding is a fast and simple way of doing this by always choosing the character with the maximum probability.
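A minimal sketch of such a greedy (argmax) decoder is shown below; the character set, blank index, and repeat-collapsing behavior are assumptions and may differ from the decoder shipped with this repository:
```python
# Hedged sketch of greedy CTC-style decoding for a single utterance.
import torch
def greedy_decode(log_probs: torch.Tensor, labels: str, blank_id: int) -> str:
    # log_probs: (time, num_characters) scores; labels: index -> character mapping.
    best_path = log_probs.argmax(dim=-1).tolist()
    decoded, prev = [], None
    for idx in best_path:
        if idx != prev and idx != blank_id:  # collapse repeats, drop blanks
            decoded.append(labels[idx])
        prev = idx
    return "".join(decoded)
```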
To run a model with TensorRT, we first construct the model in PyTorch, which is then exported into an ONNX static graph. Finally, a TensorRT engine is constructed from the ONNX file and can be launched to do inference; a minimal sketch of this flow is shown after the table below. The following table shows which backends are supported for each part of the model pipeline.
|Backend\Pipeline component|Data preprocessor|Acoustic Model|Decoder|
|---|---|---|---|
|PyTorch JIT|x|x|x|
|ONNX|-|x|-|
|TensorRT|-|x|-|
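As a hedged sketch of the PyTorch → ONNX → TensorRT flow described above (the real export is handled by `triton/converter.py`; the input shape, tensor names, and builder settings below are illustrative assumptions using the TensorRT 7.x Python API):
```python
import torch
import tensorrt as trt
def export_and_build(acoustic_model: torch.nn.Module, onnx_path: str = "jasper.onnx"):
    # 1. Export the PyTorch acoustic model to a static ONNX graph.
    dummy_mel = torch.randn(1, 64, 300)  # assumed (batch, mel channels, frames) shape
    torch.onnx.export(acoustic_model, dummy_mel, onnx_path, opset_version=11,
                      input_names=["mel"], output_names=["log_probs"])
    # 2. Parse the ONNX graph and build a TensorRT engine with FP16 enabled.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("Failed to parse the exported ONNX file")
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)
    return builder.build_engine(network, config)
```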
In order to run inference with TensorRT outside of the inference server, refer to the [Jasper TensorRT README](../tensorrt/README.md).
## Setup
The repository contains a folder `./triton` with a `Dockerfile` which extends the PyTorch 20.10-py3 NGC container and encapsulates some dependencies. Ensure you have the following components:
- [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
- [PyTorch 20.10-py3 NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch)
- [Triton Inference Server 20.10 NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:tritonserver)
- Access to [NVIDIA machine learning repository](https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb) and [NVIDIA CUDA repository](https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.243-1_amd64.deb) for NVIDIA TensorRT 6
- Supported GPUs:
- [NVIDIA Volta architecture](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/)
- [NVIDIA Turing architecture](https://www.nvidia.com/en-us/geforce/turing/)
- [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
- [Pretrained Jasper Model Checkpoint](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_ckpt_amp)
Required Python packages are listed in `requirements.txt`. These packages are automatically installed when the Docker container is built.
## Quick Start Guide
Running the following scripts will build and launch a container with all required dependencies for native PyTorch as well as Triton. This container is necessary for inference and can also be used for data download, processing, and training of the model. For more information on the scripts and arguments, refer to the [Advanced](#advanced) section.
1. Clone the repository.
```bash
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/SpeechRecognition/Jasper
```
2. Build the Jasper PyTorch container.
Running the following scripts will build the container which contains all the required dependencies for data download and processing as well as converting the model.
```bash
bash scripts/docker/build.sh
```
3. Start an interactive session in the Docker container:
```bash
bash scripts/docker/launch.sh <DATA_DIR> <CHECKPOINT_DIR> <RESULT_DIR>
```
Where <DATA_DIR>, <CHECKPOINT_DIR> and <RESULT_DIR> can be either empty or absolute directory paths to dataset, existing checkpoints or potential output files. When left empty, they default to `datasets/`, `/checkpoints`, and `results/`, respectively. The `/datasets`, `/checkpoints`, `/results` directories will be mounted as volumes and mapped to the corresponding directories `<DATA_DIR>`, `<CHECKPOINT_DIR>`, `<RESULT_DIR>` on the host.
Note that `<DATA_DIR>`, `<CHECKPOINT_DIR>`, and `<RESULT_DIR>` directly correspond to the same arguments in `scripts/docker/launch.sh` and `trt/scripts/docker/launch.sh` mentioned in the [Jasper PyTorch README](../README.md) and [Jasper TensorRT README](../tensorrt/README.md).
Briefly, `<DATA_DIR>` should contain, or be prepared to contain a `LibriSpeech` sub-directory (created in [Acquiring Dataset](../trt/README.md)), `<CHECKPOINT_DIR>` should contain a PyTorch model checkpoint (`*.pt`) file obtained through training described in [Jasper PyTorch README](../README.md), and `<RESULT_DIR>` should be prepared to contain converted model and logs.
4. Downloading the `test-clean` part of `LibriSpeech` is required for model conversion. But it is not required for inference on Triton Inference Server, which can use a single .wav audio file. To download and preprocess LibriSpeech, run the following inside the container:
```bash
bash triton/scripts/download_triton_librispeech.sh
bash triton/scripts/preprocess_triton_librispeech.sh
```
5. (Option 1) Convert pretrained PyTorch model checkpoint into Triton Inference Server compatible model backends.
Inside the container, run:
```bash
export CHECKPOINT_PATH=<CHECKPOINT_PATH>
export CONVERT_PRECISIONS=<CONVERT_PRECISIONS>
export CONVERTS=<CONVERTS>
bash triton/scripts/export_model.sh
```
Where `<CHECKPOINT_PATH>` (`"/checkpoints/jasper_fp16.pt"`) is the absolute file path of the pretrained checkpoint, `<CONVERT_PRECISIONS>` (`"fp16" "fp32"`) is the list of precisions used for conversion, and `<CONVERTS>` (`"feature-extractor" "decoder" "ts-trace" "onnx" "tensorrt"`) is the list of conversions to be applied. The feature extractor converts only to TorchScript trace module (`feature-extractor`), the decoder only to TorchScript script module (`decoder`), and the Jasper model can convert to TorchScript trace module (`ts-trace`), ONNX (`onnx`), or TensorRT (`tensorrt`).
A pretrained PyTorch model checkpoint for model conversion can be downloaded from the [NGC model repository](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_ckpt_amp).
More details can be found in the [Advanced](#advanced) section under [Scripts and sample code](#scripts-and-sample-code).
6. (Option 2) Download pre-exported inference checkpoints from NGC.
Alternatively, you can skip the manual model export and download already generated model backends for every version of the model pipeline.
* [Jasper_ONNX](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_onnx_fp16_amp/version),
* [Jasper_TorchScript](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_torchscript_fp16_amp/version),
* [Jasper_TensorRT_Turing](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_trt_fp16_amp_turing/version),
* [Jasper_TensorRT_Volta](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_trt_fp16_amp_volta/version).
If you wish to use TensorRT pipeline, make sure to download the correct version for your hardware. The extracted model folder should contain 3 subfolders `feature-extractor-ts-trace`, `decoder-ts-script` and `jasper-x` where `x` can be `ts-trace`, `onnx`, `tensorrt` depending on the model backend. Copy the 3 model folders to the directory `./triton/model_repo/fp16` in your Jasper project.
7. Build a container that extends Triton Inference Client:
From outside the container, run:
```bash
bash triton/scripts/docker/build_triton_client.sh
```
Once the above steps are completed you can either run inference benchmarks or perform inference on real data.
8. (Option 1) Run all inference benchmarks.
From outside the container, run:
```bash
export RESULT_DIR=<RESULT_DIR>
export PRECISION_TESTS=<PRECISION_TESTS>
export BATCH_SIZES=<BATCH_SIZES>
export SEQ_LENS=<SEQ_LENS>
bash triton/scripts/execute_all_perf_runs.sh
```
Where `<RESULT_DIR>` is the absolute path to potential output files (`./results`), `<PRECISION_TESTS>` is a list of precisions to be tested (`"fp16" "fp32"`), `<BATCH_SIZES>` is a list of tested batch sizes (`"1" "2" "4" "8"`), and `<SEQ_LENS>` is a list of tested sequence lengths (`"32000" "112000" "267200"`).
Note: This can take several hours to complete due to the extensiveness of the benchmark. More details about the benchmark are found in the [Advanced](#advanced) section under [Performance](#performance).
9. (Option 2) Run inference on real data using the Client and Triton Inference Server.
 9.1 From outside the container, restart the server:
```bash
bash triton/scripts/run_server.sh <MODEL_TYPE> <PRECISION>
```
 9.2 From outside the container, submit the client request using:
```bash
bash triton/scripts/run_client.sh <MODEL_TYPE> <DATA_DIR> <FILE>
```
Where `<MODEL_TYPE>` can be either "ts-trace", "tensorrt", or "onnx", and `<PRECISION>` is either "fp32" or "fp16". `<DATA_DIR>` is an absolute local path to the directory of files. `<FILE>` is the path, relative to `<DATA_DIR>`, to either an audio file in .wav format or a manifest file in .json format.
Note: If `<FILE>` is a *.json manifest, `<DATA_DIR>` should be the path to the LibriSpeech dataset. In this case, the script will run both inference and evaluation on the corresponding LibriSpeech dataset.
## Advanced
The following sections provide greater details about the Triton Inference Server pipeline and inference analysis and benchmarking results.
### Scripts and sample code
The `triton/` directory contains the following files:
* `jasper-client.py`: Python client script that takes an audio file and a specific model pipeline type and submits a client request to the server to run inference with the model on the given audio file.
* `speech_utils.py`: helper functions for `jasper-client.py`.
* `converter.py`: Python script for model conversion to different backends.
* `jasper_module.py`: helper functions for `converter.py`.
* `model_repo_configs/`: directory with Triton model config files for different backend and precision configurations.
The `triton/scripts/` directory has easy to use scripts to run supported functionalities, such as:
* `./docker/build_triton_client.sh`: builds container
* `execute_all_perf_runs.sh`: runs all benchmarks using Triton Inference Server performance client; calls `generate_perf_results.sh`
* `export_model.sh`: from pretrained PyTorch checkpoint generates backends for every version of the model inference pipeline.
* `prepare_model_repository.sh`: copies model config files from `./model_repo_configs/` to `./deploy/model_repo` and creates links to generated model backends, setting up the model repository for Triton Inference Server
* `generate_perf_results.sh`: runs benchmark with `perf-client` for specific configuration and calls `run_perf_client.sh`
* `run_server.sh`: launches Triton Inference Server
* `run_client.sh`: launches client by using `jasper-client.py` to submit inference requests to server
### Running the Triton Inference Server
Launch the Triton Inference Server in detached mode to run in the background by default:
```bash
bash triton/scripts/run_server.sh
```
To run in the foreground interactively, for debugging purposes, run:
```bash
DAEMON="--detach=false" bash triton/scripts/run_server.sh
```
The script mounts and loads models at `$PWD/triton/deploy/model_repo` to the server with all visible GPUs. In order to selectively choose the devices, set `NVIDIA_VISIBLE_DEVICES`.
### Running the Triton Inference Client
*Real data*
In order to run the client with real data, run:
```bash
bash triton/scripts/run_client.sh <backend> <data directory> <audio file>
```
The script calls `triton/jasper-client.py` which preprocesses data and sends/receives requests to/from the server.
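For reference, a minimal Python client built on `tritonclient` might look roughly like the sketch below; the model name matches the ensemble used elsewhere in this README, but the input/output tensor names are placeholders and should be taken from the model configuration:
```python
# Hedged sketch of a Triton HTTP client request; tensor names are placeholders.
import numpy as np
import tritonclient.http as httpclient
client = httpclient.InferenceServerClient(url="localhost:8000")
audio = np.random.randn(1, 32000).astype(np.float32)  # ~2 s of fake 16 kHz audio
inputs = [httpclient.InferInput("AUDIO_SIGNAL", list(audio.shape), "FP32")]
inputs[0].set_data_from_numpy(audio)
response = client.infer("jasper-tensorrt-ensemble", inputs=inputs)
output = response.as_numpy("OUTPUT")  # placeholder output tensor name
```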
*Synthetic data*
In order to run the client with synthetic data for performance measurements, run:
```bash
export MODEL_NAME=jasper-tensorrt-ensemble
export MODEL_VERSION=1
export BATCH_SIZE=1
export MAX_LATENCY=500
export MAX_CONCURRENCY=64
export AUDIO_LENGTH=32000
export SERVER_HOSTNAME=localhost
export RESULT_DIR_H=${PWD}/results/perf_client/${MODEL_NAME}/batch_${BATCH_SIZE}_len_${AUDIO_LENGTH}
bash triton/scripts/run_perf_client.sh
```
The export values above are the defaults. The script waits until the server is up and running, sends requests subject to the constraints set above, and writes results to `/results/results_${TIMESTAMP}.csv`, where `TIMESTAMP=$(date "+%y%m%d_%H%M")` and `/results/` is the results directory mounted in the Docker container.
For more information about `perf_client`, refer to the [official documentation](https://docs.nvidia.com/deeplearning/triton-inference-server/master-user-guide/docs/optimization.html#perf-client).
## Performance
The performance measurements in this document were conducted at the time of publication and may not reflect the performance achieved from NVIDIA’s latest software release. For the most up-to-date performance measurements, go to [NVIDIA Data Center Deep Learning Product Performance](https://developer.nvidia.com/deep-learning-performance-training-inference).
### Inference Benchmarking in Triton Inference Server
To benchmark the inference performance on a Volta, Turing, or Ampere GPU, run `bash triton/scripts/execute_all_perf_runs.sh` according to [Quick Start Guide](#quick-start-guide) Step 8.
By default, this script measures inference performance for all 3 model pipelines: PyTorch JIT (`ts-trace`), ONNX (`onnx`), and TensorRT (`tensorrt`), each with FP32 and FP16 precision. Each of these pipelines is measured for different audio input lengths (2 sec, 7 sec, 16.7 sec) and a range of server batch sizes (up to 8). This takes place in `triton/scripts/generate_perf_results.sh`. For a specific audio length and batch size, a static versus dynamic batching comparison is performed.
### Results
In the following section, we analyze the results using the example of the Triton pipeline.
#### Performance Analysis for Triton Inference Server: NVIDIA T4
All results below are obtained using the following configurations:
* Single T4 16GB GPU on a local server
* FP16 precision
* Python 3.6.10
* PyTorch 1.7.0a0+7036e91
* TensorRT 7.2.1.4
* CUDA 11.1.0.024
* CUDNN 8.0.4.30
##### Batching techniques: Static Batching
Static batching is a feature of the inference server that allows inference requests to be served as they are received. The largest improvements to throughput come from increasing the batch size due to efficiency gains in the GPU with larger batches.

Figure 1: Throughput vs. Latency for Jasper, Audio Length = 2sec using various model backends available in Triton Inference Server and static batching.

Figure 2: Throughput vs. Latency for Jasper, Audio Length = 7sec using various model backends available in Triton Inference Server and static batching.

Figure 3: Throughput vs. Latency for Jasper, Audio Length = 16.7sec using various model backends available in Triton Inference Server and static batching.
These charts can be used to establish the optimal batch size to use in dynamic batching, given a latency budget. For example, in Figure 2 (Audio length = 7s), given a budget of 50 ms, the optimal batch size to use for the TensorRT backend is 4. This will result in a maximum throughput of 100 inf/s under the latency constraint. In all three charts, TensorRT shows the best throughput and latency performance for a given batch size.
##### Batching techniques: Dynamic Batching
Dynamic batching is a feature of the inference server that allows inference requests to be combined by the server, so that a batch is created dynamically, resulting in an increased throughput. It is preferred in scenarios where we would like to maximize throughput and GPU utilization at the cost of higher latencies. You can set the Dynamic Batcher parameter `max_queue_delay_microseconds` to indicate the maximum amount of time you are willing to wait and `preferred_batch_size` to indicate your maximum server batch size in the Triton Inference Server model config.
Figures 4, 5, and 6 emphasize the increase in overall throughput with dynamic batching. At low numbers of concurrent requests, the increased throughput comes at the cost of increased latency, as requests are queued for up to `max_queue_delay_microseconds`.

Figure 4: Triton pipeline - Latency & Throughput vs Concurrency using dynamic Batching at maximum server batch size = 8, max_queue_delay_microseconds = 5000, input audio length = 2 seconds, TensorRT backend.

Figure 5: Triton pipeline - Latency & Throughput vs Concurrency using dynamic Batching at maximum server batch size = 8, max_queue_delay_microseconds = 5000, input audio length = 7 seconds, TensorRT backend.

Figure 6: Triton pipeline - Latency & Throughput vs Concurrency using dynamic Batching at maximum server batch size = 8, max_queue_delay_microseconds = 5000, input audio length = 16.7 seconds, TensorRT backend.
##### TensorRT, ONNXRT-CUDA, and PyTorch JIT comparisons
The following tables show inference and latency comparisons across all 3 backends for mixed precision and static batching. The main observations are:
* Increasing the batch size leads to higher inference throughput and latency up to a certain batch size, after which it slowly saturates.
* The longer the audio length, the lower the throughput and the higher the latency.
###### Throughput Comparison
The following table shows the throughput benchmark results for all 3 model backends in Triton Inference Server using static batching under optimal concurrency.
|Audio length in seconds|Batch Size|TensorRT (inf/s)|PyTorch (inf/s)|ONNXRT-CUDA (inf/s)|TensorRT/PyTorch Speedup|TensorRT/ONNXRT-CUDA Speedup|
|--- |--- |--- |--- |--- |--- |--- |
| 2.0| 1| 49.67| 55.67| 41.67| 0.89| 1.19|
| 2.0| 2| 98.67| 96.00| 77.33| 1.03| 1.28|
| 2.0| 4| 180.00| 141.33| 118.67| 1.27| 1.52|
| 2.0| 8| 285.33| 202.67| 136.00| 1.41| 2.10|
| 7.0| 1| 47.67| 37.00| 18.00| 1.29| 2.65|
| 7.0| 2| 79.33| 47.33| 46.00| 1.68| 1.72|
| 7.0| 4| 100.00| 73.33| 36.00| 1.36| 2.78|
| 7.0| 8| 117.33| 82.67| 40.00| 1.42| 2.93|
| 16.7| 1| 36.33| 21.67| 11.33| 1.68| 3.21|
| 16.7| 2| 40.67| 25.33| 16.00| 1.61| 2.54|
| 16.7| 4| 46.67| 37.33| 16.00| 1.25| 2.92|
| 16.7| 8| 48.00| 40.00| 18.67| 1.20| 2.57|
###### Latency Comparison
The following table shows the latency benchmark results for all 3 model backends in Triton Inference Server using static batching and a single concurrent request.
|Audio length in seconds|Batch Size|TensorRT (ms)|PyTorch (ms)|ONNXRT-CUDA (ms)|TensorRT/PyTorch Speedup|TensorRT/ONNXRT-CUDA Speedup|
|--- |--- |--- |--- |--- |--- |--- |
| 2.0| 1| 23.61| 25.06| 31.84| 1.06| 1.35|
| 2.0| 2| 24.56| 25.11| 37.54| 1.02| 1.53|
| 2.0| 4| 25.90| 31.00| 37.20| 1.20| 1.44|
| 2.0| 8| 31.57| 41.76| 37.13| 1.32| 1.18|
| 7.0| 1| 24.79| 30.55| 32.16| 1.23| 1.30|
| 7.0| 2| 28.48| 45.05| 37.47| 1.58| 1.32|
| 7.0| 4| 41.71| 57.71| 37.92| 1.38| 0.91|
| 7.0| 8| 72.19| 98.84| 38.13| 1.37| 0.53|
| 16.7| 1| 30.66| 48.42| 32.74| 1.58| 1.07|
| 16.7| 2| 52.79| 81.89| 37.82| 1.55| 0.72|
| 16.7| 4| 92.86| 115.03| 37.91| 1.24| 0.41|
| 16.7| 8| 170.34| 203.52| 37.84| 2.36| 0.22|
## Release Notes
### Changelog
March 2021
* Updated ONNX runtime information
February 2021
* Updated Triton scripts for compatibility with Triton Inference Server version 2
* Updated Quick Start Guide
* Updated performance results
### Known issues
There are no known issues in this deployment.
|
Tools/PyTorch/TimeSeriesPredictionPlatform | TimeSeriesPredictionPlatform | distributed_utils | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import random
import torch
import torch.distributed as dist
from numba import cuda
import warnings
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
from hydra.core.hydra_config import HydraConfig
from joblib.externals.loky.backend.context import get_context
def generate_seeds(rng, size):
"""
Generate list of random seeds
:param rng: random number generator
:param size: length of the returned list
"""
seeds = [rng.randint(0, 2 ** 32 - 1) for _ in range(size)]
return seeds
def broadcast_seeds(seeds, device):
"""
Broadcasts random seeds to all distributed workers.
Returns list of random seeds (broadcasted from workers with rank 0).
:param seeds: list of seeds (integers)
:param device: torch.device
"""
if torch.distributed.is_available() and torch.distributed.is_initialized():
seeds_tensor = torch.LongTensor(seeds).to(device)
torch.distributed.broadcast(seeds_tensor, 0)
seeds = seeds_tensor.tolist()
return seeds
def setup_seeds(master_seed, epochs, device):
"""
Generates seeds from one master_seed.
Function returns (worker_seeds, shuffling_seeds), worker_seeds are later
used to initialize per-worker random number generators (mostly for
    dropouts), shuffling_seeds are for RNGs responsible for reshuffling the
dataset before each epoch.
Seeds are generated on worker with rank 0 and broadcasted to all other
workers.
:param master_seed: master RNG seed used to initialize other generators
:param epochs: number of epochs
:param device: torch.device (used for distributed.broadcast)
"""
if master_seed == -1:
# random master seed, random.SystemRandom() uses /dev/urandom on Unix
master_seed = random.SystemRandom().randint(0, 2 ** 32 - 1)
if get_rank() == 0:
# master seed is reported only from rank=0 worker, it's to avoid
# confusion, seeds from rank=0 are later broadcasted to other
# workers
print(f"Using random master seed: {master_seed}")
else:
# master seed was specified from command line
print(f"Using master seed from command line: {master_seed}")
# initialize seeding RNG
seeding_rng = random.Random(master_seed)
# generate worker seeds, one seed for every distributed worker
worker_seeds = generate_seeds(seeding_rng, get_world_size())
# generate seeds for data shuffling, one seed for every epoch
shuffling_seeds = generate_seeds(seeding_rng, epochs)
# broadcast seeds from rank=0 to other workers
worker_seeds = broadcast_seeds(worker_seeds, device)
shuffling_seeds = broadcast_seeds(shuffling_seeds, device)
return worker_seeds, shuffling_seeds
def get_world_size():
return int(os.environ.get("WORLD_SIZE", 1))
def reduce_tensor(tensor, num_gpus, average=False):
if num_gpus > 1:
rt = tensor.clone()
        dist.all_reduce(rt, op=dist.ReduceOp.SUM)
if average:
if rt.is_floating_point():
rt = rt / num_gpus
else:
rt = rt // num_gpus
return rt
return tensor
def init_distributed():
world_size = int(os.environ.get("WORLD_SIZE", 1))
local_rank = int(os.environ.get('LOCAL_RANK', 0))
if world_size > 1:
dist.init_process_group(backend='nccl', init_method="env://")
assert dist.is_initialized()
torch.cuda.set_device(local_rank)
torch.cuda.synchronize()
def get_rank():
"""
Gets distributed rank or returns zero if distributed is not initialized.
"""
if torch.distributed.is_available() and torch.distributed.is_initialized():
rank = torch.distributed.get_rank()
else:
rank = 0
return rank
def is_main_process():
return get_rank() == 0
def init_parallel():
if is_parallel():
torch.cuda.set_device(HydraConfig.get().job.num % torch.cuda.device_count())
def is_parallel():
return HydraConfig.get().launcher.get('n_jobs', 0) > 1 or HydraConfig.get().sweeper.get('n_jobs', 0) > 1
def get_mp_context():
if HydraConfig.get().launcher.get('n_jobs', 0) > 1 or HydraConfig.get().sweeper.get('n_jobs', 0) > 1:
return get_context('loky')
return None
def _pynvml_mem_size(kind="total", index=0):
import pynvml
pynvml.nvmlInit()
size = None
if kind == "free":
size = int(pynvml.nvmlDeviceGetMemoryInfo(pynvml.nvmlDeviceGetHandleByIndex(index)).free)
elif kind == "total":
size = int(pynvml.nvmlDeviceGetMemoryInfo(pynvml.nvmlDeviceGetHandleByIndex(index)).total)
else:
raise ValueError("{0} not a supported option for device_mem_size.".format(kind))
pynvml.nvmlShutdown()
return size
def device_mem_size(kind="total"):
if kind not in ["free", "total"]:
raise ValueError("{0} not a supported option for device_mem_size.".format(kind))
try:
if kind == "free":
return int(cuda.current_context().get_memory_info()[0])
else:
return int(cuda.current_context().get_memory_info()[1])
except NotImplementedError:
if kind == "free":
# Not using NVML "free" memory, because it will not include RMM-managed memory
warnings.warn("get_memory_info is not supported. Using total device memory from NVML.")
size = _pynvml_mem_size(kind="total", index=0)
return size
def get_rmm_size(size):
return (size // 256) * 256
def calculate_frac(num_rows, num_feat, world_size):
total_memory = world_size * device_mem_size(kind='total')
mem_to_use = total_memory * 0.4
num_rows_to_use = mem_to_use / (num_feat * 6)
print(num_rows_to_use)
frac = min(num_rows_to_use / num_rows, 1.0)
return frac
def create_client(config):
device_pool_frac = config.cluster.device_pool_frac
device_size = device_mem_size(kind="total")
device_pool_size = int(device_pool_frac * device_size)
dask_space = "/tmp/dask_space/"
protocol = config.cluster.protocol
visible_devices = [i for i in range(config.cluster.world_size)]
if protocol == "ucx":
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
rmm_pool_size=get_rmm_size(device_pool_size),
local_directory=dask_space,
device_memory_limit=None,
enable_tcp_over_ucx=True,
enable_nvlink=True)
else:
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
rmm_pool_size=get_rmm_size(device_pool_size),
local_directory=dask_space,
device_memory_limit=None,
)
client = Client(cluster)
return client
|
TensorFlow/Classification/ConvNets/triton/scripts/docker | docker | interactive | #!/usr/bin/env bash
# Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
docker run -it --rm \
--gpus "device=all" \
--net=host \
--shm-size=1g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
-e WORKDIR=$(pwd) \
-e PYTHONPATH=$(pwd) \
-v $(pwd):$(pwd) \
-w $(pwd) \
resnet50:latest bash
|
PyTorch/Detection/Efficientdet/effdet/layers | layers | config | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright 2019-2022 Ross Wightman
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Optional
__all__ = [
'is_exportable', 'is_scriptable', 'is_no_jit',
'set_exportable', 'set_scriptable', 'set_no_jit', 'set_layer_config'
]
# Set to True if prefer to have layers with no jit optimization (includes activations)
_NO_JIT = False
# Set to True if prefer to have activation layers with no jit optimization
# NOTE not currently used as no difference between no_jit and no_activation jit as only layers obeying
# the jit flags so far are activations. This will change as more layers are updated and/or added.
_NO_ACTIVATION_JIT = False
# Set to True if exporting a model with Same padding via ONNX
_EXPORTABLE = False
# Set to True if wanting to use torch.jit.script on a model
_SCRIPTABLE = False
def is_no_jit():
return _NO_JIT
class set_no_jit:
def __init__(self, mode: bool) -> None:
global _NO_JIT
self.prev = _NO_JIT
_NO_JIT = mode
def __enter__(self) -> None:
pass
def __exit__(self, *args: Any) -> bool:
global _NO_JIT
_NO_JIT = self.prev
return False
def is_exportable():
return _EXPORTABLE
class set_exportable:
def __init__(self, mode: bool) -> None:
global _EXPORTABLE
self.prev = _EXPORTABLE
_EXPORTABLE = mode
def __enter__(self) -> None:
pass
def __exit__(self, *args: Any) -> bool:
global _EXPORTABLE
_EXPORTABLE = self.prev
return False
def is_scriptable():
return _SCRIPTABLE
class set_scriptable:
def __init__(self, mode: bool) -> None:
global _SCRIPTABLE
self.prev = _SCRIPTABLE
_SCRIPTABLE = mode
def __enter__(self) -> None:
pass
def __exit__(self, *args: Any) -> bool:
global _SCRIPTABLE
_SCRIPTABLE = self.prev
return False
class set_layer_config:
""" Layer config context manager that allows setting all layer config flags at once.
If a flag arg is None, it will not change the current value.
"""
def __init__(
self,
scriptable: Optional[bool] = None,
exportable: Optional[bool] = None,
no_jit: Optional[bool] = None,
no_activation_jit: Optional[bool] = None):
global _SCRIPTABLE
global _EXPORTABLE
global _NO_JIT
global _NO_ACTIVATION_JIT
self.prev = _SCRIPTABLE, _EXPORTABLE, _NO_JIT, _NO_ACTIVATION_JIT
if scriptable is not None:
_SCRIPTABLE = scriptable
if exportable is not None:
_EXPORTABLE = exportable
if no_jit is not None:
_NO_JIT = no_jit
if no_activation_jit is not None:
_NO_ACTIVATION_JIT = no_activation_jit
def __enter__(self) -> None:
pass
def __exit__(self, *args: Any) -> bool:
global _SCRIPTABLE
global _EXPORTABLE
global _NO_JIT
global _NO_ACTIVATION_JIT
_SCRIPTABLE, _EXPORTABLE, _NO_JIT, _NO_ACTIVATION_JIT = self.prev
return False |
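# Minimal usage sketch (illustration only; `export_fn` is a hypothetical helper):
# temporarily force export-friendly settings while converting a model, then restore
# the previous flags automatically on exit.
#
#     with set_layer_config(scriptable=False, exportable=True, no_jit=True):
#         exported = export_fn(model)
#
# Outside the `with` block, is_exportable(), is_scriptable() and is_no_jit()
# return their previous values again.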
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/generator | generator | utils | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cupy as cp
from numba import cuda
WARP_SIZE = 32 # could be 32 or 64
@cuda.jit
def repeat_kernel(repeat_ptr, cumsum_ptr, res, size):
idx = cuda.grid(1)
stride = cuda.gridsize(1) // WARP_SIZE
warp_id = idx // WARP_SIZE
tid_in_warp = idx % WARP_SIZE
for i in range(warp_id, size, stride):
end = cumsum_ptr[i]
repeat = repeat_ptr[i]
start = end - repeat
for j in range(start + tid_in_warp, end, WARP_SIZE):
res[j] = i
def cuda_repeat(repeats):
cumsum = repeats.cumsum(0)
total = cumsum[-1].item()
size = len(repeats)
block = 512
warps_per_block = block // WARP_SIZE
grid = max((size + warps_per_block - 1) // warps_per_block, 2048)
res = cp.empty(total, dtype=repeats.dtype)
repeat_kernel[grid, block](repeats, cumsum, res, size)
cuda.synchronize()
return res
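# Minimal usage sketch (illustration only): cuda_repeat expands indices the same way
# torch.repeat_interleave does, e.g. repeats [2, 0, 3] -> [0, 0, 2, 2, 2].
#
#     repeats = cp.asarray([2, 0, 3], dtype=cp.int64)
#     out = cuda_repeat(repeats)   # CuPy array on the GPU: [0, 0, 2, 2, 2]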
|
PyTorch/LanguageModeling/BERT/triton/large | large | README | # Deploying the BERT model on Triton Inference Server
This folder contains instructions for deployment to run inference
on Triton Inference Server as well as a detailed performance analysis.
The purpose of this document is to help you with achieving
the best inference performance.
## Table of contents
- [Solution overview](#solution-overview)
- [Introduction](#introduction)
- [Deployment process](#deployment-process)
- [Setup](#setup)
- [Quick Start Guide](#quick-start-guide)
- [Performance](#performance)
- [Offline scenario](#offline-scenario)
- [Offline: NVIDIA A30, ONNX Runtime with FP16](#offline-nvidia-a30-onnx-runtime-with-fp16)
- [Offline: NVIDIA A30, ONNX Runtime with FP16, Backend accelerator TensorRT](#offline-nvidia-a30-onnx-runtime-with-fp16-backend-accelerator-tensorrt)
- [Offline: NVIDIA A30, NVIDIA TensorRT with FP16](#offline-nvidia-a30-nvidia-tensorrt-with-fp16)
- [Offline: NVIDIA A30, NVIDIA PyTorch with FP16](#offline-nvidia-a30-pytorch-with-fp16)
- [Offline: NVIDIA DGX-1 (1x V100 32GB), ONNX Runtime with FP16](#offline-nvidia-dgx-1-1x-v100-32gb-onnx-runtime-with-fp16)
- [Offline: NVIDIA DGX-1 (1x V100 32GB), ONNX Runtime with FP16, Backend accelerator TensorRT](#offline-nvidia-dgx-1-1x-v100-32gb-onnx-runtime-with-fp16-backend-accelerator-tensorrt)
- [Offline: NVIDIA DGX-1 (1x V100 32GB), NVIDIA TensorRT with FP16](#offline-nvidia-dgx-1-1x-v100-32gb-nvidia-tensorrt-with-fp16)
- [Offline: NVIDIA DGX-1 (1x V100 32GB), PyTorch with FP16](#offline-nvidia-dgx-1-1x-v100-32gb-pytorch-with-fp16)
- [Offline: NVIDIA DGX A100 (1x A100 80GB), ONNX Runtime with FP16](#offline-nvidia-dgx-a100-1x-a100-80gb-onnx-runtime-with-fp16)
- [Offline: NVIDIA DGX A100 (1x A100 80GB), ONNX Runtime with FP16, Backend accelerator TensorRT](#offline-nvidia-dgx-a100-1x-a100-80gb-onnx-runtime-with-fp16-backend-accelerator-tensorrt)
- [Offline: NVIDIA DGX A100 (1x A100 80GB), NVIDIA TensorRT with FP16](#offline-nvidia-dgx-a100-1x-a100-80gb-nvidia-tensorrt-with-fp16)
- [Offline: NVIDIA DGX A100 (1x A100 80GB), PyTorch with FP16](#offline-nvidia-dgx-a100-1x-a100-80gb-pytorch-with-fp16)
- [Offline: NVIDIA T4, ONNX Runtime with FP16](#offline-nvidia-t4-onnx-runtime-with-fp16)
- [Offline: NVIDIA T4, ONNX Runtime with FP16, Backend accelerator TensorRT](#offline-nvidia-t4-onnx-runtime-with-fp16-backend-accelerator-tensorrt)
- [Offline: NVIDIA T4, NVIDIA TensorRT with FP16](#offline-nvidia-t4-nvidia-tensorrt-with-fp16)
- [Offline: NVIDIA T4, PyTorch with FP16](#offline-nvidia-t4-pytorch-with-fp16)
- [Advanced](#advanced)
- [Prepare configuration](#prepare-configuration)
- [Step by step deployment process](#step-by-step-deployment-process)
- [Latency explanation](#latency-explanation)
- [Release notes](#release-notes)
- [Changelog](#changelog)
- [Known issues](#known-issues)
## Solution overview
### Introduction
The [NVIDIA Triton Inference Server](https://github.com/NVIDIA/triton-inference-server)
provides a datacenter and cloud inferencing solution optimized for NVIDIA GPUs.
The server provides an inference service via an HTTP or gRPC endpoint,
allowing remote clients to request inferencing for any number of GPU
or CPU models being managed by the server.
This README provides step-by-step deployment instructions for models generated
during training (as described in the [model README](../readme.md)).
Additionally, this README provides the corresponding deployment scripts that
ensure optimal GPU utilization during inferencing on Triton Inference Server.
### Deployment process
The deployment process consists of two steps:
1. Conversion.
The purpose of conversion is to find the best performing model
format supported by Triton Inference Server.
Triton Inference Server uses a number of runtime backends such as
[TensorRT](https://developer.nvidia.com/tensorrt),
[LibTorch](https://github.com/triton-inference-server/pytorch_backend) and
[ONNX Runtime](https://github.com/triton-inference-server/onnxruntime_backend)
to support various model types. Refer to the
[Triton documentation](https://github.com/triton-inference-server/backend#where-can-i-find-all-the-backends-that-are-available-for-triton)
for a list of available backends.
2. Configuration.
Model configuration on Triton Inference Server, which generates
necessary [configuration files](https://github.com/triton-inference-server/server/blob/master/docs/model_configuration.md).
After deployment, Triton Inference Server is used to evaluate the converted model in two steps:
1. Accuracy tests.
Produce results which are tested against given accuracy thresholds.
2. Performance tests.
Produce latency and throughput results for offline (static batching)
and online (dynamic batching) scenarios.
All steps are executed by the provided runner script. Refer to the [Quick Start Guide](#quick-start-guide) for details.
## Setup
Ensure you have the following components:
* [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
* [PyTorch NGC container 21.10](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch)
* [Triton Inference Server NGC container 21.10](https://ngc.nvidia.com/catalog/containers/nvidia:tritonserver)
* [NVIDIA CUDA](https://docs.nvidia.com/cuda/archive//index.html)
* [NVIDIA Ampere](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/), [Volta](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/) or [Turing](https://www.nvidia.com/en-us/geforce/turing/) based GPU
## Quick Start Guide
Running the following scripts will build and launch the container with all required dependencies for native PyTorch as well as Triton Inference Server. This is necessary for running inference and can also be used for data download, processing, and training of the model.
1. Clone the repository.
```
git clone https://github.com/NVIDIA/DeepLearningExamples.git
cd DeepLearningExamples/PyTorch/LanguageModeling/BERT/
```
2. Build and run a container that extends NGC PyTorch with the Triton client libraries and necessary dependencies.
```
./triton/large/scripts/docker/build.sh
./triton/large/scripts/docker/interactive.sh
```
3. Prepare the dataset.
The runner requires a script that downloads and prepares the publicly available datasets.
The script downloads the necessary data to the DeepLearningExamples/PyTorch/LanguageModeling/BERT/datasets directory.
```
./triton/large/runner/prepare_datasets.sh
```
4. Execute the runner script (note that a separate run script is provided for each NVIDIA GPU).
```
NVIDIA A30: ./triton/large/runner/start_NVIDIA-A30.sh
NVIDIA DGX-1 (1x V100 32GB): ./triton/large/runner/start_NVIDIA-DGX-1-\(1x-V100-32GB\).sh
NVIDIA DGX A100 (1x A100 80GB): ./triton/large/runner/start_NVIDIA-DGX-A100-\(1x-A100-80GB\).sh
NVIDIA T4: ./triton/large/runner/start_NVIDIA-T4.sh
```
## Performance
The performance measurements in this document were conducted at the time of publication and may not reflect
the performance achieved from NVIDIA’s latest software release. For the most up-to-date performance measurements, go to
[NVIDIA Data Center Deep Learning Product Performance](https://developer.nvidia.com/deep-learning-performance-training-inference).
### Offline scenario
The offline scenario assumes the client and server are located on the same host. The tests use the following setup:
- tensors are passed through shared memory between the client and server; the Perf Analyzer flag `shared-memory=system` is used
- a single request is sent from the client to the server with a static batch size
#### Offline: NVIDIA A30, ONNX Runtime with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:--------------------------|:----------------|
| GPU | NVIDIA A30 |
| Backend | ONNX Runtime |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | ONNX |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.40 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 102.0 | 0.0 | 0.3 | 0.0 | 0.1 | 9.3 | 0.0 | 0.0 | 9.7 | 9.8 | 9.8 | 9.8 | 9.7 |
| 8 | 1 | 133.3 | 0.0 | 0.4 | 0.1 | 0.1 | 59.3 | 0.0 | 0.0 | 59.8 | 60.3 | 60.5 | 61.0 | 59.8 |
| 16 | 1 | 130.7 | 0.0 | 0.5 | 0.1 | 0.1 | 119.8 | 0.0 | 0.0 | 120.4 | 120.8 | 120.9 | 121.1 | 120.4 |
#### Offline: NVIDIA A30, ONNX Runtime with FP16, Backend accelerator TensorRT
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Backend |ONNX Runtime |
| Backend accelerator |NVIDIA TensorRT|
| Precision |FP16 |
| Model format |ONNX |
| Max batch size |16 |
| Number of model instances |1|
| Accelerator Precision | FP16 |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.41 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 174.0 | 0.0 | 0.3 | 0.0 | 0.1 | 5.3 | 0.0 | 0.0 | 5.7 | 5.8 | 5.8 | 5.8 | 5.7 |
| 8 | 1 | 252.0 | 0.0 | 0.4 | 0.1 | 0.1 | 31.1 | 0.0 | 0.0 | 31.6 | 31.8 | 31.9 | 31.9 | 31.6 |
#### Offline: NVIDIA A30, NVIDIA TensorRT with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:----------------|
| GPU | NVIDIA A30 |
| Backend | NVIDIA TensorRT |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | NVIDIA TensorRT |
| Max batch size | 16 |
| Number of model instances | 1 |
| NVIDIA TensorRT Capture CUDA Graph | Disabled |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.41 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 172.0 | 0.0 | 0.4 | 0.0 | 0.1 | 5.2 | 0.0 | 0.0 | 5.8 | 5.8 | 5.8 | 5.9 | 5.8 |
| 8 | 1 | 252.0 | 0.0 | 0.3 | 0.0 | 0.1 | 31.1 | 0.0 | 0.0 | 31.6 | 31.9 | 31.9 | 32.1 | 31.6 |
| 16 | 1 | 251.9 | 0.0 | 0.5 | 0.0 | 0.1 | 62.5 | 0.0 | 0.0 | 63.1 | 63.6 | 63.7 | 63.7 | 63.1 |
#### Offline: NVIDIA A30, PyTorch with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:------------------|
| GPU | NVIDIA A30 |
| Backend | PyTorch |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | TorchScript Trace |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.39 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 109.0 | 0.0 | 0.3 | 0.0 | 0.0 | 7.9 | 0.7 | 0.0 | 9.1 | 9.1 | 9.1 | 9.2 | 9.1 |
| 8 | 1 | 154.7 | 0.0 | 0.3 | 0.0 | 0.1 | 9.5 | 41.3 | 0.0 | 51.2 | 51.5 | 51.5 | 51.6 | 51.2 |
| 16 | 1 | 160.0 | 0.0 | 0.3 | 0.0 | 0.1 | 8.2 | 91.1 | 0.0 | 99.8 | 100.3 | 100.5 | 100.8 | 99.8 |
#### Offline: NVIDIA DGX-1 (1x V100 32GB), ONNX Runtime with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:----------------------------|
| GPU | NVIDIA DGX-1 (1x V100 32GB) |
| Backend | ONNX Runtime |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | ONNX |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.40 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 81.0 | 0.0 | 0.3 | 0.1 | 0.1 | 11.8 | 0.0 | 0.0 | 12.2 | 12.4 | 12.5 | 12.5 | 12.2 |
| 8 | 1 | 128.0 | 0.0 | 0.4 | 0.1 | 0.1 | 61.8 | 0.0 | 0.0 | 62.3 | 62.5 | 62.5 | 62.6 | 62.3 |
| 16 | 1 | 136.0 | 0.0 | 0.3 | 0.0 | 0.1 | 115.9 | 0.0 | 0.0 | 116.3 | 116.6 | 116.7 | 116.8 | 116.3 |
#### Offline: NVIDIA DGX-1 (1x V100 32GB), ONNX Runtime with FP16, Backend accelerator TensorRT
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:----------------------------|
| GPU | NVIDIA DGX-1 (1x V100 32GB) |
| Backend | ONNX Runtime |
| Backend accelerator | NVIDIA TensorRT |
| Precision | FP16 |
| Model format | ONNX |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | FP16 |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.38 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 128.0 | 0.0 | 0.3 | 0.0 | 0.1 | 7.4 | 0.0 | 0.0 | 7.7 | 7.8 | 7.8 | 8.0 | 7.8 |
| 8 | 1 | 208.0 | 0.0 | 0.2 | 0.0 | 0.1 | 38.0 | 0.0 | 0.0 | 38.3 | 38.5 | 38.5 | 38.5 | 38.3 |
| 16 | 1 | 223.9 | 0.0 | 0.3 | 0.0 | 0.1 | 70.1 | 0.0 | 0.0 | 70.5 | 70.8 | 70.8 | 70.8 | 70.5 |
#### Offline: NVIDIA DGX-1 (1x V100 32GB), NVIDIA TensorRT with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:----------------------------|
| GPU | NVIDIA DGX-1 (1x V100 32GB) |
| Backend | NVIDIA TensorRT |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | NVIDIA TensorRT |
| Max batch size | 16 |
| Number of model instances | 1 |
| NVIDIA TensorRT Capture CUDA Graph | Disabled |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.38 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 125.0 | 0.0 | 0.3 | 0.1 | 0.1 | 7.4 | 0.1 | 0.0 | 8.0 | 8.0 | 8.0 | 8.1 | 8.0 |
| 8 | 1 | 208.0 | 0.0 | 0.3 | 0.1 | 0.2 | 37.7 | 0.0 | 0.0 | 38.3 | 38.4 | 38.5 | 38.5 | 38.3 |
| 16 | 1 | 224.0 | 0.0 | 0.3 | 0.1 | 0.2 | 70.1 | 0.0 | 0.0 | 70.7 | 71.1 | 71.2 | 71.3 | 70.7 |
#### Offline: NVIDIA DGX-1 (1x V100 32GB), PyTorch with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:----------------------------|
| GPU | NVIDIA DGX-1 (1x V100 32GB) |
| Backend | PyTorch |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | TorchScript Trace |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.40 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 70.0 | 0.0 | 0.3 | 0.1 | 0.1 | 13.6 | 0.0 | 0.0 | 14.1 | 14.3 | 14.3 | 14.4 | 14.1 |
| 8 | 1 | 160.0 | 0.0 | 0.3 | 0.1 | 0.1 | 12.7 | 36.4 | 0.0 | 49.5 | 49.7 | 49.8 | 49.9 | 49.5 |
| 16 | 1 | 169.6 | 0.0 | 0.3 | 0.1 | 0.1 | 12.0 | 80.7 | 0.0 | 93.1 | 93.6 | 93.9 | 94.0 | 93.1 |
#### Offline: NVIDIA DGX A100 (1x A100 80GB), ONNX Runtime with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-------------------------------|
| GPU | NVIDIA DGX A100 (1x A100 80GB) |
| Backend | ONNX Runtime |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | ONNX |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.40 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 147.0 | 0.0 | 0.1 | 0.0 | 0.1 | 6.5 | 0.0 | 0.0 | 6.8 | 6.8 | 6.8 | 6.9 | 6.8 |
| 8 | 1 | 272.0 | 0.0 | 0.1 | 0.0 | 0.1 | 28.9 | 0.0 | 0.0 | 29.1 | 29.2 | 29.2 | 29.3 | 29.1 |
| 16 | 1 | 282.6 | 0.0 | 0.1 | 0.0 | 0.1 | 56.2 | 0.0 | 0.0 | 56.4 | 56.6 | 56.7 | 56.7 | 56.4 |
#### Offline: NVIDIA DGX A100 (1x A100 80GB), ONNX Runtime with FP16, Backend accelerator TensorRT
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-------------------------------|
| GPU | NVIDIA DGX A100 (1x A100 80GB) |
| Backend | ONNX Runtime |
| Backend accelerator | NVIDIA TensorRT |
| Precision | FP16 |
| Model format | ONNX |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | FP16 |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.41 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 265.0 | 0.0 | 0.1 | 0.0 | 0.1 | 3.5 | 0.0 | 0.0 | 3.8 | 3.8 | 3.8 | 3.9 | 3.8 |
| 8 | 1 | 512.0 | 0.0 | 0.1 | 0.0 | 0.1 | 15.2 | 0.0 | 0.0 | 15.4 | 15.5 | 15.6 | 15.6 | 15.4 |
| 16 | 1 | 544.0 | 0.0 | 0.1 | 0.0 | 0.1 | 29.2 | 0.0 | 0.0 | 29.3 | 29.6 | 29.6 | 29.7 | 29.4 |
#### Offline: NVIDIA DGX A100 (1x A100 80GB), NVIDIA TensorRT with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-------------------------------|
| GPU | NVIDIA DGX A100 (1x A100 80GB) |
| Backend | NVIDIA TensorRT |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | NVIDIA TensorRT |
| Max batch size | 16 |
| Number of model instances | 1 |
| NVIDIA TensorRT Capture CUDA Graph | Disabled |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.41 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 275.0 | 0.0 | 0.1 | 0.0 | 0.1 | 3.4 | 0.0 | 0.0 | 3.6 | 3.6 | 3.6 | 3.6 | 3.6 |
| 8 | 1 | 512.0 | 0.0 | 0.1 | 0.0 | 0.1 | 15.2 | 0.0 | 0.0 | 15.4 | 15.5 | 15.5 | 15.5 | 15.4 |
| 16 | 1 | 544.0 | 0.0 | 0.1 | 0.0 | 0.1 | 29.1 | 0.0 | 0.0 | 29.3 | 29.5 | 29.5 | 29.6 | 29.4 |
#### Offline: NVIDIA DGX A100 (1x A100 80GB), PyTorch with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-------------------------------|
| GPU | NVIDIA DGX A100 (1x A100 80GB) |
| Backend | PyTorch |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | TorchScript Trace |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.39 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 95.0 | 0.0 | 0.1 | 0.0 | 0.0 | 10.3 | 0.0 | 0.0 | 10.5 | 10.5 | 10.5 | 10.7 | 10.5 |
| 8 | 1 | 324.0 | 0.0 | 0.1 | 0.0 | 0.1 | 9.4 | 15.1 | 0.0 | 24.7 | 24.8 | 24.8 | 24.8 | 24.7 |
| 16 | 1 | 330.7 | 0.0 | 0.1 | 0.0 | 0.1 | 10.2 | 37.6 | 0.0 | 48.0 | 48.2 | 48.3 | 48.3 | 48.0 |
#### Offline: NVIDIA T4, ONNX Runtime with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:----------------|
| GPU | NVIDIA T4 |
| Backend | ONNX Runtime |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | ONNX |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.42 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 40.0 | 0.0 | 0.5 | 0.1 | 0.1 | 24.3 | 0.0 | 0.0 | 24.9 | 25.2 | 25.2 | 25.4 | 24.9 |
| 8 | 1 | 44.4 | 0.0 | 0.3 | 0.1 | 0.1 | 177.2 | 0.0 | 0.0 | 177.8 | 179.5 | 179.9 | 180.7 | 177.6 |
| 16 | 1 | 45.3 | 0.0 | 0.5 | 0.1 | 0.1 | 349.5 | 0.0 | 0.0 | 350.4 | 352.9 | 353.1 | 353.6 | 350.1 |
#### Offline: NVIDIA T4, ONNX Runtime with FP16, Backend accelerator TensorRT
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Backend |ONNX Runtime |
| Backend accelerator |NVIDIA TensorRT|
| Precision |FP16 |
| Model format |ONNX |
| Max batch size |16 |
| Number of model instances |1|
| Accelerator Precision | FP16 |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.43 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 69.0 | 0.0 | 0.3 | 0.1 | 0.1 | 13.8 | 0.0 | 0.0 | 14.3 | 14.7 | 14.7 | 14.7 | 14.3 |
| 8 | 1 | 69.3 | 0.0 | 0.5 | 0.1 | 0.1 | 114.2 | 0.0 | 0.0 | 114.8 | 115.9 | 116.3 | 116.4 | 114.8 |
#### Offline: NVIDIA T4, NVIDIA TensorRT with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:----------------|
| GPU | NVIDIA T4 |
| Backend | NVIDIA TensorRT |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | NVIDIA TensorRT |
| Max batch size | 16 |
| Number of model instances | 1 |
| NVIDIA TensorRT Capture CUDA Graph | Disabled |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.43 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 69.0 | 0.0 | 0.5 | 0.0 | 0.1 | 13.7 | 0.0 | 0.0 | 14.3 | 14.7 | 14.8 | 14.8 | 14.4 |
| 8 | 1 | 69.3 | 0.0 | 0.5 | 0.0 | 0.1 | 113.8 | 0.0 | 0.0 | 114.8 | 115.6 | 115.9 | 115.9 | 114.5 |
| 16 | 1 | 72.7 | 0.0 | 0.6 | 0.0 | 0.1 | 218.0 | 0.0 | 0.0 | 219.1 | 222.3 | 222.4 | 224.2 | 218.8 |
#### Offline: NVIDIA T4, PyTorch with FP16
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:------------------|
| GPU | NVIDIA T4 |
| Backend | PyTorch |
| Backend accelerator | - |
| Precision | FP16 |
| Model format | TorchScript Trace |
| Max batch size | 16 |
| Number of model instances | 1 |
| Accelerator Precision | - |
| Max Seq Length | 384 |
| SQuAD v1.1 F1 Score | 91.40 |
<summary>Results Table</summary>
| Batch | Concurrency | Inferences/Second | Client Send (ms) | Network+Server Send/Recv (ms) | Server Queue (ms) | Server Compute Input (ms) | Server Compute Infer (ms) | Server Compute Output (ms) | Client Recv (ms) | p50 latency (ms) | p90 latency (ms) | p95 latency (ms) | p99 latency (ms) | avg latency (ms) |
|--------:|--------------:|--------------------:|-------------------:|--------------------------------:|--------------------:|----------------------------:|----------------------------:|-----------------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|-------------------:|
| 1 | 1 | 50.0 | 0.0 | 0.5 | 0.1 | 0.1 | 7.7 | 11.6 | 0.0 | 19.9 | 20.4 | 20.6 | 20.6 | 19.9 |
| 8 | 1 | 53.0 | 0.0 | 0.5 | 0.1 | 0.1 | 7.5 | 140.2 | 0.0 | 148.3 | 149.4 | 149.5 | 150.1 | 148.4 |
| 16 | 1 | 54.4 | 0.0 | 0.4 | 0.1 | 0.1 | 7.3 | 282.6 | 0.0 | 290.6 | 292.2 | 292.6 | 293.4 | 290.5 |
## Advanced
### Prepare configuration
You can use the environment variables to set the parameters of your inference
configuration.
Triton deployment scripts support several inference runtimes listed in the table below:
Example values of some key variables in one configuration:
```
FORMAT="onnx"
PRECISION="fp16"
EXPORT_FORMAT="onnx"
EXPORT_PRECISION="fp16"
ACCELERATOR="trt"
ACCELERATOR_PRECISION="fp16"
CAPTURE_CUDA_GRAPH="0"
BATCH_SIZE="16"
MAX_BATCH_SIZE="16"
MAX_SEQ_LENGTH="384"
CHECKPOINT_VARIANT="large"
CHECKPOINT_DIR=${CHECKPOINTS_DIR}/${CHECKPOINT_VARIANT}
TRITON_MAX_QUEUE_DELAY="1"
TRITON_GPU_ENGINE_COUNT="1"
TRITON_PREFERRED_BATCH_SIZES="1"
```
| Inference runtime | Mnemonic used in scripts |
|-------------------|--------------------------|
| [TorchScript Tracing](https://pytorch.org/docs/stable/jit.html) | `ts-trace` |
| [TorchScript Scripting](https://pytorch.org/docs/stable/jit.html) | `ts-script` |
| [ONNX](https://onnx.ai) | `onnx` |
| [NVIDIA TensorRT](https://developer.nvidia.com/tensorrt) | `trt` |
The deployment process consists of the following steps.
1. Export step. We export the model into the format set by `${EXPORT_FORMAT}`, with precision set by `${EXPORT_PRECISION}`.
2. Convert step. We convert the exported model from `${EXPORT_FORMAT}` into `${FORMAT}`. The precision of the model in `${FORMAT}` is set by `${PRECISION}`.
3. Deploy step. We create the triton model repository.
The most common use-case scenario is to export the model into ONNX format, and then convert it into TensorRT.
`${ACCELERATOR}` here refers to the accelerator of the ONNX format, which can be either `trt` or `none`.
All the above values are set in the `triton/scripts/setup_parameters.sh` file.
### Step by step deployment process
Commands described below can be used for exporting, converting and profiling the model.
#### Clone Repository
IMPORTANT: This step is executed on the host computer.
<details>
<summary>Clone Repository Command</summary>
```shell
git clone https://github.com/NVIDIA/DeepLearningExamples.git
cd DeepLearningExamples/PyTorch/LanguageModeling/BERT/
```
</details>
#### Setup Environment
Set up the environment on the host computer and start Triton Inference Server.
<details>
<summary>Setup Environment Command</summary>
```shell
source ./triton/large/scripts/setup_environment.sh
./triton/large/scripts/docker/triton_inference_server.sh
```
</details>
#### Setup Container
Build and run a container that extends the NGC PyTorch container with the Triton Inference Server client libraries and dependencies.
<details>
<summary>Setup Container Command</summary>
```shell
./triton/large/scripts/docker/build.sh
./triton/large/scripts/docker/interactive.sh
```
</details>
#### Setup Parameters and Environment
Set up the environment and deployment parameters inside the interactive container.
<details>
<summary>Setup Environment Command</summary>
```shell
source ./triton/large/scripts/setup_environment.sh
```
</details>
<details>
<summary>Setup Parameters Command</summary>
```shell
source ./triton/large/scripts/setup_parameters.sh
```
</details>
#### Prepare Dataset and Checkpoint
Prepare the datasets and checkpoint manually if you are not using the automatic evaluation scripts.
<details>
<summary>Prepare Datasets Command</summary>
```shell
./triton/large/runner/prepare_datasets.sh
```
</details>
<details>
<summary>Prepare Checkpoint Command</summary>
Download checkpoint from
```
https://catalog.ngc.nvidia.com/orgs/nvidia/models/bert_pyt_ckpt_large_qa_squad11_amp
```
Create the checkpoint directory and copy the downloaded checkpoint content into it:
```shell
mkdir -p ${CHECKPOINTS_DIR}/large-qa
```
</details>
#### Export Model
Export the model from the Python source to the desired format (for example, SavedModel or TorchScript).
<details>
<summary>Export Model Command</summary>
```shell
python3 triton/export_model.py \
--input-path triton/model.py \
--input-type pyt \
--output-path ${SHARED_DIR}/exported_model.${FORMAT_SUFFIX} \
--output-type ${EXPORT_FORMAT} \
--dataloader triton/dataloader.py \
--ignore-unknown-parameters \
--onnx-opset 13 \
${FLAG} \
\
--config-file bert_configs/large.json \
--checkpoint ${CHECKPOINT_DIR}/bert_large_qa.pt \
--precision ${EXPORT_PRECISION} \
\
--vocab-file ${DATASETS_DIR}/data/google_pretrained_weights/uncased_L-24_H-1024_A-16/vocab.txt \
--max-seq-length ${MAX_SEQ_LENGTH} \
--predict-file ${DATASETS_DIR}/data/squad/v1.1/dev-v1.1.json \
--batch-size ${MAX_BATCH_SIZE}
```
</details>
#### Convert Model
Convert the model from training to inference format (e.g. TensorRT).
<details>
<summary>Convert Model Command</summary>
```shell
model-navigator convert \
--model-name ${MODEL_NAME} \
--model-path ${SHARED_DIR}/exported_model.${FORMAT_SUFFIX} \
--output-path ${SHARED_DIR}/converted_model \
--target-formats ${FORMAT} \
--target-precisions ${PRECISION} \
--launch-mode local \
--override-workspace \
--verbose \
\
--onnx-opsets 13 \
--inputs input__0:${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH}:int32 \
--inputs input__1:${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH}:int32 \
--inputs input__2:${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH}:int32 \
--min-shapes input__0=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
input__1=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
input__2=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
--max-shapes input__0=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
input__1=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
input__2=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
--opt-shapes input__0=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
input__1=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
input__2=${MAX_BATCH_SIZE},${MAX_SEQ_LENGTH} \
--max-batch-size ${MAX_BATCH_SIZE} \
--tensorrt-max-workspace-size 8589934592 \
--atol 2 output__0=5.0 \
output__1=5.0 \
--rtol 1 output__0=5.0 \
output__1=5.0
```
</details>
#### Deploy Model
Configure the model on Triton Inference Server.
Generate the configuration from your model repository.
<details>
<summary>Deploy Model Command</summary>
```shell
model-navigator triton-config-model \
--model-repository ${MODEL_REPOSITORY_PATH} \
--model-name ${MODEL_NAME} \
--model-version 1 \
--model-path ${SHARED_DIR}/converted_model \
--model-format ${CONFIG_FORMAT} \
--model-control-mode ${TRITON_LOAD_MODEL_METHOD} \
--verbose \
--load-model \
--load-model-timeout-s 100 \
\
--backend-accelerator ${ACCELERATOR} \
--tensorrt-precision ${ACCELERATOR_PRECISION} \
--max-batch-size ${MBS} \
--preferred-batch-sizes ${TRITON_PREFERRED_BATCH_SIZES} \
--max-queue-delay-us ${TRITON_MAX_QUEUE_DELAY} \
--engine-count-per-device gpu=${TRITON_GPU_ENGINE_COUNT}
```
</details>
#### Prepare Triton Profiling Data
Prepare data used for profiling on Triton server.
<details>
<summary>Prepare Triton Profiling Data Command</summary>
```shell
mkdir -p ${SHARED_DIR}/input_data
python triton/prepare_input_data.py \
--dataloader triton/dataloader.py \
--input-data-dir ${SHARED_DIR}/input_data \
\
--batch-size ${MAX_BATCH_SIZE} \
--max-seq-length ${MAX_SEQ_LENGTH} \
--predict-file ${DATASETS_DIR}/data/squad/v1.1/dev-v1.1.json \
--vocab-file ${DATASETS_DIR}/data/google_pretrained_weights/uncased_L-24_H-1024_A-16/vocab.txt
```
</details>
#### Triton Performance Offline Test
This scenario aims to maximize throughput. It assumes your data is available
for inference or that your data quickly saturates the maximum batch size.
Triton Inference Server supports offline scenarios with static batching.
Static batching allows inference requests to be served
as they are received. The largest improvements to throughput come
from increasing the batch size due to efficiency gains in the GPU with larger
batches.
<details>
<summary>Triton Performance Offline Test Command</summary>
```shell
python triton/run_performance_on_triton.py \
--model-repository ${MODEL_REPOSITORY_PATH} \
--model-name ${MODEL_NAME} \
--input-data ${SHARED_DIR}/input_data/data.json \
--input-shapes input__0:${MAX_SEQ_LENGTH} \
--input-shapes input__1:${MAX_SEQ_LENGTH} \
--input-shapes input__2:${MAX_SEQ_LENGTH} \
--batch-sizes ${BATCH_SIZE} \
--number-of-triton-instances ${TRITON_INSTANCES} \
--number-of-model-instances ${TRITON_GPU_ENGINE_COUNT} \
--batching-mode static \
--evaluation-mode offline \
--performance-tool perf_analyzer \
--result-path ${SHARED_DIR}/triton_performance_offline.csv
```
</details>
### Latency explanation
A typical Triton Inference Server pipeline can be broken down into the following steps:
1. The client serializes the inference request into a message and sends it to
the server (Client Send).
2. The message travels over the network from the client to the server (Network).
3. The message arrives at the server and is deserialized (Server Receive).
4. The request is placed on the queue (Server Queue).
5. The request is removed from the queue and computed (Server Compute).
6. The completed request is serialized in a message and sent back to
the client (Server Send).
7. The completed message then travels over the network from the server
to the client (Network).
8. The completed message is deserialized by the client and processed as
a completed inference request (Client Receive).
Generally, for local clients, steps 1-4 and 6-8 will only occupy
a small fraction of time, compared to step 5. As backend deep learning
systems like BERT are rarely exposed directly to end users, but instead
only interface with local front-end servers, we can, for the purposes of this analysis,
consider all clients to be local.
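The per-stage columns in the result tables above correspond to these steps, and the reported `avg latency` is approximately their sum. The following Python sketch (using the batch size 1 row of the NVIDIA A30, ONNX Runtime table purely as an illustration) shows the decomposition:
```python
# Per-stage latencies in ms, copied from one results row for illustration only
stages = {
    "client_send": 0.0,
    "network_server_send_recv": 0.3,
    "server_queue": 0.0,
    "server_compute_input": 0.1,
    "server_compute_infer": 9.3,
    "server_compute_output": 0.0,
    "client_recv": 0.0,
}
print(f"approximate avg latency: {sum(stages.values()):.1f} ms")  # -> 9.7 ms
```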
## Release Notes
We’re constantly refining and improving our performance on AI
and HPC workloads even on the same hardware with frequent updates
to our software stack. For our latest performance data refer
to these pages for
[AI](https://developer.nvidia.com/deep-learning-performance-training-inference)
and [HPC](https://developer.nvidia.com/hpc-application-performance) benchmarks.
### Changelog
### Known issues
- There are no known issues with this model.
|
TensorFlow/Classification/ConvNets/utils | utils | image_processing | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import tensorflow as tf
_RESIZE_MIN = 256
_DEFAULT_IMAGE_SIZE = 224
__all__ = ['preprocess_image_record', 'preprocess_image_file']
def _deserialize_image_record(record):
feature_map = {
'image/encoded': tf.io.FixedLenFeature([], tf.string, ''),
'image/class/label': tf.io.FixedLenFeature([], tf.int64, -1),
'image/class/text': tf.io.FixedLenFeature([], tf.string, ''),
'image/object/bbox/xmin': tf.io.VarLenFeature(dtype=tf.float32),
'image/object/bbox/ymin': tf.io.VarLenFeature(dtype=tf.float32),
'image/object/bbox/xmax': tf.io.VarLenFeature(dtype=tf.float32),
'image/object/bbox/ymax': tf.io.VarLenFeature(dtype=tf.float32)
}
with tf.name_scope('deserialize_image_record'):
obj = tf.io.parse_single_example(record, feature_map)
imgdata = obj['image/encoded']
label = tf.cast(obj['image/class/label'], tf.int32)
bbox = tf.stack([obj['image/object/bbox/%s' % x].values for x in ['ymin', 'xmin', 'ymax', 'xmax']])
bbox = tf.transpose(tf.expand_dims(bbox, 0), [0, 2, 1])
text = obj['image/class/text']
return imgdata, label, bbox, text
def _decode_jpeg(imgdata, channels=3):
return tf.image.decode_jpeg(imgdata, channels=channels, fancy_upscaling=False, dct_method='INTEGER_FAST')
def _crop_and_filp(image, bbox, num_channels):
sample_distorted_bounding_box = tf.image.sample_distorted_bounding_box(
tf.shape(image),
bounding_boxes=bbox,
min_object_covered=0.1,
aspect_ratio_range=[0.75, 1.33],
area_range=[0.05, 1.0],
max_attempts=100,
use_image_if_no_bounding_boxes=True
)
bbox_begin, bbox_size, _ = sample_distorted_bounding_box
offset_y, offset_x, _ = tf.unstack(bbox_begin)
target_height, target_width, _ = tf.unstack(bbox_size)
cropped = tf.image.crop_to_bounding_box(image, offset_y, offset_x, target_height, target_width)
cropped = tf.image.random_flip_left_right(cropped)
return cropped
def _central_crop(image, crop_height, crop_width):
shape = tf.shape(image)
height, width = shape[0], shape[1]
amount_to_be_cropped_h = (height - crop_height)
crop_top = amount_to_be_cropped_h // 2
amount_to_be_cropped_w = (width - crop_width)
crop_left = amount_to_be_cropped_w // 2
return tf.slice(image, [crop_top, crop_left, 0], [crop_height, crop_width, -1])
def _smallest_size_at_least(height, width, resize_min):
resize_min = tf.cast(resize_min, tf.float32)
# Convert to floats to make subsequent calculations go smoothly.
height, width = tf.cast(height, tf.float32), tf.cast(width, tf.float32)
smaller_dim = tf.minimum(height, width)
scale_ratio = resize_min / smaller_dim
# Convert back to ints to make heights and widths that TF ops will accept.
new_height = tf.cast(height * scale_ratio, tf.int32)
new_width = tf.cast(width * scale_ratio, tf.int32)
return new_height, new_width
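# Example: height=480, width=640, resize_min=256 gives scale_ratio ~= 0.533,
# so new_height=256 and new_width=341 (aspect ratio preserved, smaller side becomes 256).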
def _aspect_preserving_resize(image, resize_min):
"""Resize images preserving the original aspect ratio.
Args:
image: A 3-D image `Tensor`.
resize_min: A python integer or scalar `Tensor` indicating the size of
the smallest side after resize.
Returns:
resized_image: A 3-D tensor containing the resized image.
"""
shape = tf.shape(image)
height, width = shape[0], shape[1]
new_height, new_width = _smallest_size_at_least(height, width, resize_min)
return _resize_image(image, new_height, new_width)
def _resize_image(image, height, width):
"""Simple wrapper around tf.resize_images.
This is primarily to make sure we use the same `ResizeMethod` and other
details each time.
Args:
image: A 3-D image `Tensor`.
height: The target height for the resized image.
width: The target width for the resized image.
Returns:
resized_image: A 3-D tensor containing the resized image. The first two
dimensions have the shape [height, width].
"""
return tf.image.resize(image, [height, width], method=tf.image.ResizeMethod.BILINEAR, preserve_aspect_ratio=False)
def preprocess_image_record(record, height, width, num_channels, is_training=False):
imgdata, label, bbox, text = _deserialize_image_record(record)
label -= 1
try:
image = _decode_jpeg(imgdata, channels=3)
except:
image = tf.image.decode_image(imgdata, channels=3)
if is_training:
# For training, we want to randomize some of the distortions.
image = _crop_and_filp(image, bbox, num_channels)
image = _resize_image(image, height, width)
else:
image = _aspect_preserving_resize(image, _RESIZE_MIN)
image = _central_crop(image, height, width)
return image, label
def preprocess_image_file(filename, height, width, num_channels, is_training=False):
imgdata = tf.read_file(filename)
try:
image = _decode_jpeg(imgdata, channels=3)
except:
image = tf.image.decode_image(imgdata, channels=3)
if is_training:
# For training, we want to randomize some of the distortions.
# No ground-truth box is available when reading raw files, so use the whole image as the bounding box.
bbox = tf.constant([0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4])
image = _crop_and_filp(image, bbox, num_channels)
image = _resize_image(image, height, width)
else:
image = _aspect_preserving_resize(image, _RESIZE_MIN)
image = _central_crop(image, height, width)
return image, filename
|
TensorFlow/Detection/SSD/models/research/object_detection/data | data | pascal_label_map | item {
id: 1
name: 'aeroplane'
}
item {
id: 2
name: 'bicycle'
}
item {
id: 3
name: 'bird'
}
item {
id: 4
name: 'boat'
}
item {
id: 5
name: 'bottle'
}
item {
id: 6
name: 'bus'
}
item {
id: 7
name: 'car'
}
item {
id: 8
name: 'cat'
}
item {
id: 9
name: 'chair'
}
item {
id: 10
name: 'cow'
}
item {
id: 11
name: 'diningtable'
}
item {
id: 12
name: 'dog'
}
item {
id: 13
name: 'horse'
}
item {
id: 14
name: 'motorbike'
}
item {
id: 15
name: 'person'
}
item {
id: 16
name: 'pottedplant'
}
item {
id: 17
name: 'sheep'
}
item {
id: 18
name: 'sofa'
}
item {
id: 19
name: 'train'
}
item {
id: 20
name: 'tvmonitor'
}
|
PyTorch/SpeechRecognition/QuartzNet/common | common | utils | import numpy as np
class BenchmarkStats:
""" Tracks statistics used for benchmarking. """
def __init__(self):
self.utts = []
self.times = []
self.losses = []
def update(self, utts, times, losses):
self.utts.append(utts)
self.times.append(times)
self.losses.append(losses)
def get(self, n_epochs):
throughput = sum(self.utts[-n_epochs:]) / sum(self.times[-n_epochs:])
return {'throughput': throughput, 'benchmark_epochs_num': n_epochs,
'loss': np.mean(self.losses[-n_epochs:])}
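# Minimal usage sketch (illustration only):
#
#     stats = BenchmarkStats()
#     stats.update(utts=256, times=1.2, losses=0.74)   # called once per epoch
#     stats.update(utts=256, times=1.1, losses=0.69)
#     summary = stats.get(n_epochs=2)
#     # {'throughput': ~222.6, 'benchmark_epochs_num': 2, 'loss': ~0.715}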
|
DGLPyTorch/DrugDiscovery/SE3Transformer/se3_transformer/runtime | runtime | loggers | # Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES
# SPDX-License-Identifier: MIT
import pathlib
from abc import ABC, abstractmethod
from enum import Enum
from typing import Dict, Any, Callable, Optional
import dllogger
import torch.distributed as dist
import wandb
from dllogger import Verbosity
from se3_transformer.runtime.utils import rank_zero_only
class Logger(ABC):
@rank_zero_only
@abstractmethod
def log_hyperparams(self, params):
pass
@rank_zero_only
@abstractmethod
def log_metrics(self, metrics, step=None):
pass
@staticmethod
def _sanitize_params(params):
def _sanitize(val):
if isinstance(val, Callable):
try:
_val = val()
if isinstance(_val, Callable):
return val.__name__
return _val
except Exception:
return getattr(val, "__name__", None)
elif isinstance(val, pathlib.Path) or isinstance(val, Enum):
return str(val)
return val
return {key: _sanitize(val) for key, val in params.items()}
class LoggerCollection(Logger):
def __init__(self, loggers):
super().__init__()
self.loggers = loggers
def __getitem__(self, index):
return [logger for logger in self.loggers][index]
@rank_zero_only
def log_metrics(self, metrics, step=None):
for logger in self.loggers:
logger.log_metrics(metrics, step)
@rank_zero_only
def log_hyperparams(self, params):
for logger in self.loggers:
logger.log_hyperparams(params)
class DLLogger(Logger):
def __init__(self, save_dir: pathlib.Path, filename: str):
super().__init__()
if not dist.is_initialized() or dist.get_rank() == 0:
save_dir.mkdir(parents=True, exist_ok=True)
dllogger.init(
backends=[dllogger.JSONStreamBackend(Verbosity.DEFAULT, str(save_dir / filename))])
@rank_zero_only
def log_hyperparams(self, params):
params = self._sanitize_params(params)
dllogger.log(step="PARAMETER", data=params)
@rank_zero_only
def log_metrics(self, metrics, step=None):
if step is None:
step = tuple()
dllogger.log(step=step, data=metrics)
class WandbLogger(Logger):
def __init__(
self,
name: str,
save_dir: pathlib.Path,
id: Optional[str] = None,
project: Optional[str] = None
):
super().__init__()
if not dist.is_initialized() or dist.get_rank() == 0:
save_dir.mkdir(parents=True, exist_ok=True)
self.experiment = wandb.init(name=name,
project=project,
id=id,
dir=str(save_dir),
resume='allow',
anonymous='must')
@rank_zero_only
def log_hyperparams(self, params: Dict[str, Any]) -> None:
params = self._sanitize_params(params)
self.experiment.config.update(params, allow_val_change=True)
@rank_zero_only
def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:
if step is not None:
self.experiment.log({**metrics, 'epoch': step})
else:
self.experiment.log(metrics)
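# Minimal usage sketch (illustration only): combine backends behind one handle.
#
#     logger = LoggerCollection([
#         DLLogger(save_dir=pathlib.Path('/results'), filename='training.json'),
#     ])
#     logger.log_hyperparams({'learning_rate': 1e-3, 'num_degrees': 4})
#     logger.log_metrics({'train/loss': 0.42}, step=1)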
|
TensorFlow2/LanguageModeling/BERT/official/nlp/modeling/networks | networks | span_labeling | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Span labeling network."""
from __future__ import absolute_import
from __future__ import division
# from __future__ import google_type_annotations
from __future__ import print_function
import tensorflow as tf
@tf.keras.utils.register_keras_serializable(package='Text')
class SpanLabeling(tf.keras.Model):
"""Span labeling network head for BERT modeling.
This network implements a simple single-span labeler based on a dense layer.
Attributes:
input_width: The innermost dimension of the input tensor to this network.
activation: The activation, if any, for the dense layer in this network.
initializer: The initializer for the dense layer in this network. Defaults to
a Glorot uniform initializer.
output: The output style for this network. Can be either 'logits' or
'predictions'.
"""
def __init__(self,
input_width,
activation=None,
initializer='glorot_uniform',
output='logits',
**kwargs):
self._self_setattr_tracking = False
self._config = {
'input_width': input_width,
'activation': activation,
'initializer': initializer,
'output': output,
}
sequence_data = tf.keras.layers.Input(
shape=(None, input_width), name='sequence_data', dtype=tf.float32)
intermediate_logits = tf.keras.layers.Dense(
2, # This layer predicts start location and end location.
activation=activation,
kernel_initializer=initializer,
name='predictions/transform/logits')(
sequence_data)
self.start_logits, self.end_logits = (
tf.keras.layers.Lambda(self._split_output_tensor)(intermediate_logits))
start_predictions = tf.keras.layers.Activation(tf.nn.log_softmax)(
self.start_logits)
end_predictions = tf.keras.layers.Activation(tf.nn.log_softmax)(
self.end_logits)
if output == 'logits':
output_tensors = [self.start_logits, self.end_logits]
elif output == 'predictions':
output_tensors = [start_predictions, end_predictions]
else:
raise ValueError(
('Unknown `output` value "%s". `output` can be either "logits" or '
'"predictions"') % output)
super(SpanLabeling, self).__init__(
inputs=[sequence_data], outputs=output_tensors, **kwargs)
def _split_output_tensor(self, tensor):
transposed_tensor = tf.transpose(tensor, [2, 0, 1])
return tf.unstack(transposed_tensor)
def get_config(self):
return self._config
@classmethod
def from_config(cls, config, custom_objects=None):
return cls(**config)
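# Illustrative usage sketch (added for clarity; not part of the original module).
# The batch size, sequence length, and hidden width below are hypothetical
# placeholders.
if __name__ == '__main__':
  span_labeler = SpanLabeling(input_width=768, output='predictions')
  # A batch of 2 sequences, 4 positions each, of 768-dim encoder outputs.
  dummy_encodings = tf.random.uniform(shape=(2, 4, 768), dtype=tf.float32)
  start_log_probs, end_log_probs = span_labeler(dummy_encodings)
  print(start_log_probs.shape, end_log_probs.shape)  # (2, 4) and (2, 4)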
|
PyTorch/Recommendation/DLRM/dlrm/data | data | datasets | # Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import concurrent
import math
import os
import queue
import torch
import numpy as np
from torch.utils.data import Dataset
from typing import Optional, Sequence, Tuple, List
from dlrm.data.defaults import CATEGORICAL_CHANNEL, NUMERICAL_CHANNEL, LABEL_CHANNEL, \
DTYPE_SELECTOR, FEATURES_SELECTOR, FILES_SELECTOR
from dlrm.data.feature_spec import FeatureSpec
class SyntheticDataset(Dataset):
"""Synthetic dataset version of criteo dataset."""
def __init__(
self,
num_entries: int,
device: str = 'cuda',
batch_size: int = 32768,
numerical_features: Optional[int] = None,
categorical_feature_sizes: Optional[Sequence[int]] = None # features are returned in this order
):
cat_features_count = len(categorical_feature_sizes) if categorical_feature_sizes is not None else 0
num_features_count = numerical_features if numerical_features is not None else 0
self._batches_per_epoch = math.ceil(num_entries / batch_size)
self._num_tensor = torch.rand(size=(batch_size, num_features_count), device=device, dtype=torch.float32) \
if num_features_count > 0 else None
self._label_tensor = torch.randint(low=0, high=2, size=(batch_size,), device=device, dtype=torch.float32)
self._cat_tensor = torch.cat(
[torch.randint(low=0, high=cardinality, size=(batch_size, 1), device=device, dtype=torch.long)
for cardinality in categorical_feature_sizes], dim=1) if cat_features_count > 0 else None
def __len__(self):
return self._batches_per_epoch
def __getitem__(self, idx: int):
if idx >= self._batches_per_epoch:
raise IndexError()
return self._num_tensor, self._cat_tensor, self._label_tensor
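# Illustrative usage sketch (added for clarity; not part of the original module).
# The entry count, batch size, and feature sizes below are hypothetical placeholders.
def _synthetic_dataset_example():
    dataset = SyntheticDataset(
        num_entries=2 ** 15,
        device='cpu',
        batch_size=2048,
        numerical_features=13,
        categorical_feature_sizes=[100, 1000, 10],
    )
    numerical, categorical, label = dataset[0]
    # Shapes: (2048, 13), (2048, 3), (2048,)
    return numerical.shape, categorical.shape, label.shape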
class ParametricDataset(Dataset):
def __init__(
self,
feature_spec: FeatureSpec,
mapping: str,
batch_size: int = 1,
numerical_features_enabled: bool = False,
categorical_features_to_read: List[str] = None, # This parameter dictates order of returned features
prefetch_depth: int = 10,
drop_last_batch: bool = False,
**kwargs
):
self._feature_spec = feature_spec
self._batch_size = batch_size
self._mapping = mapping
feature_spec.check_feature_spec()
categorical_features = feature_spec.channel_spec[CATEGORICAL_CHANNEL]
numerical_features = feature_spec.channel_spec[NUMERICAL_CHANNEL]
label_features = feature_spec.channel_spec[LABEL_CHANNEL]
set_of_categorical_features = set(categorical_features)
set_of_numerical_features = set(numerical_features)
set_of_label_features = set(label_features)
set_of_categoricals_to_read = set(categorical_features_to_read)
bytes_per_feature = {feature_name: np.dtype(feature_spec.feature_spec[feature_name][DTYPE_SELECTOR]).itemsize
for feature_name in feature_spec.feature_spec.keys()}
self._numerical_features_file = None
self._label_file = None
self._numerical_bytes_per_batch = bytes_per_feature[numerical_features[0]] * \
len(numerical_features) * batch_size
self._label_bytes_per_batch = np.dtype(bool).itemsize * batch_size
self._number_of_numerical_features = len(numerical_features)
chosen_mapping = feature_spec.source_spec[mapping]
categorical_feature_files = {}
root_path = feature_spec.base_directory
number_of_batches = None
for chunk in chosen_mapping:
contained_features = chunk[FEATURES_SELECTOR]
containing_file = chunk[FILES_SELECTOR][0]
first_feature = contained_features[0]
if first_feature in set_of_categorical_features:
# Load categorical
if first_feature not in set_of_categoricals_to_read:
continue # skip chunk
path_to_open = os.path.join(root_path, containing_file)
cat_file = os.open(path_to_open, os.O_RDONLY)
bytes_per_batch = bytes_per_feature[first_feature] * self._batch_size
batch_num_float = os.fstat(cat_file).st_size / bytes_per_batch
categorical_feature_files[first_feature] = cat_file
elif first_feature in set_of_numerical_features:
# Load numerical
if not numerical_features_enabled:
continue # skip chunk
path_to_open = os.path.join(root_path, containing_file)
self._numerical_features_file = os.open(path_to_open, os.O_RDONLY)
batch_num_float = os.fstat(self._numerical_features_file).st_size / self._numerical_bytes_per_batch
elif first_feature in set_of_label_features:
# Load label
path_to_open = os.path.join(root_path, containing_file)
self._label_file = os.open(path_to_open, os.O_RDONLY)
batch_num_float = os.fstat(self._label_file).st_size / self._label_bytes_per_batch
else:
raise ValueError("Unknown chunk type")
local_number_of_batches = math.ceil(batch_num_float) if not drop_last_batch else math.floor(batch_num_float)
if number_of_batches is not None:
if local_number_of_batches != number_of_batches:
raise ValueError("Size mismatch in data files")
else:
number_of_batches = local_number_of_batches
self._categorical_features_files = None
if len(categorical_features_to_read) > 0:
self._categorical_features_files = [categorical_feature_files[feature] for feature in
categorical_features_to_read]
self._categorical_bytes_per_batch = [bytes_per_feature[feature] * self._batch_size for feature in
categorical_features_to_read]
self._categorical_types = [feature_spec.feature_spec[feature][DTYPE_SELECTOR] for feature in
categorical_features_to_read]
self._num_entries = number_of_batches
self._prefetch_depth = min(prefetch_depth, self._num_entries)
self._prefetch_queue = queue.Queue()
self._executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
def __len__(self):
return self._num_entries
def __getitem__(self, idx: int):
""" Numerical features are returned in the order they appear in the channel spec section
For performance reasons, this is required to be the order they are saved in, as specified
by the relevant chunk in source spec.
Categorical features are returned in the order they appear in the channel spec section """
if idx >= self._num_entries:
raise IndexError()
if self._prefetch_depth <= 1:
return self._get_item(idx)
# At the start, fill up the prefetching queue
if idx == 0:
for i in range(self._prefetch_depth):
self._prefetch_queue.put(self._executor.submit(self._get_item, (i)))
# Extend the prefetching window by one if not at the end of the dataset
if idx < self._num_entries - self._prefetch_depth:
self._prefetch_queue.put(self._executor.submit(self._get_item, (idx + self._prefetch_depth)))
return self._prefetch_queue.get().result()
def _get_item(self, idx: int) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[torch.Tensor]]:
click = self._get_label(idx)
numerical_features = self._get_numerical_features(idx)
categorical_features = self._get_categorical_features(idx)
return numerical_features, categorical_features, click
def _get_label(self, idx: int) -> torch.Tensor:
raw_label_data = os.pread(self._label_file, self._label_bytes_per_batch,
idx * self._label_bytes_per_batch)
array = np.frombuffer(raw_label_data, dtype=bool)
return torch.from_numpy(array).to(torch.float32)
def _get_numerical_features(self, idx: int) -> Optional[torch.Tensor]:
if self._numerical_features_file is None:
return None
raw_numerical_data = os.pread(self._numerical_features_file, self._numerical_bytes_per_batch,
idx * self._numerical_bytes_per_batch)
array = np.frombuffer(raw_numerical_data, dtype=np.float16)
return torch.from_numpy(array).view(-1, self._number_of_numerical_features)
def _get_categorical_features(self, idx: int) -> Optional[torch.Tensor]:
if self._categorical_features_files is None:
return None
categorical_features = []
for cat_bytes, cat_type, cat_file in zip(self._categorical_bytes_per_batch,
self._categorical_types,
self._categorical_features_files):
raw_cat_data = os.pread(cat_file, cat_bytes, idx * cat_bytes)
array = np.frombuffer(raw_cat_data, dtype=cat_type)
tensor = torch.from_numpy(array).unsqueeze(1).to(torch.long)
categorical_features.append(tensor)
return torch.cat(categorical_features, dim=1)
def __del__(self):
data_files = [self._label_file, self._numerical_features_file]
if self._categorical_features_files is not None:
data_files += self._categorical_features_files
for data_file in data_files:
if data_file is not None:
os.close(data_file)
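# Illustrative usage sketch (added for clarity; not part of the original module).
# It assumes `spec` describes a dataset already written to disk in the binary
# layout expected above; the mapping name and batch size are hypothetical placeholders.
def _parametric_dataset_example(spec: FeatureSpec):
    dataset = ParametricDataset(
        feature_spec=spec,
        mapping='train',
        batch_size=32768,
        numerical_features_enabled=True,
        categorical_features_to_read=spec.channel_spec[CATEGORICAL_CHANNEL],
    )
    numerical, categorical, label = dataset[0]
    return numerical, categorical, label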
|
PyTorch/Segmentation | Segmentation | README | # Segmentation
Image segmentation is the field of image processing that deals with separating an image into multiple subgroups or regions (sets of pixels, also known as image segments) that represent distinct objects or their subparts.
Nowadays, we constantly interpret the world around us through cameras and other devices. Image segmentation has therefore become an integral part of our lives, since it is an indispensable technique for teaching these devices how to process that input and understand the world around them.
In this collection, we will cover:
- What is image segmentation?
- Types of image segmentation
- How does image segmentation work?
- Use-cases and applications
- Where to get started
---
## What is image segmentation?
Image segmentation is a computer vision process by which a digital image is divided into various categories or segments. We use this method to understand what is depicted, via a pixel-wise classification of the image. It is distinct from image classification, which assigns a label to an entire image, and from object detection, which identifies and locates objects within an image by drawing bounding boxes around them. Image segmentation provides more fine-grained, pixel-level knowledge about the image content.
Consider a roadside scene with pedestrians, cars, and lights:

This photo is made up of an immense number of individual pixels, and image segmentation aims to assign each of those pixels to the object to which it belongs. Segmenting an image enables us to separate the foreground from the background, identify the precise location of a road or a car, and mark the boundaries that separate a pedestrian from a car or road.
---
## Types of image segmentation
Image segmentation tasks can be broken down into two broad categories: semantic segmentation and instance segmentation.
1. Semantic segmentation: the process of classifying each pixel as belonging to a particular label. It does not differentiate between instances of the same object. For example, if there are 2 cats in an image, semantic segmentation assigns the same label to the pixels of both cats.
2. Instance segmentation: this differs from semantic segmentation in that it gives a unique label to every instance of a particular object in the image. For example, if an image contains 3 dogs, each dog is assigned a different colour, i.e. a different label, whereas with semantic segmentation all of them would receive the same colour.
---
## How does image segmentation work?
Let's consider image segmentation as a function.
An image is given as input to the function, and it returns a matrix or mask as the output, where each element tells us which class or instance the corresponding pixel belongs to.
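As a purely illustrative sketch, the toy "mask" below assigns a hypothetical class index (0 = background, 1 = road, 2 = car) to each pixel of a 4x4 image; a real mask has the same spatial dimensions as the input image.
```python
import numpy as np
# Hypothetical class ids: 0 = background, 1 = road, 2 = car
mask = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 1, 1],
    [2, 2, 1, 1],
])
print(mask.shape)         # (4, 4): one class id per pixel
print((mask == 2).sum())  # number of pixels labelled as "car"
```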
Machine learning approaches to image segmentation train models to recognize which features of an image are crucial, rather than relying on bespoke heuristics designed by hand.
Although deep neural network architectures for image segmentation may differ in implementation, most follow a similar basic structure:

Source - [SegNet Paper](https://arxiv.org/pdf/1511.00561.pdf)
- The encoder: A set of layers that extract features of an image through a sequence of progressively narrower and deeper filters. Oftentimes, the encoder is pre-trained on a different task (like image recognition), where it learns statistical correlations from many images and may transfer that knowledge for the purposes of segmentation.
- The decoder: A set of layers that progressively grows the output of the encoder into a segmentation mask matching the pixel resolution of the input image.
- Skip connections: Long range connections in the neural network that allow the model to draw on features at varying spatial scales to improve model accuracy.
Most of the architectures used for segmentation tasks are built on the Fully Convolutional Network (FCN) technique, i.e., the network consists of convolutional layers rather than dense (fully connected) layers. Though many models follow the FCN approach, a few handpicked models commonly used in production are UNet, MaskRCNN, and DeepLabv3.
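To make the encoder-decoder structure concrete, here is a minimal, purely illustrative PyTorch sketch (not taken from any of the repositories listed below) of a tiny fully convolutional network with a single skip connection; the layer sizes are arbitrary placeholders.
```python
import torch
import torch.nn as nn
class TinySegNet(nn.Module):
    """Minimal encoder-decoder with one skip connection (illustrative only)."""
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        # After the skip connection the decoder sees 16 + 16 = 32 channels.
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, kernel_size=1)  # per-pixel class scores
    def forward(self, x):
        e1 = self.enc1(x)                 # full-resolution encoder features
        bottleneck = self.down(e1)        # downsampled, deeper features
        d1 = self.up(bottleneck)          # decoder brings features back to full resolution
        d1 = torch.cat([d1, e1], dim=1)   # skip connection
        return self.head(self.dec1(d1))   # (N, num_classes, H, W) mask logits
# A 4-image batch of 3x64x64 inputs yields a 4 x num_classes x 64 x 64 mask.
logits = TinySegNet()(torch.randn(4, 3, 64, 64))
```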
---
## Use-cases and applications
Image segmentation can be useful for a lot of different use cases: handwriting recognition, virtual try-on, visual image search, road scene segmentation, organ segmentation, and much more. Here are a few applications explained in detail:
### Autonomous vehicles:
There are a lot of things that need your attention while driving: the road, other vehicles, pedestrians, sidewalks, and (potentially) a plethora of other obstacles and safety hazards.
If you’ve been driving for a long time, noticing and reacting to this environment may seem automatic, like second nature. A self-driving car, however, needs to see, interpret, and respond to a scene in real time. This implies the need to build a pixel-level map of the world from the vehicle's camera system in order to navigate safely and efficiently.
Even though the field of autonomous vehicles involves much more than segmentation, this pixel-level understanding is an essential ingredient in making it a reality.

### Medical imaging and diagnostics:
Image segmentation is a powerful technique in the initial steps of the diagnostic and treatment pipeline for many conditions that require medical imaging, such as CT or MRI scans.
Essentially, segmentation can effectively distinguish and separate homogeneous areas that may include particularly important pixels of organs, lesions, etc. However, there are significant challenges, including low contrast, noise, and various other imaging ambiguities.

### Virtual try-on:
Virtually trying on clothes is a fascinating feature that used to require specialized in-store hardware to create a 3D model. Interestingly, with deep learning and image segmentation, the same result can be obtained from just a 2D image.

---
## Where to get started
NVIDIA provides Deep Learning Examples for Image Segmentation in its GitHub repository. These examples provide easy-to-consume, highly optimized scripts for both training and inference. The quick start guide in each repository will help you set up the environment using NGC Docker images, download pre-trained models from NGC, and adapt the model training and inference to your application or use case.
Here are the examples relevant for image segmentation, directly from [Deep Learning Examples](https://github.com/NVIDIA/DeepLearningExamples):
1. 3D UNet for Medical Image Segmentation using Tensorflow 1.x
- [Git repository](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Segmentation/UNet_3D_Medical)
- Uses TensorFlow 20.06-tf1-py3 [NGC container](https://ngc.nvidia.com/registry/nvidia-tensorflow)
2. 2D UNet for Industrial Defect Segmentation using Tensorflow 1.x
- [Git repository](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Segmentation/UNet_Industrial)
- Uses TensorFlow 20.06-tf1-py3 [NGC container](https://ngc.nvidia.com/registry/nvidia-tensorflow)
3. MaskRCNN for Common Objects Segmentation using PyTorch
- [Git repository](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Segmentation/MaskRCNN)
- Uses PyTorch 20.06-py3 [NGC container](https://ngc.nvidia.com/registry/nvidia-pytorch)
|
PyTorch/SpeechSynthesis/FastPitch/common | common | filter_warnings | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mutes known and unrelated PyTorch warnings.
The warnings module keeps a list of filters. Importing it as late as possible
prevents its filters from being overridden.
"""
import warnings
# NGC 22.04-py3 container (PyTorch 1.12.0a0+bd13bc6)
warnings.filterwarnings(
"ignore",
message='positional arguments and argument "destination" are deprecated.'
' nn.Module.state_dict will not accept them in the future.')
# 22.08-py3 container
warnings.filterwarnings(
"ignore",
message="is_namedtuple is deprecated, please use the python checks")
|
Tools/DGLPyTorch/SyntheticGraphGeneration/configurations | configurations | cora | {
"nodes": [
{
"name": "paper",
"count": 2708,
"features": [
{
"name": "w_0",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_2",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_3",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_4",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_5",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_6",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_7",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_8",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_9",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_10",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_11",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_12",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_13",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_14",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_15",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_16",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_17",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_18",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_19",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_20",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_21",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_22",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_23",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_24",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_25",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_26",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_27",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_28",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_29",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_30",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_31",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_32",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_33",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_34",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_35",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_36",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_37",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_38",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_39",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_40",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_41",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_42",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_43",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_44",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_45",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_46",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_47",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_48",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_49",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_50",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_51",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_52",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_53",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_54",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_55",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_56",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_57",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_58",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_59",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_60",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_61",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_62",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_63",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_64",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_65",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_66",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_67",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_68",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_69",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_70",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_71",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_72",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_73",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_74",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_75",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_76",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_77",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_78",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_79",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_80",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_81",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_82",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_83",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_84",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_85",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_86",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_87",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_88",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_89",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_90",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_91",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_92",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_93",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_94",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_95",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_96",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_97",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_98",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_99",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_100",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_101",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_102",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_103",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_104",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_105",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_106",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_107",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_108",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_109",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_110",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_111",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_112",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_113",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_114",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_115",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_116",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_117",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_118",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_119",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_120",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_121",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_122",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_123",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_124",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_125",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_126",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_127",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_128",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_129",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_130",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_131",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_132",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_133",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_134",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_135",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_136",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_137",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_138",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_139",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_140",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_141",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_142",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_143",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_144",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_145",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_146",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_147",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_148",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_149",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_150",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_151",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_152",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_153",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_154",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_155",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_156",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_157",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_158",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_159",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_160",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_161",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_162",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_163",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_164",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_165",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_166",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_167",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_168",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_169",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_170",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_171",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_172",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_173",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_174",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_175",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_176",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_177",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_178",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_179",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_180",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_181",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_182",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_183",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_184",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_185",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_186",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_187",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_188",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_189",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_190",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_191",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_192",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_193",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_194",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_195",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_196",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_197",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_198",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_199",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_200",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_201",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_202",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_203",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_204",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_205",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_206",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_207",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_208",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_209",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_210",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_211",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_212",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_213",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_214",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_215",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_216",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_217",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_218",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_219",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_220",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_221",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_222",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_223",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_224",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_225",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_226",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_227",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_228",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_229",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_230",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_231",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_232",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_233",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_234",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_235",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_236",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_237",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_238",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_239",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_240",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_241",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_242",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_243",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_244",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_245",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_246",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_247",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_248",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_249",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_250",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_251",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_252",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_253",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_254",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_255",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_256",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_257",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_258",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_259",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_260",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_261",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_262",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_263",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_264",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_265",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_266",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_267",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_268",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_269",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_270",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_271",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_272",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_273",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_274",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_275",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_276",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_277",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_278",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_279",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_280",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_281",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_282",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_283",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_284",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_285",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_286",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_287",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_288",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_289",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_290",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_291",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_292",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_293",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_294",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_295",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_296",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_297",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_298",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_299",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_300",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_301",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_302",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_303",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_304",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_305",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_306",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_307",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_308",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_309",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_310",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_311",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_312",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_313",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_314",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_315",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_316",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_317",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_318",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_319",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_320",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_321",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_322",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_323",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_324",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_325",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_326",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_327",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_328",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_329",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_330",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_331",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_332",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_333",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_334",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_335",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_336",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_337",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_338",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_339",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_340",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_341",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_342",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_343",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_344",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_345",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_346",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_347",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_348",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_349",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_350",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_351",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_352",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_353",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_354",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_355",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_356",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_357",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_358",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_359",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_360",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_361",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_362",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_363",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_364",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_365",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_366",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_367",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_368",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_369",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_370",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_371",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_372",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_373",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_374",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_375",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_376",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_377",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_378",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_379",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_380",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_381",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_382",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_383",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_384",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_385",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_386",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_387",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_388",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_389",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_390",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_391",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_392",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_393",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_394",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_395",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_396",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_397",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_398",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_399",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_400",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_401",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_402",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_403",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_404",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_405",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_406",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_407",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_408",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_409",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_410",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_411",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_412",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_413",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_414",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_415",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_416",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_417",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_418",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_419",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_420",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_421",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_422",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_423",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_424",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_425",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_426",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_427",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_428",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_429",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_430",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_431",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_432",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_433",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_434",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_435",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_436",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_437",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_438",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_439",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_440",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_441",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_442",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_443",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_444",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_445",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_446",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_447",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_448",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_449",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_450",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_451",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_452",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_453",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_454",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_455",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_456",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_457",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_458",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_459",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_460",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_461",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_462",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_463",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_464",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_465",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_466",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_467",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_468",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_469",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_470",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_471",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_472",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_473",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_474",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_475",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_476",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_477",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_478",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_479",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_480",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_481",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_482",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_483",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_484",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_485",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_486",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_487",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_488",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_489",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_490",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_491",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_492",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_493",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_494",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_495",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_496",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_497",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_498",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_499",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_500",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_501",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_502",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_503",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_504",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_505",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_506",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_507",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_508",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_509",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_510",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_511",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_512",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_513",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_514",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_515",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_516",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_517",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_518",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_519",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_520",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_521",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_522",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_523",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_524",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_525",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_526",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_527",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_528",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_529",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_530",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_531",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_532",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_533",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_534",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_535",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_536",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_537",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_538",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_539",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_540",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_541",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_542",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_543",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_544",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_545",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_546",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_547",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_548",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_549",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_550",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_551",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_552",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_553",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_554",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_555",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_556",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_557",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_558",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_559",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_560",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_561",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_562",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_563",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_564",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_565",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_566",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_567",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_568",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_569",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_570",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_571",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_572",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_573",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_574",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_575",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_576",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_577",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_578",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_579",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_580",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_581",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_582",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_583",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_584",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_585",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_586",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_587",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_588",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_589",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_590",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_591",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_592",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_593",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_594",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_595",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_596",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_597",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_598",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_599",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_600",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_601",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_602",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_603",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_604",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_605",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_606",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_607",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_608",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_609",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_610",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_611",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_612",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_613",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_614",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_615",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_616",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_617",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_618",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_619",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_620",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_621",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_622",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_623",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_624",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_625",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_626",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_627",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_628",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_629",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_630",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_631",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_632",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_633",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_634",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_635",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_636",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_637",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_638",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_639",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_640",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_641",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_642",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_643",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_644",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_645",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_646",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_647",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_648",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_649",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_650",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_651",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_652",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_653",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_654",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_655",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_656",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_657",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_658",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_659",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_660",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_661",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_662",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_663",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_664",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_665",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_666",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_667",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_668",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_669",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_670",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_671",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_672",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_673",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_674",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_675",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_676",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_677",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_678",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_679",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_680",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_681",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_682",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_683",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_684",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_685",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_686",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_687",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_688",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_689",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_690",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_691",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_692",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_693",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_694",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_695",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_696",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_697",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_698",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_699",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_700",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_701",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_702",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_703",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_704",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_705",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_706",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_707",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_708",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_709",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_710",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_711",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_712",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_713",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_714",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_715",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_716",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_717",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_718",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_719",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_720",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_721",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_722",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_723",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_724",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_725",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_726",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_727",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_728",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_729",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_730",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_731",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_732",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_733",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_734",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_735",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_736",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_737",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_738",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_739",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_740",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_741",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_742",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_743",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_744",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_745",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_746",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_747",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_748",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_749",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_750",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_751",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_752",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_753",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_754",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_755",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_756",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_757",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_758",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_759",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_760",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_761",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_762",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_763",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_764",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_765",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_766",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_767",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_768",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_769",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_770",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_771",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_772",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_773",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_774",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_775",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_776",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_777",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_778",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_779",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_780",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_781",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_782",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_783",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_784",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_785",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_786",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_787",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_788",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_789",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_790",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_791",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_792",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_793",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_794",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_795",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_796",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_797",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_798",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_799",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_800",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_801",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_802",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_803",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_804",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_805",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_806",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_807",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_808",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_809",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_810",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_811",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_812",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_813",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_814",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_815",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_816",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_817",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_818",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_819",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_820",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_821",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_822",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_823",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_824",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_825",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_826",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_827",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_828",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_829",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_830",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_831",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_832",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_833",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_834",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_835",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_836",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_837",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_838",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_839",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_840",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_841",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_842",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_843",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_844",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_845",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_846",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_847",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_848",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_849",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_850",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_851",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_852",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_853",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_854",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_855",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_856",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_857",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_858",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_859",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_860",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_861",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_862",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_863",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_864",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_865",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_866",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_867",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_868",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_869",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_870",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_871",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_872",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_873",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_874",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_875",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_876",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_877",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_878",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_879",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_880",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_881",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_882",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_883",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_884",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_885",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_886",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_887",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_888",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_889",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_890",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_891",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_892",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_893",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_894",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_895",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_896",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_897",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_898",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_899",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_900",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_901",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_902",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_903",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_904",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_905",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_906",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_907",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_908",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_909",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_910",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_911",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_912",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_913",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_914",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_915",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_916",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_917",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_918",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_919",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_920",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_921",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_922",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_923",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_924",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_925",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_926",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_927",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_928",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_929",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_930",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_931",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_932",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_933",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_934",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_935",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_936",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_937",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_938",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_939",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_940",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_941",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_942",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_943",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_944",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_945",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_946",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_947",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_948",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_949",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_950",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_951",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_952",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_953",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_954",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_955",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_956",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_957",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_958",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_959",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_960",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_961",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_962",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_963",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_964",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_965",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_966",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_967",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_968",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_969",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_970",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_971",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_972",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_973",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_974",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_975",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_976",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_977",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_978",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_979",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_980",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_981",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_982",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_983",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_984",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_985",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_986",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_987",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_988",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_989",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_990",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_991",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_992",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_993",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_994",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_995",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_996",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_997",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_998",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_999",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1000",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1001",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1002",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1003",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1004",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1005",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1006",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1007",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1008",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1009",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1010",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1011",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1012",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1013",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1014",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1015",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1016",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1017",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1018",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1019",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1020",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1021",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1022",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1023",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1024",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1025",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1026",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1027",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1028",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1029",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1030",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1031",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1032",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1033",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1034",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1035",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1036",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1037",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1038",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1039",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1040",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1041",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1042",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1043",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1044",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1045",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1046",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1047",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1048",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1049",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1050",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1051",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1052",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1053",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1054",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1055",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1056",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1057",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1058",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1059",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1060",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1061",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1062",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1063",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1064",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1065",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1066",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1067",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1068",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1069",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1070",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1071",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1072",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1073",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1074",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1075",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1076",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1077",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1078",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1079",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1080",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1081",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1082",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1083",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1084",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1085",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1086",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1087",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1088",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1089",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1090",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1091",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1092",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1093",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1094",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1095",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1096",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1097",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1098",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1099",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1100",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1101",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1102",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1103",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1104",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1105",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1106",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1107",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1108",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1109",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1110",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1111",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1112",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1113",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1114",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1115",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1116",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1117",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1118",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1119",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1120",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1121",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1122",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1123",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1124",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1125",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1126",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1127",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1128",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1129",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1130",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1131",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1132",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1133",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1134",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1135",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1136",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1137",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1138",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1139",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1140",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1141",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1142",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1143",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1144",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1145",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1146",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1147",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1148",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1149",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1150",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1151",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1152",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1153",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1154",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1155",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1156",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1157",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1158",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1159",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1160",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1161",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1162",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1163",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1164",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1165",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1166",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1167",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1168",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1169",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1170",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1171",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1172",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1173",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1174",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1175",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1176",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1177",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1178",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1179",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1180",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1181",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1182",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1183",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1184",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1185",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1186",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1187",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1188",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1189",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1190",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1191",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1192",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1193",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1194",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1195",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1196",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1197",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1198",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1199",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1200",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1201",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1202",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1203",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1204",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1205",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1206",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1207",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1208",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1209",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1210",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1211",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1212",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1213",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1214",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1215",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1216",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1217",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1218",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1219",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1220",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1221",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1222",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1223",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1224",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1225",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1226",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1227",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1228",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1229",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1230",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1231",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1232",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1233",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1234",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1235",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1236",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1237",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1238",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1239",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1240",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1241",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1242",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1243",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1244",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1245",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1246",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1247",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1248",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1249",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1250",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1251",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1252",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1253",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1254",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1255",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1256",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1257",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1258",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1259",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1260",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1261",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1262",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1263",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1264",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1265",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1266",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1267",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1268",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1269",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1270",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1271",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1272",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1273",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1274",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1275",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1276",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1277",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1278",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1279",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1280",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1281",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1282",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1283",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1284",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1285",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1286",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1287",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1288",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1289",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1290",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1291",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1292",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1293",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1294",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1295",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1296",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1297",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1298",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1299",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1300",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1301",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1302",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1303",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1304",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1305",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1306",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1307",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1308",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1309",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1310",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1311",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1312",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1313",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1314",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1315",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1316",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1317",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1318",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1319",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1320",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1321",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1322",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1323",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1324",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1325",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1326",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1327",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1328",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1329",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1330",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1331",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1332",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1333",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1334",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1335",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1336",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1337",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1338",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1339",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1340",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1341",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1342",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1343",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1344",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1345",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1346",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1347",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1348",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1349",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1350",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1351",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1352",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1353",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1354",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1355",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1356",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1357",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1358",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1359",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1360",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1361",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1362",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1363",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1364",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1365",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1366",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1367",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1368",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1369",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1370",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1371",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1372",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1373",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1374",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1375",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1376",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1377",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1378",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1379",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1380",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1381",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1382",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1383",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1384",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1385",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1386",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1387",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1388",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1389",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1390",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1391",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1392",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1393",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1394",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1395",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1396",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1397",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1398",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1399",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1400",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1401",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1402",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1403",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1404",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1405",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1406",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1407",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1408",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1409",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1410",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1411",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1412",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1413",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1414",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1415",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1416",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1417",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1418",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1419",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1420",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1421",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1422",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1423",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1424",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1425",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1426",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1427",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1428",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1429",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1430",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1431",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "w_1432",
"dtype": "int64",
"feature_type": "categorical"
},
{
"name": "label",
"dtype": "int64",
"feature_type": "categorical"
}
],
"features_path": "paper.parquet",
"[gen]tabular_generators": [
{
"type": "kde",
"features_list": -1,
"data_source": {
"type": "cfg",
"path": "/workspace/data/cora/syngen_preprocessed",
"name": "paper"
},
"params": {}
}
]
}
],
"edges": [
{
"name": "cite",
"count": 5428,
"src_node_type": "paper",
"dst_node_type": "paper",
"directed": false,
"features": [],
"features_path": null,
"structure_path": "cite_edge_list.parquet",
"[gen]structure_generator": {
"type": "RMAT",
"data_source": {
"type": "cfg",
"path": "/workspace/data/cora/syngen_preprocessed",
"name": ["paper", "cite", "paper"]
},
"params": {
"seed": 42,
"has_self_loop": false
}
}
}
],
"[gen]aligners": [
{
"type": "xgboost",
"graphs": ["cite"],
"edges": {},
"nodes": {
"paper": ["label"]
},
"params": {}
}
]
} |
PyTorch/DrugDiscovery/MoFlow/moflow/model | model | basic | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright 2020 Chengxi Zang
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import math
from typing import Tuple
import numpy as np
from scipy import linalg as la
import torch
from torch import nn
from torch.nn import functional as F
from moflow.runtime.distributed_utils import get_world_size, reduce_tensor
class ActNorm(nn.Module):
def __init__(self, num_channels, num_dims, channels_dim=1):
super().__init__()
self.num_channels = num_channels
self.num_dims = num_dims
self.channels_dim = channels_dim
self.shape = [1] * num_dims
self.shape[channels_dim] = num_channels
self.loc = nn.Parameter(torch.zeros(*self.shape))
self.scale = nn.Parameter(torch.ones(*self.shape))
self.register_buffer('initialized', torch.tensor(0, dtype=torch.uint8))
self.register_buffer('num_elements', torch.tensor(0, dtype=torch.uint8))
@torch.jit.ignore
def initialize(self, input):
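        # Data-dependent initialization (as in Glow's ActNorm): on the first batch, loc/scale are set
        # so that the transformed activations have zero mean and unit variance per channel.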
if self.initialized.item() == 1:
return
dims = list(input.shape[1:])
del dims[self.channels_dim -1]
num_elems = math.prod(dims)
permutation = [self.channels_dim] + [i for i in range(self.num_dims) if i != self.channels_dim]
with torch.no_grad():
flatten = input.permute(*permutation).contiguous().view(self.num_channels, -1)
mean = flatten.mean(1).view(self.shape)
std = flatten.std(1).view(self.shape)
num_gpus = get_world_size()
mean = reduce_tensor(mean, num_gpus)
std = reduce_tensor(std, num_gpus)
self.loc.data.copy_(-mean)
self.scale.data.copy_(1 / (std + 1e-6))
self.initialized.fill_(1)
self.num_elements.fill_(num_elems)
def forward(self, input):
log_abs = torch.log(torch.abs(self.scale))
logdet = self.num_elements * torch.sum(log_abs)
return self.scale * (input + self.loc), logdet
@torch.jit.export
def reverse(self, output):
return output / self.scale - self.loc
class InvConv2d(nn.Module):
def __init__(self, in_channel):
super().__init__()
weight = torch.randn(in_channel, in_channel)
q, _ = torch.qr(weight)
weight = q.unsqueeze(2).unsqueeze(3)
self.weight = nn.Parameter(weight)
def forward(self, input):
_, _, height, width = input.shape
out = F.conv2d(input, self.weight)
logdet = (
height * width * torch.slogdet(self.weight.squeeze().double())[1].float()
)
return out, logdet
def reverse(self, output):
return F.conv2d(
output, self.weight.squeeze().inverse().unsqueeze(2).unsqueeze(3)
)
class InvConv2dLU(nn.Module):
def __init__(self, in_channel):
super().__init__()
weight = np.random.randn(in_channel, in_channel)
q, _ = la.qr(weight)
w_p, w_l, w_u = la.lu(q.astype(np.float32))
w_s = np.diag(w_u)
w_u = np.triu(w_u, 1)
u_mask = np.triu(np.ones_like(w_u), 1)
l_mask = u_mask.T
w_p = torch.from_numpy(w_p)
w_l = torch.from_numpy(w_l).contiguous()
w_s = torch.from_numpy(w_s)
w_u = torch.from_numpy(w_u)
self.register_buffer('w_p', w_p)
self.register_buffer('u_mask', torch.from_numpy(u_mask))
self.register_buffer('l_mask', torch.from_numpy(l_mask))
self.register_buffer('s_sign', torch.sign(w_s))
self.register_buffer('l_eye', torch.eye(l_mask.shape[0]))
self.w_l = nn.Parameter(w_l)
self.w_s = nn.Parameter(torch.log(torch.abs(w_s)))
self.w_u = nn.Parameter(w_u)
def forward(self, input):
_, _, height, width = input.shape
weight = self.calc_weight()
out = F.conv2d(input, weight)
logdet = height * width * torch.sum(self.w_s)
return out, logdet
def calc_weight(self):
weight = (
self.w_p
@ (self.w_l * self.l_mask + self.l_eye)
@ ((self.w_u * self.u_mask) + torch.diag(self.s_sign * torch.exp(self.w_s)))
)
return weight.unsqueeze(2).unsqueeze(3)
def reverse(self, output):
weight = self.calc_weight()
dtype = weight.dtype
weight = weight.float()
weight_inv = weight.squeeze().inverse().unsqueeze(2).unsqueeze(3)
weight_inv = weight_inv.to(dtype=dtype)
return F.conv2d(output, weight_inv)
class GraphConv(nn.Module):
def __init__(self, in_channels, out_channels, num_atoms, num_edge_type=4):
super(GraphConv, self).__init__()
self.graph_linear_self = nn.Linear(in_channels, out_channels)
self.graph_linear_edge = nn.Linear(in_channels, out_channels * num_edge_type)
self.num_edge_type = num_edge_type
self.in_ch = in_channels
self.out_ch = out_channels
self.num_atoms = num_atoms
def forward(self, graph: Tuple[torch.Tensor, torch.Tensor]) -> torch.Tensor:
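        # Shapes inferred from the einsum below (an explanatory note, not part of the original code):
        # adj: (batch, num_edge_type, num_atoms, num_atoms) adjacency per edge type;
        # nodes: (batch, num_atoms, in_ch) node features.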
adj, nodes = graph
hs = self.graph_linear_self(nodes)
m = self.graph_linear_edge(nodes)
m = m.view(-1, self.num_atoms, self.out_ch, self.num_edge_type)
hr = torch.einsum('bemn,bnce->bmc', adj, m)
hr = hr.unsqueeze(2)
return hs + hr
|
PyTorch/Forecasting/TFT/triton/deployment_toolkit | deployment_toolkit | warmup | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pathlib
from distutils.version import LooseVersion
from importlib.metadata import version
from typing import List
TRITON_CLIENT_VERSION = LooseVersion(version("tritonclient"))
# method from PEP-366 to support relative import in executed modules
if __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .core import BatchingMode, EvaluationMode, MeasurementMode, OfflineMode
from .perf_analyzer import PerfAnalyzer, PerfAnalyzerConfig
from .utils import parse_server_url
LOGGER = logging.getLogger("warmup")
def performance_evaluation_warmup(
server_url: str,
model_name: str,
batch_sizes: List[int],
number_of_triton_instances: int,
number_of_model_instances: int,
input_data: str,
input_shapes: List[str],
measurement_mode: MeasurementMode,
measurement_interval: int,
measurement_request_count: int,
batching_mode: BatchingMode,
offline_mode: OfflineMode,
evaluation_mode: EvaluationMode,
output_shared_memory_size: int,
):
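    # Warmup uses doubled measurement windows/request counts so the server reaches a
    # steady state before the actual benchmark is run.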
protocol, host, port = parse_server_url(server_url)
measurement_interval = 2 * measurement_interval
measurement_request_count = 2 * measurement_request_count
if batching_mode == BatchingMode.STATIC:
if len(batch_sizes) == 1:
batch_sizes = {batch_sizes[0]}
else:
batch_sizes = sorted({1, batch_sizes[-1]})
max_concurrency = 1
min_concurrency = 1
step = 1
elif batching_mode == BatchingMode.DYNAMIC:
max_batch_size = max(batch_sizes)
max_total_requests = 2 * max_batch_size * number_of_triton_instances * number_of_model_instances
max_concurrency = min(256, max_total_requests)
step = max(1, max_concurrency // 2)
min_concurrency = step
batch_sizes = [max(1, max_total_requests // 256)]
else:
raise ValueError(f"Unsupported batching mode: {batching_mode}")
for batch_size in batch_sizes:
for concurrency in range(min_concurrency, max_concurrency + step, step):
params = {
"model-name": model_name,
"model-version": 1,
"batch-size": batch_size,
"url": f"{host}:{port}",
"protocol": protocol,
"input-data": input_data,
"measurement-interval": measurement_interval,
"concurrency-range": f"{concurrency}:{concurrency}:1",
}
if TRITON_CLIENT_VERSION >= LooseVersion("2.11.0"):
params["measurement-mode"] = measurement_mode.value
params["measurement-request-count"] = measurement_request_count
if evaluation_mode == EvaluationMode.OFFLINE:
params["shared-memory"] = offline_mode.value
params["output-shared-memory-size"] = output_shared_memory_size
config = PerfAnalyzerConfig()
for param, value in params.items():
config[param] = value
for shape in input_shapes:
config["shape"] = shape
perf_analyzer = PerfAnalyzer(config=config)
perf_analyzer.run()
|
TensorFlow2/Recommendation/WideAndDeep/triton | triton | run_performance_on_triton | #!/usr/bin/env python3
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import pathlib
# method from PEP-366 to support relative import in executed modules
if __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .deployment_toolkit.core import EvaluationMode, MeasurementMode, OfflineMode, PerformanceTool
from .deployment_toolkit.triton_performance_runner import TritonPerformanceRunner
LOGGER = logging.getLogger("run_performance_on_triton")
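# Example invocation (a sketch only; the model name, shapes, and paths below are illustrative):
#   python triton/run_performance_on_triton.py \
#       --model-name my_model --result-path ./results \
#       --batch-sizes 1 16 64 --concurrency 1 8 \
#       --input-shapes input_1:-1 --warmup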
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"--model-name",
type=str,
required=True,
help="Name of the model to test",
)
parser.add_argument(
"--result-path",
type=pathlib.Path,
required=True,
help="Path where results files is stored.",
)
parser.add_argument(
"--server-url",
type=str,
default="http://127.0.0.1:8000",
help="Url to Triton server",
)
parser.add_argument(
"--model-version",
type=str,
default=1,
help="Version of model",
)
parser.add_argument(
"--input-data",
type=str,
default="random",
help="Input data to perform profiling.",
)
parser.add_argument(
"--input-shapes",
action="append",
help="Input data shape in form INPUT_NAME:<full_shape_without_batch_axis>.",
)
parser.add_argument(
"--batch-sizes",
type=int,
default=[1],
help="List of batch sizes to tests.",
nargs="*",
)
parser.add_argument(
"--concurrency",
type=int,
default=[1],
help="List of concurrency modes.",
nargs="*",
)
parser.add_argument(
"--measurement-mode",
choices=[item.value for item in MeasurementMode],
default=MeasurementMode.COUNT_WINDOWS.value,
type=str,
help="Select measurement mode "
"'time_windows' stabilize performance on measurement window. "
"'count_windows' stabilize performance on number of samples.",
)
parser.add_argument(
"--measurement-interval",
help="Time window perf_analyzer will wait to stabilize the measurement",
default=5000,
type=int,
)
parser.add_argument(
"--measurement-request-count",
help="Number of samples on which perf_analyzer will stabilize the measurement",
default=50,
type=int,
)
parser.add_argument(
"--evaluation-mode",
choices=[item.value for item in EvaluationMode],
default=EvaluationMode.OFFLINE.value,
type=str,
help="Select evaluation mode "
"'offline' run offline analysis and use GPU memory to pass tensors. "
"'online' run online analysis and use HTTP protocol.",
)
parser.add_argument(
"--offline-mode",
choices=[item.value for item in OfflineMode],
default=OfflineMode.SYSTEM.value,
type=str,
help="Select offline mode "
"'system' pass tensors through CPU RAM memory. "
"'cuda' pass tensors through GPU RAM memory.",
)
parser.add_argument(
"--output-shared-memory-size",
default=102400,
type=int,
help="Size of memory buffer allocated for output with dynamic shapes in bytes. "
"Has to be equal to maximal size of output tensor.",
)
parser.add_argument(
"--performance-tool",
choices=[item.value for item in PerformanceTool],
default=PerformanceTool.MODEL_ANALYZER.value,
type=str,
help="Select performance tool for measurement mode "
"'model_analyzer' use Model Analyzer "
"'perf_analyzer' use Perf Analyzer",
)
parser.add_argument(
"--model-repository",
default=None,
type=str,
help="Path to model repository. Valid when using Model Analyzer",
)
parser.add_argument(
"--warmup",
help="Enable model warmup before performance test",
action="store_true",
default=False,
)
parser.add_argument(
"--timeout",
help="Timeout for performance analysis",
type=int,
default=None,
required=False,
)
parser.add_argument(
"-v",
"--verbose",
help="Verbose logs",
action="store_true",
default=False,
)
args = parser.parse_args()
log_level = logging.INFO if not args.verbose else logging.DEBUG
log_format = "%(asctime)s %(levelname)s %(name)s %(message)s"
logging.basicConfig(level=log_level, format=log_format)
runner = TritonPerformanceRunner(
server_url=args.server_url,
model_name=args.model_name,
input_data=args.input_data,
input_shapes=args.input_shapes or [],
batch_sizes=args.batch_sizes,
measurement_mode=MeasurementMode(args.measurement_mode),
measurement_interval=args.measurement_interval,
measurement_request_count=args.measurement_request_count,
concurrency=args.concurrency,
evaluation_mode=EvaluationMode(args.evaluation_mode),
offline_mode=OfflineMode(args.offline_mode),
output_shared_memory_size=args.output_shared_memory_size,
performance_tool=PerformanceTool(args.performance_tool),
model_repository=args.model_repository,
result_path=args.result_path,
warmup=args.warmup,
timeout=args.timeout,
verbose=args.verbose,
)
runner.run()
if __name__ == "__main__":
main()
|
PyTorch/SpeechRecognition/Jasper/triton/model_repo_configs/fp16/jasper-onnx-ensemble | jasper-onnx-ensemble | config | name: "jasper-onnx-ensemble"
platform: "ensemble"
max_batch_size: 8#MAX_BATCH
input {
name: "AUDIO_SIGNAL"
data_type: TYPE_FP16
dims: -1#AUDIO_LENGTH
}
input {
name: "NUM_SAMPLES"
data_type: TYPE_INT32
dims: [ 1 ]
}
output {
name: "TRANSCRIPT"
data_type: TYPE_INT32
dims: [-1]
}
ensemble_scheduling {
step {
model_name: "feature-extractor-ts-trace"
model_version: -1
input_map {
key: "input__0"
value: "AUDIO_SIGNAL"
}
input_map {
key: "input__1"
value: "NUM_SAMPLES"
}
output_map {
key: "output__0"
value: "AUDIO_FEATURES"
}
}
step {
model_name: "jasper-onnx"
model_version: -1
input_map {
key: "input__0"
value: "AUDIO_FEATURES"
}
output_map {
key: "output__0"
value: "CHARACTER_PROBABILITIES"
}
}
step {
model_name: "decoder-ts-script"
model_version: -1
input_map {
key: "input__0"
value: "CHARACTER_PROBABILITIES"
}
output_map {
key: "output__0"
value: "TRANSCRIPT"
}
}
}
|
PyTorch/SpeechRecognition/wav2vec2/utils | utils | libri_labels | #!/usr/bin/env python3
#
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Helper script to generate letter (.ltr) and word (.wrd) label files for a flashlight (previously called wav2letter++) dataset
"""
import argparse
import os
def main():
parser = argparse.ArgumentParser()
parser.add_argument("tsv")
parser.add_argument("--output-dir", required=True)
parser.add_argument("--output-name", required=True)
args = parser.parse_args()
os.makedirs(args.output_dir, exist_ok=True)
transcriptions = {}
with open(args.tsv, "r") as tsv, open(
os.path.join(args.output_dir, args.output_name + ".ltr"), "w"
) as ltr_out, open(
os.path.join(args.output_dir, args.output_name + ".wrd"), "w"
) as wrd_out:
root = next(tsv).strip()
for line in tsv:
line = line.strip()
dir = os.path.dirname(line)
if dir not in transcriptions:
parts = dir.split(os.path.sep)
trans_path = f"{parts[-2]}-{parts[-1]}.trans.txt"
path = os.path.join(root, dir, trans_path)
assert os.path.exists(path), f"File {path} does not exist."
texts = {}
with open(path, "r") as trans_f:
for tline in trans_f:
items = tline.strip().split()
texts[items[0]] = " ".join(items[1:])
transcriptions[dir] = texts
part = os.path.basename(line).split(".")[0]
assert part in transcriptions[dir]
print(transcriptions[dir][part], file=wrd_out)
print(
" ".join(list(transcriptions[dir][part].replace(" ", "|"))) + " |",
file=ltr_out,
)
if __name__ == "__main__":
main()
|
TensorFlow/Segmentation/UNet_Industrial/scripts | scripts | UNet_FP32_EVAL_XLA | #!/usr/bin/env bash
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script launches UNet evaluation in FP32 on 1 GPU with a batch size of 16
# Usage ./UNet_FP32_EVAL_XLA.sh <path to result repository> <path to dataset> <dagm classID (1-10)>
BASEDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
export TF_CPP_MIN_LOG_LEVEL=3
python "${BASEDIR}/../main.py" \
--unet_variant='tinyUNet' \
--activation_fn='relu' \
--exec_mode='evaluate' \
--iter_unit='epoch' \
--num_iter=1 \
--batch_size=16 \
--warmup_step=10 \
--results_dir="${1}" \
--data_dir="${2}" \
--dataset_name='DAGM2007' \
--dataset_classID="${3}" \
--data_format='NCHW' \
--use_auto_loss_scaling \
--nouse_tf_amp \
--use_xla \
--learning_rate=1e-4 \
--learning_rate_decay_factor=0.8 \
--learning_rate_decay_steps=500 \
--rmsprop_decay=0.9 \
--rmsprop_momentum=0.8 \
--loss_fn_name='adaptive_loss' \
--weight_decay=1e-5 \
--weight_init_method='he_uniform' \
--augment_data \
--display_every=50 \
--debug_verbosity=0
|
TensorFlow2/Segmentation/Contrib/UNet3P/configs | configs | README | Here we provide **overview** of our config file and how you can use your own custom settings's for training and
evaluation.
We are using [Hydra](https://hydra.cc/) for passing configurations. Hydra is a framework for elegantly configuring
complex applications. In Hydra you can easily [extend](https://hydra.cc/docs/patterns/extending_configs/)
and [interpolate](https://hydra.cc/docs/advanced/override_grammar/basic/#primitives) `yaml` config files.
#### Override Hydra config from command line
[Here](https://hydra.cc/docs/1.0/advanced/override_grammar/basic/) you can read how to pass or override configurations
through the command line. In short:
###### Override higher level attribute
Directly access the key and override its value
- For instance, to override the data generator, pass `DATA_GENERATOR_TYPE=DALI_GENERATOR`
###### Override nested attribute
Use `.` to access nested keys
- For instance, to override the model type, pass `MODEL.TYPE=unet3plus`
- To override the model backbone, pass `MODEL.BACKBONE.TYPE=vgg19`
To add a new element from the command line, prefix the attribute name with `+`, e.g. `+warmup_steps=50`, because
`warmup_steps` is not present in the config file. A combined example is shown below.
> Note: Don't add spaces between list elements; they cause problems with Hydra.
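For example, assuming the training entry point is `train.py` (adjust the script name to your setup), multiple overrides
can be combined in one command:

```bash
python train.py \
    MODEL.TYPE=unet3plus \
    MODEL.BACKBONE.TYPE=vgg19 \
    DATA_GENERATOR_TYPE=DALI_GENERATOR \
    HYPER_PARAMETERS.BATCH_SIZE=4 \
    +warmup_steps=50
```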
Most of the configuration attributes in our [config](./../configs/config.yaml) are self-explanatory. Additional
comments are added for the less obvious ones.
You can override configurations from the command line too, but it is **advisable to override them in the config file**
because that is easier to manage.
By default, Hydra stores a log file for each run in a separate directory. We have disabled this behavior;
if you want to keep a record of each run's configuration, comment out the settings at the end of the config
file.
```yaml
# project root working directory, automatically read by hydra (.../UNet3P)
WORK_DIR: ${hydra:runtime.cwd}
DATA_PREPARATION:
# unprocessed LiTS scan data paths, for custom data training skip this section details
SCANS_TRAIN_DATA_PATH: "/data/Training Batch 2/"
...
DATASET:
# training data paths, should be relative from project root path
TRAIN:
IMAGES_PATH: "/data/train/images"
...
MODEL:
# available variants are unet3plus, unet3plus_deepsup, unet3plus_deepsup_cgm
TYPE: "unet3plus"
BACKBONE:
...
...
DATA_GENERATOR_TYPE: "DALI_GENERATOR" # options are TF_GENERATOR or DALI_GENERATOR
SHOW_CENTER_CHANNEL_IMAGE: True # only true for UNet3+. for custom dataset it should be False
# Model input shape
INPUT:
HEIGHT: 320
...
# Model output classes
OUTPUT:
CLASSES: 2
HYPER_PARAMETERS:
EPOCHS: 5
BATCH_SIZE: 2 # specify per gpu batch size
...
CALLBACKS:
TENSORBOARD:
...
PREPROCESS_DATA:
RESIZE:
VALUE: False # if True, resize to input height and width
...
USE_MULTI_GPUS:
...
# to stop hydra from storing log files
defaults:
...
```
|
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/generator/tabular | tabular | ctgan | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import warnings
from typing import Optional, List
import cudf
import numpy as np
import pandas as pd
import torch
from packaging import version
from torch import optim
from torch.nn import (
BatchNorm1d,
Dropout,
LeakyReLU,
Linear,
Module,
ReLU,
Sequential,
functional,
)
from syngen.generator.tabular.base_tabular_generator import BaseTabularGenerator
from syngen.generator.tabular.data_transformer.ctgan_data_transformer import (
CTGANDataTransformer,
)
class CTGANGenerator(BaseTabularGenerator):
"""Conditional Table GAN Generator.
For more details about the process, please check the
[Modeling Tabular data using Conditional GAN](https://arxiv.org/abs/1907.00503) paper.
Adopted from: https://github.com/sdv-dev/CTGAN
Args:
embedding_dim (int): Size of the random sample passed to the Generator. Defaults to 128.
generator_dim (tuple or list of ints): Size of the output samples for each one of the Residuals. A Residual Layer
will be created for each one of the values provided. Defaults to (256, 256).
discriminator_dim (tuple or list of ints): Size of the output samples for each one of the Discriminator Layers. A Linear Layer
will be created for each one of the values provided. Defaults to (256, 256).
generator_lr (float):Learning rate for the generator. Defaults to 2e-4.
generator_decay (float):Generator weight decay for the Adam Optimizer. Defaults to 1e-6.
discriminator_lr (float):Learning rate for the discriminator. Defaults to 2e-4.
discriminator_decay (float):Discriminator weight decay for the Adam Optimizer. Defaults to 1e-6.
batch_size (int):Number of data samples to process in each step.
discriminator_steps (int):Number of discriminator updates to do for each generator update.
From the WGAN paper: https://arxiv.org/abs/1701.07875. WGAN paper
default is 5. Default used is 1 to match original CTGAN implementation.
log_frequency (boolean):Whether to use log frequency of categorical levels in conditional
sampling. Defaults to ``True``.
verbose (boolean):Whether to have print statements for progress results. Defaults to ``False``.
epochs (int):Number of training epochs. Defaults to 300.
pac (int):Number of samples to group together when applying the discriminator.
Defaults to 10.
gpu (bool):Whether to attempt to use cuda for GPU computation.
If this is False or CUDA is not available, CPU will be used.
Defaults to ``True``.
"""
def __init__(
self,
embedding_dim=128,
generator_dim=(256, 256),
discriminator_dim=(256, 256),
generator_lr=2e-4,
generator_decay=1e-6,
discriminator_lr=2e-4,
discriminator_decay=1e-6,
batch_size=500,
discriminator_steps=1,
log_frequency=True,
verbose=False,
epochs=300,
pac=10,
gpu=True,
**kwargs,
):
super(CTGANGenerator, self).__init__(**kwargs)
assert batch_size % 2 == 0
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logger = logging.getLogger(__name__)
self.log = logger
self._embedding_dim = embedding_dim
self._generator_dim = generator_dim
self._discriminator_dim = discriminator_dim
self._generator_lr = generator_lr
self._generator_decay = generator_decay
self._discriminator_lr = discriminator_lr
self._discriminator_decay = discriminator_decay
self._batch_size = int(batch_size)
self._discriminator_steps = discriminator_steps
self._log_frequency = log_frequency
self._verbose = verbose
self._epochs = epochs
self.pac = pac
if not gpu or not torch.cuda.is_available():
device = "cpu"
elif isinstance(gpu, str):
device = gpu
else:
device = "cuda"
self._device = torch.device(device)
self._transformer = None
self._data_sampler = None
self._generator = None
@staticmethod
def _gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1):
"""Deals with the instability of the gumbel_softmax for older versions of torch.
For more details about the issue:
https://drive.google.com/file/d/1AA5wPfZ1kquaRtVruCd6BiYZGcDeNxyP/view?usp=sharing
Parameters
**********
logits:
[…, num_features] unnormalized log probabilities
tau:
non-negative scalar temperature
hard:
if True, the returned samples will be discretized as one-hot vectors,
but will be differentiated as if it is the soft sample in autograd
dim (int):
a dimension along which softmax will be computed. Default: -1.
Returns
*******
Sampled tensor of same shape as logits from the Gumbel-Softmax distribution.
"""
if version.parse(torch.__version__) < version.parse("1.2.0"):
for i in range(10):
transformed = functional.gumbel_softmax(
logits, tau=tau, hard=hard, eps=eps, dim=dim
)
if not torch.isnan(transformed).any():
return transformed
raise ValueError("gumbel_softmax returning NaN.")
return functional.gumbel_softmax(
logits, tau=tau, hard=hard, eps=eps, dim=dim
)
def _apply_activate(self, data):
"""Apply proper activation function to the output of the generator."""
data_t = []
st = 0
for column_info in self._transformer.output_info_list:
for span_info in column_info:
if span_info.activation_fn == "tanh":
ed = st + span_info.dim
data_t.append(torch.tanh(data[:, st:ed]))
st = ed
elif span_info.activation_fn == "softmax":
ed = st + span_info.dim
transformed = self._gumbel_softmax(data[:, st:ed], tau=0.2)
data_t.append(transformed)
st = ed
else:
assert 0
return torch.cat(data_t, dim=1)
def _cond_loss(self, data, c, m):
"""Compute the cross entropy loss on the fixed discrete column."""
loss = []
st = 0
st_c = 0
for column_info in self._transformer.output_info_list:
for span_info in column_info:
if (
len(column_info) != 1
or span_info.activation_fn != "softmax"
):
# not discrete column
st += span_info.dim
else:
ed = st + span_info.dim
ed_c = st_c + span_info.dim
tmp = functional.cross_entropy(
data[:, st:ed],
torch.argmax(c[:, st_c:ed_c], dim=1),
reduction="none",
)
loss.append(tmp)
st = ed
st_c = ed_c
loss = torch.stack(loss, dim=1)
return (loss * m).sum() / data.size()[0]
def _validate_discrete_columns(self, train_data, categorical_columns):
"""Check whether ``categorical_columns`` exists in ``train_data``.
Args:
train_data (numpy.ndarray or pandas.DataFrame):
Training Data. It must be a 2-dimensional numpy array or a pandas.DataFrame.
categorical_columns (list-like):
List of discrete columns to be used to generate the Conditional
Vector. If ``train_data`` is a Numpy array, this list should
contain the integer indices of the columns. Otherwise, if it is
a ``pandas.DataFrame``, this list should contain the column names.
"""
if isinstance(train_data, (pd.DataFrame, cudf.DataFrame)):
invalid_columns = set(categorical_columns) - set(
train_data.columns
)
elif isinstance(train_data, np.ndarray):
invalid_columns = []
for column in categorical_columns:
if column < 0 or column >= train_data.shape[1]:
invalid_columns.append(column)
else:
raise TypeError(
"``train_data`` should be either pd.DataFrame or np.array."
)
if invalid_columns:
raise ValueError(
"Invalid columns found: {}".format(invalid_columns)
)
def fit(self, train_data, categorical_columns=tuple(), epochs=None, **kwargs):
"""Fit the CTGAN Synthesizer models to the training data.
Args:
train_data (numpy.ndarray or pandas.DataFrame):
Training Data. It must be a 2-dimensional numpy array or a pandas.DataFrame.
categorical_columns (list-like):
List of discrete columns to be used to generate the Conditional
Vector. If ``train_data`` is a Numpy array, this list should
contain the integer indices of the columns. Otherwise, if it is
a ``pandas.DataFrame``, this list should contain the column names.
"""
self._validate_discrete_columns(train_data, categorical_columns)
if epochs is None:
epochs = self._epochs
else:
warnings.warn(
(
"`epochs` argument in `fit` method has been deprecated and will be removed "
"in a future version. Please pass `epochs` to the constructor instead"
),
DeprecationWarning,
)
self._transformer = CTGANDataTransformer()
self._transformer.fit(train_data, categorical_columns)
train_data = self._transformer.transform(train_data)
self._data_sampler = DataSampler(
train_data, self._transformer.output_info_list, self._log_frequency
)
data_dim = self._transformer.output_dimensions
self._generator = Generator(
self._embedding_dim + self._data_sampler.dim_cond_vec(),
self._generator_dim,
data_dim,
).to(self._device)
discriminator = Discriminator(
data_dim + self._data_sampler.dim_cond_vec(),
self._discriminator_dim,
pac=self.pac,
).to(self._device)
optimizerG = optim.Adam(
self._generator.parameters(),
lr=self._generator_lr,
betas=(0.5, 0.9),
weight_decay=self._generator_decay,
)
optimizerD = optim.Adam(
discriminator.parameters(),
lr=self._discriminator_lr,
betas=(0.5, 0.9),
weight_decay=self._discriminator_decay,
)
mean = torch.zeros(
self._batch_size, self._embedding_dim, device=self._device
)
std = mean + 1
steps_per_epoch = max(len(train_data) // self._batch_size, 1)
for i in range(epochs):
for id_ in range(steps_per_epoch):
for n in range(self._discriminator_steps):
fakez = torch.normal(mean=mean, std=std)
condvec = self._data_sampler.sample_condvec(
self._batch_size
)
if condvec is None:
c1, m1, col, opt = None, None, None, None
real = self._data_sampler.sample_data(
self._batch_size, col, opt
)
else:
c1, m1, col, opt = condvec
c1 = torch.from_numpy(c1).to(self._device)
m1 = torch.from_numpy(m1).to(self._device)
fakez = torch.cat([fakez, c1], dim=1)
perm = np.arange(self._batch_size)
np.random.shuffle(perm)
real = self._data_sampler.sample_data(
self._batch_size, col[perm], opt[perm]
)
c2 = c1[perm]
fake = self._generator(fakez)
fakeact = self._apply_activate(fake)
real = torch.from_numpy(real.astype("float32")).to(
self._device
)
if c1 is not None:
fake_cat = torch.cat([fakeact, c1], dim=1)
real_cat = torch.cat([real, c2], dim=1)
else:
real_cat = real
fake_cat = fakeact
y_fake = discriminator(fake_cat)
y_real = discriminator(real_cat)
pen = discriminator.calc_gradient_penalty(
real_cat, fake_cat, self._device, self.pac
)
loss_d = -(torch.mean(y_real) - torch.mean(y_fake))
optimizerD.zero_grad()
pen.backward(retain_graph=True)
loss_d.backward()
optimizerD.step()
fakez = torch.normal(mean=mean, std=std)
condvec = self._data_sampler.sample_condvec(self._batch_size)
if condvec is None:
c1, m1, col, opt = None, None, None, None
else:
c1, m1, col, opt = condvec
c1 = torch.from_numpy(c1).to(self._device)
m1 = torch.from_numpy(m1).to(self._device)
fakez = torch.cat([fakez, c1], dim=1)
fake = self._generator(fakez)
fakeact = self._apply_activate(fake)
if c1 is not None:
y_fake = discriminator(torch.cat([fakeact, c1], dim=1))
else:
y_fake = discriminator(fakeact)
if condvec is None:
cross_entropy = 0
else:
cross_entropy = self._cond_loss(fake, c1, m1)
loss_g = -torch.mean(y_fake) + cross_entropy
optimizerG.zero_grad()
loss_g.backward()
optimizerG.step()
if self._verbose:
self.log.info(
f"Epoch {i + 1}, Loss G: {loss_g.detach().cpu(): .4f}, "
f"Loss D: {loss_d.detach().cpu(): .4f}"
)
def sample(self, n, gpu=False, condition_column=None, condition_value=None, ):
"""Sample data similar to the training data.
Choosing a condition_column and condition_value will increase the probability of the
discrete condition_value happening in the condition_column.
Args:
n (int):
Number of rows to sample.
condition_column (string):
Name of a discrete column.
condition_value (string):
Name of the category in the condition_column which we wish to increase the
probability of happening.
Returns:
numpy.ndarray or pandas.DataFrame
"""
if gpu:
self.set_device('cuda')
else:
self.set_device('cpu')
if condition_column is not None and condition_value is not None:
condition_info = self._transformer.convert_column_name_value_to_id(
condition_column, condition_value
)
global_condition_vec = self._data_sampler.generate_cond_from_condition_column_info(
condition_info, self._batch_size
)
else:
global_condition_vec = None
steps = n // self._batch_size + 1
data = []
for i in range(steps):
mean = torch.zeros(self._batch_size, self._embedding_dim)
std = mean + 1
fakez = torch.normal(mean=mean, std=std).to(self._device)
if global_condition_vec is not None:
condvec = global_condition_vec.copy()
else:
condvec = self._data_sampler.sample_original_condvec(
self._batch_size
)
if condvec is not None:
c1 = condvec
c1 = torch.from_numpy(c1).to(self._device)
fakez = torch.cat([fakez, c1], dim=1)
fake = self._generator(fakez)
fakeact = self._apply_activate(fake)
data.append(fakeact.detach().cpu().numpy())
data = np.concatenate(data, axis=0)
data = data[:n]
return self._transformer.inverse_transform(data)
def set_device(self, device):
self._device = device
if self._generator is not None:
self._generator.to(self._device)
def save(self, path):
"""save the trained model"""
device_backup = self._device
self.set_device(torch.device("cpu"))
torch.save(self, path)
self.set_device(device_backup)
@classmethod
def load(cls, path):
"""load model from `path`"""
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load(path)
model.set_device(device)
return model
class Discriminator(Module):
def __init__(self, input_dim, discriminator_dim, pac=10):
super(Discriminator, self).__init__()
dim = input_dim * pac
self.pac = pac
self.pacdim = dim
seq = []
for item in list(discriminator_dim):
seq += [Linear(dim, item), LeakyReLU(0.2), Dropout(0.5)]
dim = item
seq += [Linear(dim, 1)]
self.seq = Sequential(*seq)
def calc_gradient_penalty(
self, real_data, fake_data, device="cpu", pac=10, lambda_=10
):
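        # WGAN-GP gradient penalty: interpolate between real and fake samples (grouped by pac)
        # and penalize deviations of the interpolated gradient norm from 1.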
alpha = torch.rand(real_data.size(0) // pac, 1, 1, device=device)
alpha = alpha.repeat(1, pac, real_data.size(1))
alpha = alpha.view(-1, real_data.size(1))
interpolates = alpha * real_data + ((1 - alpha) * fake_data)
disc_interpolates = self(interpolates)
gradients = torch.autograd.grad(
outputs=disc_interpolates,
inputs=interpolates,
grad_outputs=torch.ones(disc_interpolates.size(), device=device),
create_graph=True,
retain_graph=True,
only_inputs=True,
)[0]
gradient_penalty = (
(gradients.view(-1, pac * real_data.size(1)).norm(2, dim=1) - 1)
** 2
).mean() * lambda_
return gradient_penalty
def forward(self, input):
assert input.size()[0] % self.pac == 0, f'generator batch size ({input.size()[0]}) ' \
f'should be divisible by pac ({self.pac})'
return self.seq(input.view(-1, self.pacdim))
class Residual(Module):
def __init__(self, i, o):
super(Residual, self).__init__()
self.fc = Linear(i, o)
self.bn = BatchNorm1d(o)
self.relu = ReLU()
def forward(self, input):
out = self.fc(input)
out = self.bn(out)
out = self.relu(out)
return torch.cat([out, input], dim=1)
class Generator(Module):
def __init__(self, embedding_dim, generator_dim, data_dim):
super(Generator, self).__init__()
dim = embedding_dim
seq = []
for item in list(generator_dim):
seq += [Residual(dim, item)]
dim += item
seq.append(Linear(dim, data_dim))
self.seq = Sequential(*seq)
def forward(self, input):
data = self.seq(input)
return data
class DataSampler(object):
"""DataSampler samples the conditional vector and corresponding data for CTGAN."""
def __init__(self, data, output_info, log_frequency):
self._data = data
def is_discrete_column(column_info):
return (
len(column_info) == 1
and column_info[0].activation_fn == "softmax"
)
n_discrete_columns = sum(
[
1
for column_info in output_info
if is_discrete_column(column_info)
]
)
self._discrete_column_matrix_st = np.zeros(
n_discrete_columns, dtype="int32"
)
# Store the row id for each category in each discrete column.
# For example _rid_by_cat_cols[a][b] is a list of all rows with the
# a-th discrete column equal value b.
self._rid_by_cat_cols = []
# Compute _rid_by_cat_cols
st = 0
for column_info in output_info:
if is_discrete_column(column_info):
span_info = column_info[0]
ed = st + span_info.dim
rid_by_cat = []
for j in range(span_info.dim):
rid_by_cat.append(np.nonzero(data[:, st + j])[0])
self._rid_by_cat_cols.append(rid_by_cat)
st = ed
else:
st += sum([span_info.dim for span_info in column_info])
assert st == data.shape[1]
# Prepare an interval matrix for efficiently sample conditional vector
max_category = max(
[
column_info[0].dim
for column_info in output_info
if is_discrete_column(column_info)
],
default=0,
)
self._discrete_column_cond_st = np.zeros(
n_discrete_columns, dtype="int32"
)
self._discrete_column_n_category = np.zeros(
n_discrete_columns, dtype="int32"
)
self._discrete_column_category_prob = np.zeros(
(n_discrete_columns, max_category)
)
self._n_discrete_columns = n_discrete_columns
self._n_categories = sum(
[
column_info[0].dim
for column_info in output_info
if is_discrete_column(column_info)
]
)
st = 0
current_id = 0
current_cond_st = 0
for column_info in output_info:
if is_discrete_column(column_info):
span_info = column_info[0]
ed = st + span_info.dim
category_freq = np.sum(data[:, st:ed], axis=0)
if log_frequency:
category_freq = np.log(category_freq + 1)
category_prob = category_freq / np.sum(category_freq)
self._discrete_column_category_prob[
current_id, : span_info.dim
] = category_prob
self._discrete_column_cond_st[current_id] = current_cond_st
self._discrete_column_n_category[current_id] = span_info.dim
current_cond_st += span_info.dim
current_id += 1
st = ed
else:
st += sum([span_info.dim for span_info in column_info])
def _random_choice_prob_index(self, discrete_column_id):
probs = self._discrete_column_category_prob[discrete_column_id]
r = np.expand_dims(np.random.rand(probs.shape[0]), axis=1)
return (probs.cumsum(axis=1) > r).argmax(axis=1)
def sample_condvec(self, batch):
"""Generate the conditional vector for training.
Returns:
cond (batch x #categories):
The conditional vector.
mask (batch x #discrete columns):
A one-hot vector indicating the selected discrete column.
discrete column id (batch):
Integer representation of mask.
category_id_in_col (batch):
Selected category in the selected discrete column.
"""
if self._n_discrete_columns == 0:
return None
discrete_column_id = np.random.choice(
np.arange(self._n_discrete_columns), batch
)
cond = np.zeros((batch, self._n_categories), dtype="float32")
mask = np.zeros((batch, self._n_discrete_columns), dtype="float32")
mask[np.arange(batch), discrete_column_id] = 1
category_id_in_col = self._random_choice_prob_index(discrete_column_id)
category_id = (
self._discrete_column_cond_st[discrete_column_id]
+ category_id_in_col
)
cond[np.arange(batch), category_id] = 1
return cond, mask, discrete_column_id, category_id_in_col
def sample_original_condvec(self, batch):
"""Generate the conditional vector for generation use original frequency."""
if self._n_discrete_columns == 0:
return None
cond = np.zeros((batch, self._n_categories), dtype="float32")
for i in range(batch):
row_idx = np.random.randint(0, len(self._data))
col_idx = np.random.randint(0, self._n_discrete_columns)
matrix_st = self._discrete_column_matrix_st[col_idx]
matrix_ed = matrix_st + self._discrete_column_n_category[col_idx]
pick = np.argmax(self._data[row_idx, matrix_st:matrix_ed])
cond[i, pick + self._discrete_column_cond_st[col_idx]] = 1
return cond
def sample_data(self, n, col, opt):
"""Sample data from original training data satisfying the sampled conditional vector.
Returns:
n rows of matrix data.
"""
if col is None:
idx = np.random.randint(len(self._data), size=n)
return self._data[idx]
idx = []
for c, o in zip(col, opt):
idx.append(np.random.choice(self._rid_by_cat_cols[c][o]))
return self._data[idx]
def dim_cond_vec(self):
return self._n_categories
def generate_cond_from_condition_column_info(self, condition_info, batch):
vec = np.zeros((batch, self._n_categories), dtype="float32")
vec_id = (
self._discrete_column_matrix_st[
condition_info["discrete_column_id"]
]
+ condition_info["value_id"]
)
vec[:, vec_id] = 1
return vec
|
PyTorch/Forecasting/TFT/triton/deployment_toolkit/bermuda | bermuda | utils | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import Counter
from typing import Callable, Dict, List, Optional
import networkx as nx
from ..core import ShapeSpec
def infer_precision(
nx_graph: nx.Graph,
input_names: List[str],
output_names: List[str],
get_node_dtype_fn: Callable,
):
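    # Collect node dtypes from the graph, drop integer/bool tensors, and return the most common remaining dtype.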
node_dtypes = [nx_graph.nodes[node_name].get("dtype", None) for node_name in nx_graph.nodes]
node_dtypes = [dt for dt in node_dtypes if dt is None or dt.kind not in ["i", "b"]]
dtypes_counter = Counter(node_dtypes)
return dtypes_counter.most_common()[0][0]
def get_shapes_with_dynamic_axes(dataloader, batch_size_dim: Optional[int] = None):
def _set_dynamic_shapes(t, shapes):
for k, v in t.items():
shape = list(v.shape)
for dim, s in enumerate(shape):
if shapes[k][dim] != -1 and shapes[k][dim] != s:
shapes[k][dim] = -1
def _mark_batch_axis(shape, batch_axis: int):
shape = list(shape)
shape[batch_axis] = -1
return tuple(shape)
## get all shapes from input and output tensors
input_shapes = {}
output_shapes = {}
for batch in dataloader:
_, x, y = batch
for k, v in x.items():
input_shapes[k] = list(v.shape)
for k, v in y.items():
output_shapes[k] = list(v.shape)
break
# based on max <max_num_iters> iterations, check which
# dimensions differ to determine dynamic_axes
max_num_iters = 100
for idx, batch in enumerate(dataloader):
if idx >= max_num_iters:
break
_, x, y = batch
_set_dynamic_shapes(x, input_shapes)
_set_dynamic_shapes(y, output_shapes)
if batch_size_dim is not None:
input_shapes = {name: _mark_batch_axis(shape, batch_size_dim) for name, shape in input_shapes.items()}
output_shapes = {name: _mark_batch_axis(shape, batch_size_dim) for name, shape in output_shapes.items()}
return input_shapes, output_shapes
def get_dynamic_axes(dataloader, batch_size_dim: Optional[int] = None):
input_shapes, output_shapes = get_shapes_with_dynamic_axes(dataloader, batch_size_dim=batch_size_dim)
all_shapes = {**input_shapes, **output_shapes}
dynamic_axes = {}
for k, shape in all_shapes.items():
for idx, s in enumerate(shape):
if s == -1:
dynamic_axes[k] = {idx: k + "_" + str(idx)}
for k in all_shapes:
if k in dynamic_axes:
dynamic_axes[k].update({batch_size_dim: "batch_size_" + str(batch_size_dim)})
else:
dynamic_axes[k] = {batch_size_dim: "batch_size_" + str(batch_size_dim)}
return dynamic_axes
def get_input_shapes(dataloader, max_batch_size=1) -> Dict[str, ShapeSpec]:
def init_counters_and_shapes(x, counters, min_shapes, max_shapes):
for k, v in x.items():
counters[k] = Counter()
min_shapes[k] = [float("inf")] * v.ndim
max_shapes[k] = [float("-inf")] * v.ndim
counters = {}
min_shapes: Dict[str, tuple] = {}
max_shapes: Dict[str, tuple] = {}
for idx, batch in enumerate(dataloader):
ids, x, y = batch
if idx == 0:
init_counters_and_shapes(x, counters, min_shapes, max_shapes)
for k, v in x.items():
shape = v.shape
counters[k][shape] += 1
min_shapes[k] = tuple(min(a, b) for a, b in zip(min_shapes[k], shape))
max_shapes[k] = tuple(max(a, b) for a, b in zip(max_shapes[k], shape))
opt_shapes: Dict[str, tuple] = {}
for k, v in counters.items():
opt_shapes[k] = v.most_common(1)[0][0]
shapes = {}
for k in opt_shapes.keys(): # same keys in min_shapes and max_shapes
shapes[k] = ShapeSpec(
min=(1,) + min_shapes[k][1:],
max=(max_batch_size,) + max_shapes[k][1:],
opt=(max_batch_size,) + opt_shapes[k][1:],
)
return shapes
|
Tools/PyTorch/TimeSeriesPredictionPlatform/training | training | trainer | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from abc import ABC
import dgl
import dllogger
import hydra
import numpy as np
import torch
import torch.nn as nn
import importlib
try:
from apex import amp
except ImportError:
print("Nvidia apex not available. Can't use apex Automatic Mixed Precision (AMP) for training.\
Please check: https://github.com/NVIDIA/apex for installation")
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler
from callbacks.ctl_callbacks import CTLCallbackContainer
from data.datasets import TSBaseDataset, get_collate_fn
from distributed_utils import reduce_tensor, get_mp_context
from loggers.log_helper import setup_logger
from training.ema import ModelEmaV2
from criterion import TSPP_criterion_wrapper
from training.checkpoint_utils import maybe_continue_run
from training.utils import to_device
class Trainer(ABC):
def train(self):
return
class CTLTrainer(Trainer):
def __init__(
self,
model: nn.Module,
train_dataset: TSBaseDataset,
valid_dataset: TSBaseDataset,
optimizer,
criterion,
callbacks,
config,
):
self.config = config
self._stop_training = False
self.metrics = {}
callbacks = callbacks.values()
self.callbacks = CTLCallbackContainer(self, callbacks)
self.world_size = int(os.environ.get('WORLD_SIZE', 1))
self.local_rank = int(os.environ.get("LOCAL_RANK", 0))
self.device = next(model.parameters()).device
self.valid_dataset_len = len(valid_dataset)
self.train_dataset_len = len(train_dataset)
self.train_sampler = None
self.valid_sampler = None
self.example_length = config.example_length
self.encoder_length = config.encoder_length
if self.world_size > 1:
# XXX: is the seed argument here needed for reproducibility?
# It should be set in launch_training.py with other seeds
self.train_sampler = DistributedSampler(
train_dataset, self.world_size, seed=config.get("seed", 1), drop_last=True
)
self.valid_sampler = DistributedSampler(
valid_dataset, self.world_size, seed=config.get("seed", 1), drop_last=False
)
self.logger = setup_logger(self.config)
self.optimizer = optimizer
self.amp_enabled = self.config.get("amp", False)
if not importlib.util.find_spec("apex"):
self.amp_enabled = False
self.model = model
self.global_step = 0
self.epoch = 0
if not self.config.get('force_rerun'):
maybe_continue_run(self)
if config.get("ema", False):
self.ema = ModelEmaV2(model, decay=self.config.get('ema_decay', 0.999), device=self.device)
else:
self.ema = None
if self.amp_enabled:
self.model, self.optimizer = amp.initialize(self.model, self.optimizer, opt_level="O2", loss_scale="dynamic")
if self.world_size > 1:
self.model = DDP(self.model, device_ids=[self.local_rank], output_device=self.local_rank, find_unused_parameters=True)
mp_context = get_mp_context()
self.train_dataloader = DataLoader(
train_dataset,
batch_size=self.config.batch_size,
num_workers=self.config.num_workers,
sampler=self.train_sampler,
shuffle=True if self.train_sampler is None else False,
pin_memory=True,
collate_fn=get_collate_fn(config.model_type, config.encoder_length),
multiprocessing_context=mp_context
)
self.valid_dataloader = DataLoader(
valid_dataset,
batch_size=self.config.batch_size,
num_workers=self.config.num_workers,
sampler=self.valid_sampler,
pin_memory=True,
collate_fn=get_collate_fn(config.model_type, config.encoder_length),
multiprocessing_context=mp_context
)
# TODO: make it recursively instantiated
if self.config.get("scheduler", None):
self.config.scheduler._target_ = self.config.scheduler.target
del self.config.scheduler.target
self.scheduler = hydra.utils.instantiate(self.config.scheduler, optimizer)
else:
self.scheduler = None
cl_start_horizon = config.get("cl_start_horizon")
cl_update = config.get("cl_update")
self.criterion = TSPP_criterion_wrapper(criterion, cl_start_horizon, cl_update)
self.log_path = self.config.get("log_path", os.getcwd())
def prep_data(self, batch, labels, weights):
batch = to_device(batch, device=self.device)
labels = to_device(labels, device=self.device)
weights = to_device(weights, device=self.device)
return batch, labels, weights
def validate(self):
self.model.eval()
self.criterion.eval()
with torch.no_grad():
running_losses = 0
for i, (batch, labels, weights) in enumerate(self.valid_dataloader):
batch, labels, weights = self.prep_data(batch, labels, weights)
if self.ema:
preds = self.ema.module(batch)
else:
preds = self.model(batch)
losses = self.criterion(preds, labels, weights=weights)
losses = reduce_tensor(losses, self.world_size).detach()
running_losses += losses
running_losses = running_losses / (len(self.valid_dataloader.dataset) / self.config.batch_size)
if len(running_losses.size()) < 1:
running_losses = running_losses.unsqueeze(0)
running_losses = [loss.item() for loss in running_losses]
data = {"val_loss": sum(running_losses)}
for i, elem in enumerate(running_losses):
data["val_loss_component_" + str(i)] = elem
self.logger.log(step=self.global_step, data=data, verbosity=dllogger.Verbosity.VERBOSE)
self.model.train()
self.criterion.train()
return sum(running_losses)
def train(self):
self.callbacks.on_train_begin()
while self.epoch < self.config.num_epochs:
self.callbacks.on_epoch_begin(self.epoch)
self.logger.log(step=self.global_step, data={"epoch": self.epoch}, verbosity=dllogger.Verbosity.VERBOSE)
for i, (batch, labels, weights) in enumerate(self.train_dataloader):
self.callbacks.on_batch_begin(i)
self.optimizer.zero_grad()
batch, labels, weights = self.prep_data(batch, labels, weights)
preds = self.model(batch)
losses = self.criterion(preds, labels, weights=weights)
loss = losses.sum()
if self.amp_enabled:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
if self.config.get("gradient_norm", 0.0) > 0:
nn.utils.clip_grad_norm_(self.model.parameters(), self.config.gradient_norm)
self.optimizer.step()
losses = reduce_tensor(losses, self.world_size, average=True)
if len(losses.size()) < 1:
losses = [losses]
losses = [loss.item() for loss in losses]
data = {"loss": loss.item()}
for k, v in enumerate(losses):
data["loss_component_" + str(k)] = v
self.logger.log(step=self.global_step, data=data, verbosity=dllogger.Verbosity.VERBOSE)
self.callbacks.on_batch_end(i, logs=data)
if self.ema:
self.ema.update(self.model)
self.global_step += 1
if self.scheduler:
self.scheduler.step()
self.callbacks.on_valid_begin(self.epoch)
validation_loss = self.validate()
if validation_loss != validation_loss: #NaN check
self._stop_training = True
data = {"val_loss": validation_loss}
self.callbacks.on_valid_end(self.epoch, logs=data)
if self.train_sampler:
self.train_sampler.set_epoch(self.epoch)
self.valid_sampler.set_epoch(self.epoch)
self.callbacks.on_epoch_end(self.epoch, logs=data)
if self._stop_training:
break
self.epoch += 1
self.callbacks.on_train_end(logs=self.metrics)
class StatTrainer(Trainer):
def __init__(self,
config,
model,
train_dataset,
valid_dataset
):
self.config = config
self.train_dataset = train_dataset
self.global_step = 0
self.epoch = 0
self.model = model
self.logger = setup_logger(self.config)
def train(self):
for train_batch in self.train_dataset:
self.model.fit(train_batch["endog"], train_batch["exog"])
self.model.save()
def validate(self):
raise RuntimeError("Validation is not supported for StatTrainer")
class XGBTrainer(Trainer):
def __init__(self, config, callbacks, model, train_dataset, valid_dataset):
'''
The idea behind this trainer is that we are given data at a time step t and want to create models to predict the value of a target
from t+1 to t+n. At time step t we have access to every feature including the target, and if we are trying to predict at time step
t+i, we have access to the known and static values from there, using the function target_shift. To aid in prediction and
give the model access to the history, lag and moving features can be specified in the configs.
Lag features can either be specified by a min value and a max value or by a list of values. If a min and max
value are specified, then range(min, max+1) is used as the list. Moving average (rolling) features are specified
by a window size. These values are added with the feat_adder function (an illustrative pandas sketch is given at the end of this file). A new model is trained for every step we want
to predict. The trainer is not recursive, so each model is independent and does not rely on the previously trained models.
'''
self.config = config
self.logger = setup_logger(config)
self.train_dataset = train_dataset
self.valid_dataset = valid_dataset
self.patience = callbacks.early_stopping.patience
self.log_interval = config.get('log_interval', 25)
self.model = model
def train(self):
for i, ((train_step, labels), (valid_step, valid_labels)) in enumerate(zip(self.train_dataset, self.valid_dataset)):
self.model.fit(train_step, labels, valid_step, valid_labels,
patience=self.patience,
log_interval=self.log_interval)
self.model.save(os.getcwd())
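# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the trainer and not the repository's actual
# feat_adder): how lag and rolling-mean features of the kind described in the
# XGBTrainer docstring could be built for a pandas DataFrame. The column names
# and the pandas-based implementation are assumptions made for illustration only.
def _example_add_lag_and_rolling_features(df, target_col="target", lags=(1, 2, 3), windows=(7,)):
    out = df.copy()
    for lag in lags:
        # value of the target `lag` steps in the past
        out[f"{target_col}_lag_{lag}"] = out[target_col].shift(lag)
    for window in windows:
        # moving average over the last `window` observations
        out[f"{target_col}_rolling_mean_{window}"] = out[target_col].rolling(window).mean()
    return out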
|
PyTorch/Segmentation/MaskRCNN/pytorch/maskrcnn_benchmark/modeling/rpn | rpn | inference | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
import torch
from maskrcnn_benchmark.modeling.box_coder import BoxCoder
from maskrcnn_benchmark.structures.bounding_box import BoxList
from maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist
from maskrcnn_benchmark.structures.boxlist_ops import boxlist_nms
from maskrcnn_benchmark.structures.boxlist_ops import remove_small_boxes
from ..utils import cat
from maskrcnn_benchmark import _C as C
class RPNPostProcessor(torch.nn.Module):
"""
Performs post-processing on the outputs of the RPN boxes, before feeding the
proposals to the heads
"""
def __init__(
self,
pre_nms_top_n,
post_nms_top_n,
nms_thresh,
min_size,
box_coder=None,
fpn_post_nms_top_n=None,
):
"""
Arguments:
pre_nms_top_n (int)
post_nms_top_n (int)
nms_thresh (float)
min_size (int)
box_coder (BoxCoder)
fpn_post_nms_top_n (int)
"""
super(RPNPostProcessor, self).__init__()
self.pre_nms_top_n = pre_nms_top_n
self.post_nms_top_n = post_nms_top_n
self.nms_thresh = nms_thresh
self.min_size = min_size
if box_coder is None:
box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0))
self.box_coder = box_coder
if fpn_post_nms_top_n is None:
fpn_post_nms_top_n = post_nms_top_n
self.fpn_post_nms_top_n = fpn_post_nms_top_n
def add_gt_proposals(self, proposals, targets):
"""
Arguments:
proposals: list[BoxList]
targets: list[BoxList]
"""
# Get the device we're operating on
device = proposals[0].bbox.device
gt_boxes = [target.copy_with_fields([]) for target in targets]
# later cat of bbox requires all fields to be present for all bbox
# so we need to add a dummy for objectness that's missing
for gt_box in gt_boxes:
gt_box.add_field("objectness", torch.ones(len(gt_box), device=device))
proposals = [
cat_boxlist((proposal, gt_box))
for proposal, gt_box in zip(proposals, gt_boxes)
]
return proposals
def forward_for_single_feature_map(self, anchors, objectness, box_regression):
"""
Arguments:
anchors: list[BoxList]
objectness: tensor of size N, A, H, W
box_regression: tensor of size N, A * 4, H, W
"""
device = objectness.device
N, A, H, W = objectness.shape
num_anchors = A * H * W
# If inputs are on GPU, use a faster path
use_fast_cuda_path = (objectness.is_cuda and box_regression.is_cuda)
# Encompasses box decode, clip_to_image and remove_small_boxes calls
if use_fast_cuda_path:
objectness = objectness.reshape(N, -1) # Now [N, AHW]
objectness = objectness.sigmoid()
pre_nms_top_n = min(self.pre_nms_top_n, num_anchors)
objectness, topk_idx = objectness.topk(pre_nms_top_n, dim=1, sorted=True)
# Get all image shapes, and cat them together
image_shapes = [box.size for box in anchors]
image_shapes_cat = torch.tensor([box.size for box in anchors], device=objectness.device).float()
# Get a single tensor for all anchors
concat_anchors = torch.cat([a.bbox for a in anchors], dim=0)
# Note: Take all anchors, we'll index accordingly inside the kernel
# only take the anchors corresponding to the topk boxes
concat_anchors = concat_anchors.reshape(N, -1, 4) # [batch_idx, topk_idx]
# Return pre-nms boxes, associated scores and keep flag
# Encompasses:
# 1. Box decode
# 2. Box clipping
# 3. Box filtering
# At the end we need to keep only the proposals & scores flagged
# Note: topk_idx, objectness are sorted => proposals, objectness, keep are also
# sorted -- this is important later
use_nhwc_kernel = box_regression.is_contiguous(memory_format=torch.channels_last)
proposals, objectness, keep = C.GeneratePreNMSUprightBoxes(
N,
A,
H,
W,
topk_idx,
objectness.float(), # Need to cast these as kernel doesn't support fp16
box_regression.float(),
concat_anchors,
image_shapes_cat,
pre_nms_top_n,
self.min_size,
self.box_coder.bbox_xform_clip,
True,
use_nhwc_kernel)
# view as [N, pre_nms_top_n, 4]
proposals = proposals.view(N, -1, 4)
objectness = objectness.view(N, -1)
else:
# reverse the reshape from before ready for permutation
objectness = objectness.reshape(N, A, H, W)
objectness = objectness.permute(0, 2, 3, 1).reshape(N, -1)
objectness = objectness.sigmoid()
pre_nms_top_n = min(self.pre_nms_top_n, num_anchors)
objectness, topk_idx = objectness.topk(pre_nms_top_n, dim=1, sorted=True)
# put in the same format as anchors
box_regression = box_regression.view(N, -1, 4, H, W).permute(0, 3, 4, 1, 2)
box_regression = box_regression.reshape(N, -1, 4)
batch_idx = torch.arange(N, device=device)[:, None]
box_regression = box_regression[batch_idx, topk_idx]
image_shapes = [box.size for box in anchors]
concat_anchors = torch.cat([a.bbox for a in anchors], dim=0)
concat_anchors = concat_anchors.reshape(N, -1, 4)[batch_idx, topk_idx]
proposals = self.box_coder.decode(
box_regression.view(-1, 4), concat_anchors.view(-1, 4)
)
proposals = proposals.view(N, -1, 4)
# handle non-fast path without changing the loop
if not use_fast_cuda_path:
keep = [None for _ in range(N)]
result = []
keep = keep.to(torch.bool) if use_fast_cuda_path else keep  # `keep` is a tensor only on the fast CUDA path
for proposal, score, im_shape, k in zip(proposals, objectness, image_shapes, keep):
if use_fast_cuda_path:
# Note: Want k to be applied per-image instead of all-at-once in batched code earlier
# clip_to_image and remove_small_boxes already done in single kernel
p = proposal.masked_select(k[:, None]).view(-1, 4)
score = score.masked_select(k)
boxlist = BoxList(p, im_shape, mode="xyxy")
else:
boxlist = BoxList(proposal, im_shape, mode="xyxy")
boxlist = boxlist.clip_to_image(remove_empty=False)
boxlist = remove_small_boxes(boxlist, self.min_size)
boxlist.add_field("objectness", score)
boxlist = boxlist_nms(
boxlist,
self.nms_thresh,
max_proposals=self.post_nms_top_n,
score_field="objectness",
)
result.append(boxlist)
return result
def forward(self, anchors, objectness, box_regression, targets=None):
"""
Arguments:
anchors: list[list[BoxList]]
objectness: list[tensor]
box_regression: list[tensor]
Returns:
boxlists (list[BoxList]): the post-processed anchors, after
applying box decoding and NMS
"""
sampled_boxes = []
num_levels = len(objectness)
anchors = list(zip(*anchors))
for a, o, b in zip(anchors, objectness, box_regression):
sampled_boxes.append(self.forward_for_single_feature_map(a, o, b))
boxlists = list(zip(*sampled_boxes))
boxlists = [cat_boxlist(boxlist) for boxlist in boxlists]
if num_levels > 1:
boxlists = self.select_over_all_levels(boxlists)
# append ground-truth bboxes to proposals
if self.training and targets is not None:
boxlists = self.add_gt_proposals(boxlists, targets)
return boxlists
def select_over_all_levels(self, boxlists):
num_images = len(boxlists)
# different behavior during training and during testing:
# during training, post_nms_top_n is over *all* the proposals combined, while
# during testing, it is over the proposals for each image
# TODO resolve this difference and make it consistent. It should be per image,
# and not per batch
if self.training:
objectness = torch.cat(
[boxlist.get_field("objectness") for boxlist in boxlists], dim=0
)
box_sizes = [len(boxlist) for boxlist in boxlists]
post_nms_top_n = min(self.fpn_post_nms_top_n, len(objectness))
_, inds_sorted = torch.topk(objectness, post_nms_top_n, dim=0, sorted=True)
inds_mask = torch.zeros_like(objectness, dtype=torch.bool)
inds_mask[inds_sorted] = 1
inds_mask = inds_mask.split(box_sizes)
for i in range(num_images):
boxlists[i] = boxlists[i][inds_mask[i]]
else:
for i in range(num_images):
objectness = boxlists[i].get_field("objectness")
post_nms_top_n = min(self.fpn_post_nms_top_n, len(objectness))
_, inds_sorted = torch.topk(
objectness, post_nms_top_n, dim=0, sorted=True
)
boxlists[i] = boxlists[i][inds_sorted]
return boxlists
def make_rpn_postprocessor(config, rpn_box_coder, is_train):
fpn_post_nms_top_n = config.MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN
if not is_train:
fpn_post_nms_top_n = config.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST
pre_nms_top_n = config.MODEL.RPN.PRE_NMS_TOP_N_TRAIN
post_nms_top_n = config.MODEL.RPN.POST_NMS_TOP_N_TRAIN
if not is_train:
pre_nms_top_n = config.MODEL.RPN.PRE_NMS_TOP_N_TEST
post_nms_top_n = config.MODEL.RPN.POST_NMS_TOP_N_TEST
nms_thresh = config.MODEL.RPN.NMS_THRESH
min_size = config.MODEL.RPN.MIN_SIZE
box_selector = RPNPostProcessor(
pre_nms_top_n=pre_nms_top_n,
post_nms_top_n=post_nms_top_n,
nms_thresh=nms_thresh,
min_size=min_size,
box_coder=rpn_box_coder,
fpn_post_nms_top_n=fpn_post_nms_top_n,
)
return box_selector
|
PyTorch/Classification/GPUNet/triton/deployment_toolkit/library | library | __init__ | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
PyTorch/Classification/ConvNets/triton/scripts | scripts | process_dataset | #!/usr/bin/env bash
# Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [ -d "${DATASETS_DIR}/imagenet" ]; then
echo "Dataset already downloaded and processed."
else
python triton/process_dataset.py
fi |
PyTorch/SpeechRecognition/Jasper/platform | platform | DGXA100_Jasper_AMP_8GPU | #!/bin/bash
NUM_GPUS=8 AMP=true BATCH_SIZE=64 GRAD_ACCUMULATION_STEPS=1 bash scripts/train.sh "$@"
|
TensorFlow2/LanguageModeling/BERT/official/nlp/modeling/layers | layers | self_attention_mask | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Keras layer that creates a self-attention mask."""
from __future__ import absolute_import
from __future__ import division
# from __future__ import google_type_annotations
from __future__ import print_function
import tensorflow as tf
from official.modeling import tf_utils
@tf.keras.utils.register_keras_serializable(package='Text')
class SelfAttentionMask(tf.keras.layers.Layer):
"""Create 3D attention mask from a 2D tensor mask.
inputs[0]: from_tensor: 2D or 3D Tensor of shape
[batch_size, from_seq_length, ...].
inputs[1]: to_mask: int32 Tensor of shape [batch_size, to_seq_length].
Returns:
float Tensor of shape [batch_size, from_seq_length, to_seq_length].
"""
def call(self, inputs):
from_tensor = inputs[0]
to_mask = inputs[1]
from_shape = tf_utils.get_shape_list(from_tensor, expected_rank=[2, 3])
batch_size = from_shape[0]
from_seq_length = from_shape[1]
to_shape = tf_utils.get_shape_list(to_mask, expected_rank=2)
to_seq_length = to_shape[1]
to_mask = tf.cast(
tf.reshape(to_mask, [batch_size, 1, to_seq_length]),
dtype=from_tensor.dtype)
# We don't assume that `from_tensor` is a mask (although it could be). We
# don't actually care if we attend *from* padding tokens (only *to* padding
# tokens), so we create a tensor of all ones.
#
# `broadcast_ones` = [batch_size, from_seq_length, 1]
broadcast_ones = tf.ones(
shape=[batch_size, from_seq_length, 1], dtype=from_tensor.dtype)
# Here we broadcast along two dimensions to create the mask.
mask = broadcast_ones * to_mask
return mask
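# Minimal usage sketch (illustrative only; the shapes and values below are
# assumptions, not part of the original layer definition).
if __name__ == "__main__":
  embeddings = tf.zeros([2, 8, 16])                       # [batch_size, from_seq_length, width]
  padding_mask = tf.constant([[1, 1, 1, 1, 1, 1, 0, 0],
                              [1, 1, 1, 1, 1, 1, 1, 1]])  # [batch_size, to_seq_length]
  attention_mask = SelfAttentionMask()([embeddings, padding_mask])
  print(attention_mask.shape)                             # (2, 8, 8)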
|
TensorFlow2/Recommendation/DLRM_and_DCNv2/doc | doc | tensorflow_inference | # Deploying Large Recommender models with TensorFlow and Triton Inference Server
This file contains instructions to run inference
on Triton Inference Server as well as a detailed performance analysis for DLRM and DCNv2
with TensorFlow and TensorRT. It is intended to provide the best possible performance for
models that fit into a single GPU or that, for some reason, cannot use Merlin HPS.
When the best possible performance is required for models larger than a single GPU's memory,
we recommend the solution described [here](merlin_hps_inference.md) instead.
## Solution overview
### Introduction
The [NVIDIA Triton Inference Server](https://github.com/NVIDIA/triton-inference-server)
provides a data center and cloud inferencing solution optimized for NVIDIA GPUs.
The server provides an inference service via an HTTP or gRPC endpoint,
allowing remote clients to request inferencing for any number of GPU
or CPU models being managed by the server.
This README provides step-by-step deployment instructions for models generated
during training (as described in the [model README](../README.md)).
Additionally, this README provides the corresponding deployment scripts that
ensure optimal GPU utilization during inferencing on Triton Inference Server.
### Deployment using a TensorFlow SavedModel + TensorRT ensemble
Embedding tables used in recommender models can often get so large that serving them becomes challenging. In this example,
we show a way to serve a model that is larger than the memory of a single GPU by offloading part of it to the CPU. As opposed to the solution
shown in [the Merlin HPS inference guide](merlin_hps_inference.md), this guide does not use any custom Triton backends.
The solution below also efficiently handles models that are large but still fit into GPU memory.
The first step is to sort the embedding tables by their size (from smallest to largest) and decide how much GPU memory to spend
on storing the embeddings. The N smallest embedding tables that fit within this budget are placed in GPU memory,
while the remaining tables are looked up on the CPU. This ensures that a large proportion of embedding lookups is performed on the GPU.
This process is depicted in Figure 1.
The part of the model that contains the embedding tables, with their device placement encoded, is then saved in the TensorFlow
SavedModel format. We will refer to it as the "sparse submodel."
<p align="center">
<img width="100%" src="./img/inference/sorted_tables_approach.svg" />
<br>
Figure 1. Sorting the embedding tables by size as a way to serve very large recommender models.
</p>
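The placement decision itself is simple to express. Below is a minimal sketch of the idea in Python (illustrative only, not the repository's actual implementation): tables are traversed from smallest to largest and kept on the GPU until a user-defined memory budget is exhausted. In the Quick Start Guide below, this budget corresponds to the `--memory-threshold-gb` argument of the `deployment.deploy` script.
```
def split_embedding_tables(table_sizes_bytes, gpu_budget_bytes):
    """table_sizes_bytes: dict mapping table name -> size in bytes."""
    gpu_tables, cpu_tables, used_bytes = [], [], 0
    # Traverse the tables from smallest to largest.
    for name, size in sorted(table_sizes_bytes.items(), key=lambda kv: kv[1]):
        if used_bytes + size <= gpu_budget_bytes:
            gpu_tables.append(name)   # fits in the remaining GPU budget
            used_bytes += size
        else:
            cpu_tables.append(name)   # served from host memory instead
    return gpu_tables, cpu_tables
```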
The other part of the network that contains the interaction layer and the MLPs can benefit significantly from running it
with NVIDIA TensorRT. We, therefore, save it to a separate SavedModel file and then convert it first to the ONNX format
and then from the ONNX format to a TensorRT engine. We refer to this part as the "dense submodel."
The entire model is run as a Triton Ensemble of the sparse and dense submodel. The communication between
the two parts is managed efficiently with CUDA memcopies by Triton. The overall architecture of this solution
is depicted in Figure 2.
<p align="center">
<img width="100%" src="./img/inference/tf_tensorrt_architecture.svg" />
<br>
Figure 2. Overall architecture of the TF SavedModel + TensorRT ensemble for running large recommender inference.
</p>
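For reference, the sketch below shows one way the dense-submodel conversion could be done by hand with the `tf2onnx` converter and the TensorRT Python API. It is illustrative only: the paths, opset, and precision flag are assumptions, and the provided `deployment.deploy` entry point performs the equivalent conversion with the correct, model-specific settings.
```
import subprocess
import tensorrt as trt

# 1. Dense submodel: TensorFlow SavedModel -> ONNX (tf2onnx command-line converter).
subprocess.run(
    ["python", "-m", "tf2onnx.convert",
     "--saved-model", "dense_savedmodel",   # illustrative path to the dense submodel
     "--output", "dense.onnx",
     "--opset", "13"],
    check=True,
)

# 2. ONNX -> serialized TensorRT engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("dense.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # omit for an FP32 engine
# A model with a dynamic batch dimension also needs an optimization profile
# (builder.create_optimization_profile) added to the config before building.
engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine)
```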
### Deployment process
The deployment process consists of two steps:
1. Conversion.
The purpose of conversion is to transform the checkpoint saved during training into a ready-to-serve model.
2. Configuration.
Model configuration on Triton Inference Server, which generates
necessary [configuration files](https://github.com/triton-inference-server/server/blob/master/docs/model_configuration.md).
After deployment, the Triton inference server is used for the evaluation of the converted model in two steps:
1. Correctness tests.
Produce results that are tested against given correctness thresholds.
2. Performance tests.
Produce latency and throughput results for offline (static batching)
and online (dynamic batching) scenarios.
Refer to [Quick Start Guide](#quick-start-guide) for further instructions on performing these tests.
## Setup
Ensure you have the following components:
* [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
* [NVIDIA TensorFlow NGC container 22.02](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow)
* [NVIDIA Triton Inference Server NGC container 22.02](https://ngc.nvidia.com/catalog/containers/nvidia:tritonserver)
* [NVIDIA CUDA](https://docs.nvidia.com/cuda/archive//index.html)
* [NVIDIA Ampere](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/), [Volta](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/) or [Turing](https://www.nvidia.com/en-us/geforce/turing/) based GPU
## Quick Start Guide
The instructions below assume you have already cloned the repository,
built the training docker container, preprocessed the Criteo
1TB dataset, run the training and saved a model checkpoint.
If you haven't completed those steps, refer
to the [Quick Start Guide for DLRM](DLRM.md#quick-start-guide)
or the [Quick Start Guide to DCNv2](DCNv2.md#quick-start-guide),
depending on which model you'd like to deploy.
1. Run the training docker container built during the training stage:
```
# set input variables
checkpoint_path=<path_to_checkpoint_saved_during_training>
deploy_path=<destination_path_of_the_triton_model_repository>
dataset_path=<path_to_the_dataset>
mkdir -p $deploy_path
docker run -v $checkpoint_path:$checkpoint_path -v $deploy_path:$deploy_path -v $dataset_path:$dataset_path -it --rm --network=host --ipc=host \
--shm-size=2g --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all --cap-add SYS_NICE train_docker_image \
bash
```
2. Convert the model checkpoint into a Triton model repository:
```
# set input variables inside the container
checkpoint_path=<path_to_checkpoint_saved_during_training>
deploy_path=<destination_path_of_the_triton_model_repository>
dataset_path=<path_to_the_dataset>
# run the deployment
horovodrun -np 1 --mpi-args=--oversubscribe numactl --interleave=all \
python -m deployment.deploy --checkpoint-dir $checkpoint_path --model-repository-path $deploy_path \
--num_gpus 1 --fused_embedding --model-name dlrm --model-precision fp16 --dense-format trt \
--sparse-format tf-savedmodel --memory-threshold-gb 60
```
3. In a separate terminal, start the Triton Inference Server:
```
deploy_path=<destination_path_of_the_triton_model_repository>
docker run -v $deploy_path:$deploy_path -it --rm --network=host --detach --ipc=host \
--shm-size=2g --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all nvcr.io/nvidia/tritonserver:23.02-py3 \
bash -c "tritonserver --model-repository=${deploy_path} \
--pinned-memory-pool-byte-size=4000000000 --cuda-memory-pool-byte-size=0:2000000000 2>&1"
```
4. Measure inference execution speed
```
python -u -m deployment.evaluate_latency --sparse-format tf-savedmodel --model-name dlrm --dataset_path $dataset_path \
--fused-embedding --measurement-request-count 50 --measurement-interval 5000 \
--num-benchmark-samples 262144
```
5. Measure the prediction quality of the deployed model
```
python -u -m deployment.evaluate_accuracy --dataset_path $dataset_path --fused_embedding \
--model_name dlrm --batch_size 16384 --sparse_input_format tf-savedmodel
```
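For quick manual checks outside of the evaluation scripts above, a single request can also be sent with the Triton Python client. The sketch below is illustrative only: the input and output tensor names, shapes, and data types are placeholders and must be taken from the `config.pbtxt` files generated in the model repository.
```
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_ready()

# Placeholder tensor names, shapes and dtypes -- replace with the ones from config.pbtxt.
categorical = httpclient.InferInput("categorical_features", [1, 26], "INT32")
numerical = httpclient.InferInput("numerical_features", [1, 13], "FP32")
categorical.set_data_from_numpy(np.zeros((1, 26), dtype=np.int32))
numerical.set_data_from_numpy(np.zeros((1, 13), dtype=np.float32))

response = client.infer(model_name="dlrm", inputs=[categorical, numerical])
print(response.as_numpy("output"))  # placeholder output name
```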
## Performance
The performance measurements in this document were conducted at the time of publication and may not reflect
the performance achieved from NVIDIA’s latest software release. For the most up-to-date performance measurements, go to
[NVIDIA Data Center Deep Learning Product Performance](https://developer.nvidia.com/deep-learning-performance-training-inference).
### Offline scenario
#### Offline: DLRM on NVIDIA DGX A100 (1x A100 80GB), TensorFlow + TensorRT with FP32, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA DGX A100 (1x A100 80GB) |
| Model architecture | DLRM |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 3.99e+05 | 24 | 143 | 0 | 44 | 336 | 88 | 0 | 626 | 672 | 688 | 729 | 635 |
| 1 | 512 | 1 | 6.90e+05 | 31 | 152 | 0 | 55 | 406 | 91 | 0 | 738 | 770 | 789 | 814 | 735 |
| 2 | 1024 | 1 | 1.22e+06 | 34 | 162 | 0 | 72 | 472 | 94 | 0 | 830 | 863 | 884 | 906 | 834 |
| 3 | 2048 | 1 | 1.68e+06 | 26 | 164 | 0 | 127 | 772 | 124 | 0 | 1199 | 1274 | 1317 | 1341 | 1213 |
| 4 | 4096 | 1 | 2.46e+06 | 36 | 176 | 0 | 160 | 1128 | 157 | 0 | 1653 | 1669 | 1675 | 1716 | 1657 |
| 5 | 8192 | 1 | 3.08e+06 | 37 | 182 | 0 | 327 | 1879 | 222 | 0 | 2612 | 2721 | 2915 | 3135 | 2647 |
| 6 | 16384 | 1 | 3.36e+06 | 39 | 193 | 0 | 668 | 3623 | 349 | 0 | 4822 | 4979 | 5357 | 5505 | 4872 |
| 7 | 32768 | 1 | 3.85e+06 | 42 | 204 | 0 | 991 | 6623 | 627 | 0 | 8439 | 8584 | 8613 | 8768 | 8487 |
<img width="100%" src="./img/inference/tensorflow_dlrm_dgx-a100-80gb_t15_fp32.svg" />
</details>
#### Offline: DLRM on NVIDIA DGX A100 (1x A100 80GB), TensorFlow + TensorRT with FP16, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA DGX A100 (1x A100 80GB) |
| Model architecture | DLRM |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 4.00e+05 | 26 | 144 | 0 | 48 | 326 | 90 | 0 | 631 | 645 | 651 | 679 | 634 |
| 1 | 512 | 1 | 6.65e+05 | 23 | 161 | 0 | 62 | 417 | 99 | 0 | 762 | 779 | 786 | 803 | 762 |
| 2 | 1024 | 1 | 1.23e+06 | 23 | 160 | 0 | 80 | 457 | 106 | 0 | 821 | 837 | 843 | 865 | 826 |
| 3 | 2048 | 1 | 1.95e+06 | 25 | 158 | 0 | 125 | 615 | 123 | 0 | 1030 | 1102 | 1123 | 1157 | 1046 |
| 4 | 4096 | 1 | 2.89e+06 | 26 | 160 | 0 | 204 | 866 | 154 | 0 | 1393 | 1444 | 1515 | 1641 | 1410 |
| 5 | 8192 | 1 | 3.80e+06 | 35 | 173 | 0 | 364 | 1360 | 215 | 0 | 2115 | 2270 | 2377 | 2484 | 2147 |
| 6 | 16384 | 1 | 4.32e+06 | 38 | 209 | 0 | 751 | 2440 | 347 | 0 | 3741 | 3914 | 4060 | 4352 | 3785 |
| 7 | 32768 | 1 | 4.95e+06 | 44 | 223 | 0 | 1294 | 4449 | 614 | 0 | 6604 | 6758 | 6820 | 7107 | 6624 |
<img width="100%" src="./img/inference/tensorflow_dlrm_dgx-a100-80gb_t15_fp16.svg" />
</details>
#### Offline: DLRM on NVIDIA DGX A100 (1x A100 80GB), TensorFlow + TensorRT with FP32, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA DGX A100 (1x A100 80GB) |
| Model architecture | DLRM |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 2.44e+05 | 24 | 153 | 0 | 51 | 715 | 100 | 0 | 1029 | 1077 | 1103 | 1468 | 1043 |
| 1 | 512 | 1 | 4.10e+05 | 30 | 160 | 0 | 63 | 891 | 98 | 0 | 1241 | 1277 | 1289 | 1448 | 1242 |
| 2 | 1024 | 1 | 6.45e+05 | 24 | 157 | 0 | 88 | 1204 | 109 | 0 | 1559 | 1665 | 1705 | 2289 | 1582 |
| 3 | 2048 | 1 | 8.00e+05 | 23 | 160 | 0 | 179 | 2051 | 139 | 0 | 2478 | 2761 | 2880 | 3978 | 2552 |
| 4 | 4096 | 1 | 1.07e+06 | 34 | 190 | 0 | 305 | 3104 | 179 | 0 | 3514 | 4683 | 5312 | 7935 | 3812 |
| 5 | 8192 | 1 | 1.52e+06 | 39 | 201 | 0 | 425 | 4484 | 218 | 0 | 5213 | 5486 | 5567 | 7479 | 5367 |
| 6 | 16384 | 1 | 1.69e+06 | 43 | 221 | 0 | 853 | 8189 | 354 | 0 | 9473 | 10195 | 10620 | 12676 | 9660 |
| 7 | 32768 | 1 | 1.88e+06 | 53 | 267 | 0 | 1199 | 15221 | 631 | 0 | 16969 | 18753 | 20200 | 22143 | 17371 |
<img width="100%" src="./img/inference/tensorflow_dlrm_dgx-a100-80gb_t3_fp32.svg" />
</details>
#### Offline: DLRM on NVIDIA DGX A100 (1x A100 80GB), TensorFlow + TensorRT with FP16, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA DGX A100 (1x A100 80GB) |
| Model architecture | DLRM |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 2.58e+05 | 23 | 159 | 0 | 47 | 661 | 96 | 0 | 981 | 1000 | 1010 | 1103 | 986 |
| 1 | 512 | 1 | 4.33e+05 | 26 | 152 | 0 | 60 | 841 | 95 | 0 | 1182 | 1211 | 1220 | 1264 | 1174 |
| 2 | 1024 | 1 | 7.24e+05 | 23 | 130 | 0 | 76 | 1076 | 103 | 0 | 1402 | 1426 | 1435 | 1609 | 1408 |
| 3 | 2048 | 1 | 9.36e+05 | 24 | 134 | 0 | 124 | 1776 | 125 | 0 | 2131 | 2422 | 2486 | 2556 | 2183 |
| 4 | 4096 | 1 | 1.20e+06 | 27 | 141 | 0 | 215 | 2853 | 161 | 0 | 3236 | 4163 | 4436 | 4952 | 3397 |
| 5 | 8192 | 1 | 1.38e+06 | 38 | 196 | 0 | 398 | 5079 | 224 | 0 | 5625 | 7542 | 8188 | 10051 | 5935 |
| 6 | 16384 | 1 | 1.89e+06 | 45 | 225 | 0 | 797 | 7226 | 347 | 0 | 8472 | 9362 | 10036 | 11189 | 8640 |
| 7 | 32768 | 1 | 2.16e+06 | 43 | 246 | 0 | 1049 | 13171 | 620 | 0 | 14827 | 16124 | 16971 | 18651 | 15129 |
<img width="100%" src="./img/inference/tensorflow_dlrm_dgx-a100-80gb_t3_fp16.svg" />
</details>
#### Offline: DLRM on NVIDIA A30, TensorFlow + TensorRT with FP32, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Model architecture | DLRM |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 2.31e+05 | 24 | 147 | 0 | 47 | 804 | 77 | 0 | 1086 | 1148 | 1224 | 1266 | 1099 |
| 1 | 512 | 1 | 3.67e+05 | 26 | 145 | 0 | 63 | 1070 | 82 | 0 | 1353 | 1552 | 1586 | 1740 | 1386 |
| 2 | 1024 | 1 | 5.36e+05 | 27 | 152 | 0 | 107 | 1517 | 101 | 0 | 1897 | 1977 | 1993 | 2068 | 1904 |
| 3 | 2048 | 1 | 5.53e+05 | 68 | 248 | 0 | 236 | 2997 | 142 | 0 | 3661 | 3928 | 4044 | 4351 | 3691 |
| 4 | 4096 | 1 | 6.18e+05 | 51 | 275 | 0 | 686 | 5374 | 220 | 0 | 6407 | 7397 | 8148 | 10301 | 6606 |
| 5 | 8192 | 1 | 7.94e+05 | 57 | 379 | 0 | 625 | 8812 | 410 | 0 | 9833 | 13872 | 15165 | 15940 | 10283 |
| 6 | 16384 | 1 | 9.77e+05 | 61 | 459 | 1 | 1251 | 14220 | 690 | 0 | 15220 | 20960 | 23930 | 27304 | 16682 |
| 7 | 32768 | 1 | 1.02e+06 | 101 | 577 | 2 | 2188 | 28085 | 1294 | 2 | 30168 | 43267 | 48349 | 54028 | 32249 |
<img width="100%" src="./img/inference/tensorflow_dlrm_a30-24gb_t15_fp32.svg" />
</details>
#### Offline: DLRM on NVIDIA A30, TensorFlow + TensorRT with FP16, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Model architecture | DLRM |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.93e+05 | 59 | 237 | 1 | 65 | 852 | 100 | 1 | 1308 | 1346 | 1360 | 1428 | 1315 |
| 1 | 512 | 1 | 3.16e+05 | 60 | 244 | 1 | 91 | 1110 | 105 | 1 | 1606 | 1675 | 1699 | 1750 | 1612 |
| 2 | 1024 | 1 | 4.98e+05 | 64 | 253 | 1 | 147 | 1458 | 125 | 1 | 2013 | 2191 | 2275 | 2472 | 2049 |
| 3 | 2048 | 1 | 5.87e+05 | 105 | 323 | 1 | 258 | 2621 | 160 | 1 | 3436 | 3631 | 3813 | 4128 | 3469 |
| 4 | 4096 | 1 | 5.43e+05 | 108 | 423 | 2 | 1041 | 5735 | 237 | 1 | 7142 | 9926 | 10563 | 11887 | 7547 |
| 5 | 8192 | 1 | 7.86e+05 | 96 | 439 | 2 | 1155 | 8309 | 380 | 1 | 10056 | 14265 | 15897 | 18104 | 10382 |
| 6 | 16384 | 1 | 1.13e+06 | 96 | 471 | 2 | 1321 | 11777 | 729 | 1 | 13512 | 18506 | 19884 | 23454 | 14397 |
| 7 | 32768 | 1 | 1.27e+06 | 96 | 491 | 2 | 2062 | 22107 | 1272 | 1 | 23498 | 33255 | 38954 | 65158 | 26031 |
<img width="100%" src="./img/inference/tensorflow_dlrm_a30-24gb_t15_fp16.svg" />
</details>
#### Offline: DLRM on NVIDIA A30, TensorFlow + TensorRT with FP32, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Model architecture | DLRM |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.93e+05 | 23 | 186 | 0 | 56 | 986 | 71 | 0 | 1307 | 1435 | 1481 | 1545 | 1322 |
| 1 | 512 | 1 | 2.80e+05 | 26 | 206 | 0 | 86 | 1423 | 79 | 0 | 1814 | 1884 | 1914 | 1973 | 1820 |
| 2 | 1024 | 1 | 1.99e+05 | 62 | 339 | 2 | 165 | 4453 | 119 | 0 | 5200 | 6259 | 6428 | 6966 | 5140 |
| 3 | 2048 | 1 | 3.00e+05 | 50 | 301 | 1 | 721 | 5571 | 167 | 0 | 6006 | 9340 | 10103 | 11385 | 6811 |
| 4 | 4096 | 1 | 3.49e+05 | 61 | 408 | 1 | 1782 | 9183 | 299 | 0 | 11165 | 15907 | 17936 | 23733 | 11734 |
| 5 | 8192 | 1 | 5.87e+05 | 65 | 380 | 1 | 1106 | 12027 | 360 | 0 | 13332 | 18063 | 21316 | 24739 | 13939 |
| 6 | 16384 | 1 | 6.85e+05 | 56 | 398 | 1 | 3061 | 19763 | 674 | 0 | 23218 | 31017 | 34275 | 38914 | 23953 |
| 7 | 32768 | 1 | 7.61e+05 | 69 | 496 | 1 | 9223 | 31973 | 1266 | 0 | 41964 | 55256 | 59519 | 65834 | 43028 |
<img width="100%" src="./img/inference/tensorflow_dlrm_a30-24gb_t3_fp32.svg" />
</details>
#### Offline: DLRM on NVIDIA A30, TensorFlow + TensorRT with FP16, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Model architecture | DLRM |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.79e+05 | 57 | 244 | 1 | 60 | 969 | 93 | 1 | 1416 | 1497 | 1527 | 1637 | 1425 |
| 1 | 512 | 1 | 2.69e+05 | 63 | 264 | 1 | 88 | 1373 | 104 | 1 | 1865 | 1999 | 2050 | 2375 | 1894 |
| 2 | 1024 | 1 | 3.63e+05 | 67 | 253 | 1 | 133 | 2228 | 129 | 1 | 2806 | 2909 | 2933 | 3047 | 2812 |
| 3 | 2048 | 1 | 4.04e+05 | 113 | 344 | 1 | 262 | 4155 | 170 | 1 | 4996 | 5287 | 5401 | 5799 | 5046 |
| 4 | 4096 | 1 | 5.54e+05 | 72 | 277 | 1 | 643 | 6119 | 248 | 1 | 7329 | 9277 | 10541 | 12213 | 7361 |
| 5 | 8192 | 1 | 7.18e+05 | 74 | 313 | 2 | 1193 | 9424 | 382 | 1 | 10820 | 14038 | 15253 | 19589 | 11389 |
| 6 | 16384 | 1 | 8.89e+05 | 82 | 329 | 2 | 1646 | 15666 | 685 | 1 | 17436 | 23288 | 24813 | 27289 | 18411 |
| 7 | 32768 | 1 | 9.44e+05 | 87 | 420 | 2 | 4725 | 28180 | 1277 | 1 | 32825 | 44277 | 49607 | 56222 | 34692 |
<img width="100%" src="./img/inference/tensorflow_dlrm_a30-24gb_t3_fp16.svg" />
</details>
#### Offline: DLRM on NVIDIA T4, TensorFlow + TensorRT with FP32, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Model architecture | DLRM |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.38e+05 | 47 | 282 | 0 | 75 | 1341 | 95 | 0 | 1788 | 2072 | 2144 | 2319 | 1840 |
| 1 | 512 | 1 | 1.87e+05 | 52 | 356 | 0 | 109 | 2078 | 131 | 0 | 2708 | 2936 | 3023 | 3190 | 2726 |
| 2 | 1024 | 1 | 2.34e+05 | 44 | 455 | 0 | 240 | 3395 | 227 | 0 | 4323 | 4653 | 4805 | 5763 | 4361 |
| 3 | 2048 | 1 | 2.59e+05 | 45 | 553 | 0 | 418 | 6382 | 498 | 0 | 7879 | 8220 | 8424 | 9091 | 7896 |
| 4 | 4096 | 1 | 3.15e+05 | 45 | 535 | 0 | 718 | 10922 | 744 | 0 | 12784 | 13496 | 13736 | 17274 | 12964 |
| 5 | 8192 | 1 | 3.47e+05 | 49 | 600 | 0 | 1293 | 20431 | 1183 | 0 | 23484 | 24332 | 24569 | 25045 | 23556 |
| 6 | 16384 | 1 | 3.57e+05 | 58 | 670 | 0 | 2448 | 40605 | 2077 | 0 | 45913 | 47110 | 47411 | 47908 | 45858 |
| 7 | 32768 | 1 | 3.63e+05 | 72 | 769 | 1 | 4837 | 80249 | 3924 | 0 | 89881 | 91684 | 92614 | 94206 | 89852 |
<img width="100%" src="./img/inference/tensorflow_dlrm_t4-16gb_t15_fp32.svg" />
</details>
#### Offline: DLRM on NVIDIA T4, TensorFlow + TensorRT with FP16, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Model architecture | DLRM |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.86e+05 | 35 | 191 | 0 | 75 | 965 | 103 | 0 | 1356 | 1456 | 1501 | 1606 | 1369 |
| 1 | 512 | 1 | 2.48e+05 | 43 | 221 | 0 | 110 | 1513 | 172 | 0 | 2017 | 2263 | 2353 | 2565 | 2059 |
| 2 | 1024 | 1 | 2.81e+05 | 53 | 470 | 0 | 205 | 2676 | 224 | 0 | 3576 | 3950 | 4047 | 4400 | 3628 |
| 3 | 2048 | 1 | 3.38e+05 | 51 | 524 | 0 | 341 | 4735 | 380 | 0 | 5833 | 6743 | 7420 | 8829 | 6031 |
| 4 | 4096 | 1 | 4.29e+05 | 47 | 548 | 0 | 621 | 7603 | 720 | 0 | 9480 | 9910 | 10013 | 12769 | 9539 |
| 5 | 8192 | 1 | 4.75e+05 | 49 | 585 | 0 | 1202 | 14118 | 1191 | 0 | 16936 | 17653 | 18283 | 20753 | 17145 |
| 6 | 16384 | 1 | 5.08e+05 | 55 | 667 | 0 | 2371 | 26920 | 2094 | 0 | 32044 | 33005 | 33383 | 35777 | 32107 |
| 7 | 32768 | 1 | 5.27e+05 | 63 | 747 | 1 | 4668 | 52568 | 3899 | 0 | 62101 | 63747 | 64063 | 66173 | 61946 |
<img width="100%" src="./img/inference/tensorflow_dlrm_t4-16gb_t15_fp16.svg" />
</details>
#### Offline: DLRM on NVIDIA T4, TensorFlow + TensorRT with FP32, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Model architecture | DLRM |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.47e+05 | 35 | 177 | 0 | 80 | 1357 | 91 | 0 | 1697 | 1948 | 2039 | 2342 | 1740 |
| 1 | 512 | 1 | 1.81e+05 | 57 | 238 | 0 | 123 | 2257 | 135 | 0 | 2801 | 3042 | 3118 | 3281 | 2810 |
| 2 | 1024 | 1 | 2.28e+05 | 48 | 490 | 0 | 236 | 3478 | 224 | 0 | 4448 | 4776 | 4885 | 5811 | 4476 |
| 3 | 2048 | 1 | 2.57e+05 | 44 | 530 | 0 | 364 | 6548 | 490 | 0 | 7966 | 8273 | 8391 | 9240 | 7976 |
| 4 | 4096 | 1 | 3.06e+05 | 45 | 518 | 0 | 648 | 11450 | 729 | 0 | 13389 | 13797 | 14082 | 14728 | 13390 |
| 5 | 8192 | 1 | 3.25e+05 | 49 | 570 | 0 | 1253 | 22088 | 1181 | 0 | 24847 | 25946 | 26689 | 36261 | 25141 |
| 6 | 16384 | 1 | 3.37e+05 | 67 | 654 | 1 | 2507 | 43155 | 2069 | 0 | 48132 | 49830 | 50316 | 54283 | 48453 |
| 7 | 32768 | 1 | 3.47e+05 | 77 | 763 | 1 | 4675 | 84544 | 3899 | 0 | 93086 | 96342 | 97109 | 101241 | 93959 |
<img width="100%" src="./img/inference/tensorflow_dlrm_t4-16gb_t3_fp32.svg" />
</details>
#### Offline: DLRM on NVIDIA T4, TensorFlow + TensorRT with FP16, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Model architecture | DLRM |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.72e+05 | 44 | 249 | 0 | 75 | 1006 | 106 | 0 | 1472 | 1623 | 1670 | 1807 | 1480 |
| 1 | 512 | 1 | 2.40e+05 | 50 | 249 | 0 | 108 | 1535 | 180 | 0 | 2085 | 2355 | 2443 | 2656 | 2122 |
| 2 | 1024 | 1 | 2.83e+05 | 52 | 483 | 0 | 222 | 2574 | 272 | 0 | 3560 | 3879 | 4013 | 4351 | 3603 |
| 3 | 2048 | 1 | 3.44e+05 | 49 | 534 | 0 | 346 | 4634 | 376 | 0 | 5863 | 6467 | 6891 | 7474 | 5939 |
| 4 | 4096 | 1 | 4.04e+05 | 46 | 594 | 0 | 713 | 8003 | 735 | 0 | 10131 | 10606 | 10838 | 11176 | 10091 |
| 5 | 8192 | 1 | 4.61e+05 | 47 | 612 | 0 | 1220 | 14633 | 1226 | 0 | 17645 | 18614 | 18848 | 21215 | 17738 |
| 6 | 16384 | 1 | 4.91e+05 | 54 | 651 | 0 | 2406 | 28024 | 2112 | 0 | 33225 | 34406 | 34675 | 35664 | 33247 |
| 7 | 32768 | 1 | 4.94e+05 | 65 | 737 | 1 | 4816 | 56577 | 3944 | 0 | 65870 | 68351 | 69091 | 70905 | 66140 |
<img width="100%" src="./img/inference/tensorflow_dlrm_t4-16gb_t3_fp16.svg" />
</details>
#### Offline: DCNv2 on NVIDIA DGX A100 (1x A100 80GB), TensorFlow + TensorRT with FP32, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA DGX A100 (1x A100 80GB) |
| Model architecture | DCNv2 |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 2.70e+05 | 23 | 149 | 0 | 49 | 630 | 93 | 0 | 929 | 953 | 1047 | 1112 | 944 |
| 1 | 512 | 1 | 4.95e+05 | 23 | 151 | 0 | 59 | 705 | 90 | 0 | 1032 | 1058 | 1172 | 1191 | 1028 |
| 2 | 1024 | 1 | 8.42e+05 | 23 | 152 | 0 | 76 | 862 | 96 | 0 | 1193 | 1233 | 1354 | 1396 | 1209 |
| 3 | 2048 | 1 | 1.08e+06 | 30 | 172 | 0 | 123 | 1421 | 150 | 0 | 1810 | 2047 | 2069 | 4216 | 1896 |
| 4 | 4096 | 1 | 1.37e+06 | 32 | 167 | 0 | 200 | 2414 | 166 | 0 | 2927 | 3072 | 3295 | 3435 | 2979 |
| 5 | 8192 | 1 | 1.49e+06 | 40 | 200 | 0 | 342 | 4649 | 239 | 0 | 5419 | 5587 | 5618 | 5749 | 5470 |
| 6 | 16384 | 1 | 1.41e+06 | 29 | 186 | 0 | 661 | 10358 | 348 | 0 | 11501 | 11719 | 12265 | 12401 | 11582 |
| 7 | 32768 | 1 | 1.37e+06 | 43 | 232 | 0 | 1379 | 21628 | 616 | 0 | 23233 | 23738 | 24043 | 24865 | 23898 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_dgx-a100-80gb_t15_fp32.svg" />
</details>
#### Offline: DCNv2 on NVIDIA DGX A100 (1x A100 80GB), TensorFlow + TensorRT with FP16, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA DGX A100 (1x A100 80GB) |
| Model architecture | DCNv2 |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 3.39e+05 | 24 | 152 | 0 | 46 | 440 | 88 | 0 | 732 | 791 | 800 | 827 | 750 |
| 1 | 512 | 1 | 6.15e+05 | 23 | 150 | 0 | 58 | 505 | 91 | 0 | 826 | 854 | 905 | 935 | 827 |
| 2 | 1024 | 1 | 1.12e+06 | 23 | 150 | 0 | 74 | 566 | 98 | 0 | 901 | 929 | 1002 | 1034 | 911 |
| 3 | 2048 | 1 | 1.55e+06 | 23 | 154 | 0 | 122 | 894 | 121 | 0 | 1302 | 1332 | 1434 | 1465 | 1314 |
| 4 | 4096 | 1 | 2.16e+06 | 24 | 155 | 0 | 166 | 1387 | 157 | 0 | 1871 | 1909 | 2096 | 2173 | 1889 |
| 5 | 8192 | 1 | 2.53e+06 | 30 | 180 | 0 | 333 | 2458 | 231 | 0 | 3195 | 3399 | 3544 | 3731 | 3232 |
| 6 | 16384 | 1 | 2.48e+06 | 40 | 204 | 0 | 765 | 5201 | 367 | 0 | 6501 | 6684 | 7033 | 7235 | 6577 |
| 7 | 32768 | 1 | 2.67e+06 | 42 | 235 | 0 | 1243 | 10114 | 622 | 0 | 12115 | 12815 | 13240 | 14024 | 12256 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_dgx-a100-80gb_t15_fp16.svg" />
</details>
#### Offline: DCNv2 on NVIDIA DGX A100 (1x A100 80GB), TensorFlow + TensorRT with FP32, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA DGX A100 (1x A100 80GB) |
| Model architecture | DCNv2 |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.92e+05 | 24 | 149 | 0 | 52 | 1012 | 91 | 0 | 1300 | 1411 | 1434 | 1555 | 1328 |
| 1 | 512 | 1 | 3.36e+05 | 24 | 152 | 0 | 62 | 1184 | 93 | 0 | 1511 | 1587 | 1658 | 1717 | 1515 |
| 2 | 1024 | 1 | 5.49e+05 | 24 | 155 | 0 | 79 | 1498 | 101 | 0 | 1836 | 1906 | 2009 | 2139 | 1857 |
| 3 | 2048 | 1 | 6.99e+05 | 26 | 156 | 0 | 124 | 2487 | 130 | 0 | 2857 | 3174 | 3308 | 3655 | 2923 |
| 4 | 4096 | 1 | 8.30e+05 | 30 | 177 | 0 | 215 | 4348 | 153 | 0 | 4812 | 5567 | 5971 | 6442 | 4923 |
| 5 | 8192 | 1 | 9.85e+05 | 45 | 209 | 0 | 414 | 7393 | 225 | 0 | 8177 | 8742 | 9208 | 10278 | 8286 |
| 6 | 16384 | 1 | 9.93e+05 | 49 | 233 | 0 | 843 | 14939 | 352 | 0 | 16206 | 17388 | 17870 | 18617 | 16416 |
| 7 | 32768 | 1 | 1.06e+06 | 49 | 259 | 0 | 1131 | 28711 | 628 | 0 | 30315 | 32463 | 33532 | 36270 | 30778 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_dgx-a100-80gb_t3_fp32.svg" />
</details>
#### Offline: DCNv2 on NVIDIA DGX A100 (1x A100 80GB), TensorFlow + TensorRT with FP16, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA DGX A100 (1x A100 80GB) |
| Model architecture | DCNv2 |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.98e+05 | 34 | 198 | 0 | 61 | 884 | 106 | 0 | 1269 | 1327 | 1368 | 1480 | 1283 |
| 1 | 512 | 1 | 3.45e+05 | 29 | 191 | 0 | 76 | 1077 | 104 | 0 | 1467 | 1516 | 1539 | 1596 | 1477 |
| 2 | 1024 | 1 | 5.67e+05 | 36 | 192 | 0 | 101 | 1354 | 113 | 0 | 1782 | 1829 | 1848 | 2143 | 1796 |
| 3 | 2048 | 1 | 7.91e+05 | 36 | 183 | 0 | 158 | 2072 | 131 | 0 | 2553 | 2703 | 2800 | 3127 | 2580 |
| 4 | 4096 | 1 | 1.16e+06 | 36 | 179 | 0 | 254 | 2895 | 166 | 0 | 3449 | 3809 | 3965 | 5094 | 3530 |
| 5 | 8192 | 1 | 1.29e+06 | 55 | 261 | 0 | 449 | 5356 | 224 | 0 | 6194 | 7174 | 7493 | 8730 | 6345 |
| 6 | 16384 | 1 | 1.46e+06 | 46 | 250 | 0 | 748 | 9808 | 369 | 0 | 10971 | 12202 | 12713 | 14880 | 11221 |
| 7 | 32768 | 1 | 1.61e+06 | 44 | 266 | 0 | 1214 | 18171 | 659 | 0 | 19841 | 20937 | 23008 | 28718 | 20354 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_dgx-a100-80gb_t3_fp16.svg" />
</details>
#### Offline: DCNv2 on NVIDIA A30, TensorFlow + TensorRT with FP32, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Model architecture | DCNv2 |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.40e+05 | 60 | 222 | 1 | 51 | 1396 | 92 | 1 | 1789 | 1869 | 2256 | 2394 | 1823 |
| 1 | 512 | 1 | 2.35e+05 | 60 | 217 | 1 | 66 | 1721 | 100 | 1 | 2123 | 2375 | 2530 | 2916 | 2166 |
| 2 | 1024 | 1 | 3.43e+05 | 76 | 244 | 1 | 118 | 2410 | 124 | 1 | 2980 | 3053 | 3084 | 3226 | 2974 |
| 3 | 2048 | 1 | 3.72e+05 | 90 | 361 | 1 | 208 | 4646 | 169 | 1 | 5452 | 5804 | 6076 | 6376 | 5476 |
| 4 | 4096 | 1 | 5.14e+05 | 96 | 429 | 1 | 368 | 6770 | 262 | 1 | 7888 | 8321 | 8427 | 8842 | 7927 |
| 5 | 8192 | 1 | 6.25e+05 | 94 | 442 | 2 | 692 | 11322 | 537 | 1 | 13014 | 13343 | 13442 | 14706 | 13090 |
| 6 | 16384 | 1 | 6.41e+05 | 103 | 581 | 2 | 1292 | 22762 | 760 | 1 | 25280 | 27910 | 28633 | 29536 | 25501 |
| 7 | 32768 | 1 | 6.88e+05 | 112 | 641 | 2 | 2666 | 42753 | 1336 | 2 | 46470 | 50954 | 52078 | 56703 | 47512 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_a30-24gb_t15_fp32.svg" />
</details>
#### Offline: DCNv2 on NVIDIA A30, TensorFlow + TensorRT with FP16, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Model architecture | DCNv2 |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.84e+05 | 59 | 234 | 1 | 52 | 937 | 98 | 1 | 1369 | 1513 | 1569 | 1640 | 1382 |
| 1 | 512 | 1 | 2.99e+05 | 64 | 231 | 1 | 68 | 1231 | 107 | 1 | 1678 | 1849 | 1956 | 2055 | 1703 |
| 2 | 1024 | 1 | 4.25e+05 | 73 | 271 | 1 | 147 | 1781 | 127 | 1 | 2368 | 2578 | 2644 | 2786 | 2401 |
| 3 | 2048 | 1 | 4.97e+05 | 104 | 337 | 1 | 258 | 3224 | 171 | 1 | 4019 | 4501 | 4761 | 5127 | 4096 |
| 4 | 4096 | 1 | 4.71e+05 | 77 | 306 | 2 | 517 | 7521 | 256 | 1 | 8184 | 10650 | 12194 | 15546 | 8680 |
| 5 | 8192 | 1 | 7.56e+05 | 92 | 383 | 2 | 672 | 9269 | 391 | 1 | 9902 | 13945 | 14758 | 17802 | 10810 |
| 6 | 16384 | 1 | 9.28e+05 | 96 | 500 | 2 | 1141 | 15117 | 723 | 1 | 16894 | 21048 | 22180 | 25198 | 17580 |
| 7 | 32768 | 1 | 1.03e+06 | 103 | 589 | 2 | 2228 | 27519 | 1320 | 1 | 30467 | 35800 | 36760 | 39742 | 31762 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_a30-24gb_t15_fp16.svg" />
</details>
#### Offline: DCNv2 on NVIDIA A30, TensorFlow + TensorRT with FP32, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Model architecture | DCNv2 |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.41e+05 | 34 | 201 | 0 | 68 | 1422 | 86 | 0 | 1798 | 1937 | 2161 | 2380 | 1811 |
| 1 | 512 | 1 | 2.50e+05 | 37 | 193 | 0 | 91 | 1629 | 91 | 0 | 2015 | 2233 | 2318 | 2554 | 2041 |
| 2 | 1024 | 1 | 2.38e+05 | 39 | 248 | 0 | 149 | 3730 | 127 | 0 | 4226 | 5017 | 5430 | 6047 | 4293 |
| 3 | 2048 | 1 | 3.25e+05 | 64 | 331 | 0 | 209 | 5504 | 182 | 0 | 5933 | 7999 | 8351 | 9265 | 6290 |
| 4 | 4096 | 1 | 4.33e+05 | 60 | 336 | 0 | 345 | 8492 | 224 | 0 | 8519 | 12891 | 13500 | 14957 | 9457 |
| 5 | 8192 | 1 | 5.05e+05 | 69 | 328 | 0 | 757 | 14507 | 489 | 0 | 15555 | 20018 | 21217 | 24015 | 16150 |
| 6 | 16384 | 1 | 5.29e+05 | 70 | 452 | 1 | 1861 | 27757 | 729 | 0 | 30222 | 36890 | 38138 | 42585 | 30870 |
| 7 | 32768 | 1 | 5.61e+05 | 85 | 602 | 1 | 3301 | 52915 | 1302 | 0 | 57743 | 66789 | 69415 | 80008 | 58206 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_a30-24gb_t3_fp32.svg" />
</details>
#### Offline: DCNv2 on NVIDIA A30, TensorFlow + TensorRT with FP16, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA A30 |
| Model architecture | DCNv2 |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.75e+05 | 31 | 155 | 0 | 52 | 1136 | 78 | 0 | 1419 | 1591 | 1640 | 1831 | 1452 |
| 1 | 512 | 1 | 2.71e+05 | 33 | 163 | 0 | 82 | 1520 | 80 | 0 | 1849 | 1924 | 1958 | 3602 | 1878 |
| 2 | 1024 | 1 | 3.14e+05 | 73 | 260 | 0 | 148 | 2651 | 110 | 0 | 3218 | 3445 | 3536 | 5800 | 3242 |
| 3 | 2048 | 1 | 2.80e+05 | 58 | 209 | 0 | 245 | 6634 | 156 | 0 | 6994 | 10021 | 10424 | 10919 | 7302 |
| 4 | 4096 | 1 | 4.48e+05 | 68 | 283 | 0 | 346 | 8211 | 219 | 0 | 8385 | 12535 | 13358 | 16307 | 9127 |
| 5 | 8192 | 1 | 5.62e+05 | 77 | 271 | 0 | 650 | 13167 | 366 | 0 | 14355 | 18585 | 19638 | 22077 | 14531 |
| 6 | 16384 | 1 | 6.11e+05 | 83 | 377 | 0 | 2297 | 23271 | 680 | 0 | 26604 | 34647 | 36354 | 39316 | 26708 |
| 7 | 32768 | 1 | 7.17e+05 | 73 | 514 | 1 | 5409 | 38366 | 1279 | 0 | 44389 | 55813 | 58518 | 70669 | 45642 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_a30-24gb_t3_fp16.svg" />
</details>
#### Offline: DCNv2 on NVIDIA T4, TensorFlow + TensorRT with FP32, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Model architecture | DCNv2 |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 7.86e+04 | 44 | 375 | 0 | 92 | 2501 | 236 | 0 | 3248 | 3415 | 3457 | 3569 | 3248 |
| 1 | 512 | 1 | 9.13e+04 | 44 | 433 | 0 | 131 | 4681 | 301 | 0 | 5447 | 5701 | 5830 | 9042 | 5590 |
| 2 | 1024 | 1 | 1.04e+05 | 45 | 435 | 0 | 203 | 8767 | 382 | 0 | 9849 | 10118 | 10238 | 11009 | 9832 |
| 3 | 2048 | 1 | 1.08e+05 | 46 | 407 | 0 | 341 | 17573 | 481 | 0 | 19072 | 19665 | 19791 | 20127 | 18848 |
| 4 | 4096 | 1 | 1.11e+05 | 49 | 433 | 0 | 620 | 34940 | 753 | 0 | 36648 | 38501 | 39238 | 40913 | 36795 |
| 5 | 8192 | 1 | 1.11e+05 | 54 | 520 | 0 | 1183 | 70303 | 1170 | 0 | 72605 | 75982 | 76263 | 80393 | 73230 |
| 6 | 16384 | 1 | 1.10e+05 | 67 | 587 | 0 | 2425 | 143325 | 2060 | 0 | 148529 | 150641 | 151048 | 154147 | 148464 |
| 7 | 32768 | 1 | 1.07e+05 | 98 | 846 | 1 | 4860 | 295283 | 3870 | 0 | 305032 | 308246 | 310093 | 311552 | 304958 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_t4-16gb_t15_fp32.svg" />
</details>
#### Offline: DCNv2 on NVIDIA T4, TensorFlow + TensorRT with FP16, 4B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Model architecture | DCNv2 |
| Model size | 4B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.09e+05 | 49 | 439 | 0 | 94 | 1579 | 171 | 0 | 2330 | 2553 | 2604 | 2704 | 2332 |
| 1 | 512 | 1 | 1.77e+05 | 51 | 367 | 0 | 124 | 2113 | 225 | 0 | 2880 | 3038 | 3080 | 3219 | 2880 |
| 2 | 1024 | 1 | 2.54e+05 | 40 | 361 | 0 | 198 | 3053 | 360 | 0 | 4000 | 4132 | 4192 | 4341 | 4012 |
| 3 | 2048 | 1 | 2.77e+05 | 49 | 535 | 0 | 348 | 5934 | 514 | 0 | 7334 | 7648 | 7793 | 9272 | 7380 |
| 4 | 4096 | 1 | 3.11e+05 | 48 | 541 | 0 | 644 | 11095 | 796 | 0 | 12911 | 13438 | 15733 | 18127 | 13124 |
| 5 | 8192 | 1 | 3.34e+05 | 50 | 576 | 0 | 1180 | 21472 | 1187 | 0 | 24101 | 25307 | 27011 | 30350 | 24465 |
| 6 | 16384 | 1 | 3.48e+05 | 59 | 662 | 0 | 2345 | 41747 | 2110 | 0 | 46995 | 47956 | 48105 | 48710 | 46923 |
| 7 | 32768 | 1 | 3.49e+05 | 69 | 756 | 1 | 4705 | 83982 | 3881 | 0 | 93290 | 95408 | 96025 | 97009 | 93394 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_t4-16gb_t15_fp16.svg" />
</details>
#### Offline: DCNv2 on NVIDIA T4, TensorFlow + TensorRT with FP32, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Model architecture | DCNv2 |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP32 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 7.24e+04 | 46 | 377 | 0 | 94 | 2710 | 297 | 0 | 3514 | 3707 | 3770 | 3848 | 3524 |
| 1 | 512 | 1 | 8.90e+04 | 48 | 467 | 0 | 133 | 4741 | 345 | 0 | 5717 | 5959 | 6026 | 6227 | 5734 |
| 2 | 1024 | 1 | 1.01e+05 | 46 | 562 | 0 | 217 | 8898 | 418 | 0 | 10061 | 10551 | 10735 | 12662 | 10141 |
| 3 | 2048 | 1 | 9.99e+04 | 46 | 612 | 0 | 431 | 18812 | 562 | 0 | 20075 | 21090 | 22357 | 31101 | 20463 |
| 4 | 4096 | 1 | 1.06e+05 | 46 | 655 | 0 | 727 | 36089 | 753 | 0 | 38056 | 39816 | 40214 | 48450 | 38270 |
| 5 | 8192 | 1 | 1.09e+05 | 49 | 668 | 1 | 1272 | 71380 | 1213 | 0 | 74280 | 75644 | 76134 | 77127 | 74583 |
| 6 | 16384 | 1 | 1.06e+05 | 72 | 817 | 1 | 2419 | 147768 | 2099 | 0 | 153166 | 155120 | 155385 | 156117 | 153176 |
| 7 | 32768 | 1 | 1.02e+05 | 89 | 940 | 1 | 4824 | 311509 | 3941 | 0 | 321135 | 325901 | 327276 | 330134 | 321304 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_t4-16gb_t3_fp32.svg" />
</details>
#### Offline: DCNv2 on NVIDIA T4, TensorFlow + TensorRT with FP16, 22B parameters
Our results were obtained using the following configuration:
| Parameter Name | Parameter Value |
|:-----------------------------|:-----------------------------|
| GPU |NVIDIA T4 |
| Model architecture | DCNv2 |
| Model size | 22B parameters |
| Backend |TensorFlow + NVIDIA TensorRT|
| Backend accelerator |-|
| Precision |FP16 |
| Model format |NVIDIA Triton Ensemble (TensorFlow SavedModel + NVIDIA TensorRT)|
| Max batch size |32768|
| Number of model instances |1|
| Export Format | TensorFlow SavedModel|
| NVIDIA TensorRT Capture CUDA Graph | Enabled|
| Device Kind | gpu|
<details><summary>Results Table</summary>
| | Batch | Concurrency | Inferences/Second | Client Send | Network+Server Send/Recv | Server Queue | Server Compute Input | Server Compute Infer | Server Compute Output | Client Recv | p50 latency | p90 latency | p95 latency | p99 latency | avg latency |
|---:|--------:|--------------:|--------------------:|--------------:|---------------------------:|---------------:|-----------------------:|-----------------------:|------------------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| 0 | 256 | 1 | 1.08e+05 | 44 | 398 | 0 | 93 | 1710 | 119 | 0 | 2341 | 2613 | 2725 | 3053 | 2364 |
| 1 | 512 | 1 | 1.57e+05 | 62 | 485 | 0 | 131 | 2418 | 147 | 0 | 3229 | 3460 | 3530 | 3859 | 3243 |
| 2 | 1024 | 1 | 2.15e+05 | 68 | 513 | 0 | 208 | 3619 | 339 | 0 | 4692 | 5164 | 5571 | 5999 | 4747 |
| 3 | 2048 | 1 | 2.64e+05 | 71 | 570 | 0 | 406 | 6183 | 504 | 0 | 7687 | 8198 | 8412 | 9083 | 7734 |
| 4 | 4096 | 1 | 3.02e+05 | 62 | 618 | 0 | 677 | 11380 | 792 | 0 | 13459 | 13972 | 14300 | 15488 | 13529 |
| 5 | 8192 | 1 | 3.21e+05 | 68 | 618 | 0 | 1257 | 22300 | 1193 | 0 | 25401 | 26175 | 26493 | 27150 | 25436 |
| 6 | 16384 | 1 | 3.37e+05 | 69 | 704 | 1 | 2488 | 43214 | 2089 | 0 | 48548 | 49881 | 50164 | 50964 | 48565 |
| 7 | 32768 | 1 | 3.36e+05 | 69 | 838 | 1 | 4720 | 87391 | 3864 | 0 | 96664 | 98617 | 99489 | 100986 | 96883 |
<img width="100%" src="./img/inference/tensorflow_dcnv2_t4-16gb_t3_fp16.svg" />
</details>
## Advanced
### Latency explanation
A typical Triton Inference Server pipeline can be broken down into the following steps:
1. The client serializes the inference request into a message and sends it to
the server (Client Send).
2. The message travels over the network from the client to the server (Network).
3. The message arrives at the server and is deserialized (Server Receive).
4. The request is placed on the queue (Server Queue).
5. The request is removed from the queue and computed (Server Compute).
6. The completed request is serialized in a message and sent back to
the client (Server Send).
7. The completed message then travels over the network from the server
to the client (Network).
8. The completed message is deserialized by the client and processed as
a completed inference request (Client Receive).
Generally, for local clients, steps 1-4 and 6-8 will only occupy
a small fraction of time compared to step 5. In distributed systems and online processing
where the client and the server side are connected through a network, the send and receive steps might have an impact
on overall processing performance. To help identify possible bottlenecks, detailed latency-breakdown charts are presented for the online scenario cases.
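As a quick sanity check, the component columns in the offline tables above approximately sum to the reported average latency (values are in microseconds). The sketch below is a minimal illustration of that relationship and is not part of the deployment toolkit; the numbers are taken from the batch-256 row of the FP16, 4B-parameter DGX A100 table.
```python
# Minimal sketch: recombine the latency components of one result-table row.
row = {
    "Client Send": 24,
    "Network+Server Send/Recv": 152,
    "Server Queue": 0,
    "Server Compute Input": 46,
    "Server Compute Infer": 440,
    "Server Compute Output": 88,
    "Client Recv": 0,
}

total_us = sum(row.values())  # ~750 us, matching the reported avg latency
compute_us = sum(v for k, v in row.items() if k.startswith("Server Compute"))

print(f"approximate end-to-end latency: {total_us} us")
print(f"server compute share: {100 * compute_us / total_us:.1f} %")
```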
## Release Notes
We’re constantly refining and improving our performance on AI
and HPC workloads, even on the same hardware, with frequent updates
to our software stack. For our latest performance data, refer
to these pages for
[AI](https://developer.nvidia.com/deep-learning-performance-training-inference)
and [HPC](https://developer.nvidia.com/hpc-application-performance) benchmarks.
### Changelog
April 2023
- Initial release
### Known issues
- There are no known issues with this model.
|
Tools/PyTorch/TimeSeriesPredictionPlatform/triton/deployment_toolkit | deployment_toolkit | warmup | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pathlib
from distutils.version import LooseVersion
from importlib.metadata import version
from typing import List
TRITON_CLIENT_VERSION = LooseVersion(version("tritonclient"))
# method from PEP-366 to support relative import in executed modules
if __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .core import BatchingMode, EvaluationMode, MeasurementMode, OfflineMode
from .perf_analyzer import PerfAnalyzer, PerfAnalyzerConfig
from .utils import parse_server_url
LOGGER = logging.getLogger("warmup")
def performance_evaluation_warmup(
server_url: str,
model_name: str,
batch_sizes: List[int],
number_of_triton_instances: int,
number_of_model_instances: int,
input_data: str,
input_shapes: List[str],
measurement_mode: MeasurementMode,
measurement_interval: int,
measurement_request_count: int,
batching_mode: BatchingMode,
offline_mode: OfflineMode,
evaluation_mode: EvaluationMode,
output_shared_memory_size: int,
):
protocol, host, port = parse_server_url(server_url)
measurement_interval = 2 * measurement_interval
measurement_request_count = 2 * measurement_request_count
if batching_mode == BatchingMode.STATIC:
if len(batch_sizes) == 1:
batch_sizes = {batch_sizes[0]}
else:
batch_sizes = sorted({1, batch_sizes[-1]})
max_concurrency = 1
min_concurrency = 1
step = 1
elif batching_mode == BatchingMode.DYNAMIC:
max_batch_size = max(batch_sizes)
max_total_requests = 2 * max_batch_size * number_of_triton_instances * number_of_model_instances
max_concurrency = min(256, max_total_requests)
step = max(1, max_concurrency // 2)
min_concurrency = step
batch_sizes = [max(1, max_total_requests // 256)]
else:
raise ValueError(f"Unsupported batching mode: {batching_mode}")
for batch_size in batch_sizes:
for concurrency in range(min_concurrency, max_concurrency + step, step):
params = {
"model-name": model_name,
"model-version": 1,
"batch-size": batch_size,
"url": f"{host}:{port}",
"protocol": protocol,
"input-data": input_data,
"measurement-interval": measurement_interval,
"concurrency-range": f"{concurrency}:{concurrency}:1",
}
if TRITON_CLIENT_VERSION >= LooseVersion("2.11.0"):
params["measurement-mode"] = measurement_mode.value
params["measurement-request-count"] = measurement_request_count
if evaluation_mode == EvaluationMode.OFFLINE:
params["shared-memory"] = offline_mode.value
params["output-shared-memory-size"] = output_shared_memory_size
config = PerfAnalyzerConfig()
for param, value in params.items():
config[param] = value
for shape in input_shapes:
config["shape"] = shape
perf_analyzer = PerfAnalyzer(config=config)
perf_analyzer.run()
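# Example invocation (a minimal sketch, not part of the original module). All
# argument values below are illustrative assumptions; in the toolkit they come
# from the runner configuration. MeasurementMode.COUNT_WINDOWS and
# OfflineMode.SYSTEM are assumed member names of the imported enums.
if __name__ == "__main__":
    performance_evaluation_warmup(
        server_url="grpc://localhost:8001",
        model_name="tft",
        batch_sizes=[1, 8, 64],
        number_of_triton_instances=1,
        number_of_model_instances=1,
        input_data="random",
        input_shapes=[],
        measurement_mode=MeasurementMode.COUNT_WINDOWS,
        measurement_interval=5000,
        measurement_request_count=50,
        batching_mode=BatchingMode.STATIC,
        offline_mode=OfflineMode.SYSTEM,
        evaluation_mode=EvaluationMode.OFFLINE,
        output_shared_memory_size=102400,
    )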
|
PyTorch/Forecasting/TFT/triton/runner | runner | downloader | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pathlib
import shutil
import urllib.request
from typing import Any, Callable
from zipfile import ZipFile
from retrying import retry
from tqdm.auto import tqdm
# method from PEP-366 to support relative import in executed modules
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .logger import LOGGER
from .exceptions import RunnerException
def unzip(checkpoint_path: pathlib.Path, archive_path: pathlib.Path) -> None:
"""
Unzip archive to the provided path
Args:
checkpoint_path: Path where the archive has to be unpacked
archive_path: Path to the archive file
Returns:
None
"""
LOGGER.info(f"Creating directory for checkpoint: {checkpoint_path.name}")
checkpoint_path.mkdir(parents=True, exist_ok=True)
LOGGER.info(f"Unpacking checkpoint files {checkpoint_path}")
with ZipFile(archive_path, "r") as zf:
zf.extractall(path=checkpoint_path)
LOGGER.info("done")
LOGGER.info(f"Removing zip file: {archive_path}")
archive_path.unlink()
LOGGER.info("done")
def download_progress(t: Any) -> Callable:
"""
Progress bar
Args:
t: progress
Returns:
Callable
"""
last_b = [0]
def update_to(b: int = 1, bsize: int = 1, tsize: int = None):
if tsize not in (None, -1):
t.total = tsize
t.update((b - last_b[0]) * bsize)
last_b[0] = b
return update_to
@retry(stop_max_attempt_number=3)
def download(checkpoint_url: str, checkpoint_path: pathlib.Path) -> None:
"""
Download checkpoint from given url to provided path
Args:
checkpoint_url: Url from which checkpoint has to be downloaded
checkpoint_path: Path where checkpoint has to be stored
Returns:
None
"""
LOGGER.info(f"Downloading checkpoint from {checkpoint_url}")
with tqdm(unit="B") as t:
reporthook = download_progress(t)
result = urllib.request.urlretrieve(checkpoint_url, reporthook=reporthook)
filename = result[0]
LOGGER.info(f"Checkpoint saved in {filename}")
file_path = pathlib.Path(filename)
if not file_path.is_file() and not file_path.is_dir():
raise RunnerException(f"Checkpoint {filename} does not exist")
LOGGER.info(f"Moving checkpoint to {checkpoint_path.parent}")
shutil.move(file_path, checkpoint_path.parent / file_path.name)
LOGGER.info("done")
archive_path = checkpoint_path.parent / file_path.name
unzip(checkpoint_path, archive_path)
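# Example usage (an illustrative sketch only; the URL and target path are
# assumptions, not values shipped with the runner):
#
#   checkpoint_path = pathlib.Path("runner_workspace/checkpoints/tft")
#   download(
#       checkpoint_url="https://example.com/checkpoints/tft.zip",
#       checkpoint_path=checkpoint_path,
#   )
#
# download() retries up to 3 times, moves the fetched archive next to
# checkpoint_path, and unzip() then extracts it into checkpoint_path and
# deletes the zip file.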
|
PyTorch/LanguageModeling/BART/scripts/params | params | xsum_params | #!/usr/bin/env bash
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# XSUM dataset summarization configurations for NVIDIA DGX A100 (8x NVIDIA A100 40GB GPU)
dgxa100_8gpu_bf16 ()
{
DATA_DIR=data/xsum
CKPT_PATH=data/nvidia_pretrained/bart_large/
CONFIG_PATH=configs/config_xsum.json
NUM_GPU=8
LR=1.25e-4
BS=40
ACCUM=1
PRECISION="bf16"
TRAIN_STEPS=2000
WARMUP_STEPS=50
MAX_SOURCE_LEN=1024
MAX_TARGET_LEN=60
EVAL_BEAMS=6
EVAL_BS=128
PRED_BS=128
PRELN=true
echo $DATA_DIR $CKPT_PATH $CONFIG_PATH $NUM_GPU $LR $BS $ACCUM $PRECISION $TRAIN_STEPS $WARMUP_STEPS $MAX_SOURCE_LEN $MAX_TARGET_LEN $EVAL_BEAMS $EVAL_BS $PRED_BS $PRELN
}
dgxa100_8gpu_bf16_eval ()
{
DATA_DIR=data/xsum
CONFIG_PATH=configs/config_xsum.json
NUM_GPU=8
PRECISION="bf16"
MAX_SOURCE_LEN=1024
MAX_TARGET_LEN=60
EVAL_BEAMS=6
PRED_BS=128
echo $PRED_BS $NUM_GPU $PRECISION $EVAL_BEAMS $MAX_SOURCE_LEN $MAX_TARGET_LEN $DATA_DIR $CONFIG_PATH
}
dgxa100_8gpu_tf32 ()
{
DATA_DIR=data/xsum
CKPT_PATH=data/nvidia_pretrained/bart_large/
CONFIG_PATH=configs/config_xsum.json
NUM_GPU=8
LR=1.25e-4
BS=24
ACCUM=1
PRECISION="tf32"
TRAIN_STEPS=3333
WARMUP_STEPS=50
MAX_SOURCE_LEN=1024
MAX_TARGET_LEN=60
EVAL_BEAMS=6
EVAL_BS=128
PRED_BS=64
PRELN=true
echo $DATA_DIR $CKPT_PATH $CONFIG_PATH $NUM_GPU $LR $BS $ACCUM $PRECISION $TRAIN_STEPS $WARMUP_STEPS $MAX_SOURCE_LEN $MAX_TARGET_LEN $EVAL_BEAMS $EVAL_BS $PRED_BS $PRELN
}
dgxa100_8gpu_tf32_eval ()
{
DATA_DIR=data/xsum
CONFIG_PATH=configs/config_xsum.json
NUM_GPU=8
PRECISION="tf32"
MAX_SOURCE_LEN=1024
MAX_TARGET_LEN=60
EVAL_BEAMS=6
PRED_BS=64
echo $PRED_BS $NUM_GPU $PRECISION $EVAL_BEAMS $MAX_SOURCE_LEN $MAX_TARGET_LEN $DATA_DIR $CONFIG_PATH
}
|
PyTorch/SpeechRecognition/wav2vec2/utils | utils | generate_dictionary | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import Counter
import sys
in_ltr = sys.argv[1]
out_dict = sys.argv[2]
counter = Counter()
with open(in_ltr) as ltr:
for line in ltr:
counter.update(line[:-1].replace(" ", ""))
with open(out_dict, "w") as out:
for letter, cnt in counter.most_common():
out.write(f"{letter} {cnt}\n")
|
PyTorch/Forecasting/TFT/triton/runner | runner | exporter | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pathlib
# method from PEP-366 to support relative import in executed modules
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .core import Command
from .exceptions import RunnerException
from .stages import Stage
class CommandsExporter:
"""
Exports stage commands to BASH scripts
"""
def __init__(self, scripts_dir: pathlib.Path):
"""
Args:
scripts_dir: Path where the scripts should be stored
"""
self._scripts_dir = scripts_dir
def export(self, stage: Stage) -> Command:
"""
Export stage commands to script and return new command to execute
Args:
stage: Stage object with commands
Returns:
Command object with script execution command
"""
filename = self._get_filename(stage.label)
file_path = self._scripts_dir / filename
with open(file_path, "w+") as stagefile:
stagefile.write("set -x\n")
stagefile.write("set -e\n")
stagefile.write("export PYTHONUNBUFFERED=1\n")
stagefile.write("export PYTHONPATH=`pwd`\n")
for command in stage.commands:
stagefile.write(str(command))
result = os.system(f'ex +"set syn=sh" +"norm gg=G" -cwq {file_path}')
if result != 0:
raise RunnerException(f"Failed running {filename} script formatting. Exit code {result}")
command = Command(f"bash -xe {file_path.as_posix()}")
return command
def _get_filename(self, label: str):
"""
Generate filename for script based on label
Args:
label: String with stage label
Returns:
String with script filename
"""
filename = label.replace(" ", "_").lower()
filename = f"{filename}.sh"
return filename
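# Example usage (an illustrative sketch; the Stage instance and the scripts
# directory are assumptions, in practice both are created by the runner
# pipeline before exporting):
#
#   exporter = CommandsExporter(scripts_dir=pathlib.Path("runner_workspace/scripts"))
#   command = exporter.export(stage)  # writes "<stage_label>.sh", formats it
#                                     # with ex, and returns "bash -xe <script>"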
|
Tools/PyTorch/TimeSeriesPredictionPlatform/models/tft_pyt/triton/deployment_toolkit | deployment_toolkit | __init__ | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. |
TensorFlow/LanguageModeling/BERT/notebooks | notebooks | bert_squad_tf_inference_colab | #!/usr/bin/env python
# coding: utf-8
# In[ ]:
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# <a href="https://colab.research.google.com/github/NVIDIA/DeepLearningExamples/blob/master/TensorFlow/LanguageModeling/BERT/notebooks/bert_squad_tf_inference_colab.ipynb#scrollTo=5hRb96NKE3X0" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# <img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;">
#
# # BERT Question Answering Inference with Mixed Precision
#
# ## 1. Overview
#
# Bidirectional Embedding Representations from Transformers (BERT), is a method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks.
#
# The original paper can be found here: https://arxiv.org/abs/1810.04805.
#
# NVIDIA's BERT 19.10 is an optimized version of Google's official implementation, leveraging mixed precision arithmetic and Tensor Cores on V100 GPUs for faster training times while maintaining target accuracy.
# ### 1.a Learning objectives
#
# This notebook demonstrates:
# - Inference on QA task with BERT Large model
# - The use/download of fine-tuned NVIDIA BERT models
# - Use of Mixed Precision for Inference
# ## 2. Requirements
#
# ### 2.a GPU
#
# Before running this notebook, please set the Colab runtime environment to GPU via the menu *Runtime => Change runtime type => GPU*.
#
# This demo will work on any NVIDIA GPU with CUDA cores, though for improved FP16 inference, a Volta, Turing or newer generation GPU with Tensor cores is desired. On Google Colab, this normally means a T4 GPU. If you are assigned an older K80 GPU, another trial at another time might give you a T4 GPU.
# In[ ]:
# Select a lower version of TensorFlow on Google Colab
get_ipython().run_line_magic('tensorflow_version', '1.x')
import tensorflow
print(tensorflow.__version__)
# In[ ]:
get_ipython().system('nvidia-smi')
# ### 2.b Download the required files from NVIDIA-Github:
# In[ ]:
get_ipython().system('wget -nc -q --show-progress -O ./master.zip https://github.com/NVIDIA/DeepLearningExamples/archive/master.zip')
get_ipython().system('unzip -q -n -d . ./master.zip')
# In[ ]:
import os
WORKSPACE_DIR='./DeepLearningExamples-master/TensorFlow/LanguageModeling/BERT/'
os.chdir(WORKSPACE_DIR)
print (os.getcwd())
# ## 3. BERT Inference: Question Answering
#
# We can run inference on a fine-tuned BERT model for tasks like Question Answering.
#
# Here we use a BERT model fine-tuned on the [SQuAD 2.0 dataset](https://rajpurkar.github.io/SQuAD-explorer/), which combines the 100,000+ question-answer pairs on 500+ articles from SQuAD 1.1 with over 50,000 new, unanswerable questions.
# ### 3.a Paragraph and Queries
#
# In this example we will ask our BERT model questions related to the following paragraph:
#
# **The Apollo Program**
# _"The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first humans on the Moon from 1969 to 1972. First conceived during Dwight D. Eisenhower's administration as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was later dedicated to President John F. Kennedy's national goal of landing a man on the Moon and returning him safely to the Earth by the end of the 1960s, which he proposed in a May 25, 1961, address to Congress. Project Mercury was followed by the two-man Project Gemini. The first manned flight of Apollo was in 1968. Apollo ran from 1961 to 1972, and was supported by the two-man Gemini program which ran concurrently with it from 1962 to 1966. Gemini missions developed some of the space travel techniques that were necessary for the success of the Apollo missions. Apollo used Saturn family rockets as launch vehicles. Apollo/Saturn vehicles were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three manned missions in 1973-74, and the Apollo-Soyuz Test Project, a joint Earth orbit mission with the Soviet Union in 1975."_
#
# The questions and relative answers expected are shown below:
#
# - **Q1:** "What project put the first Americans into space?"
# - **A1:** "Project Mercury"
# - **Q2:** "What program was created to carry out these projects and missions?"
# - **A2:** "The Apollo program"
# - **Q3:** "What year did the first manned Apollo flight occur?"
# - **A3:** "1968"
# - **Q4:** "What President is credited with the original notion of putting Americans in space?"
# - **A4:** "John F. Kennedy"
# - **Q5:** "Who did the U.S. collaborate with on an Earth orbit mission in 1975?"
# - **A5:** "Soviet Union"
# - **Q6:** "How long did Project Apollo run?"
# - **A6:** "1961 to 1972"
# - **Q7:** "What program helped develop space travel techniques that Project Apollo used?"
# - **A7:** "Gemini Mission"
# - **Q8:** "What space station supported three manned missions in 1973-1974?"
# - **A8:** "Skylab"
#
# ---
#
# The paragraph and the questions can be easily customized by changing the code below:
#
# ---
# In[ ]:
get_ipython().run_cell_magic('writefile', 'input.json', '{"data": \n [\n {"title": "Project Apollo",\n "paragraphs": [\n {"context":"The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first humans on the Moon from 1969 to 1972. First conceived during Dwight D. Eisenhower\'s administration as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was later dedicated to President John F. Kennedy\'s national goal of landing a man on the Moon and returning him safely to the Earth by the end of the 1960s, which he proposed in a May 25, 1961, address to Congress. Project Mercury was followed by the two-man Project Gemini. The first manned flight of Apollo was in 1968. Apollo ran from 1961 to 1972, and was supported by the two man Gemini program which ran concurrently with it from 1962 to 1966. Gemini missions developed some of the space travel techniques that were necessary for the success of the Apollo missions. Apollo used Saturn family rockets as launch vehicles. Apollo/Saturn vehicles were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three manned missions in 1973-74, and the Apollo-Soyuz Test Project, a joint Earth orbit mission with the Soviet Union in 1975.", \n "qas": [\n { "question": "What project put the first Americans into space?", \n "id": "Q1"\n },\n { "question": "What program was created to carry out these projects and missions?",\n "id": "Q2"\n },\n { "question": "What year did the first manned Apollo flight occur?",\n "id": "Q3"\n }, \n { "question": "What President is credited with the original notion of putting Americans in space?",\n "id": "Q4"\n },\n { "question": "Who did the U.S. collaborate with on an Earth orbit mission in 1975?",\n "id": "Q5"\n },\n { "question": "How long did Project Apollo run?",\n "id": "Q6"\n }, \n { "question": "What program helped develop space travel techniques that Project Apollo used?",\n "id": "Q7"\n }, \n {"question": "What space station supported three manned missions in 1973-1974?",\n "id": "Q8"\n } \n]}]}]}\n')
# In[ ]:
import sys
working_dir = os.getcwd();
data_dir = os.path.join(working_dir, 'data/download');
if working_dir not in sys.path:
sys.path.append(working_dir)
# In[ ]:
input_file = os.path.join(working_dir, 'input.json')
# ### 3.b Mixed Precision
#
# Mixed precision training offers significant computational speedup by performing operations in half-precision format, while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of tensor cores in the Volta and Turing architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures.
#
# For information about:
# - How to train using mixed precision, see the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html) documentation.
# - How to access and enable AMP for TensorFlow, see [Using TF-AMP](https://docs.nvidia.com/deeplearning/dgx/tensorflow-user-guide/index.html#tfamp) from the TensorFlow User Guide.
# - Techniques used for mixed precision training, see the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog.
# In this notebook we control mixed precision execution with the environmental variable:
# In[ ]:
import os
os.environ["TF_ENABLE_AUTO_MIXED_PRECISION"] = "1"
# ## 4. Fine-Tuned NVIDIA BERT TF Models
#
# Based on the model size, we have the following two default configurations of BERT.
#
# | **Model** | **Hidden layers** | **Hidden unit size** | **Attention heads** | **Feedforward filter size** | **Max sequence length** | **Parameters** |
# |:---------:|:----------:|:----:|:---:|:--------:|:---:|:----:|
# |BERTBASE |12 encoder| 768| 12|4 x 768|512|110M|
# |BERTLARGE|24 encoder|1024| 16|4 x 1024|512|330M|
#
# We will take advantage of the fine-tuned models available on NGC (NVIDIA GPU Cloud, https://ngc.nvidia.com).
# Among the many configurations available we will download the following:
#
# - **bert_tf_ckpt_large_qa_squad2_amp_384**
#
# which is fine-tuned on the SQuAD 2.0 dataset.
# In[ ]:
# bert_tf_ckpt_large_qa_squad2_amp_384
DATA_DIR_FT = os.path.join(data_dir, 'finetuned_large_model')
get_ipython().system('mkdir -p $DATA_DIR_FT')
get_ipython().system('wget --content-disposition -O $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad2_amp_384_19.03.1.zip https://api.ngc.nvidia.com/v2/models/nvidia/bert_tf_ckpt_large_qa_squad2_amp_384/versions/19.03.1/zip && unzip -n -d $DATA_DIR_FT/ $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad2_amp_384_19.03.1.zip && rm $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad2_amp_384_19.03.1.zip')
# In the code that follows we will refer to these models.
# Download the Google pretrained weights and vocab file:
# In[ ]:
os.chdir("./data");
from GooglePretrainedWeightDownloader import GooglePretrainedWeightDownloader
gd = GooglePretrainedWeightDownloader(data_dir)
gd.download()
os.chdir("..");
# We need the horovod package:
# In[ ]:
try:
__import__("horovod")
except ImportError:
os.system("pip install --no-cache-dir horovod")
# ## 5. Running QA task inference
#
# In order to run QA inference we will follow step-by-step the flow implemented in run_squad.py.
#
# Configuration:
# In[ ]:
import run_squad
import json
import tensorflow as tf
import modeling
import tokenization
import time
import random
tf.logging.set_verbosity(tf.logging.INFO)
# Create the output directory where all the results are saved.
output_dir = os.path.join(working_dir, 'results')
tf.gfile.MakeDirs(output_dir)
# The config json file corresponding to the pre-trained BERT model.
# This specifies the model architecture.
bert_config_file = os.path.join(data_dir, 'finetuned_large_model_SQUAD2.0/bert_config.json')
# The vocabulary file that the BERT model was trained on.
vocab_file = os.path.join(data_dir, 'finetuned_large_model_SQUAD2.0/vocab.txt')
# Path to the initial checkpoint of the fine-tuned BERT Large model
init_checkpoint = os.path.join(data_dir, 'finetuned_large_model/model.ckpt')
# Whether to lower case the input text.
# Should be True for uncased models and False for cased models.
do_lower_case = True
# Total batch size for predictions
predict_batch_size = 1
params = dict([('batch_size', predict_batch_size)])
# The maximum total input sequence length after WordPiece tokenization.
# Sequences longer than this will be truncated, and sequences shorter than this will be padded.
max_seq_length = 384
# When splitting up a long document into chunks, how much stride to take between chunks.
doc_stride = 128
# The maximum number of tokens for the question.
# Questions longer than this will be truncated to this length.
max_query_length = 64
# This is a workaround to use flags from here:
flags = tf.flags
if 'f' not in tf.flags.FLAGS:
tf.app.flags.DEFINE_string('f', '', 'kernel')
FLAGS = flags.FLAGS
verbose_logging = True
# Set to True if the dataset has samples with no answers. For SQuAD 1.1, this is set to False
version_2_with_negative = False
# The total number of n-best predictions to generate in the nbest_predictions.json output file.
n_best_size = 20
# The maximum length of an answer that can be generated.
# This is needed because the start and end predictions are not conditioned on one another.
max_answer_length = 30
# Let's define the tokenizer and create the model for the estimator:
# In[ ]:
# Validate the casing config consistency with the checkpoint name.
tokenization.validate_case_matches_checkpoint(do_lower_case, init_checkpoint)
# Create the tokenizer.
tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=do_lower_case)
# Load the configuration from file
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
unique_ids = features["unique_ids"]
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
(start_logits, end_logits) = run_squad.create_model(
bert_config=bert_config,
is_training=False,
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
use_one_hot_embeddings=False)
tvars = tf.trainable_variables()
initialized_variable_names = {}
(assignment_map, initialized_variable_names) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
output_spec = None
predictions = {"unique_ids": unique_ids,
"start_logits": start_logits,
"end_logits": end_logits}
output_spec = tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
return output_spec
config = tf.ConfigProto(log_device_placement=True)
run_config = tf.estimator.RunConfig(
model_dir=None,
session_config=config,
save_checkpoints_steps=1000,
keep_checkpoint_max=1)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params=params)
# ### 5.a Inference
# In[ ]:
eval_examples = run_squad.read_squad_examples(
input_file=input_file, is_training=False)
eval_writer = run_squad.FeatureWriter(
filename=os.path.join(output_dir, "eval.tf_record"),
is_training=False)
eval_features = []
def append_feature(feature):
eval_features.append(feature)
eval_writer.process_feature(feature)
# Loads a data file into a list of InputBatch's
run_squad.convert_examples_to_features(
examples=eval_examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=doc_stride,
max_query_length=max_query_length,
is_training=False,
output_fn=append_feature)
eval_writer.close()
tf.logging.info("***** Running predictions *****")
tf.logging.info(" Num orig examples = %d", len(eval_examples))
tf.logging.info(" Num split examples = %d", len(eval_features))
tf.logging.info(" Batch size = %d", predict_batch_size)
predict_input_fn = run_squad.input_fn_builder(
input_file=eval_writer.filename,
batch_size=predict_batch_size,
seq_length=max_seq_length,
is_training=False,
drop_remainder=False)
all_results = []
eval_hooks = [run_squad.LogEvalRunHook(predict_batch_size)]
eval_start_time = time.time()
for result in estimator.predict(
predict_input_fn, yield_single_examples=True, hooks=eval_hooks, checkpoint_path=init_checkpoint):
unique_id = int(result["unique_ids"])
start_logits = [float(x) for x in result["start_logits"].flat]
end_logits = [float(x) for x in result["end_logits"].flat]
all_results.append(
run_squad.RawResult(
unique_id=unique_id,
start_logits=start_logits,
end_logits=end_logits))
eval_time_elapsed = time.time() - eval_start_time
time_list = eval_hooks[-1].time_list
time_list.sort()
eval_time_wo_startup = sum(time_list[:int(len(time_list) * 0.99)])
num_sentences = eval_hooks[-1].count * predict_batch_size
avg_sentences_per_second = num_sentences * 1.0 / eval_time_wo_startup
tf.logging.info("-----------------------------")
tf.logging.info("Total Inference Time = %0.2f Inference Time W/O start up overhead = %0.2f "
"Sentences processed = %d", eval_time_elapsed, eval_time_wo_startup,
num_sentences)
tf.logging.info("Inference Performance = %0.4f sentences/sec", avg_sentences_per_second)
tf.logging.info("-----------------------------")
output_prediction_file = os.path.join(output_dir, "predictions.json")
output_nbest_file = os.path.join(output_dir, "nbest_predictions.json")
output_null_log_odds_file = os.path.join(output_dir, "null_odds.json")
run_squad.write_predictions(eval_examples, eval_features, all_results,
n_best_size, max_answer_length,
do_lower_case, output_prediction_file,
output_nbest_file, output_null_log_odds_file,
version_2_with_negative, verbose_logging)
tf.logging.info("Inference Results:")
# Here we show only the prediction results, nbest prediction is also available in the output directory
results = ""
with open(output_prediction_file, 'r') as json_file:
data = json.load(json_file)
for question in eval_examples:
results += "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(question.qas_id, question.question_text, data[question.qas_id])
from IPython.display import display, HTML
display(HTML("<table><tr><th>Id</th><th>Question</th><th>Answer</th></tr>{}</table>".format(results)))
# ## 6. What's next
# Now that you are familiar with running QA Inference on BERT, using mixed precision, you may want to try
# your own paragraphs and queries.
#
# You may also want to take a look to the notebook __bert_squad_tf_finetuning.ipynb__ on how to run fine-tuning on BERT, available in the same directory.
|
TensorFlow2/Segmentation/UNet_Medical | UNet_Medical | main | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Entry point of the application.
This file serves as entry point to the run of UNet for segmentation of neuronal processes.
Example:
Training can be adjusted by modifying the arguments specified below::
$ python main.py --exec_mode train --model_dir /dataset ...
"""
import horovod.tensorflow as hvd
from model.unet import Unet
from runtime.run import train, evaluate, predict
from runtime.setup import get_logger, set_flags, prepare_model_dir
from runtime.arguments import PARSER, parse_args
from data_loading.data_loader import Dataset
def main():
"""
Starting point of the application
"""
hvd.init()
params = parse_args(PARSER.parse_args())
set_flags(params)
model_dir = prepare_model_dir(params)
params.model_dir = model_dir
logger = get_logger(params)
model = Unet()
dataset = Dataset(data_dir=params.data_dir,
batch_size=params.batch_size,
fold=params.fold,
augment=params.augment,
gpu_id=hvd.rank(),
num_gpus=hvd.size(),
seed=params.seed,
amp=params.use_amp)
if 'train' in params.exec_mode:
train(params, model, dataset, logger)
if 'evaluate' in params.exec_mode:
if hvd.rank() == 0:
evaluate(params, model, dataset, logger)
if 'predict' in params.exec_mode:
if hvd.rank() == 0:
predict(params, model, dataset, logger)
if __name__ == '__main__':
main()
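# Example launches (illustrative assumptions; exact flags and paths depend on
# the argument parser in runtime/arguments.py and on where the dataset is
# mounted inside the container):
#
#   python main.py --exec_mode train_and_evaluate --data_dir /data --model_dir /results
#   horovodrun -np 8 python main.py --exec_mode train --data_dir /data --model_dir /results
#
# Horovod assigns one process per GPU; hvd.rank()/hvd.size() above shard the
# dataset accordingly, and evaluation/prediction run only on rank 0.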
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/util | util | normalDistribution | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "normalDistribution.h"
#include <cassert>
#include <string>
namespace tts
{
/******************************************************************************
* CONSTANTS ******************************************************************
*****************************************************************************/
namespace
{
constexpr const int NORMAL_DIST_BLOCK_SIZE = 512;
} // namespace
/******************************************************************************
* CUDA KERNELS ***************************************************************
*****************************************************************************/
__global__ void normalDistributionKernel(
curandState_t* const states, const int numStates, float* const outValues, const int numValues)
{
const int tid = blockIdx.x * blockDim.x + threadIdx.x;
if (tid < numStates)
{
// load random state information from global memory
curandState_t localState = states[tid];
for (int index = tid; index < numValues; index += numStates)
{
outValues[index] = curand_normal(&localState);
}
// save random state information back to global memory
states[tid] = localState;
}
}
/******************************************************************************
* HELPER FUNCTIONS ***********************************************************
*****************************************************************************/
namespace
{
int roundUpBlocks(const int num, const int blockSize)
{
return ((num - 1) / blockSize) + 1;
}
} // namespace
/******************************************************************************
* CONSTRUCTORS / DESTRUCTOR **************************************************
*****************************************************************************/
NormalDistribution::NormalDistribution(const int numStates, const uint32_t seed)
: mRand(numStates)
{
setSeed(seed, 0);
}
/******************************************************************************
* PUBLIC METHODS *************************************************************
*****************************************************************************/
void NormalDistribution::setSeed(const uint32_t seed, cudaStream_t stream)
{
mRand.setSeed(seed, stream);
}
void NormalDistribution::generate(float* const outValues, const int numValues, cudaStream_t stream)
{
const dim3 grid(roundUpBlocks(mRand.size(), NORMAL_DIST_BLOCK_SIZE));
const dim3 block(NORMAL_DIST_BLOCK_SIZE);
assert(mRand.size() <= grid.x * block.x);
normalDistributionKernel<<<grid, block, 0, stream>>>(mRand.getRandomStates(), mRand.size(), outValues, numValues);
}
} // namespace tts
|
PyTorch/SpeechSynthesis/FastPitch/platform | platform | DGX1_FastPitch_FP32_8GPU | #!/bin/bash
set -a
: ${NUM_GPUS:=8}
: ${BATCH_SIZE:=16}
: ${GRAD_ACCUMULATION:=2}
: ${AMP:=false}
bash scripts/train.sh "$@"
|
PyTorch/LanguageModeling/BERT/triton/runner | runner | pipeline | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pathlib
from typing import Dict, Tuple
# method from PEP-366 to support relative import in executed modules
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .stages import (
ConversionStage,
DeployStage,
ExportStage,
ResultsType,
TritonPerformanceOfflineStage,
TritonPerformanceOnlineStage,
TritonPreparePerformanceProfilingDataStage,
)
class Pipeline:
"""
Definition of stages that has to be executed before and during experiments
"""
# Stages to execute as part of single experiment
_experiment_stages = [
ExportStage.label,
ConversionStage.label,
DeployStage.label,
TritonPreparePerformanceProfilingDataStage.label,
TritonPerformanceOfflineStage.label,
TritonPerformanceOnlineStage.label,
]
def __init__(self):
"""
Initialize pipeline
"""
self._stages: Dict = dict()
def model_export(self, commands: Tuple[str, ...]) -> None:
"""
Model export stage
Args:
commands: Commands to be executed as part of stage
Returns:
None
"""
stage = ExportStage(commands=commands)
self._stages[stage.label] = stage
def model_conversion(self, commands: Tuple[str, ...]) -> None:
"""
Model conversion stage
Args:
commands: Commands to be executed as part of stage
Returns:
None
"""
stage = ConversionStage(commands=commands)
self._stages[stage.label] = stage
def model_deploy(self, commands: Tuple[str, ...]) -> None:
"""
Model deployment stage
Args:
commands: Commands to be executed as part of stage
Returns:
None
"""
stage = DeployStage(commands=commands)
self._stages[stage.label] = stage
def triton_prepare_performance_profiling_data(self, commands: Tuple[str, ...]) -> None:
"""
Model profiling data creation stage
Args:
commands: Commands to be executed as part of stage
Returns:
None
"""
stage = TritonPreparePerformanceProfilingDataStage(commands=commands)
self._stages[stage.label] = stage
def triton_performance_offline_tests(self, commands: Tuple[str, ...], result_path: str) -> None:
"""
Model performance offline test stage
Args:
commands: Commands to be executed as part of stage
result_path: Path where results file is stored
Returns:
None
"""
stage = TritonPerformanceOfflineStage(
commands=commands,
result_path=result_path,
result_type=ResultsType.TRITON_PERFORMANCE_OFFLINE,
)
self._stages[stage.label] = stage
def triton_performance_online_tests(self, commands: Tuple[str, ...], result_path: str) -> None:
"""
Model performance online test stage
Args:
commands: Commands to be executed as part of stage
result_path: Path where results file is stored
Returns:
None
"""
stage = TritonPerformanceOnlineStage(
commands=commands,
result_path=result_path,
result_type=ResultsType.TRITON_PERFORMANCE_ONLINE,
)
self._stages[stage.label] = stage
def stages(self):
"""
Generate stages which should be run per experiment
Returns:
Generator with stages object
"""
for stage_name in self._experiment_stages:
stage = self._stages.get(stage_name)
if not stage:
continue
yield stage
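# Hypothetical usage sketch (not part of the original module; the command strings below are
# placeholders, not real project commands):
#
#     pipeline = Pipeline()
#     pipeline.model_export(commands=("python export_model.py --output model.onnx",))
#     pipeline.model_conversion(commands=("python convert_model.py --input model.onnx",))
#     pipeline.model_deploy(commands=("cp -r model_repository/ ${MODEL_REPOSITORY_PATH}",))
#     for stage in pipeline.stages():
#         print(stage.label)
#
# Stages that were never registered are skipped by stages(), and registered stages are always
# yielded in the fixed order defined by Pipeline._experiment_stages.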
|
PyTorch/Recommendation/NCF | NCF | verify_dataset | function get_checker {
if [[ "$OSTYPE" == "darwin"* ]]; then
checkmd5=md5
else
checkmd5=md5sum
fi
echo $checkmd5
}
function verify_1m {
# From: curl -O http://files.grouplens.org/datasets/movielens/ml-1m.zip.md5
# Compare only the hex digest so the check works with both BSD `md5` and GNU `md5sum` output formats
hash="c4d9eecfca2ab87c1945afe126590906"
local checkmd5=$(get_checker)
if [[ "$($checkmd5 ml-1m.zip)" == *"$hash"* ]]
then
echo "PASSED"
else
echo "FAILED"
fi
}
function verify_20m {
# From: curl -O http://files.grouplens.org/datasets/movielens/ml-20m.zip.md5
# Compare only the hex digest so the check works with both BSD `md5` and GNU `md5sum` output formats
hash="cd245b17a1ae2cc31bb14903e1204af3"
local checkmd5=$(get_checker)
if [[ "$($checkmd5 ml-20m.zip)" == *"$hash"* ]]
then
echo "PASSED"
else
echo "FAILED"
fi
}
if [[ $1 == "ml-1m" ]]
then
verify_1m
else
verify_20m
fi
|
TensorFlow/Detection/SSD/models/research/object_detection/data | data | mscoco_complete_label_map | item {
name: "background"
id: 0
display_name: "background"
}
item {
name: "/m/01g317"
id: 1
display_name: "person"
}
item {
name: "/m/0199g"
id: 2
display_name: "bicycle"
}
item {
name: "/m/0k4j"
id: 3
display_name: "car"
}
item {
name: "/m/04_sv"
id: 4
display_name: "motorcycle"
}
item {
name: "/m/05czz6l"
id: 5
display_name: "airplane"
}
item {
name: "/m/01bjv"
id: 6
display_name: "bus"
}
item {
name: "/m/07jdr"
id: 7
display_name: "train"
}
item {
name: "/m/07r04"
id: 8
display_name: "truck"
}
item {
name: "/m/019jd"
id: 9
display_name: "boat"
}
item {
name: "/m/015qff"
id: 10
display_name: "traffic light"
}
item {
name: "/m/01pns0"
id: 11
display_name: "fire hydrant"
}
item {
name: "12"
id: 12
display_name: "12"
}
item {
name: "/m/02pv19"
id: 13
display_name: "stop sign"
}
item {
name: "/m/015qbp"
id: 14
display_name: "parking meter"
}
item {
name: "/m/0cvnqh"
id: 15
display_name: "bench"
}
item {
name: "/m/015p6"
id: 16
display_name: "bird"
}
item {
name: "/m/01yrx"
id: 17
display_name: "cat"
}
item {
name: "/m/0bt9lr"
id: 18
display_name: "dog"
}
item {
name: "/m/03k3r"
id: 19
display_name: "horse"
}
item {
name: "/m/07bgp"
id: 20
display_name: "sheep"
}
item {
name: "/m/01xq0k1"
id: 21
display_name: "cow"
}
item {
name: "/m/0bwd_0j"
id: 22
display_name: "elephant"
}
item {
name: "/m/01dws"
id: 23
display_name: "bear"
}
item {
name: "/m/0898b"
id: 24
display_name: "zebra"
}
item {
name: "/m/03bk1"
id: 25
display_name: "giraffe"
}
item {
name: "26"
id: 26
display_name: "26"
}
item {
name: "/m/01940j"
id: 27
display_name: "backpack"
}
item {
name: "/m/0hnnb"
id: 28
display_name: "umbrella"
}
item {
name: "29"
id: 29
display_name: "29"
}
item {
name: "30"
id: 30
display_name: "30"
}
item {
name: "/m/080hkjn"
id: 31
display_name: "handbag"
}
item {
name: "/m/01rkbr"
id: 32
display_name: "tie"
}
item {
name: "/m/01s55n"
id: 33
display_name: "suitcase"
}
item {
name: "/m/02wmf"
id: 34
display_name: "frisbee"
}
item {
name: "/m/071p9"
id: 35
display_name: "skis"
}
item {
name: "/m/06__v"
id: 36
display_name: "snowboard"
}
item {
name: "/m/018xm"
id: 37
display_name: "sports ball"
}
item {
name: "/m/02zt3"
id: 38
display_name: "kite"
}
item {
name: "/m/03g8mr"
id: 39
display_name: "baseball bat"
}
item {
name: "/m/03grzl"
id: 40
display_name: "baseball glove"
}
item {
name: "/m/06_fw"
id: 41
display_name: "skateboard"
}
item {
name: "/m/019w40"
id: 42
display_name: "surfboard"
}
item {
name: "/m/0dv9c"
id: 43
display_name: "tennis racket"
}
item {
name: "/m/04dr76w"
id: 44
display_name: "bottle"
}
item {
name: "45"
id: 45
display_name: "45"
}
item {
name: "/m/09tvcd"
id: 46
display_name: "wine glass"
}
item {
name: "/m/08gqpm"
id: 47
display_name: "cup"
}
item {
name: "/m/0dt3t"
id: 48
display_name: "fork"
}
item {
name: "/m/04ctx"
id: 49
display_name: "knife"
}
item {
name: "/m/0cmx8"
id: 50
display_name: "spoon"
}
item {
name: "/m/04kkgm"
id: 51
display_name: "bowl"
}
item {
name: "/m/09qck"
id: 52
display_name: "banana"
}
item {
name: "/m/014j1m"
id: 53
display_name: "apple"
}
item {
name: "/m/0l515"
id: 54
display_name: "sandwich"
}
item {
name: "/m/0cyhj_"
id: 55
display_name: "orange"
}
item {
name: "/m/0hkxq"
id: 56
display_name: "broccoli"
}
item {
name: "/m/0fj52s"
id: 57
display_name: "carrot"
}
item {
name: "/m/01b9xk"
id: 58
display_name: "hot dog"
}
item {
name: "/m/0663v"
id: 59
display_name: "pizza"
}
item {
name: "/m/0jy4k"
id: 60
display_name: "donut"
}
item {
name: "/m/0fszt"
id: 61
display_name: "cake"
}
item {
name: "/m/01mzpv"
id: 62
display_name: "chair"
}
item {
name: "/m/02crq1"
id: 63
display_name: "couch"
}
item {
name: "/m/03fp41"
id: 64
display_name: "potted plant"
}
item {
name: "/m/03ssj5"
id: 65
display_name: "bed"
}
item {
name: "66"
id: 66
display_name: "66"
}
item {
name: "/m/04bcr3"
id: 67
display_name: "dining table"
}
item {
name: "68"
id: 68
display_name: "68"
}
item {
name: "69"
id: 69
display_name: "69"
}
item {
name: "/m/09g1w"
id: 70
display_name: "toilet"
}
item {
name: "71"
id: 71
display_name: "71"
}
item {
name: "/m/07c52"
id: 72
display_name: "tv"
}
item {
name: "/m/01c648"
id: 73
display_name: "laptop"
}
item {
name: "/m/020lf"
id: 74
display_name: "mouse"
}
item {
name: "/m/0qjjc"
id: 75
display_name: "remote"
}
item {
name: "/m/01m2v"
id: 76
display_name: "keyboard"
}
item {
name: "/m/050k8"
id: 77
display_name: "cell phone"
}
item {
name: "/m/0fx9l"
id: 78
display_name: "microwave"
}
item {
name: "/m/029bxz"
id: 79
display_name: "oven"
}
item {
name: "/m/01k6s3"
id: 80
display_name: "toaster"
}
item {
name: "/m/0130jx"
id: 81
display_name: "sink"
}
item {
name: "/m/040b_t"
id: 82
display_name: "refrigerator"
}
item {
name: "83"
id: 83
display_name: "83"
}
item {
name: "/m/0bt_c3"
id: 84
display_name: "book"
}
item {
name: "/m/01x3z"
id: 85
display_name: "clock"
}
item {
name: "/m/02s195"
id: 86
display_name: "vase"
}
item {
name: "/m/01lsmm"
id: 87
display_name: "scissors"
}
item {
name: "/m/0kmg4"
id: 88
display_name: "teddy bear"
}
item {
name: "/m/03wvsk"
id: 89
display_name: "hair drier"
}
item {
name: "/m/012xff"
id: 90
display_name: "toothbrush"
}
|
PyTorch/Forecasting/TFT/scripts | scripts | run_electricity_DGX1-16G | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
: ${SEED:=1}
: ${LR:=1e-3}
: ${NGPU:=8}
: ${BATCH_SIZE:=1024}
: ${EPOCHS:=30}
python -m torch.distributed.run --nproc_per_node=${NGPU} train.py \
--dataset electricity \
--data_path /data/processed/electricity_bin \
--batch_size=${BATCH_SIZE} \
--sample 450000 50000 \
--lr ${LR} \
--epochs ${EPOCHS} \
--seed ${SEED} \
--use_amp \
--results /results/TFT_electricity_bs${NGPU}x${BATCH_SIZE}_lr${LR}/seed_${SEED}
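# Illustrative overrides (not part of the original script): every parameter above uses the
# ": ${VAR:=default}" pattern, so it can be overridden from the environment, e.g.
#   NGPU=1 BATCH_SIZE=512 SEED=42 bash scripts/run_electricity_DGX1-16G.sh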
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/test | test | Taco2PrenetLayerPlugin_test | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "UnitTest.hpp"
#include "binding.h"
#include "cudaMemory.h"
#include "cudaUtils.h"
#include "logging.h"
#include "taco2PrenetLayerPlugin.h"
#include "trtUtils.h"
#include "NvInfer.h"
#include <random>
#include <vector>
using namespace nvinfer1;
using namespace nvinfer1::plugin;
using namespace tts;
/******************************************************************************
* HELPER FUNCTIONS ***********************************************************
*****************************************************************************/
namespace
{
template <typename RNG>
std::vector<float> genVec(const size_t size, RNG& rng)
{
std::uniform_real_distribution<float> dist(-1.0, 1.0);
std::vector<float> vec(size);
for (size_t i = 0; i < size; ++i) {
vec[i] = dist(rng);
}
return vec;
}
} // namespace
/******************************************************************************
* UNIT TESTS *****************************************************************
*****************************************************************************/
TEST(CPUCompareTest)
{
std::mt19937 rng(0);
const int inputLength = 80;
const int numDimensions = 256;
// weights
std::vector<float> weight1 = genVec(inputLength * numDimensions, rng);
std::vector<float> weight2 = genVec(numDimensions * numDimensions, rng);
Taco2PrenetLayerPlugin layer(
TRTUtils::toWeights(weight1),
TRTUtils::toWeights(weight2),
inputLength,
numDimensions);
const std::vector<float> inputHost = genVec(numDimensions, rng);
const std::vector<float> dropoutHost(numDimensions, 1.0f);
CudaMemory<float> inputDevice(inputHost);
CudaMemory<float> dropoutDevice(dropoutHost);
std::vector<Dims> inputDims{Dims3(1, 1, inputLength),
Dims2(1, numDimensions)};
const std::vector<Dims> outputDims{Dims3(1, 1, numDimensions)};
const std::vector<DataType> dataTypes(2, DataType::kFLOAT);
const std::vector<DynamicPluginTensorDesc> inDynDesc{
{{Dims3(-1, 1, inputLength),
DataType::kFLOAT,
TensorFormat::kLINEAR,
1.0f},
Dims3(1, 1, inputLength),
Dims3(1, 1, inputLength)},
{{Dims2(-1, numDimensions),
DataType::kFLOAT,
TensorFormat::kLINEAR,
1.0f},
Dims2(1, numDimensions),
Dims2(1, numDimensions)}};
const std::vector<DynamicPluginTensorDesc> outDynDesc{
{{Dims3(-1, 1, numDimensions),
DataType::kFLOAT,
TensorFormat::kLINEAR,
1.0f},
Dims3(1, 1, numDimensions),
Dims3(1, 1, numDimensions)}};
layer.configurePlugin(
inDynDesc.data(), inDynDesc.size(), outDynDesc.data(), outDynDesc.size());
layer.initialize();
std::vector<const float*> inputs{inputDevice.data(), dropoutDevice.data()};
CudaMemory<float> outputDevice(numDimensions);
std::vector<float*> outputs{outputDevice.data()};
const std::vector<PluginTensorDesc> inDesc{
{Dims3(1, 1, inputLength), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f},
{Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f},
};
const std::vector<PluginTensorDesc> outDesc{{Dims3(1, 1, numDimensions),
DataType::kFLOAT,
TensorFormat::kLINEAR,
1.0f}};
CudaMemory<uint8_t> workspace(layer.getWorkspaceSize(
inDesc.data(),
static_cast<int>(inDesc.size()),
outDesc.data(),
static_cast<int>(outDesc.size())));
layer.enqueue(
inDesc.data(),
outDesc.data(),
reinterpret_cast<const void* const*>(inputs.data()),
reinterpret_cast<void**>(outputs.data()),
workspace.data(),
0);
CudaUtils::sync(0);
// perform operations on cpu
std::vector<float> expOutput(numDimensions);
std::vector<float> intermediate(numDimensions);
for (int i = 0; i < numDimensions; ++i) {
float v = 0.0f;
for (int j = 0; j < inputLength; ++j) {
v += inputHost[j] * weight1[i * inputLength + j];
}
intermediate[i] = v;
}
for (int i = 0; i < numDimensions; ++i) {
intermediate[i] = std::max(0.0f, intermediate[i]) * dropoutHost[i];
}
for (int i = 0; i < numDimensions; ++i) {
float v = 0.0f;
for (int j = 0; j < numDimensions; ++j) {
v += intermediate[j] * weight2[i * numDimensions + j];
}
expOutput[i] = v;
}
for (int i = 0; i < numDimensions; ++i) {
expOutput[i] = std::max(0.0f, expOutput[i]) * dropoutHost[i];
}
// match outputs
const std::vector<float> actOutput = outputDevice.toHost();
ASSERT_EQ(expOutput.size(), actOutput.size());
for (size_t i = 0; i < expOutput.size(); ++i) {
EXPECT_NEAR(expOutput[i], actOutput[i], 1e-4) << "i = " << i;
}
}
|
TensorFlow/Detection/SSD/models/research/object_detection/inference | inference | detection_inference_test | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
r"""Tests for detection_inference.py."""
import os
import StringIO
import numpy as np
from PIL import Image
import tensorflow as tf
from object_detection.core import standard_fields
from object_detection.inference import detection_inference
from object_detection.utils import dataset_util
def get_mock_tfrecord_path():
return os.path.join(tf.test.get_temp_dir(), 'mock.tfrec')
def create_mock_tfrecord():
pil_image = Image.fromarray(np.array([[[123, 0, 0]]], dtype=np.uint8), 'RGB')
image_output_stream = StringIO.StringIO()
pil_image.save(image_output_stream, format='png')
encoded_image = image_output_stream.getvalue()
feature_map = {
'test_field':
dataset_util.float_list_feature([1, 2, 3, 4]),
standard_fields.TfExampleFields.image_encoded:
dataset_util.bytes_feature(encoded_image),
}
tf_example = tf.train.Example(features=tf.train.Features(feature=feature_map))
with tf.python_io.TFRecordWriter(get_mock_tfrecord_path()) as writer:
writer.write(tf_example.SerializeToString())
def get_mock_graph_path():
return os.path.join(tf.test.get_temp_dir(), 'mock_graph.pb')
def create_mock_graph():
g = tf.Graph()
with g.as_default():
in_image_tensor = tf.placeholder(
tf.uint8, shape=[1, None, None, 3], name='image_tensor')
tf.constant([2.0], name='num_detections')
tf.constant(
[[[0, 0.8, 0.7, 1], [0.1, 0.2, 0.8, 0.9], [0.2, 0.3, 0.4, 0.5]]],
name='detection_boxes')
tf.constant([[0.1, 0.2, 0.3]], name='detection_scores')
tf.identity(
tf.constant([[1.0, 2.0, 3.0]]) *
tf.reduce_sum(tf.cast(in_image_tensor, dtype=tf.float32)),
name='detection_classes')
graph_def = g.as_graph_def()
with tf.gfile.Open(get_mock_graph_path(), 'w') as fl:
fl.write(graph_def.SerializeToString())
class InferDetectionsTests(tf.test.TestCase):
def test_simple(self):
create_mock_graph()
create_mock_tfrecord()
serialized_example_tensor, image_tensor = detection_inference.build_input(
[get_mock_tfrecord_path()])
self.assertAllEqual(image_tensor.get_shape().as_list(), [1, None, None, 3])
(detected_boxes_tensor, detected_scores_tensor,
detected_labels_tensor) = detection_inference.build_inference_graph(
image_tensor, get_mock_graph_path())
with self.test_session(use_gpu=False) as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
tf.train.start_queue_runners()
tf_example = detection_inference.infer_detections_and_add_to_example(
serialized_example_tensor, detected_boxes_tensor,
detected_scores_tensor, detected_labels_tensor, False)
self.assertProtoEquals(r"""
features {
feature {
key: "image/detection/bbox/ymin"
value { float_list { value: [0.0, 0.1] } } }
feature {
key: "image/detection/bbox/xmin"
value { float_list { value: [0.8, 0.2] } } }
feature {
key: "image/detection/bbox/ymax"
value { float_list { value: [0.7, 0.8] } } }
feature {
key: "image/detection/bbox/xmax"
value { float_list { value: [1.0, 0.9] } } }
feature {
key: "image/detection/label"
value { int64_list { value: [123, 246] } } }
feature {
key: "image/detection/score"
value { float_list { value: [0.1, 0.2] } } }
feature {
key: "image/encoded"
value { bytes_list { value:
"\211PNG\r\n\032\n\000\000\000\rIHDR\000\000\000\001\000\000"
"\000\001\010\002\000\000\000\220wS\336\000\000\000\022IDATx"
"\234b\250f`\000\000\000\000\377\377\003\000\001u\000|gO\242"
"\213\000\000\000\000IEND\256B`\202" } } }
feature {
key: "test_field"
value { float_list { value: [1.0, 2.0, 3.0, 4.0] } } } }
""", tf_example)
def test_discard_image(self):
create_mock_graph()
create_mock_tfrecord()
serialized_example_tensor, image_tensor = detection_inference.build_input(
[get_mock_tfrecord_path()])
(detected_boxes_tensor, detected_scores_tensor,
detected_labels_tensor) = detection_inference.build_inference_graph(
image_tensor, get_mock_graph_path())
with self.test_session(use_gpu=False) as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
tf.train.start_queue_runners()
tf_example = detection_inference.infer_detections_and_add_to_example(
serialized_example_tensor, detected_boxes_tensor,
detected_scores_tensor, detected_labels_tensor, True)
self.assertProtoEquals(r"""
features {
feature {
key: "image/detection/bbox/ymin"
value { float_list { value: [0.0, 0.1] } } }
feature {
key: "image/detection/bbox/xmin"
value { float_list { value: [0.8, 0.2] } } }
feature {
key: "image/detection/bbox/ymax"
value { float_list { value: [0.7, 0.8] } } }
feature {
key: "image/detection/bbox/xmax"
value { float_list { value: [1.0, 0.9] } } }
feature {
key: "image/detection/label"
value { int64_list { value: [123, 246] } } }
feature {
key: "image/detection/score"
value { float_list { value: [0.1, 0.2] } } }
feature {
key: "test_field"
value { float_list { value: [1.0, 2.0, 3.0, 4.0] } } } }
""", tf_example)
if __name__ == '__main__':
tf.test.main()
|
Tools/PyTorch/TimeSeriesPredictionPlatform/models/tft_pyt/triton/runner | runner | stages | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pathlib
from typing import List, Optional, Tuple, Union
# method from PEP-366 to support relative import in executed modules
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .core import Command
class ResultsType:
"""
Results types generated by runner
"""
TRITON_PERFORMANCE_OFFLINE = "triton_performance_offline"
TRITON_PERFORMANCE_ONLINE = "triton_performance_online"
class Stage:
"""
Stage definition
"""
label: str
commands: List[Command]
result_path: Optional[str]
result_type: Optional[str]
def __init__(
self,
commands: Union[Tuple[str, ...], List[str]],
result_path: Optional[str] = None,
result_type: Optional[str] = None,
):
"""
Args:
commands: List or Tuple of commands provided as raw string
result_path: Path to results file generated by stage
result_type: Type of results generated by stage
"""
if type(commands) not in [tuple, list]:
raise ValueError("""Incorrect type of commands list. Please, provide list of commands as tuple.""")
self.commands = list(map(lambda command: Command(data=command), commands))
self.result_path = result_path
self.result_type = result_type
class ExportStage(Stage):
label = "Export Model"
class ConversionStage(Stage):
label = "Convert Model"
class DeployStage(Stage):
label = "Deploy Model"
class CorrectnessStage(Stage):
label = "Model Correctness Tests"
class TritonPreparePerformanceProfilingDataStage(Stage):
label = "Prepare Triton Profiling Data"
class TritonPerformanceOfflineStage(Stage):
label = "Triton Performance Offline Tests"
class TritonPerformanceOnlineStage(Stage):
label = "Triton Performance Online Tests"
|
TensorFlow/Segmentation/VNet/utils | utils | cmd_util | # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
PARSER = argparse.ArgumentParser(description="VNet")
PARSER.add_argument('--exec_mode',
choices=['train', 'predict', 'train_and_predict', 'train_and_evaluate'],
required=True,
type=str)
PARSER.add_argument('--data_normalization',
choices=['zscore'],
default='zscore',
type=str)
PARSER.add_argument('--activation',
choices=['relu'],
default='relu',
type=str)
PARSER.add_argument('--resize_interpolator',
choices=['linear'],
default='linear',
type=str)
PARSER.add_argument('--loss',
choices=['dice'],
default='dice',
type=str)
PARSER.add_argument('--normalization_layer',
choices=['batchnorm'],
default='batchnorm',
type=str)
PARSER.add_argument('--pooling',
choices=['conv_pool'],
default='conv_pool',
type=str)
PARSER.add_argument('--upsampling',
choices=['transposed_conv'],
default='transposed_conv',
type=str)
PARSER.add_argument('--seed',
default=0,
type=int)
PARSER.add_argument('--input_shape', nargs='+', type=int, default=[32, 32, 32])
PARSER.add_argument('--upscale_blocks', nargs='+', type=int, default=[3, 3])
PARSER.add_argument('--downscale_blocks', nargs='+', type=int, default=[3, 3, 3])
PARSER.add_argument('--convolution_size',
choices=[3, 5],
default=3,
type=int)
PARSER.add_argument('--batch_size',
required=True,
type=int)
PARSER.add_argument('--log_every',
default=10,
type=int)
PARSER.add_argument('--warmup_steps',
default=200,
type=int)
PARSER.add_argument('--train_epochs',
default=1,
type=int)
PARSER.add_argument('--optimizer',
choices=['rmsprop'],
default='rmsprop',
type=str)
PARSER.add_argument('--gradient_clipping',
choices=['global_norm'],
default='global_norm',
type=str)
PARSER.add_argument('--base_lr',
default=0.0001,
type=float)
PARSER.add_argument('--momentum',
default=0.0,
type=float)
PARSER.add_argument('--train_split',
default=1.0,
type=float)
PARSER.add_argument('--split_seed',
default=0,
type=int)
PARSER.add_argument('--model_dir',
required=True,
type=str)
PARSER.add_argument('--log_dir',
default=None,
type=str)
PARSER.add_argument('--data_dir',
required=True,
type=str)
PARSER.add_argument('--benchmark', dest='benchmark', action='store_true', default=False)
PARSER.add_argument('--use_amp', '--amp', dest='use_amp', action='store_true', default=False)
PARSER.add_argument('--use_xla', '--xla', dest='use_xla', action='store_true', default=False)
PARSER.add_argument('--augment', dest='augment', action='store_true', default=False)
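# Illustrative invocation (not part of the original file; the entry-point script name is an
# assumption, only the flags above are taken from this parser):
#   python main.py --exec_mode train_and_evaluate --data_dir /data --model_dir /results \
#                  --batch_size 8 --train_epochs 40 --use_amp --use_xla --augment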
|
TensorFlow/Segmentation/UNet_Industrial/model/blocks | blocks | unet_upsample | # !/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
#
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
import tensorflow as tf
from model import layers
from model import blocks
__all__ = ["upsample_unet_block"]
def upsample_unet_block(
inputs,
residual_input,
filters,
data_format='NCHW',
is_training=True,
conv2d_hparams=None,
block_name='upsample_block'
):
if not isinstance(conv2d_hparams, tf.contrib.training.HParams):
raise ValueError("The paramater `conv2d_hparams` is not of type `HParams`")
if data_format not in ['NHWC', 'NCHW']:
raise ValueError("Unknown data format: `%s` (accepted: ['NHWC', 'NCHW'])" % data_format)
if not isinstance(residual_input, tf.Tensor):
raise ValueError("`residual_input` should be a Tensorflow Tensor")
with tf.variable_scope(block_name):
net = layers.concat([inputs, residual_input], axis=1 if data_format == 'NCHW' else 3)
net = layers.conv2d(
net,
n_channels=filters,
kernel_size=(3, 3),
strides=(1, 1),
padding='same',
data_format=data_format,
use_bias=True,
trainable=is_training,
kernel_initializer=conv2d_hparams.kernel_initializer,
bias_initializer=conv2d_hparams.bias_initializer,
)
net = blocks.activation_block(
inputs=net, act_fn=conv2d_hparams.activation_fn, trainable=is_training, block_name='act1'
)
net = layers.conv2d(
net,
n_channels=filters / 2,
kernel_size=(3, 3),
strides=(1, 1),
padding='same',
data_format=data_format,
use_bias=True,
trainable=is_training,
kernel_initializer=conv2d_hparams.kernel_initializer,
bias_initializer=conv2d_hparams.bias_initializer,
)
net = blocks.activation_block(
inputs=net, act_fn=conv2d_hparams.activation_fn, trainable=is_training, block_name='act2'
)
net = layers.deconv2d(
net,
n_channels=filters / 2,
kernel_size=(2, 2),
padding='same',
data_format=data_format,
use_bias=True,
trainable=is_training,
kernel_initializer=conv2d_hparams.kernel_initializer,
bias_initializer=conv2d_hparams.bias_initializer,
)
net = blocks.activation_block(
inputs=net, act_fn=conv2d_hparams.activation_fn, trainable=is_training, block_name='act3'
)
return net
|
.github/ISSUE_TEMPLATE | ISSUE_TEMPLATE | feature_request | ---
name: Feature request
about: Suggest an idea for this project
title: "[Model/Framework or something else] Feature requested"
labels: enhancement
assignees: ''
---
Related to **Model/Framework(s) or something else (describe)**
*Examples:*
* *GNMT/PyTorch*
* *AMP*
* *Tensorflow 2.0*
* *Jupyter notebooks*
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/graph_aligner | graph_aligner | utils | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from pathlib import Path, PosixPath
from typing import List, Union
import cudf
import cupy
import pandas as pd
import torch
from tqdm import tqdm
from syngen.utils.types import ColumnType
from syngen.utils.cugraph import import_cugraph
def get_graph(df: cudf.DataFrame, src="src", dst="dst"):
"""Construct directed graph
Args:
df (DataFrameType): dataframe containing edge info
src (str): source node column name
dst (str): destination node column name
Returns:
`cugraph.DiGraph`
"""
cugraph = import_cugraph()
graph = cugraph.DiGraph()
graph.from_cudf_edgelist(df, source=src, destination=dst)
return graph
def merge_dfs(dfs, **kwargs):
"""merge a list of dataframes on a particular column
Args:
dfs (List[DataFrame]): list of dataframes to merge
kwargs (dict): key-word arguments to pass to DataFrame `merge` function
"""
if "on" not in kwargs:
kwargs["on"] = "vertex"
if "how" not in kwargs:
kwargs["how"] = "outer"
df = dfs[0]
for i in range(1, len(dfs)):
df = df.merge(dfs[i], **kwargs)
return df
def get_features(
df,
G,
src: str = "src",
dst: str = "dst",
pagerank_kwargs: dict = {"tol": 1e-4},
):
"""Extract structural features from graph `G`
features extracted: katz_centrality, out degree, pagerank
Args:
df (cudf.DataFrame): dataframe containing edge list information
G (cugraph.DiGraph): cuGraph graph descriptor containing connectivity information
from df.
src (str): source node column name.
dst (str): destination node column name.
pagerank_kwargs (dict): page rank function arguments to pass.
"""
# - pagerank feat
cugraph = import_cugraph()
pr_df = cugraph.pagerank(G, **pagerank_kwargs)
# - out-degree feat
degree_src_df = df.groupby(src).count()
degree_src_df = degree_src_df.reset_index().rename(
columns={src: "vertex", dst: "out_degree"}
)
# - in-degree feat
degree_dst_df = df.groupby(dst).count()
degree_dst_df = degree_dst_df.reset_index().rename(
columns={dst: "vertex", src: "in_degree"}
)
# - katz feat
katz_df = cugraph.katz_centrality(G, tol=1e-2, alpha=1e-3)
return [pr_df, degree_src_df, degree_dst_df, katz_df]
def merge_graph_vertex_feat(old, new):
if old is None:
return new
merged_df = old.merge(new, on=['vertex'], how='outer')
merged_df = merged_df.fillna(0)
return merged_df
def chunk_pd_save(
df: pd.DataFrame,
save_path: Union[str, PosixPath],
chunk_size: Union[int, float],
):
"""Chunks a large dataframe and casts to a cudf for faster save
Args:
df (pd.DataFrame): dataframe object to dump
save_path (str): directory to dump chunks to
chunk_size (int or float): chunk size in rows, or a fraction of the total number of rows if in (0, 1]
"""
save_path = Path(save_path)
num_rows = len(df)
if not save_path.exists():
os.makedirs(save_path)
# a value in (0, 1] is interpreted as a fraction of the total number of rows
if 0.0 < chunk_size <= 1.0:
chunk_size = int(num_rows * chunk_size)
else:
chunk_size = int(chunk_size)
for i in tqdm(range(num_rows // chunk_size - 1)):
chunk_df = df.iloc[i * chunk_size : (i + 1) * chunk_size]
chunk_cudf = cudf.from_pandas(chunk_df)
chunk_cudf.to_parquet(save_path / f"{i}_chunk.parquet", index=False)
def z_norm(series, meta=None, compute=False):
"""applies z-normalization (x - mu) / std"""
if meta:
mean = meta["mean"]
std = meta["std"]
else:
mean = series.mean()
std = series.std()
out = (series - mean) / std
return out, {"mean": mean, "std": std}
def categorify(series, meta=None, compute=False):
"""Converts categorical to ordinal"""
cat_codes = series.astype("category").cat.codes
return cat_codes, {}
def get_preproc_fn(name: str):
"""Preprocessing map function"""
PREPROC_FN_MAP = {"z_norm": z_norm, "categorify": categorify}
return PREPROC_FN_MAP[name]
def get_preproc_dict(feature_types: dict):
"""Apply preprocessing functions to each column type specified in `feature_types` """
preproc_dict = {}
for feat, type_ in feature_types.items():
if type_ == ColumnType.CONTINUOUS:
preproc_dict[feat] = {"type": type_, "preproc": "z_norm"}
elif type_ == ColumnType.CATEGORICAL:
preproc_dict[feat] = {"type": type_, "preproc": "categorify"}
return preproc_dict
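# Illustrative use of the preprocessing helpers above (not part of the original module;
# the column names and dataframe `df` are hypothetical):
#
#     feature_types = {"age": ColumnType.CONTINUOUS, "city": ColumnType.CATEGORICAL}
#     preproc_dict = get_preproc_dict(feature_types)
#     for col, spec in preproc_dict.items():
#         preproc_fn = get_preproc_fn(spec["preproc"])
#         df[col], meta = preproc_fn(df[col])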
def spread_ranks(ranks):
vals = cupy.unique(ranks)
rr = 0
for v in vals:
m = ranks == v
num_v = cupy.sum(m)
idx_range = cupy.arange(0, cupy.sum(m))
ranks[m] = ranks[m] + idx_range + rr
rr += num_v
return ranks
|
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/preprocessing | preprocessing | __init__ | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/util | util | utils | /*
* Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the NVIDIA CORPORATION nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef TT2I_UTILS_H
#define TT2I_UTILS_H
#include <cmath>
#include <fstream>
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>
namespace tts
{
class Utils
{
public:
/**
* @brief Convert a string to lower-case.
*
* @param str The string.
*
* @return The lower-case version of the string.
*/
static std::string toLower(const std::string& str)
{
std::string lower(str);
for (char& c : lower)
{
c = std::tolower(c);
}
return lower;
}
/**
* @brief Check if a given filename ends with a given extension.
*
* @param str The filename.
* @param ext The extension.
*
* @return True if the filename ends with the given extension.
*/
static bool hasExtension(const std::string& str, const std::string& ext)
{
return str.length() >= ext.length()
&& std::equal(str.begin() + (str.length() - ext.length()), str.end(), ext.begin());
}
/**
* @brief Convert a string to a bool value. It accepts "y", "yes", "true",
* and "1", ignoring capitalization, as true. It accepts "n", "no",
* "false", and "0", ignoring capitalization, as false. Otherwise an
* exception is thrown.
*
* @param str The string to parse.
*
* @return True or false depending on the value of the string.
*/
static bool parseBool(const std::string& str)
{
const std::string lower = toLower(str);
if (lower == "y" || lower == "yes" || lower == "true" || lower == "1")
{
return true;
}
else if (lower == "n" || lower == "no" || lower == "false" || lower == "0")
{
return false;
}
else
{
throw std::runtime_error("Unable to parse bool from '" + str + "'.");
}
}
/**
* @brief Evaluate the 'sigmoid' function: f(x) = 1 / (1 + e^{-x}).
*
* @param x The value to evaluate the sigmoid function at.
*
* @return The result.
*/
static float sigmoid(const float x)
{
return 1.0f / (1.0f + std::exp(-x));
}
/**
* @brief Perform division of value by block, but round up to the nearest
* integer.
*
* @tparam T The value type.
* @param value The numerator.
* @param block The denominator.
*
* @return The divided value rounded up.
*/
template <typename T>
static T roundUpDiv(const T value, const T block)
{
return (value / block) + (value % block > 0);
}
/**
* @brief Round the value up to the nearest multiple of block.
*
* @tparam T The value type.
* @param value The value to round up.
* @param block The block size.
*
* @return The value rounded up to the nearest multiple of block.
*/
template <typename T>
static T roundUpTo(const T value, const T block)
{
return block * roundUpDiv(value, block);
}
};
} // namespace tts
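// Illustrative values (not part of the original header):
//   tts::Utils::parseBool("Yes")                     -> true
//   tts::Utils::hasExtension("model.json", ".json")  -> true
//   tts::Utils::roundUpDiv(10, 4)                    -> 3
//   tts::Utils::roundUpTo(10, 4)                     -> 12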
#endif
|
PyTorch/Recommendation/NCF/qa | qa | generate_tables | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import tabulate
import numpy as np
def get_training_data(filename):
with open(filename, 'r') as opened:
line = opened.readlines()[-1]
json_content = line[len("DLLL "):]
data = json.loads(json_content)["data"]
with open(filename, 'r') as opened:
for line in opened.readlines():
d = json.loads(line[len("DLLL "):])
if d.get("step", "") == "PARAMETER":
data['batch_size'] = d["data"]["batch_size"]
return data
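# Illustrative shape of the DLLogger lines parsed above (fields not read by this script are
# omitted and the values are made up):
#   DLLL {"step": "PARAMETER", "data": {"batch_size": 1048576}}
#   DLLL {"data": {"best_accuracy": 0.9599, "time_to_target": 98.7, "best_train_throughput": 2.4e7}}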
a100 = "runs/pytorch_ncf_A100-SXM4-40GBx{numgpus}gpus_{precision}_{num_run}.json"
v16 = "runs/pytorch_ncf_Tesla V100-SXM2-16GBx{numgpus}gpus_{precision}_{num_run}.json"
v32 = "runs/pytorch_ncf_Tesla V100-SXM2-32GBx{numgpus}gpus_{precision}_{num_run}.json"
dgx2 = "runs/pytorch_ncf_Tesla V100-SXM3-32GBx{numgpus}gpus_{precision}_{num_run}.json"
fp32 = "FP32"
amp = "Mixed (AMP)"
tf32 = "TF32"
first = a100.format(numgpus=1, precision=fp32, num_run=1)
timevar = 'time_to_target' #"time_to_best_model"
def get_acc_table(arch, numgpus, fullprec):
headers = ["GPUs", "Batch size / GPU", f"Accuracy - {fullprec}", "Accuracy - mixed precision", f"Time to train - {fullprec}", "Time to train - mixed precision", f"Time to train speedup ({fullprec} to mixed precision)"]
table = []
for numgpus in numgpus:
data_full = [get_training_data(arch.format(numgpus=numgpus, num_run=num_run, precision=fullprec)) for num_run in range(1, 21)]
data_mixed = [get_training_data(arch.format(numgpus=numgpus, num_run=num_run, precision=amp)) for num_run in range(1, 21)]
bsize = data_full[0]['batch_size']/numgpus
accs_full = np.mean([d["best_accuracy"] for d in data_full])
accs_mixed = np.mean([d["best_accuracy"] for d in data_mixed])
time_full = np.mean([d[timevar] for d in data_full])
time_mixed = np.mean([d[timevar] for d in data_mixed])
speedup = time_full / time_mixed
row = [numgpus, int(bsize),
"{:.6f}".format(accs_full),
"{:.6f}".format(accs_mixed),
"{:.6f}".format(time_full),
"{:.6f}".format(time_mixed),
"{:.2f}".format(speedup)]
table.append(row)
print(tabulate.tabulate(table, headers, tablefmt='pipe'))
def get_perf_table(arch, numgpus, fullprec):
headers = ["GPUs",
"Batch size / GPU",
f"Throughput - {fullprec} (samples/s)",
"Throughput - mixed precision (samples/s)",
f"Throughput speedup ({fullprec} to mixed precision)",
f"Strong scaling - {fullprec}",
"Strong scaling - mixed precision",
]
table = []
base_full = None
base_mixed = None
for numgpus in numgpus:
data_full = [get_training_data(arch.format(numgpus=numgpus, num_run=num_run, precision=fullprec)) for num_run in range(1, 21)]
data_mixed = [get_training_data(arch.format(numgpus=numgpus, num_run=num_run, precision=amp)) for num_run in range(1, 21)]
bsize = data_full[0]['batch_size']/numgpus
_full = np.mean([d["best_train_throughput"] for d in data_full])
_mixed = np.mean([d["best_train_throughput"] for d in data_mixed])
if numgpus == 1:
base_full = _full
base_mixed = _mixed
scaling_full = _full/ base_full
scaling_mixed = _mixed / base_mixed
time_mixed = np.mean([d[timevar] for d in data_mixed])
speedup = _full / _mixed
row = [numgpus, int(bsize),
"{:.2f}M".format(_full / 10**6),
"{:.2f}M".format(_mixed / 10**6),
"{:.2f}".format(speedup),
"{:.2f}".format(scaling_full),
"{:.2f}".format(scaling_mixed)]
table.append(row)
print(tabulate.tabulate(table, headers, tablefmt='pipe'))
#get_acc_table(a100, (1, 8), tf32)
#get_acc_table(v16, (1, 8), fp32)
#get_acc_table(v32, (1, 8), fp32)
#get_acc_table(dgx2, (1, 8, 16), fp32)
#get_perf_table(a100, (1, 8), tf32)
#get_perf_table(v16, (1, 8), fp32)
#get_perf_table(v32, (1, 8), fp32)
#get_perf_table(dgx2, (1, 8, 16), fp32) |
CUDA-Optimized/FastSpeech/fastspeech | fastspeech | align_tacotron2 | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import pathlib
import fire
import torch
from tqdm import tqdm
from fastspeech.data_load import PadDataLoader
from fastspeech.dataset.ljspeech_dataset import LJSpeechDataset
import tacotron2.train
import tacotron2.hparams
from fastspeech import hparam as hp, DEFAULT_DEVICE
import os
import numpy as np
from fastspeech.utils.logging import tprint
from fastspeech.utils.pytorch import to_device_async, to_cpu_numpy
def get_tacotron2(device, is_training=False):
hparams = tacotron2.hparams.create_hparams()
model = tacotron2.train.load_model(hparams)
model.load_state_dict(torch.load(
hp.tacotron2_path, map_location=torch.device(device))["state_dict"])
if is_training:
model.train()
else:
model.eval()
return model
def get_duration(texts, text_lens, mels, mel_lens, tacotron2, device):
texts = to_device_async(texts, device)
text_lens = to_device_async(text_lens, device)
mels = to_device_async(mels, device)
mel_lens = to_device_async(mel_lens, device)
_, _, _, aligns = tacotron2.forward(
(texts, text_lens, mels, None, mel_lens))
aligns = to_cpu_numpy(aligns)
durs = torch.FloatTensor([compute_duration(align) for align in aligns])
return durs
def compute_duration(align):
"""
Warning. This code assumes the attention is monotonic.
"""
d_mel, d_text = align.shape
dur = np.array([0 for _ in range(d_text)])
for i in range(d_mel):
idx = np.argmax(align[i])
dur[idx] += 1
return dur
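# Illustrative example (not part of the original file): for a 4-frame x 3-token alignment whose
# per-frame argmax indices are [0, 0, 1, 2], each text token receives one count for every mel
# frame whose attention peaks at it:
#     align = np.array([[0.9, 0.1, 0.0],
#                       [0.8, 0.2, 0.0],
#                       [0.1, 0.7, 0.2],
#                       [0.0, 0.3, 0.7]])
#     compute_duration(align)  # -> array([2, 1, 1])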
def preprocess_aligns(
hparam="base.yaml",
device=DEFAULT_DEVICE):
""" The script for preprocessing alignments.
By default, this script loads parameters from the default config file, fastspeech/hparams/base.yaml.
--dataset_path=DATASET_PATH
Path to dataset directory.
--tacotron2_path=TACOTRON2_PATH
Path to tacotron2 checkpoint file.
--aligns_path=ALIGNS_PATH
Path to output preprocessed alignments directory.
Refer to fastspeech/hparams/base.yaml to see more parameters.
Args:
hparam (str, optional): Path to default config file. Defaults to "base.yaml".
device (str, optional): Device to use. Defaults to "cuda" if available, or "cpu".
"""
hp.set_hparam(hparam)
pathlib.Path(hp.aligns_path).mkdir(parents=True, exist_ok=True)
dataset = LJSpeechDataset(hp.dataset_path)
dataloader = PadDataLoader(
dataset, batch_size=1, shuffle=False, num_workers=32, drop_last=True)
tacotron2 = get_tacotron2(device, is_training=True)
to_device_async(tacotron2, device)
for batched in tqdm(dataloader):
names = batched['name']
texts = batched['text_encoded']
text_lens = batched['text_len']
mels = batched['mel']
mel_lens = batched['mel_len']
tprint("Processing {}.".format(', '.join(names)))
durs = get_duration(texts, text_lens, mels,
mel_lens, tacotron2, device)
for i, (name, dur) in enumerate(zip(names, durs)):
save_path = os.path.join(hp.aligns_path, name + ".align.npy")
if os.path.exists(save_path):
continue
np.save(save_path, dur)
# assert sum(duration) == len(align)
if __name__ == '__main__':
fire.Fire(preprocess_aligns)
|
PaddlePaddle/LanguageModeling/BERT/scripts | scripts | run_pretraining | # Copyright (c) 2022 NVIDIA Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
echo "Container nvidia build = " $NVIDIA_BUILD_ID
train_batch_size=${1:-256}
learning_rate=${2:-"6e-3"}
precision=${3:-"amp"}
num_gpus=${4:-8}
warmup_proportion=${5:-"0.2843"}
train_steps=${6:-7038}
save_checkpoint_steps=${7:-200}
create_logfile=${8:-"false"}
gradient_accumulation_steps=${9:-32}
seed=${10:-12439}
job_name=${11:-"bert_lamb_pretraining"}
train_batch_size_phase2=${12:-32}
learning_rate_phase2=${13:-"4e-3"}
warmup_proportion_phase2=${14:-"0.128"}
train_steps_phase2=${15:-1563}
gradient_accumulation_steps_phase2=${16:-128}
#change this for other datasets
DATASET=pretrain/phase1/unbinned/parquet
DATA_DIR_PHASE1=${17:-$BERT_PREP_WORKING_DIR/${DATASET}/}
#change this for other datasets
DATASET2=pretrain/phase2/bin_size_64/parquet
DATA_DIR_PHASE2=${18:-$BERT_PREP_WORKING_DIR/${DATASET2}/}
CODEDIR=${19:-"/workspace/bert"}
init_checkpoint=${20:-"None"}
VOCAB_FILE=vocab/bert-large-uncased-vocab.txt
RESULTS_DIR=$CODEDIR/results
CHECKPOINTS_DIR=$RESULTS_DIR
wikipedia_source=${21:-$BERT_PREP_WORKING_DIR/wikipedia/source/}
num_dask_workers=${22:-$(nproc)}
num_shards_per_worker=${23:-128}
num_workers=${24:-4}
num_nodes=1
sample_ratio=${25:-0.9}
phase2_bin_size=${26:-64}
masking=${27:-static}
BERT_CONFIG=${28:-"None"}
enable_benchmark=${29:-"false"}
benchmark_steps=${30:-"10"}
benchmark_warmup_steps=${31:-"10"}
fuse_mha=${32:-"true"}
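# Illustrative invocation (not part of the original script): all of the arguments above are
# positional and optional, so e.g. the first four can be overridden while keeping the rest at
# their defaults:
#   bash scripts/run_pretraining.sh 256 6e-3 amp 8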
# Calculate the total number of shards.
readonly num_blocks=$((num_shards_per_worker * $(( num_workers > 0 ? num_workers : 1 )) * num_nodes * num_gpus))
if [ "${phase2_bin_size}" == "none" ]; then
readonly phase2_bin_size_flag=""
elif [[ "${phase2_bin_size}" =~ ^(32|64|128|256|512)$ ]]; then
readonly phase2_bin_size_flag="--bin-size ${phase2_bin_size}"
else
echo "Error! phase2_bin_size=${phase2_bin_size} not supported!"
return -1
fi
if [ "${masking}" == "static" ]; then
readonly masking_flag="--masking"
elif [ "${masking}" == "dynamic" ]; then
readonly masking_flag=""
else
echo "Error! masking=${masking} not supported!"
return -1
fi
mkdir -p $CHECKPOINTS_DIR
if [ ! -d "${DATA_DIR_PHASE1}" ] || [ -z "$(ls -A ${DATA_DIR_PHASE1})" ]; then
echo "Warning! ${DATA_DIR_PHASE1} directory missing."
if [ ! -d "${wikipedia_source}" ] || [ -z "$(ls -A ${wikipedia_source})" ]; then
echo "Error! ${wikipedia_source} directory missing. Training cannot start!"
return -1
fi
preprocess_cmd=" \
mpirun \
--oversubscribe \
--allow-run-as-root \
-np ${num_dask_workers} \
-x LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so \
preprocess_bert_pretrain \
--schedule mpi \
--vocab-file ${VOCAB_FILE} \
--wikipedia ${wikipedia_source} \
--sink ${DATA_DIR_PHASE1} \
--num-blocks ${num_blocks} \
--sample-ratio ${sample_ratio} \
${masking_flag} \
--seed ${seed}"
echo "Running ${preprocess_cmd} ..."
${preprocess_cmd}
balance_load_cmd=" \
mpirun \
--oversubscribe \
--allow-run-as-root \
-np ${num_dask_workers} \
balance_dask_output \
--indir ${DATA_DIR_PHASE1} \
--num-shards ${num_blocks}"
echo "Running ${balance_load_cmd} ..."
${balance_load_cmd}
fi
if [ ! -d "$RESULTS_DIR" ] ; then
echo "Error! $RESULTS_DIR directory missing."
exit -1
fi
if [ ! -d "$CHECKPOINTS_DIR" ] ; then
echo "Warning! $CHECKPOINTS_DIR directory missing."
echo "Checkpoints will be written to $RESULTS_DIR instead."
CHECKPOINTS_DIR=$RESULTS_DIR
fi
CONFIG=""
if [ "$BERT_CONFIG" != "None" ] ; then
CONFIG="--config-file=$BERT_CONFIG"
fi
PREC=""
FUSE_MHA=""
if [ "$precision" = "amp" ] ; then
PREC="--amp --use-dynamic-loss-scaling --scale-loss=1048576"
if [ "$fuse_mha" = "true" ] ; then
FUSE_MHA="--fuse-mha"
fi
elif [ "$precision" = "fp32" ] ; then
PREC=""
elif [ "$precision" = "tf32" ] ; then
PREC=""
else
echo "Unknown <precision> argument"
exit -2
fi
ACCUMULATE_GRADIENTS="--gradient-merge-steps=$gradient_accumulation_steps"
INIT_CHECKPOINT=""
if [ "$init_checkpoint" != "None" ] ; then
INIT_CHECKPOINT="--from-checkpoint=$init_checkpoint --last-step-of-checkpoint=auto"
fi
BENCH=""
if [ "$enable_benchmark" = "true" ] ; then
BENCH="--benchmark --benchmark-steps=$benchmark_steps --benchmark-warmup-steps=$benchmark_warmup_steps"
fi
unset CUDA_VISIBLE_DEVICES
if [ "$num_gpus" = "1" ] ; then
DIST_CMD="python -m paddle.distributed.launch --gpus=0"
elif [ "$num_gpus" = "2" ] ; then
DIST_CMD="python -m paddle.distributed.launch --gpus=0,1"
elif [ "$num_gpus" = "3" ] ; then
DIST_CMD="python -m paddle.distributed.launch --gpus=0,1,2"
elif [ "$num_gpus" = "4" ] ; then
DIST_CMD="python -m paddle.distributed.launch --gpus=0,1,2,3"
elif [ "$num_gpus" = "5" ] ; then
DIST_CMD="python -m paddle.distributed.launch --gpus=0,1,2,3,4"
elif [ "$num_gpus" = "6" ] ; then
DIST_CMD="python -m paddle.distributed.launch --gpus=0,1,2,3,4,5"
elif [ "$num_gpus" = "7" ] ; then
DIST_CMD="python -m paddle.distributed.launch --gpus=0,1,2,3,4,5,6"
elif [ "$num_gpus" = "8" ] ; then
DIST_CMD="python -m paddle.distributed.launch --gpus=0,1,2,3,4,5,6,7"
else
echo "Wrong number of gpus"
exit -2
fi
echo $DATA_DIR_PHASE1
INPUT_DIR=$DATA_DIR_PHASE1
CMD=" $CODEDIR/run_pretraining.py"
CMD+=" --input-dir=$DATA_DIR_PHASE1"
CMD+=" --vocab-file=$VOCAB_FILE"
CMD+=" --output-dir=$CHECKPOINTS_DIR"
CMD+=" $CONFIG "
CMD+=" --bert-model=bert-large-uncased"
CMD+=" --batch-size=$train_batch_size"
CMD+=" --max-seq-length=128"
CMD+=" --max-predictions-per-seq=20"
CMD+=" --max-steps=$train_steps"
CMD+=" --warmup-proportion=$warmup_proportion"
CMD+=" --num-steps-per-checkpoint=$save_checkpoint_steps"
CMD+=" --learning-rate=$learning_rate"
CMD+=" --seed=$seed"
CMD+=" --log-freq=1"
CMD+=" --optimizer=Lamb"
CMD+=" --phase1"
CMD+=" $PREC"
CMD+=" $FUSE_MHA"
CMD+=" $ACCUMULATE_GRADIENTS"
CMD+=" $INIT_CHECKPOINT"
CMD+=" $BENCH"
CMD+=" --report-file ${RESULTS_DIR}/dllogger_p1.json "
CMD="$DIST_CMD $CMD"
if [ "$create_logfile" = "true" ] ; then
export GBS=$(expr $train_batch_size \* $num_gpus \* $gradient_accumulation_steps)
printf -v TAG "paddle_bert_pretraining_phase1_%s_gbs%d" "$precision" $GBS
DATESTAMP=`date +'%y%m%d%H%M%S'`
LOGFILE=$RESULTS_DIR/$job_name.$TAG.$DATESTAMP.log
printf "Logs written to %s\n" "$LOGFILE"
fi
set -x
if [ -z "$LOGFILE" ] ; then
$CMD
else
(
$CMD
) |& tee $LOGFILE
fi
set +x
echo "finished pretraining"
#Start Phase2
PREC=""
if [ "$precision" = "amp" ] ; then
PREC="--amp --use-dynamic-loss-scaling --scale-loss=1048576"
elif [ "$precision" = "fp32" ] ; then
PREC=""
elif [ "$precision" = "tf32" ] ; then
PREC=""
else
echo "Unknown <precision> argument"
exit -2
fi
ACCUMULATE_GRADIENTS="--gradient-merge-steps=$gradient_accumulation_steps_phase2"
if [ ! -d "${DATA_DIR_PHASE2}" ] || [ -z "$(ls -A ${DATA_DIR_PHASE2})" ]; then
echo "Warning! ${DATA_DIR_PHASE2} directory missing."
if [ ! -d "${wikipedia_source}" ] || [ -z "$(ls -A ${wikipedia_source})" ]; then
echo "Error! ${wikipedia_source} directory missing. Training cannot start!"
return -1
fi
preprocess_cmd=" \
mpirun \
--oversubscribe \
--allow-run-as-root \
-np ${num_dask_workers} \
-x LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so \
preprocess_bert_pretrain \
--schedule mpi \
--vocab-file ${VOCAB_FILE} \
--wikipedia ${wikipedia_source} \
--sink ${DATA_DIR_PHASE2} \
--target-seq-length 512 \
--num-blocks ${num_blocks} \
--sample-ratio ${sample_ratio} \
${phase2_bin_size_flag} \
${masking_flag} \
--seed ${seed}"
echo "Running ${preprocess_cmd} ..."
${preprocess_cmd}
balance_load_cmd=" \
mpirun \
--oversubscribe \
--allow-run-as-root \
-np ${num_dask_workers} \
balance_dask_output \
--indir ${DATA_DIR_PHASE2} \
--num-shards ${num_blocks}"
echo "Running ${balance_load_cmd} ..."
${balance_load_cmd}
fi
echo $DATA_DIR_PHASE2
INPUT_DIR=$DATA_DIR_PHASE2
PHASE1_END_CKPT_DIR="${CHECKPOINTS_DIR}/bert-large-uncased/phase1/${train_steps}"
CMD=" $CODEDIR/run_pretraining.py"
CMD+=" --input-dir=$DATA_DIR_PHASE2"
CMD+=" --vocab-file=$VOCAB_FILE"
CMD+=" --output-dir=$CHECKPOINTS_DIR"
CMD+=" $CONFIG "
CMD+=" --bert-model=bert-large-uncased"
CMD+=" --batch-size=$train_batch_size_phase2"
CMD+=" --max-seq-length=512"
CMD+=" --max-predictions-per-seq=80"
CMD+=" --max-steps=$train_steps_phase2"
CMD+=" --warmup-proportion=$warmup_proportion_phase2"
CMD+=" --num-steps-per-checkpoint=$save_checkpoint_steps"
CMD+=" --learning-rate=$learning_rate_phase2"
CMD+=" --seed=$seed"
CMD+=" --log-freq=1"
CMD+=" --optimizer=Lamb"
CMD+=" $PREC"
CMD+=" $ACCUMULATE_GRADIENTS"
CMD+=" $BENCH"
CMD+=" --from-pretrained-params=${PHASE1_END_CKPT_DIR} "
CMD+=" --phase2 "
CMD+=" --report-file ${RESULTS_DIR}/dllogger_p2.json "
CMD="$DIST_CMD $CMD"
if [ "$create_logfile" = "true" ] ; then
export GBS=$(expr $train_batch_size_phase2 \* $num_gpus \* $gradient_accumulation_steps_phase2)
printf -v TAG "paddle_bert_pretraining_phase2_%s_gbs%d" "$precision" $GBS
DATESTAMP=`date +'%y%m%d%H%M%S'`
LOGFILE=$RESULTS_DIR/$job_name.$TAG.$DATESTAMP.log
printf "Logs written to %s\n" "$LOGFILE"
fi
set -x
if [ -z "$LOGFILE" ] ; then
$CMD
else
(
$CMD
) |& tee $LOGFILE
fi
set +x
echo "finished phase2"
|
PyTorch/Classification/ConvNets/resnet50v1.5/training/TF32 | TF32 | DGXA100_resnet50_TF32_90E | python ./multiproc.py --nproc_per_node 8 ./launch.py --model resnet50 --precision TF32 --mode convergence --platform DGXA100 /imagenet --epochs 90 --mixup 0.0 --workspace ${1:-./} --raport-file raport.json
|
PyTorch/LanguageModeling/BERT/triton/runner | runner | configuration | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pathlib
from typing import Any, Dict, List, Union
# method from PEP-366 to support relative import in executed modules
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from .task import DataObject
class Configuration(DataObject):
"""
    Configuration object - handles data for a single experiment
"""
def __init__(
self,
precision: str,
format: str,
batch_size: Union[str, List],
accelerator: str,
triton_gpu_engine_count: int,
triton_max_queue_delay: int,
capture_cuda_graph: int,
checkpoint_variant: str,
triton_preferred_batch_sizes: Union[str, List],
**kwargs: Any,
):
"""
Args:
precision: Target model precision
format: Target conversion format
batch_size: Batch sizes to evaluate
accelerator: Triton Backend Accelerator
triton_gpu_engine_count: Number of model instances
triton_max_queue_delay: Maximal queue delay
capture_cuda_graph: Triton Capture CUDA Graph optimization for tensorrt
checkpoint_variant: Checkpoint used for configuration
triton_preferred_batch_sizes: Preferred batch sizes
**kwargs: Additional model arguments
"""
if isinstance(batch_size, str):
batch_size = map(lambda item: int(item), batch_size.split(","))
if isinstance(triton_preferred_batch_sizes, str):
triton_preferred_batch_sizes = map(lambda item: int(item), triton_preferred_batch_sizes.split(" "))
self.precision = precision
self.format = format
self.batch_size = sorted(batch_size)
self.accelerator = accelerator
self.triton_gpu_engine_count = triton_gpu_engine_count
self.triton_max_queue_delay = triton_max_queue_delay
self.capture_cuda_graph = capture_cuda_graph
self.max_batch_size = max(self.batch_size)
self.checkpoint_variant = checkpoint_variant
self.triton_preferred_batch_sizes = " ".join(map(lambda i: str(i), sorted(triton_preferred_batch_sizes)))
for key, value in kwargs.items():
self.__setattr__(key, value)
@property
def parameters(self) -> Dict:
"""
Return values stored in configuration
Returns:
Dictionary with configuration parameters
"""
return self.__dict__
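# A minimal usage sketch (hypothetical values, not part of the original runner flow),
# showing how the string forms of batch_size and triton_preferred_batch_sizes are
# normalized by the constructor:
#
#   config = Configuration(
#       precision="fp16",
#       format="onnx",
#       batch_size="1,8,16",                  # parsed into the sorted list [1, 8, 16]
#       accelerator="trt",
#       triton_gpu_engine_count=1,
#       triton_max_queue_delay=1,
#       capture_cuda_graph=0,
#       checkpoint_variant="dist-4gpu",
#       triton_preferred_batch_sizes="8 16",  # normalized back to the string "8 16"
#   )
#   config.parameters  # returns every stored value as a dictionary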
|
Tools/PyTorch/TimeSeriesPredictionPlatform/models/tft_pyt/triton/runner/maintainer/docker | docker | maintainer | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pathlib
from typing import Any, Dict, List, Optional, Union
import docker
if __name__ == "__main__" and __package__ is None:
__package__ = pathlib.Path(__file__).parent.name
from ...logger import LOGGER
from ..maintainer import Maintainer
from .container import DockerContainer
from .containers import TritonServerContainer
class DockerMaintainer(Maintainer):
def triton_container(
self, command: str, image: str, devices: List, volumes: Dict, environment: Dict, log_file: Union[pathlib.Path, str]
) -> DockerContainer:
"""
Return triton container
Args:
command: Triton Server command that has to be executed
image: Container image
            devices: List of device ids which have to be available in the container
            volumes: Volumes mapping
            environment: Environment variables set in the container
            log_file: File path where server logs have to be saved
Returns:
DockerContainer object
"""
return TritonServerContainer(
name="triton-server",
command=command,
image=image,
devices=devices,
volumes=volumes,
environment=environment,
log_file=log_file,
)
def build_image(
self,
*,
image_file_path: pathlib.Path,
image_name: str,
workdir_path: Optional[pathlib.Path] = None,
build_args: Optional[Dict[str, Any]] = None,
) -> None:
workdir_path = workdir_path or image_file_path.parent
build_args = build_args or {}
LOGGER.info(f"Building {image_name} docker image.")
LOGGER.debug(f" Using workdir: {workdir_path}")
LOGGER.debug(f" Dockerfile: {image_file_path}")
LOGGER.debug(f" Build args: {build_args}")
build_logs = list()
try:
docker_client = docker.from_env()
_, build_logs = docker_client.images.build(
path=workdir_path.resolve().as_posix(),
dockerfile=image_file_path.resolve().as_posix(),
tag=image_name,
buildargs=build_args,
network_mode="host",
rm=True,
)
except docker.errors.BuildError as e:
build_logs = e.build_log
raise e
finally:
for chunk in build_logs:
log = chunk.get("stream")
if log:
LOGGER.debug(log.rstrip())
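# A minimal usage sketch (hypothetical image name, paths, command and build args; not
# part of the original runner flow):
#
#   maintainer = DockerMaintainer()
#   maintainer.build_image(
#       image_file_path=pathlib.Path("triton/Dockerfile"),
#       image_name="tft-triton-server:latest",
#       build_args={"FROM_IMAGE": "nvcr.io/nvidia/tritonserver:21.07-py3"},
#   )
#   container = maintainer.triton_container(
#       command="tritonserver --model-repository=/models",
#       image="tft-triton-server:latest",
#       devices=[0],
#       volumes={"/path/to/model_repository": "/models"},
#       environment={},
#       log_file="triton-server.log",
#   )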
|
PyTorch/LanguageModeling/BART/bart | bart | lamb | # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# MIT License
#
# Copyright (c) 2019 cybertronai
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""Lamb optimizer."""
import torch
from torch.optim import Optimizer
class Lamb(Optimizer):
r"""Implements Lamb algorithm.
It has been proposed in `Large Batch Optimization for Deep Learning: Training BERT in 76 minutes`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-6)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
adam (bool, optional): always use trust ratio = 1, which turns this into
Adam. Useful for comparison purposes.
.. _Large Batch Optimization for Deep Learning: Training BERT in 76 minutes:
https://arxiv.org/abs/1904.00962
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-6,
weight_decay=0, adam=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay)
self.adam = adam
super(Lamb, self).__init__(params, defaults)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('Lamb does not support sparse gradients.')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
# Decay the first and second moment running average coefficient
# m_t
exp_avg.mul_(beta1).add_(1 - beta1, grad)
# v_t
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
# Paper v3 does not use debiasing.
# bias_correction1 = 1 - beta1 ** state['step']
# bias_correction2 = 1 - beta2 ** state['step']
# Apply bias to lr to avoid broadcast.
step_size = group['lr'] # * math.sqrt(bias_correction2) / bias_correction1
weight_norm = p.data.norm(p=2).clamp_(0, 10)
adam_step = exp_avg / exp_avg_sq.sqrt().add(group['eps'])
if group['weight_decay'] != 0:
adam_step.add_(group['weight_decay'], p.data)
adam_norm = adam_step.norm(p=2)
if weight_norm == 0.0 or adam_norm == 0.0:
trust_ratio = 1
else:
trust_ratio = weight_norm / (adam_norm + group['eps'])
state['weight_norm'] = weight_norm
state['adam_norm'] = adam_norm
state['trust_ratio'] = trust_ratio
if self.adam:
trust_ratio = 1
p.data.add_(-step_size * trust_ratio, adam_step)
return loss
@torch.jit.script
def lamb_kernel(param, grad, exp_avg, exp_avg_sq, beta1: float,
beta2: float, step_size: float, eps: float, weight_decay: float):
exp_avg = exp_avg * beta1 + (1 - beta1) * grad
exp_avg_sq = exp_avg_sq * beta2 + (1 - beta2) * (grad * grad)
adam_step = exp_avg / (exp_avg_sq.sqrt() + eps)
adam_step = adam_step + weight_decay * param
weight_norm = param.norm(p=2).clamp(0, 10)
adam_norm = adam_step.norm(p=2)
trust_ratio = weight_norm / (adam_norm + eps)
trust_ratio = (weight_norm == 0.0) * 1.0 + (weight_norm != 0.0) * trust_ratio
trust_ratio = (adam_norm == 0.0) * 1.0 + (adam_norm != 0.0) * trust_ratio
trust_ratio = trust_ratio.float()
param = param - step_size * trust_ratio * adam_step
return param, exp_avg, exp_avg_sq
class JITLamb(Optimizer):
r"""Implements Lamb algorithm.
It has been proposed in `Large Batch Optimization for Deep Learning: Training BERT in 76 minutes`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-6)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
adam (bool, optional): always use trust ratio = 1, which turns this into
Adam. Useful for comparison purposes.
.. _Large Batch Optimization for Deep Learning: Training BERT in 76 minutes:
https://arxiv.org/abs/1904.00962
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-6,
weight_decay=0, adam=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay)
self.adam = adam
super().__init__(params, defaults)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('Lamb does not support sparse gradients.')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
step_size = group['lr']
param, exp_avg, exp_avg_sq = lamb_kernel(p.data, grad, exp_avg,
exp_avg_sq, beta1,
beta2, step_size,
group['eps'],
group['weight_decay'],
)
state['exp_avg'] = exp_avg
state['exp_avg_sq'] = exp_avg_sq
p.data = param
return loss
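if __name__ == "__main__":
    # A minimal smoke test / usage sketch (hypothetical toy model, not part of the
    # original BART training flow): a single optimization step with JITLamb.
    model = torch.nn.Linear(4, 2)
    optimizer = JITLamb(model.parameters(), lr=1e-3, weight_decay=0.01)
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()
    print("JITLamb step completed, loss =", loss.item())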
|
PyTorch/LanguageModeling/BERT/triton | triton | requirements | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
model_navigator[pyt] @ git+https://github.com/triton-inference-server/model_navigator.git@v0.2.3#egg=model_navigator
natsort>=7.0.0
networkx==2.5
numpy
onnx==1.8.1
onnxruntime-gpu==1.8.1
pycuda>=2019.1.2
PyYAML>=5.2
tabulate>=0.8.7
tqdm>=4.44.1
wget
|
PyTorch/Recommendation/NCF | NCF | feature_spec | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import yaml
import os
from typing import List, Dict
class FeatureSpec:
def __init__(self, feature_spec, source_spec, channel_spec, metadata, base_directory):
self.feature_spec: Dict = feature_spec
self.source_spec: Dict = source_spec
self.channel_spec: Dict = channel_spec
self.metadata: Dict = metadata
self.base_directory: str = base_directory
@classmethod
def from_yaml(cls, path):
with open(path, 'r') as feature_spec_file:
base_directory = os.path.dirname(path)
feature_spec = yaml.safe_load(feature_spec_file)
return cls.from_dict(feature_spec, base_directory=base_directory)
@classmethod
def from_dict(cls, source_dict, base_directory):
return cls(base_directory=base_directory, **source_dict)
def to_dict(self) -> Dict:
attributes_to_dump = ['feature_spec', 'source_spec', 'channel_spec', 'metadata']
return {attr: self.__dict__[attr] for attr in attributes_to_dump}
def to_string(self):
return yaml.dump(self.to_dict())
def to_yaml(self, output_path=None):
if not output_path:
output_path = self.base_directory + '/feature_spec.yaml'
with open(output_path, 'w') as output_file:
print(yaml.dump(self.to_dict()), file=output_file)
|
TensorFlow/Translation/GNMT | GNMT | model_helper | # Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utility functions for building models."""
from __future__ import print_function
import collections
import os
import time
import numpy as np
import six
import tensorflow as tf
from utils import math_utils
from utils import misc_utils as utils
from utils import vocab_utils
__all__ = [
"get_initializer", "create_emb_for_encoder_and_decoder", "create_rnn_cell",
"gradient_clip", "create_or_load_model", "load_model", "avg_checkpoints",
]
# If a vocab size is greater than this value, put the embedding on cpu instead
VOCAB_SIZE_THRESHOLD_CPU = 50000
def get_initializer(init_op, seed=None, init_weight=0):
"""Create an initializer. init_weight is only for uniform."""
if init_op == "uniform":
assert init_weight
return tf.random_uniform_initializer(
-init_weight, init_weight, seed=seed)
elif init_op == "glorot_normal":
return tf.keras.initializers.glorot_normal(
seed=seed)
elif init_op == "glorot_uniform":
return tf.keras.initializers.glorot_uniform(
seed=seed)
elif init_op.isdigit():
# dtype is default fp32 for variables.
val = int(init_op)
return tf.constant_initializer(val)
else:
raise ValueError("Unknown init_op %s" % init_op)
class ExtraArgs(collections.namedtuple(
"ExtraArgs", ("single_cell_fn", "model_device_fn",
"attention_mechanism_fn", "encoder_emb_lookup_fn"))):
pass
class TrainModel(
collections.namedtuple("TrainModel", ("graph", "model", "iterator",
"skip_count_placeholder"))):
pass
def _get_embed_device(vocab_size):
"""Decide on which device to place an embed matrix given its vocab size."""
if vocab_size > VOCAB_SIZE_THRESHOLD_CPU:
return "/cpu:0"
else:
return "/gpu:0"
def _create_pretrained_emb_from_txt(
vocab_file, embed_file, num_trainable_tokens=3, dtype=tf.float32,
scope=None):
"""Load pretrain embeding from embed_file, and return an embedding matrix.
Args:
vocab_file: Path to vocab file.
    embed_file: Path to a GloVe-formatted embedding txt file.
    num_trainable_tokens: Make the first n tokens in the vocab file trainable
      variables. Default is 3, which is "<unk>", "<s>" and "</s>".
dtype: data type.
scope: tf scope name.
Returns:
pretrained embedding table variable.
"""
vocab, _ = vocab_utils.load_vocab(vocab_file)
trainable_tokens = vocab[:num_trainable_tokens]
utils.print_out("# Using pretrained embedding: %s." % embed_file)
utils.print_out(" with trainable tokens: ")
emb_dict, emb_size = vocab_utils.load_embed_txt(embed_file)
for token in trainable_tokens:
utils.print_out(" %s" % token)
if token not in emb_dict:
emb_dict[token] = [0.0] * emb_size
emb_mat = np.array(
[emb_dict[token] for token in vocab], dtype=dtype.as_numpy_dtype())
emb_mat = tf.constant(emb_mat)
emb_mat_const = tf.slice(emb_mat, [num_trainable_tokens, 0], [-1, -1])
with tf.variable_scope(scope or "pretrain_embeddings", dtype=dtype) as scope:
emb_mat_var = tf.get_variable(
"emb_mat_var", [num_trainable_tokens, emb_size])
return tf.concat([emb_mat_var, emb_mat_const], 0)
def _create_or_load_embed(embed_name, vocab_file, embed_file,
vocab_size, embed_size, dtype):
"""Create a new or load an existing embedding matrix."""
if vocab_file and embed_file:
embedding = _create_pretrained_emb_from_txt(vocab_file, embed_file)
else:
embedding = tf.get_variable(
embed_name, [vocab_size, embed_size], dtype)
return embedding
def create_emb_for_encoder_and_decoder(share_vocab,
src_vocab_size,
tgt_vocab_size,
src_embed_size,
tgt_embed_size,
dtype=tf.float32,
num_enc_partitions=0,
num_dec_partitions=0,
src_vocab_file=None,
tgt_vocab_file=None,
src_embed_file=None,
tgt_embed_file=None,
use_char_encode=False,
scope=None):
"""Create embedding matrix for both encoder and decoder.
Args:
share_vocab: A boolean. Whether to share embedding matrix for both
encoder and decoder.
src_vocab_size: An integer. The source vocab size.
tgt_vocab_size: An integer. The target vocab size.
src_embed_size: An integer. The embedding dimension for the encoder's
embedding.
tgt_embed_size: An integer. The embedding dimension for the decoder's
embedding.
dtype: dtype of the embedding matrix. Default to float32.
num_enc_partitions: number of partitions used for the encoder's embedding
vars.
num_dec_partitions: number of partitions used for the decoder's embedding
vars.
src_vocab_file: A string. The source vocabulary file.
tgt_vocab_file: A string. The target vocabulary file.
src_embed_file: A string. The source embedding file.
tgt_embed_file: A string. The target embedding file.
use_char_encode: A boolean. If true, use char encoder.
scope: VariableScope for the created subgraph. Default to "embedding".
Returns:
embedding_encoder: Encoder's embedding matrix.
embedding_decoder: Decoder's embedding matrix.
Raises:
ValueError: if use share_vocab but source and target have different vocab
size.
"""
if num_enc_partitions <= 1:
enc_partitioner = None
else:
    # Note: num_partitions > 1 is required for distributed training because
    # embedding_lookup tries to colocate a single-partition embedding variable
    # with its lookup ops, which may cause embedding variables to be placed on
    # worker jobs.
enc_partitioner = tf.fixed_size_partitioner(num_enc_partitions)
if num_dec_partitions <= 1:
dec_partitioner = None
else:
    # Note: num_partitions > 1 is required for distributed training because
    # embedding_lookup tries to colocate a single-partition embedding variable
    # with its lookup ops, which may cause embedding variables to be placed on
    # worker jobs.
dec_partitioner = tf.fixed_size_partitioner(num_dec_partitions)
if src_embed_file and enc_partitioner:
raise ValueError(
"Can't set num_enc_partitions > 1 when using pretrained encoder "
"embedding")
if tgt_embed_file and dec_partitioner:
raise ValueError(
"Can't set num_dec_partitions > 1 when using pretrained decdoer "
"embedding")
with tf.variable_scope(
scope or "embeddings", dtype=dtype, partitioner=enc_partitioner) as scope:
# Share embedding
if share_vocab:
if src_vocab_size != tgt_vocab_size:
raise ValueError("Share embedding but different src/tgt vocab sizes"
" %d vs. %d" % (src_vocab_size, tgt_vocab_size))
assert src_embed_size == tgt_embed_size
utils.print_out("# Use the same embedding for source and target")
vocab_file = src_vocab_file or tgt_vocab_file
embed_file = src_embed_file or tgt_embed_file
embedding_encoder = _create_or_load_embed(
"embedding_share", vocab_file, embed_file,
src_vocab_size, src_embed_size, dtype)
embedding_decoder = embedding_encoder
else:
if not use_char_encode:
with tf.variable_scope("encoder", partitioner=enc_partitioner):
embedding_encoder = _create_or_load_embed(
"embedding_encoder", src_vocab_file, src_embed_file,
src_vocab_size, src_embed_size, dtype)
else:
embedding_encoder = None
with tf.variable_scope("decoder", partitioner=dec_partitioner):
embedding_decoder = _create_or_load_embed(
"embedding_decoder", tgt_vocab_file, tgt_embed_file,
tgt_vocab_size, tgt_embed_size, dtype)
return embedding_encoder, embedding_decoder
def build_cell(cell, input_shape):
if isinstance(cell, tf.contrib.rnn.MultiRNNCell):
assert isinstance(input_shape, collections.Sequence)
for i, c in enumerate(cell._cells):
if i == 0:
c.build((None, input_shape))
else:
c.build((None, c.num_units))
return
if isinstance(cell, tf.nn.rnn_cell.DropoutWrapper):
build_cell(cell._cell, input_shape)
elif isinstance(cell, tf.nn.rnn_cell.ResidualWrapper):
build_cell(cell._cell, input_shape)
elif isinstance(cell, tf.nn.rnn_cell.LSTMCell):
cell.build(input_shape)
else:
raise ValueError("%s not supported" % type(cell))
def _single_cell(unit_type, num_units, forget_bias, dropout, mode,
dtype=None, residual_connection=False, residual_fn=None,
use_block_lstm=False):
"""Create an instance of a single RNN cell."""
# dropout (= 1 - keep_prob) is set to 0 during eval and infer
dropout = dropout if mode == tf.contrib.learn.ModeKeys.TRAIN else 0.0
# Cell Type
if unit_type == "lstm":
utils.print_out(" LSTM, forget_bias=%g" % forget_bias, new_line=False)
if not use_block_lstm:
single_cell = tf.nn.rnn_cell.LSTMCell(
num_units, dtype=dtype, forget_bias=forget_bias)
else:
single_cell = tf.contrib.rnn.LSTMBlockCell(
num_units, forget_bias=forget_bias)
elif unit_type == "gru":
utils.print_out(" GRU", new_line=False)
single_cell = tf.contrib.rnn.GRUCell(num_units)
elif unit_type == "layer_norm_lstm":
utils.print_out(" Layer Normalized LSTM, forget_bias=%g" % forget_bias,
new_line=False)
single_cell = tf.contrib.rnn.LayerNormBasicLSTMCell(
num_units,
forget_bias=forget_bias,
layer_norm=True)
elif unit_type == "nas":
utils.print_out(" NASCell", new_line=False)
single_cell = tf.contrib.rnn.NASCell(num_units)
else:
raise ValueError("Unknown unit type %s!" % unit_type)
# Dropout (= 1 - keep_prob)
if dropout > 0.0:
single_cell = tf.contrib.rnn.DropoutWrapper(
cell=single_cell, input_keep_prob=(1.0 - dropout))
utils.print_out(" %s, dropout=%g " %(type(single_cell).__name__, dropout),
new_line=False)
# Residual
if residual_connection:
single_cell = tf.contrib.rnn.ResidualWrapper(
single_cell, residual_fn=residual_fn)
utils.print_out(" %s" % type(single_cell).__name__, new_line=False)
return single_cell
def _cell_list(unit_type, num_units, num_layers, num_residual_layers,
forget_bias, dropout, mode, dtype=None,
single_cell_fn=None, residual_fn=None, use_block_lstm=False):
"""Create a list of RNN cells."""
if not single_cell_fn:
single_cell_fn = _single_cell
# Multi-GPU
cell_list = []
for i in range(num_layers):
utils.print_out(" cell %d" % i, new_line=False)
single_cell = single_cell_fn(
unit_type=unit_type,
num_units=num_units,
forget_bias=forget_bias,
dropout=dropout,
mode=mode,
dtype=dtype,
residual_connection=(i >= num_layers - num_residual_layers),
residual_fn=residual_fn,
use_block_lstm=use_block_lstm
)
utils.print_out("")
cell_list.append(single_cell)
return cell_list
def create_rnn_cell(unit_type, num_units, num_layers, num_residual_layers,
forget_bias, dropout, mode, dtype=None,
single_cell_fn=None, use_block_lstm=False):
"""Create multi-layer RNN cell.
Args:
unit_type: string representing the unit type, i.e. "lstm".
num_units: the depth of each unit.
num_layers: number of cells.
num_residual_layers: Number of residual layers from top to bottom. For
example, if `num_layers=4` and `num_residual_layers=2`, the last 2 RNN
cells in the returned list will be wrapped with `ResidualWrapper`.
forget_bias: the initial forget bias of the RNNCell(s).
dropout: floating point value between 0.0 and 1.0:
the probability of dropout. this is ignored if `mode != TRAIN`.
mode: either tf.contrib.learn.TRAIN/EVAL/INFER
single_cell_fn: allow for adding customized cell.
When not specified, we default to model_helper._single_cell
Returns:
An `RNNCell` instance.
"""
cell_list = _cell_list(unit_type=unit_type,
num_units=num_units,
num_layers=num_layers,
num_residual_layers=num_residual_layers,
forget_bias=forget_bias,
dropout=dropout,
mode=mode,
dtype=dtype,
single_cell_fn=single_cell_fn,
use_block_lstm=use_block_lstm)
if len(cell_list) == 1: # Single layer.
return cell_list[0]
else: # Multi layers
return tf.contrib.rnn.MultiRNNCell(cell_list)
def gradient_clip(gradients, max_gradient_norm):
"""Clipping gradients of a model."""
clipped_gradients, gradient_norm = math_utils.clip_by_global_norm(
gradients, max_gradient_norm)
return clipped_gradients, gradient_norm
def print_variables_in_ckpt(ckpt_path):
"""Print a list of variables in a checkpoint together with their shapes."""
utils.print_out("# Variables in ckpt %s" % ckpt_path)
reader = tf.train.NewCheckpointReader(ckpt_path)
variable_map = reader.get_variable_to_shape_map()
for key in sorted(variable_map.keys()):
utils.print_out(" %s: %s" % (key, variable_map[key]))
def load_model(model, ckpt_path, session, name):
"""Load model from a checkpoint."""
start_time = time.time()
try:
model.saver.restore(session, ckpt_path)
except tf.errors.NotFoundError as e:
utils.print_out("Can't load checkpoint")
print_variables_in_ckpt(ckpt_path)
utils.print_out("%s" % str(e))
session.run(tf.tables_initializer())
utils.print_out(
" loaded %s model parameters from %s, time %.2fs" %
(name, ckpt_path, time.time() - start_time))
return model
def avg_checkpoints(model_dir, num_last_checkpoints, global_step_name):
"""Average the last N checkpoints in the model_dir."""
checkpoint_state = tf.train.get_checkpoint_state(model_dir)
if not checkpoint_state:
utils.print_out("# No checkpoint file found in directory: %s" % model_dir)
return None
# Checkpoints are ordered from oldest to newest.
checkpoints = (
checkpoint_state.all_model_checkpoint_paths[-num_last_checkpoints:])
if len(checkpoints) < num_last_checkpoints:
utils.print_out(
"# Skipping averaging checkpoints because not enough checkpoints is "
"available.")
return None
avg_model_dir = os.path.join(model_dir, "avg_checkpoints")
if not tf.gfile.Exists(avg_model_dir):
utils.print_out(
"# Creating new directory %s for saving averaged checkpoints." %
avg_model_dir)
tf.gfile.MakeDirs(avg_model_dir)
utils.print_out("# Reading and averaging variables in checkpoints:")
var_list = tf.contrib.framework.list_variables(checkpoints[0])
var_values, var_dtypes = {}, {}
for (name, shape) in var_list:
if name != global_step_name:
var_values[name] = np.zeros(shape)
for checkpoint in checkpoints:
utils.print_out(" %s" % checkpoint)
reader = tf.contrib.framework.load_checkpoint(checkpoint)
for name in var_values:
tensor = reader.get_tensor(name)
var_dtypes[name] = tensor.dtype
var_values[name] += tensor
for name in var_values:
var_values[name] /= len(checkpoints)
# Build a graph with same variables in the checkpoints, and save the averaged
# variables into the avg_model_dir.
with tf.Graph().as_default():
tf_vars = [
        tf.get_variable(v, shape=var_values[v].shape, dtype=var_dtypes[v])
for v in var_values
]
placeholders = [tf.placeholder(v.dtype, shape=v.shape) for v in tf_vars]
assign_ops = [tf.assign(v, p) for (v, p) in zip(tf_vars, placeholders)]
saver = tf.train.Saver(tf.all_variables(), save_relative_paths=True)
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
for p, assign_op, (name, value) in zip(placeholders, assign_ops,
six.iteritems(var_values)):
sess.run(assign_op, {p: value})
# Use the built saver to save the averaged checkpoint. Only keep 1
# checkpoint and the best checkpoint will be moved to avg_best_metric_dir.
saver.save(
sess,
os.path.join(avg_model_dir, "translate.ckpt"))
return avg_model_dir
def create_or_load_model(model, model_dir, session, name):
"""Create translation model and initialize or load parameters in session."""
latest_ckpt = tf.train.latest_checkpoint(model_dir)
if latest_ckpt:
model = load_model(model, latest_ckpt, session, name)
else:
start_time = time.time()
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
utils.print_out(" created %s model with fresh parameters, time %.2fs" %
(name, time.time() - start_time))
global_step = model.global_step.eval(session=session)
return model, global_step
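# A minimal usage sketch (hypothetical hyperparameters, not part of the original GNMT
# flow): building a 4-layer LSTM stack where the top 2 layers get residual connections
# and dropout is only active in TRAIN mode.
#
#   cell = create_rnn_cell(unit_type="lstm", num_units=1024, num_layers=4,
#                          num_residual_layers=2, forget_bias=1.0, dropout=0.2,
#                          mode=tf.contrib.learn.ModeKeys.TRAIN)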
|
TensorFlow/LanguageModeling/BERT/notebooks | notebooks | bert_squad_tf_finetuning | #!/usr/bin/env python
# coding: utf-8
# In[ ]:
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# <img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;">
#
# # BERT Question Answering Fine-Tuning with Mixed Precision
# ## 1. Overview
#
# Bidirectional Embedding Representations from Transformers (BERT), is a method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks.
#
# The original paper can be found here: https://arxiv.org/abs/1810.04805.
#
# NVIDIA's BERT 19.10 is an optimized version of Google's official implementation, leveraging mixed precision arithmetic and tensor cores on V100 GPUs for faster training times while maintaining target accuracy.
# ### 1.a Learning objectives
#
# This notebook demonstrates:
# - Fine-Tuning on Question Answering (QA) task with BERT Large model
# - The use/download of pretrained NVIDIA BERT models
# - Use of Mixed Precision for Training
# ## 2. Requirements
#
# Please refer to Section 2. of the ReadMe file
# ## 3. BERT Question Answering Task
#
# Here we run QA fine-tuning on a pre-trained BERT model.
# To fine-tune we will use the [SQuaD 1.1 Dataset](https://rajpurkar.github.io/SQuAD-explorer/) which contains 100,000+ question-answer pairs on 500+ articles.
# In[ ]:
import os
import sys
data_dir = '../data/download'
# SQuAD json for training
train_file = os.path.join(data_dir, 'squad/v1.1/train-v1.1.json')
# json for inference
predict_file = os.path.join(data_dir, 'squad/v1.1/dev-v1.1.json')
# ### 3.a Mixed Precision
#
# Mixed precision training offers significant computational speedup by performing operations in half-precision format, while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of tensor cores in the Volta and Turing architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures.
#
# For information about:
# - How to train using mixed precision, see the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html) documentation.
# - How to access and enable AMP for TensorFlow, see [Using TF-AMP](https://docs.nvidia.com/deeplearning/dgx/tensorflow-user-guide/index.html#tfamp) from the TensorFlow User Guide.
# - Techniques used for mixed precision training, see the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog.
# In this notebook we control mixed precision execution with the following flag:
# In[ ]:
use_fp16 = True;
import os
os.environ["TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE"] = "1" if use_fp16 else "0"
# For detailed debug uncomment the following line:
#os.environ["TF_CPP_VMODULE"]="auto_mixed_precision=2"
# ## 4. Pre-Trained NVIDIA BERT TF Models
#
# Based on the model size, we have the following two default configurations of BERT.
#
# | **Model** | **Hidden layers** | **Hidden unit size** | **Attention heads** | **Feedforward filter size** | **Max sequence length** | **Parameters** |
# |:---------:|:----------:|:----:|:---:|:--------:|:---:|:----:|
# |BERTBASE |12 encoder| 768| 12|4 x 768|512|110M|
# |BERTLARGE|24 encoder|1024| 16|4 x 1024|512|330M|
#
# We will use large pre-trained models available on NGC (NVIDIA GPU Cloud, https://ngc.nvidia.com).
# There are many configurations available; in particular, we will download and use the following:
#
# **bert_tf_ckpt_large_pretraining_amp_lamb**
#
# Which is pre-trained using the Wikipedia and Book corpus datasets as training data with AMP and LAMB optimizer.
# We will fine-tune on the SQuaD 1.1 Dataset.
# Let's create the folders for the pre-trained models:
# In[ ]:
# bert_tf_large pretrained model
DATA_DIR_PT = data_dir + '/pretrained_large_model'
get_ipython().system('mkdir -p $DATA_DIR_PT')
get_ipython().system('wget --content-disposition -O $DATA_DIR_PT/bert_tf_ckpt_large_pretraining_amp_lamb_19.03.1.zip https://api.ngc.nvidia.com/v2/models/nvidia/bert_tf_ckpt_large_pretraining_amp_lamb/versions/19.03.1/zip && unzip -n -d $DATA_DIR_PT/ $DATA_DIR_PT/bert_tf_ckpt_large_pretraining_amp_lamb_19.03.1.zip && rm $DATA_DIR_PT/bert_tf_ckpt_large_pretraining_amp_lamb_19.03.1.zip')
# bert_tf_large finetuned model on SQUAD1.1
DATA_DIR_FT = data_dir + '/finetuned_large_model_SQUAD1.1'
get_ipython().system('mkdir -p $DATA_DIR_FT')
get_ipython().system('wget --content-disposition -O $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad11_amp_384_19.03.1.zip https://api.ngc.nvidia.com/v2/models/nvidia/bert_tf_ckpt_large_qa_squad11_amp_384/versions/19.03.1/zip && unzip -n -d $DATA_DIR_FT/ $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad11_amp_384_19.03.1.zip && rm $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad11_amp_384_19.03.1.zip')
# In the code that follows we will refer to this model.
# In[ ]:
notebooks_dir = '../notebooks'
working_dir = '..'
if working_dir not in sys.path:
sys.path.append(working_dir)
init_checkpoint = os.path.join(data_dir, 'pretrained_large_model/model.ckpt')
# ## 5. Running QA task fine-tuning
#
# In order to run Q-A inference we will follow step-by-step a simplified flow implemented in run_squad.py:
# In[ ]:
import run_squad
import json
import tensorflow as tf
import modeling
import tokenization
import time
import random
import optimization
tf.logging.set_verbosity(tf.logging.INFO)
# Create the output directory where all the results are saved.
output_dir = os.path.join(working_dir, 'results')
tf.gfile.MakeDirs(output_dir)
# The config json file corresponding to the pre-trained BERT model.
# This specifies the model architecture.
bert_config_file = os.path.join(data_dir, 'finetuned_large_model_SQUAD1.1/bert_config.json')
# The vocabulary file that the BERT model was trained on.
vocab_file = os.path.join(data_dir, 'finetuned_large_model_SQUAD1.1/vocab.txt')
# Whether to lower case the input text.
# Should be True for uncased models and False for cased models.
do_lower_case = True
# Total batch size for predictions
predict_batch_size = 1
params = dict([('batch_size', predict_batch_size)])
# The maximum total input sequence length after WordPiece tokenization.
# Sequences longer than this will be truncated, and sequences shorter than this will be padded.
max_seq_length = 384
# When splitting up a long document into chunks, how much stride to take between chunks.
doc_stride = 128
# The maximum number of tokens for the question.
# Questions longer than this will be truncated to this length.
max_query_length = 64
# This is a workaround to use flags from here:
flags = tf.flags
if 'f' not in tf.flags.FLAGS:
tf.app.flags.DEFINE_string('f', '', 'kernel')
FLAGS = flags.FLAGS
verbose_logging = True
# Set to True if the dataset has samples with no answers. For SQuAD 1.1, this is set to False
version_2_with_negative = False
# The total number of n-best predictions to generate in the nbest_predictions.json output file.
n_best_size = 20
# The maximum length of an answer that can be generated.
# This is needed because the start and end predictions are not conditioned on one another.
max_answer_length = 30
# The initial learning rate for Adam
learning_rate = 5e-6
# Total batch size for training
train_batch_size = 3
# Proportion of training to perform linear learning rate warmup for
warmup_proportion = 0.1
# Total number of training epochs to perform (results will improve if trained for more epochs)
num_train_epochs = 1
global_batch_size = train_batch_size
training_hooks = []
training_hooks.append(run_squad.LogTrainRunHook(global_batch_size, 0))
# Let's create the tokenizer and the training tf_record:
# In[ ]:
# Validate the casing config consistency with the checkpoint name.
tokenization.validate_case_matches_checkpoint(do_lower_case, init_checkpoint)
# Create the tokenizer.
tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=do_lower_case)
# Load the configuration from file
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
config = tf.ConfigProto(log_device_placement=True)
run_config = tf.estimator.RunConfig(
model_dir=output_dir,
session_config=config,
save_checkpoints_steps=1000,
keep_checkpoint_max=1)
# Read the training examples from the training file:
train_examples = run_squad.read_squad_examples(input_file=train_file, is_training=True)
num_train_steps = int(len(train_examples) / global_batch_size * num_train_epochs)
num_warmup_steps = int(num_train_steps * warmup_proportion)
# Pre-shuffle the input to avoid having to make a very large shuffle buffer in the `input_fn`.
rng = random.Random(12345)
rng.shuffle(train_examples)
start_index = 0
end_index = len(train_examples)
tmp_filenames = os.path.join(output_dir, "train.tf_record")
# We write to a temporary file to avoid storing very large constant tensors in memory.
train_writer = run_squad.FeatureWriter(
filename=tmp_filenames,
is_training=True)
run_squad.convert_examples_to_features(
examples=train_examples[start_index:end_index],
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=doc_stride,
max_query_length=max_query_length,
is_training=True,
output_fn=train_writer.process_feature)
train_writer.close()
tf.logging.info("***** Running training *****")
tf.logging.info(" Num orig examples = %d", end_index - start_index)
tf.logging.info(" Num split examples = %d", train_writer.num_features)
tf.logging.info(" Batch size = %d", train_batch_size)
tf.logging.info(" Num steps = %d", num_train_steps)
tf.logging.info(" Learning Rate = %f", learning_rate)
del train_examples
# We need to create the model for the estimator:
# In[ ]:
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
unique_ids = features["unique_ids"]
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
(start_logits, end_logits) = run_squad.create_model(
bert_config=bert_config,
is_training=is_training,
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
use_one_hot_embeddings=False)
tvars = tf.trainable_variables()
initialized_variable_names = {}
if init_checkpoint:
(assignment_map, initialized_variable_names) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
output_spec = None
if mode == tf.estimator.ModeKeys.TRAIN:
seq_length = modeling.get_shape_list(input_ids)[1]
def compute_loss(logits, positions):
one_hot_positions = tf.one_hot(positions, depth=seq_length, dtype=tf.float32)
log_probs = tf.nn.log_softmax(logits, axis=-1)
loss = -tf.reduce_mean(tf.reduce_sum(one_hot_positions * log_probs, axis=-1))
return loss
start_positions = features["start_positions"]
end_positions = features["end_positions"]
start_loss = compute_loss(start_logits, start_positions)
end_loss = compute_loss(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2.0
train_op = optimization.create_optimizer(total_loss, learning_rate, num_train_steps, num_warmup_steps, None, False, use_fp16)
output_spec = tf.estimator.EstimatorSpec(mode=mode, loss=total_loss, train_op=train_op)
elif mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
"unique_ids": unique_ids,
"start_logits": start_logits,
"end_logits": end_logits,
}
output_spec = tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
return output_spec
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params=params)
# ### 5.a Fine Tuning
#
# Fine tuning is performed using the run_squad.py.
#
# The run_squad.sh script trains a model and performs evaluation on the SQuaD v1.1 dataset.
# In[ ]:
train_input_fn = run_squad.input_fn_builder(
input_file=tmp_filenames,
batch_size=train_batch_size,
seq_length=max_seq_length,
is_training=True,
drop_remainder=True,
hvd=None)
train_start_time = time.time()
estimator.train(input_fn=train_input_fn, hooks=training_hooks, max_steps=num_train_steps)
train_time_elapsed = time.time() - train_start_time
train_time_wo_startup = training_hooks[-1].total_time
avg_sentences_per_second = num_train_steps * global_batch_size * 1.0 / train_time_wo_startup if train_time_wo_startup else 0
tf.logging.info("-----------------------------")
tf.logging.info("Total Training Time = %0.2f Training Time W/O start up overhead = %0.2f "
"Sentences processed = %d", train_time_elapsed, train_time_wo_startup,
num_train_steps * global_batch_size)
tf.logging.info("Training Performance = %0.4f sentences/sec", avg_sentences_per_second)
tf.logging.info("-----------------------------")
# ### 5.b Inference
#
# Now we run inference with the fine-tuned model just saved:
# In[ ]:
eval_examples = run_squad.read_squad_examples(
input_file=predict_file, is_training=False)
eval_writer = run_squad.FeatureWriter(
filename=os.path.join(output_dir, "eval.tf_record"),
is_training=False)
eval_features = []
def append_feature(feature):
eval_features.append(feature)
eval_writer.process_feature(feature)
# Loads a data file into a list of InputBatch's
run_squad.convert_examples_to_features(
examples=eval_examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=doc_stride,
max_query_length=max_query_length,
is_training=False,
output_fn=append_feature)
eval_writer.close()
tf.logging.info("***** Running predictions *****")
tf.logging.info(" Num orig examples = %d", len(eval_examples))
tf.logging.info(" Num split examples = %d", len(eval_features))
tf.logging.info(" Batch size = %d", predict_batch_size)
predict_input_fn = run_squad.input_fn_builder(
input_file=eval_writer.filename,
batch_size=predict_batch_size,
seq_length=max_seq_length,
is_training=False,
drop_remainder=False)
all_results = []
eval_hooks = [run_squad.LogEvalRunHook(predict_batch_size)]
eval_start_time = time.time()
for result in estimator.predict(
predict_input_fn, yield_single_examples=True, hooks=eval_hooks, checkpoint_path=None):
unique_id = int(result["unique_ids"])
start_logits = [float(x) for x in result["start_logits"].flat]
end_logits = [float(x) for x in result["end_logits"].flat]
all_results.append(
run_squad.RawResult(
unique_id=unique_id,
start_logits=start_logits,
end_logits=end_logits))
eval_time_elapsed = time.time() - eval_start_time
time_list = eval_hooks[-1].time_list
time_list.sort()
eval_time_wo_startup = sum(time_list[:int(len(time_list) * 0.99)])
num_sentences = eval_hooks[-1].count * predict_batch_size
avg_sentences_per_second = num_sentences * 1.0 / eval_time_wo_startup
tf.logging.info("-----------------------------")
tf.logging.info("Total Inference Time = %0.2f Inference Time W/O start up overhead = %0.2f "
"Sentences processed = %d", eval_time_elapsed, eval_time_wo_startup,
num_sentences)
tf.logging.info("Inference Performance = %0.4f sentences/sec", avg_sentences_per_second)
tf.logging.info("-----------------------------")
output_prediction_file = os.path.join(output_dir, "predictions.json")
output_nbest_file = os.path.join(output_dir, "nbest_predictions.json")
output_null_log_odds_file = os.path.join(output_dir, "null_odds.json")
run_squad.write_predictions(eval_examples, eval_features, all_results,
n_best_size, max_answer_length,
do_lower_case, output_prediction_file,
output_nbest_file, output_null_log_odds_file,
version_2_with_negative, verbose_logging)
tf.logging.info("Inference Results:")
# Here we show only the prediction results, nbest prediction is also available in the output directory
results = ""
with open(output_prediction_file, 'r') as json_file:
data = json.load(json_file)
for question in eval_examples:
results += "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(question.qas_id, question.question_text, data[question.qas_id])
from IPython.display import display, HTML
display(HTML("<table><tr><th>Id</th><th>Question</th><th>Answer</th></tr>{}</table>".format(results)))
# ### 5.c Evaluation
#
# Let's run evaluation using the script in the SQuaD1.1 folder and our fine-tuned model:
# In[ ]:
get_ipython().system('python ../data/download/squad/v1.1/evaluate-v1.1.py $predict_file $output_dir/predictions.json')
# ## 6. What's next
#
# Now that you have fine-tuned a BERT model, you may want to take a look at the run_squad script, which contains more options for fine-tuning.
|
PyTorch/LanguageModeling/Transformer-XL/pytorch | pytorch | lamb | # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# MIT License
#
# Copyright (c) 2019 cybertronai
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""Lamb optimizer."""
import torch
from torch.optim import Optimizer
class Lamb(Optimizer):
r"""Implements Lamb algorithm.
It has been proposed in `Large Batch Optimization for Deep Learning: Training BERT in 76 minutes`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-6)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
adam (bool, optional): always use trust ratio = 1, which turns this into
Adam. Useful for comparison purposes.
.. _Large Batch Optimization for Deep Learning: Training BERT in 76 minutes:
https://arxiv.org/abs/1904.00962
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-6,
weight_decay=0, adam=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay)
self.adam = adam
super(Lamb, self).__init__(params, defaults)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('Lamb does not support sparse gradients.')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
# Decay the first and second moment running average coefficient
# m_t
exp_avg.mul_(beta1).add_(1 - beta1, grad)
# v_t
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
# Paper v3 does not use debiasing.
# bias_correction1 = 1 - beta1 ** state['step']
# bias_correction2 = 1 - beta2 ** state['step']
# Apply bias to lr to avoid broadcast.
step_size = group['lr'] # * math.sqrt(bias_correction2) / bias_correction1
weight_norm = p.data.norm(p=2).clamp_(0, 10)
adam_step = exp_avg / exp_avg_sq.sqrt().add(group['eps'])
if group['weight_decay'] != 0:
adam_step.add_(group['weight_decay'], p.data)
adam_norm = adam_step.norm(p=2)
if weight_norm == 0.0 or adam_norm == 0.0:
trust_ratio = 1
else:
trust_ratio = weight_norm / (adam_norm + group['eps'])
state['weight_norm'] = weight_norm
state['adam_norm'] = adam_norm
state['trust_ratio'] = trust_ratio
if self.adam:
trust_ratio = 1
p.data.add_(-step_size * trust_ratio, adam_step)
return loss
@torch.jit.script
def lamb_kernel(param, grad, exp_avg, exp_avg_sq, beta1: float,
beta2: float, step_size: float, eps: float, weight_decay: float):
exp_avg = exp_avg * beta1 + (1 - beta1) * grad
exp_avg_sq = exp_avg_sq * beta2 + (1 - beta2) * (grad * grad)
adam_step = exp_avg / (exp_avg_sq.sqrt() + eps)
adam_step = adam_step + weight_decay * param
weight_norm = param.norm(p=2).clamp(0, 10)
adam_norm = adam_step.norm(p=2)
trust_ratio = weight_norm / (adam_norm + eps)
trust_ratio = (weight_norm == 0.0) * 1.0 + (weight_norm != 0.0) * trust_ratio
trust_ratio = (adam_norm == 0.0) * 1.0 + (adam_norm != 0.0) * trust_ratio
trust_ratio = trust_ratio.float()
param = param - step_size * trust_ratio * adam_step
return param, exp_avg, exp_avg_sq
class JITLamb(Optimizer):
r"""Implements Lamb algorithm.
It has been proposed in `Large Batch Optimization for Deep Learning: Training BERT in 76 minutes`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-6)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
adam (bool, optional): always use trust ratio = 1, which turns this into
Adam. Useful for comparison purposes.
.. _Large Batch Optimization for Deep Learning: Training BERT in 76 minutes:
https://arxiv.org/abs/1904.00962
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-6,
weight_decay=0, adam=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay)
self.adam = adam
super().__init__(params, defaults)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('Lamb does not support sparse gradients.')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
step_size = group['lr']
param, exp_avg, exp_avg_sq = lamb_kernel(p.data, grad, exp_avg,
exp_avg_sq, beta1,
beta2, step_size,
group['eps'],
group['weight_decay'],
)
state['exp_avg'] = exp_avg
state['exp_avg_sq'] = exp_avg_sq
p.data = param
return loss
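# ---------------------------------------------------------------------------
# Usage sketch (not part of the original module): how these optimizers are
# typically constructed and stepped. The model, data, and hyperparameters
# below are illustrative assumptions, not values used by the Transformer-XL
# training recipe.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    model = torch.nn.Linear(16, 4)
    optimizer = JITLamb(model.parameters(), lr=1e-3, weight_decay=0.01)
    inputs, targets = torch.randn(8, 16), torch.randn(8, 4)
    for _ in range(3):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        optimizer.step()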
|
PyTorch/LanguageModeling/BERT/lamb_amp_opt/fused_lamb | fused_lamb | __init__ | from fused_lamb.fused_lamb import FusedLAMBAMP # NOQA
|
Tools/PyTorch/TimeSeriesPredictionPlatform/callbacks | callbacks | hydra_callbacks | # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pandas as pd
from omegaconf import OmegaConf
from hydra.experimental.callback import Callback
from loggers.log_helper import jsonlog_2_df
class MergeLogs(Callback):
def on_multirun_end(self, config, **kwargs):
OmegaConf.resolve(config)
ALLOWED_KEYS=['timestamp', 'elapsed_time', 'step', 'loss', 'val_loss', 'MAE', 'MSE', 'RMSE', 'P50', 'P90']
dfs = []
for p, sub_dirs, files in os.walk(config.hydra.sweep.dir):
if 'log.json' in files:
path = os.path.join(p, 'log.json')
df = jsonlog_2_df(path, ALLOWED_KEYS)
dfs.append(df)
# Transpose dataframes
plots = {}
for c in dfs[0].columns:
joint_plots = pd.DataFrame({i : df[c] for i, df in enumerate(dfs)})
metrics = {}
metrics['mean'] = joint_plots.mean(axis=1)
metrics['std'] = joint_plots.std(axis=1)
metrics['mean_m_std'] = metrics['mean'] - metrics['std']
metrics['mean_p_std'] = metrics['mean'] + metrics['std']
metrics_df = pd.DataFrame(metrics)
plots[c] = metrics_df[~metrics_df.isna().all(axis=1)] # Drop rows which contain only NaNs
timestamps = plots.pop('timestamp')['mean']
timestamps = (timestamps * 1000).astype(int)
if not timestamps.is_monotonic:
raise ValueError('Timestamps are not monotonic')
|
TensorFlow/Detection/SSD/models/research/object_detection/utils | utils | metrics_test | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for object_detection.metrics."""
import numpy as np
import tensorflow as tf
from object_detection.utils import metrics
class MetricsTest(tf.test.TestCase):
def test_compute_cor_loc(self):
num_gt_imgs_per_class = np.array([100, 1, 5, 1, 1], dtype=int)
num_images_correctly_detected_per_class = np.array(
[10, 0, 1, 0, 0], dtype=int)
corloc = metrics.compute_cor_loc(num_gt_imgs_per_class,
num_images_correctly_detected_per_class)
expected_corloc = np.array([0.1, 0, 0.2, 0, 0], dtype=float)
self.assertTrue(np.allclose(corloc, expected_corloc))
def test_compute_cor_loc_nans(self):
num_gt_imgs_per_class = np.array([100, 0, 0, 1, 1], dtype=int)
num_images_correctly_detected_per_class = np.array(
[10, 0, 1, 0, 0], dtype=int)
corloc = metrics.compute_cor_loc(num_gt_imgs_per_class,
num_images_correctly_detected_per_class)
expected_corloc = np.array([0.1, np.nan, np.nan, 0, 0], dtype=float)
self.assertAllClose(corloc, expected_corloc)
def test_compute_precision_recall(self):
num_gt = 10
scores = np.array([0.4, 0.3, 0.6, 0.2, 0.7, 0.1], dtype=float)
labels = np.array([0, 1, 1, 0, 0, 1], dtype=bool)
labels_float_type = np.array([0, 1, 1, 0, 0, 1], dtype=float)
accumulated_tp_count = np.array([0, 1, 1, 2, 2, 3], dtype=float)
expected_precision = accumulated_tp_count / np.array([1, 2, 3, 4, 5, 6])
expected_recall = accumulated_tp_count / num_gt
precision, recall = metrics.compute_precision_recall(scores, labels, num_gt)
precision_float_type, recall_float_type = metrics.compute_precision_recall(
scores, labels_float_type, num_gt)
self.assertAllClose(precision, expected_precision)
self.assertAllClose(recall, expected_recall)
self.assertAllClose(precision_float_type, expected_precision)
self.assertAllClose(recall_float_type, expected_recall)
def test_compute_precision_recall_float(self):
num_gt = 10
scores = np.array([0.4, 0.3, 0.6, 0.2, 0.7, 0.1], dtype=float)
labels_float = np.array([0, 1, 1, 0.5, 0, 1], dtype=float)
expected_precision = np.array(
[0., 0.5, 0.33333333, 0.5, 0.55555556, 0.63636364], dtype=float)
expected_recall = np.array([0., 0.1, 0.1, 0.2, 0.25, 0.35], dtype=float)
precision, recall = metrics.compute_precision_recall(
scores, labels_float, num_gt)
self.assertAllClose(precision, expected_precision)
self.assertAllClose(recall, expected_recall)
def test_compute_average_precision(self):
precision = np.array([0.8, 0.76, 0.9, 0.65, 0.7, 0.5, 0.55, 0], dtype=float)
recall = np.array([0.3, 0.3, 0.4, 0.4, 0.45, 0.45, 0.5, 0.5], dtype=float)
processed_precision = np.array(
[0.9, 0.9, 0.9, 0.7, 0.7, 0.55, 0.55, 0], dtype=float)
recall_interval = np.array([0.3, 0, 0.1, 0, 0.05, 0, 0.05, 0], dtype=float)
expected_mean_ap = np.sum(recall_interval * processed_precision)
mean_ap = metrics.compute_average_precision(precision, recall)
self.assertAlmostEqual(expected_mean_ap, mean_ap)
def test_compute_precision_recall_and_ap_no_groundtruth(self):
num_gt = 0
scores = np.array([0.4, 0.3, 0.6, 0.2, 0.7, 0.1], dtype=float)
labels = np.array([0, 0, 0, 0, 0, 0], dtype=bool)
expected_precision = None
expected_recall = None
precision, recall = metrics.compute_precision_recall(scores, labels, num_gt)
self.assertEquals(precision, expected_precision)
self.assertEquals(recall, expected_recall)
ap = metrics.compute_average_precision(precision, recall)
self.assertTrue(np.isnan(ap))
def test_compute_recall_at_k(self):
num_gt = 4
tp_fp = [
np.array([1, 0, 0], dtype=float),
np.array([0, 1], dtype=float),
np.array([0, 0, 0, 0, 0], dtype=float)
]
tp_fp_bool = [
np.array([True, False, False], dtype=bool),
np.array([False, True], dtype=float),
np.array([False, False, False, False, False], dtype=float)
]
recall_1 = metrics.compute_recall_at_k(tp_fp, num_gt, 1)
recall_3 = metrics.compute_recall_at_k(tp_fp, num_gt, 3)
recall_5 = metrics.compute_recall_at_k(tp_fp, num_gt, 5)
recall_3_bool = metrics.compute_recall_at_k(tp_fp_bool, num_gt, 3)
self.assertAlmostEqual(recall_1, 0.25)
self.assertAlmostEqual(recall_3, 0.5)
self.assertAlmostEqual(recall_3_bool, 0.5)
self.assertAlmostEqual(recall_5, 0.5)
def test_compute_median_rank_at_k(self):
tp_fp = [
np.array([1, 0, 0], dtype=float),
np.array([0, 0.1], dtype=float),
np.array([0, 0, 0, 0, 0], dtype=float)
]
tp_fp_bool = [
np.array([True, False, False], dtype=bool),
np.array([False, True], dtype=float),
np.array([False, False, False, False, False], dtype=float)
]
median_ranks_1 = metrics.compute_median_rank_at_k(tp_fp, 1)
median_ranks_3 = metrics.compute_median_rank_at_k(tp_fp, 3)
median_ranks_5 = metrics.compute_median_rank_at_k(tp_fp, 5)
median_ranks_3_bool = metrics.compute_median_rank_at_k(tp_fp_bool, 3)
self.assertEquals(median_ranks_1, 0)
self.assertEquals(median_ranks_3, 0.5)
self.assertEquals(median_ranks_3_bool, 0.5)
self.assertEquals(median_ranks_5, 0.5)
if __name__ == '__main__':
tf.test.main()
|
PyTorch/Classification/GPUNet/triton/05ms-D/runner | runner | start_NVIDIA-DGX-A100-(1x-A100-80GB) | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/bin/bash
# Evaluate Runner
python3 -m "triton.05ms-D.runner.__main__" \
--config-path "triton/05ms-D/runner/config_NVIDIA-DGX-A100-(1x-A100-80GB).yaml" \
--device 0 |
TensorFlow/Classification/ConvNets/triton | triton | run_inference_on_triton | #!/usr/bin/env python3
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
To infer the model deployed on Triton, you can use `run_inference_on_triton.py` script.
It sends a request with data obtained from pointed data loader and dumps received data into npz files.
Those files are stored in directory pointed by `--output-dir` argument.
Currently, the client communicates with the Triton server asynchronously using GRPC protocol.
Example call:
```shell script
python ./triton/run_inference_on_triton.py \
--server-url localhost:8001 \
--model-name ResNet50 \
--model-version 1 \
--dump-labels \
--output-dir /results/dump_triton
```
"""
import argparse
import functools
import logging
import queue
import threading
import time
from pathlib import Path
from typing import Optional
from tqdm import tqdm
# pytype: disable=import-error
try:
from tritonclient import utils as client_utils # noqa: F401
from tritonclient.grpc import (
InferenceServerClient,
InferInput,
InferRequestedOutput,
)
except ImportError:
import tritongrpcclient as grpc_client
from tritongrpcclient import (
InferenceServerClient,
InferInput,
InferRequestedOutput,
)
# pytype: enable=import-error
# method from PEP-366 to support relative import in executed modules
if __package__ is None:
__package__ = Path(__file__).parent.name
from .deployment_toolkit.args import ArgParserGenerator
from .deployment_toolkit.core import DATALOADER_FN_NAME, load_from_file
from .deployment_toolkit.dump import JsonDumpWriter
LOGGER = logging.getLogger("run_inference_on_triton")
class AsyncGRPCTritonRunner:
DEFAULT_MAX_RESP_WAIT_S = 120
DEFAULT_MAX_UNRESP_REQS = 128
DEFAULT_MAX_FINISH_WAIT_S = 900 # 15min
def __init__(
self,
server_url: str,
model_name: str,
model_version: str,
*,
dataloader,
verbose=False,
resp_wait_s: Optional[float] = None,
max_unresponded_reqs: Optional[int] = None,
):
self._server_url = server_url
self._model_name = model_name
self._model_version = model_version
self._dataloader = dataloader
self._verbose = verbose
self._response_wait_t = self.DEFAULT_MAX_RESP_WAIT_S if resp_wait_s is None else resp_wait_s
self._max_unresp_reqs = self.DEFAULT_MAX_UNRESP_REQS if max_unresponded_reqs is None else max_unresponded_reqs
self._results = queue.Queue()
self._processed_all = False
self._errors = []
self._num_waiting_for = 0
self._sync = threading.Condition()
self._req_thread = threading.Thread(target=self.req_loop, daemon=True)
def __iter__(self):
self._req_thread.start()
timeout_s = 0.050 # check flags processed_all and error flags every 50ms
while True:
try:
ids, x, y_pred, y_real = self._results.get(timeout=timeout_s)
yield ids, x, y_pred, y_real
except queue.Empty:
shall_stop = self._processed_all or self._errors
if shall_stop:
break
LOGGER.debug("Waiting for request thread to stop")
self._req_thread.join()
if self._errors:
error_msg = "\n".join(map(str, self._errors))
raise RuntimeError(error_msg)
def _on_result(self, ids, x, y_real, output_names, result, error):
with self._sync:
if error:
self._errors.append(error)
else:
y_pred = {name: result.as_numpy(name) for name in output_names}
self._results.put((ids, x, y_pred, y_real))
self._num_waiting_for -= 1
self._sync.notify_all()
def req_loop(self):
client = InferenceServerClient(self._server_url, verbose=self._verbose)
self._errors = self._verify_triton_state(client)
if self._errors:
return
LOGGER.debug(
f"Triton server {self._server_url} and model {self._model_name}:{self._model_version} " f"are up and ready!"
)
model_config = client.get_model_config(self._model_name, self._model_version)
model_metadata = client.get_model_metadata(self._model_name, self._model_version)
LOGGER.info(f"Model config {model_config}")
LOGGER.info(f"Model metadata {model_metadata}")
inputs = {tm.name: tm for tm in model_metadata.inputs}
outputs = {tm.name: tm for tm in model_metadata.outputs}
output_names = list(outputs)
outputs_req = [InferRequestedOutput(name) for name in outputs]
self._num_waiting_for = 0
for ids, x, y_real in self._dataloader:
infer_inputs = []
for name in inputs:
data = x[name]
infer_input = InferInput(name, data.shape, inputs[name].datatype)
target_np_dtype = client_utils.triton_to_np_dtype(inputs[name].datatype)
data = data.astype(target_np_dtype)
infer_input.set_data_from_numpy(data)
infer_inputs.append(infer_input)
with self._sync:
def _check_can_send():
return self._num_waiting_for < self._max_unresp_reqs
can_send = self._sync.wait_for(_check_can_send, timeout=self._response_wait_t)
if not can_send:
error_msg = f"Runner could not send new requests for {self._response_wait_t}s"
self._errors.append(error_msg)
break
callback = functools.partial(AsyncGRPCTritonRunner._on_result, self, ids, x, y_real, output_names)
client.async_infer(
model_name=self._model_name,
model_version=self._model_version,
inputs=infer_inputs,
outputs=outputs_req,
callback=callback,
)
self._num_waiting_for += 1
# wait till receive all requested data
with self._sync:
def _all_processed():
LOGGER.debug(f"wait for {self._num_waiting_for} unprocessed jobs")
return self._num_waiting_for == 0
self._processed_all = self._sync.wait_for(_all_processed, self.DEFAULT_MAX_FINISH_WAIT_S)
if not self._processed_all:
error_msg = f"Runner {self._response_wait_t}s timeout received while waiting for results from server"
self._errors.append(error_msg)
LOGGER.debug("Finished request thread")
def _verify_triton_state(self, triton_client):
errors = []
if not triton_client.is_server_live():
errors.append(f"Triton server {self._server_url} is not live")
elif not triton_client.is_server_ready():
errors.append(f"Triton server {self._server_url} is not ready")
elif not triton_client.is_model_ready(self._model_name, self._model_version):
errors.append(f"Model {self._model_name}:{self._model_version} is not ready")
return errors
def _parse_args():
parser = argparse.ArgumentParser(description="Infer model on Triton server", allow_abbrev=False)
parser.add_argument(
"--server-url", type=str, default="localhost:8001", help="Inference server URL (default localhost:8001)"
)
parser.add_argument("--model-name", help="The name of the model used for inference.", required=True)
parser.add_argument("--model-version", help="The version of the model used for inference.", required=True)
parser.add_argument("--dataloader", help="Path to python file containing dataloader.", required=True)
parser.add_argument("--dump-labels", help="Dump labels to output dir", action="store_true", default=False)
parser.add_argument("--dump-inputs", help="Dump inputs to output dir", action="store_true", default=False)
parser.add_argument("-v", "--verbose", help="Verbose logs", action="store_true", default=False)
parser.add_argument("--output-dir", required=True, help="Path to directory where outputs will be saved")
parser.add_argument(
"--response-wait-time", required=False, help="Maximal time to wait for response", default=120, type=float
)
parser.add_argument(
"--max-unresponded-requests",
required=False,
help="Maximal number of unresponded requests",
default=128,
type=int,
)
args, *_ = parser.parse_known_args()
get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME)
ArgParserGenerator(get_dataloader_fn).update_argparser(parser)
args = parser.parse_args()
return args
def main():
args = _parse_args()
log_format = "%(asctime)s %(levelname)s %(name)s %(message)s"
log_level = logging.INFO if not args.verbose else logging.DEBUG
logging.basicConfig(level=log_level, format=log_format)
LOGGER.info(f"args:")
for key, value in vars(args).items():
LOGGER.info(f" {key} = {value}")
get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME)
dataloader_fn = ArgParserGenerator(get_dataloader_fn).from_args(args)
runner = AsyncGRPCTritonRunner(
args.server_url,
args.model_name,
args.model_version,
dataloader=dataloader_fn(),
verbose=False,
resp_wait_s=args.response_wait_time,
max_unresponded_reqs=args.max_unresponded_requests,
)
with JsonDumpWriter(output_dir=args.output_dir) as writer:
start = time.time()
for ids, x, y_pred, y_real in tqdm(runner, unit="batch", mininterval=10):
data = _verify_and_format_dump(args, ids, x, y_pred, y_real)
writer.write(**data)
stop = time.time()
LOGGER.info(f"\nThe inference took {stop - start:0.3f}s")
def _verify_and_format_dump(args, ids, x, y_pred, y_real):
data = {"outputs": y_pred, "ids": {"ids": ids}}
if args.dump_inputs:
data["inputs"] = x
if args.dump_labels:
if not y_real:
raise ValueError(
"Found empty label values. Please provide labels in dataloader_fn or do not use --dump-labels argument"
)
data["labels"] = y_real
return data
if __name__ == "__main__":
main()
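# ---------------------------------------------------------------------------
# Dataloader sketch (illustration only, not part of the original script).
# The --dataloader argument points at a Python file exposing a factory named
# by DATALOADER_FN_NAME (see deployment_toolkit.core). The factory returns a
# callable that yields (ids, inputs, labels) batches, where inputs and labels
# map tensor names to numpy arrays. The tensor names and shapes below are
# assumptions chosen for illustration.
# ---------------------------------------------------------------------------
def _example_get_dataloader_fn(batch_size: int = 8):
    import numpy as np

    def _dataloader():
        for batch_idx in range(4):
            ids = np.arange(batch_idx * batch_size, (batch_idx + 1) * batch_size)
            x = {"input": np.random.rand(batch_size, 3, 224, 224).astype(np.float32)}
            y_real = {"classes": np.zeros((batch_size, 1), dtype=np.int64)}
            yield ids, x, y_real

    return _dataloader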
|
TensorFlow2/Recommendation/SIM/sim/models | models | dien_model | # Copyright (c) 2022 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from functools import partial
import tensorflow as tf
from sim.layers.ctr_classification_mlp import CTRClassificationMLP
from sim.layers.item_sequence_interaction import DIENItemSequenceInteractionBlock
from sim.models.sequential_recommender_model import SequentialRecommenderModel
EPS = 1e-06
DIEN_ITEM_SEQ_INTERACTION_SIZE = 6 # Value taken from TF1 original code
def compute_auxiliary_probs(auxiliary_net, rnn_states, items_hist, training=False):
"""
Given h(1),..,h(T) GRU sequence outputs and e(1),..,e(T) encoded user
sequence or negative user sequence behaviours, compute probabilities
for auxiliary loss term.
Args:
auxiliary_net: model that computes a probability of interaction
rnn_states: sequence of GRU outputs
items_hist: sequence of user behaviours or negative user behaviours
Returns:
click_prob: clicking probability for each timestep
"""
# for rnn_states, select h(1),..,h(T-1)
rnn_states = rnn_states[:, :-1, :]
# for items_hist, select e(2),..,e(T)
items_hist = items_hist[:, 1:, :]
# concatenate over feature dimension
click_input = tf.concat([rnn_states, items_hist], -1)
# forward pass
click_logits = auxiliary_net(click_input, training=training)
click_probs = tf.nn.sigmoid(click_logits) + EPS
return tf.squeeze(click_probs, axis=2)
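# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original model): one common way to turn
# the click / no-click probabilities produced above into a DIEN-style auxiliary
# loss is a per-timestep binary log-likelihood. The masking convention below is
# an assumption; the actual training loop may combine the terms differently.
# ---------------------------------------------------------------------------
def example_auxiliary_loss(aux_click_probs, aux_noclick_probs, sequence_mask=None):
    click_ll = tf.math.log(aux_click_probs)
    noclick_ll = tf.math.log(1.0 - aux_noclick_probs + EPS)
    per_step = -(click_ll + noclick_ll)
    if sequence_mask is not None:
        # the probabilities cover timesteps 2..T, hence the shifted mask
        per_step = per_step * sequence_mask[:, 1:]
    return tf.reduce_mean(per_step)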
class DIENModel(SequentialRecommenderModel):
def __init__(
self,
feature_spec,
mlp_hidden_dims,
embedding_dim=4
):
super(DIENModel, self).__init__(
feature_spec, embedding_dim, mlp_hidden_dims["classifier"]
)
# DIEN block
self.dien_block = DIENItemSequenceInteractionBlock(
hidden_size=embedding_dim * DIEN_ITEM_SEQ_INTERACTION_SIZE
)
# aux_loss uses an MLP in TF1 code
self.auxiliary_net = CTRClassificationMLP(
mlp_hidden_dims["aux"],
activation_function=partial(
tf.keras.layers.Activation, activation="sigmoid"
),
)
@tf.function
def call(
self,
inputs,
compute_aux_loss=True,
training=False,
):
user_features = inputs["user_features"]
target_item_features = inputs["target_item_features"]
short_sequence_features = inputs["short_sequence_features"]
short_neg_sequence_features = inputs["short_neg_sequence_features"]
short_sequence_mask = inputs["short_sequence_mask"]
output_dict = {}
user_embedding = self.embed(user_features)
target_item_embedding = self.embed(target_item_features)
short_sequence_embeddings = self.embed(short_sequence_features)
short_sequence_embeddings = short_sequence_embeddings * tf.expand_dims(
short_sequence_mask, axis=-1
)
# Pass sequence_embeddings and target_item_embedding to a DIEN block
# it needs to output h'(T) for concatenation and h(1),...,h(T) for aux_loss
final_seq_repr, features_layer_1 = self.dien_block(
(target_item_embedding, short_sequence_embeddings, short_sequence_mask)
)
# short_features_layer_1 = features_layer_1[:, -short_seq_len:, :]
if compute_aux_loss:
# Embed negative sequence features
short_neg_sequence_embeddings = self.embed(short_neg_sequence_features)
short_neg_sequence_embeddings = short_neg_sequence_embeddings * tf.expand_dims(
short_sequence_mask, axis=-1
)
# compute auxiliary logits
aux_click_probs = compute_auxiliary_probs(
self.auxiliary_net,
features_layer_1,
short_sequence_embeddings,
training=training,
)
output_dict["aux_click_probs"] = aux_click_probs
aux_noclick_probs = compute_auxiliary_probs(
self.auxiliary_net,
features_layer_1,
short_neg_sequence_embeddings,
training=training,
)
output_dict["aux_noclick_probs"] = aux_noclick_probs
combined_embeddings = tf.concat([
target_item_embedding,
final_seq_repr,
user_embedding
], -1)
classification_logits = self.classificationMLP(combined_embeddings)
output_dict["logits"] = classification_logits
return output_dict
|
PyTorch/LanguageModeling/BART | BART | README | # BART For PyTorch
This repository provides a script and recipe to train the BART model to achieve state-of-the-art accuracy and is tested and maintained by NVIDIA.
## Table Of Contents
- [Model overview](#model-overview)
* [Model architecture](#model-architecture)
* [Default configuration](#default-configuration)
* [Feature support matrix](#feature-support-matrix)
* [Features](#features)
* [Mixed precision training](#mixed-precision-training)
* [Enabling mixed precision](#enabling-mixed-precision)
* [TF32](#tf32)
* [Glossary](#glossary)
- [Setup](#setup)
* [Requirements](#requirements)
- [Quick Start Guide](#quick-start-guide)
- [Advanced](#advanced)
* [Scripts and sample code](#scripts-and-sample-code)
* [Parameters](#parameters)
* [Command-line options](#command-line-options)
* [Getting the data](#getting-the-data)
* [Dataset guidelines](#dataset-guidelines)
* [Training process](#training-process)
* [Inference process](#inference-process)
- [Performance](#performance)
* [Benchmarking](#benchmarking)
* [Training performance benchmark](#training-performance-benchmark)
* [Inference performance benchmark](#inference-performance-benchmark)
* [Results](#results)
* [Training accuracy results](#training-accuracy-results)
* [Pre-training accuracy: NVIDIA DGX A100 (320x A100 80GB)](#pre-training-accuracy-nvidia-dgx-a100-320x-a100-80gb)
* [Fine-tuning accuracy: NVIDIA DGX A100 (8x A100 80GB)](#fine-tuning-accuracy-nvidia-dgx-a100-8x-a100-80gb)
* [Training stability test](#training-stability-test)
* [Training performance results](#training-performance-results)
* [Pre-training performance: Single-node on NVIDIA DGX A100 (8x A100 80GB)](#pre-training-performance-single-node-on-nvidia-dgx-a100-8x-a100-80gb)
* [Pre-training performance: Multi-node on NVIDIA DGX A100 (8x A100 80GB)](#pre-training-performance-multi-node-on-nvidia-dgx-a100-8x-a100-80gb)
* [Fine-tuning performance: NVIDIA DGX A100 (8x A100 80GB)](#fine-tuning-performance-nvidia-dgx-a100-8x-a100-80gb)
* [Inference performance results](#inference-performance-results)
* [Inference performance: NVIDIA DGX A100 (1x A100 80GB)](#inference-performance-nvidia-dgx-a100-1x-a100-80gb)
- [Release notes](#release-notes)
* [Changelog](#changelog)
* [Known issues](#known-issues)
## Model overview
BART is a denoising autoencoder for pretraining sequence-to-sequence models. According to the [paper](https://arxiv.org/abs/1910.13461), the model uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.
Other publicly available implementations of BART include:
1. [Hugging Face](https://huggingface.co/transformers/model_doc/bart.html)
2. [Fairseq](https://github.com/pytorch/fairseq/tree/master/examples/bart)
This model is trained with mixed precision using Tensor Cores on Volta, Turing, and the NVIDIA Ampere GPU architectures. Therefore, researchers can get results 1.4 to 2.1x faster than training without Tensor Cores, while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
### Model architecture
BART uses a standard sequence-to-sequence Transformer architecture with GeLU activations. The base model consists of 6 layers in both the encoder and the decoder, whereas the large model consists of 12. The architecture has roughly 10% more parameters than BERT.
BART is trained by corrupting documents and then optimizing the reconstruction loss. The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
### Default configuration
BART model is similar to BERT with the following differences:
* Decoder layers additionally perform cross-attention over the final hidden encoder layer
* BART removes the additional feed-forward network before word prediction that BERT uses
Inference is done by default with a beam size of 4 for the CNN-DM dataset and 6 for the XSum dataset.
### Feature support matrix
The following features are supported by this model:
| **Feature** | **BART** |
|:---------:|:----------:|
| PyTorch AMP | Yes |
| PyTorch DDP | Yes |
| LAMB | Yes |
| Multi-node | Yes |
| LDDL | Yes |
| Pre-LN | Yes |
#### Features
[APEX](https://github.com/NVIDIA/apex) is a PyTorch extension with NVIDIA-maintained utilities to streamline mixed precision and distributed training, whereas [AMP](https://nvidia.github.io/apex/amp.html) is an abbreviation used for automatic mixed precision training.
[DDP](https://nvidia.github.io/apex/parallel.html) stands for DistributedDataParallel and is used for multi-GPU training.
[LAMB](https://arxiv.org/pdf/1904.00962.pdf), short for Layerwise Adaptive Moments Based optimizer, is a large-batch optimization technique that helps accelerate training of deep neural networks using large minibatches. It allows using a global batch size of 65536 and 32768 on sequence lengths 128 and 512 respectively, compared to a batch size of 256 for [Adam](https://arxiv.org/pdf/1412.6980.pdf). The optimized implementation accumulates 1024 gradient batches in phase 1 and 4096 steps in phase 2 before updating weights once. This results in a 15% training speedup. On multi-node systems, LAMB allows scaling up to 1024 GPUs, resulting in training speedups of up to 72x compared to Adam. Adam limits the usable learning rate because it is applied globally to all parameters, whereas LAMB follows a layerwise learning rate strategy.
NVLAMB adds the necessary tweaks to [LAMB version 1](https://arxiv.org/abs/1904.00962v1), to ensure correct convergence. The algorithm is as follows:

In this PyTorch BART example, we used global batch size of 64000 and 30720 on sequence lengths 128 and 512 respectively, compared to a batch size of 8000 and sequence lengths 512 on [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) which Facebook used for BART. We only trained with 44% total number of tokens compared to Facebook's BART. It can get 2.7x training speedup and achieve similar accuracy.
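For intuition, the following is a minimal sketch of a single LAMB parameter update (bias correction and edge cases omitted); it is illustrative only and is not the optimizer implementation used by the training scripts in this repository.

```python
import torch

def lamb_update(param, grad, exp_avg, exp_avg_sq, lr,
                beta1=0.9, beta2=0.999, eps=1e-6, weight_decay=0.01):
    # Adam-style first and second moment estimates
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    update = exp_avg / (exp_avg_sq.sqrt() + eps) + weight_decay * param
    # Layer-wise trust ratio rescales the step by ||w|| / ||update||
    w_norm, u_norm = param.norm(), update.norm()
    trust_ratio = (w_norm / u_norm) if w_norm > 0 and u_norm > 0 else 1.0
    param.add_(update, alpha=-lr * float(trust_ratio))
```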
[LDDL](../lddl) is a library that enables scalable data preprocessing and loading. LDDL is used by this PyTorch BART example.
[Pre-LN](https://arxiv.org/pdf/2002.04745.pdf) is a transformer architecture variant in which layer normalization is placed inside the residual blocks. In our experiments, the loss decays faster for the Pre-LN transformer and training is more stable, without exploding or vanishing gradients.
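To make the difference concrete, below is a minimal sketch of a Pre-LN residual block; it is a generic illustration, not the attention or feed-forward block used by this model.

```python
import torch.nn as nn

class PreLNBlock(nn.Module):
    """Pre-LN: layer normalization is applied before the sub-layer, inside the residual branch."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.sublayer = sublayer

    def forward(self, x):
        # A Post-LN block would instead compute: norm(x + sublayer(x))
        return x + self.sublayer(self.norm(x))
```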
### Mixed precision training
Mixed precision is the combined use of different numerical precisions in a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740) training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of [Tensor Cores](https://developer.nvidia.com/tensor-cores) in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using [mixed precision training](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html) previously required two steps:
1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
For information about:
- How to train using mixed precision, see the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html) documentation.
- Techniques used for mixed precision training, see the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog.
- APEX tools for mixed precision training, see the [NVIDIA Apex: Tools for Easy Mixed-Precision Training in PyTorch](https://devblogs.nvidia.com/apex-pytorch-easy-mixed-precision-training/).
#### Enabling mixed precision
In this repository, mixed precision is enabled in PyTorch by using the Automatic Mixed Precision (AMP)
autocast [torch.cuda.amp.autocast](https://pytorch.org/docs/stable/amp.html#autocasting) which casts variables
to half-precision upon retrieval, while storing variables in single-precision format.
Furthermore, to preserve small gradient magnitudes in backpropagation,
a [gradient scaling](https://pytorch.org/docs/stable/amp.html#gradient-scaling)
step must be included.
For an in-depth walk through on AMP, check out sample usage
[here](https://pytorch.org/docs/stable/amp.html).
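A minimal, generic sketch of this pattern is shown below; the model, optimizer, and data are placeholders and do not correspond to the training scripts in this repository.

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(8, 1024, device="cuda")
target = torch.randn(8, 1024, device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():
    loss = torch.nn.functional.mse_loss(model(data), target)
scaler.scale(loss).backward()   # scale the loss to preserve small gradients
scaler.step(optimizer)          # unscale gradients and apply the update
scaler.update()
```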
#### TF32
TensorFloat-32 (TF32) is the new math mode in [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.
TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations.
For more information, refer to the [TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
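TF32 can also be controlled explicitly through PyTorch backend flags; the snippet below is a generic illustration and is not required by the provided scripts.

```python
import torch

# TF32 is on by default for Ampere GPUs; these flags make the choice explicit.
torch.backends.cuda.matmul.allow_tf32 = True  # matrix multiplications
torch.backends.cudnn.allow_tf32 = True        # cuDNN convolutions
```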
### Glossary
**Fine-tuning**
Training an already pretrained model further using a task specific dataset for subject-specific refinements, by adding task-specific layers on top if required.
**Language Model**
Assigns a probability distribution over a sequence of words. Given a sequence of words, it assigns a probability to the whole sequence.
**Pre-training**
Training a model on vast amounts of data on the same (or different) task to build general understandings.
**Transformer**
The paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762) introduces a novel architecture called Transformer that uses an attention mechanism and transforms one sequence into another.
## Setup
The following section lists the requirements that you need to meet in order to start training the BART model.
### Requirements
This repository contains Dockerfile which extends the PyTorch
NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components:
- [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
- [PyTorch 22.08-py3+](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) NGC container
- Supported GPUs:
- [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
For more information about how to get started with NGC containers, see the following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning Documentation:
- [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html)
- [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#accessing_registry)
- [Running PyTorch](https://docs.nvidia.com/deeplearning/dgx/pytorch-release-notes/running.html#running)
For those unable to use the PyTorch NGC container, to set up the required environment or create your own container, see the versioned [NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/dgx/support-matrix/index.html).
## Quick Start Guide
To train your model using mixed or TF32 precision with Tensor Cores or using FP32, perform the following steps using the default parameters of the BART model on the CNN-DM/XSum datasets. For the specifics concerning training and inference, see the [Advanced](#advanced) section.
1. Clone the repository.
```bash
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/LanguageModeling/BART
```
2. Build the BART PyTorch container.
```bash
bash scripts/docker/build.sh
```
3. Start an interactive session in the container to run training/inference.
After you build the container image, you can start an interactive CLI session as follows:
```bash
bash scripts/docker/launch.sh
```
The `launch.sh` script, by default, mounts the current directory to `/workspace/bart`.
4. Download and preprocess the dataset.
Use the following script to download and preprocess CNN DM data as well as XSum dataset.
```bash
bash scripts/get_data.sh <path to data folder>
```
Use the script to download Wikipedia, Common Crawl, and OpenWebTextCorpus for pre-training dataset
```bash
bash scripts/get_pretraining_data.sh <path to data folder>
```
The pretraining dataset is 200GB+ and takes 24+ hours to download.
To download a smaller dataset, you can narrow the date range of the Common Crawl archive, which also shortens the download time. For example:
```bash
download_common_crawl \
--outdir $data_folder/common_crawl \
--warc-files-start-date 2016-09-01 \
--warc-files-end-date 2016-10-31 \
--start-date 2016-09-01 \
--end-date 2016-10-31
```
Use the script to preprocess the pre-training dataset into LDDL Parquet shards
```bash
bash scripts/preprocess_pretrain_data.sh <path to Wikipedia> <path to Common Crawl> <path to OpenWebTextCorpus> <path to data folder>
```
By default, the path to the data folder is set to /workspace/bart/data for ease of use in all the scripts.
5. Start pre-training
BART is designed to pre-train language representations. The following scripts replicate pre-training on Wikipedia, Common Crawl, and OpenWebTextCorpus, following the recipe from the LAMB paper. These scripts are general and can be used for pre-training language representations on any corpus of choice.
From within the container, you can use the following script to run pre-training using LAMB.
```bash
bash scripts/run_pretraining.sh <train_batch_size_phase1> <train_batch_size_phase2> <learning_rate_phase1> <learning_rate_phase2> <precision> <use_preln> <num_gpus> <warmup_steps_phase1> <warmup_steps_phase2> <train_steps_phase1> <train_steps_phase2> <save_checkpoint_steps> <num_accumulation_phase1> <num_accumulation_steps_phase2> <config_path>
```
6. Start summarizing.
Pretrained BART representations can be fine tuned for a state-of-the-art summarization system. From within the container, you can use the following script to run summarization on CNN DM dataset.
```bash
bash scripts/run_summarization.sh <DATA_DIR> <CKPT_PATH> <CONFIG_PATH> <NUM_GPU> <LR> <BS> <ACCUM> <PREC> <TRAIN_STEPS> <WARMUP_STEPS> <MAX_SOURCE_LEN> <MAX_TARGET_LEN> <EVAL_BEAMS> <EVAL_BS> <PRED_BS> <PRELN>
```
This repository contains a number of predefined configurations to run the CNN-DM fine-tuning on NVIDIA DGX-1 V100 or NVIDIA DGX A100 nodes in `scripts/params/cnn_dm_params.sh`. For example, to use the default DGX A100 8-GPU config, run:
```bash
bash scripts/run_summarization.sh $(source scripts/params/cnn_dm_params.sh && dgxa100_8gpu_bf16)
```
Similarly, configurations for XSum dataset are available in `scripts/params/xsum_params.sh`.
7. Start inference/predictions.
You can run the following script to run inference summarization using a fine-tuned checkpoint:
```bash
bash scripts/run_eval_summarization.sh <INIT_CKPT> <PRED_BS> <NUM_GPU> <PRECISION> <EVAL_BEAMS> <MAX_SOURCE_LEN> <MAX_TARGET_LEN> <DATA_DIR> <CONFIG_PATH> <PRELN>
```
This repository contains multiple predefined configurations in `scripts/params/cnn_dm_params.sh` and `scripts/params/xsum_params.sh`. For example, to run inference on CNN-DM with a checkpoint run:
```bash
bash scripts/run_eval_summarization.sh <INIT_CKPT> $(source scripts/params/cnn_dm_params.sh && dgxa100_8gpu_bf16_eval)
```
Now that you have your model trained and evaluated, you can choose to compare your training results with our [Training accuracy results](#training-accuracy-results). You can also benchmark your performance against the [Training performance benchmark](#training-performance-results) or the [Inference performance benchmark](#inference-performance-results). Following the steps in these sections will ensure that you achieve the same accuracy and performance results as stated in the [Results](#results) section.
8. Run Custom Inference with the fine-tuned checkpoint
We can write a simple few lines of code to run custom inference with the fine-tuned checkpoint.
```python
from bart.modeling.modeling_bart import BartForConditionalGeneration
from bart.tokenization.tokenization_bart import BartTokenizer
from bart.configuration.configuration_bart import BartConfig
import json
config = BartConfig(**json.load(open('configs/config.json', "r")))
config.dtype = None
config.pre_ln = True
model_path = 'results/_epoch1_step2000.ckpt' # The fine-tuned checkpoint path
model = BartForConditionalGeneration.from_pretrained(model_path, config=config)
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
ARTICLE_TO_SUMMARIZE = "NVIDIA Geforce Won't Run or Uninstall"
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, truncation=True, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'],
num_beams=4,
max_length=50,
num_beam_groups=1,
output_scores=False,
return_dict_in_generate=False,
encoder_no_repeat_ngram_size=0,
diversity_penalty=0.0,
early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
```
## Advanced
The following sections provide greater details of the dataset, running training and inference, and the training results.
### Scripts and sample code
In the root directory, the most important files are:
* `pretrain.py` - Serves as entry point for pre-training
* `finetune.py` - Serves as entry point for fine-tuning
* `run_eval.py` - Serves as entry point for inference
* `Dockerfile` - Container with the basic set of dependencies to run BART
The `scripts/` folder encapsulates all the one-click scripts required for running various functionalities supported such as:
* `run_summarization.sh` - Runs summarization finetuning followed by inference using the `finetune.py` and `run_eval.py` files.
* `run_summarization_eval.sh` - Runs inference on fine tuned checkpoint using the `run_eval.py` file.
* `get_data.sh` - Preprocesses CNN-DM dataset as well as downloads and preprocesses XSum dataset.
* `get_pretraining_data.sh` - Downloads pre-train dataset.
* `preprocess_pretrain_data.sh` - Preprocesses pre-train dataset.
Other folders included in the root directory are:
* `data/` - Necessary folder to download datasets required for fine tuning of BART.
* `src/` - Modeling, tokenization and configuration functionality files for implementing the BART model.
* `utils/` - Necessary utility files for BART model.
### Parameters
Aside from the options to set hyperparameters, the relevant options to control the behaviour of the `pretrain.py` script are:
```
--config_path: The configuration file corresponding to BART Model
--warmup_steps: Number of WARMUP_STEPS
--max_steps: Number of MAX_STEPS
--data_dir: Location to DATA_DIR
--learning_rate: Learning Rate
--n_val: Number of validation examples to test for early stopping
--train_batch_size: Train batch size
--gradient_accumulation_steps: Number of accumulation steps
--max_source_length: Maximum source length
--max_target_length: Maximum target length
--val_max_target_length: Maximum length of validation tokens
--eval_max_gen_length: Maximum length while generating validation tokens
--weight_decay: weight decay
--dropout: drop out
--lamb: Whether to use LAMB optimizer
--pre_ln: Whether to use Pre-LN architecture
--allreduce_post_accumulation_half_precision: Whether to do fp16/bf16 allreduce post accumulation
```
Aside from the options to set hyperparameters, the relevant options to control the behaviour of the `finetune.py` script are:
```
--config_path: The configuration file corresponding to BART Model
--warmup_steps: Number of WARMUP_STEPS
--max_steps: Number of MAX_STEPS
--data_dir: Location to DATA_DIR
--learning_rate: Learning Rate
--n_val: Number of validation examples to test for early stopping
--train_batch_size: Train batch size
--gradient_accumulation_steps: Number of accumulation steps
--max_source_length: Maximum source length
--max_target_length: Maximum target length
--val_max_target_length: Maximum length of validation tokens
--eval_max_gen_length: Maximum length while generating validation tokens
--weight_decay: weight decay
--dropout: drop out
--pre_ln: Whether to use Pre-LN architecture
--allreduce_post_accumulation_half_precision: Whether to do fp16/bf16 allreduce post accumulation
```
### Command-line options
To see the full list of available options and their descriptions, use the `-h` or `--help` command-line option with the Python file, for example:
```bash
python pretrain.py --help
python finetune.py --help
python run_eval.py --help
```
### Getting the data
For pre-training BART, we use the concatenation of Wikipedia, Common Crawl, and OpenWebTextCorpus.
Common Crawl is an archive of news articles from small and major publishers worldwide, provided by commoncrawl.org.
OpenWebTextCorpus is an open source effort to reproduce OpenAI’s WebText dataset. The distribution was created by Aaron Gokaslan and Vanya Cohen of Brown University.
For fine-tuning BART, we have tested fine tuning the BART model on summarization benchmarks such as CNN-DM and XSum.
CNN-DM is a concatenation of CNN Stories as well as Daily Mail Stories. CNN consists of approximately 90k documents whereas Daily Mail consists of 197k documents.
These documents are preprocessed to have two features:
* Article: text of news article, used as the document to be summarized
* Highlights: joined text of highlights with and around each highlight, which is the target summary
XSum, on the other hand, is also a single-document summarization dataset, but one that favors abstractive modeling. It consists of approximately 230k BBC articles, each paired with a single-sentence summary.
#### Dataset guidelines
The repository contains scripts to preprocess and download data. It can be run as:
```bash
bash scripts/get_data.sh <path to output data folder>
```
The script downloads CNN and DM raw data from [here](https://cs.nyu.edu/~kcho/DMQA/). The raw data is preprocessed using scripts from [repository](https://github.com/abisee/cnn-dailymail). The stories are first tokenized, written to serialized binary files and split into train, test and validation sets.
The script also downloads the XSum dataset from the [HuggingFace storage](https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz).
```bash
bash scripts/get_pretraining_data.sh <path to data folder>
```
The script uses the LDDL downloader to download the Wikipedia, Common Crawl, and OpenWebTextCorpus datasets. Common Crawl is downloaded with [news-please](https://github.com/fhamborg/news-please), and OpenWebTextCorpus is downloaded from [here](https://skylion007.github.io/OpenWebTextCorpus/).
To download a smaller dataset, you can narrow the date range of the Common Crawl archive in the script, which also shortens the download time. For example:
```bash
download_common_crawl \
--outdir $data_folder/common_crawl \
--warc-files-start-date 2016-09-01 \
--warc-files-end-date 2016-10-31 \
--start-date 2016-09-01 \
--end-date 2016-10-31
```
```bash
bash scripts/preprocess_pretrain_data.sh <path to Wikipedia> <path to Common Crawl> <path to OpenWebTextCorpus> <path to data folder>
```
The script uses the LDDL preprocessor and load balancer to preprocess the pre-training dataset into Parquet shards which are then streamed during the pre-training by the LDDL data loader.
The script by default stores the data into the `/workspace/bart/data` folder.
### Training process
The training process consists of two steps: pre-training and fine-tuning.
#### Pre-training
Pre-training BART is done using `scripts/run_pretraining.sh` script that, in turn, uses the `pretrain.py` file to perform training.
For example, it can be invoked by calling:
```bash
bash scripts/run_pretraining.sh <train_batch_size_phase1> <train_batch_size_phase2> <learning_rate_phase1> <learning_rate_phase2> <precision> <use_preln> <num_gpus> <warmup_steps_phase1> <warmup_steps_phase2> <train_steps_phase1> <train_steps_phase2> <save_checkpoint_steps> <num_accumulation_phase1> <num_accumulation_steps_phase2> <config_path>
```
Where:
* train_batch_size_phase* - per-GPU batch size used for training in the respective phase
* learning_rate_phase* - Learning rate in the respective phase
* precision - fp16/bf16/fp32/tf32 precision for training
* use_preln - Whether to use Pre-LN architecture
* num_gpus - number of GPUs to run training with
* warmup_steps_phase* - Number of warmup steps for learning rate scheduler in the respective phase
* train_steps_phase* - Number of training steps in the respective phase
* save_checkpoint_steps - Number of steps for saving checkpoint
* num_accumulation_phase* - Number of accumulation steps for an effective larger training batch size in the respective phase
* config_path - path to configuration file of BART Model
By default, the training script stores results to `results/bart_pyt_pretraining` and runs with:
```bash
bash scripts/run_pretraining.sh 200 32 5e-3 4e-3 bf16 true 8 2166 200 95040 7560 100 40 120 configs/config.json
```
#### Fine-tuning
Training BART for summarization is done using `scripts/run_summarization.sh` script that, in turn, uses the `finetune.py` file to perform training.
For example, it can be invoked by calling:
```bash
bash scripts/run_summarization.sh <DATA_DIR> <CKPT_PATH> <CONFIG_PATH> <NUM_GPU> <LR> <BS> <ACCUM> <PREC> <TRAIN_STEPS> <WARMUP_STEPS> <MAX_SOURCE_LEN> <MAX_TARGET_LEN> <EVAL_BEAMS> <EVAL_BS> <PRED_BS> <PRELN>
```
Where:
* DATA_DIR - path to data directory with train/test/val files.
* CONFIG_PATH - path to configuration file of BART Model
* NUM_GPU - number of GPUs to run training with
* LR - Learning rate for training process
* BS - Training batch size
* ACCUM - Number of accumulation steps for an effective larger training batch size
* PREC - fp16/fp32/tf32 precision for training and inference
* TRAIN_STEPS - Maximum number of training steps
* WARMUP_STEPS - Number of warmup steps for learning rate scheduler
* MAX_SOURCE_LEN - Maximum source length of articles
* MAX_TARGET_LEN - Maximum target length of summaries
* EVAL_BEAMS - Number of beams to run during inference
* EVAL_BS - Batch size for inference during validation
* PRED_BS - Batch size for inference on test data
* PRELN - Whether to use Pre-LN architecture
By default, the training script stores results to `results/bart_pyt_${DATESTAMP}` and runs with:
```bash
bash scripts/run_summarization.sh data/cnn_dm/ data/nvidia_pretrained/bart_large/ configs/config.json 8 1.25e-4 40 1 bf16 2000 50 1024 142 4 128 true
```
These parameters train CNN-DM with reasonable ROUGE scores on a DGX A100 with 80GB A100 cards. Other tested configurations are available under `scripts/params/cnn_dm_params.sh` for CNN-DM and `scripts/params/xsum_params.sh` for XSum datasets.
### Inference process
Evaluating BART for summarization is done using `scripts/run_eval_summarization.sh` script that, in turn, uses the `run_eval.py` file to perform inference.
For example, it can be invoked by calling:
```bash
bash scripts/run_eval_summarization.sh <INIT_CKPT> <PRED_BS> <NUM_GPU> <PRECISION> <EVAL_BEAMS> <MAX_SOURCE_LEN> <MAX_TARGET_LEN> <DATA_DIR> <CONFIG_PATH> <PRELN>
```
Where:
* `INIT_CKPT` - Model name or path to initialize BART Model weights with.
* `PRED_BS` - Batch size for inference
* `NUM_GPU` - number of GPUs to run training with
* `PRECISION` - FP16/FP32/TF32 precision for training and inference
* `EVAL_BEAMS` - Number of beams to run during inference
* `MAX_SOURCE_LEN` - Maximum source length of articles
* `MAX_TARGET_LEN` - Maximum target length of summaries
* `DATA_DIR` - path to data directory with train/test/val files.
* `CONFIG_PATH` - path to configuration file of BART Model
* `PRELN` - Whether to use Pre-LN architecture
By default, the evaluation script stores results to `results/bart_pyt_inference_${DATESTAMP}` and runs with:
```bash
bash scripts/run_eval_summarization.sh data/nvidia-pretrained/model.ckpt 128 8 fp16 4 1024 142 data/cnn_dm/ configs/config.json
```
These parameters run inference on CNN-DM on a DGX A100 with 80GB A100 cards. For XSum, try `EVAL_BEAMS`=6, `MAX_SOURCE_LEN`=1024 and `MAX_TARGET_LEN`=60. For other GPUs/precisions, reduce `PRED_BS` as indicated in `scripts/params`.
## Performance
### Benchmarking
The following section shows how to run benchmarks measuring the model performance in training and inference modes.
#### Training performance benchmark
To benchmark the training performance on a specific batch size, source length, target length and dataset for one epoch, run:
```bash
bash scripts/run_training_benchmark.sh <batch size> <max source length> <max target length> <data dir>
```
The resulting throughput for each `NUM_GPU` and `PRECISION` combination is stored in `results/bart_pyt_training_benchmark_${DATESTAMP}/inference_benchmark.log`
#### Inference performance benchmark
To benchmark the inference performance on a specific batch size, source length, target length and dataset, run:
```bash
bash scripts/run_inference_benchmark.sh <predict batch size> <eval beams> <max source length> <max target length> <model name or path> <data dir> <config path>
```
The resulting throughput for each `NUM_GPU` and `PRECISION` combination is stored in `results/bart_pyt_inference_benchmark_${DATESTAMP}/inference_benchmark.log`
### Results
The following sections provide details on how we achieved our performance and accuracy in training and inference.
#### Training accuracy results
##### Pre-training accuracy: NVIDIA DGX A100 (320x A100 80GB)
Our results were obtained by running the `run_pretraining.sh` training script in the PyTorch 22.08-py3 NGC container on 40 NVIDIA DGX A100 nodes (320x A100 80GB GPUs).
| Nodes | Sequence Length | Batch size/GPU (BF16) | Accumulation Steps | Final loss - BF16 | Time to train (hrs) - BF16 |
|-------|-------|---------------------------------------|------------------------------------|----------------------------------|-----------------------------------|
| 40 | 128 | 200 | 1 | 0.5095 | 17.38 |
| 40 | 512 | 32 | 3 | 0.6085 | 3.28 |
##### Fine-tuning accuracy: NVIDIA DGX A100 (8x A100 80GB)
Our results for the XSum dataset were obtained by running the `run_summarization.sh` training script in the PyTorch 22.08-py3 NGC container on NVIDIA DGX A100 (8x A100 80GB) GPUs. ROUGE-1, ROUGE-2, and ROUGE-LSum scores are listed as accuracy.
| GPUs | Batch size (TF32, BF16) | R1 - TF32 | R2 - TF32 | RL - TF32 | R1 - BF16 | R2 - BF16 | RL - BF16 | Time to train (hrs) - TF32 | Time to train (hrs) - BF16 | Time to train (hrs) speedup (TF32 to BF16) |
|------|------------------|-----|-----|-----|-----|-----|-----|----------------------|---------------------------------|-------------------------------------------------|
| 1 | 24, 40 | 45.22 | 22.03 | 36.95 | 44.91 | 21.85 | 36.78 | 2.41 | 1.69 | 1.43 |
| 8 | 192, 320 | 45.04 | 21.92 | 36.82 | 45.01 | 21.86 | 36.81 | 0.64 | 0.39 | 1.64 |
In addition, results for the CNN-DM dataset are:
| GPUs | Batch size (TF32, BF16) | R1 - TF32 | R2 - TF32 | RL - TF32 | R1 - BF16 | R2 - BF16 | RL - BF16 | Time to train (hrs) - TF32 | Time to train (hrs) - BF16 | Time to train (hrs) speedup (TF32 to BF16) |
|------|------------------|-----|-----|-----|-----|-----|-----|----------------------|---------------------------------|-------------------------------------------------|
| 1 | 24, 40 | 43.76 | 20.79 | 40.51 | 43.58 | 20.63 | 40.32 | 3.87 | 2.42 | 1.60 |
| 8 | 192, 320 | 43.77 | 20.77 | 40.53 | 43.76 | 20.73 | 40.50 | 0.73 | 0.45 | 1.62 |
##### Fine-tuning stability test
Our results for the XSum dataset were obtained by running the `run_summarization.sh` training script in the PyTorch 22.08-py3 NGC container on NVIDIA DGX A100 (8x A100 80GB) GPUs. The accuracy column lists ROUGE-1 scores across 5 different training runs with different seeds on DGX A100.
| **FP16, 8x GPUs** | **seed 1** | **seed 2** | **seed 3** | **seed 4** | **seed 5** | **mean** | **std** |
|:-----------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| ROUGE-1 | 45.08 | 44.98 | 45.10 | 44.91 | 44.95 | 45.00 | - |
#### Training performance results
##### Pre-training performance: Single-node on NVIDIA DGX A100 (8x A100 80GB)
Our results were obtained by running the `run_pretraining.sh` training script in the PyTorch 22.08-py3 NGC container on single node NVIDIA DGX A100 (8x A100 80GB) GPUs.
| GPUs | Sequence Length | Batch size / GPU (TF32, BF16) | Throughput - TF32 | Throughput - BF16 | Throughput speedup (TF32 - BF16) | Weak scaling - TF32 | Weak scaling - BF16 |
|------|------|------------------|-------------------|------------------------------|---------------------------------------------|---------------------|--------------------------------|
| 1 | 128 | 100, 200 | 202.53 | 326.53 | 1.61 | 1 | 1 |
| 8 | 128 | 100, 200 | 1556.23 | 2572.86 | 1.65 | 7.68 | 7.88 |
| 1 | 512 | 16, 32 | 41.35 | 69.31 | 1.68 | 1 | 1 |
| 8 | 512 | 16, 32 | 317.85 | 549.67 | 1.73 | 7.69 | 7.93 |
##### Pre-training performance: Multi-node on NVIDIA DGX A100 (8x A100 80GB)
Our results were obtained by running the `run_pretraining.sh` training script in the PyTorch 22.08-py3 NGC container on multi node NVIDIA DGX A100 (8x A100 80GB) GPUs.
| Nodes | Sequence Length |Batch size / GPU (TF32, BF16) | Throughput - TF32 | Throughput - BF16 | Throughput speedup (TF32 - BF16) | Weak scaling - TF32 | Weak scaling - BF16 |
|------|------|------------------|-------------------|------------------------------|---------------------------------------------|---------------------|--------------------------------|
| 1 | 128 | 100, 200 | 1556.23 | 2572.86 | 1.65 | 1 | 1 |
| 20 | 128 | 100, 200 | 31067.96 | 52459.02 | 1.69 | 19.96 | 20.39 |
| 40 | 128 | 100, 200 | 61538.46 | 97028.51 | 1.58 | 39.54 | 37.71 |
| 1 | 512 | 16, 32 | 317.85 | 549.67 | 1.73 | 1 | 1 |
| 20 | 512 | 16, 32 | 5953.49 | 10520.54 | 1.77 | 18.73 | 19.14 |
| 40 | 512 | 16, 32 | 11636.36 | 19948.05 | 1.71 | 36.61 | 36.29 |
##### Fine-tuning performance: NVIDIA DGX A100 (8x A100 80GB)
Our results were obtained by running the `run_summarization.sh` training script in the PyTorch 22.08-py3 NGC container on NVIDIA DGX A100 (8x A100 80GB) GPUs. Performance numbers (in items per second) were averaged over an entire training epoch.
| GPUs | Batch size / GPU (TF32, BF16) | Throughput - TF32 | Throughput - BF16 | Throughput speedup (TF32 - BF16) | Weak scaling - TF32 | Weak scaling - BF16 |
|------|------------------|-------------------|------------------------------|---------------------------------------------|---------------------|--------------------------------|
| 1 | 24, 40 | 48.61 | 74.59 | 1.53 | 1.00 | 1.00 |
| 8 | 24, 40 | 243.03 | 390.24 | 1.61 | 3.39 | 4.08 |
To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide).
The performance metrics used are tokens per second computed from iterating through an entire epoch of XSum dataset with source length = 1024 and target length = 60.
#### Inference performance results
##### Inference performance: NVIDIA DGX A100 (1x A100 80GB)
Our results were obtained by running the `run_eval_summarization.sh` inferencing benchmarking script in the PyTorch 22.08-py3 NGC container on NVIDIA DGX A100 (1x A100 80GB) GPU.
BF16
| Batch size | Latency Avg | Latency 90% | Latency 95% | Latency 99% | Throughput |
|------------|-------------|:-----------:|:-----------:|:-----------:|------------|
| 1 | 0.28 | 0.35 | 0.38 | 0.46 | 3.54 |
| 4 | 0.44 | 0.52 | 0.56 | 0.71 | 9.16 |
| 8 | 0.63 | 0.75 | 0.83 | 0.98 | 12.79 |
| 16 | 0.98 | 1.2 | 1.29 | 1.47 | 16.3 |
| 32 | 1.8 | 2.27 | 2.47 | 2.63 | 17.73 |
| 64 | 3.78 | 4.85 | 5.21 | 5.4 | 16.83 |
| 128 | 8.29 | 10.53 | 10.69 | 10.93 | 15.36 |
To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide).
The inference performance metrics used are milliseconds per iteration. They are computed by iterating through the XSum test data with source length = 1024, target length = 60 and beam search = 6.
## Release notes
### Changelog
June, 2021
- Initial release
December, 2022
- Add features for pre-training
### Known issues
There are no known issues with this model.
|
TensorFlow/Detection/SSD/models/research/object_detection/core | core | post_processing | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Post-processing operations on detected boxes."""
import numpy as np
import tensorflow as tf
from object_detection.core import box_list
from object_detection.core import box_list_ops
from object_detection.core import standard_fields as fields
from object_detection.utils import shape_utils
def multiclass_non_max_suppression(boxes,
scores,
score_thresh,
iou_thresh,
max_size_per_class,
max_total_size=0,
clip_window=None,
change_coordinate_frame=False,
masks=None,
boundaries=None,
pad_to_max_output_size=False,
additional_fields=None,
scope=None):
"""Multi-class version of non maximum suppression.
This op greedily selects a subset of detection bounding boxes, pruning
away boxes that have high IOU (intersection over union) overlap (> thresh)
with already selected boxes. It operates independently for each class for
which scores are provided (via the scores field of the input box_list),
pruning boxes with score less than a provided threshold prior to
applying NMS.
Please note that this operation is performed on *all* classes, therefore any
background classes should be removed prior to calling this function.
Selected boxes are guaranteed to be sorted in decreasing order by score (but
the sort is not guaranteed to be stable).
Args:
boxes: A [k, q, 4] float32 tensor containing k detections. `q` can be either
number of classes or 1 depending on whether a separate box is predicted
per class.
scores: A [k, num_classes] float32 tensor containing the scores for each of
the k detections. The scores have to be non-negative when
pad_to_max_output_size is True.
score_thresh: scalar threshold for score (low scoring boxes are removed).
iou_thresh: scalar threshold for IOU (new boxes that have high IOU overlap
with previously selected boxes are removed).
max_size_per_class: maximum number of retained boxes per class.
max_total_size: maximum number of boxes retained over all classes. By
default returns all boxes retained after capping boxes per class.
clip_window: A float32 tensor of the form [y_min, x_min, y_max, x_max]
representing the window to clip and normalize boxes to before performing
non-max suppression.
change_coordinate_frame: Whether to normalize coordinates after clipping
relative to clip_window (this can only be set to True if a clip_window
is provided)
masks: (optional) a [k, q, mask_height, mask_width] float32 tensor
containing box masks. `q` can be either number of classes or 1 depending
on whether a separate mask is predicted per class.
boundaries: (optional) a [k, q, boundary_height, boundary_width] float32
tensor containing box boundaries. `q` can be either number of classes or 1
depending on whether a separate boundary is predicted per class.
pad_to_max_output_size: If true, the output nmsed boxes are padded to be of
length `max_size_per_class`. Defaults to false.
additional_fields: (optional) If not None, a dictionary that maps keys to
tensors whose first dimensions are all of size `k`. After non-maximum
suppression, all tensors corresponding to the selected boxes will be
added to resulting BoxList.
scope: name scope.
Returns:
A tuple of sorted_boxes and num_valid_nms_boxes. The sorted_boxes is a
BoxList holds M boxes with a rank-1 scores field representing
corresponding scores for each box with scores sorted in decreasing order
and a rank-1 classes field representing a class label for each box. The
num_valid_nms_boxes is a 0-D integer tensor representing the number of
valid elements in `BoxList`, with the valid elements appearing first.
Raises:
ValueError: if iou_thresh is not in [0, 1] or if input boxlist does not have
a valid scores field.
"""
if not 0 <= iou_thresh <= 1.0:
raise ValueError('iou_thresh must be between 0 and 1')
if scores.shape.ndims != 2:
raise ValueError('scores field must be of rank 2')
if scores.shape[1].value is None:
raise ValueError('scores must have statically defined second '
'dimension')
if boxes.shape.ndims != 3:
raise ValueError('boxes must be of rank 3.')
if not (boxes.shape[1].value == scores.shape[1].value or
boxes.shape[1].value == 1):
raise ValueError('second dimension of boxes must be either 1 or equal '
'to the second dimension of scores')
if boxes.shape[2].value != 4:
raise ValueError('last dimension of boxes must be of size 4.')
if change_coordinate_frame and clip_window is None:
raise ValueError('if change_coordinate_frame is True, then a clip_window'
'must be specified.')
with tf.name_scope(scope, 'MultiClassNonMaxSuppression'):
num_scores = tf.shape(scores)[0]
num_classes = scores.get_shape()[1]
selected_boxes_list = []
num_valid_nms_boxes_cumulative = tf.constant(0)
per_class_boxes_list = tf.unstack(boxes, axis=1)
if masks is not None:
per_class_masks_list = tf.unstack(masks, axis=1)
if boundaries is not None:
per_class_boundaries_list = tf.unstack(boundaries, axis=1)
boxes_ids = (range(num_classes) if len(per_class_boxes_list) > 1
else [0] * num_classes.value)
for class_idx, boxes_idx in zip(range(num_classes), boxes_ids):
per_class_boxes = per_class_boxes_list[boxes_idx]
boxlist_and_class_scores = box_list.BoxList(per_class_boxes)
class_scores = tf.reshape(
tf.slice(scores, [0, class_idx], tf.stack([num_scores, 1])), [-1])
boxlist_and_class_scores.add_field(fields.BoxListFields.scores,
class_scores)
if masks is not None:
per_class_masks = per_class_masks_list[boxes_idx]
boxlist_and_class_scores.add_field(fields.BoxListFields.masks,
per_class_masks)
if boundaries is not None:
per_class_boundaries = per_class_boundaries_list[boxes_idx]
boxlist_and_class_scores.add_field(fields.BoxListFields.boundaries,
per_class_boundaries)
if additional_fields is not None:
for key, tensor in additional_fields.items():
boxlist_and_class_scores.add_field(key, tensor)
if pad_to_max_output_size:
max_selection_size = max_size_per_class
with tf.device('/CPU:0'):
selected_indices, num_valid_nms_boxes = (
tf.image.non_max_suppression_padded(
boxlist_and_class_scores.get(),
boxlist_and_class_scores.get_field(fields.BoxListFields.scores),
max_selection_size,
iou_threshold=iou_thresh,
score_threshold=score_thresh,
pad_to_max_output_size=True))
else:
max_selection_size = tf.minimum(max_size_per_class,
boxlist_and_class_scores.num_boxes())
with tf.device('/CPU:0'):
selected_indices = tf.image.non_max_suppression(
boxlist_and_class_scores.get(),
boxlist_and_class_scores.get_field(fields.BoxListFields.scores),
max_selection_size,
iou_threshold=iou_thresh,
score_threshold=score_thresh)
num_valid_nms_boxes = tf.shape(selected_indices)[0]
selected_indices = tf.concat(
[selected_indices,
tf.zeros(max_selection_size-num_valid_nms_boxes, tf.int32)], 0)
nms_result = box_list_ops.gather(boxlist_and_class_scores,
selected_indices)
# Make the scores -1 for invalid boxes.
valid_nms_boxes_indx = tf.less(
tf.range(max_selection_size), num_valid_nms_boxes)
nms_scores = nms_result.get_field(fields.BoxListFields.scores)
nms_result.add_field(fields.BoxListFields.scores,
tf.where(valid_nms_boxes_indx,
nms_scores, -1*tf.ones(max_selection_size)))
num_valid_nms_boxes_cumulative += num_valid_nms_boxes
nms_result.add_field(
fields.BoxListFields.classes, (tf.zeros_like(
nms_result.get_field(fields.BoxListFields.scores)) + class_idx))
selected_boxes_list.append(nms_result)
selected_boxes = box_list_ops.concatenate(selected_boxes_list)
sorted_boxes = box_list_ops.sort_by_field(selected_boxes,
fields.BoxListFields.scores)
if clip_window is not None:
# When pad_to_max_output_size is False, it prunes the boxes with zero
# area.
sorted_boxes = box_list_ops.clip_to_window(
sorted_boxes,
clip_window,
filter_nonoverlapping=not pad_to_max_output_size)
# Set the scores of boxes with zero area to -1 to keep the default
# behaviour of pruning out zero area boxes.
sorted_boxes_size = tf.shape(sorted_boxes.get())[0]
non_zero_box_area = tf.cast(box_list_ops.area(sorted_boxes), tf.bool)
sorted_boxes_scores = tf.where(
non_zero_box_area,
sorted_boxes.get_field(fields.BoxListFields.scores),
-1*tf.ones(sorted_boxes_size))
sorted_boxes.add_field(fields.BoxListFields.scores, sorted_boxes_scores)
num_valid_nms_boxes_cumulative = tf.reduce_sum(
tf.cast(tf.greater_equal(sorted_boxes_scores, 0), tf.int32))
sorted_boxes = box_list_ops.sort_by_field(sorted_boxes,
fields.BoxListFields.scores)
if change_coordinate_frame:
sorted_boxes = box_list_ops.change_coordinate_frame(
sorted_boxes, clip_window)
if max_total_size:
max_total_size = tf.minimum(max_total_size,
sorted_boxes.num_boxes())
sorted_boxes = box_list_ops.gather(sorted_boxes,
tf.range(max_total_size))
num_valid_nms_boxes_cumulative = tf.where(
max_total_size > num_valid_nms_boxes_cumulative,
num_valid_nms_boxes_cumulative, max_total_size)
# Select only the valid boxes if pad_to_max_output_size is False.
if not pad_to_max_output_size:
sorted_boxes = box_list_ops.gather(
sorted_boxes, tf.range(num_valid_nms_boxes_cumulative))
return sorted_boxes, num_valid_nms_boxes_cumulative
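# Illustrative usage (not part of the original file): with class-agnostic boxes
# (q = 1), 100 proposals and 90 classes, a hypothetical call could look like:
#   boxes = tf.random.uniform([100, 1, 4])
#   scores = tf.random.uniform([100, 90])
#   nmsed_boxlist, num_valid = multiclass_non_max_suppression(
#       boxes, scores, score_thresh=0.5, iou_thresh=0.6,
#       max_size_per_class=100, max_total_size=300)
# `nmsed_boxlist` is a score-sorted BoxList and `num_valid` is the number of
# valid boxes it contains.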
def batch_multiclass_non_max_suppression(boxes,
scores,
score_thresh,
iou_thresh,
max_size_per_class,
max_total_size=0,
clip_window=None,
change_coordinate_frame=False,
num_valid_boxes=None,
masks=None,
additional_fields=None,
scope=None,
use_static_shapes=False,
parallel_iterations=32):
"""Multi-class version of non maximum suppression that operates on a batch.
This op is similar to `multiclass_non_max_suppression` but operates on a batch
of boxes and scores. See documentation for `multiclass_non_max_suppression`
for details.
Args:
boxes: A [batch_size, num_anchors, q, 4] float32 tensor containing
detections. If `q` is 1 then same boxes are used for all classes
otherwise, if `q` is equal to number of classes, class-specific boxes
are used.
scores: A [batch_size, num_anchors, num_classes] float32 tensor containing
the scores for each of the `num_anchors` detections. The scores have to be
non-negative when use_static_shapes is set True.
score_thresh: scalar threshold for score (low scoring boxes are removed).
iou_thresh: scalar threshold for IOU (new boxes that have high IOU overlap
with previously selected boxes are removed).
max_size_per_class: maximum number of retained boxes per class.
max_total_size: maximum number of boxes retained over all classes. By
default returns all boxes retained after capping boxes per class.
clip_window: A float32 tensor of shape [batch_size, 4] where each entry is
of the form [y_min, x_min, y_max, x_max] representing the window to clip
boxes to before performing non-max suppression. This argument can also be
a tensor of shape [4] in which case, the same clip window is applied to
all images in the batch. If clip_widow is None, all boxes are used to
perform non-max suppression.
change_coordinate_frame: Whether to normalize coordinates after clipping
relative to clip_window (this can only be set to True if a clip_window
is provided)
num_valid_boxes: (optional) a Tensor of type `int32`. A 1-D tensor of shape
[batch_size] representing the number of valid boxes to be considered
for each image in the batch. This parameter allows for ignoring zero
paddings.
masks: (optional) a [batch_size, num_anchors, q, mask_height, mask_width]
float32 tensor containing box masks. `q` can be either number of classes
or 1 depending on whether a separate mask is predicted per class.
additional_fields: (optional) If not None, a dictionary that maps keys to
tensors whose dimensions are [batch_size, num_anchors, ...].
scope: tf scope name.
use_static_shapes: If true, the output nmsed boxes are padded to be of
length `max_size_per_class` and it doesn't clip boxes to max_total_size.
Defaults to false.
parallel_iterations: (optional) number of batch items to process in
parallel.
Returns:
'nmsed_boxes': A [batch_size, max_detections, 4] float32 tensor
containing the non-max suppressed boxes.
'nmsed_scores': A [batch_size, max_detections] float32 tensor containing
the scores for the boxes.
'nmsed_classes': A [batch_size, max_detections] float32 tensor
containing the class for boxes.
'nmsed_masks': (optional) a
[batch_size, max_detections, mask_height, mask_width] float32 tensor
containing masks for each selected box. This is set to None if input
`masks` is None.
'nmsed_additional_fields': (optional) a dictionary of
[batch_size, max_detections, ...] float32 tensors corresponding to the
tensors specified in the input `additional_fields`. This is not returned
if input `additional_fields` is None.
'num_detections': A [batch_size] int32 tensor indicating the number of
valid detections per batch item. Only the top num_detections[i] entries in
nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The rest of the
entries are zero paddings.
Raises:
ValueError: if `q` in boxes.shape is not 1 or not equal to number of
classes as inferred from scores.shape.
"""
q = boxes.shape[2].value
num_classes = scores.shape[2].value
if q != 1 and q != num_classes:
raise ValueError('third dimension of boxes must be either 1 or equal '
'to the third dimension of scores')
if change_coordinate_frame and clip_window is None:
raise ValueError('if change_coordinate_frame is True, then a clip_window'
'must be specified.')
original_masks = masks
original_additional_fields = additional_fields
with tf.name_scope(scope, 'BatchMultiClassNonMaxSuppression'):
boxes_shape = boxes.shape
batch_size = boxes_shape[0].value
num_anchors = boxes_shape[1].value
if batch_size is None:
batch_size = tf.shape(boxes)[0]
if num_anchors is None:
num_anchors = tf.shape(boxes)[1]
# If num valid boxes aren't provided, create one and mark all boxes as
# valid.
if num_valid_boxes is None:
num_valid_boxes = tf.ones([batch_size], dtype=tf.int32) * num_anchors
# If masks aren't provided, create dummy masks so we can only have one copy
# of _single_image_nms_fn and discard the dummy masks after map_fn.
if masks is None:
masks_shape = tf.stack([batch_size, num_anchors, q, 1, 1])
masks = tf.zeros(masks_shape)
if clip_window is None:
clip_window = tf.stack([
tf.reduce_min(boxes[:, :, :, 0]),
tf.reduce_min(boxes[:, :, :, 1]),
tf.reduce_max(boxes[:, :, :, 2]),
tf.reduce_max(boxes[:, :, :, 3])
])
if clip_window.shape.ndims == 1:
clip_window = tf.tile(tf.expand_dims(clip_window, 0), [batch_size, 1])
if additional_fields is None:
additional_fields = {}
def _single_image_nms_fn(args):
"""Runs NMS on a single image and returns padded output.
Args:
args: A list of tensors consisting of the following:
per_image_boxes - A [num_anchors, q, 4] float32 tensor containing
detections. If `q` is 1 then same boxes are used for all classes
otherwise, if `q` is equal to number of classes, class-specific
boxes are used.
per_image_scores - A [num_anchors, num_classes] float32 tensor
containing the scores for each of the `num_anchors` detections.
per_image_masks - A [num_anchors, q, mask_height, mask_width] float32
tensor containing box masks. `q` can be either number of classes
or 1 depending on whether a separate mask is predicted per class.
per_image_clip_window - A 1D float32 tensor of the form
[ymin, xmin, ymax, xmax] representing the window to clip the boxes
to.
per_image_additional_fields - (optional) A variable number of float32
tensors each with size [num_anchors, ...].
per_image_num_valid_boxes - A tensor of type `int32`. A 1-D tensor of
shape [batch_size] representing the number of valid boxes to be
considered for each image in the batch. This parameter allows for
ignoring zero paddings.
Returns:
'nmsed_boxes': A [max_detections, 4] float32 tensor containing the
non-max suppressed boxes.
'nmsed_scores': A [max_detections] float32 tensor containing the scores
for the boxes.
'nmsed_classes': A [max_detections] float32 tensor containing the class
for boxes.
'nmsed_masks': (optional) a [max_detections, mask_height, mask_width]
float32 tensor containing masks for each selected box. This is set to
None if input `masks` is None.
'nmsed_additional_fields': (optional) A variable number of float32
tensors each with size [max_detections, ...] corresponding to the
input `per_image_additional_fields`.
'num_detections': A [batch_size] int32 tensor indicating the number of
valid detections per batch item. Only the top num_detections[i]
entries in nms_boxes[i], nms_scores[i] and nms_class[i] are valid. The
rest of the entries are zero paddings.
"""
per_image_boxes = args[0]
per_image_scores = args[1]
per_image_masks = args[2]
per_image_clip_window = args[3]
per_image_additional_fields = {
key: value
for key, value in zip(additional_fields, args[4:-1])
}
per_image_num_valid_boxes = args[-1]
if use_static_shapes:
total_proposals = tf.shape(per_image_scores)
per_image_scores = tf.where(
tf.less(tf.range(total_proposals[0]), per_image_num_valid_boxes),
per_image_scores,
tf.fill(total_proposals, np.finfo('float32').min))
else:
per_image_boxes = tf.reshape(
tf.slice(per_image_boxes, 3 * [0],
tf.stack([per_image_num_valid_boxes, -1, -1])), [-1, q, 4])
per_image_scores = tf.reshape(
tf.slice(per_image_scores, [0, 0],
tf.stack([per_image_num_valid_boxes, -1])),
[-1, num_classes])
per_image_masks = tf.reshape(
tf.slice(per_image_masks, 4 * [0],
tf.stack([per_image_num_valid_boxes, -1, -1, -1])),
[-1, q, per_image_masks.shape[2].value,
per_image_masks.shape[3].value])
if per_image_additional_fields is not None:
for key, tensor in per_image_additional_fields.items():
additional_field_shape = tensor.get_shape()
additional_field_dim = len(additional_field_shape)
per_image_additional_fields[key] = tf.reshape(
tf.slice(per_image_additional_fields[key],
additional_field_dim * [0],
tf.stack([per_image_num_valid_boxes] +
(additional_field_dim - 1) * [-1])),
[-1] + [dim.value for dim in additional_field_shape[1:]])
nmsed_boxlist, num_valid_nms_boxes = multiclass_non_max_suppression(
per_image_boxes,
per_image_scores,
score_thresh,
iou_thresh,
max_size_per_class,
max_total_size,
clip_window=per_image_clip_window,
change_coordinate_frame=change_coordinate_frame,
masks=per_image_masks,
pad_to_max_output_size=use_static_shapes,
additional_fields=per_image_additional_fields)
if not use_static_shapes:
nmsed_boxlist = box_list_ops.pad_or_clip_box_list(
nmsed_boxlist, max_total_size)
num_detections = num_valid_nms_boxes
nmsed_boxes = nmsed_boxlist.get()
nmsed_scores = nmsed_boxlist.get_field(fields.BoxListFields.scores)
nmsed_classes = nmsed_boxlist.get_field(fields.BoxListFields.classes)
nmsed_masks = nmsed_boxlist.get_field(fields.BoxListFields.masks)
nmsed_additional_fields = [
nmsed_boxlist.get_field(key) for key in per_image_additional_fields
]
return ([nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks] +
nmsed_additional_fields + [num_detections])
num_additional_fields = 0
if additional_fields is not None:
num_additional_fields = len(additional_fields)
num_nmsed_outputs = 4 + num_additional_fields
batch_outputs = shape_utils.static_or_dynamic_map_fn(
_single_image_nms_fn,
elems=([boxes, scores, masks, clip_window] +
list(additional_fields.values()) + [num_valid_boxes]),
dtype=(num_nmsed_outputs * [tf.float32] + [tf.int32]),
parallel_iterations=parallel_iterations)
batch_nmsed_boxes = batch_outputs[0]
batch_nmsed_scores = batch_outputs[1]
batch_nmsed_classes = batch_outputs[2]
batch_nmsed_masks = batch_outputs[3]
batch_nmsed_additional_fields = {
key: value
for key, value in zip(additional_fields, batch_outputs[4:-1])
}
batch_num_detections = batch_outputs[-1]
if original_masks is None:
batch_nmsed_masks = None
if original_additional_fields is None:
batch_nmsed_additional_fields = None
return (batch_nmsed_boxes, batch_nmsed_scores, batch_nmsed_classes,
batch_nmsed_masks, batch_nmsed_additional_fields,
batch_num_detections)
|
TensorFlow/Detection/SSD/models/research/object_detection/g3doc | g3doc | faq | # Frequently Asked Questions
## Q: How can I ensure that all the groundtruth boxes are used during train and eval?
A: For the object detection framework to be TPU-compliant, we must pad our input
tensors to static shapes. This means that we must pad to a fixed number of
bounding boxes, configured by `InputReader.max_number_of_boxes`. It is
important to set this value to a number larger than the maximum number of
groundtruth boxes in the dataset. If an image is encountered with more
bounding boxes, the excess boxes will be clipped.
## Q: AttributeError: 'module' object has no attribute 'BackupHandler'
A: This BackupHandler (tf.contrib.slim.tfexample_decoder.BackupHandler) was
introduced in TensorFlow 1.5.0, so running with earlier versions may cause this
issue. It now has been replaced by
object_detection.data_decoders.tf_example_decoder.BackupHandler. If you see
this issue, syncing your fork to HEAD should resolve it.
Same for LookupTensor.
## Q: AttributeError: 'module' object has no attribute 'LookupTensor'
A: Similar to BackupHandler, syncing your fork to HEAD should make it work.
## Q: Why can't I get the inference time as reported in model zoo?
A: The inference time reported in the model zoo is the mean time of testing hundreds of
images on an internal machine. As mentioned in
[Tensorflow detection model zoo](detection_model_zoo.md), this speed depends
highly on one's specific hardware configuration and should be treated more as
relative timing.
|
Tools/PyTorch/TimeSeriesPredictionPlatform/conf/deployment | deployment | convert | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_target_: inference.converter.run_converter
defaults:
- export: ts-trace
- convert: torchscript
config:
checkpoint: ???
batch_size: 64
precision: fp32
optimize: False
accelerator: none
gpu: 0 |
CUDA-Optimized/FastSpeech/fastspeech/trainer | trainer | __init__ | # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
PyTorch/Classification/GPUNet | GPUNet | README | # GPUNet for Pytorch
GPUNet is a new family of Convolutional Neural Networks designed to max out the performance of NVIDIA GPU and TensorRT. Crafted by AI, GPUNet demonstrates state-of-the-art inference performance up to 2x faster than EfficientNet-X and FBNet-V3. This repo holds the original GPUNet implementation in our CVPR-2022 [paper](https://arxiv.org/pdf/2205.00841.pdf), allowing a user to quickly reproduce the inference latency and accuracy, re-train or customize the models.
- [Model overview](#model-overview)
* [Model architecture](#model-architecture)
* [Default configuration](#default-configuration)
* [Feature support matrix](#feature-support-matrix)
* [Features](#features)
- [Setup](#setup)
* [Requirements](#requirements)
- [Quick Start Guide](#quick-start-guide)
* [Prepare the dataset](#prepare-the-dataset)
* [Training](#training)
* [Inference](#inference)
- [Advanced Usage](#advanced-usage)
* [Scripts and sample code](#scripts-and-sample-code)
* [Model customization](#model-customization)
* [Command-line options](#command-line-options)
* [Train on your data](#train-on-your-data)
* [Training process](#training-process)
* [Inference process](#inference-process)
* [Benchmark the GPUNet latency](#benchmark-the-gpunet-latency)
- [Performance](#performance)
* [Training accuracy results](#training-accuracy-results)
* [Training performance results](#training-performance-results)
* [Inference results](#inference-results)
- [Release notes](#release-notes)
* [Changelog](#changelog)
* [Known issues](#known-issues)
## Model overview
Developed by NVIDIA, GPUNet differs from the current ConvNets in three aspects:
- **Designed by AI**: we built an AI agent to establish the SOTA GPUNet out of our years of research in Neural Architecture Search. Powered by the [Selene](https://blogs.nvidia.com/blog/2020/12/18/nvidia-selene-busy/) supercomputer, our AI agent can automatically orchestrate hundreds of GPUs to meticulously trade off sophisticated design decisions w.r.t. multiple design goals without intervention from domain experts.
- **Co-designed with NVIDIA TensorRT and GPU**: GPUNet only considers the most relevant factors to the model accuracy and the TensorRT inference latency, promoting GPU friendly operators (for example, larger filters) over memory-bound operators (for example, fancy activations), therefore delivering the SOTA GPU latency and the accuracy on ImageNet.
- **TensorRT deployment-ready**: All the GPUNet reported latencies are after the optimization from TensorRT, including kernel fusion, quantization, etc., so GPUNet is directly deployable to users.
<p align="center">
<img src="./figs/gpunet_performance.png" />
</p>
Because of better design trade-offs and hardware-software co-design, GPUNet has established a new SOTA latency and accuracy Pareto frontier on ImageNet. Specifically, GPUNet is up to 2x faster than EfficientNet, EfficientNet-X, and FBNetV3. Our CVPR-2022 [paper](https://arxiv.org/pdf/2205.00841.pdf) provides extensive evaluation results against other networks.
### Model architecture
<p align="center">
<img src="./figs/search_space.png" />
</p>
The above table describes the general structure of GPUNet, which consists of 8 stages, and we search for the configurations of each stage. The layers within a stage share the same configurations. The first two stages search for the head configurations using convolutions. Inspired by [EfficientNet-V2](https://arxiv.org/abs/2104.00298), stages 2 and 3 use Fused Inverted Residual Blocks (Fused-IRB); however, we observed increasing latency when replacing the remaining IRBs with Fused-IRBs. Therefore, from stages 4 to 7, we use the IRB as the primary layer. The column #Layers shows the range of #Layers in the stage; for example, [3, 10] at stage 4 means that the stage can have three to 10 IRBs, and the column Filters shows the range of filters for the layers in the stage. We also tuned the expansion ratio, activation types, kernel sizes, and the Squeeze-Excitation (SE) layer inside the IRB/Fused-IRB. Finally, the dimensions of the input image increase from 224 to 512 in steps of 32.
GPUNet has provided seven specific model architectures at different latencies. You can easily query the architecture details from the JSON formatted model (for example, those in eval.py). The following figure describes GPUNet-0, GPUNet-1, and GPUNet-2 in the paper. Note that only the first IRB's stride is two and the stride of the rest IRBs is 1 in stages 2, 3, 4, and 6.
<p align="center">
<img src="./figs/gpunet_archs.png" />
</p>
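For readers unfamiliar with these block types, below is a minimal PyTorch sketch of an IRB with an optional SE layer. It is purely illustrative: it is not the repository's implementation, and the activation choice, SE reduction factor, and layer ordering details are assumptions.
```
import torch.nn as nn
class SqueezeExcite(nn.Module):
    """Tiny squeeze-and-excitation layer: channel-wise reweighting (illustrative)."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(self.pool(x))
class InvertedResidualBlock(nn.Module):
    """Minimal IRB sketch; not the repository's implementation."""
    def __init__(self, in_ch, out_ch, stride, expansion, kernel_size, use_se=False):
        super().__init__()
        mid_ch = in_ch * expansion
        self.use_residual = stride == 1 and in_ch == out_ch
        layers = [
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),                 # 1x1 expansion
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size, stride,
                      padding=kernel_size // 2, groups=mid_ch, bias=False),  # depthwise KxK
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True)]
        if use_se:
            layers.append(SqueezeExcite(mid_ch))
        layers += [nn.Conv2d(mid_ch, out_ch, 1, bias=False),         # 1x1 projection
                   nn.BatchNorm2d(out_ch)]
        self.block = nn.Sequential(*layers)
    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out
```
In a Fused-IRB (stages 2 and 3), the 1x1 expansion and the depthwise convolution are replaced by a single regular KxK convolution, as in EfficientNet-V2; the rest of the block stays the same.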
### Default configuration
* Training features:
* Customize the training pipeline in [Timm](https://github.com/rwightman/pytorch-image-models) to support the distillation.
* [Here](./train_params/GPUNet-D1.train.params) provides an example of training hyper-parameters with distillation.
* All the features in [Timm](https://github.com/rwightman/pytorch-image-models), including
* Random data augmentation (RandAugment) with magnitude = 9 and magnitude std = 0.5.
* Exponential Moving Average (EMA).
* Rmsproptf optimizer.
* Multi-GPU training.
* Automatic mixed precision (AMP).
* GPUNet can be further improved with other training hacks such as Mixup or drop path regularization. More hacks are available at [Timm](https://github.com/rwightman/pytorch-image-models).
* The exact training hyper-parameters to reproduce the GPUNet results can be found [here](./train_params).
* Inference features:
* Test the accuracy of pre-trained GPUNet.
* Save GPUNet to the ONNX files for the latency benchmarking.
### Feature support matrix
This model supports the following features:
| Feature | GPUNet
|-----------------------|--------------------------
|Multi-GPU training | ✓
|Automatic mixed precision (AMP) | ✓
|Distillation | ✓
#### Features
**Multi-GPU training**: we re-use the same training pipeline from [Timm](https://github.com/rwightman/pytorch-image-models) to train GPUNet. Timm has adopted NCCL to optimize the multi-GPU training efficiency.
**Automatic Mixed Precision (AMP)**: mixed precision is the combined use of different numerical precisions in a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740) training offers significant computational speed-up by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network.
[Timm](https://github.com/rwightman/pytorch-image-models) supports AMP by default and only requires the '--amp' flag to enable the AMP training.
**Distillation**: originally introduced in [Hinton's seminal paper](https://arxiv.org/pdf/1503.02531.pdf), knowledge distillation uses a larger and better teacher network to supervise the training of a student network in addition to the ground truth. Generally the final accuracy of a student network is better than the training without a teacher; for example, ~+2% on ImageNet.
We customized [Timm](https://github.com/rwightman/pytorch-image-models) to support the distillation. The teacher model can be any model supported by [Timm](https://github.com/rwightman/pytorch-image-models). We demonstrate the usage of distillation in [Training with distillation](#training-with-distillation).
## Setup
The following section lists the requirements you need to meet to start training the GPUNet.
### Requirements
This repository contains a Dockerfile that extends the PyTorch NGC container and encapsulates all dependencies. You also need the following components to get started:
- [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
- [ImageNet 1K dataset](https://image-net.org/download.php)
- Supported GPUs:
* [NVIDIA Volta architecture](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/)
* [NVIDIA Turing architecture](https://www.nvidia.com/en-us/design-visualization/technologies/turing-architecture/)
* [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
For more information about how to get started with NGC containers, refer to the following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning
DGX Documentation:
* [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html)
* [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/dgx/user-guide/index.html#accessing_registry)
* [Running PyTorch](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/running.html#running)
## Quick Start Guide
This repo allows a user to easily train GPUNet, reproduce our results, test the accuracy of pre-trained checkpoints, and benchmark GPUNet latency. For customizing GPUNet, refer to [Model customization](#model-customization).
1. To get started, clone the repo:
```
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/Classification/GPUNet
```
2. Download ImageNet from the [official website](https://image-net.org/download-images). Recursively unzip the dataset, and locate the train and val folders. Refer to [Prepare the dataset](#prepare-the-dataset) for more details.
3. Build and run the GPUNet PyTorch container, assuming you have installed the docker.
```
docker build -t gpunet .
docker run --gpus all -it --rm --network=host --shm-size 600G --ipc=host -v /path/to/imagenet:/root/data/imagenet/ gpunet
```
### Prepare the Dataset
1. [Download the ImageNet](http://image-net.org/download-images).
2. Extract the training data:
```bash
mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
tar -xvf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
cd ..
```
3. Extract the validation data and move the images to subfolders:
```bash
mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash
```
The directory where the `train/` and `val/` directories are placed is referred to as `/path/to/imagenet/` in this document.
### Training
We have provided the [training launch scripts](./train_params) for you to reproduce the GPUNet accuracy by training from scratch. For example, a user can copy the launch script in GPUNet-0.train.params or the training hyper-parameters below to reproduce the accuracy.
GPUNet training hyperparameters:
1. GPUNet-0
```
./train.sh 8 /root/data/imagenet/ --model gpunet_0 --sched step --decay-epochs 2.4 --decay-rate .97 --opt rmsproptf -b 192 --epochs 450 --opt-eps .001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --drop 0.3 --drop-connect 0.2 --model-ema --model-ema-decay 0.9999 --aa rand-m9-mstd0.5 --remode pixel --reprob 0.2 --lr .06 --num-classes 1000 --enable-distill False --crop-pct 1.0 --img-size 320 --amp
```
2. GPUNet-1
```
./train.sh 8 /root/data/imagenet/ --model gpunet_1 --sched step --decay-epochs 2.4 --decay-rate .97 --opt rmsproptf -b 192 --epochs 450 --opt-eps .001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --drop 0.3 --drop-connect 0.2 --model-ema --model-ema-decay 0.9999 --aa rand-m9-mstd0.5 --remode pixel --reprob 0.2 --lr .06 --num-classes 1000 --enable-distill False --crop-pct 1.0 --img-size 288 --amp
```
3. GPUNet-2
```
./train.sh 8 /root/data/imagenet/ --model gpunet_2 --sched step --decay-epochs 2.4 --decay-rate .97 --opt rmsproptf -b 192 --epochs 450 --opt-eps .001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --drop 0.3 --drop-connect 0.2 --model-ema --model-ema-decay 0.9999 --aa rand-m9-mstd0.5 --remode pixel --reprob 0.2 --lr .06 --num-classes 1000 --enable-distill False --crop-pct 1.0 --img-size 384 --amp
```
4. GPUNet-D1 with distillation
```
./train.sh 8 /root/data/imagenet/ --model gpunet_d1 --sched step --decay-epochs 2.4 --decay-rate .97 --opt rmsproptf -b 192 --epochs 450 --opt-eps .001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --drop 0.3 --drop-connect 0.2 --model-ema --model-ema-decay 0.9999 --aa rand-m9-mstd0.5 --remode pixel --reprob 0.2 --lr .06 --num-classes 1000 --enable-distill True --crop-pct 1.0 --img-size 456 --amp --test-teacher False --teacher tf_efficientnet_b5_ns --teacher-img-size 456
```
5. GPUNet-D2 with distillation
```
./train.sh 8 /root/data/imagenet/ --model gpunet_d2 --sched step --decay-epochs 2.4 --decay-rate .97 --opt rmsproptf -b 128 --epochs 450 --opt-eps .001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --drop 0.3 --drop-connect 0.2 --model-ema --model-ema-decay 0.9999 --aa rand-m9-mstd0.5 --remode pixel --reprob 0.2 --lr .06 --num-classes 1000 --enable-distill True --crop-pct 1.0 --img-size 528 --amp --test-teacher False --teacher tf_efficientnet_b6_ns --teacher-img-size 528
```
6. GPUNet-P0 with distillation
```
./train.sh 8 /root/data/imagenet/ --model gpunet_p0 --sched step --decay-epochs 2.4 --decay-rate 0.97 --opt rmsproptf -b 256 --epochs 450 --opt-eps 0.001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --drop 0.3 --drop-connect 0.2 --model-ema --model-ema-decay 0.9999 --aa rand-m9-mstd0.5 --remode pixel --reprob 0.2 --lr 0.08 --num-classes 1000 --enable-distill True --crop-pct 0.875 --img-size 224 --amp --test-teacher False --teacher tf_efficientnet_b2 --teacher-img-size 260
```
7. GPUNet-P1 with distillation
```
./train.sh 8 /root/data/imagenet/ --model gpunet_p1 --sched step --decay-epochs 2.4 --decay-rate 0.97 --opt rmsproptf -b 256 --epochs 450 --opt-eps 0.001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --drop 0.3 --drop-connect 0.2 --model-ema --model-ema-decay 0.9999 --aa rand-m9-mstd0.5 --remode pixel --reprob 0.2 --lr 0.08 --num-classes 1000 --enable-distill True --crop-pct 0.875 --img-size 224 --amp --test-teacher False --teacher tf_efficientnet_b2 --teacher-img-size 260
```
You need to call train.sh to start the training, and here is an example of arguments to train.sh.
```
./train.sh 8 >>launch with 8 GPUs.
/root/data/imagenet/ >>path to the imagenet.
--model gpunet_d1 >>name of GPUNet.
--sched step >>stepwise learning rate scheduler.
--decay-epochs 2.4 >>epoch interval to decay LR.
--decay-rate .97 >>LR decay rate (default: 0.1).
--opt rmsproptf >>optimizer.
-b 192 >>batch size.
--epochs 450 >>total training epochs.
--opt-eps .001 >>optimizer epsilon.
-j 8 >>the number of threads for data loader.
--lr .06 >>learning rate.
--warmup-lr 1e-6 >>warmup learning rate.
--weight-decay 1e-5 >>weight-decay rate.
--drop 0.3 >>dropout rate.
--drop-connect 0.2 >>drop connect rate.
--model-ema >>enable tracking moving average of model weights.
--model-ema-decay 0.9999 >>decay factor for model weights moving average (default: 0.9998).
--aa rand-m9-mstd0.5 >>using the random augmentation.
--remode pixel >>random erase mode.
--reprob 0.2 >>random erase prob.
--num-classes 1000 >>the number of output classes.
--amp >>enable the amp training.
--crop-pct 1.0 >>input image center crop percent.
--output ./output/ >>path to output folder.
--img-size 456 >>image size for the student model, i.e., gpunet_d1.
--enable-distill True >>to turn on/off the distillation.
--test-teacher False >>to test the accuracy of teacher model
--teacher tf_efficientnet_b5 >>the name of teacher model
--teacher-img-size 456 >>the image size to the teacher model. Note the student and teacher may have different image resolutions.
```
#### Training with distillation
We recommend running the distillation on a GPU with large memory, for example, an 80GB A100, since it also needs to fit the teacher network.
- The following describes the usage of distillation.
* --enable-distill [Boolean]
* Enable or disable distillation.
* --teacher [String]
* Specify the name of the teacher model from Timm. You can find the full list of teacher models [here](https://github.com/rwightman/pytorch-image-models/blob/master/results/results-imagenet.csv), and you can pick any models from the model column. We expect the teacher model to be larger and better than the selected GPUNet.
* --teacher-img-size [Int]
* Specify the image resolution for the teacher model. The teacher model may use a larger resolution; you can find it in the img_size column for the selected teacher. Internally, we use one data loader for both the teacher and student models and downsample each image batch from the teacher's resolution to the student's resolution using [Interpolation](https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html), as sketched in the example after this list.
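The following is a minimal sketch of that downsampling step, not the exact training-pipeline code; the resolutions are taken from the GPUNet-P0 example above (teacher at 260, student at 224), while the batch size and the `bilinear` mode are assumptions.
```
import torch
import torch.nn.functional as F
teacher_res, student_res = 260, 224   # e.g. tf_efficientnet_b2 teacher, GPUNet-P0 student
# One batch from the shared data loader, produced at the teacher resolution.
images = torch.randn(8, 3, teacher_res, teacher_res)
# Downsample the same batch to the student resolution.
student_images = F.interpolate(
    images, size=(student_res, student_res), mode="bilinear", align_corners=False)
# teacher_logits = teacher(images)          # teacher forward pass at 260x260
# student_logits = student(student_images)  # student forward pass at 224x224
```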
### Inference
We also allow a user to evaluate the accuracy of pre-trained GPUNet checkpoints and benchmark the model's TensorRT latency. For evaluating GPUNet on a custom dataset, refer to [Train on your data](#train-on-your-data).
#### Evaluate the pre-trained GPUNet checkpoints
In `eval.py`, we list seven configurations for the released GPUNet models, shown in the table below.
| batch | Distillation | GPU | Latency |
|-----------|---------------|---------------------|----------|
| 1 | No | GV100 | 0.65ms |
| 1 | No | GV100 | 0.85ms |
| 1 | No | GV100 | 1.75ms |
| 1 | Yes | GV100 | 0.5ms-D |
| 1 | Yes | GV100 | 0.8ms-D |
| 1 | Yes | GV100 | 1.25ms-D |
| 1 | Yes | GV100 | 2.25ms-D |
A user can easily evaluate the accuracy of a pre-trained checkpoint using the following code:
```
from configs.model_hub import get_configs, get_model_list
from models.gpunet_builder import GPUNet_Builder
modelJSON, cpkPath = get_configs(batch=1, latency="0.65ms", gpuType="GV100") >>Get the model configurations and checkpoints.
builder = GPUNet_Builder() >>Build an instance of GPUNet constructor.
model = builder.get_model(modelJSON) >>Build the GPUNet based on the model json.
builder.export_onnx(model) >>Export Pytorch model to ONNX for benchmarking the latency.
builder.test_model( >>Test the checkpoint accuracy.
model,
testBatch=200,
checkpoint=cpkPath,
imgRes=(3, model.imgRes, model.imgRes),
dtype="fp16",
crop_pct=1,
val_path="/root/data/imagenet/val",
)
```
#### Benchmark the GPUNet latency
We will need the ONNX file of the GPUNet model to reproduce the latency. `builder.export_onnx(model)` will export an ONNX file named `gpunet.onnx`. You can get the FP16 latency with the following command:
```
trtexec --onnx=gpunet.onnx --fp16 --workspace=10240
```
Here `gpunet.onnx` is configured to benchmark the latency at the batch = 1 to be consistent with the GPUNet [paper](https://arxiv.org/pdf/2205.00841.pdf). You can also look at the [torch.onnx](https://pytorch.org/docs/stable/onnx.html) API to benchmark different settings, such as batch sizes. Finally, we report the median GPUNet compute time; here is an example output of a network with batch=1, latency=0.65ms, gpuType=GV100.
```
[04/07/2022-19:40:17] [I] GPU Compute Time: min = 0.554077 ms, max = 0.572388 ms, mean = 0.564606 ms, median = 0.564209 ms, percentile(99%) = 0.570312 ms
```
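As a sketch of exporting at a different batch size with the torch.onnx API mentioned above, the snippet below exports a static batch of 8; the batch size, output file name, and opset are arbitrary choices, and `model` is assumed to have been built as in the evaluation snippet.
```
import torch
batch_size = 8
dummy_input = torch.randn(batch_size, 3, model.imgRes, model.imgRes)
torch.onnx.export(
    model.eval(),
    dummy_input,
    "gpunet_bs8.onnx",        # arbitrary output file name; batch size is fixed to 8
    input_names=["input"],
    output_names=["logits"],
    opset_version=13)
```
The exported file can then be benchmarked with `trtexec --onnx=gpunet_bs8.onnx --fp16 --workspace=10240` exactly as above.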
## Advanced usage
The following sections provide greater details of the dataset, running training and inference, and the training results.
### Scripts and sample code
- Training
* All the training launch scripts are available [here](./train_params).
* See section [Training](#training) for the explanations of different control flags.
- Inference
* We also provide `validate.py` to evaluate a customized model.
```
python validate.py /path/to/imagenet/val
--model gpunet_0 >>Model name.
-b 200 >>Batch size.
-j 8
--img-size 320 >>Test image resolution.
--num-classes 1000 >>1000 classes for ImageNet 1K.
--checkpoint ./configs/batch1/GV100/0.65ms.pth.tar >>Checkpoint location.
```
### Model customization
Customizing GPUNet is as simple as tweaking a few hyper-parameters in a JSON file, and [this folder](./configs/batch1/GV100) provides all the JSON-formatted GPUNet models. Let's take GPUNet-0 (0.65ms.json) as an example.
```
[
{
"layer_type": "data",
"img_resolution": 320, >>the image resolution to the network
"distill": false
},
...
# 1 convolution layer
{
"layer_type": "conv",
"num_in_channels": 32, >> input filters to this convolution layer
"num_out_channels": 32, >> output filters
"stride": 1,
"kernel_size": 3,
"act": "relu",
"stage": 1
},
# 1 Fused Inverted Residual Block (IRB), all the hyper-parameters are tunable.
{
"layer_type": "fused_irb",
"num_in_channels": 32,
"num_out_channels": 32,
"stride": 2,
"expansion": 5,
"kernel_size": 3,
"act": "relu",
"use_se": false,
"stage": 2
},
...
```
The entire GPUNet is customizable in the above JSON. Feel free to add or trim layers, change the filters, kernels, activation, or layer types.
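For example, after editing a copy of one of these JSON files, rebuilding the model might look like the sketch below; the file name is hypothetical, and it assumes `get_model` accepts the parsed JSON in the same form returned by `get_configs` in the evaluation snippet.
```
import json
from models.gpunet_builder import GPUNet_Builder
# Hypothetical customized copy of 0.65ms.json
with open("configs/batch1/GV100/my_gpunet.json") as f:
    custom_config = json.load(f)
builder = GPUNet_Builder()
model = builder.get_model(custom_config)   # assumes get_model takes the parsed JSON
builder.export_onnx(model)                 # export to ONNX for latency benchmarking
```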
### Command-line options
`validate.py` and `train.py` enable users to test and train GPUNet. To see the complete list of available options and their descriptions, use the `-h` or `--help` command-line option, for example:
```
python train.py -h
python validate.py -h
```
### Train on your data
To use your own dataset, divide it into directories. For example:
- Training images - `train/<class id>/<image>`
- Validation images - `val/<class id>/<image>`
If your dataset has a number of classes different than 1000, you need to pass the `--num-classes N` flag to the training script.
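The layout above matches the standard ImageFolder convention, so a quick sanity check of the class count (the value to pass to `--num-classes`) could look like the sketch below; it assumes torchvision is available in the container, and the paths are placeholders.
```
from torchvision.datasets import ImageFolder
train_ds = ImageFolder("/path/to/your_dataset/train")
val_ds = ImageFolder("/path/to/your_dataset/val")
print(len(train_ds.classes))           # pass this value via --num-classes
assert train_ds.classes == val_ds.classes
```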
### Training process
All the results of the training will be stored in the directory specified with `--output` argument.
The script will store:
- the most recent checkpoint - `last.pth.tar`.
- the checkpoint with the best validation accuracy - `model_best.pth.tar`.
- the log - in the file of `summary.csv`.
Metrics gathered through training:
- `Loss` - training loss (average train loss).
- `Time` - iteration time, images/second (average iteration time, and average images/second).
- `LR` - the current learning rate.
- `Data` - data loading time.
To restart training from the checkpoint use the `--resume path/to/latest_checkpoint.pth` option.
### Inference process
Validation is done every epoch and can also be run separately on a checkpointed model.
```
python validate.py </path/to/val>
--model <model name> -b <batch size>
-j <data loader thread, default 8> --img-size <image resolution>
--num-classes <prediction classes, 1000 for imagenet 1k>
--checkpoint <checkpoint path>
```
Metrics gathered through validation:
- `Time` - iteration time (average iteration time, and average images/second).
- `Loss` - inference loss (average inference loss).
- `Acc@1` - current top1 accuracy (average top1 accuracy).
- `Acc@5` - current top5 accuracy (average top5 accuracy).
## Performance
This section demonstrates the GPUNet training and inference results independently benchmarked by a third party. You can also easily replicate the same results following [Quick Start Guide](#quick-start-guide).
### Results
#### Training Accuracy Results
We benchmark the training results following the steps in [Training](#training). This section lists the training results on NVIDIA DGX V100.
##### NVIDIA DGX V100 (8x V100 32GB)
| **Model**|**Batch**| **Epochs** | **GPUs** | **FP32 Top1** | **AMP Top1** | **FP32 (hours)<br />Train Time** | **AMP (hours)<br />Train Time** | **Training speedup<br />(FP32 / AMP)** |
|:--------:|:------:|:----------:|:--------:|:--------------:|:--------------:|:-------------------:|:-----------------------:|:--------------------------------:|
| GPUNet-0 |192 | 450 | 8 | 78.90<sub>+/-0.03</sub> | 78.96<sub>+/-0.05</sub> |71.63|46.56| 1.54 x |
| GPUNet-1 |192 | 450 | 8 | 80.4<sub>+/-0.03</sub> | 80.5<sub>+/-0.03</sub> |67.5 |43.5 | 1.55 x |
| GPUNet-2 |192 | 450 | 8 | 82.1<sub>+/-0.04</sub> | 82.2<sub>+/-0.04</sub> |171 |84.25| 2.03 x |
#### Training performance results
Please also follow the steps in [Training](#training) to reproduce the performance results below.
##### NVIDIA DGX V100 (8x V100 32GB)
| **Model** | **GPUs** | **Batch** | FP32<br />imgs/second | **AMP<br />imgs/second** | Speedup<br />(FP32 to AMP) |
|:---------:|:--------:|:---------:|:-----------:|:--------------------------------:|:------------------------------------------------:|
| GPUNet-0 | 8 | 192 | 2289 img/s | 3518 img/s | 1.53 x |
| GPUNet-1 | 8 | 192 | 2415 img/s | 3774 img/s | 1.56 x |
| GPUNet-2 | 8 | 192 | 948 img/s | 1957 img/s | 2.03 x |
##### NVIDIA DGX A100 (8x A100 80GB)
| **Model** | **GPUs** | **Batch** | TF32<br />imgs/second | **AMP<br />imgs/second** | Speedup<br />(TF32 to AMP) |
|:---------:|:--------:|:---------:|:-----------:|:--------------------------------:|:------------------------------------------------:|
| GPUNet-2 | 8 | 192 | 2002 img/s | 2690 img/s | 1.34 x |
| GPUNet-D1 | 8 | 128 | 755 img/s | 844 img/s | 1.11 x |
#### Inference results
We benchmark the inference results following the steps in [Benchmark the GPUNet latency](#benchmark-the-gpunet-latency). This section lists the inference results on NVIDIA 32GB V100 and 80GB A100.
##### NVIDIA V100 (32GB)
| **GPUNet** | **Batch** | **GPU** | **TensorRT8 FP16 Latency** | FP16 Latency | Perf Details | ImageNet Top1 |
|:-----------------:|:--------------:|:--------:|:--------------------------:|:--------------------------:|:------------------------:|:------------:|
| GPUNet-0 | 1 | V100 | 0.63 ms | 1.82 ms | [here](./triton/065ms) | 78.9 |
| GPUNet-1 | 1 | V100 | 0.82 ms | 2.75 ms | [here](./triton/085ms) | 80.5 |
| GPUNet-2 | 1 | V100 | 1.68 ms | 5.50 ms | [here](./triton/175ms) | 82.2 |
| GPUNet-P0 | 1 | V100 | 0.63 ms | 2.11 ms | [here](./triton/05ms-D) | 80.3 |
| GPUNet-P1 | 1 | V100 | 0.96 ms | 2.47 ms | [here](./triton/08ms-D) | 81.1 |
| GPUNet-D1 | 1 | V100 | 1.24 ms | 2.88 ms | [here](./triton/125ms-D) | 82.5 |
| GPUNet-D2 | 1 | V100 | 2.17 ms | 4.22 ms | [here](./triton/225ms-D) | 83.6 |
##### NVIDIA A100 (80GB)
| **GPUNet** | **Batch** | **GPU** | **TensorRT8 FP16 Latency** | FP16 Latency | Perf Details | ImageNet Top1 |
|:-----------------:|:--------------:|:--------:|:--------------------------:|:--------------------------:|:------------------------:|:------------:|
| GPUNet-0 | 1 | A100 | 0.46 ms | 1.46 ms | [here](./triton/065ms) | 78.9 |
| GPUNet-1 | 1 | A100 | 0.59 ms | 1.81 ms | [here](./triton/085ms) | 80.5 |
| GPUNet-2 | 1 | A100 | 1.25 ms | 4.03 ms | [here](./triton/175ms) | 82.2 |
| GPUNet-P0 | 1 | A100 | 0.45 ms | 1.31 ms | [here](./triton/05ms-D) | 80.3 |
| GPUNet-P1 | 1 | A100 | 0.61 ms | 1.64 ms | [here](./triton/08ms-D) | 81.1 |
| GPUNet-D1 | 1 | A100 | 0.94 ms | 2.44 ms | [here](./triton/125ms-D) | 82.5 |
| GPUNet-D2 | 1 | A100 | 1.40 ms | 3.06 ms | [here](./triton/225ms-D) | 83.6 |
## Release notes
The performance measurements in this document were conducted at the time of publication and may not reflect the performance achieved from NVIDIA’s latest software release. For the most up-to-date performance measurements, go to https://developer.nvidia.com/deep-learning-performance-training-inference.
### Changelog
May 2022
- Initial release
### Known issues
There are no known issues with this model.
|
TensorFlow2/Recommendation/DLRM_and_DCNv2/deployment/hps | hps | deploy_dense | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# author: Tomasz Grel (tgrel@nvidia.com)
import logging
import os
import pathlib
import shutil
import subprocess
import tempfile
import textwrap
from typing import List
import numpy as np
import tensorflow as tf
from nn.dense_model import DenseModel
from . import constants as c
LOGGER = logging.getLogger(__name__)
_dense_model_config_template = r"""name: "{model_name}"
{backend_type}: "{backend_runtime}"
max_batch_size: 0
input [
{{
name: "{input1}"
data_type: TYPE_FP32
dims: [-1]
}},
{{
name: "{input2}"
data_type: TYPE_FP32
dims: [-1]
}}
]
output [
{{
name: "{output1}"
data_type: TYPE_FP32
dims: [-1,1]
}}
]
version_policy: {{
specific:{{versions: 1}}
}},
instance_group [
{{
count: {engine_count_per_device}
kind : KIND_GPU
gpus: [0]
}}
]
"""
def _execute_cmd(cmd: List, verbose: bool = False):
"""Execute command as subprocess.
Args:
cmd: A command definition
verbose: Stream command output
Raises:
OSError when command execution failed
"""
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, encoding="utf-8"
)
if verbose:
LOGGER.info("Command output:")
stream_output = ""
while True:
output = process.stdout.readline()
if output == "" and process.poll() is not None:
break
if output:
stream_output += output
if verbose:
print(textwrap.indent(output.rstrip(), " ")) # noqa: T201
result = process.poll()
if result != 0:
raise OSError(
f"Processes exited with error code:{result}. Command to reproduce error:\n{' '.join(cmd)}"
)
def _savedmodel2onnx(source_model_path, dst_model_path, opset=11, verbose=False):
convert_cmd = [
"python",
"-m",
"tf2onnx.convert",
"--saved-model",
source_model_path.as_posix(),
"--output",
dst_model_path.as_posix(),
"--opset",
str(opset),
"--verbose",
]
_execute_cmd(convert_cmd, verbose=verbose)
def _onnx2trt(
model,
source_model_path,
dst_model_path,
precision,
optimal_batch_size,
max_batch_size,
verbose=False,
):
min_batch = np.array([model.num_numerical_features, sum(model.embedding_dim)])
optimal_batch = min_batch * optimal_batch_size
max_batch = min_batch * max_batch_size
print(
f"min batch {min_batch}, optimal_batch: {optimal_batch}, max_batch: {max_batch}"
)
convert_cmd = [
"trtexec",
f"--onnx={source_model_path.as_posix()}",
"--buildOnly",
f"--saveEngine={dst_model_path.as_posix()}",
f"--minShapes=args_0:{min_batch[0]},args_1:{min_batch[1]}",
f"--optShapes=args_0:{optimal_batch[0]},args_1:{optimal_batch[1]}",
f"--maxShapes=args_0:{max_batch[0]},args_1:{max_batch[1]}",
]
if precision == "fp16":
convert_cmd += ["--fp16"]
_execute_cmd(convert_cmd, verbose=verbose)
def _convert2onnx(source_model_path, workdir, verbose=False):
model_path = workdir / "model.onnx"
_savedmodel2onnx(
source_model_path=source_model_path,
dst_model_path=model_path,
verbose=verbose,
)
return model_path
def _convert2trt(
model,
source_model_path,
precision,
workdir,
optimal_batch_size,
max_batch_size,
verbose=False,
):
onnx_model_path = _convert2onnx(
source_model_path=source_model_path,
workdir=workdir,
verbose=verbose,
)
trt_model_path = workdir / "model.plan"
_onnx2trt(
model=model,
source_model_path=onnx_model_path,
dst_model_path=trt_model_path,
precision=precision,
verbose=verbose,
optimal_batch_size=optimal_batch_size,
max_batch_size=max_batch_size,
)
return trt_model_path
def _set_tf_memory_growth():
physical_devices = tf.config.list_physical_devices("GPU")
for d in physical_devices:
tf.config.experimental.set_memory_growth(d, True)
def deploy_dense(
src,
dst,
model_name,
model_format,
model_precision,
max_batch_size,
engine_count_per_device,
trt_optimal_batch_size,
version="1",
):
print("deploy dense dst: ", dst)
_set_tf_memory_growth()
os.makedirs(dst, exist_ok=True)
dense_model = DenseModel.from_config(os.path.join(src, "config.json"))
if model_precision == "fp16" and model_format == 'tf-savedmodel':
policy = tf.keras.mixed_precision.Policy("mixed_float16")
tf.keras.mixed_precision.set_global_policy(policy)
# Currently, there's no support for custom kernels deployment.
# Use pure tensorflow implementation instead on the inference side.
if dense_model.interaction == 'dot_custom_cuda':
dense_model.interaction = 'dot_tensorflow'
dense_model._create_interaction_op()
dense_model.load_weights(os.path.join(src, "dense"))
# transpose needed here because HPS expects a table-major format vs TensorFlow uses batch-major
dense_model.transpose = True
dense_model.force_initialization(training=False, flattened_input=True)
with tempfile.TemporaryDirectory() as tempdir:
tempdir = pathlib.Path(tempdir)
model_path = tempdir / "model.savedmodel"
dense_model.save_model(model_path.as_posix(), save_input_signature=False)
model_store = pathlib.Path(dst) / str(version)
model_store.mkdir(parents=True, exist_ok=True)
if model_format == "tf-savedmodel":
backend_type = "platform"
backend_runtime = "tensorflow_savedmodel"
shutil.copytree(model_path, model_store / "model.savedmodel")
elif model_format == "onnx":
backend_type = "backend"
backend_runtime = "onnxruntime"
model_path = _convert2onnx(model_path, workdir=tempdir)
shutil.copy(model_path, model_store / "model.onnx")
elif model_format == "trt":
backend_type = "backend"
backend_runtime = "tensorrt"
model_path = _convert2trt(
dense_model,
model_path,
precision=model_precision,
workdir=tempdir,
optimal_batch_size=trt_optimal_batch_size,
max_batch_size=max_batch_size,
)
shutil.copy(model_path, model_store / "model.plan")
else:
raise ValueError(f"Unsupported format: {model_format}")
with open(os.path.join(dst, "config.pbtxt"), "w") as f:
s = _dense_model_config_template.format(
backend_type=backend_type,
backend_runtime=backend_runtime,
model_name=model_name,
input1=c.dense_input1_name,
input2=c.dense_numerical_features_name,
output1=c.dense_output_name,
max_batch_size=max_batch_size,
engine_count_per_device=engine_count_per_device,
)
f.write(s)
print(f"{model_name} configuration:")
print(s)
|
PyTorch/Classification/GPUNet/triton/deployment_toolkit | deployment_toolkit | dump | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import json
import pickle
import threading
from pathlib import Path
from typing import Dict, Iterator, List, Union
import numpy as np
MB2B = 2 ** 20
B2MB = 1 / MB2B
FLUSH_THRESHOLD_B = 256 * MB2B
def _validate_batch(name: str, value: Union[list, np.ndarray]):
if not isinstance(value, (list, np.ndarray)):
raise ValueError(f"Values shall be lists or np.ndarrays; current type {type(value)}")
def _validate_prefix_data(prefix_data: Dict[str, List[np.ndarray]]):
batch_sizes_per_io_name = {name: [len(batch) for batch in batches] for name, batches in prefix_data.items()}
names = list(batch_sizes_per_io_name)
for io_name in names:
for batch_idx, batch_size in enumerate(batch_sizes_per_io_name[io_name]):
if not all([batch_sizes_per_io_name[other_name][batch_idx] == batch_size for other_name in names]):
non_equal_batch_sizes = {
other_name: batch_sizes_per_io_name[other_name][batch_idx] for other_name in names
}
non_equal_batch_sizes_str = ", ".join(
[f"{name}={batch_size}" for name, batch_size in non_equal_batch_sizes.items()]
)
raise ValueError(
"All inputs/outputs should have same number of batches with equal batch_size. "
f"At batch_idx={batch_idx} there are batch_sizes: {non_equal_batch_sizes_str}"
)
# ensure each io has the same number of batches with equal sizes
def _get_nitems_and_batches(prefix_data: Dict[str, List[np.ndarray]]):
nitems = 0
nbatches = 0
if prefix_data:
nitems_per_io_name = {name: sum(len(batch) for batch in batches) for name, batches in prefix_data.items()}
nbatches_per_io_name = {name: len(batches) for name, batches in prefix_data.items()}
nitems = list(nitems_per_io_name.values())[0]
nbatches = list(nbatches_per_io_name.values())[0]
return nitems, nbatches
class BaseDumpWriter(abc.ABC):
FILE_SUFFIX = ".abstract"
def __init__(self, output_dir: Union[str, Path]):
self._output_dir = Path(output_dir)
# outer dict key is prefix (i.e. input/output/labels/...), inner dict key is input/output name
# list is list of batches
self._items_cache: Dict[str, Dict[str, List[np.ndarray]]] = {}
# key is prefix
self._items_counters: Dict[str, int] = {}
self._cache_lock = threading.RLock()
self._flush_threshold_b = FLUSH_THRESHOLD_B
@property
def cache_size(self):
def _get_bytes_size(name, batch):
_validate_batch(name, batch)
if not isinstance(batch, np.ndarray):
                batch = np.array(batch)
return batch.nbytes
with self._cache_lock:
return {
prefix: sum(_get_bytes_size(name, batch) for name, batches in data.items() for batch in batches)
for prefix, data in self._items_cache.items()
}
def _append_to_cache(self, prefix, prefix_data):
if prefix_data is None:
return
if not isinstance(prefix_data, dict):
raise ValueError(f"{prefix} data to store shall be dict")
with self._cache_lock:
cached_prefix_data = self._items_cache.setdefault(prefix, {})
for name, batch in prefix_data.items():
_validate_batch(name, batch)
if not isinstance(batch, np.ndarray):
batch = np.array(batch)
cached_batches = cached_prefix_data.setdefault(name, [])
cached_batches += [batch]
def write(self, **kwargs):
with self._cache_lock:
for prefix, prefix_data in kwargs.items():
self._append_to_cache(prefix, prefix_data)
biggest_prefix_data_size = max(self.cache_size.values())
if biggest_prefix_data_size > self._flush_threshold_b:
self.flush()
def flush(self):
with self._cache_lock:
for prefix, prefix_data in self._items_cache.items():
_validate_prefix_data(prefix_data)
output_path = self._output_dir / self._get_filename(prefix)
self._dump(prefix_data, output_path)
nitems, nbatches = _get_nitems_and_batches(prefix_data)
self._items_counters[prefix] += nitems
self._items_cache = {}
def _get_filename(self, prefix):
idx = self._items_counters.setdefault(prefix, 0)
return f"{prefix}-{idx:012d}{self.FILE_SUFFIX}"
@abc.abstractmethod
def _dump(self, prefix_data: Dict[str, List[np.ndarray]], output_path: Path):
pass
def __enter__(self):
if self._output_dir.exists() and len(list(self._output_dir.iterdir())):
raise ValueError(f"{self._output_dir.as_posix()} is not empty")
self._output_dir.mkdir(parents=True, exist_ok=True)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.flush()
class PickleDumpWriter(BaseDumpWriter):
FILE_SUFFIX = ".pkl"
def _dump(self, prefix_data: Dict[str, List[np.ndarray]], output_path: Path):
output_path.parent.mkdir(parents=True, exist_ok=True)
with output_path.open("wb") as pickle_file:
pickle.dump(prefix_data, pickle_file)
class JsonDumpWriter(BaseDumpWriter):
FILE_SUFFIX = ".json"
def _dump(self, prefix_data: Dict[str, List[np.ndarray]], output_path: Path):
repacked_prefix_data = self._format_data(prefix_data)
output_path.parent.mkdir(parents=True, exist_ok=True)
with output_path.open("w") as json_file:
json.dump(repacked_prefix_data, json_file)
def _format_data(self, prefix_data: Dict[str, List[np.ndarray]]) -> Dict:
def _format_batch_for_perf_analyzer_json_format(batch: np.ndarray):
return {
"content": batch.flatten().tolist(),
"shape": list(batch.shape),
"dtype": str(batch.dtype),
}
_, nbatches = _get_nitems_and_batches(prefix_data)
batches = [{} for _ in range(nbatches)]
for io_name, batches_per_io in prefix_data.items():
for batch_idx, batch in enumerate(batches_per_io):
batches[batch_idx][io_name] = _format_batch_for_perf_analyzer_json_format(batch)
return {"data": batches}
class BaseDumpReader(abc.ABC):
FILE_SUFFIX = ".abstract"
def __init__(self, dump_dir: Union[Path, str]):
self._dump_dir = Path(dump_dir)
def get(self, prefix: str) -> Iterator[Dict[str, np.ndarray]]:
dump_files_paths = sorted(self._dump_dir.glob(f"{prefix}*{self.FILE_SUFFIX}"))
for dump_file_path in dump_files_paths:
prefix_data = self._load_file(dump_file_path)
nitems, nbatches = _get_nitems_and_batches(prefix_data)
for batch_idx in range(nbatches):
yield {io_name: prefix_data[io_name][batch_idx] for io_name in prefix_data}
@abc.abstractmethod
def _load_file(self, dump_file_path: Path) -> Dict[str, List[np.ndarray]]:
pass
def iterate_over(self, prefix_list: List[str]) -> Iterator:
iterators = [self.get(prefix) for prefix in prefix_list]
empty_iterators = [False] * len(iterators)
while not all(empty_iterators):
values = [None] * len(iterators)
for idx, iterator in enumerate(iterators):
if empty_iterators[idx]:
continue
try:
values[idx] = next(iterator)
except StopIteration:
empty_iterators[idx] = True
if all(empty_iterators):
break
if not all(empty_iterators):
yield values
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
pass
class PickleDumpReader(BaseDumpReader):
FILE_SUFFIX = ".pkl"
def _load_file(self, dump_file_path: Path) -> Dict[str, List[np.ndarray]]:
with dump_file_path.open("rb") as pickle_file:
return pickle.load(pickle_file)
class JsonDumpReader(BaseDumpReader):
FILE_SUFFIX = ".json"
def _load_file(self, dump_file_path: Path) -> Dict[str, List[np.ndarray]]:
with dump_file_path.open("rb") as json_file:
data = json.load(json_file)
return self._repack_data(data)
def _repack_data(self, data: Dict) -> Dict[str, List[np.ndarray]]:
result: Dict[str, List[np.ndarray]] = {}
batches = data["data"]
for batch in batches:
for io_name, batch_as_dict in batch.items():
io_batches = result.setdefault(io_name, [])
flat_array = batch_as_dict["content"]
shape = batch_as_dict["shape"]
dtype = batch_as_dict["dtype"]
batch_as_array = np.array(flat_array).reshape(shape).astype(dtype)
io_batches.append(batch_as_array)
return result
|
PyTorch/LanguageModeling/BERT/triton/deployment_toolkit | deployment_toolkit | args | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import inspect
import logging
from typing import Callable, Dict, Optional, Union
from model_navigator.utils.cli import is_dict_generic, is_list_generic, is_optional_generic
from .core import GET_ARGPARSER_FN_NAME, load_from_file
LOGGER = logging.getLogger(__name__)
def str2bool(v):
if isinstance(v, bool):
return v
if v.lower() in ("yes", "true", "t", "y", "1"):
return True
elif v.lower() in ("no", "false", "f", "n", "0"):
return False
else:
raise argparse.ArgumentTypeError("Boolean value expected.")
def filter_fn_args(args: Union[dict, argparse.Namespace], fn: Callable) -> dict:
signature = inspect.signature(fn)
parameters_names = list(signature.parameters)
if isinstance(args, argparse.Namespace):
args = vars(args)
args = {k: v for k, v in args.items() if k in parameters_names}
return args
def add_args_for_fn_signature(parser, fn) -> argparse.ArgumentParser:
parser.conflict_handler = "resolve"
signature = inspect.signature(fn)
for parameter in signature.parameters.values():
if parameter.name in ["self", "args", "kwargs"]:
continue
argument_kwargs = {}
if parameter.annotation != inspect.Parameter.empty:
is_optional = is_optional_generic(parameter.annotation)
if is_optional:
annotation = parameter.annotation.__args__[0] # Optional[cls] will be changed into Union[cls, None]
else:
annotation = parameter.annotation
is_list = is_list_generic(annotation)
is_dict = is_dict_generic(annotation)
if parameter.annotation == bool:
argument_kwargs["type"] = str2bool
argument_kwargs["choices"] = [0, 1]
elif is_list:
argument_kwargs["type"] = annotation.__args__[0] # List[cls] -> cls
elif is_dict:
raise RuntimeError(
f"Could not prepare argument parser for {parameter.name}: {parameter.annotation} in {fn}"
)
else:
argument_kwargs["type"] = annotation
if parameter.default != inspect.Parameter.empty:
if parameter.annotation == bool:
argument_kwargs["default"] = str2bool(parameter.default)
else:
argument_kwargs["default"] = parameter.default
else:
argument_kwargs["required"] = True
name = parameter.name.replace("_", "-")
LOGGER.debug(f"Adding argument {name} with {argument_kwargs}")
parser.add_argument(f"--{name}", **argument_kwargs)
return parser
class ArgParserGenerator:
def __init__(self, cls_or_fn, module_path: Optional[str] = None):
self._cls_or_fn = cls_or_fn
init_method_name = "__init__"
self._handle = cls_or_fn if inspect.isfunction(cls_or_fn) else getattr(cls_or_fn, init_method_name, None)
input_is_python_file = module_path and module_path.endswith(".py")
self._input_path = module_path if input_is_python_file else None
self._required_fn_name_for_signature_parsing = getattr(
cls_or_fn, "required_fn_name_for_signature_parsing", None
)
def update_argparser(self, parser):
name = self._handle.__name__
group_parser = parser.add_argument_group(name)
add_args_for_fn_signature(group_parser, fn=self._handle)
self._update_argparser(group_parser)
def get_args(self, args: argparse.Namespace):
filtered_args = filter_fn_args(args, fn=self._handle)
tmp_parser = argparse.ArgumentParser(allow_abbrev=False)
self._update_argparser(tmp_parser)
custom_names = [
p.dest.replace("-", "_") for p in tmp_parser._actions if not isinstance(p, argparse._HelpAction)
]
custom_params = {n: getattr(args, n) for n in custom_names}
filtered_args = {**filtered_args, **custom_params}
return filtered_args
def from_args(self, args: Union[argparse.Namespace, Dict]):
args = self.get_args(args)
LOGGER.info(f"Initializing {self._cls_or_fn.__name__}({args})")
return self._cls_or_fn(**args)
def _update_argparser(self, parser):
label = "argparser_update"
if self._input_path:
update_argparser_handle = load_from_file(self._input_path, label=label, target=GET_ARGPARSER_FN_NAME)
if update_argparser_handle:
update_argparser_handle(parser)
elif self._required_fn_name_for_signature_parsing:
fn_handle = load_from_file(
self._input_path, label=label, target=self._required_fn_name_for_signature_parsing
)
if fn_handle:
add_args_for_fn_signature(parser, fn_handle)
|
PyTorch/Segmentation/MaskRCNN/pytorch/maskrcnn_benchmark/modeling/backbone | backbone | backbone | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
from collections import OrderedDict
from torch import nn
from maskrcnn_benchmark.modeling import registry
from maskrcnn_benchmark.modeling.make_layers import conv_with_kaiming_uniform
from . import fpn as fpn_module
from . import resnet
@registry.BACKBONES.register("R-50-C4")
@registry.BACKBONES.register("R-50-C5")
@registry.BACKBONES.register("R-101-C4")
@registry.BACKBONES.register("R-101-C5")
def build_resnet_backbone(cfg):
body = resnet.ResNet(cfg)
model = nn.Sequential(OrderedDict([("body", body)]))
return model
@registry.BACKBONES.register("R-50-FPN")
@registry.BACKBONES.register("R-101-FPN")
def build_resnet_fpn_backbone(cfg):
body = resnet.ResNet(cfg)
in_channels_stage2 = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS
out_channels = cfg.MODEL.BACKBONE.OUT_CHANNELS
fpn = fpn_module.FPN(
in_channels_list=[
in_channels_stage2,
in_channels_stage2 * 2,
in_channels_stage2 * 4,
in_channels_stage2 * 8,
],
out_channels=out_channels,
conv_block=conv_with_kaiming_uniform(
cfg.MODEL.FPN.USE_GN, cfg.MODEL.FPN.USE_RELU
),
top_blocks=fpn_module.LastLevelMaxPool(),
nhwc=cfg.NHWC,
)
model = nn.Sequential(OrderedDict([("body", body), ("fpn", fpn)]))
return model
def build_backbone(cfg):
assert cfg.MODEL.BACKBONE.CONV_BODY in registry.BACKBONES, \
"cfg.MODEL.BACKBONE.CONV_BODY: {} are not registered in registry".format(
cfg.MODEL.BACKBONE.CONV_BODY
)
return registry.BACKBONES[cfg.MODEL.BACKBONE.CONV_BODY](cfg)
|
TensorFlow2/LanguageModeling/BERT/official/utils/flags | flags | _performance | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Register flags for optimizing performance."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import multiprocessing
from absl import flags # pylint: disable=g-bad-import-order
import tensorflow as tf # pylint: disable=g-bad-import-order
from official.utils.flags._conventions import help_wrap
# Map string to TensorFlow dtype
DTYPE_MAP = {
"fp16": tf.float16,
"bf16": tf.bfloat16,
"fp32": tf.float32,
}
def get_tf_dtype(flags_obj):
if getattr(flags_obj, "fp16_implementation", None) == "graph_rewrite":
# If the graph_rewrite is used, we build the graph with fp32, and let the
# graph rewrite change ops to fp16.
return tf.float32
return DTYPE_MAP[flags_obj.dtype]
def get_loss_scale(flags_obj, default_for_fp16):
if flags_obj.loss_scale == "dynamic":
return flags_obj.loss_scale
elif flags_obj.loss_scale is not None:
return float(flags_obj.loss_scale)
elif flags_obj.dtype == "fp32":
return 1 # No loss scaling is needed for fp32
else:
assert flags_obj.dtype == "fp16"
return default_for_fp16
def define_performance(num_parallel_calls=False, inter_op=False, intra_op=False,
synthetic_data=False, max_train_steps=False, dtype=False,
all_reduce_alg=False, num_packs=False,
tf_gpu_thread_mode=False,
datasets_num_private_threads=False,
datasets_num_parallel_batches=False,
dynamic_loss_scale=False, fp16_implementation=False,
loss_scale=False,
tf_data_experimental_slack=False, enable_xla=False,
force_v2_in_keras_compile=False,
training_dataset_cache=False):
"""Register flags for specifying performance tuning arguments.
Args:
num_parallel_calls: Create a flag to specify parallelism of data loading.
inter_op: Create a flag to allow specification of inter op threads.
intra_op: Create a flag to allow specification of intra op threads.
synthetic_data: Create a flag to allow the use of synthetic data.
max_train_steps: Create a flags to allow specification of maximum number
of training steps
dtype: Create flags for specifying dtype.
all_reduce_alg: If set forces a specific algorithm for multi-gpu.
num_packs: If set provides number of packs for MirroredStrategy's cross
device ops.
tf_gpu_thread_mode: gpu_private triggers us of private thread pool.
datasets_num_private_threads: Number of private threads for datasets.
datasets_num_parallel_batches: Determines how many batches to process in
parallel when using map and batch from tf.data.
dynamic_loss_scale: Allow the "loss_scale" flag to take on the value
"dynamic". Only valid if `dtype` is True.
fp16_implementation: Create fp16_implementation flag.
loss_scale: Controls the loss scaling, normally for mixed-precision
training. Can only be turned on if dtype is also True.
tf_data_experimental_slack: Determines whether to enable tf.data's
`experimental_slack` option.
enable_xla: Determines if XLA (auto clustering) is turned on.
    force_v2_in_keras_compile: Forces the use of run_distributed path even if not
using a `strategy`. This is not the same as
`tf.distribute.OneDeviceStrategy`
training_dataset_cache: Whether to cache the training dataset on workers.
Typically used to improve training performance when training data is in
remote storage and can fit into worker memory.
Returns:
A list of flags for core.py to marks as key flags.
"""
key_flags = []
if num_parallel_calls:
flags.DEFINE_integer(
name="num_parallel_calls", short_name="npc",
default=multiprocessing.cpu_count(),
help=help_wrap("The number of records that are processed in parallel "
"during input processing. This can be optimized per "
"data set but for generally homogeneous data sets, "
"should be approximately the number of available CPU "
"cores. (default behavior)"))
if inter_op:
flags.DEFINE_integer(
name="inter_op_parallelism_threads", short_name="inter", default=0,
help=help_wrap("Number of inter_op_parallelism_threads to use for CPU. "
"See TensorFlow config.proto for details.")
)
if intra_op:
flags.DEFINE_integer(
name="intra_op_parallelism_threads", short_name="intra", default=0,
help=help_wrap("Number of intra_op_parallelism_threads to use for CPU. "
"See TensorFlow config.proto for details."))
if synthetic_data:
flags.DEFINE_bool(
name="use_synthetic_data", short_name="synth", default=False,
help=help_wrap(
"If set, use fake data (zeroes) instead of a real dataset. "
"This mode is useful for performance debugging, as it removes "
"input processing steps, but will not learn anything."))
if max_train_steps:
flags.DEFINE_integer(
name="max_train_steps", short_name="mts", default=None, help=help_wrap(
"The model will stop training if the global_step reaches this "
"value. If not set, training will run until the specified number "
"of epochs have run as usual. It is generally recommended to set "
"--train_epochs=1 when using this flag."
))
if dtype:
flags.DEFINE_enum(
name="dtype", short_name="dt", default="fp32",
enum_values=DTYPE_MAP.keys(),
help=help_wrap("The TensorFlow datatype used for calculations. "
"Variables may be cast to a higher precision on a "
"case-by-case basis for numerical stability."))
loss_scale_help_text = (
"The amount to scale the loss by when the model is run. {}. Before "
"gradients are computed, the loss is multiplied by the loss scale, "
"making all gradients loss_scale times larger. To adjust for this, "
"gradients are divided by the loss scale before being applied to "
"variables. This is mathematically equivalent to training without "
"a loss scale, but the loss scale helps avoid some intermediate "
"gradients from underflowing to zero. If not provided the default "
"for fp16 is 128 and 1 for all other dtypes.{}"
)
if dynamic_loss_scale:
loss_scale_help_text = loss_scale_help_text.format(
"This can be an int/float or the string 'dynamic'",
" The string 'dynamic' can be used to dynamically determine the "
"optimal loss scale during training, but currently this "
"significantly slows down performance")
loss_scale_validation_msg = ("loss_scale should be a positive int/float "
"or the string 'dynamic'.")
else:
loss_scale_help_text = loss_scale_help_text.format(
"This must be an int/float", "")
loss_scale_validation_msg = "loss_scale should be a positive int/float."
if loss_scale:
flags.DEFINE_string(
name="loss_scale", short_name="ls", default=None,
help=help_wrap(loss_scale_help_text))
@flags.validator(flag_name="loss_scale",
message=loss_scale_validation_msg)
def _check_loss_scale(loss_scale): # pylint: disable=unused-variable
"""Validator to check the loss scale flag is valid."""
if loss_scale is None:
return True # null case is handled in get_loss_scale()
if loss_scale == "dynamic" and dynamic_loss_scale:
return True
try:
loss_scale = float(loss_scale)
except ValueError:
return False
return loss_scale > 0
if fp16_implementation:
flags.DEFINE_enum(
name="fp16_implementation", default="keras",
        enum_values=("keras", "graph_rewrite"),
help=help_wrap(
"When --dtype=fp16, how fp16 should be implemented. This has no "
"impact on correctness. 'keras' uses the "
"tf.keras.mixed_precision API. 'graph_rewrite' uses the "
"tf.train.experimental.enable_mixed_precision_graph_rewrite "
"API."))
@flags.multi_flags_validator(["fp16_implementation", "dtype",
"loss_scale"])
def _check_fp16_implementation(flags_dict):
"""Validator to check fp16_implementation flag is valid."""
if (flags_dict["fp16_implementation"] == "graph_rewrite" and
flags_dict["dtype"] != "fp16"):
raise flags.ValidationError("--fp16_implementation should not be "
"specified unless --dtype=fp16")
return True
if all_reduce_alg:
flags.DEFINE_string(
name="all_reduce_alg", short_name="ara", default=None,
help=help_wrap("Defines the algorithm to use for performing all-reduce."
"When specified with MirroredStrategy for single "
"worker, this controls "
"tf.contrib.distribute.AllReduceCrossTowerOps. When "
"specified with MultiWorkerMirroredStrategy, this "
"controls "
"tf.distribute.experimental.CollectiveCommunication; "
"valid options are `ring` and `nccl`."))
if num_packs:
flags.DEFINE_integer(
name="num_packs", default=1,
help=help_wrap("Sets `num_packs` in the cross device ops used in "
"MirroredStrategy. For details, see "
"tf.distribute.NcclAllReduce."))
if tf_gpu_thread_mode:
flags.DEFINE_string(
name="tf_gpu_thread_mode", short_name="gt_mode", default=None,
help=help_wrap(
"Whether and how the GPU device uses its own threadpool.")
)
flags.DEFINE_integer(
name="per_gpu_thread_count", short_name="pgtc", default=0,
help=help_wrap(
"The number of threads to use for GPU. Only valid when "
"tf_gpu_thread_mode is not global.")
)
if datasets_num_private_threads:
flags.DEFINE_integer(
name="datasets_num_private_threads",
default=None,
help=help_wrap(
"Number of threads for a private threadpool created for all"
"datasets computation..")
)
if datasets_num_parallel_batches:
flags.DEFINE_integer(
name="datasets_num_parallel_batches",
default=None,
help=help_wrap(
"Determines how many batches to process in parallel when using "
"map and batch from tf.data.")
)
if training_dataset_cache:
flags.DEFINE_boolean(
name="training_dataset_cache",
default=False,
help=help_wrap(
"Determines whether to cache the training dataset on workers. "
"Typically used to improve training performance when training "
"data is in remote storage and can fit into worker memory.")
)
if tf_data_experimental_slack:
flags.DEFINE_boolean(
name="tf_data_experimental_slack",
default=False,
help=help_wrap(
"Whether to enable tf.data's `experimental_slack` option.")
)
if enable_xla:
flags.DEFINE_boolean(
name="enable_xla", default=False,
help="Whether to enable XLA auto jit compilation")
if force_v2_in_keras_compile:
flags.DEFINE_boolean(
name="force_v2_in_keras_compile", default=None,
help="Forces the use of run_distribued path even if not"
"using a `strategy`. This is not the same as"
"`tf.distribute.OneDeviceStrategy`")
return key_flags
|