Named Entity Recognition
Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or "token") into different categories, such as names of people or names of locations, or different parts of speech.
For example, given the sentence:
> Does Chicago have any Pakistani restaurants?
A named-entity recognition algorithm may identify:
- "Chicago" as a **location**
- "Pakistani" as an **ethnicity**
and so on.
Using `gradio` (specifically the `HighlightedText` component), you can easily build a web demo of your NER model and share that with the rest of your team.
Here is an example of a demo that you'll be able to build:
$demo_ner_pipeline
This tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to learn!
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained named-entity recognition model. You can use your own model, but in this tutorial we will use one from the `transformers` library.
Approach 1: List of Entity Dictionaries
Many named-entity recognition models output a list of dictionaries. Each dictionary consists of an _entity_, a "start" index, and an "end" index. This is, for example, how NER models in the `transformers` library operate:
```py
from transformers import pipeline
ner_pipeline = pipeline("ner")
ner_pipeline("Does Chicago have any Pakistani restaurants")
```
Output:
```bash
[{'entity': 'I-LOC',
  'score': 0.9988978,
  'index': 2,
  'word': 'Chicago',
  'start': 5,
  'end': 12},
 {'entity': 'I-MISC',
  'score': 0.9958592,
  'index': 5,
  'word': 'Pakistani',
  'start': 22,
  'end': 31}]
```
If you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass in this **list of entities**, along with the **original text**, as a dictionary with the keys `"entities"` and `"text"` respectively.
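For instance, a minimal sketch of this hookup (reusing the `transformers` pipeline from above; the placeholder text is an assumption) could look like the following. The function simply bundles the pipeline output and the input text into the dictionary that `HighlightedText` expects:
```py
import gradio as gr
from transformers import pipeline

ner_pipeline = pipeline("ner")

def ner(text):
    # HighlightedText accepts a dict with the original text and
    # the list of entity dictionaries produced by the pipeline.
    return {"text": text, "entities": ner_pipeline(text)}

demo = gr.Interface(
    ner,
    gr.Textbox(placeholder="Enter sentence here..."),
    gr.HighlightedText(),
)
demo.launch()
```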
Here is a complete example:
$code_ner_pipeline
$demo_ner_pipeline
Approach 2: List of Tuples
An alternative way to pass data into the `HighlightedText` component is a list of tuples. The first element of each tuple should be the word or words that are being classified into a particular entity. The second element should be the entity label (or `None` if they should be unlabeled). The `HighlightedText` component automatically strings together the words and labels to display the entities.
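For example, a minimal hand-rolled sketch (with made-up labels, not the output of a real tagger) might return something like this:
```py
import gradio as gr

def tag(text):
    # A hypothetical tagger output: (word, label) tuples, with None
    # for words that should be left unhighlighted.
    return [("Does", None), ("Chicago", "LOC"), ("have", None),
            ("any", None), ("Pakistani", "MISC"), ("restaurants", None)]

demo = gr.Interface(tag, gr.Textbox(), gr.HighlightedText())
demo.launch()
```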
In some cases, this can be easier than the first approach. Here is a demo showing this approach using spaCy's part-of-speech tagger:
$code_text_analysis
$demo_text_analysis
---
And you're done! That's all you need to know to build a web-based GUI for your NER model.
Fun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`.
Gradio and ONNX on Hugging Face
In this Guide, we'll walk you through:
- Introduction of ONNX, ONNX model zoo, Gradio, and Hugging Face Spaces
- How to setup a Gradio demo for EfficientNet-Lite4
- How to contribute your own Gradio demos for the ONNX organization on Hugging Face
Here's an [example](https://onnx-efficientnet-lite4.hf.space/) of a Gradio demo built from an ONNX model.
What is ONNX?
Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.
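For instance, a rough sketch of that export-and-run workflow in PyTorch (the model choice and file names here are illustrative, not part of this guide):
```python
import numpy as np
import torch
import torchvision
import onnxruntime as ort

# Export a pretrained PyTorch model to the ONNX format
model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx",
                  input_names=["input"], output_names=["output"])

# Run the exported model with ONNX Runtime
sess = ort.InferenceSession("resnet18.onnx")
out = sess.run(["output"], {"input": np.random.randn(1, 3, 224, 224).astype(np.float32)})
print(out[0].shape)  # (1, 1000)
```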
What is the ONNX Model Zoo?
The [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.
What are Hugging Face Spaces & Gradio?
Gradio
Gradio lets users demo their machine learning models as a web app, all in Python code. Gradio wraps a Python function into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.
Get started [here](https://gradio.app/getting_started)
Hugging Face Spaces
Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit, and static HTML demos. Spaces can be public or private, and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).
Hugging Face Models
The Hugging Face Model Hub also supports ONNX models, which can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads).
How did Hugging Face help the ONNX Model Zoo?
There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can easily try an ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all in the cloud without downloading anything locally. Note that there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime) and [MXNet](https://github.com/apache/incubator-mxnet).
What is the role of ONNX Runtime?
ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It makes live Gradio demos with ONNX Model Zoo models on Hugging Face possible.
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/).
Setting up a Gradio Demo for EfficientNet-Lite4
EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU. To learn more, read the [model card](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4).
Here we walk through setting up an example demo for EfficientNet-Lite4 using Gradio.
First we import our dependencies and download and load the efficientnet-lite4 model from the ONNX Model Zoo. Then we load the labels from the labels_map.txt file. We then set up our preprocessing functions, load the model for inference, and set up the inference function. Finally, the inference function is wrapped into a Gradio interface for a user to interact with. See the full code below.
```python
import numpy as np
import math
import matplotlib.pyplot as plt
import cv2
import json
import gradio as gr
from huggingface_hub import hf_hub_download
from onnx import hub
import onnxruntime as ort

# loads ONNX model from ONNX Model Zoo
model = hub.load("efficientnet-lite4")
# loads the labels text file
labels = json.load(open("labels_map.txt", "r"))

# sets image file dimensions to 224x224 by resizing and cropping image from center
def pre_process_edgetpu(img, dims):
    output_height, output_width, _ = dims
    img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)
    img = center_crop(img, output_height, output_width)
    img = np.asarray(img, dtype='float32')
    # converts jpg pixel value from [0 - 255] to float array [-1.0 - 1.0]
    img -= [127.0, 127.0, 127.0]
    img /= [128.0, 128.0, 128.0]
    return img

# resizes the image with a proportional scale
def resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):
    height, width, _ = img.shape
    new_height = int(100. * out_height / scale)
    new_width = int(100. * out_width / scale)
    if height > width:
        w = new_width
        h = int(new_height * height / width)
    else:
        h = new_height
        w = int(new_width * width / height)
    img = cv2.resize(img, (w, h), interpolation=inter_pol)
    return img

# crops the image around the center based on given height and width
def center_crop(img, out_height, out_width):
    height, width, _ = img.shape
    left = int((width - out_width) / 2)
    right = int((width + out_width) / 2)
    top = int((height - out_height) / 2)
    bottom = int((height + out_height) / 2)
    img = img[top:bottom, left:right]
    return img

# the loaded ModelProto is serialized to bytes for onnxruntime
sess = ort.InferenceSession(model.SerializeToString())

def inference(img):
    img = cv2.imread(img)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = pre_process_edgetpu(img, (224, 224, 3))
    img_batch = np.expand_dims(img, axis=0)

    results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
    result = reversed(results[0].argsort()[-5:])
    resultdic = {}
    for r in result:
        resultdic[labels[str(r)]] = float(results[0][r])
    return resultdic

title = "EfficientNet-Lite4"
description = "EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU."
examples = [['catonnx.jpg']]
gr.Interface(inference, gr.Image(type="filepath"), "label", title=title, description=description, examples=examples).launch()
```
How to contribute Gradio demos on HF spaces using ONNX models
- Add the model to the [ONNX Model Zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md).
- Create an account on Hugging Face [here](https://huggingface.co/join).
- To see which models are left to add to the ONNX organization, refer to the table in the [Models list](https://github.com/onnx/models#models).
- Add a Gradio demo under your username; see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up a Gradio demo on Hugging Face.
- Request to join the ONNX organization [here](https://huggingface.co/onnx).
- Once approved, transfer the model from your username to the ONNX organization.
- Add a badge for the model in the model table; see examples in the [Models list](https://github.com/onnx/models#models).
Theming
Gradio features a built-in theming engine that lets you customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Blocks` or `Interface` constructor. For example:
```python
with gr.Blocks(theme=gr.themes.Soft()) as demo:
...
```
<div class="wrapper">
<iframe
src="https://gradio-theme-soft.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. These are:
* `gr.themes.Base()` - the `"base"` theme sets the primary color to blue but otherwise has minimal styling, making it particularly useful as a base for creating new, custom themes.
* `gr.themes.Default()` - the `"default"` Gradio 5 theme, with a vibrant orange primary color and gray secondary color.
* `gr.themes.Origin()` - the `"origin"` theme is most similar to Gradio 4 styling. Colors, especially in light mode, are more subdued than the Gradio 5 default theme.
* `gr.themes.Citrus()` - the `"citrus"` theme uses a yellow primary color, highlights form elements that are in focus, and includes fun 3D effects when buttons are clicked.
* `gr.themes.Monochrome()` - the `"monochrome"` theme uses a black primary and white secondary color, and uses serif-style fonts, giving the appearance of a black-and-white newspaper.
* `gr.themes.Soft()` - the `"soft"` theme uses a purple primary color and white secondary color. It also increases the border radius around buttons and form elements and highlights labels.
* `gr.themes.Glass()` - the `"glass"` theme has a blue primary color and a translucent gray secondary color. The theme also uses vertical gradients to create a glassy effect.
* `gr.themes.Ocean()` - the `"ocean"` theme has a blue-green primary color and gray secondary color. The theme also uses horizontal gradients, especially for buttons and some form elements.
Each of these themes sets values for hundreds of CSS variables. You can use prebuilt themes as a starting point for your own custom themes, or you can create your own themes from scratch. Let's take a look at each approach.
Using the Theme Builder
The easiest way to build a theme is using the Theme Builder. To launch the Theme Builder locally, run the following code:
```python
import gradio as gr
gr.themes.builder()
```
$demo_theme_builder
You can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.
As you edit the values in the Theme Builder, the app will preview updates in real time. You can download the code to generate the theme you've created so you can use it in any Gradio app.
In the rest of the guide, we will cover building themes programmatically.
Extending Themes via the Constructor
Although each theme has hundreds of CSS variables, the values for most of these variables are drawn from 8 core variables which can be set through the constructor of each prebuilt theme. Modifying these 8 arguments allows you to quickly change the look and feel of your app.
Core Colors
The first 3 constructor arguments set the colors of the theme and are `gradio.themes.Color` objects. Internally, these Color objects hold brightness values for the palette of a single hue, ranging from 50, 100, 200..., 800, 900, 950. Other CSS variables are derived from these 3 colors.
The 3 color constructor arguments are:
- `primary_hue`: This is the color that draws attention in your theme. In the default theme, this is set to `gradio.themes.colors.orange`.
- `secondary_hue`: This is the color that is used for secondary elements in your theme. In the default theme, this is set to `gradio.themes.colors.blue`.
- `neutral_hue`: This is the color that is used for text and other neutral elements in your theme. In the default theme, this is set to `gradio.themes.colors.gray`.
You could modify these values using their string shortcuts, such as
```python
with gr.Blocks(theme=gr.themes.Default(primary_hue="red", secondary_hue="pink")) as demo:
...
```
or you could use the `Color` objects directly, like this:
```python
with gr.Blocks(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink)) as demo:
...
```
<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-1.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
Predefined colors are:
- `slate`
- `gray`
- `zinc`
- `neutral`
- `stone`
- `red`
- `orange`
- `amber`
- `yellow`
- `lime`
- `green`
- `emerald`
- `teal`
- `cyan`
- `sky`
- `blue`
- `indigo`
- `violet`
- `purple`
- `fuchsia`
- `pink`
- `rose`
You could also create your own custom `Color` objects and pass them in.
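For example, a minimal sketch of a custom `Color` (the hex values here are arbitrary, one per brightness level from 50 to 950):
```python
import gradio as gr

# A hypothetical palette; each argument is the hex value for one
# brightness level, from 50 (lightest) to 950 (darkest).
my_green = gr.themes.Color(
    c50="#f0fdf9", c100="#ccfbef", c200="#99f6e0", c300="#5eead4",
    c400="#2dd4bf", c500="#14b8a6", c600="#0d9488", c700="#0f766e",
    c800="#115e59", c900="#134e4a", c950="#042f2c",
)

with gr.Blocks(theme=gr.themes.Default(primary_hue=my_green)) as demo:
    ...
```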
Core Sizing
The next 3 constructor arguments set the sizing of the theme and are `gradio.themes.Size` objects. Internally, these Size objects hold pixel size values that range from `xxs` to `xxl`. Other CSS variables are derived from these 3 sizes.
- `spacing_size`: This sets the padding within and spacing between elements. In the default theme, this is set to `gradio.themes.sizes.spacing_md`.
- `radius_size`: This sets the roundedness of corners of elements. In the default theme, this is set to `gradio.themes.sizes.radius_md`.
- `text_size`: This sets the font size of text. In the default theme, this is set to `gradio.themes.sizes.text_md`.
You could modify these values using their string shortcuts, such as
```python
with gr.Blocks(theme=gr.themes.Default(spacing_size="sm", radius_size="none")) as demo:
...
```
or you could use the `Size` objects directly, like this:
```python
with gr.Blocks(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none)) as demo:
...
```
<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-2.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
The predefined size objects are:
- `radius_none`
- `radius_sm`
- `radius_md`
- `radius_lg`
- `spacing_sm`
- `spacing_md`
- `spacing_lg`
- `text_sm`
- `text_md`
- `text_lg`
You could also create your own custom `Size` objects and pass them in.
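Similarly, a sketch of a custom `Size` (the values are arbitrary CSS lengths, one per step from `xxs` to `xxl`):
```python
import gradio as gr

# A hypothetical radius scale; one CSS length per step.
chunky = gr.themes.Size(
    xxs="2px", xs="4px", sm="8px", md="12px",
    lg="16px", xl="24px", xxl="32px",
)

with gr.Blocks(theme=gr.themes.Default(radius_size=chunky)) as demo:
    ...
```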
Core Fonts
The final 2 constructor arguments set the fonts of the theme. You can pass a list of fonts to each of these arguments to specify fallbacks. If you provide a string, it will be loaded as a system font. If you provide a `gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.
- `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont("IBM Plex Sans")`.
- `font_mono`: This sets the monospace font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont("IBM Plex Mono")`.
You could modify these values as follows:
```python
with gr.Blocks(theme=gr.themes.Default(font=[gr.themes.GoogleFont("Inconsolata"), "Arial", "sans-serif"])) as demo:
...
```
<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-3.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
Extending Themes via `.set()`
You can also modify the values of CSS variables after the theme has been loaded. To do so, use the `.set()` method of the theme object to get access to the CSS variables. For example:
```python
theme = gr.themes.Default(primary_hue="blue").set(
    loader_color="#FF0000",
    slider_color="#FF0000",
)
with gr.Blocks(theme=theme) as demo:
...
```
In the example above, we've set the `loader_color` and `slider_color` variables to `#FF0000`, despite the overall `primary_hue` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.
Your IDE type hinting should help you navigate these variables. Since there are so many CSS variables, let's take a look at how these variables are named and organized.
CSS Variable Naming Conventions
CSS variable names can get quite long, like `button_primary_background_fill_hover_dark`! However, they follow a common naming convention that makes it easy to understand what they do and to find the variable you're looking for. Separated by underscores, the variable name is made up of:
1. The target element, such as `button`, `slider`, or `block`.
2. The target element type or sub-element, such as `button_primary`, or `block_label`.
3. The property, such as `button_primary_background_fill`, or `block_label_border_width`.
4. Any relevant state, such as `button_primary_background_fill_hover`.
5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.
Of course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`.
CSS Variable Organization
Though there are hundreds of CSS variables, they do not all have to have individual values. They draw their values by referencing a set of core variables and referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may want to modify.
Referencing Core Variables
To reference one of the core constructor variables, precede the variable name with an asterisk. To reference a core color, use the `*primary_`, `*secondary_`, or `*neutral_` prefix, followed by the brightness value. For example:
```python
theme = gr.themes.Default(primary_hue="blue").set(
    button_primary_background_fill="*primary_200",
    button_primary_background_fill_hover="*primary_300",
)
```
In the example above, we've set the `button_primary_background_fill` and `button_primary_background_fill_hover` variables to `*primary_200` and `*primary_300`. These variables will be set to the 200 and 300 brightness values of the blue primary color palette, respectively.
Similarly, to reference a core size, use the `*spacing_`, `*radius_`, or `*text_` prefix, followed by the size value. For example:
```python
theme = gr.themes.Default(radius_size="md").set(
    button_primary_border_radius="*radius_xl",
)
```
In the example above, we've set the `button_primary_border_radius` variable to `*radius_xl`. This variable will be set to the `xl` setting of the medium radius size range.
Referencing Other Variables
Variables can also reference each other. For example, look at the example below:
```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_hover="#FF0000",
    button_primary_border="#FF0000",
)
```
Having to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.
```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_hover="*button_primary_background_fill",
    button_primary_border="*button_primary_background_fill",
)
```
Now, if we change the `button_primary_background_fill` variable, the `button_primary_background_fill_hover` and `button_primary_border` variables will automatically update as well.
This is particularly useful if you intend to share your theme - it makes it easy to modify the theme without having to change every variable.
Note that dark mode variables automatically reference each other. For example:
```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_dark="#AAAAAA",
    button_primary_border="*button_primary_background_fill",
    button_primary_border_dark="*button_primary_background_fill_dark",
)
```
`button_primary_border_dark` will draw its value from `button_primary_background_fill_dark`, because dark mode always draws from the dark version of the variable.
Creating a Full Theme
Let's say you want to create a theme from scratch! We'll go through it step by step - you can also see the source of prebuilt themes in the gradio source repo for reference - [here's the source](https://github.com/gradio-app/gradio/blob/main/gradio/themes/monochrome.py) for the Monochrome theme.
Our new theme class will inherit from `gradio.themes.Base`, a theme that sets a lot of convenient defaults. Let's make a simple demo that creates a dummy theme called Seafoam, and make a simple app that uses it.
$code_theme_new_step_1
<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-1.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
The Base theme is very barebones, and uses `gr.themes.Blue` as its primary color - you'll note the primary button and the loading animation are both blue as a result. Let's change the default core arguments of our app. We'll overwrite the constructor and pass new defaults for the core constructor arguments.
We'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.
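A minimal sketch of such a subclass, using the defaults just described (the fallback fonts here are assumptions, and the full listing follows below):
```python
import gradio as gr

class Seafoam(gr.themes.Base):
    def __init__(self):
        super().__init__(
            primary_hue=gr.themes.colors.emerald,
            secondary_hue=gr.themes.colors.blue,
            neutral_hue=gr.themes.colors.blue,
            text_size=gr.themes.sizes.text_lg,
            # Quicksand loaded from Google Fonts, with system fallbacks
            font=[gr.themes.GoogleFont("Quicksand"), "ui-sans-serif", "sans-serif"],
        )

seafoam = Seafoam()

with gr.Blocks(theme=seafoam) as demo:
    ...
```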
$code_theme_new_step_2
<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-2.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
See how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.
Let's modify the theme a bit more directly. We'll call the `set()` method to overwrite CSS variable values explicitly. We can use any CSS logic, and reference our core constructor arguments using the `*` prefix.
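A sketch of what those overrides might look like inside the subclass (the specific variables chosen here are illustrative, not the guide's exact listing):
```python
import gradio as gr

class Seafoam(gr.themes.Base):
    def __init__(self):
        super().__init__(primary_hue=gr.themes.colors.emerald)
        # Overwrite individual CSS variables, referencing core
        # constructor arguments with the * prefix.
        super().set(
            button_primary_background_fill="*primary_200",
            button_primary_background_fill_hover="*primary_300",
            slider_color="*secondary_300",
        )
```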
$code_theme_new_step_3
<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-3.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
Look how fun our theme looks now! With just a few variable changes, our theme looks completely different.
You may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel.
Sharing Themes
Once you have created a theme, you can upload it to the HuggingFace Hub to let others view it, use it, and build off of it!
Uploading a Theme
There are two ways to upload a theme, via the theme class instance or the command line. We will cover both of them with the previously created `seafoam` theme.
- Via the class instance
Each theme instance has a method called `push_to_hub` we can use to upload a theme to the HuggingFace hub.
```python
seafoam.push_to_hub(repo_name="seafoam",
                    version="0.0.1",
                    hf_token="<token>")
```
- Via the command line
First save the theme to disk
```python
seafoam.dump(filename="seafoam.json")
```
Then use the `upload_theme` command:
```bash
upload_theme \
  "seafoam.json" \
  "seafoam" \
  --version "0.0.1" \
  --hf_token "<token>"
```
In order to upload a theme, you must have a HuggingFace account and pass your [Access Token](https://huggingface.co/docs/huggingface_hub/quick-start#login) as the `hf_token` argument. However, if you log in via the [HuggingFace command line](https://huggingface.co/docs/huggingface_hub/quick-start#login) (which comes installed with `gradio`), you can omit the `hf_token` argument.
The `version` argument lets you specify a valid [semantic version](https://www.geeksforgeeks.org/introduction-semantic-versioning/) string for your theme.
That way your users are able to specify which version of your theme they want to use in their apps. This also lets you publish updates to your theme without worrying
about changing how previously created apps look. The `version` argument is optional. If omitted, the next patch version is automatically applied.
Theme Previews
By calling `push_to_hub` or `upload_theme`, the theme assets will be stored in a [HuggingFace space](https://huggingface.co/docs/hub/spaces-overview).
For example, the theme preview for the calm seafoam theme is here: [calm seafoam preview](https://huggingface.co/spaces/shivalikasingh/calm_seafoam).
<div class="wrapper">
<iframe
src="https://shivalikasingh-calm-seafoam.hf.space/?__theme=light"
frameborder="0"
></iframe>
</div>
Discovering Themes
The [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows all the public Gradio themes. After you publish your theme, it will automatically show up in the gallery within a couple of minutes. You can sort the themes by the number of likes on their Spaces or by how recently they were created, and you can toggle the gallery between light and dark mode.
<div class="wrapper">
<iframe
src="https://gradio-theme-gallery.static.hf.space"
frameborder="0"
></iframe>
</div>
Downloading
To use a theme from the hub, use the `from_hub` method on the `ThemeClass` and pass it to your app:
```python
my_theme = gr.Theme.from_hub("gradio/seafoam")

with gr.Blocks(theme=my_theme) as demo:
    ...
```
You can also pass the theme string directly to `Blocks` or `Interface` (`gr.Blocks(theme="gradio/seafoam")`).
You can pin your app to an upstream theme version by using semantic versioning expressions.
For example, the following would ensure the theme we load from the `seafoam` repo was between versions `0.0.1` and `0.1.0`:
```python
with gr.Blocks(theme="gradio/seafoam@>=0.0.1,<0.1.0") as demo:
    ...
```
Enjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the hub!
If you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!
<style>
.wrapper {
position: relative;
padding-bottom: 56.25%;
padding-top: 25px;
height: 0;
}
.wrapper iframe {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
</style>
Gradio and W&B Integration
In this Guide, we'll walk you through:
- An introduction to Gradio, Hugging Face Spaces, and W&B
- How to set up a Gradio demo using the W&B integration for JoJoGAN
- How to contribute your own Gradio demos, after tracking your experiments on W&B, to the W&B organization on Hugging Face
What is W&B?
Weights and Biases (W&B) allows data scientists and machine learning scientists to track their machine learning experiments at every stage, from training to production. Any metric can be aggregated over samples and shown in panels in a customizable and searchable dashboard, like below:
<img alt="Screen Shot 2022-08-01 at 5 54 59 PM" src="https://user-images.githubusercontent.com/81195143/182252755-4a0e1ca8-fd25-40ff-8c91-c9da38aaa9ec.png">
What are Hugging Face Spaces & Gradio?
Gradio
Gradio lets users demo their machine learning models as a web app, all in a few lines of Python. Gradio wraps any Python function (such as a machine learning model's inference function) into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.
Get started [here](https://gradio.app/getting_started)
Hugging Face Spaces
Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit, and static HTML demos. Spaces can be public or private, and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).
Setting up a Gradio Demo for JoJoGAN
Now, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.
Let's get started!
1. Create a W&B account
Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don't have one already. It shouldn't take more than a couple of minutes. Once you're done (or if you've already got an account), next, we'll run a quick colab.
2. Open Colab and Install Gradio and W&B
We'll be following along with the colab provided in the JoJoGAN repo with some minor modifications to use Wandb and Gradio more effectively.
[Open in Colab](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)
Install Gradio and Wandb at the top:
```sh
pip install gradio wandb
```
3. Finetune StyleGAN and W&B experiment tracking
This next step will open a W&B dashboard to track your experiments, plus a Gradio panel (hosted on Hugging Face Spaces) with a drop-down menu of pretrained models to choose from. Here's the code you need for that:
```python
alpha = 1.0
alpha = 1 - alpha

preserve_color = True
num_iter = 100
log_interval = 50

samples = []
column_names = ["Reference (y)", "Style Code(w)", "Real Face Image(x)"]

wandb.init(project="JoJoGAN")
config = wandb.config
config.num_iter = num_iter
config.preserve_color = preserve_color
wandb.log(
    {"Style reference": [wandb.Image(transforms.ToPILImage()(target_im))]},
    step=0)

# load discriminator for perceptual loss
discriminator = Discriminator(1024, 2).eval().to(device)
ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)
discriminator.load_state_dict(ckpt["d"], strict=False)

# reset generator
del generator
generator = deepcopy(original_generator)

g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))

# which layers to swap for generating a family of plausible real images -> fake image
if preserve_color:
    id_swap = [9, 11, 15, 16, 17]
else:
    id_swap = list(range(7, generator.n_latent))

for idx in tqdm(range(num_iter)):
    mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
    in_latent = latents.clone()
    in_latent[:, id_swap] = alpha * latents[:, id_swap] + (1 - alpha) * mean_w[:, id_swap]

    img = generator(in_latent, input_is_latent=True)

    with torch.no_grad():
        real_feat = discriminator(targets)
    fake_feat = discriminator(img)

    loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)]) / len(fake_feat)

    wandb.log({"loss": loss}, step=idx)
    if idx % log_interval == 0:
        generator.eval()
        my_sample = generator(my_w, input_is_latent=True)
        generator.train()
        my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))
        wandb.log(
            {"Current stylization": [wandb.Image(my_sample)]},
            step=idx)
        table_data = [
            wandb.Image(transforms.ToPILImage()(target_im)),
            wandb.Image(img),
            wandb.Image(my_sample),
        ]
        samples.append(table_data)

    g_optim.zero_grad()
    loss.backward()
    g_optim.step()

out_table = wandb.Table(data=samples, columns=column_names)
wandb.log({"Current Samples": out_table})
```
4. Save, Download, and Load Model
Here's how to save and download your model.
```python
from PIL import Image
import torch
torch.backends.cudnn.benchmark = True
from torchvision import transforms, utils
from util import *
import math
import random
import numpy as np
from torch import nn, autograd, optim
from torch.nn import functional as F
from tqdm import tqdm
import lpips
from model import *
from e4e_projection import projection as e4e_projection
from copy import deepcopy
import imageio
import os
import sys
import torchvision.transforms as transforms
from argparse import Namespace
from e4e.models.psp import pSp
from util import *
from huggingface_hub import hf_hub_download
from google.colab import files
torch.save({"g": generator.state_dict()}, "your-model-name.pt")
files.download('your-model-name.pt')
latent_dim = 512
device="cuda"
model_path_s = hf_hub_download(repo_id="akhaliq/jojogan-stylegan2-ffhq-config-f", filename="stylegan2-ffhq-config-f.pt")
original_generator = Generator(1024, latent_dim, 8, 2).to(device)
ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)
original_generator.load_state_dict(ckpt["g_ema"], strict=False)
mean_latent = original_generator.mean_latent(10000)
generator = deepcopy(original_generator)
ckpt = torch.load("/content/JoJoGAN/your-model-name.pt", map_location=lambda storage, loc: storage)
generator.load_state_dict(ckpt["g"], strict=False)
generator.eval()
plt.rcParams['figure.dpi'] = 150
transform = transforms.Compose(
    [
        transforms.Resize((1024, 1024)),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)

def inference(img):
    img.save('out.jpg')
    aligned_face = align_face('out.jpg')
    my_w = e4e_projection(aligned_face, "out.pt", device).unsqueeze(0)
    with torch.no_grad():
        my_sample = generator(my_w, input_is_latent=True)
    npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()
    imageio.imwrite('filename.jpeg', npimage)
    return 'filename.jpeg'
```
5. Build a Gradio Demo
```python
import gradio as gr

title = "JoJoGAN"
description = "Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below."

demo = gr.Interface(
    inference,
    gr.Image(type="pil"),
    gr.Image(type="filepath"),  # the inference function returns a file path
    title=title,
    description=description
)

demo.launch(share=True)
```
6. Integrate Gradio into your W&B Dashboard
The last step, integrating your Gradio demo with your W&B dashboard, is just one extra line:
```python
demo.integrate(wandb=wandb)
```
Once you call integrate, a demo will be created and you can integrate it into your dashboard or report.
Outside of W&B, anyone can embed Gradio demos hosted on HF Spaces directly into their blogs, websites, and documentation using web components and the `gradio-app` tag:
```html
<gradio-app space="akhaliq/JoJoGAN"> </gradio-app>
```
7. (Optional) Embed W&B plots in your Gradio App
It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and
embed them within your Gradio app within a `gr.HTML` block.
The Report will need to be public and you will need to wrap the URL within an iFrame like this:
```python
import gradio as gr
def wandb_report(url):
    iframe = f'<iframe src={url} style="border:none;height:1024px;width:100%">'
    return gr.HTML(iframe)
with gr.Blocks() as demo:
    report_url = 'https://wandb.ai/_scott/pytorch-sweeps-demo/reports/loss-22-10-07-16-00-17---VmlldzoyNzU2NzAx'
    report = wandb_report(report_url)

demo.launch(share=True)
```
Conclusion
We hope you enjoyed this brief demo of embedding a Gradio demo in a W&B report! Thanks for making it to the end. To recap:
- Only a single reference image is needed for fine-tuning JoJoGAN, which usually takes about 1 minute on a GPU in Colab. After training, the style can be applied to any input image. Read more in the paper.
- W&B tracks experiments with just a few lines of code added to a colab and you can visualize, sort, and understand your experiments in a single, centralized dashboard.
- Gradio, meanwhile, demos the model in a user friendly interface to share anywhere on the web.
How to contribute Gradio demos on HF spaces on the Wandb organization
- Create an account on Hugging Face [here](https://huggingface.co/join).
- Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.
- Request to join wandb organization [here](https://huggingface.co/wandb).
- Once approved, transfer the model from your username to the Wandb organization.
Create Your Own Friends with a GAN
It seems that cryptocurrencies, [NFTs](https://www.nytimes.com/interactive/2022/03/18/technology/nft-guide.html), and the web3 movement are all the rage these days! Digital assets are being listed on marketplaces for astounding amounts of money, and just about every celebrity is debuting their own NFT collection. While your crypto assets [may be taxable, such as in Canada](https://www.canada.ca/en/revenue-agency/programs/about-canada-revenue-agency-cra/compliance/digital-currency/cryptocurrency-guide.html), today we'll explore some fun and tax-free ways to generate your own assortment of procedurally generated [CryptoPunks](https://www.larvalabs.com/cryptopunks).
Generative Adversarial Networks, often known just as _GANs_, are a specific class of deep-learning models that are designed to learn from an input dataset to create (_generate!_) new material that is convincingly similar to elements of the original training set. Famously, the website [thispersondoesnotexist.com](https://thispersondoesnotexist.com/) went viral with lifelike, yet synthetic, images of people generated with a model called StyleGAN2. GANs have gained traction in the machine learning world, and are now being used to generate all sorts of images, text, and even [music](https://salu133445.github.io/musegan/)!
Today we'll briefly look at the high-level intuition behind GANs, and then we'll build a small demo around a pre-trained GAN to see what all the fuss is about. Here's a [peek](https://nimaboscarino-cryptopunks.hf.space) at what we're going to be putting together.
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). To use the pretrained model, also install `torch` and `torchvision`.
GANs: a very brief introduction
Originally proposed in [Goodfellow et al. 2014](https://arxiv.org/abs/1406.2661), GANs are made up of neural networks which compete with the intention of outsmarting each other. One network, known as the _generator_, is responsible for generating images. The other network, the _discriminator_, receives an image at a time from the generator along with a **real** image from the training data set. The discriminator then has to guess: which image is the fake?
The generator is constantly training to create images which are trickier for the discriminator to identify, while the discriminator raises the bar for the generator every time it correctly detects a fake. As the networks engage in this competitive (_adversarial!_) relationship, the images that get generated improve to the point where they become indistinguishable to human eyes!
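To make that loop concrete, here is a minimal sketch of a single adversarial training step in PyTorch. This is generic GAN training, not the exact CryptoPunks recipe; it assumes a `generator`, a `discriminator` ending in a sigmoid, their optimizers, and a batch of `real_images` already exist:
```python
import torch
from torch import nn

criterion = nn.BCELoss()

def training_step(generator, discriminator, g_optim, d_optim, real_images, nz=100):
    batch_size = real_images.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator: learn to tell real images apart from generated ones
    d_optim.zero_grad()
    fake_images = generator(torch.randn(batch_size, nz, 1, 1))
    d_loss = (criterion(discriminator(real_images), real_labels)
              + criterion(discriminator(fake_images.detach()), fake_labels))
    d_loss.backward()
    d_optim.step()

    # Generator: learn to make the discriminator label its fakes as real
    g_optim.zero_grad()
    g_loss = criterion(discriminator(fake_images), real_labels)
    g_loss.backward()
    g_optim.step()
```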
For a more in-depth look at GANs, you can take a look at [this excellent post on Analytics Vidhya](https://www.analyticsvidhya.com/blog/2021/06/a-detailed-explanation-of-gan-with-implementation-using-tensorflow-and-keras/) or this [PyTorch tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html). For now, though, we'll dive into a demo!
Step 1 - Create the Generator model
To generate new images with a GAN, you only need the generator model. There are many different architectures that the generator could use, but for this demo we'll use a pretrained GAN generator model with the following architecture:
```python
from torch import nn

class Generator(nn.Module):
    # Refer to the link below for explanations about nc, nz, and ngf
    # https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#inputs
    def __init__(self, nc=4, nz=100, ngf=64):
        super(Generator, self).__init__()
        self.network = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, input):
        output = self.network(input)
        return output
```
We're taking the generator from [this repo by @teddykoker](https://github.com/teddykoker/cryptopunks-gan/blob/main/train.py#L90), where you can also see the original discriminator model structure.
After instantiating the model, we'll load in the weights from the Hugging Face Hub, stored at [nateraw/cryptopunks-gan](https://huggingface.co/nateraw/cryptopunks-gan):
```python
from huggingface_hub import hf_hub_download
import torch
model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu')))  # Use 'cuda' if you have a GPU available
```
Step 2 - Defining a `predict` function
The `predict` function is the key to making Gradio work! Whatever inputs we choose through the Gradio interface will get passed through our `predict` function, which should operate on the inputs and generate outputs that we can display with Gradio output components. For GANs it's common to pass random noise into our model as the input, so we'll generate a tensor of random numbers and pass that through the model. We can then use `torchvision`'s `save_image` function to save the output of the model as a `png` file, and return the file name:
```python
from torchvision.utils import save_image

def predict(seed):
    num_punks = 4
    torch.manual_seed(seed)
    z = torch.randn(num_punks, 100, 1, 1)
    punks = model(z)
    save_image(punks, "punks.png", normalize=True)
    return 'punks.png'
```
We're giving our `predict` function a `seed` parameter, so that we can fix the random tensor generation with a seed. We'll then be able to reproduce punks if we want to see them again by passing in the same seed.
_Note!_ Our model needs an input tensor of dimensions 100x1x1 to do a single inference, or (BatchSize)x100x1x1 for generating a batch of images. In this demo we'll start by generating 4 punks at a time.
Step 3 - Creating a Gradio interface
At this point you can even run the code you have with `predict(<SOME_NUMBER>)`, and you'll find your freshly generated punks in your file system at `./punks.png`. To make a truly interactive demo, though, we'll build out a simple interface with Gradio. Our goals here are to:
- Set a slider input so users can choose the "seed" value
- Use an image component for our output to showcase the generated punks
- Use our `predict()` to take the seed and generate the images
With `gr.Interface()`, we can define all of that with a single function call:
```python
import gradio as gr
gr.Interface(
    predict,
    inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
    ],
    outputs="image",
).launch()
```
Step 4 - Even more punks!
Generating 4 punks at a time is a good start, but maybe we'd like to control how many we want to make each time. Adding more inputs to our Gradio interface is as simple as adding another item to the `inputs` list that we pass to `gr.Interface`:
```python
gr.Interface(
    predict,
    inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
        gr.Slider(4, 64, label='Number of Punks', step=1, value=10),  # Adding another slider!
    ],
    outputs="image",
).launch()
```
The new input will be passed to our `predict()` function, so we have to make some changes to that function to accept a new parameter:
```python
def predict(seed, num_punks):
    torch.manual_seed(seed)
    z = torch.randn(num_punks, 100, 1, 1)
    punks = model(z)
    save_image(punks, "punks.png", normalize=True)
    return 'punks.png'
```
When you relaunch your interface, you should see a second slider that'll let you control the number of punks!
Step 5 - Polishing it up
Your Gradio app is pretty much good to go, but you can add a few extra things to really make it ready for the spotlight β¨
We can add some examples that users can easily try out by adding this to the `gr.Interface`:
```python
gr.Interface(
    ...,  # keep everything as it is, and then add:
    examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],
).launch(cache_examples=True)  # cache_examples is optional
```
The `examples` parameter takes a list of lists, where each item in the sublists is ordered in the same order that we've listed the `inputs`. So in our case, `[seed, num_punks]`. Give it a try!
You can also try adding a `title`, `description`, and `article` to the `gr.Interface`. Each of those parameters accepts a string, so try it out and see what happens π `article` will also accept HTML, as [explored in a previous guide](/guides/key-features/descriptive-content)!
When you're all done, you may end up with something like [this](https://nimaboscarino-cryptopunks.hf.space).
For reference, here is our full code:
```python
import torch
from torch import nn
from huggingface_hub import hf_hub_download
from torchvision.utils import save_image
import gradio as gr
class Generator(nn.Module):
    # Refer to the link below for explanations about nc, nz, and ngf
    # https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#inputs
    def __init__(self, nc=4, nz=100, ngf=64):
        super(Generator, self).__init__()
        self.network = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, input):
        output = self.network(input)
        return output
model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu')))  # Use 'cuda' if you have a GPU available
def predict(seed, num_punks):
    torch.manual_seed(seed)
    z = torch.randn(num_punks, 100, 1, 1)
    punks = model(z)
    save_image(punks, "punks.png", normalize=True)
    return 'punks.png'
gr.Interface(
    predict,
    inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
        gr.Slider(4, 64, label='Number of Punks', step=1, value=10),
    ],
    outputs="image",
    examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],
).launch(cache_examples=True)
```
---
Congratulations! You've built out your very own GAN-powered CryptoPunks generator, with a fancy Gradio interface that makes it easy for anyone to use. Now you can [scour the Hub for more GANs](https://huggingface.co/models?other=gan) (or train your own) and continue making even more awesome demos π€
Image Classification in PyTorch
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo at the bottom of the page.
Let's get started!
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). We will be using a pretrained image classification model, so you should also have `torch` installed.
Step 1 - Setting up the Image Classification Model
First, we will need an image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.
```python
import torch
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()
```
Because we will be using the model for inference, we have called the `.eval()` method.
|
Step 1 - Setting up the Image Classification Model
|
https://gradio.app/guides/image-classification-in-pytorch
|
Other Tutorials - Image Classification In Pytorch Guide
|
Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are the class names and whose values are the confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
In the case of our pretrained model, it will look like this:
```python
import requests
from PIL import Image
from torchvision import transforms
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def predict(inp):
inp = transforms.ToTensor()(inp).unsqueeze(0)
with torch.no_grad():
prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
return confidences
```
Let's break this down. The function takes one parameter:
- `inp`: the input image as a `PIL` image
The function then converts the image to a PyTorch `tensor`, passes it through the model, and returns:
- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
|
Step 2 - Defining a `predict` function
|
https://gradio.app/guides/image-classification-in-pytorch
|
Other Tutorials - Image Classification In Pytorch Guide
|
Now that we have our predictive function set up, we can create a Gradio Interface around it.
In this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type="pil")` which creates the component and handles the preprocessing to convert that to a `PIL` image.
The output component will be a `Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 classes by constructing it as `Label(num_top_classes=3)`.
Finally, we'll add one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples. The code for Gradio looks like this:
```python
import gradio as gr
gr.Interface(fn=predict,
inputs=gr.Image(type="pil"),
outputs=gr.Label(num_top_classes=3),
examples=["lion.jpg", "cheetah.jpg"]).launch()
```
This produces the following interface, which you can try right here in your browser (try uploading your own examples!):
<gradio-app space="gradio/pytorch-image-classifier">
---
And you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!
|
Step 3 - Creating a Gradio Interface
|
https://gradio.app/guides/image-classification-in-pytorch
|
Other Tutorials - Image Classification In Pytorch Guide
|
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.
State-of-the-art image classifiers are based on the _transformers_ architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like the demo on the bottom of the page.
Let's get started!
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started).
|
Introduction
|
https://gradio.app/guides/image-classification-with-vision-transformers
|
Other Tutorials - Image Classification With Vision Transformers Guide
|
First, we will need an image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models?pipeline_tag=image-classification). The Hub contains thousands of models covering dozens of different machine learning tasks.
Expand the Tasks category on the left sidebar and select "Image Classification" as our task of interest. You will then see all of the models on the Hub that are designed to classify images.
At the time of writing, the most popular one is `google/vit-base-patch16-224`, which has been trained on ImageNet images at a resolution of 224x224 pixels. We will use this model for our demo.
|
Step 1 - Choosing a Vision Image Classification Model
|
https://gradio.app/guides/image-classification-with-vision-transformers
|
Other Tutorials - Image Classification With Vision Transformers Guide
|
When using a model from the Hugging Face Hub, we do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.
All of these are automatically inferred from the model tags.
Besides the import statement, it only takes a single line of Python to load and launch the demo.
We use the `gr.Interface.load()` method and pass in the path to the model, including the `huggingface/` prefix to designate that it is from the Hugging Face Hub.
```python
import gradio as gr
gr.Interface.load(
"huggingface/google/vit-base-patch16-224",
examples=["alligator.jpg", "laptop.jpg"]).launch()
```
Notice that we have added one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples.
This produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!
<gradio-app space="gradio/vision-transformer">
---
And you're done! In one line of code, you have built a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!
|
Step 2 - Loading the Vision Transformer Model with Gradio
|
https://gradio.app/guides/image-classification-with-vision-transformers
|
Other Tutorials - Image Classification With Vision Transformers Guide
|
Data visualization is a crucial aspect of data analysis and machine learning. The Gradio `DataFrame` component is a popular way to display tabular data within a web application.
But what if you want to stylize the table of data? What if you want to add background colors, partially highlight cells, or change the display precision of numbers? This Guide is for you!
Let's dive in!
**Prerequisites**: We'll be using the `gradio.Blocks` class in our examples.
You can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.
|
Introduction
|
https://gradio.app/guides/styling-the-gradio-dataframe
|
Other Tutorials - Styling The Gradio Dataframe Guide
|
The Gradio `DataFrame` component now supports values of type `Styler` from `pandas`. This allows us to reuse the rich existing API and documentation of the `Styler` class instead of inventing a new style format of our own. Here's a complete example of how it looks:
```python
import pandas as pd
import gradio as gr
# Creating a sample dataframe
df = pd.DataFrame({
"A" : [14, 4, 5, 4, 1],
"B" : [5, 2, 54, 3, 2],
"C" : [20, 20, 7, 3, 8],
"D" : [14, 3, 6, 2, 6],
"E" : [23, 45, 64, 32, 23]
})
# Applying style to highlight the maximum value in each column
styler = df.style.highlight_max(color = 'lightgreen', axis = 0)
# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
gr.DataFrame(styler)
demo.launch()
```
The Styler class can be used to apply conditional formatting and styling to dataframes, making them more visually appealing and interpretable. You can highlight certain values, apply gradients, or even use custom CSS to style the DataFrame. The Styler object is applied to a DataFrame and it returns a new object with the relevant styling properties, which can then be previewed directly, or rendered dynamically in a Gradio interface.
To read more about the Styler object, read the official `pandas` documentation at: https://pandas.pydata.org/docs/user_guide/style.html
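For instance, a color gradient is a one-liner with the `Styler` API (a minimal sketch on a toy dataframe; `background_gradient` requires `matplotlib` to be installed):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [30, 20, 10]})

# Shade each column on a continuous color scale
gradient_styler = df.style.background_gradient(cmap="viridis")
```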
Below, we'll explore a few examples:
Highlighting Cells
Ok, so let's revisit the previous example. We start by creating a `pd.DataFrame` object and then highlight the highest value in each column with a light green color:
```python
import pandas as pd
# Creating a sample dataframe
df = pd.DataFrame({
"A" : [14, 4, 5, 4, 1],
"B" : [5, 2, 54, 3, 2],
"C" : [20, 20, 7, 3, 8],
"D" : [14, 3, 6, 2, 6],
"E" : [23, 45, 64, 32, 23]
})
# Applying style to highlight the maximum value in each column
styler = df.style.highlight_max(color = 'lightgreen', axis = 0)
```
Now, we simply pass this object into the Gradio `DataFra
|
The Pandas `Styler`
|
https://gradio.app/guides/styling-the-gradio-dataframe
|
Other Tutorials - Styling The Gradio Dataframe Guide
|
, 32, 23]
})
# Applying style to highlight the maximum value in each column
styler = df.style.highlight_max(color = 'lightgreen', axis = 0)
```
Now, we simply pass this object into the Gradio `DataFrame` and we can visualize our colorful table of data in 4 lines of Python:
```python
import gradio as gr
with gr.Blocks() as demo:
gr.Dataframe(styler)
demo.launch()
```
Here's how it looks:

Font Colors
Apart from highlighting cells, you might want to color specific text within the cells. Here's how you can change text colors for certain columns:
```python
import pandas as pd
import gradio as gr
# Creating a sample dataframe
df = pd.DataFrame({
"A" : [14, 4, 5, 4, 1],
"B" : [5, 2, 54, 3, 2],
"C" : [20, 20, 7, 3, 8],
"D" : [14, 3, 6, 2, 6],
"E" : [23, 45, 64, 32, 23]
})
# Function to apply text color
def highlight_cols(x):
df = x.copy()
df.loc[:, :] = 'color: purple'
df[['B', 'C', 'E']] = 'color: green'
return df
# Applying the style function
s = df.style.apply(highlight_cols, axis = None)
# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
gr.DataFrame(s)
demo.launch()
```
In this script, we define a custom function `highlight_cols` that changes the text color to purple for all cells, but overrides this for columns B, C, and E with green. Here's how it looks:

Display Precision
Sometimes, the data you are dealing with might have long floating-point numbers, and you may want to display only a fixed number of decimals for simplicity. The pandas Styler object allows you to format the precision of the numbers displayed. Here's how you can do this:
```python
import pandas as pd
import gradio as gr
# Creating a sample dataframe with floating-point numbers
df = pd.DataFrame({
"A" : [14.12345, 4.
|
The Pandas `Styler`
|
https://gradio.app/guides/styling-the-gradio-dataframe
|
Other Tutorials - Styling The Gradio Dataframe Guide
|
on of numbers displayed. Here's how you can do this:
```python
import pandas as pd
import gradio as gr
# Creating a sample dataframe with floating-point numbers
df = pd.DataFrame({
"A" : [14.12345, 4.23456, 5.34567, 4.45678, 1.56789],
"B" : [5.67891, 2.78912, 54.89123, 3.91234, 2.12345],
    # ... other columns
})
# Setting the precision of numbers to 2 decimal places
s = df.style.format("{:.2f}")
# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
gr.DataFrame(s)
demo.launch()
```
In this script, the `format` method of the Styler object is used to set the precision of numbers to two decimal places. Much cleaner now:

|
The Pandas `Styler`
|
https://gradio.app/guides/styling-the-gradio-dataframe
|
Other Tutorials - Styling The Gradio Dataframe Guide
|
So far, we've been restricting ourselves to styling that is supported by the Pandas `Styler` class. But what if you want to create custom styles like partially highlighting cells based on their values:

This isn't possible with `Styler`, but you can do this by creating your own **`styling`** array, which is a 2D array the same size and shape as your data. Each element in this array should be a CSS style string (e.g. `"background-color: green"`) that applies to the `<td>` element containing the cell value (or an empty string if no custom CSS should be applied). Similarly, you can create a **`display_value`** array, which controls the value that is displayed in each cell (and which can be different from the underlying value, the one used for searching/sorting).
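To make the idea concrete, here is a rough sketch of what such arrays look like. The exact value format that `gr.Dataframe` accepts is shown in the complete listing below, so treat the dict layout here as an assumption:

```python
import gradio as gr

data = [[1, 25], [2, 92]]

# 2D arrays with the same shape as `data`: a CSS string (or "") per cell,
# and a display string per cell
styling = [["", "background-color: lightgreen"], ["", ""]]
display_value = [["1", "25%"], ["2", "92%"]]

with gr.Blocks() as demo:
    gr.Dataframe(
        # Assumed value format: a dict bundling the data with a "metadata" key
        value={
            "data": data,
            "headers": ["id", "score"],
            "metadata": {"styling": styling, "display_value": display_value},
        },
        interactive=False,  # custom styling only applies in static mode
    )

demo.launch()
```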
Here's the complete code for how you can use custom styling with `gr.Dataframe`, as in the screenshot above:
$code_dataframe_custom_styling
|
Custom Styling
|
https://gradio.app/guides/styling-the-gradio-dataframe
|
Other Tutorials - Styling The Gradio Dataframe Guide
|
One thing to keep in mind is that the Gradio `DataFrame` component only accepts custom styling objects when it is non-interactive (i.e. in "static" mode). If the `DataFrame` component is interactive, then the styling information is ignored and the raw table values are shown instead.
The `DataFrame` component is non-interactive by default, unless it is used as an input to an event, in which case you can force the component to be non-interactive by setting the `interactive` prop like this:
```python
c = gr.DataFrame(styler, interactive=False)
```
|
Note about Interactivity
|
https://gradio.app/guides/styling-the-gradio-dataframe
|
Other Tutorials - Styling The Gradio Dataframe Guide
|
This is just a taste of what's possible using the `gradio.DataFrame` component with the `Styler` class from `pandas`. Try it out and let us know what you think!
|
Conclusion 🎉
|
https://gradio.app/guides/styling-the-gradio-dataframe
|
Other Tutorials - Styling The Gradio Dataframe Guide
|
By default, every Gradio demo includes a built-in queuing system that scales to thousands of requests. When a user of your app submits a request (i.e. submits an input to your function), Gradio adds the request to the queue, and requests are processed in order, generally speaking (this is not exactly true, as discussed below). When the user's request has finished processing, the Gradio server returns the result back to the user using server-sent events (SSE). The SSE protocol has several advantages over simply using HTTP POST requests:
(1) They do not time out -- most browsers raise a timeout error if they do not get a response to a POST request after a short period of time (e.g. 1 min). This can be a problem if your inference function takes longer than 1 minute to run or if many people are trying out your demo at the same time, resulting in increased latency.
(2) They allow the server to send multiple updates to the frontend. This means, for example, that the server can send a real-time ETA of how long your prediction will take to complete.
To configure the queue, simply call the `.queue()` method before launching an `Interface`, `TabbedInterface`, `ChatInterface` or any `Blocks`. Here's an example:
```py
import gradio as gr
app = gr.Interface(lambda x:x, "image", "image")
app.queue()  # <-- Sets up a queue with default parameters
app.launch()
```
**How Requests are Processed from the Queue**
When a Gradio server is launched, a pool of threads is used to execute requests from the queue. By default, the maximum size of this thread pool is `40` (which is the default inherited from FastAPI, on which the Gradio server is based). However, this does *not* mean that 40 requests are always processed in parallel from the queue.
Instead, Gradio uses a **single-function-single-worker** model by default. This means that each worker thread is only assigned a single function from among all of the functions that could be part of your Gradio app. This ensures that you do
|
Overview of Gradio's Queueing System
|
https://gradio.app/guides/setting-up-a-demo-for-maximum-performance
|
Other Tutorials - Setting Up A Demo For Maximum Performance Guide
|
-single-worker** model by default. This means that each worker thread is only assigned a single function from among all of the functions that could be part of your Gradio app. This ensures that you do not see, for example, out-of-memory errors, due to multiple workers calling a machine learning model at the same time. Suppose you have 3 functions in your Gradio app: A, B, and C. And you see the following sequence of 7 requests come in from users using your app:
```
1 2 3 4 5 6 7
-------------
A B A A C B A
```
Initially, 3 workers will get dispatched to handle requests 1, 2, and 5 (corresponding to functions: A, B, C). As soon as any of these workers finish, they will start processing the next function in the queue of the same function type, e.g. the worker that finished processing request 1 will start processing request 3, and so on.
If you want to change this behavior, there are several parameters that can be used to configure the queue and help reduce latency. Let's go through them one-by-one.
The `default_concurrency_limit` parameter in `queue()`
The first parameter we will explore is the `default_concurrency_limit` parameter in `queue()`. This controls how many workers can execute the same event. By default, this is set to `1`, but you can set it to a higher integer: `2`, `10`, or even `None` (in the last case, there is no limit besides the total number of available workers).
This is useful, for example, if your Gradio app does not call any resource-intensive functions. If your app only queries external APIs, then you can set the `default_concurrency_limit` much higher. Increasing this parameter can **linearly multiply the capacity of your server to handle requests**.
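For example, raising the limit looks like this:

```python
import gradio as gr

app = gr.Interface(lambda x: x, "image", "image")
app.queue(default_concurrency_limit=5)  # up to 5 workers can run the same event concurrently
app.launch()
```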
So why not set this parameter much higher all the time? Keep in mind that since requests are processed in parallel, each request will consume memory to store the data and weights for processing. This means that you might get out-of-memory errors if you increase the `default_concurrenc
|
Overview of Gradio's Queueing System
|
https://gradio.app/guides/setting-up-a-demo-for-maximum-performance
|
Other Tutorials - Setting Up A Demo For Maximum Performance Guide
|
sts are processed in parallel, each request will consume memory to store the data and weights for processing. This means that you might get out-of-memory errors if you increase the `default_concurrency_limit` too high. You may also start to get diminishing returns if the `default_concurrency_limit` is too high because of costs of switching between different worker threads.
**Recommendation**: Increase the `default_concurrency_limit` parameter as high as you can while you continue to see performance gains or until you hit memory limits on your machine. You can [read about Hugging Face Spaces machine specs here](https://huggingface.co/docs/hub/spaces-overview).
The `concurrency_limit` parameter in events
You can also set the number of requests that can be processed in parallel for each event individually. These take priority over the `default_concurrency_limit` parameter described previously.
To do this, set the `concurrency_limit` parameter of any event listener, e.g. `btn.click(..., concurrency_limit=20)` or in the `Interface` or `ChatInterface` classes: e.g. `gr.Interface(..., concurrency_limit=20)`. By default, this parameter is set to the global `default_concurrency_limit`.
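As a sketch, a Blocks app might let a lightweight event run widely in parallel while keeping a heavyweight one serialized (the two functions here are illustrative stand-ins):

```python
import gradio as gr

def fast_fn(x):
    return x  # e.g. a quick external API call

def heavy_fn(x):
    return x  # e.g. a large model -- keep this serialized

with gr.Blocks() as demo:
    inp = gr.Textbox()
    out = gr.Textbox()
    fast_btn = gr.Button("Fast")
    heavy_btn = gr.Button("Heavy")
    fast_btn.click(fast_fn, inp, out, concurrency_limit=20)
    heavy_btn.click(heavy_fn, inp, out, concurrency_limit=1)

demo.queue().launch()
```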
The `max_threads` parameter in `launch()`
If your demo uses non-async functions, e.g. `def` instead of `async def`, they will be run in a threadpool. This threadpool has a size of 40 meaning that only 40 threads can be created to run your non-async functions. If you are running into this limit, you can increase the threadpool size with `max_threads`. The default value is 40.
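For example:

```python
import gradio as gr

app = gr.Interface(lambda x: x, "textbox", "textbox")
app.launch(max_threads=80)  # raise the threadpool size from the default of 40
```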
Tip: You should use async functions whenever possible to increase the number of concurrent requests your app can handle. Quick functions that are not CPU-bound are good candidates to be written as `async`. This [guide](https://fastapi.tiangolo.com/async/) is a good primer on the concept.
The `max_size` parameter in `queue()`
A more blunt way to reduce the wait times is simply to prevent too many pe
|
Overview of Gradio's Queueing System
|
https://gradio.app/guides/setting-up-a-demo-for-maximum-performance
|
Other Tutorials - Setting Up A Demo For Maximum Performance Guide
|
is [guide](https://fastapi.tiangolo.com/async/) is a good primer on the concept.
The `max_size` parameter in `queue()`
A more blunt way to reduce the wait times is simply to prevent too many people from joining the queue in the first place. You can set the maximum number of requests that the queue processes using the `max_size` parameter of `queue()`. If a request arrives when the queue is already of the maximum size, it will not be allowed to join the queue and instead, the user will receive an error saying that the queue is full and to try again. By default, `max_size=None`, meaning that there is no limit to the number of users that can join the queue.
Paradoxically, setting a `max_size` can often improve user experience because it prevents users from being dissuaded by very long queue wait times. Users who are more interested and invested in your demo will keep trying to join the queue, and will be able to get their results faster.
**Recommendation**: For a better user experience, set a `max_size` that is reasonable given your expectations of how long users might be willing to wait for a prediction.
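For example:

```python
import gradio as gr

app = gr.Interface(lambda x: x, "image", "image")
app.queue(max_size=20)  # requests that arrive while 20 are already waiting get a "queue full" error
app.launch()
```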
The `max_batch_size` parameter in events
Another way to increase the parallelism of your Gradio demo is to write your function so that it can accept **batches** of inputs. Most deep learning models can process batches of samples more efficiently than processing individual samples.
If you write your function to process a batch of samples, Gradio will automatically batch incoming requests together and pass them into your function as a batch of samples. You need to set `batch` to `True` (by default it is `False`) and set a `max_batch_size` (by default it is `4`) based on the maximum number of samples your function is able to handle. These two parameters can be passed into `gr.Interface()` or to an event in Blocks such as `.click()`.
While setting a batch is conceptually similar to having workers process requests in parallel, it is often _faster_ than set
|
Overview of Gradio's Queueing System
|
https://gradio.app/guides/setting-up-a-demo-for-maximum-performance
|
Other Tutorials - Setting Up A Demo For Maximum Performance Guide
|
e passed into `gr.Interface()` or to an event in Blocks such as `.click()`.
While setting a batch is conceptually similar to having workers process requests in parallel, it is often _faster_ than setting the `concurrency_count` for deep learning models. The downside is that you might need to adapt your function a little bit to accept batches of samples instead of individual samples.
Here's an example of a function that does _not_ accept a batch of inputs -- it processes a single input at a time:
```py
import time
def trim_words(word, length):
return word[:int(length)]
```
Here's the same function rewritten to take in a batch of samples:
```py
import time
def trim_words(words, lengths):
trimmed_words = []
for w, l in zip(words, lengths):
trimmed_words.append(w[:int(l)])
return [trimmed_words]
```
The second function can be used with `batch=True` and an appropriate `max_batch_size` parameter.
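Wiring it up might look like this (a sketch; the component choices are illustrative):

```python
import gradio as gr

demo = gr.Interface(
    trim_words,             # the batched function defined above
    ["textbox", "number"],  # one component per (batched) argument
    ["textbox"],
    batch=True,
    max_batch_size=16,
)
demo.queue().launch()
```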
**Recommendation**: If possible, write your function to accept batches of samples, and then set `batch` to `True` and the `max_batch_size` as high as possible based on your machine's memory limits.
|
Overview of Gradio's Queueing System
|
https://gradio.app/guides/setting-up-a-demo-for-maximum-performance
|
Other Tutorials - Setting Up A Demo For Maximum Performance Guide
|
If you have done everything above, and your demo is still not fast enough, you can upgrade the hardware that your model is running on. Changing the model from running on CPUs to running on GPUs will usually provide a 10x-50x speedup in inference for deep learning models.
It is particularly straightforward to upgrade your Hardware on Hugging Face Spaces. Simply click on the "Settings" tab in your Space and choose the Space Hardware you'd like.

While you might need to adapt portions of your machine learning inference code to run on a GPU (here's a [handy guide](https://cnvrg.io/pytorch-cuda/) if you are using PyTorch), Gradio is completely agnostic to the choice of hardware and will work completely fine if you use it with CPUs, GPUs, TPUs, or any other hardware!
Note: your GPU memory is different than your CPU memory, so if you upgrade your hardware,
you might need to adjust the value of the `default_concurrency_limit` parameter described above.
|
Upgrading your Hardware (GPUs, TPUs, etc.)
|
https://gradio.app/guides/setting-up-a-demo-for-maximum-performance
|
Other Tutorials - Setting Up A Demo For Maximum Performance Guide
|
Congratulations! You know how to set up a Gradio demo for maximum performance. Good luck on your next viral demo!
|
Conclusion
|
https://gradio.app/guides/setting-up-a-demo-for-maximum-performance
|
Other Tutorials - Setting Up A Demo For Maximum Performance Guide
|
The Hugging Face Hub is a central platform that has hundreds of thousands of [models](https://huggingface.co/models), [datasets](https://huggingface.co/datasets) and [demos](https://huggingface.co/spaces) (also known as Spaces).
Gradio has multiple features that make it extremely easy to leverage existing models and Spaces on the Hub. This guide walks through these features.
|
Introduction
|
https://gradio.app/guides/using-hugging-face-integrations
|
Other Tutorials - Using Hugging Face Integrations Guide
|
Hugging Face has a service called [Serverless Inference Endpoints](https://huggingface.co/docs/api-inference/index), which allows you to send HTTP requests to models on the Hub. The API includes a generous free tier, and you can switch to [dedicated Inference Endpoints](https://huggingface.co/inference-endpoints/dedicated) when you want to use it in production. Gradio integrates directly with Serverless Inference Endpoints so that you can create a demo simply by specifying a model's name (e.g. `Helsinki-NLP/opus-mt-en-es`), like this:
```python
import gradio as gr
demo = gr.load("Helsinki-NLP/opus-mt-en-es", src="models")
demo.launch()
```
For any Hugging Face model supported in Inference Endpoints, Gradio automatically infers the expected input and output and makes the underlying server calls, so you don't have to worry about defining the prediction function.
Notice that we just specify the model name and state that the `src` should be `models` (Hugging Face's Model Hub). There is no need to install any dependencies (except `gradio`) since you are not loading the model on your computer.
You might notice that the first inference takes a little bit longer. This happens because the Inference Endpoint is loading the model on the server. You get some benefits afterward:
- The inference will be much faster.
- The server caches your requests.
- You get built-in automatic scaling.
|
Demos with the Hugging Face Inference Endpoints
|
https://gradio.app/guides/using-hugging-face-integrations
|
Other Tutorials - Using Hugging Face Integrations Guide
|
[Hugging Face Spaces](https://hf.co/spaces) allows anyone to host their Gradio demos freely, and uploading your Gradio demos takes a couple of minutes. You can head to [hf.co/new-space](https://huggingface.co/new-space), select the Gradio SDK, create an `app.py` file, and voila! You have a demo you can share with anyone else. To learn more, read [this guide on how to host on Hugging Face Spaces using the website](https://huggingface.co/blog/gradio-spaces).
Alternatively, you can create a Space programmatically, making use of the [huggingface_hub client library](https://huggingface.co/docs/huggingface_hub/index). Here's an example:
```python
from huggingface_hub import (
create_repo,
get_full_repo_name,
upload_file,
)
# `target_space_name` and `hf_token` are placeholders: your Space name and a token with write access
create_repo(name=target_space_name, token=hf_token, repo_type="space", space_sdk="gradio")
repo_name = get_full_repo_name(model_id=target_space_name, token=hf_token)
file_url = upload_file(
path_or_fileobj="file.txt",
path_in_repo="app.py",
repo_id=repo_name,
repo_type="space",
token=hf_token,
)
```
Here, `create_repo` creates a Gradio repo with the target name under a specific account using that account's Write Token. `repo_name` gets the full repo name of the related repo. Finally, `upload_file` uploads a file to the repo under the name `app.py`.
|
Hosting your Gradio demos on Spaces
|
https://gradio.app/guides/using-hugging-face-integrations
|
Other Tutorials - Using Hugging Face Integrations Guide
|
You can also use and remix existing Gradio demos on Hugging Face Spaces. For example, you could take two existing Gradio demos on Spaces and put them as separate tabs and create a new demo. You can run this new demo locally, or upload it to Spaces, allowing endless possibilities to remix and create new demos!
Here's an example that does exactly that:
```python
import gradio as gr
with gr.Blocks() as demo:
with gr.Tab("Translate to Spanish"):
gr.load("gradio/en2es", src="spaces")
with gr.Tab("Translate to French"):
gr.load("abidlabs/en2fr", src="spaces")
demo.launch()
```
Notice that we use `gr.load()`, the same method we used to load models using Inference Endpoints. However, here we specify that the `src` is `spaces` (Hugging Face Spaces).
Note: loading a Space in this way may result in slight differences from the original Space. In particular, any attributes that apply to the entire Blocks, such as the theme or custom CSS/JS, will not be loaded. You can copy these properties from the Space you are loading into your own `Blocks` object.
|
Loading demos from Spaces
|
https://gradio.app/guides/using-hugging-face-integrations
|
Other Tutorials - Using Hugging Face Integrations Guide
|
Hugging Face's popular `transformers` library has a very easy-to-use abstraction, [`pipeline()`](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/pipelines#transformers.pipeline), that handles most of the complex code to offer a simple API for common tasks. By specifying the task and an (optional) model, you can build a demo around an existing model with a few lines of Python:
```python
import gradio as gr
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
def predict(text):
return pipe(text)[0]["translation_text"]
demo = gr.Interface(
fn=predict,
inputs='text',
outputs='text',
)
demo.launch()
```
But `gradio` actually makes it even easier to convert a `pipeline` to a demo, simply by using the `gradio.Interface.from_pipeline` method, which skips the need to specify the input and output components:
```python
from transformers import pipeline
import gradio as gr
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
demo = gr.Interface.from_pipeline(pipe)
demo.launch()
```
The previous code produces the following interface, which you can try right here in your browser:
<gradio-app space="gradio/en2es"></gradio-app>
|
Demos with the `Pipeline` in `transformers`
|
https://gradio.app/guides/using-hugging-face-integrations
|
Other Tutorials - Using Hugging Face Integrations Guide
|
That's it! Let's recap the various ways Gradio and Hugging Face work together:
1. You can build a demo around Inference Endpoints without having to load the model, by using `gr.load()`.
2. You host your Gradio demo on Hugging Face Spaces, either using the GUI or entirely in Python.
3. You can load demos from Hugging Face Spaces to remix and create new Gradio demos using `gr.load()`.
4. You can convert a `transformers` pipeline into a Gradio demo using `from_pipeline()`.
🤗
|
Recap
|
https://gradio.app/guides/using-hugging-face-integrations
|
Other Tutorials - Using Hugging Face Integrations Guide
|
Encoder functions to send audio as base64-encoded data and images as base64-encoded JPEG.
```python
import base64
import numpy as np
from io import BytesIO
from PIL import Image
def encode_audio(data: np.ndarray) -> dict:
"""Encode audio data (int16 mono) for Gemini."""
return {
"mime_type": "audio/pcm",
"data": base64.b64encode(data.tobytes()).decode("UTF-8"),
}
def encode_image(data: np.ndarray) -> dict:
with BytesIO() as output_bytes:
pil_image = Image.fromarray(data)
pil_image.save(output_bytes, "JPEG")
bytes_data = output_bytes.getvalue()
base64_str = str(base64.b64encode(bytes_data), "utf-8")
return {"mime_type": "image/jpeg", "data": base64_str}
```
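A quick sanity check of the two encoders on dummy data (the shapes are assumptions: 16 kHz int16 mono audio and a uint8 RGB frame):

```python
import numpy as np

audio_msg = encode_audio(np.zeros(16000, dtype=np.int16))          # 1 s of silence
image_msg = encode_image(np.zeros((100, 100, 3), dtype=np.uint8))  # black frame

print(audio_msg["mime_type"], len(audio_msg["data"]))  # audio/pcm <base64 length>
print(image_msg["mime_type"], len(image_msg["data"]))  # image/jpeg <base64 length>
```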
|
1) Encoders for audio and images
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
This handler:
- Opens a Gemini Live session on startup
- Receives streaming audio from Gemini and yields it back to the client
- Sends microphone audio as it arrives
- Sends a video frame at most once per second (to avoid flooding the API)
- Optionally sends an uploaded image (`gr.Image`) alongside the webcam frame
```python
import asyncio
import os
import time
import numpy as np
import websockets
from dotenv import load_dotenv
from google import genai
from fastrtc import AsyncAudioVideoStreamHandler, wait_for_item, WebRTCError
load_dotenv()
class GeminiHandler(AsyncAudioVideoStreamHandler):
def __init__(self) -> None:
super().__init__(
"mono",
output_sample_rate=24000,
input_sample_rate=16000,
)
self.audio_queue = asyncio.Queue()
self.video_queue = asyncio.Queue()
self.session = None
self.last_frame_time = 0.0
self.quit = asyncio.Event()
def copy(self) -> "GeminiHandler":
return GeminiHandler()
async def start_up(self):
await self.wait_for_args()
api_key = self.latest_args[3]
hf_token = self.latest_args[4]
if hf_token is None or hf_token == "":
raise WebRTCError("HF Token is required")
os.environ["HF_TOKEN"] = hf_token
client = genai.Client(
api_key=api_key, http_options={"api_version": "v1alpha"}
)
config = {"response_modalities": ["AUDIO"], "system_instruction": "You are an art critic that will critique the artwork passed in as an image to the user. Critique the artwork in a funny and lighthearted way. Be concise and to the point. Be friendly and engaging. Be helpful and informative. Be funny and lighthearted."}
async with client.aio.live.connect(
model="gemini-2.0-flash-exp",
config=config,
) as session:
self.session = session
while not self.quit.is_set():
turn = self.session.receiv
|
2) Implement the Gemini audio-video handler
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
model="gemini-2.0-flash-exp",
config=config,
) as session:
self.session = session
while not self.quit.is_set():
turn = self.session.receive()
try:
async for response in turn:
if data := response.data:
audio = np.frombuffer(data, dtype=np.int16).reshape(1, -1)
self.audio_queue.put_nowait(audio)
except websockets.exceptions.ConnectionClosedOK:
print("connection closed")
break
    # Video: receive and (optionally) send frames to Gemini
async def video_receive(self, frame: np.ndarray):
self.video_queue.put_nowait(frame)
if self.session and (time.time() - self.last_frame_time > 1.0):
self.last_frame_time = time.time()
await self.session.send(input=encode_image(frame))
        # If there is an uploaded image passed alongside the WebRTC component,
        # it will be available in latest_args[2]
if self.latest_args[2] is not None:
await self.session.send(input=encode_image(self.latest_args[2]))
async def video_emit(self) -> np.ndarray:
frame = await wait_for_item(self.video_queue, 0.01)
if frame is not None:
return frame
        # Fallback while waiting for first frame
return np.zeros((100, 100, 3), dtype=np.uint8)
    # Audio: forward microphone audio to Gemini
async def receive(self, frame: tuple[int, np.ndarray]) -> None:
_, array = frame
        array = array.squeeze()  # (num_samples,)
audio_message = encode_audio(array)
if self.session:
await self.session.send(input=audio_message)
    # Audio: emit Gemini's audio back to the client
async def emit(self):
array = await wait_for_item(self.audio_queue, 0.01)
if array is not None:
return (self.output_sam
|
2) Implement the Gemini audio-video handler
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
    # Audio: emit Gemini's audio back to the client
async def emit(self):
array = await wait_for_item(self.audio_queue, 0.01)
if array is not None:
return (self.output_sample_rate, array)
return array
async def shutdown(self) -> None:
if self.session:
self.quit.set()
await self.session.close()
self.quit.clear()
```
|
2) Implement the Gemini audio-video handler
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
We'll add an optional `gr.Image` input alongside the `WebRTC` component. The handler will access this in `self.latest_args[2]` when sending frames to Gemini.
```python
import gradio as gr
from fastrtc import Stream, WebRTC, get_hf_turn_credentials
stream = Stream(
handler=GeminiHandler(),
modality="audio-video",
mode="send-receive",
server_rtc_configuration=get_hf_turn_credentials(ttl=600*10000),
rtc_configuration=get_hf_turn_credentials(),
additional_inputs=[
gr.Markdown(
"π¨ Art Critic\n\n"
"Provide an image of your artwork or hold it up to the webcam, and Gemini will critique it for you."
"To get a Gemini API key, please visit the [Gemini API Key](https://aistudio.google.com/apikey) page."
"To get an HF Token, please visit the [HF Token](https://huggingface.co/settings/tokens) page."
),
gr.Image(label="Artwork", value="mona_lisa.jpg", type="numpy", sources=["upload", "clipboard"]),
gr.Textbox(label="Gemini API Key", type="password"),
gr.Textbox(label="HF Token", type="password"),
],
ui_args={
"icon": "https://www.gstatic.com/lamda/images/gemini_favicon_f069958c85030456e93de685481c559f160ea06b.png",
"pulse_color": "rgb(255, 255, 255)",
"icon_button_color": "rgb(255, 255, 255)",
"title": "Gemini Audio Video Chat",
},
time_limit=90,
concurrency_limit=5,
)
if __name__ == "__main__":
stream.ui.launch()
```
References
- Gemini Audio Video Chat reference code: [Hugging Face Space](https://huggingface.co/spaces/gradio/gemini-audio-video/blob/main/app.py)
- FastRTC docs: `https://fastrtc.org`
- Audio + video user guide: `https://fastrtc.org/userguide/audio-video/`
- Gradio component integration: `https://fastrtc.org/userguide/gradio/`
- Cookbook (live demos + code): `https://fastrtc.org/cookbook/`
|
3) Setup Stream and Gradio UI
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
This guide explains how you can use Gradio to plot geographical data on a map using the `gradio.Plot` component. The Gradio `Plot` component works with Matplotlib, Bokeh and Plotly. Plotly is what we will be working with in this guide. Plotly allows developers to easily create all sorts of maps with their geographical data. Take a look [here](https://plotly.com/python/maps/) for some examples.
|
Introduction
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
We will be using the New York City Airbnb dataset, which is hosted on Kaggle [here](https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data). I've uploaded it to the Hugging Face Hub as a dataset [here](https://huggingface.co/datasets/gradio/NYC-Airbnb-Open-Data) for easier use and download. Using this data we will plot Airbnb locations on a map output and allow filtering based on price and location. Below is the demo that we will be building. ⚡️
$demo_map_airbnb
|
Overview
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
Let's start by loading the Airbnb NYC data from the Hugging Face Hub.
```python
from datasets import load_dataset
dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
df = dataset.to_pandas()
def filter_map(min_price, max_price, boroughs):
new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
(df['price'] > min_price) & (df['price'] < max_price)]
names = new_df["name"].tolist()
prices = new_df["price"].tolist()
text_list = [(names[i], prices[i]) for i in range(0, len(names))]
```
In the code above, we first load the data into a pandas dataframe. Next, we define a function that we will use as the prediction function for the gradio app. This function will accept the minimum price and maximum price range as well as the list of boroughs to filter the resulting map. We can use the passed in values (`min_price`, `max_price`, and list of `boroughs`) to filter the dataframe and create `new_df`. Next we will create `text_list` of the names and prices of each Airbnb to use as labels on the map.
|
Step 1 - Loading CSV data 💾
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
Plotly makes it easy to work with maps. Let's take a look below how we can create a map figure.
```python
import plotly.graph_objects as go
fig = go.Figure(go.Scattermapbox(
customdata=text_list,
lat=new_df['latitude'].tolist(),
lon=new_df['longitude'].tolist(),
mode='markers',
marker=go.scattermapbox.Marker(
size=6
),
hoverinfo="text",
hovertemplate='<b>Name</b>: %{customdata[0]}<br><b>Price</b>: $%{customdata[1]}'
))
fig.update_layout(
mapbox_style="open-street-map",
hovermode='closest',
mapbox=dict(
bearing=0,
center=go.layout.mapbox.Center(
lat=40.67,
lon=-73.90
),
pitch=0,
zoom=9
),
)
```
Above, we create a scatter plot on Mapbox by passing it our list of latitudes and longitudes to plot markers. We also pass in our custom data of names and prices for additional info to appear on every marker we hover over. Next we use `update_layout` to specify other map settings such as the zoom level and centering.
More info [here](https://plotly.com/python/scattermapbox/) on scatter plots using Mapbox and Plotly.
|
Step 2 - Map Figure
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
We will use two `gr.Number` components and a `gr.CheckboxGroup` to allow users of our app to specify price ranges and borough locations. We will then use the `gr.Plot` component as an output for our Plotly + Mapbox map we created earlier.
```python
with gr.Blocks() as demo:
with gr.Column():
with gr.Row():
min_price = gr.Number(value=250, label="Minimum Price")
max_price = gr.Number(value=1000, label="Maximum Price")
boroughs = gr.CheckboxGroup(choices=["Queens", "Brooklyn", "Manhattan", "Bronx", "Staten Island"], value=["Queens", "Brooklyn"], label="Select Boroughs:")
btn = gr.Button(value="Update Filter")
map = gr.Plot()
demo.load(filter_map, [min_price, max_price, boroughs], map)
btn.click(filter_map, [min_price, max_price, boroughs], map)
```
We lay out these components using `gr.Column` and `gr.Row`, and we'll also add event triggers for when the demo first loads and when our "Update Filter" button is clicked, in order to trigger the map to update with our new filters.
This is what the full demo code looks like:
$code_map_airbnb
|
Step 3 - Gradio App ⚡️
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
If you run the code above, your app will start running locally.
You can even get a temporary shareable link by passing the `share=True` parameter to `launch`.
But what if you want a permanent deployment solution?
Let's deploy our Gradio app to the free Hugging Face Spaces platform.
If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
|
Step 4 - Deployment 🤗
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
And you're all done! That's all the code you need to build a map demo.
Here's a link to the demo [Map demo](https://huggingface.co/spaces/gradio/map_airbnb) and [complete code](https://huggingface.co/spaces/gradio/map_airbnb/blob/main/run.py) (on Hugging Face Spaces)
|
Conclusion 🎉
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
Building a dashboard from a public Google Sheet is very easy, thanks to the [`pandas` library](https://pandas.pydata.org/):
1\. Get the URL of the Google Sheets that you want to use. To do this, simply go to the Google Sheets, click on the "Share" button in the top-right corner, and then click on the "Get shareable link" button. This will give you a URL that looks something like this:
```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
```
2\. Now, let's modify this URL and then use it to read the data from the Google Sheets into a Pandas DataFrame. (In the code below, replace the `URL` variable with the URL of your public Google Sheet):
```python
import pandas as pd
URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/editgid=0"
csv_url = URL.replace('/editgid=', '/export?format=csv&gid=')
def get_data():
return pd.read_csv(csv_url)
```
3\. The data query is a function, which means that it's easy to display it real-time using the `gr.DataFrame` component, or plot it real-time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) you would like the component to refresh. Here's the Gradio code:
```python
import gradio as gr
with gr.Blocks() as demo:
gr.Markdown("π Real-Time Line Plot")
with gr.Row():
with gr.Column():
gr.DataFrame(get_data, every=gr.Timer(5))
with gr.Column():
gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)
demo.queue().launch()  # Run the demo with queuing enabled
```
And that's it! You have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
|
Public Google Sheets
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
For private Google Sheets, the process requires a little more work, but not that much! The key difference is that now, you must authenticate yourself to authorize access to the private Google Sheets.
Authentication
To authenticate yourself, obtain credentials from Google Cloud. Here's [how to set up Google Cloud credentials](https://developers.google.com/workspace/guides/create-credentials):
1\. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)
2\. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.
3\. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "Google Sheets API", click on it, and click the "Enable" button. If you see the "Manage" button, then Google Sheets is already enabled, and you're all set.
4\. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button.
5\. In the "Create credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. **Note down the email of the service account**
6\. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:
```json
{
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```
|
Private Google Sheets
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```
Querying
Once you have the credentials `.json` file, you can use the following steps to query your Google Sheet:
1\. Click on the "Share" button in the top-right corner of the Google Sheet. Share the Google Sheets with the email address of the service from Step 5 of authentication subsection (this step is important!). Then click on the "Get shareable link" button. This will give you a URL that looks something like this:
```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
```
2\. Install the [`gspread` library](https://docs.gspread.org/en/v5.7.0/), which makes it easy to work with the [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) in Python by running in the terminal: `pip install gspread`
3\. Write a function to load the data from the Google Sheet, like this (replace the `URL` variable with the URL of your private Google Sheet):
```python
import gspread
import pandas as pd
# Authenticate with Google and get the sheet
URL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375sAT7vPvaj4k/edit#gid=0'
gc = gspread.service_account("path/to/key.json")
sh = gc.open_by_url(URL)
worksheet = sh.sheet1
def get_data():
values = worksheet.get_all_values()
df = pd.DataFrame(values[1:], columns=values[0])
return df
```
4\. The data query is a function, which means that it's easy to display it real-time using the `gr.DataFrame` component, or plot it real-time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio cod
|
Private Google Sheets
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
. To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio code:
```python
import gradio as gr
with gr.Blocks() as demo:
gr.Markdown("π Real-Time Line Plot")
with gr.Row():
with gr.Column():
gr.DataFrame(get_data, every=gr.Timer(5))
with gr.Column():
gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)
demo.queue().launch()  # Run the demo with queuing enabled
```
You now have a Dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
|
Private Google Sheets
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
And that's all there is to it! With just a few lines of code, you can use `gradio` and other libraries to read data from a public or private Google Sheet and then display and plot the data in a real-time dashboard.
|
Conclusion
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
Let's go through a simple example to understand how to containerize a Gradio app using Docker.
Step 1: Create Your Gradio App
First, we need a simple Gradio app. Let's create a Python file named `app.py` with the following content:
```python
import gradio as gr
def greet(name):
return f"Hello {name}!"
iface = gr.Interface(fn=greet, inputs="text", outputs="text")
iface.launch()
```
This app creates a simple interface that greets the user by name.
Step 2: Create a Dockerfile
Next, we'll create a Dockerfile to specify how our app should be built and run in a Docker container. Create a file named `Dockerfile` in the same directory as your app with the following content:
```dockerfile
FROM python:3.10-slim
WORKDIR /usr/src/app
COPY . .
RUN pip install --no-cache-dir gradio
EXPOSE 7860
ENV GRADIO_SERVER_NAME="0.0.0.0"
CMD ["python", "app.py"]
```
This Dockerfile performs the following steps:
- Starts from a Python 3.10 slim image.
- Sets the working directory and copies the app into the container.
- Installs Gradio (you should install all other requirements as well).
- Exposes port 7860 (Gradio's default port).
- Sets the `GRADIO_SERVER_NAME` environment variable to ensure Gradio listens on all network interfaces.
- Specifies the command to run the app.
Step 3: Build and Run Your Docker Container
With the Dockerfile in place, you can build and run your container:
```bash
docker build -t gradio-app .
docker run -p 7860:7860 gradio-app
```
Your Gradio app should now be accessible at `http://localhost:7860`.
|
How to Dockerize a Gradio App
|
https://gradio.app/guides/deploying-gradio-with-docker
|
Other Tutorials - Deploying Gradio With Docker Guide
|
When running Gradio applications in Docker, there are a few important things to keep in mind:
Running the Gradio app on `"0.0.0.0"` and exposing port 7860
In the Docker environment, setting `GRADIO_SERVER_NAME="0.0.0.0"` as an environment variable (or directly in your Gradio app's `launch()` function) is crucial for allowing connections from outside the container. And the `EXPOSE 7860` directive in the Dockerfile tells Docker to expose Gradio's default port on the container to enable external access to the Gradio app.
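If you prefer configuring this in code rather than through the environment variable, the same settings can be passed to `launch()` (a sketch reusing the `greet` app from above):

```python
import gradio as gr

def greet(name):
    return f"Hello {name}!"

# Equivalent to setting GRADIO_SERVER_NAME="0.0.0.0" in the Dockerfile
gr.Interface(fn=greet, inputs="text", outputs="text").launch(
    server_name="0.0.0.0", server_port=7860
)
```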
Enable Stickiness for Multiple Replicas
When deploying Gradio apps with multiple replicas, such as on AWS ECS, it's important to enable stickiness with `sessionAffinity: ClientIP`. This ensures that all requests from the same user are routed to the same instance. This is important because Gradio's communication protocol requires multiple separate connections from the frontend to the backend in order for events to be processed correctly. (If you use Terraform, you'll want to add a [stickiness block](https://registry.terraform.io/providers/hashicorp/aws/3.14.1/docs/resources/lb_target_group#stickiness) into your target group definition.)
Deploying Behind a Proxy
If you're deploying your Gradio app behind a proxy, like Nginx, it's essential to configure the proxy correctly. Gradio provides a [Guide that walks through the necessary steps](https://www.gradio.app/guides/running-gradio-on-your-web-server-with-nginx). This setup ensures your app is accessible and performs well in production environments.
|
Important Considerations
|
https://gradio.app/guides/deploying-gradio-with-docker
|
Other Tutorials - Deploying Gradio With Docker Guide
|
Tabular data science is the most widely used domain of machine learning, with problems ranging from customer segmentation to churn prediction. Throughout the various stages of the tabular data science workflow, communicating your work to stakeholders or clients can be cumbersome, which prevents data scientists from focusing on what matters, such as data analysis and model building. Data scientists can end up spending hours building a dashboard that takes in a dataframe and returns plots, or returns a prediction or a plot of the clusters in a dataset. In this guide, we'll go through how to use `gradio` to improve your data science workflows. We will also talk about how to use `gradio` and [skops](https://skops.readthedocs.io/en/stable/) to build interfaces with only one line of code!
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started).
|
Introduction
|
https://gradio.app/guides/using-gradio-for-tabular-workflows
|
Other Tutorials - Using Gradio For Tabular Workflows Guide
|
We will take a look at how we can create a simple UI that predicts failures based on product information.
```python
import gradio as gr
import pandas as pd
import joblib
import datasets
inputs = [gr.Dataframe(row_count=(2, "dynamic"), col_count=(4, "dynamic"), label="Input Data", interactive=True)]
outputs = [gr.Dataframe(row_count=(2, "dynamic"), col_count=(1, "fixed"), label="Predictions", headers=["Failures"])]
model = joblib.load("model.pkl")
we will give our dataframe as example
df = datasets.load_dataset("merve/supersoaker-failures")
df = df["train"].to_pandas()
def infer(input_dataframe):
return pd.DataFrame(model.predict(input_dataframe))
gr.Interface(fn = infer, inputs = inputs, outputs = outputs, examples = [[df.head(2)]]).launch()
```
Let's break down the code above.
- `fn`: the inference function that takes an input dataframe and returns predictions.
- `inputs`: the component we take our input with. We define our input as a dataframe with 2 rows and 4 columns, which initially will look like an empty dataframe of that shape. When `row_count` is set to `"dynamic"`, the number of rows is not fixed, so your input dataset doesn't have to match a pre-defined component shape.
- `outputs`: the dataframe component that stores the outputs. This UI can take single or multiple samples to infer, and returns 0 or 1 for each sample in one column, so we set `row_count` to 2 and `col_count` to 1 above. `headers` is a list of column names for the dataframe.
- `examples`: you can either pass the input by dragging and dropping a CSV file, or as a pandas DataFrame through `examples`, whose headers will be automatically picked up by the interface.
We will now create an example for a minimal data visualization dashboard. You can find a more comprehensive version in the related Spaces.
<gradio-app space="gradio/tabular-playground"></gradio-app>
```python
import gradio as gr
import pandas as pd
import datasets
import seaborn as sns
import matplotlib.pyplot as plt
df = datasets.load_dataset
|
Let's Create a Simple Interface!
|
https://gradio.app/guides/using-gradio-for-tabular-workflows
|
Other Tutorials - Using Gradio For Tabular Workflows Guide
|
app space="gradio/tabular-playground"></gradio-app>
```python
import gradio as gr
import pandas as pd
import datasets
import seaborn as sns
import matplotlib.pyplot as plt
df = datasets.load_dataset("merve/supersoaker-failures")
df = df["train"].to_pandas()
df.dropna(axis=0, inplace=True)
def plot(df):
    plt.scatter(df.measurement_13, df.measurement_15, c=df.loading, alpha=0.5)
    plt.savefig("scatter.png")
    plt.clf()  # clear the current figure so the plots don't draw over each other
    df["failure"].value_counts().plot(kind="bar")
    plt.savefig("bar.png")
    plt.clf()
    sns.heatmap(df.select_dtypes(include="number").corr())
    plt.savefig("corr.png")
    plt.clf()
    plots = ["corr.png", "scatter.png", "bar.png"]
    return plots
inputs = [gr.Dataframe(label="Supersoaker Production Data")]
outputs = [gr.Gallery(label="Profiling Dashboard", columns=(1,3))]
gr.Interface(plot, inputs=inputs, outputs=outputs, examples=[df.head(100)], title="Supersoaker Failures Analysis Dashboard").launch()
```
<gradio-app space="gradio/gradio-analysis-dashboard-minimal"></gradio-app>
We will use the same dataset we used to train our model, but we will make a dashboard to visualize it this time.
- `fn`: The function that will create plots based on data.
- `inputs`: We use the same `Dataframe` component we used above.
- `outputs`: The `Gallery` component is used to display our visualizations.
- `examples`: We will have the dataset itself as the example.
|
Let's Create a Simple Interface!
|
https://gradio.app/guides/using-gradio-for-tabular-workflows
|
Other Tutorials - Using Gradio For Tabular Workflows Guide
|
`skops` is a library built on top of `huggingface_hub` and `sklearn`. With the recent `gradio` integration of `skops`, you can build tabular data interfaces with one line of code!
```python
import gradio as gr
# title and description are optional
title = "Supersoaker Defective Product Prediction"
description = "This model predicts Supersoaker production line failures. Drag and drop any slice from dataset or edit values as you wish in below dataframe component."
gr.load("huggingface/scikit-learn/tabular-playground", title=title, description=description).launch()
```
<gradio-app space="gradio/gradio-skops-integration"></gradio-app>
`sklearn` models pushed to Hugging Face Hub using `skops` include a `config.json` file that contains an example input with column names and the task being solved (either `tabular-classification` or `tabular-regression`). From the task type, `gradio` constructs the `Interface` and consumes the column names and the example input to build it. You can [refer to the skops documentation on hosting models on the Hub](https://skops.readthedocs.io/en/latest/auto_examples/plot_hf_hub.html#sphx-glr-auto-examples-plot-hf-hub-py) to learn how to push your models to the Hub using `skops`.
|
Easily load tabular data interfaces with one line of code using skops
|
https://gradio.app/guides/using-gradio-for-tabular-workflows
|
Other Tutorials - Using Gradio For Tabular Workflows Guide
|
Global state in Gradio apps is very simple: any variable created outside of a function is shared globally between all users.
This makes managing global state very simple, with no need for external services. For example, in this application, the `visitor_count` variable is shared between all users:
```py
import gradio as gr

# Shared between all users
visitor_count = 0

def increment_counter():
    global visitor_count
    visitor_count += 1
    return visitor_count

with gr.Blocks() as demo:
    number = gr.Textbox(label="Total Visitors", value="Counting...")
    demo.load(increment_counter, inputs=None, outputs=number)

demo.launch()
```
This means that any time you do _not_ want to share a value between users, you should declare it _within_ a function. But what if you need to share values between function calls, e.g. a chat history? In that case, you should use one of the subsequent approaches to manage state.
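For instance, a hypothetical sketch of the distinction:

```python
import gradio as gr

total_requests = 0  # created outside a function: one value shared by all users

def respond(message):
    global total_requests
    total_requests += 1      # shared across every user of the app
    local_log = [message]    # created inside the function: never shared
    return f"Echo #{total_requests}: {message}"
```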
|
Global State
|
https://gradio.app/guides/state-in-blocks
|
Building With Blocks - State In Blocks Guide
|
Gradio supports session state, where data persists across multiple submits within a page session. To reiterate, session data is _not_ shared between different users of your model, and does _not_ persist if a user refreshes the page to reload the Gradio app. To store data in a session state, you need to do three things:
1. Create a `gr.State()` object. If there is a default value to this stateful object, pass that into the constructor. Note that `gr.State` objects must be [deepcopy-able](https://docs.python.org/3/library/copy.html), otherwise you will need to use a different approach as described below.
2. In the event listener, put the `State` object as an input and output as needed.
3. In the event listener function, add the variable to the input parameters and the return value.
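Put together, a minimal sketch of these three steps might look like this (a hypothetical click counter; the real checkout example follows below):

```python
import gradio as gr

with gr.Blocks() as demo:
    count = gr.State(0)  # Step 1: create the State with a default value
    result = gr.Number(label="Clicks")
    btn = gr.Button("Click me")

    # Step 3: the state is a function parameter and part of the return value
    def on_click(current_count):
        current_count += 1
        return current_count, current_count

    # Step 2: pass the State object as an input and an output of the listener
    btn.click(on_click, inputs=count, outputs=[count, result])

demo.launch()
```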
Let's take a look at a simple example. We have a simple checkout app below where you add items to a cart. You can also see the size of the cart.
$code_simple_state
Notice how we do this with state:
1. We store the cart items in a `gr.State()` object, initialized here to be an empty list.
2. When adding items to the cart, the event listener uses the cart as both input and output - it returns the updated cart with all the items inside.
3. We can attach a `.change` listener to cart, that uses the state variable as input as well.
You can think of `gr.State` as an invisible Gradio component that can store any kind of value. Here, `cart` is not visible in the frontend but is used for calculations.
The `.change` listener for a state variable triggers after any event listener changes the value of a state variable. If the state variable holds a sequence (like a `list`, `set`, or `dict`), a change is triggered if any of the elements inside change. If it holds an object or primitive, a change is triggered if the **hash** of the value changes. So if you define a custom class and create a `gr.State` variable that is an instance of that class, make sure that the class includes a sensible `__
|
Session State
|
https://gradio.app/guides/state-in-blocks
|
Building With Blocks - State In Blocks Guide
|
riggered if the **hash** of the value changes. So if you define a custom class and create a `gr.State` variable that is an instance of that class, make sure that the class includes a sensible `__hash__` implementation.
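For instance, a hypothetical sketch of such a class:

```python
class Cart:
    def __init__(self, items=()):
        self.items = tuple(items)

    # A value-based hash (with a matching __eq__) lets the .change listener
    # detect when the underlying value actually changes
    def __hash__(self):
        return hash(self.items)

    def __eq__(self, other):
        return isinstance(other, Cart) and self.items == other.items
```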
The value of a session State variable is cleared when the user refreshes the page. The value is stored in the app backend for 60 minutes after the user closes the tab (this can be configured by the `delete_cache` parameter in `gr.Blocks`).
Learn more about `State` in the [docs](https://gradio.app/docs/gradio/state).
**What about objects that cannot be deepcopied?**
As mentioned earlier, the value stored in `gr.State` must be [deepcopy-able](https://docs.python.org/3/library/copy.html). If you are working with a complex object that cannot be deepcopied, you can take a different approach to manually read the user's `session_hash` and store a global `dictionary` with instances of your object for each user. Here's how you would do that:
```py
import gradio as gr
class NonDeepCopyable:
    def __init__(self):
        from threading import Lock
        self.counter = 0
        self.lock = Lock()  # Lock objects cannot be deepcopied

    def increment(self):
        with self.lock:
            self.counter += 1
            return self.counter

# Global dictionary to store user-specific instances
instances = {}

def initialize_instance(request: gr.Request):
    instances[request.session_hash] = NonDeepCopyable()
    return "Session initialized!"

def cleanup_instance(request: gr.Request):
    if request.session_hash in instances:
        del instances[request.session_hash]

def increment_counter(request: gr.Request):
    if request.session_hash in instances:
        instance = instances[request.session_hash]
        return instance.increment()
    return "Error: Session not initialized"

with gr.Blocks() as demo:
    output = gr.Textbox(label="Status")
    counter = gr.Number(label="Counter Value")
    increment_btn = gr.Button("Increment Co
|
Session State
|
https://gradio.app/guides/state-in-blocks
|
Building With Blocks - State In Blocks Guide
|
return "Error: Session not initialized"
with gr.Blocks() as demo:
output = gr.Textbox(label="Status")
counter = gr.Number(label="Counter Value")
increment_btn = gr.Button("Increment Counter")
increment_btn.click(increment_counter, inputs=None, outputs=counter)
Initialize instance when page loads
demo.load(initialize_instance, inputs=None, outputs=output)
Clean up instance when page is closed/refreshed
demo.unload(cleanup_instance)
demo.launch()
```
|
Session State
|
https://gradio.app/guides/state-in-blocks
|
Building With Blocks - State In Blocks Guide
|
Gradio also supports browser state, where data persists in the browser's localStorage even after the page is refreshed or closed. This is useful for storing user preferences, settings, API keys, or other data that should persist across sessions. To use local state:
1. Create a `gr.BrowserState` object. You can optionally provide an initial default value and a key to identify the data in the browser's localStorage.
2. Use it like a regular `gr.State` component in event listeners as inputs and outputs.
Here's a simple example that saves a user's username and password across sessions:
$code_browserstate
Note: The value stored in `gr.BrowserState` does not persist if the Gradio app is restarted. To persist it, you can hardcode specific values of `storage_key` and `secret` in the `gr.BrowserState` component and restart the Gradio app on the same server name and server port. However, this should only be done if you are running trusted Gradio apps, as in principle, this can allow one Gradio app to access localStorage data that was created by a different Gradio app.
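A hypothetical sketch of that pattern (the key and secret values are placeholders):

```python
import gradio as gr

with gr.Blocks() as demo:
    # Hardcoding storage_key and secret lets the stored value survive app
    # restarts, provided the app relaunches on the same server name and port.
    username = gr.BrowserState("", storage_key="my_app_username", secret="a-fixed-secret")
```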
|
Browser State
|
https://gradio.app/guides/state-in-blocks
|
Building With Blocks - State In Blocks Guide
|
Elements within a `with gr.Row` clause will all be displayed horizontally. For example, to display two Buttons side by side:
```python
with gr.Blocks() as demo:
    with gr.Row():
        btn1 = gr.Button("Button 1")
        btn2 = gr.Button("Button 2")
```
You can set every element in a Row to have the same height. Configure this with the `equal_height` argument.
```python
with gr.Blocks() as demo:
    with gr.Row(equal_height=True):
        textbox = gr.Textbox()
        btn2 = gr.Button("Button 2")
```
The widths of elements in a Row can be controlled via a combination of `scale` and `min_width` arguments that are present in every Component.
- `scale` is an integer that defines how an element will take up space in a Row. If scale is set to `0`, the element will not expand to take up space. If scale is set to `1` or greater, the element will expand. Multiple elements in a row will expand proportional to their scale. Below, `btn2` will expand twice as much as `btn1`, while `btn0` will not expand at all:
```python
with gr.Blocks() as demo:
    with gr.Row():
        btn0 = gr.Button("Button 0", scale=0)
        btn1 = gr.Button("Button 1", scale=1)
        btn2 = gr.Button("Button 2", scale=2)
```
- `min_width` will set the minimum width the element will take. The Row will wrap if there isn't sufficient space to satisfy all `min_width` values.
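For example, a sketch with illustrative values:

```python
with gr.Blocks() as demo:
    with gr.Row():
        # Each textbox expands proportionally but never shrinks below 300 px;
        # the Row wraps onto a new line if the window is too narrow for both.
        t1 = gr.Textbox(scale=1, min_width=300)
        t2 = gr.Textbox(scale=1, min_width=300)
```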
Learn more about Rows in the [docs](https://gradio.app/docs/row).
|
Rows
|
https://gradio.app/guides/controlling-layout
|
Building With Blocks - Controlling Layout Guide
|
Components within a Column will be placed vertically atop each other. Since the vertical layout is the default layout for Blocks apps anyway, to be useful, Columns are usually nested within Rows. For example:
$code_rows_and_columns
$demo_rows_and_columns
See how the first column has two Textboxes arranged vertically. The second column has an Image and Button arranged vertically. Notice how the relative widths of the two columns are set by the `scale` parameter. The column with twice the `scale` value takes up twice the width.
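A minimal sketch of the layout described above (component choices are illustrative):

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column(scale=1):
            text1 = gr.Textbox()
            text2 = gr.Textbox()
        with gr.Column(scale=2):  # takes up twice the width of the first column
            image = gr.Image()
            button = gr.Button("Run")
```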
Learn more about Columns in the [docs](https://gradio.app/docs/column).
Fill Browser Height / Width
To make an app take the full width of the browser by removing the side padding, use `gr.Blocks(fill_width=True)`.
To make top-level Components expand to take the full height of the browser, use `fill_height` and apply `scale` to the expanding Components.
```python
import gradio as gr

with gr.Blocks(fill_height=True) as demo:
    gr.Chatbot(scale=1)
    gr.Textbox(scale=0)
```
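And a sketch of the `fill_width` option mentioned above (the component inside is illustrative):

```python
import gradio as gr

with gr.Blocks(fill_width=True) as demo:
    # With the side padding removed, wide components span the full browser width
    gr.Dataframe()
```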
|
Columns and Nesting
|
https://gradio.app/guides/controlling-layout
|
Building With Blocks - Controlling Layout Guide
|