# Text2Earth-inpainting Model Card
This model card focuses on the inpainting model associated with Text2Earth, available here. The paper is available [here].
## Examples
You can use the 🤗 Diffusers library to run Text2Earth-inpainting in a simple and efficient manner. First, install the required packages:
```bash
pip install diffusers transformers accelerate scipy safetensors
```
```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

# Load the Text2Earth inpainting weights together with their custom pipeline implementation
model_id = "lcybuaa/Text2Earth-inpainting"
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    custom_pipeline="pipeline_text2earth_diffusion_inpaint",
    safety_checker=None,
)
pipe.to("cuda")

# Load the base image and the mask image; both should be PIL images.
# In the mask, white marks the regions to inpaint and black marks the regions to keep as is.
init_image = load_image("https://github.com/Chen-Yang-Liu/Text2Earth/blob/main/images/sparse_residential_310.jpg")
mask_image = load_image("https://github.com/Chen-Yang-Liu/Text2Earth/blob/main/images/sparse_residential_310.png")

prompt = "There is one big green lake"
image = pipe(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    height=256,
    width=256,
    num_inference_steps=50,
    guidance_scale=4.0,
).images[0]
image.save("lake.png")
```
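If you do not have a ready-made mask, you can build one that follows the convention above (white = inpaint, black = keep) with Pillow. The rectangle coordinates and output filename below are placeholders for illustration, not part of the original example:

```python
from PIL import Image, ImageDraw

# Start from an all-black 256x256 mask (keep every pixel of the base image),
# then paint a white rectangle over the region the pipeline should repaint.
mask = Image.new("L", (256, 256), 0)
draw = ImageDraw.Draw(mask)
draw.rectangle([64, 64, 192, 192], fill=255)  # hypothetical region to inpaint
mask.save("my_mask.png")
```

The resulting `mask` can be passed directly as `mask_image=mask` in the pipeline call above.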
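For reproducible results, Diffusers pipelines accept a `generator` argument. A minimal sketch, assuming the custom pipeline keeps the standard Stable Diffusion inpainting call signature (the seed value is arbitrary):

```python
import torch

# Fix the random seed so repeated runs produce the same inpainted image.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    height=256,
    width=256,
    num_inference_steps=50,
    guidance_scale=4.0,
    generator=generator,
).images[0]
```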
## Citation
If you find this paper useful in your research, please consider citing:
```bibtex
@ARTICLE{10988859,
  author={Liu, Chenyang and Chen, Keyan and Zhao, Rui and Zou, Zhengxia and Shi, Zhenwei},
  journal={IEEE Geoscience and Remote Sensing Magazine},
  title={Text2Earth: Unlocking text-driven remote sensing image generation with a global-scale dataset and a foundation model},
  year={2025},
  volume={},
  number={},
  pages={2--23},
  doi={10.1109/MGRS.2025.3560455}
}
```