{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Unconditional image generation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Unconditional image generation is a relatively straightforward task. The model only generates images - without any additional context like text or an image - resembling the training data it was trained on.\n",
"\n",
"The [DiffusionPipeline](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline) is the easiest way to use a pre-trained diffusion system for inference.\n",
"\n",
"Start by creating an instance of [DiffusionPipeline](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline) and specify which pipeline checkpoint you would like to download.\n",
"You can use any of the 🧨 Diffusers [checkpoints](https://huggingface.co/models?library=diffusers&sort=downloads) from the Hub (the checkpoint you'll use generates images of butterflies).\n",
"\n",
"\n",
"\n",
"💡 Want to train your own unconditional image generation model? Take a look at the training [guide](https://huggingface.co/docs/diffusers/main/en/using-diffusers/training/unconditional_training) to learn how to generate your own images.\n",
"\n",
"\n",
"\n",
"In this guide, you'll use [DiffusionPipeline](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline) for unconditional image generation with [DDPM](https://arxiv.org/abs/2006.11239):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from diffusers import DiffusionPipeline\n",
"\n",
"generator = DiffusionPipeline.from_pretrained(\"anton-l/ddpm-butterflies-128\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [DiffusionPipeline](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline) downloads and caches all modeling, tokenization, and scheduling components. \n",
"Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU.\n",
"You can move the generator object to a GPU, just like you would in PyTorch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"generator.to(\"cuda\")"
]
},
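{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you're on a machine without a GPU, the `to(\"cuda\")` call above will fail. A minimal sketch for picking the device dynamically, assuming only that PyTorch is installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Fall back to the CPU when CUDA isn't available (generation will be much slower)\n",
"device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"generator = generator.to(device)"
]
},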
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can use the `generator` to generate an image:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"image = generator().images[0]"
]
},
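{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pipeline call returns its outputs as a list under `.images`, and indexing with `[0]` picks the first one. As a sketch, you can also request several images in one call - recent Diffusers releases accept a `batch_size` argument on this pipeline:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# .images is a list, so you can generate a whole batch at once\n",
"images = generator(batch_size=4).images\n",
"print(len(images))  # 4"
]
},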
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The output is by default wrapped into a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.\n",
"\n",
"You can save the image by calling:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"image.save(\"generated_image.png\")"
]
},
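{
"cell_type": "markdown",
"metadata": {},
"source": [
"Generation is stochastic, so each call produces a different butterfly. For reproducible results you can pass a seeded `torch.Generator` to the call, and you can trade image quality for speed with `num_inference_steps` - a minimal sketch, assuming the pipeline is on a CUDA device:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Seed a torch.Generator for reproducible samples (use device=\"cpu\" if the pipeline runs on CPU)\n",
"seed = torch.Generator(device=\"cuda\").manual_seed(0)\n",
"\n",
"# Fewer denoising steps are faster but can reduce image quality (DDPM defaults to 1000)\n",
"image = generator(generator=seed, num_inference_steps=100).images[0]"
]
},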
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Try out the Spaces below, and feel free to play around with the inference steps parameter to see how it affects the image quality!\n",
"\n",
""
]
}
],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 4
}