# Building Image Generation Applications

[![Building Image Generation Applications](./images/09-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/B5VP0_J7cs8?si=5P3L5o7F_uS_QcG9)

There's more to LLMs than text generation. It's also possible to generate images from text descriptions. Having images as a modality can be highly useful in a number of areas, from MedTech, architecture, and tourism to game development and more. In this chapter, we will look into the two most popular image generation models, DALL-E and Midjourney.

## Introduction

In this lesson, we will cover:

- Image generation and why it's useful.
- DALL-E and Midjourney, what they are, and how they work.
- How you would build an image generation app.

## Learning Goals

After completing this lesson, you will be able to:

- Build an image generation application.
- Define boundaries for your application with meta prompts.
- Work with DALL-E and Midjourney.

## Why build an image generation application?

Image generation applications are a great way to explore the capabilities of Generative AI. They can be used for, for example:

- **Image editing and synthesis**. You can generate images for a variety of use cases, such as image editing and image synthesis.
- **Applied to a variety of industries**. They can also be used to generate images for a variety of industries, like MedTech, tourism, game development, and more.

## Scenario: Edu4All

As part of this lesson, we will continue to work with our startup, Edu4All. The students will create images for their assessments; exactly what images is up to the students. They could create illustrations for their own fairytale, design a new character for their story, or use images to help visualize their ideas and concepts.

Here's what Edu4All's students could generate, for example, if they're working in class on monuments:

![Edu4All startup, class on monuments, Eiffel Tower](./images/startup.png?WT.mc_id=academic-105485-koreyst)

using a prompt like

> "Dog next to Eiffel Tower in early morning sunlight"

## What are DALL-E and Midjourney?

[DALL-E](https://openai.com/dall-e-2?WT.mc_id=academic-105485-koreyst) and [Midjourney](https://www.midjourney.com/?WT.mc_id=academic-105485-koreyst) are two of the most popular image generation models; they allow you to use prompts to generate images.

### DALL-E

Let's start with DALL-E, which is a Generative AI model that generates images from text descriptions.

> [DALL-E is a combination of two models, CLIP and diffused attention](https://towardsdatascience.com/openais-dall-e-and-clip-101-a-brief-introduction-3a4367280d4e?WT.mc_id=academic-105485-koreyst).

- **CLIP** is a model that generates embeddings, which are numerical representations of data, from images and text.
- **Diffused attention** is a model that generates images from embeddings.

DALL-E is trained on a dataset of images and text and can be used to generate images from text descriptions. For example, DALL-E can be used to generate images of a cat in a hat, or a dog with a mohawk.

### Midjourney

Midjourney works in a similar way to DALL-E: it generates images from text prompts. Midjourney can also be used to generate images using prompts like "a cat in a hat" or "a dog with a mohawk".

![Image generated by Midjourney, mechanical pigeon](https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Rupert_Breheny_mechanical_dove_eca144e7-476d-4976-821d-a49c408e4f36.png/440px-Rupert_Breheny_mechanical_dove_eca144e7-476d-4976-821d-a49c408e4f36.png?WT.mc_id=academic-105485-koreyst)

_Image credit: Wikipedia, image generated by Midjourney_
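To make the CLIP idea above a bit more concrete, here is a minimal, hypothetical sketch. It assumes we already have embedding vectors (in practice these would come from CLIP's image and text encoders) and simply shows how cosine similarity can tell which caption best matches an image. The vectors below are made up for illustration only.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings — real CLIP embeddings have hundreds of dimensions
image_embedding = np.array([0.9, 0.1, 0.3])  # e.g., an image of a cat in a hat
caption_embeddings = {
    "a cat in a hat": np.array([0.8, 0.2, 0.25]),
    "a dog with a mohawk": np.array([0.1, 0.9, 0.7]),
}

# The caption whose embedding is closest to the image embedding wins
best = max(caption_embeddings,
           key=lambda c: cosine_similarity(image_embedding, caption_embeddings[c]))
print(best)  # "a cat in a hat"
```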
## How do DALL-E and Midjourney work?

First, [DALL-E](https://arxiv.org/pdf/2102.12092.pdf?WT.mc_id=academic-105485-koreyst). DALL-E is a Generative AI model based on the transformer architecture, using an _autoregressive transformer_. An _autoregressive transformer_ defines how a model generates images from text descriptions: it generates the image one token (a small piece of the image) at a time, using the tokens generated so far to predict the next one, passing through multiple layers in a neural network until the image is complete. With this process, DALL-E controls attributes, objects, characteristics, and more in the image it generates. DALL-E 2 and DALL-E 3 offer even more control over the generated image.
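To build intuition for what "autoregressive" means, here's a toy sketch. This is not DALL-E's actual tokenizer or model; it just shows the core loop, where each new token is sampled from a distribution conditioned on everything generated so far.

```python
import random

random.seed(0)
vocabulary = ["sky", "cloud", "sun", "tree"]  # stand-ins for image tokens

def next_token_weights(tokens: list[str]) -> list[float]:
    """A stand-in for the model: in DALL-E this would be a transformer
    conditioned on the text prompt and all image tokens generated so far."""
    weights = [1.0] * len(vocabulary)
    if tokens:
        # Toy rule: slightly favor repeating the last token
        weights[vocabulary.index(tokens[-1])] += 1.0
    return weights

generated: list[str] = []
for _ in range(8):  # a real image would need thousands of tokens
    weights = next_token_weights(generated)
    generated.append(random.choices(vocabulary, weights=weights)[0])

print(generated)
```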
## Building your first image generation application

So what does it take to build an image generation application? You need the following libraries:

- **python-dotenv**, highly recommended, so you can keep your secrets in a _.env_ file away from the code.
- **openai**, the library you will use to interact with the OpenAI API.
- **pillow**, to work with images in Python.
- **requests**, to help you make HTTP requests.

## Create and deploy an Azure OpenAI model

If you haven't done so already, follow the instructions on the [Microsoft Learn](https://learn.microsoft.com/azure/ai-foundry/openai/how-to/create-resource?pivots=web-portal) page to create an Azure OpenAI resource and model. Select DALL-E 3 as the model.

## Create the app

1. Create a file _.env_ with the following content:

   ```text
   AZURE_OPENAI_ENDPOINT=
   AZURE_OPENAI_API_KEY=
   AZURE_OPENAI_DEPLOYMENT="dall-e-3"
   ```

   You can locate this information in the Azure AI Foundry portal for your resource, in the "Deployments" section.

1. Collect the above libraries in a file called _requirements.txt_ like so:

   ```text
   python-dotenv
   openai
   pillow
   requests
   ```

1. Next, create a virtual environment and install the libraries:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```

   For Windows, use the following commands to create and activate your virtual environment:

   ```bash
   python -m venv venv
   venv\Scripts\activate.bat
   ```

1. Add the following code to a file called _app.py_:

   ```python
   import os

   import dotenv
   import openai
   import requests
   from openai import AzureOpenAI
   from PIL import Image

   # load environment variables from the .env file
   dotenv.load_dotenv()

   # configure the Azure OpenAI service client
   client = AzureOpenAI(
       azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
       api_key=os.environ["AZURE_OPENAI_API_KEY"],
       api_version="2024-02-01"
   )

   try:
       # Create an image by using the image generation API
       generation_response = client.images.generate(
           prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',
           size='1024x1024',
           n=1,
           model=os.environ['AZURE_OPENAI_DEPLOYMENT']
       )

       # Set the directory for the stored image
       image_dir = os.path.join(os.curdir, 'images')

       # If the directory doesn't exist, create it
       if not os.path.isdir(image_dir):
           os.mkdir(image_dir)

       # Initialize the image path (note the filetype should be png)
       image_path = os.path.join(image_dir, 'generated-image.png')

       # Retrieve the generated image
       image_url = generation_response.data[0].url  # extract image URL from response
       generated_image = requests.get(image_url).content  # download the image
       with open(image_path, "wb") as image_file:
           image_file.write(generated_image)

       # Display the image in the default image viewer
       image = Image.open(image_path)
       image.show()

   # catch exceptions
   except openai.BadRequestError as err:
       print(err)
   ```

Let's explain this code:

- First, we import the libraries we need, including the OpenAI library, the dotenv library, the requests library, and the Pillow library.

  ```python
  import os

  import dotenv
  import openai
  import requests
  from openai import AzureOpenAI
  from PIL import Image
  ```

- Next, we load the environment variables from the _.env_ file.

  ```python
  dotenv.load_dotenv()
  ```

- After that, we configure the Azure OpenAI service client.

  ```python
  # Get endpoint and key from environment variables
  client = AzureOpenAI(
      azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
      api_key=os.environ["AZURE_OPENAI_API_KEY"],
      api_version="2024-02-01"
  )
  ```

- Next, we generate the image:

  ```python
  # Create an image by using the image generation API
  generation_response = client.images.generate(
      prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',
      size='1024x1024',
      n=1,
      model=os.environ['AZURE_OPENAI_DEPLOYMENT']
  )
  ```

  The above code responds with a JSON object that contains the URL of the generated image. We can use the URL to download the image and save it to a file.

- Lastly, we open the image and use the standard image viewer to display it:

  ```python
  image = Image.open(image_path)
  image.show()
  ```

### More details on generating the image

Let's look at the code that generates the image in more detail:

```python
generation_response = client.images.generate(
    prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',
    size='1024x1024',
    n=1,
    model=os.environ['AZURE_OPENAI_DEPLOYMENT']
)
```

- **prompt** is the text prompt used to generate the image. In this case, we're using the prompt "Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils".
- **size** is the size of the generated image. In this case, we're generating an image that is 1024x1024 pixels.
- **n** is the number of images generated. In this case, we're generating one image.
- **temperature** is a parameter that controls the randomness of the output of a Generative AI model. The temperature is a value between 0 and 1, where 0 means that the output is deterministic and 1 means that the output is random. The default value is 0.7. Note that temperature isn't set in the call above; we'll come back to it in the Temperature section below.
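As a side note on **n**: when you request more than one image, each one comes back as a separate entry in `response.data`. Here's a minimal sketch of saving them all, assuming the `client` from _app.py_ above and a hypothetical DALL-E 2 deployment named `dall-e-2` (DALL-E 3 only supports `n=1`, while DALL-E 2 allows up to 10):

```python
import requests

# Assumes `client` is the AzureOpenAI client configured in app.py.
# DALL-E 3 only supports n=1, so this assumes a (hypothetical)
# DALL-E 2 deployment named 'dall-e-2'.
response = client.images.generate(
    prompt='Bunny on horse, holding a lollipop',
    size='1024x1024',
    n=3,
    model='dall-e-2'
)

# Each generated image is a separate entry in response.data
for index, item in enumerate(response.data):
    image_bytes = requests.get(item.url).content
    with open(f'generated-image-{index}.png', 'wb') as image_file:
        image_file.write(image_bytes)
```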
There are more things you can do with images, which we will cover in the next section.

## Additional capabilities of image generation

You've seen so far how we were able to generate an image using a few lines of Python. However, there are more things you can do with images. You can also do the following:

- **Perform edits**. By providing an existing image, a mask, and a prompt, you can alter an image. For example, you can add something to a portion of an image. Imagine our bunny image: you could add a hat to the bunny. You do that by providing the image, a mask (identifying the area to change), and a text prompt saying what should be done.

  > Note: this is not supported in DALL-E 3.

  Here is an example using GPT Image:

  ```python
  response = client.images.edit(
      model="gpt-image-1",
      image=open("sunlit_lounge.png", "rb"),
      mask=open("mask.png", "rb"),
      prompt="A sunlit indoor lounge area with a pool containing a flamingo"
  )
  # gpt-image-1 returns base64-encoded image data rather than a URL
  image_base64 = response.data[0].b64_json
  ```

  The base image would only contain the lounge with the pool, but the final image would have a flamingo.

- **Create variations**. The idea is that you take an existing image and ask for variations of it to be created. To create a variation, you provide an image and code like so:

  ```python
  response = client.images.create_variation(
      image=open("bunny-lollipop.png", "rb"),
      n=1,
      size="1024x1024"
  )
  image_url = response.data[0].url
  ```

  > Note: this is only supported on OpenAI, not on Azure OpenAI.
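Since `gpt-image-1` returns its result as base64-encoded data, a short decoding step is needed before the edited image can be used. Here's a minimal sketch, assuming the `response` from the edit example above (the output filename is just an illustration):

```python
import base64

# Assumes `response` is the result of the client.images.edit(...) call above
image_base64 = response.data[0].b64_json
image_bytes = base64.b64decode(image_base64)

# Save the decoded image to disk (hypothetical filename)
with open("lounge-with-flamingo.png", "wb") as image_file:
    image_file.write(image_bytes)
```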
## Temperature

Temperature is a parameter that controls the randomness of the output of a Generative AI model. The temperature is a value between 0 and 1, where 0 means that the output is deterministic and 1 means that the output is random. The default value is 0.7.

Let's look at an example of how temperature works, by running this prompt twice:

> Prompt: "Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils"

![Bunny on a horse holding a lollipop, version 1](./images/v1-generated-image.png)

Now let's run that same prompt just to see that we won't get the same image twice:

![Generated image of bunny on horse](./images/v2-generated-image.png)

As you can see, the images are similar, but not the same. Here's the code we used, with no temperature set:

```python
generation_response = client.images.generate(
    prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',  # Enter your prompt text here
    size='1024x1024',
    n=2
)
```

### Changing the temperature

So let's try to make the response more deterministic. We could observe from the two images we generated that in the first image there's a bunny and in the second image there's a horse, so the images vary greatly.

Let's therefore change our code and set the temperature to 0, like so:

```python
generation_response = client.images.generate(
    prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',  # Enter your prompt text here
    size='1024x1024',
    n=2,
    temperature=0
)
```

Now when you run this code, you get these two images:

- ![Temperature 0, v1](./images/v1-temp-generated-image.png)
- ![Temperature 0, v2](./images/v2-temp-generated-image.png)

Here you can clearly see how the images resemble each other more.

## How to define boundaries for your application with metaprompts

With our demo, we can already generate images for our clients. However, we need to create some boundaries for our application. For example, we don't want to generate images that are not safe for work, or that are not appropriate for children.

We can do this with _metaprompts_. Metaprompts are text prompts that are used to control the output of a Generative AI model. For example, we can use metaprompts to control the output, and ensure that the generated images are safe for work, or appropriate for children.

### How does it work?

Now, how do meta prompts work?

Meta prompts are text prompts that are used to control the output of a Generative AI model. They are positioned before the user's text prompt and are embedded in applications to control the model's output, encapsulating the prompt input and the meta prompt input in a single text prompt.

One example of a meta prompt would be the following:

```text
You are an assistant designer that creates images for children.

The image needs to be safe for work and appropriate for children.

The image needs to be in color.

The image needs to be in landscape orientation.

The image needs to be in a 16:9 aspect ratio.

Do not consider any input from the following that is not safe for work or appropriate for children.

(Input)
```
Now, let's see how we can use meta prompts in our demo.

```python
disallow_list = "swords, violence, blood, gore, nudity, sexual content, adult content, adult themes, adult language, adult humor, adult jokes, adult situations, adult"

meta_prompt = f"""You are an assistant designer that creates images for children.

The image needs to be safe for work and appropriate for children.

The image needs to be in color.

The image needs to be in landscape orientation.

The image needs to be in a 16:9 aspect ratio.

Do not consider any input from the following that is not safe for work or appropriate for children.
{disallow_list}
"""

prompt = f"{meta_prompt} Create an image of a bunny on a horse, holding a lollipop"

# TODO add request to generate image
```

From the above prompt, you can see how all images being created take the meta prompt into account.

## Assignment - let's enable students

We introduced Edu4All at the beginning of this lesson. Now it's time to enable the students to generate images for their assessments.

The students will create images for their assessments containing monuments; exactly which monuments is up to the students. The students are asked to use their creativity in this task to place these monuments in different contexts.

## Solution

Here's one possible solution:

```python
import os

import dotenv
import openai
import requests
from openai import AzureOpenAI
from PIL import Image

# load environment variables from the .env file
dotenv.load_dotenv()

# Get endpoint and key from environment variables
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01"
)

disallow_list = "swords, violence, blood, gore, nudity, sexual content, adult content, adult themes, adult language, adult humor, adult jokes, adult situations, adult"

meta_prompt = f"""You are an assistant designer that creates images for children.

The image needs to be safe for work and appropriate for children.

The image needs to be in color.

The image needs to be in landscape orientation.

The image needs to be in a 16:9 aspect ratio.

Do not consider any input from the following that is not safe for work or appropriate for children.
{disallow_list}
"""

prompt = f"""{meta_prompt}
Generate a monument, the Arc de Triomphe in Paris, France, in the evening light with a small child holding a teddy bear looking on.
"""

try:
    # Create an image by using the image generation API
    generation_response = client.images.generate(
        prompt=prompt,  # Enter your prompt text here
        size='1024x1024',
        n=1,
        model=os.environ['AZURE_OPENAI_DEPLOYMENT']
    )

    # Set the directory for the stored image
    image_dir = os.path.join(os.curdir, 'images')

    # If the directory doesn't exist, create it
    if not os.path.isdir(image_dir):
        os.mkdir(image_dir)

    # Initialize the image path (note the filetype should be png)
    image_path = os.path.join(image_dir, 'generated-image.png')

    # Retrieve the generated image
    image_url = generation_response.data[0].url  # extract image URL from response
    generated_image = requests.get(image_url).content  # download the image
    with open(image_path, "wb") as image_file:
        image_file.write(generated_image)

    # Display the image in the default image viewer
    image = Image.open(image_path)
    image.show()

# catch exceptions
except openai.BadRequestError as err:
    print(err)
```
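Beyond the meta prompt, you could also screen the student's input on the client side before calling the API at all. Here's a minimal, naive sketch: the `is_input_allowed` helper is hypothetical, and `disallow_list` is the string from the solution above.

```python
def is_input_allowed(user_input: str, disallow_list: str) -> bool:
    """Naive client-side screen: reject input containing any disallowed term.
    This complements, but does not replace, the meta prompt above."""
    banned_terms = [term.strip() for term in disallow_list.split(",")]
    lowered = user_input.lower()
    return not any(term and term in lowered for term in banned_terms)

# Example usage: "swords" is on the disallow list, so this input is rejected
if not is_input_allowed("a knight with swords", disallow_list):
    print("Please choose a different subject for your image.")
```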
## Great Work! Continue Your Learning

After completing this lesson, check out our [Generative AI Learning collection](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) to continue leveling up your Generative AI knowledge!

Head over to Lesson 10, where we will look at how to [build AI applications with low-code](../10-building-low-code-ai-applications/README.md?WT.mc_id=academic-105485-koreyst).