Build a personalized avatar with generative AI using Amazon SageMaker | Amazon Web Services

Generative AI has become a common tool for enhancing and accelerating the creative process across various industries, including entertainment, advertising, and graphic design. It enables more personalized experiences for audiences and improves the overall quality of the final products.

One significant benefit of generative AI is creating unique and personalized experiences for users. For example, streaming services use generative AI to create personalized title artwork based on a user’s viewing history and preferences, increasing viewer engagement. The system generates thousands of variations of a title’s artwork and tests them to determine which version most attracts the user’s attention. In some cases, personalized artwork for TV series has significantly increased clickthrough and view rates compared to shows without personalized artwork.

In this post, we demonstrate how you can use generative AI models like Stable Diffusion to build a personalized avatar solution on Amazon SageMaker and save inference cost with multi-model endpoints (MMEs) at the same time. The solution demonstrates how, by uploading 10–12 images of yourself, you can fine-tune a personalized model that can then generate avatars based on any text prompt, as shown in the following screenshots. Although this example generates personalized avatars, you can apply the technique to any creative art generation by fine-tuning on specific objects or styles.

[Example avatars generated from text prompts with the fine-tuned model]

Solution overview

The following architecture diagram outlines the end-to-end solution for our avatar generator.

[Architecture diagram: end-to-end solution for the avatar generator]

The scope of this post and the example GitHub code we provide focus only on the model training and inference orchestration (the green section in the preceding diagram). You can reference the full solution architecture and build on top of the example we provide.

Model training and inference can be broken down into four steps:

  1. Upload images to Amazon Simple Storage Service (Amazon S3). In this step, we ask you to provide a minimum of 10 high-resolution images of yourself. The more images, the better the result, but the longer it will take to train. (A sketch of the upload step follows this list.)
  2. Fine-tune a Stable Diffusion 2.1 base model using SageMaker asynchronous inference. We explain the rationale for using an inference endpoint for training later in this post. The fine-tuning process starts with preparing the images, including face cropping, background variation, and resizing for the model. Then we use Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning technique for large language models (LLMs), to fine-tune the model. Finally, in postprocessing, we package the fine-tuned LoRA weights with the inference script and configuration files (tar.gz) and upload them to an S3 bucket location for SageMaker MMEs.
  3. Host the fine-tuned models using SageMaker MMEs with GPU. SageMaker will dynamically load and cache the model from the Amazon S3 location based on the inference traffic to each model.
  4. Use the fine-tuned model for inference. When the Amazon Simple Notification Service (Amazon SNS) notification indicating that fine-tuning is complete is sent, you can immediately use the model by supplying a target_model parameter when invoking the MME to create your avatar.
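
To illustrate step 1, the following is a minimal sketch of bundling the input images and uploading them to Amazon S3. The folder name, archive name, and the bucket and s3_prefix variables are placeholders; the exact archive layout the training script expects is defined in the example repository.

import tarfile
import boto3

# Bundle the input selfies into a single archive, because the asynchronous
# endpoint accepts one payload per fine-tuning request
with tarfile.open("data/input_images.tar.gz", "w:gz") as tar:
    tar.add("data/images", arcname="images")

# Upload the archive to S3 so the fine-tuning endpoint can read it
s3 = boto3.client("s3")
s3.upload_file("data/input_images.tar.gz", bucket, f"{s3_prefix}/input_images.tar.gz")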

We explain each step in more detail in the following sections and walk through some of the sample code snippets.

Prepare the images

To achieve the best results from fine-tuning Stable Diffusion to generate images of yourself, you typically need to provide a large quantity and variety of photos of yourself from different angles, with different expressions, and in different backgrounds. However, with our implementation, you can now achieve a high-quality result with as few as 10 input images. We have also added automated preprocessing to extract your face from each photo. All you need to do is clearly capture how you look from multiple perspectives. Include a front-facing photo, a profile shot from each side, and photos from angles in between. You should also include photos with different facial expressions, such as smiling, frowning, and a neutral expression. Having a mix of expressions allows the model to better reproduce your unique facial features. The input images dictate the quality of the avatars you can generate. To make sure this is done properly, we recommend an intuitive front-end UI experience to guide the user through the image capture and upload process.

The following are example selfie images at different angles with different facial expressions.

[Example selfie images]

Fine-tune a Stable Diffusion model

After the images are uploaded to Amazon S3, we can invoke the SageMaker asynchronous inference endpoint to start our training process. Asynchronous endpoints are intended for inference use cases with large payloads (up to 1 GB) and long processing times (up to 1 hour). They also provide a built-in queuing mechanism for incoming requests and a task completion notification mechanism via Amazon SNS, in addition to other native features of SageMaker hosting such as auto scaling.

Even though fine-tuning is not an inference use case, we chose to use it here instead of SageMaker training jobs because of its built-in queuing and notification mechanisms and managed auto scaling, including the ability to scale down to 0 instances when the service is not in use. This allows us to easily scale the fine-tuning service to a large number of concurrent users and eliminates the need to implement and manage additional components. However, it comes with the drawbacks of a 1 GB maximum payload size and a 1-hour maximum processing time. In our testing, we found that 20 minutes is sufficient to get reasonably good results with roughly 10 input images on an ml.g5.2xlarge instance. However, SageMaker training jobs would be the recommended approach for larger-scale fine-tuning.

To host the asynchronous endpoint, we must complete several steps. The first is to define our model server. For this post, we use the Large Model Inference Container (LMI). LMI is powered by DJL Serving, which is a high-performance, programming language-agnostic model serving solution. We chose this option because the SageMaker managed inference container already has many of the training libraries we need, such as Hugging Face Diffusers and Accelerate. This greatly reduces the amount of work required to customize the container for our fine-tuning job.

The following code snippet shows the version of the LMI container we used in our example:

inference_image_uri = (
    f"763104351884.dkr.ecr.{region}.amazonaws.com/djl-inference:0.21.0-deepspeed0.8.3-cu117"
)
print(f"Image going to be used is ---- > {inference_image_uri}")

In addition to that, we need to have a serving.properties file that configures the serving properties, including the inference engine to use, the location of the model artifact, and dynamic batching. Lastly, we must have a model.py file that loads the model into the inference engine and prepares the data input and output from the model. In our example, we use the model.py file to spin up the fine-tuning job, which we explain in greater detail in a later section. Both the serving.properties and model.py files are provided in the training_service folder.
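
The following is a minimal, illustrative serving.properties; the actual keys and values used in this example are defined in the training_service folder of the repository, and the S3 location shown here is a placeholder.

# Illustrative serving.properties for the DJL Serving (LMI) container
# (the actual file is in the training_service folder)
engine=Python
# placeholder for the model artifact location
option.s3url=s3://<your-bucket>/<model-prefix>/
# dynamic batching settings
batch_size=1
max_batch_delay=100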

The next step after defining our model server is to create an endpoint configuration that defines how our asynchronous inference will be served. For our example, we are just defining the maximum concurrent invocation limit and the output S3 location. With the ml.g5.2xlarge instance, we have found that we are able to fine-tune up to two models concurrently without encountering an out-of-memory (OOM) exception, and therefore we set max_concurrent_invocations_per_instance to 2. This number may need to be adjusted if we’re using a different set of tuning parameters or a smaller instance type. We recommend setting this to 1 initially and monitoring the GPU memory utilization in Amazon CloudWatch.

# create async endpoint configuration
async_config = AsyncInferenceConfig(
    # Where our results will be stored
    output_path=f"s3://{bucket}/{s3_prefix}/async_inference/output",
    max_concurrent_invocations_per_instance=2,
    # Notification configuration
    notification_config={
        "SuccessTopic": "...",
        "ErrorTopic": "...",
    },
)

Finally, we create a SageMaker model that packages the container information, model files, and AWS Identity and Access Management (IAM) role into a single object. The model is deployed using the endpoint configuration we defined earlier:

model = Model(
    image_uri=image_uri,
    model_data=model_data,
    role=role,
    env=env,
)

model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    endpoint_name=endpoint_name,
    async_inference_config=async_inference_config,
)

predictor = sagemaker.Predictor(
    endpoint_name=endpoint_name,
    sagemaker_session=sagemaker_session,
)

When the endpoint is ready, we use the following sample code to invoke the asynchronous endpoint and start the fine-tuning process:

sm_runtime = boto3.client("sagemaker-runtime")

input_s3_loc = sess.upload_data("data/jw.tar.gz", bucket, s3_prefix)
response = sm_runtime.invoke_endpoint_async(
    EndpointName=sd_tuning.endpoint_name,
    InputLocation=input_s3_loc,
)
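
The asynchronous invocation response doesn’t contain the result itself; it returns the S3 location where the output will be written when the job finishes (you can also subscribe to the SNS success topic configured earlier). For example:

# The result is written to S3 asynchronously; the response tells us where to look
output_s3_path = response["OutputLocation"]
print(f"Fine-tuning output will be written to: {output_s3_path}")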

For more details about LMI on SageMaker, refer to Deploy large models on Amazon SageMaker using DJLServing and DeepSpeed model parallel inference.

After invocation, the asynchronous endpoint starts queueing our fine-tuning job. Each job runs through the following steps: prepare the images, perform Dreambooth and LoRA fine-tuning, and prepare the model artifacts. Let’s dive deeper into the fine-tuning process.

Prepare the images

As we mentioned earlier, the quality of the input images directly impacts the quality of the fine-tuned model. For the avatar use case, we want the model to focus on the facial features. Instead of requiring users to provide carefully curated images of exact size and content, we implement a preprocessing step using computer vision techniques to alleviate this burden. In the preprocessing step, we first use a face detection model to isolate the largest face in each image. Then we crop and pad the image to the required size of 512 x 512 pixels for our model. Finally, we segment the face from the background and add random background variations. This helps highlight the facial features, allowing our model to learn from the face itself rather than the background. The following images illustrate the three steps in this process.

[Image preprocessing examples]
Step 1: Face detection using computer vision
Step 2: Crop and pad the image to 512 x 512 pixels
Step 3 (Optional): Segment the face and add background variation
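
The exact preprocessing code lives in the example repository; the following is a simplified sketch of steps 1 and 2, assuming OpenCV’s built-in Haar cascade detector stands in for whichever face detection model you use (the background segmentation in step 3 is omitted).

import cv2

def preprocess_face(image_path, size=512):
    # Step 1: detect the largest face in the photo
    img = cv2.imread(image_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = detector.detectMultiScale(
        cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), scaleFactor=1.1, minNeighbors=5
    )
    if len(faces) == 0:
        raise ValueError(f"No face found in {image_path}")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])

    # Step 2: crop around the face with a margin, pad to a square, and resize to 512 x 512
    margin = int(0.4 * max(w, h))
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1, y1 = min(x + w + margin, img.shape[1]), min(y + h + margin, img.shape[0])
    crop = img[y0:y1, x0:x1]
    side = max(crop.shape[:2])
    pad_h, pad_w = side - crop.shape[0], side - crop.shape[1]
    padded = cv2.copyMakeBorder(
        crop, pad_h // 2, pad_h - pad_h // 2, pad_w // 2, pad_w - pad_w // 2,
        cv2.BORDER_REPLICATE,
    )
    return cv2.resize(padded, (size, size))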

Dreambooth and LoRA fine-tuning

For fine-tuning, we combined the techniques of Dreambooth and LoRA. Dreambooth allows you to personalize your Stable Diffusion model by embedding a subject into the model’s output domain using a unique identifier and expanding the model’s language-vision dictionary. It uses a method called prior preservation to preserve the model’s semantic knowledge of the class of the subject (in this case, a person) and uses other objects in the class to improve the final image output. This is how Dreambooth can achieve high-quality results with just a few input images of the subject.

The following code snippet shows the inputs to our trainer.py class for our avatar solution. Notice that we chose <<TOK>> as the unique identifier. This is done on purpose to avoid picking a name that may already be in the model’s dictionary. If the name already exists, the model has to unlearn and then relearn the subject, which may lead to poor fine-tuning results. The subject class is set to “a photo of person”, which enables prior preservation by first generating photos of people to feed in as additional inputs during the fine-tuning process. This helps reduce overfitting as the model tries to preserve its previous knowledge of a person using the prior preservation method.

status = trn.run(
    base_model="stabilityai/stable-diffusion-2-1-base",
    resolution=512,
    n_steps=1000,
    concept_prompt="photo of <<TOK>>",  # << unique identifier of the subject
    learning_rate=1e-4,
    gradient_accumulation=1,
    fp16=True,
    use_8bit_adam=True,
    gradient_checkpointing=True,
    train_text_encoder=True,
    with_prior_preservation=True,
    prior_loss_weight=1.0,
    class_prompt="a photo of person",  # << subject class
    num_class_images=50,
    class_data_dir=class_data_dir,
    lora_r=128,
    lora_alpha=1,
    lora_bias="none",
    lora_dropout=0.05,
    lora_text_encoder_r=64,
    lora_text_encoder_alpha=1,
    lora_text_encoder_bias="none",
    lora_text_encoder_dropout=0.05,
)

A number of memory-saving options have been enabled in the configuration, including fp16, use_8bit_adam, and gradient accumulation. This reduces the memory footprint to under 12 GB, which allows for fine-tuning of up to two models concurrently on an ml.g5.2xlarge instance.

LoRA is an efficient fine-tuning technique for LLMs that freezes most of the weights and attaches a small adapter network to specific layers of the pre-trained LLM, allowing for faster training and optimized storage. For Stable Diffusion, the adapter is attached to the text encoder and U-Net components of the inference pipeline. The text encoder converts the input prompt to a latent space that is understood by the U-Net model, and the U-Net model uses that latent representation to generate the image in the subsequent diffusion process. The output of the fine-tuning is just the text_encoder and U-Net adapter weights.
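
To make this concrete, the following sketch shows how LoRA adapters could be attached to the two components with the Hugging Face peft library. The target module names are assumptions based on the standard Diffusers U-Net and CLIP text encoder layer names, and the ranks mirror the lora_r and lora_text_encoder_r values passed to trn.run earlier; the actual adapter setup is handled inside the training script.

from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base")

# Attach a LoRA adapter to the U-Net attention projections
unet_config = LoraConfig(
    r=128, lora_alpha=1, lora_dropout=0.05, bias="none",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
pipe.unet = get_peft_model(pipe.unet, unet_config)

# Attach a smaller LoRA adapter to the CLIP text encoder
text_config = LoraConfig(
    r=64, lora_alpha=1, lora_dropout=0.05, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
)
pipe.text_encoder = get_peft_model(pipe.text_encoder, text_config)

# Only the adapter parameters are trainable; the base weights stay frozen
pipe.unet.print_trainable_parameters()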

The following figures are detailed diagrams of LoRA fine-tuning provided by the original authors: Cheng-Han Chiang, Yung-Sung Chuang, and Hung-yi Lee, “AACL_2022_tutorial_PLMs,” 2022.

[LoRA fine-tuning diagrams]

By combining both methods, we were able to generate a personalized model while tuning an order of magnitude fewer parameters. This resulted in a much faster training time and reduced GPU utilization. Additionally, storage was optimized, with the adapter weights being only 70 MB compared to 6 GB for a full Stable Diffusion model, a 99% size reduction.

Prepare the model artifacts

After fine-tuning is complete, the postprocessing step will TAR the LoRA weights with the rest of the model serving files for NVIDIA Triton. We use a Python backend, which means the Triton config file and the Python script used for inference are required. Note that the Python script has to be named model.py. The final model TAR file should have the following file structure:

|--sd_lora
   |--config.pbtxt
   |--1
      |--model.py
      |--output        # LoRA weights
         |--text_encoder
         |--unet
      |--train.sh
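
As a reference, the packaging itself can be as simple as the following sketch, which archives the model repository folder above into the TAR file the MME will load (the paths are illustrative):

import tarfile

# Bundle the Triton model repository (config.pbtxt, model.py, LoRA weights) for the MME
with tarfile.open("sd_lora.tar.gz", "w:gz") as tar:
    tar.add("sd_lora", arcname="sd_lora")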

Host the fine-tuned models using SageMaker MMEs with GPU

After the models have been fine-tuned, we host the personalized Stable Diffusion models using a SageMaker MME. A SageMaker MME is a powerful deployment feature that allows hosting multiple models in a single container behind a single endpoint. It automatically manages traffic and routing to your models to optimize resource utilization, save costs, and minimize operational burden of managing thousands of endpoints. In our example, we run on GPU instances, and SageMaker MMEs support GPU using Triton Server. This allows you to run multiple models on a single GPU device and take advantage of accelerated compute. For more detail on how to host Stable Diffusion on SageMaker MMEs, refer to Create high-quality images with Stable Diffusion models and deploy them cost-efficiently with Amazon SageMaker.
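
The example notebook contains the full deployment code; the following is a condensed sketch of creating the GPU MME with the SageMaker Python SDK. The endpoint name, the S3 prefix holding the per-user TAR files, and the triton_image_uri variable are placeholders.

from sagemaker.multidatamodel import MultiDataModel

# All fine-tuned model TAR files are uploaded under this S3 prefix
mme = MultiDataModel(
    name="sd-avatar-mme",
    model_data_prefix=f"s3://{bucket}/{s3_prefix}/mme/",
    image_uri=triton_image_uri,  # SageMaker Triton Inference Server container for your Region
    role=role,
)

# A single GPU instance serves all of the per-user models
predictor = mme.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)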

For our example, we made an additional optimization to load the fine-tuned models faster during cold start situations. This is possible because of LoRA’s adapter design. Because the base model weights and Conda environments are the same for all fine-tuned models, we can share these common resources by pre-loading them onto the hosting container. This leaves only the Triton config file, Python backend (model.py), and LoRA adapter weights to be dynamically loaded from Amazon S3 after the first invocation. The following diagram provides a side-by-side comparison.

[Side-by-side comparison of model loading with and without the shared base model and Conda environment]

This significantly reduces the model TAR file from approximately 6 GB to 70 MB, and therefore is much faster to load and unpack. To do the preloading in our example, we created a utility Python backend model in models/model_setup. The script simply copies the base Stable Diffusion model and Conda environment from Amazon S3 to a common location to share across all the fine-tuned models. The following is the code snippet that performs the task:

def initialize(self, args):
    # conda env setup
    self.conda_pack_path = Path(args['model_repository']) / "sd_env.tar.gz"
    self.conda_target_path = Path("/tmp/conda")
    self.conda_env_path = self.conda_target_path / "sd_env.tar.gz"
    if not self.conda_env_path.exists():
        self.conda_env_path.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(self.conda_pack_path, self.conda_env_path)

    # base diffusion model setup
    self.base_model_path = Path(args['model_repository']) / "stable_diff.tar.gz"
    try:
        with tarfile.open(self.base_model_path) as tar:
            tar.extractall('/tmp')
        self.response_message = "Model env setup successful."
    except Exception as e:
        # print the exception message
        print(f"Caught an exception: {e}")
        self.response_message = f"Caught an exception: {e}"

Then each fine-tuned model will point to the shared location on the container. The Conda environment is referenced in the config.pbtxt.

name: "pipeline_0"
backend: "python"
max_batch_size: 1 ... parameters: { key: "EXECUTION_ENV_PATH", value: {string_value: "/tmp/conda/sd_env.tar.gz"}
}

The Stable Diffusion base model is loaded in the initialize() function of each model.py file. We then apply the personalized LoRA weights to the unet and text_encoder models to reproduce each fine-tuned model:

...
class TritonPythonModel:
    def initialize(self, args):
        self.output_dtype = pb_utils.triton_string_to_numpy(
            pb_utils.get_output_config_by_name(
                json.loads(args["model_config"]), "generated_image"
            )["data_type"]
        )
        self.model_dir = args['model_repository']
        device = 'cuda'
        self.pipe = StableDiffusionPipeline.from_pretrained(
            '/tmp/stable_diff', torch_dtype=torch.float16, revision="fp16"
        ).to(device)

        # Load the LoRA weights
        self.pipe.unet = PeftModel.from_pretrained(self.pipe.unet, unet_sub_dir)

        if os.path.exists(text_encoder_sub_dir):
            self.pipe.text_encoder = PeftModel.from_pretrained(
                self.pipe.text_encoder, text_encoder_sub_dir
            )

Use the fine-tuned model for inference

Now we can try our fine-tuned model by invoking the MME endpoint. The input parameters we exposed in our example include prompt, negative_prompt, and gen_args, as shown in the following code snippet. We set the data type and shape of each input item in the dictionary and convert them into a JSON string. Finally, the string payload and TargetModel are passed into the request to generate your avatar picture.

import random

prompt = """<<TOK>> epic portrait, zoomed out, blurred background cityscape, bokeh, perfect symmetry, by artgem, artstation, concept art, cinematic lighting, highly detailed, octane, concept art, sharp focus, rockstar games, post processing, picture of the day, ambient lighting, epic composition"""

negative_prompt = """
beard, goatee, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, blurry, bad anatomy, blurred, watermark, grainy, signature, cut off, draft, amateur, multiple, gross, weird, uneven, furnishing, decorating, decoration, furniture, text, poor, low, basic, worst, juvenile, unprofessional, failure, crayon, oil, label, thousand hands
"""

seed = random.randint(1, 1000000000)
gen_args = json.dumps(dict(num_inference_steps=50, guidance_scale=7, seed=seed))

inputs = dict(prompt=prompt, negative_prompt=negative_prompt, gen_args=gen_args)

payload = {
    "inputs": [
        {"name": name, "shape": [1, 1], "datatype": "BYTES", "data": [data]}
        for name, data in inputs.items()
    ]
}

response = sm_runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel="sd_lora.tar.gz",
)

output = json.loads(response["Body"].read().decode("utf8"))["outputs"]
original_image = decode_image(output[0]["data"][0])
original_image
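
The decode_image helper is part of the example repository. A minimal stand-in, assuming the endpoint returns the generated image as a base64-encoded string, might look like the following:

import base64
from io import BytesIO
from PIL import Image

# Hypothetical helper: decode a base64-encoded image returned by the endpoint
def decode_image(b64_string):
    return Image.open(BytesIO(base64.b64decode(b64_string)))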

Clean up

Follow the instructions in the cleanup section of the notebook to delete the resources provisioned as part of this post to avoid unnecessary charges. Refer to Amazon SageMaker Pricing for details regarding the cost of the inference instances.
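
At a minimum, the cleanup boils down to deleting the models and endpoints created earlier; for example, for each predictor created in the notebook:

# Delete the fine-tuning endpoint, the MME, and their models when you are done
predictor.delete_model()
predictor.delete_endpoint()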

Conclusion

In this post, we demonstrated how to create a personalized avatar solution using Stable Diffusion on SageMaker. By fine-tuning a pre-trained model with just a few images, we can generate avatars that reflect the individuality and personality of each user. This is just one of many examples of how we can use generative AI to create customized and unique experiences for users. The possibilities are endless, and we encourage you to experiment with this technology and explore its potential to enhance the creative process. We hope this post has been informative and inspiring. We encourage you to try the example and share your creations with us using hashtags #sagemaker #mme #genai on social platforms. We would love to see what you make.

In addition to Stable Diffusion, many other generative AI models are available on Amazon SageMaker JumpStart. Refer to Getting started with Amazon SageMaker JumpStart to explore their capabilities.


About the Authors

James Wu is a Senior AI/ML Specialist Solutions Architect at AWS, helping customers design and build AI/ML solutions. James’s work covers a wide range of ML use cases, with a primary interest in computer vision, deep learning, and scaling ML across the enterprise. Prior to joining AWS, James was an architect, developer, and technology leader for over 10 years, including 6 years in engineering and 4 years in the marketing and advertising industries.

Simon Zamarin is an AI/ML Solutions Architect whose main focus is helping customers extract value from their data assets. In his spare time, Simon enjoys spending time with family, reading sci-fi, and working on various DIY house projects.

Vikram Elango is an AI/ML Specialist Solutions Architect at Amazon Web Services, based in Virginia, USA. Vikram helps financial and insurance industry customers with design and thought leadership to build and deploy machine learning applications at scale. He is currently focused on natural language processing, responsible AI, inference optimization, and scaling ML across the enterprise. In his spare time, he enjoys traveling, hiking, cooking, and camping with his family.

Lana Zhang is a Senior Solutions Architect on the AWS WWSO AI Services team, specializing in AI and ML for content moderation, computer vision, and natural language processing. With her expertise, she is dedicated to promoting AWS AI/ML solutions and assisting customers in transforming their business solutions across diverse industries, including social media, gaming, e-commerce, and advertising and marketing.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch, and spending time with his family.
