
Creating AI art with Stable Diffusion, ComfyUI and ControlNet


If you’ve been enjoying creating art with Stable Diffusion, or with one of the other AI models such as Midjourney or DALL-E 3, which OpenAI recently added to ChatGPT and which is available to everyone for free via the Microsoft Image Creator website, you might be interested in a new workflow created by Laura Carnevali that combines Stable Diffusion, ComfyUI and multiple ControlNet models.

Stable Diffusion XL (SDXL), created by the development team at Stability AI, is well-known for its amazing image generation capabilities. While SDXL alone is impressive, its integration with ComfyUI elevates it to an entirely new level of user experience. ComfyUI serves as the perfect toolkit for anyone who wants to dabble in the art of image generation, providing an array of features that make the process more accessible, streamlined, and endlessly customizable.
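
If you prefer to script SDXL directly rather than drive it through a graphical interface, a minimal text-to-image sketch using the Hugging Face diffusers library could look like the following; the prompt and output file name are purely illustrative.

```python
# Minimal SDXL text-to-image sketch with the Hugging Face diffusers library.
# Assumes torch, diffusers and transformers are installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # official SDXL base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a watercolour painting of a lighthouse at dawn",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]

image.save("sdxl_lighthouse.png")
```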

AI art generation using Stable Diffusion, ComfyUI and ControlNet

ComfyUI operates on a nodes/graph/flowchart interface, where users can experiment and create complex workflows for their SDXL projects. What sets it apart is that you don’t have to write a single line of code to get started. It fully supports various versions of Stable Diffusion, including SD1.x, SD2.x, and SDXL, making it a versatile tool for any project.


SDXL offers a plethora of ways to modify and enhance your art. From inpainting, which allows you to make internal edits, to outpainting for extending the canvas, and image-to-image transformations, the platform is designed for flexibility. Yet, it’s ComfyUI that truly provides the sandbox environment for experimentation and control.
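
As a rough sketch of the inpainting idea mentioned above (outside ComfyUI), the diffusers library offers an inpainting pipeline that repaints only a masked region; the SDXL inpainting checkpoint ID is the publicly documented one, while the input file names and prompt are placeholders.

```python
# Inpainting sketch with diffusers: repaint only the masked region of an image.
# Input file names and the prompt are illustrative placeholders.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("portrait.png")        # original image
mask_image = load_image("portrait_mask.png")   # white pixels mark the area to repaint

result = pipe(
    prompt="short curly red hair, studio lighting",
    image=init_image,
    mask_image=mask_image,
    strength=0.85,  # how aggressively the masked area is regenerated
).images[0]

result.save("portrait_inpainted.png")
```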

ComfyUI node-based GUI for Stable Diffusion

The system is designed for efficiency, incorporating an asynchronous queue system that improves the speed of execution. One of its standout features is its optimization capability; it only re-executes the changed parts of the workflow between runs, saving both time and computational power. If you are resource-constrained, ComfyUI comes equipped with a low-vram command line option, making it compatible with GPUs that have less than 3GB of VRAM. It’s worth mentioning that the system can also operate on CPUs, although at a slower speed.
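
ComfyUI exposes these memory savings through its own launch options rather than code. For readers scripting generation in Python instead, the diffusers library offers roughly analogous memory-saving switches, sketched below; note these are diffusers features, not ComfyUI’s mechanism.

```python
# Memory-saving options for resource-constrained GPUs (diffusers, not ComfyUI itself).
# Each call trades some speed for lower VRAM use; exact savings depend on the model.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)

pipe.enable_model_cpu_offload()   # keep submodules on the CPU until they are needed
pipe.enable_attention_slicing()   # compute attention in smaller chunks
pipe.enable_vae_tiling()          # decode the latent image tile by tile

image = pipe("a tiny cabin in a snowy forest").images[0]
image.save("cabin.png")
```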

The types of models and checkpoints that ComfyUI can load are quite expansive. From standalone VAEs and CLIP models to ckpt, safetensors, and diffusers, you have a wide selection at your fingertips. It’s rich in additional features like Embeddings/Textual inversion, Loras, Hypernetworks, and even unCLIP models, offering you a holistic environment for creating and experimenting with AI art.
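
To give a sense of what loading these extra model types looks like in code, here is a hedged sketch using the diffusers helpers for LoRAs and textual-inversion embeddings; the LoRA repository and weight name are placeholders, while the sd-concepts-library embedding is a public example.

```python
# Loading a LoRA and a textual-inversion embedding on top of a base checkpoint.
# The LoRA repository ID and weight name below are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# LoRA: a small set of fine-tuned weights that nudges the base model toward a style.
pipe.load_lora_weights("some-user/example-style-lora", weight_name="example_style.safetensors")

# Textual inversion: a learned embedding addressed by a trigger token in the prompt.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipe("a photo of a <cat-toy> on a bookshelf").images[0]
image.save("cat_toy.png")
```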

One of the more intriguing features is the ability to load full workflows, right from generated PNG files. You can save or load these workflows as JSON files for future use or collaboration. The nodes interface isn’t limited to simple tasks; you can create intricate workflows for more advanced operations like high-resolution fixes, Area Composition, and even model merging.
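
The drag-and-drop reload works because ComfyUI embeds the workflow graph in the metadata of the PNGs it saves. A small sketch of reading that metadata with Pillow follows, assuming the graph is stored as a JSON string under a "workflow" text chunk, as in current ComfyUI builds; the file name is a placeholder.

```python
# Read the workflow graph that ComfyUI embeds in its output PNGs.
# Assumes the graph is stored as a JSON string under the "workflow" text chunk.
import json
from PIL import Image

with Image.open("ComfyUI_00001_.png") as img:
    workflow_json = img.info.get("workflow")  # PNG text chunks surface in .info

if workflow_json is None:
    print("No embedded workflow found in this image.")
else:
    workflow = json.loads(workflow_json)
    print(f"Workflow contains {len(workflow.get('nodes', []))} nodes")
    # Save it as a standalone JSON file that can be loaded back into ComfyUI.
    with open("workflow.json", "w") as f:
        f.write(workflow_json)
```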

ComfyUI doesn’t fall short when it comes to image quality enhancements. It supports a range of upscale models like ESRGAN and its variants, SwinIR, Swin2SR, among others. It also allows inpainting with both regular and specialized inpainting models. Additional utilities like ControlNet, T2I-Adapter, and Latent previews with TAESD add more granularity to your customization efforts.

On top of all these features, ComfyUI starts up incredibly quickly and operates fully offline, ensuring that your workflow remains uninterrupted. The marriage between Stable Diffusion XL and ComfyUI offers a comprehensive, user-friendly platform for AI-based art generation. It blends technological sophistication with ease of use, catering to both novices and experts in the field. The versatility and depth of features available in ComfyUI make it a must-try for anyone serious about the craft of image generation.


How to use Stable Diffusion and ControlNet to customize AI images


AI artists searching for a way to more accurately control AI art creation in Stable Diffusion might be interested in learning how to use ControlNet, a neural network model for Stable Diffusion that has transformed the way AI artists generate and manipulate images. This model allows users to copy compositions or human poses from a reference image, providing a level of precision that was previously unattainable. This article will delve into the intricacies of using ControlNet, focusing on the image prompt adapter, and how it can be utilized to customize AI images.

ControlNet is a neural network model designed to control Stable Diffusion models. It adds an extra layer of conditioning on top of the text prompt, which is the most basic way of steering Stable Diffusion models. This extra conditioning can take various forms, allowing users to manipulate AI-generated images with precision. The image prompt adapter (IP-Adapter) used alongside ControlNet is a powerful tool that can be used to create a person and a background around an AI-generated face, change the age, hair type and color of a person in a photo, or alter elements in digital art.
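
As a rough sketch of what this extra conditioning looks like in code (outside ComfyUI), the diffusers library pairs a ControlNet model with a Stable Diffusion pipeline. The Canny-edge ControlNet below is one common, publicly available choice; the reference file name and prompt are placeholders.

```python
# ControlNet sketch: condition SDXL on the Canny edges of a reference image,
# preserving the reference composition while the prompt changes the content.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Extract edges from the reference image so the composition is preserved.
reference = load_image("reference.png")
gray = cv2.cvtColor(np.array(reference), cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    prompt="a cosy cottage in autumn, golden hour",
    image=edges,                        # the ControlNet conditioning image
    controlnet_conditioning_scale=0.7,  # how strongly the edges constrain the result
).images[0]
image.save("cottage_controlnet.png")
```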

How to customise AI art with SDXL and ControlNet

ControlNet and its image prompt adapter provide a powerful tool for manipulating and generating AI images. Whether you’re looking to change elements in digital art, regenerate AI images, or create a whole body and environment from a face image, ControlNet offers a level of precision and control that was previously unattainable. With the right knowledge and tools, the possibilities for image manipulation and generation are virtually limitless.


To use ControlNet this way, users need to download three IP adapter models from Hugging Face, as well as the IP adapter plus face model. The IP adapter model is an image prompt model for text-to-image diffusion models like Stable Diffusion, and it can be used in combination with other ControlNet models.
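
The article does not show the loading step itself, but as a hedged sketch, the diffusers library can attach an IP-Adapter to a pipeline. The repository, subfolder and weight names below follow the public h94/IP-Adapter release at the time of writing and should be checked against the current files on Hugging Face; the reference image and prompt are placeholders.

```python
# IP-Adapter sketch: use a reference image as an additional "image prompt".
# Repository, subfolder and weight names follow the public h94/IP-Adapter release.
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name="ip-adapter_sdxl.bin",
)
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference image steers the result

face_reference = load_image("face.png")  # placeholder reference image
image = pipe(
    prompt="full body portrait, walking through a city at night",
    ip_adapter_image=face_reference,
).images[0]
image.save("ip_adapter_result.png")
```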

The workflow for using the IP adapter model involves regenerating a reference AI image in SDXL and adding elements to the final image using positive prompts. This process allows users to change elements in digital art using ControlNet. For instance, users can use inpainting to change the hair of a base AI image and inpaint the face from another base image. This technique provides a level of control over the subject’s body and face angle, allowing users to change the subject of an image without inpainting.

ControlNet models can also be used in combination with other models. For example, the Rev animated checkpoint can be used to take an AI-generated vector of a house and regenerate it as anime-style art. This technique can be used to manipulate art in various environments and weather conditions.
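
A hedged sketch of that kind of restyling outside ComfyUI is an image-to-image pass in diffusers, which takes the original picture plus a style prompt; the anime-style checkpoint ID and file names below are placeholders for whichever checkpoint you actually use.

```python
# Image-to-image sketch: regenerate an existing picture in a different style.
# The checkpoint ID and file names are placeholders; "strength" controls how far
# the result is allowed to drift from the original image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "some-user/example-anime-checkpoint",  # placeholder for an anime-style checkpoint
    torch_dtype=torch.float16,
).to("cuda")

house = load_image("vector_house.png")

image = pipe(
    prompt="anime style illustration of a house, heavy rain, dramatic clouds",
    image=house,
    strength=0.6,
).images[0]
image.save("anime_house.png")
```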

One of the most powerful features of ControlNet is the ability to create a whole body and environment from a face image. This is achieved by using the plus face model together with a second ControlNet unit running OpenPose. This feature gives users more control over the subject’s body and face angle, allowing them to create more realistic and detailed images. To learn more about ControlNet and how to install it, jump over to the Stability AI website.
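
A hedged sketch of that combination follows, pairing the plus-face IP-Adapter with an OpenPose ControlNet on a Stable Diffusion 1.5 base; the model IDs follow public Hugging Face releases, while the face image, pose skeleton and prompt are placeholders.

```python
# Combine a face "image prompt" (IP-Adapter plus-face) with an OpenPose ControlNet
# so a whole body and scene are generated around a reference face in a chosen pose.
# Model IDs follow public Hugging Face releases; input file names are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-plus-face_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)

face = load_image("face.png")                # reference face image
pose = load_image("openpose_skeleton.png")   # precomputed OpenPose skeleton image

image = pipe(
    prompt="a hiker standing on a mountain ridge at sunrise, full body",
    image=pose,              # ControlNet conditioning (pose)
    ip_adapter_image=face,   # IP-Adapter conditioning (identity)
).images[0]
image.save("face_plus_pose.png")
```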
