真实感SDXL ComfyUI ULTIMATE Workflow

The original author of this model is "@BeTheRobot". We are committed to promoting the sharing and exchange of original models, so we have reposted this model for non-commercial learning and exchange. Users of this model must comply with the usage license declared by the original author.

If you are the original author of this model, please contact us; we look forward to your joining and will transfer the model to your account as soon as possible. If you do not wish this model to be shared here, we will likewise honor your wishes and take it down promptly.

We respect every original model author and look forward to growing together with each of them!

Notice: If this reposted model gives rise to an intellectual property dispute or other infringement, we will take the model down immediately and will not pursue liability against the original author.

SDXL ComfyUI ULTIMATE Workflow

Everything you need to generate amazing images! Packed full of useful features that you can enable and disable on the fly. Contains multi-model / multi-LoRA support and multi-upscale options with img2img and the Ultimate SD Upscaler. This workflow now also includes FaceDetailer, with support for both SDXL 1.0 and SD 1.5 models.

Community Discord

Come join me in my Discord server to ask any questions you may have, make suggestions for future versions of this workflow, or post your creations!

https://discord.gg/3XBt487zn

v3.0

  • Redesigned UI that is easier to use and understand

  • Canny & Depth ControlNet

  • Inpainting

  • Ultimate SD Upscaling

  • img2img Upscaling

v2.4

  • Added ability to adjust image contrast

  • Added img2img generation as optional input to Ultimate SD Upscaler

  • Fixed alternative VAE selector (0 = base | 1 = alternative)

  • Replaced image aspect ratio selector to support custom sizes

  • Adjusted AfterDetailer module default settings to improve quality

v2.3

  • Support for both SDXL Base-only and Base + Refiner flows

  • Support for alternate VAE loading

  • Support for loading outside images for upscaling

v2.1

  • Fixed bug where it would always regenerate images even when it didn't need to

v2.0

  • Major overhaul to the design of the workflow

    • Module-based, making it easier to understand what's going on

    • Custom controls per module

  • Inpainting!!!

    • Draw custom masks or use the segmentation model to auto-generate masks for your selected areas

  • Face, Hand, and Person Detailer

    • The FaceDetailer module is now the AfterDetailer module and supports all the above

  • Global vs Local Seeds

    • You can standardize the seed across all the modules or use separate seeds per module

  • More LoRA / Lycoris support

    • I swear it works this time...

v1.2

  • Fixed incorrect linkage for img2img Face Detailer (was using txt2img output)

  • Added LoRA / Lycoris support for Face Detailer

  • Added additional notes for clarity

ControlNet

Every module has its own ControlNet section, where you can copy/paste the preprocessed image and select the type of ControlNet you want to run. You can also adjust the strength, start percent, and end percent. The ControlNet applies to the module when it generates its output image.

Toggles

With a new toggle-based system, you can change what runs and how it runs in seconds. Not only is it easy to use, but it also leads to far less confusion than other "switches" built into ComfyUI.

Img2Img Upscaling

This is the traditional way to upscale images, and can be helpful to increase the details while upscaling. It can be a bit heavy on resource utilization at higher resolutions, but will create beautiful generations.

Ultimate SD Upscaler

With the Ultimate SD Upscaler, you can push your images to much higher resolution without needing a supercomputer to run it. 2x, 3x, 4x, you can do it all! This uses a tiled approach to upscaling, where each tile is the original txt2img resolution.
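A sketch of the tiled-upscale arithmetic described above (illustrative only; this is not the Ultimate SD Upscale node's actual code, and the function name is mine). Because each tile is rendered at the original txt2img resolution, VRAM use stays roughly constant regardless of the output scale:

```python
import math

def tile_grid(base_w, base_h, scale, tile_w=None, tile_h=None):
    """Return (cols, rows, total_tiles) for an upscale by `scale`."""
    tile_w = tile_w or base_w   # default tile size = original resolution
    tile_h = tile_h or base_h
    out_w, out_h = base_w * scale, base_h * scale
    cols = math.ceil(out_w / tile_w)
    rows = math.ceil(out_h / tile_h)
    return cols, rows, cols * rows

# A 1024x1024 txt2img image upscaled 2x is processed as a 2x2 grid of
# 1024x1024 tiles, and 4x as a 4x4 grid; each tile costs about as much
# VRAM as the original generation.
print(tile_grid(1024, 1024, 2))  # (2, 2, 4)
print(tile_grid(1024, 1024, 4))  # (4, 4, 16)
```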

Multi-Clip Support

With the release of SDXL 0.9, and subsequently 1.0, we were introduced to "text_g" and "text_l", two separate prompt inputs. This has some benefits, namely cleaner and simpler prompts, but it's not for everyone. This workflow gives you the choice to use the SDXL prompt style or the original SD prompt style. Switch back and forth to test your prompts with the "CLIP Type" switch.

After Detailer

Ever generate an amazing image, only to have the face be sub-par? With the Face Detailer extension, you can swap that face out with a new one! This extension has its own positive and negative prompts, as well as its own seed and sampler, so you can tweak it until it's just right. To add to the customizability, it also supports swapping between SDXL models and SD 1.5 models. With this, you can get the faces you've grown to love while benefiting from the highly detailed SDXL model.

Inpainting

Just like Automatic1111, you can now do custom inpainting! Draw your own mask anywhere on your image and inpaint anything you want. If you don't feel like drawing your mask, but still want more customizability than the AfterDetailer module, use the Segmentation functionality to auto-draw a mask for your selected area of the image.

LoRA / Lycoris Models

This workflow fully supports loading up to 2 LoRA / Lycoris models for txt2img, img2img, and Ultimate SD Upscaler. It also supports 1 LoRA / Lycoris model for AfterDetailer and Inpainting. These can be quickly enabled and disabled for rapid prototyping and testing.

Usage

Installation

  1. Download the included zip file

  2. Extract the zip file (I suggest creating a directory for ComfyUI workflows)

  3. Launch ComfyUI (if not already)

  4. Click "Load" on ComfyUI and select the workflow .json file

Extensions

You may not have all of the extensions necessary to run this workflow. No need to worry; there is a simple solution:

  1. If you do not already have ComfyUI Manager installed, please download that here and follow their setup guide: https://civitai.com/models/71980?modelVersionId=115220

  2. Once it is installed and you have relaunched ComfyUI, open the manager and click "Install Missing Custom Nodes"

  3. Wait for ComfyUI to finish installing all missing nodes

  4. To get the face detection models, download the models through the manager with the "Install Models" button. At minimum, you will want to get:
    - face_yolov8n_v2
    - sam_vit_b_01ec64

  5. Relaunch ComfyUI

  6. Make sure to manually select your models for the FaceDetailer section, otherwise it may not find them.

Checkpoint Models

Everyone has their own favorite models, so feel free to use them! That being said, for SDXL 1.0 purposes, I highly suggest getting the DreamShaperXL model. It is a MAJOR step up from the standard SDXL 1.0 Base model, and it does not require a separate SDXL 1.0 Refiner model. As for the FaceDetailer, you can use the SDXL model or any other model of your choice. Play around with them to find what works best for you.

DreamShaper XL Model: https://civitai.com/models/112902/dreamshaper-xl10

ControlNet Models

SDXL Canny: https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0

SDXL Depth: https://huggingface.co/SargeZT/controlnet-v1e-sdxl-depth

SDXL OpenPose: https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0

Full Extension List

Download the extension directly from GitHub if you are unable to use the ComfyUI Manager for downloads due to restrictions.

  • failfast-comfyui-extensions: Straight Lines (and more)

  • Efficiency Nodes for ComfyUI: Evaluate Strings

  • ComfyUI_Comfyroll_CustomNodes: CR Load LoRA, CR Module Pipe Loader, CR Module Input

  • ComfyUI Impact Pack: CheckpointLoaderSimple, MaskToImage, FaceDetailer, UltralyticsDetectorProvider, SAMLoader

  • WAS Node Suite: Text Multiline

  • UltimateSDUpscale: UltimateSDUpscale

  • SDXLCustomAspectRatio: SDXLAspectRatio

Detection Models

face_yolov8n_v2.pt

Instruction Guide

1. Controls

1.1 ON / OFF Switches

For every module (e.g., txt2img, img2img) there is an ON / OFF switch labeled with the section name. Switching a module to OFF means it will not run; switching it to ON means it will run. This is pretty self-explanatory, but keep in mind the effects downstream. For example, if you turn off txt2img, neither img2img nor the Ultimate SD Upscaler will have a generated image to run with.
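The downstream dependency described above can be sketched as a toy lookup (the module names come from the workflow; the dependency table and function are my illustration, not actual workflow code):

```python
# Modules that consume the txt2img output cannot run without it.
DEPENDS_ON = {
    "img2img": "txt2img",
    "ultimate_sd_upscaler": "txt2img",
}

def can_run(module, switches):
    """Return True if `module` is ON and its upstream module (if any) is ON."""
    parent = DEPENDS_ON.get(module)
    if parent and not switches.get(parent, False):
        return False  # no generated image to feed this module
    return switches.get(module, False)

switches = {"txt2img": False, "img2img": True}
print(can_run("img2img", switches))  # False: txt2img is off upstream
```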

1.2 Checkpoint Switches

Every module has a Checkpoint switch that gives you the ability to switch between an SDXL Checkpoint and an Alternative Checkpoint. Technically, both could be SDXL, both could be SD 1.5, or it can be a mix of the two. This gives you the ability to adjust on the fly, and even do txt2img with SDXL and then img2img with SD 1.5 (which acts as a refiner).

1.3 Seed Switches

This workflow has a Global Seed as well as Local Seeds per module. You can choose to use either the Global or Local seed for each module, giving you a lot of control over what you generate.

1.4 Prompt Type

With SDXL, there is the new concept of TEXT_G and TEXT_L with the CLIP Text Encoder. As I wanted to make this workflow more open-ended, you can choose to use this new concept (CLIP 1) or the original SD 1.5 style prompting (CLIP 2). Both positive prompts are used for CLIP 1 and CLIP 2, just in different ways. With the CLIP 1 method, the Primary prompt is TEXT_G and the Secondary prompt is TEXT_L. With the CLIP 2 method, the Primary prompt and the Secondary prompt are concatenated and used as the single text input to the CLIP Text Encoder.
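The two routing modes can be sketched like this (illustrative only; the function and dictionary keys are mine, chosen to mirror the TEXT_G / TEXT_L naming, not actual node identifiers from the workflow):

```python
def build_clip_inputs(primary, secondary, clip_type):
    """Route the Primary/Secondary prompts per the selected CLIP Type."""
    if clip_type == 1:
        # CLIP 1 (SDXL style): two separate encoder inputs
        return {"text_g": primary, "text_l": secondary}
    # CLIP 2 (SD 1.5 style): one concatenated prompt, single text input
    return {"text": f"{primary}, {secondary}"}

print(build_clip_inputs("a castle on a cliff", "oil painting", 1))
# {'text_g': 'a castle on a cliff', 'text_l': 'oil painting'}
print(build_clip_inputs("a castle on a cliff", "oil painting", 2))
# {'text': 'a castle on a cliff, oil painting'}
```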

1.5 Stages

This is specific to the txt2img stage, where you can select from one or two stages. The one stage option will only run the Base checkpoint for your image generation. The two stage option will run the Base checkpoint for the first X% of the steps, and then run the Refiner checkpoint for the remaining steps. This gives you the option to do the full SDXL Base + Refiner workflow or the simpler SDXL Base-only workflow.
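The two-stage step split works out to simple arithmetic, sketched below (illustrative; in the actual workflow the handoff is done by the samplers' step ranges, and the function name is mine):

```python
def split_steps(total_steps, base_fraction):
    """Split a sampling run: Base handles the first `base_fraction` of
    steps, the Refiner finishes the rest. Returns two (start, end) ranges."""
    base_end = round(total_steps * base_fraction)
    return (0, base_end), (base_end, total_steps)

# 30 steps with the Base running the first 80%:
print(split_steps(30, 0.8))  # ((0, 24), (24, 30))
```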

1.6 SDXL VAE (Base / Alt)

Choose between using the built-in VAE from the SDXL Base Checkpoint (0) or the SDXL Base Alternative VAE (1). Adjust the "boolean_number" field to the corresponding VAE selection.

1.8 Image Source

Choose between using the txt2img output or the loaded image as the input to the module.

1.8.1 txt2img Input

Selecting this will use the output of the txt2img module as the input for the current module. Keep in mind that for this to work, the txt2img module needs to be switched to the "ON" state, even if it is not generating a new image.

1.8.2 Load Image Input

Selecting this will use the image you have selected in the "Load Image" node for the current module. You can load any image you want into this node, or copy/paste an image into the node using the ComfyUI Clipspace.

2. AfterDetailer

This allows you to alter specific portions of an image detected by the Ultralytics models. This includes face detection, hand detection, and person detection. This is basically just inpainting, with the masks drawn for you.

2.1 Face Detailer

Select one of the bbox/face_*.pt models. Personally, I like using the bbox/face_yolov8n_v2.pt model for this.

2.2 Hand Detailer

Select one of the bbox/hand_*.pt models. I'd suggest the bbox/hand_yolov8n.pt model.

2.3 Person Detailer

Select one of the segm/person_*.pt models. I haven't played with this much, so you will need to experiment with which model works best for you.

3. Inpainting

3.1 How? Magic...

Here is something you may not know... If you right click on the Inpaint Image node, you will have an option to "Open in Mask Editor". This Mask Editor gives you the ability to draw your own mask directly on the image, similar to A1111 inpainting. Once you are done drawing your mask, click the "Save to Node" button to commit your mask to the image. This is now the area that will be inpainted!

3.2 How to Load an Image?

3.2.1 The Basic Way

On the Inpaint Image node, there is a button to "choose file to upload". Once you click that, you will be presented with the standard file explorer, where you can select your image.

3.2.2 Copy / Paste

With ComfyUI, there is the concept of the Clipspace, which is essentially like a clipboard, keeping track of all your copies and allowing you to paste them. If you just generated an image that you want to Inpaint, right click on the image and select "Copy (Clipspace)". You can then right click on the Inpaint Image node and select "Paste (Clipspace)".

3.3 Segmentation! More Magic...

If the "Open in Mask Editor" didn't blow your mind, segmentation just might. When you right click on the Inpaint Image node, instead of selecting "Open in Mask Editor", select "Open in SAM Detector". The UI will look similar, but in this case you will only need to put a dot down in a location you want to have detected and then click "Detect". Once you have verified the SAM Detector has auto-masked the right part(s) of the image, you can click "Save to Node" to apply the mask to the image.

4. Contrast

This node, situated above the Save Image node, gives you the ability to increase the contrast before saving. The contrast is not increased for the image as it flows through the workflow, just at the time of saving the image.

4.1 Settings

Keep the "blend_mode" at "overlay". For the "blend_factor", use this guide for the settings:

off = 0.0

low = 0.1

medium = 0.2

high = 0.3+
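A per-pixel sketch of why the overlay self-blend raises contrast (an approximation of what an image-blend node does, written by me for illustration; pixel values are in [0, 1]). Overlaying an image on itself darkens values below 0.5 and brightens values above it, and "blend_factor" controls how far toward that result the pixel moves:

```python
def overlay(a, b):
    """Standard overlay blend for two values in [0, 1]."""
    return 2 * a * b if a < 0.5 else 1 - 2 * (1 - a) * (1 - b)

def apply_contrast(pixel, blend_factor):
    """Mix a pixel with its overlay-on-itself result by `blend_factor`."""
    blended = overlay(pixel, pixel)
    return pixel * (1 - blend_factor) + blended * blend_factor

# At the "medium" setting (0.2), a dark pixel gets darker and a light
# pixel gets lighter, increasing overall contrast:
print(apply_contrast(0.25, 0.2))  # 0.225
print(apply_contrast(0.75, 0.2))  # 0.775
```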


AI Model Details

Model type: Other
Base model: XL
Trigger words:
File format: zip
Last updated: 2024-02-10 01:27
Repost source: liblib
