Comfyanonymous examples

ComfyUI

ComfyUI is a powerful and modular diffusion model GUI, API and backend with a graph/nodes interface. Its nodes/graph/flowchart interface lets you design and execute advanced Stable Diffusion pipelines and experiment with complex workflows without needing to code anything. It fully supports SD1.x, SD2.x, SDXL, Stable Video Diffusion, Stable Cascade, SD3 and Stable Audio, and it uses an asynchronous queue system; a portable standalone build is also available. ComfyUI was created in January 2023 by comfyanonymous, who built the tool to learn how Stable Diffusion works, and it is written by comfyanonymous and other contributors. To give you an idea of how powerful it is: StabilityAI, the creators of Stable Diffusion, use ComfyUI to test Stable Diffusion internally, and they have since hired comfyanonymous to help them work on internal tools.

GitHub repo: https://github.com/comfyanonymous/ComfyUI

A few UI notes:

Ctrl + C / Ctrl + V: copy and paste selected nodes (without maintaining connections to outputs of unselected nodes).
Ctrl + C / Ctrl + Shift + V: copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes).

For custom frontend scripts, capturing UI events works just like you would expect: find the UI element in the DOM and add an eventListener. setup() is a good place to do this, since the page has fully loaded by then.

ComfyUI Examples

The examples repo (comfyanonymous/ComfyUI_examples) contains examples of what is achievable with ComfyUI. All the images in that repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. For some workflow examples and to see what ComfyUI can do, the sections below walk through the main ones.

Img2Img Examples

These are examples demonstrating how to do img2img. Img2Img works by loading an image, converting it to latent space with the VAE and then sampling on it with a denoise lower than 1. Note that in ComfyUI txt2img and img2img are the same node. You can load the example images in ComfyUI to get the full workflows.
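
To make the denoise point concrete, here is a minimal sketch of the img2img core of a workflow in ComfyUI's API (programmatic) JSON format, written as a Python dict. The node class names and input names (LoadImage, VAEEncode, KSampler, denoise and so on) are ComfyUI's built-in ones, but the node ids, the image file name and the checkpoint/prompt nodes referenced as "4", "6" and "7" are placeholders assumed for this sketch, not values taken from the examples above.

```python
# Minimal img2img fragment in ComfyUI API-format JSON, written as a Python dict.
# Nodes "4", "6" and "7" (checkpoint loader, positive and negative CLIPTextEncode)
# are assumed to exist elsewhere in the graph.
img2img_fragment = {
    "10": {  # load the input image from ComfyUI's input folder
        "class_type": "LoadImage",
        "inputs": {"image": "example.png"},  # placeholder file name
    },
    "11": {  # encode the pixels into latent space with the checkpoint's VAE
        "class_type": "VAEEncode",
        "inputs": {"pixels": ["10", 0], "vae": ["4", 2]},
    },
    "12": {  # sample on the encoded latent; denoise < 1.0 keeps part of the
        # original image, which is what makes this img2img rather than txt2img
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],
            "positive": ["6", 0],
            "negative": ["7", 0],
            "latent_image": ["11", 0],
            "seed": 1,
            "steps": 20,
            "cfg": 8.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 0.87,  # lower values stay closer to the input image
        },
    },
}

if __name__ == "__main__":
    import json
    print(json.dumps(img2img_fragment, indent=2))
```

Swap the VAEEncode latent for an EmptyLatentImage and set denoise to 1.0 and the very same KSampler does plain txt2img, which is why txt2img and img2img are the same node in ComfyUI.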

2 Pass Txt2Img (Hires Fix) Examples

These are examples demonstrating how you can achieve the "Hires Fix" feature. Hires fix is just creating an image at a lower resolution, upscaling it and then sending it through img2img.

Upscale Model Examples

Here is an example of how to use upscale models like ESRGAN. Put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.
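
As a sketch of the upscale-model wiring described above, here is the corresponding API-format fragment as a Python dict. The class and input names (UpscaleModelLoader, ImageUpscaleWithModel, SaveImage) are ComfyUI built-ins; the model file name and the node ids, including the "20" image source, are placeholders assumed for the sketch.

```python
# Upscaling an image with an ESRGAN-style model in ComfyUI API-format JSON.
# Node "20" is assumed to be a LoadImage (or VAEDecode) node elsewhere in the graph.
upscale_fragment = {
    "30": {  # loads a file from the models/upscale_models folder
        "class_type": "UpscaleModelLoader",
        "inputs": {"model_name": "4x_ESRGAN_example.pth"},  # placeholder file name
    },
    "31": {  # runs the upscale model over an IMAGE output
        "class_type": "ImageUpscaleWithModel",
        "inputs": {"upscale_model": ["30", 0], "image": ["20", 0]},
    },
    "32": {  # writes the upscaled result to the output folder
        "class_type": "SaveImage",
        "inputs": {"images": ["31", 0], "filename_prefix": "upscaled"},
    },
}
```

For a hires-fix style second pass you would typically VAE-encode the upscaled image (or upscale the latent directly) and run it through another KSampler with a denoise well below 1, exactly as in the img2img sketch earlier.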

Textual Inversion Embeddings Examples

Here is an example of how to use Textual Inversion/Embeddings. To use an embedding, put the file in the models/embeddings folder, then use it in your prompt the way the SDA768.pt embedding is used in the example picture. Note that you can omit the filename extension, so these two are equivalent: embedding:SDA768.pt and embedding:SDA768.

Hypernetwork Examples

Hypernetworks are patches applied on the main MODEL, so to use them put them in the models/hypernetworks directory and use the Hypernetwork Loader node. You can apply multiple hypernetworks by chaining multiple Hypernetwork Loader nodes in sequence.

Lora Examples

These are examples demonstrating how to use Loras. All LoRA flavours (Lycoris, loha, lokr, locon, etc.) are used this way. In most UIs adjusting the LoRA strength is only one number, and setting the lora strength to 0.8 for example is the same as setting both strength_model and strength_clip to 0.8. The reason you can tune both in ComfyUI is that the CLIP and MODEL/UNET parts of the LoRA will most likely have learned different concepts, so tweaking them separately can give better results.
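
To illustrate the two strengths, here is a minimal API-format sketch of a LoraLoader node, again as a Python dict. The class and input names are those of ComfyUI's built-in LoraLoader; the lora file name, the node ids and the checkpoint loader referenced as node "4" are placeholders assumed for the sketch.

```python
# A LoRA applied between the checkpoint loader and the rest of the graph,
# in ComfyUI API-format JSON. Node "4" is assumed to be a CheckpointLoaderSimple.
lora_fragment = {
    "14": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "example_lora.safetensors",  # placeholder file in models/loras
            "strength_model": 0.8,  # strength applied to the MODEL/UNET part
            "strength_clip": 0.8,   # strength applied to the CLIP part
            "model": ["4", 0],
            "clip": ["4", 1],
        },
    },
}
# Downstream nodes should take MODEL from ["14", 0] and CLIP from ["14", 1]
# instead of from the checkpoint loader so that the patched weights are used.
```

Setting both strengths to 0.8 reproduces the single-slider behaviour of other UIs; lowering only strength_clip (or only strength_model) is how you tune the two halves separately.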

LCM Examples

LCM models are special models that are meant to be sampled in very few steps. LCM loras are loras that can be used to convert a regular model to a LCM model. The LCM SDXL lora can be downloaded from the link on the examples page; download it, rename it to lcm_lora_sdxl.safetensors and put it in your ComfyUI/models/loras directory.

ControlNet and T2I-Adapter Examples

Here is a simple example of how to use controlnets; it uses the scribble controlnet and the AnythingV3 model. Each ControlNet/T2I adapter needs the image that is passed to it to be in a specific format, like depthmaps, canny maps and so on depending on the specific model, if you want good results. Note that in these examples the raw image is passed directly to the ControlNet/T2I adapter. T2I-Adapters are much more efficient than ControlNets, so I highly recommend them.

GLIGEN Examples

Text box GLIGEN: the text box GLIGEN model lets you specify the location and size of multiple objects in the image. To use it properly you should write your prompt normally, then use the GLIGEN Textbox Apply nodes to specify where you want certain objects/concepts in your prompt to be in the image. Pruned versions of the supported GLIGEN model files are linked from the examples page; put the GLIGEN model files in the ComfyUI/models/gligen directory.

Area Composition Examples

These are examples demonstrating the ConditioningSetArea node, done with the WD1.5 beta 3 illusion model. One example image contains 4 different areas: night, evening, day, morning. Another example contains 4 images composited together: 1 background image and 3 subjects (area composition with Anything-V3, plus a second pass with AbyssOrangeMix2_hard). Since general shapes like poses and subjects are denoised in the first sampling steps, this lets us, for example, position subjects with specific poses anywhere on the image while keeping a great amount of consistency. With the positions of the subjects changed, you can see that the subjects composited from different noisy latent images actually interact with each other because "holding hands" was in the prompt. You can load these images in ComfyUI to get the full workflows.

unCLIP Model Examples

unCLIP models are versions of SD models that are specially tuned to receive image concepts as input in addition to your text prompt; it basically lets you use images in your prompt. Images are encoded using the CLIPVision these models come with, and the concepts extracted by it are passed to the main model when sampling. Note that the strength option can be used to increase the effect each input image has on the final output, and you can also combine this with regular text prompts.

Inpaint Examples

In this example we will be using an image that has had part of it erased to alpha with GIMP; download it and place it in your input folder. The alpha channel is what we will be using as a mask for the inpainting. If using GIMP, make sure you save the values of the transparent pixels for best results.

Image Edit Model Examples

Edit models, also called InstructPix2Pix models, are models that can be used to edit images using a text prompt. Here is the workflow for the Stability SDXL edit model; the checkpoint download link is on the examples page. In the example the positive text prompt is zeroed out in order for the final output to follow the input image more closely.

Model Merging Examples

The first example is a basic merge between two different checkpoints. In ComfyUI the saved checkpoints contain the full workflow used to generate them, so they can be loaded in the UI just like images to get the full workflow that was used to create them.

SDXL Examples

The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. The only important thing is that for optimal performance the resolution should be set to 1024x1024, or to other resolutions with the same amount of pixels but a different aspect ratio; for example, 896x1152 or 1536x640 are good resolutions.

SDXL Turbo Examples

SDXL Turbo is a SDXL model that can generate consistent images in a single step. You can use more steps to increase the quality.

SD3 Examples

The SD3 checkpoints that contain text encoders, sd3_medium_incl_clips.safetensors (5.5GB) and sd3_medium_incl_clips_t5xxlfp8.safetensors (10.1GB), can be used like any regular checkpoint in ComfyUI. The difference between these two checkpoints is that the first contains only 2 text encoders, CLIP-L and CLIP-G, while the other one also includes the T5-XXL text encoder (in fp8).

Flux Examples

Flux is a family of diffusion models by Black Forest Labs, available in Flux.1 Pro, Flux.1 Dev and Flux.1 Schnell variants, with cutting-edge performance in image generation and top-notch prompt following, visual quality, image detail and output diversity. For the regular full version there are several files to download; for an easy-to-use single-file version that you can use directly in ComfyUI there is an FP8 checkpoint version. XLab and InstantX + Shakker Labs have released ControlNets for Flux: the InstantX Canny model (rename it to instantx_flux_canny.safetensors for the example), a Depth controlnet and a Union controlnet are linked from the examples page.

AuraFlow Examples

AuraFlow is one of the only true open source models, with both the code and the weights being under a FOSS license. Download the aura_flow checkpoint linked on the examples page, put it in your ComfyUI/checkpoints directory, and you can then load up the example image in ComfyUI to get the workflow.

Hunyuan DiT Examples

Hunyuan DiT is a diffusion model that understands both English and Chinese. Download the hunyuan_dit_1.x checkpoint linked on the examples page to use it.

Stable Cascade Examples

Stable Cascade is a 3 stage process: first a low resolution latent image is generated with the Stage C diffusion model, this latent is then upscaled using the Stage B diffusion model, and this upscaled latent is then upscaled again and converted to pixel space by the Stage A VAE.

Video Examples

Image to Video: as of writing this there are two image to video checkpoints, the official checkpoint tuned to generate 14 frame videos and the one for 25 frame videos. Put them in the ComfyUI/models/checkpoints folder.

3D Examples

Stable Zero123 is a diffusion model that, given an image with an object and a simple background, can generate images of that object from different angles.

Audio Examples

Stable Audio Open 1.0: download the T5 text encoder safetensors from the linked page and save it as t5_base.safetensors in your ComfyUI/models/clip/ directory.
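
Finally, because ComfyUI is also an API and backend with an asynchronous queue, any complete workflow in the API format used in the fragments above can be submitted to a running instance over HTTP instead of through the UI. This is a minimal sketch that assumes a stock local install listening on the default 127.0.0.1:8188 address; the queue_prompt helper and the empty workflow placeholder are illustrative names, and the POST to /prompt mirrors the pattern used by ComfyUI's bundled API example scripts.

```python
# Minimal sketch: queue an API-format workflow on a locally running ComfyUI.
# Assumes the default server address and that `workflow` is a complete
# API-format graph (every node id referenced by a link must exist in it).
import json
import urllib.request

workflow = {
    # ... a complete API-format workflow dict goes here (see the fragments above) ...
}

def queue_prompt(prompt: dict, server: str = "127.0.0.1:8188") -> str:
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")  # the response text includes the queued prompt id

if __name__ == "__main__":
    print(queue_prompt(workflow))
```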