ComfyUI examples: a Reddit roundup


What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion: a powerful and modular interface for diffusion models built around a graph. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements so you can easily make your own; if you understand how the pipes fit together, then you can design your own unique workflow (text2image, img2img, upscaling, refining, etc.). Continuing with the car analogy, learning ComfyUI is a bit like learning to drive with a manual shift. A checkpoint is your main model, and LoRAs add smaller models on top of it to vary the output in specific ways.

Examples of ComfyUI workflows: the ComfyUI_examples repo contains examples of what is achievable with ComfyUI; explore its features, templates, and examples on GitHub. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. The best workflow examples are through the GitHub examples pages. Also try Civitai, which has a ton of examples, including many ComfyUI workflows that you can download and explore (note: that site has a lot of NSFW content), or search Reddit; the ComfyUI manual needs updating, imo. WAS suite has some workflow stuff in its GitHub links somewhere as well. Start with simple workflows, then find example workflows; if a box is in red, it's missing something, and ComfyUI Manager will identify what is missing and download it for you. If you find it confusing, please post here for help or create an issue on GitHub. You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

Flux examples: Flux is a family of diffusion models by Black Forest Labs. This guide (Aug 2, 2024) covers Flux.1 ComfyUI install guidance, workflow, and example: how to set up ComfyUI on your Windows computer to run Flux.1, with an introduction to Flux.1, an overview of the different versions of Flux.1, Flux hardware requirements, and how to install and use Flux.1 with ComfyUI. For the easy-to-use single-file versions that work directly in ComfyUI, see the FP8 checkpoint version. If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them on this link.

Img2Img examples: these are examples demonstrating how to do img2img. Img2Img works by loading an image like the example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0; the denoise controls the amount of noise added to the image.
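To make that concrete, here is a minimal sketch of an img2img graph in the JSON "prompt" format used by ComfyUI's built-in API, written as a Python dict. The wiring uses only standard built-in nodes; the checkpoint name, image filename, prompts, and sampler settings are placeholders, not values taken from any workflow in this roundup.

```python
# Sketch of an img2img graph in ComfyUI's API "prompt" format.
# Each key is a node id; a link is [source_node_id, output_index].
img2img_prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",            # outputs: MODEL 0, CLIP 1, VAE 2
          "inputs": {"ckpt_name": "sd15.safetensors"}},      # placeholder checkpoint
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},                 # placeholder source image
    "3": {"class_type": "VAEEncode",                         # image -> latent
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a castle on a hill", "clip": ["1", 1]}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                     "latent_image": ["3", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.6}},                       # < 1.0 preserves the input's structure
    "7": {"class_type": "VAEDecode",                         # latent -> image
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "img2img"}},
}
```

The lower the denoise, the closer the result stays to the source image; raising it lets the sampler repaint more of the picture.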
I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces and a single openpose face. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery; the images above were all created with this method. The prompt for the first couple of images, for example, is included in the workflow. Image Processing is a group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. I'm glad to hear the workflow is useful.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment: it would require many specific image-manipulation nodes to cut out an image region, pass it through the model, and paste it back. But standard A1111 inpaint works mostly the same as this ComfyUI example you provided.

If you look at the ComfyUI examples for area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> the positive input on the KSampler. Even with 4 regions and a global condition, they just combine them all 2 at a time until it becomes a single positive condition to plug into the sampler.
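Combining two at a time works because a ComfyUI conditioning is just a list of (embedding, options) entries, and Conditioning Combine concatenates the lists, so the pairwise reduction is associative and the grouping order does not matter. A toy sketch of that behavior in plain Python (an assumed simplification: the real nodes carry torch tensors and more options; strings stand in for embeddings here):

```python
# Toy model of ComfyUI conditioning: a list of (embedding, options) pairs.

def set_area(conditioning, x, y, width, height, strength=1.0):
    """Roughly what ConditioningSetArea does: tag each entry with a region
    (stored in latent-space units, hence the // 8)."""
    return [(emb, {**opts,
                   "area": (height // 8, width // 8, y // 8, x // 8),
                   "strength": strength})
            for emb, opts in conditioning]

def combine(cond_a, cond_b):
    """Roughly what ConditioningCombine does: plain list concatenation."""
    return cond_a + cond_b

sky     = set_area([("sky embedding", {})],   x=0, y=0,   width=512, height=256)
field   = set_area([("field embedding", {})], x=0, y=256, width=512, height=256)
overall = [("global landscape embedding", {})]

# Pairwise combining flattens everything into one conditioning list;
# the sampler then applies each entry globally or within its tagged area.
positive = combine(combine(sky, field), overall)
print(len(positive))  # 3 entries feeding a single KSampler positive input
```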
Upscaling: hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. Would some of you have some tips, or perhaps even a workflow, to get a decent 4x or even just 2x upscale from a 512x768 image in ComfyUI while using SD 1.5 models? Thank you.

My ComfyUI workflow was created to solve that. I'm not entirely sure what Ultimate SD Upscale does, so I'll answer generally as to how I do upscales: I do a first pass at low res (say, 512x512), then I use the IterativeUpscale custom node. From the ComfyUI_examples, there are two different 2-pass ("hires fix") methods, one using latent scaling and one using non-latent scaling, and now there's also a `PatchModelAddDownscale` node. I think it is just the same as the 1.5 method but with 1024x1024 latent noise; I just find it weird that in the official example the nodes are not the same as if you try to add them by yourself. You can encode then decode back to a normal KSampler with 1.5 with LCM, with 4 steps and 0.2 denoise, to fix the blur and soft details; you can just use the latent without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE. Maybe there is an obvious solution, but I don't know it. One remaining issue: when I run images through the 4x_NMKD-Siax_200k upscaler, for example, the eyes get really glitchy, blurry, or deformed, even with negative prompts in place for eyes. Any ideas on this?

SDXL: for example, see this SDXL Base + SD 1.5 + SDXL Refiner workflow on StableDiffusion, and https://youtu.be/ppE1W0-LJas for the tutorial. And remember, SDXL does not play well with 1.5, so that may give you a lot of your errors. If anyone else is reading this and wanting the workflows, here are a few simple SDXL workflows using the new OneButtonPrompt nodes, saving the prompt to file (I don't guarantee tidiness). Thank you u/AIrjen! Love the variant generator, super cool. I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc. all in one workflow would be awesome. For example: 1) enable Model SDXL Base -> this would auto-populate my starting positive and negative prompts and my sampler settings that work best with that model.

Samplers and custom nodes: ComfyUI Extra Samplers is a repository of extra samplers, usable within ComfyUI for most nodes. I tried this pack and it seemed promising; however, I can't seem to find info on the samplers or how they improve on the existing ones, and I couldn't find the workflows to directly import into Comfy; perhaps my Google-fu is weak. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well. My own tests left me still with questions, lol. With some nervous trepidation, I release my first node for ComfyUI, an implementation of the DemoFusion iterative mixing sampling process. For example, it's like performing sampling with the A model for only 10 steps, then synthesizing another latent, injecting noise, and proceeding with 20 steps using the B model. I haven't used it, but I believe this is correct. Another workflow's TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA; only the LCM Sampler extension is needed, as shown in this video. IPAdapter with attention masks is a nice example of the kind of tutorial I'm looking for; in other words, I'd like to know more about new custom nodes or inventive ways of using the more popular ones. The workflow posted here, by contrast, relies heavily on useless third-party nodes from unknown extensions. Anyway, I'm sharing this because these things are not well documented, thanks to the frankly arcane method some of the creators used to provide examples, and the fact that many images they put up as examples are badly compressed or done with older versions.

Model files and paths: you can use mklink to link to your existing models, embeddings, LoRAs, and VAEs, for example: F:\ComfyUI\models>mklink /D checkpoints F:\stable-diffusion-webui\models\Stable-diffusion. Pro-tip for anyone running both: ComfyUI has a config file called extra_model_paths.yaml.example; all you have to do is change base_path to your stable-diffusion-webui path and remove .example from the filename. If a downloaded workflow can't find its ControlNet models, you should try to click on each one of those model names in the ControlNet stacker node and choose the path of where your models actually live; one guess is that the workflow is looking for the Control-LoRA models in the cached directory (which is the author's directory on their computer).

APIs and frontends: my ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to. ComfyUIMini (Jul 28, 2024) uses the built-in ComfyUI API to send data back and forth between the ComfyUI instance and the interface; it's completely free and open-source, but donations would be much appreciated. You can find the download as well as the source at https://github.com/ImDarkTom/ComfyUIMini; there is a ton of stuff there and it may be a bit overwhelming, but it's worth exploring. I provide one example JSON to demonstrate how it works.
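For anyone who wants to script against that same built-in API, queueing a run is one HTTP POST of an API-format workflow (like the img2img dict sketched earlier) to the /prompt endpoint. A minimal sketch using only the standard library, assuming a default local install listening on 127.0.0.1:8188:

```python
import json
import urllib.request

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> str:
    """Queue an API-format workflow on a running ComfyUI instance and
    return the prompt id, which can be polled via GET /history/<id>."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(
        f"http://{host}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["prompt_id"]

# Example (assuming the img2img_prompt dict from earlier):
# prompt_id = queue_prompt(img2img_prompt)
```

Finished images land in ComfyUI's output folder as usual, and the /history endpoint reports which files a completed prompt produced.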
Troubleshooting: I can load ComfyUI through 192.168.1:8188, but when I try to load a flow through one of the example images, it just does nothing. I can load workflows from the example images through localhost:8188, and that seems to work fine; I just can't load workflows from the example images using a second computer. I'm not sure what's wrong here, because I don't use the portable version of ComfyUI. I found that sometimes simply uninstalling and reinstalling will do it; it seems also that what order you install things in can make the difference. On security: most of the security issues in ComfyUI come from the Manager, which isn't part of the base install because these types of issues have not been solved yet; base ComfyUI doesn't even connect to the internet for anything unless you run the update script.

Stable Video Diffusion: I am so sorry, but my video is outdated now, because ComfyUI has officially implemented SVD natively. Update ComfyUI, copy the previously downloaded models from the ComfyUI-SVD checkpoints folder to your ComfyUI models SVD folder, and just delete the custom nodes ComfyUI-SVD. I've also updated the ComfyUI Stable Video Diffusion repo to resolve the installation issues people were facing earlier (sorry to everyone that had installation issues!). Speed is about …86s/it on a 4070 with the 25-frame model and 2.75s/it with the 14-frame model. Results seem very hit and miss, though; most of what I'm getting looks like 2D camera pans.

unCLIP: for those that don't know what unCLIP is, it's a way of using images as concepts in your prompt in addition to text. CLIPVision extracts the concepts from the input images, and those concepts are what is passed to the model. However, I am curious about how A1111 handles various processes at the latent level, which ComfyUI exposes extensively with its node-based approach.

Clip skip: a higher clip skip (in A1111's terms; lower, or more negative, in ComfyUI's) equates to LESS detail from CLIP (not to be confused with detail in the image). You can't change clip skip and get anything useful from some models (SD 2.0 and Pony, for example; Pony, I think, always needs 2) because of how their CLIP is encoded. I can only make a stab at some of these, as I'm still very much learning.
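In ComfyUI the clip-skip control is the CLIPSetLastLayer node, placed between the checkpoint loader and the text encoders. A minimal sketch in the same API-format style as the examples above (the checkpoint name and prompt are placeholders; stop_at_clip_layer defaults to -1, and -2 corresponds to A1111's clip skip 2):

```python
# Clip-skip sketch: route CLIP through CLIPSetLastLayer before encoding.
clip_skip_fragment = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "pony_model.safetensors"}},  # placeholder name
    "2": {"class_type": "CLIPSetLastLayer",
          "inputs": {"clip": ["1", 1],            # CLIP output of the loader
                     "stop_at_clip_layer": -2}},  # roughly "clip skip 2" in A1111
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a horse on a beach",
                     "clip": ["2", 0]}},          # encode with the truncated CLIP
}
```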