Why is ComfyUI faster? A digest of discussion from the unofficial ComfyUI subreddit and r/StableDiffusion.
A1111 is like ComfyUI with prebuilt workflows and a GUI for easier usage.

This is something I posted just last week on GitHub: when I started using ComfyUI with PyTorch nightly for macOS at the beginning of August, the generation speed on my M2 Max with 96GB RAM was on par with A1111/SD.Next.

Not everyone agrees, though: I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

Hi everybody, I am running A1111, ComfyUI, Easy Diffusion and Fooocus-MRE in a virtual machine. Most of the time, I use A1111.

Hi! Does anyone here use ComfyUI professionally for work, and if so, how and why? Also, why do you prefer it over alternatives like Midjourney, A1111, etc.?

ComfyUI allows you to build an extremely specific workflow with a level of control that no other system in existence can match. However, the engine unloading caused by VAE decoding can greatly slow down the overall generation speed.

I think for me, at least for now with my current laptop, using ComfyUI is the way to go. I also use Ctrl+B and Ctrl+M on various nodes to toggle which ControlNet nodes are applied to my CLIP conditioning (using Fast Bypass and Fast Mute nodes connected to them to quickly toggle individual node state!).

Hadn't messed with A1111 in a bit and wanted to see if much had changed. A dissenting view: if ComfyUI allowed more control, then more people would be interested, but it just replaces dropdown menus and windows with nodes.

Generally the ComfyUI images are worse if you use CFG > 4. To compare the diffusion process, I first made three outputs at 10, 20 and 30 samples. Do I have to use another workflow, or why are the images not rendered instantly, and why do I have these image issues?

With my 8GB RX 6600, which could only run SDXL through SD.Next (out of memory after 1-2 runs, and only at the default 1024x1024), I was able to run it in ComfyUI, but only at 512x512 or 768x512/512x768 (memory errors even with those from time to time). Curiously, it is about 25% faster than running an SD 1.5 checkpoint on the same PC. I believe I got fast RAM, which might explain it.

It's definitely the width on your resolution.

For when I'm trying out new things, want or need to work fast, or for img2img batch iterative upscaling, I use a Jupyter notebook on a GPU instance that I rent with a Vast.ai account; I merely stop and restart the Jupyter script. But yeah, it goes fast in ComfyUI.

Maybe I am doing it wrong, but ComfyUI inpainting is a bit awkward to use: you have to draw a mask, save the image with the mask, then upload it to the UI again to inpaint.

Here are my pros and cons so far for ComfyUI. Pros: standalone and portable; almost no requirements or setup; starts very fast; SDXL support; shows the technical relationships of the individual modules. Cons: a complex UI that can be confusing; hard to use or to create workflows without advanced knowledge about AI/ML.

Also note that prompt weights do not transfer between the two UIs: for instance, (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111.
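A toy illustration of why the same weight can land harder in ComfyUI. This is a simplification rather than either codebase's actual implementation, and the embedding values are made up: the commonly cited difference is that A1111 applies the emphasis multiplier and then rescales the conditioning back toward its original mean, damping the shift, while ComfyUI applies the multiplier more directly.

```python
import torch

# Hypothetical token embeddings; token 2 carries the (word:1.1) emphasis.
emb = torch.rand(4, 8)
weights = torch.tensor([1.0, 1.0, 1.1, 1.0]).unsqueeze(1)

comfy_style = emb * weights                     # multiplier applied directly

a1111_style = emb * weights
a1111_style *= emb.mean() / a1111_style.mean()  # restore the original mean

print((comfy_style[2] - emb[2]).norm())   # larger shift on the emphasized token
print((a1111_style[2] - emb[2]).norm())   # smaller shift after mean restoration
```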
Also, it is useful when you want to quickly try something out, since you don't need to set up a workflow.

xFormers works with A1111, but can it be used with ComfyUI? In my site-packages directory I see "transformers" but not "xformers". The speed is very fast either way, and you can enable xFormers for even faster speed on Nvidia cards.

Don't know if inpainting works with SDXL, but ComfyUI inpainting works with SD 1.5. ComfyUI also allows fun things like quickly generating an image with SD 1.5 and then using the XL refiner, which I somehow got to magically run on AMD despite the lack of clarity and explanation on the GitHub page and literally no video tutorial on it.

Want to use latent space? Again, one button. But those structures it has prebuilt for you aren't optimized for low-end hardware specifically.

If someone needs more context, please do ask. My settings in ComfyUI: sampling method LCM, CFG scale from 1 to 2, 4 sampling steps.

Experimental usage of stable-fast and TensorRT is available through the gameltb/ComfyUI_stable_fast custom node on GitHub.

How would I specify it to use the venv instead of the system Python? I am currently a bit confused with ComfyUI.

[Please Help] Why is a bigger image faster to generate? This is a workflow I made yesterday, and I've noticed that the second KSampler is about 7x faster, even though it processes a larger image.

Also, "octane" might invoke "fast render" instead of "octane style". Is infizoom possible in ComfyUI? Any experience/knowledge on any of the above is greatly appreciated.

On RunPod, don't load the ComfyUI template; load Fast Stable Diffusion. Within that, you'll find RNPD-ComfyUI.ipynb in /workspace.

Had a similar experience when I started with ComfyUI: I only found Comfy quicker in super simple generations or small automated processes that pump out tons of pictures quickly.

I am running ComfyUI on a machine with 2x RTX 4090 and am trying to use the ComfyUI_NetDist custom node to run multiple copies of the ComfyUI server, each using a separate GPU, to speed up batch generation. I have tried it (a) with one copy of SDXL running on each GPU and (b) with two copies of SDXL running per GPU.
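For that kind of multi-GPU setup, the usual first step (independent of NetDist's own remote-queue nodes) is simply to launch one ComfyUI server per GPU, each on its own port. A minimal sketch, assuming it is run from inside the ComfyUI checkout directory; the GPU/port pairs are examples:

```python
import os
import subprocess

# Launch one ComfyUI server per GPU, each on its own port, so a batch can be
# spread across them (e.g. by ComfyUI_NetDist or your own client script).
for gpu, port in [(0, 8188), (1, 8189)]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    subprocess.Popen(["python", "main.py", "--port", str(port)], env=env)
```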
Tested the failed LoRAs with A1111 and they were great. Then I tested my previous LoRAs with ComfyUI and they sucked too. Asked Reddit what was going on, and everyone blindly copy-pasted the same answer over and over.

#ComfyUI Ultimate Upscale: a faster upscale, same quality.

I will say, don't dump Automatic1111. But then I have to ask myself if that is really faster. Healthy competition, even between direct rivals, is good for both parties; for example, SD and MJ are pushing each other ahead faster and further.

Fast ~18-step images (2s inference time on a 3080).

Hey r/comfyui, I just published a new video going over the recent updates for ComfyUI reaching the end of the year: the new SD 2.1 Turbo model, front-end improvements like group nodes, undo/redo and rerouting primitives, experimental features, and a few new rgthree-comfy nodes such as fast-reroutes. This update includes new features and improvements to make your image creation process faster and more efficient.

ComfyUI also has faster startup and is better at handling VRAM, so you can generate larger images, or so I've heard. On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

Hi, I am upscaling a long sequence of images (batch count), one by one. The big difference is that, looking at Task Manager (on different runs, so as not to influence results), my CPU usage is at 100% with CPP with low RAM usage, while in the others my CPU usage is very low with very high RAM usage.

ComfyUI has a standalone beta build which runs on Python 3.11. Everything AI is changing left and right, so a flexible approach is best, IMHO.

A1111 does a lot behind the scenes with prompts, while ComfyUI doesn't, making it more sensitive to prompt length. The sampler shouldn't affect this, but I always use Euler normal; try it out. It also runs on CPU. You can also run any workflow online: the GPUs are abstracted, so you don't have to rent any GPU manually, and since the site is in beta right now, running workflows online is free.

While ComfyUI is better than default A1111, TensorRT is also supported on A1111; it uses much less VRAM and image generation is 2-3x faster. It has been noticeably faster for me. All it takes is a little time to compile the specific model with the resolution settings you plan to use; after that, subsequent generations will be faster.

I have yet to find anything that I could do in A1111 that I can't do in ComfyUI, including XYZ plots.

I want to switch from A1111 to ComfyUI. Only the LCM Sampler extension is needed, as shown in the video. The workflow is huge, but with the toggles it can run pretty fast.

Just check your VRAM and be sure optimizations like xFormers are set up correctly, because other UIs like ComfyUI already enable those, so you don't really feel the higher VRAM usage of SDXL.

I didn't quite understand the part where you can use the venv folder from another webui like A1111 to launch it instead and bypass all the setup. The main problem is that moving large files from and to an SSD repeatedly is going to wear it out pretty fast; unless cost is not a constraint and you have enough space to back up your files, move everything to an SSD. This is why I have and use both.

On precision: floating-point representation in fp16 is very poor for very small decimals, while bf16 is capable of much better representation of very small values.
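You can see the range difference directly in PyTorch (a quick check; exact printed values vary): fp16's smallest positive subnormal is roughly 6e-8, so anything smaller flushes to zero, while bf16 keeps fp32's 8-bit exponent and holds on to far smaller magnitudes, at the cost of fewer mantissa bits.

```python
import torch

tiny = 1e-9  # below fp16's smallest subnormal (~6e-8)

print(torch.tensor(tiny, dtype=torch.float16))   # 0.0 - underflows to zero
print(torch.tensor(tiny, dtype=torch.bfloat16))  # ~1e-9 - still representable
```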
A warning: a few weeks ago I did a "spring-cleaning" on my PC and completely wiped my Anaconda environments, packages, etc. Except I have all those CSV files in the root directory ComfyUI indicates they need to be in, so why aren't they being picked up?

So yeah, like people say on here, your negatives are just too basic.

Shouldn't you be able to reach the same-ish result faster if you just upscale with a 2x upscaler? Is there some benefit to this upscale-then-downscale approach, or is it just related to the availability of 2x models?

ComfyUI also uses xformers by default, which is non-deterministic.

Hyperthreading doesn't make anything go any faster; it just allows for twice the interrupts, which helps alleviate waiting.

This combo is just as fast as the DDIM one I was using. No matter what, Upscayl is a speed demon in comparison.

When I was using Automatic1111, it would let me adjust the RAM usage options as well as add a bunch of command-line arguments to the batch file directly; I can't seem to find such a file under ComfyUI.

And with the introduction of SDXL and the push for ComfyUI, I fear that it is heading in that direction even faster.

It doesn't have all the features, and for that I do occasionally have to switch back, but the node-style editor in Comfy is so much clearer.

Try using an fp16 model config in the CheckpointLoader node. That should speed things up a bit on newer cards. Fooocus would be even faster.

Progressively, it seemed to get a bit slower, but negligibly so. To me it feels like it is barely faster.

I was facing similar issues when I first started using ComfyUI. Try adjusting the CFG scale to 5, and if your prompts are big like in A1111, add a token merging node.

Just tested Forge: it is definitely faster and gives (almost) the same results. Left one is Forge, right one is A1111: 7 it/s vs 5.82 it/s on a 4090. If I understood correctly, even the extensions are the same; if so, I'll probably replace A1111 with Forge, but Comfy is still much faster.

Comfy doesn't really do "batch" modes; it just adds individual entries to the queue very quickly, so adding a batch of 10 images is exactly the same as clicking the "Queue Prompt" button 10 times.

ComfyUI is a great sandbox environment for people with advanced knowledge of SD and AI, but for people who aren't as read up on all the different systems, it gets overwhelming fast.
About knowing what nodes do: this is the hard thing about ComfyUI, but there's a wiki created by the dev (comfyanonymous) that will help you understand many things.

On my rig, it's about 50% faster, so I tend to mass-generate images in ComfyUI, then bring any images I need to fine-tune over to A1111 for inpainting and the like.

Possibly some custom nodes, or a wrongly installed startup package like torch or xformers.

Hi :) I am using AnimateDiff in ComfyUI to output videos, but the speed feels very slow.

So which matters more? A: faster and/or more resource-efficient, or B: more flexible and powerful for the deep-diving workflow crafters, code nerds who make their own nodes, and wonks.

Save up for an Nvidia card, and it doesn't have to be the 4090; some of the ones with 16GB VRAM are pretty cheap now. I have a 1060 with 6GB.

After all, the more tools there are in the SD ecosystem, the better for SAI, even if ComfyUI and its core library are the official code base for SAI these days.

I had previously used ComfyUI with SDXL 0.9, and it was quite fast on my 8GB VRAM GPU (RTX 3070 Laptop).

I think the noise is also generated differently: A1111 uses the GPU by default, while ComfyUI uses the CPU.
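That noise detail is why ComfyUI seeds reproduce across machines: sampling the initial latent on the CPU gives the same tensor for the same seed everywhere, whereas GPU random number generation can differ between cards and driver versions. A minimal sketch (the latent shape is the standard SD 4x64x64 for a 512x512 image):

```python
import torch

seed = 42
gen = torch.Generator(device="cpu").manual_seed(seed)

# Same seed -> bit-identical starting noise on any machine, because it is
# drawn on the CPU before being moved to the GPU for sampling.
latent_noise = torch.randn((1, 4, 64, 64), generator=gen)
```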
I've found A1111 is still useful for many things, like grids, which Comfy can do but not as well.

Finally got ComfyUI set up on my base Mac M1 Mini, and as instructed I ran it on CPU only: "python3 main.py --cpu". As far as I understand, as opposed to A1111, ComfyUI has no GPU support for Mac, so you have to run it on CPU. In general, image generation on MPS is slow, even on an M2 Max; there's also Draw Things, which has a lot of configuration settings. Running on an M2, so if it works, a good speed increase is always great.

You define the complexity of what you build, and don't get scared by the noodle forests you see on some screenshots.

If you are interested in learning about how things work behind the scenes, then you're better off investing the time into learning ComfyUI. The one thing I would add is that a lot of the time you spend learning ComfyUI, you will also be learning about the underlying technologies, since you can combine anything together.

ComfyUI takes 1:30; Auto1111 is taking over 2:05.

Essentially, hyperthreading means that under 100% load from one app that only uses the actual number of cores present, it'll show as 50% utilization, because the other half are not in use by the app, while your CPU is doing all the actual work it can. Is this more or less accurate?

While obviously ComfyUI seems to have a big learning curve, my goal is to learn it.

But the speed difference is far more noticeable on lower-VRAM setups, as ComfyUI is way more efficient when it comes to using RAM and VRAM. Comfy is basically a backend with a very light frontend, while A1111 is a very heavy frontend.

But you can achieve this faster in A1111, considering the workflow of ComfyUI.

I will also experiment with the fast speed. It's the perfect tool to explore generative AI.

In ComfyUI using Juggernaut XL, it would usually take 30 seconds to a minute to run a batch of 4 images. ComfyUI is still way faster on my system than Auto1111.

PSA: RealPLKSR is a new, fantastic (and fast!) 4x upscaling architecture.

"flux1-dev-bnb-nf4" is a new Flux model that is nearly 4 times faster than the Flux Dev version and 3 times faster than the Flux Schnell version.

Initially I was put off by the messy-looking workflows in Comfy, but now I love it and it's all I use.

If you are looking for a straightforward workflow that leads you quickly to a result, then Automatic1111. But if you want to go into more detail and have complete control over your composition, then ComfyUI. It should be at least as fast as the A1111 UI if you do that.
I've played around with different upscale models in both applications, as well as settings.

Type --cpu after main.py when launching it in Terminal; this should fix it.

With ComfyUI you have access to ready-made workflows, but this can be overwhelming, especially for beginners.

Definitely no nodes before it that quickly flick green before the KSampler?

Anything better? Honestly, the default is amazingly fast, but still curious.

Updated it and loaded it up like normal using --medvram, and my SDXL generations are only taking like 15 seconds.

A1111 isn't very polished in terms of UX/UI, but it's still a lot more intuitive.

Turbo SDXL LoRA, Stable Diffusion XL faster than light: a few seconds = 1 image, tested on ComfyUI.

Too much width and you get side-by-side people; too high on the height and you get multiple heads. Lower the resolution, and if you gotta go widescreen, use outpainting or the amazing Photoshop beta. I tried generating a 512x768 image in both.

Despite the complex look, it's 2.5 to 3 times faster than Automatic1111. For DPM++ SDE Karras, I selected the karras scheduler.

ComfyUI and AnimateDiff: it's so fast! LCM LoRA + ControlNet OpenPose + AnimateDiff.

When the path changes, the "deactivated" path now gets a small blank image as its input, so that path processes faster as a result.

DMD2 aims to create fast, one-step image generators that can produce high-quality images with much less computational cost than traditional diffusion models, which typically require many steps to generate an image (see comfyanonymous.github.io). Key improvement over DMD: it eliminates the need for a regression loss and expensive dataset construction.

CUI can do a batch of 4 SDXL images and stay within the 12 GB; CUI is also faster. Takes a minute to load. Hope I didn't crush your dreams. Thank you for the advice.

Now I've been on ComfyUI for a few months and I won't turn on A1111 anymore. This stuff gets complex really fast, especially in Comfy! Plus, Comfy is faster, and with the ready-made workflows a lot of things can be simplified, and I'm learning what works and how on them.

When you drag an image into the ComfyUI window, you will get the settings used to create THAT image, not the batch.
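That works because ComfyUI embeds the workflow in the PNGs it saves (as "prompt" and "workflow" text chunks), which you can read back yourself with Pillow; the filename here is just an example:

```python
from PIL import Image

img = Image.open("ComfyUI_00001_.png")  # any image saved by ComfyUI

print(img.info.get("prompt"))    # API-format graph with the exact settings
print(img.info.get("workflow"))  # full editor graph used for drag-and-drop
```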
The CPP version overheats my computer MUCH faster than A1111 or ComfyUI.

Seems to have everything I need for image sampling.

The ComfyUI target audience is mainly engineer-minded, high-tech people (heck, I've been dealing with PCs for almost 24 years, and I scratched my head multiple times on some workflows), yet this is advertised like it's targeted at families and kids.

There are many anecdotes on this subreddit that ComfyUI is much faster than A1111, without much info to back them up.

While the kohya samples were very good, the ComfyUI tests were awful.

I've been using ComfyUI as my go-to for about a month, and it's so much better than 1111.

That's why people are having trouble using LCM in Comfy now, and also the new 60% faster SDXL (both only support diffusers).

Apparently, that is because of the errors logged at startup.

Have you checked out SD Forge? It has the Automatic1111 interface with the backend more like Comfy (i.e. faster). On my machine, Comfy is only marginally faster than 1111.

So, while I don't know specifically what you've been watching, the short version is that ComfyUI enables things that other UIs can't. Whether that applies to your case or not really depends on what you're trying to do.

At the moment there are several ways ComfyUI is distributed; the standalone build contains everything in the zip, so you could use it on a brand-new system.

I have an RTX 2070 + 16GB RAM, and ComfyUI had been working fine, but today, after a few generations, it seems to slow down from about 15 seconds per image to a minute and a half.

In my experience, ComfyUI is 4x faster than A1111. I'm still experimenting and figuring out a good workflow.

I started using ComfyUI with ReActor Fast Face Swap. It adds additional steps. What are your normal settings for it?

With SD 1.5 models, for example, you can do side-by-side comparisons of workflows, one with only the base model and one with base + LoRA, and see the difference.

The original workflow doesn't use LCM as the sampler; I just use it to make the generation faster.
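Pulling together the LCM numbers quoted earlier (sampler LCM, CFG 1 to 2, 4 steps): in API-format workflow JSON, the relevant KSampler inputs look roughly like the sketch below. The values are illustrative, the scheduler choice is a common pairing rather than a requirement, and an LCM model or LCM LoRA must be loaded for the "lcm" sampler to make sense.

```python
# Illustrative KSampler "inputs" fragment for an LCM setup in ComfyUI's
# API-format workflow JSON (connections to model/latent/conditioning omitted).
ksampler_inputs = {
    "sampler_name": "lcm",
    "scheduler": "sgm_uniform",  # a common pairing for LCM
    "steps": 4,                  # LCM needs very few steps
    "cfg": 1.5,                  # keep CFG between 1 and 2
    "denoise": 1.0,
    "seed": 42,
}
```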
This second KSampler uses the exact same prompt, model, and image that was generated by the previous one, so why? For me, it seems like adding more steps to the previous sampler would achieve similar results.

I mean, I like segmentation, but even that exists in Automatic1111.

Comfy is faster than A1111, though, and you have a lot of creative freedom to play around with latents, mix and match models, and do other crazy stuff in a workflow that can be built and reused. With Comfy you can optimize your stuff how you want.

Maybe it's got something to do with the quantization method? The Flux Q4_K_S just seems to be faster than the smaller Flux Q3_K_S, despite the latter being loaded completely. The T5 FP8 + Flux Q3_K_S obviously don't fit together in 8GB of VRAM, and still the Flux Q3_K_S was loaded completely, so maybe I'm just not reading the console right.

Hey everyone! I'm excited to share the latest update to my free workflow for ComfyUI, "Fast Creator v1.4".

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. That said, Upscayl is SIGNIFICANTLY faster for me.

I tested with CFG 8, 6 and 4.

I like the web UI more, but ComfyUI just gets things done quicker, and I can't figure out why; it's breaking my brain.

One of the strengths of ComfyUI is that it doesn't share the checkpoint with all the tabs. In A1111, when you change the checkpoint, it changes for all the active tabs.

Colab does break in my normal operation, though I regularly get several hours before it breaks. It is not as fast, but it is more reliable.

ComfyUI weights prompts differently than A1111; the weights are also interpreted differently. It also seems like ComfyUI is way too intense about heavier weights: (word:1.2) just gives weird results.

Nodes in ComfyUI represent specific Stable Diffusion functions, and by being a modular program, ComfyUI allows everyone to make workflows to meet their own needs or to experiment on whatever they want.
A lot of people are just discovering this technology and want to show off what they created.

I've tried everything, reinstalled drivers, reinstalled the app, and still can't get WebUI to run quicker. No idea why, but I get like 7.13 s/it in ComfyUI while in the WebUI I get like 173 s/it.

The quality compared to FP8 is really close.

Why is everyone saying Automatic1111 is really slow with SDXL? I have it, and it even runs 1-2 seconds faster than my custom 1.5 checkpoint on the same PC, but the quality (at least comparing a few prompts) varies: composition will be different between ComfyUI and A1111 due to various reasons.

Sorry to say that it won't be much faster, even if you overclock the CPU.

Thanks for implementing this so quickly! Messing around with it, I feel like the hype was a bit too much.

Using ComfyUI was a better experience: the images took around 1:50 to 2:25 at 1024x1024 / 1024x768, all with the refiner.

To me, Comfy feels like something better suited for post-processing than image generation. There is no point using a node-based UI for just generating an image, but layering different models for upscaling or feature refinement is the main reason Comfy is actually good after the image-generation part. ATM, using LoRAs and TIs is a PITA.

I started with Easy Diffusion, then moved to Automatic1111, but recently I installed ComfyUI, drag-and-dropped a workflow from Google (Sytan's workflow), and it is amazing.

I'm getting issues with my ComfyUI loading this custom SDXL Turbo model.

I'll stay on ComfyUI since it works better for me: it's faster, more customizable, looks better (in that I can arrange nodes where I want), its updates don't completely break the install like A1111's always do, and most importantly it allows me to actually generate the images I want without constant out-of-memory errors.

When ComfyUI just starts, the first image generation will always be fast (1 minute at best), but the second generation and onward (no changes to settings or parameters) will always be slower, almost 1 minute slower. If I restart the app, it will be fast again, and then the second generation onward slows down again.

No spaghetti, no figuring out why this latent needs these 4 nodes and why one of them hasn't worked since the last update. I started on A1111.

To learn ComfyUI faster, I recommend installing the ComfyUI Manager extension; with it you can grab some other custom nodes easily.

I test-ran a simple 512x512 image with no LoRA etc., and it still took forever (well, almost 3 minutes). But I still need to fix Automatic1111; might have to re-install.

However, I decided to give it a try. Like 20-50% faster in terms of images generated per minute.

In the GitHub Q&A, the ComfyUI author had this to say about why he made it: "I wanted to learn how Stable Diffusion worked in detail."

Yes, you can control ComfyUI from a script using the ComfyUI API; see "Controlling ComfyUI via Script" (Yushan777, Medium, Sep 2023) and "ComfyUI: Using the API, Part 1". Once you have built what you want in Comfy, find the references in the JSON.
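As a concrete sketch of that script-driven approach (the endpoint and payload shape follow ComfyUI's bundled API example, while the workflow filename and the KSampler node id are assumptions for illustration): export your graph with "Save (API Format)", then queue it from a loop, bumping the seed per run, which is all a "batch" is to ComfyUI anyway.

```python
import json
import urllib.request

# Load a workflow exported via "Save (API Format)" in the ComfyUI editor.
with open("workflow_api.json") as f:
    workflow = json.load(f)

for i in range(4):  # queue 4 runs, like pressing "Queue Prompt" 4 times
    workflow["3"]["inputs"]["seed"] = 1000 + i  # "3" = this graph's KSampler (assumed)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload)
    urllib.request.urlopen(req)
```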