ComfyUI arguments (Reddit)


This narrows the problem down to GPU/PyTorch packages.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file. For example, this is mine:

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

This is a plugin that allows users to run their favorite features from ComfyUI while, at the same time, being able to work on a canvas.

After playing around with it for a while, here are 3 basic workflows that work with older models (here, AbsoluteReality).

VFX artists are also typically very familiar with node-based UIs, as they are very common in that space. I think for me, at least for now with my current laptop, using ComfyUI is the way to go.

I haven't managed to reproduce this process in ComfyUI yet. I am not sure what kind of settings ComfyUI used to achieve such optimization, but if you are using Auto1111, you could disable live preview and enable xformers (what I did before switching to ComfyUI). That helped the speed a bit.

FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json

For a portable install, launch a terminal in the ComfyUI folder and use .\python_embeded\python.exe

I don't find ComfyUI faster; making an SDXL image I get 4.6s/it with Comfy as opposed to 4.9s/it with Automatic 1111. Thanks in advance for any information. Only if you want it early.

Are there launch arguments that I don't know about for ComfyUI, or some config stuff I've missed with ComfyUI?
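Concretely, the run_nvidia_gpu.bat tip just means appending flags to the launch line inside that file. A sketch of the portable build's file contents, using flags quoted elsewhere in this thread (pick the ones your GPU actually needs):

```bat
rem run_nvidia_gpu.bat - flags after main.py are examples from this thread
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram --auto-launch
pause
```

Every time you run the .bat file, it will load whatever arguments you put on that line.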
It said to follow the instructions for manually installing for Windows.

I keep hearing that A1111 uses the GPU to feed the noise-creation part, and ComfyUI uses the CPU.

I am using ComfyUI with its default settings.

I've seen people say ComfyUI is better than A1111 and gave better results, so I wanted to give it a try, but I can't find a good guide or info on how to install it on an AMD GPU, and there are conflicting resources: the original ComfyUI GitHub page says you need to install DirectML and then somehow run it if you already have A1111, while other places say you need miniconda/anaconda to run it.

ComfyUI is much better suited for studio use than other GUIs available now. I stand corrected.

Additionally, I've added some firewall rules for TCP/UDP for port 8188.

The VAE can be found here and should go in your ComfyUI/models/vae/ folder.

The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore.

Has anyone tried or is still trying? I somehow got it to magically run with AMD despite the lack of clarity and explanation on the GitHub and literally no video tutorial on it.

SD1.5, SD2.1, and SDXL are all trained on different resolutions, and so models for one will not work with the others.
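Those firewall rules for port 8188 (needed if you want to reach a ComfyUI server from another machine on your network) can be added on Windows with netsh from an elevated prompt; a minimal sketch, here opening TCP only:

```bat
rem allow inbound TCP connections to ComfyUI's default port 8188
netsh advfirewall firewall add rule name="ComfyUI" dir=in action=allow protocol=TCP localport=8188
```

The rule name "ComfyUI" is arbitrary; delete the rule later with `netsh advfirewall firewall delete rule name="ComfyUI"`.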
Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins etc., so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support give them serious potential… wonder if Comfy and Invoke will somehow work together, or if things will stay fragmented between all the various UIs.

Welcome to the unofficial ComfyUI subreddit.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

I think a function must always have "self" as its first argument. Anyway, whenever you define a function, never forget the self argument! I have barely scratched the surface, but through personal experience, you will go much further!

But there's an even easier way now: StabilityMatrix. It's the same thing, but they've done most of the work for you. I used to do it manually, with symlinks and command-line arguments and the like.

--show-completion: Show completion for the current shell, to copy it or customize the installation.

I did a clean install right now and it works perfectly.

I tried installing the dependencies by running the pip install in the terminal window in the ComfyUI folder.

I am trying out using SDXL in ComfyUI.

Finally I gave up on ComfyUI nodes and wanted my extensions back in A1111. But with ComfyUI this doesn't seem to work! Thanks!

What worked for me was to add a simple command line argument to the file: `--listen 0.0.0.0`

I didn't quite understand the part where you can use the venv folder from another webui like A1111 to launch it instead and bypass all the requirements to launch ComfyUI.
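To be precise about the "self" point above: it applies to methods defined on a class (a plain function does not need it); Python passes the instance as the first argument automatically. A minimal illustration (the class and method names here are made up for the example):

```python
class Sampler:
    def __init__(self, steps):
        # instance methods receive the instance as their first argument,
        # conventionally named "self"
        self.steps = steps

    def describe(self):
        return f"{self.steps} steps"

s = Sampler(20)
print(s.describe())  # 20 steps
```

Forgetting `self` in a method definition is what produces the classic "takes 1 positional argument but 2 were given" error when the method is called.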
Update ComfyUI and all your custom nodes, and make sure you are using the correct models.

I'm not sure why, and I don't know if it's specific to Comfy or if it's a general rule for Python.

Launch and run workflows from the command line; install and manage custom nodes via cm-cli (ComfyUI-Manager as a CLI).

Hello! I've been playing around with ComfyUI for months now and have reached a level where I wanna make my own LoRAs. I only have 4GB of VRAM, so I'm just trying to get my settings optimized.

For some reason, it broke my ComfyUI when I did it earlier.

Options: --install-completion: Install completion for the current shell.

Supports: basic txt2img, inpainting (with auto-generated transparency masks).

ComfyUI: An extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything. It now supports ControlNets.

Even though I keep hearing people focus the discussion on the time it takes to generate the image (and yes, ComfyUI is faster; I have a 3060), I would like people to discuss which one gives better image quality.

Has anyone managed to implement Krea AI or Magnific AI in ComfyUI? I've seen the web source code for Krea AI and I've seen that they use SD 1.5.

ComfyUI is also trivial to extend with custom nodes. Anything that works well gets adopted by the larger community and finds its way into other Stable Diffusion software eventually.

Updating to the correct latest version of PyTorch is what is needed.

Using ComfyUI was a better experience; the images took around 1:50 to 2:25 minutes at 1024x1024 / 1024x768, all with the refiner.

It appears some other AMD GPU users have similar unsolved issues.

I just released an open source ComfyUI extension that can translate any native ComfyUI workflow into executable Python code.
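Using cm-cli for node management looks roughly like the sketch below. The subcommand names are an assumption from memory, not taken from this thread, so verify them against the ComfyUI-Manager README before relying on them:

```sh
# run ComfyUI-Manager's cm-cli from inside the ComfyUI folder;
# subcommand names are illustrative assumptions - check the README
python custom_nodes/ComfyUI-Manager/cm-cli.py install ComfyUI-Impact-Pack
python custom_nodes/ComfyUI-Manager/cm-cli.py update all
```

For the portable build, substitute `.\python_embeded\python.exe` for `python`.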
==[Update]== Launching in CPU mode is successful (python main.py --cpu), but of course not ideal.

But inpainting in Comfy is still terrible. Not sure if I simply didn't manage to configure it right, but when inpainting faces in 1111 I can make it inpaint at a specific resolution, so I do faces at 512x512.

On vacation for a few days, I installed ComfyUI portable on a USB key and plugged it into a laptop that wasn't too powerful (just the minimum 4 gigabytes of VRAM).

With this combo it now rarely gives out-of-memory errors (unless you try crazy things). Before, I couldn't even generate with SDXL on ComfyUI at all.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

And with ComfyUI my command-line arguments are: "--directml --use-split-cross-attention --lowvram". The most important thing is to use tiled VAE for decoding; that ensures no out-of-memory at that step.

I originally created it to learn more about the underlying code base powering ComfyUI, but after building it I think it could be useful for anyone in the community who is more comfortable coding than using GUIs.

While I primarily utilize PyTorch cross attention (SDP), I also tested xformers, to no avail.

Doing it in ComfyUI or any other SD UI doesn't matter to me, only that it's done locally.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

I don't know what Magnific AI uses.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.
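Put together, an AMD/DirectML launch line with those arguments would look like this in a portable install's .bat file (a sketch using only the flags quoted above; tiled VAE decoding itself is done with a node in the workflow, not a flag):

```bat
rem launch line for an AMD card via DirectML, using the flags quoted above
.\python_embeded\python.exe -s ComfyUI\main.py --directml --use-split-cross-attention --lowvram
```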
Hey all, is there a way to set a command line argument on startup for ComfyUI to use the second GPU in the system? With Auto1111 you add the following to the Webui-user.bat file: set CUDA_VISIBLE_DEVICES=1

Hi r/comfyui, we worked with Dr.Lt.Data to create a command line tool to improve the ergonomics of using ComfyUI. It's very early stage, but I am curious what folks think / excited to update it over time! The goal is to a) make it easy for semi-casual users (e.g. Discord bot users lightly familiar with the models) to supply prompts that involve custom numeric arguments (like # of diffusion steps, LoRA strength, etc.).

Scoured the internet and came across multiple posts saying to add the arguments --xformers --medvram.

You might still have to add the occasional command line argument for extensions or something. Open the .bat file with Notepad, make your changes, then save it.

For the latest daily release, launch ComfyUI with this command line argument: --front-end-version Comfy-Org/ComfyUI_frontend@latest. For a specific version, replace latest with the desired version number.

Aug 8, 2023 · Wherever you are running "main.py", whether that be in a .bat file, a .sh file, or on the command line, you can just add the --lowvram option straight after main.py, e.g. python3 ./main.py --lowvram

But where do I begin? Anyone know any good tutorials for a LoRA training beginner?

On Linux with the latest ComfyUI I am getting 3.53 it/s for SDXL and approximately 4.55 it/s for SD1.5 while creating a 896x1152 image via the Euler-A sampler.

Workflows are much more easily reproducible and versionable. ComfyUI was written with experimentation in mind, and so it's easier to do different things in it.

Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin. Basic img2img.

Hi all: how to ComfyUI with ZLUDA. All credit goes to the people who did the work: lshqqytiger, LeagueRaINi, Next Tech and AI (YouTuber). I just pieced…

These images might not be enough (in numbers) for my argument, so I invite you to try it out yourselves and see if it's any different in your case.
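For ComfyUI, the same environment-variable trick can go at the top of the launch .bat (a sketch for the portable build; note that recent ComfyUI versions also accept a --cuda-device argument, if your build has it):

```bat
rem make only the second GPU (index 1) visible to PyTorch before launching
set CUDA_VISIBLE_DEVICES=1
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
```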
.\python_embeded\python.exe -m pip install [dependency]

I have read the section of the GitHub page on installing ComfyUI. I downloaded the Windows 7-Zip file and, once unzipped, ended up with a large folder of files.

Any idea why the quality is much better in Comfy? I like InvokeAI (it's more user-friendly), and although I aspire to master Comfy, it is disheartening to see a much easier UI give sub-par results.

Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory

Using ComfyUI with my GTX 1650 is simply way better than using Automatic1111.

I tested with different SDXL models and tested without the LoRA, but the result is always the same. However, I kept getting a black image.

Some main features are: automatically install ComfyUI dependencies.

Every time you run the .bat file, it will load the arguments.

Command-line arguments can be put in the .bat files used to run ComfyUI like this, separated by a space after each command. For example, this is mine:

Hey, I'm new to using ComfyUI and was wondering if there are command line arguments to add to the launch file like there is in Automatic1111. But these arguments did not work for me: --xformers gave me a minor bump in performance (8s/it vs 11s/it), but it's still taking about 10 minutes per image.

Aug 2, 2024 · You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB of RAM.

(The same image takes 5.6 seconds in ComfyUI.) I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it. I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI.

Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular stable diffusion GUI and backend.
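To illustrate why space-separated flags in a .bat launch line work at all, here is a simplified argparse sketch. This is not ComfyUI's actual parser; the flag names are just ones quoted in this thread, and the grouping is illustrative:

```python
import argparse

# Simplified, illustrative parser for ComfyUI-style launch flags.
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("--lowvram", action="store_true")
group.add_argument("--normalvram", action="store_true")
group.add_argument("--cpu", action="store_true")
# bare --listen falls back to a "listen everywhere" address
parser.add_argument("--listen", nargs="?", const="0.0.0.0", default="127.0.0.1")

# space-separated flags on the .bat line arrive as argv entries like these:
args = parser.parse_args(["--lowvram", "--listen", "0.0.0.0"])
print(args.lowvram, args.listen)  # True 0.0.0.0
```

This is also why the VRAM modes are exclusive: passing both --lowvram and --cpu to a parser built this way is rejected.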
Both character and environment.

Installation

I get 1.2 seconds with TensorRT.

Wherever you launch ComfyUI from is where you need to set the launch options, like so: python main.py --lowvram --auto-launch and any other arguments you want to add.

The final line in the run_nvidia_gpu.bat looks like this: `.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram --listen 0.0.0.0`

"The training requirements of our approach consists of 24,602 A100-GPU hours – compared to Stable Diffusion 2.1's 200,000 GPU hours."

I use an 8GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Here are some examples I did generate using ComfyUI + SDXL 1.0 with refiner.