Free ComfyUI workflow directory examples from Reddit


- ComfyUI workflows are the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. On the other hand, ComfyUI breaks a workflow down into rearrangeable elements, so you can easily make your own.
- Q: I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that each queued generation loads image 001, the next grabs image 002 from the same folder, and so on? Thanks in advance! All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else. (A minimal custom-node sketch for this follows after this list.)
- I downloaded the example IPAdapter workflow from GitHub and rearranged it a little bit to make it easier to look at, so I can see what is going on.
- A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).
- That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.
- Here's an example of pushing that idea even further and rendering directly to 3440x1440. Excuse one of the janky legs (I'd usually edit that in Photoshop), but the idea is to show what you get directly out of Comfy using the DeepShrink method.
- Run the step 1 workflow once; all you need to change is where the original frames are and the dimensions of the output that you wish to have. (If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.)
- Upcoming tutorial: SDXL LoRA, plus using a 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.
- I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Also, the example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.
- They depend on complex pipelines and/or Mixture of Experts (MoE) setups that enrich the prompt in many different ways. My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above, without human intervention. AP Workflow 5.0 is the first step in that direction.
- From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.
- I have no idea why the OP didn't bother to mention that this would require the same amount of storage space as 17 SDXL checkpoints, mainly for a garbage-tier SD1.5 model I don't even want. I stopped the process at 50 GB, then deleted the custom node and the models directory.
- WAS suite has some workflow stuff in its GitHub links somewhere as well.
- For your all-in-one workflow, use the Generate tab.
- In this guide I will try to help you with starting out and give you some starting workflows to work with. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1, and it covers the following topics: introduction to Flux.1; overview of the different versions of Flux.1; Flux hardware requirements; and how to install and use Flux.1 with ComfyUI.
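On the batch-from-folder question above: none of the stock loaders track queue position, but a tiny custom node can. The sketch below is a minimal, hypothetical node (the class name, counter, and defaults are mine, not from any of the packs mentioned); it relies on the standard ComfyUI custom-node conventions, including returning NaN from IS_CHANGED so the node is re-executed on every queued run.

```python
# custom_nodes/sequential_image_loader.py -- illustrative sketch, not a packaged node
import os
import numpy as np
import torch
from PIL import Image

class LoadNextImageFromDir:
    """Return the next image from a directory each time the queue runs."""
    _counter = 0  # class-level, so it survives across queued generations

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"directory": ("STRING", {"default": "C:/Cat"})}}

    RETURN_TYPES = ("IMAGE", "STRING")
    RETURN_NAMES = ("image", "filename")
    FUNCTION = "load_next"
    CATEGORY = "image"

    @classmethod
    def IS_CHANGED(cls, directory):
        # NaN never compares equal, so ComfyUI re-executes this node every queue
        return float("NaN")

    def load_next(self, directory):
        files = sorted(
            f for f in os.listdir(directory)
            if f.lower().endswith((".png", ".jpg", ".jpeg", ".webp"))
        )
        if not files:
            raise FileNotFoundError(f"no images in {directory}")
        name = files[LoadNextImageFromDir._counter % len(files)]
        LoadNextImageFromDir._counter += 1
        img = Image.open(os.path.join(directory, name)).convert("RGB")
        # ComfyUI expects images as float32 tensors shaped [batch, H, W, C] in 0..1
        tensor = torch.from_numpy(np.asarray(img).astype(np.float32) / 255.0).unsqueeze(0)
        return (tensor, name)

NODE_CLASS_MAPPINGS = {"LoadNextImageFromDir": LoadNextImageFromDir}
```

Swapping the `sorted(...)` key, for example to modification time, also works around loaders that only sort by name.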
Some ready-made workflows to start from:

| Workflow | Description | Link |
| --- | --- | --- |
| Merge workflow | Merge 2 images together with this ComfyUI workflow | View Now |
| ControlNet Depth ComfyUI workflow | Use ControlNet Depth to enhance your SDXL images | View Now |
| Animation workflow | A great starting point for using AnimateDiff | View Now |
| ControlNet workflow | A great starting point for using ControlNet | View Now |
| Inpainting workflow | A great starting point for using inpainting | View Now |

- Prerequisites: the latest ComfyUI release, with the following custom nodes installed: ComfyUI-Manager, ComfyUI Impact Pack, ComfyUI's ControlNet Auxiliary Preprocessors, and ComfyUI-ExLlama; plus ComfyUI set to use a shared folder that includes all kinds of models.
- First of all, to work with the respective workflow you must update your ComfyUI from the ComfyUI Manager by clicking on "Update ComfyUI". This will avoid any errors.
- My goal is that I start the ComfyUI workflow and the workflow loads the latest image in a given directory and works with it. (A one-function sketch for picking that file follows after this list.)
- You can create a new js file in the existing .\custom_nodes\ComfyUI-Manager\js directory (for example, name it "restart_btn.js") and copy the above code into it. Then just restart ComfyUI and you can see the button.
- Installation in ForgeUI: first install ForgeUI if you have not yet.
- Here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai.
- For some workflow examples, and to see what ComfyUI can do, you can check out the examples repo: it contains examples of what is achievable with ComfyUI, and all the images in it contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.
- In the standalone Windows build you can find this file in the ComfyUI directory.
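For the "loads the latest image in a given directory" goal above, the selection itself is a few lines of Python on modification time. A standalone sketch (the directory name is a placeholder) that could back a small custom node or a pre-run script:

```python
import os

def latest_image(directory: str) -> str:
    """Return the path of the most recently modified image in a directory."""
    candidates = [
        os.path.join(directory, f)
        for f in os.listdir(directory)
        if f.lower().endswith((".png", ".jpg", ".jpeg", ".webp"))
    ]
    if not candidates:
        raise FileNotFoundError(f"no images in {directory}")
    # newest by modification time, so a freshly saved render wins
    return max(candidates, key=os.path.getmtime)

print(latest_image("output"))  # e.g. output/ComfyUI_00042_.png
```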
- How it works: download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. To download a workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI.
- I built a free website where you can share and discover thousands of ComfyUI workflows, an all-in-one workflow catalog: https://comfyworkflows.com/. Explore thousands of workflows created by the community.
- ComfyUIMini uses the built-in ComfyUI API to send data back and forth between the ComfyUI instance and the interface, and it is extremely light as we speak. It's completely free and open-source, but donations would be much appreciated; you can find the download as well as the source at https://github.com/ImDarkTom/ComfyUIMini.
- What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler.
- I've been using ComfyUI for a few weeks now and really like the flexibility it offers. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with ease of use.
- In the Custom ComfyUI Workflow drop-down of the plugin window, I chose real_time_lcm_sketching_api. With it (or any other "built-in" workflow located in the native_workflow directory), I always get an error. If I understand correctly, the best (or maybe the only) way to do it is with the plugin using ComfyUI instead of A1111.
- If you want to activate these nodes and use them, please edit the impact-pack.ini file in the ComfyUI-Impact-Pack directory and change 'mmdet_skip = True' to 'mmdet_skip = False'.
- This is more of a starter workflow: it supports img2img, txt2img, and a second-pass sampler; between the sample passes you can preview the latent in pixel space, mask what you want, and inpaint (it just adds the mask to the latent); you can blend gradients with the loaded image, or start with an image that is only a gradient. But for a base to start at, it'll work.
- Yes, on an 8 GB card: a ComfyUI workflow that loads both the SDXL base and refiner models, a separate XL VAE, and 3 XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model, with input from the same base SDXL model, all work together.
- I'm using ComfyUI portable and had to install it into the embedded Python install. Going to python_embedded and using `python -m pip install compel` got the nodes working.
- Hey guys, I always had trouble finding workflows from tutorial videos, since they might not be on OpenArt or comfyworkflows, so I built a solution that lets me search across both sites. Am looking to add more features like reverse image search in the future.
- I have a directory filled with png and txt files of the same name, for example C:\Cat\ containing cat001.png, cat001.txt, cat002.png, cat002.txt. Is there a node that takes the directory as input and gives me back the filenames (images or text files) as a string? (A small pairing sketch follows after this list.)
- Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. I've been using SD/ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling, and this is a simple way to compare these methods. It is a bit messy, as I have no artistic cell in my body, but I hope that having a comparison was useful nevertheless. I also had issues with this workflow with unusually sized images. I'll also share the inpainting methods I use to correct any issues that might pop up.
- ComfyUI's inpainting and masking ain't perfect; I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.
- AnimateDiff in ComfyUI is an amazing way to generate AI videos (for 12 GB of VRAM, the max is about 720p resolution).
- If you see a few red boxes, be sure to read the Questions section on the page.
- The ComfyUI manual needs updating, imo; otherwise it's searching Reddit. The best workflow examples are through the GitHub examples pages.
- One of the most annoying problems I encountered with ComfyUI is that after installing a custom node, I have to poke around and guess where in the context menu the new node is located.
- Maybe it has a few out-of-date nodes, but I tried to keep the noodles under control and organized so that extending the workflow isn't a pain.
- My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. But let me know if you need help replicating some of the concepts in my process. (I've also edited the post to include a link to the workflow.)
- Forgot to copy and paste my original comment in the original posting 😅 This may be well known, but I just learned about it recently.
- I recently discovered the existence of the GLIGEN nodes in ComfyUI and thought I would share some of the images I made using them (more in the Civitai post link). Has anyone else messed around with GLIGEN much?
- Introducing ComfyUI Launcher: run any ComfyUI workflow with zero setup (free and open source).
- Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description as my account awaits verification.
- SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it.
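For the png/txt directory question above, if no existing node fits, the pairing logic is simple enough to write directly; this is an illustrative sketch (the function name and paths are hypothetical), returning the matched filenames as strings:

```python
import os

def paired_files(directory: str):
    """Yield (image_path, caption_path) for files sharing a basename,
    e.g. cat001.png + cat001.txt."""
    for f in sorted(os.listdir(directory)):
        base, ext = os.path.splitext(f)
        if ext.lower() == ".png":
            txt = os.path.join(directory, base + ".txt")
            if os.path.exists(txt):
                yield os.path.join(directory, f), txt

for img, txt in paired_files(r"C:\Cat"):
    print(img, "->", txt)  # C:\Cat\cat001.png -> C:\Cat\cat001.txt ...
```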
- SuperPrompter: launch ComfyUI and start using the SuperPrompter node in your workflows! (Alternatively, you can just paste the GitHub address into the ComfyUI-Manager Git installation option.) Usage: add the SuperPrompter node to your ComfyUI workflow, configure the input parameters according to your requirements, connect the SuperPrompter node to other nodes in your workflow as needed, and execute the workflow to generate text based on your prompts and parameters.
- It looks freaking amazing! Anyhow, here is a screenshot and the .json of the file I just used.
- How do I get ComfyUI to free the GPU automatically? It is seriously annoying: when I load an even semi-decently complex workflow, I have to manually reboot Comfy because it refuses to remove the models from memory for some reason. (A sketch of an HTTP call that can help follows after this list.)
- I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic. While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is: AP Workflow 9.0 for ComfyUI.
- But mine do include workflows, for the most part in the video description.
- "The training requirements of our approach consist of 24,602 A100-GPU hours, compared to Stable Diffusion 2.1's 200,000 GPU hours." From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion. That's a cost of about…
- Wildcards: there are two options. One is to create a wildcard directory within the same directory as the dynamic prompt custom node from GitHub. The other is to make a wildcard directory within your ComfyUI installation; I made a wildcard directory right there with ComfyUI, next to the Python code main.py.
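On freeing GPU memory: newer ComfyUI builds expose a /free endpoint on the same local HTTP API the frontend uses, which unloads models without a full restart. This is a sketch under the assumption that your build ships that endpoint and runs on the default 127.0.0.1:8188; older builds may simply return 404.

```python
import json
import urllib.request

# Assumes a local ComfyUI instance on the default port and a build that
# ships the /free endpoint; older builds may return 404.
req = urllib.request.Request(
    "http://127.0.0.1:8188/free",
    data=json.dumps({"unload_models": True, "free_memory": True}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 on success
```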
For Flux.1 in ComfyUI: you can find the Flux Dev diffusion model weights here; put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them on this link. You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32 GB of RAM. You can then load or drag the following image in ComfyUI to get the workflow: just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. The image-to-image workflow for the official FLUX models can be downloaded from the Hugging Face repository.
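Putting the scattered Flux file locations above in one place, the resulting directory layout looks like this (only the files named on this page; the VAE location isn't specified here):

```
ComfyUI/
└── models/
    ├── unet/
    │   └── flux1-dev.sft
    └── clip/
        ├── clip_l.safetensors
        └── t5xxl_fp16.safetensors   (or t5xxl_fp8_e4m3fn.safetensors on low RAM)
```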