Just use another model loader and select another model. So: VAE decode to image, then VAE encode to latent using the next model you're going to process with.

If you are going for fine details, don't upscale in 1024x1024 tiles on an SD1.5 model unless the model is specifically trained on such large sizes. Upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much.

Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and use stuff entirely in latent space if you want.

Sometimes models appear twice, for example "4xESRGAN" used by chaiNNer and "4x_ESRGAN" used by Automatic1111.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. That's because latent upscale turns the base image into noise (blur).

I've so far achieved this with the Ultimate SD image upscale and using the 4x-Ultramix_restore upscale model. Always wanted to integrate one myself. There's "latent upscale by", but I don't want to upscale the latent image.

Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4X Upscale Model). This is done after the refined image is upscaled and encoded into a latent. I decided to pit the two head to head; here are the results, workflow pasted below (did not bind to image metadata because I am using a very custom weird …). From what I've generated so far, the model upscale edges slightly better than the Ultimate Upscale.

PS: If someone has access to Magnific AI, please can you upscale and post results for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)? No attempts to fix jpg artifacts, etc.

These values can be changed by changing the "Downsample" value, which has its own documentation in the workflow itself on values for sizes.

Alright, back by popular DEMAND, here is a version of my infinite skin detail workflow that works without any external tools. This is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption. These comparisons are done using ComfyUI with default node settings and fixed seeds. If you check the description on YT, I have a GitHub repo set up with sample images and workflow JSONs, as well as links to the LoRAs and upscale models. If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials.

For SD 1.5 I'd go for Photon, RealisticVision or epiCRealism.

Put upscale models in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.
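For anyone curious what that loader/apply node pair actually does, here is a minimal Python sketch using the spandrel package (the loader recent ComfyUI builds use for these models). The file name and paths are placeholders, not anything from the posts above:

```python
# Minimal sketch of UpscaleModelLoader + ImageUpscaleWithModel, assuming the
# spandrel package and a 4x ESRGAN-family checkpoint. Path is a placeholder.
import numpy as np
import torch
from PIL import Image
from spandrel import ModelLoader

model = ModelLoader().load_from_file(
    "models/upscale_models/4x-UltraSharp.pth").eval()

img = Image.open("input.png").convert("RGB")
# HWC uint8 -> BCHW float in [0, 1], the layout these models expect
x = torch.from_numpy(np.asarray(img).copy()).float().div(255.0)
x = x.permute(2, 0, 1).unsqueeze(0)

with torch.no_grad():
    y = model(x)  # (1, 3, H * model.scale, W * model.scale)

out = (y.squeeze(0).permute(1, 2, 0).clamp(0, 1) * 255).byte().cpu().numpy()
Image.fromarray(out).save(f"output_{model.scale}x.png")
```

The takeaway is that the scale factor is baked into the model itself; everything around it is just tensor-layout plumbing.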
May 5, 2024 · Hi, this is Hakanadori. Last time I covered the clarity-upscale method with clarity-upscaler in its A1111 and Forge versions; this time it's the ComfyUI version. clarity-upscaler isn't a single extension; it works by combining ControlNet, LoRA and various other functions …

Generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode and colour matching.

I am curious both which nodes are the best for this, and which models.

I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.

For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results.

Here is a workflow that I use currently with Ultimate SD Upscale. Note: remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

- Latent upscale looks much more detailed, but gets rid of the detail of the original image.
- Image upscale is less detailed, but more faithful to the image you upscale.

r/StableDiffusion: finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6.0-RC; it's taking only 7.5GB VRAM and swapping the refiner too. Use the --medvram-sdxl flag when starting.

Messing around with upscale-by-model is pointless for hires fix.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements. A step-by-step guide to mastering image quality.

The workflow is kept very simple for this test: load image, upscale, save image.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from there. There is a face detailer node, and there are also "face detailer" workflows for faces specifically.

In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more).

Tried the llite custom nodes with lllite models and was impressed. Good for depth and open pose; so far so good.

I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

The realistic model that worked the best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely. You could also try a standard checkpoint with say 13, and 30. It didn't work out.

There are also other ways to upscale latents with less distortion; the standard ones are going to be bicubic, bilinear, and bislerp.

We are just using Ultimate SD upscales with a few ControlNets and tile sizes of ~1024px. But it's weird; in A1111 the ControlNet …

These upscale models always upscale at a fixed ratio. For example, if you start with a 512x512 empty latent image and apply a 4x model, then apply "upscale by" 0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024). You need to use the ImageScale node after, if you want to downscale the image to something smaller.
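A rough sketch of that fixed-ratio arithmetic in plain torch; `upscaled_4x` stands in for the output of the 4x model pass:

```python
# The model always multiplies by its trained factor (4x here), so a plain
# resize afterwards hits the size you actually wanted.
import torch
import torch.nn.functional as F

def rescale_to_target(img: torch.Tensor, model_scale: int,
                      target_scale: float) -> torch.Tensor:
    """img: (B, C, H, W) pixels already upscaled by model_scale."""
    factor = target_scale / model_scale  # e.g. 2.0 / 4 = 0.5
    return F.interpolate(img, scale_factor=factor,
                         mode="bicubic", antialias=True)

upscaled_4x = torch.rand(1, 3, 2048, 2048)  # a 512x512 source after a 4x model
final = rescale_to_target(upscaled_4x, model_scale=4, target_scale=2.0)
print(final.shape)  # torch.Size([1, 3, 1024, 1024]), i.e. 512 * 4 * 0.5
```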
For upscaling I mainly used the chaiNNer application with models from the Upscale Wiki Model Database, but I also used the "fast stable diffusion" Automatic1111 Google Colab and the Replicate super-resolution collection.

I believe it should work with 8GB VRAM, provided your SDXL model and upscale model are not super huge, e.g. use a 2x upscaler model.

I'm using the mm_sd_v15_v2.ckpt motion model with Kosinkadink's AnimateDiff Evolved.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?

There's only so much you can do with an SD1.5 model, since their training was done at a low resolution. But for the other stuff, super small models and good results.

This is just a simple node build off what's given and some of the newer nodes that have come out; it's nothing spectacular, but it gives good, consistent results. Though, from what someone else stated, it comes down to use case.

Do you have ComfyUI Manager? Jan 5, 2024 · Click on Install Models in the ComfyUI Manager menu, search for "upscale" and click Install for the models you want. In the Manager, select Install Model and scroll down to see the ControlNet models; download the second ControlNet tile model (it specifically says in the description that you need this for tile upscale). Edit: you could try the workflow to see it for yourself. It uses CN tile with Ultimate SD Upscale.

I love to go with an SDXL model for the initial image and with a good 1.5 for the diffusion after scaling.

I have a custom image resizer that ensures the input image matches the output dimensions.

For comparison, in A1111 I drop the ReActor output image in the img2img tab, keep the same latent size, use a tile ControlNet model and choose the Ultimate SD Upscale script; with a denoise setting of 0.25 I get a good blending of the face without changing the image too much. In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.

Indeed SDXL is better, but it's not yet mature, as models are only just appearing for it, and the same goes for LoRAs.

I have been using 4x-UltraSharp for as long as I can remember, but I'm just wondering what everyone else is using, and for which use case? I tried searching the subreddit, but the other posts are from earlier this year or 2022, so I am looking for updated information.

For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

I've been using Stability Matrix and also installed ComfyUI portable. However, I'm facing an issue with sharing the model folder. All the models are located in M:\AI_Tools\StabilityMatrix-win-x64\Data\Models. Additionally, the animatediff_models and clip_vision folders are placed in M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models.

I don't bother going over 4k usually though; you get diminishing returns on render times with only 8GB VRAM ;P Cause I run SDXL-based models from the start and through 3 Ultimate Upscale nodes.

DirectML (AMD cards on Windows): pip install torch-directml, then you can launch ComfyUI with: python main.py --directml

If you don't want the distortion: decode the latent, use "upscale image by", then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.
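As a sketch, that decode / resize / re-encode round trip looks something like this. `vae` is a placeholder for whatever your checkpoint loader hands you (ComfyUI's VAE object does expose encode()/decode(), but treat the exact tensor layouts here as illustrative, not exact):

```python
# Schematic of the decode -> pixel-upscale -> re-encode round trip.
import torch
import torch.nn.functional as F

def upscale_via_pixels(latent: torch.Tensor, vae, scale: float = 2.0):
    image = vae.decode(latent)                    # latent -> pixels
    image = F.interpolate(image, scale_factor=scale,
                          mode="bicubic", antialias=True)
    return vae.encode(image)                      # pixels -> latent again
```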
If I feel I need to add detail, I'll do some image blend stuff and advanced samplers to inject the old face into the process.

The Upscale Image (using Model) node can be used to upscale pixel images using a model loaded with the Load Upscale Model node. Inputs: upscale_model (the model used for upscaling) and image (the pixel images to be upscaled). Outputs: IMAGE (the upscaled images).

Upscale Model Examples: here is an example of how to use upscale models like ESRGAN. You can load this image in ComfyUI to get the workflow.

It's been trained to make any model produce higher-quality images at very low steps, like 4 or 5.

Does anyone have any suggestions; would it be better to do an iterative upscale? Upscale to 2x and 4x in multi-steps, both with and without sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more).

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse-engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

It upscales the second image up to 4096x4096 (4x-UltraSharp) by default, for simplicity, but this can be changed to whatever. That's because of the model upscale.

If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.

Latent upscale it, or use a model upscale, then VAE encode it again and run it through the second sampler. The last one takes time, I must admit, but it runs well and allows me to generate good-quality images (I managed to get a seams-fix settings config that works well for the last one, hence the long processing). You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

Also, both have a denoise value that drastically changes the result. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

The restore functionality that adds detail doesn't work well with lightning/turbo models.

So I made an upscale test workflow that uses the exact same latent input and destination size; one does an image upscale and the other a latent upscale.

Eh, if you build the right workflow, it will pop out 2k and 8k images without the need for a lot of RAM. I am looking for good upscaler models to be used for SDXL in ComfyUI.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refining: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image. Upscaling: increasing the resolution and sharpness at the same time.

Look at this workflow: I get good results using stepped upscalers, the Ultimate SD Upscaler and stuff.

I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it's basically a janky wrapper for an Auto1111 extension.

For the best results, diffuse again with a low denoise, tiled or via Ultimate Upscale (without scaling!). Like, I can understand that using the Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node.

Also, Ultimate SD Upscale is a node too; if you don't have enough VRAM, it tiles the image so that you don't run out of memory.
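The tiling trick is simple enough to sketch. This is only an illustration of why VRAM stays bounded; real nodes like Ultimate SD Upscale overlap and blend the tiles to hide seams, which this minimal version skips:

```python
# Process the image in crops so peak VRAM tracks the tile size, not the full
# image. `model` is any fixed-ratio upscale model (placeholder).
import torch

def tiled_upscale(img: torch.Tensor, model, scale: int,
                  tile: int = 512) -> torch.Tensor:
    b, c, h, w = img.shape
    out = torch.zeros(b, c, h * scale, w * scale)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            crop = img[:, :, y:y + tile, x:x + tile]
            with torch.no_grad():
                up = model(crop)
            out[:, :, y * scale:(y + crop.shape[2]) * scale,
                      x * scale:(x + crop.shape[3]) * scale] = up
    return out
```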
This new upscale workflow also runs very efficiently, being able to do a 1.5x upscale on 8GB VRAM NVIDIA GPUs without any major VRAM issues, as well as being able to go as high as 2.5x on 10GB NVIDIA GPUs.

So in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers), but I'm not forced to have them only multiply by 4x. This way it replicates the SD upscale / Ultimate Upscale scripts from A1111.

The same seed is probably not necessary, and it can cause bad artifacting from the "burn-in" problem when you stack same-seed samplers. Thank you, community!

It's not necessarily an inferior model; 1.5 is in a mature state where almost all the models and LoRAs are based on it, so you get better quality and speed with it.

I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

Working on larger latents, the challenge is to keep the model still generating an image that is relatively coherent with the original low-resolution image. In testing I found that you CANNOT pass latent data from SD1.5 to SDXL or vice versa, or you get a garbage result.

FWIW, I was using it WITH the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen. I have played around with it, but all the low-step fast models require a very low CFG too, so it's difficult to make them follow prompts strongly, especially when you want to go against the model's natural bias.

After generating my images I usually do Hires.fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x.

I want to upscale my image with a model, and then select the final size of it. I rarely use upscale-by-model on its own because of the odd artifacts you can get. Thanks. The resolution is okay, but if possible I would like to get something better.

My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it. Solution: click the node that calls the upscale model and pick one.

Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale. Sample again with denoise=0.5; you don't need that many steps. From there you can use a 4x upscale model and run sampling again at low denoise if you want higher resolution.
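That recipe, grow a little and re-sample at low denoise each pass, reduces to a small loop. `upscale_fn` and `sample_fn` are placeholders for your resize/model-upscale stage and a KSampler img2img pass, not real node names; the point is the loop shape and the small denoise per pass:

```python
# Schematic of the "upscale in steps, re-sample at low denoise" approach.
def stepped_upscale(image, upscale_fn, sample_fn,
                    factors=(1.5, 1.5), denoise=0.25):
    for factor in factors:
        image = upscale_fn(image, factor)          # grow a little...
        image = sample_fn(image, denoise=denoise)  # ...then re-add detail
    return image
```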
A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler and upscale from that.

I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

The aspect ratio of 16:9 is the same from the empty latent and anywhere else that image sizes are used.

And when purely upscaling, the best upscaler is called LDSR.

The first option is to use a model upscaler, which will work off your image node; you can download those from a website that has dozens of models listed, and a popular one is some sort of ESRGAN 4x. You can also do latent upscales. Makes sense when you look a bit into tensors, I guess.
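Since latents really are just tensors, the difference is easy to see in code. A minimal sketch (bislerp is a ComfyUI-specific latent mode not shown here):

```python
# "Latent upscale" in tensor terms: an SD latent is a (B, 4, H/8, W/8)
# tensor, so latent upscaling is interpolation on that tensor, while image
# upscaling happens on the decoded (B, 3, H, W) pixels.
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 64, 64)   # a 512x512 image in SD1.5 latent space
bigger = F.interpolate(latent, scale_factor=2, mode="bicubic")
print(bigger.shape)                  # (1, 4, 128, 128) -> decodes to 1024x1024
```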