ComfyUI snap to grid (Reddit thread digest)

I found a few workflows that suit my needs, or most of them, on comfyworkflows and civitAI. I uploaded the workflow to GitHub.

This works for position and rotation.

In the first field I can plug in -1 and it randomizes; it then displays the seed for the current image, which is mostly what I would expect.

It depends:
- If the image was generated in ComfyUI and the metadata is intact (some users and websites strip the metadata), you can just drag the image into your ComfyUI window.

There is an addon that places a restart button in the UI, but it requires switching to a different branch of Comfy.

TIL: I recently learned that Civitai saves workflows too.

Thank you, now I understand how it all works; also, a minute later he literally explained how to get the workflows.

Creating better animations with auto masking, ControlNets and AnimateDiff.

STEP 1: Open the venv folder, then click in its address bar.

It creates a 5 x 5 grid based on the KSampler cfg and steps. Alternatively, it could involve splitting the latent space and applying different models, LoRAs, prompts, cfg, etc.

ComfyUI basics tutorial.

I liken the different methods of generating AI art to the different types of keyboard synthesizers.

I use ComfyUI Windows Portable.

Hold Ctrl while dragging an object (Command on macOS).

It works for a single model; in fact it works with a few, but I run out of memory if I try to grid with more.
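Several tips in this thread revolve around recovering the seed and workflow from a generated PNG's metadata. Here is a minimal, standard-library-only sketch of reading PNG `tEXt` chunks; the `prompt` keyword is an assumption based on how ComfyUI is commonly reported to embed its workflow JSON, so check the actual keys on your own files.

```python
import json
import struct

def read_png_text_chunks(data: bytes) -> dict:
    """Scan a PNG byte stream and return its tEXt chunks as {keyword: value}.

    ComfyUI saves are commonly reported to store generation data under keys
    like 'prompt' and 'workflow' (assumption; verify on your own images).
    """
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length
        if ctype == b"IEND":
            break
    return chunks
```

If the image carries a ComfyUI-style `prompt` chunk, `json.loads(chunks["prompt"])` gets you back to the node inputs, including the seed.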
To begin with, I have only 4GB VRAM, which by today's standards is considered potato.

Then click restart server, refresh your browser, and in all likelihood things will work.

By being a modular program, ComfyUI allows everyone to make workflows to meet their own needs or to experiment on whatever they want.

I would use images if I could, but I couldn't find a node for it. I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data (prompts, steps, sampler, etc.) and spit it out in some shape or form.

What I found helpful was to have Auto1111 and Comfy share models and the like from a common folder.

Demo 3 - Model and LoRA.

If it looks like it's going in the right direction, I generate 4-12 images, depending on how hard I think it'll be to get the right result, and I increase the resolution of the masked area to get more detail.

ControlNet v1.1: A complete guide - Stable Diffusion Art (stable-diffusion-art.com).

You can then connect the same primitive node to 5 other nodes to change them all in one place instead of editing each node.

So you can, say, queue your first task in the first workflow, then switch to the second.

In A1111, I usually inpaint, try some keywords, and generate 2-4 images.

Look at this workflow: I made a workflow to generate multiple fix options using the amazing ImpactPack, and then to choose and paste the best one into the original picture. I have a workflow which draws a grid of images by combining latents with the "multi latent composite" node.

Return to the default folder, click in its address bar too, then clear it and type "cmd" instead.

I've kind of gotten this to work with the "Text Load Line" node. However, the Grid Annotations node from ImagesGrid is fully compatible with the 'annotations' input on Easy Grids' Save Image Grid node, and may be helpful for labeling your grid.

The amazing MeshGraphormer understands the correct depth map for hands.
Would be nice to have a way to do it from the main UI.

I review the result and send the image back.

If that doesn't give you the seed used to recreate the image, you need to find the original seed.

So I'm seeing two fields related to the seed.

One thing about this setup is that plugin installations sometimes fail due to path issues, but that is easily cleared up by editing the installers.

Note that when you close the UIs, the last one you closed is the one that will pop up the next time you open Comfy.

The venv folder should be inside the comfyui folder, and it should point to the Python version found when it is created, the first time you run ComfyUI.

I've been googling around for a couple of hours and I haven't found a great solution for this. You can look at the EXIF data to get the seed used.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly. You also can't go higher than 512, up to 768 resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

We learned that downloading other workflows and trying to run them often doesn't work because of missing custom nodes, unknown model files, etc.

The UE nodes (which let you broadcast data to matching inputs, avoiding all sorts of spaghetti) have a small update, thanks to a great suggestion from LuluViBritannia on GitHub.

I want to load it into ComfyUI, push a button, and come back in several hours to a hard drive full of images, but for audio files.

For example, it's like performing sampling with the A model for only 10 steps, then synthesizing another latent, injecting noise, and proceeding with 20 steps using the B model.
And boy, I was blown away by how well it uses the GPU.

When the tab drops down, click to the right of the URL to copy it.

Found some solid recommendations to use ComfyUI Manager.

I've found out how to do XY with efficiency nodes, but I can't figure out how to run it with this average amount as the variable.

Since my secret primitive prompt composer is no longer secret, it seems like a good time to upgrade to something more Comfy-like.

Then navigate, in the command window on your computer, to the ComfyUI/custom_nodes folder and enter the command there. Once you install it, you can load up a workflow in an otherwise fresh install of ComfyUI, click on Manager, and Install Missing Custom Nodes.

New tutorial: how to rent 1-8x GPUs and install ComfyUI in the cloud (+ Manager, custom nodes, models, etc.).

It'll be perfect if it includes upscaling too (though I can upscale in an extra step in the Extras tab of automatic1111).

A tip I learned recently: holding Shift+Ctrl while dragging snaps the object's pivot point to the collider behind it (changes depending on camera angle).

ComfyUI uses a signal handler: Ctrl-C is graceful! It frees up memory and deletes temporary files. Graceful by design, but inelegant.

Inputs that are being connected by UE now have a subtle highlighting effect.

This approach allows selective attention coupling at relevant layers without having to recompute the entire UNet multiple times for different prompts.

ComfyUI basic to advanced tutorials.

It integrates with A1111 and helps reduce repetitive copying and pasting by letting you save prompt fragments.

The command window pops up and vanishes in mere seconds.

Hi Reddit! In October, we launched https://comfyworkflows.com to make it easier for people to share and discover ComfyUI workflows.

For example: start with a 512x512 empty latent image, then apply a 4x model, then "upscale by" 0.5.
Size of the actual blueprint isn't important.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Whether for individual use or team collaboration, our extensions aim to enhance productivity and readability.

Design your blueprint of whatever you like to make chunk aligned; size is irrelevant.

Just started using ComfyUI when I got to know about it in the recent SDXL news.

Questions about ComfyUI from a newbie.

If you've installed the nodes that contain the ControlNet preprocessors, it should be there.

If you're interested in how Stable Diffusion actually works, ComfyUI will let you experiment to your heart's content (or until it overwhelms you).

ComfyUI Manager does not install.

Please keep posted images SFW.

In order to recreate Auto1111 in ComfyUI, you need those encode++ nodes, but you also need the noise generated by ComfyUI to be made on the GPU (this is how Auto1111 makes noise), along with getting ComfyUI to give each latent its own seed instead of splitting a single seed across the batch.

I'm really new to Comfy and would like to show the change in the conditioning average between two prompts from 0.0 to 1.0 in 0.05 increments as an XY plot.

Also, you can double-click on the grid and search for "DW" and you'll see it.

You can just plug the width and height from Get Image Size directly into the nodes where you need them, too.

I don't like all the memes; I think they influence people into thinking ComfyUI is an insurmountable learning curve. A lot of people are just discovering this technology and want to show off what they created.

Thanks tons! That's the one I'm referring to. CLIPSeg Plugin for ComfyUI.
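Sweeping the conditioning-average weight from 0.0 to 1.0 in 0.05 increments gives 21 grid columns. Generating the values in code avoids floating-point drift in the column labels; this is a generic sketch, not a ComfyUI node.

```python
def sweep(start: float, stop: float, step: float) -> list[float]:
    """Evenly spaced weights, rounded to dodge float accumulation error."""
    n = round((stop - start) / step)
    return [round(start + i * step, 10) for i in range(n + 1)]

weights = sweep(0.0, 1.0, 0.05)  # 21 values: 0.0, 0.05, ..., 1.0
```

Each value would then drive the conditioning-average strength for one column of the XY plot.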
Detailer from the ImpactPack makes multiple fix options.

Installation: follow the link to the Plush for ComfyUI GitHub page if you're not already there.

I share many results, and many people ask me to share the workflow.

The simplest way, of course, is direct generation using a prompt. A checkpoint is your main model, and then LoRAs add smaller models to vary the output in specific ways.

I'll give it a shot.

ComfyUI forces you to learn about the underlying pipeline, which can be intimidating and confusing at first.

Cheers, appreciate any pointers! Somebody else on Reddit mentioned this application to drop and read.

- If the image was generated in ComfyUI, the civitai image page should have a "Workflow: xx Nodes" box.

Look for Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors.

This demo uses the XY Batch method. Batch up prompts and execute them sequentially.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

First, we'll discuss a relatively simple scenario: using ComfyUI to generate an app logo.

Copy that path (we'll need it later).

Set snap to grid and set the first X and Y values to 32 (a chunk is 32x32 pixels); now every movement while the blueprint is active will jump from chunk to chunk on the map.

You upload an image -> unsample -> KSampler advanced -> same recreation of the original image.

Just starting to tinker with ComfyUI.

The Color Grid T2I adapter preprocessor shrinks the reference image 64 times smaller and then expands it back to the original size.

Plus a quick run-through of an example ControlNet workflow.

Cascade with restart sampling. Also added a second part where I just use random noise in a latent blend.
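The 32-pixel chunk snapping described above is just rounding each coordinate to the nearest multiple of the grid size. A minimal sketch (generic math, not tied to any particular editor):

```python
def snap(value: float, grid: int = 32) -> int:
    """Snap a coordinate to the nearest multiple of the grid size (32 px = one chunk)."""
    return int(round(value / grid) * grid)

def snap_point(x: float, y: float, grid: int = 32) -> tuple[int, int]:
    """Snap both axes of a point, as when dragging a blueprint across the map."""
    return snap(x, grid), snap(y, grid)
```

With a fine grid this also shows why zoomed-out snapping can land on a neighboring line: any drag of more than half a grid step jumps a full cell.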
I've personally decided to step into the deep end with ComfyUI.

Here's everything you need to attempt to test Nightshade, including a test dataset of poisoned images for training or analysis, and code to visualize what Nightshade is doing to an image and to test potential cleaning methods.

It will then request to load 3 models (SDXL, SDXLCLIPMODEL, AutoencoderKL) every single time.

I need it to be executed from a single ComfyUI workflow, i.e. I cannot export to another piece of software and have it stitch the audio to the video.

Auto1111 uses command-line args to specify folders; Comfy uses an extra models file.

But if you have experience using Midjourney, you might notice that logos generated using ComfyUI are not as attractive as those generated using Midjourney.

This is my first post on Reddit, please do not judge too strictly.

Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend.

This might be useful, for example, in batch processing with inpainting, so you don't have to manually mask every image.

Then find example workflows.

In the case of ComfyUI, though, there is none, so create it x).

More of a quality-of-life thing.

Promoting your own tutorial is encouraged, but do not post the same tutorial more than once every two days.

Instead of polluting your system with 10GB+ of redundant AMD packages, please just do steps 3 through 5.

Snap size can be adjusted under the Edit toolbar as Snap Settings.
I would assume setting "control after generate" to fixed.

Adds a setting to make moving nodes always snap to grid.

Seed question: r/comfyui.

Demo 4 - FreeU.

Can someone guide me to the best all-in-one workflow that includes base model, refiner model, hi-res fix, and one LoRA?

For a dozen days, I've been working on a simple but efficient workflow for upscaling. Just keep 32x32 in mind.

If your end goal is generating pictures (e.g. cool dragons), Automatic1111 will work fine (until it doesn't). I do that a lot.

This demo uses the XY List method.

Bulk convert audio files + 1 image to video. I have a text file full of prompts.

The Ultimate AI Upscaler (ComfyUI Workflow) - Workflow Included.

Nodes in ComfyUI represent specific Stable Diffusion functions.

The biggest tip for Comfy: you can turn most node settings into an input via right-click -> convert to input, then connect a primitive node to that input.

[Testing] "Better" Loader Lists: adds custom LoRA and Checkpoint loader nodes with the ability to show preview images; just place a png or jpg next to the file and it'll display in the list on hover.

Generation using prompt.

I tested every sampler with several different LoRAs (cyberrealistic_v33).

I created a free tool and custom models (to be released) to create custom workflows.

Unload checkpoint.

Click on the green Code button at the top right of the page.

The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore.

I know there is the ComfyAnonymous workflow, but it's lacking.

Then use SD Upscale to split it into tiles and denoise each one using your parameters; that way you will get a grid with your images. Maybe it's also possible to save each tile separately after it's been denoised.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

Try it out and let me know if it works.
Start with simple workflows.

The top three inputs are connected by UE; the latent isn't connected at all.

With a fine grid and zoomed out, you still get snapping, but not necessarily to the grid line you want.

Is there any way to unload the checkpoint?

With CFG 1 it used to work.

It works pretty well in my tests, within limits.

The "Attention Couple" node lets you apply a different prompt to different parts of the image by computing the cross-attentions for each prompt, each corresponding to an image segment.

Yes, you can actually run two instances at once by having a second tab of ComfyUI up and loading the second workflow in the second tab.

Belittling their efforts will get you banned.

Midjourney is like those cheap Casio keyboards that only have three octaves and some electronic drum rhythms.

Solved: you create the primitive node and then drag its output to any input; the primitive node will then update to a number.

I'm a complete noob at this; I've been trying to get it running for some days now.

Everything ComfyUI needs to run with AMD ROCm is included in that pip package. ComfyUI should work perfectly.

Afterwards you can use the same latent and tweak start and end to manipulate it.

I created some custom nodes that allow you to use the CLIPSeg model inside ComfyUI to dynamically mask areas of an image based on a text prompt.

The net effect is a grid-like patch of local average colors.

Heya, I've been working on a few tutorials for ComfyUI over the past couple of weeks; if you are new to ComfyUI and want a good grounding in how to use it, this might help you out.

Click this and paste it into Comfy.

I'd be interested in doing this either via audio generated within Comfy, or via audio sequentially plucked from a folder of audio files, like a batch image loader.

I suggest you remove ComfyUI and all Python versions and start anew.
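A recurring ask in this thread is unattended batch generation: take a text file full of prompts, push a button, and come back hours later to a drive full of images. A minimal standalone sketch against ComfyUI's local HTTP API; the `/prompt` endpoint and `{"prompt": ...}` payload follow ComfyUI's commonly documented API-format workflow examples, and the node id `"6"` is purely hypothetical (use whatever id your exported workflow assigns to the positive-prompt node).

```python
import json
import urllib.request

def load_prompts(path: str) -> list[str]:
    """One prompt per line; blank lines are skipped."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> None:
    """POST one API-format workflow to a locally running ComfyUI server."""
    req = urllib.request.Request(
        f"{server}/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Hypothetical usage, assuming a workflow exported via "Save (API Format)":
# workflow = json.load(open("workflow_api.json"))
# for text in load_prompts("prompts.txt"):
#     workflow["6"]["inputs"]["text"] = text  # "6" is an assumed node id
#     queue_prompt(workflow)
```

Each POST adds one job to the server's queue, so the loop returns immediately and ComfyUI grinds through the prompts on its own.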
If you just want to see the size of an image, you can open it in a separate tab of your browser and look up top to find the resolution.

I have OCD, so I would highly appreciate it if you could add a feature that lets nodes snap to a grid so I don't spend 25 minutes of my time making sure it's all perfectly aligned 😭😭

There are apps and nodes which can read in generation data, but they fail for complex ComfyUI node setups.

Ford vs Ferrari in anime style using AnimateDiff v3.

I noticed a jump in performance (it's not hogging my resources like crazy) and a huge improvement in generation speed.

So I'm happy to announce today: my tutorial and workflow are available.

ComfyUI or Automatic1111?

I don't know if this problem is the same on A1111 because I've only been using ComfyUI; basically, I load up ComfyUI from "run_nvidia_gpu.bat" and click "queue prompt" on my workflow.

Also, if this is new and exciting to you, feel free to post.

It's really not; what you actually need is to be open to learning a different way of creating images, and to have easy access to useful tutorials for the various aspects of ComfyUI.

Hello, I recently moved from Automatic1111 to ComfyUI, and so far it's been amazing.

In the GitHub Q&A, the ComfyUI author had this to say. Q: Why did you make this? A: I wanted to learn how Stable Diffusion worked in detail.

If you deleted something under the python310 folder, you probably need to reinstall Python.

At the moment, A1111 has more plugins and extensions, and handles inpaint/outpaint better.

In theory, without using a preprocessor, we can use another image editor.

I have a wide range of tutorials with both basic and advanced workflows.

And above all, BE NICE.

Upscale by 0.5 to get a 1024x1024 final image (512 x 4 x 0.5 = 1024).
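The upscale-then-shrink arithmetic that keeps coming up in this thread is easy to sanity-check in code. A generic sketch (plain math, not a ComfyUI node):

```python
def final_edge(start_px: int, model_factor: int, upscale_by: float) -> int:
    """Edge length after an NxN upscale model followed by a fractional 'upscale by'."""
    return int(start_px * model_factor * upscale_by)

# 512 px latent -> 4x upscale model -> "upscale by" 0.5 -> 1024 px
```

The same formula tells you which fraction to pick for a target size: 0.25 with a 4x model returns you to the original resolution.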
I got it from Civitai and followed the installation instructions for the portable version.

If a box is in red, then it's missing; ComfyUI Manager will identify what is missing and download it for you.

Try civitai.

The output nodes for the XY grid are included in the same workflow.

The XY grid is output using a second workflow. It creates a 5 x 4 grid based on the selected models and LoRAs.

ComfyUI Extensions by Blibla is a robust suite of enhancements designed to optimize your ComfyUI experience. It provides a range of features, including customizable render modes, dynamic node coloring, and versatile management tools.

If you right-click on the grid: Add Node > ControlNet Preprocessors > Faces and Poses.

Press Enter; it opens a command prompt.

ComfyUI is pretty dope, to be honest.

I'm losing my mind over here, so my last resort is getting help from here.

"Seed" and "Control after generate".

And remember, SDXL does not play well with 1.5, so that may give you a lot of your errors.

I've been working on an app that takes a "card-based" approach to prompt building.