ComfyUI Face ID (Reddit)

I'm looking for a front-end face swap: something that injects the face into the mix at the KSampler stage, so if I prompt for something like freckles they won't get lost in the swap/upscale but I've still got my likeness. Any help would be greatly appreciated.

Hello, fellows. I'm using ComfyUI and have InstantID up and running perfectly in my generation process. I want to generate avatar images of people with animal features, preferably using SD1.5. I have tried prompts like "man with dog face" and "man with lion nose", but it generates images of a man with a dog, or just a dog. I know how to use LoRAs and embeddings.

I've used InstantID with much smaller faces, probably in the 300-400px range, and it worked great. Anything smaller, though, and you lose the likeness and details; but then again, small faces in general aren't great without a fix step. The RPG model doesn't do as well with distant faces as other models like Absolute Reality (which is why I used RPG for this guide, for the next part).

That determines how close the generated output face matches its source.

Welcome back, everyone (finally)! In this video, we'll show you how to use FaceID v2 with IPAdapter in ComfyUI to create consistent characters.

Yeah, installing insightface is the hardest part.

Best practice is to use the new Unified Loader FaceID node; it will load the correct CLIP Vision model and so on for you.

Could anyone recommend the most effective way to do a quick face swap on an MP4 video? It doesn't necessarily have to be with ComfyUI; I'm open to any tools or methods that offer good quality and reasonable speed.

EDIT: I'm sure Matteo, aka Cubiq, who made IPAdapter Plus for ComfyUI, will port this over very soon. Fingers crossed.

This extension differs from the many already available in that it doesn't use diffusers; it implements InstantID natively and fully integrates with ComfyUI.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones. The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

Yeah, I stole/adapted most of it from some example on inpainting a face.

Shitty example, but I did the work for you.

Since version 0.4.0 you can save face models as "safetensors" files (stored in ComfyUI\models\reactor\faces) and load them into ReActor, implementing different scenarios and keeping super-lightweight face models of the faces you use.

I just released version 4.0 of my AP Workflow for ComfyUI. Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes. But it's reasonably clean to be used as a learning tool.

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

Tried it: it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly. You also can't go higher than 512, maybe 768, resolution (which is quite a bit lower than 1024 plus an upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

For those with only one character I could do a face swap with the IPAdapter FaceID models, but I am wondering how I can do it with multiple characters in the picture.

Every time I apply it, it screws up the previously generated face and replaces it with a generic one, regardless of Face ID or Face ID Plus.
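A side note on the ReActor face models mentioned above: the saved files are ordinary safetensors files, so you can inspect one from Python if you're curious what's inside. A minimal sketch, assuming the safetensors and torch packages are installed; the file name my_face.safetensors is hypothetical.

```python
from safetensors import safe_open

# Hypothetical file name; the folder is the one mentioned above.
path = r"ComfyUI\models\reactor\faces\my_face.safetensors"

# List every tensor stored in the face model and its shape.
with safe_open(path, framework="pt", device="cpu") as f:
    for key in f.keys():
        print(key, tuple(f.get_tensor(key).shape))
```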
I assume before the two face KSamplers, but I can't figure out the hookup. I have Matteo's workflow for combining the two face IPAdapters with the two KSamplers, but I can't figure out at what point and in which location to add OpenPose to the workflow.

Using the IP-Adapter plus face model: to use the IP-Adapter face model to copy a face, go to the ControlNet section and upload a headshot image (the file name should be ip-adapter-plus-face_sd15.pth). Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt. Run the WebUI.

Ran a few different combinations with a fixed seed of both at different strengths, and with and without an additional IPAdapter Advanced so that I could compare, and I definitely prefer the results from InstantID.

Native InstantID support for ComfyUI.

There are lots of discussions about it on the web, including on Reddit, but I never saw an "official" manual that works for everybody.

If that's the case, it gives errors if you give it an image with a close-up face or without a face.

Thanks, it seems you need to download the ControlNet model and the InstantID model. If you have Manager installed, click on it, click "Download Models", then search for the ControlNet models and download what you need.

I agree wholeheartedly.

I'm utilizing the Detailer (SEGS) from the ComfyUI-Impact-Pack and am encountering a challenge in crowded scenes. The tool attempts to detail every face, which significantly slows down the process and compromises the quality of the results. Is there a way to configure it to focus solely on detailing the largest face in the scene?

I'm doing some tests with ComfyUI. Hi, I am trying to perform face swaps on animal characters in children's storybooks.

Check my ComfyUI Advanced Understanding videos on YouTube, for example part 1 and part 2.

The only way to keep the code open and free is by sponsoring its development.

You can't use that model for generations/KSampler; it's still only useful for swapping.

See, this is another big problem with IP-Adapter (and me): it's totally unclear what it's all for and what it should be used for.

If you are going to use an LLM, give it examples of good prompts from Civitai to emulate.

It really depends on what you're trying to accomplish; there are many models to choose from depending on what you want to do.

Something is happening when it passes into the upscaling phase that causes the faces to just shift toward something more generic.
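On the Detailer (SEGS) question above about crowded scenes: one way to think about it is to filter the detections down to the largest face before the detail pass runs. A minimal sketch of that selection logic in plain Python; this is a generic illustration, not the Impact Pack API.

```python
# Given face detections as (x0, y0, x1, y1) boxes, keep only the largest one
# so a detail pass touches a single face instead of every face in the crowd.
def largest_face(boxes):
    def area(box):
        x0, y0, x1, y1 = box
        return max(0, x1 - x0) * max(0, y1 - y0)
    return max(boxes, key=area) if boxes else None

print(largest_face([(10, 10, 60, 70), (100, 40, 300, 280), (400, 400, 420, 430)]))
# -> (100, 40, 300, 280), the biggest box by area
```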
In my experience, Windows 10/11 users will need at least three components, including the "ReActor Node for ComfyUI" by Gourieff and the Visual Studio Build Tools.

I don't know what half of the controls on these nodes do, because I didn't find any documentation for them 😯. And while face/full-body inpaints are good and sometimes great with this scheme, hands still come out with polydactyly and/or fused fingers most of the time.

Recently, BOOLEAN was added to ComfyUI and Impact Pack was updated to use it.

Hi. I have learned a few things about ComfyUI in the last two months. I can manage basic parts like image-to-image, text-to-image, and upscaling. I'm struggling with face fix, hands fix, legs fix, and overall body enhancement. Where can I learn about those?

The issue is most probably related to the InsightFace node.

If you want some face likeness, try detailing the face using Impact Pack, but use the old mmdet model, because the new ultralytics model is realistic. For an anime look, I suggest inpainting the face afterward, but you want to experiment with the denoise level.

The mtb nodes have a face swap, kind of like Roop, but not as good as training a LoRA.

TripoSR was just released and I just felt like I had to create a node for ComfyUI so I can experiment with it. Now you can try it out too! TripoSR is a state-of-the-art open-source model for fast 3D reconstruction from a single image, collaboratively developed by Tripo AI and Stability AI.

Let us know if you find it useful and stay tuned for the next post! DISCLAIMER: all images here are generated. We do not encourage stolen content or unauthorized face swaps (deepfakes).

"The training requirements of our approach consist of 24,602 A100-GPU hours, compared to Stable Diffusion 2.1's 200,000 GPU hours." From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

It's right above: you have to daisy-chain ReActor nodes and input the face indexes. Each node needs a Load Image of the face you want to swap. Example: face number one would be index 0, face 2 would be index 1, face 3 would be index 2, and so on.

Important ControlNet settings: Enable: Yes; Preprocessor: ip-adapter_clip_sd15; Model: ip-adapter-plus-face_sd15. Upload your desired face image in this ControlNet tab.

The smaller faces become, the worse they get, but this depends a lot on the model and the prompt too, so your results will vary.

Here's a face swap with ReActor, with a very cool optional extra: FaceAnalysis.

After reviewing this new model, it appears we're very close to having a closer face swap from the input image.

For the Txt2Img and the Face Swap/Detail groups I've used ZavyChromaXL, and I decided to swap out Face ID for InstantID.

I've tried IPAdapter Plus Face, InstantID, ReActor, and PuLID, but the result is not the same as the real face images.

I've been using a ComfyUI workflow, but I've run into issues that I haven't been able to resolve, even with ChatGPT's help.

New FaceID model released! Time to see how it works and how it performs.

However, since prompting is pretty much the core skill required to work with any gen-AI tool, it'll be worthwhile studying that in more detail than ComfyUI, at least to begin with.
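On the TripoSR comment above about writing a node for ComfyUI: a custom node is just a Python class with a few expected attributes, registered through NODE_CLASS_MAPPINGS. A minimal sketch under those assumptions; all names here are hypothetical, and the node does nothing but pass an image through.

```python
# Minimal sketch of the skeleton ComfyUI expects from a custom node, assuming
# it lives in ComfyUI/custom_nodes/my_node/__init__.py. A real TripoSR wrapper
# would do its actual work inside run().
class MyImagePassthrough:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"image": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "run"
    CATEGORY = "examples"

    def run(self, image):
        # Pass the image tensor through unchanged.
        return (image,)

NODE_CLASS_MAPPINGS = {"MyImagePassthrough": MyImagePassthrough}
NODE_DISPLAY_NAME_MAPPINGS = {"MyImagePassthrough": "My Image Passthrough"}
```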
Hello to everyone: since people ask for my full workflow and my node system for ComfyUI, here is what I am using. First I used Cinema 4D with the Sound Effector MoGraph to create the animation; there are many tutorials online on how to set it up.

Go to the ControlNet tab, activate it, and use "ip-adapter_face_id_plus" as the preprocessor and "ip-adapter-faceid-plus_sd15" as the model. Choose a weight between 0.5 and 1. Step 4 (optional): inpaint to add back face detail.

One is the face (the redhead woman) and the other one is only the head position (the one from Tron: Evolution). It uses the face position and angle from the darker image and draws the redhead woman in that location. I had 5 minutes to toss this together.

Is there any other way to do this?

I guess you are using the newest IPAdapter FaceID.

I'm wondering if anyone can help.

Thanks for all your videos and your willingness to share your very in-depth knowledge of comfy/diffusion topics. I would be interested in learning more about how you go about creating your custom nodes, like the one to compare the likeness between two different images that you mentioned in a video a while back, which you have now made into a node and showed in this video.

I don't find ComfyUI faster; I can make an SDXL image in Automatic1111 in 4.2 seconds with TensorRT (the same image takes 5.6 seconds in ComfyUI), and I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.

I'm hoping to use InstantID as part of an inpainting process to change the face of an already existing image, but can't seem to figure it out.

You need to use the IPAdapter FaceID node if you want to use Face ID Plus V2. Exciting times.

Discover how to use FaceDetailer, InstantID, and IP-Adapter in ComfyUI for high-quality face swaps.

Hello all, it turns out that, while generating photos with the ReActor node, some turn out fine and some turn out extremely blurry. I wonder if any of you happens to know the reason for this.

The "FACE_MODEL" output from the ReActor node can be used with the Save Face Model node to create an InsightFace model; that can then be used as a ReActor input instead of an image.

Thank you and have a great day :)

It's a bit late, but the issue was that you needed to update your ComfyUI version to the latest one.

Workflows: https://f.

I would recommend watching Latent Vision's videos on YouTube; you will be learning from the creator of IPAdapter Plus: ComfyUI IPAdapter Plus, ComfyUI InstantID (Native), ComfyUI Essentials, ComfyUI FaceAnalysis. Not to mention the documentation and video tutorials.

Yeah, what I like to do with ComfyUI is crank up the weight but also not let the IP adapter start until very late, like 0.8 even, so that the underlying model makes the image according to the prompt and the face is the last thing that is changed.

You should give an image that has the whole head, including a clear face, for it to work.

Yes, I've actually been doing that.
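On the tip above about cranking the weight but starting the IP adapter very late: a quick way to see what a late start means in practice is to count which steps the adapter actually influences. A minimal sketch in plain Python (not ComfyUI code), assuming a 30-step schedule and the 0.8 start mentioned above.

```python
# start_at is the fraction of the sampling schedule where the adapter kicks in.
def adapter_active_steps(total_steps: int, start_at: float) -> range:
    first_step = int(total_steps * start_at)
    return range(first_step, total_steps)

print(list(adapter_active_steps(30, 0.8)))
# [24, 25, 26, 27, 28, 29] -> the adapter only touches the last 6 of 30 steps
```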
I installed InstantID for ComfyUI and tried the example workflows to understand how to use it. The multi-ID one looks like this, and it is used to create an image with two people starting from two faces. Can anyone help me understand which type of image should be inserted in the third input node? Thanks to those who will answer.

It seems to produce fairly decent results in the original SDXL output, but when it gets to upscaling and face detailing things start looking less different again.

What I want to do is: I have an image of one real person and I want to make full-body images with the same face as the original image.