[Feature Request] Attention Mask for LORAs? #2445
Masks are useful when there are multiple LoRAs.
@jakeute Usually the output of a sampler is not very predictable unless you're using ControlNet modules to predefine the shape of the output. So in this case, masked LoRAs in conjunction with ControlNet modules would also be useful.
The IPAdapter's attention mask marks the portion of the target that the reference is applied to.
@suede299 Yes, that's exactly how the IPAdapter's "attention mask" works. You can be fairly sure where a certain LoRA should be applied as soon as you're using a ControlNet such as Canny, OpenPose, or a segmentation ControlNet, for example. I really don't know if this is even possible, but I know it would be very helpful!
@manzonif That's awesome! Does that mean it is technically possible to implement such a feature? Is there anyone out there capable of creating such a node from this code? Unfortunately I don't even know where to start, and I can barely read Python :(
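For readers unfamiliar with the mechanism: the core idea behind an attention mask like IPAdapter's can be sketched as blending a module's modified output back into the original output only where the mask is set. This is a minimal, hypothetical sketch of that blending step (the function name and shapes are assumptions, not ComfyUI's or IPAdapter's actual code):

```python
import numpy as np

def masked_blend(original, modified, mask):
    """Blend `modified` into `original` where the mask is set.

    original, modified: (H, W, C) feature maps
    mask: (H, W) array of 0/1 (or soft values in [0, 1])
    """
    mask = mask[..., None]  # add a channel axis so the mask broadcasts over C
    return mask * modified + (1.0 - mask) * original

# Toy example: modify only the left half of a 4x4 feature map.
H = W = 4
orig = np.zeros((H, W, 1))        # "untouched" output
mod = np.ones((H, W, 1))          # output with the extra module applied
mask = np.zeros((H, W))
mask[:, :2] = 1.0                 # left half only
out = masked_blend(orig, mod, mask)
# left half takes the modified values, right half keeps the original
```

With a soft mask (values between 0 and 1) the same formula gives a smooth falloff at the mask's edge instead of a hard seam.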
Technically, yes.
@manzonif That would be great! If I can help you in any way, please let me know (e.g. by testing a node).
It seems there is an addon for A1111 to mask the output of LoRAs. I would really appreciate it if someone could make a LoRA attention-masking node for ComfyUI!
I still have some hope this attention mask node could be developed.
+1
My hope is fading :-(
…4472 and #2445 (#3422)

* Update node_helpers.py to use a generic Pillow wrapper to resolve multiple metadata-related issues. Replaced the open_image function with a generic pillow function that takes PIL functions as a dependency injection and applies the ImageFile.LOAD_TRUNCATED_IMAGES try/except fix to them. This provides an extensible function that can wrap offending functions when related errors are discovered, without the need to repeat code.
* Update a few PIL function calls in several locations to use the generic node_helpers.pillow wrapper, which takes the function as a dependency injection and applies the try/except fix with ImageFile.LOAD_TRUNCATED_IMAGES.
* Corrected comment listing the issue #s fixed.
* Update node_helpers.py to remove the import of Image from PIL; it is no longer required, as the functions are injected.
Hi @shawnington, how does this commit (0fecfd2) solve the missing attention mask for LoRAs?
The issue number 2445 in that commit refers to the PIL library fix; it was not intended to be linked to this feature request.
@AlexD81, @manzonif, @jakeute, @suede299 I found two custom nodes in AnimateDiff-Evolved which seem to do what I was searching for: ADE_RegisterLoraHook and ADE_ConditioningSetMaskAndCombine. The first node loads the LoRA, but the LoRA is only activated for a specific masked area in the second node. Could anyone confirm? Maybe this issue can be closed then.
Do the LoraHooks only work with AnimateDiff models? I am trying to add ADE_RegisterLoraHook and ADE_ConditioningSetMaskAndCombine to an SDXL workflow. Is it possible to use a standard sampler, or do I need to set up a workflow with an AnimateDiff sampler? My goal is to generate an image with two characters, each coming from a different LoRA, in the same generation. Regional conditioning is working, but the LoRAs seem to be unmaskable.
@Phando Yes, it works with regular LoRAs as well. Here's a basic workflow that shows how it works in general. I had an issue creating separate face masks, so I did it manually (upper right corner). In this example the LoRAs have the same trigger words; if your LoRAs have different trigger words, you should set up two different ADE conditions. I'm not quite sure if I'm handling it right, but it seems to work just fine.
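The two-character setup described above boils down to combining per-region conditioning under masks: each condition contributes only inside its mask, and a fallback applies where no mask covers. A hedged sketch of that mask-weighted combination (hypothetical helper, not the ADE nodes' actual implementation):

```python
import numpy as np

def combine_masked_conds(conds, masks, base):
    """Combine several conditioning effects, each restricted to a mask.

    conds: list of (H, W, C) conditioning effects
    masks: list of (H, W) weights in [0, 1]
    base:  (H, W, C) fallback used where no mask applies
    """
    acc = np.zeros_like(base)
    weight = np.zeros(base.shape[:2] + (1,))
    for cond, m in zip(conds, masks):
        m = m[..., None]            # broadcast mask over channels
        acc += m * cond
        weight += m
    covered = weight > 0
    # normalize where masks overlap; fall back to `base` where uncovered
    return np.where(covered, acc / np.maximum(weight, 1e-8), base)

# Toy example: character A on the left half, character B on the right.
H, W, C = 4, 4, 1
base = np.zeros((H, W, C))
cond_a = np.full((H, W, C), 2.0)
cond_b = np.full((H, W, C), 4.0)
mask_a = np.zeros((H, W)); mask_a[:, :2] = 1.0
mask_b = np.zeros((H, W)); mask_b[:, 2:] = 1.0
out = combine_masked_conds([cond_a, cond_b], [mask_a, mask_b], base)
```

The normalization step matters when masks overlap (e.g. two soft face masks that touch): without it, overlapping regions would be conditioned twice as strongly.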
Thank you for the sample workflow! |
It worked! |
Is it possible to add a node that masks the area a specific LoRA will be applied to?
Matteo's IPAdapter node has such an "attention mask". I found it very useful, and it would also be great to be able to restrict the area of influence of LoRAs.
(I know something similar is already possible using "SetLatentNoiseMask", but that would require adding a sampling pass for each LoRA, and it's just not the same.)
I don't know if it's even possible to implement such a mask, but at least I want to ask for it.
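To make the request concrete: a LoRA adds a low-rank delta B·A to a frozen layer weight, and a masked LoRA would apply that delta only at positions selected by a mask, leaving the rest of the layer's output untouched. A minimal sketch under those assumptions (all names are hypothetical; this is not an actual ComfyUI or LoRA-library API):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 8, 8, 2
W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.1   # LoRA down-projection
B = rng.standard_normal((d_out, rank)) * 0.1  # LoRA up-projection
scale = 1.0                                   # LoRA strength

def masked_lora_forward(x, mask):
    """Forward pass where the LoRA delta is gated per position.

    x:    (N, d_in) tokens, e.g. flattened spatial positions
    mask: (N,) per-token weights in [0, 1] selecting where the LoRA acts
    """
    base = x @ W.T                       # frozen layer output
    delta = (x @ A.T) @ B.T * scale      # low-rank LoRA contribution
    return base + mask[:, None] * delta  # delta applied only inside the mask

x = rng.standard_normal((4, d_in))
mask = np.array([1.0, 1.0, 0.0, 0.0])    # LoRA active on the first two tokens
out = masked_lora_forward(x, mask)
# masked-out tokens match the plain base forward pass exactly
```

Gating the output delta per position (rather than patching the weight itself) is what would let two character LoRAs coexist in one generation, each confined to its own region.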