Issues: Alpha-VLLM/Lumina-T2X
- Is it possible to replace the model's Text Encoder with other models, such as google/gemma-2-9b? (#96, opened Jul 8, 2024 by lymanzhao)
- Does the current lumina_next_t2i support inference with a batch size greater than 1? The batch_size parameter in lumina_next_t2i/sample.py appears to be unused. (#108, opened Aug 21, 2024 by henyilee)
- Do you have any plans to accelerate the Lumina model using TensorRT? (#97, opened Jul 9, 2024 by csdY123)
- Could the authors release a script for LoRA training or fine-tuning based on Diffusers? (#102, opened Jul 18, 2024 by wangqixun)
- Request for guidance on reproducing the model architecture for image classification (#101, opened Jul 16, 2024 by yardenfren1996)
- ImportError: attempted relative import beyond top-level package (#81, opened Jun 19, 2024 by SoftologyPro)