
Train MVDiffusion w/ random camera trajectory but w/o depth cond #45

Open
OrangeSodahub opened this issue Mar 15, 2024 · 2 comments
@OrangeSodahub

Hi, I'm very interested in your work. I'd like to know whether I could train your depth version of MVDiffusion, but with an SD base model that has no depth conditioning (e.g., SD v1.5). If so, based on your experience, do you have any advice on how to sample images along the entire camera trajectory so that cross-view consistency stays good? Finally, does the precision of the depth values matter during training? I may only have approximate depth values derived from bounding boxes to train with.

@Tangshitao
Owner

I don't think the current pipeline can be used to train on images without depths.

@OrangeSodahub
Author

OrangeSodahub commented Mar 17, 2024

@Tangshitao Sorry, I don't quite understand what you mean. Let me clarify: I mean using depths to compute the cross-view correspondences, but not feeding depth as a condition when generating images. Or do you mean that without depth conditioning the results won't be as good as with it?
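For reference, here is a minimal sketch of what I mean by "using depths to calculate the correspondence". This is my own assumption of the setup, not the MVDiffusion code: the function name `correspondences_from_depth`, the array shapes, and the single shared intrinsics matrix `K` are all hypothetical. It unprojects each pixel of view A with its depth map, transforms the points into view B with the relative pose, and reprojects them to get matching pixel coordinates.

```python
import numpy as np

def correspondences_from_depth(depth_a, K, R_ab, t_ab):
    """Hypothetical sketch: per-pixel correspondences from view A to view B.

    depth_a: (H, W) depth map of view A
    K:       (3, 3) camera intrinsics (assumed shared by both views)
    R_ab, t_ab: rotation (3, 3) and translation (3,) mapping points from
                A's camera frame into B's camera frame
    """
    H, W = depth_a.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                  # back-project pixels to camera rays
    pts_a = rays * depth_a[..., None]                # 3D points in A's camera frame
    pts_b = pts_a @ R_ab.T + t_ab                    # transform into B's camera frame
    proj = pts_b @ K.T                               # project with B's intrinsics
    uv_b = proj[..., :2] / np.clip(proj[..., 2:3], 1e-6, None)  # perspective divide
    valid = (pts_b[..., 2] > 0) \
        & (uv_b[..., 0] >= 0) & (uv_b[..., 0] < W) \
        & (uv_b[..., 1] >= 0) & (uv_b[..., 1] < H)
    return uv_b, valid  # target pixel coordinates in view B and an in-frustum mask
```

These correspondences would only drive the cross-view attention; the depth map itself would not be concatenated to the model input. Note the sketch ignores occlusion, which would need a separate visibility check.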
