
Can Your Current Implementation Synthesize Lip Movement Using Audio As Input? #6

Closed
MXGray opened this issue Nov 30, 2017 · 1 comment

Comments

@MXGray

MXGray commented Nov 30, 2017

Thanks for your awesome work! I just want to clarify whether your current implementation can produce lip-movement video output from audio input, i.e. audio to lip-movement video. Or does your code need to be modified to do this? Please advise. Thanks again! :)

@astorfi
Owner

astorfi commented Nov 30, 2017

@MXGray Thanks for the kind words ... Ideally, if the error were zero (which it is not!), it should be able to do it. So basically, the design is for matching purposes, not generation. Using GANs (generative adversarial networks) might do your desired task in a better way.

Best
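[Editor's note: a hypothetical sketch, not the repository's code, to illustrate the matching-vs-generation distinction the maintainer draws. The embedding vectors and `match_score` function below are invented for illustration only.]

```python
# Matching vs. generation, sketched with placeholder embeddings.
# A matching model scores whether an audio clip and a lip video
# correspond; it never produces video frames.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for outputs of an audio network and a visual (lip) network.
audio_emb = rng.normal(size=128)
video_emb = rng.normal(size=128)

def match_score(a, v):
    """Cosine similarity between audio and video embeddings (in [-1, 1])."""
    a = a / np.linalg.norm(a)
    v = v / np.linalg.norm(v)
    return float(a @ v)

score = match_score(audio_emb, video_emb)

# Generation would instead require a learned mapping from audio features
# to video frames (e.g. frames = G(audio_features) with a GAN generator G),
# which is a different model than the matching network above.
```

A perfect matcher could in principle be used to search for the best-matching lip video, but producing novel frames needs a generative model, which is why the maintainer points to GANs.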

@astorfi astorfi closed this as completed Dec 12, 2017