You provided the "cs" option but are not calling with keys in sorted order #359
Sure, a full log would be nice! My guess is it's related to the dataset setup and the use of `-j 1`, so I'll try to replicate it with corpora that I have, but first a couple of questions for you: […]
In the meantime, you can try rerunning it with […]
I am currently facing the same issue. I am running version […]. The directory structure looks as follows: […]
where ABA and ASI are speakers, and each file under them is a corresponding utterance. The data has 12 speakers in total.
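With a speaker-per-directory layout like this, one thing worth checking is whether the utterance IDs the aligner derives from it still come out in sorted order. A minimal sketch, assuming a speaker-prefixed ID scheme (this scheme is an assumption for illustration, not confirmed MFA internals):

```python
import os

def utterance_ids(corpus_root):
    """Walk speaker directories and build speaker-prefixed utterance IDs.

    Kaldi-style tables expect keys in sorted order; prefixing each
    basename with its speaker name (an assumed scheme) is one way IDs
    can be made unique across speakers.
    """
    ids = []
    for speaker in sorted(os.listdir(corpus_root)):
        speaker_dir = os.path.join(corpus_root, speaker)
        if not os.path.isdir(speaker_dir):
            continue
        for fname in sorted(os.listdir(speaker_dir)):
            if fname.endswith(".wav"):
                # e.g. ABA/utt001.wav -> "ABA_utt001"
                ids.append(speaker + "_" + os.path.splitext(fname)[0])
    if ids != sorted(ids):
        # Fail fast rather than hand unsorted keys to a Kaldi-style table
        raise ValueError("utterance IDs are not in sorted order")
    return ids
```

Note that the final check can fail even with per-directory sorting, e.g. when one speaker name is a prefix of another followed by `_`, which is the kind of naming edge case that could trigger the error in this issue.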
In the log file mentioned in the previous output, I can see the same error: […]
I have 59 speakers / 69881 utterances. The utterances are often (but not always) named 'uttXXX.wav', where 'XXX' is a number. Utterance naming is not consistent across speakers, and different speakers can have the same filename (in different folders). In addition to alphanumeric characters, filenames can contain '.' or '_'. There is one speaker whose .wav files begin with the speaker's name. Also note that I didn't have this problem when running with v2.0.0a17. Incidentally, how do you get the detailed version when running the conda mfa? Here's the log: […]
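Given that different speakers can share the same filename, a quick diagnostic is to list every .wav basename and see which ones collide across speaker folders. A small sketch under the same assumed speaker-per-directory layout (this is a standalone check, not MFA's actual validation code):

```python
import os
from collections import defaultdict

def find_colliding_basenames(corpus_root):
    """Map each .wav basename to the speakers that use it and return
    only the basenames shared by more than one speaker."""
    owners = defaultdict(set)
    for speaker in sorted(os.listdir(corpus_root)):
        speaker_dir = os.path.join(corpus_root, speaker)
        if not os.path.isdir(speaker_dir):
            continue
        for fname in os.listdir(speaker_dir):
            if fname.endswith(".wav"):
                owners[os.path.splitext(fname)[0]].add(speaker)
    return {base: sorted(spks) for base, spks in owners.items() if len(spks) > 1}
```

Running this over the corpus root shows at a glance which utterance names would need speaker prefixes (or renaming) to be globally unique.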
I uploaded version 2.0.0b8 last night; it should have a fix for this. Can you try upgrading and rerunning with the […] flag?
I tried using the conda version of MFA downloaded on 2021/11/23 (`mfa version` shows only 2.0.0). It gave me the above error when running:
mfa train speakers lexicon.txt output --clean --verbose -j 1 --output_model_path output.zip
This was encountered after "Initializing speaker-adapted triphone training...". Any idea why? I can attach the full log if that's helpful.
Thanks
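For context on the error message itself: it appears to come from Kaldi's table readers, where the 'cs' ("called sorted") option promises that lookups arrive in ascending key order, letting the reader stream forward and discard entries it has passed instead of caching the whole table. A rough Python sketch of that contract (class and names are illustrative, not Kaldi's actual code):

```python
class CalledSortedReader:
    """Streams (key, value) pairs; under the 'cs' contract, lookups must
    arrive in ascending key order so passed entries can be dropped."""

    def __init__(self, pairs):
        self._it = iter(pairs)            # pairs must be key-sorted
        self._current = next(self._it, None)
        self._last_query = None

    def get(self, key):
        if self._last_query is not None and key < self._last_query:
            raise RuntimeError(
                "You provided the 'cs' option but are not calling "
                "with keys in sorted order")
        self._last_query = key
        while self._current is not None and self._current[0] < key:
            self._current = next(self._it, None)   # discard passed entries
        if self._current is not None and self._current[0] == key:
            return self._current[1]
        raise KeyError(key)

reader = CalledSortedReader([("ABA_utt001", 1), ("ABA_utt002", 2), ("ASI_utt001", 3)])
reader.get("ABA_utt002")   # fine: keys ascending
# reader.get("ABA_utt001") # would raise: keys no longer sorted
```

This is why inconsistent utterance naming across speakers can surface as this error: if the IDs handed to such a reader are not globally sorted, the first out-of-order lookup aborts.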