forked from kaldi-asr/kaldi
Docker #7
Conversation
* [infra] docker images automatically using gh * minor change
The example for post-to-tacc fails, but with the corrected `ark:- |` there is no piping error.
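In Kaldi rspecifiers/wspecifiers, `-` stands for stdin/stdout, so `ark:- |` streams an archive straight into the next tool. A rough Python analogue of the plumbing being fixed (the stand-in commands here are illustrative, not Kaldi binaries):

```python
import subprocess

# Simulate two pipeline stages. Writing to stdout (Kaldi's "ark:-") lets the
# downstream stage read the archive from stdin; writing anywhere else leaves
# the pipe empty, which is the piping error the comment describes.
producer = subprocess.run(
    ["printf", "utt1 0.5 0.5\n"], capture_output=True, text=True
)
consumer = subprocess.run(
    ["wc", "-l"], input=producer.stdout, capture_output=True, text=True
)
print(consumer.stdout.strip())  # number of archive lines seen downstream
```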
Co-authored-by: Jonghwan Hyeon <hyeon0145@gmail.com>
Co-authored-by: npovey <you@example.com>
…ion_dependent_subword_lexicon.py (kaldi-asr#4794)
* Update run_blstm.sh: fix bug in aspire run_blstm.sh
* Update egs/aspire/s5/local/nnet3/run_blstm.sh

Co-authored-by: Cy 'kkm' Katsnelson <kkm@pobox.com>
* Remove unused variable.
* cudadecoder: Make word alignment optional.

For CTC models using word pieces or graphemes, there is not enough positional information to use the word alignment. I tried marking every unit as "singleton" in word_boundary.txt, but this explodes the state space very, very often. See: nvidia-riva/riva-asrlib-decoder#3

With the "_" character in CTC models predicting word pieces, we at the very least know which word pieces begin a word and which ones are either in the middle or at the end of a word, but the algorithm would still need to be rewritten, especially since "blank" is not a silence phoneme (it can appear between words).

I did look into using the lexicon-based word alignment. I don't have a specific complaint about it, but I did get a weird error where it couldn't create a final state at all in the output lattice, which caused Connect() to output an empty lattice. This may be because I wasn't quite sure how to handle the blank token. I treat it as its own phoneme, because of limitations in TransitionInformation, but this doesn't really make any sense.

Needless to say, while the CTM outputs of the CUDA decoder will be correct from a WER point of view, their time stamps won't be correct, but they probably never were in the first place for CTC models.
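The "_" word-boundary marker mentioned above gives exactly this much positional information: which pieces begin a word. A minimal sketch (not Kaldi or decoder code) of regrouping word pieces into words using only that marker:

```python
def pieces_to_words(pieces, boundary="_"):
    """Group CTC word pieces into words: a piece starting with the
    boundary marker begins a new word; other pieces continue the
    current word (middle or end of word)."""
    words = []
    for piece in pieces:
        if piece.startswith(boundary):
            words.append(piece[len(boundary):])
        elif words:
            words[-1] += piece
        else:  # stray continuation piece with no word started yet
            words.append(piece)
    return words

print(pieces_to_words(["_the", "_quick", "er", "_fox"]))  # ['the', 'quicker', 'fox']
```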
Fix the "glossaries_opt" variable name at line 39. It was misspelled, because of which words in the glossaries were not reserved while creating BPE.
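For context, glossaries in BPE tooling are words that must pass through segmentation unsplit; with the variable misspelled, the glossary was effectively empty and every word got segmented. A toy sketch (not the actual BPE implementation) of how the glossary check short-circuits segmentation:

```python
def apply_bpe_toy(word, glossaries=()):
    """Toy stand-in for BPE segmentation: split a word into characters
    unless it is protected by the glossary (real BPE applies learned
    merge operations instead of character splitting)."""
    if word in glossaries:
        return [word]      # reserved: emit the word unsplit
    return list(word)      # otherwise "segment" it (toy: per character)

glossaries_opt = ("Kaldi",)  # the correctly spelled variable
print(apply_bpe_toy("Kaldi", glossaries_opt))  # ['Kaldi']
print(apply_bpe_toy("model", glossaries_opt))  # ['m', 'o', 'd', 'e', 'l']
```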
This is to fix a CI error. It appears to come from using "ubuntu-latest" in the CI workflow: the runner got upgraded to Ubuntu 22.04 automatically, which does not have python2.7 by default.
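A hedged sketch of the kind of workflow pinning that avoids this: pin the runner image instead of tracking `ubuntu-latest` (the job name below is illustrative, not taken from the actual workflow file):

```yaml
jobs:
  build:
    # Pin the image: ubuntu-latest silently moved to 22.04, which drops python2.7.
    runs-on: ubuntu-20.04
```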
Fix reported issues w.r.t. python2.7 and some Apple silicon quirks
Support for both OpenFst 1.7.3 and 1.8.2