Attention matrix in textual entailment demo #1033
Feature request: Can you expose the attention matrix in the textual entailment demo?

Comments
This probably isn't super high priority for us, but it shouldn't be too difficult. Feel free to submit a PR for this; we will definitely accept it. Ask more questions if you need more pointers.
murphp15 pushed commits to murphp15/allennlp that referenced this issue on May 15, 2018.
gabrielStanovsky pushed a commit to gabrielStanovsky/allennlp that referenced this issue on Sep 7, 2018:

…ai#1219)
* Fixes allenai#1033
* Changes following PR review:
  1. The predictor is now responsible for tokenizing the hypothesis and premise.
  2. The model no longer takes the metadata parameter.
* Removed some extra blank lines
* Fix spacing issues
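For anyone wiring this into the demo or a client, here is a minimal sketch of how the exposed attention might be read back from the textual entailment predictor. The archive path and the attention key names ("h2p_attention", "p2h_attention") are assumptions for illustration, not names confirmed in this issue, and the predictor import path varies across AllenNLP versions.

```python
# Minimal sketch: load the textual entailment predictor and inspect the
# attention matrices, assuming the model's output dict exposes them.
# The archive path and attention key names below are assumptions.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "decomposable-attention.tar.gz",   # hypothetical local model archive
    predictor_name="textual-entailment",
)

result = predictor.predict_json({
    "premise": "Two women are wandering along the shore drinking iced tea.",
    "hypothesis": "Two women are sitting near some rocks.",
})

print(result["label_probs"])        # entailment / contradiction / neutral scores
print(result.get("h2p_attention"))  # hypothesis-to-premise attention, if exposed
print(result.get("p2h_attention"))  # premise-to-hypothesis attention, if exposed
```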