This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

Attention matrix in textual entailment demo #1033

Closed
luffycodes opened this issue Mar 25, 2018 · 1 comment

Comments

@luffycodes

Feature request: Can you expose the attention matrix in textual entailment demo?

@DeNeutoy
Contributor

DeNeutoy commented Mar 26, 2018

This probably isn't super high priority for us, but it shouldn't be too difficult -
you can see here and here the attention that needs to be returned from the model.
Here is an example of the React components that you need to add to the demo.

Feel free to submit a PR for this - we will definitely accept it, and feel free to ask more questions if you need more pointers.
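To make the suggestion concrete: the textual entailment demo at the time used a decomposable-attention model, whose premise-to-hypothesis attention is a row-wise softmax over token-similarity scores. A minimal sketch of computing that matrix (all names and the toy embeddings here are illustrative, not AllenNLP's actual API — in the real model this would be added to the forward pass's output dict so the demo can serialize it):

```python
import math

def softmax(row):
    # numerically stable softmax over one row of scores
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention_matrix(premise, hypothesis):
    # similarity = dot product between token vectors,
    # normalized over hypothesis tokens for each premise token
    scores = [[sum(p * h for p, h in zip(pv, hv)) for hv in hypothesis]
              for pv in premise]
    return [softmax(row) for row in scores]

# toy token "embeddings": 3 premise tokens, 2 hypothesis tokens, dim 2
premise = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
hypothesis = [[1.0, 0.0], [0.0, 1.0]]
attn = attention_matrix(premise, hypothesis)
# each row sums to 1; this is the matrix the demo would render as a heatmap
```

Returning `attn` alongside the label probabilities is the model-side half of the request; the React heatmap component mentioned above is the demo-side half.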

murphp15 pushed a commit to murphp15/allennlp that referenced this issue May 15, 2018
gabrielStanovsky pushed a commit to gabrielStanovsky/allennlp that referenced this issue Sep 7, 2018
…ai#1219)

* Fixes allenai#1033

* Changes following PR review:
1. The predictor is now responsible for tokenizing the hypothesis and premise.
2. The model no longer takes the metadata parameter.

* Removed some extra blank lines

* Fix spacing issues
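The first review change (moving tokenization from the model into the predictor) might look roughly like this; every name here is hypothetical and stands in for AllenNLP's actual predictor machinery:

```python
class EntailmentPredictor:
    """Hypothetical sketch: the predictor, not the model, tokenizes inputs."""

    def __init__(self, model, tokenize=str.split):
        self.model = model
        self.tokenize = tokenize  # whitespace split as a stand-in tokenizer

    def predict(self, premise: str, hypothesis: str) -> dict:
        # tokenization happens here, so the model only ever sees token lists
        instance = {
            "premise_tokens": self.tokenize(premise),
            "hypothesis_tokens": self.tokenize(hypothesis),
        }
        return self.model(instance)

# toy model that just echoes token counts, to show the data flow
def toy_model(instance):
    return {"premise_len": len(instance["premise_tokens"]),
            "hypothesis_len": len(instance["hypothesis_tokens"])}

pred = EntailmentPredictor(toy_model)
out = pred.predict("A dog runs .", "An animal moves .")
```

Keeping tokenization in the predictor means the model's `forward` signature stays free of raw strings and metadata, which is what the second review item removes.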