```
{"verb": "will",
 "description": "[ARGM-ADV: If you liked the music we were playing last night] , [ARG0: you] [V: will] [ARG1: absolutely love what we 're playing tomorrow] !",
 "tags": ["B-ARGM-ADV", "I-ARGM-ADV", "I-ARGM-ADV", "I-ARGM-ADV", "I-ARGM-ADV", "I-ARGM-ADV", "I-ARGM-ADV", "I-ARGM-ADV", "I-ARGM-ADV", "I-ARGM-ADV", "O", "B-ARG0", "B-V", "B-ARG1", "I-ARG1", "I-ARG1", "I-ARG1", "I-ARG1", "I-ARG1", "I-ARG1", "O"]},
"description": "If you liked the music we were playing last night , you [V: will] absolutely love what we 're playing tomorrow !",
```
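For readers unfamiliar with the fixture format: the `description` string is just a rendering of the BIO `tags` over the sentence tokens. A minimal sketch of that rendering (the function name `bio_to_bracketed` is mine, not from the codebase):

```python
def bio_to_bracketed(words, tags):
    # Group BIO tags into labeled spans and render them in the
    # "[LABEL: words]" style used in the description field above.
    out = []
    span_words, span_label = [], None

    def flush():
        nonlocal span_words, span_label
        if span_label is not None:
            out.append(f"[{span_label}: {' '.join(span_words)}]")
            span_words, span_label = [], None

    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            flush()  # close any open span, then start a new one
            span_label, span_words = tag[2:], [word]
        elif tag.startswith("I-"):
            span_words.append(word)  # continue the current span
        else:  # "O": outside any span
            flush()
            out.append(word)
    flush()
    return " ".join(out)
```

For example, `bio_to_bracketed(["you", "will", "love", "it", "!"], ["B-ARG0", "B-V", "B-ARG1", "I-ARG1", "O"])` yields `"[ARG0: you] [V: will] [ARG1: love it] !"`.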
It's pretty cool that the new model picks up this case - I hadn't thought that the previous model was incorrect, but in this sentence, "will" is a light verb and doesn't really hold any semantic meaning (I think). Progress 🎉
Nice, great work! I had some minor comments that aren't a big deal, as this is an infrequently used script. Feel free to ignore them if you want.
```
"""
serialization_directory : str, required.
The directory containing the serialized weights.
the directory containing the serialized weights.
```
Did you mean to do all of these capitalization changes?
```
def main(serialization_directory, device):
def main(serialization_directory: int,
         device: int,
```
Maybe name this `cuda_device`, to be consistent?
```
"""
torch.set_grad_enabled(False)
```
Isn't there some `with` block you're supposed to replace this with?
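(The `with` block the reviewer presumably has in mind is `torch.no_grad()`, which disables autograd tracking for just the enclosed code instead of flipping a global switch. A minimal sketch:)

```python
import torch

x = torch.ones(2, requires_grad=True)

# Inside torch.no_grad(), operations are not tracked by autograd,
# so the results don't require grad and no graph is built.
with torch.no_grad():
    y = x * 2
```

Unlike `torch.set_grad_enabled(False)`, the context manager restores the previous grad mode on exit, so it can't accidentally leak the disabled state into later code.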
```
if domain is not None:
    config["dataset_reader"]["domain_identifier"] = domain
    prefix = f"{domain}_{prefix}"
```
This isn't noted in the docs.
```
model = archive.model
model.eval()

prediction_file_path = os.path.join(serialization_directory, prefix + "_predictions.txt")
```
Suggestion: if prefix is empty, or already ends with an underscore, this behaves a little oddly. You could make that nicer.
The second argument is a string concatenation, rather than multiple arguments to `os.path.join`, so I don't think it does? It would just be `/_predictions.txt`.
Yes, and `/_predictions.txt` is what I meant by "behaves a little oddly" - the beginning underscore is unnecessary.
I was doing the review from a tablet yesterday, so I didn't want to type out the whole thing, but I was thinking something like:

```python
if prefix and prefix[-1] != '_':
    prefix += '_'
```

Then you don't need to add an underscore at the beginning of the filename.

Again, this really isn't a big deal; it's up to you.
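The suggestion above can be packaged as a small helper; this is a sketch of the reviewer's idea, and the name `prediction_path` is mine, not from the codebase:

```python
import os


def prediction_path(serialization_directory, prefix):
    # Only add a separating underscore when prefix is non-empty
    # and doesn't already end with one, so an empty prefix yields
    # "predictions.txt" rather than "_predictions.txt".
    if prefix and not prefix.endswith("_"):
        prefix += "_"
    return os.path.join(serialization_directory, prefix + "predictions.txt")
```

With this, `prediction_path("out", "")`, `prediction_path("out", "dev")`, and `prediction_path("out", "dev_")` give `out/predictions.txt`, `out/dev_predictions.txt`, and `out/dev_predictions.txt` respectively, avoiding both the stray leading underscore and a doubled one.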
New SRL model using ELMo - this moves our F1 from 78.9 (boo) to 84.97 (84.93 on test). I used a bit of a hacky repackaging method here so that we can distribute it - it basically involves loading in the GPU-specific model, then copying the config and weights into a new model which doesn't have the GPU-only LSTM implementation. Full results below for posterity:

```
            corr.  excess  missed   prec.    rec.      F1
------------------------------------------------------------
Overall     70191   12778   12062   84.60   85.34   84.97
----------
ARG0        16309    1586    1327   91.14   92.48   91.80
ARG1        25106    3566    3123   87.56   88.94   88.24
ARG2         7735    1642    1795   82.49   81.16   81.82
ARG3          402     186     274   68.37   59.47   63.61
ARG4          446     121     137   78.66   76.50   77.57
ARG5            7       4       9   63.64   43.75   51.85
ARGA            0       0       4    0.00    0.00    0.00
ARGM-ADJ      162     131      92   55.29   63.78   59.23
ARGM-ADV     2085    1135    1083   64.75   65.81   65.28
ARGM-CAU      401     145     151   73.44   72.64   73.04
ARGM-COM       19      27      15   41.30   55.88   47.50
ARGM-DIR      316     217     233   59.29   57.56   58.41
ARGM-DIS     2445     580     555   80.83   81.50   81.16
ARGM-DSP        0       0       2    0.00    0.00    0.00
ARGM-EXT      126     120     147   51.22   46.15   48.55
ARGM-GOL       16      49      81   24.62   16.49   19.75
ARGM-LOC     1572     718     605   68.65   72.21   70.38
ARGM-LVB       47      10       8   82.46   85.45   83.93
ARGM-MNR     1389     608     604   69.55   69.69   69.62
ARGM-MOD     2776      72      48   97.47   98.30   97.88
ARGM-NEG     1501      83      60   94.76   96.16   95.45
ARGM-PNC       23      79      90   22.55   20.35   21.40
ARGM-PRD       72     176     344   29.03   17.31   21.69
ARGM-PRP      336     244     192   57.93   63.64   60.65
ARGM-REC        4       2       9   66.67   30.77   42.11
ARGM-TMP     5186    1011     814   83.69   86.43   85.04
R-ARG0        917      80      85   91.98   91.52   91.75
R-ARG1        632      99      82   86.46   88.52   87.47
R-ARG2         34      16      23   68.00   59.65   63.55
R-ARG3          0       0       7    0.00    0.00    0.00
R-ARG4          1       0       1  100.00   50.00   66.67
R-ARGM-ADV      0       1       3    0.00    0.00    0.00
R-ARGM-CAU      1       3       1   25.00   50.00   33.33
R-ARGM-COM      0       1       0    0.00    0.00    0.00
R-ARGM-DIR      0       0       1    0.00    0.00    0.00
R-ARGM-EXT      0       0       2    0.00    0.00    0.00
R-ARGM-LOC     68      42      21   61.82   76.40   68.34
R-ARGM-MNR      9       3       5   75.00   64.29   69.23
R-ARGM-PRP      0       0       1    0.00    0.00    0.00
R-ARGM-TMP     48      21      28   69.57   63.16   66.21
------------------------------------------------------------
V           34783       0       0  100.00  100.00  100.00
------------------------------------------------------------
```