I am not getting the same sensitivity values when training for 15 more epochs; the sensitivity is not retained even after the 1st epoch.
I used the COVIDNet-CXR-Large model with the dataset files train_COVIDx2.txt and test_COVIDx2.txt.
The paper mentions a learning rate policy that reduces the learning rate when learning stagnates for a period of time, with the factor and patience values given as 0.7 and 5 respectively. However, I did not come across any line in the code that implements this.
I have also tried training the model on the same dataset for another 30 epochs with different learning rates (2e-07 and 2e-08). The sensitivity kept dropping.
Am I missing something?
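For reference, the policy described in the paper sounds like a standard reduce-on-plateau schedule. The repo's training code is TF1, but the policy itself is framework-agnostic; below is a minimal pure-Python sketch with the paper's factor (0.7) and patience (5). The class name and the min_delta threshold are my own choices, not from the repo.

```python
class ReduceLROnPlateau:
    """Multiply the learning rate by `factor` whenever the monitored
    metric has not improved for `patience` consecutive epochs."""

    def __init__(self, lr, factor=0.7, patience=5, min_delta=1e-4):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")  # assumes a loss-like metric (lower is better)
        self.wait = 0

    def step(self, metric):
        """Call once per epoch with the validation metric; returns the current LR."""
        if metric < self.best - self.min_delta:
            # Metric improved: record it and reset the stagnation counter.
            self.best = metric
            self.wait = 0
        else:
            # No improvement this epoch; decay LR after `patience` stale epochs.
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr
```

With factor=0.7 and patience=5, five consecutive epochs without improvement drop the LR from 2e-5 to 1.4e-5; a plain `tf.train.AdamOptimizer` with a fixed LR, as in the released training script, never does this, which may explain the mismatch with the paper.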
ShraddhaGS changed the title from "Not able to obtain the same sensitivity values when trained on COVIDNet-CXR-Large model" to "Not able to obtain the same sensitivity values (96.8) when trained on COVIDNet-CXR-Large model" on Apr 30, 2020.
Hello, I want to ask how you run COVIDNet-CXR-Large: do you use the .pb file or the checkpoint? When I use the checkpoint, the following error is reported.
tensorflow.python.framework.errors_impl.OutOfRangeError: 2 root error(s) found.
(0) Out of range: Read less bytes than requested
[[node save/RestoreV2 (defined at /home/shanjiang/workspace/chaiqifei/anaconda3/envs/covid/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
[[save/RestoreV2/_301]]
(1) Out of range: Read less bytes than requested
Has this ever happened to you?
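For what it's worth, "Read less bytes than requested" from RestoreV2 usually means the checkpoint files on disk are truncated or corrupted (e.g. an incomplete download). A quick sanity check with the standard library; the suffixes below are the usual TF1 checkpoint triplet, so adjust them to whatever files actually ship with COVIDNet-CXR-Large:

```python
import os

def check_checkpoint(prefix):
    """Return a {path: size_in_bytes} map for the checkpoint files,
    with None for any file that is missing entirely."""
    suffixes = [".meta", ".index", ".data-00000-of-00001"]
    sizes = {}
    for s in suffixes:
        path = prefix + s
        sizes[path] = os.path.getsize(path) if os.path.exists(path) else None
    return sizes
```

If any of the three files is missing or much smaller than the published download, re-fetching the checkpoint is the first thing to try before debugging the restore call itself.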