Confusion Matrix and Results Should Be Updated Or Clarified #1
Hi Joseph, I just pushed a clarification to the results section of the README.md. Still working on the new dataset.
Thanks! I appreciate the quick response. Sorry if it's a bit of a nitpick, but given that at least a couple of news media articles link directly to this repo now, I figure it makes sense to avoid possible misunderstandings.
Reopening this so that it will be displayed more prominently and avoid duplicate issues.
We just released a new dataset and model, along with the results. Please confirm that the results are consistent with what you get from the model. Thanks!
So I'm getting the following bug running the small train, small eval, and large train checkpoints:
The large eval checkpoint works, though, and gets: [[94. 6. 0.], which is consistent with the results in the readme.
Hi josephius, I believe that error just means you have loaded the same model twice; try restarting the notebook and only loading one model at a time. Alternatively, if you're using TensorFlow and loading graphs, you can call tf.reset_default_graph() and declare a new tf.Graph() within each Session.
Cannot feed value of shape (1,) for Tensor 'input_1:0', which has shape '(?, 224, 224, 3)'
@AasiaRehman, you are not feeding the correct data item to the image input tensor. Perhaps you are feeding the label by accident.
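To illustrate the shape mismatch above, here is a minimal sketch (the variable names and the random image are hypothetical, not from the repo): the input tensor 'input_1:0' expects a batch of images of shape (?, 224, 224, 3), so feeding a label array of shape (1,) triggers exactly that error, and the fix is to feed the preprocessed image with a batch dimension added.

```python
import numpy as np

# Hypothetical data items: a label array and one preprocessed image.
label = np.array([2])                  # wrong item to feed: shape (1,)
image = np.random.rand(224, 224, 3)    # a single 224x224 RGB image

# Feeding `label` reproduces the reported error, since its shape (1,)
# cannot be broadcast to the input tensor's shape (?, 224, 224, 3).
# The correct feed is the image with a leading batch dimension:
batch = np.expand_dims(image, axis=0)  # shape becomes (1, 224, 224, 3)

print(label.shape, batch.shape)
```

With `batch` passed to the feed dict instead of `label`, the shapes line up with what the input tensor declares.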
@jamesrenhoulee The results appear to be consistent with the confusion matrices in the readme for all four checkpoints: COVID-Net Small (Train), Small (Eval), Large (Train), Large (Eval).
It looks like additional training and test examples were added, but the Confusion Matrix and Results have not been updated to reflect this. I recommend either updating the results or, if the results are not available yet (possibly still training the new model?), adding a quick note so there isn't confusion about the Confusion Matrix, which still shows only 8 ground-truth COVID-19 samples. Since there are two false positives in the Confusion Matrix, a reader could assume the results were miscalculated with false negatives counted as false positives, which would reverse the precision and recall.
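The FP/FN swap described above can be sketched numerically; the counts here are hypothetical (chosen only to show the effect, not taken from the repo's results). Because precision = TP/(TP+FP) and recall = TP/(TP+FN), exchanging the false-positive and false-negative counts exchanges the two metrics:

```python
def precision_recall(tp, fp, fn):
    """Per-class precision and recall from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical COVID-19 class counts: 6 true positives,
# 1 false positive, 2 false negatives.
p1, r1 = precision_recall(tp=6, fp=1, fn=2)

# Same counts with FP and FN accidentally swapped:
p2, r2 = precision_recall(tp=6, fp=2, fn=1)

# The swap reverses the two metrics.
assert (p1, r1) == (r2, p2)
```

So a mislabelled confusion matrix doesn't just shift the numbers slightly; it reports recall where precision should be and vice versa.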