
[DOC] Change criteo example notebook performance metric(s) and validate model #225

Open
rdipietro opened this issue Aug 18, 2020 · 0 comments

rdipietro (Contributor) commented:

Right now, examples/criteo-example.ipynb doesn't have a meaningful measure of performance:

  • the performance metric is accuracy, which isn't meaningful because of the extreme class imbalance (about 97 negatives for every 3 positives)

In addition, the current model may be no better than a trivial majority-class classifier (e.g., def model(x): return 0):

  • Performance outputs are currently not saved as part of the notebook
  • If we run the notebook using day 0 for training and day 1 for testing, we achieve 96.5% accuracy
  • The accuracy of def model(x): return 0 is the fraction of examples with label 0, which is about 97% (see the baseline sketch below).
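
For reference, a minimal sketch of that baseline calculation, assuming the test-day labels are available as a pandas Series with values in {0, 1} (the names below are illustrative, not taken from the notebook):

```python
import pandas as pd

def majority_baseline_accuracy(labels: pd.Series) -> float:
    # Predicting 0 for every example is correct exactly when the true label is 0,
    # so the accuracy of the constant-zero "model" equals the fraction of negatives.
    return float((labels == 0).mean())
```

On the Criteo data this fraction is roughly 0.97, which is why the 96.5% accuracy above is not evidence that the trained model is doing anything useful.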

So it would be beneficial to:

  • Include a more meaningful metric, e.g. AUC as done in the Joy of Cooking DLRM example
    • Implementation note: AUC can't be computed meaningfully on a per-batch basis; predictions and labels need to be accumulated over the full validation set before scoring. In the context of fast.ai, see the AUROC class (a sketch of this accumulation approach follows this list)
  • Validate the model using the full dataset once this metric is included
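
As a rough illustration of the accumulation approach (not the notebook's or fast.ai's actual implementation; the model, dataloader, and device names are placeholders), AUC could be computed by collecting predictions and labels across batches and scoring once at the end:

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

def validation_auc(model, valid_loader, device="cuda"):
    """Accumulate scores over the whole validation set, then compute ROC AUC once."""
    model.eval()
    all_scores, all_labels = [], []
    with torch.no_grad():
        for features, labels in valid_loader:
            logits = model(features.to(device))
            # AUC depends on the global ranking of scores, so per-batch values
            # don't average correctly; collect everything first.
            all_scores.append(torch.sigmoid(logits).cpu().numpy().ravel())
            all_labels.append(labels.numpy().ravel())
    return roc_auc_score(np.concatenate(all_labels), np.concatenate(all_scores))
```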