
Question about global explanation and local explanation #476

Open
JWKKWJ123 opened this issue Sep 16, 2023 · 8 comments

Comments

@JWKKWJ123

Hi all,
The show(ebm_global) and show(ebm_local) functions display the plots of feature importance and local (subject-wise) predictions very well, and I like the plots very much. But I still need to output the global feature importances and local predictions to draw plots that suit my needs. I would like to ask: are there functions to output the feature importances and local predictions?
By the way, I am confused by the local explanation. Is the number in the red box the output of each feature? Does the contribution to the prediction in the blue box range between 0 and 1? Is the contribution to the prediction related to both the global feature importance and the local prediction?

[Image: local_explanation]

@paulbkoch
Collaborator

Hi @JWKKWJ123 --

Global feature importances can be obtained via the term_importances function (terms include both individual features and feature pairs):
https://interpret.ml/docs/ExplainableBoostingClassifier.html#interpret.glassbox.ExplainableBoostingClassifier.term_importances

Local per-feature score contributions can be obtained with the predict_and_contrib function:
https://interpret.ml/docs/ExplainableBoostingClassifier.html#interpret.glassbox.ExplainableBoostingClassifier.predict_and_contrib
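
Putting the two together, a minimal sketch (synthetic data via scikit-learn stands in for real features, and the exact return shape of predict_and_contrib is an assumption based on the linked docs):

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in data so the sketch runs end to end; use your own X/y.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global importances: one value per term (individual features and pairs).
for name, importance in zip(ebm.term_names_, ebm.term_importances()):
    print(name, importance)

# Local per-term score contributions for each sample passed in.
predictions, contributions = ebm.predict_and_contrib(X[:5])
print(contributions.shape)  # expected (n_samples, n_terms)
```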

The number in the red box is the value that you assigned to the feature for this sample and then passed in via the X parameter to explain_local. I suspect, given that all the numbers in your example are between 0 and 1, that you're scaling them.

The contribution in the blue box does not range between 0 and 1. For classification the score contributions are in logits, so having a +1 contribution from a single feature would be fairly significant, and it appears, at least for this model and this particular sample, that no feature has that level of contribution.
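
As a small worked example of what those logit contributions mean (the numbers below are purely hypothetical, not taken from the screenshot):

```python
import numpy as np

# Hypothetical intercept and per-term contributions (in logits) for one sample.
intercept = -1.2
contributions = np.array([0.35, -0.10, 0.80, 0.05])

score = intercept + contributions.sum()      # total log-odds = -0.10
probability = 1.0 / (1.0 + np.exp(-score))   # sigmoid -> about 0.48
print(score, probability)
```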

@JWKKWJ123
Author

Hi Paul,
Thank you so much! I hadn't noticed such a comprehensive tutorial for this package before.
I indeed applied a sigmoid activation to the features (range [0, 1]).
We applied the EBM to the diagnosis of dementia based on brain MRI and put it on arXiv: https://arxiv.org/abs/2308.07778

@JWKKWJ123
Author

Hi Paul,
By reading the tutorial I found that I can use the interpret.preserve() function to save the global explanation into an HTML file. However, I am wondering whether I can save the local explanation of each subject into an HTML/PNG/JPG file? As far as I know, the interpret.preserve() function can't do it.

@Harsha-Nori
Collaborator

Hey @JWKKWJ123, I left some instructions on doing custom image exports here: #161 (comment)

You can use any of the supported plotly image export formats via the kaleido library (which I think includes PNG, HTML, PDF, SVG, etc.)
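
A minimal sketch of that export path, assuming the visualize() method of an EBM local explanation returns a plotly figure as in the linked comment (synthetic data stands in for real features):

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in data; swap in your own X/y.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
ebm = ExplainableBoostingClassifier().fit(X, y)

local_expl = ebm.explain_local(X[:3], y[:3])

fig = local_expl.visualize(0)   # plotly Figure for the first sample
fig.write_html("local_0.html")  # HTML export does not need kaleido
fig.write_image("local_0.png")  # PNG/PDF/SVG export goes through kaleido
```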

@JWKKWJ123
Author

JWKKWJ123 commented Oct 2, 2023

Hi Harsha,
Thank you very much! Now I can output the global and local explanations in HTML/figure format.
I found that I can do it in my local environment, but I can't do it in Google Colab; it seems that kaleido is incompatible with Google Colab (I can't figure out the reason).

@sarmedwahab

Does anyone know how to set the label size in the EBM explanation plots? I am using them for research, but the plot labels are very small, making them unreadable in a document at 100% resolution.

@JWKKWJ123
Author

> Does anyone know how to set the label size in the EBM explanation plots? I am using them for research, but the plot labels are very small, making them unreadable in a document at 100% resolution.

Actually I have the same question; the features in my experiment have long names. Now I use the ebm.term_importances() function (after training the EBM) to output the feature importances and use the seaborn package to draw the plots myself, as sketched below.
I also want to ask how to set the font and size of the labels in the EBM explanation plots.
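
A rough sketch of that workaround (synthetic data stands in for the real features, and the axis label text is my own wording):

```python
import matplotlib.pyplot as plt
import seaborn as sns
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in data; use your own features and labels here.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
ebm = ExplainableBoostingClassifier().fit(X, y)

names = ebm.term_names_
importances = ebm.term_importances()

plt.figure(figsize=(6, 0.5 * len(names)))
ax = sns.barplot(x=importances, y=names, orient="h")
ax.set_xlabel("Mean absolute score contribution", fontsize=12)
ax.tick_params(labelsize=10)  # label sizes are fully under your control here
plt.tight_layout()
plt.savefig("term_importances.png", dpi=300)
```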

@sarmedwahab

I have actually reached out to some researchers in my field who have used the EBM plots and had good-resolution plots in their articles; they referred me to the plotly and matplotlib API docs.
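
One hedged sketch of the plotly route: the figures returned by visualize() are plotly objects, so layout fonts can be enlarged and the export resolution scaled before saving (the specific sizes below are arbitrary choices):

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in data; use your own features and labels here.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
ebm = ExplainableBoostingClassifier().fit(X, y)

fig = ebm.explain_global().visualize(0)  # plotly Figure for the first term
fig.update_layout(font=dict(size=18))    # enlarge all text, including labels
fig.write_image("term_0.png", scale=3)   # scale also increases pixel resolution
```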
