Hi,
I'm trying to figure out how the split gain is calculated here, since it is a key measure.
I noticed that in issue #1230 a supporter wrote: "The split gain and leaf output is calculated by sum_grad / sum_hess."
I want to know why. It seems the split gain should depend on how we measure impurity (Gini, entropy, etc.).
In the entropy case, I recall the split gain should be H(Y) - H(Y|X), so how does that relate to sum_grad / sum_hess?
Also, should the calculation differ between classification and regression? It seems regression and classification should have different ways to compute the gain; but if the formula really is the same, then perhaps the same logic is used to measure impurity in both cases.
Any material on this is welcome.
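For context, in gradient boosting the split gain is not an impurity measure like Gini or entropy: it comes from a second-order Taylor expansion of the loss, so it is expressed in sums of gradients (G) and hessians (H) over the instances in a node. A minimal sketch of that formula (following the XGBoost paper; `lam` and `gamma` are the usual L2 and complexity regularizers, and the function names here are my own illustration, not the library's internals):

```python
# Hedged sketch of the second-order split gain used in gradient boosting.
# leaf value:  -G / (H + lambda)
# leaf score:   G^2 / (H + lambda)
# gain = 1/2 * [score(L) + score(R) - score(L+R)] - gamma

def leaf_value(sum_grad, sum_hess, lam=1.0):
    # Optimal leaf output minimizing the second-order loss approximation.
    return -sum_grad / (sum_hess + lam)

def leaf_score(sum_grad, sum_hess, lam=1.0):
    # Loss reduction contributed by a single leaf.
    return sum_grad ** 2 / (sum_hess + lam)

def split_gain(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    # Improvement from splitting a node into left/right children.
    return 0.5 * (leaf_score(g_left, h_left, lam)
                  + leaf_score(g_right, h_right, lam)
                  - leaf_score(g_left + g_right, h_left + h_right, lam)) - gamma
```

The same formula covers regression and classification: only the loss function changes, which changes how each instance's gradient and hessian are computed, not how the gain is assembled from them.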
xgboost math explanation
I think this article explains the details very well.
The only big part it doesn't cover is shrinkage, i.e. the learning rate.
I think the discussion here may also provide some insight: Discussion regarding learning rate
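On the shrinkage point: the learning rate does not change the split-gain formula at all; it only scales each new tree's leaf outputs before they are added to the running prediction. A hedged sketch of that update (the function and parameter names are illustrative, not from the library):

```python
# Hedged sketch: how shrinkage (learning rate eta) enters the boosting update.
# Each tree's contribution for an instance is the output of the leaf it
# falls into, scaled by eta before being accumulated.

def boosted_prediction(base_score, tree_leaf_outputs, eta=0.1):
    pred = base_score
    for leaf_output in tree_leaf_outputs:
        pred += eta * leaf_output  # shrink each tree's step
    return pred
```

Smaller `eta` makes each tree contribute less, which usually requires more boosting rounds but tends to generalize better.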