
[Feature request] Arbitrary base learner #3180

Closed
zachmayer opened this issue Jun 22, 2020 · 3 comments

Comments

@zachmayer

Summary

It's pretty cool that I can define my own loss function and gradient for LightGBM, and then use the linear, tree, or dart base learners to optimize my loss function.
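
For context, a custom objective for LightGBM's sklearn interface is just a callable returning the gradient and Hessian of the loss with respect to the predictions. Here's a minimal sketch (the function name and `delta` parameter are my own, not part of LightGBM) using a pseudo-Huber loss:

```python
import numpy as np

def pseudo_huber_objective(y_true, y_pred, delta=1.0):
    """Custom objective in the (grad, hess) form LightGBM's sklearn
    interface expects from a callable passed as `objective=`.

    Pseudo-Huber loss: L = delta^2 * (sqrt(1 + (r/delta)^2) - 1),
    where r = y_pred - y_true.
    """
    r = y_pred - y_true
    scale = 1.0 + (r / delta) ** 2
    grad = r / np.sqrt(scale)       # dL/dr
    hess = 1.0 / scale ** 1.5       # d2L/dr2
    return grad, hess
```

You'd then pass it as, e.g., `lgb.LGBMRegressor(objective=pseudo_huber_objective)` (keeping the default `delta`).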

It'd be really cool if I could specify my own base learner, perhaps in the form of an sklearn class with a fit method, a predict method, and support for sample weights.

Being able to use the LightGBM algorithm to fit a wider range of base learners would really open up a whole new world of possibilities.
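
To make the proposed interface concrete, here's a toy example of what such a base learner could look like: a weighted linear model with `fit(X, y, sample_weight)` and `predict(X)`. This is a hypothetical sketch of the requested API, not anything LightGBM currently accepts:

```python
import numpy as np

class WeightedLinearLearner:
    """Hypothetical base learner matching the requested interface:
    fit with sample weights, then predict."""

    def fit(self, X, y, sample_weight=None):
        X = np.column_stack([np.ones(len(X)), X])  # add intercept column
        if sample_weight is None:
            sample_weight = np.ones(len(X))
        # Weighted least squares via the sqrt-weight trick.
        w = np.sqrt(sample_weight)[:, None]
        self.coef_, *_ = np.linalg.lstsq(X * w, y * w.ravel(), rcond=None)
        return self

    def predict(self, X):
        X = np.column_stack([np.ones(len(X)), X])
        return X @ self.coef_
```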

Motivation

Custom objectives / custom loss functions are really useful. But I want to take it one step further, and also customize the base learner used by LightGBM.

Description

XGBoost supports tree-based base learners as well as linear base learners. As far as I can tell, LightGBM only supports tree-based base learners.

It'd be really cool to be able to use linear base learners with LightGBM.

It would be even cooler if I could specify my own base learners, and use LightGBM as a platform for doing my own research into different forms of boosting.
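
The research angle is that gradient boosting itself doesn't care what the weak learner is. Here's a toy boosting loop on squared error that accepts any base-learner factory with the fit/predict interface above; all names are mine, and this is a sketch of the idea, not LightGBM's implementation:

```python
import numpy as np

def boost(X, y, make_learner, n_rounds=50, lr=0.1):
    """Gradient boosting on squared error with a pluggable base learner.

    Each round fits a fresh learner to the current residuals
    (the negative gradient of 0.5 * (y - pred)^2) and adds a
    shrunken copy of its predictions to the ensemble.
    """
    pred = np.zeros_like(y, dtype=float)
    learners = []
    for _ in range(n_rounds):
        residual = y - pred
        m = make_learner().fit(X, residual)
        pred += lr * m.predict(X)
        learners.append(m)
    return learners, pred
```

Swapping `make_learner` between trees, linear models, or neural nets is exactly the kind of experiment this feature request is about.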

References

@StrikerRUS
Collaborator

Closed in favor of #2302. We decided to keep all feature requests in one place.

Contributions of this feature are welcome! Please re-open this issue (or post a comment if you are not the topic starter) if you are actively working on implementing it.

@zachmayer
Author

Just a follow-up on this:

  • ngboost supports arbitrary base learners, which solves the problem for me for now.
  • There's an interesting new package called GrowNet, which offers some evidence that boosting different weak learners (specifically neural networks) is useful. (There's a paper too.)

@github-actions

This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 23, 2023