How to train own dataset for regression? #26
Hello, how do I train my own dataset for a regression task?

I created the dataset this way to check regression, but when I set the model to train:

```python
model.train(dataset, opt="LBFGS", steps=20, lamb=0.01, lamb_entropy=10.)
```

it gave me an error:

```
File /opt/conda/lib/python3.10/site-packages/kan/LBFGS.py:319, in LBFGS.step(self, closure)
    316 state.setdefault('n_iter', 0)
    318 # evaluate initial f(x) and df/dx
--> 319 orig_loss = closure()
    320 loss = float(orig_loss)
    321 current_evals = 1

File /opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File /opt/conda/lib/python3.10/site-packages/kan/KAN.py:897, in KAN.train.<locals>.closure()
    895     train_loss = loss_fn(pred[id_], dataset['train_label'][train_id][id_].to(device))
    896 else:
--> 897     train_loss = loss_fn(pred, dataset['train_label'][train_id].to(device))
    898 reg_ = reg(self.acts_scale)
    899 objective = train_loss + lamb*reg_

IndexError: index 2941 is out of bounds for dimension 0 with size 2000
```

Comments

Hi, could you please check the shape of your inputs and labels? In particular, dataset['train_label'] should have the same number of rows as dataset['train_input'].

@SuleymanSuleymanzade, it appears that you may have a data slicing issue when creating your dataset. Can you post the shapes of each of your dataset components, like so? (See the sketch after these comments.)

I haven't seen this error yet, but the fact that your training and test data appear to contain the same data (rather than a proper split) also suggests the dataset was not constructed correctly.
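(The snippet that originally followed "like so" is not preserved in this scrape; below is a minimal sketch of the kind of check being requested, assuming the dataset dictionary uses pykan's standard keys.)

```python
# Inspect every component of the dataset dictionary. For pykan,
# train_input and train_label must have the same number of rows,
# and likewise test_input and test_label.
for key in ['train_input', 'train_label', 'test_input', 'test_label']:
    print(key, dataset[key].shape)
```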
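For reference, here is a minimal, self-contained sketch of training a KAN on a hand-built regression dataset with consistent shapes, using the same `model.train` call as in this issue. The feature count, target function, and network width are illustrative assumptions, not values taken from this issue.

```python
import torch
from kan import KAN

# Build a custom regression dataset by hand. The key invariant:
# train_input and train_label must have the same number of rows
# (and likewise for the test pair); otherwise KAN.train's internal
# index sampling raises an IndexError like the one above.
n_train, n_test, n_features = 1000, 200, 2
X_train = torch.rand(n_train, n_features)
y_train = X_train[:, [0]] ** 2 + torch.sin(X_train[:, [1]])  # shape (n_train, 1)
X_test = torch.rand(n_test, n_features)
y_test = X_test[:, [0]] ** 2 + torch.sin(X_test[:, [1]])     # shape (n_test, 1)

dataset = {
    'train_input': X_train,
    'train_label': y_train,
    'test_input': X_test,
    'test_label': y_test,
}

model = KAN(width=[n_features, 5, 1], grid=5, k=3)
model.train(dataset, opt="LBFGS", steps=20, lamb=0.01, lamb_entropy=10.)
```

If the IndexError persists, print the shapes as in the earlier snippet; a row-count mismatch between train_input and train_label reproduces exactly this failure.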