Fixed a small bug rarely causing type mismatch #24
When you use the `derivative_extraction` function, you will likely concatenate its output with your original features, and the features' dtype may have been chosen carefully with memory usage in mind.
For example, in my project I have a large dataset (which is normal for this library's use case) stored as float32, since that precision is sufficient and it halves the memory footprint compared to float64. But `derivative_extraction` computed its values in float64, so when I concatenated the derivatives with the original features, all the values were promoted to float64 and the system's memory usage doubled, which wasn't immediately apparent.
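A minimal NumPy sketch of the promotion behavior described above (the shapes and the `np.gradient` stand-in for `derivative_extraction` are illustrative assumptions, not the library's actual API):

```python
import numpy as np

# Memory-conscious features, as in the description above.
features = np.random.rand(1000, 40).astype(np.float32)

# Stand-in for a derivative computation; dtype is forced to float64 here
# to mimic the behavior this PR fixes.
derivatives = np.gradient(features, axis=0).astype(np.float64)

# NumPy's type promotion upgrades the whole result to float64,
# doubling the memory footprint of the combined array.
combined = np.concatenate((features, derivatives), axis=1)
print(combined.dtype)  # float64

# Matching the derivatives' dtype to the input features avoids this.
derivatives = derivatives.astype(features.dtype)
combined = np.concatenate((features, derivatives), axis=1)
print(combined.dtype)  # float32
```

The promotion happens silently inside `np.concatenate`, which is why the doubled memory usage is easy to miss.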
This is not critical and can be worked around by converting the dtype outside the library call, but why bother with that inconvenience when there is no need to change the derivatives' dtype away from the incoming features' dtype in the first place.