Add opencl headers #716
Conversation
@syclik does the team agree with breaking up the PR into smaller ones like I posted above? If we do that, should I
- Wait till each smaller PR is approved and merged before working on the next one, or
- Create all the PRs now and chain them together, rebasing after each sub-PR is fixed up and approved?
I think the biggest piece of review for this PR is how the users are going to call the GPU code and making sure the little makefile bits are sensible.
Smaller PRs are definitely better.
If it were me, I’d just work on the next branch off the PR, assuming it gets in with changes that can be merged. But you can do either of your two options.
I’m gonna be tied up until after StanCon. Will get back to you shortly after.
I'm not @syclik, but agree that smaller pull requests are much easier to review.
You can base each branch on the previous one, or you can just wait and do them in line---your call.
In my mind, the big thing is the build and config to get the GPU code turned on. That's mainly going to be done in the interfaces, though, so it's not such a big deal for the math lib.
Now you have to go to the job page and click build - there’s a pipelines job tutorial on Discourse too if that helps.
…On Fri, Jan 12, 2018 at 19:28 Steve Bronder ***@***.***> wrote:
Jenkins, retest please
Thanks Sean, I'll check it out.
My failure on Jenkins is
[Distribution tests] g++-4.9 -Wall -I . -isystem lib/eigen_3.3.3 -isystem lib/boost_1.65.1 -isystem lib/cvodes_2.9.0/include -isystem lib/opencl_1.2.8 -std=c++1y -DBOOST_RESULT_OF_USE_TR1 -DBOOST_NO_DECLTYPE -DBOOST_DISABLE_ASSERTS -DBOOST_PHOENIX_NO_VARIADIC_EXPRESSION -Wno-unused-function -Wno-uninitialized -DSTAN_GPU -DSTAN_DEVICE_CPU -DGTEST_USE_OWN_TR1_TUPLE -DGTEST_HAS_PTHREAD=0 -isystem lib/gtest_1.7.0/include -isystem lib/gtest_1.7.0 -O0 -DNO_FPRINTF_OUTPUT -pipe -lpthread -o test/prob/neg_binomial/neg_binomial_cdf_log_00000_generated_fd_test test/prob/neg_binomial/neg_binomial_cdf_log_00000_generated_fd_test.o lib/gtest_1.7.0/src/gtest_main.cc lib/gtest_1.7.0/src/gtest-all.o -lOpenCL
[Distribution tests] /usr/bin/ld: cannot find -lOpenCL
[Distribution tests] collect2: error: ld returned 1 exit status
Will my logic here (https://github.com/rok-cesnovar/math/blob/3629f8cb16f26ac50ddc81bf6e40eef00ec130cc/make/default_compiler_options#L24) in default_compiler_options not grab macOS like I think it will? I believe we need -framework OpenCL in order to avoid the above failure (assuming Jenkins is a Mac).
A wild guess... maybe it's the space before 'Darwin'. Make is space-sensitive.
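Both guesses above, Make's whitespace sensitivity and the Mac-specific link flag, can be sketched in a hypothetical makefile fragment (the variable names are illustrative, not the actual default_compiler_options contents):

```make
# OS detection for the OpenCL link flag. GNU Make string comparison is
# exact, so a stray space around 'Darwin' (e.g. a trailing space left in
# the variable's value) makes the ifeq branch silently miss.
OS_TYPE ?= $(shell uname -s)
ifeq ($(OS_TYPE),Darwin)
  LDLIBS_OPENCL = -framework OpenCL   # macOS ships OpenCL as a framework
else
  LDLIBS_OPENCL = -lOpenCL            # Linux links against libOpenCL.so
endif
```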
I can take a closer look at this tomorrow but off the top of my head - there's the
Yes @seantalts I pulled my current code from detect_os, but I'll change it to look at OS_TYPE. Also:
1. Should I restart travis when apt-get or other base-level things fail (such as the current job)?
2. This is no longer running against Jenkins? Did I do something wrong here or is Jenkins not currently running?
Yeah, you can restart individual jobs when Travis fails to connect to the internet or something. At some point we might try to figure out if having Travis at all is worth the flakiness :p
Jenkins should be testing things - what are your expectations that aren’t happening exactly? Is it not running new tests on git push?
I’m in a car, but if you haven’t yet, check out the new Jenkins jobs tutorial on Discourse. We’re now using Math Pipeline.
Jenkins is currently not running on git push (or at least it is not showing up in the checks on the GitHub PR). I found the tutorial last night and need to give it a more thorough read-through tonight.
Looks like Jenkins is out of space. I'll delete some old builds and order a new hard drive...
question for now.
.travis.yml (Outdated)
@@ -110,5 +125,5 @@ matrix:
   script: ./runTests.py -j2 $TESTFOLDER
-  sudo: false
+  sudo: true
@seantalts, any opinions on changing sudo to true? Although... it looks like this should be required?
https://docs.travis-ci.com/user/reference/overview/
It'd be nice if we could avoid sudo - it makes each VM spin-up take ~5 minutes more, and we run into a lot of time limits already. For instance, if on Travis we could use a compiler flag to add to the library path instead of a symlink requiring sudo, that would be good.
^I'll look into how to do that
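A sudo-free setup along the lines suggested above might look something like this hypothetical fragment (the SDK path is an assumption, purely illustrative):

```make
# Point the linker and runtime loader at a user-writable OpenCL install
# instead of symlinking into /usr/lib (which needs sudo on Travis).
OPENCL_LIB ?= $(HOME)/AMDAPPSDK/lib/x86_64   # illustrative path
LDFLAGS += -L$(OPENCL_LIB) -Wl,-rpath,$(OPENCL_LIB)
LDLIBS  += -lOpenCL
```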
.travis/amd_sdk.sh (Outdated)
@@ -0,0 +1,38 @@
+#!/bin/bash
+
+# Original script from https://github.com/gregvw/amd_sdk/
@SteveBronder, is this script going to be stable? It looks like the README says it isn't.
This script no longer works as AMD has changed their website to add another page where you must accept terms and conditions to download.
The original script does fail, but this version is updated to get past the terms and conditions
make/libraries (Outdated)
@@ -6,7 +6,7 @@ BOOST ?= $(MATH)lib/boost_1.65.1
 GTEST ?= $(MATH)lib/gtest_1.7.0
 CPPLINT ?= $(MATH)lib/cpplint_4.45
 CVODES ?= $(MATH)lib/cvodes_2.9.0
+OPENCL ?= $(MATH)lib/opencl_1.2.8
mind killing the white space at the end of the line?
make/default_compiler_options (Outdated)
@@ -12,8 +12,18 @@ AR = ar
 # CPPFLAGS are used for both C and C++ compilation
 CPPFLAGS = -DNO_FPRINTF_OUTPUT -pipe
 # CXXFLAGS are just used for C++
-CXXFLAGS = -Wall -I . -isystem $(EIGEN) -isystem $(BOOST) -isystem $(CVODES)/include -std=c++1y -DBOOST_RESULT_OF_USE_TR1 -DBOOST_NO_DECLTYPE -DBOOST_DISABLE_ASSERTS -DBOOST_PHOENIX_NO_VARIADIC_EXPRESSION -Wno-unused-function -Wno-uninitialized
+CXXFLAGS = -Wall -I . -isystem $(EIGEN) -isystem $(BOOST) -isystem $(CVODES)/include -isystem $(OPENCL) -std=c++1y -DBOOST_RESULT_OF_USE_TR1 -DBOOST_NO_DECLTYPE -DBOOST_DISABLE_ASSERTS -DBOOST_PHOENIX_NO_VARIADIC_EXPRESSION -Wno-unused-function -Wno-uninitialized -DSTAN_GPU
Questions:
- Are the OpenCL headers absolutely required to compile the code now?
- Doesn't setting -DSTAN_GPU force use of GPU for all configurations?
Are the OpenCL headers absolutely required to compile the code now?
They are only required when we have -DSTAN_GPU, since that includes the GPU header files in mat.hpp. Is there a better way to approach this?
Doesn't setting -DSTAN_GPU force use of GPU for all configurations?
Not sure what you mean by 'all configurations', but -DSTAN_GPU tells the compiler to include all of the GPU files. In the gpu_setup PR we also have -DSTAN_DEVICE_CPU in the makefile so the OpenCL code compiles and runs on a CPU instead of a GPU, since Travis does not have GPU access.
Got it. This isn't the way to update the makefiles. I'll help you out.
By "all configurations," I mean:
- Windows with g++
- Mac with clang++
- Mac with g++
- Linux with g++ (multiple versions)
- Linux with clang++
By setting it the way you did, it affects all of these configurations. We want to selectively do it when STAN_GPU is defined. In the meantime, perhaps just remove the changes to the makefile for this pull request? We can get this one in, which would just be including the opencl library into the correct place.
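The selective approach described above could look roughly like this (a sketch under the assumption of a user-set STAN_GPU variable, e.g. `make STAN_GPU=true`; the names are illustrative, not the actual makefile contents):

```make
# Only pull in the OpenCL include path and define when the user opts in,
# leaving all other configurations untouched.
ifdef STAN_GPU
  CXXFLAGS += -isystem $(OPENCL) -DSTAN_GPU
  ifeq ($(OS_TYPE),Darwin)
    LDLIBS += -framework OpenCL
  else
    LDLIBS += -lOpenCL
  endif
endif
```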
I'm going to suggest that for this pull request, we back out the changes to the makefiles; the purpose of this pull request is just to get the OpenCL headers into the right folder in the math library. We can work on the makefiles in the next pull request. That'll make this PR really easy to merge. Thoughts?
Yes, I am cool with that.
Force-pushed from aa9d79b to 3b15ea9: remove POST_LDLIBS from multiple_translation_units and add OpenCL to the default compiler options; remove space before 'Darwin' in makefile; testing on travis; remove everything but the OpenCL headers ...
Cool, this now just adds the OpenCL headers and license.
Great! I think one of the next PRs can focus just on the builds. We can work out the complexity and edge cases without having to look at a large file diff. (We'll need the amd setup script + changes to the makefile in that PR.)
Some maintenance-related questions:
The OpenCL header files themselves are the same, and I will do a diff to double-check that. From the OpenCL Registry site:
They should be the same, and would require the regular governance that the team uses when deciding to update Boost. Also, before we merge this I think we may actually only need the
The downstream cholesky GPU functions work with just the files
Is it better to keep everything as they have in their repo or just take the files we need?
If it's not too large and all licensed homogeneously, just duplicating their repo makes it easier to use other things in the future and also easier to do drop-in upgrades. I'm not sure what the "too large" cutoff should be here, but 10MB or so should be OK, I'd think. For comparison, we're including all of Boost (enormous - something like 20MB+) and all of Eigen (something like 2MB+), and I think we're including all of CVODES. Eventually, if we want to run through RStan, we'll need an R library wrapper package or we'll need to cherry-pick to comply with CRAN's size limits.
Just checked and all the libs we have are up to date with the current OpenCL header repo. Are you guys cool with merging this? Can I click the button?
@rok-cesnovar has already been thinking about this; their group has previous experience implementing OpenCL in R with their bayesCL package.
Yup. Merge!
:)
Sorry to post in the closed pull request; I just wanted to answer what Steve mentioned. We had no problems getting that package through the CRAN procedure (we just had to manually reply to the auto-generated response to say that they need to run it on a testing environment with a GPU). A library wrapper would be the easiest solution, I agree - similar to the Boost header files or ViennaCL's C++ header files package.
Submission Checklist
./runTests.py test/unit
make cpplint
Summary:
Adds the OpenCL header files and options in the makefiles to include OpenCL.
Rok and I thought it would be better to break the GPU PR into several smaller PRs that will flow by:
Intended Effect:
How to Verify:
This PR only contains the OpenCL headers and tests should all execute as normal
Side Effects:
None
Documentation:
?
Copyright and Licensing
Please list the copyright holder for the work you are submitting (this will be you or your assignee, such as a university or company):
By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
OpenCL headers are copyrighted by The Khronos Group Inc.
Steve Bronder
Rok Češnovar and Erik Štrumbelj (Faculty of Computer and Information Science, University of Ljubljana)