Clang tidy cleanup and using std algorithms #1373

Closed
Changes from 14 commits (17 commits in total)
4 changes: 2 additions & 2 deletions stan/math/opencl/opencl_context.hpp
@@ -265,7 +265,7 @@ class opencl_context {
int device_id = 0;

msg << "Number of Platforms: " << all_platforms.size() << "\n";
for (auto plat_iter : all_platforms) {
for (const auto& plat_iter : all_platforms) {
cl::Platform platform(plat_iter);

msg << "Platform ID: " << platform_id++ << "\n";
@@ -277,7 +277,7 @@
std::vector<cl::Device> all_devices;
platform.getDevices(CL_DEVICE_TYPE_ALL, &all_devices);

for (auto device_iter : all_devices) {
for (const auto& device_iter : all_devices) {
cl::Device device(device_iter);

msg << "\tDevice " << device_id++ << ": "
7 changes: 2 additions & 5 deletions stan/math/prim/arr/fun/dot.hpp
@@ -3,17 +3,14 @@

#include <stan/math/prim/meta.hpp>
#include <vector>
#include <numeric>
#include <cstddef>

namespace stan {
namespace math {

inline double dot(const std::vector<double>& x, const std::vector<double>& y) {
double sum = 0.0;
for (size_t i = 0; i < x.size(); ++i) {
sum += x[i] * y[i];
}
return sum;
return std::inner_product(x.begin(), x.end(), y.begin(), 0.0);
}

} // namespace math
7 changes: 2 additions & 5 deletions stan/math/prim/arr/fun/dot_self.hpp
@@ -3,17 +3,14 @@

#include <stan/math/prim/meta.hpp>
#include <vector>
#include <numeric>
#include <cstddef>

namespace stan {
namespace math {

inline double dot_self(const std::vector<double>& x) {
double sum = 0.0;
for (double i : x) {
sum += i * i;
}
return sum;
return std::inner_product(x.begin(), x.end(), x.begin(), 0.0);
}

} // namespace math
26 changes: 9 additions & 17 deletions stan/math/prim/arr/fun/log_sum_exp.hpp
@@ -2,10 +2,12 @@
#define STAN_MATH_PRIM_ARR_FUN_LOG_SUM_EXP_HPP

#include <stan/math/prim/meta.hpp>
#include <cmath>
#include <cstdlib>
#include <algorithm>
#include <numeric>
#include <limits>
#include <vector>
#include <cmath>
#include <cstdlib>

namespace stan {
namespace math {
@@ -27,21 +29,11 @@ inline double log_sum_exp(const std::vector<double>& x) {
using std::exp;
using std::log;
using std::numeric_limits;
double max = -numeric_limits<double>::infinity();
for (double xx : x) {
if (xx > max) {
max = xx;
}
}

double sum = 0.0;
for (size_t ii = 0; ii < x.size(); ii++) {
if (x[ii] != -numeric_limits<double>::infinity()) {
sum += exp(x[ii] - max);
}
}

return max + log(sum);
double max_val = *std::max_element(x.begin(), x.end());
Contributor: This is very neat!
double sum = std::accumulate(
x.begin(), x.end(), 0.0,
[&max_val](auto& acc, auto&& x_i) { return acc + exp(x_i - max_val); });
Contributor: Will this generate code that's as efficient as before? It will come down to how efficiently it can compile that closure.

How do we test?

Collaborator (author): Just did this on godbolt: the bottom-left pane is the current code (labeled editor 1) and the top-left is the new one (editor 2); the middle pane is the output from the new code and the far right is the output from the current code. You can highlight certain instructions and it usually pops up a little "here's what this does". You can click "Add" in the top right to get a diff view of the two outputs, though it usually looks wonky at O3. You can click and drag any of the tabs for each block to move things around. If you right-click the highlighted code on the left-hand side, there should be an option to take you to where that line lands in either of the two outputs, though it's not always exact.

I like to look at -O0 to see where things are before looking at -O3. Around lines 40-60 is where the loop and exp calculation happen. The code is very similar; the lambda version removes a compare and a few moves, but those are mostly because we no longer do the if statement in there. I can look tomorrow at just removing that check from the old version.

https://godbolt.org/z/Xe8ev_

godbolt is pretty neat! I learned last night you can also get a real graph of the call graph:

https://godbolt.org/z/cCqIAH

There's a way to make a PR on their repo so we can get Stan up there; I'd like to find time for that in the next week or so.

Collaborator (author): Another cool internet benchmark tool!

http://quick-bench.com/3Wdd56xscm20sShrc0xZx2qgdsE

Contributor: I think the rules for capture are like argument passing, so primitives like max_val should be captured by value, not by reference.

return max_val + log(sum);
}

} // namespace math
11 changes: 3 additions & 8 deletions stan/math/prim/arr/fun/promote_elements.hpp
@@ -19,20 +19,15 @@ namespace math {
* @tparam S type of input elements, must be assignable to T
*/
template <typename T, typename S>
struct promote_elements<std::vector<T>, std::vector<S> > {
struct promote_elements<std::vector<T>, std::vector<S>> {
/**
* Return input vector of type S as vector of type T.
*
* @param u vector of type S, assignable to type T
* @returns vector of type T
*/
inline static std::vector<T> promote(const std::vector<S>& u) {
std::vector<T> t;
t.reserve(u.size());
for (size_t i = 0; i < u.size(); ++i) {
t.push_back(promote_elements<T, S>::promote(u[i]));
}
return t;
return {u.begin(), u.end()};
}
};

@@ -44,7 +39,7 @@ struct promote_elements<std::vector<T>, std::vector<S> > {
* @tparam T type of elements
*/
template <typename T>
struct promote_elements<std::vector<T>, std::vector<T> > {
struct promote_elements<std::vector<T>, std::vector<T>> {
/**
* Return input vector.
*
1 change: 1 addition & 0 deletions stan/math/prim/arr/fun/sum.hpp
@@ -4,6 +4,7 @@
#include <stan/math/prim/meta.hpp>
#include <cstddef>
#include <vector>
#include <algorithm>
#include <numeric>

namespace stan {
2 changes: 1 addition & 1 deletion stan/math/prim/mat/fun/accumulator.hpp
@@ -34,7 +34,7 @@ class accumulator {
/**
* Destroy an accumulator.
*/
~accumulator() {}
Collaborator (author): Is there a reason for defining the accumulator destructor as empty here? To my knowledge this still calls the destructors for all the accumulator's members.

Contributor: This one is OK to leave as default; as is, it's not virtual and breaks the rule of 3 (5).

~accumulator() = default;

/**
* Add the specified arithmetic type value to the buffer after
52 changes: 26 additions & 26 deletions stan/math/prim/mat/fun/gp_exp_quad_cov.hpp
@@ -191,8 +191,8 @@ gp_exp_quad_cov(const std::vector<T_x> &x, const T_sigma &sigma,
return cov;
}

for (size_t n = 0; n < x_size; ++n) {
check_not_nan("gp_exp_quad_cov", "x", x[n]);
for (auto &&x_i : x) {
check_not_nan("gp_exp_quad_cov", "x", x_i);
}

cov = internal::gp_exp_quad_cov(x, square(sigma),
@@ -275,11 +275,11 @@ gp_exp_quad_cov(const std::vector<T_x1> &x1, const std::vector<T_x2> &x2,
return cov;
}

for (size_t i = 0; i < x1_size; ++i) {
check_not_nan(function_name, "x1", x1[i]);
for (auto &&x1_i : x1) {
Contributor: As is, I think these can be const.

These should be using a vectorized check_not_nan so that the index can also be printed and we don't have all this boilerplate looping.

Another alternative would be a for-each loop, which doesn't actually simplify things here, especially with explicit capture of the function name.

std::for_each(x1.begin(), x1.end(),
              [&function_name](double x) { return check_not_nan(function_name, "x", x); });

Collaborator (author):

> These should be using a vectorized check_not_nan so that the index can also be printed and we don't have all this boilerplate looping.

Agree this should use a vectorized check_not_nan, but the vectorized version of check_not_nan does not work for vectors of Eigen matrices atm :-(

After Andrew and I sort out the more generic templating discussion in #1425, I'm going to come back to these check functions and clean them up so we can do that.

check_not_nan(function_name, "x1", x1_i);
}
for (size_t i = 0; i < x2_size; ++i) {
check_not_nan(function_name, "x2", x2[i]);
for (auto &&x2_i : x2) {
check_not_nan(function_name, "x2", x2_i);
}

cov = internal::gp_exp_quad_cov(x1, x2, square(sigma),
@@ -325,11 +325,11 @@ gp_exp_quad_cov(const std::vector<Eigen::Matrix<T_x1, -1, 1>> &x1,
}

const char *function_name = "gp_exp_quad_cov";
for (size_t i = 0; i < x1_size; ++i) {
check_not_nan(function_name, "x1", x1[i]);
for (auto &&x1_i : x1) {
check_not_nan(function_name, "x1", x1_i);
}
for (size_t i = 0; i < x2_size; ++i) {
check_not_nan(function_name, "x2", x2[i]);
for (auto &&x2_i : x2) {
check_not_nan(function_name, "x2", x2_i);
}
check_positive_finite(function_name, "magnitude", sigma);
check_positive_finite(function_name, "length scale", length_scale);
@@ -369,8 +369,8 @@ inline Eigen::MatrixXd gp_exp_quad_cov(const std::vector<double> &x,
}
const auto total_size = x_size + cov.size();
if (total_size < opencl_context.tuning_opts().gp_exp_quad_cov_simple) {
for (size_t n = 0; n < x_size; ++n) {
check_not_nan("gp_exp_quad_cov", "x", x[n]);
for (auto x_i : x) {
check_not_nan("gp_exp_quad_cov", "x", x_i);
}

cov = internal::gp_exp_quad_cov(x, square(sigma),
@@ -415,8 +415,8 @@ inline Eigen::MatrixXd gp_exp_quad_cov(const std::vector<Eigen::VectorXd> &x,
const size_t inner_x1_size = x[0].size();
const auto total_size = x_size * inner_x1_size + cov.size();
if (total_size < opencl_context.tuning_opts().gp_exp_quad_cov_complex) {
for (size_t i = 0; i < x_size; ++i) {
check_not_nan("gp_exp_quad_cov", "x", x[i]);
for (auto &&x_i : x) {
check_not_nan("gp_exp_quad_cov", "x", x_i);
}
cov = internal::gp_exp_quad_cov(x, square(sigma),
-0.5 / square(length_scale));
@@ -503,11 +503,11 @@ inline typename Eigen::MatrixXd gp_exp_quad_cov(const std::vector<double> &x1,
}
const auto total_size = x1.size() + x2.size() + cov.size();
if (total_size < opencl_context.tuning_opts().gp_exp_quad_cov_simple) {
for (size_t i = 0; i < x1.size(); ++i) {
check_not_nan(function_name, "x1", x1[i]);
for (auto x_i : x1) {
check_not_nan(function_name, "x1", x_i);
}
for (size_t i = 0; i < x2.size(); ++i) {
check_not_nan(function_name, "x2", x2[i]);
for (auto x_i : x2) {
check_not_nan(function_name, "x2", x_i);
}

cov = internal::gp_exp_quad_cov(x1, x2, square(sigma),
@@ -560,11 +560,11 @@ inline typename Eigen::MatrixXd gp_exp_quad_cov(
const auto total_size
= x1_size * x1_inner_size + x2_size * x2_inner_size + cov.size();
if (total_size < opencl_context.tuning_opts().gp_exp_quad_cov_complex) {
for (size_t i = 0; i < x1.size(); ++i) {
check_not_nan(function_name, "x1", x1[i]);
for (auto &&x_i : x1) {
check_not_nan(function_name, "x1", x_i);
}
for (size_t i = 0; i < x2.size(); ++i) {
check_not_nan(function_name, "x2", x2[i]);
for (auto &&x_i : x2) {
check_not_nan(function_name, "x2", x_i);
}

cov = internal::gp_exp_quad_cov(x1, x2, square(sigma),
@@ -621,11 +621,11 @@ inline typename Eigen::MatrixXd gp_exp_quad_cov(
const auto total_size
= x1_size * x1_inner_size + x2_size * x2_inner_size + l_size + cov.size();
if (total_size < opencl_context.tuning_opts().gp_exp_quad_cov_complex) {
for (size_t i = 0; i < x1_size; ++i) {
check_not_nan(function_name, "x1", x1[i]);
for (auto &&x1_i : x1) {
check_not_nan(function_name, "x1", x1_i);
}
for (size_t i = 0; i < x2_size; ++i) {
check_not_nan(function_name, "x2", x2[i]);
for (auto &&x2_i : x2) {
check_not_nan(function_name, "x2", x2_i);
}
cov = internal::gp_exp_quad_cov(divide_columns(x1, length_scale),
divide_columns(x2, length_scale),
2 changes: 1 addition & 1 deletion stan/math/prim/mat/fun/matrix_exp_action_handler.hpp
@@ -36,7 +36,7 @@ class matrix_exp_action_handler {
public:
/* Constructor
*/
matrix_exp_action_handler() {}
matrix_exp_action_handler() = default;

/* Perform the matrix exponential action exp(A*t)*B
* @param [in] mat matrix A
6 changes: 1 addition & 5 deletions stan/math/prim/mat/fun/promote_elements.hpp
@@ -26,11 +26,7 @@ struct promote_elements<Eigen::Matrix<T, R, C>, Eigen::Matrix<S, R, C> > {
*/
inline static Eigen::Matrix<T, R, C> promote(
const Eigen::Matrix<S, R, C>& u) {
Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> t(u.rows(), u.cols());
for (int i = 0; i < u.size(); ++i) {
t(i) = promote_elements<T, S>::promote(u(i));
}
return t;
return u.template cast<T>();
}
};

2 changes: 1 addition & 1 deletion stan/math/prim/mat/meta/broadcast_array.hpp
@@ -11,7 +11,7 @@ namespace internal {
template <typename ViewElt, typename OpElt, int R, int C>
class empty_broadcast_array<ViewElt, Eigen::Matrix<OpElt, R, C> > {
public:
empty_broadcast_array() {}
empty_broadcast_array() = default;
/**
* Not implemented so cannot be called.
*/
6 changes: 3 additions & 3 deletions stan/math/prim/mat/meta/operands_and_partials.hpp
@@ -20,7 +20,7 @@ class ops_partials_edge<ViewElt, Eigen::Matrix<Op, R, C>> {
using partials_t = empty_broadcast_array<ViewElt, Eigen::Matrix<Op, R, C>>;
partials_t partials_;
empty_broadcast_array<partials_t, Eigen::Matrix<Op, R, C>> partials_vec_;
ops_partials_edge() {}
ops_partials_edge() = default;
explicit ops_partials_edge(const Eigen::Matrix<Op, R, C>& /* ops */) {}

private:
@@ -38,7 +38,7 @@ class ops_partials_edge<ViewElt, std::vector<Eigen::Matrix<Op, R, C>>> {
public:
using partials_t = empty_broadcast_array<ViewElt, Eigen::Matrix<Op, R, C>>;
empty_broadcast_array<partials_t, Eigen::Matrix<Op, R, C>> partials_vec_;
ops_partials_edge() {}
ops_partials_edge() = default;
explicit ops_partials_edge(
const std::vector<Eigen::Matrix<Op, R, C>>& /* ops */) {}

@@ -59,7 +59,7 @@ class ops_partials_edge<ViewElt, std::vector<std::vector<Op>>> {
= empty_broadcast_array<ViewElt, std::vector<std::vector<Op>>>;
partials_t partials_;
empty_broadcast_array<partials_t, std::vector<std::vector<Op>>> partials_vec_;
ops_partials_edge() {}
ops_partials_edge() = default;
explicit ops_partials_edge(const std::vector<std::vector<Op>>& /* ops */) {}

private:
7 changes: 5 additions & 2 deletions stan/math/prim/mat/prob/ordered_logistic_glm_lpmf.hpp
@@ -82,11 +82,14 @@ ordered_logistic_glm_lpmf(
check_finite(function, "First cut-point", cuts[0]);
}

if (size_zero(y, cuts))
if (size_zero(y, cuts)) {
return 0;
}

if (!include_summand<propto, T_x_scalar, T_beta_scalar, T_cuts_scalar>::value)
if (!include_summand<propto, T_x_scalar, T_beta_scalar,
T_cuts_scalar>::value) {
return 0;
}

const auto& x_val = value_of_rec(x);
const auto& beta_val = value_of_rec(beta);
2 changes: 1 addition & 1 deletion stan/math/prim/scal/meta/broadcast_array.hpp
@@ -31,7 +31,7 @@ class broadcast_array {
template <typename T, typename S>
class empty_broadcast_array {
public:
empty_broadcast_array() {}
empty_broadcast_array() = default;
/**
* Not implemented so cannot be called.
*/
2 changes: 1 addition & 1 deletion stan/math/prim/scal/meta/operands_and_partials.hpp
@@ -36,7 +36,7 @@ class ops_partials_edge {
public:
empty_broadcast_array<ViewElt, Op> partials_;

ops_partials_edge() {}
ops_partials_edge() = default;
explicit ops_partials_edge(const Op& /* op */) {}

private:
23 changes: 9 additions & 14 deletions stan/math/rev/arr/fun/log_sum_exp.hpp
@@ -4,8 +4,10 @@
#include <stan/math/rev/core.hpp>
#include <stan/math/rev/scal/fun/calculate_chain.hpp>
#include <stan/math/prim/arr/fun/log_sum_exp.hpp>
#include <vector>
#include <algorithm>
#include <limits>
#include <numeric>
#include <vector>

namespace stan {
namespace math {
@@ -15,19 +17,12 @@ inline double log_sum_exp_as_double(const std::vector<var>& x) {
using std::exp;
using std::log;
using std::numeric_limits;
double max = -numeric_limits<double>::infinity();
for (size_t i = 0; i < x.size(); ++i) {
if (x[i] > max) {
max = x[i].val();
}
}
double sum = 0.0;
for (size_t i = 0; i < x.size(); ++i) {
if (x[i] != -numeric_limits<double>::infinity()) {
sum += exp(x[i].val() - max);
}
}
return max + log(sum);
double max_val = std::max_element(x.begin(), x.end())->val();
Contributor: [optional] This is soooo close to the double version, the only difference being the ->val() pulling out the double-based value. Could a (recursive?) value_of for the max_val computation allow these to be combined into a single implementation? Maybe not worth it given how complicated the indirection would be.

Collaborator (author) (@SteveBronder, Nov 3, 2019): Ack, it's so close! I think for a very clean version of this we need a vectorized value_of. Then in the constructor for log_sum_exp_vector_vari we could just call op_vector_vari(log_sum_exp(value_of(x)), x).

I put a comment above log_sum_exp_as_double about this and can do those value_of's in a separate PR.

double sum = std::accumulate(x.begin(), x.end(), 0.0,
[&max_val](auto& acc, auto&& x_i) {
return acc + exp(x_i.val() - max_val);
});
return max_val + log(sum);
}

class log_sum_exp_vector_vari : public op_vector_vari {