Cannot statically link libtorch to build on M1 #786

Closed
MidKnightXI opened this issue Aug 17, 2023 · 6 comments

Comments

@MidKnightXI

I guess this is a pretty common issue. I installed pytorch via Homebrew and added its path to my environment, but when I try to build my program, the build fails with a pretty huge error message (see below).
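
For context, my environment setup is roughly the following (a sketch: LIBTORCH is the variable torch-sys reads, and the path matches the Homebrew keg that shows up in the log below):

    export LIBTORCH=/opt/homebrew/Cellar/pytorch/2.0.1
    export DYLD_LIBRARY_PATH=$LIBTORCH/lib:$DYLD_LIBRARY_PATH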

   Compiling libc v0.2.147
   Compiling autocfg v1.1.0
   Compiling cfg-if v1.0.0
   Compiling proc-macro2 v1.0.66
   Compiling unicode-ident v1.0.11
   Compiling pkg-config v0.3.27
   Compiling adler v1.0.2
   Compiling crc32fast v1.3.2
   Compiling scopeguard v1.2.0
   Compiling crossbeam-utils v0.8.16
   Compiling vcpkg v0.2.15
   Compiling simd-adler32 v0.3.7
   Compiling thiserror v1.0.47
   Compiling rayon-core v1.11.0
   Compiling miniz_oxide v0.7.1
   Compiling byteorder v1.4.3
   Compiling memoffset v0.9.0
   Compiling crossbeam-epoch v0.9.15
   Compiling num-traits v0.2.16
   Compiling lock_api v0.4.10
   Compiling crossbeam-channel v0.5.8
   Compiling num-integer v0.1.45
   Compiling quote v1.0.33
   Compiling cc v1.0.82
   Compiling time v0.1.45
   Compiling num_cpus v1.16.0
   Compiling syn v2.0.29
   Compiling crossbeam-deque v0.8.3
   Compiling curl v0.4.44
   Compiling anyhow v1.0.75
   Compiling getrandom v0.1.16
   Compiling socket2 v0.4.9
   Compiling flate2 v1.0.27
   Compiling futures-core v0.3.28
   Compiling cmake v0.1.50
   Compiling getrandom v0.2.10
   Compiling libz-sys v1.1.12
   Compiling bzip2-sys v0.1.11+1.0.8
   Compiling curl-sys v0.4.65+curl-8.2.1
   Compiling num-complex v0.2.4
   Compiling either v1.9.0
   Compiling spin v0.9.8
   Compiling rayon v1.7.0
   Compiling nanorand v0.7.0
   Compiling rand_core v0.5.1
   Compiling num-rational v0.4.1
   Compiling ndarray v0.13.1
   Compiling ppv-lite86 v0.2.17
   Compiling serde v1.0.183
   Compiling rawpointer v0.2.1
   Compiling futures-sink v0.3.28
   Compiling weezl v0.1.7
   Compiling matrixmultiply v0.2.4
   Compiling rand_chacha v0.2.2
   Compiling fdeflate v0.3.0
   Compiling zune-inflate v0.2.54
   Compiling smallvec v1.11.0
   Compiling serde_json v1.0.105
   Compiling bzip2 v0.4.4
   Compiling half v2.2.1
   Compiling color_quant v1.1.0
   Compiling lebe v0.5.2
   Compiling bytemuck v1.13.1
   Compiling bit_field v0.10.2
   Compiling bitflags v1.3.2
   Compiling png v0.17.9
   Compiling gif v0.12.0
   Compiling rand v0.7.3
   Compiling qoi v0.4.1
   Compiling ryu v1.0.15
   Compiling half v1.8.2
   Compiling lazy_static v1.4.0
   Compiling itoa v1.0.9
   Compiling jpeg-decoder v0.3.0
   Compiling thiserror-impl v1.0.47
   Compiling pin-project-internal v1.1.3
   Compiling serde_derive v1.0.183
   Compiling tiff v0.9.0
   Compiling pin-project v1.1.3
   Compiling zip v0.5.13
   Compiling flume v0.10.14
   Compiling exr v1.7.0
   Compiling torch-sys v0.3.1
   Compiling image v0.24.7
The following warnings were emitted during compilation:

warning: clang: warning: -Wl,-rpath=/opt/homebrew/Cellar/pytorch/2.0.1/lib: 'linker' input unused [-Wunused-command-line-argument]
warning: In file included from libtch/torch_api.cpp:14:
warning: libtch/stb_image_write.h:742:13: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]
warning:       len = sprintf(buffer, "EXPOSURE=          1.0000000000000\n\n-Y %d +X %d\n", y, x);
warning:             ^
warning: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here
warning: __deprecated_msg("This function is provided for compatibility reasons only.  Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.")
warning: ^
warning: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'
warning:         #define __deprecated_msg(_msg) __attribute__((__deprecated__(_msg)))
warning:                                                       ^
warning: libtch/torch_api.cpp:142:9: error: no member named '_amp_non_finite_check_and_unscale_' in namespace 'at'; did you mean '_amp_foreach_non_finite_check_and_unscale_'?
warning:     at::_amp_non_finite_check_and_unscale_(*t, *found_inf, *inf_scale);
warning:     ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
warning:         _amp_foreach_non_finite_check_and_unscale_
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/_amp_foreach_non_finite_check_and_unscale.h:26:13: note: '_amp_foreach_non_finite_check_and_unscale_' declared here
warning: inline void _amp_foreach_non_finite_check_and_unscale_(at::TensorList self, at::Tensor & found_inf, const at::Tensor & inv_scale) {
warning:             ^
warning: libtch/torch_api.cpp:487:109: error: no viable conversion from 'vector<torch::autograd::Edge>' to 'bool'
warning:     auto vl = torch::autograd::Engine::get_default_engine().execute(roots, grads, keep_graph, create_graph, inputs_);
warning:                                                                                                             ^~~~~~~
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/torch/csrc/autograd/engine.h:149:12: note: passing argument to parameter 'accumulate_grad' here
warning:       bool accumulate_grad,
warning:            ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:187:29: error: no member named '_addmv_impl_' in namespace 'torch'
warning:     auto outputs__ = torch::_addmv_impl_(*self, *self2, *mat, *vec);
warning:                      ~~~~~~~^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:211:20: error: no matching constructor for initialization of 'torch::Tensor'
warning:     out__[0] = new torch::Tensor(outputs__);
warning:                    ^             ~~~~~~~~~
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/core/TensorBody.h:103:12: note: candidate constructor not viable: no known conversion from '::std::tuple<at::Tensor, at::Tensor>' to 'c10::intrusive_ptr<TensorImpl, UndefinedTensorImpl>' for 1st argument
warning:   explicit Tensor(
warning:            ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/core/TensorBody.h:106:3: note: candidate constructor not viable: no known conversion from '::std::tuple<at::Tensor, at::Tensor>' to 'const at::Tensor' for 1st argument
warning:   Tensor(const Tensor &tensor) = default;
warning:   ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/core/TensorBody.h:107:3: note: candidate constructor not viable: no known conversion from '::std::tuple<at::Tensor, at::Tensor>' to 'at::Tensor' for 1st argument
warning:   Tensor(Tensor &&tensor) = default;
warning:   ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/core/TensorBody.h:110:12: note: candidate constructor not viable: no known conversion from '::std::tuple<at::Tensor, at::Tensor>' to 'const at::TensorBase' for 1st argument
warning:   explicit Tensor(const TensorBase &base): TensorBase(base) {}
warning:            ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/core/TensorBody.h:111:16: note: candidate constructor not viable: no known conversion from '::std::tuple<at::Tensor, at::Tensor>' to 'at::TensorBase' for 1st argument
warning:   /*implicit*/ Tensor(TensorBase &&base): TensorBase(std::move(base)) {}
warning:                ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/core/TensorBody.h:100:3: note: candidate constructor not viable: requires 0 arguments, but 1 was provided
warning:   Tensor() = default;
warning:   ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/core/TensorBody.h:95:12: note: candidate constructor not viable: requires 2 arguments, but 1 was provided
warning:   explicit Tensor(unsafe_borrow_t, const TensorBase& rhs): TensorBase(unsafe_borrow_t{}, rhs) {}
warning:            ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:217:29: error: no member named '_baddbmm_mkl_' in namespace 'torch'
warning:     auto outputs__ = torch::_baddbmm_mkl_(*self, *batch1, *batch2);
warning:                      ~~~~~~~^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:224:29: error: no member named '_bmm' in namespace 'torch'
warning:     auto outputs__ = torch::_bmm(*self, *mat2, (bool)deterministic);
warning:                      ~~~~~~~^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:231:29: error: no member named '_bmm_out' in namespace 'torch'
warning:     auto outputs__ = torch::_bmm_out(*out, *self, *mat2, (bool)deterministic);
warning:                      ~~~~~~~^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:294:29: error: no member named '_cat' in namespace 'torch'; did you mean 'cat'?
warning:     auto outputs__ = torch::_cat(of_carray_tensor(tensors_data, tensors_len), dim);
warning:                      ~~~~~~~^~~~
warning:                             cat
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/cat.h:26:19: note: 'cat' declared here
warning: inline at::Tensor cat(const at::ITensorListRef & tensors, int64_t dim=0) {
warning:                   ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:301:29: error: no member named '_cat_out' in namespace 'torch'; did you mean 'cat_out'?
warning:     auto outputs__ = torch::_cat_out(*out, of_carray_tensor(tensors_data, tensors_len), dim);
warning:                      ~~~~~~~^~~~~~~~
warning:                             cat_out
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/cat.h:31:21: note: 'cat_out' declared here
warning: inline at::Tensor & cat_out(at::Tensor & out, const at::ITensorListRef & tensors, int64_t dim=0) {
warning:                     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:315:29: error: no member named '_cholesky_helper' in namespace 'torch'
warning:     auto outputs__ = torch::_cholesky_helper(*self, (bool)upper);
warning:                      ~~~~~~~^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:371:29: error: no member named '_convolution_nogroup' in namespace 'torch'
warning:     auto outputs__ = torch::_convolution_nogroup(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len));
warning:                      ~~~~~~~^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:415:386: error: too few arguments to function call, expected 16, have 15
warning:     auto outputs__ = torch::_cudnn_rnn(*input, of_carray_tensor(weight_data, weight_len), weight_stride0, (weight_buf ? *weight_buf : torch::Tensor()), *hx, (cx ? *cx : torch::Tensor()), mode, hidden_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? *dropout_state : torch::Tensor()));
warning:                      ~~~~~~~~~~~~~~~~~                                                                                                                                                                                                                                                                                                                                                           ^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/_cudnn_rnn.h:26:77: note: '_cudnn_rnn' declared here
warning: inline ::std::tuple<at::Tensor,at::Tensor,at::Tensor,at::Tensor,at::Tensor> _cudnn_rnn(const at::Tensor & input, at::TensorList weight, int64_t weight_stride0, const c10::optional<at::Tensor> & weight_buf, const at::Tensor & hx, const c10::optional<at::Tensor> & cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, bool batch_first, double dropout, bool train, bool bidirectional, at::IntArrayRef batch_sizes, const c10::optional<at::Tensor> & dropout_state) {
warning:                                                                             ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:426:203: error: too few arguments to function call, expected 9, have 8
warning:     auto outputs__ = torch::_cudnn_rnn_flatten_weight(of_carray_tensor(weight_arr_data, weight_arr_len), weight_stride0, input_size, mode, hidden_size, num_layers, (bool)batch_first, (bool)bidirectional);
warning:                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                                                                                                                                                     ^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/_cudnn_rnn_flatten_weight.h:26:19: note: '_cudnn_rnn_flatten_weight' declared here
warning: inline at::Tensor _cudnn_rnn_flatten_weight(at::TensorList weight_arr, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, bool batch_first, bool bidirectional) {
warning:                   ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:433:29: error: no member named '_cumprod' in namespace 'torch'; did you mean 'cumprod'?
warning:     auto outputs__ = torch::_cumprod(*self, dim);
warning:                      ~~~~~~~^~~~~~~~
warning:                             cumprod
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/cumprod.h:26:19: note: 'cumprod' declared here
warning: inline at::Tensor cumprod(const at::Tensor & self, int64_t dim, c10::optional<at::ScalarType> dtype=c10::nullopt) {
warning:                   ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:440:29: error: no member named '_cumprod_out' in namespace 'torch'; did you mean 'cumprod_out'?
warning:     auto outputs__ = torch::_cumprod_out(*out, *self, dim);
warning:                      ~~~~~~~^~~~~~~~~~~~
warning:                             cumprod_out
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/cumprod.h:31:21: note: 'cumprod_out' declared here
warning: inline at::Tensor & cumprod_out(at::Tensor & out, const at::Tensor & self, int64_t dim, c10::optional<at::ScalarType> dtype=c10::nullopt) {
warning:                     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:447:29: error: no member named '_cumsum' in namespace 'torch'; did you mean 'cumsum'?
warning:     auto outputs__ = torch::_cumsum(*self, dim);
warning:                      ~~~~~~~^~~~~~~
warning:                             cumsum
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/cumsum.h:26:19: note: 'cumsum' declared here
warning: inline at::Tensor cumsum(const at::Tensor & self, int64_t dim, c10::optional<at::ScalarType> dtype=c10::nullopt) {
warning:                   ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:454:29: error: no member named '_cumsum_out' in namespace 'torch'; did you mean 'cumsum_out'?
warning:     auto outputs__ = torch::_cumsum_out(*out, *self, dim);
warning:                      ~~~~~~~^~~~~~~~~~~
warning:                             cumsum_out
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/cumsum.h:31:21: note: 'cumsum_out' declared here
warning: inline at::Tensor & cumsum_out(at::Tensor & out, const at::Tensor & self, int64_t dim, c10::optional<at::ScalarType> dtype=c10::nullopt) {
warning:                     ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:492:110: error: no viable conversion from 'torch::Tensor' to 'int64_t' (aka 'long long')
warning:     auto outputs__ = torch::_embedding_bag_dense_backward(*grad, *indices, *offsets, *offset2bag, *bag_size, *maximum_indices, num_weights, (bool)scale_grad_by_freq, mode, (per_sample_weights ? *per_sample_weights : torch::Tensor()));
warning:                                                                                                              ^~~~~~~~~~~~~~~~
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: /opt/homebrew/Cellar/pytorch/2.0.1/include/ATen/ops/_embedding_bag_dense_backward.h:26:206: note: passing argument to parameter 'num_weights' here
warning: inline at::Tensor _embedding_bag_dense_backward(const at::Tensor & grad, const at::Tensor & indices, const at::Tensor & offset2bag, const at::Tensor & bag_size, const at::Tensor & maximum_indices, int64_t num_weights, bool scale_grad_by_freq, int64_t mode, const c10::optional<at::Tensor> & per_sample_weights, int64_t padding_idx=-1) {
warning:                                                                                                                                                                                                              ^
warning: In file included from libtch/torch_api.cpp:1215:
warning: libtch/torch_api_generated.cpp.h:576:29: error: no member named '_fft_with_size' in namespace 'torch'
warning:     auto outputs__ = torch::_fft_with_size(*self, signal_ndim, (bool)complex_input, (bool)complex_output, (bool)inverse, torch::IntArrayRef(checked_signal_sizes_data, checked_signal_sizes_len), (bool)normalized, (bool)onesided, torch::IntArrayRef(output_sizes_data, output_sizes_len));
warning:                      ~~~~~~~^
warning: libtch/torch_api.h:16:5: note: expanded from macro 'PROTECT'
warning:     x \
warning:     ^
warning: fatal error: too many errors emitted, stopping now [-ferror-limit=]
warning: 1 warning and 20 errors generated.

error: failed to run custom build command for `torch-sys v0.3.1`

Caused by:
  process didn't exit successfully: `/Users/midknight/perso/BlurWarp/runner/target/debug/build/torch-sys-b6be2d465ad1e7b5/build-script-build` (exit status: 1)
  --- stdout
  cargo:rerun-if-env-changed=TORCH_CUDA_VERSION
  cargo:rerun-if-env-changed=LIBTORCH
  cargo:rustc-link-search=native=/opt/homebrew/Cellar/pytorch/2.0.1/lib
  cargo:rerun-if-env-changed=LIBTORCH_USE_CMAKE
  cargo:rerun-if-changed=libtch/torch_api.cpp
  cargo:rerun-if-changed=libtch/torch_api.h
  cargo:rerun-if-changed=libtch/torch_api_generated.cpp.h
  cargo:rerun-if-changed=libtch/torch_api_generated.h
  cargo:rerun-if-changed=libtch/stb_image_write.h
  cargo:rerun-if-changed=libtch/stb_image_resize.h
  cargo:rerun-if-changed=libtch/stb_image.h
  cargo:rerun-if-env-changed=LIBTORCH_CXX11_ABI
  TARGET = Some("aarch64-apple-darwin")
  OPT_LEVEL = Some("0")
  HOST = Some("aarch64-apple-darwin")
  cargo:rerun-if-env-changed=CXX_aarch64-apple-darwin
  CXX_aarch64-apple-darwin = None
  cargo:rerun-if-env-changed=CXX_aarch64_apple_darwin
  CXX_aarch64_apple_darwin = None
  cargo:rerun-if-env-changed=HOST_CXX
  HOST_CXX = None
  cargo:rerun-if-env-changed=CXX
  CXX = None
  cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
  CRATE_CC_NO_DEFAULTS = None
  DEBUG = Some("true")
  CARGO_CFG_TARGET_FEATURE = Some("aes,crc,dit,dotprod,dpb,dpb2,fcma,fhm,flagm,fp16,frintts,jsconv,lor,lse,neon,paca,pacg,pan,pmuv3,ras,rcpc,rcpc2,rdm,sb,sha2,sha3,ssbs,vh")
  cargo:rerun-if-env-changed=CXXFLAGS_aarch64-apple-darwin
  CXXFLAGS_aarch64-apple-darwin = None
  cargo:rerun-if-env-changed=CXXFLAGS_aarch64_apple_darwin
  CXXFLAGS_aarch64_apple_darwin = None
  cargo:rerun-if-env-changed=HOST_CXXFLAGS
  HOST_CXXFLAGS = None
  cargo:rerun-if-env-changed=CXXFLAGS
  CXXFLAGS = None
  running: "c++" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-gdwarf-2" "-fno-omit-frame-pointer" "-arch" "arm64" "-I" "/opt/homebrew/Cellar/pytorch/2.0.1/include" "-I" "/opt/homebrew/Cellar/pytorch/2.0.1/include/torch/csrc/api/include" "-Wl,-rpath=/opt/homebrew/Cellar/pytorch/2.0.1/lib" "-std=c++14" "-D_GLIBCXX_USE_CXX11_ABI=1" "-o" "/Users/midknight/perso/BlurWarp/runner/target/debug/build/torch-sys-5655e054fb7291c1/out/libtch/torch_api.o" "-c" "libtch/torch_api.cpp"
  cargo:warning=[... the same clang warnings and errors shown above, repeated line by line with a cargo:warning= prefix ...]

  exit status: 1

  --- stderr


  error occurred: Command "c++" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-gdwarf-2" "-fno-omit-frame-pointer" "-arch" "arm64" "-I" "/opt/homebrew/Cellar/pytorch/2.0.1/include" "-I" "/opt/homebrew/Cellar/pytorch/2.0.1/include/torch/csrc/api/include" "-Wl,-rpath=/opt/homebrew/Cellar/pytorch/2.0.1/lib" "-std=c++14" "-D_GLIBCXX_USE_CXX11_ABI=1" "-o" "/Users/midknight/perso/BlurWarp/runner/target/debug/build/torch-sys-5655e054fb7291c1/out/libtch/torch_api.o" "-c" "libtch/torch_api.cpp" with args "c++" did not execute successfully (status code exit status: 1).
@LaurentMazare
Owner

Not sure this is M1 or Mac related; my guess is rather that you're using an old version of torch-sys/tch that is not compatible with PyTorch 2.0.1 (and even though the latest version of tch should work, it is only guaranteed to work with 2.0.0).
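
A quick way to check what is actually in the dependency graph and to bump it (a sketch using standard cargo commands, nothing tch-specific):

    cargo tree -i torch-sys   # shows which crate pulls in torch-sys, and at which version
    cargo update -p tch       # move tch (and torch-sys) to the newest release allowed by Cargo.toml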

@MidKnightXI
Author

MidKnightXI commented Aug 17, 2023

Even with the latest version of torch-sys the build is failing :/

Also, maybe I'm wrong, but from what I've seen pytorch is only available as 2.0.1 on Homebrew.

@LaurentMazare
Owner

Still with the same errors?

@MidKnightXI
Author

Yes, still the same error.

I managed to fix it by using your download-libtorch feature (enabled roughly as sketched below), which allows the build to finish successfully.
But when trying to run the binary I get this error:

dyld[32239]: Library not loaded: @rpath/libtorch_cpu.dylib
  Referenced from: <335A5D2D-A758-350D-B354-6183AD443523> /Users/midknight/perso/BlurWarp/runner/target/debug/blurwarp
  Reason: tried: '/usr/local/lib/libtorch_cpu.dylib' (no such file), '/usr/lib/libtorch_cpu.dylib' (no such file, not in dyld cache)
[1]    32239 abort      ./target/debug/blurwarp
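
(For reference, enabling the feature looks roughly like this; a sketch assuming tch is declared directly in Cargo.toml:)

    cargo add tch --features download-libtorch
    cargo build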

@LaurentMazare
Owner

Dynamic linking is the default; you may have to tweak your DYLD_LIBRARY_PATH appropriately so that the executable loader finds the library, along the lines sketched below. Also maybe look at #488, which has more information about running on M1/M2s.
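
Something along these lines (a sketch; point it at wherever libtorch_cpu.dylib actually lives, e.g. the Homebrew lib directory from your build log or the libtorch that the download-libtorch feature fetched):

    export DYLD_LIBRARY_PATH=/opt/homebrew/Cellar/pytorch/2.0.1/lib:$DYLD_LIBRARY_PATH
    ./target/debug/blurwarp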

@MidKnightXI
Author

MidKnightXI commented Aug 19, 2023

I tried building without statically linking pytorch, and it works.

But I'm still getting an error when trying the statically linked build:

error: could not find native static library `asmjit`, perhaps an -L flag is missing?

error: could not compile `blurwarp` (bin "blurwarp") due to previous error

And of course, if I pass the -L flag to cargo itself, it doesn't work:

error: unexpected argument '-L' found

Usage: cargo build [OPTIONS]

For more information, try '--help'.
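
(For completeness: -L is a rustc flag rather than a cargo flag, so it would have to be passed through RUSTFLAGS or emitted by a build script; a sketch, with the library directory left as a placeholder:)

    RUSTFLAGS="-L native=/path/to/libtorch/lib" cargo build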

MidKnightXI changed the title from "Failed to build on M1" to "Cannot statically link libtorch to build on M1" on Aug 19, 2023