help v2 #9
What's your GCC version? This looks similar to #3.
GCC 11.2. I'm on WSL2, so any help regarding downgrading is appreciated.

(e4s) user@LAPTOP-E4D0U85P:~/e4s$ conda install -c gouarin gcc-7
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package libstdcxx-ng conflicts for:
Package libgcc-ng conflicts for:
Your installed version is: 2.35

I also tried to install GCC as a package inside the conda env itself, but that didn't work. Any tips for installing it?
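If the goal is to keep the compiler inside the env, one alternative to the gouarin channel is conda-forge's pinned toolchain packages. The sketch below is untested; the package names are conda-forge's compiler metapackages, and the exact binary paths under $CONDA_PREFIX are assumptions to verify locally. Note that nvcc does not read $CC/$CXX on its own, so a system-level downgrade (or an explicit -ccbin flag) may still be needed for the CUDA extension build.

```shell
# Sketch: install a GCC 9 toolchain from conda-forge inside the e4s env.
conda install -n e4s -c conda-forge "gcc_linux-64=9" "gxx_linux-64=9"
# The compilers land in $CONDA_PREFIX/bin with a host-triple prefix;
# export them so build tools pick them up (paths assumed, check with ls):
export CC="$CONDA_PREFIX/bin/x86_64-conda-linux-gnu-gcc"
export CXX="$CONDA_PREFIX/bin/x86_64-conda-linux-gnu-g++"
```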
11.2. Any chance you can update the project so that it also works with WSL2, or add Docker support? From what I've seen, other people are having issues too. I'm excited to try the code, but right now it seems like the only PC it runs on is yours, haha.
Sorry, I have no Windows machine. To be honest, I tested the installation steps in a clean Ubuntu environment and didn't run into this issue. Could you please downgrade your GCC and CUDA to the recommended versions, then try again?
I'll gladly downgrade CUDA, but when it comes to GCC I've looked online and failed to downgrade it. I know it's external to your project, but I'd be grateful if you could give me a hand with this. I also don't know whether the one I have to downgrade is Ubuntu's GCC or the one inside the conda environment.
(I am using WSL2) |
I never tried WSL before. I think you should try downgrading the GCC of the WSL system first.
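On an Ubuntu-based WSL2, a common way to do that (an untested sketch; it assumes the apt archive carries gcc-10, which Ubuntu 22.04's does) is to install an older compiler alongside GCC 11 and switch the default with update-alternatives, since nvcc invokes /usr/bin/gcc as its host compiler. CUDA 11.x toolkits generally do not support GCC 11, which is what triggers the std_function.h error below.

```shell
sudo apt-get update
sudo apt-get install -y gcc-10 g++-10
# Make gcc-10/g++-10 the default toolchain that nvcc will pick up:
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 \
  --slave /usr/bin/g++ g++ /usr/bin/g++-10
gcc --version  # should now report 10.x
```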
I've spent 4 hours researching how to do this and failed; it always displays 11.2. So yes, in theory I "just have to downgrade", lol, but it's not that easy. I've been in this position before, wasting tons of time on things I know aren't going to work. How about you make a Dockerfile? I bet you know how to write one, and every other serious face-swap project provides one, which is why I'm used to WSL2 + Docker. What do you say? Otherwise it's a no-go for people using Windows, even with WSL2. Please.
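For reference, a minimal Dockerfile along those lines might look like the sketch below. It is untested and makes assumptions: that the nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04 image tag is available (Ubuntu 20.04 ships GCC 9, which avoids the GCC 11 header error in the traceback), that the project has a requirements.txt, and the repository URL is a placeholder.

```dockerfile
# Sketch only: base image with CUDA 11.1 + cuDNN 8 on Ubuntu 20.04,
# whose default GCC 9 is compatible with nvcc 11.1.
FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04

# Ubuntu 20.04 defaults: GCC 9, Python 3.8.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git python3.8 python3-pip python3.8-dev build-essential ninja-build \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace
# Placeholder URL: substitute the actual e4s repository.
RUN git clone https://github.com/<owner>/e4s.git . \
    && pip3 install -r requirements.txt  # assumes such a file exists
```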
This time I tried in Ubuntu using WSL2 and conda:
(e4s) user@LAPTOP-E4D0U85P:~/e4s$ python scripts/face_swap.py --source=example/input/faceswap/source.jpg --target=example/input/faceswap/target.jpg
Traceback (most recent call last):
File "/home/user/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1516, in _run_ninja_build
subprocess.run(
File "/home/user/miniconda3/envs/e4s/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "scripts/face_swap.py", line 16, in
from src.pretrained.gpen.gpen_demo import init_gpen_pretrained_model, GPEN_demo
File "/home/user/e4s/scripts/src/pretrained/gpen/gpen_demo.py", line 15, in
from src.pretrained.gpen.face_enhancement import FaceEnhancement
File "/home/user/e4s/scripts/src/pretrained/gpen/face_enhancement.py", line 11, in
from src.pretrained.gpen.face_model.face_gan import FaceGAN
File "/home/user/e4s/scripts/src/pretrained/gpen/face_model/face_gan.py", line 14, in
from src.pretrained.gpen.face_model.gpen_model import FullGenerator, FullGenerator_SR
File "/home/user/e4s/scripts/src/pretrained/gpen/face_model/gpen_model.py", line 16, in
from src.pretrained.gpen.face_model.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
File "/home/user/e4s/scripts/src/pretrained/gpen/face_model/op/__init__.py", line 1, in
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "/home/user/e4s/scripts/src/pretrained/gpen/face_model/op/fused_act.py", line 13, in
fused = load(
File "/home/user/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 969, in load
return _jit_compile(
File "/home/user/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1176, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/user/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1280, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/home/user/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1538, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused': [1/2] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/user/.local/lib/python3.8/site-packages/torch/include -isystem /home/user/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/user/.local/lib/python3.8/site-packages/torch/include/TH -isystem /home/user/.local/lib/python3.8/site-packages/torch/include/THC -isystem /home/user/miniconda3/envs/e4s/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -std=c++14 -c /home/user/e4s/scripts/src/pretrained/gpen/face_model/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
FAILED: fused_bias_act_kernel.cuda.o
/usr/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/user/.local/lib/python3.8/site-packages/torch/include -isystem /home/user/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /home/user/.local/lib/python3.8/site-packages/torch/include/TH -isystem /home/user/.local/lib/python3.8/site-packages/torch/include/THC -isystem /home/user/miniconda3/envs/e4s/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -std=c++14 -c /home/user/e4s/scripts/src/pretrained/gpen/face_model/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
435 | function(_Functor&& __f)
| ^
/usr/include/c++/11/bits/std_function.h:435:145: note: ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
530 | operator=(_Functor&& __f)
| ^
/usr/include/c++/11/bits/std_function.h:530:146: note: ‘_ArgTypes’
ninja: build stopped: subcommand failed.
(e4s) user@LAPTOP-E4D0U85P:~/e4s$ conda list
# packages in environment at /home/user/miniconda3/envs/e4s:
# Name                    Version        Build                Channel
_libgcc_mutex 0.1 main conda-forge
_openmp_mutex 5.1 1_gnu
blas 1.0 mkl conda-forge
brotlipy 0.7.0 py38h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.01.10 h06a4308_0
certifi 2023.5.7 py38h06a4308_0
cffi 1.15.1 py38h5eee18b_3
charset-normalizer 2.0.4 pyhd3eb1b0_0
cryptography 39.0.1 py38h9ce1e76_0
cudatoolkit 11.1.74 h6bb024c_0 nvidia
dlib 19.24.0 py38he2161a6_0 conda-forge
ffmpeg 4.3 hf484d3e_0 pytorch
flit-core 3.8.0 py38h06a4308_0
freetype 2.12.1 h4a9f257_0
giflib 5.2.1 h5eee18b_3
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
idna 3.4 py38h06a4308_0
imageio 2.28.1 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
jpeg 9e h5eee18b_1
lame 3.100 h7b6447c_0
lazy-loader 0.2 pypi_0 pypi
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libblas 3.9.0 12_linux64_mkl conda-forge
libcblas 3.9.0 12_linux64_mkl conda-forge
libdeflate 1.17 h5eee18b_0
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.16 h7f8727e_2
libidn2 2.3.2 h7f8727e_0
liblapack 3.9.0 12_linux64_mkl conda-forge
libpng 1.6.39 h5eee18b_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.16.0 h27cfd23_0
libtiff 4.5.0 h6a678d5_2
libunistring 0.9.10 h27cfd23_0
libuv 1.44.2 h5eee18b_0
libwebp 1.2.4 h11a3e52_1
libwebp-base 1.2.4 h5eee18b_1
lz4-c 1.9.4 h6a678d5_0
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py38h7f8727e_0
mkl_fft 1.3.1 py38hd3c417c_0
mkl_random 1.2.2 py38h51133e4_0
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
networkx 3.1 pypi_0 pypi
ninja 1.11.1 pypi_0 pypi
ninja-base 1.10.2 hd09550d_5
numpy 1.23.5 py38h14f4228_0
numpy-base 1.23.5 py38h31eccc5_0
opencv-python 4.7.0.72 pypi_0 pypi
openh264 2.1.1 h4ff587b_0
openssl 1.1.1t h7f8727e_0
packaging 23.1 pypi_0 pypi
pillow 9.4.0 py38h6a678d5_0
pip 23.0.1 py38h06a4308_0
pycparser 2.21 pyhd3eb1b0_0
pyopenssl 23.0.0 py38h06a4308_0
pysocks 1.7.1 py38h06a4308_0
python 3.8.16 h7a1cb2a_3
python_abi 3.8 2_cp38 conda-forge
pytorch 1.8.2 py3.8_cuda11.1_cudnn8.0.5_0 pytorch-lts
pytorch-mutex 1.0 cuda pytorch
pywavelets 1.4.1 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
readline 8.2 h5eee18b_0
requests 2.28.1 py38h06a4308_1
scikit-image 0.20.0 pypi_0 pypi
scipy 1.9.1 pypi_0 pypi
setuptools 65.6.3 py38h06a4308_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.1 h5eee18b_0
tifffile 2023.4.12 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
torch 1.10.1+cu113 pypi_0 pypi
torchaudio 0.10.1+cu113 pypi_0 pypi
torchvision 0.11.2+cu113 pypi_0 pypi
tqdm 4.65.0 pypi_0 pypi
typing_extensions 4.4.0 py38h06a4308_0
urllib3 1.26.15 py38h06a4308_0
wheel 0.38.4 py38h06a4308_0
xz 5.2.10 h5eee18b_1
zlib 1.2.13 h5eee18b_0
zstd 1.5.4 hc292b87_0