
[Bug]: CPU Inference vllm_ops not defined #4275

Closed

bsu3338 opened this issue Apr 22, 2024 · 5 comments
Labels
bug Something isn't working

Comments

bsu3338 commented Apr 22, 2024

Your current environment

Collecting environment information...
WARNING 04-22 21:56:34 ray_utils.py:46] Failed to import Ray with ModuleNotFoundError("No module named 'ray'"). For distributed inference, please install Ray with `pip install ray`.
PyTorch version: 2.2.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.0-20-amd64-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        46 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               16
On-line CPU(s) list:                  0-15
Vendor ID:                            GenuineIntel
Model name:                           Intel(R) Xeon(R) Gold 6234 CPU @ 3.30GHz
CPU family:                           6
Model:                                85
Thread(s) per core:                   1
Core(s) per socket:                   8
Socket(s):                            2
Stepping:                             7
CPU max MHz:                          4000.0000
CPU min MHz:                          1200.0000
BogoMIPS:                             6600.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            512 KiB (16 instances)
L1i cache:                            512 KiB (16 instances)
L2 cache:                             16 MiB (16 instances)
L3 cache:                             49.5 MiB (2 instances)
NUMA node(s):                         2
NUMA node0 CPU(s):                    0-7
NUMA node1 CPU(s):                    8-15
Vulnerability Gather data sampling:   Mitigation; Microcode
Vulnerability Itlb multihit:          KVM: Mitigation: VMX disabled
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.1+cpu
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

🐛 Describe the bug

Cloned the repository and built a Docker image according to the CPU instructions:

docker build -f Dockerfile.cpu -t vllm-cpu-env --shm-size=4g .

Sent sample requests (a model listing and a completion) to the server with PowerShell:

$body = @{
  model = "meta-llama/Meta-Llama-3-70B-Instruct"
  prompt = "Explain in detail the cause of the US Civil War"
  temperature = 0
}

$authHeader = @{
    "Authorization" = "Bearer token-1234"
}

Invoke-RestMethod -Uri http://10.10.0.154:8000/v1/models -Headers $authHeader -ContentType 'application/json'
Invoke-RestMethod -Uri http://10.10.0.154:8000/v1/completions -Method Post -Body ($body | ConvertTo-Json) -Headers $authHeader -ContentType 'application/json'
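For reference, an equivalent pair of requests from a Unix shell would look roughly like this (a sketch; same host, port, and token as in the PowerShell example above):

curl http://10.10.0.154:8000/v1/models \
  -H 'Authorization: Bearer token-1234'
curl http://10.10.0.154:8000/v1/completions \
  -H 'Authorization: Bearer token-1234' \
  -H 'Content-Type: application/json' \
  -d '{"model": "meta-llama/Meta-Llama-3-70B-Instruct", "prompt": "Explain in detail the cause of the US Civil War", "temperature": 0}'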

Used the following Docker Compose file:

services:
    vllm-cpu-env:
      image: vllm-cpu-env
      command: ["python3","-m","vllm.entrypoints.openai.api_server", "--model", "meta-llama/Meta-Llama-3-70B-Instruct","--api-key","token-1234","--trust-remote-code","--dtype","auto"]
      ports:
        - 8000:8000
      volumes:
        - /srv/huggingface:/root/.cache/huggingface
      environment:
        - VLLM_TARGET_DEVICE=cpu
        - HUGGING_FACE_HUB_TOKEN=hf_RANDOM
        - VLLM_CPU_KVCACHE_SPACE=40

Got the following error:

vllm-cpu-env-1  | INFO 04-22 21:49:37 pynccl_utils.py:18] It is expected if you are not running on NVIDIA GPUs.
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43] Engine background task failed
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43] Traceback (most recent call last):
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     task.result()
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     has_requests_in_progress = await asyncio.wait_for(
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return fut.result()
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 470, in engine_step
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     request_outputs = await self.engine.step_async()
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 213, in step_async
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     output = await self.model_executor.execute_model_async(
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/executor/cpu_executor.py", line 113, in execute_model_async
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     output = await make_async(self.driver_worker.execute_model)(
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     result = self.fn(*self.args, **self.kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return func(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/worker/cpu_worker.py", line 289, in execute_model
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     output = self.model_runner.execute_model(seq_group_metadata_list,
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return func(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/worker/cpu_model_runner.py", line 418, in execute_model
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     hidden_states = model_executable(**execute_model_kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return forward_call(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 360, in forward
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     hidden_states = self.model(input_ids, positions, kv_caches,
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return forward_call(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 286, in forward
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     hidden_states, residual = layer(
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return forward_call(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 224, in forward
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     hidden_states = self.input_layernorm(hidden_states)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     return forward_call(*args, **kwargs)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/model_executor/layers/layernorm.py", line 60, in forward
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     ops.rms_norm(
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]   File "/workspace/vllm/vllm/_custom_ops.py", line 106, in rms_norm
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43]     vllm_ops.rms_norm(out, input, weight, epsilon)
vllm-cpu-env-1  | ERROR 04-22 21:49:37 async_llm_engine.py:43] NameError: name 'vllm_ops' is not defined
vllm-cpu-env-1  | INFO 04-22 21:49:37 async_llm_engine.py:154] Aborted request cmpl-d8d24d7bd31b4fe9b781df2330783cb6-0.
vllm-cpu-env-1  | Exception in callback functools.partial(<function _raise_exception_on_finish at 0x7f49bbed3f40>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f49b3aabbb0>>)
vllm-cpu-env-1  | handle: <Handle functools.partial(<function _raise_exception_on_finish at 0x7f49bbed3f40>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f49b3aabbb0>>)>
vllm-cpu-env-1  | Traceback (most recent call last):
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
vllm-cpu-env-1  |     task.result()
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
vllm-cpu-env-1  |     has_requests_in_progress = await asyncio.wait_for(
vllm-cpu-env-1  |   File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
vllm-cpu-env-1  |     return fut.result()
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 470, in engine_step
vllm-cpu-env-1  |     request_outputs = await self.engine.step_async()
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 213, in step_async
vllm-cpu-env-1  |     output = await self.model_executor.execute_model_async(
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/executor/cpu_executor.py", line 113, in execute_model_async
vllm-cpu-env-1  |     output = await make_async(self.driver_worker.execute_model)(
vllm-cpu-env-1  |   File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
vllm-cpu-env-1  |     result = self.fn(*self.args, **self.kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-cpu-env-1  |     return func(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/worker/cpu_worker.py", line 289, in execute_model
vllm-cpu-env-1  |     output = self.model_runner.execute_model(seq_group_metadata_list,
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-cpu-env-1  |     return func(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/worker/cpu_model_runner.py", line 418, in execute_model
vllm-cpu-env-1  |     hidden_states = model_executable(**execute_model_kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 360, in forward
vllm-cpu-env-1  |     hidden_states = self.model(input_ids, positions, kv_caches,
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 286, in forward
vllm-cpu-env-1  |     hidden_states, residual = layer(
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 224, in forward
vllm-cpu-env-1  |     hidden_states = self.input_layernorm(hidden_states)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/model_executor/layers/layernorm.py", line 60, in forward
vllm-cpu-env-1  |     ops.rms_norm(
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/_custom_ops.py", line 106, in rms_norm
vllm-cpu-env-1  |     vllm_ops.rms_norm(out, input, weight, epsilon)
vllm-cpu-env-1  | NameError: name 'vllm_ops' is not defined
vllm-cpu-env-1  |
vllm-cpu-env-1  | The above exception was the direct cause of the following exception:
vllm-cpu-env-1  |
vllm-cpu-env-1  | Traceback (most recent call last):
vllm-cpu-env-1  |   File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 45, in _raise_exception_on_finish
vllm-cpu-env-1  |     raise AsyncEngineDeadError(
vllm-cpu-env-1  | vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
vllm-cpu-env-1  | INFO:     10.50.148.42:54639 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
vllm-cpu-env-1  | ERROR:    Exception in ASGI application
vllm-cpu-env-1  |   + Exception Group Traceback (most recent call last):
vllm-cpu-env-1  |   |   File "/usr/local/lib/python3.10/dist-packages/starlette/_utils.py", line 87, in collapse_excgroups
vllm-cpu-env-1  |   |     yield
vllm-cpu-env-1  |   |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 190, in __call__
vllm-cpu-env-1  |   |     async with anyio.create_task_group() as task_group:
vllm-cpu-env-1  |   |   File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
vllm-cpu-env-1  |   |     raise BaseExceptionGroup(
vllm-cpu-env-1  |   | exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
vllm-cpu-env-1  |   +-+---------------- 1 ----------------
vllm-cpu-env-1  |     | Traceback (most recent call last):
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
vllm-cpu-env-1  |     |     result = await app(  # type: ignore[func-returns-value]
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
vllm-cpu-env-1  |     |     return await self.app(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
vllm-cpu-env-1  |     |     await super().__call__(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 123, in __call__
vllm-cpu-env-1  |     |     await self.middleware_stack(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
vllm-cpu-env-1  |     |     raise exc
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
vllm-cpu-env-1  |     |     await self.app(scope, receive, _send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 189, in __call__
vllm-cpu-env-1  |     |     with collapse_excgroups():
vllm-cpu-env-1  |     |   File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
vllm-cpu-env-1  |     |     self.gen.throw(typ, value, traceback)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/_utils.py", line 93, in collapse_excgroups
vllm-cpu-env-1  |     |     raise exc
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 191, in __call__
vllm-cpu-env-1  |     |     response = await self.dispatch_func(request, call_next)
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/entrypoints/openai/api_server.py", line 136, in authentication
vllm-cpu-env-1  |     |     return await call_next(request)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 165, in call_next
vllm-cpu-env-1  |     |     raise app_exc
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 151, in coro
vllm-cpu-env-1  |     |     await self.app(scope, receive_or_disconnect, send_no_error)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 85, in __call__
vllm-cpu-env-1  |     |     await self.app(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 65, in __call__
vllm-cpu-env-1  |     |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
vllm-cpu-env-1  |     |     raise exc
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
vllm-cpu-env-1  |     |     await app(scope, receive, sender)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 756, in __call__
vllm-cpu-env-1  |     |     await self.middleware_stack(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 776, in app
vllm-cpu-env-1  |     |     await route.handle(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 297, in handle
vllm-cpu-env-1  |     |     await self.app(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 77, in app
vllm-cpu-env-1  |     |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
vllm-cpu-env-1  |     |     raise exc
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
vllm-cpu-env-1  |     |     await app(scope, receive, sender)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 72, in app
vllm-cpu-env-1  |     |     response = await func(request)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 278, in app
vllm-cpu-env-1  |     |     raw_response = await run_endpoint_function(
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 191, in run_endpoint_function
vllm-cpu-env-1  |     |     return await dependant.call(**values)
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/entrypoints/openai/api_server.py", line 103, in create_completion
vllm-cpu-env-1  |     |     generator = await openai_serving_completion.create_completion(
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/entrypoints/openai/serving_completion.py", line 153, in create_completion
vllm-cpu-env-1  |     |     async for i, res in result_generator:
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/utils.py", line 228, in consumer
vllm-cpu-env-1  |     |     raise item
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/utils.py", line 213, in producer
vllm-cpu-env-1  |     |     async for item in iterator:
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 661, in generate
vllm-cpu-env-1  |     |     raise e
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 655, in generate
vllm-cpu-env-1  |     |     async for request_output in stream:
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 77, in __anext__
vllm-cpu-env-1  |     |     raise result
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
vllm-cpu-env-1  |     |     task.result()
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
vllm-cpu-env-1  |     |     has_requests_in_progress = await asyncio.wait_for(
vllm-cpu-env-1  |     |   File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
vllm-cpu-env-1  |     |     return fut.result()
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 470, in engine_step
vllm-cpu-env-1  |     |     request_outputs = await self.engine.step_async()
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 213, in step_async
vllm-cpu-env-1  |     |     output = await self.model_executor.execute_model_async(
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/executor/cpu_executor.py", line 113, in execute_model_async
vllm-cpu-env-1  |     |     output = await make_async(self.driver_worker.execute_model)(
vllm-cpu-env-1  |     |   File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
vllm-cpu-env-1  |     |     result = self.fn(*self.args, **self.kwargs)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-cpu-env-1  |     |     return func(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/worker/cpu_worker.py", line 289, in execute_model
vllm-cpu-env-1  |     |     output = self.model_runner.execute_model(seq_group_metadata_list,
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-cpu-env-1  |     |     return func(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/worker/cpu_model_runner.py", line 418, in execute_model
vllm-cpu-env-1  |     |     hidden_states = model_executable(**execute_model_kwargs)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 360, in forward
vllm-cpu-env-1  |     |     hidden_states = self.model(input_ids, positions, kv_caches,
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 286, in forward
vllm-cpu-env-1  |     |     hidden_states, residual = layer(
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 224, in forward
vllm-cpu-env-1  |     |     hidden_states = self.input_layernorm(hidden_states)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/model_executor/layers/layernorm.py", line 60, in forward
vllm-cpu-env-1  |     |     ops.rms_norm(
vllm-cpu-env-1  |     |   File "/workspace/vllm/vllm/_custom_ops.py", line 106, in rms_norm
vllm-cpu-env-1  |     |     vllm_ops.rms_norm(out, input, weight, epsilon)
vllm-cpu-env-1  |     | NameError: name 'vllm_ops' is not defined
vllm-cpu-env-1  |     +------------------------------------
vllm-cpu-env-1  |
vllm-cpu-env-1  | During handling of the above exception, another exception occurred:
vllm-cpu-env-1  |
vllm-cpu-env-1  | Traceback (most recent call last):
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
vllm-cpu-env-1  |     result = await app(  # type: ignore[func-returns-value]
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
vllm-cpu-env-1  |     return await self.app(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
vllm-cpu-env-1  |     await super().__call__(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 123, in __call__
vllm-cpu-env-1  |     await self.middleware_stack(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
vllm-cpu-env-1  |     raise exc
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
vllm-cpu-env-1  |     await self.app(scope, receive, _send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 189, in __call__
vllm-cpu-env-1  |     with collapse_excgroups():
vllm-cpu-env-1  |   File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
vllm-cpu-env-1  |     self.gen.throw(typ, value, traceback)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_utils.py", line 93, in collapse_excgroups
vllm-cpu-env-1  |     raise exc
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 191, in __call__
vllm-cpu-env-1  |     response = await self.dispatch_func(request, call_next)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/entrypoints/openai/api_server.py", line 136, in authentication
vllm-cpu-env-1  |     return await call_next(request)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 165, in call_next
vllm-cpu-env-1  |     raise app_exc
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/base.py", line 151, in coro
vllm-cpu-env-1  |     await self.app(scope, receive_or_disconnect, send_no_error)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 85, in __call__
vllm-cpu-env-1  |     await self.app(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 65, in __call__
vllm-cpu-env-1  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
vllm-cpu-env-1  |     raise exc
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
vllm-cpu-env-1  |     await app(scope, receive, sender)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 756, in __call__
vllm-cpu-env-1  |     await self.middleware_stack(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 776, in app
vllm-cpu-env-1  |     await route.handle(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 297, in handle
vllm-cpu-env-1  |     await self.app(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 77, in app
vllm-cpu-env-1  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
vllm-cpu-env-1  |     raise exc
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
vllm-cpu-env-1  |     await app(scope, receive, sender)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 72, in app
vllm-cpu-env-1  |     response = await func(request)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 278, in app
vllm-cpu-env-1  |     raw_response = await run_endpoint_function(
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 191, in run_endpoint_function
vllm-cpu-env-1  |     return await dependant.call(**values)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/entrypoints/openai/api_server.py", line 103, in create_completion
vllm-cpu-env-1  |     generator = await openai_serving_completion.create_completion(
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/entrypoints/openai/serving_completion.py", line 153, in create_completion
vllm-cpu-env-1  |     async for i, res in result_generator:
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/utils.py", line 228, in consumer
vllm-cpu-env-1  |     raise item
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/utils.py", line 213, in producer
vllm-cpu-env-1  |     async for item in iterator:
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 661, in generate
vllm-cpu-env-1  |     raise e
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 655, in generate
vllm-cpu-env-1  |     async for request_output in stream:
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 77, in __anext__
vllm-cpu-env-1  |     raise result
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
vllm-cpu-env-1  |     task.result()
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
vllm-cpu-env-1  |     has_requests_in_progress = await asyncio.wait_for(
vllm-cpu-env-1  |   File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
vllm-cpu-env-1  |     return fut.result()
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 470, in engine_step
vllm-cpu-env-1  |     request_outputs = await self.engine.step_async()
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 213, in step_async
vllm-cpu-env-1  |     output = await self.model_executor.execute_model_async(
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/executor/cpu_executor.py", line 113, in execute_model_async
vllm-cpu-env-1  |     output = await make_async(self.driver_worker.execute_model)(
vllm-cpu-env-1  |   File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
vllm-cpu-env-1  |     result = self.fn(*self.args, **self.kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-cpu-env-1  |     return func(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/worker/cpu_worker.py", line 289, in execute_model
vllm-cpu-env-1  |     output = self.model_runner.execute_model(seq_group_metadata_list,
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-cpu-env-1  |     return func(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/worker/cpu_model_runner.py", line 418, in execute_model
vllm-cpu-env-1  |     hidden_states = model_executable(**execute_model_kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 360, in forward
vllm-cpu-env-1  |     hidden_states = self.model(input_ids, positions, kv_caches,
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 286, in forward
vllm-cpu-env-1  |     hidden_states, residual = layer(
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/model_executor/models/llama.py", line 224, in forward
vllm-cpu-env-1  |     hidden_states = self.input_layernorm(hidden_states)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
vllm-cpu-env-1  |     return self._call_impl(*args, **kwargs)
vllm-cpu-env-1  |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
vllm-cpu-env-1  |     return forward_call(*args, **kwargs)
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/model_executor/layers/layernorm.py", line 60, in forward
vllm-cpu-env-1  |     ops.rms_norm(
vllm-cpu-env-1  |   File "/workspace/vllm/vllm/_custom_ops.py", line 106, in rms_norm
vllm-cpu-env-1  |     vllm_ops.rms_norm(out, input, weight, epsilon)
vllm-cpu-env-1  | NameError: name 'vllm_ops' is not defined

bsu3338 added the bug label Apr 22, 2024
zhouyuan (Contributor) commented

@bsu3338
I ran into a similar issue in a vLLM development environment. It happens when there is a folder named vllm in the current directory, which shadows the installed package and breaks imports of the compiled extensions such as vllm_ops.
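A quick way to confirm which copy of vllm Python is importing (a minimal sketch; /workspace/vllm is the source checkout path seen in the traceback above):

import vllm
# If this prints a path under /workspace/vllm instead of site-packages,
# the local source tree is shadowing the installed package, so the
# compiled extension behind vllm_ops never gets loaded.
print(vllm.__file__)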

bsu3338 (Author) commented Apr 23, 2024

@zhouyuan
This looks to be how the Docker image is laid out. Here is what I did to fix it, but surely there is a better way.

services:
    vllm-cpu-env:
      image: vllm-cpu-env
      command: ["python3","-m","vllm.entrypoints.openai.api_server", "--model", "meta-llama/Meta-Llama-3-70B-Instruct","--api-key","token-1234","--trust-remote-code","--dtype","auto"]
      ports:
        - 8000:8000
      volumes:
        - /srv/huggingface:/root/.cache/huggingface
        - /srv/empty:/workspace/vllm/
      environment:
        - VLLM_TARGET_DEVICE=cpu
        - HUGGING_FACE_HUB_TOKEN=hf_RANDOM
        - VLLM_CPU_KVCACHE_SPACE=40
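An alternative that would avoid the empty bind mount is to change the container's working directory, so that the source checkout at /workspace/vllm is no longer on Python's import path (an untested sketch; assumes vllm is also installed into site-packages inside the image):

services:
    vllm-cpu-env:
      image: vllm-cpu-env
      # Run from a directory that does not contain the vllm source tree,
      # so "import vllm" resolves to the installed package.
      working_dir: /root
      # remaining keys (command, ports, volumes, environment) as above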

danthegoodman1 commented
The docker compose workaround above (with the empty bind mount) worked for me as well

eartvit commented May 8, 2024

The problem also occurs in a Kubernetes container (e.g. on Red Hat OpenShift).

I noticed that, for some reason, the library (the compiled .egg) does not get properly installed inside the container, so Python can't find it.

The workaround I found was to declare the WORKDIR again, as the location of the .egg, right before the ENTRYPOINT.
I stored the built container image on quay.io and successfully tested it with Mistral-7B-Instruct-v0.2 on a Red Hat OpenShift 4.14 cluster.

Below is the Containerfile I used for the build:

FROM registry.access.redhat.com/ubi9/python-311

USER 0

RUN yum upgrade -y && yum install -y \
    make \
    findutils \
    wget numactl-libs \
    libgcc gcc gcc-c++ \
    && yum clean all \
    && rm -rf /var/cache/yum/*


##############
# vLLM Layer #
##############

WORKDIR /opt/app-root/src

USER 1001

# Quote the version specifier so the shell does not treat ">" as a redirect
RUN pip install --upgrade pip \
    && pip install wheel packaging ninja "setuptools>=49.4.0" numpy

COPY --chown=1001:0 ./ /opt/app-root/src

RUN pip install -v --no-cache-dir -r requirements-cpu.txt --extra-index-url https://download.pytorch.org/whl/cpu

# Fix permissions to support pip in OpenShift environments
RUN chmod -R g+w /opt/app-root/lib/python3.11/site-packages && \
    fix-permissions /opt/app-root -P


RUN VLLM_TARGET_DEVICE=cpu python3 setup.py install

EXPOSE 8000 8080

WORKDIR /opt/app-root/lib64/python3.11/site-packages/vllm-0.4.2+cpu-py3.11-linux-x86_64.egg

ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server"]

As a side note, the compiler installed inside the image was gcc-11.4.1 (the default for RHEL 9 based images).
Obviously, the above solution is far from optimal, since one needs to know the resulting location of the .egg file. Nevertheless, until a permanent fix is provided, I think it's a viable option.
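One way to avoid hard-coding the egg location would be to resolve it with a glob at build time and point WORKDIR at a stable symlink (an untested sketch; assumes exactly one matching .egg directory):

# Create a fixed path that points at whatever vllm egg was installed,
# so WORKDIR does not need to embed the version string.
RUN ln -s /opt/app-root/lib64/python3.11/site-packages/vllm-*.egg /opt/app-root/vllm-egg
WORKDIR /opt/app-root/vllm-egg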

DarkLight1337 (Member) commented

Fixed by #5009
