
Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype. #19

Open
ZeeMenng opened this issue Sep 28, 2024 · 5 comments


@ZeeMenng

I tried every weight_type option; none of them work.

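For context, the underlying failure is that PyTorch's MPS backend (Apple Silicon) cannot hold float8 tensors, so moving float8_e4m3fn weights to `"mps"` fails. One workaround is to remap unsupported dtypes to float16 before the transfer. The helper below is a hypothetical sketch (not part of the wrapper); it works on dtype name strings so it can be shown without a torch install, but real code would pass `torch.dtype` objects:

```python
# Hypothetical helper: pick a dtype the target backend can actually hold.
# Assumption: MPS has no float8 support, so float8 weights must be upcast
# (e.g. to float16) before being moved to the device.
FLOAT8_NAMES = {"float8_e4m3fn", "float8_e5m2"}

def safe_dtype_for_device(dtype_name: str, device_type: str) -> str:
    """Return a dtype name supported on device_type."""
    base = dtype_name.replace("torch.", "")
    if device_type == "mps" and base in FLOAT8_NAMES:
        return "float16"  # upcast: MPS cannot represent float8 tensors
    return base

print(safe_dtype_for_device("torch.float8_e4m3fn", "mps"))   # float16
print(safe_dtype_for_device("torch.float8_e4m3fn", "cuda"))  # float8_e4m3fn
```

With a mapping like this applied before `model.to(device)`, the conversion error would not be hit, at the cost of doubling the weight memory on MPS.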
@wailovet
Contributor

Can you try turning off fp8_fast_mode?

@ZeeMenng
Author

That doesn't help. With it off I get RuntimeError: unsupported scalarType instead:
got prompt
transformer type: 5b
GGUF: False
model weight dtype: torch.float8_e4m3fn manual cast dtype: torch.float16
Encoded latents shape: torch.Size([1, 1, 16, 60, 90])
/opt/homebrew/Caskroom/miniconda/base/lib/python3.12/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: huggingface/transformers#31884
  warnings.warn(
Requested to load SD3ClipModel_
Loading 1 new model
loaded completely 0.0 4541.693359375 True
!!! Exception during processing !!! unsupported scalarType
Traceback (most recent call last):
  File "/Users/ZeeMenng/Project/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/ZeeMenng/Project/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/ZeeMenng/Project/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/ZeeMenng/Project/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/ZeeMenng/Project/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/nodes.py", line 841, in process
    autocast_context = torch.autocast(mm.get_autocast_device(device)) if autocastcondition else nullcontext()
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 229, in __init__
    dtype = torch.get_autocast_dtype(device_type)
RuntimeError: unsupported scalarType
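For context on this second failure: the traceback shows `torch.autocast(...)` raising during construction, because the installed PyTorch build has no autocast dtype registered for the MPS backend. A defensive pattern is to fall back to a no-op context when autocast cannot be built for the device. This is a hypothetical guard, not the wrapper's actual fix:

```python
from contextlib import nullcontext

def autocast_or_null(device_type: str):
    """Return torch.autocast(device_type) if it can be built, else a no-op context.

    Hypothetical guard: on PyTorch builds where autocast is not wired up
    for a backend -- as with "mps" in the traceback above -- torch.autocast
    raises while being constructed, so we catch the error and fall back
    to contextlib.nullcontext().
    """
    try:
        import torch  # deferred so the sketch also runs without torch installed
        return torch.autocast(device_type)
    except Exception:  # ImportError, or RuntimeError: unsupported scalarType
        return nullcontext()

# Usage: the computation runs either way, just without mixed precision
# when autocast is unavailable for the device.
with autocast_or_null("mps"):
    result = sum(range(10))  # stand-in for the actual model forward pass
```

The trade-off is that the fallback silently disables mixed precision rather than failing loudly, so logging a warning in the except branch may be preferable in practice.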

@wailovet
Contributor

Try updating ComfyUI-CogVideoXWrapper.

@ZeeMenng
Author

Everything is already updated, and I get the same error. I've tried all sorts of things with no luck, which is strange.

I'm on an M1 Pro. I tried Python 3.12 and 3.11 separately; neither works. I don't know whether some dependency version is the problem or something else.

@wailovet
Contributor

It looks like the support is currently broken; see kijai/ComfyUI-CogVideoXWrapper#59.
