Feature Description
Continuing discussion at #8340.
Since #8075, MMQ is now the default for Nvidia GPUs with int8 tensor core support (vs cuBLAS).
There are flags to force MMQ or cuBLAS; if neither is specified, llama.cpp picks automatically. However, as far as I can see, there is no indication of which one is picked when the choice is automatic.
This FR is to request some sort of output showing which is being picked, either at compile time or run time - a short "using MMQ kernels" or "using cuBLAS kernels" message would suffice, or even a line during compilation (if that's when the decision happens).
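For context, here is a minimal sketch of how I understand the force flags and the automatic pick relate. GGML_CUDA_FORCE_MMQ and GGML_CUDA_FORCE_CUBLAS are the compile definitions I believe the force flags map to (worth verifying against the current CMakeLists), and pick_mmq_automatically() is a made-up stand-in for whatever heuristic the real code uses:

```cpp
// Hedged sketch, not actual llama.cpp code: how the build-time force
// flags and the automatic pick could interact. All names below are
// illustrative unless noted.
static bool pick_mmq_automatically(int compute_capability, int batch_size) {
    // Placeholder heuristic only: MMQ tends to win at small batch sizes
    // on GPUs with int8 tensor cores (Turing, CC 7.5, or newer). The
    // real rule lives somewhere in ggml-cuda.
    return compute_capability >= 75 && batch_size <= 64;
}

static bool use_mmq_kernels(int compute_capability, int batch_size) {
#if defined(GGML_CUDA_FORCE_MMQ)
    (void) compute_capability; (void) batch_size;
    return true;   // user forced MMQ at build time
#elif defined(GGML_CUDA_FORCE_CUBLAS)
    (void) compute_capability; (void) batch_size;
    return false;  // user forced cuBLAS at build time
#else
    // Automatic: this is the case where nothing tells the user which
    // path was taken.
    return pick_mmq_automatically(compute_capability, batch_size);
#endif
}
```

It is the final branch - the automatic one - that this FR is about.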
Motivation
MMQ is faster at smaller batch sizes but slower at higher batch sizes.
An indication of whether MMQ or cuBLAS is in use would help explain why certain batch sizes are suboptimal for speed, and whether it is worth forcing MMQ or cuBLAS at compile time.
E.g. "my speeds are slow at higher batch sizes and I see MMQ is enabled, maybe I should disable it", or "I'd like to optimize speed at lower batch sizes and I see that MMQ is not enabled, maybe I'll try forcing it on".
Possible Implementation
N/A, not familiar enough with the codebase to suggest where this could be added.
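That said, purely as an illustration of the kind of message meant (a sketch under assumed names, not actual llama.cpp code), a run-time log at the decision point could look like:

```cpp
// Hedged sketch, not actual llama.cpp code. Assumes the MMQ-vs-cuBLAS
// decision is made per matrix multiplication somewhere in the CUDA
// dispatch; the function name and log text are illustrative.
#include <cstdio>

static void log_mm_kernel_choice(bool use_mmq) {
    // The automatic pick can change with batch size, so log only when
    // the choice flips rather than on every mul_mat call.
    static int last_choice = -1; // -1 = nothing logged yet
    if (last_choice != (int) use_mmq) {
        fprintf(stderr, "ggml_cuda: using %s kernels\n", use_mmq ? "MMQ" : "cuBLAS");
        last_choice = (int) use_mmq;
    }
}
```

Logging only on change would keep the output to a line or two per run; where exactly such a call would live in ggml-cuda is something I'd leave to the maintainers.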