VCVTNE2PS2BF16

VCVTNE2PS2BF16 — Convert Two Packed Single Data to One Packed BF16 Data

| Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description |
| --- | --- | --- | --- | --- |
| EVEX.128.F2.0F38.W0 72 /r VCVTNE2PS2BF16 xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst | A | V/V | AVX512VL AVX512_BF16 | Convert packed single data from xmm2 and xmm3/m128/m32bcst to packed BF16 data in xmm1 with writemask k1. |
| EVEX.256.F2.0F38.W0 72 /r VCVTNE2PS2BF16 ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst | A | V/V | AVX512VL AVX512_BF16 | Convert packed single data from ymm2 and ymm3/m256/m32bcst to packed BF16 data in ymm1 with writemask k1. |
| EVEX.512.F2.0F38.W0 72 /r VCVTNE2PS2BF16 zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst | A | V/V | AVX512F AVX512_BF16 | Convert packed single data from zmm2 and zmm3/m512/m32bcst to packed BF16 data in zmm1 with writemask k1. |

Instruction Operand Encoding

| Op/En | Tuple | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
| --- | --- | --- | --- | --- | --- |
| A | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | NA |

Description

This instruction converts two SIMD registers of packed single data into a single register of packed BF16 data.

This instruction does not support memory fault suppression.

“Round to nearest even” rounding mode is used. Output denormals are always flushed to zero and input denormals are always treated as zero. MXCSR is not consulted nor updated.
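
As an illustration of this numeric behavior, the scalar C sketch below models an FP32-to-BF16 conversion with round-to-nearest-even rounding, denormal inputs treated as zero, and NaNs quieted. It is a model written for this page under those assumptions, not Intel reference code; the authoritative definition is the convert helper referenced in the Operation section below.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical scalar model of the conversion behavior described above:
 * round-to-nearest-even on the discarded low 16 bits, denormal inputs
 * treated as zero, MXCSR ignored. Not the Intel reference code. */
static uint16_t fp32_to_bf16_ne(float f) {
    uint32_t x;
    memcpy(&x, &f, sizeof x);

    uint32_t exp = (x >> 23) & 0xFF;

    /* Zero or input denormal: return a sign-preserving zero. */
    if (exp == 0)
        return (uint16_t)((x >> 16) & 0x8000);

    /* NaN: truncate and force a quiet NaN by setting the mantissa MSB. */
    if (exp == 0xFF && (x & 0x7FFFFF))
        return (uint16_t)((x >> 16) | 0x0040);

    /* Round to nearest even: add 0x7FFF plus the LSB of the kept part,
     * then keep the upper 16 bits. Infinities pass through unchanged. */
    uint32_t lsb = (x >> 16) & 1;
    x += 0x7FFF + lsb;
    return (uint16_t)(x >> 16);
}
```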

Operation

VCVTNE2PS2BF16 dest, src1, src2

VL = (128, 256, 512)
KL = VL/16
origdest ← dest
FOR i ← 0 to KL-1:
    IF k1[ i ] or *no writemask*:
        IF src2 is memory and evex.b == 1:
            t ← src2.fp32[0]
        ELSE IF i < KL/2:
            t ← src2.fp32[ i ]
        ELSE:
            t ← src1.fp32[ i-KL/2 ]
        // see VCVTNEPS2BF16 for definition of the convert helper function
        dest.word[i] ← convert_fp32_to_bfloat16(t)
    ELSE IF *zeroing*:
        dest.word[ i ] ← 0
    ELSE:  // merge masking, dest element unchanged
        dest.word[ i ] ← origdest.word[ i ]
DEST[MAXVL-1:VL] ← 0
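
To make the element ordering concrete, the following is a minimal plain-C model of the 128-bit form (VL = 128, KL = 8), ignoring masking and embedded broadcast. It reuses the fp32_to_bf16_ne sketch from the Description section and is an illustration, not part of the Intel reference.

```c
/* Hypothetical plain-C model of the 128-bit form (VL = 128, KL = 8),
 * without masking or embedded broadcast. The lower half of dest comes
 * from src2 and the upper half from src1, matching the pseudocode above. */
static void vcvtne2ps2bf16_128_model(uint16_t dest[8],
                                     const float src1[4],
                                     const float src2[4]) {
    for (int i = 0; i < 8; i++) {
        float t = (i < 4) ? src2[i] : src1[i - 4];
        dest[i] = fp32_to_bf16_ne(t);  /* helper sketched under Description */
    }
}
```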

Intel C/C++ Compiler Intrinsic Equivalent

VCVTNE2PS2BF16 __m128bh _mm_cvtne2ps_pbh (__m128, __m128);
VCVTNE2PS2BF16 __m128bh _mm_mask_cvtne2ps_pbh (__m128bh, __mmask8, __m128, __m128);
VCVTNE2PS2BF16 __m128bh _mm_maskz_cvtne2ps_pbh (__mmask8, __m128, __m128);
VCVTNE2PS2BF16 __m256bh _mm256_cvtne2ps_pbh (__m256, __m256);
VCVTNE2PS2BF16 __m256bh _mm256_mask_cvtne2ps_pbh (__m256bh, __mmask16, __m256, __m256);
VCVTNE2PS2BF16 __m256bh _mm256_maskz_cvtne2ps_pbh (__mmask16, __m256, __m256);
VCVTNE2PS2BF16 __m512bh _mm512_cvtne2ps_pbh (__m512, __m512);
VCVTNE2PS2BF16 __m512bh _mm512_mask_cvtne2ps_pbh (__m512bh, __mmask32, __m512, __m512);
VCVTNE2PS2BF16 __m512bh _mm512_maskz_cvtne2ps_pbh (__mmask32, __m512, __m512);
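
Below is a minimal usage sketch of the 512-bit intrinsics listed above, written for this page rather than taken from the source. It assumes <immintrin.h> and a compiler targeting AVX512_BF16 (e.g. -mavx512bf16 on GCC/Clang). Per the Operation section, the second source (argument b here) fills the low 16 words of the result and the first source (argument a) fills the high 16 words.

```c
#include <immintrin.h>

/* Unmasked form: 32 floats in, 32 BF16 words out. Assumes the a -> src1,
 * b -> src2 operand mapping, so b lands in the low 16 result words. */
__m512bh cvt2_ps_to_bf16(__m512 a, __m512 b) {
    return _mm512_cvtne2ps_pbh(a, b);
}

/* Zero-masking form: result words whose mask bit is 0 are zeroed. */
__m512bh cvt2_ps_to_bf16_maskz(__mmask32 k, __m512 a, __m512 b) {
    return _mm512_maskz_cvtne2ps_pbh(k, a, b);
}
```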

SIMD Floating-Point Exceptions

None.

Other Exceptions

See Exceptions Type E4NF.


Source: Intel® Architecture Instruction Set Extensions and Future Features Programming Reference (May 2019)
Generated: 28-5-2019