Is it possible to vectorize non-trivial loop in C with SIMD? (multiple length 5 double-precision dot products reusing one input)

I have performance-critical C code in which more than 90% of the time is spent on one basic operation:

The C code I am using is:

static void function(double *X1, double *Y1, double *X2, double *Y2, double *output) {
    double Z1, Z2;
    int i, j, k;
    for (i = 0, j = 0; i < 25; j++) { // sweep Y
        Z1 = 0;
        Z2 = 0;
        for (k = 0; k < 5; k++, i++) { // sweep X
            Z1 += X1[k] * Y1[i];
            Z2 += X2[k] * Y2[i];
        }
        output[j] = Z1*Z2;
    }
}

The lengths are fixed (X is 5, Y is 25, output is 5). I have tried everything I know to make it faster. When I compile this code with clang using -O3 -march=native -Rpass-analysis=loop-vectorize -Rpass=loop-vectorize -Rpass-missed=loop-vectorize, I get this message:

remark: the cost-model indicates that vectorization is not beneficial [-Rpass-missed=loop-vectorize]

But I think the way to speed this up is to somehow use SIMD. Any suggestions would be appreciated.

By loading the lower and upper halves of a register separately, you can process at least 2 elements concurrently. Unrolling i by two would probably give a small edge...

The __restrict keyword (where applicable) allows the five constant coefficients X1[0..4] and X2[0..4] to be preloaded. If X1/X2 partially alias output, it is better anyway to let the compiler know about it (by using the same arrays). That way, with the full function unrolled, the compiler does not needlessly reload any elements.

typedef double __attribute__((vector_size(16))) f2;

void function2(double *X1, double *Y1, double *X2, double *Y2, double *__restrict output) {
    int i = 0, j, k;
    for (j = 0; j < 5; j++) { // sweep Y
        f2 Z12 = {0.0, 0.0};
        for (k = 0; k < 5; k++, i++) { 
            f2 Y12 = {Y1[i], Y2[i]};
            f2 X12 = {X1[k], X2[k]};
            Z12 += X12 * Y12;
        }
        output[j] = Z12[0]*Z12[1];
    }
}

If possible, consider interleaving Y1 with Y2, and X1 with X2 (a sketch of how a caller could build that interleaved layout follows the code below):

void function2(f2 const *X12, f2 const *Y12, double *output) {
    int j, k;
    for (j = 0; j < 5; j++) { // sweep Y
        f2 Z12 = X12[0] * Y12[0];
        for (k = 1; k < 5; k++) {
            Z12 += X12[k] * Y12[k];
        }
        Y12 += 5;                  // advance to the next group of 5 Y values
        output[j] = Z12[0]*Z12[1]; // possibly output[j * 2] if the outputs are interleaved too
    }
}
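
The interleaved layout has to be built somewhere. As a minimal sketch (relying on the f2 typedef above; the helper name interleave_inputs is made up and not part of the original code), the caller could pack the separate arrays into the layout expected above like this:

static void interleave_inputs(const double *X1, const double *X2,
                              const double *Y1, const double *Y2,
                              f2 X12[5], f2 Y12[25]) {
    for (int k = 0; k < 5; k++) {
        f2 xk = { X1[k], X2[k] };   // pair the k-th coefficients of both problems
        X12[k] = xk;
    }
    for (int i = 0; i < 25; i++) {
        f2 yi = { Y1[i], Y2[i] };   // pair the i-th Y values of both problems
        Y12[i] = yi;
    }
}

Interleaving up front only pays off if the packed data is reused across many calls; otherwise the packing cost can cancel the gain.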

Slightly better performance might be obtained with intrinsics; however, this answer emphasizes auto-vectorization.

Try the following version; it needs SSE2 and FMA3. Untested.

#include <immintrin.h>

void function_fma( const double* X1, const double* Y1, const double* X2, const double* Y2, double* output )
{
    // Load X1 and X2 vectors into 6 registers; the instruction set has 16 of them available, BTW.
    const __m128d x1_0 = _mm_loadu_pd( X1 );
    const __m128d x1_1 = _mm_loadu_pd( X1 + 2 );
    const __m128d x1_2 = _mm_load_sd( X1 + 4 );

    const __m128d x2_0 = _mm_loadu_pd( X2 );
    const __m128d x2_1 = _mm_loadu_pd( X2 + 2 );
    const __m128d x2_2 = _mm_load_sd( X2 + 4 );

    // 5 iterations of the outer loop
    const double* const y1End = Y1 + 25;
    while( Y1 < y1End )
    {
        // Multiply first 2 values
        __m128d z1 = _mm_mul_pd( x1_0, _mm_loadu_pd( Y1 ) );
        __m128d z2 = _mm_mul_pd( x2_0, _mm_loadu_pd( Y2 ) );

        // Multiply + accumulate next 2 values
        z1 = _mm_fmadd_pd( x1_1, _mm_loadu_pd( Y1 + 2 ), z1 );
        z2 = _mm_fmadd_pd( x2_1, _mm_loadu_pd( Y2 + 2 ), z2 );

        // Horizontal sum both vectors
        z1 = _mm_add_sd( z1, _mm_unpackhi_pd( z1, z1 ) );
        z2 = _mm_add_sd( z2, _mm_unpackhi_pd( z2, z2 ) );

        // Multiply + accumulate the last 5-th value
        z1 = _mm_fmadd_sd( x1_2, _mm_load_sd( Y1 + 4 ), z1 );
        z2 = _mm_fmadd_sd( x2_2, _mm_load_sd( Y2 + 4 ), z2 );

        // Advance Y pointers
        Y1 += 5;
        Y2 += 5;

        // Compute and store z1 * z2
        z1 = _mm_mul_sd( z1, z2 );
        _mm_store_sd( output, z1 );

        // Advance output pointer
        output++;
    }
}

It could be micro-optimized further with AVX, but I'm not sure it would help much because the inner loop is too short. I think the two extra FMA instructions are cheaper than the overhead of computing the horizontal sum of a 32-byte AVX vector.
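
For reference, this is roughly what the horizontal sum of a 32-byte __m256d vector costs (a sketch with an assumed helper name hsum256, not code from the answer): an extract, a packed add, a shuffle, and a scalar add.

#include <immintrin.h>

// Sketch: horizontal sum of all four doubles in a __m256d.
static inline double hsum256(__m256d v) {
    __m128d lo = _mm256_castpd256_pd128(v);        // lanes 0, 1
    __m128d hi = _mm256_extractf128_pd(v, 1);      // lanes 2, 3
    lo = _mm_add_pd(lo, hi);                       // [v0+v2, v1+v3]
    lo = _mm_add_sd(lo, _mm_unpackhi_pd(lo, lo));  // total in lane 0
    return _mm_cvtsd_f64(lo);
}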

Update: here's another version which needs fewer instructions overall, at the cost of a couple of shuffles. It may or may not be faster for your use case. It requires SSE 4.1, but I think all CPUs with FMA3 also have SSE 4.1.

void function_fma_v2( const double* X1, const double* Y1, const double* X2, const double* Y2, double* output )
{
    // Load X1 and X2 vectors into 5 registers
    const __m128d x1_0 = _mm_loadu_pd( X1 );
    const __m128d x1_1 = _mm_loadu_pd( X1 + 2 );
    __m128d xLast = _mm_load_sd( X1 + 4 );

    const __m128d x2_0 = _mm_loadu_pd( X2 );
    const __m128d x2_1 = _mm_loadu_pd( X2 + 2 );
    xLast = _mm_loadh_pd( xLast, X2 + 4 );

    // 5 iterations of the outer loop
    const double* const y1End = Y1 + 25;
    while( Y1 < y1End )
    {
        // Multiply first 2 values
        __m128d z1 = _mm_mul_pd( x1_0, _mm_loadu_pd( Y1 ) );
        __m128d z2 = _mm_mul_pd( x2_0, _mm_loadu_pd( Y2 ) );

        // Multiply + accumulate next 2 values
        z1 = _mm_fmadd_pd( x1_1, _mm_loadu_pd( Y1 + 2 ), z1 );
        z2 = _mm_fmadd_pd( x2_1, _mm_loadu_pd( Y2 + 2 ), z2 );

        // Horizontal sum both vectors while transposing
        __m128d res = _mm_shuffle_pd( z1, z2, _MM_SHUFFLE2( 0, 1 ) );   // [ z1.y, z2.x ]
        // On Intel CPUs that blend SSE4 instruction doesn't use shuffle port,
        // throughput is 3x better than shuffle or unpack. On AMD they're equal.
        res = _mm_add_pd( res, _mm_blend_pd( z1, z2, 0b10 ) );  // [ z1.x + z1.y, z2.x + z2.y ]

        // Load the last 5-th Y values into a single vector
        __m128d yLast = _mm_load_sd( Y1 + 4 );
        yLast = _mm_loadh_pd( yLast, Y2 + 4 );

        // Advance Y pointers
        Y1 += 5;
        Y2 += 5;

        // Multiply + accumulate the last 5-th value
        res = _mm_fmadd_pd( xLast, yLast, res );

        // Compute and store z1 * z2
        res = _mm_mul_sd( res, _mm_unpackhi_pd( res, res ) );
        _mm_store_sd( output, res );
        // Advance output pointer
        output++;
    }
}

Judging from the extended discussion in the comments, you seem to be mainly interested in reducing the latency between reading X1/X2 and writing output. What you are computing is the element-wise product of two matrix-vector products. The two MV products can happen quasi in parallel (using OOO execution), but each MV product requires a sum of five products, which you can reduce either sequentially (as you do now) or in a tree-like fashion:

Z = ((X[0]*Y[0] + X[1]*Y[1]) + X[2]*Y[2]) + (X[3]*Y[3] + X[4]*Y[4]);

This results in a critical path of mulsd - fmaddsd - fmaddsd - addsd, followed by the multiplication Z1*Z2. That means, assuming 4 cycles of latency per FLOP, you have a latency of 20 cycles, plus the latency of reading and writing memory (unless you manage to keep everything in registers, which would require you to show the surrounding code). If you accumulate the values linearly, the critical path is mulsd - fmaddsd - fmaddsd - fmaddsd - fmaddsd - mulsd (i.e., 24 cycles + read/write).
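
To make the two accumulation orders concrete, here is a scalar sketch (the function names are made up; fma() from <math.h> only compiles to a single fmaddsd when the target actually has FMA, otherwise it falls back to a slow library call):

#include <math.h>

// Linear accumulation: mulsd followed by 4 dependent fmaddsd.
static double dot5_linear(const double *X, const double *Y) {
    double z = X[0] * Y[0];
    for (int k = 1; k < 5; k++)
        z = fma(X[k], Y[k], z);
    return z;
}

// Tree-like accumulation: two independent chains joined by one addsd,
// so only mulsd - fmaddsd - fmaddsd - addsd sit on the critical path.
static double dot5_tree(const double *X, const double *Y) {
    double a = fma(X[2], Y[2], fma(X[1], Y[1], X[0] * Y[0]));
    double b = fma(X[4], Y[4], X[3] * Y[3]);
    return a + b;
}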

Now, if you are able to change the memory order of Y, it would be beneficial to transpose those matrices (a sketch of the required transpose follows the code below), because then you can easily compute output[0..3] in parallel (assuming you have AVX) by first broadcast-loading each entry of X and accumulating packed products.

#include <immintrin.h>

void function_fma( const double* X1, const double* Y1, const double* X2, const double* Y2, double* output )
{
    // Load X1 and X2 vectors into 10 registers.
    const __m256d x1_0 = _mm256_broadcast_sd( X1 );
    const __m256d x1_1 = _mm256_broadcast_sd( X1 + 1 );
    const __m256d x1_2 = _mm256_broadcast_sd( X1 + 2 );
    const __m256d x1_3 = _mm256_broadcast_sd( X1 + 3 );
    const __m256d x1_4 = _mm256_broadcast_sd( X1 + 4 );

    const __m256d x2_0 = _mm256_broadcast_sd( X2 );
    const __m256d x2_1 = _mm256_broadcast_sd( X2 + 1 );
    const __m256d x2_2 = _mm256_broadcast_sd( X2 + 2 );
    const __m256d x2_3 = _mm256_broadcast_sd( X2 + 3 );
    const __m256d x2_4 = _mm256_broadcast_sd( X2 + 4 );

    // first four values:
    {
        // Multiply column 0
        __m256d z1 = _mm256_mul_pd( x1_0, _mm256_loadu_pd( Y1 ) );
        __m256d z2 = _mm256_mul_pd( x2_0, _mm256_loadu_pd( Y2 ) );

        // Multiply + accumulate column 1 and column 2
        z1 = _mm256_fmadd_pd( x1_1, _mm256_loadu_pd( Y1 + 5 ), z1 );
        z2 = _mm256_fmadd_pd( x2_1, _mm256_loadu_pd( Y2 + 5 ), z2 );
        z1 = _mm256_fmadd_pd( x1_2, _mm256_loadu_pd( Y1 + 10 ), z1 );
        z2 = _mm256_fmadd_pd( x2_2, _mm256_loadu_pd( Y2 + 10 ), z2 );

        // Multiply column 3
        __m256d z1_ = _mm256_mul_pd( x1_3, _mm256_loadu_pd( Y1 + 15 ) );
        __m256d z2_ = _mm256_mul_pd( x2_3, _mm256_loadu_pd( Y2 + 15 ) );

        // Multiply + accumulate column 4
        z1_ = _mm256_fmadd_pd( x1_4, _mm256_loadu_pd( Y1 + 20 ), z1_ );
        z2_ = _mm256_fmadd_pd( x2_4, _mm256_loadu_pd( Y2 + 20 ), z2_ );

        // Add both partial sum
        z1 = _mm256_add_pd( z1, z1_ );
        z2 = _mm256_add_pd( z2, z2_ );

        // Multiply and store result
        _mm256_storeu_pd(output, _mm256_mul_pd(z1, z2)); // unaligned store: output has no 32-byte alignment guarantee
    }
    // last value:
    {
        // Multiply column 0
        __m128d z1 = _mm_mul_sd( _mm256_castpd256_pd128(x1_0), _mm_load_sd( Y1 + 4) );
        __m128d z2 = _mm_mul_sd( _mm256_castpd256_pd128(x2_0), _mm_load_sd( Y2 + 4) );

        // Multiply + accumulate column 1 and column 2
        z1 = _mm_fmadd_sd( _mm256_castpd256_pd128(x1_1), _mm_load_sd( Y1 + 9 ), z1 );
        z2 = _mm_fmadd_sd( _mm256_castpd256_pd128(x2_1), _mm_load_sd( Y2 + 9 ), z2 );
        z1 = _mm_fmadd_sd( _mm256_castpd256_pd128(x1_2), _mm_load_sd( Y1 + 14 ), z1 );
        z2 = _mm_fmadd_sd( _mm256_castpd256_pd128(x2_2), _mm_load_sd( Y2 + 14 ), z2 );

        // Multiply column 3
        __m128d z1_ = _mm_mul_sd( _mm256_castpd256_pd128(x1_3), _mm_load_sd( Y1 + 19 ) );
        __m128d z2_ = _mm_mul_sd( _mm256_castpd256_pd128(x2_3), _mm_load_sd( Y2 + 19 ) );

        // Multiply + accumulate column 4
        z1_ = _mm_fmadd_sd( _mm256_castpd256_pd128(x1_4), _mm_load_sd( Y1 + 24 ), z1_ );
        z2_ = _mm_fmadd_sd( _mm256_castpd256_pd128(x2_4), _mm_load_sd( Y2 + 24 ), z2_ );

        // Add both partial sum
        z1 = _mm_add_sd( z1, z1_ );
        z2 = _mm_add_sd( z2, z2_ );

        // Multiply and store result
        _mm_store_sd(output+4, _mm_mul_sd(z1, z2));
    }
}
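
For completeness: the code above assumes Y is stored transposed relative to the original function, i.e. element (j, k) of the original Y1 sits at Y1[k*5 + j]. A minimal sketch of producing that layout (transpose5x5 is a made-up helper name):

// Sketch: convert Y from the original layout Y[j*5 + k] (j = output index,
// k = coefficient index) to the transposed layout Yt[k*5 + j] used above.
static void transpose5x5(const double *Y, double *Yt) {
    for (int j = 0; j < 5; j++)
        for (int k = 0; k < 5; k++)
            Yt[k * 5 + j] = Y[j * 5 + k];
}

This only helps if the transposed order can be produced where Y is generated; transposing right before each call would add latency rather than remove it.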

If you don't have FMA, you can replace the FMAs with a multiplication and an addition (this doesn't change the latency much, since only the addition is on the critical path; throughput may of course be about 50% worse). Also, if you don't have AVX, the first four values can be computed two at a time, in two passes.
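
As a concrete illustration of that substitution (an assumed helper, not part of the answer), each _mm256_fmadd_pd(a, b, c) can be replaced by a multiply followed by an add:

#include <immintrin.h>

// Sketch: emulate a*b + c on AVX-only targets without FMA3.
// Note this rounds twice, so results can differ from a true FMA in the last bit.
static inline __m256d fmadd_fallback(__m256d a, __m256d b, __m256d c) {
    return _mm256_add_pd(_mm256_mul_pd(a, b), c);
}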