What am I doing with SIMD and pthreads that is slowing my program down?

!!! HOMEWORK - ASSIGNMENT !!!

Please don't post code, as I'd like to finish it myself. If possible, point me in the right direction with general information, point out mistakes in my thinking, or suggest relevant resources.


I have a function that creates a square npages * npages matrix hat of doubles.

I have implemented it with pthreads only, with SIMD only, and with both pthreads and SIMD together. I profiled with Xcode Instruments' Time Profiler and found that the pthreads-only version is fastest, followed by the SIMD-only version, with the version using both SIMD and pthreads the slowest.

Because it's homework, it will be run on a number of different machines, but we were given the header to #include, so assume we can use at least AVX. The number of threads the program will use is passed as a program argument and stored in the global variable g_nthreads.

In my testing I have been running it on my own machine, an IvyBridge with 4 hardware cores and 8 logical cores, and I have been testing with both 4 threads and 8 threads as the argument.


Run times:

SIMD only:

331ms - for the construct_matrix_hat function

PTHREADS only (8 threads):

70ms - for each thread concurrently

SIMD and PTHREADS (8 threads):

110ms - for each thread concurrently


What am I doing that makes it slower when I use both forms of optimization together?

I will post each implementation:


All versions share these macros:

#define BIG_CHUNK    (g_n2/g_nthreads)
#define SMALL_CHUNK  (g_npages/g_nthreads)
#define MOD          (BIG_CHUNK - (BIG_CHUNK % 4))
#define IDX(a, b)    (((a) * g_npages) + (b))
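
(For illustration, with hypothetical values g_n2 = 10000 and g_nthreads = 8: BIG_CHUNK is 1250 and MOD rounds that down to 1248, the largest multiple of 4, so each thread handles 1248 elements with SIMD and the remaining 2 with scalar cleanup.)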

PTHREADS:

// struct used for passing arguments
typedef struct {
    double* restrict m;
    double* restrict m_hat;
    int t_id;
    char padding[44];
} t_arg_matrix_hat;

// Construct matrix_hat with pthreads
static void* pthread_construct_matrix_hat(void* arg) {
    t_arg_matrix_hat* t_arg = (t_arg_matrix_hat*) arg;
    // set coordinate limits thread is able to act upon
    size_t start = t_arg->t_id * BIG_CHUNK;
    size_t end = t_arg->t_id + 1 != g_nthreads ? (t_arg->t_id + 1) * BIG_CHUNK : g_n2;

    // Initialise coordinates with given uniform value
    for (size_t i = start; i < end; i++) {
        t_arg->m_hat[i] = ((g_dampener * t_arg->m[i]) + HAT);
    }

    return NULL;
}

// Construct matrix_hat
double* construct_matrix_hat(double* matrix) {
    double* matrix_hat = malloc(sizeof(double) * g_n2);

    // create structs to send and retrieve matrix and value from threads
    t_arg_matrix_hat t_args[g_nthreads];
    for (size_t i = 0; i < g_nthreads; i++) {
        t_args[i] = (t_arg_matrix_hat) {
            .m = matrix,
            .m_hat = matrix_hat,
            .t_id = i
        };
    }
    // create threads and send structs with matrix and value to divide the matrix and
    // initialise the coordinates with the given value
    pthread_t threads[g_nthreads];
    for (size_t i = 0; i < g_nthreads; i++) {
        pthread_create(threads + i, NULL, pthread_construct_matrix_hat, t_args + i);
    }
    // join threads after all coordinates have been initialised
    for (size_t i = 0; i < g_nthreads; i++) {
        pthread_join(threads[i], NULL);
    }

    return matrix_hat;
}

SIMD:

// Construct matrix_hat
double* construct_matrix_hat(double* matrix) {
    double* matrix_hat = malloc(sizeof(double) * g_n2);

    double dampeners[4] = {g_dampener, g_dampener, g_dampener, g_dampener};
    __m256d b = _mm256_loadu_pd(dampeners);
    // Use simd to subtract values from each other
    for (size_t i = 0; i < g_mod; i += 4) {
        __m256d a = _mm256_loadu_pd(matrix + i);
        __m256d res = _mm256_mul_pd(a, b);
        _mm256_storeu_pd(&matrix_hat[i], res);
    }
    // Subtract values from each other that weren't included in simd
    for (size_t i = g_mod; i < g_n2; i++) {
        matrix_hat[i] = g_dampener * matrix[i];
    }
    double hats[4] = {HAT, HAT, HAT, HAT};
    b = _mm256_loadu_pd(hats);
    // Use simd to raise each value to the power 2
    for (size_t i = 0; i < g_mod; i += 4) {
        __m256d a = _mm256_loadu_pd(matrix_hat + i);
        __m256d res = _mm256_add_pd(a, b);
        _mm256_storeu_pd(&matrix_hat[i], res);
    }
    // Raise each value to the power 2 that wasn't included in simd
    for (size_t i = g_mod; i < g_n2; i++) {
        matrix_hat[i] += HAT;
    }

    return matrix_hat;
}

PTHREADS and SIMD:

// struct used for passing arguments
typedef struct {
    double* restrict m;
    double* restrict m_hat;
    int t_id;
    char padding[44];
} t_arg_matrix_hat;

// Construct matrix_hat with pthreads
static void* pthread_construct_matrix_hat(void* arg) {
    t_arg_matrix_hat* t_arg = (t_arg_matrix_hat*) arg;
    // set coordinate limits thread is able to act upon
    size_t start = t_arg->t_id * BIG_CHUNK;
    size_t end = t_arg->t_id + 1 != g_nthreads ? (t_arg->t_id + 1) * BIG_CHUNK : g_n2;
    size_t leftovers = start + MOD;

    // dampeners[] and hats[] are presumably the same four-element arrays as in
    // the SIMD-only version, here at file scope
    __m256d b1 = _mm256_loadu_pd(dampeners);
    // Multiply each element by the dampener, four doubles at a time
    for (size_t i = start; i < leftovers; i += 4) {
        __m256d a1 = _mm256_loadu_pd(t_arg->m + i);
        __m256d r1 = _mm256_mul_pd(a1, b1);
        _mm256_storeu_pd(&t_arg->m_hat[i], r1);
    }
    // Scalar cleanup for the elements the SIMD loop didn't cover
    for (size_t i = leftovers; i < end; i++) {
        t_arg->m_hat[i] = dampeners[0] * t_arg->m[i];
    }

    __m256d b2 = _mm256_loadu_pd(hats);
    // Add HAT to each element, four doubles at a time
    for (size_t i = start; i < leftovers; i += 4) {
        __m256d a2 = _mm256_loadu_pd(t_arg->m_hat + i);
        __m256d r2 = _mm256_add_pd(a2, b2);
        _mm256_storeu_pd(&t_arg->m_hat[i], r2);
    }
    // Scalar cleanup for the remaining elements
    for (size_t i = leftovers; i < end; i++) {
        t_arg->m_hat[i] += hats[0];
    }

    return NULL;
}

// Construct matrix_hat
double* construct_matrix_hat(double* matrix) {
    double* matrix_hat = malloc(sizeof(double) * g_n2);

    // create structs to send and retrieve matrix and value from threads
    t_arg_matrix_hat t_args[g_nthreads];
    for (size_t i = 0; i < g_nthreads; i++) {
        t_args[i] = (t_arg_matrix_hat) {
            .m = matrix,
            .m_hat = matrix_hat,
            .t_id = i
        };
    }
    // create threads and send structs with matrix and value to divide the matrix and
    // initialise the coordinates with the given value
    pthread_t threads[g_nthreads];
    for (size_t i = 0; i < g_nthreads; i++) {
        pthread_create(threads + i, NULL, pthread_construct_matrix_hat, t_args + i);
    }
    // join threads after all coordinates have been initialised
    for (size_t i = 0; i < g_nthreads; i++) {
        pthread_join(threads[i], NULL);
    }

    return matrix_hat;
}

I think it's because your SIMD code is horribly inefficient: it loops over the memory twice, instead of doing the add together with the multiply before storing. You didn't test SIMD against a scalar baseline, but if you had, you might well have found that your SIMD code wasn't a speedup with a single thread either.

Stop reading here if you want to solve the rest of your homework yourself.

If you use gcc -O3 -march=ivybridge, the simple scalar loop in the pthread version probably auto-vectorizes into something like what you should have done with intrinsics. You even used restrict, so it can probably see that the pointers can't overlap each other, or with g_dampener.

// this probably autovectorizes well.
// Initialise coordinates with given uniform value
for (size_t i = start; i < end; i++) {
    t_arg->m_hat[i] = ((g_dampener * t_arg->m[i]) + HAT);
}


// but this would be even safer to help the compiler's aliasing analysis:
double dampener = g_dampener;   // in case the compiler thinks one of the pointers might point at the global
double *restrict hat = t_arg->m_hat;
const double *restrict mat = t_arg->m;
// ... same loop, but using these locals instead of the t_arg members

It's probably not a problem for the FP loop anyway, since double definitely can't alias with double *.


The coding style is also pretty nasty. You should give meaningful names to your __m256d variables wherever possible.

Also, you use malloc, which doesn't guarantee that matrix_hat will be aligned to a 32B boundary. C11's aligned_alloc is probably the nicest way, versus posix_memalign (clunky interface), _mm_malloc (has to be freed with _mm_free, not free(3)), or other options.
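
For reference, a minimal sketch of those three options (my addition, not part of the original answer; all assume a 32B alignment requirement and that n is a multiple of 4):

#include <stdlib.h>      // aligned_alloc, posix_memalign, free
#include <immintrin.h>   // _mm_malloc, _mm_free

static double* alloc_c11(size_t n) {
    // C11 wants the size to be a multiple of the alignment; 8*n is, when n % 4 == 0
    return aligned_alloc(32, sizeof(double) * n);   // release with free()
}

static double* alloc_posix(size_t n) {
    void* p = NULL;
    // clunky: returns an error code and writes the pointer through an out-param
    if (posix_memalign(&p, 32, sizeof(double) * n) != 0) return NULL;
    return p;                                       // release with free()
}

static double* alloc_mm(size_t n) {
    return _mm_malloc(sizeof(double) * n, 32);      // must release with _mm_free()
}

With the allocation sorted out, the fixed-up function: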

double* construct_matrix_hat(const double* matrix) {
    // double* matrix_hat = malloc(sizeof(double) * g_n2);
    double* matrix_hat = aligned_alloc(64, sizeof(double) * g_n2);

    // double dampeners[4] = {g_dampener, g_dampener, g_dampener, g_dampener};  // This idiom is terrible, and might actually compile to code that stores it 4 times on the stack and then loads.
    __m256d vdamp = _mm256_set1_pd(g_dampener);    // will compile to a broadcast-load (vbroadcastsd)
    __m256d vhat  = _mm256_set1_pd(HAT);

    size_t last_full_vector = g_n2 & ~3ULL;    // don't load this from a global.
    // it's better for the compiler to see how it's calculated from g_n2

       // ??? Use simd to subtract values from each other          // huh?  this is a multiply, not a subtract.  Also, everyone can see it's using SIMD, that part adds no new information
    // if you really want to manually vectorize this, instead of using an OpenMP pragma or -O3 on the scalar loop, then:
    for (size_t i = 0; i < last_full_vector; i += 4) {
        __m256d vmat = _mm256_loadu_pd(matrix + i);
        __m256d vmul = _mm256_mul_pd(vmat, vdamp);
        __m256d vres = _mm256_add_pd(vmul, vhat);
        _mm256_store_pd(&matrix_hat[i], vres);    // aligned store.  Doesn't matter for performance.
    }
    #if 0
    // Scalar cleanup
    for (size_t i = last_full_vector; i < g_n2; i++) {
        matrix_hat[i] = g_dampener * matrix[i] + HAT;
    }
    #else
    // assume that g_n2 >= 4, and do a potentially-overlapping unaligned vector
    if (last_full_vector != g_n2) {
        // Or have this always run, and have the main loop stop one element sooner (so this overlaps by 0..3 instead of by 1..3 with a conditional)
        assert(g_n2 >= 4);
        __m256d vmat = _mm256_loadu_pd(matrix + g_n2 - 4);
        __m256d vmul = _mm256_mul_pd(vmat, vdamp);
        __m256d vres = _mm256_add_pd(vmul, vhat);
        _mm256_storeu_pd(&matrix_hat[g_n2-4], vres);
    }
    #endif

    return matrix_hat;
}

This version compiles (after defining a couple of globals) to the asm we expect. By the way, normal people pass sizes around as function arguments; that's another way to avoid optimization failures caused by C's aliasing rules.

Anyway, your best bet is to let OpenMP auto-vectorize it, because then you don't have to write a cleanup loop yourself. There's nothing tricky about the data layout, so it vectorizes trivially. (And it's not a reduction, like in your other question, so there's no loop-carried dependency or order-of-operations problem.)
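
A minimal sketch of what that could look like (my addition, not from the answer; assumes OpenMP 4.0+ and compiling with -fopenmp, uses the question's globals g_dampener and HAT, and passes the size as a parameter per the note above):

#include <stdlib.h>   // aligned_alloc

double* construct_matrix_hat_omp(const double* restrict matrix, size_t n2) {
    // assumes sizeof(double) * n2 is a multiple of 64, as C11 aligned_alloc wants
    double* matrix_hat = aligned_alloc(64, sizeof(double) * n2);

    // one pragma both threads and vectorizes the fused multiply-add loop
    #pragma omp parallel for simd
    for (size_t i = 0; i < n2; i++) {
        matrix_hat[i] = g_dampener * matrix[i] + HAT;
    }
    return matrix_hat;
}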