Poor maths performance in C vs Python/numpy
Near-duplicates / related:
- How does BLAS get such extreme performance? (If you want fast matmul in C, seriously, use a good BLAS library, unless you want to hand-tune your own asm version; see the sketch after this list.) But that doesn't mean it isn't interesting to see what happens when you compile less-optimised matrix code.
- how to optimize matrix multiplication (matmul) code to run fast on a single processor core
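(For reference, calling a BLAS library from C for this multiplication is only a few lines. A minimal sketch, assuming a CBLAS implementation such as OpenBLAS is installed and linked, e.g. with -lopenblas; cblas_dgemm computes C = alpha*A*B + beta*C:)

#include <cblas.h>  // provided by e.g. OpenBLAS; link with -lopenblas

// c = a * b for n x n row-major matrices, via BLAS dgemm
// (alpha = 1, beta = 0, neither input transposed)
void matmul_blas(int n, const double *a, const double *b, double *c) {
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n,      // M, N, K
                1.0, a, n,    // alpha, A, lda
                b, n,         // B, ldb
                0.0, c, n);   // beta, C, ldc
}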
Out of interest, I decided to compare the performance of (inexpertly) handwritten C against Python/numpy performing a simple matrix multiplication of two large, square matrices filled with random numbers from 0 to 1.
I found that Python/numpy beat my C code by a factor of more than 10,000x. That clearly can't be right, so what is wrong with my C code that makes it perform so badly? (even compiled with -O3 or -Ofast)
python:
import time
import numpy as np
t0 = time.time()
m1 = np.random.rand(2000, 2000)
m2 = np.random.rand(2000, 2000)
t1 = time.time()
m3 = m1 @ m2
t2 = time.time()
print('creation time: ', t1 - t0, ' \n multiplication time: ', t2 - t1)
C:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    clock_t t0 = clock(), t1, t2;

    // create matrices and allocate memory
    int m_size = 2000;
    int i, j, k;
    double running_sum;
    double *m1[m_size], *m2[m_size], *m3[m_size];
    double f_rand_max = (double)RAND_MAX;
    for (i = 0; i < m_size; i++) {
        m1[i] = (double *)malloc(sizeof(double) * m_size);
        m2[i] = (double *)malloc(sizeof(double) * m_size);
        m3[i] = (double *)malloc(sizeof(double) * m_size);
    }

    // populate with random numbers 0 - 1
    for (i = 0; i < m_size; i++)
        for (j = 0; j < m_size; j++) {
            m1[i][j] = (double)rand() / f_rand_max;
            m2[i][j] = (double)rand() / f_rand_max;
        }
    t1 = clock();

    // multiply together
    for (i = 0; i < m_size; i++)
        for (j = 0; j < m_size; j++) {
            running_sum = 0;
            for (k = 0; k < m_size; k++)
                running_sum += m1[i][k] * m2[k][j];
            m3[i][j] = running_sum;
        }
    t2 = clock();

    float t01 = ((float)(t1 - t0) / CLOCKS_PER_SEC);
    float t12 = ((float)(t2 - t1) / CLOCKS_PER_SEC);
    printf("creation time: %f", t01);
    printf("\nmultiplication time: %f", t12);
    return 0;
}
Edit: corrected the Python to do a proper dot product, which closed the gap a little; timed the C at microsecond resolution; and used the comparable double data type rather than float, as originally posted.
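(A minimal sketch of the microsecond-resolution timing, assuming POSIX clock_gettime is available: unlike clock(), which sums CPU time across all threads, CLOCK_MONOTONIC gives wall-clock time, which is the fairer thing to compare against Python's time.time().)

#include <time.h>  // clock_gettime, CLOCK_MONOTONIC (POSIX)

// current monotonic wall-clock time, in seconds
static double now_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

// usage:
//   double start = now_seconds();
//   /* ... multiply ... */
//   printf("multiplication time: %f\n", now_seconds() - start);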
Output:
$ gcc -O3 -march=native bench.c
$ ./a.out
creation time: 0.092651
multiplication time: 139.945068
$ python3 bench.py
creation time: 0.1473407745361328
multiplication time: 0.329038143157959
It was pointed out that the naive algorithm implemented here in C could be improved in ways that make better use of compiler optimisations and the cache.
Edit: modified the C code to transpose the second matrix for a more efficient access pattern; the gap closes further.
Modified multiplication code:
// transpose m2 in order to capitalise on cache efficiencies
// store transposed matrix in m3 for now
for (i = 0; i < m_size; i++)
    for (j = 0; j < m_size; j++)
        m3[j][i] = m2[i][j];

// swap the row pointers; m2 and m3 are arrays of per-row pointers,
// so every row pointer must be exchanged, not just the first one
for (i = 0; i < m_size; i++) {
    double *mtemp = m3[i];
    m3[i] = m2[i];
    m2[i] = mtemp;
}

// multiply together
for (i = 0; i < m_size; i++)
    for (j = 0; j < m_size; j++) {
        running_sum = 0;
        for (k = 0; k < m_size; k++)
            running_sum += m1[i][k] * m2[j][k];
        m3[i][j] = running_sum;
    }
Results:
$ gcc -O3 -march=native bench2.c
$ ./a.out
creation time: 0.107767
multiplication time: 10.843431
$ python3 bench.py
creation time: 0.1488208770751953
multiplication time: 0.3335080146789551
Edit: compiling with -Ofast, which I am reassured is a fair comparison, brings the difference down to just over one order of magnitude (in numpy's favour).
$ gcc -Ofast -march=native bench2.c
$ ./a.out
creation time: 0.098201
multiplication time: 4.766985
$ python3 bench.py
creation time: 0.13812589645385742
multiplication time: 0.3441300392150879
Edit: it was suggested to change the indexing from arr[i][j] to arr[i*m_size + j]. This yielded a small performance increase:
for m_size = 10000
$ gcc -Ofast -march=native bench3.c # indexed by arr[ i * m_size + j ]
$ ./a.out
creation time: 1.280863
multiplication time: 626.327820
$ gcc -Ofast -march=native bench2.c # indexed by arr[i][j]
$ ./a.out
creation time: 2.410230
multiplication time: 708.979980
$ python3 bench.py
creation time: 3.8284950256347656
multiplication time: 39.06089973449707
Latest code, bench3.c:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    clock_t t0, t1, t2;
    t0 = clock();

    // create matrices and allocate memory
    int m_size = 10000;
    int i, j, k, x, y;
    double running_sum;
    double *m1 = (double *)malloc(sizeof(double) * m_size * m_size),
           *m2 = (double *)malloc(sizeof(double) * m_size * m_size),
           *m3 = (double *)malloc(sizeof(double) * m_size * m_size);
    double f_rand_max = (double)RAND_MAX;

    // populate with random numbers 0 - 1
    // (braces around the j loop so all three stores run per element;
    // without them only m1 is filled and m2/m3 are written out of bounds)
    for (i = 0; i < m_size; i++) {
        x = i * m_size;
        for (j = 0; j < m_size; j++) {
            m1[x + j] = ((double)rand()) / f_rand_max;
            m2[x + j] = ((double)rand()) / f_rand_max;
            m3[x + j] = ((double)rand()) / f_rand_max;
        }
    }
    t1 = clock();

    // transpose m2 in order to capitalise on cache efficiencies
    // store transposed matrix in m3 for now
    for (i = 0; i < m_size; i++)
        for (j = 0; j < m_size; j++)
            m3[j * m_size + i] = m2[i * m_size + j];

    // swap the pointers
    double *mtemp = m3;
    m3 = m2;
    m2 = mtemp;

    // multiply together
    for (i = 0; i < m_size; i++) {
        x = i * m_size;
        for (j = 0; j < m_size; j++) {
            running_sum = 0;
            y = j * m_size;
            for (k = 0; k < m_size; k++)
                running_sum += m1[x + k] * m2[y + k];
            m3[x + j] = running_sum;
        }
    }
    t2 = clock();

    float t01 = ((float)(t1 - t0) / CLOCKS_PER_SEC);
    float t12 = ((float)(t2 - t1) / CLOCKS_PER_SEC);
    printf("creation time: %f", t01);
    printf("\nmultiplication time: %f", t12);
    return 0;
}
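(An obvious further step, not taken above, would be loop blocking (tiling), so that small blocks of m1 and the transposed m2 are reused while they are still in cache. A rough sketch of what the multiply loop might become, with a hypothetical block size BS that would need tuning; note m3 must be zeroed first, because each (i, j) entry now accumulates across the kk blocks:)

#define BS 64  // hypothetical block size, a tuning parameter

// blocked version of the multiply loop from bench3.c; m2 already
// holds the transpose, so both inner accesses walk rows contiguously
for (int ii = 0; ii < m_size; ii += BS)
    for (int jj = 0; jj < m_size; jj += BS)
        for (int kk = 0; kk < m_size; kk += BS)
            for (int i = ii; i < ii + BS && i < m_size; i++)
                for (int j = jj; j < jj + BS && j < m_size; j++) {
                    double sum = 0;
                    for (int k = kk; k < kk + BS && k < m_size; k++)
                        sum += m1[i * m_size + k] * m2[j * m_size + k];
                    m3[i * m_size + j] += sum;  // accumulates across kk blocks
                }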
Conclusion: the absurd factor-of-x10,000 difference was mostly down to mistakenly comparing element-wise multiplication in Python/numpy against true matrix multiplication in C, not compiling with all the available optimisations, and writing a highly inefficient memory access pattern that probably made no use of the cache.

A 'fair' comparison (i.e. a correct but grossly inefficient single-threaded algorithm, compiled with -Ofast) gives a performance difference of a factor of x350.

A few simple edits to improve the memory access pattern brought the comparison down to a factor of x16 (in numpy's favour) for the large-matrix (10000 x 10000) multiplication. Furthermore, numpy automatically uses all four virtual cores on my machine, whereas this C does not, so the remaining difference may come down to a factor of x4 - x8 (depending on how well this program runs on hyperthreads). A factor of x4 - x8 seems entirely sensible to me, given that I don't really know what I'm doing and just knocked some code together, whereas numpy is built on BLAS, which as I understand it has been extensively optimised over the years by experts from all over the place, so I consider this question answered/solved.
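(If I wanted to claw back the multicore factor, the outer loop is embarrassingly parallel. A sketch with OpenMP, untested here, assuming gcc with -fopenmp added to the flags above; note that clock() would then report the summed CPU time of all threads, so wall-clock timing as sketched earlier would be needed:)

// compile: gcc -Ofast -march=native -fopenmp bench3.c
// parallelise over rows of the result; i is the (implicitly private)
// loop index, and the other scratch variables are made thread-private
#pragma omp parallel for private(j, k, x, y, running_sum)
for (i = 0; i < m_size; i++) {
    x = i * m_size;
    for (j = 0; j < m_size; j++) {
        running_sum = 0;
        y = j * m_size;
        for (k = 0; k < m_size; k++)
            running_sum += m1[x + k] * m2[y + k];
        m3[x + j] = running_sum;
    }
}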