Why is there no SIMD functionality in the C++ standard library?
Since SSE came out in 1999, it and its later extensions have been among the most powerful tools for improving the performance of C++ programs. Yet there are no standardized containers, algorithms, etc. that make explicit use of it (that I know of?). Is there a reason for this? Was there a proposal that never made it through?
There is an experimental extension (parallelism TS v2) for explicit short vector SIMD types that map to the SIMD extensions of common ISAs, but only GCC implements it as of August 2021. The Cppreference documentation for it linked above is incomplete, but additional details are covered in the Working Draft, Technical Specification for C++ Extensions for Parallelism, Document N4808. The ideas behind this proposal were developed during a PhD project (2015 thesis here). The author of the GCC implementation wrote an article on converting an existing SSE string-processing algorithm to use the 2019 iteration of his library, achieving similar performance and much greater readability. Below is some simple code using it, along with the generated assembly:
Multiply-add
#include <experimental/simd> // Fails on MSVC 19 and others
using vec4f = std::experimental::fixed_size_simd<float,4>;
void madd(vec4f& out, const vec4f& a, const vec4f& b)
{
out += a * b;
}
Compiling with -march=znver2 -Ofast -ffast-math, we do indeed get a hardware fused multiply-add generated for this:
madd(std::experimental::parallelism_v2::simd<float, std::experimental::parallelism_v2::simd_abi::_Fixed<4> >&, std::experimental::parallelism_v2::simd<float, std::experimental::parallelism_v2::simd_abi::_Fixed<4> > const&, std::experimental::parallelism_v2::simd<float, std::experimental::parallelism_v2::simd_abi::_Fixed<4> > const&):
vmovaps xmm0, XMMWORD PTR [rdx]
vmovaps xmm1, XMMWORD PTR [rdi]
vfmadd132ps xmm0, xmm1, XMMWORD PTR [rsi]
vmovaps XMMWORD PTR [rdi], xmm0
ret
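For comparison (not part of the original answer), here is a rough sketch of the equivalent hand-written intrinsics code, assuming FMA3 is available (e.g. via -mfma or -march=znver2) and 16-byte-aligned inputs; the simd version above reads more naturally while compiling to essentially the same instructions:

#include <immintrin.h>

// Hypothetical hand-written counterpart of madd() using x86 intrinsics.
// Assumes the float arrays hold 4 elements, are 16-byte aligned, and FMA3 is enabled.
void madd_intrin(float* out, const float* a, const float* b)
{
    __m128 vo = _mm_load_ps(out);   // load 4 floats from out
    __m128 va = _mm_load_ps(a);
    __m128 vb = _mm_load_ps(b);
    vo = _mm_fmadd_ps(va, vb, vo);  // vo = va * vb + vo
    _mm_store_ps(out, vo);          // store 4 floats back to out
}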
Dot product
A dot/inner product can be written concisely as:
float dot_product(const vec4f a, const vec4f b)
{
return reduce(a * b); // horizontal sum; std::experimental::reduce found via ADL
}
Again compiling with -Ofast -ffast-math -march=znver2, this produces:
dot_product(std::experimental::parallelism_v2::simd<float, std::experimental::parallelism_v2::simd_abi::_Fixed<4> >, std::experimental::parallelism_v2::simd<float, std::experimental::parallelism_v2::simd_abi::_Fixed<4> >):
vmovaps xmm1, XMMWORD PTR [rsi]
vmulps xmm1, xmm1, XMMWORD PTR [rdi]
vpermilps xmm0, xmm1, 27
vaddps xmm0, xmm0, xmm1
vpermilpd xmm1, xmm0, 3
vaddps xmm0, xmm0, xmm1
ret
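As a usage sketch (not from the original answer), getting data from ordinary memory into these types goes through copy_from/copy_to with an alignment tag such as element_aligned; the arrays x and y below are just placeholders for illustration:

#include <experimental/simd>
#include <array>
#include <cstdio>

namespace stdx = std::experimental;
using vec4f = stdx::fixed_size_simd<float, 4>;

float dot_product(const vec4f a, const vec4f b)
{
    return stdx::reduce(a * b);
}

int main()
{
    std::array<float, 4> x{1.0f, 2.0f, 3.0f, 4.0f};
    std::array<float, 4> y{5.0f, 6.0f, 7.0f, 8.0f};

    vec4f a, b;
    a.copy_from(x.data(), stdx::element_aligned); // element-wise load from plain memory
    b.copy_from(y.data(), stdx::element_aligned);

    std::printf("%f\n", dot_product(a, b)); // prints 70.000000
}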