How to efficiently perform double/int64 conversions with SSE/AVX?

SSE2 has instructions for converting vectors between single-precision floats and 32-bit integers.

But there is no equivalent for double precision and 64-bit integers; those conversions are missing.

It looks like AVX doesn't have them either.

What is the most efficient way to emulate these intrinsics?

There's no single instruction until AVX512, which added conversion to/from 64-bit integers, signed or unsigned. (It also added conversions to/from 32-bit unsigned.) See intrinsics like _mm512_cvtpd_epi64 and the narrower AVX512VL versions, like _mm256_cvtpd_epi64.
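For example, on a target with AVX512DQ+AVX512VL (an assumption; compile with something like gcc -mavx512dq -mavx512vl), the packed conversions are direct:

#include <immintrin.h>

/*  A minimal sketch, assuming AVX512DQ+AVX512VL is available  */
__m256i cvt_pd_to_i64(__m256d v){ return _mm256_cvtpd_epi64(v); }   /*  vcvtpd2qq  */
__m256d cvt_i64_to_pd(__m256i v){ return _mm256_cvtepi64_pd(v); }   /*  vcvtqq2pd  */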

If you only have AVX2 or earlier, you'll need tricks like the ones below for packed conversion. (For scalar, x86-64 has scalar int64_t <-> double or float since SSE2, but scalar uint64_t <-> FP needs tricks until AVX512 adds unsigned conversions. Scalar 32-bit unsigned can be done by zero-extending to 64-bit signed.)
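A sketch of the scalar situation, with hypothetical helper names; plain C casts are enough, the interesting part is what the compiler emits:

#include <stdint.h>

double i64_to_d(int64_t x){ return (double)x; }            /*  one cvtsi2sd since SSE2                     */
double u32_to_d(uint32_t x){ return (double)(int64_t)x; }  /*  zero-extend to signed 64-bit, then convert  */
double u64_to_d(uint64_t x){ return (double)x; }           /*  no single instruction before AVX512;
                                                               compilers emit a branch/fixup sequence      */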


If you're willing to cut corners, double <-> int64 conversions can be done in only two instructions:

  • If you don't care about infinity or NaNs.
  • For double <-> int64_t, you only care about values in the range [-2^51, 2^51].
  • For double <-> uint64_t, you only care about values in the range [0, 2^52).

double -> uint64_t

//  Only works for inputs in the range: [0, 2^52)
__m128i double_to_uint64(__m128d x){
    x = _mm_add_pd(x, _mm_set1_pd(0x0010000000000000));    //  + 2^52 (the hex integer literal converts to the double 2^52)
    return _mm_xor_si128(                                  //  drop the upper 12 bits, which now equal those of 2^52
        _mm_castpd_si128(x),
        _mm_castpd_si128(_mm_set1_pd(0x0010000000000000))
    );
}

double -> int64_t

//  Only works for inputs in the range: [-2^51, 2^51]
__m128i double_to_int64(__m128d x){
    x = _mm_add_pd(x, _mm_set1_pd(0x0018000000000000));    //  + (2^52 + 2^51): the extra 2^51 biases negative inputs into range
    return _mm_sub_epi64(                                  //  integer subtract removes the bias and the upper exponent bits
        _mm_castpd_si128(x),
        _mm_castpd_si128(_mm_set1_pd(0x0018000000000000))
    );
}

uint64_t -> double

//  Only works for inputs in the range: [0, 2^52)
__m128d uint64_to_double(__m128i x){
    x = _mm_or_si128(x, _mm_castpd_si128(_mm_set1_pd(0x0010000000000000)));   //  OR in the exponent bits of 2^52
    return _mm_sub_pd(_mm_castsi128_pd(x), _mm_set1_pd(0x0010000000000000));  //  un-normalize by subtracting 2^52
}

int64_t -> double

//  Only works for inputs in the range: [-2^51, 2^51]
__m128d int64_to_double(__m128i x){
    x = _mm_add_epi64(x, _mm_castpd_si128(_mm_set1_pd(0x0018000000000000)));  //  integer add of the (2^52 + 2^51) bias
    return _mm_sub_pd(_mm_castsi128_pd(x), _mm_set1_pd(0x0018000000000000));  //  un-normalize and remove the bias in FP
}

Rounding behavior:

  • For the double -> uint64_t conversion, rounding works correctly following the current rounding mode. (which is usually round-to-even)
  • For the double -> int64_t conversion, rounding will follow the current rounding mode for all modes except truncation. If the current rounding mode is truncation (round towards zero), it will actually round towards negative infinity, as the small check below demonstrates.
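A sketch of that check: it assumes the double_to_int64 above is in scope, that fesetround also updates the MXCSR rounding field (as glibc on x86-64 does), and that optimizations are low enough that the addition isn't constant-folded at compile time:

#include <emmintrin.h>
#include <fenv.h>
#include <inttypes.h>
#include <stdio.h>

int main(void){
    int64_t out[2];
    fesetround(FE_TOWARDZERO);              /*  truncation: (int64_t)-1.5 would give -1  */
    _mm_storeu_si128((__m128i*)out, double_to_int64(_mm_set1_pd(-1.5)));
    printf("%" PRId64 "\n", out[0]);        /*  prints -2: rounds toward -inf instead    */
    fesetround(FE_TONEAREST);
    return 0;
}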

How does it work?

Despite this trick being only 2 instructions, it's not entirely self-explanatory.

The key is to recognize that for double-precision floating-point, values in the range [2^52, 2^53) have the "binary place" just below the lowest bit of the mantissa. In other words, if you zero out the exponent and sign bits, the mantissa becomes precisely the integer representation.

To convert x from double -> uint64_t, you add the magic number M, which is the floating-point value of 2^52. This puts x into the "normalized" range of [2^52, 2^53) and conveniently rounds away the fractional-part bits.

Now all that's left is to remove the upper 12 bits. This is easily done by masking them out. The fastest way is to recognize that those upper 12 bits are identical to those of M. So rather than introducing an additional mask constant, we can simply subtract or XOR by M. XOR has more throughput.

Converting from uint64_t -> double is simply the reverse of this process. You add back the exponent bits of M. Then un-normalize the number by subtracting M in floating-point.
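A scalar illustration of the unsigned trick for x = 3 (here M = 2^52, whose bit pattern as a double is 0x4330000000000000):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void){
    double M = 4503599627370496.0;        /*  2^52  */
    double y = 3.0 + M;                   /*  lands in [2^52, 2^53)  */
    uint64_t bits;
    memcpy(&bits, &y, sizeof bits);
    printf("0x%016" PRIX64 "\n", bits);   /*  0x4330000000000003: M's upper 12 bits, the integer 3 below  */
    printf("%" PRIu64 "\n", bits ^ UINT64_C(0x4330000000000000));   /*  3  */
    return 0;
}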

The signed integer conversions are slightly more complicated since you need to deal with the two's complement sign-extension. I'll leave those as an exercise for the reader.

Related: A fast method to round a double to a 32-bit int explained


Full range int64 -> double:

After many years, I finally had a need for this.

  • 5 instructions for uint64_t -> double
  • 6 instructions for int64_t -> double

uint64_t -> double

__m128d uint64_to_double_full(__m128i x){
    __m128i xH = _mm_srli_epi64(x, 32);
    xH = _mm_or_si128(xH, _mm_castpd_si128(_mm_set1_pd(19342813113834066795298816.)));          //  2^84
    __m128i xL = _mm_blend_epi16(x, _mm_castpd_si128(_mm_set1_pd(0x0010000000000000)), 0xcc);   //  2^52
    __m128d f = _mm_sub_pd(_mm_castsi128_pd(xH), _mm_set1_pd(19342813118337666422669312.));     //  2^84 + 2^52
    return _mm_add_pd(f, _mm_castsi128_pd(xL));
}

int64_t -> double

__m128d int64_to_double_full(__m128i x){
    __m128i xH = _mm_srai_epi32(x, 16);
    xH = _mm_blend_epi16(xH, _mm_setzero_si128(), 0x33);
    xH = _mm_add_epi64(xH, _mm_castpd_si128(_mm_set1_pd(442721857769029238784.)));              //  3*2^67
    __m128i xL = _mm_blend_epi16(x, _mm_castpd_si128(_mm_set1_pd(0x0010000000000000)), 0x88);   //  2^52
    __m128d f = _mm_sub_pd(_mm_castsi128_pd(xH), _mm_set1_pd(442726361368656609280.));          //  3*2^67 + 2^52
    return _mm_add_pd(f, _mm_castsi128_pd(xL));
}

These work for the entire 64-bit range and are correctly rounded according to the current rounding behavior.

These are similar to wim's answer below, but with more abusive optimizations. As such, deciphering these will also be left as an exercise to the reader.
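A quick usage sketch (assuming the int64_to_double_full above is in scope; note these full-range versions require SSE4.1 for _mm_blend_epi16, and the values chosen are exactly representable as doubles):

#include <smmintrin.h>
#include <stdio.h>

int main(void){
    __m128i v = _mm_set_epi64x(-(1LL << 62), 1LL << 62);   /*  well outside [-2^51, 2^51]  */
    double out[2];
    _mm_storeu_pd(out, int64_to_double_full(v));
    printf("%.1f %.1f\n", out[0], out[1]);   /*  4611686018427387904.0 -4611686018427387904.0  */
    return 0;
}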

This answer is about 64-bit integer to double conversion without cutting corners. In a previous version of this answer (see the paragraph Fast and accurate conversion by splitting .... below), it was shown that it is quite efficient to split the 64-bit integers into a 32-bit low and a 32-bit high part, convert these parts to double, and compute low + high * 2^32.

The instruction counts of those conversions were:

  • int64_to_double_full_range: 9 instructions (with mul and add combined into one fma)
  • uint64_to_double_full_range: 7 instructions (with mul and add combined into one fma)

Inspired by Mysticial's updated answer, with better-optimized accurate conversions, I further optimized the int64_t to double conversion:

  • int64_to_double_fast_precise: 5 instructions.
  • uint64_to_double_fast_precise: 5 instructions.

The int64_to_double_fast_precise conversion takes one instruction less than Mysticial's solution. The uint64_to_double_fast_precise code is essentially identical to Mysticial's solution (but with a vpblendd instead of vpblendw). It is included here because of its similarity to the int64_to_double_fast_precise conversion: the instructions are identical, only the constants differ:


#include <stdio.h>
#include <immintrin.h>
#include <stdint.h>

__m256d int64_to_double_fast_precise(const __m256i v)
/* Optimized full range int64_t to double conversion           */
/* Emulate _mm256_cvtepi64_pd()                                */
{
    __m256i magic_i_lo   = _mm256_set1_epi64x(0x4330000000000000);                /* 2^52               encoded as floating-point  */
    __m256i magic_i_hi32 = _mm256_set1_epi64x(0x4530000080000000);                /* 2^84 + 2^63        encoded as floating-point  */
    __m256i magic_i_all  = _mm256_set1_epi64x(0x4530000080100000);                /* 2^84 + 2^63 + 2^52 encoded as floating-point  */
    __m256d magic_d_all  = _mm256_castsi256_pd(magic_i_all);

    __m256i v_lo         = _mm256_blend_epi32(magic_i_lo, v, 0b01010101);         /* Blend the 32 lowest significant bits of v with magic_int_lo                                                   */
    __m256i v_hi         = _mm256_srli_epi64(v, 32);                              /* Extract the 32 most significant bits of v                                                                     */
            v_hi         = _mm256_xor_si256(v_hi, magic_i_hi32);                  /* Flip the msb of v_hi and blend with 0x45300000                                                                */
    __m256d v_hi_dbl     = _mm256_sub_pd(_mm256_castsi256_pd(v_hi), magic_d_all); /* Compute in double precision:                                                                                  */
    __m256d result       = _mm256_add_pd(v_hi_dbl, _mm256_castsi256_pd(v_lo));    /* (v_hi - magic_d_all) + v_lo  Do not assume associativity of floating point addition !!                        */
            return result;                                                        /* With gcc use -O3, then -fno-associative-math is default. Do not use -Ofast, which enables -fassociative-math! */
                                                                                  /* With icc use -fp-model precise                                                                                */
}


__m256d uint64_to_double_fast_precise(const __m256i v)                    
/* Optimized full range uint64_t to double conversion          */
/* This code is essentially identical to Mysticial's solution. */
/* Emulate _mm256_cvtepu64_pd()                                */
{
    __m256i magic_i_lo   = _mm256_set1_epi64x(0x4330000000000000);                /* 2^52        encoded as floating-point  */
    __m256i magic_i_hi32 = _mm256_set1_epi64x(0x4530000000000000);                /* 2^84        encoded as floating-point  */
    __m256i magic_i_all  = _mm256_set1_epi64x(0x4530000000100000);                /* 2^84 + 2^52 encoded as floating-point  */
    __m256d magic_d_all  = _mm256_castsi256_pd(magic_i_all);

    __m256i v_lo         = _mm256_blend_epi32(magic_i_lo, v, 0b01010101);         /* Blend the 32 lowest significant bits of v with magic_int_lo                                                   */
    __m256i v_hi         = _mm256_srli_epi64(v, 32);                              /* Extract the 32 most significant bits of v                                                                     */
            v_hi         = _mm256_xor_si256(v_hi, magic_i_hi32);                  /* Blend v_hi with 0x45300000                                                                                    */
    __m256d v_hi_dbl     = _mm256_sub_pd(_mm256_castsi256_pd(v_hi), magic_d_all); /* Compute in double precision:                                                                                  */
    __m256d result       = _mm256_add_pd(v_hi_dbl, _mm256_castsi256_pd(v_lo));    /* (v_hi - magic_d_all) + v_lo  Do not assume associativity of floating point addition !!                        */
            return result;                                                        /* With gcc use -O3, then -fno-associative-math is default. Do not use -Ofast, which enables -fassociative-math! */
                                                                                  /* With icc use -fp-model precise                                                                                */
}


int main(){
    int i;
    uint64_t j;
    __m256i j_4;
    __m256d v;
    double x[4];
    double x0, x1, a0, a1;

    j = 0ull;
    printf("\nAccurate int64_to_double\n");
    for (i = 0; i < 260; i++){
        j_4= _mm256_set_epi64x(0, 0, -j, j);

        v  = int64_to_double_fast_precise(j_4);
        _mm256_storeu_pd(x,v);
        x0 = x[0];
        x1 = x[1];
        a0 = _mm_cvtsd_f64(_mm_cvtsi64_sd(_mm_setzero_pd(),j));
        a1 = _mm_cvtsd_f64(_mm_cvtsi64_sd(_mm_setzero_pd(),-j));
        printf(" j =%21li   v =%23.1f   v=%23.1f   -v=%23.1f   -v=%23.1f   d=%.1f   d=%.1f\n", j, x0, a0, x1, a1, x0-a0, x1-a1);
        j  = j+(j>>2)-(j>>5)+1ull;
    }
    
    j = 0ull;
    printf("\nAccurate uint64_to_double\n");
    for (i = 0; i < 260; i++){
        if (i==258){j=-1;}
        if (i==259){j=-2;}
        j_4= _mm256_set_epi64x(0, 0, -j, j);

        v  = uint64_to_double_fast_precise(j_4);
        _mm256_storeu_pd(x,v);
        x0 = x[0];
        x1 = x[1];
        a0 = (double)((uint64_t)j);
        a1 = (double)((uint64_t)-j);
        printf(" j =%21li   v =%23.1f   v=%23.1f   -v=%23.1f   -v=%23.1f   d=%.1f   d=%.1f\n", j, x0, a0, x1, a1, x0-a0, x1-a1);
        j  = j+(j>>2)-(j>>5)+1ull;
    }
    return 0;
}

The conversions may fail if unsafe math optimization options are enabled. With gcc, -O3 is safe, but -Ofast may lead to wrong results, because we may not assume the associativity of floating-point addition here (the same holds for Mysticial's conversions). With icc, use -fp-model precise.
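A scalar model of why reassociation breaks it, for input v = 1 in the unsigned version (constants as in uint64_to_double_fast_precise; hex float literals need C99):

#include <stdio.h>

int main(void){
    double v_lo  = 0x1.0000000000001p+52;   /*  2^52 + 1:  low 32 bits of v blended below 2^52  */
    double v_hi  = 0x1p+84;                 /*  2^84:      high 32 bits of v are zero           */
    double magic = 0x1.00000001p+84;        /*  2^84 + 2^52                                     */
    printf("%.1f\n", (v_hi - magic) + v_lo);   /*  1.0: the intended order, exact               */
    printf("%.1f\n", v_hi + (v_lo - magic));   /*  0.0: (2^52+1) - (2^84+2^52) rounds to -2^84  */
    return 0;
}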



Fast and accurate conversion by splitting the 64-bit integers into a 32-bit low and a 32-bit high part.

We assume that both the integer input and the double output are in 256-bit wide AVX registers. Two approaches are considered:

  1. int64_to_double_based_on_cvtsi2sd(): as suggested in the comments on the question, use cvtsi2sd 4 times together with some data shuffling. Unfortunately, both cvtsi2sd and the data shuffling instructions need execution port 5. This limits the performance of this approach.

  2. int64_to_double_full_range(): we can use Mysticial's fast conversion method twice in order to get an accurate conversion for the full 64-bit integer range. The 64-bit integer is split into a 32-bit low and a 32-bit high part, similarly to the answer to this question. Each of these pieces is suited to Mysticial's integer-to-double conversion. Finally, the high part is multiplied by 2^32 and added to the low part. The signed conversion is a bit more complicated than the unsigned conversion (uint64_to_double_full_range()), because srai_epi64() doesn't exist.

Code:

#include <stdio.h>
#include <immintrin.h>
#include <stdint.h>

/* 
gcc -O3 -Wall -m64 -mfma -mavx2 -march=broadwell cvt_int_64_double.c
./a.out A
time ./a.out B
time ./a.out C
etc.
*/


inline __m256d uint64_to_double256(__m256i x){                  /*  Mysticial's fast uint64_to_double. Works for inputs in the range: [0, 2^52)     */
    x = _mm256_or_si256(x, _mm256_castpd_si256(_mm256_set1_pd(0x0010000000000000)));
    return _mm256_sub_pd(_mm256_castsi256_pd(x), _mm256_set1_pd(0x0010000000000000));
}

inline __m256d int64_to_double256(__m256i x){                   /*  Mysticial's fast int64_to_double. Works for inputs in the range: (-2^51, 2^51)  */
    x = _mm256_add_epi64(x, _mm256_castpd_si256(_mm256_set1_pd(0x0018000000000000)));
    return _mm256_sub_pd(_mm256_castsi256_pd(x), _mm256_set1_pd(0x0018000000000000));
}


__m256d int64_to_double_full_range(const __m256i v)
{
    __m256i msk_lo       =_mm256_set1_epi64x(0xFFFFFFFF);
    __m256d cnst2_32_dbl =_mm256_set1_pd(4294967296.0);                 /* 2^32                                                                    */

    __m256i v_lo         = _mm256_and_si256(v,msk_lo);                  /* extract the 32 lowest significant bits of v                             */
    __m256i v_hi         = _mm256_srli_epi64(v,32);                     /* 32 most significant bits of v. srai_epi64 doesn't exist                 */
    __m256i v_sign       = _mm256_srai_epi32(v,32);                     /* broadcast sign bit to the 32 most significant bits                      */
            v_hi         = _mm256_blend_epi32(v_hi,v_sign,0b10101010);  /* restore the correct sign of v_hi                                        */
    __m256d v_lo_dbl     = int64_to_double256(v_lo);                    /* v_lo is within specified range of int64_to_double                       */ 
    __m256d v_hi_dbl     = int64_to_double256(v_hi);                    /* v_hi is within specified range of int64_to_double                       */ 
            v_hi_dbl     = _mm256_mul_pd(cnst2_32_dbl,v_hi_dbl);        /* _mm256_mul_pd and _mm256_add_pd may compile to a single fma instruction */
    return _mm256_add_pd(v_hi_dbl,v_lo_dbl);                            /* rounding occurs if the integer doesn't exist as a double                */   
}


__m256d int64_to_double_based_on_cvtsi2sd(const __m256i v)
{   __m128d zero         = _mm_setzero_pd();                            /* to avoid uninitialized variables in _mm_cvtsi64_sd                      */
    __m128i v_lo         = _mm256_castsi256_si128(v);
    __m128i v_hi         = _mm256_extracti128_si256(v,1);
    __m128d v_0          = _mm_cvtsi64_sd(zero,_mm_cvtsi128_si64(v_lo));
    __m128d v_2          = _mm_cvtsi64_sd(zero,_mm_cvtsi128_si64(v_hi));
    __m128d v_1          = _mm_cvtsi64_sd(zero,_mm_extract_epi64(v_lo,1));
    __m128d v_3          = _mm_cvtsi64_sd(zero,_mm_extract_epi64(v_hi,1));
    __m128d v_01         = _mm_unpacklo_pd(v_0,v_1);
    __m128d v_23         = _mm_unpacklo_pd(v_2,v_3);
    __m256d v_dbl        = _mm256_castpd128_pd256(v_01);
            v_dbl        = _mm256_insertf128_pd(v_dbl,v_23,1);
    return v_dbl;
}


__m256d uint64_to_double_full_range(const __m256i v)                    
{
    __m256i msk_lo       =_mm256_set1_epi64x(0xFFFFFFFF);
    __m256d cnst2_32_dbl =_mm256_set1_pd(4294967296.0);                 /* 2^32                                                                    */

    __m256i v_lo         = _mm256_and_si256(v,msk_lo);                  /* extract the 32 lowest significant bits of v                             */
    __m256i v_hi         = _mm256_srli_epi64(v,32);                     /* 32 most significant bits of v                                           */
    __m256d v_lo_dbl     = uint64_to_double256(v_lo);                   /* v_lo is within specified range of uint64_to_double                      */ 
    __m256d v_hi_dbl     = uint64_to_double256(v_hi);                   /* v_hi is within specified range of uint64_to_double                      */ 
            v_hi_dbl     = _mm256_mul_pd(cnst2_32_dbl,v_hi_dbl);        
    return _mm256_add_pd(v_hi_dbl,v_lo_dbl);                            /* rounding may occur for inputs >2^52                                     */ 
}



int main(int argc, char **argv){
  int i;
  uint64_t j;
  __m256i j_4, j_inc;
  __m256d v, v_acc;
  double x[4];
  char test = argv[1][0];

  if (test=='A'){               /* test the conversions for several integer values                                       */
    j = 1ull;
    printf("\nint64_to_double_full_range\n");
    for (i = 0; i<30; i++){
      j_4= _mm256_set_epi64x(j-3,j+3,-j,j);
      v  = int64_to_double_full_range(j_4);
      _mm256_storeu_pd(x,v);
      printf("j =%21li    v =%23.1f    -v=%23.1f    v+3=%23.1f    v-3=%23.1f  \n",j,x[0],x[1],x[2],x[3]);
      j  = j*7ull;
    }

    j = 1ull;
    printf("\nint64_to_double_based_on_cvtsi2sd\n");
    for (i = 0; i<30; i++){
      j_4= _mm256_set_epi64x(j-3,j+3,-j,j);
      v  = int64_to_double_based_on_cvtsi2sd(j_4);
      _mm256_storeu_pd(x,v);
      printf("j =%21li    v =%23.1f    -v=%23.1f    v+3=%23.1f    v-3=%23.1f  \n",j,x[0],x[1],x[2],x[3]);
      j  = j*7ull;
    }

    j = 1ull;                       
    printf("\nuint64_to_double_full_range\n");
    for (i = 0; i<30; i++){
      j_4= _mm256_set_epi64x(j-3,j+3,j,j);
      v  = uint64_to_double_full_range(j_4);
      _mm256_storeu_pd(x,v);
      printf("j =%21lu    v =%23.1f   v+3=%23.1f    v-3=%23.1f \n",j,x[0],x[2],x[3]);
      j  = j*7ull;    
    }
  }
  else{
    j_4   = _mm256_set_epi64x(-123,-4004,-312313,-23412731);  
    j_inc = _mm256_set_epi64x(1,1,1,1);  
    v_acc = _mm256_setzero_pd();
    switch(test){

      case 'B' :{                  
        printf("\nLatency int64_to_double_cvtsi2sd()\n");      /* simple test to get a rough idea of the latency of int64_to_double_cvtsi2sd()     */
        for (i = 0; i<1000000000; i++){
          v  =int64_to_double_based_on_cvtsi2sd(j_4);
          j_4= _mm256_castpd_si256(v);                         /* cast without conversion, use output as an input in the next step                 */
        }
        _mm256_storeu_pd(x,v);
      }
      break;

      case 'C' :{                  
        printf("\nLatency int64_to_double_full_range()\n");    /* simple test to get a rough idea of the latency of int64_to_double_full_range()    */
        for (i = 0; i<1000000000; i++){
          v  = int64_to_double_full_range(j_4);
          j_4= _mm256_castpd_si256(v);
        }
        _mm256_storeu_pd(x,v);
      }
      break;

      case 'D' :{                  
        printf("\nThroughput int64_to_double_cvtsi2sd()\n");   /* simple test to get a rough idea of the throughput of int64_to_double_cvtsi2sd()   */
        for (i = 0; i<1000000000; i++){
          j_4   = _mm256_add_epi64(j_4,j_inc);                 /* each step a different input                                                       */
          v     = int64_to_double_based_on_cvtsi2sd(j_4);
          v_acc = _mm256_xor_pd(v,v_acc);                      /* use somehow the results                                                           */
        }
        _mm256_storeu_pd(x,v_acc);
      }
      break;

      case 'E' :{                  
        printf("\nThroughput int64_to_double_full_range()\n"); /* simple test to get a rough idea of the throughput of int64_to_double_full_range() */
        for (i = 0; i<1000000000; i++){
          j_4   = _mm256_add_epi64(j_4,j_inc);  
          v     = int64_to_double_full_range(j_4);
          v_acc = _mm256_xor_pd(v,v_acc);           
        }    
        _mm256_storeu_pd(x,v_acc);
      }
      break;

      default : {}
    }  
    printf("v =%23.1f    -v =%23.1f    v =%23.1f    -v =%23.1f  \n",x[0],x[1],x[2],x[3]);
  }

  return 0;
}

The actual performance of these functions may depend on the surrounding code and on the CPU generation.

Timing results for 1e9 conversions (256 bits wide) with the simple tests B, C, D, and E from the code above, on an Intel Skylake i5 6500 system:

Latency experiment int64_to_double_based_on_cvtsi2sd()      (test B)  5.02 sec.
Latency experiment int64_to_double_full_range()             (test C)  3.77 sec.
Throughput experiment int64_to_double_based_on_cvtsi2sd()   (test D)  2.82 sec.
Throughput experiment int64_to_double_full_range()          (test E)  1.07 sec.

The difference in throughput between int64_to_double_full_range() and int64_to_double_based_on_cvtsi2sd() is larger than I expected.

Thanks @mysticial and @wim for the full-range i64->f64. I came up with a full-range truncating f64->i64 for the Highway SIMD wrapper.

The first version tried to change the rounding mode, but Clang reorders them and ignores asm volatile, memory/cc clobbers, and even an atomic fence. It's not clear to me how to make that safe; NOINLINE works but causes lots of spilling.

The second version (Compiler Explorer link) emulates FP renormalization and turns out to be faster according to llvm-mca (8-10 cycles rthroughput/total).

// Full-range F64 -> I64 conversion
#include <hwy/highway.h>

namespace hwy {
namespace HWY_NAMESPACE {

HWY_API Vec256<int64_t> I64FromF64(Full256<int64_t> di, const Vec256<double> v) {
  const RebindToFloat<decltype(di)> dd;
  using VD = decltype(v);
  using VI = decltype(Zero(di));

  const VI k0 = Zero(di);
  const VI k1 = Set(di, 1);
  const VI k51 = Set(di, 51);

  // Exponent indicates whether the number can be represented as int64_t.
  const VI biased_exp = ShiftRight<52>(BitCast(di, v)) & Set(di, 0x7FF);
  const VI exp = biased_exp - Set(di, 0x3FF);
  const auto in_range = exp < Set(di, 63);

  // If we were to cap the exponent at 51 and add 2^52, the number would be in
  // [2^52, 2^53) and mantissa bits could be read out directly. We need to
  // round-to-0 (truncate), but changing rounding mode in MXCSR hits a
  // compiler reordering bug: https://gcc.godbolt.org/z/4hKj6c6qc . We instead
  // manually shift the mantissa into place (we already have many of the
  // inputs anyway).
  const VI shift_mnt = Max(k51 - exp, k0);
  const VI shift_int = Max(exp - k51, k0);
  const VI mantissa = BitCast(di, v) & Set(di, (1ULL << 52) - 1);
  // Include implicit 1-bit; shift by one more to ensure it's in the mantissa.
  const VI int52 = (mantissa | Set(di, 1ULL << 52)) >> (shift_mnt + k1);
  // For inputs larger than 2^52, insert zeros at the bottom.
  const VI shifted = int52 << shift_int;
  // Restore the one bit lost when shifting in the implicit 1-bit.
  const VI restored = shifted | ((mantissa & k1) << (shift_int - k1));

  // Saturate to LimitsMin (unchanged when negating below) or LimitsMax.
  const VI sign_mask = BroadcastSignBit(BitCast(di, v));
  const VI limit = Set(di, LimitsMax<int64_t>()) - sign_mask;
  const VI magnitude = IfThenElse(in_range, restored, limit);

  // If the input was negative, negate the integer (two's complement).
  return (magnitude ^ sign_mask) - sign_mask;
}

void Test(const double* pd, int64_t* pi) {
    Full256<int64_t> di;
    Full256<double> dd;
    for (size_t i = 0; i < 256; i += Lanes(di)) {
      Store(I64FromF64(di, Load(dd, pd + i)), di, pi + i);
    }
}

}  // namespace HWY_NAMESPACE
}  // namespace hwy
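For reference, a scalar model of the same saturating truncation (my own sketch, useful for lane-by-lane testing; NaN maps to a limit via the sign bit, matching the in_range test above):

#include <math.h>
#include <stdint.h>

int64_t scalar_i64_from_f64(double v){
    if (!(fabs(v) < 9223372036854775808.0))        /*  |v| >= 2^63, or NaN (comparison is false)  */
        return signbit(v) ? INT64_MIN : INT64_MAX; /*  saturate like the vector `limit`           */
    return (int64_t)v;                             /*  C casts truncate toward zero               */
}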

Would be grateful for comments if anyone sees any potential for simplifying the algorithm.