Converting a series of ints to strings - Why is apply much faster than astype?

I have a pandas.Series containing integers, but I need to convert them to strings for some downstream tools. So suppose I have a Series object:

import numpy as np
import pandas as pd

x = pd.Series(np.random.randint(0, 100, 1000000))

On Stack Overflow and other websites, I've seen most people argue that the best way to do this is:

%%timeit
x = x.astype(str)

This takes about 2 seconds.

When I use x = x.apply(str), it only takes about 0.2 seconds.

Why is x.astype(str) so slow? Should the recommended way be x.apply(str)?

I'm mainly interested in Python 3's behaviour for this.

Performance

It's worth looking at actual performance before beginning any investigation since, contrary to popular opinion, list(map(str, x)) appears to be slower than x.apply(str):

import pandas as pd, numpy as np

### Versions: Pandas 0.20.3, Numpy 1.13.1, Python 3.6.2 ###

x = pd.Series(np.random.randint(0, 100, 100000))

%timeit x.apply(str)          # 42ms   (1)
%timeit x.map(str)            # 42ms   (2)
%timeit x.astype(str)         # 559ms  (3)
%timeit [str(i) for i in x]   # 566ms  (4)
%timeit list(map(str, x))     # 536ms  (5)
%timeit x.values.astype(str)  # 25ms   (6)

Points to note:

  1. (5) is marginally quicker than (3) / (4), which we expect as more work is moved into C [assuming no lambda function is used].
  2. (6) is by far the fastest.
  3. (1) / (2) are similar.
  4. (3) / (4) are similar.
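
As a quick sanity check (a sketch, reusing the x defined above), the different approaches return the same string values even though the containers and dtypes differ:

a = x.astype(str)                 # Series, object dtype
b = x.apply(str)                  # Series, object dtype
c = x.values.astype(str)          # NumPy array with a unicode dtype

assert (a == b).all()             # same values element-wise
assert (a.values == c).all()      # and they match the NumPy result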

Why is x.map / x.apply fast?

This appears to be because it uses fast compiled Cython code:

cpdef ndarray[object] astype_str(ndarray arr):
    cdef:
        Py_ssize_t i, n = arr.size
        ndarray[object] result = np.empty(n, dtype=object)

    for i in range(n):
        # we can use the unsafe version because we know `result` is mutable
        # since it was created from `np.empty`
        util.set_value_at_unsafe(result, i, str(arr[i]))

    return result
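
For intuition, a rough pure-Python equivalent of what this loop does (just a sketch, not the actual pandas code path) would be:

import numpy as np

def astype_str_pure_python(arr):
    # allocate an object array and fill it with str() of every element,
    # mirroring the Cython loop above (minus the unsafe fast-path setter)
    result = np.empty(arr.size, dtype=object)
    for i in range(arr.size):
        result[i] = str(arr[i])
    return result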

Why is x.astype(str) slow?

Pandas applies str to each item in the series, rather than using the Cython code above.

Hence the performance is comparable to [str(i) for i in x] / list(map(str, x)).

Why is x.values.astype(str) so fast?

NumPy does not apply a function on each element of the array. One description I found:

If you did s.values.astype(str) what you get back is an object holding int. This is numpy doing the conversion, whereas pandas iterates over each item and calls str(item) on it. So if you do s.astype(str) you have an object holding str.

There is a technical reason why the numpy version hasn't been implemented in the case of no-nulls.
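
To see the difference in return types for yourself (a quick check, assuming pandas and NumPy as imported above):

s = pd.Series(np.random.randint(0, 100, 5))

print(s.astype(str).dtype)            # object -- each element is a Python str
print(type(s.astype(str).iloc[0]))    # <class 'str'>
print(s.values.astype(str).dtype)     # a NumPy unicode dtype such as <U21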

Let's start with a bit of general advice: if you're interested in finding the bottlenecks of Python code, you can use a profiler to find the functions/parts that eat up most of the time. In this case I use a line profiler because you can actually see the implementation and the time spent on each line.

However, these tools don't work for C or Cython by default. Given that CPython (that's the Python interpreter I'm using), NumPy and pandas make heavy use of C and Cython, there will be a limit to how far I can get with profiling.

Actually: one could probably extend the profiling to the Cython code and probably also to the C code by recompiling it with debug symbols and tracing, but it's no easy task to compile these libraries, so I won't do that (but if someone likes to do that, the Cython documentation includes a page about profiling Cython code).
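
For reference only, a rough sketch of what a self-compiled, line-traceable Cython cell could look like; the directives and compile flag are my reading of that Cython documentation page, not something I have verified against pandas itself:

%load_ext cython

%%cython --compile-args=-DCYTHON_TRACE=1
# cython: linetrace=True
# cython: binding=True
# With the macro and directives above, compiled functions carry enough
# information for cProfile / line_profiler to trace them line by line.
def traced_example(n):
    total = 0
    for i in range(n):
        total += i
    return total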

But let's see how far I can get:

Line-profiling the Python code

I'm going to use line-profiler and a Jupyter notebook here:

%load_ext line_profiler

import numpy as np
import pandas as pd

x = pd.Series(np.random.randint(0, 100, 100000))

Profiling x.astype

%lprun -f x.astype x.astype(str)
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    87                                                   @wraps(func)
    88                                                   def wrapper(*args, **kwargs):
    89         1           12     12.0      0.0              old_arg_value = kwargs.pop(old_arg_name, None)
    90         1            5      5.0      0.0              if old_arg_value is not None:
    91                                                           if mapping is not None:
   ...
   118         1       663354 663354.0    100.0              return func(*args, **kwargs)

So that's just a decorator and 100% of the time is spent in the decorated function. So let's profile the decorated function:

%lprun -f x.astype.__wrapped__ x.astype(str)
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
  3896                                               @deprecate_kwarg(old_arg_name='raise_on_error', new_arg_name='errors',
  3897                                                                mapping={True: 'raise', False: 'ignore'})
  3898                                               def astype(self, dtype, copy=True, errors='raise', **kwargs):
  3899                                                   """
  ...
  3975                                                   """
  3976         1           28     28.0      0.0          if is_dict_like(dtype):
  3977                                                       if self.ndim == 1:  # i.e. Series
  ...
  4001                                           
  4002                                                   # else, only a single dtype is given
  4003         1           14     14.0      0.0          new_data = self._data.astype(dtype=dtype, copy=copy, errors=errors,
  4004         1       685863 685863.0     99.9                                       **kwargs)
  4005         1          340    340.0      0.0          return self._constructor(new_data).__finalize__(self)

Source

Again one line is the bottleneck, so let's check the _data.astype method:

%lprun -f x._data.astype x.astype(str)
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
  3461                                               def astype(self, dtype, **kwargs):
  3462         1       695866 695866.0    100.0          return self.apply('astype', dtype=dtype, **kwargs)

Okay, another delegator; let's see what _data.apply does:

%lprun -f x._data.apply x.astype(str)
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
  3251                                               def apply(self, f, axes=None, filter=None, do_integrity_check=False,
  3252                                                         consolidate=True, **kwargs):
  3253                                                   """
  ...
  3271                                                   """
  3272                                           
  3273         1           12     12.0      0.0          result_blocks = []
  ...
  3309                                           
  3310         1           10     10.0      0.0          aligned_args = dict((k, kwargs[k])
  3311         1           29     29.0      0.0                              for k in align_keys
  3312                                                                       if hasattr(kwargs[k], 'reindex_axis'))
  3313                                           
  3314         2           28     14.0      0.0          for b in self.blocks:
  ...
  3329         1       674974 674974.0    100.0              applied = getattr(b, f)(**kwargs)
  3330         1           30     30.0      0.0              result_blocks = _extend_blocks(applied, result_blocks)
  3331                                           
  3332         1           10     10.0      0.0          if len(result_blocks) == 0:
  3333                                                       return self.make_empty(axes or self.axes)
  3334         1           10     10.0      0.0          bm = self.__class__(result_blocks, axes or self.axes,
  3335         1           76     76.0      0.0                              do_integrity_check=do_integrity_check)
  3336         1           13     13.0      0.0          bm._consolidate_inplace()
  3337         1            7      7.0      0.0          return bm

Source

And again ... one function call is taking all the time, this time it's x._data.blocks[0].astype:

%lprun -f x._data.blocks[0].astype x.astype(str)
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
   542                                               def astype(self, dtype, copy=False, errors='raise', values=None, **kwargs):
   543         1           18     18.0      0.0          return self._astype(dtype, copy=copy, errors=errors, values=values,
   544         1       671092 671092.0    100.0                              **kwargs)

.. which is another delegator ...

%lprun -f x._data.blocks[0]._astype x.astype(str)
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
   546                                               def _astype(self, dtype, copy=False, errors='raise', values=None,
   547                                                           klass=None, mgr=None, **kwargs):
   548                                                   """
   ...
   557                                                   """
   558         1           11     11.0      0.0          errors_legal_values = ('raise', 'ignore')
   559                                           
   560         1            8      8.0      0.0          if errors not in errors_legal_values:
   561                                                       invalid_arg = ("Expected value of kwarg 'errors' to be one of {}. "
   562                                                                      "Supplied value is '{}'".format(
   563                                                                          list(errors_legal_values), errors))
   564                                                       raise ValueError(invalid_arg)
   565                                           
   566         1           23     23.0      0.0          if inspect.isclass(dtype) and issubclass(dtype, ExtensionDtype):
   567                                                       msg = ("Expected an instance of {}, but got the class instead. "
   568                                                              "Try instantiating 'dtype'.".format(dtype.__name__))
   569                                                       raise TypeError(msg)
   570                                           
   571                                                   # may need to convert to categorical
   572                                                   # this is only called for non-categoricals
   573         1           72     72.0      0.0          if self.is_categorical_astype(dtype):
   ...
   595                                           
   596                                                   # astype processing
   597         1           16     16.0      0.0          dtype = np.dtype(dtype)
   598         1           19     19.0      0.0          if self.dtype == dtype:
   ...
   603         1            8      8.0      0.0          if klass is None:
   604         1           13     13.0      0.0              if dtype == np.object_:
   605                                                           klass = ObjectBlock
   606         1            6      6.0      0.0          try:
   607                                                       # force the copy here
   608         1            7      7.0      0.0              if values is None:
   609                                           
   610         1            8      8.0      0.0                  if issubclass(dtype.type,
   611         1           14     14.0      0.0                                (compat.text_type, compat.string_types)):
   612                                           
   613                                                               # use native type formatting for datetime/tz/timedelta
   614         1           15     15.0      0.0                      if self.is_datelike:
   615                                                                   values = self.to_native_types()
   616                                           
   617                                                               # astype formatting
   618                                                               else:
   619         1            8      8.0      0.0                          values = self.values
   620                                           
   621                                                           else:
   622                                                               values = self.get_values(dtype=dtype)
   623                                           
   624                                                           # _astype_nansafe works fine with 1-d only
   625         1       665777 665777.0     99.9                  values = astype_nansafe(values.ravel(), dtype, copy=True)
   626         1           32     32.0      0.0                  values = values.reshape(self.shape)
   627                                           
   628         1           17     17.0      0.0              newb = make_block(values, placement=self.mgr_locs, dtype=dtype,
   629         1          269    269.0      0.0                                klass=klass)
   630                                                   except:
   631                                                       if errors == 'raise':
   632                                                           raise
   633                                                       newb = self.copy() if copy else self
   634                                           
   635         1            8      8.0      0.0          if newb.is_numeric and self.is_numeric:
   ...
   642         1            6      6.0      0.0          return newb

Source

... okay, still not there. Let's check out astype_nansafe:

%lprun -f pd.core.internals.astype_nansafe x.astype(str)
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
   640                                           def astype_nansafe(arr, dtype, copy=True):
   641                                               """ return a view if copy is False, but
   642                                                   need to be very careful as the result shape could change! """
   643         1           13     13.0      0.0      if not isinstance(dtype, np.dtype):
   644                                                   dtype = pandas_dtype(dtype)
   645                                           
   646         1            8      8.0      0.0      if issubclass(dtype.type, text_type):
   647                                                   # in Py3 that's str, in Py2 that's unicode
   648         1       663317 663317.0    100.0          return lib.astype_unicode(arr.ravel()).reshape(arr.shape)
   ...

Source

Again one line takes 100%, so I'll go one function further:

%lprun -f pd.core.dtypes.cast.lib.astype_unicode x.astype(str)

UserWarning: Could not extract a code object for the object <built-in function astype_unicode>

Okay, we found a built-in function, which means it's a C function. In this case it's a Cython function. Either way, it means we can't dig deeper with line-profiler, so I'll stop here for now.
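
(A quick way to confirm that the target really is compiled, and therefore invisible to line-profiler, is to inspect it; the attribute path below is the same one used in the %lprun call above:)

import inspect

fn = pd.core.dtypes.cast.lib.astype_unicode
print(type(fn))               # <class 'builtin_function_or_method'>
print(inspect.isbuiltin(fn))  # True -> no Python bytecode to line-profile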

Profiling x.apply

%lprun -f x.apply x.apply(str)
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
  2426                                               def apply(self, func, convert_dtype=True, args=(), **kwds):
  2427                                                   """
  ...
  2523                                                   """
  2524         1           84     84.0      0.0          if len(self) == 0:
  2525                                                       return self._constructor(dtype=self.dtype,
  2526                                                                                index=self.index).__finalize__(self)
  2527                                           
  2528                                                   # dispatch to agg
  2529         1           11     11.0      0.0          if isinstance(func, (list, dict)):
  2530                                                       return self.aggregate(func, *args, **kwds)
  2531                                           
  2532                                                   # if we are a string, try to dispatch
  2533         1           12     12.0      0.0          if isinstance(func, compat.string_types):
  2534                                                       return self._try_aggregate_string_function(func, *args, **kwds)
  2535                                           
  2536                                                   # handle ufuncs and lambdas
  2537         1            7      7.0      0.0          if kwds or args and not isinstance(func, np.ufunc):
  2538                                                       f = lambda x: func(x, *args, **kwds)
  2539                                                   else:
  2540         1            6      6.0      0.0              f = func
  2541                                           
  2542         1          154    154.0      0.1          with np.errstate(all='ignore'):
  2543         1           11     11.0      0.0              if isinstance(f, np.ufunc):
  2544                                                           return f(self)
  2545                                           
  2546                                                       # row-wise access
  2547         1          188    188.0      0.1              if is_extension_type(self.dtype):
  2548                                                           mapped = self._values.map(f)
  2549                                                       else:
  2550         1         6238   6238.0      3.3                  values = self.asobject
  2551         1       181910 181910.0     95.5                  mapped = lib.map_infer(values, f, convert=convert_dtype)
  2552                                           
  2553         1           28     28.0      0.0          if len(mapped) and isinstance(mapped[0], Series):
  2554                                                       from pandas.core.frame import DataFrame
  2555                                                       return DataFrame(mapped.tolist(), index=self.index)
  2556                                                   else:
  2557         1           19     19.0      0.0              return self._constructor(mapped,
  2558         1         1870   1870.0      1.0                                       index=self.index).__finalize__(self)

Source

Again it's one function that takes most of the time: lib.map_infer ...

%lprun -f pd.core.series.lib.map_infer x.apply(str)
Could not extract a code object for the object <built-in function map_infer>

Okay, that's another Cython function.

This time there is another (although less significant) contributor at ~3%: values = self.asobject. But I'll ignore it for now, because we're interested in the major contributors.

Going into C/Cython

The function called by astype

This is the astype_unicode function:

cpdef ndarray[object] astype_unicode(ndarray arr):
    cdef:
        Py_ssize_t i, n = arr.size
        ndarray[object] result = np.empty(n, dtype=object)

    for i in range(n):
        # we can use the unsafe version because we know `result` is mutable
        # since it was created from `np.empty`
        util.set_value_at_unsafe(result, i, unicode(arr[i]))

    return result

Source

This function uses this helper:

cdef inline set_value_at_unsafe(ndarray arr, object loc, object value):
    cdef:
        Py_ssize_t i, sz
    if is_float_object(loc):
        casted = int(loc)
        if casted == loc:
            loc = casted
    i = <Py_ssize_t> loc
    sz = cnp.PyArray_SIZE(arr)

    if i < 0:
        i += sz
    elif i >= sz:
        raise IndexError('index out of bounds')

    assign_value_1d(arr, i, value)

Source

Which itself uses this C function:

PANDAS_INLINE int assign_value_1d(PyArrayObject* ap, Py_ssize_t _i,
                                  PyObject* v) {
    npy_intp i = (npy_intp)_i;
    char* item = (char*)PyArray_DATA(ap) + i * PyArray_STRIDE(ap, 0);
    return PyArray_DESCR(ap)->f->setitem(v, item, ap);
}

Source

The function called by apply

This is the implementation of the map_infer function:

def map_infer(ndarray arr, object f, bint convert=1):
    cdef:
        Py_ssize_t i, n
        ndarray[object] result
        object val

    n = len(arr)
    result = np.empty(n, dtype=object)
    for i in range(n):
        val = f(util.get_value_at(arr, i))

        # unbox 0-dim arrays, GH #690
        if is_array(val) and PyArray_NDIM(val) == 0:
            # is there a faster way to unbox?
            val = val.item()

        result[i] = val

    if convert:
        return maybe_convert_objects(result,
                                     try_float=0,
                                     convert_datetime=0,
                                     convert_timedelta=0)

    return result

Source

With this helper:

cdef inline object get_value_at(ndarray arr, object loc):
    cdef:
        Py_ssize_t i, sz
        int casted

    if is_float_object(loc):
        casted = int(loc)
        if casted == loc:
            loc = casted
    i = <Py_ssize_t> loc
    sz = cnp.PyArray_SIZE(arr)

    if i < 0 and sz > 0:
        i += sz
    elif i >= sz or sz == 0:
        raise IndexError('index out of bounds')

    return get_value_1d(arr, i)

Source

Which uses this C function:

PANDAS_INLINE PyObject* get_value_1d(PyArrayObject* ap, Py_ssize_t i) {
    char* item = (char*)PyArray_DATA(ap) + i * PyArray_STRIDE(ap, 0);
    return PyArray_Scalar(item, PyArray_DESCR(ap), (PyObject*)ap);
}

Source
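
The PyArray_Scalar call above is the boxing step: pulling an element out of a plain int64 array hands back a freshly created NumPy scalar, whereas an object array already holds Python objects. A tiny illustration (a sketch, not pandas code):

import numpy as np

arr = np.array([1, 2, 3], dtype=np.int64)
obj = arr.astype(object)

print(type(arr[0]))   # <class 'numpy.int64'> -- a boxed NumPy scalar
print(type(obj[0]))   # <class 'int'>         -- already a Python object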

Some thoughts on the Cython code

There are some differences between the Cython code that is eventually called.

The path taken by astype uses unicode, while the apply path uses the function that was passed in. Let's see if that makes a difference (again, IPython/Jupyter makes it really easy to compile Cython code yourself):

%load_ext cython

%%cython

import numpy as np
cimport numpy as np

cpdef object func_called_by_astype(np.ndarray arr):
    cdef np.ndarray[object] ret = np.empty(arr.size, dtype=object)
    for i in range(arr.size):
        ret[i] = unicode(arr[i])
    return ret

cpdef object func_called_by_apply(np.ndarray arr, object f):
    cdef np.ndarray[object] ret = np.empty(arr.size, dtype=object)
    for i in range(arr.size):
        ret[i] = f(arr[i])
    return ret

The timings:

import numpy as np

arr = np.random.randint(0, 10000, 1000000)
%timeit func_called_by_astype(arr)
514 ms ± 11.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit func_called_by_apply(arr, str)
632 ms ± 43.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Okay, there is a difference, but it's the wrong way around: it would actually suggest that apply should be slightly slower.

But remember the asobject call I mentioned earlier in the apply function? Could that be the reason? Let's see:

import numpy as np

arr = np.random.randint(0, 10000, 1000000)
%timeit func_called_by_astype(arr)
557 ms ± 33.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit func_called_by_apply(arr.astype(object), str)
317 ms ± 13.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Now it looks better. The conversion to an object array made the function called by apply much faster. The reason is simple: str is a Python function, and such functions are generally much faster if you already have Python objects, so that NumPy (or pandas) doesn't need to create a Python wrapper for the value stored in the array (which is generally not a Python object, except when the array has dtype object).
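
A minimal way to see this effect in isolation (a sketch, independent of pandas):

import numpy as np

arr = np.random.randint(0, 10000, 1000000)
obj_arr = arr.astype(object)          # elements are now plain Python ints

%timeit [str(v) for v in arr]         # every access first boxes a NumPy scalar
%timeit [str(v) for v in obj_arr]     # str() gets Python ints directly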

However, that doesn't explain the huge difference you saw. My suspicion is that there is an additional difference in the way the arrays are iterated over and the way the elements are set in the result. It could very well be that the:

val = f(util.get_value_at(arr, i))
if is_array(val) and PyArray_NDIM(val) == 0:
    val = val.item()
result[i] = val

part of the map_infer function is faster than:

for i in range(n):
    # we can use the unsafe version because we know `result` is mutable
    # since it was created from `np.empty`
    util.set_value_at_unsafe(result, i, unicode(arr[i]))

which is used by the astype(str) path. The comments in the first function seem to indicate that the author of map_infer actually tried to make the code as fast as possible (see the comment about "is there a faster way to unbox?"), while the other one was perhaps written without special care for performance. But that's just a guess.

Also, on my computer I'm actually already quite close to the performance of x.astype(str) and x.apply(str):

import numpy as np

arr = np.random.randint(0, 100, 1000000)
s = pd.Series(arr)
%timeit s.astype(str)
535 ms ± 23.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit func_called_by_astype(arr)
547 ms ± 21.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)


%timeit s.apply(str)
216 ms ± 8.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit func_called_by_apply(arr.astype(object), str)
272 ms ± 12.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Note that I also checked some other variants that return a different result:

%timeit s.values.astype(str)  # array of strings
407 ms ± 8.56 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit list(map(str, s.values.tolist()))  # list of strings
184 ms ± 5.02 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Interestingly, the Python loop with list and map seems to be the fastest on my computer.
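
A plausible reason, in line with the boxing argument above (again just a guess): .tolist() converts the whole array to plain Python ints in one C-level pass, so the map(str, ...) loop never touches NumPy scalars:

lst = s.values.tolist()
print(type(lst[0]))            # <class 'int'>, not numpy.int64

%timeit list(map(str, lst))    # conversion to str without the one-off .tolist() cost
%timeit s.values.tolist()      # the conversion pass itself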

I actually made a small benchmark, including a plot:

import pandas as pd
import simple_benchmark

def Series_astype(series):
    return series.astype(str)

def Series_apply(series):
    return series.apply(str)

def Series_tolist_map(series):
    return list(map(str, series.values.tolist()))

def Series_values_astype(series):
    return series.values.astype(str)


arguments = {2**i: pd.Series(np.random.randint(0, 100, 2**i)) for i in range(2, 20)}
b = simple_benchmark.benchmark(
    [Series_astype, Series_apply, Series_tolist_map, Series_values_astype],
    arguments,
    argument_name='Series size'
)

%matplotlib notebook
b.plot()

Note that it's a log-log plot because of the huge range of sizes covered in the benchmark. However, lower means faster here.

The results may differ for other versions of Python/NumPy/Pandas, so if you want to compare, these are mine:

Versions
--------
Python 3.6.5
NumPy 1.14.2
Pandas 0.22.0