Use pandas.read_csv docstring in the function I'm writing
I want to write a function with the following header:
def split_csv(file, sep=";", output_path=".", nrows=None, chunksize=None, low_memory=True, usecols=None):
As you can see, several of my parameters are the same as those of pd.read_csv. What I would like to do is forward the docstrings for those parameters from read_csv to my own function, without having to copy/paste them.
Edit: as far as I can tell, there is no existing out-of-the-box solution for this, so maybe building one is in order. Here is my idea:
some_new_fancy_library.get_doc(for_function=pandas.read_csv, for_parameters=['sep', 'nrows'])
would output:
{'sep': 'doc as found in the docstring',
 'nrows': 'doc as found in the docstring', ...}
Then it would just be a matter of inserting the dictionary's values into my own function's docstring.
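A rough sketch of what I have in mind (get_doc is a made-up name, and it assumes numpydoc-style docstrings where each parameter introduces a "name : type" line):

import re

def get_doc(for_function, for_parameters):
    # Split on "name :" headers; the capturing group keeps the headers in the
    # result, so chunks alternates between headers and their following bodies.
    chunks = re.split(r'(\w+ :)', for_function.__doc__)
    return {header.rstrip(' :'): (header + body).strip()
            for header, body in zip(chunks[1::2], chunks[2::2])
            if header.rstrip(' :') in for_parameters}

import pandas as pd
print(get_doc(pd.read_csv, ['sep', 'nrows']).keys())  # dict_keys(['sep', 'nrows'])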
Cheers
You can parse the docstring with a regular expression and return the parameters that match your function's:
import re
import pandas as pd

def split_csv(file, sep=";", output_path=".", nrows=None, chunksize=None, low_memory=True, usecols=None):
    pass

pat = re.compile(r'(\w+ :)')  # capturing group for parameter headers such as "sep :"
splitted = pat.split(pd.read_csv.__doc__)
# Compare the parsed docstring against your function's arguments and keep only the matching pieces
docstrings = '\n'.join(''.join(splitted[i:i + 2]) for i, s in enumerate(splitted)
                       if s.rstrip(' :') in split_csv.__code__.co_varnames)
split_csv.__doc__ = docstrings
help(split_csv)
# Help on function split_csv in module __main__:
#
# split_csv(file, sep=';', output_path='.', nrows=None, chunksize=None, low_memory=True, usecols=None)
# sep : str, default ','
# Delimiter to use. If sep is None, the C engine cannot automatically detect
# the separator, but the Python parsing engine can, meaning the latter will
# be used and automatically detect the separator by Python's builtin sniffer
# tool, ``csv.Sniffer``. In addition, separators longer than 1 character and
# different from ``'\s+'`` will be interpreted as regular expressions and
# will also force the use of the Python parsing engine. Note that regex
# delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``
#
# usecols : list-like or callable, default None
# Return a subset of the columns. If list-like, all elements must either
# be positional (i.e. integer indices into the document columns) or strings
# that correspond to column names provided either by the user in `names` or
# inferred from the document header row(s). For example, a valid list-like
# `usecols` parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Element
# order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.
# To instantiate a DataFrame from ``data`` with element order preserved use
# ``pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']]`` for columns
# in ``['foo', 'bar']`` order or
# ``pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']]``
# for ``['bar', 'foo']`` order.
#
# If callable, the callable function will be evaluated against the column
# names, returning names where the callable function evaluates to True. An
# example of a valid callable argument would be ``lambda x: x.upper() in
# ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
# parsing time and lower memory usage.
#
# nrows : int, default None
# Number of rows of file to read. Useful for reading pieces of large files
#
# chunksize : int, default None
# Return TextFileReader object for iteration.
# See the `IO Tools docs
# <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
# for more information on ``iterator`` and ``chunksize``.
#
# low_memory : boolean, default True
# Internally process the file in chunks, resulting in lower memory use
# while parsing, but possibly mixed type inference. To ensure no mixed
# types either set False, or specify the type with the `dtype` parameter.
# Note that the entire file is read into a single DataFrame regardless,
# use the `chunksize` or `iterator` parameter to return the data in chunks.
# (Only valid with C parser)
Of course, this relies on your parameters having exactly the same names as those of the function you copy from. As you can see, you will need to add the docstrings that don't match (e.g. file, output_path) yourself.
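If you need this for more than one function, the same idea can be wrapped in a decorator. A minimal sketch (copy_doc is a made-up name, and it assumes the source function has a numpydoc-style docstring):

import re
import pandas as pd

def copy_doc(source):
    # Hypothetical decorator: copy the parameter docs that `source` and the
    # decorated function have in common into the decorated function's docstring.
    def decorator(func):
        chunks = re.split(r'(\w+ :)', source.__doc__)
        args = func.__code__.co_varnames[:func.__code__.co_argcount]
        func.__doc__ = '\n'.join(header + body
                                 for header, body in zip(chunks[1::2], chunks[2::2])
                                 if header.rstrip(' :') in args)
        return func
    return decorator

@copy_doc(pd.read_csv)
def split_csv(file, sep=";", output_path=".", nrows=None, chunksize=None, low_memory=True, usecols=None):
    pass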