Column Name Shift using read_csv in Dask

I'm trying to use Intake to catalog a csv dataset. It uses the Dask implementation of read_csv, which in turn uses the pandas implementation.

The problem I'm seeing is that the csv file I'm loading has no index column, so Dask interprets the first column as the index and then shifts the column names one position to the right.

An example:

The datetime (dt) column should be the first column, but when the csv is read it is interpreted as the index, and the column names are shifted so they are offset from their correct positions. I am supplying a list of column names and a dtype dictionary to the read_csv call.

As far as I can tell, if I were using pandas I would pass the index_col=False kwarg to fix this, but Dask deliberately raises an error stating: Keywords 'index' and 'index_col' not supported. Use dd.read_csv(...).set_index('my-index') instead. This appears to be due to limits imposed by parallelization.
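For reference, this is roughly the difference in behavior (a sketch, not from the original post; the Dask error text is paraphrased):

import pandas as pd
import dask.dataframe as dd

# pandas accepts index_col=False, so no column is promoted to the index
pdf = pd.read_csv('data/ufo_scrubbed.csv', header=0, index_col=False)

# Dask rejects the same kwarg on purpose:
# dd.read_csv('data/ufo_scrubbed.csv', index_col=False)
# raises an error along the lines of:
#   Keywords 'index' and 'index_col' not supported.
#   Use dd.read_csv(...).set_index('my-index') instead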

The suggested fix (using set_index('my-index')) doesn't work in this situation, because it expects to read the whole file and then use a column name to set the index. The main problem is that if the names are shifted, I cannot reliably set the index column in the first place.

What is the best way to load a csv that doesn't explicitly contain an index column into Dask, so that the interpreted index column at least keeps the specified column names?

More info:

The toy dataset I'm using: https://www.kaggle.com/NUFORC/ufo-sightings?select=scrubbed.csv

The Intake catalog.yml file I'm using:

name:
  intake-explore-catalog
metadata:
  version: 1
sources:
    ufo_sightings:
      description: data around ufo sightings
      driver: csv
      args:
        urlpath: "{{CATALOG_DIR}}/data/ufo_scrubbed.csv"
        csv_kwargs:
          header: 0
          names: ['dt', 'city', 'state', 'country', 'shape', 'duration_s', 'duration_hm', 'comments', 'date_posted', 'latitude']
          dtype: {'dt': 'str', 'city': 'str', 'state': 'str', 'country': 'str', 'shape': 'str', 'duration_s': 'str', 'duration_hm': 'str', 'comments': 'str', 'date_posted': 'str', 'latitude': 'str'}
          infer_datetime_format: true
      metadata:
        version: 1
        custom_field: blah

I'm loading the catalog and the corresponding dataset with:

cat = intake.open_catalog("catalog.yml")
ufo_ds = cat.ufo_sightings.read()

This produces the read-in dataframe shown above; here is a csv copy of that data:

,dt,city,state,country,shape,duration_s,duration_hm,comments,date_posted,latitude
10/10/1949 20:30,san marcos,tx,us,cylinder,2700,45 minutes,This event took place in early fall around 1949-50. It occurred after a Boy Scout meeting in the Baptist Church. The Baptist Church sit,4/27/2004,29.8830556,-97.9411111
10/10/1949 21:00,lackland afb,tx,,light,7200,1-2 hrs,1949 Lackland AFB&#44 TX.  Lights racing across the sky & making 90 degree turns on a dime.,12/16/2005,29.38421,-98.581082
10/10/1955 17:00,chester (uk/england),,gb,circle,20,20 seconds,Green/Orange circular disc over Chester&#44 England,1/21/2008,53.2,-2.916667
10/10/1956 21:00,edna,tx,us,circle,20,1/2 hour,My older brother and twin sister were leaving the only Edna theater at about 9 PM&#44...we had our bikes and I took a different route home,1/17/2004,28.9783333,-96.6458333
10/10/1960 20:00,kaneohe,hi,us,light,900,15 minutes,AS a Marine 1st Lt. flying an FJ4B fighter/attack aircraft on a solo night exercise&#44 I was at 50&#44000&#39 in a "clean" aircraft (no ordinan,1/22/2004,21.4180556,-157.8036111

compared with the original/raw data csv (no leading comma):

datetime,city,state,country,shape,duration (seconds),duration (hours/min),comments,date posted,latitude,longitude 
10/10/1949 20:30,san marcos,tx,us,cylinder,2700,45 minutes,"This event took place in early fall around 1949-50. It occurred after a Boy Scout meeting in the Baptist Church. The Baptist Church sit",4/27/2004,29.8830556,-97.9411111
10/10/1949 21:00,lackland afb,tx,,light,7200,1-2 hrs,"1949 Lackland AFB&#44 TX.  Lights racing across the sky & making 90 degree turns on a dime.",12/16/2005,29.38421,-98.581082
10/10/1955 17:00,chester (uk/england),,gb,circle,20,20 seconds,"Green/Orange circular disc over Chester&#44 England",1/21/2008,53.2,-2.916667
10/10/1956 21:00,edna,tx,us,circle,20,1/2 hour,"My older brother and twin sister were leaving the only Edna theater at about 9 PM&#44...we had our bikes and I took a different route home",1/17/2004,28.9783333,-96.6458333
10/10/1960 20:00,kaneohe,hi,us,light,900,15 minutes,"AS a Marine 1st Lt. flying an FJ4B fighter/attack aircraft on a solo night exercise&#44 I was at 50&#44000&#39 in a "clean" aircraft (no ordinan",1/22/2004,21.4180556,-157.8036111
10/10/1961 19:00,bristol,tn,us,sphere,300,5 minutes,"My father is now 89 my brother 52 the girl with us now 51 myself 49 and the other fellow which worked with my father if he&#39s still livi",4/27/2007,36.5950000,-82.1888889

The Dask call:

import dask.dataframe

df = dask.dataframe.read_csv('data/ufo_scrubbed.csv',
                             names=['dt',
                                    'city',
                                    'state',
                                    'country',
                                    'shape',
                                    'duration_s',
                                    'duration_hm',
                                    'comments',
                                    'date_posted',
                                    'latitude'],
                             dtype={'dt': 'str',
                                    'city': 'str',
                                    'state': 'str',
                                    'country': 'str',
                                    'shape': 'str',
                                    'duration_s': 'str',
                                    'duration_hm': 'str',
                                    'comments': 'str',
                                    'date_posted': 'str',
                                    'latitude': 'str'}
                             )

Unfortunately, the header line begins with a comma, and that is why your column names are off by one. You would be better off fixing that than working around it.
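If the file you are reading really does start with an empty header field, one way to fix it could be to rewrite just the header row (a sketch only; the output path and the 'row_id' name are assumptions):

import shutil

# Copy the file, giving the empty first header field a name ('row_id' is
# an assumed placeholder) and streaming the data rows through unchanged.
with open('data/ufo_scrubbed.csv') as src, \
        open('data/ufo_scrubbed_fixed.csv', 'w') as dst:
    header = src.readline()
    if header.startswith(','):
        header = 'row_id' + header
    dst.write(header)
    shutil.copyfileobj(src, dst)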

However, if you don't supply column names, no index gets picked up automatically:

df = dask.dataframe.read_csv('file.csv', header=0)

The index here is just a range (each partition counts from 0). You can then assign column names after the fact:

df2 = df.rename(columns=dict(zip(df.columns, df.columns[1:]), latitude='longitude')) 

You cannot do this with the Intake prescription alone; you would have to get the dataframe via to_dask() or read() (for dask or pandas output, respectively).
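Put together, that could look something like this (a sketch, assuming the catalog entry no longer passes names/dtype in csv_kwargs so the file's own header row is used; the rename mapping just maps the file's header names to the short names used in the question):

import intake

cat = intake.open_catalog("catalog.yml")
df = cat.ufo_sightings.to_dask()   # or .read() for a pandas DataFrame

# Rename after the fact, once the columns already line up with the data.
df = df.rename(columns={
    'datetime': 'dt',
    'duration (seconds)': 'duration_s',
    'duration (hours/min)': 'duration_hm',
    'date posted': 'date_posted',
})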