ValueError: Big-endian buffer not supported on little-endian compiler

I am using pvlib to model a photovoltaic array, but sometimes when I try to access the weather forecast data I get the following error:

ValueError: Big-endian buffer not supported on little-endian compiler

I am not sure why it happens only occasionally rather than every time I run the code. Below is the code I am running; the last line is the one that raises the error. Any help resolving this would be greatly appreciated, thanks!!

# built-in python modules
import datetime
import inspect
import os
import pytz

# scientific python add-ons
import numpy as np
import pandas as pd

# plotting
# first line makes the plots appear in the notebook
%matplotlib inline 
import matplotlib.pyplot as plt
import matplotlib as mpl

#import the pvlib library
from pvlib import solarposition, irradiance, atmosphere, pvsystem
from pvlib.forecast import GFS
from pvlib.modelchain import ModelChain

pd.set_option('display.max_rows', 500)

latitude, longitude, tz = 21.300268, -157.80723, 'Pacific/Honolulu' 

# specify time range.
# start = pd.Timestamp(datetime.date.today(), tz=tz)
pacific = pytz.timezone('Etc/GMT+10')
# print(pacific)
# datetime.datetime(year, month, day, hour, minute, second, microsecond, tzinfo)
start2 = pd.Timestamp(datetime.datetime(2020, 2, 10, 13, 0, 0, 0, pacific))
# print(start)
# print(start2)
# print(datetime.date.today())

end = start2 + pd.Timedelta(days=1.5)

# Define forecast model
fm = GFS()

# get data from location specified above
forecast_data = fm.get_processed_data(latitude, longitude, start2, end)
# print(forecast_data)

I think I have a solution now. For some reason, the data coming back from these UNIDATA DCSS queries occasionally returns big-endian bytes, which is incompatible with the Pandas DataFrame or Series objects, as discussed here. I found the function in pvlib that pulls the data out of NetCDF4 and builds the Pandas DataFrame: look in pvlib, then forecast.py, for the function called _netcdf2pandas. I will copy the source code below:

data_dict = {}
for key, data in netcdf_data.variables.items():
    # if accounts for possibility of extra variable returned
    if key not in query_variables:
        continue
    squeezed = data[:].squeeze()
    if squeezed.ndim == 1:
        data_dict[key] = squeezed
    elif squeezed.ndim == 2:
        for num, data_level in enumerate(squeezed.T):
            data_dict[key + '_' + str(num)] = data_level
    else:
        raise ValueError('cannot parse ndim > 2')

data = pd.DataFrame(data_dict, index=self.time)
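
For context, the incompatibility itself can be demonstrated outside of pvlib. Here is a minimal, hypothetical sketch (not from the pvlib sources or my pull request); whether the ValueError actually appears, and from which operation, depends on your pandas version, but some of pandas' compiled internals reject big-endian buffers on a little-endian machine:

import numpy as np
import pandas as pd

# '>f8' is an explicitly big-endian float64 dtype, similar to what the
# forecast query occasionally returns.
big_endian = np.arange(48, dtype='>f8')

df = pd.DataFrame({'temp_air': big_endian, 'hour': np.arange(48) % 24})

# Depending on the pandas version, operations that hand this buffer to
# pandas' Cython routines, such as a groupby mean, can fail with
# "ValueError: Big-endian buffer not supported on little-endian compiler".
print(df.groupby('hour')['temp_air'].mean())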

The goal of _netcdf2pandas is to squeeze the NetCDF4 data into individual Pandas Series, save each Series into a dictionary, and then pull them all into a DataFrame to return. All I did was add a check here that determines whether the squeezed Series is big-endian and, if so, converts it to little-endian. My modified code is below:

data_dict = {}
for key, data in netcdf_data.variables.items():
    # if accounts for possibility of extra variable returned
    if key not in query_variables:
        continue
    squeezed = data[:].squeeze()

    # If the data is big endian, swap the byte order to make it little endian
    if squeezed.dtype.byteorder == '>':
        squeezed = squeezed.byteswap().newbyteorder()

    if squeezed.ndim == 1:
        data_dict[key] = squeezed
    elif squeezed.ndim == 2:
        for num, data_level in enumerate(squeezed.T):
            data_dict[key + '_' + str(num)] = data_level
    else:
        raise ValueError('cannot parse ndim > 2')

data = pd.DataFrame(data_dict, index=self.time)
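
As a standalone illustration of what that check does (just a sketch on a synthetic array, independent of pvlib), you can watch the dtype flip from big-endian to little-endian while the values stay the same:

import numpy as np

big = np.arange(5, dtype='>f8')          # big-endian float64
print(big.dtype.byteorder)               # '>'  (big-endian)

# Same two-step conversion as in the patched loop above: byteswap()
# reverses the bytes in memory, newbyteorder() relabels the dtype so
# NumPy interprets the swapped bytes correctly.
little = big.byteswap().newbyteorder()
print(little.dtype.byteorder)            # '<' or '=', i.e. native little-endian
print(np.array_equal(big, little))       # True, the values are unchanged

# Note: ndarray.newbyteorder() was removed in NumPy 2.0; the equivalent
# there is big.byteswap().view(big.dtype.newbyteorder()).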

I used the dtype.byteorder check shown above to determine the endianness of each Series. The SciPy documentation gave me some clues about what these byte orders could be.
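
For reference, the byteorder flag on a NumPy dtype takes one of four characters; this quick illustration (not pvlib code, output shown for a little-endian machine) is what the check in the loop keys off of:

import numpy as np

print(np.dtype('>f8').byteorder)        # '>'  explicitly big-endian (non-native)
print(np.dtype('float64').byteorder)    # '='  native byte order of this machine
print(np.dtype('int8').byteorder)       # '|'  byte order not applicable
# '<' marks an explicitly little-endian dtype, but NumPy reports the
# native byte order as '=', so checking for '>' is enough to catch the
# problematic buffers here.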

Here is my pull request to pv-lib that fixes the problem for me. I hope this helps. I still do not know why the problem was inconsistent. About 95% of my attempts at get_processed_data would fail; it would work just often enough that I thought I had found a fix, and then Pandas would throw the byte-order error again. After applying the fix to pv-lib, Pandas has not raised any errors about big-endian or little-endian.