Why can't I append a dictionary index for one item but I can for any other item?

# Import libraries
from bs4 import BeautifulSoup
import requests
import pandas as pd
import time
import ast

start_time = time.time()
s = requests.Session()

# Get URL and extract content
page = 1
traits = []
accessories, backgrounds, shoes = [], [], []

while page != 100:

    params = {'arg': f"Qmer3VzaeFhb7c5uiwuHJbRuVCaUu72DcnSoUKb1EvnB2x/{page}"}

    content = s.get('https://ipfs.infura.io:5001/api/v0/cat', params=params, auth=('', ''))
    soup = BeautifulSoup(content.text, 'html.parser')
    page = page + 1
    
    traits = ast.literal_eval(soup.text)['attributes']

    df = pd.DataFrame(traits)
    df1 = df[df['trait_type']=='ACCESSORIES']

    accessories.append(df1['value'].values[0])

Can anyone explain to me what I'm doing wrong? When I run the code above, I get the following error:

IndexError: index 0 is out of bounds for axis 0 with size 0
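
For reference, the same IndexError can be reproduced with toy data whenever a filter matches no rows; this sketch uses made-up trait dicts, not the real IPFS response:

import pandas as pd

# Hypothetical metadata for an item that has no ACCESSORIES trait
traits = [
    {'trait_type': 'BACKGROUND', 'value': 'Blue'},
    {'trait_type': 'SHOES', 'value': 'Sneakers'},
]

df = pd.DataFrame(traits)
df1 = df[df['trait_type'] == 'ACCESSORIES']   # matches no rows

print(len(df1))         # 0 -- the filter returned an empty DataFrame
df1['value'].values[0]  # raises IndexError: index 0 is out of bounds for axis 0 with size 0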

But whenever I use a different trait type (such as BACKGROUND or SHOES) instead of ACCESSORIES, as in the code below, I don't get the above error and it runs perfectly.

# Import libraries
from bs4 import BeautifulSoup
import requests
import pandas as pd
import time
import ast

start_time = time.time()
s = requests.Session()

# Get URL and extract content
page = 1
traits = []
accessories, backgrounds, shoes = [], [], []

while page != 100:

    params = {'arg': f"Qmer3VzaeFhb7c5uiwuHJbRuVCaUu72DcnSoUKb1EvnB2x/{page}"}

    content = s.get('https://ipfs.infura.io:5001/api/v0/cat', params=params, auth=('', ''))
    soup = BeautifulSoup(content.text, 'html.parser')
    page = page + 1
    
    traits = ast.literal_eval(soup.text)['attributes']

    df = pd.DataFrame(traits)
    df1 = df[df['trait_type']=='BACKGROUND']

    backgrounds.append(df1['value'].values[0])

Can anyone here help me figure out what I'm doing differently, or wrong, between these two pieces of code?

P.S. When running either piece of code up to just before the append line, both BACKGROUND and ACCESSORIES are listed in df and df1. Only when I add the append line does the ACCESSORIES index disappear, and this doesn't happen with BACKGROUND or SHOES.

The following code solved the problem:

# Import libraries
from bs4 import BeautifulSoup
import requests
import pandas as pd
import time
import ast

start_time = time.time()
s = requests.Session()

# Get URL and extract content
page = 1
traits = []
accessories, backgrounds, shoes = [], [], []

while page != 100:

    params = {'arg': f"Qmer3VzaeFhb7c5uiwuHJbRuVCaUu72DcnSoUKb1EvnB2x/{page}"}

    content = s.get('https://ipfs.infura.io:5001/api/v0/cat', params=params, auth=('', ''))
    soup = BeautifulSoup(content.text, 'html.parser')
    page = page + 1
    
    traits = ast.literal_eval(soup.text)['attributes']

    df = pd.DataFrame(traits)
    df1 = df[df['trait_type']=='ACCESSORIES']

    # Some items have no ACCESSORIES trait, so df1 can be empty and
    # .values[0] raises IndexError -- skip those items instead of crashing
    try:
        accessories.append(df1['value'].values[0])
    except IndexError:
        pass
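
The try/except above works, but the error can also be avoided entirely by checking for an empty selection before indexing. A minimal sketch of that pattern (first_trait_value is a hypothetical helper, not part of the original code):

import pandas as pd

def first_trait_value(df, trait_type):
    # Return the first value for a trait type, or None if the item lacks it
    matches = df.loc[df['trait_type'] == trait_type, 'value']
    return matches.iloc[0] if not matches.empty else None

df = pd.DataFrame([{'trait_type': 'BACKGROUND', 'value': 'Blue'}])
print(first_trait_value(df, 'ACCESSORIES'))  # None -- no IndexError
print(first_trait_value(df, 'BACKGROUND'))   # Blue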