While opening a .xlsx file written through Python, an error pops up: "The file format or file extension is not valid. Verify that the file is not corrupted."

from selenium import webdriver
import time
from bs4 import BeautifulSoup as Soup
from urllib.request import urlopen
import datetime as dt
import csv
import pandas as pd

driver = webdriver.Firefox(executable_path='C://Downloads//webdrivers//geckodriver.exe')


c1 = 'amazon_data_' + dt.datetime.now().strftime("%d_%b_%y_%I_%M_%p")

d = open(str(c1) + '.csv', 'x', encoding='utf-8')
#d = open(str(c1) + '.xlsx', 'x', encoding='utf-8')

for c in range(1):

    a = f'https://www.flipkart.com/search?q=sony+headphones&as=on&as-show=on&otracker=AS_Query_HistoryAutoSuggest_1_4_na_na_na&otracker1=AS_Query_HistoryAutoSuggest_1_4_na_na_na&as-pos=1&as-type=HISTORY&suggestionId=sony+headphones&requestId=ad797917-16ae-401e-98df-1c79a43d40c3&as-backfill=on&page={c}'

    '''
    request_response = requests.head(a)

    status_code = request_response.status_code
    if status_code == 200:
        print(True)

    else:
        print(False)
        '''
    driver.get(a)

    # time.sleep(1)

    page_soup = Soup(urlopen(a), 'html5lib')

    container = page_soup.find_all('div', {'class': '_4ddWXP'})
    for containers in container:
        find_url = containers.find('a')['href']
        new_url = 'https://www.flipkart.com' + find_url

        fetch = driver.get(new_url)
        # time.sleep(1)
        page_source = driver.page_source
        page_soup = Soup(page_source, 'html.parser')
        for data in page_soup:

            try:
                product_name = data.find('span', {'class': 'B_NuCI'}).text.strip()
                price = data.find('div', {'class': "_30jeq3 _16Jk6d"}).text.strip()
                current_url = new_url
            except:
                print('Not Available')
            # print(product_name, '\n', price, '\n', current_url, '\n')
            d.write(product_name + price + current_url + '\n')
                

**Error I am getting**

  1. While trying to save the output data in .xlsx format, the file saves correctly. But on opening it, an error pops up: "The file format for the extension is not valid. Verify that the file is not corrupted and that the file extension matches the format of the file."

**What I have tried**

When I try to write the output data using .csv, it saves correctly. But on opening the file, the data contains some special characters and is not written into separate cells.

**Output of a single cell when data is written via .csv**

JBL a noise cancellation enabled Bluetooth~

(Screenshot attached for better understanding.)

**What I want**

  1. I want to save this data in .xlsx format with the following 3
    headers :- product_name, price, URL.
  2. I want all special characters removed, so that I get clean output when writing the data in .xlsx format.

I see several problems:

  1. Using open() and write() you can't create an .xlsx, because an .xlsx file is really .xml files compressed into a zip archive (see the first sketch after this list).

  2. Some data has , which is normally used as a separator for columns, and you should put such data in " " to create the columns correctly. It is better to use the module csv or pandas, which adds " " automatically (see the second sketch after this list). This may be your main problem.

  3. You mix selenium with beautifulsoup, and at times you make a mess of it.

  4. You use for data in page_soup, so you get all the children on the page and run the same code for each of these elements, but you should search directly in page_soup.
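
To see point 1 in action: a real .xlsx is a ZIP archive of XML parts, which is why a plain text file renamed to .xlsx triggers exactly the "file format or file extension is not valid" error. A quick check, assuming some genuine workbook example.xlsx exists (the filename here is only an example):

import zipfile

# a genuine .xlsx opens as a ZIP archive of XML parts;
# a plain text file renamed to .xlsx raises zipfile.BadZipFile instead
with zipfile.ZipFile('example.xlsx') as z:
    print(z.namelist())  # e.g. ['[Content_Types].xml', 'xl/workbook.xml', ...]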
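
And for point 2, a minimal sketch of the csv module's automatic quoting (the row values below are made up):

import csv

with open('demo.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['product_name', 'price', 'url'])
    # the comma inside the name gets quoted automatically, so it stays in one cell
    writer.writerow(['JBL Tune, noise cancellation enabled', '2999', 'https://www.flipkart.com/...'])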

I would put all the data on a list - every item as a sublist - and later I would convert it to pandas.DataFrame and save it using to_csv() or to_excel().

I would even use selenium to search the elements (i.e. find_elements_by_xpath) instead of beautifulsoup, but I skipped this idea in the code.
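
For completeness, a rough sketch of that selenium-only idea (untested; it assumes the driver, time import and all_rows list set up in the full code below, and the class names come from the question, so Flipkart may change them at any time; in Selenium 4 these calls become find_elements(By.XPATH, ...)):

# selenium-only variant: XPaths built from the question's class names
links = driver.find_elements_by_xpath('//div[@class="_4ddWXP"]//a')
item_urls = [link.get_attribute('href') for link in links]  # absolute URLs

for item_url in item_urls:
    driver.get(item_url)
    time.sleep(3)
    try:
        product_name = driver.find_element_by_xpath('//span[@class="B_NuCI"]').text.strip()
        price = driver.find_element_by_xpath('//div[@class="_30jeq3 _16Jk6d"]').text.strip()
        all_rows.append([product_name, price, item_url])
    except Exception as ex:
        print('Not Available:', ex)

And the full working version with beautifulsoup: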

from selenium import webdriver
import time
from bs4 import BeautifulSoup as BS
import datetime as dt
import pandas as pd

# - before loop -

all_rows = []

#driver = webdriver.Firefox(executable_path='C:\Downloads\webdrivers\geckodriver.exe')
driver = webdriver.Firefox()  # I have `geckodriver` in folder `/home/furas/bin` and I don't have to set `executable_path`

# - loop - 

for page in range(1):  # range(10) for more pages
    print('--- page:', page, '---')
    
    url = f'https://www.flipkart.com/search?q=sony+headphones&as=on&as-show=on&otracker=AS_Query_HistoryAutoSuggest_1_4_na_na_na&otracker1=AS_Query_HistoryAutoSuggest_1_4_na_na_na&as-pos=1&as-type=HISTORY&suggestionId=sony+headphones&requestId=ad797917-16ae-401e-98df-1c79a43d40c3&as-backfill=on&page={page}'

    driver.get(url)
    time.sleep(3)  

    soup = BS(driver.page_source, 'html5lib')

    all_containers = soup.find_all('div', {'class': '_4ddWXP'})
    
    for container in all_containers:
        find_url = container.find('a')['href']
        print('find_url:', find_url)
        item_url = 'https://www.flipkart.com' + find_url

        driver.get(item_url)
        time.sleep(3)
        
        item_soup = BS(driver.page_source, 'html.parser')
        
        try:
            product_name = item_soup.find('span', {'class': 'B_NuCI'}).text.strip()
            price = item_soup.find('div', {'class': "_30jeq3 _16Jk6d"}).text.strip()

            print('product_name:', product_name)
            print('price:', price)
            print('item_url:', item_url)
            print('---')
            
            row = [product_name, price, item_url]
            all_rows.append(row)
                
        except Exception as ex:
            print('Not Available:', ex)
            print('---')
        
# - after loop -

df = pd.DataFrame(all_rows)

filename = dt.datetime.now().strftime("amazon_data_%d_%b_%y_%I_%M_%p.csv")
df.to_csv(filename)

#filename = dt.datetime.now().strftime("amazon_data_%d_%b_%y_%I_%M_%p.xlsx")
#df.to_excel(filename)