How do you export python data to excel using xlwt?
Here is my scrape:
from bs4 import BeautifulSoup
import requests
url = 'http://www.baseballpress.com/lineups'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
for names in soup.find_all(class_="players"):
    print(names.text)
I want to export my content to Excel using xlwt. I used the code below to check whether I could make an Excel sheet with python:
import xlwt
wb = xlwt.Workbook()
ws = wb.add_sheet("Batters")
ws.write(0,0,"coding isn't easy")
wb.save("myfirst_xlwt")
The code above works. I now want to apply it to my original scrape. How do I combine the two pieces of code?
I'm new to this, so any help would be appreciated. Thanks for your time! =)
I tried to run your code, but it didn't find any elements of that class. It returns [].
Regarding xlwt: basically, it just writes the string you specify into a single cell, addressed by row and column arguments.
wb = xlwt.Workbook()
ws = wb.add_sheet('sheet_name')
ws.write(0, 0, "content")  # Writes row 0, column 0 of the sheet named "sheet_name"
wb.save("example.xls")
That said, I think pandas is better suited for this purpose. xlwt can get quite messy if you lose track of row and column numbers. If you can provide some non-empty results, I can write a simple script for you to export to Excel with pandas.
To use pandas with your example, here is the code.
from bs4 import BeautifulSoup
import requests
url = 'http://www.baseballpress.com/lineups'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
all_games = []
for g in soup.find_all(class_="game"):
    players = g.find_all('a', class_='player-link')
    game = {
        'time': g.find(class_='game-time').text,
        'weather': g.find(target='forecast').text.strip(),
        'players': [p.text for p in players],
    }
    all_games.append(game)

print(all_games)  # Prints a list of dicts, one per game

import pandas as pd
df = pd.DataFrame.from_dict(all_games)  # Construct a dataframe from the list of dicts
with pd.ExcelWriter('baseball.xlsx') as writer:  # Excel writer targeting 'baseball.xlsx'; saves on exit
    df.to_excel(writer, sheet_name='baseball_sheet')  # Writes a sheet called 'baseball_sheet', laid out like the dataframe
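One caveat with the dataframe above: the 'players' value is a list, so each Excel cell would contain the list's Python repr. A small sketch of two ways to reshape it before writing (the sample rows below are made up for illustration, standing in for the scraped all_games):

```python
import pandas as pd

# Made-up rows shaped like the all_games dicts above
all_games = [
    {'time': '7:05 PM', 'weather': 'Clear', 'players': ['A. Judge', 'J. Soto']},
    {'time': '8:10 PM', 'weather': 'Rain', 'players': ['S. Ohtani']},
]
df = pd.DataFrame(all_games)

# Option 1: one row per player (each game's time/weather is repeated)
per_player = df.explode('players').reset_index(drop=True)

# Option 2: keep one row per game, join the names into a readable string
df['players'] = df['players'].apply(', '.join)
```

Either result can then be passed to df.to_excel as above; which shape is better depends on how you want to filter in Excel.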
The simplest way to merge the snippets is to call ws.write any place you have a print statement. You can use enumerate to keep track of your row index:
from bs4 import BeautifulSoup
import requests
import xlwt
wb = xlwt.Workbook()
ws = wb.add_sheet("Batters")
url = 'http://www.baseballpress.com/lineups'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
for row, name in enumerate(soup.find_all(class_="players")):
    ws.write(row, 0, name.text)
wb.save("myfirst_xlwt")
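If you later want a header row, the same enumerate pattern extends naturally: start the data rows at 1. A minimal sketch, with made-up names standing in for the soup.find_all results:

```python
import xlwt

wb = xlwt.Workbook()
ws = wb.add_sheet("Batters")

# Hypothetical scraped names standing in for soup.find_all(class_="players")
names = ["Player One", "Player Two", "Player Three"]

ws.write(0, 0, "Name")  # header cell in row 0
for row, name in enumerate(names, start=1):  # data rows start at row 1
    ws.write(row, 0, name)

wb.save("batters.xls")  # xlwt writes the legacy .xls format
```

Note that xlwt only produces the old .xls format; giving the file a .xls extension keeps Excel from complaining when it opens the file.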