TypeError: must be convertible to a buffer, not ResultSet

I am trying to use scraperwiki and bs4 to convert a PDF into a text file, but I am getting a TypeError. I am new to Python and would really appreciate any help.

The error occurs here:

File "scraper_wiki_download.py", line 53, in write_file
f.write(soup)

Here is my code:

import scraperwiki
import urllib3
from bs4 import BeautifulSoup

# http is assumed to be a urllib3 PoolManager (the urlopen call below matches its API)
http = urllib3.PoolManager()

# Get content, regardless of whether an HTML, XML or PDF file
def send_Request(url):        
    response = http.urlopen('GET', url, preload_content=False)
    return response

# Use this to get the PDF and convert it to XML
def process_PDF(fileLocation):
    pdfToProcess = send_Request(fileLocation)
    pdfToObject = scraperwiki.pdftoxml(pdfToProcess.read())
    return pdfToObject

# returns a navigable tree, which you can iterate through
def parse_HTML_tree(contentToParse):
    soup = BeautifulSoup(contentToParse, 'lxml')
    return soup

pdf = process_PDF('http://www.sfbos.org/Modules/ShowDocument.aspx?documentid=54790')
pdfToSoup = parse_HTML_tree(pdf)
soupToArray = pdfToSoup.findAll('text')

def write_file(soup_array):
    with open('test.txt', "wb") as f:
        f.write(soup_array)

write_file(soupToArray)

My guess is that soupToArray = pdfToSoup.findAll('text') returns some kind of list (a BeautifulSoup ResultSet, which is exactly what the error message names), but f.write() only accepts a single string or bytes object, so you have to iterate over it and convert each element to a string somehow; see the sketch below. Print soupToArray to check what it actually looks like.
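If that is the case, one possible fix is to loop over the ResultSet and write each element's text, one line per element. This is only a sketch, assuming you want the plain text of every <text> node and that opening the file in text mode (rather than "wb") is acceptable:

def write_file(soup_array):
    # soup_array is a BeautifulSoup ResultSet, not a string, so iterate over it
    # and write each <text> element's string content on its own line
    with open('test.txt', 'w', encoding='utf-8') as f:
        for element in soup_array:
            f.write(element.get_text() + '\n')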

I had never used scraperwiki before now, but this gets the text:

import scraperwiki
import requests
from bs4 import BeautifulSoup

pdf_xml = scraperwiki.pdftoxml(requests.get('http://www.sfbos.org/Modules/ShowDocument.aspx?documentid=54790').content)
print(BeautifulSoup(pdf_xml, "lxml").find_all("text"))
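If you want the result in a text file rather than printed, a short follow-up along the same lines (output.txt is just an example filename) could look like this:

import scraperwiki
import requests
from bs4 import BeautifulSoup

pdf_xml = scraperwiki.pdftoxml(requests.get('http://www.sfbos.org/Modules/ShowDocument.aspx?documentid=54790').content)
soup = BeautifulSoup(pdf_xml, "lxml")

# Write the text content of each <text> element on its own line
with open('output.txt', 'w', encoding='utf-8') as f:
    for element in soup.find_all("text"):
        f.write(element.get_text() + '\n')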