Scraping a Forum with Beautiful Soup - How to exclude Quoted Replies?

This is my first time using Beautiful Soup, or doing any web scraping for that matter. I'm very happy with the progress I've made so far, but I've hit a bit of a snag.

I'm trying to scrape all of the posts in a particular thread. However, I want to exclude the text of any quoted replies.

An example:

I want to scrape the text of these posts, but not the text inside the area marked by the red box (the quoted reply).

In the HTML, the part I want to exclude is nested inside the part I need to select, which is why I'm having difficulty. I've included the relevant HTML below:

<div id="post_message_39096267"><!-- google_ad_section_start --><div style="margin:20px; margin-top:5px; ">
<div class="smallfont" style="margin-bottom:2px">Quote:</div>
<table cellpadding="6" cellspacing="0" border="0" width="100%">
<tbody><tr>
    <td class="alt2" style="border:1px inset">

            <div>
                Originally Posted by <strong>SAAN</strong>
                <a href="http://www.city-data.com/forum/economics/2056372-minimum-wage-vs-liveable-wage-post33645660.html#post33645660" rel="nofollow"><img class="inlineimg li fs-viewpost" src="http://pics3.city-data.com/trn.gif" border="0" alt="View Post" title="View Post"></a>
            </div>
            <div style="font-style:italic">I agree with trying to buy a 
cheap car outright, the problem is everyone I know that has done that -
5000 car, always ended up with these huge repair bills that are equivalent 
to car payments.  Most cars after 100K will need all sort of regulatr 
maintance that is easily a 0 repair to go along with anything that may 
break which is common with cars as they age.<br>
<br>
I have a 2yr old im making payments on and 14yr old car that is paid off, 
but needs 00 in maintenance.  When car shopping this summer, I saw many 
cars i could buy outright, but after adding u everything needed to make sure 
it needs nothing, your back into the price range of a car payment.</div>

    </td>
</tr>
</tbody></table>
</div>Depends on how long the car loan would be stretched. Just because you 
can get an 8 year loan and reduce payments to a level like the repairs on 
your old car doesn't make it a good idea, especially for new cars that <a 
href="/knowledge/Depreciation.html" title="View 'depreciate' definition from 
Wikipedia" class="knldlink" rel="nofollow">depreciate</a> quickly. You'd 
just be putting yourself into negative equity territory.<!-- 
google_ad_section_end --></div>

I've included my code below; hopefully it helps make clear what I'm talking about.

from bs4 import BeautifulSoup
import urllib2


num_pages = 101
page_range = range(1, num_pages + 1)
clean_posts = []

for page in page_range:
    print("Reading page: ", page, "...")
    # Page 1 has no numeric suffix; later pages end in "-2.html", "-3.html", ...
    if page == 1:
        page_url = urllib2.urlopen('http://www.city-data.com/forum/economics/2056372-minimum-wage-vs-liveable-wage.html')
    else:
        page_url = urllib2.urlopen('http://www.city-data.com/forum/economics/2056372-minimum-wage-vs-liveable-wage' + '-' + str(page) + '.html')

    soup = BeautifulSoup(page_url)

    postData = soup.find_all("div", id=lambda value: value and value.startswith("post_message_"))

    posts = []
    for post in postData:
        posts.append(BeautifulSoup(str(post)).get_text().encode("utf-8").strip().replace("\t", ""))

    posts_stripped = [x.replace("\n", "") for x in posts]

    clean_posts.append(posts_stripped)

Finally, I'd really appreciate it if you could give me some working code examples and explain things as if I were 9 years old!

Cheers, Diarmaid

Check whether your post_message_ div contains another div inside it (the quote div). If it does, extract it, then append the text of the original post_message_ div to your list. Replace your for post in postData loop with this:

posts = []
for post in postData:
    # The quoted reply sits in a nested <div>; pull it out before reading the text.
    hasQuote = post.find("div")
    if hasQuote is not None:
        hasQuote.extract()
    posts.append(post.get_text(strip=True))
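
For completeness, here is a minimal sketch of how that fix could slot into the scraping loop from the question. It is an untested example built on assumptions: it assumes the URL pattern and the post_message_ structure shown above, uses decompose() rather than extract() (decompose() simply discards the removed tag, which is all that's needed here), and names 'html.parser' explicitly only to avoid BeautifulSoup's missing-parser warning.

from bs4 import BeautifulSoup
import urllib2

base_url = 'http://www.city-data.com/forum/economics/2056372-minimum-wage-vs-liveable-wage'
num_pages = 101
clean_posts = []

for page in range(1, num_pages + 1):
    # Page 1 has no numeric suffix; later pages end in "-2.html", "-3.html", ...
    url = base_url + '.html' if page == 1 else base_url + '-' + str(page) + '.html'
    soup = BeautifulSoup(urllib2.urlopen(url), 'html.parser')

    posts = []
    for post in soup.find_all('div', id=lambda v: v and v.startswith('post_message_')):
        # Quoted replies are wrapped in <div> children of the post_message_ div,
        # so dropping those children leaves only the poster's own text.
        for quote in post.find_all('div', recursive=False):
            quote.decompose()
        posts.append(post.get_text(strip=True))

    clean_posts.append(posts)

Removing only the direct child divs of each post_message_ div keeps the poster's own text intact, because in the HTML above the quote table is wrapped in exactly such a child div.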