How can I find and count multiple intersections between a list and a text?
I'm currently writing a program in Python to measure the degree of anglicism in German texts. I want to know how many anglicisms occur in a given text. For this I have a list of all anglicisms used in German, which looks like this:
abchecken
abchillen
abdancen
abdimmen
abfall-container
abflug-terminal
…and the list goes on.
I then computed the intersection between this list and the text to be analyzed, but that only gives me a list of all words that occur in both, e.g.: Anglicisms : 4:{'abdancen', 'abchecken', 'terminal'}

What I would really like is to output how often each of these words occurs (ideally sorted by frequency), e.g.:

Anglicisms: abdancen(5), abchecken(2), terminal(1)

Here is my code so far:
# counters to zero
lines, blanklines, sentences, words = 0, 0, 0, 0
print('-' * 50)
while True:
    try:
        # def text file
        filename = input("Please enter filename: ")
        textf = open(filename, 'r')
        break
    except IOError:
        print('Cannot open file "%s"' % filename)
# reads one line at a time
for line in textf:
    print(line)  # test
    lines += 1
    if line.startswith('\n'):
        blanklines += 1
    else:
        # sentence ends with . or ! or ?
        # count these characters
        sentences += line.count('.') + line.count('!') + line.count('?')
        # create a list of words
        # use None to split at any whitespace regardless of length
        tempwords = line.split(None)
        print(tempwords)
        # total words
        words += len(tempwords)
# anglicisms
words1 = set(open(filename).read().split())
words2 = set(open("anglicisms.txt").read().split())
duplicates = words1.intersection(words2)
textf.close()
print('-' * 50)
print("Lines : ", lines)
print("Blank lines : ", blanklines)
print("Sentences : ", sentences)
print("Words : ", words)
print("Anglicisms : %d:%s" % (len(duplicates), duplicates))
The second problem I have is that it doesn't count anglicisms that are embedded inside other words. For example, if "big" is in the anglicism list and "bigfoot" occurs in the text, that case is ignored. How can I fix this?

Kind regards from Switzerland!
Here's how I would do it:
from collections import Counter

anglicisms = open("anglicisms.txt").read().split()
matches = []
for line in textf:
    matches.extend([word for word in line.split() if word in anglicisms])
anglicismsInText = Counter(matches)
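For the frequency-sorted output the question asks for, `Counter.most_common()` already sorts by count; a minimal sketch with made-up sample words standing in for the real text and list:

```python
from collections import Counter

# hypothetical sample data standing in for the split text and the word list
text_words = ["abdancen", "und", "abchecken", "abdancen", "terminal",
              "abdancen", "abchecken", "abdancen", "abdancen"]
anglicisms = {"abchecken", "abdancen", "terminal"}

counts = Counter(w for w in text_words if w in anglicisms)

# most_common() yields (word, count) pairs, highest count first
formatted = ", ".join("%s(%d)" % (w, n) for w, n in counts.most_common())
print("Anglicisms:", formatted)  # Anglicisms: abdancen(5), abchecken(2), terminal(1)
```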
Regarding the second problem, I find it a bit harder to pin down. Take your example: "big" is an anglicism and "bigfoot" should match, but what about "Abigail"? Or "overbig"? Should it match whenever an anglicism is found anywhere in the string? Only at the beginning? At the end? Once you've decided that, you should build a regex that matches it.

EDIT: to match strings that begin with an anglicism, do something like this:
def derivatesFromAnglicism(word):
    return any(word.startswith(a) for a in anglicisms)

matches.extend([word for word in line.split() if derivatesFromAnglicism(word)])
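Combining that prefix check with `Counter` then counts derived forms as well; a self-contained sketch with a placeholder word list (not the real anglicisms.txt):

```python
from collections import Counter

anglicisms = ["big", "cool"]  # placeholder list, not the real anglicisms.txt

def derivates_from_anglicism(word):
    # True if the (lowercased) word begins with any listed anglicism
    return any(word.lower().startswith(a) for a in anglicisms)

lines = ["Bigfoot war cool", "alles cool hier"]
matches = [w for line in lines for w in line.split()
           if derivates_from_anglicism(w)]
print(Counter(matches))
```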
This solves your first problem:
anglicisms = ["a", "b", "c"]
words = ["b", "b", "b", "a", "a", "b", "c", "a", "b", "c", "c", "c", "c"]
results = [(angli, words.count(angli)) for angli in anglicisms]
results.sort(key=lambda p: -p[1])
The result looks like this:

[('b', 5), ('c', 5), ('a', 3)]

For your second problem, I think the right approach is to use a regular expression.
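If the chosen rule is "count a word when it starts with an anglicism", such a regex could look like this (illustrative list and sentence; case-insensitive so "Bigfoot" matches "big"):

```python
import re

anglicisms = ["big", "cool"]  # illustrative entries
text = "Bigfoot und Abigail finden das coole Wetter"

# \b anchors each alternative at the start of a word, so "big" inside
# "Abigail" is not matched; re.escape guards entries like "abfall-container"
pattern = re.compile(r"\b(?:%s)" % "|".join(map(re.escape, anglicisms)),
                     re.IGNORECASE)

print(pattern.findall(text))  # ['Big', 'cool']
```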