How do I compare characters with combining diacritic marks ɔ̃, ɛ̃ and ɑ̃ to unaccented ones in python (imported from a utf-8 encoded text file)?
Summary: I want to compare ɔ̃, ɛ̃, ɑ̃ with ɔ, ɛ, a, treating all of them as distinct, but my text file has ɔ̃, ɛ̃, ɑ̃ written as ɔ~, ɛ~, a~ (a base character followed by a combining tilde).
I wrote a script that walks along the characters of two words simultaneously, comparing them to find the pair of characters that differs. The words are of equal length (apart from the diacritic issue, which introduces an extra character) and represent the IPA pronunciations of two French words that are exactly one phoneme apart.
The end goal is to filter a list of Anki cards so that only certain phoneme pairs are included, because the other pairs are too easy to tell apart. Each word pair represents one Anki note.
For that I need to distinguish the nasal vowels ɔ̃, ɛ̃ and ɑ̃ from the other sounds, because they are really only confusable with each other.
As written, the code treats an accented character as the character plus ~, and so on. So if the only difference between the words is between a final accented character and an unaccented one, the script finds no difference up to the last letter, runs out of characters in one word (the other still has its ~ left), and throws an error when it tries to compare one more character. That is a whole 'problem' in itself, but if I could get the accented characters read as single units, the words would have the same length and it would disappear.
I don't want to substitute unaccented characters for the accented ones for the comparison, as some people do, because they are different sounds.
I have tried 'normalizing' the unicode to the 'combined' form, e.g.
unicodedata.normalize('NFKC', line)
, but it didn't change anything.
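A minimal check (my own illustration, not from the original post) of why normalization is a no-op here: Unicode has precomposed code points for letters like õ and ñ, but none for the IPA vowels ɔ, ɛ and ɑ with a tilde, so NFC/NFKC leave those two-code-point sequences untouched:

```python
import unicodedata

s = '\u025b\u0303'  # ɛ followed by U+0303 COMBINING TILDE
print(unicodedata.lookup('LATIN SMALL LETTER O WITH TILDE'))  # õ has a precomposed form
for form in ('NFC', 'NFKC'):
    # no precomposed "open e with tilde" exists, so the length stays 2
    print(form, len(unicodedata.normalize(form, s)))
```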
Here is some output, including the line where it throws the error. The printout shows, for each comparison, the two words and the character from each word that the code is comparing; the number is the index of that character within the word. So the last two characters on each line are what the script 'thinks' the two characters are, and it thinks ɛ̃ and ɛ are the same. It also picks the wrong letter pair when it reports a difference, and getting the pair right matters because I compare it against a master list of allowed pairs.
0 alyʁ alɔʁ a a # this first word is done well
1 alyʁ alɔʁ l l
2 alyʁ alɔʁ y ɔ # it doesn't continue to compare the ʁ because it found the difference
...
0 ɑ̃bisjø ɑ̃bisjɔ̃ ɑ ɑ
1 ɑ̃bisjø ɑ̃bisjɔ̃ ̃ ̃ # the tildes are compared / treated separately
2 ɑ̃bisjø ɑ̃bisjɔ̃ b b
3 ɑ̃bisjø ɑ̃bisjɔ̃ i i
4 ɑ̃bisjø ɑ̃bisjɔ̃ s s
5 ɑ̃bisjø ɑ̃bisjɔ̃ j j
6 ɑ̃bisjø ɑ̃bisjɔ̃ ø ɔ # luckily that wasn't where the difference was, this is
...
0 osi ɛ̃si o ɛ # here it should report (o, ɛ̃), not (o, ɛ)
...
0 bɛ̃ bɔ̃ b b
1 bɛ̃ bɔ̃ ɛ ɔ # an error of this type
...
0 bo ba b b
1 bo ba o a # this is working correctly
...
0 bjɛ bjɛ̃ b b
1 bjɛ bjɛ̃ j j
2 bjɛ bjɛ̃ ɛ ɛ # AND here's the money, it thinks these are the same letter, but it has also run out of characters to compare from the first word, so it throws the error below
Traceback (most recent call last):
File "C:\Users\tchak\OneDrive\Desktop\French.py", line 42, in <module>
letter1 = line[0][index]
IndexError: string index out of range
Here is the code:
import unicodedata

def lens(word):
    return len(word)

# open file, and new file to write to
input_file = "./phonetics_input.txt"
output_file = "./phonetics_output.txt"
set1 = ["e", "ɛ", "œ", "ø", "ə"]
set2 = ["ø", "o", "œ", "ɔ", "ə"]
set3 = ["ə", "i", "y"]
set4 = ["u", "y", "ə"]
set5 = ["ɑ̃", "ɔ̃", "ɛ̃", "ə"]
set6 = ["a", "ə"]
vowelsets = [set1, set2, set3, set4, set5, set6]

with open(input_file, encoding="utf8") as ipf, open(output_file, encoding="utf8") as opf:
    vowelpairs = []
    acceptedvowelpairs = []
    input_lines = ipf.readlines()
    print(len(input_lines))
    for line in input_lines:
        # find word ipa transcripts
        unicodedata.normalize('NFKC', line)
        line = line.split("/")
        line.sort(key=lens)
        line = line[0:2]  # the shortest two strings after splitting are the ipa words
        index = 0
        letter1 = line[0][index]
        letter2 = line[1][index]
        print(index, line[0], line[1], letter1, letter2)
        linelen = max(len(line[0]), len(line[1]))
        while letter1 == letter2:
            index += 1
            letter1 = line[0][index]  # throws the error here, technically, after printing the last characters and incrementing the index one more
            letter2 = line[1][index]
            print(index, line[0], line[1], letter1, letter2)
        vowelpairs.append((letter1, letter2))
    for i in vowelpairs:
        for vowelset in vowelsets:
            if set(i).issubset(vowelset):
                acceptedvowelpairs.append(i)
    print(len(vowelpairs))
    print(len(acceptedvowelpairs))
I am working around this by doing a find-and-replace on those characters before processing, and the reverse find-and-replace once done.
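That workaround could be sketched as follows (the `NASALS` mapping, `encode`/`decode` helpers, and the private-use placeholder characters are my own illustration, not from the original post): map each two-code-point nasal vowel to a single placeholder before comparing, and map it back afterwards:

```python
# hypothetical round-trip mapping; the private-use code points are arbitrary placeholders
NASALS = {'\u0254\u0303': '\ue000',  # ɔ̃
          '\u025b\u0303': '\ue001',  # ɛ̃
          '\u0251\u0303': '\ue002'}  # ɑ̃

def encode(s):
    for pair, mark in NASALS.items():
        s = s.replace(pair, mark)
    return s

def decode(s):
    for pair, mark in NASALS.items():
        s = s.replace(mark, pair)
    return s

word = 'bj\u025b\u0303'              # bjɛ̃: four code points
print(len(word), len(encode(word)))  # 4 3: the nasal vowel is now a single unit
assert decode(encode(word)) == word  # the substitution round-trips losslessly
```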
Unicode normalization does not help for the particular character combinations described, because searching the Unicode database UnicodeData.txt
using the simple regex "Latin.*Letter.*with tilde$"
gives ÃÑÕãñõĨĩŨũṼṽẼẽỸỹ
(no Latin letters Open O, Open E or Alpha), i.e. there are no precomposed code points for those vowels. So you need to iterate through both compared strings separately as follows (most of your code above is omitted for a Minimal, Reproducible Example):
import unicodedata

def lens(word):
    return len(word)

input_lines = ['alyʁ/alɔʁ', 'ɑ̃bisjø/ɑ̃bisjɔ̃ ', 'osi/ɛ̃si', 'bɛ̃ /bɔ̃ ', 'bo/ba', 'bjɛ/bjɛ̃ ']
print(len(input_lines))
for line in input_lines:
    print('')
    # find word ipa transcripts
    line = unicodedata.normalize('NFKC', line.rstrip('\n'))
    line = line.split("/")
    line.sort(key=lens)
    word1, word2 = line[0:2]  # the shortest two strings after splitting are the ipa words
    index = i1 = i2 = 0
    while i1 < len(word1) and i2 < len(word2):
        letter1 = word1[i1]
        i1 += 1
        if i1 < len(word1) and unicodedata.category(word1[i1]) == 'Mn':
            letter1 += word1[i1]
            i1 += 1
        letter2 = word2[i2]
        i2 += 1
        if i2 < len(word2) and unicodedata.category(word2[i2]) == 'Mn':
            letter2 += word2[i2]
            i2 += 1
        same = chr(0xA0) if letter1 == letter2 else '#'
        print(index, same, word1, word2, letter1, letter2)
        index += 1
        #if same != chr(0xA0):
        #    break
Output of .\SO335977.py:
6
0 alyʁ alɔʁ a a
1 alyʁ alɔʁ l l
2 # alyʁ alɔʁ y ɔ
3 alyʁ alɔʁ ʁ ʁ
0 ɑ̃bisjø ɑ̃bisjɔ̃ ɑ̃ ɑ̃
1 ɑ̃bisjø ɑ̃bisjɔ̃ b b
2 ɑ̃bisjø ɑ̃bisjɔ̃ i i
3 ɑ̃bisjø ɑ̃bisjɔ̃ s s
4 ɑ̃bisjø ɑ̃bisjɔ̃ j j
5 # ɑ̃bisjø ɑ̃bisjɔ̃ ø ɔ̃
0 # osi ɛ̃si o ɛ̃
1 osi ɛ̃si s s
2 osi ɛ̃si i i
0 bɛ̃ bɔ̃ b b
1 # bɛ̃ bɔ̃ ɛ̃ ɔ̃
2 bɛ̃ bɔ̃
0 bo ba b b
1 # bo ba o a
0 bjɛ bjɛ̃ b b
1 bjɛ bjɛ̃ j j
2 # bjɛ bjɛ̃ ɛ ɛ̃
Note: the diacritics are tested for Unicode category Mn;
you could test for another condition instead, e.g. from the following list:
Mn Nonspacing_Mark: a nonspacing combining mark (zero advance width)
Mc Spacing_Mark: a spacing combining mark (positive advance width)
Me Enclosing_Mark: an enclosing combining mark
M Mark: Mn | Mc | Me
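Building on that category test, the per-character pairing in the loop above can be factored into a small helper that first splits each string into base-plus-mark clusters (the `clusters` helper and its name are my own sketch, not from the answer):

```python
import unicodedata

def clusters(s):
    # attach each combining mark (category starting with 'M') to the preceding base character
    out = []
    for ch in s:
        if out and unicodedata.category(ch).startswith('M'):
            out[-1] += ch
        else:
            out.append(ch)
    return out

print(clusters('bj\u025b\u0303'))  # ['b', 'j', 'ɛ̃'] — the nasal vowel is one unit
# with both words clustered, zip() compares them unit by unit
for a, b in zip(clusters('bjo'), clusters('bj\u025b\u0303')):
    print(a, b, a == b)
```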