Segfault on a fresh Ubuntu 20.04 install using conda

The Python interpreter segfaults in a miniconda environment on a fresh install of Ubuntu 20.04.2. It seems to happen intermittently, both while `pip` runs during conda's setup of the environment and while executing code like the example below.

The segfault always occurs when running the code below, which reads text from files and tokenizes the result. The location of the segfault changes from run to run. The same code runs fine, with the same conda environment, on another computer running Ubuntu 18.04.

The core dumps always point to some function in Python's unicodeobject.c, but the exact function varies between crashes. At least one crash was a clear dereference of the pointer 0x0 where a unicode object was supposed to be.

My guess is that something causes the Python interpreter to drop a pointer to a unicode object while the object is still being worked on, leading to the segfault. But any such bug in the interpreter or in NLTK should have been noticed by far more users, and I cannot find anyone with a similar problem.
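
One way to test this hypothesis in isolation is a stripped-down stress loop that only exercises the code path the backtrace points at (list membership tests on strings, i.e. PyObject_RichCompareBool -> PyUnicode_RichCompare) with NLTK and TensorFlow out of the picture. A rough sketch (pool sizes and counts are arbitrary; it loops until stopped with Ctrl-C; a crash or assertion failure here would point away from the libraries):

import itertools
import random

rng = random.Random(0)
# Pool of random unicode strings, including non-ASCII code points,
# since the input files contain unicode text.
pool = ["".join(chr(rng.randint(0x20, 0x4FF)) for _ in range(rng.randint(1, 12)))
        for _ in range(1_000)]
probes = [rng.choice(pool) for _ in range(100)]

# Hammer the same path as `w not in stop_words`:
# list `in` -> PyObject_RichCompareBool -> PyUnicode_RichCompare.
for i in itertools.count():
    for p in probes:
        assert p in pool  # always true; a segfault/assert here is the signal
    if i % 1_000 == 0:
        print(f"iteration {i}", end="\r")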

Things I have tried that did not solve the problem:

  1. Reformatting and reinstalling Ubuntu
  2. Switching this machine to Ubuntu 18.04 (another 18.04 machine runs the code just fine)
  3. Swapping hardware to make sure the RAM and the SSD are not faulty (a crude software-level check is sketched after this list)
  4. Changing the Python version: 3.8.6, 3.8.8, 3.9.2
  5. Cloning the conda environment from a working computer to the broken one
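
Regarding point 3: before swapping parts, a crude in-Python memory check can at least catch gross, reproducible corruption. A rough sketch (buffer size and pass count are arbitrary; a dedicated tester such as memtest86+ is far more thorough):

# Crude RAM smoke test: repeatedly copy a known pattern through fresh
# buffers and verify it survives. This only catches gross corruption.
CHUNK_MIB = 256                      # per-pass buffer size; adjust to free RAM
pattern = bytes(range(256)) * (CHUNK_MIB * 1024 * 1024 // 256)

for run in range(64):
    buf = bytearray(pattern)         # copy 1: pattern -> new buffer
    if bytes(buf) != pattern:        # copy 2 + compare
        print(f"corruption detected on pass {run}")
        break
    print(f"pass {run} ok", end="\r")
else:
    print("no corruption seen (not a guarantee)")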

Attached is one stack trace from faulthandler together with the corresponding stack trace from the core dump in gdb.

(eo) axel@minimind:~/test$ python tokenizer_mini.py 
2021-03-30 11:10:15.588399: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-03-30 11:10:15.588426: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Fatal Python error: Segmentation fault

Current thread 0x00007faa73bbe740 (most recent call first):
  File "tokenizer_mini.py", line 36 in preprocess_string
  File "tokenizer_mini.py", line 51 in <module>
Segmentation fault (core dumped)
#0  raise (sig=<optimized out>) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  <signal handler called>
#2  find_maxchar_surrogates (num_surrogates=<synthetic pointer>, maxchar=<synthetic pointer>, 
    end=0x4 <error: Cannot access memory at address 0x4>, begin=0x0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/unicodeobject.c:1703
#3  _PyUnicode_Ready (unicode=0x7f7e4e04d7f0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/unicodeobject.c:1742
#4  0x000055cd65f6df6a in PyUnicode_RichCompare (left=0x7f7e4cf43fb0, right=<optimized out>, op=2)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/unicodeobject.c:11205
#5  0x000055cd6601712a in do_richcompare (op=2, w=0x7f7e4e04d7f0, v=0x7f7e4cf43fb0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/object.c:726
#6  PyObject_RichCompare (op=2, w=0x7f7e4e04d7f0, v=0x7f7e4cf43fb0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/object.c:774
#7  PyObject_RichCompareBool (op=2, w=0x7f7e4e04d7f0, v=0x7f7e4cf43fb0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/object.c:796
#8  list_contains (a=0x7f7e4e04b4c0, el=0x7f7e4cf43fb0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/listobject.c:455
#9  0x000055cd660be41b in PySequence_Contains (ob=0x7f7e4cf43fb0, seq=0x7f7e4e04b4c0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/abstract.c:2083
#10 cmp_outcome (w=0x7f7e4e04b4c0, v=0x7f7e4cf43fb0, op=<optimized out>, tstate=<optimized out>)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:5082
#11 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:2977
#12 0x000055cd6609f706 in PyEval_EvalFrameEx (throwflag=0, f=0x7f7e4f4d3c40)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:738
#13 function_code_fastcall (globals=<optimized out>, nargs=<optimized out>, args=<optimized out>, co=<optimized out>)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/call.c:284
#14 _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/call.c:411
#15 0x000055cd660be54f in _PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x7f7f391985b8, callable=0x7f7f39084160)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Include/cpython/abstract.h:115
#16 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0x55cd66c2e880)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:4963
#17 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:3500
#18 0x000055cd6609e503 in PyEval_EvalFrameEx (throwflag=0, f=0x7f7f39198440)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:4298
#19 _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, 
    argcount=<optimized out>, kwnames=<optimized out>, kwargs=<optimized out>, kwcount=<optimized out>, kwstep=<optimized out>, 
    defs=<optimized out>, defcount=<optimized out>, kwdefs=<optimized out>, closure=<optimized out>, name=<optimized out>, 
    qualname=<optimized out>) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:4298
#20 0x000055cd6609f559 in PyEval_EvalCodeEx (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, 
    args=<optimized out>, argcount=<optimized out>, kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:4327
#21 0x000055cd661429ab in PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:718
#22 0x000055cd66142a43 in run_eval_code_obj (co=0x7f7f3910f240, globals=0x7f7f391fad80, locals=0x7f7f391fad80)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:1165
#23 0x000055cd6615c6b3 in run_mod (mod=<optimized out>, filename=<optimized out>, globals=0x7f7f391fad80, locals=0x7f7f391fad80, 
    flags=<optimized out>, arena=<optimized out>)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:1187
#24 0x000055cd661615b2 in pyrun_file (fp=0x55cd66c2cdf0, filename=0x7f7f391bbee0, start=<optimized out>, globals=0x7f7f391fad80, 
    locals=0x7f7f391fad80, closeit=1, flags=0x7ffe3ee6f8e8)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:1084
#25 0x000055cd66161792 in pyrun_simple_file (flags=0x7ffe3ee6f8e8, closeit=1, filename=0x7f7f391bbee0, fp=0x55cd66c2cdf0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:439
#26 PyRun_SimpleFileExFlags (fp=0x55cd66c2cdf0, filename=<optimized out>, closeit=1, flags=0x7ffe3ee6f8e8)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:472
#27 0x000055cd66161d0d in pymain_run_file (cf=0x7ffe3ee6f8e8, config=0x55cd66c2da70)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Modules/main.c:391
#28 pymain_run_python (exitcode=0x7ffe3ee6f8e0)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Modules/main.c:616
#29 Py_RunMain () at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Modules/main.c:695
#30 0x000055cd66161ec9 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>)
    at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Modules/main.c:1127
#31 0x00007f7f3a3620b3 in __libc_start_main (main=0x55cd65fe3490 <main>, argc=2, argv=0x7ffe3ee6fae8, init=<optimized out>, 
    fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffe3ee6fad8) at ../csu/libc-start.c:308
#32 0x000055cd660d7369 in _start () at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ast.c:937

The conda environment used is listed below, installed with Miniconda3-py38_4.9.2-Linux-x86_64.sh (note that the segfault sometimes occurs during the setup of the conda environment itself, so it may be unrelated to the environment).

name: eo
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8.8
  - pip=20.3.1
  - pip:
    - transformers==4.3.2
    - tensorflow_gpu==2.4.0
    - scikit-learn==0.23.2
    - nltk==3.5
    - matplotlib==3.2.1
    - seaborn==0.11.0
    - tensorflow-addons==0.11.2
    - tf-models-official==2.4.0
    - gspread==3.6.0
    - oauth2client==4.1.3
    - ipykernel==5.4.2
    - autopep8==1.5.4
    - torch==1.7.1

The code below consistently reproduces the problem; the files it reads are plain text files containing unicode text:

from nltk.tokenize import wordpunct_tokenize
from tensorflow.keras.preprocessing.text import Tokenizer
from nltk.stem.snowball import SnowballStemmer
from nltk.corpus import stopwords
import pickle
from pathlib import Path
import faulthandler
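# dump a Python-level stack trace if the interpreter crashes hard (e.g. SIGSEGV)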
faulthandler.enable()


def load_data(root_path, feature, index):
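    # Reads <root>/<feature>/<index // 10_000>/<index>.txt,
    # e.g. index 12345 -> <root>/clean_description/1/12345.txt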
    feature_root = root_path / feature
    dir1 = str(index // 10_000)
    base_path = feature_root / dir1 / str(index)
    full_path = base_path.with_suffix('.txt')
    data = None
    with open(full_path, 'r', encoding='utf-8') as f:
        data = f.read()
    return data


def preprocess_string(text, stemmer, stop_words):
    word_tokens = wordpunct_tokenize(text.lower())
    alpha_tokens = []
    for w in word_tokens:
        try:
            if (w.isalpha() and w not in stop_words):
                alpha_tokens.append(w)
        except:
            print("Something went wrong when handling the word: ", w)

    clean_tokens = []
    for w in alpha_tokens:
        try:
            word = stemmer.stem(w)
            clean_tokens.append(word)
        except:
            print("Something went wrong when stemming the word: ", w)
            clean_tokens.append(w)
    return clean_tokens


stop_words = stopwords.words('english')
stemmer = SnowballStemmer(language='english')
tokenizer = Tokenizer()

root_path = '/srv/patent/EbbaOtto/E'
for idx in range(0, 57454):
    print(f'Processed {idx}/57454', end='\r')
    desc = str(load_data(Path(root_path), 'clean_description', idx))
    desc = preprocess_string(desc, stemmer, stop_words)
    tokenizer.fit_on_texts([desc])

For everyone who finds this while searching for a similar problem: this was eventually resolved as a hardware fault in the CPU. Replacing the CPU with another one of the same model fixed the issue. Interestingly, the problem never showed up when the same computer was running Windows.