How to get BioBERT embeddings
I have a text field in a pandas dataframe for which I would like to generate BioBERT embeddings. Is there a simple way to generate these vector embeddings? I want to use them in another model.
Here is a hypothetical sample of the dataframe:
Visit Code | Problem Assessment |
---|---|
1234 | ge reflux working diagnosis well |
4567 | medication refill order working diagnosis note called in brand benicar 5mg qd 30 prn refill |
I tried this package, but I get an error when installing it:
https://pypi.org/project/biobert-embedding
Error:
Collecting biobert-embedding
Using cached biobert-embedding-0.1.2.tar.gz (4.8 kB)
ERROR: Could not find a version that satisfies the requirement torch==1.2.0 (from biobert-embedding) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 1.7.1)
ERROR: No matching distribution found for torch==1.2.0 (from biobert-embedding)
Any help is greatly appreciated!
Try installing it as follows:
pip install biobert-embedding==0.1.2 torch==1.2.0 -f https://download.pytorch.org/whl/torch_stable.html
I have extended your example dataframe to illustrate how you can now compute sentence vectors for your problem assessments and use them to calculate the cosine similarity between similar visit codes.
>>> from biobert_embedding.embedding import BiobertEmbedding
>>> from scipy.spatial import distance
>>> import pandas as pd
>>> data = {'Visit Code': [1234, 1234, 4567, 4567],
'Problem Assessment': ['ge reflux working diagnosis well',
'other reflux diagnosis poor',
'medication refill order working diagnosis note called in brand benicar 5mg qd 30 prn refill',
'medication must be refilled diagnosis note called in brand Olmesartan 10mg qd 40 prn refill']}
>>> df = pd.DataFrame(data)
>>> df
| Visit Code | Problem Assessment |
---|---|---|
0 | 1234 | ge reflux working diagnosis well |
1 | 1234 | other reflux diagnosis poor |
2 | 4567 | medication refill order working diagnosis note called in brand benicar 5mg qd 30 prn refill |
3 | 4567 | medication must be refilled diagnosis note called in brand Olmesartan 10mg qd 40 prn refill |
>>> biobert = BiobertEmbedding()
>>> df['sentence embedding'] = df['Problem Assessment'].apply(lambda sentence: biobert.sentence_vector(sentence))
>>> df
| Visit Code | Problem Assessment | sentence embedding |
---|---|---|---|
0 | 1234 | ge reflux working diagnosis well | tensor([ 2.7189e-01, -1.6195e-01, 5.8270e-02, -3.2730e-01, 7.5583e-02, ... |
1 | 1234 | other reflux diagnosis poor | tensor([ 1.6971e-01, -2.1405e-01, 3.4427e-02, -2.3090e-01, 1.6007e-02, ... |
2 | 4567 | medication refill order working diagnosis note called in brand benicar 5mg qd 30 prn refill | tensor([ 1.5370e-01, -3.9875e-01, 2.0089e-01, 4.1506e-02, 6.9854e-02, ... |
3 | 4567 | medication must be refilled diagnosis note called in brand Olmesartan 10mg qd 40 prn refill | tensor([ 2.2128e-01, -2.0283e-01, 2.2194e-01, 9.1156e-02, 1.1620e-01, ... |
>>> df.groupby('Visit Code')['sentence embedding'].apply(lambda sentences: 1 - distance.cosine(*sentences.values))
Visit Code
1234 0.950492
4567 0.969715
Name: sentence embedding, dtype: float64
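For reference, what the `groupby` step computes is plain cosine similarity: `scipy.spatial.distance.cosine` returns the cosine *distance*, so `1 - distance.cosine(u, v)` is the normalized dot product of the two sentence vectors. A minimal numpy sketch with made-up 3-dimensional vectors standing in for the 768-dimensional BioBERT ones:

```python
import numpy as np

def cosine_similarity(u, v):
    """Normalized dot product of two vectors; equals 1 - scipy's cosine distance."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for two sentence vectors
u = np.array([0.27, -0.16, 0.06])
v = np.array([0.17, -0.21, 0.03])
sim = cosine_similarity(u, v)
```

Because the measure only depends on the angle between vectors, it is insensitive to sentence length, which is why it is a common choice for comparing sentence embeddings.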
We can see that, as expected, the similar sentences lie close together.