Urdu language dataset for aspect-based sentiment analysis
When I run my code I get an error. What is causing it?
```python
import numpy as np
import torch

# tokenizer is assumed to expose text_to_sequence(), which maps text to a
# zero-padded array of token indices (reversed when reverse=True).
text_raw_indices = tokenizer.text_to_sequence(text_left + " " + aspect + " " + text_right)
text_raw_without_aspect_indices = tokenizer.text_to_sequence(text_left + " " + text_right)
text_left_indices = tokenizer.text_to_sequence(text_left)
text_left_with_aspect_indices = tokenizer.text_to_sequence(text_left + " " + aspect)
text_right_indices = tokenizer.text_to_sequence(text_right, reverse=True)
text_right_with_aspect_indices = tokenizer.text_to_sequence(" " + aspect + " " + text_right, reverse=True)
aspect_indices = tokenizer.text_to_sequence(aspect)

# Count non-padding tokens (padding index is 0).
left_context_len = np.sum(text_left_indices != 0)
aspect_len = np.sum(aspect_indices != 0)
# [start, end] position of the aspect term within the full token sequence.
aspect_in_text = torch.tensor([left_context_len.item(), (left_context_len + aspect_len - 1).item()])
# Shift polarity labels (e.g. -1/0/1) so class ids start at 0.
polarity = int(polarity) + 1
```
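For context, here is a minimal, runnable sketch of what the index bookkeeping above computes. `toy_text_to_sequence` is a hypothetical stand-in for the project's tokenizer, assuming 0 is the padding index:

```python
import numpy as np
import torch

def toy_text_to_sequence(text, max_len=10):
    # Hypothetical tokenizer: assigns 1-based ids per word, zero-pads to max_len.
    ids = [i + 1 for i, _ in enumerate(text.split())]
    return np.array(ids + [0] * (max_len - len(ids)))

text_left, aspect = "i really like the", "battery life"
left = toy_text_to_sequence(text_left)
asp = toy_text_to_sequence(aspect)
left_len = np.sum(left != 0)   # 4 non-padding tokens
asp_len = np.sum(asp != 0)     # 2
# Start and end positions of the aspect inside "text_left aspect text_right".
span = torch.tensor([left_len.item(), (left_len + asp_len - 1).item()])
print(span)  # tensor([4, 5])
```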
Just use LASER. It covers Urdu as well.
You can read more about it here:
- https://engineering.fb.com/ai-research/laser-multilingual-sentence-embeddings/
- https://github.com/facebookresearch/LASER
There is also an unofficial PyPI package here. It swaps out some internal dependencies, but still works as expected.
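As a minimal sketch, assuming the unofficial package referred to is `laserembeddings` (an assumption on my part), getting Urdu sentence embeddings looks like this:

```python
# pip install laserembeddings
# python -m laserembeddings download-models
from laserembeddings import Laser

laser = Laser()

# LASER maps sentences from 90+ languages, Urdu ("ur") included,
# into one shared 1024-dimensional embedding space.
sentences = ["یہ کھانا بہت مزیدار ہے۔"]  # "This food is very tasty."
embeddings = laser.embed_sentences(sentences, lang="ur")
print(embeddings.shape)  # (1, 1024)
```

Because all languages share the same embedding space, a sentiment classifier trained on these vectors in a high-resource language can often be applied to Urdu input directly.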
The most important question, though, so that we can help you better: what are you trying to achieve, and what is your end goal?