
An Improved Baseline for Sentence-level Relation Extraction




Paper: An Improved Baseline for Sentence-level Relation Extraction

Abstract & Contribution

Current sentence-level relation extraction (RE) performance still falls far short of human performance.

The paper revisits existing models and points out two overlooked aspects:

  1. A relation instance carries several kinds of entity information, such as the entity names, spans, and types, yet existing models do not take all of this information as input.
  2. Because the predefined ontology is limited, some instances whose relations fall outside it are inevitably labeled NA, even though they may in fact express a wide variety of semantic relations.

It then proposes an improved sentence-level RE model:

  1. Typed entity markers [1] are used to improve entity representation.
  2. NA instances are handled with confidence-based classification [2]: a confidence threshold is set, and an instance is assigned to NA only if its final score falls below that threshold.

The model reaches an F1 of 75.0% on the TACRED dataset, 2.3% above the previous SOTA.

Model for RE

The RE model in this paper mainly extends earlier transformer-based [3] relation extraction work [4].

Entity Representation

To examine how entities should be represented, the paper compares several entity representation methods: entity mask [5], entity marker [6], entity marker (punct) [7], typed entity marker [8], and typed entity marker (punct), which is proposed in this paper:

Table 1

The table shows that:

  1. the proposed entity marking scheme performs best, reaching an F1 of 74.5%;
  2. introducing special marker tokens makes the RoBERTa model perform worse.
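
To make the marker variants above concrete, here is a minimal sketch of how the proposed typed entity marker (punct) could be inserted into a sentence. The marker characters follow the paper's description, while the sentence, spans, and entity types below are made-up examples.

```python
# Minimal sketch: inserting "typed entity marker (punct)" around subject and object spans.
# Subject: @ * subj-type * ... @   Object: # ^ obj-type ^ ... #
def add_typed_markers_punct(tokens, subj_span, subj_type, obj_span, obj_type):
    """Wrap subject/object spans with punctuation-based typed markers.

    tokens    : list of word tokens
    subj_span : (start, end) indices of the subject, end exclusive
    obj_span  : (start, end) indices of the object, end exclusive
    """
    (ss, se), (os_, oe) = subj_span, obj_span
    out = []
    for i, tok in enumerate(tokens):
        if i == ss:
            out += ["@", "*", subj_type.lower(), "*"]
        if i == os_:
            out += ["#", "^", obj_type.lower(), "^"]
        out.append(tok)
        if i == se - 1:
            out.append("@")
        if i == oe - 1:
            out.append("#")
    return out

# Example (hypothetical sentence and spans):
tokens = "Bill Gates founded Microsoft in 1975".split()
print(" ".join(add_typed_markers_punct(tokens, (0, 2), "PERSON", (3, 4), "ORGANIZATION")))
# -> @ * person * Bill Gates @ founded # ^ organization ^ Microsoft # in 1975
```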

On the other hand, consider inference over unseen entities. Prior work [9] suggested that entity names may provide little cue about the relation type and that using only the (masked) entity pair as input can achieve better results, implying that RE classifiers trained without entity masks may not generalize well to unseen entities.

However, entity masks also remove the entity information, so the model cannot learn from it [10][11][12]; and if entity names are ignored, the task cannot be improved with external knowledge bases either.

The paper therefore proposes a filtered evaluation setting: the test set is filtered to keep only instances whose entities never appear in the training set (the filtered test set). The evaluation results are shown below:

Table 2

Conclusion: the typed entity marker (ours) still outperforms the entity mask.
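
For reference, here is a small sketch of how the filtered test set could be built, assuming TACRED-style fields (token, subj_start/subj_end, obj_start/obj_end); the helper names are illustrative, not the paper's code.

```python
def entity_strings(example):
    """Return the lower-cased surface strings of the subject and object entities."""
    toks = example["token"]
    subj = " ".join(toks[example["subj_start"]:example["subj_end"] + 1])
    obj = " ".join(toks[example["obj_start"]:example["obj_end"] + 1])
    return subj.lower(), obj.lower()

def build_filtered_test_set(train_set, test_set):
    """Drop any test instance that shares an entity string with the training set."""
    seen = set()
    for ex in train_set:
        seen.update(entity_strings(ex))
    return [ex for ex in test_set if not (set(entity_strings(ex)) & seen)]
```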

NA instances

Next comes the second problem: in practice, a large portion of the data is labeled NA; 78.7% of TACRED instances are NA.

The existing approach is to add an extra NA class: if the probability of NA is higher than that of every other class, the instance is classified as NA.

This paper instead uses confidence-based classification: an instance that actually expresses one of the predefined relations should receive a high confidence score, while instances scoring below a threshold are assigned to NA. The approach is similar to the open-set classification of Bendale et al. and Dhamija et al. [13][14] and the OOD detection of Liang et al. [15]. Given a sentence $x$, the model computes the class probabilities $p \in \mathbb{R}^{|\mathcal{R}|}$ and the confidence score $c = \max_{r \in \mathcal{R}} p_r$, and predicts the class with the highest probability.
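
Concretely, a minimal sketch of this prediction rule: softmax over the predefined relations only, take the maximum probability as the confidence $c$, and fall back to NA when $c$ is below a threshold tuned on the dev set. The function name and threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def predict_with_confidence(logits: torch.Tensor, theta: float):
    """logits: [batch, |R|] scores over the predefined (non-NA) relations only.

    Returns predicted class ids (-1 standing for NA) and the confidence c.
    """
    probs = F.softmax(logits, dim=-1)      # p in R^{|R|}
    conf, pred = probs.max(dim=-1)         # c = max_r p_r and the corresponding class
    pred = torch.where(conf >= theta, pred, torch.full_like(pred, -1))
    return pred, conf
```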

For this method to work, two conditions must hold:

  1. NA instances receive sufficiently low confidence scores;
  2. instances of the predefined classes receive sufficiently high confidence scores.

(which, admittedly, says little more than the obvious)

The second condition is already satisfied by the cross-entropy loss on instances of the predefined classes. For the first, directly minimizing the confidence score on NA instances leads to insufficient optimization, so the maximum class probability is replaced by a smoother surrogate for the confidence:

$$c_{sup} = \sum_{r \in \mathcal{R}} p_r^2$$

$$\mathcal{L}_{conf} = -\log(1 - c_{sup})$$

Since $c = \max_{r \in \mathcal{R}} p_r \leqslant \sqrt{c_{sup}}$, minimizing $c_{sup}$ also minimizes an upper bound on $c$, which makes training more stable. Differentiating this loss with respect to the logit $l_r$ of relation $r$ gives:

$$\frac{\partial \mathcal{L}_{conf}}{\partial l_r} = \frac{2 p_r \left( p_r - \sum_{r \in \mathcal{R}} p_r^2 \right)}{1 - \sum_{r \in \mathcal{R}} p_r^2}$$

Moreover:

  1. The confidence score reaches its minimum when $p_r = \frac{1}{|\mathcal{R}|}$ for every $r$, i.e., when the probability distribution is uniform.
  2. The confidence loss automatically reweights training instances by the factor $\frac{1}{1 - \sum_{r \in \mathcal{R}} p_r^2}$, giving higher weight to NA instances that still have high confidence (a minimal implementation sketch follows this list).
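
A minimal PyTorch sketch of this training objective, assuming cross-entropy on positive instances plus the confidence loss $-\log(1 - c_{sup})$ on NA instances; how the two terms are weighted or batched is not specified here, so the simple sum below is an assumption.

```python
import torch
import torch.nn.functional as F

def training_loss(logits, labels, na_label=-1):
    """logits: [batch, |R|]; labels: [batch], where `na_label` marks NA instances."""
    probs = F.softmax(logits, dim=-1)
    is_na = labels.eq(na_label)

    loss = logits.new_zeros(())
    if (~is_na).any():
        # Cross-entropy on positive instances keeps their confidence high.
        loss = loss + F.cross_entropy(logits[~is_na], labels[~is_na])
    if is_na.any():
        # L_conf = -log(1 - c_sup) with c_sup = sum_r p_r^2 pushes NA confidence down.
        c_sup = probs[is_na].pow(2).sum(dim=-1)
        loss = loss + (-torch.log((1.0 - c_sup).clamp_min(1e-6))).mean()
    return loss
```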

Experiments

  • Training corpora: TACRED and SemEval 2010
  • Learning rate: 3e-5, warmed up and then linearly decayed (see the referenced note on warmup learning-rate schedules); a configuration sketch follows this list
  • Batch size: 64
  • Epochs: 5 (TACRED) and 10 (SemEval)
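
A minimal sketch of this optimization setup using the transformers library's linear warmup schedule; AdamW and the 10% warmup ratio are assumptions, only the 3e-5 learning rate and linear decay come from the notes above.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, num_training_steps, lr=3e-5, warmup_ratio=0.1):
    """AdamW at 3e-5 with warmup followed by linear decay (warmup ratio is a guess)."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup_ratio * num_training_steps),
        num_training_steps=num_training_steps,
    )
    return optimizer, scheduler
```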

The results are shown below:

Table 3

Table 4

The F1 is 0.5% higher than the baseline model [16].


  1. Zexuan Zhong and Danqi Chen. 2020. A frustratingly easy approach for joint entity and relation extraction.arXiv preprint arXiv:2010.12812. ↩︎

  2. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2019. SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. arXiv preprint arXiv:1911.10422. ↩︎

  3. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. ↩︎

  4. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics. ↩︎

  5. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35–45, Copenhagen, Denmark. Association for Computational Linguistics. ↩︎

  6. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics. ↩︎

  7. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808. ↩︎

  8. Zexuan Zhong and Danqi Chen. 2020. A frustratingly easy approach for joint entity and relation extraction. arXiv preprint arXiv:2010.12812. ↩︎

  9. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215, Brussels, Belgium. Association for Computational Linguistics. ↩︎

  10. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441–1451, Florence, Italy. Association for Computational Linguistics. ↩︎

  11. Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, V. Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In EMNLP/IJCNLP. ↩︎

  12. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808. ↩︎

  13. Abhijit Bendale and Terrance E Boult. 2016. Towards open set deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1563–1572. ↩︎

  14. Akshay Raj Dhamija, Manuel Günther, and Terrance E Boult. 2018. Reducing network agnostophobia. In NeurIPS. ↩︎

  15. Shiyu Liang, Yixuan Li, and R Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations. ↩︎

  16. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895–2905, Florence, Italy. Association for Computational Linguistics. ↩︎

Source: https://blog.csdn.net/weixin_37913277/article/details/117391227
