
sentence similarity vs text (multi-sentence) similarity



1. sentence similarity

1.1 Methods

BERT
Universal Sentence Encoder
ELECTRA embedding

1.2 Introduction

1.2.1 BERT
With recent advances in language models, sentence-to-vector representations have been getting better, and they may give good results in your case. For example, BERT can be used to obtain a sentence embedding.

Supervised: BERT for sentence similarity, if you have a labelled dataset

You can use a pre-trained BERT model: pass the two sentences in together and feed the vector obtained at the [CLS] token through a feed-forward neural network to decide whether the sentences are similar. This approach works if you have a labelled dataset. If you don't, consider the following:
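
A minimal sketch of this supervised setup, using the Hugging Face transformers library (the checkpoint name is the public bert-base-uncased; the classification head is randomly initialised until you fine-tune it on your labelled pairs):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # two classes: similar / not similar
)

# BERT accepts a sentence pair directly; the tokenizer inserts [CLS] and [SEP].
inputs = tokenizer("A man is playing a guitar.",
                   "Someone is playing an instrument.",
                   return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 2)

# With labelled pairs you would fine-tune this head with cross-entropy loss;
# before fine-tuning the prediction below is meaningless.
print(logits.softmax(dim=-1))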

Unsupervised: BERT for single sentences


You pass each variable-length sentence through the BERT network, and the vector obtained at the [CLS] token becomes the vector for that sentence. You can then compare sentence vectors with cosine similarity as usual.
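
A minimal sketch of the unsupervised setup, again with Hugging Face transformers. (In practice, mean-pooling the token states or using a Sentence-BERT model often works better than the raw [CLS] vector, but this mirrors the description above.)

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

def cls_embedding(sentence: str) -> torch.Tensor:
    """Return the final hidden state at the [CLS] token as the sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[0, 0]

a = cls_embedding("How old are you?")
b = cls_embedding("What is your age?")
print(torch.cosine_similarity(a, b, dim=0).item())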

1.2.2 Universal Sentence Encoder

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46808.pdf
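
USE (Cer et al., 2018) maps sentences to 512-dimensional vectors. A minimal usage sketch via TensorFlow Hub (the module handle below is the commonly published USE v4 handle; treat it as an assumption):

import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = ["How old are you?", "What is your age?"]
vecs = embed(sentences).numpy()  # shape: (2, 512)

# Cosine similarity between the two sentence vectors.
cos = np.dot(vecs[0], vecs[1]) / (np.linalg.norm(vecs[0]) * np.linalg.norm(vecs[1]))
print(f"cosine similarity: {cos:.3f}")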

1.2.3 ELECTRA: PRE-TRAINING TEXT ENCODERS AS DISCRIMINATORS RATHER THAN GENERATORS

https://arxiv.org/pdf/2003.10555.pdf

ELECTRA proposes a more sample-efficient pre-training task called replaced token detection. From the paper:

Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not.

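The paper is about pre-training, not sentence embeddings as such. A hedged sketch of one common recipe (assumed here, not taken from the paper) is to mean-pool the final hidden states of the pre-trained discriminator:

import torch
from transformers import ElectraModel, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraModel.from_pretrained("google/electra-small-discriminator")

def electra_embedding(sentence: str) -> torch.Tensor:
    """Mean-pool ELECTRA's final hidden states into one sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

a = electra_embedding("How old are you?")
b = electra_embedding("What is your age?")
print(torch.cosine_similarity(a, b, dim=0).item())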

1.3 In practice

Easy sentence similarity with BERT Sentence Embeddings using John Snow Labs NLU:
https://medium.com/spark-nlp/easy-sentence-similarity-with-bert-sentence-embeddings-using-john-snow-labs-nlu-ea078deb6ebf

Building sentence vectors with BERT and computing similarity (in Chinese):
https://netycc.com/2018/12/05/%E5%88%A9%E7%94%A8bert%E6%9E%84%E5%BB%BA%E5%8F%A5%E5%90%91%E9%87%8F%E5%B9%B6%E8%AE%A1%E7%AE%97%E7%9B%B8%E4%BC%BC%E5%BA%A6/

The bert-as-service framework: it uses BERT as a sentence encoder and hosts it as a service via ZeroMQ, allowing you to map sentences into fixed-length sentence- or token-level representations with only two lines of client code.
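
The canonical client-side usage from the bert-as-service README (it assumes a server has already been started separately, e.g. bert-serving-start -model_dir /path/to/bert_model -num_worker=1):

from bert_serving.client import BertClient

bc = BertClient()
vecs = bc.encode(["First do it", "then do it right", "then do it better"])
print(vecs.shape)  # (3, 768) for BERT-Base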

2. text similarity

2.1 Methods:
Word Mover's Distance (WMD) at the word level; Sentence Mover's Similarity (SMS) at the sentence level.
Sentence Mover's Similarity is a variation of Word Mover's Distance.

2.2 Introduction:

2.2.1 WMD

One approach is Word Mover's Distance (WMD), an algorithm for measuring the distance between texts of different lengths, where each word is represented as a word-embedding vector.

WMD measures the dissimilarity between two text documents as the minimum cumulative distance that the embedded words of one document need to "travel" to reach the embedded words of the other document.

For example:
[Figure: a WMD example, from the "From Word Embeddings To Document Distances" paper]
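
A minimal WMD sketch with gensim, assuming the public GoogleNews word2vec vectors fetched through gensim's downloader (gensim's wmdistance additionally requires the POT package, or pyemd in older gensim versions):

import gensim.downloader as api

# Large download on first use.
wv = api.load("word2vec-google-news-300")

s1 = "obama speaks to the media in illinois".split()
s2 = "the president greets the press in chicago".split()

# Minimum cumulative distance the embedded words of one document
# must "travel" to reach the embedded words of the other.
print(wv.wmdistance(s1, s2))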

WMD can be modified into a sentence-level mover's distance, comparing how far apart sentence embeddings are from one another.

2.2.2 SMS

Sentence Mover’s Similarity:
https://homes.cs.washington.edu/~nasmith/papers/clark+celikyilmaz+smith.acl19.pdf
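
The paper casts this as an optimal transport problem over sentence embeddings with frequency-based weights. As a hedged simplification (not the paper's exact method), the idea can be approximated with an equal-weight one-to-one matching via the Hungarian algorithm; the embeddings below are random stand-ins for any sentence encoder from Section 1:

import numpy as np
from scipy.optimize import linear_sum_assignment

def sentence_movers_distance(doc_a: np.ndarray, doc_b: np.ndarray) -> float:
    """doc_a: (m, d) sentence embeddings; doc_b: (n, d) sentence embeddings."""
    # Pairwise cosine distances between all sentence pairs.
    a = doc_a / np.linalg.norm(doc_a, axis=1, keepdims=True)
    b = doc_b / np.linalg.norm(doc_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                      # (m, n) cosine-distance matrix
    rows, cols = linear_sum_assignment(cost)  # cheapest one-to-one matching
    return float(cost[rows, cols].mean())

rng = np.random.default_rng(0)
doc_a = rng.normal(size=(3, 512))  # document A: 3 sentence embeddings
doc_b = rng.normal(size=(4, 512))  # document B: 4 sentence embeddings
print(sentence_movers_distance(doc_a, doc_b))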
