
Slowly learning to use it

2020-02-04 16:37:50  Views: 293  Source: Internet

Tags: word, cereal, slowly, similarity, learning, words, model, gensim


Unlocking Text Data with Machine Learning & Deep Learning Using Python

Only a few lines for now; more later when I'm more familiar with this stuff.

But training these models requires a huge amount of computing power. So let us go ahead and use Google's pre-trained model, which has been trained on over 100 billion words.
Download the model from the path below and keep it in your local storage:
https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit
Import the gensim package and follow these steps to understand Google's word2vec.

# Import the gensim package
import gensim

# Load the saved model. The book's call, gensim.models.Word2Vec.load_word2vec_format,
# was removed in newer gensim releases; KeyedVectors.load_word2vec_format is the
# current equivalent.
model = gensim.models.KeyedVectors.load_word2vec_format(
    r'C:\Users\GoogleNews-vectors-negative300.bin', binary=True)
# Checking how similarity works.
print(model.similarity('this', 'is'))
Output:
0.407970363878
# Let's check one more.
print(model.similarity('post', 'book'))
Output:
0.0572043891977
"This" and "is" have a good amount of similarity, but the similarity between "post" and "book" is poor. For any given pair of words, word2vec looks up the vector of each word and computes the cosine similarity between them.
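
To see what that similarity number means, the same value can be computed by hand: take both word vectors and compute their cosine similarity. A minimal sketch with NumPy, assuming the model loaded above (the NumPy code is my illustration, not from the book):

import numpy as np

# Cosine similarity between two word vectors, done by hand.
v1, v2 = model['post'], model['book']
cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(cosine)  # should match model.similarity('post', 'book')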

Finding the odd one out.

model.doesnt_match('breakfast cereal dinner lunch'.split())
Output:
‘cereal’
Of 'breakfast', 'cereal', 'dinner', and 'lunch', 'cereal' is the word least related to the remaining three.
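
One simple way to find the odd one out (and, as far as I know, close to what gensim does internally) is to average the vectors of the set and pick the word whose vector is least similar to that mean. A sketch, assuming the model loaded above:

import numpy as np

words = 'breakfast cereal dinner lunch'.split()
vectors = np.array([model[w] for w in words])
mean = vectors.mean(axis=0)
# Cosine similarity of each word to the mean of the set.
sims = vectors @ mean / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(mean))
print(words[int(sims.argmin())])  # the least similar word, e.g. 'cereal'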

It can also find relations between words.

model.most_similar(positive=['woman', 'king'], negative=['man'])
Output:
queen: 0.7699
If you add 'woman' to 'king' and subtract 'man', the model predicts 'queen' with a similarity of about 0.77. Isn't this amazing?
[Figure: vector-arithmetic illustration, king - man + woman ≈ queen]
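
The analogy can also be reconstructed by hand: build the vector king - man + woman and compare it with the vector for queen. This is a simplified sketch (most_similar actually works with unit-normalized vectors), assuming the model loaded above:

import numpy as np

# Compose the analogy vector and compare it to 'queen'.
target = model['king'] - model['man'] + model['woman']
queen = model['queen']
print(np.dot(target, queen) /
      (np.linalg.norm(target) * np.linalg.norm(queen)))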
Let's have a look at a few interesting examples using a t-SNE plot of word embeddings.

[Figure: t-SNE plot of word embeddings for home interiors and exteriors]

The figure above is the word-embedding representation of home interiors and exteriors. If you observe closely, all the words related to electric fittings are near each other; similarly, words related to bathroom fittings are near each other, and so on. This is the beauty of word embeddings.
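
The original plot is not reproduced here, but a similar one can be generated with scikit-learn's TSNE. A minimal sketch, assuming the model loaded above and using a hypothetical word list (pick any words present in the model's vocabulary):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Hypothetical home-fittings vocabulary for illustration.
words = ['lamp', 'switch', 'socket', 'wire', 'shower', 'bathtub',
         'faucet', 'sink', 'sofa']
vectors = np.array([model[w] for w in words])

# Project the 300-d vectors down to 2-d; perplexity must be smaller
# than the number of samples.
coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), word in zip(coords, words):
    plt.annotate(word, (x, y))
plt.show()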

Source: https://blog.csdn.net/weixin_45514087/article/details/104171115
