
Logistic Regression




Introduction

Logistic regression: a model that fits the data with the logistic (sigmoid) function and uses it for classification.

\[P(x)=\frac{1}{1+e^{-x}} \quad \text{(sigmoid function)} \]

\[y= \begin{cases}1, & P(x) \geq 0.5 \\ 0, & P(x)<0.5\end{cases} \]

Here y is the predicted class, P(x) is the predicted probability, and x is the feature value.

The core of a classification problem is finding the decision boundary.
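A minimal sketch of the sigmoid and the 0.5 threshold rule (the function names below are illustrative, not from the original post):

import numpy as np

def sigmoid(z):
    # P(x) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def predict_label(z, threshold=0.5):
    # y = 1 if P(x) >= 0.5, else 0
    return (sigmoid(z) >= threshold).astype(int)

print(predict_label(np.array([-2.0, 0.0, 3.0])))  # -> [0 1 1]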

Loss Function

\[J_{i}= \begin{cases} -\log \left(P\left(x_{i}\right)\right), & \text{if } y_{i}=1 \\ -\log \left(1-P\left(x_{i}\right)\right), & \text{if } y_{i}=0 \end{cases} \]

Combined into a single expression:

\[J=\frac{1}{m} \sum_{i=1}^{m} J_{i}=-\frac{1}{m} \sum_{i=1}^{m}\left[y_{i} \log \left(P\left(x_{i}\right)\right)+\left(1-y_{i}\right) \log \left(1-P\left(x_{i}\right)\right)\right] \]
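As a rough sketch, this combined cross-entropy loss can be computed in NumPy as follows (p is assumed to hold the predicted probabilities P(x_i) and y the 0/1 labels; the clipping is my own addition to avoid log(0)):

import numpy as np

def cross_entropy(y, p, eps=1e-12):
    # J = -(1/m) * sum( y*log(p) + (1-y)*log(1-p) )
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

print(cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.7])))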

Repeat the following update until convergence (gradient descent):

\[\left\{\begin{aligned} \text{temp}_{\theta_{j}} &=\theta_{j}-\alpha \frac{\partial}{\partial \theta_{j}} J(\theta) \\ \theta_{j} &=\text{temp}_{\theta_{j}} \end{aligned}\right. \]
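A minimal gradient-descent sketch for this update, assuming X is a feature matrix with a leading column of ones and y holds the 0/1 labels (the scikit-learn code later in this post uses its own solver instead):

import numpy as np

def gradient_descent(X, y, alpha=0.1, n_iters=1000):
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-X @ theta))  # P(x) for every sample
        grad = X.T @ (p - y) / len(y)         # dJ/d(theta_j) of the cross-entropy loss
        theta = theta - alpha * grad          # simultaneous update of all theta_j
    return theta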

Evaluating Model Performance
Use accuracy:

\[\text {Accuracy}=\frac{\text{number of correctly predicted samples}}{\text{total number of samples}} \]
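The code below uses scikit-learn's accuracy_score; a hand-rolled equivalent (illustrative only) is simply:

import numpy as np

def accuracy(y_true, y_pred):
    # fraction of predictions that match the true labels
    return np.mean(np.array(y_true) == np.array(y_pred))

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75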

Reference

https://blog.csdn.net/weixin_46344368/article/details/105904589?spm=1001.2014.3001.5502

Code

Student exam scores: given a student's scores on two exams, predict whether the student will pass the third exam.

import pandas as pd
import numpy as np
data = pd.read_csv('examdata.csv')
data.head()
# visualize the data
from matplotlib import pyplot as plt
fig1=plt.figure()
plt.scatter(data.loc[:,'Exam1'],data.loc[:,'Exam2'])
plt.title('Exam1-Exam2')
plt.xlabel('Exam1')
plt.ylabel('Exam2')
plt.show()
# add label mask
mask=data.loc[:,'Pass']==1
print(mask) # print(~mask)
fig2=plt.figure()
passed=plt.scatter(data.loc[:,'Exam1'][mask],data.loc[:,'Exam2'][mask])
failed=plt.scatter(data.loc[:,'Exam1'][~mask],data.loc[:,'Exam2'][~mask])
plt.title('Exam1-Exam2')
plt.xlabel('Exam1')
plt.ylabel('Exam2')
plt.legend((passed,failed),('passed','failed'))
plt.show()
#define X,y
X = data.drop(['Pass'],axis=1)
y = data.loc[:,'Pass']
#X.head()
X1 = data.loc[:,'Exam1']
X2 = data.loc[:, 'Exam2']
y.head()
# establish the model and train it
from sklearn.linear_model import LogisticRegression
LR = LogisticRegression()
LR.fit(X,y)
# show the predicted result and its accuracy
y_predict = LR.predict(X)
print(y_predict)
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y,y_predict)
print(accuracy)
#exam1 = 70, exam2=65
y_test = LR.predict([[70,65]])
print('passed' if y_test==1 else 'failed')
print(LR.coef_,LR.intercept_)
theta0 = LR.intercept_
theta1,theta2 = LR.coef_[0][0],LR.coef_[0][1]
print(theta0,theta1,theta2)
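# The linear decision boundary satisfies theta0 + theta1*X1 + theta2*X2 = 0,
# so solving for X2 gives X2 = -(theta0 + theta1*X1) / theta2.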
X2_new = -(theta0+theta1*X1)/theta2
print(X2_new)
# decision boundary
fig3 = plt.figure()
passed=plt.scatter(data.loc[:,'Exam1'][mask],data.loc[:,'Exam2'][mask])
failed=plt.scatter(data.loc[:,'Exam1'][~mask],data.loc[:,'Exam2'][~mask])
plt.plot(X1,X2_new) # plot the linear decision boundary
plt.title('Exam1-Exam2')
plt.xlabel('Exam1')
plt.ylabel('Exam2')
plt.legend((passed,failed),('passed','failed'))
plt.show()  
# create new data
X1_2 = X1*X1
X2_2 = X2*X2
X1_X2 = X1*X2
print(X1,X1_2)
X_new = {'X1':X1,'X2':X2,'X1_2':X1_2,'X2_2':X2_2,'X1_X2':X1_X2}
X_new = pd.DataFrame(X_new)
print(X_new)
LR2 = LogisticRegression()
LR2.fit(X_new,y)
y2_predict = LR2.predict(X_new)
accuracy2 = accuracy_score(y,y2_predict)
print(accuracy2)
X1_new = X1.sort_values()
print(X1,X1_new)
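# The second-order decision boundary satisfies
#   theta0 + theta1*X1 + theta2*X2 + theta3*X1^2 + theta4*X2^2 + theta5*X1*X2 = 0.
# Treating it as a quadratic in X2, a*X2^2 + b*X2 + c = 0, with
#   a = theta4, b = theta5*X1 + theta2, c = theta0 + theta1*X1 + theta3*X1^2,
# the boundary value of X2 follows from the quadratic formula below.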
theta0=LR2.intercept_
theta1,theta2,theta3,theta4,theta5=LR2.coef_[0][0],LR2.coef_[0][1],LR2.coef_[0][2],LR2.coef_[0][3],LR2.coef_[0][4]
a = theta4
b = theta5*X1_new+theta2
c = theta0+theta1*X1_new+theta3*X1_new*X1_new
X2_new_boundary = (-b+np.sqrt(b*b-4*a*c))/(2*a)
print(X2_new_boundary)
fig5 = plt.figure()
passed=plt.scatter(data.loc[:,'Exam1'][mask],data.loc[:,'Exam2'][mask])
failed=plt.scatter(data.loc[:,'Exam1'][~mask],data.loc[:,'Exam2'][~mask])
plt.plot(X1_new,X2_new_boundary) # plot the second-order decision boundary
plt.title('Exam1-Exam2')
plt.xlabel('Exam1')
plt.ylabel('Exam2')
plt.legend((passed,failed),('passed','failed'))
plt.show()

