
Introduction to Neural Networks (Part 2)




Picking up where the last post left off, in this section we look at how a computer finds a suitable linear equation, in other words, how does a computer learn?

Let's look at a simple example. As shown in the figure below, there are three blue points and three red points, and we want to find a line that separates them. A computer might begin by choosing a linear equation at random (lower-right figure). This line splits the whole sample space into two regions, a blue region and a red region. Clearly this line classifies poorly, so we need to move it, that is, bring it closer to the two misclassified points in the figure (the red point in the blue region and the blue point in the red region), so that the classification gradually improves.

Here is how to move the line closer to a target point. As shown in the figure below, suppose the line's equation is 3x_1 + 4x_2 - 10 = 0, and the red point in the figure, at coordinates (4, 5), is misclassified. To move the line toward the red point we can do the following: take the coefficients of the line's equation, append a bias unit of 1 to the red point's coordinates so that they become (4, 5, 1), then subtract element-wise (as shown in the figure below), and use the resulting values as the new coefficients of the line.
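Concretely, with the numbers above, the element-wise subtraction is (3, 4, -10) - (4, 5, 1) = (-1, -1, -11), which would give the new line -x_1 - x_2 - 11 = 0.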

          

But wait! The line obtained this way moves a great deal, and when there are many data points this can actually make the classification worse, so we would rather nudge the line toward the red point by a small amount. This is where the learning rate comes in. From the analysis above, the learning rate should be a small number; here assume it is 0.1. Multiply the red point's coordinates (with the bias unit appended) by 0.1 and then subtract (as shown below). The new line equation is 2.6x_1 + 3.5x_2 - 10.1 = 0, and you will be surprised to find that the line has moved closer to the misclassified red point! That's right, it really is that simple.

Similarly, if a blue point (1, 1) lies in the red region, the same method moves the line closer to it, except that the new coefficients are obtained by adding instead of subtracting. Keep this method in mind; we will use it again and again.
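As a quick check of the arithmetic, here is a minimal NumPy sketch of this per-point update, using the numbers from the examples above (the variable names are just for illustration):

import numpy as np

# Coefficients of the line 3*x1 + 4*x2 - 10 = 0, stored as (w1, w2, b).
line = np.array([3.0, 4.0, -10.0])
learn_rate = 0.1

# Red point (4, 5) sitting in the blue region: append the bias unit 1
# and subtract learn_rate times the point from the coefficients.
red_point = np.array([4.0, 5.0, 1.0])
print(line - learn_rate * red_point)   # [  2.6   3.5 -10.1]  ->  2.6x_1 + 3.5x_2 - 10.1 = 0

# Blue point (1, 1) sitting in the red region: same idea, but add.
blue_point = np.array([1.0, 1.0, 1.0])
print(line + learn_rate * blue_point)  # [ 3.1  4.1 -9.9]  ->  3.1x_1 + 4.1x_2 - 9.9 = 0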

      

To summarize the above (pseudocode shown in the figure below): for n-dimensional data, start by assigning random weights and a random bias, then compute the prediction for every point. For each misclassified point we do the following:

1. Points predicted as 0, i.e., blue points assigned to the red region: add the learning rate times the point's coordinates to the current weights, and add the learning rate to the current bias.

2. Points predicted as 1, i.e., red points assigned to the blue region: subtract the learning rate times the point's coordinates from the current weights, and subtract the learning rate from the current bias.

Now for a practice exercise: use the perceptron algorithm to classify the data below (each row is x_1, x_2, label).

0.78051,-0.063669,1
0.28774,0.29139,1
0.40714,0.17878,1
0.2923,0.4217,1
0.50922,0.35256,1
0.27785,0.10802,1
0.27527,0.33223,1
0.43999,0.31245,1
0.33557,0.42984,1
0.23448,0.24986,1
0.0084492,0.13658,1
0.12419,0.33595,1
0.25644,0.42624,1
0.4591,0.40426,1
0.44547,0.45117,1
0.42218,0.20118,1
0.49563,0.21445,1
0.30848,0.24306,1
0.39707,0.44438,1
0.32945,0.39217,1
0.40739,0.40271,1
0.3106,0.50702,1
0.49638,0.45384,1
0.10073,0.32053,1
0.69907,0.37307,1
0.29767,0.69648,1
0.15099,0.57341,1
0.16427,0.27759,1
0.33259,0.055964,1
0.53741,0.28637,1
0.19503,0.36879,1
0.40278,0.035148,1
0.21296,0.55169,1
0.48447,0.56991,1
0.25476,0.34596,1
0.21726,0.28641,1
0.67078,0.46538,1
0.3815,0.4622,1
0.53838,0.32774,1
0.4849,0.26071,1
0.37095,0.38809,1
0.54527,0.63911,1
0.32149,0.12007,1
0.42216,0.61666,1
0.10194,0.060408,1
0.15254,0.2168,1
0.45558,0.43769,1
0.28488,0.52142,1
0.27633,0.21264,1
0.39748,0.31902,1
0.5533,1,0
0.44274,0.59205,0
0.85176,0.6612,0
0.60436,0.86605,0
0.68243,0.48301,0
1,0.76815,0
0.72989,0.8107,0
0.67377,0.77975,0
0.78761,0.58177,0
0.71442,0.7668,0
0.49379,0.54226,0
0.78974,0.74233,0
0.67905,0.60921,0
0.6642,0.72519,0
0.79396,0.56789,0
0.70758,0.76022,0
0.59421,0.61857,0
0.49364,0.56224,0
0.77707,0.35025,0
0.79785,0.76921,0
0.70876,0.96764,0
0.69176,0.60865,0
0.66408,0.92075,0
0.65973,0.66666,0
0.64574,0.56845,0
0.89639,0.7085,0
0.85476,0.63167,0
0.62091,0.80424,0
0.79057,0.56108,0
0.58935,0.71582,0
0.56846,0.7406,0
0.65912,0.71548,0
0.70938,0.74041,0
0.59154,0.62927,0
0.45829,0.4641,0
0.79982,0.74847,0
0.60974,0.54757,0
0.68127,0.86985,0
0.76694,0.64736,0
0.69048,0.83058,0
0.68122,0.96541,0
0.73229,0.64245,0
0.76145,0.60138,0
0.58985,0.86955,0
0.73145,0.74516,0
0.77029,0.7014,0
0.73156,0.71782,0
0.44556,0.57991,0
0.85275,0.85987,0
0.51912,0.62359,0

The program code is as follows:

import numpy as np
# Setting the random seed, feel free to change it and see different solutions.
np.random.seed(42)

def stepFunction(t):
    if t >= 0:
        return 1
    return 0

def prediction(X, W, b):
    return stepFunction((np.matmul(X,W)+b)[0])

# Perceptron trick: receives the data X, the labels y,
# the weights W (as an array), and the bias b,
# updates W and b according to the perceptron algorithm,
# and returns the new W and b.
def perceptronStep(X, y, W, b, learn_rate = 0.01):
    for i in range(len(X)):
        y_hat = prediction(X[i], W, b)
        if y[i] - y_hat == 1:
            # Label 1 predicted as 0: move the line toward the point by adding.
            W[0] += X[i][0] * learn_rate
            W[1] += X[i][1] * learn_rate
            b += learn_rate
        elif y[i] - y_hat == -1:
            # Label 0 predicted as 1: move the line toward the point by subtracting.
            W[0] -= X[i][0] * learn_rate
            W[1] -= X[i][1] * learn_rate
            b -= learn_rate
    return W, b
    
# This function runs the perceptron algorithm repeatedly on the dataset,
# and returns a few of the boundary lines obtained in the iterations,
# for plotting purposes.
# Feel free to play with the learning rate and the num_epochs,
# and see your results plotted below.
def trainPerceptronAlgorithm(X, y, learn_rate = 0.01, num_epochs = 25):
    x_min, x_max = min(X.T[0]), max(X.T[0])
    y_min, y_max = min(X.T[1]), max(X.T[1])
    W = np.array(np.random.rand(2,1))
    b = np.random.rand(1)[0] + x_max
    # These are the solution lines that get plotted below.
    boundary_lines = []
    for i in range(num_epochs):
        # In each epoch, we apply the perceptron step.
        W, b = perceptronStep(X, y, W, b, learn_rate)
        boundary_lines.append((-W[0]/W[1], -b/W[1]))
    return boundary_lines
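To try this on the exercise data, one possible usage sketch, appended after the code above, is shown below; it assumes the rows listed earlier have been saved to a file named data.csv (the filename is only an assumption for illustration).

# Hypothetical usage: load the exercise data and run the algorithm.
data = np.loadtxt('data.csv', delimiter=',')   # columns: x1, x2, label
X = data[:, :2]
y = data[:, 2]
boundary_lines = trainPerceptronAlgorithm(X, y, learn_rate=0.01, num_epochs=25)
slope, intercept = boundary_lines[-1]          # final line: x2 = slope*x1 + intercept
print(slope, intercept)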

Experimental results:

The green dashed lines are the boundary lines after each update, and the black line is the final boundary line.

 

This post covered the classification of linearly separable data; how do we classify data that is not linearly separable? See you in the next post.

Source: https://blog.csdn.net/ting_qifengl/article/details/100577181
