Neural Networks: From Principles to Implementation
2018-07-25


1. Introduction
In machine learning and cognitive science, an artificial neural network (ANN), usually just called a neural network (NN), is a mathematical or computational model inspired by the structure and function of biological neural networks (the central nervous system of animals, in particular the brain), and is used to estimate or approximate functions. A neural network computes with a large number of interconnected artificial neurons. In most cases it can change its internal structure in response to external information, which makes it an adaptive system. Modern neural networks are nonlinear statistical data-modeling tools. A typical neural network has the following three components:
    Architecture: the architecture specifies the variables in the network and their topological relationships. For example, the variables can be the weights of the connections between neurons and the activities of the neurons.
    Activity Rule: most neural network models have a short-timescale dynamical rule that defines how a neuron changes its activity in response to the activities of other neurons. The activity rule generally depends on the weights in the network (i.e., the network's parameters).
    Learning Rule: the learning rule specifies how the weights in the network are adjusted over time, and is usually regarded as a long-timescale dynamical rule. In general it depends on the activities of the neurons; it may also depend on target values supplied by a supervisor and on the current values of the weights.
2. A First Look at Neural Networks

As noted above, a neural network has three main components: an architecture, an activity (activation) rule, and a learning rule. Figure 1 shows a three-layer network with d nodes in the input layer, q nodes in the hidden layer, and l nodes in the output layer. Apart from the input layer, the nodes in every layer apply a nonlinear transformation.


Figure 1

Why apply a nonlinear transformation at all?

(1) If only linear transformations are applied, a multi-layer network still behaves like a single layer, e.g. 0.6*(0.2x1+0.3x2) = 0.12x1+0.18x2 (a small numerical check of this collapse is sketched after Figure 2).
(2) With nonlinear transformations, a neural network can approximate essentially arbitrary functions; Figure 2 shows a four-layer network.


Figure 2
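Point (1) above can be checked numerically: composing two linear layers yields one equivalent linear layer. A minimal sketch, where the matrices W1, W2 and the input x are arbitrary illustrative values (not taken from the article):

import numpy as np

W1 = np.array([[0.2, 0.3]])       # first "linear layer": 0.2*x1 + 0.3*x2
W2 = np.array([[0.6]])            # second "linear layer": multiply by 0.6
x = np.array([[1.5], [2.0]])      # arbitrary input (x1, x2)

two_layers = W2.dot(W1.dot(x))    # pass x through both layers, no activation in between
one_layer = W2.dot(W1).dot(x)     # the single collapsed layer: 0.12*x1 + 0.18*x2
print(np.allclose(two_layers, one_layer))   # True: two linear layers act as one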

The way each individual neuron works can be described mathematically as follows (a small numeric example follows the list):

(1) Take the input x.
(2) Compute z = w*x (the full code below also adds a bias term b, i.e. z = w*x + b).
(3) Output new_x = f(z), where f is a function such as sigmoid, tanh, or relu; f is the activation function mentioned above.
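A minimal sketch of steps (1)-(3) for one neuron; the values of x, w and b below are made up for illustration, and the bias b follows the full code rather than the shorter formula above:

import numpy as np

def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))

x = np.array([0.5, -1.2])     # (1) input to the neuron
w = np.array([0.4, 0.7])      # connection weights (illustrative)
b = 0.1                       # bias (illustrative)
z = np.dot(w, x) + b          # (2) weighted sum
new_x = sigmoid(z)            # (3) activation passed on to the next layer
print(z, new_x)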

3. The Backpropagation (BP) Algorithm

Given the network structure and activation function above, how does the network learn its parameters (the learning rule)?

First, let us define the activation function and the objective function used in this article.

(1) Activation function (sigmoid):

import numpy as np

def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))
The sigmoid function has a very useful property: sigmoid'(z) = sigmoid(z)*(1 - sigmoid(z)), so its derivative is easy to compute.

def sigmoid_prime(z):
    return sigmoid(z)*(1-sigmoid(z))

A short proof: writing sigmoid(z) = 1/(1+e^(-z)), differentiation gives sigmoid'(z) = e^(-z)/(1+e^(-z))^2 = [1/(1+e^(-z))] * [e^(-z)/(1+e^(-z))] = sigmoid(z)*(1 - sigmoid(z)).

(2) Objective function (sum of squared errors): C = (1/2) * sum_j (y_j - a_j)^2, where a is the network's output; the factor 1/2 is there only to make the derivative cleaner.
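For a single sample the cost and its derivative take only a couple of lines; a small sketch consistent with the cost_derivative method in the full code below (the vectors a and y are illustrative):

import numpy as np

a = np.array([0.8, 0.3])            # network output (illustrative)
y = np.array([1.0, 0.0])            # target (illustrative)
cost = 0.5 * np.sum((a - y)**2)     # C = 1/2 * sum_j (a_j - y_j)^2
dC_da = a - y                       # the 1/2 cancels the factor 2 from differentiation
print(cost, dC_da)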

Next, how does the network actually operate?

(1) Forward propagation: data flows from the input layer to the output layer through a sequence of nonlinear transformations.

def feedforward(self, a):
    for b, w in zip(self.biases, self.weights):
        a = sigmoid(np.dot(w, a)+b)
    return a
Here the initial weights (w) and biases (b) are assigned randomly:

biases = [np.random.randn(y, 1) for y in sizes[1:]]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
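For example, with sizes = [2, 3, 1] (an assumed toy configuration), this initialization gives biases of shapes (3, 1) and (1, 1) and weight matrices of shapes (3, 2) and (1, 3), so that np.dot(w, a) + b maps each layer's column vector to the next:

import numpy as np

sizes = [2, 3, 1]
biases = [np.random.randn(y, 1) for y in sizes[1:]]
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
print([b.shape for b in biases])    # [(3, 1), (1, 1)]
print([w.shape for w in weights])   # [(3, 2), (1, 3)]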
(2) Parameter updates, i.e. backpropagation.

Before writing the code, let us derive the update rules, i.e. how the parameters are updated by gradient descent, using the network structure above (Figure 1) as an example.

(1) Updating the parameters between the output layer and the hidden layer

(2) Updating the parameters between the hidden layer and the input layer

Two points are worth emphasizing:


  1. The result in (2) contains an extra summation compared with the result in (1), because when computing the parameters between the hidden layer and the input layer, every node of the output layer contributes.
  2. The update in (2) can reuse the result already computed in (1); this reuse is exactly what the name "backpropagation" suggests. Both results are written out in the sketch below.
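In notation chosen here to match the backprop code that follows (it is not the article's own notation): sigma is the sigmoid, a^(k) and z^(k) are the activations and weighted inputs of layer k, L is the output layer, and \odot is element-wise multiplication. A sketch of the gradients the code computes:

% (1) output layer <-> hidden layer
\delta^{(L)} = (a^{(L)} - y) \odot \sigma'(z^{(L)}), \qquad
\frac{\partial C}{\partial b^{(L)}} = \delta^{(L)}, \qquad
\frac{\partial C}{\partial W^{(L)}} = \delta^{(L)} \, (a^{(L-1)})^{\top}

% (2) hidden layer <-> input layer: the matrix product sums over all output-layer
%     nodes and reuses \delta^{(L)} from (1)
\delta^{(L-1)} = \big( (W^{(L)})^{\top} \delta^{(L)} \big) \odot \sigma'(z^{(L-1)}), \qquad
\frac{\partial C}{\partial W^{(L-1)}} = \delta^{(L-1)} \, (a^{(L-2)})^{\top}

Each parameter is then moved a small step against its gradient, W <- W - eta * dC/dW and b <- b - eta * dC/db, where eta is the learning rate used in the code.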
def backprop(self, x, y):
    """返回一個(gè)元組(nabla_b, nabla_w)代表目標(biāo)函數(shù)的梯度."""
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    # feedforward
    activation = x
    activations = [x] # list to store all the activations, layer by layer
    zs = [] # list to store all the z vectors, layer by layer
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation)+b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # backward pass
    delta = self.cost_derivative(activations[-1], y) * \
        sigmoid_prime(zs[-1])
    nabla_b[-1] = delta
    nabla_w[-1] = np.dot(delta, activations[-2].transpose())
    """l = 1 表示最后一層神經(jīng)元,l = 2 是倒數(shù)第二層神經(jīng)元, 依此類推."""
    for l in xrange(2, self.num_layers):
        z = zs[-l]
        sp = sigmoid_prime(z)
        delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
        nabla_b[-l] = delta
        nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
    return (nabla_b, nabla_w)
4. Complete Implementation

# -*- coding: utf-8 -*-

import random
import numpy as np

class Network(object):

    def __init__(self, sizes):
    """參數(shù)sizes表示每一層神經(jīng)元的個(gè)數(shù),如[2,3,1],表示第一層有2個(gè)神經(jīng)元,第二層有3個(gè)神經(jīng)元,第三層有1個(gè)神經(jīng)元."""
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        """前向傳播"""
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a)+b)
        return a

    def SGD(self, training_data, epochs, mini_batch_size, eta,
            test_data=None):
        """隨機(jī)梯度下降"""
        if test_data:
            n_test = len(test_data)
        n = len(training_data)
        for j in range(epochs):
            random.shuffle(training_data)
            mini_batches = [
                training_data[k:k+mini_batch_size]
                for k in range(0, n, mini_batch_size)]
            for mini_batch in mini_batches:
                self.update_mini_batch(mini_batch, eta)
            if test_data:
                print "Epoch {0}: {1} / {2}".format(j, self.evaluate(test_data), n_test)
            else:
                print "Epoch {0} complete".format(j)

    def update_mini_batch(self, mini_batch, eta):
        """使用后向傳播算法進(jìn)行參數(shù)更新.mini_batch是一個(gè)元組(x, y)的列表、eta是學(xué)習(xí)速率"""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]

    def backprop(self, x, y):
        """返回一個(gè)元組(nabla_b, nabla_w)代表目標(biāo)函數(shù)的梯度."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # forward pass
        activation = x
        activations = [x] # list to store all the activations, layer by layer
        zs = [] # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation)+b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        delta = self.cost_derivative(activations[-1], y) * sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        """l = 1 表示最后一層神經(jīng)元,l = 2 是倒數(shù)第二層神經(jīng)元, 依此類推."""
        for l in xrange(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)

    def evaluate(self, test_data):
        """返回分類正確的個(gè)數(shù)"""
        test_results = [(np.argmax(self.feedforward(x)), y) for (x, y) in test_data]
        return sum(int(x == y) for (x, y) in test_results)

    def cost_derivative(self, output_activations, y):
        """Derivative of the quadratic cost with respect to the output activations."""
        return (output_activations-y)

def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    # needed by backprop; same definition as given in Section 3
    return sigmoid(z)*(1-sigmoid(z))
5. A Simple Application

# -*- coding: utf-8 -*-

from network import *

def vectorized_result(j,nclass):
    """離散數(shù)據(jù)進(jìn)行one-hot"""
    e = np.zeros((nclass, 1))
    e[j] = 1.0
    return e

def get_format_data(X,y,isTest):
    ndim = X.shape[1]
    nclass = len(np.unique(y))
    inputs = [np.reshape(x, (ndim, 1)) for x in X]
    if not isTest:
        results = [vectorized_result(label, nclass) for label in y]
    else:
        results = y
    data = list(zip(inputs, results))
    return data

# Generate a synthetic two-class dataset
from sklearn.datasets import *
np.random.seed(0)
X, y = make_moons(200, noise=0.20)
ndim = X.shape[1]
nclass = len(np.unique(y))

# Split into training and test sets
from sklearn.model_selection import train_test_split
train_x,test_x,train_y,test_y = train_test_split(X,y,test_size=0.2,random_state=0)

training_data = get_format_data(train_x,train_y,False)
test_data = get_format_data(test_x,test_y,True)

net = Network(sizes=[ndim,10,nclass])
net.SGD(training_data=training_data,epochs=5,mini_batch_size=10,eta=0.1,test_data=test_data)
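Once trained, individual predictions can be obtained directly from feedforward; a minimal usage sketch, taking the argmax over the output layer as the predicted class (reusing test_x, test_y and ndim defined above):

sample = np.reshape(test_x[0], (ndim, 1))   # format one test point as a column vector
pred = np.argmax(net.feedforward(sample))   # index of the largest output activation
print(pred, test_y[0])                      # predicted class vs. true label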
