
Explaining Basic Machine Learning Concepts with Ten Pictures
When explaining basic machine learning concepts, I find myself coming back to the same few pictures. Below is the list I find most illuminating.
1. Test and training error: why a lower training error is not always a good thing. ESL Figure 2.11: test and training error as a function of model complexity.
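A minimal sketch of the idea in Python (the sine-plus-noise data generator, sample sizes, and the set of polynomial degrees are illustrative assumptions, not taken from ESL): training error keeps falling as the polynomial degree grows, while test error eventually rises again.

import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # assumed toy data generator: noisy samples of sin(2*pi*x)
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = make_data(30)
x_test, y_test = make_data(200)

for degree in [1, 3, 5, 9, 15]:
    coefs = np.polyfit(x_train, y_train, degree)          # model complexity = polynomial degree
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")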
2. Under- and overfitting: examples of underfitting and overfitting. PRML Figure 1.4: plots of polynomials of various orders M, shown as red curves, fitted to a data set generated from the green curve.
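A small Python sketch in the spirit of that figure (the ten-point sample, the noise level, and the orders M are assumptions): low-order polynomials underfit the green curve sin(2πx), while an order-9 polynomial fits the ten noisy points exactly and strays far from it.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)      # noisy targets around the green curve
x_grid = np.linspace(0, 1, 200)
true_grid = np.sin(2 * np.pi * x_grid)

for M in [0, 1, 3, 9]:
    w = np.polyfit(x, t, M)                                  # order-M polynomial (red curve)
    rmse_data = np.sqrt(np.mean((np.polyval(w, x) - t) ** 2))
    rmse_true = np.sqrt(np.mean((np.polyval(w, x_grid) - true_grid) ** 2))
    print(f"M={M}: RMSE on the 10 samples {rmse_data:.3f}, RMSE against sin(2*pi*x) {rmse_true:.3f}")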
3. Occam’s razor
ITILA Figure 28.3: why Bayesian inference embodies Occam's razor. This figure gives the basic intuition for why complex models can turn out to be less probable. The horizontal axis represents the space of possible data sets D. Bayes' theorem rewards models in proportion to how much they predicted the data that actually occurred; these predictions are quantified by a normalized probability distribution over D. The probability of the data given model Hi, P(D|Hi), is called the evidence for Hi. A simple model H1 makes only a limited range of predictions, shown by P(D|H1); a more powerful model H2, which has, for example, more free parameters than H1, can predict a greater variety of data sets. This also means, however, that H2 does not predict the data sets in region C1 as strongly as H1 does. If equal prior probabilities are assigned to the two models and the data set falls in region C1, the less powerful model H1 will be the more probable model.
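A tiny worked example of the same effect, using a toy setup of my own rather than MacKay's figure: H1 fixes a coin's bias at 0.5 (no free parameters), while H2 puts a uniform prior on the bias (one free parameter). H2 spreads its predictive probability over many possible data sets, so when the observed flips look roughly fair, the simpler model has the larger evidence.

from math import lgamma, log

def log_evidence_fair(n, h):
    # H1: bias fixed at 0.5, so P(D|H1) = 0.5^n for any particular sequence
    return n * log(0.5)

def log_evidence_flexible(n, h):
    # H2: bias ~ Uniform(0, 1); integrating it out gives P(D|H2) = Beta(h+1, n-h+1)
    return lgamma(h + 1) + lgamma(n - h + 1) - lgamma(n + 2)

for n, h in [(20, 10), (20, 13), (20, 18)]:
    e1, e2 = log_evidence_fair(n, h), log_evidence_flexible(n, h)
    winner = "H1 (simple)" if e1 > e2 else "H2 (flexible)"
    print(f"{h}/{n} heads: log P(D|H1) = {e1:.2f}, log P(D|H2) = {e2:.2f} -> {winner}")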
4. Feature combinations:
(1) Why collectively relevant features may look individually irrelevant, and also (2) why linear methods may fail. From Isabelle Guyon's feature extraction slides.
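A minimal XOR-style sketch of both points in Python (this toy data set is an assumption; Guyon's slides use different examples): each feature on its own is essentially uncorrelated with the label, the two together determine it, and a linear classifier fails where a non-linear one succeeds.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)    # label is the XOR of the two signs

# Each feature looks individually irrelevant ...
print("corr(x1, y):", round(np.corrcoef(X[:, 0], y)[0, 1], 3))
print("corr(x2, y):", round(np.corrcoef(X[:, 1], y)[0, 1], 3))

# ... and a linear method fails, while a non-linear one recovers the combination.
print("logistic regression accuracy:", LogisticRegression().fit(X, y).score(X, y))
print("RBF-SVM accuracy:            ", SVC(kernel="rbf").fit(X, y).score(X, y))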
5. Irrelevant features:
Why irrelevant features hurt kNN, clustering, and other similarity-based methods. The figure on the left shows two classes well separated on the vertical axis; the figure on the right adds an irrelevant horizontal axis, which destroys the grouping and makes many points nearest neighbors of the opposite class.
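A short Python sketch of the same effect (the class separation, noise scale, and number of added dimensions are assumptions): one informative axis separates the classes well, and cross-validated kNN accuracy drops as irrelevant axes are appended and start to dominate the distances.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
y = np.repeat([0, 1], n // 2)
informative = (np.where(y == 0, -1.0, 1.0) + rng.normal(0, 0.4, n)).reshape(-1, 1)

for n_noise in [0, 1, 5, 20]:
    noise = rng.normal(0, 3.0, (n, n_noise))       # irrelevant features on a larger scale
    X = np.hstack([informative, noise])
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
    print(f"{n_noise:2d} irrelevant features -> kNN accuracy {acc:.2f}")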
6. Basis functions
How non-linear basis functions turn a low-dimensional classification problem without a linear boundary into a high-dimensional problem with a linear boundary. From Andrew Moore's SVM (Support Vector Machine) tutorial slides: a one-dimensional non-linear classification problem with input x is turned into a two-dimensional problem z = (x, x^2) that is linearly separable.
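A Python sketch of that x -> (x, x^2) lifting (the threshold and data are assumptions): the rule "class 1 iff |x| > 1" has no single linear boundary in x, but becomes linearly separable once x^2 is added as a second coordinate.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 300)
y = (np.abs(x) > 1).astype(int)           # two outer segments vs. one inner segment

X_orig = x.reshape(-1, 1)                 # original 1-D input
X_lift = np.column_stack([x, x ** 2])     # lifted input z = (x, x^2)

print("linear accuracy in x:       ", LinearSVC().fit(X_orig, y).score(X_orig, y))
print("linear accuracy in (x, x^2):", LinearSVC().fit(X_lift, y).score(X_lift, y))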
7. Discriminative vs. Generative:
Why discriminative learning may be easier than generative learning: PRML Figure 1.27. Example of the class-conditional densities for two classes with a single input variable x (left plot), together with the corresponding posterior probabilities (right plot). Note that the left-hand mode of the class-conditional density p(x|C1), shown in blue in the left plot, has no effect on the posterior probabilities. The vertical green line in the right plot shows the decision boundary in x that gives the minimum misclassification rate.
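A numeric sketch of the point the figure makes, with mixture densities I made up rather than the ones in PRML: the left-hand mode of p(x|C1) can be moved around without changing the posterior near the boundary or the decision boundary itself, so a generative model spends effort on structure that the final classification never uses.

import numpy as np

def gauss(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10, 6, 4001)
p_x_c2 = gauss(x, 2.0)                                    # p(x|C2)

for left_mode in [-4.0, -7.0]:                            # move the "irrelevant" left-hand mode
    p_x_c1 = 0.6 * gauss(x, 0.0) + 0.4 * gauss(x, left_mode)
    posterior_c1 = p_x_c1 / (p_x_c1 + p_x_c2)             # equal priors assumed
    boundary = x[np.argmin(np.abs(posterior_c1 - 0.5))]   # where P(C1|x) crosses 0.5
    print(f"left-hand mode at {left_mode}: decision boundary at x ~ {boundary:.2f}")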
8. Loss functions:
Learning algorithms can be viewed as optimizing different loss functions: PRML Figure 7.5. Plot of the 'hinge' error function used in support vector machines, shown in blue, along with the error function for logistic regression, rescaled by a factor of 1/ln(2) so that it passes through the point (0, 1), shown in red. Also shown are the misclassification error in black and the squared error in green.
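A short sketch that evaluates those four error functions on a few margin values z = t·y(x), with the 1/ln(2) rescaling described above (the grid of margin values is mine):

import numpy as np

def losses(z):
    hinge = np.maximum(0.0, 1.0 - z)                 # SVM 'hinge' error
    logistic = np.log1p(np.exp(-z)) / np.log(2.0)    # logistic error, rescaled to pass through (0, 1)
    misclass = (z <= 0).astype(float)                # misclassification error (z = 0 counted as an error)
    squared = (1.0 - z) ** 2                         # squared error
    return hinge, logistic, misclass, squared

for z in [-2.0, -0.5, 0.0, 0.5, 1.0, 2.0]:
    h, l, m, s = losses(np.float64(z))
    print(f"z = {z:+.1f}: hinge {h:.2f}, logistic {l:.2f}, 0/1 {m:.0f}, squared {s:.2f}")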
9. Geometry of least squares:
ESL Figure 3.2: the N-dimensional geometry of least squares regression with two predictors. The outcome vector y is orthogonally projected onto the hyperplane spanned by the input vectors x1 and x2. The projection ŷ represents the vector of the least squares predictions.
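A minimal numerical check of that geometry (the simulated data are an assumption): the fitted vector ŷ is the orthogonal projection of y onto the plane spanned by x1 and x2, so the residual y − ŷ is orthogonal to both predictors.

import numpy as np

rng = np.random.default_rng(0)
N = 100
X = rng.normal(size=(N, 2))                       # two predictors x1, x2 as columns
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=N)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # least squares coefficients
y_hat = X @ beta                                  # projection of y onto span{x1, x2}

print("X^T (y - y_hat):", X.T @ (y - y_hat))      # ~ [0, 0] up to floating point error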
10. Sparsity:
Why the lasso (L1 regularization, or a Laplacian prior) gives sparse solutions (i.e. weight vectors with more zeros): ESL Figure 3.11. Estimation picture for the lasso (left) and ridge regression (right). Shown are contours of the error and constraint functions. The solid blue areas are the constraint regions |β1| + |β2| ≤ t and β1^2 + β2^2 ≤ t^2, respectively, while the red ellipses are the contours of the least squares error function.
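A quick sketch of that sparsity in practice (the simulated design and the penalty strengths are assumptions): with the L1 penalty many coefficients come out exactly zero, while the L2 penalty shrinks them without zeroing them.

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]                  # only 3 of the 20 features matter
y = X @ beta_true + rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)                # L1 penalty (Laplacian prior)
ridge = Ridge(alpha=10.0).fit(X, y)               # L2 penalty (Gaussian prior)

print("exact zeros in lasso coefficients:", int(np.sum(lasso.coef_ == 0)), "of", p)
print("exact zeros in ridge coefficients:", int(np.sum(ridge.coef_ == 0)), "of", p)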