
Explaining the Basic Concepts of Machine Learning with Ten Pictures
When explaining basic machine learning concepts, I find myself coming back to the same few pictures. Below is the list I find most illuminating.
1. Test and training error: Why lower training error is not always a good thing: ESL Figure 2.11. Test and training error as a function of model complexity.
2. Under and overfitting: Examples of underfitting and overfitting. PRML Figure 1.4. Plots of polynomials having various orders M, shown as red curves, fitted to the data set generated by the green curve.
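Both figures can be reproduced numerically in a few lines. Below is a minimal sketch assuming a noisy sinusoidal target and numpy's polyfit; the sample sizes, noise level, and the particular orders M are illustrative choices, not the ones used in ESL or PRML. Training error keeps falling as the order M grows, while test error eventually turns back up.

    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy samples from a smooth target (the "green curve" in PRML Figure 1.4).
    def target(x):
        return np.sin(2 * np.pi * x)

    x_train = rng.uniform(0, 1, 15)
    y_train = target(x_train) + rng.normal(0, 0.2, x_train.size)
    x_test = rng.uniform(0, 1, 200)
    y_test = target(x_test) + rng.normal(0, 0.2, x_test.size)

    # Sweep model complexity (polynomial order M) and record both errors.
    for M in [0, 1, 3, 9]:
        coeffs = np.polyfit(x_train, y_train, deg=M)
        train_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
        test_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
        print(f"M={M}: train RMSE {train_rmse:.3f}, test RMSE {test_rmse:.3f}")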
3. Occam's razor:
ITILA Figure 28.3. Why Bayesian inference embodies Occam's razor. This figure gives the basic intuition for why complex models can turn out to be less probable. The horizontal axis represents the space of possible data sets D. Bayes' theorem rewards models in proportion to how much they predicted the data that occurred. These predictions are quantified by a normalized probability distribution on D. The probability of the data given model Hi, P(D|Hi), is called the evidence for Hi. A simple model H1 makes only a limited range of predictions, shown by P(D|H1); a more powerful model H2, which has, for example, more free parameters than H1, is able to predict a greater variety of data sets. This means, however, that H2 does not predict the data sets in region C1 as strongly as H1. Suppose that equal prior probabilities have been assigned to the two models. Then, if the data set falls in region C1, the less powerful model H1 will be the more probable model.
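The argument can be checked with a toy calculation. The sketch below assumes a made-up discrete space of eight possible data sets, a simple model H1 that predicts only two of them, and a flexible model H2 that spreads its probability over all eight; the numbers are purely illustrative and are not taken from ITILA.

    import numpy as np

    # Toy data space with 8 possible data sets; the evidence P(D|H) of each model
    # is a normalized distribution over this space.
    evidence_h1 = np.array([0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # simple H1
    evidence_h2 = np.full(8, 1 / 8)                                   # flexible H2

    observed = 1                 # the data set that actually occurred (region C1)
    prior_h1 = prior_h2 = 0.5    # equal prior probabilities for the two models

    # Posterior model probabilities via Bayes' theorem.
    unnorm = np.array([evidence_h1[observed] * prior_h1,
                       evidence_h2[observed] * prior_h2])
    post = unnorm / unnorm.sum()
    print(f"P(H1|D) = {post[0]:.2f}, P(H2|D) = {post[1]:.2f}")  # the simpler model wins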
4. Feature combinations:
(1) Why collectively relevant features may look individually irrelevant, and also (2) why linear methods may fail. From Isabelle Guyon's feature extraction slides.
5. Irrelevant features:
Why irrelevant features hurt kNN, clustering, and other similarity-based methods. The figure on the left shows two classes well separated on the vertical axis. The figure on the right adds an irrelevant horizontal axis which destroys the grouping and makes many points nearest neighbors of the opposite class.
6. Basis functions:
How non-linear basis functions turn a low-dimensional classification problem without a linear boundary into a high-dimensional problem with a linear boundary. From the SVM tutorial slides by Andrew Moore: a one-dimensional non-linear classification problem with input x is turned into a 2-D problem z = (x, x^2) that is linearly separable.
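Andrew Moore's one-dimensional example can be written out directly. The sketch below assumes the positive class sits in an interval of x, so no single threshold on x separates it, and checks that a fixed linear boundary in z = (x, x^2) does; the interval and the hyperplane are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # 1-D problem: class 1 if |x| < 1, class 0 otherwise. No single linear
    # threshold on x separates the two classes.
    x = rng.uniform(-3, 3, 500)
    y = (np.abs(x) < 1).astype(int)

    # Map to the 2-D feature z = (x, x^2). Class 1 is now the half-plane x^2 < 1,
    # so a linear boundary separates the classes.
    z = np.column_stack([x, x ** 2])
    w, b = np.array([0.0, -1.0]), 1.0      # hyperplane 1 - x^2 > 0, i.e. x^2 < 1
    pred = (z @ w + b > 0).astype(int)
    print("linearly separable in z:", bool(np.all(pred == y)))  # True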
7. Discriminative vs. Generative:
Why discriminative learning may be easier than generative: PRML Figure 1.27. Example of the class-conditional densities for two classes having a single input variable x (left plot), together with the corresponding posterior probabilities (right plot). Note that the left-hand mode of the class-conditional density p(x|C1), shown in blue on the left plot, has no effect on the posterior probabilities. The vertical green line in the right plot shows the decision boundary in x that gives the minimum misclassification rate.
8. Loss functions:
Learning algorithms can be viewed as optimizing different loss functions: PRML Figure 7.5. Plot of the 'hinge' error function used in support vector machines, shown in blue, along with the error function for logistic regression, rescaled by a factor of 1/ln(2) so that it passes through the point (0, 1), shown in red. Also shown are the misclassification error in black and the squared error in green.
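The four curves are simple enough to write down as functions of the margin z = y*f(x), with targets y in {-1, +1}. The sketch below prints a few sample values; plotting the functions over z gives the curves in the figure.

    import numpy as np

    def hinge(z):
        return np.maximum(0.0, 1.0 - z)                  # SVM 'hinge' error

    def logistic(z):
        return np.log(1.0 + np.exp(-z)) / np.log(2.0)    # rescaled so logistic(0) == 1

    def misclassification(z):
        return (z <= 0).astype(float)                    # 0-1 error

    def squared(z):
        return (1.0 - z) ** 2                            # squared error

    z = np.linspace(-2, 2, 5)
    for name, f in [("hinge", hinge), ("logistic", logistic),
                    ("misclassification", misclassification), ("squared", squared)]:
        print(name, np.round(f(z), 3))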
9. Geometry of least squares:
ESL Figure 3.2. The N-dimensional geometry of least squares regression with two predictors. The outcome vector y is orthogonally projected onto the hyperplane spanned by the input vectors x1 and x2. The projection ŷ represents the vector of the least squares predictions.
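The picture corresponds to a two-line computation: the fitted vector ŷ is the orthogonal projection of y onto the space spanned by the predictors, so the residual y - ŷ is perpendicular to every input vector. The sketch below checks this on random, made-up data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two predictors spanning a plane inside R^N (here N = 20).
    N = 20
    X = rng.normal(size=(N, 2))
    y = rng.normal(size=N)

    # Least squares: y_hat is the orthogonal projection of y onto span{x1, x2}.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ beta

    # The residual is orthogonal to both input vectors (up to rounding error).
    print(np.round(X.T @ (y - y_hat), 12))   # ~ [0. 0.]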
10. Sparsity:
Why Lasso (L1 regularization or a Laplacian prior) gives sparse solutions (i.e. weight vectors with more zeros): ESL Figure 3.11. Estimation picture for the lasso (left) and ridge regression (right). Shown are contours of the error and constraint functions. The solid blue areas are the constraint regions |β1| + |β2| ≤ t and β1² + β2² ≤ t², respectively, while the red ellipses are the contours of the least squares error function.