
陈天奇 (Tianqi Chen) XGBoost PPT

  • Language: Others
  • File size: 1.31 MB
  • Downloads: 6
  • Views: 105
  • Published: 2021-03-13
  • Category: General programming
  • Publisher: 好学IT男
  • File format: .pdf
  • Points required: 2
 

Example Introduction

【Overview】
陈天奇xgboost PPT.pdf
Elements in Supervised Learning

  • Notation: $x_i \in \mathbb{R}^d$ denotes the i-th training example.
  • Model: how to make a prediction $\hat{y}_i$ given $x_i$.
      - Linear model: $\hat{y}_i = \sum_j w_j x_{ij}$ (includes linear and logistic regression).
      - The prediction score $\hat{y}_i$ can have different interpretations depending on the task:
          Linear regression: $\hat{y}_i$ is the predicted score.
          Logistic regression: $1/(1 + \exp(-\hat{y}_i))$ is the predicted probability of the instance being positive.
          Others: for example, in ranking $\hat{y}_i$ can be the rank score.
  • Parameters: the things we need to learn from the data.
      - Linear model: $\Theta = \{ w_j \mid j = 1, \ldots, d \}$.

Elements Continued: Objective Function

  • The objective function that is everywhere: $\mathrm{Obj}(\Theta) = L(\Theta) + \Omega(\Theta)$, where $L$ is the training loss and $\Omega$ is the regularization term.
  • Loss on training data: $L = \sum_i l(y_i, \hat{y}_i)$.
      - Square loss: $l(y_i, \hat{y}_i) = (y_i - \hat{y}_i)^2$.
      - Logistic loss: $l(y_i, \hat{y}_i) = y_i \ln(1 + e^{-\hat{y}_i}) + (1 - y_i) \ln(1 + e^{\hat{y}_i})$.
  • Regularization: how complicated is the model? (A numeric sketch of the objective follows the slide summary below.)
      - L2 norm: $\Omega(w) = \lambda \lVert w \rVert^2$.
      - L1 norm (lasso): $\Omega(w) = \lambda \lVert w \rVert_1$.

Putting Known Knowledge into Context

  • Ridge regression: $\sum_i (y_i - w^\top x_i)^2 + \lambda \lVert w \rVert^2$. Linear model, square loss, L2 regularization.
  • Lasso: $\sum_i (y_i - w^\top x_i)^2 + \lambda \lVert w \rVert_1$. Linear model, square loss, L1 regularization.
  • Logistic regression: $\sum_i \left[ y_i \ln(1 + e^{-w^\top x_i}) + (1 - y_i) \ln(1 + e^{w^\top x_i}) \right] + \lambda \lVert w \rVert^2$. Linear model, logistic loss, L2 regularization.
  • The conceptual separation between model, parameters, and objective also gives you engineering benefits: think of how you can implement SGD for both ridge regression and logistic regression (see the SGD sketch below).

Objective and Bias-Variance Trade-off

  • $\mathrm{Obj}(\Theta) = L(\Theta) + \Omega(\Theta)$.
  • Why do we want the objective to contain two components?
      - Optimizing the training loss encourages predictive models: fitting the training data well at least gets you close to the training data, which is hopefully close to the underlying distribution.
      - Optimizing the regularization term encourages simple models: simpler models tend to have smaller variance in future predictions, making the predictions stable.

Outline

  • Review of key concepts of supervised learning
  • Regression Tree and Ensemble (what are we learning?)
  • Gradient Boosting (how do we learn?)
  • Summary

Regression Tree (CART)

  • A regression tree (also known as a classification and regression tree):
      - Decision rules are the same as in a decision tree.
      - Each leaf contains one score value.
  • Input: age, gender, occupation, ...

Tree Ensemble Methods

  • Very widely used; look for GBM, random forest, and so on. Almost half of data mining competitions are won using some variant of tree ensemble methods.
  • Invariant to scaling of inputs, so you do not need to do careful feature normalization.
  • Learn higher-order interactions between features.
  • Can be scalable, and are used in industry. (A toy prediction sketch follows below.)
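To make the objective's two components concrete, here is a minimal numeric sketch, assuming NumPy; it evaluates $\mathrm{Obj}(\Theta) = L(\Theta) + \Omega(\Theta)$ for a linear model under the losses and regularizers listed above. The code is not from the slides, and the data and lambda value are made up for illustration.

```python
import numpy as np

def square_loss(y, y_hat):
    # L = sum_i (y_i - y_hat_i)^2
    return np.sum((y - y_hat) ** 2)

def logistic_loss(y, y_hat):
    # L = sum_i y_i*ln(1+e^-y_hat_i) + (1-y_i)*ln(1+e^y_hat_i)
    return np.sum(y * np.log1p(np.exp(-y_hat)) + (1 - y) * np.log1p(np.exp(y_hat)))

def l2_penalty(w, lam):
    return lam * np.sum(w ** 2)      # Omega(w) = lambda * ||w||^2

def l1_penalty(w, lam):
    return lam * np.sum(np.abs(w))   # Omega(w) = lambda * ||w||_1

X = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]])
w = np.array([0.3, -0.2])
y = np.array([1.0, 0.0, 1.0])
y_hat = X @ w  # linear model scores

# Ridge-style, logistic-style, and lasso-style objectives from the slides:
print("Obj (square + L2):  ", square_loss(y, y_hat) + l2_penalty(w, lam=0.5))
print("Obj (logistic + L2):", logistic_loss(y, y_hat) + l2_penalty(w, lam=0.5))
print("Obj (square + L1):  ", square_loss(y, y_hat) + l1_penalty(w, lam=0.5))
```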
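The engineering benefit of separating model, parameters, and objective can also be made concrete. Below is a minimal SGD sketch, again an illustrative assumption rather than code from the slides: one loop serves both ridge regression and logistic regression, because only the loss gradient differs while the linear model and the L2 regularizer are shared. All function names and the synthetic data are made up.

```python
import numpy as np

def grad_square(y, score):
    # Gradient of the square loss (y - score)^2 with respect to the score.
    return 2.0 * (score - y)

def grad_logistic(y, score):
    # Gradient of y*ln(1+e^-s) + (1-y)*ln(1+e^s), which simplifies
    # to sigmoid(score) - y.
    return 1.0 / (1.0 + np.exp(-score)) - y

def sgd(X, y, loss_grad, lam=0.1, lr=0.01, epochs=200):
    # The same loop handles both objectives: linear model + L2 penalty;
    # only `loss_grad` changes between ridge and logistic regression.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in np.random.permutation(n):
            score = X[i] @ w                    # linear model: w^T x
            g = loss_grad(y[i], score) * X[i]   # chain rule through the score
            w -= lr * (g + 2.0 * lam * w / n)   # L2 term spread over samples
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
w_ridge = sgd(X, X @ w_true + rng.normal(scale=0.1, size=200), grad_square)
w_logit = sgd(X, (X @ w_true > 0).astype(float), grad_logistic)
print("ridge:   ", w_ridge)
print("logistic:", w_logit)
```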
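Finally, the CART and ensemble slides describe trees that store one score per leaf, with the ensemble prediction being the sum of those scores across trees. The toy sketch below shows that structure; the Node layout, split features, and leaf scores are hypothetical, chosen only to demonstrate the idea.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None   # internal node: feature to test (None for a leaf)
    threshold: float = 0.0          # go left when x[feature] < threshold
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    score: float = 0.0              # the single score stored in a leaf

def predict_tree(node: Node, x: dict) -> float:
    # Follow decision rules (as in a decision tree) down to a leaf.
    while node.feature is not None:
        node = node.left if x[node.feature] < node.threshold else node.right
    return node.score

def predict_ensemble(trees: list, x: dict) -> float:
    # Tree ensemble prediction: sum the leaf scores over all trees.
    return sum(predict_tree(t, x) for t in trees)

# Hypothetical trees over "age"-style inputs like those mentioned in the slides.
tree1 = Node("age", 15.0, left=Node(score=2.0), right=Node(score=-1.0))
tree2 = Node("uses_computer_daily", 0.5, left=Node(score=-0.9), right=Node(score=0.9))
print(predict_ensemble([tree1, tree2], {"age": 10, "uses_computer_daily": 1}))  # 2.9
```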

