
[PDF] Machine Learning: A Probabilistic Perspective (MLAPP), by Kevin Murphy

Download this item
  • Language: Others
  • File size: 25.69 MB
  • Downloads: 10
  • Views: 1034
  • Posted: 2020-07-24
  • Category: General programming
  • Posted by: robot666
  • File format: .pdf
  • Points required: 2
 

Description

[Summary]
Complete edition, with the table of contents included; an essential machine learning classic, and a hefty tome that takes real effort to work through. Machine Learning: A Probabilistic Perspective.
Machine Learning: A Probabilistic Perspective
Kevin P. Murphy

The MIT Press, Cambridge, Massachusetts; London, England. © 2012 Massachusetts Institute of Technology. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. For information about special quantity discounts, please email special_sales@mitpress.mit.edu. This book was set in LaTeX by the author. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Information: Murphy, Kevin P. Machine learning: a probabilistic perspective / Kevin P. Murphy. p. cm. (Adaptive computation and machine learning series.) Includes bibliographical references and index. ISBN 978-0-262-01802-9 (hardcover: alk. paper). 1. Machine learning. 2. Probabilities. I. Title. Q325.5.M87 2012 006.31 dc23 2012004558.

This book is dedicated to Alessandro, Michael and Stefano, and to the memory of Gerard Joseph Murphy.

Contents (excerpt, Chapters 1–4)

Preface
1 Introduction
  1.1 Machine learning: what and why?
    1.1.1 Types of machine learning
  1.2 Supervised learning
    1.2.1 Classification
    1.2.2 Regression
  1.3 Unsupervised learning
    1.3.1 Discovering clusters
    1.3.2 Discovering latent factors
    1.3.3 Discovering graph structure
    1.3.4 Matrix completion
  1.4 Some basic concepts in machine learning
    1.4.1 Parametric vs non-parametric models
    1.4.2 A simple non-parametric classifier: K-nearest neighbors
    1.4.3 The curse of dimensionality
    1.4.4 Parametric models for classification and regression
    1.4.5 Linear regression
    1.4.6 Logistic regression
    1.4.7 Overfitting
    1.4.8 Model selection
    1.4.9 No free lunch theorem
2 Probability
  2.1 Introduction
  2.2 A brief review of probability theory
    2.2.1 Discrete random variables
    2.2.2 Fundamental rules
    2.2.3 Bayes rule
    2.2.4 Independence and conditional independence
    2.2.5 Continuous random variables
    2.2.6 Quantiles
    2.2.7 Mean and variance
  2.3 Some common discrete distributions
    2.3.1 The binomial and Bernoulli distributions
    2.3.2 The multinomial and multinoulli distributions
    2.3.3 The Poisson distribution
    2.3.4 The empirical distribution
  2.4 Some common continuous distributions
    2.4.1 Gaussian (normal) distribution
    2.4.2 Degenerate pdf
    2.4.3 The Laplace distribution
    2.4.4 The gamma distribution
    2.4.5 The beta distribution
    2.4.6 Pareto distribution
  2.5 Joint probability distributions
    2.5.1 Covariance and correlation
    2.5.2 The multivariate Gaussian
    2.5.3 Multivariate Student t distribution
    2.5.4 Dirichlet distribution
  2.6 Transformations of random variables
    2.6.1 Linear transformations
    2.6.2 General transformations
    2.6.3 Central limit theorem
  2.7 Monte Carlo approximation
    2.7.1 Example: change of variables, the MC way
    2.7.2 Example: estimating π by Monte Carlo integration
    2.7.3 Accuracy of Monte Carlo approximation
  2.8 Information theory
    2.8.1 Entropy
    2.8.2 KL divergence
    2.8.3 Mutual information
3 Generative models for discrete data
  3.1 Introduction
  3.2 Bayesian concept learning
    3.2.1 Likelihood
    3.2.2 Prior
    3.2.3 Posterior
    3.2.4 Posterior predictive distribution
    3.2.5 A more complex prior
  3.3 The beta-binomial model
    3.3.1 Likelihood
    3.3.2 Prior
    3.3.3 Posterior
    3.3.4 Posterior predictive distribution
  3.4 The Dirichlet-multinomial model
    3.4.1 Likelihood
    3.4.2 Prior
    3.4.3 Posterior
    3.4.4 Posterior predictive
  3.5 Naive Bayes classifiers
    3.5.1 Model fitting
    3.5.2 Using the model for prediction
    3.5.3 The log-sum-exp trick
    3.5.4 Feature selection using mutual information
    3.5.5 Classifying documents using bag of words
4 Gaussian models
  4.1 Introduction
    4.1.1 Notation
    4.1.2 Basics
    4.1.3 MLE for an MVN
    4.1.4 Maximum entropy derivation of the Gaussian
  4.2 Gaussian discriminant analysis
    4.2.1 Quadratic discriminant analysis (QDA)
    4.2.2 Linear discriminant analysis (LDA)
    4.2.3 Two-class LDA
    4.2.4 MLE for discriminant analysis
    4.2.5 Strategies for preventing overfitting
    4.2.6 Regularized LDA*
    4.2.7 Diagonal LDA
    4.2.8 Nearest shrunken centroids classifier
  4.3 Inference in jointly Gaussian distributions
    4.3.1 Statement of the result
    4.3.2 Examples
    4.3.3 Information form
    4.3.4 Proof of the result
  4.4 Linear Gaussian systems
    4.4.1 Statement of the result
    4.4.2 Examples
    4.4.3 Proof of the result
  4.5 Digression: The Wishart distribution
    4.5.1 Inverse Wishart distribution
    4.5.2 Visualizing the Wishart distribution*
  4.6 Inferring the parameters of an MVN
    4.6.1 Posterior distribution of μ
    4.6.2 Posterior distribution of Σ
    4.6.3 Posterior distribution of μ and Σ*
    4.6.4 Sensor fusion with unknown precisions
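The contents above only name the book's techniques; as a small taste of the material, here is a minimal sketch of the Monte Carlo estimate of π that Section 2.7.2 refers to. This is illustrative code, not code from the book (which uses MATLAB); Python and the function name estimate_pi are my own choices. The idea: sample points uniformly in the unit square and multiply the fraction falling inside the quarter circle by 4.

```python
import random


def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of pi (cf. MLAPP Section 2.7.2).

    Illustrative sketch only, not from the book: draw points uniformly
    from the unit square; the fraction landing inside the quarter
    circle x^2 + y^2 <= 1 approximates pi / 4.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples


if __name__ == "__main__":
    for n in (1_000, 100_000, 1_000_000):
        print(f"n={n:>9,}  pi estimate = {estimate_pi(n):.4f}")
```

The standard error of such an estimate shrinks like 1/√n (the accuracy question taken up in Section 2.7.3), so each hundredfold increase in samples buys roughly one more stable digit.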

Download link

[PDF] Machine Learning: A Probabilistic Perspective (MLAPP), by Kevin Murphy
