
Understanding Machine Learning: From Theory to Algorithms

General Programming Topics

Download this example
  • Development language: Others
  • Example size: 2.48 MB
  • Downloads: 8
  • Views: 84
  • Published: 2020-08-11
  • Category: General Programming Topics
  • Publisher: robot666
  • File format: .pdf
  • Points required: 2
 

Example Introduction

【Example Overview】
Understanding Machine Learning: From Theory to Algorithms, a machine learning textbook that leans toward theory.
Understanding Machine Learning

Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and nonexpert readers in statistics, computer science, mathematics, and engineering.

Shai Shalev-Shwartz is an Associate Professor at the School of Computer Science and Engineering at The Hebrew University, Israel. Shai Ben-David is a Professor in the School of Computer Science at the University of Waterloo, Canada.

UNDERSTANDING MACHINE LEARNING
From Theory to Algorithms
Shai Shalev-Shwartz, The Hebrew University, Jerusalem
Shai Ben-David, University of Waterloo, Canada
CAMBRIDGE UNIVERSITY PRESS

Cambridge University Press, 32 Avenue of the Americas, New York, NY 10013-2473, USA. Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence. www.cambridge.org. Information on this title: www.cambridge.org/9781107057135. © Shai Shalev-Shwartz and Shai Ben-David 2014. This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2014. Printed in the United States of America. A catalog record for this publication is available from the British Library. Library of Congress Cataloging in Publication data. ISBN 978-1-107-05713-5 Hardback. Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet Web sites referred to in this publication and does not guarantee that any content on such Web sites is, or will remain, accurate or appropriate.

Triple-S dedicates the book to triple-M.

Preface

The term machine learning refers to the automated detection of meaningful patterns in data. In the past couple of decades it has become a common tool in almost any task that requires information extraction from large data sets. We are surrounded by machine learning based technology: search engines learn how to bring us the best results (while placing profitable ads), anti-spam software learns to filter our email messages, and credit card transactions are secured by software that learns how to detect frauds. Digital cameras learn to detect faces, and intelligent personal assistance applications on smart-phones learn to recognize voice commands. Cars are equipped with accident prevention systems that are built using machine learning algorithms.
Machine learning is also widely used in scientific applications such as bioinformatics, medicine, and astronomy. One common feature of all of these applications is that, in contrast to more traditional uses of computers, in these cases, due to the complexity of the patterns that need to be detected, a human programmer cannot provide an explicit, fine-detailed specification of how such tasks should be executed. Taking example from intelligent beings, many of our skills are acquired or refined through learning from our experience (rather than following explicit instructions given to us). Machine learning tools are concerned with endowing programs with the ability to "learn" and adapt.

The first goal of this book is to provide a rigorous, yet easy to follow, introduction to the main concepts underlying machine learning: What is learning? How can a machine learn? How do we quantify the resources needed to learn a given concept? Is learning always possible? Can we know if the learning process succeeded or failed?

The second goal of this book is to present several key machine learning algorithms. We chose to present algorithms that on one hand are successfully used in practice and on the other hand give a wide spectrum of different learning techniques. Additionally, we pay specific attention to algorithms appropriate for large scale learning (a.k.a. "Big Data"), since in recent years our world has become increasingly "digitized" and the amount of data available for learning is dramatically increasing. As a result, in many applications data is plentiful and computation time is the main bottleneck. We therefore explicitly quantify both the amount of data and the amount of computation time needed to learn a given concept.

The book is divided into four parts. The first part aims at giving an initial rigorous answer to the fundamental questions of learning. We describe a generalization of Valiant's Probably Approximately Correct (PAC) learning model, which is a first solid answer to the question "what is learning?". We describe the Empirical Risk Minimization (ERM), Structural Risk Minimization (SRM), and Minimum Description Length (MDL) learning rules, which show "how can a machine learn". We quantify the amount of data needed for learning using the ERM, SRM, and MDL rules and show how learning might fail by deriving a "no-free-lunch" theorem. We also discuss how much computation time is required for learning. In the second part of the book we describe various learning algorithms. For some of the algorithms, we first present a more general learning principle, and then show how the algorithm follows the principle. While the first two parts of the book focus on the PAC model, the third part extends the scope to a wider variety of learning models. Finally, the last part of the book is devoted to advanced theory.

We made an attempt to keep the book as self-contained as possible. However, the reader is assumed to be comfortable with basic notions of probability, linear algebra, analysis, and algorithms. The first three parts of the book are intended for first year graduate students in computer science, engineering, mathematics, or statistics. It can also be accessible to undergraduate students with the adequate background. The more advanced chapters can be used by researchers intending to gather a deeper theoretical understanding.
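The preface above refers to the ERM rule and to PAC learning. As a quick pointer for readers browsing this listing, the following is a minimal sketch of those two notions in standard notation (it is a summary, not a quotation from the book): given a training sample $S=((x_1,y_1),\dots,(x_m,y_m))$ and a hypothesis class $\mathcal{H}$, the ERM rule returns a hypothesis minimizing the empirical risk $L_S$,

\[
L_S(h) \;=\; \frac{1}{m}\sum_{i=1}^{m}\mathbf{1}\bigl[h(x_i)\neq y_i\bigr],
\qquad
\mathrm{ERM}_{\mathcal{H}}(S)\;\in\;\operatorname*{argmin}_{h\in\mathcal{H}} L_S(h),
\]

and, for a finite class $\mathcal{H}$ under the realizability assumption, the ERM rule is PAC learnable with sample complexity

\[
m_{\mathcal{H}}(\epsilon,\delta)\;\le\;\left\lceil\frac{\ln\!\bigl(|\mathcal{H}|/\delta\bigr)}{\epsilon}\right\rceil,
\]

meaning that with probability at least $1-\delta$ over a sample of that size, the returned hypothesis has true error at most $\epsilon$. In the agnostic case the dependence on $\epsilon$ worsens to $1/\epsilon^{2}$; the book derives both bounds carefully in its early chapters.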
Acknowledgements

The book is based on Introduction to Machine Learning courses taught by Shai Shalev-Shwartz at The Hebrew University and by Shai Ben-David at the University of Waterloo. The first draft of the book grew out of the lecture notes for the course that was taught at The Hebrew University by Shai Shalev-Shwartz during 2010-2013. We greatly appreciate the help of Ohad Shamir, who served as a TA for the course in 2010, and of Alon Gonen, who served as a TA for the course in 2011-2013. Ohad and Alon prepared a few lecture notes and many of the exercises. Alon, to whom we are indebted for his help throughout the entire making of the book, has also prepared a solution manual.

We are deeply grateful for the most valuable work of Dana Rubinstein. Dana has scientifically proofread and edited the manuscript, transforming it from lecture-based chapters into fluent and coherent text.

Special thanks to Amit Daniely, who helped us with a careful read of the advanced part of the book and also wrote the advanced chapter on multiclass learnability. We are also grateful for the members of a book reading club in Jerusalem who have carefully read and constructively criticized every line of the manuscript. The members of the reading club are: Maya Alroy, Yossi Arjevani, Aharon Birnbaum, Alon Cohen, Alon Gonen, Roi Livni, Ofer Meshi, Dan Rosenbaum, Dana Rubinstein, Shahar Somin, Alon Vinnikov, and Yoav Wald. We would also like to thank Gal Elidan, Amir Globerson, Nika Haghtalab, Shie Mannor, Amnon Shashua, Nati Srebro, and Ruth Urner for helpful discussions.

Shai Shalev-Shwartz, Jerusalem, Israel
Shai Ben-David, Waterloo, Canada

Contents (excerpt)

Preface
1 Introduction
  1.1 What Is Learning?
  1.2 When Do We Need Machine Learning?
  1.3 Types of Learning
  1.4 Relations to Other Fields
  1.5 How to Read This Book
    1.5.1 Possible Course Plans Based on This Book
  1.6 Notation
Part I Foundations
2 A Gentle Start
  2.1 A Formal Model - The Statistical Learning Framework
  2.2 Empirical Risk Minimization
    2.2.1 Something May Go Wrong - Overfitting
  2.3 Empirical Risk Minimization with Inductive Bias
    2.3.1 Finite Hypothesis Classes
  2.4 Exercises
3 A Formal Learning Model
  3.1 PAC Learning
  3.2 A More General Learning Model
    3.2.1 Releasing the Realizability Assumption - Agnostic PAC Learning
    3.2.2 The Scope of Learning Problems Modeled
  3.3 Summary
  3.4 Bibliographic Remarks
  3.5 Exercises
4 Learning via Uniform Convergence
  4.1 Uniform Convergence Is Sufficient for Learnability
  4.2 Finite Classes Are Agnostic PAC Learnable
  4.3 Summary
  4.4 Bibliographic Remarks
  4.5 Exercises
5 The Bias-Complexity Tradeoff
  5.1 The No-Free-Lunch Theorem
    5.1.1 No-Free-Lunch and Prior Knowledge
  5.2 Error Decomposition
  5.3 Summary
  5.4 Bibliographic Remarks
  5.5 Exercises
6 The VC-Dimension
  6.1 Infinite-Size Classes Can Be Learnable
  6.2 The VC-Dimension
  6.3 Examples
    6.3.1 Threshold Functions
    6.3.2 Intervals
    6.3.3 Axis Aligned Rectangles
    6.3.4 Finite Classes
    6.3.5 VC-Dimension and the Number of Parameters
  6.4 The Fundamental Theorem of PAC Learning
  6.5 Proof of Theorem 6.7
    6.5.1 Sauer's Lemma and the Growth Function
    6.5.2 Uniform Convergence for Classes of Small Effective Size
  6.6 Summary
  6.7 Bibliographic Remarks
  6.8 Exercises
7 Nonuniform Learnability
  7.1 Nonuniform Learnability
    7.1.1 Characterizing Nonuniform Learnability
  7.2 Structural Risk Minimization
  7.3 Minimum Description Length and Occam's Razor
    7.3.1 Occam's Razor
  7.4 Other Notions of Learnability - Consistency
  7.5 Discussing the Different Notions of Learnability
    7.5.1 The No-Free-Lunch Theorem Revisited
  7.6 Summary
  7.7 Bibliographic Remarks
  7.8 Exercises
8 The Runtime of Learning
  8.1 Computational Complexity of Learning

【Example Screenshots】
【Core Code】

Tags:

Example Download Link

Understanding Machine Learning: From Theory to Algorithms

