
MATLAB Implementation of a Sparse Autoencoder for Deep Learning

General Programming

  • Development language: Others
  • Example size: 0.20 MB
  • Downloads: 18
  • Views: 217
  • Published: 2020-09-01
  • Category: General Programming
  • Publisher: robot666
  • File format: .pdf
  • Points required: 2
 

Example Introduction

【Description】
MATLAB implementation of deep learning with a sparse autoencoder (sparse autoencoder, MATLAB code).
The PDF (57 slides) walks through train.m, the starter script for the CS294A/CS294W programming assignment. Lecturer: YiBin YU (yuyibintony@163.com), WuYi University, "Deep Learning, MATLAB Code for Sparse Autoencoder". Reconstructed excerpt of train.m:

```matlab
%% CS294A/CS294W Programming Assignment Starter Code Instructions
%  This file contains code that helps you get started on the
%  programming assignment. You will need to complete the code in
%  sampleIMAGES.m, sparseAutoencoderCost.m and computeNumericalGradient.m.
%  For the purpose of completing the assignment, you do not need to
%  change the code in this file.

%%======================================================================
%% STEP 0: Here we provide the relevant parameters values that will
%  allow your sparse autoencoder to get good filters; you do not need to
%  change the parameters below.

visibleSize = 8*8;     % number of input units
hiddenSize = 25;       % number of hidden units
sparsityParam = 0.01;  % desired average activation of the hidden units
                       % (This was denoted by the Greek letter rho,
                       % which looks like a lower-case "p",
                       % in the lecture notes.)
lambda = 0.0001;       % weight decay parameter
beta = 3;              % weight of sparsity penalty term

%%======================================================================
%% STEP 1: Implement sampleIMAGES
%  After implementing sampleIMAGES, the display_network command should
%  display a random sample of 200 patches from the dataset.

patches = sampleIMAGES;
display_network(patches(:, randi(size(patches, 2), 204, 1)), 8);
% randi(...) produces a 204-element column vector of random indices
% into the patch set, i.e. 204 patches are picked at random for display.

%  Obtain random parameters theta
theta = initializeParameters(hiddenSize, visibleSize);

%%======================================================================
%% STEP 2: Implement sparseAutoencoderCost
%  You can implement all of the components (squared error cost, weight
%  decay term, sparsity penalty) in the cost function at once, but it may
%  be easier to do it step-by-step and run gradient checking (see STEP 3)
%  after each step. We suggest implementing the sparseAutoencoderCost
%  function using the following steps:
%
%  (a) Implement forward propagation in your neural network, and
%      implement the squared error term of the cost function. Implement
%      backpropagation to compute the derivatives. Then (using
%      lambda = beta = 0), run Gradient Checking to verify that the
%      calculations corresponding to the squared error cost term are
%      correct.
%
%  (b) Add in the weight decay term (in both the cost function and the
%      derivative calculations), then re-run Gradient Checking to verify
%      correctness.
%
%  (c) Add in the sparsity penalty term, then re-run Gradient Checking to
%      verify correctness.
%
%  Feel free to change the training settings when debugging your code.
%  (For example, reducing the training set size or number of hidden units
%  may make your code run faster; and setting beta and/or lambda to zero
%  may be helpful for debugging.) However, in your final submission of
%  the visualized weights, please use the parameters we gave in Step 0
%  above.

[cost, grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                     lambda, sparsityParam, beta, patches);

%%======================================================================
%% STEP 3: Gradient Checking
%  Hint: If you are debugging your code, performing gradient checking on
%  smaller models and smaller training sets (e.g., using only 10 training
%  examples and 1-2 hidden units) may speed things up.
%
%  First, let's make sure your numerical gradient computation is correct
%  for a simple function. After you have implemented
%  computeNumericalGradient.m, run the following:

checkNumericalGradient();
```

【Screenshots】
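STEP 2 is the heart of the assignment. As an illustration of what sparseAutoencoderCost computes, here is a sketch assuming sigmoid activations and the [W1(:); W2(:); b1; b2] parameter packing produced by the starter code's initializeParameters. The function name is chosen here to mark it as an illustration; it shows one common way to implement the squared error, weight decay, and KL sparsity penalty terms, not necessarily the lecturer's exact code.

```matlab
function [cost, grad] = sparseAutoencoderCostSketch(theta, visibleSize, hiddenSize, ...
                                                    lambda, sparsityParam, beta, data)
% Unpack parameters (layout assumed to match initializeParameters).
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

m = size(data, 2);                       % number of training examples
sigmoid = @(z) 1 ./ (1 + exp(-z));

% Forward propagation.
z2 = W1*data + repmat(b1, 1, m);  a2 = sigmoid(z2);   % hidden activations
z3 = W2*a2   + repmat(b2, 1, m);  a3 = sigmoid(z3);   % reconstruction

% Sparsity: KL divergence between target rho and average activation rhoHat.
rho    = sparsityParam;
rhoHat = mean(a2, 2);
KL     = sum(rho*log(rho./rhoHat) + (1-rho)*log((1-rho)./(1-rhoHat)));

cost = (0.5/m)*sum(sum((a3 - data).^2)) ...            % squared error term
     + (lambda/2)*(sum(W1(:).^2) + sum(W2(:).^2)) ...  % weight decay term
     + beta*KL;                                        % sparsity penalty term

% Backpropagation; the sparsity penalty adds an extra term to delta2.
delta3 = -(data - a3) .* a3 .* (1 - a3);
sparsityTerm = beta * (-rho./rhoHat + (1-rho)./(1-rhoHat));
delta2 = (W2'*delta3 + repmat(sparsityTerm, 1, m)) .* a2 .* (1 - a2);

W1grad = delta2*data'/m + lambda*W1;
W2grad = delta3*a2'/m   + lambda*W2;
b1grad = mean(delta2, 2);
b2grad = mean(delta3, 2);

grad = [W1grad(:); W2grad(:); b1grad(:); b2grad(:)];
end
```

Following the steps (a)-(c) above means first checking this with lambda = beta = 0, then enabling the weight decay term, then the sparsity penalty, re-running gradient checking after each addition.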
【Core Code】
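STEP 3 of train.m relies on computeNumericalGradient.m, which the assignment asks you to write yourself. A minimal central-difference sketch (the EPSILON value follows the assignment's convention; the function name here is chosen to mark it as an illustration, not the lecturer's solution):

```matlab
function numgrad = computeNumericalGradientSketch(J, theta)
% Approximate the gradient of the scalar function J at theta by
% perturbing one component at a time by +/- EPSILON.
EPSILON = 1e-4;
numgrad = zeros(size(theta));
for i = 1:numel(theta)
    e = zeros(size(theta));
    e(i) = EPSILON;
    numgrad(i) = (J(theta + e) - J(theta - e)) / (2*EPSILON);
end
end
```

To verify an analytic gradient grad against numgrad, a standard check is norm(numgrad - grad)/norm(numgrad + grad); a value near 1e-9 or smaller indicates the analytic gradient is correct.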

Tags:

Download Link

MATLAB Implementation of a Sparse Autoencoder for Deep Learning



