
Collection of Important Deep Learning Papers (深度学习算法重要论文集合.rar)

  • Language: Others
  • Size: 42.72 MB
  • Downloads: 16
  • Views: 134
  • Posted: 2021-12-13
  • Category: General programming
  • Publisher: js2021
  • File format: .rar
  • Points required: 2
 

Description

[Summary]
A collection of key papers on deep learning algorithms, including complete treatments of Contrastive Divergence, Deep Belief Nets, Restricted Boltzmann Machines, Autoencoders, and related models.
[Archive Contents]
4744300845144237208.rar
└── Important papers Mentioned
├── A Practical Guide to Training Restricted Boltzmann Machines (2010)
│   ├── A Practical Guide to Training RBM.pdf
│   └── A Practical Guide to Training RBM简单介绍.docx
├── Autoencoders, Unsupervised Learning, and Deep Architecture(2011)
│   └── Autoencoders, Unsupervised Learning, and Deep Architecture.pdf
├── Continuation Method-Global Optimization
│   └── Allgower_E.L.,_Georg_K._Introduction_to_numerical_continuation_methods_(1990)(en)(388s).pdf
├── Contrastive_Divergence(2002)
│   ├── Contrastive_Divergence_learn_normal.m
│   ├── contrastive_divergence.ppt
│   └── Minimizing Contrastive Divergence.pdf
├── Efficient Learning of Sparse Representations with an Energy-Based Model(2007)
│   ├── Copy-Efficient Learning of Sparse Representations.pdf
│   ├── Efficient Learning of Sparse Representations.pdf
│   ├── Efficient Learning of Sparse Representations简单介绍.docx
│   ├── Learning Sparse Topographic Representations with Products of Student-t distribution.pdf
│   ├── Reference
│   │   ├── A Wavelet Approach,Berdlin.pdf
│   │   ├── Copy-EBM for Sparse Overcomplete Representations.pdf
│   │   ├── Copy - Sparse Coding with an Overcomplete Basis Set.pdf
│   │   ├── Energy-Based Models for Sparse Overcomplete Representations.pdf
│   │   ├── Forming sparse representations by local anti-Hebbian learning.pdf
│   │   ├── Hebbian theory - Wikipedia.pdf
│   │   ├── Sparse Coding with an Overcomplete Basis Set.pdf
│   │   └── Wavelet Representation,Mallat.pdf
│   └── Unsupervised Discovery of Non-Linear Structure using Contrastive-Divergence.pdf
├── Greedy Layer-Wise Training of Deep Networks(2007)
│   ├── Copy-Greedy Layer-Wise .pdf
│   ├── Greedy Layer-Wise Training of Deep Networks.pdf
│   └── Greedy Layer-Wise Training of Deep Networks简单介绍.docx
├── layerwise greedy pretraining for DBN-fast,learning algorithm(2006)
│   ├── A fast learning algorithm for deep belief nets.pdf
│   ├── Fast learning algorithm paper简单介绍.docx
│   ├── Reference
│   │   ├── Contrastive_Divergence
│   │   │   ├── Contrastive_Divergence_learn_normal.m
│   │   │   └── Minimizing Contrastive Divergence.pdf
│   │   ├── Explaning_Away
│   │   │   ├── ExplainingAway_bayes_tutorial.pdf
│   │   │   └── Explaining_Away.m
│   │   └── Wake-Sleep Algorithm
│   │       └── Wake-Sleep algorithm for unsupervised networks.pdf
│   └── Up_Down_Algorithm.m
├── Learning Deep Archiecture for AI(2009)
│   ├── Copy-Learning Deep Architectures.pdf
│   ├── Learning Deep Architectures for AI.pdf
│   ├── Learning Deep Architectures for AI 简单介绍.docx
│   └── Reference_papers
│       ├── Alternative to CD training RBM-currently intractable
│       │   └── Representational Power of RBM.pdf
│       ├── Autoassociators
│       │   ├── Nonlinear Autoassociation vs PCA
│       │   │   └── Nonlinear Autoassociation is not Equivalent to PCA.pdf
│       │   └── Spasity on Autoassociator
│       │       ├── Efficient Learning of Sparse Representations.pdf
│       │       ├── Sparse Feature Learning for Deep Belief Networks.pdf
│       │       └── Sparse&Locally Shift Invariant Feature Extractor.pdf
│       ├── Contrastive Divergence Learning
│       │   └── On Contrastive Divergence Learning.pdf
│       ├── Exponential Family Formula-Energy Function for RBM(Important)
│       │   ├── 20news_w100.mat
│       │   └── Exponential Family RBM(Harmoniums).pdf
│       ├── Global Optimization
│       │   ├── Introduction to Numerial Continuation Method.pdf
│       │   ├── Regularization Path-Controlling Temperature.pdf
│       │   └── Shaping-Training with a Curriculum.pdf
│       ├── Optimisation for Initialize layer-Overcomplete Case
│       │   ├── A NEW VIEW OF ICA.pdf
│       │   └── Energy Models for Sparse Overcomplete case.pdf
│       ├── Reconstruction Error General Formula
│       │   └── Unsupervised Layer-Wise Model Selection in DNN.pdf
│       ├── Using DBN to Learn Covariance Kernels for Gaussian Processes.pdf
│       ├── Variant of RBMs
│       │   ├── Conditional RBM with Variable Hidden Biases C.pdf
│       │   ├── Conditional RBM with Variable Weight Matrix W.pdf
│       │   ├── Factored RBMs.pdf
│       │   ├── RBM with lateral connections.pdf
│       │   └── Temporal RBM.pdf
│       └── Variational Approximation Methods
│           ├── Variational approximation methods-Tutorial.pdf
│           └── Variational Bayesian methods - Wikipedia, the free encyclopedia.pdf
├── On the Quantitative Analysis of Deep Belief Networks(2008)
│   └── On the Quantitative Analysis of Deep Belief Networks.pdf
├── Reducing the dimensionality of Neural Networks(2006)
│   ├── computetraj.m
│   ├── drawtraj.m
│   ├── Materials - Reducing the Dimensionality of Data with Neural Networks.pdf
│   ├── paper简介.docx
│   ├── Reducing the Dimensionality of Data with Neural Networks.pdf
│   ├── Refernece
│   │   └── Ink Procedure - Inferring Motor Programs from Images of.pdf
│   └── Training a deep autoencoder or a classifier on MNIST digits
│       ├── Autoencoder_Code
│       │   ├── backpropclassify.m
│       │   ├── backprop.m
│       │   ├── CG_CLASSIFY_INIT.m
│       │   ├── CG_CLASSIFY.m
│       │   ├── CG_MNIST.m
│       │   ├── converter.m
│       │   ├── makebatches.m
│       │   ├── mnistclassify.m
│       │   ├── mnistdeepauto.m
│       │   ├── mnistdisp.m
│       │   ├── rbmhidlinear.m
│       │   ├── rbm.m
│       │   └── README.txt
│       ├── Local Linear Embedding
│       │   ├── JDQR.m.tar.gz
│       │   ├── lle.m
│       │   ├── scurve.m
│       │   └── swissroll.m
│       ├── minimize.m
│       ├── mnistHelpFunction
│       │   ├── loadMNISTImages.m
│       │   ├── loadMNISTLabels.m
│       │   └── loadMNIST_SimpleExample.m
│       ├── readMNIST_MatlabCentral
│       │   ├── license.txt
│       │   └── readMNIST.m
│       ├── t10k-images.idx3-ubyte
│       ├── t10k-labels.idx1-ubyte
│       ├── train-images.idx3-ubyte
│       └── train-labels.idx1-ubyte
├── Sparse DBN(Dynamic Bayesian Network)
│   └── Why are DBNs sparse.pdf
├── Unsupervised Pre-training
│   ├── The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training.pdf
│   └── Training RBM using Approximations to Likelihood Gradient(2008)
│       └── Training RBM using Approximations to LG.pdf
└── Why Does Unsupervised Pre-training Help Deep Learning(2010)
    └── Why Does Unsupervised Pre-training Help Deep Learning.pdf

38 directories, 94 files
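The MNIST data files listed above (`t10k-images.idx3-ubyte`, etc.) use the standard big-endian IDX binary format: a magic number encoding the element type and dimension count, followed by the dimension sizes and raw bytes. A minimal reader sketch in Python; the function name `read_idx` is illustrative (the archive's own `loadMNISTImages.m` and `readMNIST.m` play the same role in MATLAB).

```python
import struct
import numpy as np

def read_idx(path):
    """Minimal reader for the MNIST IDX format (idx1-ubyte / idx3-ubyte)."""
    with open(path, "rb") as f:
        # Magic number: two zero bytes, a data-type code, and the number
        # of dimensions. 0x08 means unsigned byte elements.
        zero, dtype_code, ndim = struct.unpack(">HBB", f.read(4))
        assert zero == 0 and dtype_code == 0x08, "unexpected IDX header"
        # Each dimension size is a big-endian 32-bit unsigned integer.
        dims = struct.unpack(">" + "I" * ndim, f.read(4 * ndim))
        # Remaining bytes are the array data in row-major order.
        data = np.frombuffer(f.read(), dtype=np.uint8)
        return data.reshape(dims)
```

For example, `read_idx("train-images.idx3-ubyte")` would return a `(60000, 28, 28)` uint8 array for the standard MNIST training set.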





