Introduction
A book on the design and application areas of recurrent neural networks. It presents the material in an accessible yet thorough way and has great reference value.
ACKNOWLEDGMENTS

The editors thank Dr. R. K. Jain, University of South Australia, for his assistance as a reviewer. We are indebted to Samir Unadkat and Malina Ciocoiu for their excellent work formatting the chapters, and to others who assisted: Srinivasan Guruswami and Aravindkumar Ramalingam. Finally, we thank the chapter authors, who not only shared their expertise in recurrent neural networks but also patiently worked with us via the Internet to create this book. One of us (L.M.) thanks Lee Giles, Ashraf Abdelbar, and Marty Hagan for their assistance and helpful conversations, and Karen Medsker for her patience, support, and technical advice.

THE EDITORS

Larry Medsker is a Professor of Physics and Computer Science at American University. His research involves soft computing and hybrid intelligent systems that combine neural network and AI techniques. Other areas of research are in nuclear physics and data analysis systems. He is the author of two books, Hybrid Neural Network and Expert Systems (1994) and Hybrid Intelligent Systems (1995), and co-authored with Jay Liebowitz another book, Expert Systems and Neural Networks (1994). One of his current projects applies intelligent web-based systems to problems of knowledge management and data mining at the U.S. Department of Labor. His Ph.D. in Physics is from Indiana University, and he has held positions at Bell Laboratories, the University of Pennsylvania, and Florida State University. He is a member of the International Neural Network Society, the American Physical Society, the American Association for Artificial Intelligence, the IEEE, and the D.C. Federation of Musicians, Local 161-710.

L. C. Jain is a Director/Founder of the Knowledge-Based Intelligent Engineering Systems (KES) Centre, located at the University of South Australia. He is a Fellow of the Institution of Engineers Australia. He has initiated a postgraduate stream by research in the knowledge-based intelligent engineering systems area, and has presented a number of keynote addresses at international conferences on knowledge-based systems, neural networks, fuzzy systems, and hybrid systems. He is the founding Editor-in-Chief of the International Journal of Knowledge-Based Intelligent Engineering Systems and served as an Associate Editor of the IEEE Transactions on Industrial Electronics. Professor Jain was the Technical Chair of the ETD2000 International Conference in 1995, Publications Chair of the Australian and New Zealand Conference on Intelligent Information Systems in 1996, and Conference Chair of the International Conference on Knowledge-Based Intelligent Electronic Systems in 1997, 1998, and 1999. He served as Vice President of the Electronics Association of South Australia in 1997. He is the Editor-in-Chief of the International Book Series on Computational Intelligence, CRC Press USA. His interests focus on the application of novel techniques such as knowledge-based systems, artificial neural networks, fuzzy systems, and genetic algorithms.

Table of Contents

Chapter 1: Introduction
Samir B. Unadkat, Malina M. Ciocoiu, and Larry R. Medsker
I. Overview
   A. Recurrent Neural Net Architectures
   B. Learning in Recurrent Neural Nets
II. Design Issues and Theory
   A. Optimization
   B. Discrete-Time Systems
   C. Bayesian Belief Revision
   D. Knowledge Representation
   E. Long-Term Dependencies
III. Applications
   A. Chaotic Recurrent Networks
   B. Language Learning
   C. Sequential Autoassociation
   D. Trajectory Problems
   E. Filtering and Control
   F. Adaptive Robot Behavior
IV. Future Directions

Chapter 2: Recurrent Neural Networks for Optimization: The State of the Art
Youshen Xia and Jun Wang
I. Introduction
II. Continuous-Time Neural Networks for QP and LCP
   A. Problems and Design of Neural Networks
   B. Primal-Dual Neural Networks for LP and QP
   C. Neural Networks for LCP
III. Discrete-Time Neural Networks for QP and LCP
   A. Neural Networks for QP and LCP
   B. Primal-Dual Neural Network for Linear Assignment
IV. Simulation Results
V. Concluding Remarks

Chapter 3: Efficient Second-Order Learning Algorithms for Discrete-Time Recurrent Neural Networks
Euripedes P. dos Santos and Fernando J. Von Zuben
I. Introduction
II. Spatial × Spatio-Temporal Processing
III. Computational Capability
IV. Recurrent Neural Networks as Nonlinear Dynamic Systems
V. Recurrent Neural Networks and Second-Order Learning Algorithms
VI. Recurrent Neural Network Architectures
VII. State Space Representation for Recurrent Neural Networks
VIII. Second-Order Information in Optimization-Based Learning Algorithms
IX. The Conjugate Gradient Algorithm
   A. The Algorithm
   B. The Case of Non-Quadratic Functions
   C. Scaled Conjugate Gradient Algorithm
X. An Improved SCGM Method
   A. Hybridization in the Choice of β
   B. Exact Multiplication by the Hessian
XI. The Learning Algorithm for Recurrent Neural Networks
   A. Computation of ∇E(w)
   B. Computation of H(w)v
XII. Simulation Results
XIII. Concluding Remarks

Chapter 4: Designing High Order Recurrent Networks for Bayesian Belief Revision
Ashraf Abdelbar
I. Introduction
II. Belief Revision and Reasoning Under Uncertainty
   A. Reasoning Under Uncertainty
   B. Bayesian Belief Networks
   C. Beliefs
   D. Approaches to Finding MAP Assignments
III. Hopfield Networks and Mean Field Annealing
   A. Optimization and the Hopfield Network
   B. Boltzmann Machine
   C. Mean Field Annealing
IV. High Order Recurrent Networks
V. Efficient Data Structures for Implementing HORNs
VI. Designing HORNs for Belief Revision
VII. Conclusions

Chapter 5: Equivalence in Knowledge Representation: Automata, Recurrent Neural Networks, and Dynamical Fuzzy Systems
C. Lee Giles, Christian W. Omlin, and K. K. Thornber
I. Introduction
   A. Motivation
   B. Background
   C. Overview
II. Fuzzy Finite State Automata
III. Representation of Fuzzy States
   A. Preliminaries
   B. DFA Encoding Algorithm
   C. Recurrent State Neurons with Variable Output Range
   D. Programming Fuzzy State Transitions
IV. Automata Transformation
   A. Preliminaries
   B. Transformation Algorithm
   C. Example
   D. Properties of the Transformation Algorithm
V. Network Architecture
VI. Network Stability Analysis
   A. Preliminaries
   B. Fixed Point Analysis for Sigmoidal Discriminant Function
   C. Network Stability
VII. Simulations
VIII. Conclusions

Chapter 6: Learning Long-Term Dependencies in NARX Recurrent Neural Networks
Tsungnan Lin, Bill G. Horne, Peter Tino, and C. Lee Giles
I. Introduction
II. Vanishing Gradients and Long-Term Dependencies
III. NARX Networks
IV. An Intuitive Explanation of NARX Network Behavior
V. Experimental Results
   A. The Latching Problem
   B. An Automaton Problem
VI. Conclusion

Chapter 7: Oscillation Responses in a Chaotic Recurrent Network
Judy Dayhoff, Peter J. Palmadesso, and Fred Richards
I. Introduction
II. Progression to Chaos
   A. Activity Measurements
   B. Different Initial States
III. External Patterns
   A. Progression from Chaos to a Fixed Point
   B. Quick Response
IV. Dynamic Adjustment of Pattern Strength
V. Characteristics of the Pattern-to-Oscillation Map
VI. Discussion

Chapter 8: Lessons from Language Learning
Stefan C. Kremer
I. Introduction
   A. Language Learning
   B. Classical Grammar Induction
   C. Grammatical Induction
   D. Grammars in Recurrent Networks
   E. Outline
II. Lesson 1: Language Learning Is Hard
III. Lesson 2: When Possible, Search a Smaller Space
   A. An Example: Where Did I Leave My Keys?
   B. Reducing and Ordering in Grammatical Induction
   C. Restricted Hypothesis Spaces in Connectionist Networks
   D. Lesson 2.1: Choose an Appropriate Network Topology
   E. Lesson 2.2: Choose a Limited Number of Hidden Units
   F. Lesson 2.3: Fix Some Weights
   G. Lesson 2.4: Set Initial Weights
IV. Lesson 3: Search the Most Likely Places First
V. Lesson 4: Order Your Training Data
   A. Classical Results
   B. Input Ordering Used in Recurrent Networks
   C. How Recurrent Networks Pay Attention to Order
VI. Summary

Chapter 9: Recurrent Autoassociative Networks: Developing Distributed Representations of Structured Sequences by Autoassociation
Ivelin Stoianov
I. Introduction
II. Sequences, Hierarchy, and Representations
III. Neural Networks and Sequential Processing
   A. Architectures
   B. Representing Natural Language
IV. Recurrent Autoassociative Networks
   A. Training RANs with the Backpropagation Through Time Learning Algorithm
   B. Experimenting with RANs: Learning Syllables
V. A Cascade of RANs
   A. Simulation with a Cascade of RANs: Representing Polysyllabic Words
   B. A More Realistic Experiment: Looking for Systematicity
VI. Going Further to a Cognitive Model
VII. Discussion
VIII. Conclusions

Chapter 10: Comparison of Recurrent Neural Networks for Trajectory Generation
David G. Hagner, Mohamad H. Hassoun, and Paul B. Watta
I. Introduction
II. Architecture
III. Training Set
IV. Error Function and Performance Metric
V. Training Algorithms
   A. Gradient Descent and Conjugate Gradient Descent
   B. Recursive Least Squares and the Kalman Filter
VI. Simulations
   A. Algorithm Speed
   B. Circle Results
   C. Figure-Eight Results
   D. Algorithm Analysis
   E. Algorithm Stability
   F. Convergence Criteria
   G. Trajectory Stability and Convergence Dynamics
VII. Conclusions

Chapter 11: Training Algorithms for Recurrent Neural Nets that Eliminate the Need for Computation of Error Gradients, with Application to the Trajectory Production Problem
Malur K. Sundareshan, Yee Chin Wong, and Thomas Condarcure
I. Introduction
II. Description of the Learning Problem and Some Issues in Spatiotemporal Training
   A. General Framework and Training Goals
   B. Recurrent Neural Network Architectures
   C. Some Issues of Interest in Neural Network Training
III. Training by Methods of Learning Automata
   A. Some Basics on Learning Automata
   B. Application to Training Recurrent Networks
   C. Trajectory Generation Performance
IV. Training by the Simplex Optimization Method
   A. Some Basics on Simplex Optimization
   B. Application to Training Recurrent Networks
   C. Trajectory Generation Performance
V. Conclusions

Chapter 12: Training Recurrent Neural Networks for Filtering and Control
Martin T. Hagan, Orlando De Jesus, and Roger Schultz
I. Introduction
II. Preliminaries
   A. Layered Feedforward Network
   B. Layered Digital Recurrent Network
III. Principles of Dynamic Learning
IV. Dynamic Backpropagation for the LDRN
   A. Preliminaries
   B. Explicit Derivatives
   C. Complete FP Algorithms for the LDRN
V. Neurocontrol Application
VI. Recurrent Filter
VII. Summary

Chapter 13: Remembering How to Behave: Recurrent Neural Networks for Adaptive Robot Behavior
T. Ziemke
I. Introduction
II. Background
III. Recurrent Neural Networks for Adaptive Robot Behavior
   A. Motivation
   B. Robot and Simulator
   C. Robot Control Architecture
   D. Experiment 1
   E. Experiment 2
IV. Summary and Discussion
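A recurring technical thread in the table of contents above is gradient-based training and the difficulty of long-term dependencies (Chapters 3, 6, and 11). As a one-formula orientation, in our own generic notation rather than any chapter's, the backpropagation-through-time sensitivity of an error E_t to an earlier hidden state h_k factors into a product of Jacobians:

\[
  \frac{\partial E_t}{\partial h_k}
    = \frac{\partial E_t}{\partial h_t}
      \prod_{j=k+1}^{t} \frac{\partial h_j}{\partial h_{j-1}},
  \qquad
  \frac{\partial h_j}{\partial h_{j-1}}
    = \operatorname{diag}\bigl(f'(W h_{j-1} + U x_j)\bigr)\, W,
\]

for a state update h_j = f(W h_{j-1} + U x_j). When the Jacobian norms stay below one, the product shrinks exponentially in t - k; these "vanishing gradients" are exactly the obstacle that the NARX chapter (Chapter 6) analyzes and that the gradient-free training schemes of Chapter 11 sidestep.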
【Screenshots】

【Core Code】
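This slot normally holds an excerpt from the packaged resource, but none survived extraction. In its place, here is a minimal illustrative sketch, written by us rather than taken from the book, of the Elman-style recurrent network that several chapters build on; the class name, layer sizes, and initialization are all our own arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

class ElmanRNN:
    """Minimal Elman network: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h),
    y_t = W_hy h_t + b_y. Illustrative sketch only, not code from the book."""

    def __init__(self, n_in, n_hidden, n_out):
        s = 1.0 / np.sqrt(n_hidden)
        self.W_xh = rng.uniform(-s, s, (n_hidden, n_in))      # input -> hidden
        self.W_hh = rng.uniform(-s, s, (n_hidden, n_hidden))  # hidden -> hidden (recurrence)
        self.W_hy = rng.uniform(-s, s, (n_out, n_hidden))     # hidden -> output
        self.b_h = np.zeros(n_hidden)
        self.b_y = np.zeros(n_out)

    def forward(self, xs):
        """Run one input sequence; return per-step outputs and hidden states."""
        h = np.zeros_like(self.b_h)  # context units start at zero
        ys, hs = [], []
        for x in xs:
            h = np.tanh(self.W_xh @ x + self.W_hh @ h + self.b_h)
            hs.append(h)
            ys.append(self.W_hy @ h + self.b_y)
        return ys, hs

# Usage: a 4-step sequence of 2-dimensional inputs.
net = ElmanRNN(n_in=2, n_hidden=8, n_out=1)
sequence = [rng.normal(size=2) for _ in range(4)]
outputs, states = net.forward(sequence)
print([round(float(y[0]), 4) for y in outputs])

Running the script prints one output value per time step. Training such a network, whether by backpropagation through time, the second-order methods of Chapter 3, or the gradient-free schemes of Chapter 11, is where the book itself picks up.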