
Optimal Control Theory: An Introduction (Donald E. Kirk; Dover, 2004)

Copyright © 1970, 1998 by Donald E. Kirk. All rights reserved.

Bibliographical Note: This Dover edition, first published in 2004, is an unabridged republication of the thirteenth printing of the work originally published by Prentice-Hall, Inc., Englewood Cliffs, New Jersey.

Solutions Manual: Readers who would like to receive Solutions to Selected Exercises for this book may request them from the publisher at the following e-mail address: editors@doverpublications.com

Library of Congress Cataloging-in-Publication Data
Kirk, Donald E., 1937-
Optimal control theory: an introduction / Donald E. Kirk. p. cm.
Originally published: Englewood Cliffs, N.J.: Prentice-Hall, 1970 (Prentice-Hall networks series). Includes bibliographical references and index.
ISBN 0-486-43484-2 (pbk.)
1. Control theory. 2. Mathematical optimization. I. Title.
QA402.3 .K5 2004  003.5 dc22  2003070111

Manufactured in the United States of America. Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501.

Preface

Optimal control theory, which is playing an increasingly important role in the design of modern systems, has as its objective the maximization of the return from, or the minimization of the cost of, the operation of physical, social, and economic processes. This book introduces three facets of optimal control theory: dynamic programming, Pontryagin's minimum principle, and numerical techniques for trajectory optimization, at a level appropriate for a first- or second-year graduate course, an undergraduate honors course, or for directed self-study. A reasonable proficiency in the use of state variable methods is assumed; however, this and other prerequisites are reviewed in Chapter 1.
In the interest of flexibility, the book is divided into the following parts:

Part I: Describing the System and Evaluating Its Performance (Chapters 1 and 2)
Part II: Dynamic Programming (Chapter 3)
Part III: The Calculus of Variations and Pontryagin's Minimum Principle (Chapters 4 and 5)
Part IV: Iterative Numerical Techniques for Finding Optimal Controls and Trajectories (Chapter 6)
Part V: Conclusion (Chapter 7)

Because of the simplicity of the concept, dynamic programming (Part II) is presented before Pontryagin's minimum principle (Part III), thus enabling the reader to solve meaningful problems at an early stage and providing motivation for the material which follows. Parts II and III are self-contained; they may be studied in either order, or either may be omitted without affecting the treatment in the other. The problems provided in Parts I through IV are designed to introduce additional topics as well as to illustrate the basic concepts.

My experience indicates that it is possible to discuss, at a moderate pace, Chapters 1 through 4, Sections 5.1 through 5.3, and parts of Sections 5.4 and 5.5 in a one-quarter, four-credit-hour course. This material provides adequate background for reading the remainder of the book and other literature on optimal control theory. To study the entire book, a course of one semester's duration is recommended.

My thanks go to Professor Robert D. Strum for encouraging me to undertake the writing of this book, and for his helpful comments along the way. I also wish to express my appreciation to Professor John R. Ward for his constructive criticism of the presentation. Professor Charles H. Rothauge, Chairman of the Electrical Engineering Department at the Naval Postgraduate School, aided my efforts by providing a climate favorable for preparing and testing the manuscript. I thank Professors Jose B. Cruz, Jr., William R. Perkins, and Ronald A. Rohrer for introducing optimal control theory to me at the University of Illinois; undoubtedly their influence is reflected in this book. The valuable comments made by Professors James S. Demetry, Gene F. Franklin, Robert W. Newcomb, Ronald A. Rohrer, and Michael K. Sain are also gratefully acknowledged. In proofreading the manuscript I received generous assistance from my wife, Judy, and from Lcdr. D. T. Cowdrill and Lcdr. R. R. Owens, USN. Perhaps my greatest debt of gratitude is to the students whose comments were invaluable in preparing the final version of the book.

DONALD E. KIRK
Carmel, California

Contents

PART I: DESCRIBING THE SYSTEM AND EVALUATING ITS PERFORMANCE
1. Introduction
    1.1 Problem Formulation
    1.2 State Variable Representation of Systems
    1.3 Concluding Remarks
2. The Performance Measure
    2.1 Performance Measures for Optimal Control Problems
    2.2 Selecting a Performance Measure
    2.3 Selection of a Performance Measure: The Carrier Landing of a Jet Aircraft
    References; Problems

PART II: DYNAMIC PROGRAMMING
3. Dynamic Programming
    3.1 The Optimal Control Law
    3.2 The Principle of Optimality
    3.3 Application of the Principle of Optimality to Decision-Making
    3.4 Dynamic Programming Applied to a Routing Problem
    3.5 An Optimal Control System
    3.6 Interpolation
    3.7 A Recurrence Relation of Dynamic Programming
    3.8 Computational Procedure for Solving Control Problems
    3.9 Characteristics of Dynamic Programming Solution
    3.10 Analytical Results: Discrete Linear Regulator Problems
    3.11 The Hamilton-Jacobi-Bellman Equation
    3.12 Continuous Linear Regulator Problems
    3.13 The Hamilton-Jacobi-Bellman Equation: Some Observations
    3.14 Summary
    References; Problems

PART III: THE CALCULUS OF VARIATIONS AND PONTRYAGIN'S MINIMUM PRINCIPLE
4. The Calculus of Variations
    4.1 Fundamental Concepts
    4.2 Functionals of a Single Function
    4.3 Functionals Involving Several Independent Functions
    4.4 Piecewise-Smooth Extremals
    4.5 Constrained Extrema
    4.6 Summary
    References; Problems
5. The Variational Approach to Optimal Control Problems
    5.1 Necessary Conditions for Optimal Control
    5.2 Linear Regulator Problems
    5.3 Pontryagin's Minimum Principle and State Inequality Constraints
    5.4 Minimum-Time Problems
    5.5 Minimum Control-Effort Problems
    5.6 Singular Intervals in Optimal Control Problems
    5.7 Summary and Conclusions
    References; Problems

PART IV: ITERATIVE NUMERICAL TECHNIQUES FOR FINDING OPTIMAL CONTROLS AND TRAJECTORIES
6. Numerical Determination of Optimal Trajectories
    6.1 Two-Point Boundary-Value Problems
    6.2 The Method of Steepest Descent
    6.3 Variation of Extremals
    6.4 Quasilinearization
    6.5 Summary of Iterative Techniques for Solving Two-Point Boundary-Value Problems
    6.6 Gradient Projection
    References; Problems

PART V: CONCLUSION
7. Summation
    7.1 The Relationship Between Dynamic Programming and the Minimum Principle
    7.2 Summary
    7.3 Controller Design
    7.4 Conclusion
    References

APPENDICES
    1. Useful Matrix Properties and Definitions
    2. Difference Equation Representation of Linear Sampled-Data Systems
    3. Special Types of Euler Equations
    4. Answers to Selected Problems

Index

PART I: DESCRIBING THE SYSTEM AND EVALUATING ITS PERFORMANCE

1. Introduction

Classical control system design is generally a trial-and-error process in which various methods of analysis are used iteratively to determine the design parameters of an "acceptable" system.
Acceptable performance is generally defined in terms of time and frequency domain criteria such as rise time, settling time, peak overshoot, gain and phase margin, and bandwidth. Radically different performance criteria must be satisfied, however, by the complex, multiple-input, multiple-output systems required to meet the demands of modern technology. For example, the design of a spacecraft attitude control system that minimizes fuel expenditure is not amenable to solution by classical methods. A new and direct approach to the synthesis of these complex systems, called optimal control theory, has been made feasible by the development of the digital computer.

The objective of optimal control theory is to determine the control signals that will cause a process to satisfy the physical constraints and at the same time minimize (or maximize) some performance criterion. Later, we shall give a more explicit mathematical statement of "the optimal control problem," but first let us consider the matter of problem formulation.

1.1 PROBLEM FORMULATION

The axiom "A problem well put is a problem half solved" may be a slight exaggeration, but its intent is nonetheless appropriate. In this section, we …
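The "more explicit mathematical statement" promised above typically takes the following continuous-time form. This is a sketch in one common notation (the symbol names J, h, g, a, and U are assumptions here, not quotations from the excerpt): find an admissible control history that minimizes a scalar performance measure subject to the system dynamics and boundary conditions.

```latex
% Standard continuous-time optimal control problem (sketch; notation assumed):
% find an admissible control u*(t), t_0 <= t <= t_f, that minimizes
%
%   J = h\bigl(x(t_f), t_f\bigr) + \int_{t_0}^{t_f} g\bigl(x(t), u(t), t\bigr)\,dt
%
% subject to the state equations and initial condition
%
%   \dot{x}(t) = a\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0,
%
% with the control constrained to an admissible set, u(t) \in U.
\min_{u(\cdot)}\; J
  = h\bigl(x(t_f), t_f\bigr)
  + \int_{t_0}^{t_f} g\bigl(x(t), u(t), t\bigr)\,dt
\quad \text{s.t.} \quad
\dot{x}(t) = a\bigl(x(t), u(t), t\bigr),\;
x(t_0) = x_0,\;
u(t) \in U.
```

The spacecraft example mentioned above fits this template directly: choosing g as the instantaneous fuel flow rate makes J the total fuel expended, whereas classical criteria such as rise time and overshoot have no such direct expression, which is why these problems call for the machinery developed in this book.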