Recurrent Neural Networks

Size: 5 MB (5463922 bytes)

Pages: 389

Table of contents:
RECURRENT NEURAL NETWORKS……Page 1
PREFACE……Page 2
ACKNOWLEDGMENTS……Page 3
THE EDITORS……Page 4
Table of Contents……Page 5
I. OVERVIEW……Page 11
A. RECURRENT NEURAL NET ARCHITECTURES……Page 12
B. LEARNING IN RECURRENT NEURAL NETS……Page 13
B. DISCRETE-TIME SYSTEMS……Page 14
C. BAYESIAN BELIEF REVISION……Page 15
E. LONG-TERM DEPENDENCIES……Page 16
B. LANGUAGE LEARNING……Page 17
C. SEQUENTIAL AUTOASSOCIATION……Page 18
E. FILTERING AND CONTROL……Page 19
REFERENCES……Page 20
I. INTRODUCTION……Page 23
A. PROBLEMS AND DESIGN OF NEURAL NETWORKS……Page 25
B. PRIMAL-DUAL NEURAL NETWORKS FOR LP AND……Page 30
C. NEURAL NETWORKS FOR LCP……Page 33
A. NEURAL NETWORKS FOR QP AND LCP……Page 35
B. PRIMAL-DUAL NEURAL NETWORK FOR LINEAR ASSIGNMENT……Page 38
IV. SIMULATION RESULTS……Page 47
V. CONCLUDING REMARKS……Page 51
REFERENCES……Page 52
I. INTRODUCTION……Page 56
II. SPATIAL × SPATIO-TEMPORAL PROCESSING……Page 57
IV. RECURRENT NEURAL NETWORKS AS NONLINEAR DYNAMIC SYSTEMS……Page 58
V. RECURRENT NEURAL NETWORKS AND SECOND-ORDER LEARNING ALGORITHMS……Page 60
VI. RECURRENT NEURAL NETWORK ARCHITECTURES……Page 62
VII. STATE SPACE REPRESENTATION FOR RECURRENT NEURAL NETWORKS……Page 65
VIII. SECOND-ORDER INFORMATION IN OPTIMIZATION-BASED LEARNING ALGORITHMS……Page 67
IX. THE CONJUGATE GRADIENT ALGORITHM……Page 68
A. THE ALGORITHM……Page 69
B. THE CASE OF NON-QUADRATIC FUNCTIONS……Page 70
C. SCALED CONJUGATE GRADIENT ALGORITHM……Page 71
X. AN IMPROVED SCGM METHOD……Page 72
A. HYBRIDIZATION IN THE CHOICE OF……Page 73
B. EXACT MULTIPLICATION BY THE HESSIAN [PEARLMUTTER, 1994]……Page 74
XI. THE LEARNING ALGORITHM FOR RECURRENT NEURAL NETWORKS……Page 75
A. COMPUTATION OF……Page 76
B. COMPUTATION OF H(w)v……Page 77
XII. SIMULATION RESULTS……Page 79
XIII. CONCLUDING REMARKS……Page 80
REFERENCES……Page 81
A. REASONING UNDER UNCERTAINTY……Page 85
B. BAYESIAN BELIEF NETWORKS……Page 87
C. BELIEF REVISION……Page 89
A. OPTIMIZATION AND THE HOPFIELD NETWORK……Page 90
IV. HIGH ORDER RECURRENT NETWORKS……Page 92
V. EFFICIENT DATA STRUCTURES FOR IMPLEMENTING HORNS……Page 95
VI. DESIGNING HORNS FOR BELIEF REVISION……Page 96
VII. CONCLUSION……Page 100
ACKNOWLEDGMENT……Page 101
REFERENCES……Page 102
A. MOTIVATION……Page 106
B. BACKGROUND……Page 107
C. OVERVIEW……Page 109
II. FUZZY FINITE STATE AUTOMATA……Page 110
B. DFA ENCODING ALGORITHM……Page 111
C. RECURRENT STATE NEURONS WITH VARIABLE OUTPUT RANGE……Page 112
B. TRANSFORMATION ALGORITHM……Page 114
C. EXAMPLE……Page 116
D. PROPERTIES OF THE TRANSFORMATION ALGORITHM……Page 117
V. NETWORK ARCHITECTURE……Page 120
A. PRELIMINARIES……Page 122
B. FIXED POINT ANALYSIS FOR SIGMOIDAL DISCRIMINANT FUNCTION……Page 123
C. NETWORK STABILITY……Page 129
VIII. CONCLUSIONS……Page 131
REFERENCES……Page 133
I. INTRODUCTION……Page 139
II. VANISHING GRADIENTS AND LONG-TERM DEPENDENCIES……Page 140
III. NARX NETWORKS……Page 142
IV. AN INTUITIVE EXPLANATION OF NARX NETWORK BEHAVIOR……Page 144
A. THE LATCHING PROBLEM……Page 145
B. AN AUTOMATON PROBLEM……Page 148
ACKNOWLEDGMENTS……Page 151
APPENDIX: A CLOSER LOOK AT ROBUST INFORMATION LATCHING……Page 152
REFERENCES……Page 154
I. INTRODUCTION……Page 158
II. PROGRESSION TO CHAOS……Page 160
A. ACTIVITY MEASUREMENTS……Page 162
B. DIFFERENT INITIAL STATES……Page 163
III. EXTERNAL PATTERNS……Page 165
B. QUICK RESPONSE……Page 166
IV. DYNAMIC ADJUSTMENT OF PATTERN STRENGTH……Page 169
V. CHARACTERISTICS OF THE PATTERN-TO-OSCILLATION MAP……Page 171
VI. DISCUSSION……Page 178
REFERENCES……Page 180
A. LANGUAGE LEARNING……Page 183
B. CLASSICAL GRAMMAR INDUCTION……Page 184
D. GRAMMARS IN RECURRENT NETWORKS……Page 185
II. LESSON 1: LANGUAGE LEARNING IS HARD……Page 186
A. AN EXAMPLE: WHERE DID I LEAVE MY KEYS?……Page 187
C. RESTRICTED HYPOTHESIS SPACES IN CONNECTIONIST NETWORKS……Page 188
D. LESSON 2.1: CHOOSE AN APPROPRIATE NETWORK TOPOLOGY……Page 189
E. LESSON 2.2: CHOOSE A LIMITED NUMBER OF HIDDEN UNITS……Page 192
F. LESSON 2.3: FIX SOME WEIGHTS……Page 193
G. LESSON 2.4: SET INITIAL WEIGHTS……Page 194
IV. LESSON 3: SEARCH THE MOST LIKELY PLACES FIRST……Page 196
A. CLASSICAL RESULTS……Page 198
B. INPUT ORDERING USED IN RECURRENT NETWORKS……Page 199
C. HOW RECURRENT NETWORKS PAY ATTENTION TO ORDERS……Page 200
VI. SUMMARY……Page 203
REFERENCES……Page 204
I. INTRODUCTION……Page 209
II. SEQUENCES, HIERARCHY, AND REPRESENTATIONS……Page 211
A. ARCHITECTURES……Page 213
B. REPRESENTING NATURAL LANGUAGE……Page 215
IV. RECURRENT AUTOASSOCIATIVE NETWORKS……Page 220
A. TRAINING RAN WITH THE BACKPROPAGATION THROUGH TIME LEARNING ALGORITHM……Page 222
1. Forward Pass……Page 223
2. Backward Through Time Pass……Page 224
B. EXPERIMENTING WITH RANs: LEARNING SYLLABLES……Page 225
V. A CASCADE OF RANs……Page 228
A. SIMULATION WITH A CASCADE OF RANs: REPRESENTING POLYSYLLABIC WORDS……Page 232
B. A MORE REALISTIC EXPERIMENT: LOOKING FOR SYSTEMATICITY……Page 233
VI. GOING FURTHER TO A COGNITIVE MODEL……Page 235
VII. DISCUSSION……Page 237
VIII. CONCLUSIONS……Page 240
REFERENCES……Page 241
I. INTRODUCTION……Page 246
II. ARCHITECTURE……Page 247
III. TRAINING SET……Page 250
IV. ERROR FUNCTION AND PERFORMANCE METRIC……Page 251
V. TRAINING ALGORITHMS……Page 255
A. GRADIENT DESCENT AND CONJUGATE GRADIENT DESCENT……Page 256
B. RECURSIVE LEAST SQUARES AND THE KALMAN FILTER……Page 258
A. ALGORITHM SPEED……Page 260
B. CIRCLE RESULTS……Page 262
C. FIGURE-EIGHT RESULTS……Page 267
D. ALGORITHM ANALYSIS……Page 271
E. ALGORITHM STABILITY……Page 272
F. CONVERGENCE CRITERIA……Page 274
G. TRAJECTORY STABILITY AND CONVERGENCE DYNAMICS……Page 275
VII. CONCLUSIONS……Page 276
REFERENCES……Page 277
I. INTRODUCTION……Page 280
A. GENERAL FRAMEWORK AND TRAINING GOALS……Page 283
B. RECURRENT NEURAL NETWORK ARCHITECTURES……Page 285
1. AN OPTIMIZATION FRAMEWORK FOR SPATIOTEMPORAL LEARNING……Page 287
2. INCREMENTAL LEARNING……Page 289
3. TEACHER FORCING……Page 291
A. SOME BASICS ON LEARNING AUTOMATA……Page 292
B. APPLICATION TO TRAINING RECURRENT NETWORKS……Page 294
C. TRAJECTORY GENERATION PERFORMANCE……Page 296
1. EXPERIMENT 1……Page 298
2. EXPERIMENT 2……Page 299
A. SOME BASICS ON SIMPLEX OPTIMIZATION……Page 301
B. APPLICATION TO TRAINING RECURRENT NETWORKS……Page 307
1. EXPERIMENT 1……Page 315
2. EXPERIMENT 2……Page 318
3. EXPERIMENT 3……Page 321
REFERENCES……Page 324
II. PRELIMINARIES……Page 327
A. LAYERED FEEDFORWARD NETWORK……Page 328
B. LAYERED DIGITAL RECURRENT NETWORK……Page 329
III. PRINCIPLES OF DYNAMIC LEARNING……Page 330
A. PRELIMINARIES……Page 334
B. EXPLICIT DERIVATIVES……Page 335
C. COMPLETE FP ALGORITHM FOR THE LDRN……Page 336
V. NEUROCONTROL APPLICATION……Page 339
VI. RECURRENT FILTER……Page 347
REFERENCES……Page 355
II. BACKGROUND……Page 357
A. MOTIVATION……Page 362
B. ROBOT AND SIMULATOR……Page 363
1. Environment and task……Page 364
2. Network training……Page 366
3. Results……Page 367
4. Analysis……Page 369
1. Environment, task and training……Page 374
2. Results……Page 375
3. Analysis……Page 378
IV. SUMMARY AND DISCUSSION……Page 384
REFERENCES……Page 385
