Learning with Recurrent Neural Networks

Folding networks, a generalisation of recurrent neural networks to tree-structured inputs, are investigated as a mechanism for learning regularities on classical symbolic data. The architecture, the training mechanism, and several applications in different areas are explained. Afterwards a...
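The core idea behind folding networks — recursively compressing a labelled tree into one fixed-size vector by applying a single shared network at every node — can be sketched as follows. This is a minimal illustration with arbitrary dimensions and a made-up `encode` helper, not the book's exact formulation:

```python
import numpy as np

def encode(tree, W, y0):
    """Fold a binary tree (label, left, right) into a fixed-size vector.

    At each node, one shared layer maps the concatenation
    [label; code(left); code(right)] to that node's code;
    empty subtrees are represented by the initial context y0.
    """
    if tree is None:
        return y0
    label, left, right = tree
    x = np.concatenate([[label], encode(left, W, y0), encode(right, W, y0)])
    return np.tanh(W @ x)

rng = np.random.default_rng(0)
dim = 4                                      # code dimensionality (arbitrary)
W = rng.standard_normal((dim, 1 + 2 * dim))  # one weight matrix, shared across nodes
y0 = np.zeros(dim)                           # initial context for empty subtrees

# A small labelled binary tree: root 1.0 with leaves 2.0 and 3.0.
tree = (1.0, (2.0, None, None), (3.0, None, None))
code = encode(tree, W, y0)   # fixed-size vector, regardless of tree shape
print(code.shape)            # (4,)
```

Because the same weights are reused at every node, trees of any shape and depth map into the same 4-dimensional code space; a downstream classifier can then be trained on the root's code.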


Bibliographic Details
Main Author: Hammer, Barbara (Author, http://id.loc.gov/vocabulary/relators/aut)
Corporate Author: SpringerLink (Online service)
Format: Electronic eBook
Language: English
Published: London : Springer London : Imprint: Springer, 2000.
Edition: 1st ed. 2000.
Series: Lecture Notes in Control and Information Sciences, 254
Online Access: Full Text via HEAL-Link
Table of Contents:
  • Introduction, Recurrent and Folding Networks: Definitions, Training, Background, Applications
  • Approximation Ability: Foundations, Approximation in Probability, Approximation in the Maximum Norm, Discussions and Open Questions
  • Learnability: The Learning Scenario, PAC Learnability, Bounds on the VC-dimension of Folding Networks, Consequences for Learnability, Lower Bounds for the LRAAM, Discussion and Open Questions
  • Complexity: The Loading Problem, The Perceptron Case, The Sigmoidal Case, Discussion and Open Questions
  • Conclusion.