      Chapter

      Convergence of Time-Invariant Dynamical Systems
      By Richard M. Golden
      Book Statistical Machine Learning


      Edition 1st Edition
      First Published 2020
      Imprint Chapman and Hall/CRC
      Pages 18
      eBook ISBN 9781351051507

      ABSTRACT

      The objective of Chapter 6 is to provide sufficient conditions ensuring that time-invariant discrete-time and time-invariant continuous-time dynamical systems converge to either an isolated critical point or a collection of critical points. Such dynamical systems are often used to represent inference algorithms and batch learning algorithms. The concept of a Lyapunov function, a function that does not increase as the dynamical system evolves, is then introduced for analyzing the dynamics of discrete-time and continuous-time dynamical systems. The Finite State Space Invariant Set Theorem is introduced for proving convergence of a discrete-time dynamical system on a finite state space. Next, LaSalle's Invariant Set Theorem is introduced, which provides sufficient conditions for the trajectories of a discrete-time or continuous-time dynamical system to converge to either an individual critical point or a collection of critical points. These convergence theorems are then used to investigate the asymptotic behavior of commonly used batch learning algorithms such as gradient descent and ADAGRAD, as well as clustering algorithms, the ICM (iterated conditional modes) algorithm, the Hopfield (1982) algorithm, and the BSB (Brain-State-in-a-Box) algorithm.
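
      The style of analysis described above applies, for instance, to batch gradient descent viewed as a time-invariant discrete-time dynamical system x(t+1) = x(t) - γ∇ℓ(x(t)). The Python sketch below is illustrative only and not taken from the book; the quadratic objective, step size, and variable names are assumptions chosen for concreteness. It shows the objective itself behaving as a Lyapunov-type function: for a sufficiently small step size it does not increase along the trajectory, which is the descent property the invariant set theorems exploit.

import numpy as np

# Illustrative smooth objective (an assumption, not from the book):
# a convex quadratic ell(x) = 0.5 * x' A x - b' x with unique critical point A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def ell(x):
    return 0.5 * x @ A @ x - b @ x

def grad_ell(x):
    return A @ x - b

def gradient_descent_map(x, step_size=0.1):
    # Time-invariant discrete-time dynamical system: x(t+1) = f(x(t)).
    return x - step_size * grad_ell(x)

# Iterate the map and verify the Lyapunov-style descent property:
# ell does not increase along the trajectory for this step size.
x = np.array([5.0, -3.0])
values = [ell(x)]
for _ in range(200):
    x = gradient_descent_map(x)
    values.append(ell(x))

assert all(v1 <= v0 + 1e-12 for v0, v1 in zip(values, values[1:]))
print("final iterate: ", x)
print("critical point:", np.linalg.solve(A, b))

      Under these assumptions, LaSalle-style reasoning identifies the largest invariant set on which ell fails to decrease, here the single critical point A^{-1} b, as the set to which the trajectory converges.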
