A Rate of Convergence for Learning Theory with Consensus

Date
2015-02-04
Publisher
Virginia Tech
Abstract

This thesis poses and solves a distribution-free learning problem with consensus that arises in the study of estimation and control strategies for distributed sensor networks. Each node $i$, for $i = 1, \ldots, n$, of the sensor network collects independent and identically distributed local measurements $\{z^i\} := \{z^i_j\}_{j \in \mathbb{N}} := \{(x^i_j, y^i_j)\}_{j \in \mathbb{N}} \subseteq X \times Y =: Z$ generated by a probability measure $\rho^i$ on $Z$. Each node $i$ constructs a sequence of estimates $\{f^i_k\}_{k \in \mathbb{N}}$ from its local measurements $\{z^i\}$ and from information functionals whose values are exchanged with other nodes as specified by the communication graph $G$ of the network. The optimal estimate of the distribution-free learning problem with consensus is cast as a saddle point problem that characterizes the consensus-constrained optimal estimate. This thesis introduces a two-stage learning dynamic in which local estimation is carried out via local least squares approximations based on wavelet constructions, and information exchange is associated with the Lagrange multipliers of the saddle point problem. Rates of convergence for the two-stage learning dynamic are derived from recent probabilistic bounds on the wavelet approximation of regressor functions.
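
The two-stage dynamic summarized above pairs local least-squares estimation at each node with a multiplier-based information exchange over the communication graph. The sketch below is only a minimal illustration of that general pattern, not the thesis's algorithm: a Fourier feature basis stands in for the wavelet construction, a small ridge term is added for numerical stability, and the dual-ascent step size, function names, and synthetic data are all illustrative assumptions.

import numpy as np


def design_matrix(x, m=8):
    """Feature matrix with rows phi(x_j); a Fourier basis stands in for the
    wavelet construction used in the thesis (an illustrative assumption)."""
    k = np.arange(1, m + 1)
    return np.hstack([np.ones((len(x), 1)),
                      np.cos(np.outer(x, k)),
                      np.sin(np.outer(x, k))])


def consensus_learning(data, edges, m=8, ridge=1e-3, step=0.05, iters=500):
    """data: list of (x_i, y_i) local samples, one pair per node.
    edges: list of (i, j) pairs defining the communication graph G.
    Returns per-node coefficient vectors that approximately agree."""
    n = len(data)
    Phi = [design_matrix(x, m) for x, _ in data]
    d = Phi[0].shape[1]
    # Regularized normal equations for each node's local least-squares fit.
    A = [P.T @ P + ridge * np.eye(d) for P in Phi]
    b = [P.T @ y for P, (_, y) in zip(Phi, data)]
    lam = {e: np.zeros(d) for e in edges}      # one multiplier per edge of G
    c = [np.zeros(d) for _ in range(n)]
    for _ in range(iters):
        # Stage 1: local estimation given the current multipliers.
        g = [np.zeros(d) for _ in range(n)]
        for (i, j), l in lam.items():
            g[i] += l                          # d/dc_i of lam^T (c_i - c_j)
            g[j] -= l
        c = [np.linalg.solve(A[i], b[i] - 0.5 * g[i]) for i in range(n)]
        # Stage 2: information exchange -- dual ascent on the multipliers
        # attached to the consensus constraints c_i = c_j for edges of G.
        for (i, j) in edges:
            lam[(i, j)] += step * (c[i] - c[j])
    return c


if __name__ == "__main__":
    # Three nodes sample the same regression function with independent noise.
    rng = np.random.default_rng(0)
    data = []
    for _ in range(3):
        x = rng.uniform(0.0, np.pi, 40)
        data.append((x, np.sin(3 * x) + 0.1 * rng.normal(size=40)))
    coeffs = consensus_learning(data, edges=[(0, 1), (1, 2)])  # path graph
    print(np.max(np.abs(coeffs[0] - coeffs[2])))  # small: nodes nearly agree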

Keywords
Learning theory, infinite dimensional estimation, convergence rate, consensus, communication network