Higher Dimensional Consensus: Learning in Large-Scale Networks - Computer Science > Information Theory

Abstract: The paper presents higher dimension consensus (HDC) for large-scale networks. HDC generalizes the well-known average-consensus algorithm. It divides the nodes of the large-scale network into anchors and sensors. Anchors are nodes whose states are fixed over the HDC iterations, whereas sensors are nodes that update their states as a linear combination of the neighboring states. Under appropriate conditions, we show that the sensor states converge to a linear combination of the anchor states. Through the concept of anchors, HDC captures in a unified framework several interesting network tasks, including distributed sensor localization, leader-follower, distributed Jacobi to solve linear systems of algebraic equations, and, of course, average-consensus. In many network applications, it is of interest to learn the weights of the distributed linear algorithm so that the sensors converge to a desired state. We term this inverse problem the HDC learning problem. We pose learning in HDC as a constrained non-convex optimization problem, which we cast in the framework of multi-objective optimization (MOP) and to which we apply Pareto optimality. We prove analytically relevant properties of the MOP solutions and of the Pareto front, from which we derive the solution to learning in HDC. Finally, the paper shows how the MOP approach resolves interesting tradeoffs (speed of convergence versus quality of the final state) arising in learning in HDC in resource-constrained networks.
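To make the anchor/sensor update concrete, here is a minimal numerical sketch of an HDC-style iteration. It is a toy instance under stated assumptions: the weight matrices P and B are random placeholders (scaled so the sensor-to-sensor part is stable), not the learned weights the paper obtains from the MOP, and a real network would restrict nonzero weights to actual neighbors, which this dense toy ignores.

```python
import numpy as np

# Minimal sketch of an HDC-style iteration (illustrative weights, not the
# learned weights the paper derives via multi-objective optimization).
# Anchors keep fixed states u; sensors iterate
#     x(t+1) = P @ x(t) + B @ u,
# where P holds sensor-to-sensor weights and B sensor-to-anchor weights.
# If the spectral radius of P is below 1, x(t) converges to
# (I - P)^{-1} B u, i.e. a linear combination of the anchor states.

rng = np.random.default_rng(0)
n_sensors, n_anchors = 5, 2

u = rng.standard_normal(n_anchors)           # fixed anchor states
P = rng.random((n_sensors, n_sensors))       # placeholder sensor weights
P *= 0.9 / max(abs(np.linalg.eigvals(P)))    # force spectral radius < 1
B = rng.random((n_sensors, n_anchors))       # placeholder anchor weights

x = np.zeros(n_sensors)                      # initial sensor states
for _ in range(500):                         # HDC iterations
    x = P @ x + B @ u

limit = np.linalg.solve(np.eye(n_sensors) - P, B @ u)
print(np.allclose(x, limit))                 # True: sensors reached the limit
```

The scaling step stands in for the convergence condition: once the sensor-to-sensor weight matrix is stable, where the iteration ends up depends only on the weights and the anchor states, which is why learning those weights (the HDC learning problem) determines the sensors' final state.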



Authors: Usman A. Khan, Soummya Kar, Jose M. F. Moura

Source: https://arxiv.org/
