Lin Xiao, Stephen Boyd, and Sanjay Lall
We consider a sensor network in which each sensor takes measurements, at various times, of some unknown parameters, corrupted by independent Gaussian noise. Each node can take a finite or infinite number of measurements, at arbitrary times (i.e., asynchronously). We propose a space-time diffusion scheme that relies only on peer-to-peer communication and allows every node to asymptotically compute the global maximum-likelihood estimate of the unknown parameters. At each iteration, information is diffused across the network by a temporal update step and a spatial update step. Both steps update each node’s state by a weighted average of its current value and locally available data: new measurements for the temporal update, and neighbors’ data for the spatial update. At any time, any node can compute a local weighted least-squares estimate of the unknown parameters, which converges to the global maximum-likelihood solution. With an infinite number of measurements, these estimates converge to the true parameter values in the mean-square sense. We show that this scheme is robust to unreliable communication links and works in networks with dynamically changing topology.
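The two-step iteration described above can be illustrated with a minimal sketch. This is not the paper's exact algorithm, but a simplified scalar instance under assumed conditions: a fixed ring network, unit-variance noise, one measurement per node per step, and uniform (doubly stochastic) averaging weights; all names and parameters here are illustrative.

```python
import random

random.seed(0)
theta = 3.0    # unknown scalar parameter (hypothetical value)
n = 10         # nodes arranged on a ring; neighbors of i are i-1, i+1
steps = 200

# Each node i holds a pair (q_i, s_i): accumulated measurement weight
# and weighted measurement sum. Its local estimate is s_i / q_i.
q = [0.0] * n
s = [0.0] * n

for t in range(steps):
    # Temporal update: each node folds a new noisy measurement into its
    # state (unit weight per sample, since noise variances are equal).
    for i in range(n):
        y = theta + random.gauss(0.0, 1.0)
        q[i] += 1.0
        s[i] += y

    # Spatial update: each node averages its state with its neighbors'.
    # Uniform 1/3 weights on a ring are symmetric and doubly stochastic,
    # so repeated averaging drives all states toward the network average.
    q_new, s_new = q[:], s[:]
    for i in range(n):
        nbrs = [(i - 1) % n, (i + 1) % n]
        q_new[i] = (q[i] + sum(q[j] for j in nbrs)) / 3
        s_new[i] = (s[i] + sum(s[j] for j in nbrs)) / 3
    q, s = q_new, s_new

# Each node's local weighted least-squares estimate s_i / q_i tracks the
# global sample mean, which approaches theta as measurements accumulate.
estimates = [s[i] / q[i] for i in range(n)]
```

In this scalar, equal-variance setting the global maximum-likelihood estimate reduces to the sample mean of all measurements, so the local estimates should both agree across nodes (consensus) and approach `theta`.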
Published in: Proceedings of the Fifth International Conference on Information Processing in Sensor Networks (IPSN 2006)
Publisher: Association for Computing Machinery, Inc.
Copyright © 2007 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or firstname.lastname@example.org. The definitive version of this paper can be found at ACM’s Digital Library, http://www.acm.org/dl/.