Now that the Internet’s basic protocols are more than 30 years old, network scientists are increasingly turning their attention to ad hoc networks — communications networks set up, on the fly, by wireless devices — where unsolved problems still abound. Most theoretical analyses of ad hoc networks have assumed that the communications links within the network are stable. But that often isn’t the case with real-world wireless devices — as anyone who’s used a cellphone knows.
At the Association for Computing Machinery’s Symposium on Principles of Distributed Computing in July, researchers from the Theory of Distributed Systems Group at MIT’s Computer Science and Artificial Intelligence Laboratory (graduate student Mohsen Ghaffari, Professor Nancy Lynch, and former group member Cal Newport) presented a new framework for analyzing ad hoc networks in which the quality of the communications links fluctuates. Within that framework, they provide mathematical bounds on the efficiency with which messages can propagate through the network, and they describe new algorithms that can achieve maximal efficiency.
“When people start designing theoretical algorithms, they tend to rely too heavily on the specific assumptions of the models. So the algorithms tend to be unrealistic and fragile.” In the past, some researchers have tried to model the unreliability of network links as random fluctuations. “But if you assume real randomness, then you can count on the randomness,” Lynch says. Continue reading at MIT News.