Prof. Kin K. Leung

Member of Academia Europaea, IEEE Fellow

Imperial College London, United Kingdom

Kin K. Leung received his B.S. degree from the Chinese University of Hong Kong, and his M.S. and Ph.D. degrees from the University of California, Los Angeles. He joined AT&T Bell Labs in New Jersey in 1986 and worked at its successor companies until 2004. Since then, he has been the Tanaka Chair Professor in the Electrical and Electronic Engineering (EEE) and Computing Departments at Imperial College London. He serves as the Head of the Communications and Signal Processing Group in the EEE Department at Imperial. His current research focuses on optimization and machine-learning techniques for the design and control of large-scale communication, computer, and sensor networks. He also works on multi-antenna and cross-layer designs for wireless networks.

He is a Fellow of the Royal Academy of Engineering (2022), an IEEE Fellow (2001), an IET Fellow (2022), and a member of Academia Europaea (2012). He received the Distinguished Member of Technical Staff Award from AT&T Bell Labs (1994) and the Royal Society Wolfson Research Merit Award (2004-09). Jointly with his collaborators, he received the IEEE Communications Society (ComSoc) Leonard G. Abraham Prize (2021), the IEEE ComSoc Best Survey Paper Award (2022), the U.S.–UK Science and Technology Stocktake Award (2021), the Lanchester Prize Honorable Mention Award (1997), and several best conference paper awards. He currently serves as an IEEE ComSoc Distinguished Lecturer (2022-23). He was a member (2009-11) and the chairman (2012-15) of the IEEE Fellow Evaluation Committee for ComSoc. He has served as a guest editor and editor for 10 IEEE and ACM journals. He currently chairs the Steering Committee of the IEEE Transactions on Mobile Computing and is an editor for ACM Computing Surveys and the International Journal of Sensor Networks.

Speech Title: Optimization by Machine Learning for Communication and Computer Infrastructures

Abstract: Optimization techniques are widely used to allocate and share limited resources among competing demands in communication and computer infrastructures. The speaker will begin by showing that the well-known Transmission Control Protocol (TCP) can be viewed as a distributed solution that achieves the optimal allocation of network bandwidth. Unfortunately, factors such as multiple grades of service quality, variable transmission power, and tradeoffs between communication and computation often make the resource-allocation optimization problem non-convex. New distributed solution techniques are needed to solve these problems.
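The TCP-as-optimizer view is usually formalized as network utility maximization (NUM) solved by dual decomposition: each link computes a congestion price from its local excess demand, and each source sets its rate purely from the total price along its own path. The following is a minimal sketch of that classical framework, assuming a hypothetical two-link, three-flow topology and logarithmic utilities; it is an illustration of the idea, not a model from the talk.

```python
# Sketch of network utility maximization (NUM) via dual gradient descent,
# the framework in which TCP-style congestion control emerges as a
# distributed bandwidth allocator. Topology, capacities, and the step
# size below are hypothetical.

import numpy as np

# Routing matrix R[l, s] = 1 if flow s traverses link l.
R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
capacity = np.array([1.0, 2.0])   # link capacities
prices = np.zeros(2)              # dual variables (congestion prices)
step = 0.05

for _ in range(2000):
    # Each source maximizes log(x_s) - x_s * (path price) independently,
    # which for log utility gives x_s = 1 / (sum of prices on its path).
    path_price = R.T @ prices
    rates = np.minimum(1.0 / np.maximum(path_price, 1e-6), 10.0)
    # Each link updates its price from local excess demand only.
    prices = np.maximum(prices + step * (R @ rates - capacity), 0.0)

print(np.round(rates, 2))  # converges to the proportionally fair rates
```

Note that no node ever sees the global problem: links react to their own load and sources to their own path price, which is exactly the sense in which TCP-like control is a distributed optimizer.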

As an illustrative example, the speaker will consider in-network data processing in sensor networks, where data are aggregated (fused) as they are transferred toward the end user. Finding the optimal solution for this distributed processing problem is NP-hard, but for specific settings the problem leads to a distributed framework that achieves the optimal tradeoff between communication and computation costs.
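The communication/computation tradeoff can be made concrete with a toy cost model. The sketch below assumes a hypothetical linear chain of sensors and made-up per-hop and per-fusion costs; it is not the speaker's formulation, only an illustration of why fusing along the path can win or lose.

```python
# Toy cost model (hypothetical parameters) for in-network fusion on a
# chain of n sensors, where sensor i sits i hops from the sink and
# produces one unit of data.

def raw_forwarding(n, comm=1.0):
    """Every sensor ships its raw unit all the way to the sink:
    sensor i pays comm * i, so total cost grows quadratically in n."""
    return comm * sum(range(1, n + 1))

def full_fusion(n, comm=1.0, comp=3.0):
    """Each relay fuses everything it holds into one unit before
    forwarding: one unit per hop over n hops, plus n - 1 fusions."""
    return comm * n + comp * (n - 1)

n = 10
print(raw_forwarding(n), full_fusion(n))  # fusion wins for this setting
```

With cheap computation and many hops, fusion dominates; with expensive fusion and few sensors, raw forwarding is cheaper. The general problem of placing fusion operators optimally in an arbitrary network is the NP-hard version referred to above.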

As discussed above, gradient-based iterative algorithms are commonly used to solve these optimization problems, and much research focuses on improving their convergence. However, whenever the system parameters change, the iterative methods must be rerun to obtain a new solution. The speaker will present a new machine-learning method that uses two Coupled Long Short-Term Memory (CLSTM) networks to quickly and robustly produce optimal or near-optimal solutions to constrained optimization problems over a range of system parameters.
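For concreteness, a textbook gradient-based solver of the kind referred to above might look like the projected-gradient sketch below; the toy quadratic problem, box constraints, and parameters are hypothetical. Each new parameter vector `b` triggers a full re-solve from scratch, which is the inefficiency a learned solver aims to remove by mapping parameters directly to (near-)optimal solutions.

```python
# Minimal projected gradient descent for a toy constrained problem:
# minimize 0.5 x'Qx - b'x subject to lo <= x <= hi. Problem data are
# hypothetical, not the speaker's formulation.

import numpy as np

def solve(Q, b, lo=0.0, hi=1.0, step=0.1, iters=500):
    x = np.zeros(len(b))
    for _ in range(iters):
        grad = Q @ x - b                       # gradient of the objective
        x = np.clip(x - step * grad, lo, hi)   # step, then project onto box
    return x

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])                     # fixed positive-definite matrix
for b in (np.array([1.0, 0.2]), np.array([0.2, 1.5])):
    print(solve(Q, b))                         # a fresh run per parameter b
```

The loop at the bottom is the point: two parameter settings mean two complete runs of the iteration, and a deployed system whose parameters drift continuously would re-solve constantly.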