Jia (Kevin) Liu,

Associate Professor of Electrical and Computer Engineering, The Ohio State University


CAREER: Computing-Aware Network Optimization for Efficient Distributed Data Analytics at the Wireless Edge

NSF CNS-2110259
Principal Investigator: Jia (Kevin) Liu

Intellectual Merit

The networking-computing co-optimization research proposed in this project will serve as a foundation for a plethora of data analytics and ML/AI applications demanding high networking and computing performance. Owing to its unique scientific and engineering challenges, this research program draws on strong and holistic expertise in mathematical modeling, optimization, control, queueing theory, and stochastic analysis, as well as deep knowledge of practical ML/AI system operations. The proposed research will not only advance knowledge in the co-design of wireless networking and computing, but also address a critical need in the communications, signal processing, and ML/AI research communities. We pursue a cross-disciplinary understanding of wireless networking and data analytics through a unified research program that consists of developing tractable theoretical models, exploring theoretical performance bounds and limits, and designing practical, low-complexity distributed algorithms and protocols that are easy to implement. Moreover, the networking-computing co-designs in this project will not only directly contribute to distributed ML/AI data analytics, but are also transformative in that they will shed networking insights on the design of general data-intensive computational systems.


Major Activities

  1. Communication-Efficient Distributed Optimization (Communication Efficiency)

    To cope with wireless communication channels that are often capacity-constrained, we focus on information sparsification and coded compression at the physical layer (PHY) of each node in the network to enable communication-efficient optimization algorithms for distributed ML/AI data analytics. We aim to develop low-complexity algorithms that do not compromise convergence speed and are easy to implement in practice. By carefully exploiting the statistical properties of sparsification and compression operators, we strive to develop a series of coded compressed distributed first-order optimization algorithms that achieve the same convergence rate as classical centralized gradient-based methods (hence optimal in the first-order sense) while delivering orders-of-magnitude improvements in communication costs at the PHY layer.

  2. Joint Queueing-Computing Scheduling (Impacts of Latency)

    Having addressed communication efficiency at the PHY layer, in our second thrust we turn our attention to the medium access control (MAC) layer to reconcile network performance optimization (e.g., throughput, delay) with computing convergence efficiency. One important goal of this thrust is to understand the impact of stochastic gradient latency (due to the combined effects of path-loss/fading, interference, computation stragglers, and asynchronism among distributed machines) on the convergence performance of distributed first-order optimization algorithms. Based on the obtained insights, we then propose to design joint queueing-computing scheduling schemes that induce desirable gradient delay distributions, thereby guaranteeing satisfactory convergence performance of the distributed first-order optimization algorithms.

  3. Admission Control and Computing Resource Virtualization (Resource Management)

    Following the first two thrusts on networking-computing co-designs for a single data analytics job, we move further up to the transport layer (TRAN) to consider network admission control and system computing resource virtualization, with the goal of minimizing the total completion time of multiple data analytics jobs arriving randomly at the wireless edge clouds. Job completion time minimization at edge clouds is highly challenging because it involves mixed (generalized) covering and packing constraints, for which even feasibility checking is NP-hard. To address these challenges, we propose a primal-dual online resource virtualization framework coupled with approximation designs that provide strong competitive ratio guarantees.
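To make the first thrust concrete, the following is a minimal sketch, written for this page rather than taken from the project's algorithms, of gradient descent with top-k sparsification and error feedback — the standard device for transmitting only k coordinates per round without sacrificing the uncompressed convergence rate. The quadratic objective and all parameter values are hypothetical.

```python
import numpy as np

def top_k_sparsify(g, k):
    """Keep the k largest-magnitude entries of g; zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def compressed_gd(grad_fn, x0, lr=0.1, k=2, steps=200):
    """Gradient descent where each transmitted gradient is sparsified.

    The residual e (error feedback) re-injects whatever compression
    dropped, which is what preserves the uncompressed convergence rate.
    """
    x, e = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = grad_fn(x) + e           # correct with accumulated residual
        c = top_k_sparsify(g, k)     # only these k entries would be sent
        e = g - c                    # remember the dropped part
        x -= lr * c
    return x

# Toy quadratic f(x) = 0.5 * ||x - b||^2 with minimizer b (hypothetical).
b = np.array([1.0, -2.0, 3.0, 0.5])
x_hat = compressed_gd(lambda x: x - b, np.zeros(4), k=2)
```

The residual `e` is what makes sparsification essentially free in the limit: mass dropped in one round is re-injected in later rounds instead of being lost.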
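The staleness question in the second thrust can be simulated in a few lines. The toy model below (our own illustration, with hypothetical step sizes and delays, not the project's scheduler) applies each gradient `delay` iterations after it was computed, mimicking stragglers and asynchrony; the classical remedy of shrinking the step size with the maximum staleness keeps the iteration convergent.

```python
import numpy as np
from collections import deque

def delayed_gd(grad_fn, x0, lr, delay, steps):
    """Gradient descent where every applied gradient is `delay` steps
    stale, mimicking stragglers and asynchrony among edge devices."""
    x = x0.copy()
    in_flight = deque()                  # gradients still "in the network"
    for _ in range(steps):
        in_flight.append(grad_fn(x))     # a worker starts computing now...
        if len(in_flight) > delay:
            x -= lr * in_flight.popleft()  # ...applied `delay` steps later
    return x

b = np.array([2.0, -1.0])
quad_grad = lambda x: x - b              # f(x) = 0.5 * ||x - b||^2
# Classical guideline: scale the step size down with the maximum staleness.
x_tau = {tau: delayed_gd(quad_grad, np.zeros(2), lr=0.5 / (1 + tau),
                         delay=tau, steps=400)
         for tau in (0, 4, 16)}
```

With the step size left at its delay-free value, large staleness destabilizes the recursion; the 1/(1 + tau) scaling above is the simplest fix, at the cost of slower progress — precisely the trade-off the joint queueing-computing scheduling in this thrust aims to optimize.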
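For the third thrust, the flavor of online primal-dual admission control can be sketched as a threshold rule: accept a job only if its value density beats a capacity "price" that grows exponentially with utilization. The code below is a generic textbook-style sketch (the arrival sequence and the density bounds `rho_min`/`rho_max` are hypothetical), not the virtualization framework developed in the project.

```python
def online_admission(jobs, capacity, rho_min=1.0, rho_max=100.0):
    """Online admission control via a primal-dual threshold rule.

    The dual variable acts as a capacity 'price' that grows exponentially
    with utilization: cheap while the edge cloud is idle, prohibitive near
    saturation. A job is admitted only if its value density (value per
    unit of resource) beats the current price; rho_min and rho_max bound
    the admissible value densities.
    """
    used = 0.0
    admitted = []
    for job_id, (value, demand) in enumerate(jobs):
        price = rho_min * (rho_max / rho_min) ** (used / capacity)
        if used + demand <= capacity and value / demand >= price:
            admitted.append(job_id)
            used += demand
    return admitted, used

# Hypothetical arrival sequence: (value, resource demand) per job.
arrivals = [(50.0, 1.0), (2.0, 1.0), (90.0, 1.0), (1.0, 1.0)]
picked, load = online_admission(arrivals, capacity=3.0)
```

This exponential-pricing rule is the classic recipe behind logarithmic competitive ratios for online packing; the project's setting is harder because covering and packing constraints are mixed.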


Products

  1. Peizhong Ju, Haibo Yang, Jia Liu, Yingbin Liang, and Ness B. Shroff, "Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning?," in Proc. ACM MobiHoc, Athens, Greece, Oct. 2024 (acceptance rate: 24.5%).

  2. Haibo Yang, Peiwen Qiu, Prashant Khanduri, Minghong Fang, and Jia Liu, "Understanding Server-Assisted Federated Learning in the Presence of Incomplete Client Participation," in Proc. ICML, Vienna, Austria, Jul. 2024 (acceptance rate: 27.5%).

  3. Tianchen Zhou*, FNU Hairi*, Haibo Yang, Jia Liu, Tian Tong, Fan Yang, Michinari Momma, and Yan Gao, "Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning," in Proc. ICML, Vienna, Austria, Jul. 2024 (acceptance rate: 27.5%).

  4. Minghong Fang, Zifan Zhang, Hairi FNU, Prashant Khanduri, Jia Liu, Songtao Lu, Neil Gong, and Yuchen Liu, "Toward Byzantine-Robust Decentralized Federated Learning," in Proc. ACM CCS, Salt Lake City, UT, Oct. 2024 (acceptance rate: 19%).

  5. Zhuqing Liu, Xin Zhang, Jia Liu, Zhengyuan Zhu, and Songtao Lu, "PILOT: An O(1/T)-Convergent Approach for Policy Evaluation with Nonlinear Function Approximation," in Proc. ICLR, Vienna, Austria, May 2024 (Spotlight Presentation, spotlight rate: 5%, acceptance rate: 31%).

  6. FNU Hairi, Zifan Zhang, and Jia Liu, "Sample and Communication Efficient Fully Decentralized MARL Policy Evaluation via a New Approach: Local TD Update," in Proc. ACM AAMAS, Auckland, New Zealand, May 2024 (acceptance rate: 25%).

  7. Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, and Michinari Momma, "Federated Multi-Objective Learning," in Proc. NeurIPS, New Orleans, LA, Dec. 2023 (acceptance rate: 26.1%).

  8. Zhuqing Liu, Xin Zhang, Songtao Lu, and Jia Liu, "PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities," in Proc. ACM MobiHoc, Washington, DC, Oct. 2023 (acceptance rate: 21.9%).

  9. Zhuqing Liu, Xin Zhang, Prashant Khanduri, Songtao Lu, and Jia Liu, "Prometheus: Taming Sample and Communication Complexities in Constrained Decentralized Stochastic Bilevel Learning," in Proc. ICML, Honolulu, HI, Jul. 2023 (acceptance rate: 27.9%).

  10. Prashant Khanduri, Ioannis Tsaknakis, Yihua Zhang, Jia Liu, Sijia Liu, Jiawei Zhang, and Mingyi Hong, "Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach," in Proc. ICML, Honolulu, HI, Jul. 2023 (acceptance rate: 27.9%).

  11. Peiwen Qiu, Yining Li, Zhuqing Liu, Prashant Khanduri, Jia Liu, Ness B. Shroff, Elizabeth S. Bentley, and Kurt Turck, "DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization," in Proc. IEEE INFOCOM, New York City, NY, May 2023 (acceptance rate: 19.2%).

  12. Sen Lin, Ming Shi, Anish Arora, Raef Bassily, Elisa Bertino, Constantine Caramanis, Kaushik Chowdhury, Eylem Ekici, Atilla Eryilmaz, Stratis Ioannidis, Nan Jiang, Gauri Joshi, Jim Kurose, Yingbin Liang, Zhiqiang Lin, Jia Liu, Mingyan Liu, Tommaso Melodia, Aryan Mokhtari, Rob Nowak, Sewoong Oh, Srini Parthasarathy, Chunyi Peng, Hulya Seferoglu, Ness Shroff, Sanjay Shakkottai, Kannan Srinivasan, Ameet Talwalkar, Aylin Yener and Lei Ying, "Leveraging Synergies Between AI and Networking to Build Next Generation Edge Networks," in Proc. IEEE International Conference on Collaboration and Internet Computing (CIC), Virtual, Dec. 2022.

  13. Haibo Yang, Peiwen Qiu, Prashant Khanduri, and Jia Liu, "With a Little Help from My Friend: Server-Aided Federated Learning with Partial Client Participation," in Proc. NeurIPS Workshop on Federated Learning: Recent Advances and New Challenges, (FL-NeurIPS'22), New Orleans, LA, Dec. 2022.

  14. Minghong Fang, Jia Liu, Neil Gong, and Elizabeth S. Bentley, "AFLGuard: Byzantine-robust Asynchronous Federated Learning," in Proc. ACM ACSAC, Austin, TX, Dec. 2022 (acceptance rate: 24.1%).

  15. Haibo Yang, Peiwen Qiu, and Jia Liu, "Taming Fat-Tailed (“Heavier-Tailed” with Potentially Infinite Variance) Noise in Federated Learning," in Proc. NeurIPS, New Orleans, LA, Dec. 2022 (acceptance rate: 25.6%).

  16. Haibo Yang, Zhuqing Liu, Xin Zhang, and Jia Liu, "SAGDA: Achieving O(ε-2) Communication Complexity in Federated Min-Max Learning," in Proc. NeurIPS, New Orleans, LA, Dec. 2022 (acceptance rate: 25.6%).

  17. Songtao Lu, Siliang Zeng, Xiaodong Cui, Mark S. Squillante, Lior Horesh, Brian Kingsbury, Jia Liu, and Mingyi Hong, "A Stochastic Linearized Augmented Lagrangian Method for Decentralized Bilevel Optimization," in Proc. NeurIPS, New Orleans, LA, Dec. 2022 (acceptance rate: 25.6%).

  18. Menglu Yu, Bo Ji, Hridesh Rajan, and Jia Liu, "On Scheduling Ring-All-Reduce Learning Jobs in Multi-Tenant GPU Clusters with Communication Contention," in Proc. ACM MobiHoc, Seoul, South Korea, Oct. 2022 (acceptance rate: 19.8%).

  19. Zhuqing Liu, Xin Zhang, Prashant Khanduri, Songtao Lu, and Jia Liu, "INTERACT: Achieving Low Sample and Communication Complexities in Decentralized Bilevel Learning over Networks," in Proc. ACM MobiHoc, Seoul, South Korea, Oct. 2022 (acceptance rate: 19.8%).

  20. Zhuqing Liu, Xin Zhang, and Jia Liu, "SYNTHESIS: A Semi-Asynchronous Path-Integrated Stochastic Gradient Method for Distributed Learning in Computing Clusters," in Proc. ACM MobiHoc, Seoul, South Korea, Oct. 2022 (acceptance rate: 19.8%).

  21. Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu, and Zhengyuan Zhu, "NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data," in Proc. ACM MobiHoc, Seoul, South Korea, Oct. 2022 (acceptance rate: 19.8%).

  22. Jinmiao Fu, Shaoyuan Xu, Huidong Liu, Yang Liu, Ning Xie, Chien-Chih Wang, Bryan Wang, Jia Liu, and Yi Sun, "CMA-CLIP: Cross-Modality Attention CLIP for Text-Image Classification," in Proc. IEEE ICIP, Bordeaux, France, Oct. 2022.

  23. Haibo Yang, Xin Zhang, Prashant Khanduri, and Jia Liu, "Anarchic Federated Learning," in Proc. ICML, Baltimore, MD, Jul. 2022 (Long Oral Presentation, long oral presentation rate: 2%, acceptance rate: 21.9%).

  24. Michinari Momma, Chaosheng Dong, and Jia Liu, "A Multi-Objective / Multi-Task Learning Framework Induced by Pareto Stationarity," in Proc. ICML, Baltimore, MD, Jul. 2022 (Spotlight Presentation, spotlight rate: 5%, acceptance rate: 21.9%).

  25. Jiayu Mao*, Haibo Yang*, Peiwen Qiu, Jia Liu, and Aylin Yener, "CHARLES: Channel-Quality-Adaptive Over-the-Air Federated Learning over Wireless Networks," in Proc. IEEE SPAWC, Oulu, Finland, Jun. 2022 (Invited Paper).

  26. Haibo Yang, Peiwen Qiu, Jia Liu, and Aylin Yener, "Over-the-Air Federated Learning With Joint Adaptive Computation and Power Control," in Proc. IEEE ISIT, Espoo, Finland, Jun. 2022.

  27. Minghong Fang, Jia Liu, Michinari Momma, and Yi Sun, "FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data," in Proc. ACM SACMAT, Virtual Event, Jun. 2022.

  28. FNU Hairi, Jia Liu, and Songtao Lu, "Finite-Time Convergence and Sample Complexity of Multi-Agent Actor-Critic Reinforcement Learning with Average Reward," in Proc. ICLR, Virtual Event, Apr. 2022 (Spotlight Presentation, spotlight rate: 5%, acceptance rate: 32%).

  29. Tianchen Zhou, Jia Liu, Chaosheng Dong, and Yi Sun, "Bandit Learning with Joint Effect of Incentivized Sampling, Delayed Sampling Feedback, and Self-Reinforcing User Preferences," in Proc. ICLR, Virtual Event, Apr. 2022 (acceptance rate: 32%).

  30. Prashant Khanduri, Haibo Yang, Mingyi Hong, Jia Liu, Hoi To Wai, and Sijia Liu, "Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach," in Proc. ICLR, Virtual Event, Apr. 2022 (acceptance rate: 32%).

  31. Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, and Hongyang Gao, "A Global Convergence Theory for Deep ReLU Implicit Networks via Over-parameterization," in Proc. ICLR, Virtual Event, Apr. 2022 (acceptance rate: 32%).

  32. Menglu Yu, Ye Tian, Bo Ji, Chuan Wu, Hridesh Rajan, and Jia Liu, "GADGET: Online Resource Optimization for Scheduling Ring-All-Reduce Learning Jobs," in Proc. IEEE INFOCOM, Virtual Event, May 2022 (acceptance rate: 19.9%).

  33. Xin Zhang, Zhuqing Liu, Jia Liu, Zhengyuan Zhu, and Songtao Lu, "Taming Communication and Sample Complexities in Decentralized Policy Evaluation for Cooperative Multi-Agent Reinforcement Learning," in Proc. NeurIPS, Virtual Event, Dec. 2021 (acceptance rate: 26%).

  34. Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, and Pramod Varshney, "STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning," in Proc. NeurIPS, Virtual Event, Dec. 2021 (acceptance rate: 26%).

  35. Wenbo Ren, Jia Liu, and Ness B. Shroff, "Sample Complexity Bounds for Active Ranking from Multi-wise Comparisons," in Proc. NeurIPS, Virtual Event, Dec. 2021 (acceptance rate: 26%).

  36. Haibo Yang, Jia Liu, and Elizabeth S. Bentley, "CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning," in Proc. IEEE/IFIP WiOpt, Virtual Event, Oct. 2021.

  37. Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, and Pramod K. Varshney, "Achieving Optimal Sample and Communication Complexities for Non-IID Federated Learning," in Proc. ICML Workshop on Federated Learning for User Privacy and Data Confidentiality (FL-ICML'21), Virtual Event, Jul. 2021.

  38. Fengjiao Li, Jia Liu, and Bo Ji, "Federated Learning with Fair Worker Selection: A Multi-Round Submodular Maximization Approach," in Proc. IEEE MASS, Virtual Event, Oct. 2021 (acceptance rate: 28.3%).

  39. Tianchen Zhou, Jia Liu, Chaosheng Dong, and Jingyuan Deng, "Incentivized Bandit Learning with Self-Reinforcing User Preferences," in Proc. ICML, Virtual Event, Jul. 2021 (Spotlight Presentation, spotlight rate: 5%, acceptance rate: 20.4%).

  40. Xin Zhang, Jia Liu, Zhengyuan Zhu, and Elizabeth S. Bentley, "GT-STORM: Taming Sample, Communication, and Memory Complexities in Decentralized Non-Convex Learning," in Proc. ACM MobiHoc, Shanghai, China, Jul. 2021 (acceptance rate: 20.1%).

  41. Tianxiang Gao, Songtao Lu, Jia Liu, and Chris Chu, "On the Convergence of Randomized Bregman Coordinate Descent for Non-Lipschitz Composite Problems," in Proc. IEEE ICASSP, Virtual Event, Jun. 2021.

  42. Haibo Yang, Minghong Fang, and Jia Liu, "Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning," in Proc. ICLR, Virtual Event, May 2021 (acceptance rate: 28.6%).

  43. Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian and Jia Liu, "Data Poisoning Attacks and Defenses to Crowdsourcing Systems," in Proc. ACM WWW (TheWebConf), Virtual Event, Apr. 2021 (acceptance rate: 20.6%).

  44. Wenbo Ren, Jia Liu, and Ness Shroff, "On Logarithmic Regret for Bandits with Knapsacks," in Proc. IEEE CISS, Special Session on Online Optimization and Learning, Virtual Event, Mar. 2021 (Invited Paper).

  45. Xiaoyu Cao*, Minghong Fang*, Jia Liu, and Neil Gong, "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping," in Proc. NDSS, Virtual Event, Feb. 2021 (*co-primary authors, acceptance rate: 16%).

  46. Xin Zhang, Jia Liu, Zhengyuan Zhu, and Elizabeth S. Bentley, "Low Sample and Communication Complexities in Decentralized Learning: A Triple Hybrid Approach," in Proc. IEEE INFOCOM, Virtual Event, May 2021 (acceptance rate: 19.9%).

  47. Menglu Yu, Chuan Wu, Bo Ji, and Jia Liu, "A Sum-of-Ratios Multi-Dimensional-Knapsack Decomposition for DNN Resource Scheduling," in Proc. IEEE INFOCOM, Virtual Event, May 2021 (acceptance rate: 19.9%).

  48. Xin Zhang, Jia Liu, and Zhengyuan Zhu, "Taming Convergence for Asynchronous Stochastic Gradient Descent with Unbounded Delay in Non-Convex Learning," in Proc. IEEE CDC, Jeju Island, Korea, Dec. 2020.

  49. Xin Zhang, Jia Liu, Zhengyuan Zhu, and Elizabeth Bentley, "Communication-Efficient Network-Distributed Optimization with Differential-Coded Compressors," in Proc. IEEE INFOCOM, Toronto, Canada, Jul. 2020 (acceptance rate: 19.8%).

  50. Xin Zhang, Jia Liu, Zhengyuan Zhu, and Elizabeth Bentley, "Compressed Distributed Gradient Descent: Communication-Efficient Consensus over Networks," in Proc. IEEE INFOCOM, Paris, France, Apr. 2019 (acceptance rate: 19.6%).


Copyright © 2004- Jia (Kevin) Liu. All rights reserved.
Design adapted from TEMPLATED.