News
I gave an invited talk on OptiNIC at the Open Compute Project (OCP) Time Appliances Project (TAP).
I will be serving on the Program Committees for APNet 2026, ICNP 2026, and NetFabAI 2026.
I presented OptiNIC at the DARPA/SRC ACE Center for Evolvable Computing Industry Liaison Meeting.
I successfully defended my Ph.D. in Computer Science at Purdue University, concluding my doctoral work on systems, networks, and AI/ML infrastructure.
I received the Broadcom Research Award for my work in systems and networking for large-scale AI.
Our paper, “Reimagining RDMA Through the Lens of ML,” was accepted to IEEE Computer Architecture Letters (CAL), presenting a new perspective on RDMA architectures for ML workloads.
I presented OptiReduce / Ultima at the DARPA/SRC CUBIC Center for Ubiquitous Connectivity as an invited research presentation.
I became an Academic Affiliate in the Department of Computer Science and Engineering at the University of Michigan, Ann Arbor.
I presented OptiReduce / Ultima at the DARPA/SRC ACE Center for Evolvable Computing Industry Liaison Meeting.
I presented my work on ultra-low-latency CDN edge applications to Microsoft Research (Intelligent Networked Systems Group) and the Microsoft Azure Front Door team.
I joined Microsoft Research in Summer 2025, working on ultra-low-latency applications for Azure Front Door (CDN edge).
The OptiReduce project website is live at optireduce.github.io.
OptiReduce introduces a resilient AllReduce framework that uses bounded-loss reliability to improve tail performance for distributed deep learning. I presented it at NSDI 2025 in Philadelphia, PA.
I was selected to serve on the NSDI 2025 pre-review task force.
I received the Google Research Scholar Award for my work on tail-optimal collective communication for distributed training.
I presented Ultima at the IBM/IEEE AI Compute Symposium (AICS ’23) at the IBM T. J. Watson Research Center in Yorktown Heights, NY.
I presented a live demo of Ultima at the FABRIC KNIT 7 community workshop (Sept 27, 2023) and shared early results on resilient collective communication.
I gave a research talk at HPE’s Networking and Distributed Systems Lab on LLM parallelism strategies, the resulting communication patterns, and deployment-aware design choices.
I presented an analysis of RoCE vs. InfiniBand performance gaps for large-scale ML workloads to NVIDIA’s Mellanox networking group.
Ultima was accepted to the OSDI 2022 poster session, presenting early ideas on resilient and tail-optimal collective communication for distributed deep learning.
FuzzUSB was presented at IEEE S&P 2022, introducing hybrid stateful fuzzing for USB gadget stacks.
I presented my work on “Constructing the Face of Network Data” at the SIGCOMM 2021 poster session, exploring new methods for network traffic analysis and security.
I started my Ph.D. in Computer Science at Purdue University, focusing on systems, networks, and AI/ML infrastructure research.