I am an Associate Professor of Computer Science at Stony Brook University. I direct the Computer Architecture Stony Brook (COMPAS) Lab. Prior to joining Stony Brook, I completed my Ph.D. at Carnegie Mellon University (CMU) under the supervision of Babak Falsafi. While completing my dissertation, I spent several years working remotely from École Polytechnique Fédérale de Lausanne (EPFL).
My research interests are in the area of computer architecture, with emphasis on the design of server systems. I work on the entire computing stack, from server software and operating systems, to networks and processor microarchitecture. My current research projects include FPGA accelerator integration into server environments (e.g., Intel HARP, Microsoft Catapult, and Amazon F1), FPGA programmability (e.g., virtual memory and high-level synthesis), accelerators for machine learning (e.g., transformers and convolutional neural networks), efficient network processing and software-defined networking, speculative performance and energy-enhancing techniques for high-performance processors, and programming models and mechanisms for emerging memory technologies (e.g., HBM and 3D XPoint).
If you are a PhD student at Stony Brook and want to work with me, please send me an email to arrange an appointment.
2018

[7] Panning for gold.com: Understanding the Dynamics of Domain Dropcatching. In Proceedings of the ACM Web Conference (WWW), 2018. [bib] [pdf]
[6] Mantis: A Fast, Small, and Exact Large-Scale Sequence Search Index. In 21st Annual International Conference on Research in Computational Molecular Biology (RECOMB), 2018. [bib] [pdf]
[5] Taming the Killer Microsecond. In 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2018. [bib] [pdf]
[4] Impact of Device Parameters on Internet-based Mobile Applications. In 2018 Internet Measurement Conference (IMC), 2018. [bib] [pdf]
[3] Medusa: A Scalable Memory Interconnect for Many-Port DNN Accelerators and Wide DRAM Controller Interfaces. In 28th International Conference on Field Programmable Logic and Applications (FPL), 2018. [bib] [pdf]
[2] FPGASwarm: High Throughput Model Checking Using FPGAs. In 28th International Conference on Field Programmable Logic and Applications (FPL), 2018. [bib] [pdf]
[1] A Full-System VM-HDL Co-Simulation Framework for Servers with PCIe-Connected FPGAs. In 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 2018. [bib] [pdf]
Computer architecture, with particular emphasis on the design of efficient server systems. Most recently, my main focus has been on machine learning accelerators: developing hardware techniques that enable fast and efficient implementations of deep learning, and making FPGA-based accelerators more practical and easier to program. More broadly, my work seeks to understand the fundamental properties and interactions of application software, operating systems, networks, processor microarchitecture, and datacenter dynamics, to enable software and hardware co-design of high-performance, power-efficient, and compact servers.
These days, it seems like everyone's favorite hobby is traveling. Below is a map showing the countries I have visited.
If you need to speak with me, feel free to drop by my office at any time. However, to make sure I will be there and not otherwise occupied, it is always best to send an email ahead of your visit.
If you prefer to explicitly schedule an appointment, please send me an email. You can check my general availability by consulting my calendar.
March 13, 2025: The MDA funds our work toward Energy Efficient and Fault Tolerant Acceleration of Deep Neural Networks.
December 2, 2024: A Case for Hardware Memoization in Server CPUs will appear in CAL.
October 10, 2024: Ready or Not, Here I Come: Characterizing the Security of Prematurely-public Web Applications will appear at ACSAC'24.
September 6, 2024: Xipeng Shen and I will be serving as co-Program Chairs for the International Conference on Supercomputing (ICS'25). Please submit your best work!
June 1, 2024: The SUNY-IBM AI Research Alliance funds our work on MLISA, an instruction set architecture extension for AI Accelerators.
April 26, 2024: Our paper NUCAlloc: Fine-Grained Block Placement in Hashed Last-Level NUCA Caches will appear at ICS'24.