I am an Associate Professor of Computer Science at Stony Brook University, where I direct the Computer Architecture Stony Brook (COMPAS) Lab. Prior to joining Stony Brook, I completed my Ph.D. at Carnegie Mellon University (CMU) under the supervision of Babak Falsafi. While completing my dissertation, I spent several years working remotely from École Polytechnique Fédérale de Lausanne (EPFL).
My research interests are in the area of computer architecture, with emphasis on the design of server systems. I work on the entire computing stack, from server software and operating systems, to networks and processor microarchitecture. My current research projects include FPGA accelerator integration into server environments (e.g., Intel HARP, Microsoft Catapult, and Amazon F1), FPGA programmability (e.g., virtual memory and high-level synthesis), accelerators for machine learning (e.g., transformers and convolutional neural networks), efficient network processing and software-defined networking, speculative performance and energy-enhancing techniques for high-performance processors, and programming models and mechanisms for emerging memory technologies (e.g., HBM and 3D XPoint).
If you are a PhD student at Stony Brook and want to work with me, please send me an email to arrange an appointment.
2014
[11] A Case for Specialized Processors for Scale-Out Workloads, In IEEE Micro's Top Picks, 2014. (original at ASPLOS'12) [bib] [pdf]
2012
[10] Quantifying the Mismatch between Emerging Scale-Out Applications and Modern Processors, In ACM Transactions on Computer Systems (TOCS), ACM, volume 30, 2012. [bib] [pdf]
[9] Clearing the Clouds: A Study of Emerging Scale-out Workloads on Modern Hardware, In 17th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2012. (recognized as Best Paper by the program committee, recognized as Top Pick of 2013 by IEEE Micro, and received the ACM SIGARCH/SIGPLAN/SIGOPS ASPLOS 2023 Influential Paper Award (test-of-time)) [bib] [pdf]
2011
[8] Toward Dark Silicon in Servers, In IEEE Micro, volume 31, 2011. [bib] [pdf]
2010
[7] Near-Optimal Cache Block Placement with Reactive Nonuniform Cache Architectures, In IEEE Micro's Top Picks, volume 30, 2010. (original at ISCA'09) [bib] [pdf]
[6] Making Address-Correlated Prefetching Practical, In IEEE Micro's Top Picks, volume 30, 2010. (original at HPCA'09) [bib] [pdf]
2009
[5] Reactive NUCA: Near-Optimal Block Placement and Replication in Distributed Caches, In 36th International Symposium on Computer Architecture (ISCA), 2009. (recognized as Top Pick of 2009 by IEEE Micro) [bib] [pdf]
[4] Practical Off-Chip Meta-Data for Temporal Memory Streaming, In 15th International Symposium on High Performance Computer Architecture (HPCA), 2009. (recognized as Top Pick of 2009 by IEEE Micro) [bib] [pdf]
2008
[3] Temporal Instruction Fetch Streaming, In 41st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2008. [bib] [pdf]
[2] Temporal Streams in Commercial Server Applications, In 2008 IEEE International Symposium on Workload Characterization (IISWC), 2008. [bib] [pdf]
2006
[1] SimFlex: Statistical Sampling of Computer System Simulation, In IEEE Micro, volume 26, 2006. [bib] [pdf]
Computer architecture, with particular emphasis on the design of efficient server systems. Most recently, my main focus has been on Machine Learning Accelerators, developing hardware techniques to enable fast and efficient implementations of deep learning, and making FPGA-based accelerators more practical and easier to program. More broadly, my work seeks to understand the fundamental properties and interactions of application software, operating systems, networks, processor microarchitecture, and datacenter dynamics, to enable software and hardware co-design of high-performance, power-efficient, and compact servers.
These days, it seems like everyone's favorite hobby is traveling. Below is a map showing the countries I have visited.
If you need to speak with me, please feel free to drop by my office at any time. However, to ensure that I will be there and available, it's always best to send an email ahead of your visit.
If you prefer to explicitly schedule an appointment, please send me an email. You can check my general availability by consulting my calendar.
March 13, 2025: The MDA funds our work toward Energy Efficient and Fault Tolerant Acceleration of Deep Neural Networks.
December 2, 2024: A Case for Hardware Memoization in Server CPUs will appear in CAL.
October 10, 2024: Ready or Not, Here I Come: Characterizing the Security of Prematurely-public Web Applications will appear at ACSAC'24.
September 6, 2024: Xipeng Shen and I will be serving as co-Program Chairs for the International Conference on Supercomputing (ICS'25). Please submit your best work!
June 1, 2024: The SUNY-IBM AI Research Alliance funds our work on MLISA, an instruction set architecture extension for AI Accelerators.
April 26, 2024: Our paper NUCAlloc: Fine-Grained Block Placement in Hashed Last-Level NUCA Caches will appear at ICS'24.