Curriculum Vitae
Education
- B.Eng., Huazhong University of Science and Technology, 2012-2016
- Ph.D., Stony Brook University, 2016-present
Work experience
- SDE Intern @ Amazon Web Services (AWS), Redshift AQUA Team, 2020
Projects
- Exploring Sparsity in Multi-Head Attention, 2020-present
- Long Short-Term Memory Neural Network Acceleration on FPGA, 2018-2020
- High-Resolution Time-to-Digital Converter on FPGA, 2019-2020
- Deep Learning for Spectrum Sensing and Its FPGA Applications, 2018-2019
- Scalable Memory Interconnect for Many-Port DNN Accelerators on FPGA, 2017-2018
Skills
Languages and Frameworks:
- Programming: C/C++, Python
- HDL and high-level HDL: Verilog/SystemVerilog, SpinalHDL
- Deep Learning Frameworks and Libraries: TensorFlow, Hugging Face
FPGAs:
- Xilinx FPGAs: Spartan-6, Virtex-7, and Virtex UltraScale+
Publications
- M. Treviso et al., “Efficient Methods for Natural Language Processing: A Survey,” Transactions of the Association for Computational Linguistics, vol. 11, pp. 826–860, Jul. 2023. [arXiv ver.]
- T. Ji, S. Jain, M. Ferdman, P. Milder, H. A. Schwartz, and N. Balasubramanian, “On the Distribution and Sparsity of Attention within Transformers,” Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Aug. 2021. [arXiv ver.]
- Y. Shen, T. Ji, M. Ferdman, and P. Milder, “Argus: An End-to-End Framework for Accelerating CNNs on FPGAs,” IEEE Micro, vol. 39, no. 5, pp. 17–25, Sep. 2019, doi: 10.1109/MM.2019.2930607.
- Y. Shen, T. Ji, M. Ferdman, and P. Milder, “Medusa: A Scalable Interconnect for Many-Port DNN Accelerators and Wide DRAM Controller Interfaces,” in 2018 28th International Conference on Field Programmable Logic and Applications (FPL), Aug. 2018, pp. 101–1014, doi: 10.1109/FPL.2018.00026. [arXiv ver.]
Teaching
Teaching Assistant:
- ESE 345: Computer Architecture, Fall 2017
- ESE 382: Digital Design Using VHDL & PLDs, Spring 2018
- ESE 118: Digital Logic Design, Spring 2020