I am a senior student at Beihang University, supervised by Prof. Xianglong Liu. After working on adversarial learning, I'm currently devoted to the quantization of pretrained large models. I hope to gain a deeper understanding of models and, in turn, make them more efficient and robust.

šŸ“ Publications

arXiv

How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study

Wei Huang*, Xudong Ma*, Haotong Qin, Xingyu Zheng, Chengtao Lv, Hong Chen, Jie Luo, Xiaojuan Qi, Xianglong Liu, Michele Magno

GitHub | Hugging Face

  • This paper explores LLaMA3ā€™s capabilities when quantized to low bit-width, demonstrating its value in advancing future models.
arXiv

BinaryDM: Towards Accurate Binarization of Diffusion Model

Xingyu Zheng*, Haotong Qin*, Xudong Ma, Mingyuan Zhang, Haojie Hao, Jiakai Wang, Zixiang Zhao, Jinyang Guo, Xianglong Liu

GitHub

  • This paper proposes BinaryDM, a novel accurate quantization-aware training approach to push the weights of diffusion models towards the limit of 1-bit.
arXiv

Accurate LoRA-Finetuning Quantization of LLMs via Information Retention

Haotong Qin*, Xudong Ma*, Xingyu Zheng, Xiaoyang Li, Yang Zhang, Shouda Liu, Jie Luo, Xianglong Liu, Michele Magno

GitHub

  • This paper proposes a novel IR-QLoRA for pushing quantized LLMs with LoRA to be highly accurate through information retention.
ACM MM 2023

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

Jun Guo, Xingyu Zheng, Aishan Liu, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

GitHub

  • This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.

📖 Education

  • 2020.09 - 2024.06 (now), Beihang University, Beijing.