

About Me

Linfeng Zhang is an assistant professor at the School of Artificial Intelligence, Shanghai Jiao Tong University. He received his Ph.D. from the Institute for Interdisciplinary Information Sciences at Tsinghua University. His honors include the World Artificial Intelligence Conference Cloud Sail Award (top 20), the Microsoft Fellowship (one of 12 in Asia), Outstanding Graduate of Beijing, and the Tsinghua University Outstanding Doctoral Dissertation award. He also serves as an Area Chair for conferences such as ACL, NeurIPS, and ICLR. He has published 40 papers in CCF-A and CAAI-A conferences and journals as (co-)first or corresponding author, with over 3,000 citations. His research achievements and work experience have been covered by official media outlets including People's Daily, China Youth Daily, Youth Shanghai (Shanghai Communist Youth League), and Youth Beijing (Beijing Communist Youth League), with related news receiving over 100 million views online.

Our laboratory is recruiting postdoctoral researchers, undergraduate or graduate research assistants, and students for the class of 2026/2027. If you are interested, please email me.

Academic Service

Reviewing papers for conferences and journals including NeurIPS, ICML, ICLR, CVPR, ECCV, ICCV, AAAI, IJCAI, AISTATS, IEEE TPAMI, IEEE TCSVT, IEEE TIP, IEEE TMI, PR, TACO, Scientific Reports and others.

Area Chair and Guest Editor for conferences and journals including NeurIPS2025, IJCNN2025, ACL2025, EMNLP2025, ICLR2026, Big Data and Cognitive Computing.

Research Directions
Efficient Large Models

Current generative large models have billions of parameters, which makes training and inference extremely expensive. We research compression and acceleration methods that reduce deployment costs and make large models practical to use in real-world applications.

Efficient Multimodal Large Models

Multimodal large models integrate text, image, audio, and video data. We research efficient methods to process and generate multimodal content, reducing computational costs while maintaining high-quality performance.

Efficient Data Utilization

Current AI models require massive amounts of data for training. We study how to utilize data more efficiently through data cleaning, synthesis, and intelligent sampling, reducing training costs while improving model performance.

Efficient Multimodal Generation

We develop lightweight and efficient AIGC models for text-to-image, text-to-video, and other multimodal generation tasks, enabling high-quality content generation with lower computational costs.

Foundation Models

We research pre-training and post-training methods for large models and multimodal large models, focusing on applications in scientific research and industrial settings.



Recent News

[Charts: Publication Acceptance Trend; Publication Distribution]
  • December 7, 2025: The recruitment for the 2026 class at EPIC Lab has concluded. Welcome Chang Zou, Jiacheng Liu, Qingyan Wei, and Shuang Chen to EPIC Lab!
  • October 15, 2025: Three papers from the lab were accepted by NeurIPS2025. Congratulations!
  • October 15, 2025: Seven papers from the lab were accepted by AAAI2026. Congratulations!
  • August 15, 2025: Two papers from the lab were accepted by ACM MM2025. Congratulations!
  • June 15, 2025: Shaobo Wang, a Ph.D. student in the lab, won the 2025 CIE-Tencent Hunyuan Doctoral Incentive (28th in China). Congratulations!
  • June 10, 2025: Five papers from the lab were accepted by ICCV2025. Congratulations!
  • May 15, 2025: Four papers were accepted by ACL2025. Congratulations!
  • March 15, 2025: Four papers were accepted by CVPR2025, with one of them receiving full review scores. Congratulations!
  • January 22, 2025: Three papers were accepted by ICLR2025, one of them as an oral presentation. Congratulations!
  • December 12, 2024: One paper was accepted by AAAI2025. Congratulations!
  • December 12, 2024: Coverage of Linfeng Zhang by The Paper (Pengpai News) received over 100 million views, topped the trending lists on Weibo and Zhihu, and was picked up by media including People's Daily and the Communist Youth League.
  • October 10, 2024: One paper from the lab was accepted by NeurIPS2024. Congratulations!
  • September 9, 2024: The recruitment of graduate students for the 2025 class has concluded. Welcome Zichen Wen, Xuelin Li, and Zexuan Yan to EPIC Lab!
  • August 8, 2024: At the invitation of Professor Xuming Hu, Linfeng Zhang joined the Hong Kong University of Science and Technology (Guangzhou) as a visiting assistant professor.
  • June 6, 2024: Linfeng Zhang received his doctoral degree in engineering from the Institute for Interdisciplinary Information Sciences at Tsinghua University. He was named an Outstanding Graduate of Beijing and an Outstanding Graduate of the Institute for Interdisciplinary Information Sciences, received the Tsinghua University Outstanding Doctoral Dissertation award and the Tsinghua University Qihang Gold Award, and represented the institute at the Tsinghua University Graduate Symposium, speaking at the graduation ceremony.
Lab Members

Linfeng Zhang
Research Interest: Efficient AI Models and Data Utilization

Linfeng Zhang received his Bachelor's degree from Northeastern University and his Ph.D. from Tsinghua University. He currently leads the Efficient and Precision Intelligent Computing (EPIC) Lab at Shanghai Jiao Tong University.

Shaobo Wang
Research Interest: Efficient Data-Centric AI
Contact: shaobowang1009@sjtu.edu.cn

Shaobo Wang is a Ph.D. candidate in the EPIC Lab at SAI, Shanghai Jiao Tong University, starting in 2024. Building on a strong background in efficient AI, explainable AI, and deep learning theory, he focuses his research on data synthesis and data reduction.

Internship Experience: Alibaba Qwen

Yifeng Gao
Research Interest: Efficient LLM
Contact: yifenggao.cn@gmail.com

Yifeng Gao is a master's student in the EPIC Lab at Shanghai Jiao Tong University. His research focuses on developing capable, reliable, and efficient AI through algorithm and computing co-design.

Xuelin Li
Research Interest: Efficient LLM
Contact: lxl.curiosity@gmail.com

Xuelin Li is pursuing a Ph.D. at the EPIC Lab. He graduated with a Bachelor's degree from the University of Electronic Science and Technology of China (UESTC), where he achieved a perfect GPA (4.0/4.0) across all courses in the School of Software. During his undergraduate studies, he received numerous awards, including the National Scholarship. His research focuses on developing efficient inference paradigms for trustworthy multimodal large language models.

Internship Experience: Ant Group

Zichen Wen
Research Interest: Efficient Multi-Modal LLM
Contact: Zichen.Wen@outlook.com

Zichen Wen is a Ph.D. student in the EPIC Lab at Shanghai Jiao Tong University, under the supervision of Prof. Linfeng Zhang. He holds a B.S. degree in Computer Science from the University of Electronic Science and Technology of China. During his undergraduate studies, he published multiple research papers at prestigious AI conferences, including AAAI and ACM MM. His research interests lie in efficient multimodal large models and trustworthy AI, focusing on advancing the efficiency, reliability, and ethical aspects of artificial intelligence systems.

Internship Experience: Kimi, Shanghai AI Lab

Zexuan Yan
Research Interest: Efficient multimodal and AIGC models
Contact: yzx_ustc@mail.ustc.edu.cn

Zexuan Yan is a member of the EPIC Lab, Linfeng Zhang's research group at the School of Artificial Intelligence, Shanghai Jiao Tong University. His research interests include multimodal models, AIGC, and diffusion model acceleration.

Internship Experience: Xiaohongshu, Alibaba

Chang Zou
Research Interest: Efficient Image and Video Generation
Contact: https://github.com/Shenyi-Z

Chang Zou is currently an undergraduate student at Yingcai Honors College, University of Electronic Science and Technology of China (UESTC), expected to complete his bachelor's degree in 2026. Originally from Chengdu, Sichuan, he doesn’t eat spicy food despite his hometown’s reputation. His primary research focus is on the efficient acceleration of AIGC, particularly Diffusion Models, and he has a solid background in mathematics and physics. In 2024, he began his internship at the EPIC Lab, where, under the guidance of his advisor, Linfeng Zhang, he contributed to submissions for ICLR and CVPR.

Internship Experience: Tencent Hunyuan (Qingyun Program)

Xuyang Liu
Research Interest: Token-wise Acceleration for MLLM
Contact: https://xuyang-liu16.github.io/

Xuyang Liu is currently pursuing his M.S. degree at the College of Electronics and Information Engineering, Sichuan University. He is also a research intern at Taobao & Tmall Group, where he focuses on efficient multi-modal large language models. In 2024, he joined the EPIC Lab as a research intern under the guidance of Prof. Linfeng Zhang, contributing to the development of a comprehensive collection of resources on token-level model compression. His research interests include Efficient AI, covering areas such as discrimination, adaptation, reconstruction, and generation.

Internship Experience: Taobao, Ant Group, OPPO

Qingyan Wei
Research Interest: Efficient dLLMs and AIGC Models
Contact: florawei0506@gmail.com

Qingyan Wei is a new member of the EPIC Lab, focusing on efficient diffusion language models (dLLMs) and AIGC models. Her research interests include developing lightweight, high-performance AI systems for various applications.

Internship Experience: Tencent

Jiacheng Liu
Research Interest: Efficient Diffusion · Interactive World Models · Long-video Reasoning
Contact: ljc.mytcl@gmail.com · https://tammytcl.github.io/Liu_Homepage/

Jiacheng Liu is an undergraduate student at Shandong University and an incoming Ph.D. student at Shanghai Jiao Tong University, starting in 2026. He will join the EPIC Lab, focusing on efficient generative models. His research centers on two synergistic pillars: Efficient Diffusion, which aims to optimize inference and reduce computational costs, and World Models, which targets interactive video generation and long-horizon reasoning. He is dedicated to building models that are faster and capable of controllable, extended visual environments. Guided by a philosophy of "simplicity with depth," he favors elegant, logically coherent solutions and is driven by the pursuit of quiet, powerful ideas.

Internship Experience: Tencent (Qingyun Program)

Shuang Chen
Research Interest: Efficient multimodal LLM
Contact: Charles-2022@sjtu.edu.cn

Shuang Chen is a master's student admitted in 2026. His research interests lie in inference acceleration for multimodal large language models, with a particular focus on improving efficiency while maintaining model performance. He is interested in exploring techniques such as model compression, token pruning, and optimized inference pipelines to enable scalable and practical deployment of multimodal systems.

Yiyu Wang
Research Interest: Accurate and Efficient Long-Video Understanding
Contact: ustywan8@ljmu.ac.uk

Yiyu Wang will join the EPIC Laboratory in 2026 to pursue a Ph.D. under the supervision of Professor Linfeng Zhang. His current research focuses on multimodal large models, video understanding, and streaming video agents, aiming to enhance the reasoning capabilities of large models in long-form and real-time video scenarios.

Yuanhuiyi Lyu
Research Interest: Unified Understanding-Generation Models · Multimodal Generation
Contact: ryan.lyu.mail@gmail.com · https://qc-ly.github.io/

Yuanhuiyi Lyu is a Ph.D. student in Artificial Intelligence at The Hong Kong University of Science and Technology (Guangzhou). He received his B.Eng. in Artificial Intelligence from Northeastern University. His current research focuses on unified understanding-generation models and multimodal generative models.

Zegang Cheng
Research Interest: Efficient Generative Models · Reinforcement Learning
Contact: chengzegang@gmail.com · https://github.com/chengzegang

Zegang Cheng is an incoming Ph.D. student at The Hong Kong University of Science and Technology (Guangzhou) in 2026, joining the EPIC Lab as a joint-supervision student. He previously earned an M.S. in Computer Science from New York University. His current research interests include efficient generative models and reinforcement learning.

Yicun Yang
Research Interest: dLLM Training and Inference · Dataset Distillation
Contact: yangyicun187@gmail.com · Google Scholar

Yicun Yang is currently a senior undergraduate student majoring in Software Engineering at Harbin Institute of Technology (HIT). He will join the EPIC Lab as a joint-supervision Ph.D. student at HKUST(GZ) in Fall 2026. His research focuses on scaling pre-training for diffusion language models and accelerating their inference.

Internships: Ant Group

Xiangqi Jin
Research Interest: Efficient LLM · Reinforcement Learning · LLM Agents
Contact: xiangqijin@outlook.com

Xiangqi Jin is an undergraduate student (Class of 2022) at the School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC). He will join the EPIC Lab as a joint-supervision Ph.D. student at HKUST(GZ) in Fall 2026. His research focuses on efficient large language models, with active exploration in LLM-based reinforcement learning and LLM agents.

Liang Feng
Research Interest: LLM Data Synthesis · Efficient AIGC · AI for Science

Liang Feng’s long-term research question is: “How can intelligence grow and scale, while remaining precisely calibrated to the complexity of the real world?” He currently studies how to use Agents to synthesize data, improve LLM performance in scientific domains, and build the benchmarks that science needs. He believes synthetic data is essential for LLM self-learning and unbounded growth, while well-designed benchmarks serve both as calibration tools and as ultimate rewards in reinforcement learning frameworks.

Internships: Alibaba, ByteDance

Publications

WaveEX: Accelerating Flow Matching-based Speech Generation via Wavelet-guided Extrapolation
Xiaoqian Liu, Xiyan Gui, Zhengkun Ge, Yuan Ge, Chang Zou, Jiacheng Liu, Zhikang Niu, Qixi Zheng, Chen Xu, Xie Chen, Tong Xiao, Jingbo Zhu, Linfeng Zhang
The 40th Annual AAAI Conference on Artificial Intelligence (AAAI2026, CCF-A) paper

Forecast then Calibrate: Feature Caching as ODE for Efficient Diffusion Transformers
Shikang Zheng, Liang Feng, Xinyu Wang, Qinming Zhou, Peiliang Cai, Chang Zou, Jiacheng Liu, Yuqi Lin, Junjie Chen, Yue Ma, Linfeng Zhang
The 40th Annual AAAI Conference on Artificial Intelligence (AAAI2026, CCF-A) paper

SageLM: A Multi-aspect and Explainable Large Language Model for Speech Judgement
Yuan Ge, Junxiang Zhang, Xiaoqian Liu, Bei Li, Xiangnan Ma, Chenglong Wang, Kaiyang Ye, Yangfan Du, Linfeng Zhang, Yuxin Huang, Tong Xiao, Zhengtao Yu, Jingbo Zhu
The 40th Annual AAAI Conference on Artificial Intelligence (AAAI2026, CCF-A) paper

D²Pruner: Debiased Importance and Structural Diversity for MLLM Token Pruning
Evelyn Zhang, Fufu Yu, Aoqi Wu, Zichen Wen, Ke Yan, Shouhong Ding, Biqing Qi, Linfeng Zhang
The 40th Annual AAAI Conference on Artificial Intelligence (AAAI2026, CCF-A) paper

ImageBindDC: Compressing Multimodal Data with ImageBind-based Condensation
Yue Min, Shaobo Wang, Jiaze Li, Tianle Niu, Junxin Fan, Yongliang Miao, Lijin Yang, Linfeng Zhang
The 40th Annual AAAI Conference on Artificial Intelligence (AAAI2026, CCF-A) paper

Prune2Drive: A Plug-and-Play Framework for Accelerating Vision-Language Models in Autonomous Driving
Minhao Xiong, Zichen Wen, Zhuangcheng Gu, Xuyang Liu, Rui Zhang, Hengrui Kang, Jiabing Yang, Junyuan Zhang, Weijia Li, Conghui He, Yafei Wang, Linfeng Zhang
The 40th Annual AAAI Conference on Artificial Intelligence (AAAI2026, CCF-A) paper

UNSEEN: Enhancing Dataset Pruning from a Generalization Perspective
Furui Xu, Shaobo Wang, Jiajun Zhang, Chenghao Sun, Haixiang Tang, Linfeng Zhang
The 40th Annual AAAI Conference on Artificial Intelligence (AAAI2026, CCF-A) paper

Training-Free and Hardware-Friendly Acceleration for Diffusion Models via Similarity-based Token Pruning
Evelyn Zhang, Jiayi Tang, Xuefei Ning, Linfeng Zhang
The 39th Annual AAAI Conference on Artificial Intelligence (AAAI2025, CCF-A, CAAI-A) paper

Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models
Shaobo Wang, Hongxuan Tang, Mingyang Wang, Hongrui Zhang, Xuyang Liu, Weiya Li, Xuming Hu, Linfeng Zhang
The International Conference on Learning Representations (ICLR2025, CAAI-A) paper

Accelerating Diffusion Transformers with Token-wise Feature Caching
Chang Zou, Xuyang Liu, Ting Liu, Siteng Huang, Linfeng Zhang
The International Conference on Learning Representations (ICLR2025, CAAI-A) paper

Dataset Distillation with Neural Characteristic Function: A Minmax Perspective
Shaobo Wang, Yicun Yang, Zhiyuan Liu, Chenghao Sun, Xuming Hu, Conghui He, Linfeng Zhang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2025, CCF-A, CAAI-A) paper

ProReflow: Progressive Reflow with Decomposed Velocity
Lei Ke, Haohang Xu, Xuefei Ning, Yu Li, Jiajun Li, Haoling Li, Yuxuan Lin, Dongsheng Jiang, Yujiu Yang, Linfeng Zhang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2025, CCF-A, CAAI-A) paper

Decouple-Then-Merge: Finetune Diffusion Models as Multi-Task Learning
Qianli Ma, Xuefei Ning, Dongrui Liu, Li Niu, Linfeng Zhang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2025, CCF-A, CAAI-A) paper

Data Whisperer: Efficient Data Selection for Task-Specific LLM Fine-Tuning via Few-Shot In-Context Learning
Shaobo Wang, Ziming Wang, Xiangqi Jin, Jize Wang, Jiajun Zhang, Kaixin Li, Zichen Wen, Zhong Li, Conghui He, Xuming Hu, Linfeng Zhang
ACL2025 (CCF-A) paper

Token Pruning in Multimodal Large Language Models: Are We Solving the Right Problem?
Zichen Wen, Yifeng Gao, Weijia Li, Conghui He, Linfeng Zhang
ACL2025 Findings (CCF-A) paper

GraphKV: Breaking the Static Selection Paradigm with Graph-Based KV Cache Eviction
Xuelin Li, Xiangqi Jin, Linfeng Zhang
The 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025) paper

Don't Just Chase "Highlighted Tokens" in MLLMs: Revisiting Visual Holistic Context Retention
Xin Zou, Di Lu, Yizhou Wang, Yibo Yan, Yuanhuiyi Lyu, Xu Zheng, Linfeng Zhang, Xuming Hu
The 2025 Conference on Neural Information Processing Systems (NeurIPS 2025) paper

Efficient Multi-modal Large Language Models via Progressive Consistency Distillation
Zichen Wen, Shaobo Wang, Yufa Zhou, Junyuan Zhang, Qintong Zhang, Yifeng Gao, Zhaorun Chen, Bin Wang, Weijia Li, Conghui He, Linfeng Zhang
The 2025 Conference on Neural Information Processing Systems (NeurIPS 2025) paper

EfficientVLA: Training-Free Acceleration and Compression for Vision-Language-Action Models
Yantai Yang, Yuhao Wang, Zichen Wen, Zhongwei Luo, Chang Zou, Zhipeng Zhang, Chuan Wen, Linfeng Zhang
The 2025 Conference on Neural Information Processing Systems (NeurIPS 2025) paper

Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models
Xuyang Liu, Yiyu Wang, Junpeng Ma, Linfeng Zhang
The 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025) paper

LazyMAR: Accelerating Masked Autoregressive Models via Feature Caching
Feihong Yan, Qingyan Wei, Jiayi Tang, Jiajun Li, Yulin Wang, Xuming Hu, Huiqi Li, Linfeng Zhang
The 2025 IEEE/CVF International Conference on Computer Vision (ICCV 2025) paper

EEdit: Rethinking the Spatial and Temporal Redundancy for Efficient Image Editing
Zexuan Yan, Yue Ma, Chang Zou, Wenteng Chen, Qifeng Chen, Linfeng Zhang
The 2025 IEEE/CVF International Conference on Computer Vision (ICCV 2025) paper

From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers
Jiacheng Liu, Chang Zou, Yuanhuiyi Lyu, Junjie Chen, Linfeng Zhang
The 2025 IEEE/CVF International Conference on Computer Vision (ICCV 2025) paper

LED-Merging: Mitigating Safety-Utility Conflicts in Model Merging with Location-Election-Disjoint
Qianli Ma, Dongrui Liu, Qian Chen, Linfeng Zhang, Jing Shao
The 2025 International Conference on Machine Learning (ICML 2025) paper

RealRAG: Retrieval-Augmented Realistic Image Generation via Self-Reflective Contrastive Learning
Yuanhuiyi Lyu, Xu Zheng, Lutao Jiang, Yibo Yan, Xin Zou, Huiyu Zhou, Linfeng Zhang, Xuming Hu
The 2025 International Conference on Machine Learning (ICML 2025) paper

Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More
Zichen Wen, Yifeng Gao, Shaobo Wang, Junyuan Zhang, Qintong Zhang, Weijia Li, Conghui He, Linfeng Zhang
The 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025) paper

Unlocking Speech Instruction Data Potential with Query Rewriting
Yonghua Hei, Yibo Yan, Shuliang Liu, Huiyu Zhou, Linfeng Zhang, Xuming Hu
The 2025 Annual Meeting of the Association for Computational Linguistics (ACL 2025) Findings paper

PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection
Linfeng Zhang, Runpei Dong, Hung-Shuo Tai, Kaisheng Ma
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2023) paper

A Good Data Augmentation Policy Is Not All You Need: A Multi-Task Learning Perspective
Linfeng Zhang, Kaisheng Ma
IEEE Transactions on Circuits and Systems for Video Technology (IEEE TCSVT), 2023 paper

Tiny Updater: Towards Efficient Neural Network-Driven Software Updating
Linfeng Zhang, Kaisheng Ma
IEEE International Conference on Computer Vision (ICCV2023) paper

Multi-Frequency Representation with Privilege Information for Video Super-Resolution
Fei Li, Linfeng Zhang, Zikun Liu, Juan Lei, Zhenbo Li
IEEE International Conference on Computer Vision (ICCV2023) paper

ReKo: Region-aware Knowledge Distillation Towards Efficient Image-to-Image Translation
Linfeng Zhang, Runpei Dong, Xin Chen, Kaisheng Ma
The 34th British Machine Vision Conference 2023 (BMVC2023) paper

Structured Knowledge Distillation Towards Multi-view 3D Detection
Linfeng Zhang, Yukang Shi, Ke Wang, Hung-Shuo Tai, Yuan He, Kaisheng Ma
The 34th British Machine Vision Conference 2023 (BMVC2023) paper

Revisiting Data Augmentation in Model Compression: An Empirical and Comprehensive Study
Muzhou Yu, Linfeng Zhang, Kaisheng Ma
International Joint Conference on Neural Networks (IJCNN2023) paper

Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation
Linfeng Zhang, Xin Chen, Xiaobing Tu, Pengfei Wan, Ning Xu, Kaisheng Ma
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2022) paper

Contrastive Deep Supervision
Linfeng Zhang, Xin Chen, Junbo Zhang, Runpei Dong, Kaisheng Ma
European Conference on Computer Vision (ECCV2022) paper

Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors
Linfeng Zhang, Kaisheng Ma
The International Conference on Learning Representations (ICLR2021) paper

Wavelet J-Net: A Frequency Perspective on Convolutional Neural Networks
Linfeng Zhang, Xiaoman Zhang, Chenglong Bao, Kaisheng Ma
International Joint Conference on Neural Networks (IJCNN2021) paper

Auxiliary Training: Towards Accurate and Robust Models
Linfeng Zhang, Muzhou Yu, Tong Chen, Zuoqiang Shi, Chenglong Bao, Kaisheng Ma
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2020) paper

Task-Oriented Feature Distillation
Linfeng Zhang, Yukang Shi, Zuoqiang Shi, Kaisheng Ma, Chenglong Bao
Neural Information Processing Systems (NeurIPS2020) paper

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma
IEEE International Conference on Computer Vision (ICCV2019) paper

SCAN: A Scalable Neural Networks Framework Towards Compact and Efficient Models
Linfeng Zhang, Zhanhong Tan, Jiebo Song, Jingwei Chen, Chenglong Bao, Kaisheng Ma
Neural Information Processing Systems (NeurIPS2019) paper

Self-Distillation: Towards Efficient and Compact Neural Networks
Linfeng Zhang, Chenglong Bao, Kaisheng Ma
IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI) paper

Structured Knowledge Distillation Towards Efficient Object Detection
Linfeng Zhang, Kaisheng Ma
IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI) paper

Follow-Your-Emoji-Faster: Towards Efficient, Fine-Controllable, and Expressive Freestyle Portrait Animation
Yue Ma, Zexuan Yan, Hongyu Liu, Hongfa Wang, Heng Pan, Yingqing He, Junkun Yuan, Ailing Zeng, Chengfei Cai, Heung-Yeung Shum, Zhifeng Li, Wei Liu, Linfeng Zhang, Qifeng Chen
International Journal of Computer Vision (IJCV) paper

SpeCa: Accelerating Diffusion Transformers with Speculative Feature Caching
Jiacheng Liu, Chang Zou, Yuanhuiyi Lyu, Fei Ren, Shaobo Wang, Kaixin Li, Linfeng Zhang
The 2025 ACM International Conference on Multimedia (ACM MM 2025) paper

Compute Only 16 Tokens in One Timestep: Accelerating Diffusion Transformers with Cluster-Driven Feature Caching
Zhixin Zheng, Xinyu Wang, Chang Zou, Shaobo Wang, Linfeng Zhang
The 2025 ACM International Conference on Multimedia (ACM MM 2025) paper

Fine-grained emotion classification of Chinese microblogs based on graph convolution networks
Yuni Lai, Linfeng Zhang, Donghong Han, Rui Zhou, Guoren Wang
World Wide Web Journal (Internet and Web Information Systems) paper

Partners

Tencent
Alibaba
Baidu
Ant Group
Huawei
HKUST(GZ)
Jide
Shanghai AI Innovation Institute
Memory Tensor
Deep Potential
ICBC
Shanghai AI Laboratory