Posts by Collection

publications

Trust It or Not: Confidence-Guided Automatic Radiology Report Generation

Published in Neurocomputing, 2023

A confidence-guided approach for automatic radiology report generation.

Recommended citation: Zihao Lin*, Yixin Wang*, Zhe Xu, Haoyu Dong, Jiang Tian, Jie Luo, Zhongchao Shi, Lifu Huang, Yang Zhang, Jianping Fan, and Zhiqiang He (* equal contribution). "Trust It or Not: Confidence-Guided Automatic Radiology Report Generation." Neurocomputing, 127374.

Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models

Published in ACL 2024

A comprehensive evaluation of sequential memory editing methods in LLMs.

Recommended citation: Zihao Lin, Hongxuan Li, Yufan Zhou, Yuxiang Zhang, Mohammad Beigi, Lifu Huang. "Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models." Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).

MMedAgent: Learning to Use Medical Tools with Multi-modal Agent

Published in EMNLP 2024 Findings

A multi-modal medical agent that learns to use medical tools.

Recommended citation: Binxu Li, Tiankai Yan, Yuanting Pan, Zhe Xu, Jie Luo, Ruiyang Ji, Shilong Liu, Haoyu Dong*, Zihao Lin*, Yixin Wang* (* co-corresponding authors). "MMedAgent: Learning to Use Medical Tools with Multi-modal Agent." EMNLP 2024 Findings.

Persona-SQ: An end-to-end Framework for Personalized Suggested Questions Generation for Long Documents

Published in NAACL 2025 Demo Track

An end-to-end framework for generating personalized suggested questions for long documents.

Recommended citation: Zihao Lin, Zichao Wang, Yuanting Pan, Varun Manjunatha, Ryan Rossi, Angela Lau, Lifu Huang, Tong Sun. "Persona-SQ: An end-to-end Framework for Personalized Suggested Questions Generation for Long Documents." NAACL 2025, Demo Track.

RoRA-VLM: Robust Retrieval-Augmented Vision Language Models

Published in ICCV 2025 Workshop on Knowledge-Intensive Multimodal Reasoning

Robust retrieval-augmented vision language models.

Recommended citation: Jingyuan Qi, Zhiyang Xu, Rulin Shao, Zihao Lin, Yang Chen, Di Jin, Yu Cheng, Qifan Wang, Lifu Huang. "RoRA-VLM: Robust Retrieval-Augmented Vision Language Models." ICCV 2025 Workshop on Knowledge-Intensive Multimodal Reasoning.

R2I-Bench: Benchmarking Reasoning-Driven Text-to-Image Generation

Published in EMNLP 2025 (Outstanding Paper Award)

A benchmark for evaluating reasoning-driven text-to-image generation.

Recommended citation: Kaijie Chen, Zihao Lin, Zhiyang Xu, Ying Shen, Yuguang Yao, Joy Rimchala, Jiaxin Zhang, Lifu Huang. "R2I-Bench: Benchmarking Reasoning-Driven Text-to-Image Generation." EMNLP 2025, Outstanding Paper Award.

Localizing Knowledge in Diffusion Transformers

Published in NeurIPS 2025

Localizing knowledge in Diffusion Transformers.

Recommended citation: Arman Zarei, Samyadeep Basu, Keivan Rezaei, Zihao Lin, Sayan Nag, Soheil Feizi. "Localizing Knowledge in Diffusion Transformers." NeurIPS 2025.

MiLDEdit: Reasoning-Based Multi-Layer Design Document Editing

Published in NeurIPS 2025 Workshop on Multimodal Algorithmic Reasoning (Spotlight)

A reasoning-based approach for multi-layer design document editing.

Recommended citation: Zihao Lin, Wanrong Zhu, Jiuxiang Gu, Jihyung Kil, Christopher Tensmeyer, Lin Zhang, Shilong Liu, Lifu Huang, Vlad I Morariu, Tong Sun. "MiLDEdit: Reasoning-Based Multi-Layer Design Document Editing." NeurIPS 2025 Workshop on Multimodal Algorithmic Reasoning, Spotlight paper. (CVPR 2026 under review)
