Jiaxin Zhang
AI - Senior Staff Research Scientist
Intuit AI Research
Office: 2535 Garcia Ave, Mountain View, CA 94043
Hi there! I’m Jiaxin 👋!
🔭 I am an AI Senior Staff Research Scientist at Intuit AI Research, where I lead a research team specializing in Generative AI (including large language models, diffusion models, and vision-language models) and AI Reliability (focusing on uncertainty, confidence, and robustness). Prior to this, I was a research staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), part of the US Department of Energy (DOE). My research at ORNL was dedicated to advancing AI for Science on state-of-the-art supercomputers such as Summit and Frontier. I received my Ph.D. from Johns Hopkins University with an emphasis on uncertainty quantification.
🤔 I am passionate about building reliable AI capabilities to assist humans in solving complex real-world challenges at the convergence of language, vision, and science. My research concentrates on AI Reliability and Robustness, Uncertainty Quantification, LLM Alignment and Safety, Optimization, and AI4Science. I have authored over 50 papers, including 35+ as first author, in leading AI conferences and journals such as NeurIPS, CVPR, EMNLP, and AISTATS. Additionally, I actively maintain several GitHub repositories that have collectively garnered over 2,000 stars.
⚡ Research Highlights
- Hallucination Detection and Mitigation
- Uncertainty Quantification in LLMs
- Prompt Optimization with/without Security and Safety Constraints
- Knowledge Injection and Reliable RAG
- LLM Adaptation and Fine-tuning
- Interleaved Text-and-Image Generation and Holistic Evaluation
- Constrained Generation and Inference-time Decoding
- LLM Alignment with Feedback
- Thinking LLMs and Reasoning
👯 Academic Service
- Area Chair: ACL, EMNLP, NAACL
- Program Committee: NeurIPS, ICML, ICLR, AAAI, AISTATS, ACL, EMNLP, NAACL, CVPR, ECCV, WACV, KDD, SDM
- Journal Reviewer: Transactions on Machine Learning Research (TMLR)
🏆 Awards
- CTO Award, Intuit, 2024
- A2D Innovation Award, Intuit, 2024
- Promising Early‑Career Researcher Award, ORNL, 2020
- NeurIPS Travel Award, 2019
- Acheson J. Duncan Graduate Research Award, Johns Hopkins, 2018
- Dean’s Fellowship, Johns Hopkins, 2014
- National Scholarship of P.R. China, 2009, 2012
✈️ Conference Talks/Travels
- Dec 2024, NeurIPS @ Vancouver 🇨🇦
- Nov 2024, EMNLP @ Miami 🇺🇸
- Jul 2024, ICML @ Vienna 🇦🇹
- May 2024, AISTATS @ Valencia 🇪🇸
- Jan 2024, WACV @ Hawaii 🇺🇸
- Dec 2023, NeurIPS @ New Orleans 🇺🇸
- Dec 2023, EMNLP @ Singapore 🇸🇬
- Feb 2023, AAAI @ Washington DC 🇺🇸
- Jul 2022, ICML @ Baltimore 🇺🇸
- Jun 2022, CVPR @ New Orleans 🇺🇸
- 🦠COVID🦠, … 😷 … @ … 😷 …
- Dec 2019, NeurIPS @ Vancouver 🇨🇦
💬 I’m always looking for highly motivated Ph.D. students to work with me in research internship positions. Please feel free to email me with your CV if you are interested.
News
- Oct 15, 2024: [Invited Talk] I will give a talk at the NeurIPS 2024 Workshop “Interpretable AI: Past, Present and Future” in December 2024 in Vancouver, Canada!
- Oct 10, 2024: [EMNLP x 6] Six long papers (3 Main, 1 Findings, 2 Industry Track) were accepted by EMNLP 2024, with 2 oral presentations and 4 poster presentations. See you in Miami!
- Aug 1, 2024: Glad to share that I was promoted to Senior Staff Research Scientist @ Intuit!
- Jun 20, 2024: [Invited Talk] I will present my research on hallucination detection and mitigation at the Intuit Open Source Meetup!
- Jun 1, 2024: I will serve as an Area Chair for EMNLP 2024!
- May 1, 2024: One UQ paper was accepted by AISTATS 2024. See you in Valencia, Spain!
- Mar 4, 2024: One paper on UQ for LLMs was accepted by EACL 2024.
- Nov 10, 2023: I created two GitHub repos to share resources and papers on LLM Prompt Optimization and LLM RAG. Welcome to contribute and work together!
- Oct 24, 2023: Two papers, “DECDM: Document Enhancement using Cycle-Consistent Diffusion Models” and “On the Quantification of Image Reconstruction Uncertainty without Training Data”, were accepted by WACV 2024!
- Oct 22, 2023: Our paper “A Divide-Conquer-Reasoning Approach to Consistency Evaluation and Improvement in Blackbox Large Language Models” was accepted by the NeurIPS 2023 Workshop on Socially Responsible Language Modelling Research.
- Oct 7, 2023: Our paper “SAC^3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency” was accepted by EMNLP 2023! The code is coming soon!
- Sep 28, 2023: One patent on “Model based document image enhancement” was issued and published.
- Sep 21, 2023: Our paper “Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision” was accepted by NeurIPS 2023! Cheers!
- Mar 27, 2023: I was invited to be a Reviewer/PC member for NeurIPS 2023, ICLR 2024, ICASSP 2024, WACV 2024, and SIAM SDM 2024.
- Mar 21, 2023: I built a GitHub repo that contains a collection of resources and papers on Reliability, Robustness and Safety in Large Language Models (LLMs).
- Feb 21, 2023: Our paper “Speech Privacy Leakage from Shared Gradients in Distributed Learning” was accepted by ICASSP 2023!
- Dec 12, 2022: Two papers, “Accelerating Inverse Learning via Intelligent Localization with Exploratory Sampling” and “AutoNF: Automated Architecture Optimization of Normalizing Flows with Unconstrained Continuous Relaxation Admitting Optimal Discrete Solution”, were accepted by AAAI 2023!