Jiaxin Zhang

AI Senior Staff Research Scientist
Intuit AI Research
Office: 2535 Garcia Ave, Mountain View, CA 94043

Hi there! I’m Jiaxin 👋!

🔭 I am an AI Senior Staff Research Scientist at Intuit AI Research, leading a research team specializing in Generative AI (including large language models, diffusion models, and vision-language models) and AI Reliability (focusing on uncertainty, confidence, and robustness). Prior to this, I was a Research Staff Member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL) under the US Department of Energy (DOE). My research at ORNL was dedicated to advancing AI for Science on state-of-the-art supercomputers such as Summit and Frontier. I received my Ph.D. from Johns Hopkins University with an emphasis on uncertainty quantification.

🤔 I am passionate about building reliable AI capabilities to assist humans in solving complex real-world challenges at the convergence of language, vision, and science. My research concentrates on AI Reliability and Robustness, Uncertainty Quantification, LLM Alignment and Safety, Optimization, and AI4Science. I have authored over 50 papers, including 35+ as first author, in leading AI conferences and journals such as NeurIPS, CVPR, EMNLP, and AISTATS. Additionally, I actively maintain several GitHub repositories that have collectively garnered over 2,000 stars.

👯 Academic Service

  • Area Chair: ACL, EMNLP, NAACL
  • Program Committee: NeurIPS, ICML, ICLR, AAAI, AISTATS, ACL, EMNLP, NAACL, CVPR, ECCV, WACV, KDD, SDM
  • Journal Reviewer: Transactions on Machine Learning Research (TMLR)

🏆 Awards

  • CTO Award, Intuit, 2024
  • A2D Innovation Award, Intuit, 2024
  • Promising Early‑Career Researcher Award, ORNL, 2020
  • NeurIPS Travel Award, 2019
  • Acheson J. Duncan Graduate Research Award, Johns Hopkins, 2018
  • Dean’s Fellowship, Johns Hopkins, 2014
  • National Scholarship of P.R. China, 2009, 2012

✈️ Conference Talks/Travels

  • Dec 2024, NeurIPS @ Vancouver 🇨🇦
  • Nov 2024, EMNLP @ Miami 🇺🇸
  • Jul 2024, ICML @ Vienna 🇦🇹
  • May 2024, AISTATS @ Valencia 🇪🇸
  • Jan 2024, WACV @ Hawaii 🇺🇸
  • Dec 2023, NeurIPS @ New Orleans 🇺🇸
  • Dec 2023, EMNLP @ Singapore 🇸🇬
  • Feb 2023, AAAI @ Washington DC 🇺🇸
  • Jul 2022, ICML @ Baltimore 🇺🇸
  • Jun 2022, CVPR @ New Orleans 🇺🇸
  • 🦠COVID🦠, … 😷 … @ … 😷 …
  • Dec 2019, NeurIPS @ Vancouver 🇨🇦

💬 I’m always looking for highly motivated Ph.D. students to work with me in research internship positions. Please feel free to email me your CV if interested.

news

Oct 15, 2024 [Invited Talk] I will give a talk at the NeurIPS 2024 Workshop “Interpretable AI: Past, Present and Future” in Dec 2024 in Vancouver, Canada!
Oct 10, 2024 [EMNLP x 6] Six long papers (3 Main, 1 Findings, 2 Industry Track) were accepted by EMNLP 2024: 2 oral presentations and 4 poster presentations! See you in Miami!
Aug 1, 2024 Glad to share that I was promoted to Senior Staff Research Scientist @Intuit!
Jun 20, 2024 [Invited talk] I will present my research on hallucination detection and mitigation at Intuit Open Source Meetup!
Jun 1, 2024 Will serve as an Area Chair for EMNLP 2024!
May 1, 2024 One UQ paper was accepted by AISTATS 2024. See you in Valencia, Spain!
Mar 4, 2024 One paper on UQ for LLMs was accepted by EACL 2024.
Nov 10, 2023 I created two GitHub repos to share resources and papers on LLM Prompt Optimization and LLM RAG. You are welcome to contribute and work together!
Oct 24, 2023 Two papers, “DECDM: Document Enhancement using Cycle-Consistent Diffusion Models” and “On the Quantification of Image Reconstruction Uncertainty without Training Data”, are accepted by WACV 2024!
Oct 22, 2023 Our paper on “A Divide-Conquer-Reasoning Approach to Consistency Evaluation and Improvement in Blackbox Large Language Models” is accepted by NeurIPS 2023 Workshop on Socially Responsible Language Modelling Research.
Oct 7, 2023 Our paper on SAC^3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency is accepted by EMNLP 2023! The code is coming soon!
Sep 28, 2023 One patent on “Model based document image enhancement” is issued and published.
Sep 21, 2023 Our paper on “Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision” is accepted by NeurIPS 2023! Cheers!
Mar 27, 2023 I was invited to be a Reviewer/PC member for NeurIPS 2023, ICLR 2024, ICASSP 2024, WACV 2024, SIAM SDM 2024.
Mar 21, 2023 I built a GitHub repo that contains a collection of resources and papers on Reliability, Robustness and Safety in Large Language Models (LLMs).
Feb 21, 2023 Our paper titled “Speech Privacy Leakage from Shared Gradients in Distributed Learning” is accepted by ICASSP 2023!
Dec 12, 2022 Two papers, “Accelerating Inverse Learning via Intelligent Localization with Exploratory Sampling” and “AutoNF: Automated Architecture Optimization of Normalizing Flows with Unconstrained Continuous Relaxation Admitting Optimal Discrete Solution”, are accepted by AAAI 2023!

selected publications

  1. EMNLP 2024
    Synthetic Knowledge Ingestion: Towards Knowledge Refinement and Injection for Enhancing Large Language Models
    Jiaxin Zhang, Wendi Cui, Yiran Huang, Kamalika Das, and Sricharan Kumar
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
  2. EMNLP 2024
    Do You Know What You Are Talking About? Characterizing Query-Knowledge Relevance For Reliable Retrieval Augmented Generation
    Zhuohang Li, Jiaxin Zhang, Chao Yan, Kamalika Das, Sricharan Kumar, Murat Kantarcioglu, and Bradley A Malin
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
  3. EMNLP 2024
    HyQE: Ranking Contexts with Hypothetical Query Embeddings
    Weichao Zhou, Jiaxin Zhang, Hilaf Hasson, Anu Singh, and Wenchao Li
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
  4. EMNLP 2024
    Holistic Evaluation for Interleaved Text-and-Image Generation
    Minqian Liu, Zhiyang Xu, Zihao Lin, Trevor Ashby, Joy Rimchala, Jiaxin Zhang, and Lifu Huang
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
  5. EMNLP 2024
    Survival of the Safest: Towards Secure Prompt Optimization through Interleaved Multi-Objective Evolution
    Ankita Sinha, Wendi Cui, Kamalika Das, and Jiaxin Zhang
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing - Industry Track, 2024
  6. EMNLP 2024
    DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models
    Wendi Cui, Zhuohang Li, Damien Lopez, Kamalika Das, Bradley Malin, Sricharan Kumar, and Jiaxin Zhang
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing - Industry Track, 2024
  7. EACL 2024
    SPUQ: Perturbation-Based Uncertainty Quantification for Large Language Models
    Xiang Gao, Jiaxin Zhang, Lalla Mouatadid, and Kamalika Das
    In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, 2024
  8. arXiv
    PhaseEvo: Towards Unified In-Context Prompt Optimization for Large Language Models
    Wendi Cui, Jiaxin Zhang, Zhuohang Li, Hao Sun, Damien Lopez, Kamalika Das, Bradley Malin, and Sricharan Kumar
    2024
  9. AISTATS 2024
    Discriminant Distance-Aware Representation on Deterministic Uncertainty Quantification Methods
    Jiaxin Zhang, Kamalika Das, and Sricharan Kumar
    In International Conference on Artificial Intelligence and Statistics, 2024
  10. WACV 2024
    DECDM: Document Enhancement using Cycle-Consistent Diffusion Models
    Jiaxin Zhang, Joy Rimchala, Lalla Mouatadid, Kamalika Das, and Sricharan Kumar
    In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
  11. WACV 2024
    On the Quantification of Image Reconstruction Uncertainty without Training Data
    Jiaxin Zhang, Sirui Bi, and Victor Fung
    In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
  12. NeurIPS 2023
    Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision
    Jiaxin Zhang, Zhuohang Li, Kamalika Das, and Sricharan Kumar
    In Advances in Neural Information Processing Systems, 2023
  13. EMNLP 2023
    SAC^3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
    Jiaxin Zhang, Zhuohang Li, Kamalika Das, Bradley Malin, and Sricharan Kumar
    In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023
  14. AAAI 2023
    Accelerating Inverse Learning via Intelligent Localization with Exploratory Sampling
    Jiaxin Zhang, Sirui Bi, and Victor Fung
    In Proceedings of the AAAI Conference on Artificial Intelligence, 2023
  15. AAAI 2023
    AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation
    Yu Wang, Jan Drgona, Jiaxin Zhang, Karthik Somayaji NS, Frank Y Liu, Malachi Schram, and Peng Li
    In Proceedings of the AAAI Conference on Artificial Intelligence, 2023
  16. CVPR 2022
    Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
    Zhuohang Li, Jiaxin Zhang, Luyang Liu, and Jian Liu
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022
  17. AAAI 2022
    Gradient-based Novelty Detection Boosted by Self-supervised Binary Classification
    Jingbo Sun, Li Yang, Jiaxin Zhang, Frank Liu, Mahantesh Halappanavar, Deliang Fan, and Yu Cao
    In Proceedings of the AAAI Conference on Artificial Intelligence, 2022
  18. NeurIPS 2021
    On the Stochastic Stability of Deep Markov Models
    Jan Drgona, Sayak Mukherjee, Jiaxin Zhang, Frank Liu, and Mahantesh Halappanavar
    In Advances in Neural Information Processing Systems, 2021
  19. UAI 2021
    Enabling Long-range Exploration in Minimization of Multimodal Functions
    Jiaxin Zhang, Hoang Tran, Dan Lu, and Guannan Zhang
    In Uncertainty in Artificial Intelligence, 2021
  20. AISTATS 2021
    A Scalable Gradient Free Method for Bayesian Experimental Design with Implicit Models
    Jiaxin Zhang, Sirui Bi, and Guannan Zhang
    In International Conference on Artificial Intelligence and Statistics, 2021
  21. NeurIPS 2019
    Learning Nonlinear Level Sets for Dimensionality Reduction in Function Approximation
    Guannan Zhang, Jiaxin Zhang, and Jacob Hinkle
    In Advances in Neural Information Processing Systems, 2019