Jiaxin Zhang

Staff Research Scientist
Intuit AI Research
Office: 2535 Garcia Ave, Mountain View, CA 94043

Research: As an AI researcher, I am passionate about building AGI capabilities and assisting humans in solving complex real-world tasks, ranging from Computer Vision (CV) to Natural Language Processing (NLP). My interests span multiple areas, including reliable and robust AI, generative models (LLMs and diffusion models), uncertainty quantification, and AI for Science.

Previously: I was a Research Staff Member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, US Department of Energy (DOE). I received my Ph.D. from Johns Hopkins University in 2018.

Publications: 50+ peer-reviewed journal/conference papers, including 35+ first-author papers, in top-tier AI conferences (e.g., NeurIPS, CVPR, EMNLP) and high-impact journals, such as the Nature series.

Service: Invited Reviewer or PC member for NeurIPS 2020-2023, ICML 2021-2023, ICLR 2021-2024, AISTATS 2021-2023, CVPR 2022, ECCV 2022, KDD 2023, ICASSP 2024, SIAM SDM 2024, WACV 2024, etc.

I’m always looking for highly motivated Ph.D. students to work with me in research internship positions. If interested, please feel free to email me with your CV.

news

Nov 10, 2023 I created two GitHub repos to share resources and papers on LLM Prompt Optimization and LLM RAG. Contributions are welcome — let’s work together!
Oct 24, 2023 Two papers, “DECDM: Document Enhancement using Cycle-Consistent Diffusion Models” and “On the Quantification of Image Reconstruction Uncertainty without Training Data”, are accepted by WACV 2024!
Oct 22, 2023 Our paper on “A Divide-Conquer-Reasoning Approach to Consistency Evaluation and Improvement in Blackbox Large Language Models” is accepted by NeurIPS 2023 Workshop on Socially Responsible Language Modelling Research.
Oct 7, 2023 Our paper on “SAC^3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency” is accepted by EMNLP 2023! The code is coming soon!
Sep 28, 2023 A patent on “Model based document image enhancement” is issued and published.
Sep 21, 2023 Our paper on “Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision” is accepted by NeurIPS 2023! Cheers!
Mar 27, 2023 I was invited to be a Reviewer/PC member for NeurIPS 2023, ICLR 2024, ICASSP 2024, WACV 2024, SIAM SDM 2024.
Mar 21, 2023 I built a GitHub repo that contains a collection of resources and papers on Reliability, Robustness and Safety in Large Language Models (LLMs).
Feb 21, 2023 Our paper titled “Speech Privacy Leakage from Shared Gradients in Distributed Learning” is accepted by ICASSP 2023!
Dec 12, 2022 Two papers, “Accelerating Inverse Learning via Intelligent Localization with Exploratory Sampling” and “AutoNF: Automated Architecture Optimization of Normalizing Flows with Unconstrained Continuous Relaxation Admitting Optimal Discrete Solution”, are accepted by AAAI 2023!

selected publications

  1. NeurIPS
    Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision
    Jiaxin Zhang, Zhuohang Li, Kamalika Das, and Sricharan Kumar
    In Advances in Neural Information Processing Systems, 2023
  2. EMNLP
    SAC^3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
    Jiaxin Zhang, Zhuohang Li, Kamalika Das, Bradley Malin, and Sricharan Kumar
    In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023
  3. WACV
    DECDM: Document Enhancement using Cycle-Consistent Diffusion Models
    Jiaxin Zhang, Joy Rimchala, Lalla Mouatadid, Kamalika Das, and Sricharan Kumar
    In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
  4. WACV
    On the Quantification of Image Reconstruction Uncertainty without Training Data
    Jiaxin Zhang, Sirui Bi, and Victor Fung
    In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
  5. NeurIPS Workshop
    A Divide-Conquer-Reasoning Approach to Consistency Evaluation and Improvement in Blackbox Large Language Models
    Wendi Cui, Jiaxin Zhang, Zhuohang Li, Damien Lopez, Kamalika Das, Bradley Malin, and Sricharan Kumar
    In NeurIPS 2023 Workshop on Socially Responsible Language Modelling Research, 2023
  6. ICASSP
    Speech Privacy Leakage from Shared Gradients in Distributed Learning
    Zhuohang Li, Jiaxin Zhang, and Jian Liu
    In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023
  7. AAAI
    Accelerating Inverse Learning via Intelligent Localization with Exploratory Sampling
    Jiaxin Zhang, Sirui Bi, and Victor Fung
    In Proceedings of the AAAI Conference on Artificial Intelligence, 2023
  8. AAAI
    AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation
    Yu Wang, Jan Drgona, Jiaxin Zhang, Karthik Somayaji NS, Frank Y Liu, Malachi Schram, and Peng Li
    In Proceedings of the AAAI Conference on Artificial Intelligence, 2023
  9. CVPR
    Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
    Zhuohang Li, Jiaxin Zhang, Luyang Liu, and Jian Liu
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022
  10. AAAI
    Gradient-based Novelty Detection Boosted by Self-supervised Binary Classification
    Jingbo Sun, Li Yang, Jiaxin Zhang, Frank Liu, Mahantesh Halappanavar, Deliang Fan, and Yu Cao
    In Proceedings of the AAAI Conference on Artificial Intelligence, 2022
  11. NeurIPS
    On the Stochastic Stability of Deep Markov Models
    Jan Drgona, Sayak Mukherjee, Jiaxin Zhang, Frank Liu, and Mahantesh Halappanavar
    In Advances in Neural Information Processing Systems, 2021
  12. UAI
    Enabling Long-range Exploration in Minimization of Multimodal Functions
    Jiaxin Zhang, Hoang Tran, Dan Lu, and Guannan Zhang
    In Uncertainty in Artificial Intelligence, 2021
  13. AISTATS
    A Scalable Gradient Free Method for Bayesian Experimental Design with Implicit Models
    Jiaxin Zhang, Sirui Bi, and Guannan Zhang
    In International Conference on Artificial Intelligence and Statistics, 2021
  14. NeurIPS
    Learning Nonlinear Level Sets for Dimensionality Reduction in Function Approximation
    Guannan Zhang, Jiaxin Zhang, and Jacob Hinkle
    In Advances in Neural Information Processing Systems, 2019
  15. RESS
    The Generalization of Latin Hypercube Sampling
    Michael D Shields, and Jiaxin Zhang
    Reliability Engineering & System Safety, 2016