Curriculum Vitae
Contact Information
Jiaxin Zhang
AI Senior Staff Research Scientist
Salesforce AI Research
- Email: jxzhangai@gmail.com
- Location: Mountain View, CA, USA
- Office: 2535 Garcia Ave, Mountain View, CA, 94043
- Website: https://jxzhangjhu.github.io/
- Google Scholar: Profile
- GitHub: @jxzhangjhu
- LinkedIn: Profile
- Twitter: @jxzhangjhu
- ORCID: 0000-0002-7576-6110
Education
Ph.D.
Johns Hopkins University
[Year] - 2018
- Focus: Uncertainty Quantification
- Dissertation: Link to dissertation
[Add Master’s and Bachelor’s degrees if applicable - please update with your actual degrees]
Professional Experience
AI Senior Staff Research Scientist
Salesforce AI Research
[Please update with start date] - Present
- Leading a research team specializing in Generative AI (including large language models, diffusion models, vision-language models) and AI Reliability (focusing on uncertainty, confidence, and robustness)
- Research focus areas:
- Hallucination Detection and Mitigation
- Uncertainty Quantification in LLMs
- Prompt Optimization with Security and Safety Constraints
- Knowledge Injection and Reliable RAG
- LLM Adaptation and Fine-tuning
- Interleaved Text-and-Image Generation
- Constrained Generation and Inference-time Decoding
- LLM Alignment with Feedback
Research Staff
Oak Ridge National Laboratory (ORNL)
Computer Science and Mathematics Division
US Department of Energy (DOE)
[Please update with dates]
- Conducted AI-for-Science research on state-of-the-art supercomputers, including Summit and Frontier
- Focused on uncertainty quantification and AI reliability for scientific computing
Research Interests
My research concentrates on:
- AI Reliability and Robustness: Developing methods to ensure AI systems are trustworthy and perform consistently across diverse conditions
- Uncertainty Quantification: Quantifying and communicating uncertainty in AI model predictions, especially for LLMs
- LLM Alignment and Safety: Ensuring large language models are aligned with human values and safe for deployment
- Optimization: Developing efficient optimization methods for AI systems
- AI4Science: Applying AI techniques to advance scientific discovery
Awards & Honors
- CTO Award, Salesforce, 2024
- A2D Innovation Award, Salesforce, 2024
- Promising Early-Career Researcher Award, Oak Ridge National Laboratory, 2020
- NeurIPS Travel Award, 2019
- Acheson J. Duncan Graduate Research Award, Johns Hopkins University, 2018
- Dean’s Fellowship, Johns Hopkins University, 2014
- National Scholarship of P.R. China, 2009, 2012
Academic Service
Area Chair
- ACL (Association for Computational Linguistics)
- EMNLP (Empirical Methods in Natural Language Processing)
- NAACL (North American Chapter of the Association for Computational Linguistics)
Program Committee Member
- NeurIPS (Neural Information Processing Systems)
- ICML (International Conference on Machine Learning)
- ICLR (International Conference on Learning Representations)
- AAAI (Association for the Advancement of Artificial Intelligence)
- AISTATS (International Conference on Artificial Intelligence and Statistics)
- ACL (Association for Computational Linguistics)
- EMNLP (Empirical Methods in Natural Language Processing)
- NAACL (North American Chapter of the Association for Computational Linguistics)
- CVPR (Computer Vision and Pattern Recognition)
- ECCV (European Conference on Computer Vision)
- WACV (Winter Conference on Applications of Computer Vision)
- KDD (Knowledge Discovery and Data Mining)
- SDM (SIAM International Conference on Data Mining)
Journal Reviewer
- Transactions on Machine Learning Research (TMLR)
Selected Publications
I have authored over 50 papers, including 35+ as the first author, in leading AI conferences and journals such as NeurIPS, CVPR, EMNLP, AISTATS, and others.
For a complete publication list, please visit: Google Scholar Profile
Recent Highlights
- Hallucination Detection and Mitigation in Large Language Models
- Uncertainty Quantification methods for LLMs
- Prompt Optimization with security and safety constraints
- Knowledge Injection and Reliable RAG systems
- LLM Adaptation and Fine-tuning techniques
- Interleaved Text-and-Image Generation and holistic evaluation
- Constrained Generation and inference-time decoding
Conference Talks & Travels
- Dec 2024, NeurIPS @ Vancouver 🇨🇦
- Nov 2024, EMNLP @ Miami 🇺🇸
- Jul 2024, ICML @ Vienna 🇦🇹
- May 2024, AISTATS @ Valencia 🇪🇸
- Jan 2024, WACV @ Hawaii 🇺🇸
- Dec 2023, NeurIPS @ New Orleans 🇺🇸
- Dec 2023, EMNLP @ Singapore 🇸🇬
- Feb 2023, AAAI @ Washington DC 🇺🇸
- Jul 2022, ICML @ Baltimore 🇺🇸
- Jun 2022, CVPR @ New Orleans 🇺🇸
- Dec 2019, NeurIPS @ Vancouver 🇨🇦
Skills & Expertise
Research Areas
- Large Language Models (LLMs)
- Generative AI (Diffusion Models, Vision-Language Models)
- Uncertainty Quantification
- AI Reliability and Robustness
- Machine Learning Optimization
- AI for Science
Technical Skills
- [Please add your technical skills, programming languages, frameworks, etc.]
Tools & Platforms
- [Please add tools, platforms, cloud services, etc.]
Additional Information
Open Source Contributions
I actively maintain several GitHub repositories that have collectively garnered over 2,000 stars.
View repositories: GitHub Profile
Research Internships
I’m always looking for highly motivated Ph.D. students to join me for research internships. If you’re interested, please email me your CV.
Last updated: [Please update with current date]