Overview
Conduct research and experimentation on responsible AI in financial risk, focusing on LLM-based multi-agent systems and AI model risk concepts, with an emphasis on practical applications within the bank.
Task Summary
- Conduct literature and benchmark reviews on LLM multi-agent design, hallucination mitigation, alignment evaluation, and AI model risk practices.
- Develop conceptual proposals for enhancing risk decision-making processes with multi-agent architectures.
- Develop and test prototypes using prompt engineering, multi-agent orchestration, retrieval pipelines, or reinforcement learning.
- Evaluate prototype robustness using metrics for explainability, hallucination rate, consistency, and repeatability, together with governance considerations.
- Produce a research paper, white paper, or equivalent publication-ready output.
- Prepare a methodology or playbook for potential adoption within AIIB.
- Present the completed project in a slide deck or project document.
Experience Requirements
- Demonstrated research experience with publications in reputable ML/AI venues.
- Familiarity with LLM architectures, multi-agent designs, Retrieval-Augmented Generation, reinforcement learning, or model evaluation frameworks.
- Demonstrated interest or experience in AI model governance, reliability testing, or alignment research.
Qualification Requirements
- Currently pursuing or recently completed a PhD in Machine Learning, Deep Learning, Generative AI, or a closely related field; Master’s candidates with strong research experience will be considered.