This repository explores how generative AI systems can be used to support research, investigation, and analytical writing, particularly in policy, health analytics, and other high‑stakes contexts.
The focus throughout is on thinking with AI, not delegating judgment to AI.
Generative AI is treated as an assistant and research instrument: a tool that can help structure inquiry, surface alternatives, and improve clarity, while leaving responsibility for evidence, reasoning, and conclusions firmly with human analysts.
Generative AI can support analytical tasks such as:
- Structuring complex or ill‑defined questions
- Mapping arguments, counterarguments, and trade‑offs
- Summarizing and organizing large volumes of text
- Surfacing assumptions, gaps, and uncertainties
- Improving clarity, coherence, and accessibility in writing
However, generative AI is not a source of evidence, authority, or truth.
Fluent output does not imply correctness, and confidence does not substitute for validation.
This repository documents patterns of use that emphasize:
- Critical and reflective engagement
- Explicit treatment of uncertainty and assumptions
- Human authorship, accountability, and systematic review
- Awareness of analytical risks and failure modes
- Epistemic humility in the face of complex evidence
This repository is written for:
- Policy analysts and health system analysts
- Researchers working in applied or exploratory contexts
- Investigative writers and evaluators
- Students learning research and analytical methods
- Teams integrating AI tools into analytical workflows
While the examples are generic, the framing is especially relevant wherever analysis informs decisions, resource allocation, or public communication.
The repository is organized to move from principles to practice:
- Conceptual foundations for treating AI as an assistant, not an authority
- Prompt patterns that support inquiry and synthesis
- Illustrative examples of AI‑assisted analytical work
- Review and revision practices for AI‑assisted drafts
- Organizational governance: acceptable use, disclosure, and accountability
- Common failure modes and subtle analytical risks
- Practical checklists to support pre‑submission review
Together, these materials support responsible, disciplined, and transparent use of generative AI in analytical work.
This repository does not:
- Provide tool‑specific instructions or vendor guidance
- Automate decisions, recommendations, or conclusions
- Use generative AI with confidential, personal, or protected data
- Make claims of factual correctness or model reliability
- Offer technical evaluation of models or training data
The emphasis is methodological and epistemic, not technical.
Generative AI may assist reasoning, but responsibility for accuracy, interpretation, and impact remains human.
AI outputs become analytically meaningful only through deliberate review, verification, and judgment.
This repository reflects general research and analytical practices intended for educational and professional reflection.
It does not represent the views, guidance, or policies of any employer, organization, government, or institution, and should not be interpreted as formal policy or procedural instruction.