AI Transparency Journal

The AI Transparency Journal (AITJ) is an international, peer-reviewed journal dedicated to research on transparent and human-compatible AI. We study how AI systems can be made observable, intelligible, and controllable, and how humans and AI can live and work together safely and responsibly. We recognize this as a dynamic, interdisciplinary challenge requiring advances in technical methods such as mechanistic interpretability and alignment research, in governance frameworks, and in our understanding of how joint human–AI systems behave, adapt, and evolve. The journal serves researchers, practitioners, and policymakers across computer science, philosophy, cognitive science, law, ethics, and public policy. As a diamond open access journal, AITJ makes all content freely available, with no fees for authors or readers.

Call for Papers — Inaugural Issue (Vol. 1, No. 1)

We are now accepting submissions for our inaugural issue on the theme Transparency for Human–AI Co-Evolution: From Mechanistic Interpretability to Governance and Control. The issue is organized around four thematic tracks:

  • Detect — Observing and characterizing autonomous agent behavior and human–AI collaboration dynamics
  • Understand — Mechanistic interpretability, explainability methods, and tooling for model inspection
  • Control — Alignment verification, safety guarantees, governance frameworks, and accountability structures
  • Co-Evolve — Cross-cutting research on human–AI mutual adaptation, expertise preservation, and societal implications

Read the full call for papers, including detailed topic lists and submission guidelines.