
Introducing GMD-AI: Why We Started This Group

Van-Tuan Tran · announcement · vision · AI safety

Why GMD-AI?

Artificial intelligence is advancing at a pace that outstrips our ability to understand, verify, and secure it. Large language models generate text indistinguishable from human writing. Generative models produce images, audio, and video that blur the line between real and synthetic. Meanwhile, adversarial attacks grow more sophisticated, and the systems we rely on remain fragile in ways we are only beginning to map.

We founded the General Machine Intelligence for Discovery, Detection, Defense (GMD-AI) research group because we believe these challenges cannot be addressed in isolation. Discovering what AI can do, detecting when it deceives, and defending against the threats it enables are deeply intertwined problems. A group that works across all three has a sharper view of the landscape than one confined to a single pillar.

Our Three Pillars

Discovery

AI opens doors that were previously closed. From modular deep learning architectures that adapt to new tasks without forgetting old ones, to federated learning systems that train across institutions without exposing private data, we explore the frontiers of what machine intelligence makes possible. Our research in continual learning, mixture-of-experts models, and distributed optimization pushes toward systems that learn more like humans: incrementally, collaboratively, and efficiently.
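To make the federated learning idea concrete, here is a minimal sketch of FedAvg-style aggregation: clients train on their own data and the server averages their parameters weighted by local dataset size. This is illustrative only; the client weights and sizes below are made up, and real systems add client sampling, secure aggregation, and many communication rounds.

```python
import numpy as np

def fedavg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Weighted average of client parameter vectors (the FedAvg server step)."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

# Two hypothetical clients with different amounts of local data
w1 = np.array([1.0, 0.0])
w2 = np.array([0.0, 1.0])
global_w = fedavg([w1, w2], [30, 10])  # client 1 holds 3x the data,
                                       # so it dominates the average
```

The key property is that raw data never leaves a client; only parameter updates are shared and combined.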

Detection

As generative AI becomes more capable, the question shifts from “can AI create this?” to “can we tell that AI created this?” Our work in deepfake detection, AI-generated content identification, and generation-time watermarking aims to build reliable tools for authenticity verification. We investigate how detection methods generalize across generators, survive post-processing, and remain robust under adversarial manipulation. When AI creates, someone must verify — and we intend to build the tools to do so.
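As a rough intuition for generation-time watermarking, here is a toy green-list sketch: a secret key deterministically splits the vocabulary in half at each step, the generator favors "green" tokens, and a detector with the key measures the green fraction. Everything below is hypothetical and operates on characters rather than model logits; it is a cartoon of the idea, not any production scheme.

```python
import hashlib

VOCAB = list("abcdefghijklmnopqrstuvwxyz ")  # toy character vocabulary

def green_list(prev_token: str, key: str = "secret") -> set:
    """Deterministically mark ~half the vocabulary 'green', seeded by the
    previous token and a private key."""
    greens = set()
    for tok in VOCAB:
        h = hashlib.sha256(f"{key}:{prev_token}:{tok}".encode()).digest()
        if h[0] % 2 == 0:
            greens.add(tok)
    return greens

def toy_watermarked_text(n: int, key: str = "secret") -> str:
    """Greedily emit only green tokens (a stand-in for biasing logits)."""
    text = "a"
    for _ in range(n):
        greens = green_list(text[-1], key) or set(VOCAB)  # guard: never empty
        text += min(greens)
    return text

def green_fraction(text: str, key: str = "secret") -> float:
    """Fraction of transitions landing on green tokens. Watermarked text
    scores well above the ~0.5 expected by chance."""
    hits = sum(cur in green_list(prev, key) for prev, cur in zip(text, text[1:]))
    return hits / max(len(text) - 1, 1)
```

A detector holding the key runs `green_fraction` and flags text whose score is statistically far above one half; robustness questions (paraphrasing, cropping, adversarial edits) are exactly where this simple picture gets hard.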

Defense

Every new capability introduces new attack surfaces. Adversarial examples fool classifiers. Data poisoning corrupts training pipelines. Model extraction steals intellectual property. Our defense research covers adversarial machine learning, robustness certification, and secure federated training protocols. We study not just how to patch individual vulnerabilities, but how to design systems that are resilient by construction.
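To illustrate the first of those attack surfaces, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression "classifier". The weights and input are random stand-ins, not any real model; the point is only the mechanic of a one-step gradient-sign perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)  # hypothetical fixed model weights
b = 0.0

def predict(x: np.ndarray) -> float:
    """Probability of class 1 under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x: np.ndarray, y: int, eps: float = 0.3) -> np.ndarray:
    """One FGSM step: move x by eps in the sign of the loss gradient.
    For logistic loss, d(loss)/dx = (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=8)
y = int(predict(x) > 0.5)  # treat the model's own label as ground truth
x_adv = fgsm(x, y)         # perturbed input that pushes p away from y
```

Each coordinate moves by at most `eps`, yet the prediction shifts away from the correct label; defenses such as adversarial training and robustness certification aim to bound exactly this kind of shift.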

Who We Are

GMD-AI is a collaborative research group currently based across two institutions in Ireland:

  • Van-Tuan Tran (Trinity College Dublin) — researching modular deep learning, federated learning, and continual learning.
  • Hong-Hanh Nguyen-Le (University College Dublin) — researching deepfake detection, AI security, adversarial ML, and federated learning.

We share a conviction that rigorous, open research is the best path toward AI systems that are both powerful and trustworthy.

What to Expect

This site is our home for sharing research, tools, and ideas. You will find:

  • Publications — our peer-reviewed papers and preprints
  • Projects — ongoing research initiatives with details on goals and progress
  • Software — open-source tools and libraries we build and maintain
  • Blog — research notes, tutorials, technical deep dives, and updates

We believe in building in the open. Our code is on GitHub, our papers are on arXiv, and this blog is where we think out loud.

Get Involved

If our work resonates with you — whether you are a researcher, a practitioner, or simply someone who cares about where AI is headed — we would love to hear from you. Check our contact page or find us through our institutional profiles.

The problems are hard. The stakes are real. Let’s work on them together.