AI’s Chief Problem — Tribes and Tribulations

"Chief" has long been included in government job titles, particularly in IT. But as organizations have evolved, the lines between what each chief does have blurred. AI has only made the issue more pressing.

Artificial intelligence has exploded across the public and private sectors, promising efficiency, insight and entirely new ways of working. Yet for all its transformative potential, one stubborn reality keeps emerging: AI governance is struggling to keep pace. The root cause, surprisingly, is not the technology itself. Instead, it lies in something far more familiar — and far more human.

It lies in our chiefs. And in the tribes they lead.

THE RISE OF THE DIGITAL CHIEFDOM


The word “chief” has been used in government titles since at least the 1200s, migrating into American governance as formal administrative systems took shape. Over time, as organizations became more complex, so did the chiefdoms responsible for running them. The modern era accelerated this trend dramatically.

When the Clinger–Cohen Act of 1996 formally established the chief information officer at the federal level, it marked a turning point. IT modernization needed central leadership, and creating a chief seemed the logical solution.

But this opened the door to an alphabet soup of new senior roles.

Soon we added:
  • Chief Technology Officer
  • Chief Data Officer
  • Chief Digital Officer
  • Chief Privacy Officer
  • Chief Innovation Officer
  • Chief Knowledge Officer
And now, in the age of AI, we are welcoming the newest entrant: the chief artificial intelligence officer (CAIO).

Each role emerged with purpose and good intentions. Yet each came with its own domain, mandate, staff and culture. In other words: its own tribe.

Every chief oversees a team that develops policies, procedures, objectives and norms. Over time, these teams grow protective of their missions. They build ways of working, communication styles, priorities and, yes, territories.

The CIO may focus on cybersecurity and enterprise architecture. The CDO prioritizes data quality and governance. The CTO emphasizes infrastructure and emerging technologies. The CPO is charged with minimizing risk. The innovation officer is tasked with pushing boundaries. And the CAIO? They are expected to transform everything — preferably quickly.

Each of these tribes is essential. But they are not always aligned. Often, they speak different operational languages and operate under different incentives. As AI enters the picture, these misalignments become more pronounced.

Because AI does not respect silos.

AI needs data quality (CDO), robust systems (CIO/CTO), ethical guardrails (CPO), experimentation (innovation) and strategic vision (CAIO). For the first time, all chiefs must share responsibility for a single technology whose applications cut across the entire enterprise.

This is where the tribulations begin.

AI'S CHIEF PROBLEM: OVERLAPPING MISSIONS, UNDEFINED BOUNDARIES


Organizations frequently complain that AI governance has become a "major stumbling block to innovation." A common reason is that no one knows precisely who is in charge. Questions like these arise:
  • Should the CAIO set enterprise AI policy?
  • Should the CDO own data pipelines?
  • Should the CIO maintain oversight of the tech stack?
  • Should the privacy office have veto power?
  • Who signs off on AI tools for HR, policing, finance or social services?
When roles overlap, accountability blurs. And when accountability blurs, decision-making slows. In many organizations, AI projects spend more time in review than in development.

The irony is striking: We created more chiefs to solve governance problems, and in doing so we created new ones. These new problems stem from a handful of issues:

Slowed Innovation: AI pilots can stall for months as they navigate approval processes involving multiple chiefs and committees. Each tribe assesses risks differently, and consensus is difficult to achieve.

Conflicting Policies and Priorities: Data governance rules may restrict access to data essential for AI training. Innovation teams advocate speed, whereas risk teams advocate caution. CTOs prefer stability; CAIOs need flexibility.

Organizational Confusion: Staff often do not know which direction to follow. Competing mandates create operational whiplash. In some agencies, three chiefs may lay claim to the same workflow.

Cultural Mismatch: Some tribes are mission-driven; others are compliance-driven. AI requires both, but cultural differences can impede shared understanding.

The result? AI potential remains largely untapped — not because organizations lack talent or ambition, but because tribal structures constrain collaboration.

FROM TRIBES TO TEAMS: RETHINKING AI GOVERNANCE


If AI is to achieve its promise, organizations need to re-examine how their tribes interact. Leaders must ask: Are our tribes working together — or working around each other?

The path forward includes:

1. Clarifying Decision Rights: Define which chief leads each part of the AI life cycle: strategy, ethics, data, infrastructure, model approvals, monitoring and workforce upskilling.

2. Establishing a Cross-Chief AI Governance Council: A standing group representing all chiefs ensures policies, priorities and risk frameworks are aligned rather than competing.

3. Creating Shared Outcomes: Shift KPIs from departmental performance to cross-functional success, e.g., “AI deployments meeting ethical, technical and operational benchmarks.”

4. Building a Unified AI Playbook: Document workflows, responsibilities, escalation paths and principles. Transparency reduces friction and eliminates guesswork.

5. Fostering a Culture of Collaboration: Encourage joint hiring, co-owned budgets, rotational assignments, and cross-tribal workshops. Culture shifts only when structures support them.

The most significant barrier to AI is not technical — it is organizational.

AI demands synthesis across data, technology, privacy, ethics, innovation and mission operations. Yet today’s chiefdoms were created in a sequential, siloed world. They were never designed for a technology that touches everything simultaneously.

To unleash AI’s potential, leaders must recognize the limits of tribal governance and commit to a more unified, federated model. When chiefs collaborate rather than compete, innovation accelerates, risks are better managed and organizations move forward with confidence.

AI may be the future, but the future depends on us — and how well we manage the tribes we ourselves have created.

Alan R. Shark, a senior fellow at the Center for Digital Government, is an associate professor at the Schar School of Policy and Government at George Mason University, where he also serves as a faculty member in the Center for Human AI Innovation in Society. He is also a senior fellow and former executive director of the Public Technology Institute, a fellow of the National Academy of Public Administration, and founder and co-chair of its Standing Panel on Technology Leadership. He is the host of the podcast series Sharkbytes.net. The Center for Digital Government and Government Technology are both divisions of e.Republic.