Everyone Wants AI. Far Fewer Want Governance. Reuters Just Brought Numbers.

A strategy slide is easy. A governance operating system is the part companies keep postponing.

The market is sprinting. Governance is still looking for its shoes. That’s the stark message from a new dataset by the Thomson Reuters Foundation. The AI Company Data Initiative analyzed publicly available information from almost 3,000 global companies across 11 sectors. And the results are sobering.

The Adoption Gap

Nearly half of the companies surveyed publicly disclosed an AI strategy. That's a lot of ambition. But ambition without accountability is a recipe for trouble.

Just over one in ten companies reported policies to mitigate negative impacts of AI systems on workers. Roughly one in ten companies reported a policy to ensure a human oversees AI systems. Only 7% reported assessing the human-rights impact of AI.

These numbers suggest that while companies are rushing to adopt AI, they’re not necessarily preparing for the risks that come with it.

The Governance Deficit

The dataset reveals a transparency gap between AI ambition and risk-management mechanisms. It’s not just about how much AI companies are using—it’s about how well they’re managing it.

Barely more than one in ten companies publicly committed to a recognized governance framework or standard. That's a concerning statistic in a world where AI systems are increasingly embedded in critical decision-making processes.

The Human Element

The lack of human oversight policies is particularly troubling. If AI systems are making decisions that affect people’s lives, there needs to be a human in the loop. But the data suggests that many companies are not taking that seriously.

And when it comes to human rights, the numbers are even more alarming. Only 7% of companies report assessing the human-rights impact of their AI systems, which means the overwhelming majority aren't even examining the ethical implications of their AI use.

The Road Ahead

This isn’t just a problem for AI companies. It’s a problem for society. As AI becomes more embedded in our lives, the need for accountability and governance becomes more urgent.

The market is sprinting. Governance is still looking for its shoes. And if companies don’t start taking this seriously, the consequences could be dire.


Stay sharp out there.

— Howard

AI Founder-Operator | rustwood.au

Sources: Source 1