
97% Use AI Yet 84% Require Oversight Before They Trust It, KPMG Says

DATE: 10/27/2025

AI use is skyrocketing in offices and homes, but trust is faltering, and CIOs are rushing to tighten controls.


People are using artificial intelligence in many parts of life, at work and at home, yet that heavy use has not translated into broad trust.

For business leaders, adopting AI has moved from optional to essential if companies want to remain competitive. Integrating AI across teams—from conversational assistants to automated workflows—can raise productivity and create fresh revenue streams, but it also increases exposure to governance and reputational risk.

CIOs and Chief Data Officers must steer organizations through this transition. Staff and customers routinely bring AI tools into everyday tasks, though confidence in those systems often trails actual use. That trust gap forces leaders to rethink controls, communication and oversight to avoid damaging outcomes.

In the United Arab Emirates, where new technology is embraced quickly, a KPMG report found 97 percent of people use AI for work, study or personal matters. That rate ranks among the highest worldwide, yet it hides deep concern: the survey shows 84 percent would trust AI only if they were confident systems were being used in a trustworthy way, and 57 percent said tougher rules are needed to make AI safer.

Similar patterns appear in the United Kingdom. KPMG reports just 42 percent of people in the UK are prepared to trust AI. Fifty-seven percent accept or approve of its use, but 80 percent want stricter regulation to help guarantee responsible deployment. Those figures should alarm executives: 78 percent of people in the UK worry about harmful outcomes linked to AI, and only 10 percent say they know about the AI rules that already exist there. The data point to a large gap between widespread use and public understanding or confidence.

When such a large share of a key market demands stronger oversight, launching customer-facing automation without addressing those doubts puts a company’s brand and customer relationships at risk.

Lei Gao, chief technology officer at SleekFlow, argues that the next phase of digital change will turn on accountability rather than raw uptake. "Adoption is no longer the issue; accountability is. People are comfortable using AI as long as they believe it’s being used responsibly," says Gao.

He adds a concrete example that matters for many companies. "In customer communication, for example, users trust AI when it behaves predictably and transparently. If they can’t tell when automation is making a decision, or if it feels inconsistent, that trust starts to erode," Gao points out.

Gao recommends that leaders treat AI governance as a core part of product design and customer experience. He lays out three priorities that move the conversation from technical capability to control and oversight.

The first is transparency. Organizations should be explicit about when people are interacting with an automated system and when a human takes over. Clear disclosure and visible handoffs give users context and reduce confusion.
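
To make that concrete, here is a minimal Python sketch of how a messaging backend might label automated replies and make the handoff to a human visible to the user. The `Message` and `Conversation` classes and their field names are illustrative assumptions, not taken from SleekFlow or any other product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch: every outbound message carries an explicit
# "sender" label so users can always tell bot from human.

@dataclass
class Message:
    text: str
    sender: str  # "ai_assistant" or a human agent's name -- always disclosed
    sent_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class Conversation:
    def __init__(self) -> None:
        self.transcript: list[Message] = []
        self.handled_by = "ai_assistant"

    def send_ai_reply(self, text: str) -> Message:
        # Prefix the disclosure so the label survives any client rendering.
        msg = Message(text=f"[Automated reply] {text}", sender="ai_assistant")
        self.transcript.append(msg)
        return msg

    def hand_off_to_human(self, agent_name: str) -> Message:
        # Make the handoff visible to the user rather than silent.
        self.handled_by = agent_name
        msg = Message(
            text=f"You are now chatting with {agent_name}, a human agent.",
            sender=agent_name,
        )
        self.transcript.append(msg)
        return msg
```

The design choice here is that disclosure lives in the message itself, not in the client UI, so the label cannot be lost when the conversation is rendered elsewhere or exported.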

The second priority is augmentation. Technology leaders must position AI to assist staff instead of sidelining them. Framing systems as tools that amplify human work lowers internal resistance and helps maintain service quality.

The third is continuous monitoring for tone, fairness and performance. Responsible deployment requires ongoing checks for bias, regular review of outputs and mechanisms for redress when errors happen. That work does not end at launch; it is an operational discipline.
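
As a rough illustration of what such a check could look like in code, the Python sketch below samples recent production outputs and flags the batch for human review when an average tone score drifts below a threshold. The scorer, the threshold, and the sample size are all placeholder assumptions; a real deployment would substitute its own classifier and data store.

```python
import random
import statistics

TONE_FLOOR = 0.7   # assumed minimum acceptable average tone score
SAMPLE_SIZE = 50   # assumed number of outputs to sample per check

def score_tone(text: str) -> float:
    """Placeholder scorer; a real system would call a tone/bias classifier here."""
    return random.uniform(0.5, 1.0)

def review_batch(recent_outputs: list[str]) -> bool:
    """Sample recent AI outputs and return True if humans should review them."""
    sample = random.sample(recent_outputs, min(SAMPLE_SIZE, len(recent_outputs)))
    avg_tone = statistics.mean(score_tone(text) for text in sample)
    needs_review = avg_tone < TONE_FLOOR
    if needs_review:
        # Routing to reviewers, not silently correcting, keeps humans in the loop.
        print(f"Tone drifted to {avg_tone:.2f}; routing sample to reviewers.")
    return needs_review
```

Run on a schedule against live traffic, a check like this turns "continuous monitoring" from a slogan into a recurring operational task with a clear owner.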

These steps apply across different platforms and architectures. Whether a company deploys base models through AWS Bedrock, manages data and workflows with Dell AI Factory, or embeds assistants such as SAP Joule, transparency, human oversight and routine evaluation should be built in from the start.

Putting those practices in place calls for practical actions: audit trails and logging that show how decisions were made; explainability features that help nontechnical stakeholders understand outcomes; governance frameworks that assign accountability inside the organization; and training programs so employees know when to escalate or override an automated decision.
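
A hedged sketch of the first of those actions might look like this in Python: each automated decision is logged with its inputs, model version, and confidence, and low-confidence cases are escalated to a human rather than executed. The field names and the confidence threshold are illustrative assumptions, and the log call stands in for whatever durable audit store an organization actually uses.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_decisions")

CONFIDENCE_FLOOR = 0.8  # assumed threshold below which a human decides

def record_decision(request_id: str, inputs: dict, decision: str,
                    confidence: float, model_version: str) -> str:
    """Write an audit entry for one automated decision and return who acts on it."""
    entry = {
        "request_id": request_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "decision": decision,
        "confidence": confidence,
        "model_version": model_version,
        "actor": "ai" if confidence >= CONFIDENCE_FLOOR else "escalated_to_human",
    }
    # In production this would go to durable, queryable storage, not just a log line.
    audit_log.info(json.dumps(entry))
    return entry["actor"]
```

Structured entries like these are what let nontechnical stakeholders reconstruct how a decision was made, which is the point of the explainability and accountability items in the list above.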

Regulators are watching public sentiment, and the mix of high usage with low trust will likely shape policy debates. Businesses that take a proactive approach to disclosure, impact assessment and user remedies can reduce the odds of regulatory backlash, customer churn or costly mistakes.

For CIOs and Chief Data Officers, the task is to balance speed with controls and to communicate tradeoffs clearly to boards and customers. That means linking technical KPIs to user-facing measures such as clarity, fairness and recoverability when a system fails.

The UAE example shows how fast societies can embrace new capabilities, but speed alone is no longer a sufficient gauge of success. Companies that focus only on performance metrics risk undermining long-term value if their systems behave inconsistently, produce biased outcomes or leave customers unsure who is accountable.

"The next milestone is trust and showing that automation can work in the service of people, not just performance metrics," Lei Gao concludes.
