Study: Trust in GenAI surges globally and India shows high adoption with cautious optimism

Despite 50% of Indian organizations having advanced
data infrastructure, data centralization and governance are still among the top
challenges for AI implementation.

Mumbai, India (Nov. 19, 2025): SAS, a global leader in data and AI, unveiled new research that
explores the use, impact and trustworthiness of AI. The IDC Data and AI Impact Report: The Trust
Imperative, commissioned by SAS, found that IT and business leaders report greater trust in
generative AI than in any other form of AI.

The global research exploring AI use and adoption also
found that only 40% of organizations are investing to make AI systems trustworthy through
governance, explainability and ethical safeguards, even though organizations that
prioritize trustworthy AI are 60% more likely to double the ROI of their AI projects.
Paradoxically, among those reporting the least investment in trustworthy AI
systems, GenAI (e.g., ChatGPT) was viewed as 200% more trustworthy than
traditional AI (e.g., machine learning), despite the latter being the most
established, reliable and explainable form of AI.

“Our research shows a contradiction: that forms of AI
with humanlike interactivity and social familiarity seem to encourage the
greatest trust, regardless of actual reliability or accuracy,” said Kathy
Lange, Research Director of the AI and Automation Practice at IDC. “As AI
providers, professionals and personal users, we must ask: GenAI is trusted, but
is it always trustworthy? And are leaders applying the necessary guardrails and
AI governance practices to this emerging technology?”

“Indian business leaders are demonstrating a
sophisticated view of AI,” said Noshin Kagalwalla, Vice President &
Managing Director, SAS India. “Unlike many global counterparts who focus on
cost savings, Indian organizations are 21% more likely to prioritize
decision-making and show elevated interest in product innovation and process
efficiency. Their lowest priority is personal productivity, which signals a
strategic shift. AI is being seen not just as a tool for operational gains, but
as a catalyst for core business transformation. This mindset, combined with a
strong emphasis on trustworthy AI, is positioning India as a leader in
responsible and impactful AI adoption.”


Access the full research report here. 

The research draws on a global survey of
2,375 respondents conducted across North America, Latin America,
Europe, the Middle East and Africa, and Asia Pacific. Participants
included a balanced mix of IT professionals and
line-of-business leaders, offering perspectives from both technology and
business functions. 

India’s current AI landscape reflects many of the
global trends. The country’s AI maturity is being accelerated by government
initiatives like the IndiaAI Mission and Deep Tech Fund, which are driving
infrastructure development, domain-specific models and start-up financing. This
public sector momentum is complemented by corporate giants and a thriving
start-up ecosystem, positioning India as a key player in the global AI race.


Emerging AI technologies evoke the most trust

Overall, the study found that respondents placed the most trust in emerging AI
technologies, like GenAI and agentic AI, rather than in more established forms of AI.
Almost half of respondents (48%) reported “complete trust” in GenAI, while a third (33%)
said the same for agentic AI. The least trusted form of AI is traditional AI: fewer than
one in five respondents (18%) indicated complete trust.

Even as they reported high trust in GenAI and agentic
AI, survey respondents expressed concerns, including data privacy (62%),
transparency and explainability (57%), and ethical use (56%). Despite their
high adoption rates, Indian organizations, on average, have 8% less trust in
GenAI than the global average. However, this skepticism didn’t lead to
hesitation – instead, it prompted them to invest more in trustworthy AI to
ensure GenAI works effectively. Other organizations with low trust in
generative AI can learn from this approach: Critically assess the technology,
identify trust gaps, and invest in solutions to close those gaps. This strategy
can lead to faster adoption and greater business impact.


Lagging AI guardrails weaken AI impact … and ROI

The study showed a rapid rise in AI usage, particularly GenAI, which has quickly
eclipsed traditional AI in both visibility and application (81% vs. 66%). This has
sparked new risks and ethical concerns.

Across all regions, IDC researchers identified a
misalignment in how much organizations trust AI versus how trustworthy the
technology truly is. Per the study, while nearly 8 in 10 (78%) organizations
claim to fully trust AI, only 40% have invested to make systems demonstrably
trustworthy through AI governance, explainability and ethical safeguards.

The research also showed a low priority placed on
implementing trustworthy AI measures when operationalizing AI projects. Among
respondents’ top three organizational priorities, only 2% selected developing
an AI governance framework, and fewer than 10% reported developing a responsible
AI policy. However, deprioritizing trustworthy AI measures may prevent these
organizations from fully realizing returns on their AI investments down the road.

Researchers divided survey respondents into
trustworthy AI leaders and trustworthy AI followers. Leaders invested the most
in practices, technologies and governance frameworks to make their AI systems
trustworthy – and appear to be reaping rewards. Those same trustworthy AI
leaders were 1.6 times more likely to report double or greater ROI on their AI
projects.


Lack of strong data foundations and governance stalls AI

As AI systems become more autonomous and deeply
integrated into critical processes, data foundations also become more
important. The quality, diversity and governance of data directly influence AI
outcomes, making smart data strategies essential to realizing benefits (e.g.,
ROI, productivity gains) and mitigating risks. 

The study identified three major hurdles preventing
success with AI implementations: weak data infrastructure, poor governance and
a lack of AI skills. Nearly half of organizations (49%) cite noncentralized data
foundations or unoptimized cloud data environments as a major barrier. This top
concern was followed by a lack of sufficient data governance processes (44%) and
a shortage of skilled specialists within the organization (41%).

Respondents reported that the No. 1 issue in managing the data used for AI
implementations is difficulty accessing relevant data sources (58%). Other leading
concerns included data privacy and compliance issues (49%) and data quality (46%).

In India, 50% of organizations report having advanced
data infrastructure, yet data centralization and governance remain among the
top three challenges. This paradox reflects a growing awareness: as data
maturity increases, so does the complexity of managing it. The rapid adoption
of AI is pushing Indian enterprises to pursue data excellence more
aggressively, recognizing that strong data foundations are essential for
trustworthy and impactful AI.

“For the good of society, businesses and employees –
trust in AI is imperative,” said Bryan Harris, Chief Technology Officer at SAS.
“In order to achieve this, the AI industry must increase the success rate of
implementations, humans must critically review AI results, and leadership must
empower the workforce with AI.”

About SAS

SAS is a global leader in data and AI. With SAS
software and industry-specific solutions, organizations transform data into
trusted decisions. SAS gives you THE POWER TO KNOW®.

SAS and all other SAS Institute Inc. product or
service names are registered trademarks or trademarks of SAS Institute Inc. in
the USA and other countries. ® indicates USA registration. Other brand and
product names are trademarks of their respective companies. Copyright © 2025
SAS Institute Inc. All rights reserved.