
95% of retailers adopt generative AI, opening massive new cyberattack and data-leak risks

DATE: 9/25/2025

Retailers are sprinting into generative AI, boosting service and creativity even as sensitive data slips into new corners of the cloud.


The retail sector has pushed hard into generative AI adoption, and a new report from cybersecurity firm Netskope makes clear that the move comes with steep security costs. Rapid uptake is reshaping how retailers use AI, but it is also expanding the places where sensitive information can leak or be attacked.

Netskope’s data shows 95% of retail organizations are now using generative AI applications, a jump from 73% a year earlier. The pace of change reflects intense pressure on retailers to modernize customer experiences and operations, and to keep pace with competitors who are already deploying AI across merchandising, customer service and back-office automation.

That expansion has produced a larger threat surface. As retailers embed AI into daily workflows, they risk sending proprietary data into third-party models and cloud services. Netskope’s report frames the sector as moving out of a chaotic early phase and into a more controlled, corporate-led model. Usage of personal AI accounts by staff has fallen from 74% to 36% since the start of the year, and adoption of company-approved GenAI tools has risen from 21% to 52%.

The shift signals growing concern about shadow AI practices and an effort to centralize control over which tools employees can access. On corporate desktops, ChatGPT remains the most commonly used generative AI, appearing in 81% of retail environments. Google Gemini has gained traction at 60% adoption, while Microsoft's two Copilot offerings post 56% and 51% adoption.

ChatGPT has registered its first decline in popularity, the report finds, while Microsoft 365 Copilot shows accelerating adoption. Analysts point to Copilot’s deep integration with widely used productivity suites as a major factor in that rise, since employees can access AI features inside applications they open every day.

Security teams are being forced to grapple with the downside of widespread GenAI use: large volumes of sensitive data being fed into models. The single largest category of data exposed is company source code, which accounts for 47% of data policy violations tied to GenAI applications. Regulated information, such as confidential customer records and sensitive business data, accounts for about 39% of violations.

Those figures have prompted many retailers to ban consumer-oriented generative tools judged too risky. ZeroGPT tops the blacklist, blocked by 47% of organizations cited in the study amid concerns it stores user content and, in some reported cases, redirects data to third-party sites.

The move away from consumer apps is pushing interest toward enterprise-grade offerings from major cloud providers. These platforms let companies host models privately, control access, and build internal tools that do not send raw data to public services. OpenAI via Azure and Amazon Bedrock lead that segment, each reported in use by 16% of retail firms surveyed.

Even enterprise platforms carry risk. A single misconfiguration can link a powerful model to systems that hold proprietary information, creating exposure that could translate into substantial financial and reputational damage. Netskope warns that these platforms are not a cure-all and require careful configuration and governance.
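The kind of misconfiguration check Netskope is alluding to can be automated. As a minimal sketch (the setting names below are illustrative assumptions, not any vendor's real schema), a pre-deployment lint might flag an enterprise GenAI endpoint that is publicly reachable, unaudited, or retaining prompt data:

```python
# Hypothetical config lint for an enterprise GenAI endpoint.
# The keys are illustrative, not a real Azure OpenAI or Bedrock schema.

def lint_endpoint_config(config: dict) -> list[str]:
    """Return human-readable findings for risky settings."""
    findings = []
    if config.get("public_network_access", False):
        findings.append("endpoint allows public network access")
    if not config.get("logging_enabled", True):
        findings.append("prompt/response logging is disabled")
    if config.get("data_retention", False):
        findings.append("provider-side data retention is enabled")
    return findings

example = {"public_network_access": True, "logging_enabled": False}
for finding in lint_endpoint_config(example):
    print("WARN:", finding)
```

A check like this would run in CI before any endpoint is wired to systems holding proprietary data, turning "careful configuration and governance" into an enforced gate rather than a manual review.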

Risk vectors extend beyond browser experiments. The report finds 63% of organizations are calling OpenAI’s API directly, embedding generative AI into back-end systems and automated workflows. That deep integration accelerates productivity gains but increases the blast radius if models receive sensitive inputs or are linked to production systems without adequate safeguards.
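One common safeguard for that kind of direct API integration is to scrub prompts before they leave the network. The sketch below is an assumption about how such a filter might look, not Netskope's recommendation; the regex patterns are examples only, and a production deployment would use a real DLP engine:

```python
import re

# Illustrative guardrail: redact secret-like substrings from a prompt
# before it is handed to any external GenAI API. Patterns are examples.
SECRET_PATTERNS = [
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
]

def scrub_prompt(prompt: str) -> str:
    """Replace secret-like substrings before the prompt leaves the network."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

# The scrubbed text is what would be passed to the API client call,
# e.g. as the user message content in a chat completion request.
print(scrub_prompt("Contact ops@example.com, key sk-abcdefghijklmnopqrstuvwx"))
```

Placing the filter at the integration layer, rather than in each application, keeps the blast radius bounded even as more back-end workflows start calling the model.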

These GenAI-specific dangers overlay a broader pattern of poor cloud security hygiene. Attackers increasingly exploit trusted platforms to deliver malware, knowing that employees are more likely to interact with familiar services. Microsoft OneDrive is the most frequent source, with 11% of retailers reporting monthly malware incidents originating from the file-sharing platform. GitHub appears in about 9.7% of attacks, reflecting how developer tools and repositories can be misused as trusted channels in campaigns.

Employee behavior remains a persistent weak point. Social platforms like Facebook and LinkedIn are present in nearly every retail setting, at 96% and 94% penetration respectively, and personal cloud storage services are common on employee devices. Files uploaded to unapproved personal accounts are a frequent origin point for breaches; according to the report, 76% of policy violations tied to employee uploads to personal apps involve regulated data.

For retail security teams, the era of casual GenAI experimentation is over. Netskope’s findings press organizations to obtain full visibility into web traffic, introduce stricter controls to block high-risk applications and apply robust data protection policies that restrict what types of information can be shared with external models and APIs.
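In practice, the block-and-restrict posture the report recommends often reduces to an egress policy. A minimal sketch, with domain names and categories that are illustrative assumptions rather than anything from the report:

```python
# Hypothetical egress policy: block known high-risk apps, allow
# company-approved GenAI tools, and flag anything else for review.
BLOCKED = {"zerogpt.com"}                                   # judged too risky
APPROVED_GENAI = {"copilot.microsoft.com", "gemini.google.com"}

def egress_decision(domain: str) -> str:
    """Return the policy action for outbound traffic to a domain."""
    if domain in BLOCKED:
        return "block"
    if domain in APPROVED_GENAI:
        return "allow"
    return "review"  # unknown tool: surface to the security team

for d in ("zerogpt.com", "gemini.google.com", "new-ai-tool.example"):
    print(d, "->", egress_decision(d))
```

The "review" default is the important design choice: it preserves visibility into new shadow AI tools instead of silently allowing whatever is not yet on a list.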

Absent firm governance and clearer rules of engagement, the next generative AI project could become the next headline-making breach.
