Thursday, January 15, 2026

Gartner Says CISOs Need to Champion AI TRiSM to Improve AI Results


By 2026, organizations that operationalize artificial intelligence (AI) transparency, trust, and security will see their AI models achieve a 50% improvement in terms of adoption, business goals, and user acceptance, according to Gartner, Inc.

Mark Horvath, VP Analyst at Gartner

Mark Horvath, VP Analyst at Gartner, said, “CISOs can’t let AI control their organization. AI requires new forms of trust, risk, and security management (TRiSM) that conventional controls don’t provide. Chief information security officers (CISOs) need to champion AI TRiSM to improve AI results, by, for example, increasing the speed of AI model-to-production, enabling better governance, or rationalizing the AI model portfolio, which can eliminate up to 80% of faulty and illegitimate information.”

Not only does AI pose considerable data risks, as sensitive datasets are often used to train AI models, but the accuracy of model outputs and the quality of the datasets may vary over time, which can have adverse consequences.

The implementation of AI TRiSM enables organizations to understand what their AI models are doing, how well they align with the original intentions, and what can be expected in terms of performance and business value.

AI TRiSM Is a Team Sport

Jeremy D’Hoinne, VP Analyst at Gartner

AI TRiSM cannot be led by a single business unit. “It calls for education and cross-team collaboration,” said Jeremy D’Hoinne, VP Analyst at Gartner. “CISOs must have a clear understanding of their AI responsibilities within the broader dedicated AI teams, which can include staff from the legal, compliance, IT, and data analytics teams.”

Without a robust AI TRiSM program, AI models can work against the business, introducing unexpected risks that cause adverse model outcomes, privacy violations, substantial reputational damage, and other negative consequences.

AI Risk Management Priorities

Since AI is often treated like any other application, CISOs may need to recalibrate expectations both within and outside their team. Once those expectations are set, the CISO and their teams need to take the following five AI risk management actions:

  1. Capture the extent of exposure by inventorying AI used in the organization and ensure the right level of explainability. 
  2. Drive staff awareness across the organization by leading a formal AI risk education campaign. 
  3. Support model reliability, trustworthiness and security by incorporating risk management into model operations. 
  4. Eliminate exposures of internal and shared AI data by adopting data protection and privacy programs. 
  5. Adopt specific AI security measures against adversarial attacks to ensure resistance and resilience. 
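As an illustration of the first action (capturing exposure through an inventory), such an inventory can start as a simple structured register of models recording who owns them, whether they touch sensitive data, and how explainable they are. The sketch below is a minimal hypothetical example under those assumptions; all names and fields are illustrative, not part of any Gartner framework or tool:

```python
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    """One entry in an AI model inventory (all field names are illustrative)."""
    name: str
    owner: str                 # business unit accountable for the model
    uses_sensitive_data: bool  # trained on or consuming sensitive datasets
    explainability: str        # e.g. "full", "partial", "black-box"

@dataclass
class AIModelInventory:
    records: list = field(default_factory=list)

    def register(self, record: AIModelRecord) -> None:
        self.records.append(record)

    def high_exposure(self) -> list:
        # Flag models that use sensitive data but lack an adequate
        # level of explainability -- candidates for priority review.
        return [r for r in self.records
                if r.uses_sensitive_data and r.explainability == "black-box"]

inventory = AIModelInventory()
inventory.register(AIModelRecord("churn-predictor", "marketing", True, "black-box"))
inventory.register(AIModelRecord("invoice-ocr", "finance", False, "partial"))
flagged = inventory.high_exposure()  # only "churn-predictor" is flagged
```

Even a lightweight register like this gives security teams a concrete starting point for the awareness, reliability, and data-protection actions that follow.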

Covered By: NCN MAGAZINE / Gartner

If you have an interesting article / report / case study to share, please get in touch with us at editors@roymediative.com or roy@roymediative.com, or call 9811346846 / 9625243429.
