3 Key Risks to Address to Safely Experiment with Generative AI

We had an active conversation at last week’s Coffee with Digital Trailblazers on navigating the risks when experimenting with generative AIs and large language models (LLMS). My panel included Joanne Friedman, Joe Puglisi, Heather May, Tyler James Johnson, Ashish Parulekar, Roman Dumiak, and Gary Berman. They shared where gen AI provides value and what risks Digital Trailblazers must address when experimenting with them in their organizations.

Risks in Generative AI and LLMs by Isaac Sacolick

We had already covered the impact of LLMs in every industry during a previous Coffee Hour, and I’ve been writing about the Gen AI opportunities for CIOs and their businesses. See my articles on five critical priorities for CIOs to lead on generative AI and how generative AI impacts your digital transformation priorities. If you want here-and-now ideas, see what ChatGPT and LLMs can do for your business and five AI search capabilities people will expect because of ChatGPT.

With every promising and transformational new technology and AI capability comes new risks, so it was time for us to discuss them at the coffee hour. Here’s the list we reviewed:

1. Elevate data governance: Now more important than ever

Data quality, categorization, mastering, and security were always important, but it’s been an uphill battle for many data governance leaders and CDOs to get executives’ priority and investment. Generative AI increases the risks by an order of magnitude in the following ways:

  • Where is your data going, and is critical IP leaking from your organization? Employees can easily cut/paste product information, code, and other trade secrets when prompting a gen AI tool.
  • Are you reviewing how SaaS uses your data to train their LLMs, whether they are using your data to provide the functionality to you, or they’re anonymizing the data for their generic models?
  • Do you have data quality practices against your unstructured data so that you can experiment with building a private LLM or participating in an industry-specific LLM, or using a small LLM?

Advice to Digital Trailblazers: If your leaders are excited by Gen AI’s promises, take advantage of the moment and bring data governance to the forefront of priorities.

2. Define the guardrails: How should employees experiment with Gen AI

Remember when IT was backpedaling through issues created by rogue and shadow IT? Some CIOs and IT leaders still struggle with this issue (if you are, let me know), and today, we have a new issue of rogue and shadow AI.   

Without guardrails, employees can pick their problem, use any Gen AI tool they can access, and use any accessible IP when prompting (see data governance above).

We discussed several issues where setting some guardrails and guidelines reduces risks.

  • How are people using their time, and are their principles or guidelines to help employees know what types of work are appropriate for trying a Gen AI tool?
  • What Gen AI tools should employees use, and how can they request a review of new tools that aren’t on the list? Where can IT create sandboxes to test new tools without exposing data to open LLMs?  
  • How should employees validate an AI’s results, especially when some AIs like ChatGPT train on older data, and most Gen AIs disclose the possibility of sharing false information when prompting an AI for facts?
  • Where can employees apply a Gen AI’s results, and what legalities need review? Are there copyright and trademark issues to consider based on the AI’s training data, and does it reference licensed material (such as GPL-licensed code) in its results?
  • Are there regulatory and safety considerations that employees should understand before prompting a Gen AI? This is particularly important in enterprises with multiple business units where some units have greater compliance and human safety considerations.
  • How will you validate your private LLMs for queries you can’t easily anticipate? How might a bad actor prompt your LLM to extract and use information in detrimental ways?

Advice to Digital Trailblazers: We want employees to experiment with defined practices and controls. Digital Trailblazers should communicate the guardrails and identify ways to monitor and enforce them. Consider developing an experiment database for employees to log their experiments, share findings, ask questions, and document unexpected issues. 

3. Communicate expectations: What’s shared with board directors, leaders, and employees

Gen AI generates many emotions, from those wanting to chase shiny objects to fears that armageddon is nearing. We have young people in the organization stressing about their careers and more experienced employees, fearing their skills becoming obsolete faster. Leaders have visions and goals where Gen AI can provide short- and long-term business benefits, but the risks need continuous review as Gen AI rapidly evolves.

So, what are leaders communicating about Gen AI to customers, partners, board directors, leaders, and employees to set realistic expectations, quell fears, and share risk considerations? Below are several considerations:

  • What are you saying to the board about how the enterprise will transform and leverage gen AI capabilities? Which opportunities will the leadership team pursue around efficiencies, product evolutions, employee experiences, and new business opportunities? How will the organization use and protect its IP?  
  • Is the leadership team collaborating and defining a strategy, setting objectives, and outlining priorities around Gen AI? Are they aligned on the risks and taking proactive mitigation steps?
  • What are you saying to customers about any Gen AI capabilities you plan to offer and how you’re using and protecting their data?
  • What are you telling employees about the organization’s opportunities using Gen AI, the data risks, and the guardrails? Are you listening to employees’ concerns and providing career counseling and pathing for people whose roles and skills may be impacted by Gen AI?
  • What are the learning opportunities for employees who want to learn AI tools and participate in other Gen AI and LLM initiatives?

Advice to Digital Trailblazers: Communications should be the center of all innovation and transformation programs, especially when there are business risks, inflated expectations, and anxieties that need addressing. If you’re working with Gen AI, consider your responsibilities for keeping people informed and engaged. 

One final and critical risk

There’s a significant risk for organizations that ignore generative AI opportunities, especially when competitors take big bets or make bold moves. Consider what happens in healthcare when LLMs are applied to patient data or in financial services when LLMs aid portfolio analysis; it’s too easy for laggards to fall behind and face disruption.  

Isaac Sacolick
Join us for a future session of Coffee with Digital Trailblazers, where we discuss topics for aspiring transformation leaders. If you enjoy my thought leadership, please sign up for the Driving Digital Newsletter and read all about my transformation stories in Digital Trailblazer.

No comments:

Post a Comment

Comments on this blog are moderated and we do not accept comments that have links to other websites.

Share

About Isaac Sacolick

Isaac Sacolick is President of StarCIO, a technology leadership company that guides organizations on building digital transformation core competencies. He is the author of Digital Trailblazer and the Amazon bestseller Driving Digital and speaks about agile planning, devops, data science, product management, and other digital transformation best practices. Sacolick is a recognized top social CIO, a digital transformation influencer, and has over 900 articles published at InfoWorld, CIO.com, his blog Social, Agile, and Transformation, and other sites. You can find him sharing new insights @NYIke on Twitter, his Driving Digital Standup YouTube channel, or during the Coffee with Digital Trailblazers.