12 Helpful Gen AI Strategies for Winning with Data Science

Every year, research reports cover the level of investment companies make in data science, the relatively small percentage of ML models that make it into production in front of customers, and the challenges in delivering ongoing business value. Even after data scientists successfully deploy models and deliver benefits, maintaining those models remains an ongoing challenge.


At a recent Coffee with Digital Trailblazers, I invited Howard Friedman and Akshay Swaminathan, authors of Winning with Data Science, as guests to share insights on the book and answer questions from the Digital Trailblazer Community.

What separates ML and gen AI winners from failures?

My key question in this Coffee Hour: Which practices separate the winners from the teams that lose, or don’t win enough?

Friedman and Swaminathan’s first responses illustrate the challenges in developing and deploying models. “My absolute failure on the first go-around with ML was a complete lack of thinking about deployment,” says Friedman. “I was so focused on the modeling that I didn’t think about the company and the operations.”

Swaminathan shared many of the issues data scientists face working in regulated industries where human safety is a concern. “If you look at how many of those models actually get deployed within health systems and end up impacting the lives of patients and clinicians, it’s an order of magnitude close to zero,” he says.

“There are two categories of reasons why data science efforts fail, and we divided them into data reasons and business reasons,” says Friedman. “The data reasons include not having high-quality data, the infrastructure, the personnel, the investments, or the quality control. Our book emphasizes the business reasons, including project management, communications, and collaboration. I’ve often seen failures because business cases weren’t there.”

“What we’re seeing right now with generative AI is that many of the presumed benefits of these models are not always playing out,” adds Swaminathan.

12 AI success strategies for Digital Trailblazers

During the Coffee Hour, we discussed one use case in healthcare where ML reviews thousands of daily chatbot messages and flags patients at risk. What follows are Friedman and Swaminathan’s recommendations and insights into finding appropriate business cases, developing models in regulated industries, and deploying human-in-the-loop ML and gen AI solutions that deliver value.

  1. Start with the deployment in mind. Think about how a model is going to be deployed and used. Who are the humans using the ML, and who in IT do we need to get engaged? Partner with the end users and think about deployment from day zero.
  2. Define the problem. “If you’re in acute distress, having a mental health crisis, and you have to wait eight, nine hours, that’s kind of a recipe for disaster. The goal was to build an ML model that could identify the messages patients sent indicating suicidality, homicidal ideation, self-harm, or domestic violence and then route those as quickly as possible to our crisis response team.”
  3. Learn the end users’ workflow. “The crisis response team has a Slack channel where any clinician can come in and post, ‘Hey, I have a patient experiencing a crisis.’ On the Slack channel, they’ll assign which crisis counselor to tag to contact the patient. Their entire workflow was in Slack.”
  4. Define the business case. “I see many companies feeling pressure to have a gen AI strategy, and it makes me very uncomfortable because it means someone is starting with a proposed solution and saying, ‘Figure out how to use this tool.’ [Instead, ask questions]: What are the main issues in the business? What are the opportunities we think we can have in revenue? What are the issues that are causing operations costs to be high? You need to know a bit of the right language and be willing to ask good questions and challenge assumptions.”
  5. Establish compliance and governance upfront. “Governance itself is something that people often wrestle with later after the problem has reared its ugly head versus having it defined at the forefront.”
  6. Cultivate a multidisciplinary team. “Building the model is often the biggest focus. You need many other skill sets to get to deployment, including an implementation science skillset, MLOps, data engineering perspectives, and others.”
  7. Identify a data-compliant, scalable architecture. “[In healthcare], have a secure, HIPAA-compliant data lake and robust de-identification. We had a business agreement with Slack, so even if patient messages contained identifying information, it was okay because we contracted for a HIPAA-compliant Slack. OpenAI can create a secure Azure instance for you where it’s a closed system. Open source gen AI models have come a long way if you have the in-house skills to build and host your own summarizer on your own servers.”
  8. Use statistical techniques and create synthetic data. “Less than 1% of messages were crisis messages. So, we had to devise ways to up-sample for those events, making those events more prevalent in our training dataset.”
  9. Reduce bias and improve training data. “We ensured that our training data selection process was not biased towards or against a certain population. We did a prospective trial where we deployed, and for a whole month, we measured all the messages that got flagged and collected labels on them. The model performed equally well across age groups, sexes, and racial groups. We also set up dashboards to monitor the model performance dynamically over time.”
  10. Aim well past minimal legal requirements. “[Sometimes], there’s a goal to address the legal minimum and nothing more. They don’t see the value of being best in class, and I think that’s a real shame because we know the terrible ramifications of biased models. Simply addressing the legal requirements is often insufficient, and [improving models and the quality of training data] must be an iterative cycle.”
  11. Measure the outcomes. “We reduced the response times from eight to nine hours to eight to nine minutes, and it’s still in production today. It’s helped detect close to 10,000 real patient mental health crises.”
  12. Continuously improve training data. “The models in deployment were collecting labels. We updated the model over time using those labels, and the training set is growing and growing. This is the beauty of having this feedback loop where your training set can grow over time.”
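The up-sampling described in strategy 8 can be sketched in a few lines. This is a minimal illustration of randomly oversampling a rare positive class until it reaches a target share of the training set; the messages, labels, and target ratio below are hypothetical, not the authors' actual pipeline.

```python
import random

def upsample_minority(records, is_positive, target_ratio=0.3, seed=42):
    """Randomly duplicate rare positive examples until they make up
    roughly `target_ratio` of the training set."""
    rng = random.Random(seed)
    positives = [r for r in records if is_positive(r)]
    negatives = [r for r in records if not is_positive(r)]
    # Number of positives p needed so that p / (p + n) ≈ target_ratio
    needed = int(target_ratio * len(negatives) / (1 - target_ratio))
    extra = [rng.choice(positives) for _ in range(max(0, needed - len(positives)))]
    balanced = records + extra
    rng.shuffle(balanced)
    return balanced

# Hypothetical data: roughly 1 crisis message per 100
messages = [{"text": f"msg {i}", "crisis": i % 100 == 0} for i in range(1000)]
balanced = upsample_minority(messages, lambda m: m["crisis"])
```

Random duplication is the simplest option; teams with stronger requirements often generate synthetic minority examples instead (e.g., SMOTE-style techniques), which the same interface could wrap.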
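The per-group monitoring in strategy 9 can also be sketched simply: compute a metric for each demographic slice of the labeled, flagged messages and alert when the gap between groups is too large. The group names, labels, and threshold here are hypothetical stand-ins.

```python
from collections import defaultdict

def recall_by_group(examples):
    """Compute recall (true positives / actual positives) per group."""
    tp = defaultdict(int)
    pos = defaultdict(int)
    for ex in examples:
        if ex["label"]:  # an actual crisis message
            pos[ex["group"]] += 1
            if ex["predicted"]:
                tp[ex["group"]] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g]}

def flag_disparity(recalls, max_gap=0.05):
    """True if the gap between best- and worst-served group exceeds max_gap."""
    vals = list(recalls.values())
    return max(vals) - min(vals) > max_gap

# Hypothetical month of flagged messages with collected labels
examples = [
    {"group": "18-30", "label": True, "predicted": True},
    {"group": "18-30", "label": True, "predicted": True},
    {"group": "31-50", "label": True, "predicted": True},
    {"group": "31-50", "label": True, "predicted": False},
]
recalls = recall_by_group(examples)
```

Run on each batch of newly labeled messages, a check like this can feed the monitoring dashboards the authors describe, turning "performed equally well across groups" into a test rather than a one-time claim.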

Finding top business cases for ML and gen AI

We ended the session with recommendations for finding top business cases. “Have blue sky sessions, bring together the C-suite, data scientists, and ops people, and flesh out some ideas,” says Friedman.

The recommendation aligns with what I wrote in previous posts, including how CIOs can deliver short-term gen AI wins and visionary impacts by increasing the frequency of blue sky planning and brainstorming. I also covered several best practices in how generative AI impacts your digital transformation priorities.

So, where should Digital Trailblazers seek business cases?

“Focus on making money and break down the drivers of profitability,” says Friedman. “Human resources is often a great area because, at a large company, people may be spending a lot of time responding to queries. Financial analytics and understanding customer-level profitability are standards. If you have a B2C business with physical locations, site selection modeling is a standard thing to do.”

At the end of the Coffee Hour, I recommended how Digital Trailblazers should interpret the C-level and board’s anxieties about gen AI capabilities. “What they’re really saying is: What are my competitors, or potential competitors I don’t know about, doing that I can’t see yet? If they come out with something revolutionary in the next six to 12 months and I don’t have a game plan, I’m going to fall behind.”

Take that as a directive, not a requirement, and use Friedman and Swaminathan’s recommendations on Winning with Data Science as best practices in your journey.

The full recording of Episode 80 of the Coffee With Digital Trailblazers will be available on the Digital Trailblazer Community soon – so please sign up for access.

Isaac Sacolick
Join us for a future session of Coffee with Digital Trailblazers, where we discuss topics for aspiring transformation leaders. If you enjoy my thought leadership, please sign up for the Driving Digital Newsletter and read all about my transformation stories in Digital Trailblazer.

Digital Trailblazers! Join us Fridays at 11am ET for a live audio discussion on digital transformation topics: innovation, product management, agile, DevOps, data governance, and more!

Join the Community of StarCIO Digital Trailblazers



About Isaac Sacolick

Isaac Sacolick is President of StarCIO, a technology leadership company that guides organizations on building digital transformation core competencies. He is the author of Digital Trailblazer and the Amazon bestseller Driving Digital and speaks about agile planning, devops, data science, product management, and other digital transformation best practices. Sacolick is a recognized top social CIO, a digital transformation influencer, and has over 900 articles published at InfoWorld, CIO.com, his blog Social, Agile, and Transformation, and other sites. You can find him sharing new insights @NYIke on Twitter, his Driving Digital Standup YouTube channel, or during the Coffee with Digital Trailblazers.