What Drives Agile Teams to Become Agile Organizations

Last week I moderated a panel at nGage's Enterprise Transformation Exchange on Developing an Agile IT Organization: Concepts, Culture and Concerns. The message from panelists was clear. Organizations looking to transform need to go beyond scrum practices and improving IT project execution. CIOs and other IT leaders are looking to agile practices to transform operations, develop new products, drive stronger collaboration between business and technology organizations, and establish a more agile mindset from employees up to executives. That means evolving from agile practices to an agile organization, culture, and mindset.

But there are stumbling blocks along the way and problems every organization needs to solve on their own. Here are some questions we received and how we collectively answered them.

How do you help developers adopt agile coming from a ticket-based support practice?

Development teams that were largely supporting enterprise systems are being asked to automate more processes, integrate more data, and develop more substantive changes to enterprise workflows. They are being asked to innovate and not just fix things. These business needs are a mismatch with ticket-based IT systems that were designed more to help IT track and respond to incidents and small change requests made by end users.

Practice changes aren't easy to implement, and you can't just push the gas pedal to drive change faster. I recommended to the person asking the question that he start with the basics: getting his team to commit to what it could get done and having a daily dialog to answer technical questions or issues. This is a lead-in to agile, sprints, and standups, but it can be less intimidating than approaching people with a significant practice change.

Handling different levels of scope creep

This is an interesting question. Since agile enables the product owner to adjust priorities every sprint, has agile made the notion of "scope creep" obsolete?

The answer is a matter of scale. We want product owners to adjust priorities, especially when there is strong feedback from customers and end users on the desired behavior of the product. We want teams to find faster and simpler ways to implement a user experience. These changes might require teams to take two steps backward in order to leap six steps forward.

But what if the product owner wants to change the release dates, restate the definition of MVP, add requirements that require a significant architecture change, prioritize implementations that require additional investment, or make significant compromises on security or quality requirements? We can all recognize that these are major shifts in strategy or attempts to work around standards - even when those standards aren't well defined.

It takes maturity to have practices in place to address these issues when they come up. Teams should have a vision statement for every release and feature that helps set boundaries. Technical standards need to be documented. And there should be a process (or dare I say, a governance model) for reviewing, resolving, and sometimes compromising when more strategic changes need consideration.

Running scrum with very small teams

A third question came up about how to run scrum when teams are very small and people have to wear multiple hats. When talking about scrum, people hear about the need for product owners, scrum masters, team leads, business analysts, developers with different skills, and testers. That can be hard to pull off when small organizations are supporting many products and platforms.

One way to address this is to start with small ambitions. Small teams may require different people writing stories based on subject matter expertise. For some stories, one developer may be coding while another is testing. The responsibilities of the scrum master may have to be shared across team members.

Starting with small ambitions allows team members to learn different responsibilities and adjust to different roles.

Agile and scrum are not cookie cutter practices

While agile has a set of basic principles outlined in the agile manifesto and scrum has some defined practices, there isn't a one-size-fits-all approach when applying them to different organizations and business drivers. In fact, it's an evolving practice that needs maturity and realignment as priorities and organizational needs advance. Even when you have certified scrum masters onboard, are using agile coaches, or are adopting a framework like SAFe or LeSS, you have to build and adapt the practice based on many factors.

So once teams get used to the basic practices, they need to learn to mature them over time. They need to bring in outside help when useful, but learn to drive the practice on their own. That's what separates teams that are practicing agile from ones that are becoming agile organizations.

continue reading "What Drives Agile Teams to Become Agile Organizations"

12 Warning Signs of Bad Application Architecture

At a #CIOChat this weekend we were asked about warning signs for bad architecture. Here was my quick response, a tweet-sized summary of architecture that "smells bad" -

Many responses were around "accidental architecture" that is "cobbled together" and sometimes degrades into "Frankenstein architecture." There were many technical issues identified, such as "data integration via feed files", "root cause analysis is rarely done", "testing with real live data", "diagrams remind you of fettuccine alfredo" (if there are diagrams), and the awful scenario of "when a system crashes and you go to do a restore and the tape is blank."

There were also many business issues identified: "Architecture does not start with technology," and architectures become legacy issues "when the business treats everything as a one time cost."

Knowing the architecture issues before business issues emerge

By the time you have outages, poor performance, problems making enhancements, difficulties cross-training new developers, and other issues, it's already too late to make architecture assessments and improvements. A seasoned CIO, CTO, or application architect can spot signs of trouble well before business and technical risks begin to materialize.

Some things that I look for -

  1. Lack of high level architecture documentation, or documentation is significantly outdated.
  2. Large number of architecture components or codebase size relative to the number of developers and engineers supporting the application.
  3. Minimal application monitoring and logging in place, so no one really knows how well the application is performing or what to do if issues are reported.
  4. Application platforms or components aren't upgraded regularly, or worse, they are running on unsupported versions.
  5. The last few attempts to enhance the application were categorized as disasters either because the improvements never made it to production, they took too long to implement, or their deployments created stability issues.
  6. There are no defined regression tests. Not even manual ones.
  7. The application is "stuck" on its computing architecture making it difficult to move to the cloud or alternative infrastructure. 
  8. There is significant investment in proprietary code to perform integration or technical functions that would be "out of the box" options on modernized platforms.
  9. Developers are afraid to make changes. New developers have significant learning curves before they can do basic enhancements. 
  10. There are hard coded connections, credentials, and other system parameters embedded in the code.
  11. Parts of the code are not in version control.
  12. The build and deployment process is manual. The documentation on the process either doesn't exist, or is something that needs to be modified with every release. 
I'm sure there are more than twelve!
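Several of these warning signs can even be checked automatically. As a minimal sketch of warning sign 10 - hard-coded credentials and connection strings - here's a hypothetical scan; the patterns and sample code below are illustrative only, not a production secrets scanner:

```python
import re

# Patterns that often indicate hard-coded credentials or connections.
# These are illustrative examples, not an exhaustive set.
SUSPECT_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"(jdbc|mongodb|mysql)://\S+"),  # connection strings in code
    re.compile(r"AKIA[0-9A-Z]{16}"),            # AWS access key id format
]

def scan_source(text: str) -> list[str]:
    """Return the source lines that match any suspect pattern."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in SUSPECT_PATTERNS):
            hits.append(line.strip())
    return hits

sample = 'db_password = "s3cret"\nurl = "jdbc://prod-db:5432/app"\ntimeout = 30'
print(scan_source(sample))  # both suspect lines are flagged; timeout is not
```

Running a check like this in the build pipeline turns a subjective warning sign into an automated gate.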

continue reading "12 Warning Signs of Bad Application Architecture"

10 ways Digital Organizations are Smarter Faster than their Peers

A quote from my book, Driving Digital: The Leader's Guide to Business Transformation Through Technology


Here's how digital organizations are smarter and faster -

  1. They change their culture by developing "bottom-up" practices like agile, ideation, relationship-developing CRM, and citizen development.

  2. They target their programs to early adopters, know how to scale their practices to mainstream participants, and collaborate to manage detractors. 

  3. They establish product management to target attractive markets and define customer value propositions. They have a focus on delivering amazing customer experiences. 

  4. They leverage agile practices and mindset to adjust priorities based on customer feedback. They strike a balance between self-organization and some process rigor. 

  5. They automate the most error prone, repetitive, and costly tasks in DevOps and business processes.

  6. They leverage data scientists, but also promote citizen data science programs so that a larger segment of the organization can be data driven.

  7. They invest in data governance so that people across the organization get access to appropriate data, have data dictionaries so that data and analytics are applied correctly, and have an ongoing investment in data quality.

  8. Digital organizations target a balanced portfolio of initiatives: some cost-savings and compliance driven, others targeting growth/revenue and research into new technical capabilities (AI, IoT, blockchain ...).

  9. They establish reference technical architectures and data models, and bake in security best practices to develop standards and roadmaps toward a reusable set of platforms and data assets. 

  10. They replace or sunset legacy systems so that they don't drain resources and slow down strategic efforts.

A lot more on this in Driving Digital!

continue reading "10 ways Digital Organizations are Smarter Faster than their Peers"

Leading Digital Transformation: Finding the right velocity in driving organizational change

I was recently asked, "Isaac, what keeps you up at night?" My answer is simple. In transformation programs, going too slow can lead to your business being disrupted. If your competitors are putting out great experiences, winning business by leveraging data, or demonstrating strategic business impacts with AI, blockchain, or IoT, then your business is at risk. Here's a quote from my book Driving Digital: The Leader's Guide to Business Transformation Through Technology:

Going too slow can also be very detrimental. It can lead to business failure and disruptions to entire industries. -- Isaac Sacolick

I then go on to tell the story of the newspaper industry that fell off the digital disruption cliff starting after the 2001 internet bubble burst. They simply couldn't adjust their business models fast enough to digital disruptions impacting how they serviced readers and advertisers.

Going too fast can burn out and alienate the team

So as a digital transformation leader - which can be anyone in the organization leading or participating in a digital transformation initiative, from the CEO, the CIO, the CDO, and the CMO down to leaders on agile teams - going too fast is also an issue. In a previous post, I shared three signs you have overloaded your digital transformation program. Pressure to take on too many initiatives may bottleneck the overall program and burn out participants.

So what keeps me up at night is striking the right balance: going fast enough that the organization leads digital transformation versus its peers and competitors, but steady enough that teams don't burn out along the journey.

What it's like leading transformation programs

This leads me to another question that I'm often asked, "What is it like to lead transformation programs?" Here is how I answer it:

Leading transformation programs is like wearing a huge target on your back with people ready to shoot arrows at you. On one hand, there is pressure from the executives to get more done faster and without making their worlds difficult. On the other, there are detractors to digital transformation in the organization who want to stand on the sidelines and prefer that old business methods remain intact. There are select members of your team who might want to run in a different direction, or slow down, or speed up, or implement things differently than the strategic direction. Finally, there are colleagues who may be envious and want to be the anointed leader of the transformation program.

It's not an easy task, and I share some of the challenges of leading transformation programs in Driving Digital. But here are three things that can help leaders avoid getting an oversized target on their backs:

  • Communicate early, frequently, and targeted to the audience

  • Champion early adopters who are willing to lead and teach the larger organization

  • Pick the right battles. Consider your values before jumping into every debate

It's a journey. Make sure you lead efforts so that you're there for the whole ride.

continue reading "Leading Digital Transformation: Finding the right velocity in driving organizational change"

5 Recommendations on Implementing DevOps CI/CD Pipelines

CI/CD is one of the key DevOps practices because it enables teams to align on development practices and ensures there is a consistent, reliable, and automated way to deliver applications to multiple compute environments.

If you're new to CI/CD, consider reading my InfoWorld posts on What is CI/CD and on getting started with CI/CD. What you'll see is that there are many steps to mature this practice with some steps that need alignment of team practices and others that require engineering work. There isn't one way to implement CI/CD and it's easy to get lost on where to start and how much to implement. So with that in mind, here are five of my recommendations on implementing continuous integration and continuous delivery:

1. Identify business and technical objectives

For most organizations, CI/CD pipelines aren't implemented overnight and are more often implemented incrementally. That means most devops teams have to prioritize what practices to develop, what processes to automate, and what platform stacks to focus on.

The best way to do this is to look at short term business priorities and align both devops and CI/CD objectives to them. If there are new applications being developed then that's an optimal time to focus on the CI/CD pipeline for them. If you are undergoing a cloud migration, then standardizing architectures and developing CD pipelines for applications that undergo the most frequent changes is a good starting point.

2. Start CI/CD with Continuous Testing

My friend and colleague said it best:

"Organizations need to focus on the fundamentals first, meaning ensure the source code has programmatic unit tests, passes static code analysis and security scans." -- Thomas J. Sweet

Increasing the speed of delivery only works if you're able to deploy quality code and near defect-free applications. That means implementing automated tests and plugging them into CI/CD to support continuous testing. And not just unit tests: add in code analysis, security, and performance testing that should all be triggered from CI/CD with every major push to staging and production environments.
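As a sketch of what a continuous testing gate might look like, here's hypothetical Python where each check is a stubbed callable. A real pipeline would shell out to a test runner, static analyzer, and security scanner, and block the push on any failure; the names and stub results below are invented for illustration:

```python
from typing import Callable

def run_quality_gate(checks: dict[str, Callable[[], bool]]) -> tuple[bool, list[str]]:
    """Run every named check; the gate passes only if all checks pass."""
    failures = [name for name, check in checks.items() if not check()]
    return (len(failures) == 0, failures)

# Stubs standing in for real tools; each would normally inspect an exit code.
checks = {
    "unit tests": lambda: True,       # e.g. test runner exit code == 0
    "static analysis": lambda: True,  # e.g. linter reports no errors
    "security scan": lambda: False,   # e.g. scanner found a vulnerability
}
passed, failures = run_quality_gate(checks)
print(passed, failures)  # the push would be blocked on the failing check
```

Wiring a gate like this into every push to staging and production is what turns individual tests into continuous testing.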

3. Standardize the architecture before implementing CD

Automation provides value when it can be repeated reliably. For development teams, that means deployment to multiple types of development and testing environments and one or more production environments. If the architecture of these environments is not standardized, it's hard to get the benefits of automation.

If you have to clean up the architecture, consider automating the infrastructure as code using Chef, Puppet, or Ansible, and leveraging Docker containers or Kubernetes.

4. Align short term business objectives with CI

Some teams get carried away and drive CI/CD all the way to continuous delivery, but continuous delivery may not be appropriate for every business or application.

In addition, teams need to think through how they will implement longer running feature development. If the business objective is to launch just a handful of features that are not tightly coupled, then using feature branches may be a good enough solution to separate out feature tracks and merge when ready. However, if there are many features being developed over an extended period of time, then development teams might want to look at feature branches for some and feature flagging for others.
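To illustrate the feature flagging option, here's a minimal, hypothetical sketch (flag names and rollout percentages are invented). Unfinished features ship dark in the main branch and are enabled per user, rather than living for months on a long-running feature branch:

```python
import hashlib

# Hypothetical flags with percentage rollouts.
FLAGS = {
    "new_checkout": 25,  # rolled out to ~25% of users
    "dark_mode": 100,    # fully enabled
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Decide per user whether a flag is on, using a stable hash bucket (0-99)."""
    rollout = FLAGS.get(flag, 0)  # unknown or retired flags default off
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < rollout

print(is_enabled("dark_mode", "user-42"))     # True: flag is at 100%
print(is_enabled("retired_flag", "user-42"))  # False: unknown flags default off
```

Because the bucket is a stable hash of the flag and user, each user gets a consistent experience as the rollout percentage is turned up.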

5. Let the system engineers implement CD

CD requires a lot of scripting, knowledge of the computing (cloud) architecture, and knowledge of the application's requirements. Teams might be tempted to let the developers on the team take on the challenge of learning CI/CD tools and implementing the automation, but I believe the strongest teams will engage the engineers to take on this work.

Why? Because I'd rather see developers implementing business solutions and coding applications. It's a better use of their skills. And I'd rather see the engineers more versed in systems programming, including IaC and CI/CD.

CI/CD should also drive platform rationalization

Some final thoughts...

There is an expense to standardize computing architectures and develop CI/CD pipelines for them. So larger organizations with multiple development stacks should consider consolidating to a handful of approaches. It's not easy or cheap to have lots of ways to code applications and automate software delivery.

continue reading "5 Recommendations on Implementing DevOps CI/CD Pipelines"

What is AIops? Collaboration, Practices, and Principles for Delivering AI Solutions

I first heard the term AIops last week at The AI Conference, presented by O'Reilly and Intel AI. Is this a real practice? My answer is yes, and here's why.

Consider that DevOps is the practice of aligning developers and operations on the agility, speed, and stability of making software releases. DevOps aims to align a multidisciplinary team on conflicting missions and priorities by standardizing CI/CD pipelines, increasing monitoring, and applying other DevOps best practices.

Then consider DataOps, the emerging discipline of aligning data professionals - including data-driven business managers, data scientists, citizen data scientists, data engineers, data stewards, database architects, ETL developers, and DBAs - on strategies and practices to ingest, cleanse, store, govern, manage, and deliver data and analytics to the organization and its customers.

Both DataOps and DevOps align multi-skilled professionals on mission, values, technologies and practices to achieve short term goals and deliver on longer term competitive value.

So here's my definition of AIops:

AIops defines the mission, principles, and practices that drive collaborative AI experimentation that delivers business results.

AIops aligns key practices in the AI journey

AI also requires aligning multi-skilled professionals. In addition to AI specialists, it requires support from business managers, subject matter experts, data engineers, and technologists to align on mission, data sources, platforms and desired outcomes.

The team needs direction on experiments that drive business value. This requires collaboration, as overly bold experiments may be unachievable, while missions that target marginal business value may not justify the investment.

To be successful in AI, the team needs access to a large volume of relatively clean data. The AIops team then must take steps to tag data for supervised learning models or define reward functions for reinforcement learning problems. The practice of organizing and standardizing data for AI experiments is an AIops practice.

AI requires selecting platforms, tools, and infrastructure that need to be ramped up and down as experiments are conducted. Teams can consider a multitude of frameworks (TensorFlow, Keras, PyTorch, Caffe), cloud providers (AWS, Azure, Bluemix, Google), and a growing number of collaboration platforms (Dataiku, H2O.ai, Databricks, Anodot, Clusterone, and others) as part of their AI and machine learning environment.

With data ready, the team needs a working process for running experiments. I've suggested that agile experimentation is required for AI and the team needs to establish trackers to capture metadata and results of the trials conducted.
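As a sketch of what such a tracker might capture, here's a hypothetical minimal version that records each trial's parameters and resulting metric so the team can compare runs. The class, parameter names, and metric values are illustrative, not a real tracking tool:

```python
import datetime

class ExperimentTracker:
    """Record the metadata and results of each trial in an experiment."""

    def __init__(self):
        self.trials = []

    def log(self, params: dict, metric: float):
        # Capture the hyperparameters, the outcome, and when the trial ran.
        self.trials.append({
            "params": params,
            "metric": metric,
            "logged_at": datetime.datetime.now().isoformat(),
        })

    def best(self):
        # Return the trial with the highest metric (assumes higher is better).
        return max(self.trials, key=lambda t: t["metric"])

tracker = ExperimentTracker()
tracker.log({"model": "logreg", "lr": 0.1}, metric=0.81)
tracker.log({"model": "xgboost", "depth": 6}, metric=0.88)
print(tracker.best()["params"])  # {'model': 'xgboost', 'depth': 6}
```

Even a lightweight record like this makes it possible to answer "which trial won, and why?" when prioritizing follow-on experiments.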

Once experiments are conducted, the results need to be analyzed. The team needs to determine the success of the overall experiment and what follow-on experiments to prioritize.

When experiments yield satisfactory results, the team then needs to determine how to establish a production process to run new data through the AI models.

Why AIops should be formalized

Organizations dedicating resources to AI experimentation recognize the journey that needs to be led by a collaborative, aligned team:

  • Early stages where people, partners, and platforms are established
  • Middle stages where the team develops its practices, grows data sets, and automates processing steps
  • Later stages where agile AI experimentation begins to show results and production practices are established

Organizations making larger investments and committed to longer term experimentation in AI can define their AIops mission, practices and culture to align teams and deliver results.

continue reading "What is AIops? Collaboration, Practices, and Principles for Delivering AI Solutions"

The basics of Deep Learning and Bayesian Networks in under five minutes

Still confused about deep learning, how it works, what its shortcomings are, and what its origins are? Watch this 4.5-minute keynote snippet by Zoubin Ghahramani, Chief Scientist at Uber, at O'Reilly's Artificial Intelligence Conference going on in NYC this week. He nails it.

Paraphrasing Zoubin: Deep learning is neural networks rebranded. Compute power enables us to run many layers of weighted computational neurons, hence the phrase "deep". They are data hungry, computationally intensive, uninterpretable black boxes that can be easily fooled.

But ...

They can do amazing things, and using them is becoming easier. See the video snippet of the keynote below and watch other highlights from the conference.
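To make "layers of weighted computational neurons" a bit more concrete, here's a toy two-layer forward pass in plain Python. The weights are fixed and invented for illustration; a real deep network has many more layers and learns its weights from large amounts of data:

```python
import math

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    # Each neuron: weighted sum of its inputs plus a bias, passed through an activation.
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    # Hidden layer: two ReLU neurons with made-up weights.
    hidden = layer(x, weights=[[0.5, -0.2], [0.3, 0.8]], biases=[0.1, -0.1],
                   activation=relu)
    # Output neuron: sigmoid squashes the result into (0, 1).
    out = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0],
                activation=lambda z: 1 / (1 + math.exp(-z)))
    return out[0]

print(round(forward([1.0, 2.0]), 3))  # 0.168
```

Stacking many such layers, with millions of learned weights, is all "deep" refers to; the rebranding Zoubin describes is largely about scale and compute.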

continue reading "The basics of Deep Learning and Bayesian Networks in under five minutes"