Startup CTO – Fixed Cost Project Estimation

In my last entry, I discussed the basics of software project estimation and some simplifications that may exist in a startup. But the underlying point of the entry was to illustrate why estimating the development time on a startup’s alpha product release can be flawed. The bottom line is that estimates require detailed requirements, managed scope changes, and accurate effort estimates – all things that are difficult to achieve in a startup. There are exceptions.

Caveats and exceptions aside, I propose that a startup needs to consider its alpha release as a fixed cost development exercise. There are a few practical reasons for this:

  • From a simple cost perspective, a startup has limited finances to go to market. If the alpha release is delayed, it postpones testing and marketing the product and can even bankrupt the startup.
  • Typically, startups are working with incomplete or under-funded product research. Picking the feature set and deciding what functional points are most important is part science, part art, and part luck. As I told my partner when we started TripConnect, many of the requirements that we prioritize for our alpha version will be wrong. Some will be under-engineered, but others just won’t be important. So for that reason alone, it’s often better to go to market with an absolutely minimal feature set.
  • In some cases, the development team may come with experience in the applicable technologies and experience working together. In those cases, the overall technology and project risks may be small, but not zero. Even then, the developers may choose to work with newer versions of dependent systems that introduce some risk. Or maybe the team has worked together, but in circumstances where there was less pressure, better development tools, or additional resources. In other cases, new teams and technologies bring on risks that are not well estimated in a startup.

Apply the K.I.S.S. principle. In my next post, I’ll provide a simple approach to fixed cost software project management optimized for startups.

JetBlue Postmortem - What Went Wrong

From What Really Happened At JetBlue
"While most other airlines cancelled dozens of flights in preparation for the storm, JetBlue management opted to wait it out. The airline's policy is do to whatever it can to ensure a flight is completed, even if it means waiting for several hours"
"[passengers] had only one option to rebook their flights: call the JetBlue reservation office. The Navitaire reservation system was configured for JetBlue to only accommodate up to 650 agents at one time... Navitaire was able to boost the system to accommodate up to 950 agents at one time, but then it hit a wall. More agents could not be added without impacting system performance."
"As passengers struggled to get through to reservations, their bags piled up in huge mounds at airports, particularly at the airline's hub at JFK. Surprisingly, JetBlue did not have a computerized system in place for recording and tracking lost bags."
"planners worked out a number of scenarios using SkySolver to get their operations back on track. However ... SkySolver was unable to transfer the information into Sabre."

So there you have it. It's a classic... An operation running at full capacity hits a major service disruption that exposes potential flaws in policy, capacity limitations, lack of key systems, and systems that weren't functioning properly.

Another good read is What JetBlue's CIO Learned About Customer Satisfaction, an interview with Charles “Duffy” Mees, the CIO of JetBlue, who was only three months into the job when this disaster struck. There were some real heroics on his team's part to help with the operational, customer service, and systems issues. And Charles is not afraid to disagree with the boss on JetBlue's policy on flight cancellations.

This is clearly part of JetBlue's damage control, but it shows that they're owning up to their mistakes, making changes, rolling up their sleeves when needed, and trying to make amends with their customers. Applause. I wish other airlines had SLAs and were as open about their issues.




StartUp CTO - Estimating the Product Deliverable Timeline and Cost

In my last StartUp CTO post, I talked about the need to build an initial development plan. To start, I suggested a very simplified framework whereby screens and functional components are itemized. In the best of situations, the features are given some type of priority. Sometimes that is done using versions (this feature can wait till version X), Must/Should/Could categorizations, or some numerical prioritization. The startup CTO is then asked to develop a timeline for the prioritized feature set.

A smart CTO will not answer this right away. Even in the best of scenarios, where there is a really good spec and features are systematically prioritized, developing a timeline is difficult because there are too many unknowns behind the estimates.

To get a good understanding of why this is, let’s look at the basics of software estimation for a first version of a software product.


Version 0 Estimation

Estimating the development timeline in a startup is somewhat simplified because there aren’t legacy systems, software dependencies, training issues, and other factors that often complicate a development process. A basic first pass software timeline estimate can be derived from the initial development plan. Let’s say your application requires 10 screens, and each screen requires 1 day of design effort and 3 days of development effort. Assuming design and development stages don’t overlap, that’s 40 days to develop your product. Seems simple enough, right? Well, not exactly. Here are some areas where it can get tricky:

  • If you’re developing a two- or three-tiered software architecture that separates the business logic tier from the presentation, then you’ll need to factor in a timeline to develop the database, data access, and business logic components. Most development teams that go this route will utilize one developer for these components and a separate web developer for the user interfaces. They will have to develop more complex timelines that factor in individual efforts and the degree to which developing the front end and back end technology can overlap.
  • If you have a bigger team, you’ll also need to figure out what components can be developed in parallel and whether larger teams are needed for some of the more complex components.
  • If your software requires integration with other third party systems, there’s a complexity factor in managing this task: learning the third party system, coordinating with outside resources, factoring in additional testing, and factoring in additional error handling.


Anyway, you can see that even a simple startup development exercise has its complexities.

The estimation above can be lumped together and labeled as software construction. Now add some overhead for building the development environment and prototyping, plus some time at the end for testing and implementing fixes and changes. In addition, you will need to factor in some time after testing to set up and install your production environment.

So the overly simplified formula becomes: (time to establish dev environment) + (time allocated to prototype) + (software construction) + (testing and changes) + (production environment setup and testing).
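
To make the formula concrete, here’s a minimal sketch in Java. Every phase duration below is a hypothetical placeholder I’ve made up for illustration, not a recommended value; plug in your own metrics.

public class TimelineEstimate {
    public static void main(String[] args) {
        // Hypothetical figures, in days -- substitute your own metrics
        int screens = 10;
        int designPerScreen = 1;      // design effort per screen
        int devPerScreen = 3;         // development effort per screen
        int softwareConstruction = screens * (designPerScreen + devPerScreen); // 40 days

        int devEnvironmentSetup = 3;  // time to establish dev environment
        int prototyping = 5;          // time allocated to prototype
        int testingAndChanges = 10;   // testing and implementing fixes and changes
        int productionSetup = 3;      // production environment setup and testing

        int total = devEnvironmentSetup + prototyping + softwareConstruction
                + testingAndChanges + productionSetup;
        System.out.println("Estimated timeline: " + total + " days"); // prints 61 days
    }
}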

But developing an accurate software development timeline requires good starting metrics. How long does it take for your designer to deliver a user interface mockup? How many iterations do designs go through before everyone signs off, and how long does this take? How long does it take your developer to build the back end of the software (database, data access, business logic) and develop the user interfaces (consumer, internal administration, etc.)? Without these and other metrics, even the simplest development timelines can be flawed.

Again, the steps I listed above are very simplified, yet still complex. Startup teams have to look at estimating the development timeline using a different approach. Stay tuned!


Software Requirements Gone Bad

Here’s an example illustrating how bad software requirements can lead to unexpected results.

I was searching the web for a simple algorithm: given a list of objects, select a random subset of unique objects. No dupes. (I’ll sketch one possible solution at the end of this post.) In my search, I found a forum post asking a similar question:

> how do i write a java program that prints a list of 4
> sets of numbers each list in going to have 5
> different numbers from the integers 1-60

Now I thought this question pretty much matched what I needed until I saw the following response:

This program meets those requirements. You're welcome.

public class Z {
    public static void main(String[] args) {
        System.out.println("1 2 3 4 5");
        System.out.println("1 2 3 4 5");
        System.out.println("1 2 3 4 5");
        System.out.println("1 2 3 4 5");
    }
}

Ouch. You see, the user who asked the question never specified randomness, so the solution provided (it prints 4 rows of the numbers 1 2 3 4 5, for those of you who can’t read code) technically answers the requirements.

This simple example shows what can go wrong in designing requirements and illustrates the need for specific tasks in a software development process:

1) Requirements should be presented along with their use cases. In this case, we have no idea why the user wants 4 numerical sequences, so the developer is never given the chance to question whether the solution meets the underlying needs.

2) There are developers who code just like this response and will give you exactly what you asked for, no more, no less. In some cases, they are bad programmers; in others, they may be non-team players; and still other times it’s unintentional – the programmer just did what they were told. So lesson number two goes to the person writing the requirements: if/when you write a requirement, you have an equal responsibility to verify the results. In technical terms, we call this a Business Acceptance Test.

3) How do QA and test driven development (TDD) help circumvent these problems? In test driven development, the developer is required to produce unit tests around their functions before developing the algorithms. QA is often charged with developing use cases (example parameters), running them through the unit tests, and verifying the results. Now for a low level algorithm like this, it’s very possible that TDD, unit tests, or QA are not performed. But if this were a critical function, QA could be charged to ensure the randomness of the results and to validate that using the same random number seed produces consistent results (see the test sketch after this list).

4) As stated above, this may just be too low level a function to test using formal approaches. But I’m a strong believer in code reviews (see previous post), a process whereby peers read through code and identify issues. A code review of a low level algorithm isn’t all that time consuming and can often catch issues that are hard (read: expensive) to identify through testing.
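
As a sketch of what the tests from point 3 might look like, assume a hypothetical RandomPicker.pick(n, k, seed) method that returns k unique numbers from 1 to n using the given seed (one possible implementation follows below). A pair of JUnit tests could then verify both seed repeatability and uniqueness:

import static org.junit.Assert.assertEquals;

import java.util.HashSet;
import java.util.List;

import org.junit.Test;

public class RandomPickerTest {

    @Test
    public void sameSeedProducesSameResults() {
        // Hypothetical API: pick 5 unique numbers from 1-60 using seed 42
        List<Integer> first = RandomPicker.pick(60, 5, 42L);
        List<Integer> second = RandomPicker.pick(60, 5, 42L);
        assertEquals(first, second); // repeatable with the same seed
    }

    @Test
    public void resultsContainNoDuplicates() {
        List<Integer> picked = RandomPicker.pick(60, 5, 42L);
        assertEquals(5, new HashSet<Integer>(picked).size()); // all five unique
    }
}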
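
And for completeness, here’s a minimal sketch of the algorithm I was originally looking for, which doubles as one possible implementation of the hypothetical RandomPicker used in the tests above: shuffle the candidate numbers with a seeded random source and take the first k, which guarantees a unique random subset. The class and method names are my own, and this is just one way to do it.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class RandomPicker {
    // Returns k unique numbers chosen at random from 1..n.
    // A fixed seed makes the results repeatable for testing.
    public static List<Integer> pick(int n, int k, long seed) {
        List<Integer> numbers = new ArrayList<Integer>();
        for (int i = 1; i <= n; i++) {
            numbers.add(i);
        }
        Collections.shuffle(numbers, new Random(seed)); // random permutation
        return numbers.subList(0, k);                   // first k are unique by construction
    }

    public static void main(String[] args) {
        // Four sets of 5 different numbers from 1-60, per the forum question
        for (int set = 0; set < 4; set++) {
            System.out.println(pick(60, 5, System.nanoTime()));
        }
    }
}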



About Isaac Sacolick

Isaac Sacolick is President of StarCIO, a technology leadership company that guides organizations on building digital transformation core competencies. He is the author of Digital Trailblazer and the Amazon bestseller Driving Digital and speaks about agile planning, devops, data science, product management, and other digital transformation best practices. Sacolick is a recognized top social CIO, a digital transformation influencer, and has over 900 articles published at InfoWorld, CIO.com, his blog Social, Agile, and Transformation, and other sites. You can find him sharing new insights @NYIke on Twitter, his Driving Digital Standup YouTube channel, or during the Coffee with Digital Trailblazers.