In the last chapter we went over the history of Agile and discussed some of its principles and how the planning process we are about to detail is aligned with those principles. In this chapter we'll go over common pitfalls of software development, and show how having a process can help avoid those pitfalls.
"Why is this so important?" "Can't we just start coding?" "We know what we want to build! We have a data model! Can't we fix the mistakes along the way?" "Don't we just want to start building, and enough with all the discussion?"
The flexibility of software may make it appear simple; however, once we get down into the weeds, things can quickly become complicated. In The Mythical Man-Month, Frederick Brooks wrote:
In many creative activities the medium of execution is intractable. Lumber splits; paint smears; electrical circuits ring. These physical limitations of the medium constrain the ideas that may be expressed, and they also create unexpected difficulties in the implementation.
Computer programming, however, creates with an exceedingly tractable medium. The programmer builds from pure thought-stuff: concepts and very flexible representations thereof. Because the medium is tractable, we expect few difficulties in implementation; hence our pervasive optimism. Because our ideas are faulty, we have bugs; hence our optimism is unjustified.
Managing this increase in complication, tempering this optimism, and ensuring we are all building the same thing necessitates some amount of process. Failure to adhere to an agreed-upon process and a lack of discipline are sure to lead to bugs and missed deadlines at best; to system failures, business loss, or even casualties at worst.
Below, we list some common problems that Agile planning strategies help mitigate. As we detail some of the specific processes, we'll reference back to these problems, and then outline how that process helps improve the situation.
There is an old saying that begins, "when you assume, you make an..." The point is that assumptions get us into trouble. Assumptions also breed other assumptions. In software development this can happen in two main ways:
Assuming what to build: When we assume what to build, we are claiming to know better than our customers or the market. In rare cases we might, but usually we are just making educated guesses. A requirement such as "add Facebook login" has baked into it the assumptions that "our users have Facebook accounts" and that "our users would prefer to use those accounts to log in to our application." If those assumptions don't pan out, we may have spent valuable time building the feature for nothing.
Assuming how to build: Assumptions about how to build something typically arise internally, when the development team fills in the gaps left by vague requirements. "Allow users to log in" is a terrible requirement. One developer or team might implement a Facebook login "because it's a popular way to log in," while another might add two-factor authentication "because it's more secure." The chance of whatever gets built matching what the designer actually wanted is almost nil.
Avoiding assumptions requires that we be explicit about what we want at every step of the development process, and that we constantly flush out hidden assumptions by asking questions.
In software, the "scope" of a feature describes how "big" it is and how much impact it has on the rest of an application. Various factors can cause the scope to expand, or creep, turning a once simple feature into a complex one. Scope creep can be fueled by assumptions, by a lack of clear requirements or deadlines, or by stakeholders fighting for their own interests.
A simple email and password login for a web app might soon grow to "need Facebook login because marketing wants to target millennials," "be fully mobile compatible because the director has a Windows phone," and "integrate with Salesforce so sales can track engagement."
Confident developers will often say, "yeah, that's no problem," but eventually all the extra features become a problem as delivery schedules slip and the software starts to bloat.
It's important to be very clear about what, exactly, should be built, and in what time frame.
A "missing" requirement is indicative of a particularly sinister problem with developing software. What has happened is that a product manager has asked development to build a feature with certain requirements. Development then builds that feature and delivers it. At this point a use case is discovered, a performance issue is found, or an original requirement is simply not to the liking of the product manager.
Both sides will shirk responsibility: the product manager states that "a requirement was missed" or that "development should have realized xyz," while developers maintain that they "built what they were asked to build." Both sides are guilty of a lack of communication. It is the responsibility of developers to use their expertise to flesh out the technical aspects of a requirement, and the responsibility of product managers to understand and articulate the correct feature set.
Some process structure is required in order to provide a framework for the necessary communication that will help avoid this problem.
On the other side of the coin from "missing" requirements are missed requirements: the developers simply did not deliver one of the requirements of a feature. The designer asked for a login page that displays an error message when login fails. The developers delivered a login page but missed the error box; a failed login just redirects back to the login page. A requirement was missed by development.
Requirements are often much more complicated than this, and to accuse developers of carelessness is simply to deny human nature. Even good developers make mistakes. The best developers build themselves safety nets because they accept their own fallibility.
Missed requirements can point to a breakdown in how requirements are conveyed to developers or in how developers confirm they are done. A great way to avoid them, as we will see, is to successively translate the requirements into executable tests that fail if a requirement is missed, as in the sketch below. That gives us a simple way to verify that what was asked for has been delivered.
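For example, the missed error-message requirement above might be captured in a test like the following. This is a minimal sketch assuming a pytest-style suite; the LoginPage class here is a hypothetical stand-in for a real application, not part of any particular framework.

```python
# The requirement "display an error message when login fails" translated
# into an executable test. If development ships a login page that silently
# redirects on failure, this test fails and the missed requirement surfaces.

class LoginPage:
    """Toy model of a login page, standing in for the real application."""

    def __init__(self):
        self.error_message = None

    def submit(self, email, password):
        # A real implementation would check credentials against a user store.
        if password != "correct-horse-battery-staple":
            self.error_message = "Invalid email or password."
            return False
        return True


def test_failed_login_displays_error_message():
    # Requirement: a failed login must show an error, not silently redirect.
    page = LoginPage()
    assert page.submit("user@example.com", "wrong-password") is False
    assert page.error_message == "Invalid email or password."
```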
A problem seen frequently in software development is that the requirements will change. How these changes are introduced to a developer or team will, over time, determine whether the team becomes successful or is slowly eroded and destroyed. Business plans change, competition needs to be thwarted, the market reacts unpredictably; there are all sorts of reasons that what we built last week needs to be completely different this week.
Without a framework for managing changing requirements, development can become a free-for-all. Eager developers will switch quickly, writing slapdash code to please their superiors. Senior developers will retreat to solid ground, preferring to work on established parts of the system they know won't change so as not to have weeks of earnest effort thrown away. Resentment will grow. Management won't be trusted. Features will be avoided in the hopes they will soon change anyway.
The tools detailed in this book, influenced by the core Agile methodologies, will help remove uncertainty in stages, avoid a great deal of wasted effort, and provide a means of managing changing requirements.
How can you tell when a feature is complete? If the requirements are clear, they can be checked, but that isn't always the case; besides, there are always little things to be tweaked and code to be improved. "My part is done," a developer might say, "but I need the sysadmin to update the database." Technically the feature is not done, but how is this managed?
We can structure our development to use tests that define the full scope of the work to be built, giving us a real measure of what "done" means. When we have a suite of tests that runs against all levels of our software, we can be confident that once a feature's tests pass, the feature is complete.
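As a sketch of what this might look like in practice, again assuming a pytest-style suite: the feature's tests span levels of the stack, and "done" means running pytest and seeing every one of them pass. The function and marker names below are our own illustrative conventions, not prescribed by any tool.

```python
# Illustrative "definition of done": the feature ships when the whole
# suite is green, from unit level up. Marker names like "unit" and
# "integration" are conventions we register in pytest configuration.
import pytest


def normalize_email(email: str) -> str:
    """Unit under test: emails are stored lowercased and trimmed."""
    return email.strip().lower()


@pytest.mark.unit
def test_email_is_normalized():
    assert normalize_email("  User@Example.COM ") == "user@example.com"


@pytest.mark.integration
def test_signup_stores_normalized_email():
    # In a full suite this would exercise the real database and HTTP
    # layers; it is stubbed here to keep the sketch self-contained.
    stored = {"email": normalize_email("User@Example.com")}
    assert stored["email"] == "user@example.com"
```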
Management wants to know how long things will take, for budgeting reasons, and when things will be done, for sales and marketing reasons. The time it takes to build software is notoriously hard to estimate, which is often compounded by the fact that the problems being solved are new, or new versions of previously solved problems.
It's relatively easy to estimate the cost in time and money of a house foundation: form construction takes x dollars per linear foot, concrete costs y dollars per cubic foot, equipment a, b, and c needs to be rented for d hours, and so on. Plus, it has been done hundreds of times before.
Software applications, each with their own specific needs and customer nuances, cannot simply be broken down and estimated by the number of classes, database tables, or web pages. You can get close, but you will always be somewhat wrong, and accuracy improves only as you specify more detail in the requirements.
Asked to build a website, a team might say it will take 3 months, at the end of which they need an additional 2 months, at the end of which they need an additional 1 month. Management is furious, but the only way to truly estimate the timeline is to complete the job.
As strange as that sounds, there are techniques for measuring, over time, a team's regular throughput on specific kinds of tasks, and extrapolating from it to get better estimates for new projects, as sketched below.
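Here is a minimal worked example of the idea, with invented numbers purely for illustration: average the units of work the team has historically finished per iteration, then divide a new project's estimated size by that rate.

```python
import math

# Points of work the team actually completed in its last five sprints
# (invented numbers, purely for illustration).
completed_per_sprint = [18, 22, 19, 21, 20]

# Throughput ("velocity"): the average of recent, real measurements.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Estimated size of the new project, in the same units.
new_backlog_points = 130

# Extrapolate: how many sprints at the observed rate?
sprints_needed = math.ceil(new_backlog_points / velocity)

print(f"Velocity: {velocity:.1f} points/sprint")   # Velocity: 20.0 points/sprint
print(f"Estimate: about {sprints_needed} sprints") # Estimate: about 7 sprints
```

The estimate is only as good as the history behind it, which is exactly the point: it is grounded in measurement rather than optimism.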
Not only does management want to know how long something will take but also how far along a current project is. Without methods for compartmentalizing features into units, management will feel like they are left in the dark, or that nothing has been happening for weeks.
By focusing on customer behaviors and parceling deliverables into specific use cases that cut vertically across the software stack, the whole team can get a clearer picture of how many "things" are complete, and how many more are left.
The worst pitfall of software development is delivering software that doesn't work as intended, or has bugs, or is difficult to maintain. Not only is it embarrassing for the company, but it should be embarrassing for the development team and ultimately the individual developer who lacked the personal responsibility and care to make it right.
As mentioned above with regard to missed requirements, we all make mistakes, and bugs will always happen. However, as software grows ever more complicated, it is essential to build it in such a way that its functionality can be quickly verified to be correct.
To do otherwise is simply irresponsible and arrogant. "We don't really do testing" should be a red flag to any interviewing programmer. At best it means the developers are putting out fires by fixing bugs or inefficiently checking the application by hand; at worst it means they simply believe they always write correct code.
Employing the techniques of behavior- and test-driven development has a twofold benefit: it helps to first define exactly which code will be written, and it provides a repeatable method for verifying that the whole application works as intended. The sketch below shows the cycle in miniature.
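As a minimal illustration of the test-first cycle (password_strength is a hypothetical example function, not from any library): the tests are written before the code exists and fail, pinning down exactly what must be built; the implementation that follows is the simplest thing that makes them pass.

```python
# Step 1 (red): written first, these tests fail because password_strength
# does not exist yet. They define precisely the code we are about to write.
def test_short_password_is_weak():
    assert password_strength("abc12") == "weak"


def test_long_password_is_strong():
    assert password_strength("correct-horse-battery-staple") == "strong"


# Step 2 (green): the simplest implementation that makes the tests pass.
# Rerunning the suite now verifies this behavior, repeatably, from here on.
def password_strength(password: str) -> str:
    return "weak" if len(password) < 8 else "strong"
```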
At the end of the day, beyond avoiding all of the problems with software development detailed above, it's really about caring about what you are doing. The reason that the content of this book is important, the reason that you should be reading this book, is that you care about the quality of the work you produce and desire to learn new strategies to make it better.
Now that you are in the right frame of mind, let's jump right into the planning process. In the next section of the book, we will start with an overview of the planning process, then go into the details of each step. Soon you'll have the skills to plan out a simple software application using Agile methodologies.