Saturday, May 26, 2018

Understanding your problem is half the solution (actually the most important half)

Problem understanding
Before we can solve a problem, we need to know exactly what the problem is, and we should invest serious thought and resources in understanding it. And because today’s problems are so complex and interconnected, they usually cannot be solved by simply breaking them down into isolated components.

Russell Ackoff (1979) has one of the most compelling metaphors for complex problems I have encountered so far. He called them “messes”. How many times have you heard, or spoken, the phrase “this project is a mess” yourself? I have countless times. That said, the word “mess” means many things to many people, so without context it means very little. Ackoff defined it as follows:

“Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations messes. Problems are abstractions extracted from messes by analysis; they are to messes as atoms are to tables and chairs.”

The only real means of achieving a shared understanding of a problem is dialogue. Unfortunately, in this day and age, where hours are equated to cash and naïve simplicity reigns, time spent on understanding problems is viewed as time wasted.

“Everything Should Be Made as Simple as Possible, But Not Simpler” – Albert Einstein

Management demands action, not talk and collaborative analysis. Meetings that involve debate and discussion, in particular, are dismissed as “just talk”. This is understandable given the number of meaningless meetings most people have sat through, but I believe debate and discussion are necessary to create a shared understanding of a problem. I would not use the same time split as Einstein, but only because the problems I work on are not saving the world.

“Given one hour to save the world, I would spend 55 minutes defining the problem and 5 minutes finding the solution.” - Albert Einstein

The next time you’re in a meeting to address a problem, pay attention to how much time is spent discussing and understanding the problem versus how much time is spent on solutions. If your experience is typical, only a few minutes of an hour-long meeting will be spent on understanding the problem.

When I started paying attention, I realized meeting after meeting that the problem would be briefly summarized and then people would spend a huge amount of energy brainstorming or fleshing out solutions.

“It's so much easier to suggest solutions when you don't know too much about the problem.” - Malcolm Forbes

So, what happens when we don’t understand the problem? When the problem is not well understood, “solutions” only create new problems. In fact, there is no guarantee the solutions will address the problem at all. Conversely, the better we understand the problem, the more likely we are to find its root cause and create countermeasures so that the problem won’t recur. Understanding the problem is the first step of any problem-solving effort. The second step is defining how you measure success; after all, you want to know whether your solution is actually solving the problem.

“We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem.” – Russell L. Ackoff


Tuesday, May 01, 2018

How to review your team’s software development practices

Software Development Practices
An important part of the project reviews I do is a review of software development practices. Notice I do not say “best practices”. The term “best practices” is meaningless in many contexts: at best, it is vague, subjective, and highly dependent on context.

Wikipedia defines best practices as:

“A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means or because it has become a standard way of doing things.”


In other words, a “best practice” is a practice that has somehow been empirically proven to be the best. Although there is a fair amount of research on software development practices, I think it is impossible to use this kind of research to define the “best” software development practices for all projects.

What I am looking for in my software development practices review is the application of “standard practices” or “accepted practices”. A good example is surgeons washing their hands prior to surgery. It is such a widely accepted and standardized practice that it would be scandalous for a surgeon not to follow it. Washing hands has been empirically demonstrated to produce better patient outcomes than not washing them. Is hand-washing the absolute “best” thing in the universe a surgeon could do prior to surgery? That philosophical question is not especially important. What matters is that hand-washing is beneficial and widely accepted.

The review

For your review, sit in a room with the development team and go through the list of practices in the second part of this article. Ask the team about each practice and let them tell their stories. You might learn a lot of other things too! After talking about a practice, the team should agree on one of the following answers:

1) We do not do this
2) We do not need this
3) We do this, but not enough/consistently
4) We do this, but we do not see the expected benefits
5) We do this and see the expected benefits

After agreeing on an answer, everybody in the team should give input on why this is the case. You can use the 5 whys technique whenever you feel it helps you uncover important information.
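As a hypothetical illustration, the answers can be tallied into a simple overview of the review; the practice names, answer numbers, and scoring buckets below are my own invention, not part of a formal method:

```python
from collections import Counter

def summarize_review(results):
    """Tally review answers (practice -> answer number 1..5).

    Answer 1 signals room for improvement, 2 deserves careful
    listening, 3 and 4 point at implementation issues, and 5
    is where you want to be.
    """
    counts = Counter(results.values())
    return {
        "room_for_improvement": counts[1],
        "listen_carefully": counts[2],
        "implementation_issues": counts[3] + counts[4],
        "working_well": counts[5],
    }

# Example review of a fictional team:
results = {
    "Version Control": 5,
    "Unit Tests": 3,
    "Code Reviews": 1,
    "Staging Environment": 2,
}
summary = summarize_review(results)
```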

Of course there are exceptions based on specific context, and of course there are wildly varying degrees of maturity for each practice. But in general, the more often the team tells you “No, we do not do this”, the more room for improvement there is. That is the positive formulation. You could also say: the more “We do not do this” answers you count, the more issues with the software and your project you can expect.

The moment you should listen very carefully is when the team says “No, we do not NEED this”. Here you will learn about your project’s specific challenges and environment, AND you will learn about the mindset of your team.

Answers 3 and 4 indicate possible room for improvement in the implementation of a practice, and with that in the software development process as a whole.

You can combine this review with the delivery team review. The answers to the software development practices review will give you valuable information on team dynamics, mindset, individual skills and knowledge, as well as the skills and knowledge of the team as a whole.

The practices

Since the creation of the first programming languages in the 1950s, widespread agreement has emerged in the software development community on which software engineering practices help create better software. In the rest of this article I will explain the practices I consider most effective.

1. Separate Development and Deployment Environments

In order to develop and deploy software effectively, you need a number of different environments. This practice seems so obvious, yet I see time and again that essential environments are missing. Let’s start with the first environment.

1) Development: The development environment is the environment in which changes to software are developed, most simply an individual developer's workstation. This differs from the ultimate target environment in various ways – the target may not be a desktop computer (it may be a smartphone, embedded system, headless machine in a data center, etc.), and even if otherwise similar, the developer's environment will include development tools like a compiler, integrated development environment, different or additional versions of libraries and support software, etc., which are not present in a user's environment.

2) Integration: In the context of version control, particularly with multiple developers, finer distinctions are drawn. A developer has a working copy of the source code on their machine, and changes are submitted to the repository, committed either to the trunk or to a branch, depending on the development methodology. The environment on an individual workstation, in which changes are worked on and tried out, may be referred to as the local environment, sandbox, or development environment. Building the repository's copy of the source code in a clean environment is a separate step, part of integration (integrating disparate changes), and this environment is usually called the integration environment; in continuous integration this is done frequently, as often as for every version. Committing a change to the repository, followed by building the trunk or branch, corresponds to pushing a release from local (the individual developer's environment) to integration (a clean build). A bad release at this step means a change broke the build, and rolling back the release corresponds to either rolling back all changes from that point onward, or undoing just the breaking change, if possible.

3) Test: The purpose of the test environment is to allow human testers to exercise new and changed code via either automated checks or non-automated techniques. After the developer accepts the new code and configurations through unit testing in the development environment, the items are moved to one or more test environments. Upon test failure, the test environment can remove the faulty code from the test platforms, contact the responsible developer, and provide detailed test and result logs. If all tests pass, the test environment or a continuous integration framework controlling the tests can automatically promote the code to the next deployment environment.

Different types of testing suggest different types of test environments, some or all of which may be virtualized to allow rapid, parallel testing to take place. For example, automated user interface tests may occur across several virtual operating systems and displays (real or virtual). Performance tests may require a normalized physical baseline hardware configuration, so that performance test results can be compared over time. Availability or durability testing may depend on failure simulators in virtual hardware and virtual networks.

Tests may be serial (one after the other) or parallel (some or all at once) depending on the sophistication of the test environment. A significant goal for agile and other high-productivity software development practices is reducing the time from software design or specification to delivery in production. Highly automated and parallelized test environments are important contributors to rapid software development.
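The difference between serial and parallel test runs can be sketched with Python's standard thread pool; the test names and durations below are made up, and `time.sleep` stands in for real test work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name, seconds):
    """Stand-in for one automated test: sleeps instead of testing."""
    time.sleep(seconds)
    return (name, "passed")

tests = [("ui", 0.2), ("api", 0.2), ("db", 0.2)]

# Serial: wall time is roughly the sum of all test durations.
start = time.monotonic()
serial = [run_test(name, secs) for name, secs in tests]
serial_time = time.monotonic() - start

# Parallel: wall time is roughly the longest single test duration.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    parallel = list(pool.map(lambda t: run_test(*t), tests))
parallel_time = time.monotonic() - start
```

The results are the same either way; only the elapsed time differs, which is exactly the point of parallelizing the test environment.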

4) Staging: Staging is an environment for final testing immediately prior to deploying to production. It seeks to mirror the actual production environment as closely as possible and may connect to other production services and data, such as databases. For example, servers will run on remote machines rather than locally (as on a developer's workstation during development, or on a single test machine during test), which tests the effect of networking on the system. This environment is also known as UAT, which stands for User Acceptance Test.

The primary use of a staging environment is to test all installation/configuration/migration scripts and procedures before they are applied to the production environment. This ensures that all major and minor upgrades to the production environment are completed reliably, without errors, and in minimum time.

Another important use of staging is for performance testing, particularly load testing, as this often depends sensitively on the environment.

5) Production: The production environment is also known as live, particularly for servers, as it is the environment that users directly interact with. Deploying to production is the most sensitive step; it may be done by deploying new code directly (overwriting old code, so only one copy is present at a time), or by deploying a configuration change. This can take various forms: deploying a parallel installation of a new version of code, and switching between them with a configuration change; deploying a new version of code with the old behavior and a feature flag, and switching to the new behavior with a configuration change that performs a flag flip; or by deploying separate servers (one running the old code, one the new) and redirecting traffic from old to new with a configuration change at the traffic routing level. These in turn may be done all at once or gradually, in phases.

Deploying a new release generally requires a restart, unless hot swapping is possible, and thus requires either an interruption in service (usual for user software, where applications are restarted), or redundancy – either restarting instances slowly behind a load balancer, or starting up new servers ahead of time and then simply redirecting traffic to the new servers.

When deploying a new release to production, rather than immediately deploying to all instances or users, it may be deployed to a single instance or fraction of users first, and then either deployed to all or gradually deployed in phases, in order to catch any last-minute problems. This is similar to staging, except actually done in production, and is referred to as a canary release, by analogy with coal mining. This adds complexity due to multiple releases being run simultaneously, and is thus usually over quickly, to avoid compatibility problems.
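The routing decision behind a canary release can be sketched as follows; the hashing scheme and the 5% fraction are example values of my own, not a prescribed implementation:

```python
import hashlib

CANARY_FRACTION = 0.05  # send 5% of users to the new release

def serves_canary(user_id, fraction=CANARY_FRACTION):
    """Deterministically map a user to the old or new release.

    Hashing the user id keeps the same user on the same release
    across requests, which limits compatibility problems while
    the two releases run simultaneously.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # in [0, 1)
    return bucket < fraction

# Roughly 5% of a large user population lands on the canary:
canary_users = sum(serves_canary(f"user-{i}") for i in range(10_000))
```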

In some exceptional cases you could do without a test environment and use the staging environment for this instead, but all the other environments should be present.

2. Use of Version Control

Version control is any kind of practice that tracks and provides control over changes to source code. Teams can use version control software to maintain documentation and configuration files as well as source code.

As teams design, develop and deploy software, it is common for multiple versions of the same software to be deployed in different sites and for the software's developers to be working simultaneously on updates. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk).

At the simplest level, developers could simply retain multiple copies of the different versions of the program, and label them appropriately. This simple approach has been used in many large software projects. While this method can work, it is inefficient as many near-identical copies of the program have to be maintained. This requires a lot of self-discipline on the part of developers and often leads to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to a set of developers, and this adds the pressure of someone managing permissions so that the code base is not compromised, which adds more complexity. Consequently, systems to automate some or all of the version control process have been developed. This ensures that the majority of management of version control steps is hidden behind the scenes.

Moreover, in software development, legal and business practice and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated version control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations.

Version control may also track changes to configuration files, such as those typically stored in /etc or /usr/local/etc on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise.

3. Clear Branching Strategy

Branching strategy has always been one of those sticky topics that causes many questions. Many senior programmers are baffled by the ins and outs of branching and merging, and for good reason: it is a difficult topic. Many strategies exist: main only, development isolation, release isolation, feature isolation, etc.

I’ve been around many different organizations. I’ve been the person who was told what the branching strategy was, and I’ve been the person who designed it. I’ve seen it done just about every way possible, and after all that, I have come to the following conclusion.

Keep it simple. Working directly off the trunk is by far the best approach in my opinion.

In a future post, I will show you what I think is the most simple and effective branching strategy.  A strategy I have effectively used in the past and have developed over time.  It can be summarized as follows:

1) Everyone works off of trunk.
2) Branch when you release code.
3) Branch off a release when you need to create a bug fix for already released code.
4) Branch for prototypes.
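As a sketch, the four rules could be encoded in a single routing function; the change kinds and branch names here are hypothetical, chosen only to illustrate the strategy:

```python
def target_branch(kind, release=None):
    """Map a change to the branch it belongs on, following the four
    trunk-based rules above. Branch names are illustrative only.
    """
    if kind == "feature":
        return "trunk"                  # 1) everyone works off of trunk
    if kind == "release":
        return f"release/{release}"     # 2) branch when you release code
    if kind == "hotfix":
        return f"release/{release}"     # 3) fix bugs on the release branch
    if kind == "prototype":
        return "prototype/spike"        # 4) branch for prototypes
    raise ValueError(f"unknown kind of change: {kind}")
```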

4. Use of a Bug Tracking System

A bug tracking system or defect tracking system is a software application that keeps track of reported software bugs in software development projects. If your team is not using some kind of system for this, then you are in for a lot of trouble.

Many bug tracking systems, such as those used by most open source software projects, allow end-users to enter bug reports directly. Other systems are used only internally in a company or organization doing software development. Typically bug tracking systems are integrated with other software project management applications.

The main benefit of a bug-tracking system is to provide a clear centralized overview of development requests (including bugs, defects and improvements, the boundary is often fuzzy), and their state. The prioritized list of pending items (often called backlog) provides valuable input when defining the product road map, or maybe just "the next release".

A second benefit is that it gives you very useful information about the quantity, type, and environment of the bugs/defects that are discovered. There is a big difference between finding them in the test environment and finding them in the production environment. In general, the later you find them, the more they cost to fix.

5. Collective Code Ownership

Collective Ownership encourages everyone to contribute new ideas to all parts of the project. Any developer can change any line of code to add functionality, fix bugs, improve designs or refactor. No one person becomes a bottleneck for changes. This is easy to do when you have all your code covered with unit tests and automated acceptance tests.


6. Continuously Refactoring

Code should be written to solve the known problem at the time. Often, teams become wiser about the problem they are solving, and continuously refactoring and changing code ensures the code base keeps meeting the current needs of the business in the most efficient way. To guarantee that changes do not break existing functionality, your regression tests should be automated; i.e., unit tests are essential.
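A minimal illustration, with invented functions: the refactored version must behave exactly like the original, and an automated check makes verifying that cheap enough to do on every change:

```python
def total_price_v1(items):
    """Original implementation: accumulate in a loop."""
    total = 0
    for price, qty in items:
        total += price * qty
    return total

def total_price_v2(items):
    """Refactored implementation: same behavior, clearer expression."""
    return sum(price * qty for price, qty in items)

# The automated regression check that makes the refactoring safe:
cases = [[], [(2.0, 3)], [(2.0, 3), (1.5, 2)]]
assert all(total_price_v1(c) == total_price_v2(c) for c in cases)
```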


7. Writing Unit Tests

The purpose of unit testing is not to find bugs. A unit test is a specification of the expected behavior of the code under test, and the code under test is the implementation of that behavior. Unit test and code under test thus check each other's correctness and protect each other. When someone later changes the code under test in a way that alters the behavior expected by the original author, the test will fail. If your code is covered by a reasonable number of unit tests, you can maintain it without breaking existing features. That is why Michael Feathers, in his book, defines legacy code as code without unit tests. Without unit tests, every refactoring effort is a major risk.
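A small sketch of this idea using Python's unittest; the `clamp` function and its specification are invented for the example:

```python
import unittest

def clamp(value, low, high):
    """Implementation of the behavior specified by the tests below."""
    return max(low, min(high, value))

class ClampSpec(unittest.TestCase):
    """The test is a specification: it states the expected behavior
    rather than hunting for bugs. If a later change alters this
    behavior, the suite fails and warns the maintainer."""

    def test_value_inside_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_value_below_range_is_raised_to_low(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_value_above_range_is_lowered_to_high(self):
        self.assertEqual(clamp(42, 0, 10), 10)

# Run the specification against the implementation:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampSpec)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```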


8. Code Reviews

Code review is a systematic examination (sometimes referred to as peer review) of source code. It is intended to find mistakes overlooked in software development, improving the overall quality of software. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.

Code review practices fall into two main categories: formal code review and lightweight code review. Formal code review, such as a Fagan inspection, involves a careful and detailed process with multiple participants and multiple phases. Formal code reviews are the traditional method of review, in which software developers attend a series of meetings and review code line by line, usually using printed copies of the material. Formal inspections are extremely thorough and have been proven effective at finding defects in the code under review.

Lightweight code review typically requires less overhead than formal code inspections. Lightweight reviews are often conducted as part of the normal development process:

1) Over-the-shoulder – one developer looks over the author's shoulder as the latter walks through the code.

2) Email pass-around – the source code management system emails code to reviewers automatically after a check-in is made.

3) Pair programming – two developers work on one piece of code, using one keyboard and one monitor. Pairing results in higher quality output because it greatly reduces wasted time and defects, and it fosters close collaboration. It is nothing less than continuous code review: when implemented, you do not need code reviews before merging your branches, so continuous integration can be done faster. This is common in Extreme Programming.

4) Tool-assisted code review – authors and reviewers use software tools, informal ones such as pastebins and IRC, or specialized tools designed for peer code review.

A code review case study published in the book Best Kept Secrets of Peer Code Review found that lightweight reviews uncovered as many bugs as formal reviews, but were faster and more cost-effective. In my opinion, it does not matter what kind of code review you do, but NO code should go into production that has not been peer-reviewed.

9. Build Automation

Build automation is the process of automating the creation of a software build and the associated processes including: compiling computer source code into binary code, packaging binary code, and creating all necessary artifacts to deploy the application on a target environment.

Build automation is considered the first step in moving toward a culture of Continuous Delivery and DevOps. Build automation, combined with Continuous Integration, deployment, application release automation, and many other processes, helps move an organization forward in establishing software delivery best practices.

10. Automated Tests and Test Automation

In the world of testing in general, and continuous integration and delivery in particular, there are two types of automation:

1) Automated Tests
2) Test Automation

While it might just seem like two different ways to say the same thing, these terms actually have very different meanings.

Automated tests are tests that can be run automatically, often developed in a programming language. Here we are talking about the individual test cases: unit tests, integration/service tests, performance tests, end-to-end tests, or acceptance tests. The latter are also known as Specification by Example.

Test automation is a broader concept that includes automated tests. From my perspective, it should be about the full automation of test cycles, from check-in up to deployment: this is also called continuous testing. Both automated tests and test automation are important to continuous delivery, but it is really the latter that makes high-quality continuous delivery possible at all.

11. Continuous Integration

Martin Fowler defines Continuous Integration (CI) in his key article as follows: "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly." You see, without unit tests and test automation it is impossible to do CI right. And only when you do CI right might you be able to succeed at Continuous Deployment.


12. Continuous Delivery

Continuous delivery is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring business applications and services function as expected through rigorous automated testing. Since every change is delivered to a staging environment using complete automation, you can have confidence the application can be deployed to production with a push of a button when the business is ready. Continuous deployment is the next step of continuous delivery: Every change that passes the automated tests is deployed to production automatically. Continuous deployment should be the goal of most companies that are not constrained by regulatory or other requirements.



A simple continuous delivery pipeline could look like this:

1) The continuous integration server picks up changes in the source code
2) Starts running the unit-tests
3) Deploys (automated) to an integration environment
4) Runs automated integration tests
5) Deploys (automated) to an acceptance environment
6) Runs automated acceptance tests
7) Deploys (automated or manual) to production
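The steps above can be sketched as a pipeline that stops at the first failing stage; the stage functions here are stubs of my own, not a real CI server:

```python
def run_pipeline(stages):
    """Run stages in order; a failing stage aborts the run, so a
    broken build can never reach production."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name  # stages that ran, failing stage
        completed.append(name)
    return completed, None

# Stub stages standing in for real builds, tests, and deployments:
ok = lambda: True
PIPELINE = [
    ("unit tests", ok),
    ("deploy to integration", ok),
    ("integration tests", ok),
    ("deploy to acceptance", ok),
    ("acceptance tests", ok),
    ("deploy to production", ok),
]

completed, failed = run_pipeline(PIPELINE)
```

Replacing any stub with a stage that returns False shows the essential property: nothing after the failing stage runs, and production is never touched.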

13. Configuration Management by Code

The operating system, host configuration, operational tasks etc. are automated with code by developers and system administrators. As code is used, configuration changes become standard and repeatable. This relieves developers and system administrators of the burden of configuring the operating system, system applications or server software manually.
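A toy sketch of the idea: the desired configuration is declared as data and applied by code, so the change is repeatable; the file name and settings are invented for the example:

```python
import json
import tempfile
from pathlib import Path

# Desired state, declared as data rather than applied by hand.
DESIRED_CONFIG = {"max_connections": 100, "log_level": "info"}

def apply_config(path, desired):
    """Bring the config file to the desired state.

    Idempotent: applying the same desired state twice changes
    nothing the second time, which is what makes configuration
    changes standard and repeatable.
    """
    current = json.loads(path.read_text()) if path.exists() else {}
    if current == desired:
        return False  # already converged, nothing to do
    path.write_text(json.dumps(desired, indent=2))
    return True

config_path = Path(tempfile.mkdtemp()) / "app.json"
first = apply_config(config_path, DESIRED_CONFIG)   # writes the file
second = apply_config(config_path, DESIRED_CONFIG)  # no-op
```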


14. Code Documentation

Inevitably, documentation and code comments become lies over time. In practice, few people update comments and/or documentation when things change. Strive to make your code readable and self-documenting through good naming practices and known programming style.

15. Step by step development process guide 

This guide is essential for onboarding of new people and inspecting and adapting the way the team works. I work a lot with Kanban and ScrumBan and an important concept of these is to make your process explicit.

16. Step by step deployment process guide 

Somebody who does not usually do this should be able to deploy to production with this guide on the table. You never know when you will need it, but the day will come, and then you will be happy to have it. Of course, the further you move in the direction of Continuous Delivery, the smaller this guide becomes, because the documentation of this process is encoded in your automated processes.

17. Monitoring and Logging

To gauge the impact that the performance of applications and infrastructure has on consumers, organizations monitor metrics and logs. The data and logs generated by applications and infrastructure are captured, categorized, and analyzed to understand how users are affected by changes or updates. This makes it easier to detect the sources of unexpected problems or changes. Constant monitoring is necessary to ensure steady availability of services while the speed at which infrastructure is updated increases. When these data are analyzed in real time, organizations can monitor their services proficiently.

18. Being aware of technical debt

The metaphor of technical debt in code and design can be defined as follows: you start at an optimal level of code. In the next release, you add a new feature. This would take an effort E (assuming, of course, that estimations are somewhere near reality). If the level of code was less than optimal, the effort will be E + T, where T is the technical debt.

Writing bad code is like going further into debt: you take the loan now, and you repay it later. The bigger the mess, the larger the delay in the next release.
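The arithmetic of the metaphor can be illustrated with a small calculation; the numbers are arbitrary:

```python
def release_effort(base_effort, debt):
    """Effort for a release is E + T: the feature work plus the
    drag caused by the accumulated technical debt."""
    return base_effort + debt

E = 10       # estimated effort per feature at an optimal code level
T = 0        # technical debt, starting at zero
efforts = []
for release in range(4):
    efforts.append(release_effort(E, T))
    T += 2   # each release, shortcuts add 2 units of debt

# efforts grows release by release: [10, 12, 14, 16]
```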

The term “technical debt” was first introduced by Ward Cunningham in the early 90s, when the disconnect between development and business was growing bigger and bigger. The business people would urge developers to release untested, ugly code in order to get their product or new features faster. The developers tried to explain why this was a big mistake. Some things will never change...

Most products and projects are still released much earlier than the developers would have wanted. Assuming that developers are not just being stubborn (I know, maybe an even bigger assumption than decent estimations), you would think that we did not manage to get the message across to the business. But we have done an awesome job explaining what technical debt is and what the consequences are going to be. The business people understand it. They are simply willing to take the loan now. Can you blame them? Business wants something out there, in the field, that will sell now.

No problem, just make sure the consequences of these decisions are clear for all parties involved.

19. Good design

Good design is hard to judge, but luckily bad design is easy to “smell”. Software developers are notorious for using different criteria to evaluate good design, but from experience I tend to agree with Bob Martin and Martin Fowler, who have said that there is a set of criteria engineers usually agree upon when it comes to bad design.

You cannot recognize good design until you know what bad design is; and once you know what good design should avoid, you can easily judge whether a given engineering principle has any merit or is just fuzz waiting to distract you from your real goal of building software that is useful to people. So we use bad design as the starting point for determining whether we have a good design.

A piece of software that fulfils its requirement and yet exhibits any or all of the following traits can be considered to have "bad design":

1) Rigidity: It's too hard to make changes because every change affects too many other parts of the system.
2) Fragility: When you make a change, unexpected parts of the system break.
3) Immobility: It's hard to reuse a chunk of code elsewhere because it cannot be disentangled from its current application/usage.
4) Viscosity: It's hard to do the "right thing" so developers take alternate actions.
5) Needless Complexity: Overdesign
6) Needless Repetition: Mouse abuse (copy and paste)
7) Opacity: Disorganized expression
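A contrived illustration of how rigidity and immobility arise from hard-wired dependencies, and how injecting the dependency keeps code mobile; the class and names are invented for this example:

```python
class Greeter:
    """Flexible design: the delivery mechanism is injected, so this
    class can be reused with email, chat, or a test double without
    changing any other part of the system. A version that hard-wired
    a real mail server inside would be rigid (every change to
    delivery touches it) and immobile (it cannot be reused)."""

    def __init__(self, send):
        self.send = send  # any callable taking (recipient, message)

    def greet(self, recipient):
        return self.send(recipient, f"Hello, {recipient}!")

# In tests (or another application) we inject a harmless double:
sent = []
greeter = Greeter(lambda to, msg: sent.append((to, msg)) or msg)
message = greeter.greet("Ada")
```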

Conclusion

As you will have noticed in the descriptions of the practices above, they are “layered”: to do x, you need to do y first. For example, Continuous Integration is not possible without Build Automation, and neither is Test Automation without Automated Tests. And so on. Good software development practices start with the foundational layers and then build on top. When the foundation is weak, all else will be weak as well.
