Tuesday, May 01, 2018

How to review your team’s software development practices

Software Development Practices
An important part of the project reviews I do is a software development practices review. Notice I do not say "best practices". The term “best practices” is without meaning in many contexts. At best, it’s vague, subjective, and highly dependent on context.

Wikipedia defines best practices as:

“A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means or because it has become a standard way of doing things.”


In other words, a “best practice” is a practice that has somehow been empirically proven to be the best. Although there is quite a lot of research on software development practices, I think it is impossible to use it to define a set of practices that would be the “best” for all projects.

What I am looking for in my software development practices review is the application of “standard practices” or “accepted practices”. A good example of this might be surgeons washing their hands prior to surgery. It is such a widely accepted and standardized practice that it would be scandalous for a surgeon not to follow it. Washing hands has been empirically demonstrated to produce better patient outcomes than not washing them. Is the washing of hands the absolute “best” thing in the universe that a surgeon could do prior to surgery? Well, this philosophical question is not especially important. What matters is that hand-washing is beneficial and widely accepted.

The review

For your review, sit in a room with the development team and go through the list of practices from the second part of this article. Just ask the team about each practice and let them tell their stories. You might learn a lot of other things too! After talking about a practice, the team should agree on one of the following answers:

1) We do not do this
2) We do not need this
3) We do this, but not enough/consistently
4) We do this, but we do not see the expected benefits
5) We do this and see the expected benefits

After agreeing on an answer, everybody in the team should give input on why this is the case. You could use the 5 whys technique when you feel that it helps you get important information.

Of course there are exceptions based on specific context, and of course, there are wildly varying degrees of maturity for each practice. But in general, you can say that the more times the team tells you “No, we do not do this” the more room for improvement there is. This is the positive formulation. You could also say the more “We do not do this" you count, the more issues with the software and your project you can expect.

The moment you should listen very carefully is when the team says “No, we do not NEED this”. Here you will learn about your project specific challenges and environment AND you will learn about the mindset of your team.

Answers 3 and 4 indicate possible room for improvement in the implementation of a practice, and with that in the software development process as a whole.

You can combine this review with the delivery team review. The answers to the software development practices review will give you valuable information on team dynamics, mindset, individual skills, individual knowledge, as well as skills and knowledge as a team.

The practices

Since the creation of the first programming languages in the 1950s, widespread agreement has emerged in the software development community on which software engineering practices help create better software. In the rest of this article, I will explain the practices that are, in my opinion, the most effective.

1. Separate Development and Deployment Environments

In order to develop and deploy software effectively, you will need a number of different environments. This practice seems so logical, yet I see time and again that essential environments are not available. Let’s start with the first environment.

1) Development: The development environment is the environment in which changes to software are developed, most simply an individual developer's workstation. This differs from the ultimate target environment in various ways – the target may not be a desktop computer (it may be a smartphone, embedded system, headless machine in a data center, etc.), and even if otherwise similar, the developer's environment will include development tools like a compiler, integrated development environment, different or additional versions of libraries and support software, etc., which are not present in a user's environment.

2) Integration: In the context of version control, particularly with multiple developers, finer distinctions are drawn: a developer has a working copy of source code on their machine, and changes are submitted to the repository, being committed either to the trunk or a branch, depending on development methodology. The environment on an individual workstation, in which changes are worked on and tried out, may be referred to as the local environment, sandbox or development environment. Building the repository's copy of the source code in a clean environment is a separate step, part of integration (integrating disparate changes), and this environment is usually called the integration environment; in continuous integration this is done frequently, as often as for every version. The source code level concept of "committing a change to the repository", followed by building the trunk or branch, corresponds to pushing to release from local (individual developer's environment) to integration (clean build); a bad release at this step means a change broke the build, and rolling back the release corresponds to either rolling back all changes from that point onward, or undoing just the breaking change, if possible.

3) Test: The purpose of the test environment is to allow human testers to exercise new and changed code via either automated checks or non-automated techniques. After the developer accepts the new code and configurations through unit testing in the development environment, the items are moved to one or more test environments. Upon test failure, the test environment can remove the faulty code from the test platforms, contact the responsible developer, and provide detailed test and result logs. If all tests pass, the test environment or a continuous integration framework controlling the tests can automatically promote the code to the next deployment environment.

Different types of testing suggest different types of test environments, some or all of which may be virtualized to allow rapid, parallel testing to take place. For example, automated user interface tests may occur across several virtual operating systems and displays (real or virtual). Performance tests may require a normalized physical baseline hardware configuration, so that performance test results can be compared over time. Availability or durability testing may depend on failure simulators in virtual hardware and virtual networks.

Tests may be serial (one after the other) or parallel (some or all at once) depending on the sophistication of the test environment. A significant goal for agile and other high-productivity software development practices is reducing the time from software design or specification to delivery in production. Highly automated and parallelized test environments are important contributors to rapid software development.

4) Staging: Staging is an environment for final testing immediately prior to deploying to production. It seeks to mirror the actual production environment as closely as possible and may connect to other production services and data, such as databases. For example, servers will be run on remote machines, rather than locally (as on a developer's workstation during dev, or on a single test machine during a test), which tests the effect of networking on the system. This environment is also known as UAT, which stands for User Acceptance Test.

The primary use of a staging environment is to test all installation/configuration/migration scripts and procedures before they are applied to the production environment. This ensures that all major and minor upgrades to the production environment will be completed reliably, without errors, and in minimum time.

Another important use of staging is for performance testing, particularly load testing, as this often depends sensitively on the environment.

5) Production: The production environment is also known as live, particularly for servers, as it is the environment that users directly interact with. Deploying to production is the most sensitive step; it may be done by deploying new code directly (overwriting old code, so only one copy is present at a time), or by deploying a configuration change. This can take various forms: deploying a parallel installation of a new version of code, and switching between them with a configuration change; deploying a new version of code with the old behavior and a feature flag, and switching to the new behavior with a configuration change that performs a flag flip; or by deploying separate servers (one running the old code, one the new) and redirecting traffic from old to new with a configuration change at the traffic routing level. These in turn may be done all at once or gradually, in phases.

Deploying a new release generally requires a restart, unless hot swapping is possible, and thus requires either an interruption in service (usual for user software, where applications are restarted), or redundancy – either restarting instances slowly behind a load balancer, or starting up new servers ahead of time and then simply redirecting traffic to the new servers.

When deploying a new release to production, rather than immediately deploying to all instances or users, it may be deployed to a single instance or fraction of users first, and then either deployed to all or gradually deployed in phases, in order to catch any last-minute problems. This is similar to staging, except actually done in production, and is referred to as a canary release, by analogy with coal mining. This adds complexity due to multiple releases being run simultaneously, and is thus usually over quickly, to avoid compatibility problems.

In some exceptional cases you could do without a Test Environment and use the Staging Environment for this, but all other environments should be present.
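
As a minimal sketch of why the separation pays off (all environment names, hosts, and settings below are hypothetical), keeping one explicit configuration entry per environment stops settings from leaking between them and makes it obvious when an environment is missing:

```python
# Hypothetical per-environment settings; names, hosts, and flags are illustrative only.
ENVIRONMENTS = {
    "development": {"db_url": "sqlite:///local.db",           "debug": True,  "base_url": "http://localhost:8000"},
    "integration": {"db_url": "postgresql://ci-db/app",       "debug": True,  "base_url": "http://ci.internal"},
    "test":        {"db_url": "postgresql://test-db/app",     "debug": False, "base_url": "http://test.internal"},
    "staging":     {"db_url": "postgresql://staging-db/app",  "debug": False, "base_url": "https://staging.example.com"},
    "production":  {"db_url": "postgresql://prod-db/app",     "debug": False, "base_url": "https://www.example.com"},
}

def settings_for(environment: str) -> dict:
    """Fail fast when code is started against an unknown environment."""
    try:
        return ENVIRONMENTS[environment]
    except KeyError:
        raise ValueError(f"Unknown environment: {environment!r}") from None
```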

2. Use of Version Control

Version control is any kind of practice that tracks and provides control over changes to source code. Teams can use version control software to maintain documentation and configuration files as well as source code.

As teams design, develop and deploy software, it is common for multiple versions of the same software to be deployed in different sites and for the software's developers to be working simultaneously on updates. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk).

At the simplest level, developers could simply retain multiple copies of the different versions of the program, and label them appropriately. This simple approach has been used in many large software projects. While this method can work, it is inefficient as many near-identical copies of the program have to be maintained. This requires a lot of self-discipline on the part of developers and often leads to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to a set of developers, and this adds the pressure of someone managing permissions so that the code base is not compromised, which adds more complexity. Consequently, systems to automate some or all of the version control process have been developed. This ensures that the majority of management of version control steps is hidden behind the scenes.

Moreover, in software development, legal and business practice and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated version control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations.

Version control may also track changes to configuration files, such as those typically stored in /etc or /usr/local/etc on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise.

3. Clear Branching Strategy

Branching strategy has always been one of those sticky topics that causes many questions. Many senior programmers are baffled by the ins and outs of branching and merging, and for good reason; it is a difficult topic. Many strategies exist: main only, development isolation, release isolation, feature isolation, etc.

I’ve been around in many different organizations. I’ve been the person who was told what the branching strategy was, and I have been the person who designed it.  I’ve seen it done just about every way possible, and after all that, I have come to the following conclusion.

Keep it simple. Working directly off the trunk is by far the best approach in my opinion.

In a future post, I will show you what I think is the most simple and effective branching strategy.  A strategy I have effectively used in the past and have developed over time.  It can be summarized as follows:

1) Everyone works off of trunk.
2) Branch when you release code.
3) Branch off a release when you need to create a bug fix for already released code.
4) Branch for prototypes.

4. Use of a Bug Tracking System

A bug tracking system or defect tracking system is a software application that keeps track of reported software bugs in software development projects. If your team is not using some kind of system for this, then you are in for a lot of trouble.

Many bug tracking systems, such as those used by most open source software projects, allow end-users to enter bug reports directly. Other systems are used only internally in a company or organization doing software development. Typically bug tracking systems are integrated with other software project management applications.

The main benefit of a bug-tracking system is to provide a clear centralized overview of development requests (including bugs, defects and improvements, the boundary is often fuzzy), and their state. The prioritized list of pending items (often called backlog) provides valuable input when defining the product road map, or maybe just "the next release".

A second benefit is that it gives you very useful information about the quantity, type and environment of the bugs/defects that are discovered. There is a big difference between finding them in the test environment versus in production. In general, you can say the later you find them, the more they cost to fix.

5. Collective Code Ownership

Collective Ownership encourages everyone to contribute new ideas to all parts of the project. Any developer can change any line of code to add functionality, fix bugs, improve designs or refactor. No one person becomes a bottleneck for changes. This is easy to do when you have all your code covered with unit tests and automated acceptance tests.


6. Continuously Refactoring

Code should be written to solve the known problem at the time. Teams often become wiser about the problem they are solving, and continuously refactoring and changing the code ensures that the code base keeps meeting the most current needs of the business in the most efficient way. To guarantee that changes do not break existing functionality, your regression tests should be automated; in other words, unit tests are essential.
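
To make this concrete, here is a tiny, hypothetical refactoring: the behavior stays exactly the same, but a magic condition is extracted into a well-named function so the code keeps expressing the current business rule clearly.

```python
# Before: the business rule is buried in an anonymous condition.
def price_before(order_total, customer_years):
    if order_total > 100 and customer_years >= 2:
        return order_total * 0.9
    return order_total

# After: the same behavior, but the rule now has a name and a single home.
def qualifies_for_loyalty_discount(order_total, customer_years):
    return order_total > 100 and customer_years >= 2

def price_after(order_total, customer_years):
    if qualifies_for_loyalty_discount(order_total, customer_years):
        return order_total * 0.9
    return order_total
```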


7. Writing Unit Tests

The purpose of unit testing is not finding bugs. A unit test is a specification of the expected behavior of the code under test, and the code under test is the implementation of that expected behavior. So the unit tests and the code under test check each other's correctness and protect each other. Later, when someone changes the code under test in a way that alters the behavior expected by the original author, the test will fail. If your code is covered by a reasonable amount of unit tests, you can maintain it without breaking existing features. That’s why Michael Feathers defines legacy code in his book as code without unit tests. Without unit tests, your refactoring efforts will be a major risk every time you undertake them.
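
A minimal sketch of what such an executable specification can look like, using Python's built-in unittest module (the discount function is the hypothetical one from the refactoring example above):

```python
import unittest

def qualifies_for_loyalty_discount(order_total, customer_years):
    return order_total > 100 and customer_years >= 2

class LoyaltyDiscountTest(unittest.TestCase):
    """Each test documents one expected behavior of the code under test."""

    def test_long_standing_customer_with_large_order_gets_discount(self):
        self.assertTrue(qualifies_for_loyalty_discount(order_total=150, customer_years=3))

    def test_new_customer_gets_no_discount(self):
        self.assertFalse(qualifies_for_loyalty_discount(order_total=150, customer_years=0))

    def test_small_order_gets_no_discount(self):
        self.assertFalse(qualifies_for_loyalty_discount(order_total=50, customer_years=5))

if __name__ == "__main__":
    unittest.main()
```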


8. Code Reviews

Code review is a systematic examination (sometimes referred to as peer review) of source code. It is intended to find mistakes overlooked in software development, improving the overall quality of software. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.

Code review practices fall into two main categories: formal code review and lightweight code review. Formal code review, such as a Fagan inspection, involves a careful and detailed process with multiple participants and multiple phases. Formal code reviews are the traditional method of review, in which software developers attend a series of meetings and review code line by line, usually using printed copies of the material. Formal inspections are extremely thorough and have been proven effective at finding defects in the code under review.

Lightweight code review typically requires less overhead than formal code inspections. Lightweight reviews are often conducted as part of the normal development process:

1) Over-the-shoulder – one developer looks over the author's shoulder as the latter walks through the code.

2) Email pass-around – source code management system emails code to reviewers automatically after check in is made.

3) Pair programming – two developers work on one piece of code, using one keyboard and one monitor. Pairing results in higher quality output because it greatly reduces wasted time and defects, and results in high collaboration. It is nothing other than continuous code review. When implemented, you do not need a separate code review before merging your branches, so continuous integration can be done faster. This is common in Extreme Programming.

4) Tool-assisted code review – authors and reviewers use software tools, informal ones such as pastebins and IRC, or specialized tools designed for peer code review.

A code review case study published in the book Best Kept Secrets of Peer Code Review found that lightweight reviews uncovered as many bugs as formal reviews, but were faster and more cost-effective. In my opinion, it does not matter what kind of code reviews you do, but NO code should go into production that has not been peer-reviewed.

9. Build Automation

Build automation is the process of automating the creation of a software build and the associated processes including: compiling computer source code into binary code, packaging binary code, and creating all necessary artifacts to deploy the application on a target environment.

Build automation is considered the first step in moving toward implementing a culture of Continuous Delivery and DevOps. Build automation combined with Continuous Integration, deployment, application release automation, and many other processes helps move an organization forward in establishing software delivery best practices.
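
As a sketch only (the commands and project layout are hypothetical), even a small script that runs every build step the same way on every machine is a step up from "it builds on my laptop":

```python
"""Minimal build script sketch: the individual steps are hypothetical examples."""
import subprocess
import sys

BUILD_STEPS = [
    ["python", "-m", "pip", "install", "-r", "requirements.txt"],  # resolve dependencies
    ["python", "-m", "pytest", "tests/"],                          # run the test suite
    ["python", "-m", "build"],                                     # package the artifact
]

def run_build() -> int:
    for step in BUILD_STEPS:
        print("running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            print("build failed at:", " ".join(step))
            return result.returncode
    print("build succeeded")
    return 0

if __name__ == "__main__":
    sys.exit(run_build())
```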

10. Automated Tests and Test Automation

In the world of testing in general, and continuous integration and delivery in particular, there are two types of automation:

1) Automated Tests
2) Test Automation

While it might just seem like two different ways to say the same thing, these terms actually have very different meanings.

Automated tests are tests that can be run automatically, often developed in a programming language. In this case, we talk about the individual test cases: unit tests, integration/service tests, performance tests, end-to-end tests, or acceptance tests. The latter is also known as Specification by Example.

Test automation is a broader concept and includes automated tests. From my perspective, it should be about the full automation of test cycles from check-in up to deployment, also called continuous testing. Both automated tests and test automation are important to continuous delivery, but it's really the latter that makes continuous delivery of a high quality even possible.

11. Continuous Integration

Martin Fowler defines Continuous Integration (CI) in his key article as follows: "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly." You see, without unit tests and test automation, it is impossible to do CI right. And only when you do CI right might you be able to succeed at Continuous Deployment.


12. Continuous Delivery

Continuous delivery is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring business applications and services function as expected through rigorous automated testing. Since every change is delivered to a staging environment using complete automation, you can have confidence the application can be deployed to production with a push of a button when the business is ready. Continuous deployment is the next step of continuous delivery: Every change that passes the automated tests is deployed to production automatically. Continuous deployment should be the goal of most companies that are not constrained by regulatory or other requirements.



A simple continuous delivery pipeline could look like this:

1) Continuous integration server picks up changes in the source code
2) Starts running the unit-tests
3) Deploys (automated) to an integration environment
4) Runs automated integration tests
5) Deploys (automated) to an acceptance environment
6) Runs automated acceptance tests
7) Deploys (automated or manual) to production
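
A minimal sketch of these stages as code (the deploy and test commands are placeholders; a real pipeline would normally live in your CI/CD server's configuration):

```python
import subprocess

# Each stage is a (name, command) pair; commands are illustrative placeholders.
PIPELINE = [
    ("unit tests",            ["python", "-m", "pytest", "tests/unit"]),
    ("deploy to integration", ["python", "deploy.py", "--env", "integration"]),
    ("integration tests",     ["python", "-m", "pytest", "tests/integration"]),
    ("deploy to acceptance",  ["python", "deploy.py", "--env", "acceptance"]),
    ("acceptance tests",      ["python", "-m", "pytest", "tests/acceptance"]),
]

def run_pipeline() -> bool:
    """Stop at the first failing stage, just like a real delivery pipeline."""
    for name, command in PIPELINE:
        print(f"stage: {name}")
        if subprocess.run(command).returncode != 0:
            print(f"pipeline failed in stage: {name}")
            return False
    print("ready for (manual or automated) production deployment")
    return True

if __name__ == "__main__":
    run_pipeline()
```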

13. Configuration Management by Code

The operating system, host configuration, operational tasks etc. are automated with code by developers and system administrators. As code is used, configuration changes become standard and repeatable. This relieves developers and system administrators of the burden of configuring the operating system, system applications or server software manually.
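
In practice you would use a dedicated tool (Ansible, Puppet, Chef, Terraform, and so on), but the idea can be sketched in a few lines: describe the desired state in code and apply it idempotently. The file path and content below are purely illustrative.

```python
from pathlib import Path

# Desired state, described as data: a hypothetical config file and its content.
DESIRED_FILES = {
    Path("/etc/myapp/app.conf"): "max_connections = 100\nlog_level = info\n",
}

def apply_configuration() -> None:
    """Idempotent: running this twice leaves the system in the same state."""
    for path, content in DESIRED_FILES.items():
        if path.exists() and path.read_text() == content:
            print(f"unchanged: {path}")
            continue
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)
        print(f"updated: {path}")

if __name__ == "__main__":
    apply_configuration()
```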


14. Code Documentation

Inevitably, documentation and code comments become lies over time. In practice, few people update comments and/or documentation when things change. Strive to make your code readable and self-documenting through good naming practices and known programming style.

15. Step by step development process guide 

This guide is essential for onboarding of new people and inspecting and adapting the way the team works. I work a lot with Kanban and ScrumBan and an important concept of these is to make your process explicit.

16. Step by step deployment process guide 

Somebody who does not usually do this should be able to deploy to production with this guide on the table. You never know when you will need it, but the day will come, and then you will be happy you have it. Of course, the further you move in the direction of Continuous Delivery, the smaller this guide becomes, because the documentation of this process is coded into your automated processes.

17. Monitoring and Logging

To gauge the impact that the performance of applications and infrastructure has on consumers, organizations monitor metrics and logs. The data and logs generated by applications and infrastructure are captured, categorized and then analyzed to understand how users are impacted by changes or updates. This makes it easy to detect the sources of unexpected problems or changes. Constant monitoring is necessary to ensure steady availability of services and to increase the speed at which infrastructure can be updated. When this data is analyzed in real time, organizations can monitor their services proficiently.
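
A minimal sketch of the application side of this: emit structured log lines and a simple timing metric so they can be captured, categorized and analyzed later (the logger configuration, event names and fields are illustrative):

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("myapp")

def log_event(event: str, **fields) -> None:
    """Structured (JSON) log lines are easy to capture, categorize, and analyze."""
    log.info(json.dumps({"event": event, "timestamp": time.time(), **fields}))

def handle_request(user_id: int) -> None:
    started = time.perf_counter()
    # ... the real work would happen here ...
    duration_ms = (time.perf_counter() - started) * 1000
    log_event("request_handled", user_id=user_id, duration_ms=round(duration_ms, 2))

if __name__ == "__main__":
    handle_request(user_id=42)
```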

18. Being aware of technical debt

The metaphor of technical debt in code and design can be defined as follows: you start at an optimal level of code. In the next release, you add a new feature. This would take an effort E (assuming, of course, that estimations are somewhere near reality).

If the level of the code is less than optimal, the effort becomes E + T, where T is the technical debt. For example, if a feature is estimated at ten days on a clean code base but working around accumulated shortcuts adds four days, the real effort is fourteen days. Writing bad code is like going further into debt: you take the loan now, and you repay the debt later. The bigger the mess, the larger the delay in the next release.

The term “technical debt” was first introduced by Ward Cunningham. It was in the early 90s, when the disconnects between development and business were growing bigger and bigger. The business people would urge developers to release untested, ugly code in order to get their product or new features faster. The developers tried to explain why this was a bad mistake. Some things will never change...

Most products and projects are still released much earlier than the developers would have wanted. Assuming that developers are not just being stubborn (I know, maybe an even bigger assumption than decent estimations), you would think that we didn’t manage to get the message across to the business. But we have done an awesome job explaining what technical debt is and what the results are going to be. The business people understand it. They are just willing to take the loan now. Can you blame them? Business wants something out there, in the field, that will sell now.

No problem, just make sure the consequences of these decisions are clear for all parties involved.

19. Good design

Good design is hard to judge, but luckily enough bad design is easy to “smell”. Software Developers are notorious for using different criteria for evaluating good design but, from experience, I tend to agree with Bob Martin and Martin Fowler who have said that there is a set of criteria that engineers usually agree upon when it comes to bad design.

You can't recognise good design until you know what bad design is. And once you know what good design should avoid, you can easily judge whether a given engineering principle has any merit or is just fuzz waiting to distract you from your real goal of building software that is useful to people. That is why we use bad design as the starting point for determining whether we have a good design.

A piece of software that fulfils its requirement and yet exhibits any or all of the following traits can be considered to have "bad design":

1) Rigidity: It's too hard to make changes because every change affects too many other parts of the system.
2) Fragility: When you make a change, unexpected parts of the system break.
3) Immobility: It's hard to reuse a chunk of code elsewhere because it cannot be disentangled from its current application/usage.
4) Viscosity: It's hard to do the "right thing" so developers take alternate actions.
5) Needless Complexity: The design is overengineered for the problem at hand.
6) Needless Repetition: The same code is copied and pasted over and over again ("mouse abuse").
7) Opacity: The code is disorganized and does not express its intent clearly.

Conclusion

As you have noticed in the descriptions above, the practices are “layered”. To do x, you will need to do y first. For example, Continuous Integration is not possible without Build Automation, and neither is Test Automation without Automated Tests. And so on. Good software development practices start with the foundational layers and then build on top. When the foundation is weak, all else will be weak as well.


Tuesday, April 24, 2018

Proactively manage your project with RAID lists (Risks, Assumptions, Issues and Decisions)

RAID Lists
RAID is an acronym for Risks, Assumptions, Issues, and Decisions. Some use the "D" for dependencies instead of decisions, and some use the "A" for actions instead of assumptions. I personally track dependencies on my assumption list (because that is what dependencies are) and I have no need for a separate action list since I track actions in a separate column of each of the other lists.

Although I speak of four lists, you can, of course, use one tool to maintain them and filter on type when necessary. This is actually advisable, since items will move from one list to another on a regular basis, but we will come back to that later.
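
One way to keep the four lists in a single tool is to store every entry with an explicit type and filter on it. A minimal sketch (the field names and example entries are my own choices, not a standard):

```python
from dataclasses import dataclass
from enum import Enum

class RaidType(Enum):
    RISK = "risk"
    ASSUMPTION = "assumption"
    ISSUE = "issue"
    DECISION = "decision"

@dataclass
class RaidItem:
    type: RaidType
    description: str
    owner: str
    action: str = ""  # actions live in a column, not in a separate list

items = [
    RaidItem(RaidType.RISK, "Key supplier may miss the Q3 delivery", "PM", "Agree on a fallback supplier"),
    RaidItem(RaidType.ASSUMPTION, "Test environment stays available during UAT", "IT Ops"),
]

risks = [item for item in items if item.type is RaidType.RISK]  # "filter on type"
```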

The terms risks, assumptions, and issues often cause confusion. Their definitions are similar but are applied differently in different contexts or areas of work.

Risk is made up of two parts: the probability of something going wrong, and the negative consequences if it does, often called impact. Caution: risk is not the same as uncertainty. Risk and uncertainty are different terms, but most people think they are the same and ignore the difference. Managing risk is easier because you can identify risks and develop a response plan in advance based on your past experience. Managing uncertainty, however, is very difficult, as previous information is not available, too many parameters are involved, and you cannot predict the outcome.

Assumptions are hypotheses about necessary conditions, both internal and external, identified in a design to ensure that the presumed cause-effect relationships function as expected and that planned activities will produce expected results. In other words, an assumption is an event, condition or fact that we need to happen or stay the same in order to assure project success.

An issue is a current problem – something that is happening now. Issues can be either something that was not predicted or a risk that has materialized. It can take the form of an unresolved decision, situation or problem that will significantly impact the project.

We could explain these terms with the following simple expressions:

Risk: "What if?"
Assumption: "We need that it happens/stays this way".
Issue: "Oh, damn!".

If you have predicted the issue and treated it as a risk, you would already have a response plan, and the expression would be: "Hmm, that was to be expected. Let's handle it as discussed".

Even though risk, assumption, and issue have different definitions, they are deeply connected. An assumption might have a response plan if it doesn't happen or change, a risk will become an issue at the moment it happens, and an (unpredicted) issue must be included in the risk list once we identify it (and there is a chance that it will happen again).

The secret to good RAID lists is to record the right risks, assumptions, issues, and decisions at the right level of detail. Too many, in too much detail, simply create unnecessary bureaucracy. Too few, in too little detail, do not provide actionable insight. For example, I have often seen risk lists populated with boilerplate risks: generic project risks, such as 'we may not get sufficient executive sponsorship', which don't really add any insight to the project. If you genuinely think that is a risk, you are better off identifying the underlying reasons which might cause it, and then expressing the risk in those terms.

For me, these four lists are the base for proactive instead of reactive project management.

Proactive behavior involves acting in advance of a future situation, rather than just reacting. It means taking control and making things happen rather than just adjusting to a situation or waiting for something to happen.
RAID logs are an excellent governance mechanism, and worth keeping even if for no other reason than so that you have all of the information to hand if the internal audit department or some other stakeholder decides to audit your project. But don't forget that projects are fundamentally about doing things. So identifying risks and issues without also identifying actions to mitigate them, and making decisions to resolve them will not get you very far.


Saturday, April 14, 2018

Start your project with a Walking Skeleton

Start with a walking skeleton
In order to reduce risk on large software development projects, you need to figure out all the big unknowns as early as possible. The best way to do this is to have a real end-to-end test with no stubs against a system that’s deployed in production.

You could do this by building a so-called Walking Skeleton, a term coined by Alistair Cockburn. He defined it as a tiny implementation of the system that performs a small end-to-end function. It need not use the final architecture, but it should link together the main architectural components. The architecture and the functionality can then evolve in parallel. A similar concept called “Tracer Bullets” was introduced in The Pragmatic Programmer.

If the system needs to talk to one or more datastores then the walking skeleton should perform a simple query against each of them, as well as simple requests against any external or internal service. If it needs to output something to the screen, insert an item to a queue or create a file, you need to exercise these in the simplest possible way. As part of building it, you should write your deployment and build scripts, set up the project, including its tests, and make sure all the automations are in place — such as Continuous Integration, monitoring, and exception handling. The focus is the infrastructure, not the features. Only after you have your walking skeleton should you write your first automated acceptance tests.

This is only the skeleton of the application, but the parts are connected and the skeleton does walk in the sense that it exercises all the system’s parts as you currently understand them. Because of this partial understanding, you must make the walking skeleton minimal. But it’s not a prototype and not a proof of concept — it’s production code, so you should definitely write tests as you work on it.
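
A minimal sketch of the kind of end-to-end check a walking skeleton should pass (the URL and expectations are hypothetical; the point is that it exercises the deployed system, not stubs):

```python
import unittest
import urllib.request

# Hypothetical endpoint of the deployed walking skeleton.
BASE_URL = "https://staging.example.com"

class WalkingSkeletonTest(unittest.TestCase):
    def test_end_to_end_round_trip(self):
        """One thin slice: a request travels through the API, the service, and the datastore."""
        with urllib.request.urlopen(f"{BASE_URL}/health") as response:
            self.assertEqual(response.status, 200)
            body = response.read().decode()
        self.assertIn("ok", body.lower())

if __name__ == "__main__":
    unittest.main()
```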

High risk first 

According to Hofstadter’s Law, “It always takes longer than you expect, even when you take into account Hofstadter’s Law”. This law proves valid all too often. It makes sense, then, to work on the riskiest parts of the project first, which are usually the parts with dependencies: on third-party services, on in-house services, on other groups in the organization you belong to. It makes sense to get the ball rolling with these groups simply because you don’t know how long it will take and what problems might arise.

Making changes to an architecture is harder and more expensive the longer it has been around and the bigger it gets. We want to find mistakes early. This approach gives us a short feedback cycle, from which we can more quickly adapt and work iteratively as necessary to meet the business' prioritized list of runtime-discernable quality attributes. Assumptions about the architecture are validated earlier. The architecture is more easily evolved because problems are found at an earlier stage when less has been invested in its implementation.

The bigger the system, the more important it is to use this strategy. In a small application, one developer can implement a feature from top to bottom relatively quickly, but this becomes impractical with larger systems. It is quite common to have multiple developers on a single team or even on multiple, possibly distributed teams involved in implementing end-to-end. Consequently, more coordination is necessary.

No shortcuts

It’s important to stress that until the walking skeleton is deployed to production (possibly behind a feature flag or just hidden from the outside world) you are not ready to write the first acceptance test. You want to exercise your deployment and build scripts and discover as many potential problems as you can as early as possible. The Walking Skeleton is a way to validate the architecture and get early feedback so that it can be improved. You will be missing this feedback if you cut corners or take shortcuts.

Conclusion

Start with a Walking Skeleton, keep it running, and grow it incrementally.


Monday, April 09, 2018

It's never too early to think about performance

Automated Performance Testing
Business users specify their needs primarily through functional requirements. The non-functional aspects of the systems, like performance, responsiveness, up-time, support needs, and so on, are left up to the development team.

Testing of these non-functional requirements is left until very late in the development cycle, and is sometimes delegated completely to the operations team. This is a big mistake that is made far too often. Having separate development and operations teams is already a mistake by itself, but I will leave that discussion for another article.

I was recently part of two large software development projects where performance was addressed too late, and the cost and time needed to fix it were an order of magnitude larger than they would have been had performance been addressed early in the project. Not to mention the bad reputation the teams and systems got after going live with performance so bad that users could hardly do their daily work with the system.

Besides knowing before you go live that users are not going to be happy (and that you therefore should NOT go live), there is another big advantage of early performance testing. If you aren't looking at performance until late in the project cycle, you have lost an incredible amount of information as to when performance changed. If performance is going to be an important architectural and design criterion, then performance testing should begin as soon as possible. If you are using an Agile methodology based on two-week iterations, I'd say performance testing should be included in the process no later than the third iteration.

Why is this so important? The biggest reason is that at the very least you know the kinds of changes that made performance fall off a cliff. Instead of having to think about the entire architecture when you encounter performance problems, you can focus on the most recent changes. 

Doing performance testing early and often provides you with a narrow range of changes on which to focus. In early testing, you may not even try to diagnose performance, but you do have a baseline of performance figures to work from. This trend data provides vital information in diagnosing the source of performance issues and resolving them.

This approach also allows for the architectural and design choices to be validated against the actual performance requirements. Particularly for systems with hard performance requirements, early validation is crucial to delivering the system in a timely fashion.

“Fast” is not a requirement 

"Fast" is not a requirement. Neither is "responsive". Nor "extensible". The main reason why not is that you have no objective way to tell if they're met. 

Some simple questions to ask: How many? In what period? How often? How soon? Increasing or decreasing? At what rate? If these questions cannot be answered, then the need is not understood. The answers should be in the business case for the system, and if they are not, then some hard thinking needs to be done. If you work as an architect and the business hasn't told you (or won't tell you) these numbers, ask yourself why not. Then go get them. The next time someone tells you that a system needs to be "scalable", ask them where new users are going to come from and why. Ask how many and by when? Reject "lots" and "soon" as answers.

Uncertain quantitative criteria must be given as a range: the least, the nominal, and the most. If this range cannot be given, then the required behavior is not understood. As an architecture unfolds it can be checked against these criteria to see if it is (still) in tolerance. As the performance against some criteria drifts over time, valuable feedback is obtained. Finding these ranges and checking against them is a time-consuming and expensive business. 
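
A small sketch of what "least, nominal, most" can look like once it is written down and checked against measurements (the numbers are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class ResponseTimeCriterion:
    """Quantified performance criterion, in milliseconds: least / nominal / most."""
    least: float    # best case we would realistically expect
    nominal: float  # what we design for
    most: float     # beyond this the requirement is violated

    def in_tolerance(self, measured_ms: float) -> bool:
        return measured_ms <= self.most

search_response = ResponseTimeCriterion(least=50, nominal=200, most=500)
print(search_response.in_tolerance(measured_ms=320))  # True: within tolerance
print(search_response.in_tolerance(measured_ms=800))  # False: requirement violated
```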

If no one cares enough about the system being "performant" (neither a requirement nor a word) to pay for performance tests, then more than likely performance doesn't matter.

You are then free to focus your efforts on aspects of the system that are worth paying for.

Automated Performance Testing

In order to keep the costs of and time spent on performance testing in check, I advise you to automate it as much as possible. A tool like Taurus simplifies the automation of performance testing; it is built for developers and DevOps, and relies on JMeter, Selenium, Gatling and Grinder as underlying engines. It also enables parallel testing, its configuration format is readable and can be parsed by your version control system, it is tool friendly, and tests can be expressed using YAML or JSON.

Here are some types of tests you can run automated:
- Load Tests are conducted to understand the behavior of the system under a specific expected load.
- Stress Tests are used to understand the upper limits of capacity within the system.
- Soak Tests determine if the system can sustain the continuous expected load.
- Spike Tests determine if the system can sustain a suddenly increasing load generated by a large number of users.
- Isolation Tests determine if a previously detected system issue has been fixed by repeating a test execution that resulted in a system problem.
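
In a real setup you would drive JMeter, Gatling or a similar engine through Taurus, but the essence of an automated load check can be sketched in plain Python (the URL, request count, and threshold are invented for illustration):

```python
import statistics
import sys
import time
import urllib.request

URL = "https://staging.example.com/api/search?q=test"  # hypothetical endpoint
REQUESTS = 50
P95_BUDGET_MS = 500                                    # invented threshold

def measure_once(url: str) -> float:
    started = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return (time.perf_counter() - started) * 1000

def run_load_check() -> bool:
    durations = sorted(measure_once(URL) for _ in range(REQUESTS))
    p95 = durations[int(len(durations) * 0.95) - 1]
    print(f"median={statistics.median(durations):.0f}ms p95={p95:.0f}ms")
    return p95 <= P95_BUDGET_MS

if __name__ == "__main__":
    sys.exit(0 if run_load_check() else 1)
```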

Technical testing is notoriously difficult to get going. Setting up the appropriate environments, generating the proper data sets, and defining the necessary test cases all take a lot of time. By addressing performance testing early, you can establish your test environment incrementally, avoiding the much more expensive effort of doing it all at once after you discover performance issues.

Test Automation & Code Quality Workshop

One very big pain point for software development projects is trying to implement a more or less agile project delivery method without implementing automated tests and test automation. Automated performance testing is part of this.  

That is why Falko Schmidt and I decided to design a two-day workshop to address this pain point and teach you how to solve it. We have worked together on a number of software development projects for clients like PwC and Helsana, Falko in the role of lead developer or test automation specialist, myself as project coach or project recovery manager.

We both see test automation as an essential part of modern software development and project success and designed this workshop to spread the word and teach what we have learned over the years.

So ...

- Are you a company that wants to release faster and with better quality?
- Are you managing a software development effort and your team does not have the skills to automate your testing?
- Are you a developer who would like to improve the quality of your own work?
- Are you tired of fixing the bugs in the code of your colleagues?
- Do your test automation efforts not bring the expected benefits?
- Do you want to increase your market value and employability as an all-round software developer?

Then this workshop is for you!

Have a look at www.test-automation.ch to see in detail what we offer and sign up. The next dates for our public workshop in Zurich are May 24/25 and July 5/6.

When you are interested in an in-house workshop for your team just contact us.


Friday, March 30, 2018

Doing the right number of projects

Agile Project Portfolio Management Goals
I have written about project portfolio management extensively and created the Agile Project Portfolio Management Framework. In all my articles I have relentlessly repeated the three goals of Agile Project Portfolio Management.

1) Maximize the value of the portfolio

2) Seek the right balance of projects, thus achieving a balanced portfolio

3) Create a strong link to strategy, thus the need to build strategy into the portfolio

After some thinking and new experiences I have decided to add a very important goal.

4) Doing the right number of projects

Getting this one wrong makes it very hard, or even impossible, to achieve the other three. And as you have probably already guessed, doing too few projects is seldom the case.

So what is the right number of projects?

Typically, half of what you are currently doing. Of course, the answer to this question depends on the altitude of your portfolio (i.e. only IT, or your whole company), the size of your projects, and the size of your company. But there are a number of guidelines from the Agile and Lean world that we can use.

PMI project portfolio management is all about value optimization and optimizing resource allocation. Both are designed in such a way that, in my opinion, they result in the opposite. As we have seen in the article about funding, running projects at a hundred percent utilization is an economic disaster. Any small amount of unplanned work will cause delays, which will be made even worse by the time spent on re-planning. And value is only created when it is delivered, not when it is planned. Hence we should focus on delivering value as quickly as possible within our given constraints.

This brings us to lean thinking. Lean is sometimes described as focusing on smaller batch (project) size, shorter queues, and faster cycle time. Focus on delivering value quickly. Lean is much more than this. The pillars are respect for people and continuous improvement resting on a foundation of manager-teachers in lean thinking. Queue management is just a tool far from the essence of lean thinking. But it is a very helpful tool that can have a very positive impact on your project portfolio management process.

Some examples of queues in project and portfolio management:
 - products or projects in a portfolio
 - new features for one product
 - detailed requirements specifications waiting for design
 - design documents waiting to be coded
 - code waiting to be tested
 - the code of a single developer waiting to be integrated with other developers
 - large components waiting to be integrated
 - large components and systems waiting to be tested

In lean thinking, work in progress (WIP) queues are identified as waste, and hence to be removed or reduced, because:
 - WIP queues (as most queues) increase average cycle time and reduce value delivery, and thus may lower lifetime profit.
 - WIP queues are partially-done inventory (of projects in this case) with an investment of time and money for which there has been no return on investment.
 - As with all inventory, WIP queues hide—and allow replication of—defects because the pile of inventory has not been consumed or tested by a downstream process to reveal hidden problems; for example, a pile of un-integrated code.

On a similar note, we typically have a lot of shared-resource queues in our portfolio. Especially the main know-how carriers in a business unit are shared by multiple parallel projects. These queues get bigger due to project re-planning when one of the projects is behind schedule.

Also consider project portfolio throughput from a cost of delay point of view. You might, for example, be able to complete a project with perfect resource management (all staff is perfectly busy) in 12 months for $1 million. Alternatively, you could hire some extra people and have them sitting around occasionally at a total cost of $1.5 million, but the project would be completed in only 6 months.

What's that 6 months difference worth? Well, if the project is strategic in nature, it could be worth everything. It could mean being first to market with a new product or possessing a required capability for an upcoming bid which you don't even know about yet. It could mean impressing the heck out of some skeptical new client or being prepared for an external audit. There are many scenarios where the benefits outweigh the cost savings.

In addition to delivering the project faster, when you are done after 6 months instead of 12 months you can use the existing team for a different project, delivering even more benefits for your organization. So not only do you get the benefits of your original project sooner and/or for longer, you will get those of your next project sooner as well, because it starts earlier and is staffed with an experienced team.
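
A rough sketch of that trade-off in numbers (all figures are the hypothetical ones from the example above):

```python
# Hypothetical figures from the example above.
cost_slow, months_slow = 1_000_000, 12  # perfectly utilized team
cost_fast, months_fast = 1_500_000, 6   # extra people, some slack

extra_cost = cost_fast - cost_slow           # 500,000
months_saved = months_slow - months_fast     # 6
break_even_value_per_month = extra_cost / months_saved

print(f"Extra spend: {extra_cost:,.0f}")
print(f"Worth it if one month of earlier delivery is worth more than "
      f"{break_even_value_per_month:,.0f}")  # ~83,333 per month
```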

So what does this mean for your organization?

For a typical organization this means the following five things:

1) Doing fewer projects in parallel (i.e. limit work in progress). On average only half of what you are doing now. Focus on limiting the number of concurrent projects in your organizational units.

2) Focus on doing the right projects.

3) Keep your project size small (i.e. limit batch size)

4) Focus on improving your project delivery capability.

5) Implement an agile delivery method for projects where it makes sense (i.e. reduce cycle time, because you deliver value during the project and not only at the end)

The Agile Project Portfolio Management Framework

Based on this article series I have written The Agile Project Portfolio Management Framework Guide. Besides this guide I have written many more articles on Agile Project Portfolio Management. You can find those here.

When you need help with setting up your Agile Project Portfolio Management process, or want to have a review of your existing portfolio, have a look at Project Portfolio Review. When you are interested in a workshop on Agile Project Portfolio Management have a look here.


Sunday, March 25, 2018

It is time to face your project's reality

Projects fail for many reasons: optimistic estimates; technology that doesn’t work as advertised; misunderstanding the needs of key stakeholders; plans driven by deadlines rather than feasibility; the belief of the team that given enough pizza they can solve any problem by tomorrow morning.

In my experience, many of these reasons come back to one root cause: we lose touch with reality. A few examples:

- We spend insufficient time exploring and validating assumptions at the start of a project, so we operate from different expectations of what’s required or how they’ll work together. This makes it almost impossible to build meaningful estimates and schedules. The project begins from a position of unrealistic expectations and is playing catch-up forever after. See my article “No validation? No project!”.

- We start with an over-optimistic assessment of our team’s capabilities and skills, or of the technology and methodologies we are using. Initial slow progress is explained away by lack of familiarity. Eventually, the reality of slower than expected progress hits us.

- Sponsors may not realize how much effort and commitment they need to put into sponsorship, or what skills will be required. They may assume that the project manager can handle many of the things that are properly their responsibility. A project with weak leadership will then have trouble connecting to broader organizational objectives.

- We negotiate poorly with key stakeholders. How often have you seen an executive beat down the initial estimate from a technical team? The team thinks they’re estimating, but the executive is conducting a negotiation. At least one person has lost touch with reality in this situation.

- Teams under pressure find it difficult to step back and think about what’s really going on. Minor problems pile up because everyone is so busy that they can’t see the underlying trend.

- Even if we can see the underlying trend, we have trouble reporting it. A culture that says “come to me with solutions, not problems” may mean that unsolved problems don’t get talked about. A culture where senior managers are expected to have all the answers may mean that sponsors can’t admit that they don’t understand what their project manager is saying. Often we just don’t know who to go to with our observations. People with problems have trouble connecting to people who may have solutions, and the project’s leadership remains disconnected from the real situation.

- And, of course, many of us don’t even want to think about what the real risks of the project might be.

This doesn’t happen because we are lazy, or because we are trying to sabotage our projects. Skills and experience may provide some insulation, but they don’t eliminate the problems completely. We lose touch with reality for reasons rooted in basic psychology. They are called cognitive biases. There are many of them, and most could impact your project. I will discuss the ones that are, in my opinion, the most "dangerous".

Overconfidence

"The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts” – Bertrand Russel

We tend to overestimate our own abilities. Surveys show, for example, that most of us consider ourselves to be of above-average intelligence, or to be better software developers than average, or to be better project managers than average. Likewise, most consultancies and system integrators claim a significantly higher success rate than the industry average. This specific effect of overconfidence is known as the Dunning-Kruger effect.

Overconfidence is a direct cause of underestimation on many projects. It helps drive phenomena like scope creep (developers think they can implement a feature with little effort) and the famous ninety-ninety rule.

"The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time." - Tom Cargill, Bell Labs

Oversimplification

"Everything should be made as simple as possible, but not simpler" – Albert Einstein
We tend to use simplifying assumptions and mental model in order to deal with complexity. This is sometimes essential, for example when rapid decision making is important, but it can lead to unintended outcomes very fast. Most of us have been caught by time, cost and complexity estimates that escalate as we get further into the details. Yet most big projects are started with very broad, “high level” estimates.

Likewise, we often operate from mental models for how projects, teams or organizations should work. These are simplifications of reality; they embed large numbers of assumptions. If we use them unthinkingly, we fail to recognize situations where the assumptions are violated. We begin to operate in model land rather than the real world. Anybody who has done some work with one of the MBB management consultants should have experience with this. Not to mention the software and solution architects that have not done any real work since switching to PowerPoint architecture.

When we fail to recognize the true complexity of our projects, their requirements and the organizations that they are serving, we accept an unrealistic scope, estimates, staffing, and plans. We also fail to recognize and manage key risks.

Avoiding Pain

“Pain in this life is not avoidable, but the pain we create avoiding pain is avoidable” – R.D. Laing

We tend to avoid pain. I am not talking about physical pain, but emotional pain. Sometimes this means deferring immediate pain at the risk (or even certainty) of having to deal with greater pain in the future. For example, rather than face up to a tough conversation now (“it probably can’t be done”), we take high-risk options (“the only way it can be done is if we cut all these corners and hope for the best”). Likewise, rather than going through the pain of admitting that we have been delayed, we find reasons to persuade ourselves that we will catch up by the next milestone. Like that will ever happen…

The pain we are avoiding comes from a variety of sources and is very much depending on your personality.

- We may be avoiding confrontation, anger or disappointment. This is why so many of us have trouble saying “no”. We would rather fail later than deal with anger or disappointment now. Especially with people we love, like, fear and/or respect.

- We may be avoiding the feeling of losing control or mastery. We may focus on tasks we are good at, rather than those that are important. Technical people often derive a lot of their identity from their ability to solve problems. Admitting they can’t solve a problem represents a deep threat to their sense of identity. Project managers who like to feel in control of the situation (which they never are) may avoid finding out or understanding things they can’t control (which are almost all the important things in a project).

- We might be avoiding cultural taboos. Some teams and cultures have difficulty saying “no” because it is just not something you do. There are strong cultural pressures to give “positive” responses. Many organizations find it difficult to discuss risks for similar reasons. There is a strong pressure to “smile, it will probably never happen”.

- We might be avoiding loss of “face”. For example, senior managers find it difficult to admit their ignorance to the people reporting to them. Rather than seek clarification of status reports, they may allow things to drift. This is a common reason for weak project sponsors and steering committee members: they simply do not know how to help and do not want to ask. Strong project managers can manage and engage their steering committee and sponsor, but inexperienced project managers, who are the most likely to need support themselves, can find this hard to deal with.

- We may be avoiding apparent costs. Spending time to clarify objectives, allocating time to risk mitigation activities, undertaking further analysis of the problem, performing independent reviews and assurance: these all involve immediate costs with uncertain paybacks. There is often strong pressure to avoid these immediate and obvious costs in order to reduce the budget.

Confirmation Bias

“We tend to accept information that confirms our prior beliefs and ignore or discredit information that does not. This confirmation bias settles over our eyes like distorting spectacles for everything we look at." - Kyle Hill

We tend to look for evidence that confirms our existing hopes and beliefs. We want, for example, to believe that our project is on course, so we seek evidence that shows we are hitting milestones and ignore information that shows where we are behind schedule. As a team makes commitments to budgets, estimates, schedules, and deliverables, this bias becomes more embedded. The team begins to focus its energy on finding ways to achieve these commitments and to avoid any signs of problems or risks. The opposite can happen as well: as soon as a team believes the project will fail, it most probably will, because the team will focus on finding reasons why it will.

Confirmation bias is often built into our project approval processes. People say what they think they need to say in order to get the projects they believe in approved. They may not lie, but they pay only cursory attention to the costs, risks and side effects. This then becomes the approved version of the project’s storyline, embedded ever deeper in people’s thinking through repetition.

Availability cascade

"Vision without execution is hallucination." - Thomas Edison                    
The availability cascade is a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse. If we hear a message often enough, we begin to believe it. This is the basis of most marketing and propaganda. Eventually, we may end up believing our own propaganda. A team takes initial, rough estimates and works with them, gradually losing sight of the fact that there is little basis behind them; a manager repeats the high-level vision to all who will listen and starts to mistake the vision for delivered reality.

Perceptual Bias

“How does a project get to be a year late? One day at a time” – Fred Brooks

We often find it harder to recognize gradual trends than sudden changes. We get locked into our view of individual details rather than seeing the big picture. When we are under pressure, we see each lost day as an individual, disconnected event and lose track of the bigger trend towards disaster.

Anchoring Effect

You've seen the anchoring effect in action a million times, usually in the form of a suggested retail price. When you see a purse with a price tag that claims it normally sells for $400 but is marked down to $200, that little "yippee!" you feel inside at the prospect of getting a bargain? That is the anchoring effect. The seller simply told you that the purse was worth $400, and we tend to accept this line of bull because we are basically trusting people, trained to use price as our gauge of value.

Anchoring is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions. During decision making, anchoring occurs when individuals use an initial piece of information to make subsequent judgments. Once an anchor is set, other judgments are made by adjusting away from that anchor, and there is a bias toward interpreting other information around the anchor.

So you can imagine why your first rough estimates are maybe not so harmless. The first number mentioned to the steering committee, however rough, tends to become the baseline against which every later estimate is judged.

Reinforcement Loops

All the biases described above often combine and reinforce each other. We want to believe that we are above average at what we do, so confirmation bias reinforces our overconfidence. As we keep confirming how good we are, the availability cascade adds further reinforcement through repetition. This growing overconfidence makes us more prone to use pain-avoidance strategies. We assume we will be able to solve any problems, so we do not face up to potential difficulties or invest in analysis and risk mitigation. Flying in two independent experts to validate the solution architecture of your multimillion software development project is cheap compared to finding out later that your design will not work and that you could have known this from the start. Those experts might even have some other great tips and tricks as well.

Organizational Accelerators

“The limits of my language means the limits of my world.” - Ludwig Wittgenstein

On top of the biases and their reinforcement loops, we have the organizational culture that can make things worse.

Organizations that focus on looking for blame instead of solutions increase the pain associated with admitting problems, increasing the chance that people will hide them until the very end. Strongly hierarchical organizations can increase the pain associated with confrontation or loss of face (think Yakuza), making it difficult for people at different levels to admit their uncertainties to each other. Organizations with a strong collegial culture run a higher risk of “groupthink”, the tendency for people in a group to converge on consensus at the expense of critical examination.

Projects that span organizational boundaries face additional challenges. You have to deal with not one, but multiple cultures. This is especially true in joint ventures and outsourcing relationships that involve multiple organizations. But it also affects projects working across group and unit boundaries within a single organization.

People may be uncertain as to what information can be shared or how it should be shared. Different professional backgrounds, languages and conventions may make it difficult to understand the information that is shared.

Contracts and legal concerns can create further barriers. Penalty clauses, for example, may help align objectives across organizations, but they also create incentives to hide problems and to be very reluctant to support change. Procurement regulations may restrict what information can be shared at key stages in the project. All these barriers make it harder for people to uncover and deal with the biases described above.

Conclusion

Underlying all this, reality does not always give us the message we would like to hear. We do not want to hear that we are less intelligent than we would like, or less capable of solving problems, or that we code too slowly. Project failure is one of the prices we pay to avoid (or at least delay) hearing these messages.

Eventually, however, our projects come face-to-face with reality. Something will happen that we cannot ignore anymore. Unfortunately, by the time this happens, it is often too late.

This is why project reviews exist. They are a way to help teams, sponsors, project managers, and other stakeholders manage these human biases and keep in touch with what is really happening on their projects. Of course, they are not the only way. Test automation, project validation, iterative delivery, financial controlling, realistic estimation, pair programming and many other practices help reduce these biases. But every now and then a project review can save you many headaches.


Tuesday, March 20, 2018

Five types of reviews for projects

Project Review, Project Audit, Project Retrospective, Gate Review
Reviews can serve a wide range of stakeholders and fulfill a variety of roles. It’s therefore not surprising that organizations undertake several different types of review on their most important projects. From my perspective, there are five distinct types of review, each with its own focus and outcome.

1) Project Review: Can happen at any point during the project. It checks progress against the original schedule, budget, and deliverables. It looks at the effectiveness of the team, project management, engineering practices, and other related processes. It typically delivers some sort of assessment of the likelihood of project success and identifies areas of concern and corrective actions. This kind of review is also known as a Project Healthcheck or Project Evaluation. These are the kind of reviews I mostly do, typically as part of a project recovery process.

2) Gate Review: Happens at the end of a project phase or at some other defined point in the project’s lifecycle. It typically represents a decision point, using the outputs from an evaluation to decide whether continued investment in the project is justified.

3) Project Audit: An objective evaluation by a group outside the project team. A project audit is about compliance and about the current state. An audit aims to show the extent to which your project conforms to the required organizational and project standards. So, if your organization uses PRINCE2 or its own project management methodology, an audit will look at how closely you follow those processes. An audit can take place during or after the project.

4) Project Retrospective: Happens as the project closes down. It assesses the overall success of the project and identifies what did or didn’t work during its execution, generating lessons learned for the future. Also known as Postmortem or Post-project Review.

5) Benefits Realization Review: Happens after the organization has had some chance to use the outputs from the project. It evaluates the extent to which the benefits identified in the original business case have been achieved.


Monday, March 19, 2018

How to do a project delivery team review

Project Delivery Team Review
In my experience, most issues that arise in software development projects are people issues, not technical ones. DeMarco and Lister showed this two decades ago in their highly recommended book Peopleware: Productive Projects and Teams, and many others have confirmed it since. Successful projects consistently give significant weight to the human factor.

An important part of the project reviews I do is the project delivery team review. The project delivery team consists of the project team (developers, testers, solution architect, business analysts, and so on), the project manager, the project management office and line managers. Stakeholders, the sponsor, and the Steering Committee are not part of this project delivery team review; they are covered in a separate review because the focus there is completely different. From here on I will refer to the project delivery team simply as the team.

When reviewing a team I usually look at three levels:

1) Team performance
2) Project management
3) Team members

These are not three separate reviews, but rather three overlapping perspectives of the team, which are part of a single review. While a substantial part of the interviewing can be performed concurrently, information from the review of team performance will provide input to the project management review, which will then provide input for the team member reviews. This will be discussed in detail in the following sections.

Team performance

I start with a review of the project delivery team and focus on the performance of the team as a unit. Interview the initiating manager to get a general perspective of the team, and then meet with the project manager and other parties who interface regularly with the team (interviews can be conducted with multiple participants). The following list covers ten key issues that should be considered during your review and resulting interviews:

1) General team performance: Look at the team as a single unit. How well does the team perform? Are team activities well coordinated, and do team members cooperate well with each other? Look at the team’s accomplishments. Were they due to teamwork, or would individual team members working on their own have achieved them in any case? Look at the team’s setbacks: Could better teamwork have prevented them? How has the team performed over time? Has performance improved or declined?

2) Skill set: Look at the skill set necessary to deliver the project: Does the team have these skills? Does the team have the necessary experience? Are the team member’s skills up to date? Does the development organization have a training program for software developers, and if so, is the team taking advantage of the program? See my article "14 Essential software engineering practices for your agile project" for more background and ideas.

3) Unstaffed positions: At this point, we are not yet looking at understaffing (budgeting for a team that is too small to deliver the project); rather, we are looking at budgeted team positions that have not been filled. Are all approved team positions staffed? If not, for how long have the unstaffed positions been empty? Is the project manager actively looking for people to fill the empty positions? Are the right people in the right roles? Are positions staffed part-time that should be staffed full-time?

4) Size of team: Look at the size of the team that has been budgeted in relation to the amount of work to be done. Is the team understaffed (too small)? Is the team overstaffed (too big)? Note that in some cases, overstaffing can be a problem (redundancy, overlapping responsibilities, greater management overhead, budget used for the wrong things). How bad is the understaffing or overstaffing?

5) Development facilities: Look at the facilities and tools available to the team. Does the team have the necessary development tools to deliver the project? Do team members have an appropriate workspace, desks, chairs, powerful laptop/pc, 2 screens, and meeting rooms? Does the team have the necessary communications facilities (chat, Kanban board, whiteboards, flipcharts)? Does the team have enough support (administrative support, technical support, PMO)?

6) Organization of team(s): Is the software being developed at a single location, or is it a distributed team? How is the team distributed (different offices, different companies, different countries, offshore, nearshore)? How is the team mixed (internal vs external, location vs location, junior vs senior)? Does the mix and/or split make sense? Would it not be better to have one team at one location? Is the team a feature team or a component team? See my articles "Agile Team Organization" and "Outsourcing technical competence" for more background and ideas.

7) Team atmosphere: Clearly, the failure of a project will affect the morale and spirit of the team. But is there anything else causing low morale (and if so, what is it)? Is there a lack of commitment to the project? Do team members seem to like each other, goof off together, and celebrate each other's success? Do team members hold each other accountable to high standards, and challenge each other to grow?

8) Relationships with other groups and teams: Look at the project team’s relationships with other external groups and teams. How well do they work with the infrastructure team, the independent test team, the field support team, the architecture team, stakeholders and other projects?

9) Conflicts: Are there any internal or external team conflicts or tensions that could disrupt the project? How long have they existed? How severe are they? Have there been attempts to resolve them? How did this go? Are there issues/opportunities the team isn't discussing because they're too uncomfortable?

10) Management: How good is the relationship between the team and management (the project manager and line management)? Is line management aware of the team problems? Have there been any attempts to resolve the problems? Have the managers responsible for the project been changed recently or frequently? Does line management still support the project and the team?

These ten points are not strictly separate, meaning they do not need to be covered one by one (they are not a checklist in the sense that you finish one and move on to the next). The review, interviews, and meetings can cover a broad range of topics in a single session, and some of the topics can be discussed concurrently. Use the input from the team performance review to prioritize the later interviews with the individual team members and the other project stakeholders. This way you will be able to put the main pieces of the picture in place more quickly. Reevaluate the priorities after each interview or meeting, and be prepared to repeat interviews when new questions and information come your way.

Project Management

The project management review focuses mainly on the leadership (not the administrative / PMO) aspects of the project and considers the main problems confronting the project manager and the way they are handled.

Very often, project managers are the first to be blamed for a failed project. It is important, however, to remember that projects fail for many reasons, several of which are unrelated to the qualities of the project manager. You should approach the review with no preconceptions. Although senior management may sometimes instinctively blame project management, it is your responsibility, after examining the facts, to independently determine if there are any major incompatibilities between the qualities of the project manager and the needs of the project. This is especially important because of the personal impact of your evaluation results.

1) Skills: Look at the main skills required for the project (avoid a comprehensive wish list for an ideal project manager). Concentrate on the core skills without which it is clear that the project manager will not be able to successfully lead the project. Does the project manager have these skills? In my opinion, for a software development project these skills are:

- a good understanding of the software development process
- a good understanding of the business and the business problem that will be addressed
- leadership skills
- project management skills

2) Relationships: How do others perceive the project manager’s ability to successfully lead the project? Look at the quality of the project manager’s relationships with the following groups:

- Team: Look at individual relationships, respect as a leader, and loyalty. Have there been any severe conflicts?
- Senior management: Look at individual relationships with key managers, issues of respect, and confidence in the project manager’s required capabilities. Have there been any severe conflicts?
- Other groups: Look at the working relationship with project support groups, such as the test group, quality assurance, field support, marketing, other projects, and so on.
- Other project stakeholders: In addition to senior management, look at the relationship with customers, users, investors, sub-contractors, partners, and so on.

Have any of these relationships contributed to problems? If other project problems are solved, do the disturbed relationships still endanger project success? See my article "10 Principles of Stakeholder Engagement" for more background and ideas.

3) Confidence in the project: Is the project manager confident that the project can be realized? Is doing the project worth the effort? In the project manager’s opinion, what are the top three well-defined things that need to be done to get the project done?

4) Spirit, motivation, commitment: How committed is the project manager to the success of the project? Look at the project manager’s personal identification with the project, the amount of time invested in the project (full time, part-time, and so on), loyalty to the project team, and more generally the level of enthusiasm in making the project a success. Clearly, much of this is related to the previous point.

5) Other problems: What other problems related to the project manager may have contributed to a negative impact on the project? Consider, for example, personal problems or sickness.

6) General capabilities: Based on points 1 to 5 look at the project manager’s overall ability to successfully manage the project (leadership, personality, competence, and so on). Concentrate only on major points (remember that this is not a performance review). Ask the following question: If other project problems (that are unrelated to the project manager’s work) are solved can the project manager lead the project to success?

Team Members

This review will identify team member problems that can prevent the project from being successfully completed. This is achieved through individual interviews that focus on the team problems identified during the team unit review. It is important to remember that this is not a performance review for the team members, and it is not concerned with problems that will not affect the overall success of the project.

If the team is large, there may not be sufficient time to meet with all team members individually—in such cases, select the ones relevant to the main team problems that have already been uncovered.

The following list includes seven key points to be considered during the individual team member interviews:

1) Role: Determine the team member’s official job function and their own description of their role in the project, and compare it with the role as described by the project manager. Are there significant discrepancies? When the person's role is ScrumMaster, have a look at the ScrumMaster Checklist.

2) Skills: Does the team member have the skills necessary to perform the project team role (avoid a comprehensive wish list and concentrate only on the main skills)?

3) Achievements and contributions: Look at the results (not effort) of the team member’s work on the project so far. What have been the main contributions to the project? Have the team member’s activities been a factor in the success/failure of the project? If so, how big is this factor?

4) Relationships: Look at the team member’s relationship with other persons involved in the project, including:

- Other team members
- The project manager
- Members of other groups/teams
- The project stakeholders

Are any of these relationships problematic? If so, how serious are the problems and how do they affect the project?

5) Morale and commitment issues: Does the team member have any motivation issues that go beyond the situation of the project? How committed is the team member to the project and especially to the attempt to bring it to success? Can the team member be expected to stay with the project if a recovery plan is implemented?

6) Other problems: What other problems have affected the team member’s work on the project? Look at issues such as work environment, tools and facilities, training, and personal problems.

7) Capabilities: Based on the points (1 to 6) look at the team member’s overall ability to successfully perform the role. Concentrate only on major points. Ask the following question: If a new project plan is implemented, will the team member be capable of successfully fulfilling the requirements of the, potentially new, role?

Results

As described in the previous sections there are several sources of information, and three different levels, for the evaluation of the project delivery team. The goal of the project delivery team review is to combine the information to form a single coherent view of the team. I usually do the following:

1) Complete and write down my view of things, identify information/evidence gaps and identify contradicting information. 

2) Archive all interview minutes/recordings, received documents, written documents, and all related emails. This will save substantial time if the findings ever need to be re-examined.

3) Prepare a short team overview document. The document should be no more than 3–6 pages (depending on the size and complexity of the team) and should take between two and four hours to prepare. Include the main sources of information, the list of interviews, the reasoning that led to any significant findings, and any problems or incompatibilities that arose during the evaluation.

4) Review the overview with the initiating manager. Go through a draft of the document together, expand any key points that are unclear, and correct any factual errors.

5) Adjust the evaluation. Additional information about the team may well continue to emerge throughout the disentanglement process. If any of the new information affects your main findings, be prepared to adjust your conclusions. However, avoid frequent changes to your findings, especially when the new data does not significantly affect your key conclusions.

Due to the information on individual team members, the team overview document cannot be considered a public document. However, a summary of the findings should be available to the team and to the project stakeholders (consult with the initiating manager regarding any business reasons to limit distribution).


Saturday, March 17, 2018

Official launch test automation and code quality workshop

So I have written many times about test automation, automated tests, continuous integration, technical competence, DevOps and so on. This is because in many of my project recovery engagements these are part of the problem.

One very big pain point is typically trying to implement a more or less agile project delivery method without implementing automated tests and test automation.

That is why Falko Schmidt and I decided to design a two-day workshop to address this pain point and teach you how to solve it. We have worked together on a number of software development projects for clients like PwC and Helsana, Falko in the role of lead developer or test automation specialist, myself as project coach or project recovery manager.

We both see test automation as an essential part of modern software development and project success and designed this workshop to spread the word and teach what we have learned over the years.

So ...

- Are you a company that wants to release faster and with better quality?
- Are you managing a software development effort where the team does not have the skills to automate testing?
- Are you a developer who would like to improve the quality of your own work?
- Are you tired of fixing the bugs in your colleagues' code?
- Do your test automation efforts not bring the expected benefits?
- Do you want to increase your market value and employability as an all-round software developer?

Then this workshop is for you!

Have a look at www.test-automation.ch to see in detail what we offer and sign up. The next dates for our public workshop in Zurich are May 24/25 and July 5/6.

If you are interested in an in-house workshop for your team, just contact us.
