Saturday, July 25, 2015

The 2015 State of Scrum Report

This post is a summary of the salient points from the Scrum Alliance State of Scrum Survey 2015.


In February 2015, Scrum Alliance® surveyed almost 5,000 people about their use of Scrum. The survey respondents make up a diverse group, representing 108 countries and more than 14 industries. They reflect a range of functional areas, including IT software development, product development, operations, human resources, executives, and sales and marketing. Most have a technology slant, with 44% working in software development and 33% in IT. And they’re an Agile-savvy group, involved in an average of 4 Agile projects in the last 12 months.

I. WHO IS PRACTICING SCRUM?
• Scrum practices are currently in place among 82% of respondents, and another 11% are piloting Scrum.
• IT and software development professionals continue to be the primary users of Scrum, followed by product development and operations professionals.

II. WHY ARE THEY PRACTICING SCRUM?

• Nearly half of the respondents (49%) cite fulfilling customer needs as the highest business priority for Scrum projects.
• Meanwhile, the second-highest priority is all about the business — meeting budget, time, and scope constraints.

III. HOW ARE THEY PRACTICING SCRUM?

• The average team size is 7 people.
• Most Scrum teams (60%) follow 2-week sprints.
• 81% hold a team Scrum each day.
• 83% conduct sprint planning prior to each sprint.
• 90% use at least some Scrum artifacts.
• 81% hold retrospective meetings.
• 42% of respondents report using Scrum exclusively.
• Of those using a combination of practices, 63% practice Scrum alongside Waterfall.
• 43% combine Scrum with Kanban.
• 21% combine Scrum with Lean.

IV. IS SCRUM WORKING?
• The overall success rate of projects delivered using Scrum is 62%.
• Teams of the recommended size for Scrum (4 to 9 members) report the most frequent success.

V. ROLE OF CERTIFICATION
• It’s rarely required but commonly recommended.
• 81% of respondents believe certification has helped their Scrum practice.

VI. THE FUTURE OF SCRUM
• 95% of respondents say they plan to continue to use Scrum moving forward.

VII. PRACTICES WIDELY FOLLOWED

• Sprint retrospectives, Done criteria, Continuous Integration/Build, and Refactoring are among the key practices widely followed.


You can download the detailed report from the State of Scrum Report 2015 link.

Saturday, July 18, 2015

Agile Testing practices mapped to Continuous Delivery

In this post, I will attempt to map agile test practices to continuous delivery practices. Many of these practices are related, and if they are applied effectively right from the test strategy stage, they can greatly help in delivering valuable software seamlessly. I am sure readers of this post will agree that test code lives as long as the production code lives, so we must invest in test code and its quality.

What is Continuous Delivery
Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time. It is used in software development to automate and improve the process of software delivery.



Picture Courtesy: Wikipedia
In Continuous Delivery, a deployment pipeline is set up to give developers and testers quick and seamless feedback about the quality of each check-in. The tests are divided into various types such as unit tests, functional tests, and the ‘ilities’ (reliability, security, maintainability, and so on), and they run in different pipeline stages depending on how long they take to execute: the fast suites run early, and the slower ones run in later stages. It is important to design the test cases and scripts so that this staging happens seamlessly whenever code and test code are checked in. Doing this well requires a deep understanding of test engineering practices and adherence to the core agile principles of collaboration, self-organization, technical excellence, and attention to working software.
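One way to make this staging workable, as a rough sketch, is to tag tests by type so that each pipeline stage can select only the suites it needs. The example below uses JUnit 5 tags; the class names, tag names, and the stage-to-tag mapping are assumptions for illustration, not part of any specific pipeline.

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Fast, isolated checks tagged "unit" so the commit stage can run only these.
@Tag("unit")
class DiscountCalculatorTest {

    // Hypothetical production logic, inlined here to keep the sketch self-contained.
    static int applyDiscount(int total, int percent) {
        return total - (total * percent / 100);
    }

    @Test
    void discountIsAppliedToOrderTotal() {
        assertEquals(90, applyDiscount(100, 10));
    }
}

// Slower, broader checks tagged "functional" so a later pipeline stage can pick them up.
// In practice this would live in its own file.
@Tag("functional")
class CheckoutFlowFunctionalTest {

    @Test
    void customerCanCompleteCheckout() {
        // would drive the deployed application end to end against a test environment
    }
}

Each stage of the pipeline can then filter by tag (for example, via Maven Surefire's groups filter or an equivalent in your build tool), so the same test codebase feeds every stage.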




The key test practices important for effective continuous delivery are organized at the Release, Sprint, Story, and Task levels, plus a few general practices. There is definitely some overlap between the practices across the levels. I will try to explain each practice from the perspective of what continuous delivery requires of testing.
Release Level
Test Strategy
Test strategy is like an umbrella practice under which the team thinks about how the tests will be designed, organized, executed, reported, and analyzed.
The test strategy should cover details such as:
- The different types of test cases that will be written, such as unit tests, integration tests, functional tests, and ‘ilities’ tests. Some thinking up front helps greatly in organizing the tests effectively later.
- Automation architecture
- Test suite organization
- Development and test collaboration
Automation Architecture
Automation architecture is the architecture of the overall test automation system: how the various components will be tested. All the principles of a system architecture apply equally to a test automation system. Too often, automation architecture is seen as merely a list of tools or frameworks.
Test Suite Organization
Test suite organization is the practice of deciding how test cases will be grouped and organized. For example, all unit test cases can be grouped under a UT test suite for each feature, and the functional and ‘ilities’ test cases can be grouped similarly.
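As a small sketch of one possible grouping (using the JUnit 5 suite API; the package and suite names are hypothetical), a per-feature unit test suite could look like this:

import org.junit.platform.suite.api.IncludeTags;
import org.junit.platform.suite.api.SelectPackages;
import org.junit.platform.suite.api.Suite;

// Hypothetical suite: all tests in the "payments" feature packages that carry the "unit" tag.
@Suite
@SelectPackages("com.example.payments")
@IncludeTags("unit")
class PaymentsUnitTestSuite {
}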
Operation/Infrastructure Test
Operation/infrastructure tests are all the tests related to the environment, infrastructure, and so on. Many times everything works fine until the system is deployed to production, and once it is deployed some functionality stops working. This happens because testing was not thought through from an operations or infrastructure angle. Practices like test-driven development for infrastructure code can be considered here.
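For example, a simple environment smoke test can make infrastructure assumptions explicit and executable. The sketch below is hypothetical: the environment variable, host name, and port are placeholders for whatever your deployment actually requires.

import java.net.InetSocketAddress;
import java.net.Socket;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertNotNull;

// Hypothetical checks run against a freshly provisioned environment before functional tests start.
@Tag("infrastructure")
class EnvironmentSmokeTest {

    @Test
    void requiredConfigurationIsPresent() {
        // The variable name is an assumption; use whatever your application actually reads.
        assertNotNull(System.getenv("APP_DATABASE_URL"), "database URL must be configured");
    }

    @Test
    void databasePortIsReachable() {
        assertDoesNotThrow(() -> {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("db.example.internal", 5432), 2000);
            }
        });
    }
}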
Testing for ‘ilities’
Testing for ‘ilities’ covers performance, reliability, stress, volume, security, maintainability, and related aspects.
Hardening and ZFR
Hardening, or a zero-feature release (ZFR), is a release or time period in system development in which one or two releases are planned and delivered with no new features added. The testing in this period is intended to verify the stability of the system.
Test Pyramid
The test pyramid is a practice in which the tests form a pyramid, with unit tests at the base, functional tests above them, and ‘ilities’ tests at the top, in decreasing numbers. In other words, the overall test suite should consist of roughly 50-60% unit tests, 20-30% functional tests, and 10-20% ‘ilities’ tests.
Deployment Pipeline Design
Deployment pipeline design is about thinking ahead about what kind of testing will be done in which pipeline stage, and ensuring that the necessary infrastructure, such as build scripting and servers, is made available from this perspective.
Sprint Level
Development and Test Collaboration
The development and test teams should brainstorm together about the scenarios, implementation approach, test design, test organization, and related aspects. Many times teams work in silos, and a lot of anti-patterns get generated because of this. One such anti-pattern is the ‘Dual Test Pyramid’. I will write a separate post on these anti-patterns.
Test Pyramid implementation
Test pyramid implementation is about checking at regular intervals that tests are being written according to the initial strategy, and also fine-tuning the test strategy based on what the implementation reveals.
Deployment Pipeline implementation
Deployment pipeline implementation is about orchestrating the right test suite in the right pipeline stage as per the test strategy, and ensuring that feedback is provided to the right stakeholder at the right time.
Exploratory Testing
Irrespective of how much automation or test design we do, exploratory testing should always be done. It can surface many scenarios that we may not have thought about. These scenarios can then be automated and integrated into the system.
Story Level
Story Level Test
Story-level testing is the testing done for each story, where each type of test case (unit, functional, and so on) is written at the story level.
No false alarms
Many times, when a test fails, we are not sure whether the issue is in the test code or in the production code. Such false alarms should be avoided by following clean test code practices. It is a good idea to refactor the test code if such symptoms are seen.
Task Level
Clean Test Code
Clean test code practices are similar to following coding guidelines: they ensure there are no bad smells in the test code. Test code should be given equal importance because it lives as long as the production code lives.
Dev Testing
Developer testing should focus on unit tests and key functional tests as per the test strategy. The development and test teams should collaborate closely to ensure that whatever is written is organized appropriately and fits into the right deployment pipeline stage.
Local /Private Build
A private build is a practice in which the developer runs the unit tests and static checks before the code is checked in to the configuration library. This is an important discipline: it ensures that tests are always available as a safety net and that no check-in breaks the build.
General
TIER
TIER stands for "Testing Is Everybody's Responsibility." Both developers and testers should take equal responsibility for testing. In many organizations the tester-to-developer ratio is 1:3, but the ratio of test code to production code is almost 1:1. Simple common sense tells us that the test team alone cannot write all the test cases. Everyone should take equal responsibility for testing and for the quality of the testing.

Let me add some examples taken from the best practices of open source projects.

Example 1: Developer Test Suite organization for hadoop-hdfs module

First-level structure:
Second-level structure inside the hdfs folder:
Next-level structure inside the server/datanode folder:
Note:
  1. This structure is by design and not by accident
  2. It is a commonly used good practice to mimic your product code organization in your test code organization (see the sketch below)
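As a sketch of what this mirroring means in practice (the package and class names below are hypothetical, not copied from hadoop-hdfs, though the TestXxx naming is the convention Hadoop follows), the test class sits in the same package as the class it exercises, under the test source tree:

// Production:  src/main/java -> com.example.hdfs.server.datanode.DataNode
// Test:        src/test/java -> com.example.hdfs.server.datanode.TestDataNode
package com.example.hdfs.server.datanode;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Mirroring the product packages keeps it obvious where the tests for any class live,
// and gives tests access to package-private members when needed.
class TestDataNode {

    @Test
    void sitsInTheSamePackageAsTheProductionClass() {
        assertEquals("com.example.hdfs.server.datanode", getClass().getPackageName());
    }
}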

Example 2: Why do we need developer testing if we already have system testing?

Have you heard about the “Zune bug”?
Zune was a portable digital music player from Microsoft. On December 31, 2008, when Zune users across the world were trying to use it for their New Year's Eve parties, it froze. The music players just hung and did not respond.
The issue was a simple code level defect which anyone with basic programming knowledge could have detected. The defect was in the following piece of code:
year = ORIGINYEAR; /* = 1980 */

while (days > 365)
{
   if (IsLeapYear(year))
   {
       if (days > 366)
       {
           days -= 366;
           year += 1;
        }
        /* Missing else here: when days == 366 in a leap year, neither
           days nor year changes, so the loop never terminates. */
    }
   else
   {
       days -= 365;
       year += 1;
   }
}

As you can see, the defect is the missing else block for the "if (days > 366)" condition, which results in an infinite loop when days equals 366 (as it did on December 31, 2008, the last day of a leap year).
This is a defect that system tests may miss. However, a unit test with the input days = 366 can catch it easily. Developer testing ensures that the code we wrote is indeed the code we wanted to write.
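As a rough sketch of such a developer test, assuming the date logic were extracted into a testable Java method (the class and method names here are hypothetical, and the defect is reproduced deliberately), a timeout-protected unit test with days = 366 exposes the hang instead of freezing the build:

import java.time.Duration;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTimeoutPreemptively;

class YearFromDaysTest {

    // Hypothetical port of the Zune date logic, with the same missing-else defect.
    static int yearFromDays(int days) {
        int year = 1980;
        while (days > 365) {
            if (isLeapYear(year)) {
                if (days > 366) {
                    days -= 366;
                    year += 1;
                }
                // missing else: days == 366 loops forever
            } else {
                days -= 365;
                year += 1;
            }
        }
        return year;
    }

    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    @Test
    void lastDayOfLeapYearDoesNotHang() {
        // Fails fast with a timeout instead of hanging when the defect is present.
        assertTimeoutPreemptively(Duration.ofSeconds(1),
                () -> assertEquals(1980, yearFromDays(366)));
    }
}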

Example 3: Clean Test code

We will use the example of a test case from hadoop-hdfs to observe some clean test code practices.

The first candidate for refactoring in test code is often duplicated setup/configuration code. The snippet above shows two good practices for making this cleaner.

A test case that is longer than a few lines is difficult to understand and maintain. The snippet above extracts the lengthier logical segment into a helper method.
Mocking can accelerate your test automation by handling dependencies more predictably, and it can help you create more maintainable tests, as shown in the snippet above.
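Since the original screenshots are not reproduced here, the sketch below illustrates the same three practices with hypothetical names: shared configuration moved into a @BeforeEach method, a longer logical segment extracted into a helper, and a dependency replaced with a Mockito mock.

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class ReportServiceTest {

    // Hypothetical collaborator that would normally hit a remote store.
    interface MetricsStore {
        long totalBytesStored();
    }

    private MetricsStore store;
    private ReportService service;

    // Duplicated configuration from individual tests moved into one setup method.
    @BeforeEach
    void setUp() {
        store = mock(MetricsStore.class);
        service = new ReportService(store);
    }

    @Test
    void reportsUsageInMegabytes() {
        when(store.totalBytesStored()).thenReturn(5L * 1024 * 1024);
        assertEquals("5 MB", usageLine());
    }

    // Lengthier logical segment extracted into a named helper to keep the test short.
    private String usageLine() {
        return service.usageSummary().trim();
    }

    // Minimal class under test, inlined to keep the sketch self-contained.
    static class ReportService {
        private final MetricsStore store;
        ReportService(MetricsStore store) { this.store = store; }
        String usageSummary() {
            return (store.totalBytesStored() / (1024 * 1024)) + " MB";
        }
    }
}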
Good test practice implementation is an important factor for an effective continuous delivery system. Let us treat test code as being as important as the source code. As long as we don't do this, an effective continuous delivery or deployment system may remain only a WISH. In the next posts I will cover some of the anti-patterns I have observed in this area.

Thursday, July 16, 2015

Use the best of Lean Startup, Scrum and XP

This is a short post on using the best practices from Lean Startup, Scrum, and XP.
During product development there are situations where it is not clear what the customer really wants. In such situations, assuming that the problem is clear and developing features iteration by iteration may not be a good idea. Even if we run an iteration as short as 2-3 weeks and then get feedback, it can be a futile exercise: we may build features assuming they solve the customer's problem, only to learn later that the customer does not want what we developed. So we need a methodology that helps us discover and build what the customer really wants. This is where we can use the best of Lean Startup, Scrum, and XP.

Lean Startup
Lean Startup is a method for developing businesses and products. Startups can shorten their product development cycles by adopting a combination of business-hypothesis-driven experimentation, iterative product releases, and validated learning. Here the practices of problem validation, solution validation, and scaling can be used.

Scrum
Scrum is an iterative and incremental agile software development methodology for managing product development. 

Extreme Programming (XP)
Extreme Programming (XP) is a software development methodology intended to improve software quality and responsiveness to changing customer requirements.

Problem and Solution Validation
Problem validation is when we ascertain the real problem the customer is facing. Here we may use simple practices like interviews, in-depth conversations with the customer, and quick wireframing or prototyping to be sure that the product development team has understood the problem correctly. Sometimes a requirement is realized and developed as a product in an iterative manner and that is considered problem validation; in fact, it is not. The moment we start developing the requirement, we are doing solution validation. After problem validation, the next step is to propose to the customer the feature or requirement that, if the product has it, can resolve the problem. Here practices like wireframing, along with Scrum and XP practices, can help in quick feedback and development. The underlying point is that every time we have a hypothesis, we prove it right or wrong through various experiments in the problem and solution validation stages.
Below is a proposed approach which can be followed for the product development where the product can be developed in an iterative fashion using the Scrum, XP and Lean Startup practices.



During problem validation and solution validation, practices from Scrum and XP, along with Lean Startup practices such as the Lean Canvas, wireframing, and a hypothesis-driven approach, can be used. We can use hypotheses to arrive at the Minimum Viable Product and run various experiments to prove or disprove them. This way we can develop the product step by step, show it to the customer iteratively, and take the necessary feedback.
Remember that we should not jump into features or solution development without validating the problem. Lean Startup and the best of Scrum and XP help us do this.

Some references :
User Story Mapping - Jeff Patton with Peter Economy
The Lean Startup - Eric Ries
The Startup Owner's Manual - Steve Blank
Running Lean - Ash Maurya
http://theleanstartup.com/
www.leanstack.com
http://steveblank.com/ 

Architecting for Continuous Delivery

This short article will provide details about the various architecture-specific requirements for a good implementation of continuous delivery...