Measuring the Quality of your Mainframe Builds

Rational Developer for System z (RDz) provides a number of features that accelerate a developer's ability to produce high-quality software.  Rational Team Concert (RTC) includes capabilities specifically for managing the build, promotion, and deployment of mainframe applications.  Combining these products makes the whole greater than the sum of its parts.  In particular, the combination lets you measure the quality of development builds, and thereby the quality of the code developers deliver.

Mainframe dependency builds can include batch Code Review, which runs a software analysis that inspects your code for compliance with rules and best practices.  zUnit testing provides an automated solution for executing and verifying Enterprise COBOL and PL/I unit test cases, and those tests can be included in your dependency builds.  Finally, automated Code Coverage provides tools to measure and report on the test coverage of an application, indicating which source lines were tested and which remain untested.  Each of these capabilities can be included in any dependency build and reported in its results.  Coupled with RTC's ability to let developers request personal builds, that is, to run the team's build against a developer's undelivered code, you can measure the quality of the code both before and after it is shared and integrated with the rest of the team's work.

Here's an example of a personal build of the JKE Banking Mortgage Application, which comes as part of the Money that Matters sample:

[Image: build result for a personal build of the JKE Banking Mortgage Application]

In this example, the developer Deb ran a personal build of the isdz.mortgage.dev.rdt build definition.  The build included execution of ten zUnit tests, five of which failed.  Going to the Tests tab allows us to drill down further.

[Image: Tests tab of the build result showing the zUnit test failures]

These tests were run automatically after the build of the application completed, and the outcomes were reported back to RTC and captured in the build results.  Executing these unit tests uncovered an issue with Deb's changes.  Fortunately, this was discovered before delivering (sharing) her changes with the rest of the team and potentially breaking the team build.  Deb can use RDz's debug capabilities to triage and assess what needs to be corrected.

Teams often establish coding standards or rules for developers to follow in order to improve the maintainability and quality of their code.  Many development environments, RDz among them, provide an automated means of checking these rules.  A good practice is for developers to run this analysis from time to time before finalizing their work.  Even better is to also include the analysis as part of your application build.  Here again, the combination of RDz and RTC supports both practices.

Here's an example of an analysis run against changes Deb just made, which detects a couple of violations of the team's coding rules:

[Image: code review results in RDz showing violations of the team's coding rules]

Just as the zUnit tests were run after the build, so too can the code review be performed automatically in 'batch' mode and the results published with the build.  In the example below, the report.csv and report.xml files, included on the Downloads tab of the build results, capture the automated code review report.

[Image: Downloads tab of the build results listing report.csv and report.xml]

The developer can view the Code Review report by opening either file and then compare those results with the results of the manual run from RDz.  Scrolling to the bottom of the XML file, the developer sees all of the files that have rule violations.  Notice that each of these files has a file id; this mapping is important when viewing the results.

[Image: bottom of report.xml listing the files with rule violations and their file ids]

Navigating higher in the file, there are result entries that show which files and lines have rule violations.  The image below shows that file 4, which maps to JKEMPMT.cbl, has a rule violation on line 160.

[Image: report.xml result entry showing a rule violation on line 160 of file 4 (JKEMPMT.cbl)]

Had I shown the full file, you'd see the section header indicating the specific violation incurred.  The file id and line number, combined with the section header, indicate which violations occur in which files and on which lines.  Of course, it is best to find these before the build, but having the check as part of the build ensures nothing slips by.
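
To make the mapping concrete, here is a rough sketch of the report layout just described; the element and attribute names are illustrative only, not the actual Code Review report schema:

    <!-- Illustrative sketch only; not the actual Code Review schema. -->
    <codeReview>
      <!-- results appear under a section header naming the violated rule -->
      <section rule="...">
        <result fileId="4" line="160"/>
      </section>
      <!-- the file list at the bottom maps each file id to a source member -->
      <files>
        <file id="4" name="JKEMPMT.cbl"/>
      </files>
    </codeReview>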

Next up is a review of the Code Coverage results.  These show the developer what percentage of the application's lines of code were executed when the zUnit tests ran, which serves as a quality metric for how well the unit testing covers the application.  In an actual development environment, there should be a standard stating the percentage of an application's source that unit testing must cover before that source is allowed to be promoted to formal testing.  If the code coverage falls below that threshold, the developer must add zUnit test cases before requesting that the application code be promoted to formal test.
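
A gate like that can even be automated in the build itself.  Here is a minimal Rexx sketch, assuming the module name and its coverage percentage have already been extracted from the coverage results; the 70% threshold is purely illustrative:

    /* REXX -- minimal coverage gate sketch.  Assumes the module name */
    /* and coverage percentage were already pulled from the results;  */
    /* the 70% threshold is illustrative only.                        */
    parse arg module coverage .
    threshold = 70
    if coverage < threshold then do
      say module':' coverage'% is below the' threshold'% standard'
      exit 8                /* nonzero RC fails the post-build step */
    end
    say module':' coverage'% meets the' threshold'% standard'
    exit 0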

To view the code coverage results, the developer returns to the Downloads tab, where there are two code coverage downloads.

[Image: Downloads tab showing the two code coverage downloads]

Download JKEBank_CC_results_<build_id>.tar to see the results for the JKE Banking application that executes in CICS, then extract its contents.  Within the tar file there are several zip files.

[Image: contents of the extracted tar file showing several zip files]

This is because the zUnit tests call the JKE Banking application multiple times, and there is a results file for each test that calls the application.  Extracting a test's results and opening its html/index.html file displays the code coverage results.

[Image: code coverage report opened from html/index.html]

Opening the code coverage results, the developer can see that the zUnit tests covered 64% of the JKEMPMT module and 87% of the JKECSMRT module.

I've shown that a dependency build can automatically analyze adherence to coding standards, execute unit tests, and assess their code coverage.  These results are published with the overall build results and together provide a measure of the quality of the build and of recently delivered code changes.

Developers run personal builds to check the quality of their changes and ensure they won't break the team build.  Since a personal build is really just the team build run against the developer's changes, the team build performs the same measurements.  When builds run regularly as part of a continuous integration or delivery practice, scheduled or otherwise, the team lead can compare results against previous baselines to track whether each measure is degrading (or, better yet, improving).

I've been intentionally sparse on the details of how all of the above was implemented.  In my particular environment, the code review, zUnit tests, and code coverage were run from a Rexx script invoked as a post-build command.
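
To give a flavor of the approach, here is a minimal sketch of such a post-build script; every dataset and member name below is a placeholder, and the actual jobs depend entirely on how code review, zUnit, and code coverage are configured in your environment:

    /* REXX -- sketch of a post-build driver.  Dataset and member   */
    /* names are placeholders; the real jobs depend on how code     */
    /* review, zUnit, and code coverage are set up in your shop.    */
    address TSO
    "SUBMIT 'JKE.BUILD.JCL(CODEREV)'"  /* batch code review          */
    "SUBMIT 'JKE.BUILD.JCL(ZUNIT)'"    /* run the zUnit test cases   */
    "SUBMIT 'JKE.BUILD.JCL(CCREPT)'"   /* collect code coverage data */
    exit 0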

In subsequent posts I plan to dive into each of these measures to show more of the technical details behind setting them up.
