Alternatives to filtering on lifecycle traceability links in CLM 6.0 configuration management enabled projects

Today I’d like to continue on the theme started in Finding suspect links and requirements to reconcile in Collaborative Lifecycle Management (CLM) 6.0 configuration management enabled projects by addressing another consideration from Enabling configuration management in CLM 6.0 applications.

Do you need to filter views based on lifecycle traceability links?

In CLM 6.0, for projects with configuration management enabled, you can view artifacts with their lifecycle traceability links, but you cannot filter by those relationships.  There are three key areas this impacts:

  • Limit by lifecycle status in DOORS Next Generation (DNG)
  • Traceability views for RQM test artifacts
  • Linking RTC plans to artifact collections

I’ll explore some alternative workarounds to these in this blog.

Limit by lifecycle status in DOORS Next Generation (DNG)

In CLM 5.x, views of requirements artifacts can be filtered by the status of lifecycle artifacts linked to them.  The same is true in CLM 6.0 but only for projects that don’t have configuration management enabled.  This limitation should be addressed in a future release by work item 97071.   Below is an example showing all feature requirements with failed test case runs.

image

Similarly, the following shows all feature requirements whose linked development items have been resolved.

image

In CLM 6.0, for configuration management enabled projects, the lifecycle status filter option doesn’t even appear.

image

It is still possible to show a view of requirements with their lifecycle traceability links shown; it’s only the filtering that isn’t possible (at present).

image

Here you could scroll through the Validated By column, for instance, and manually scan for test cases whose icon indicates a failure occurred.  This wouldn’t be viable for more than a short list of requirements.

What’s needed then is to display a view/query of the linked artifacts, filtered appropriately, and display, if possible, their linked requirements.

For example, show all failed test case runs in Rational Quality Manager (RQM) and their associated requirements.  When displaying all test cases, you can see visually, by their associated icons, whether each test case has been run successfully or not.  This isn’t ideal given you aren’t able to filter by test case status and must instead visually pick out the failed runs.  It does, however, show the linked requirement being validated.

image

Alternatively, show a view of test case results and filter by their status.  Below is a list of test case results that are Failed or Blocked and their associated test case.

image

Unfortunately, it doesn’t also show the related requirement.  Instead, you would need to drill down into the test case from the failed run to see its linked requirements.

Use of the Jazz Reporting Service Report Builder may be an option in limited cases.  First, its use for configuration management aware reporting is a Technology Preview in 6.0 and it only really contains configuration data for RQM.  For DNG, only the default configuration data is included for configuration management enabled DNG projects.  If your requirements configuration management needs are basic, where a single configuration/stream is sufficient, this may be an option.

For example, the following report shows all requirements with failed/blocked test case runs.

image

Now you’ll likely have multiple DNG projects.  DNG doesn’t publish project area data to the Lifecycle Query Engine (LQE) using the Configurations data source, so you can’t choose only the requirements artifacts from a given project, limit scope by that project, or set a condition to query by some DNG project area attribute.  You can, however, choose test artifacts for a given project area (and configuration), so if there’s a 1:1 relationship between DNG and RQM projects, you can produce a report that shows just the requirements from failed test case runs in the desired RQM project belonging to the desired global configuration (this is what is shown in the previous screenshot).

I tried looking at this from the opposite direction, that is, show all failed/blocked test case runs and their associated requirements.  You get the right list of failed runs, but it shows all their associated requirements, not all of which were tested and failed in that run.

image

For the other example I gave earlier, showing all requirements whose linked development items were resolved, you could go to Rational Team Concert (RTC) and run a lifecycle query such as Plan Items implementing Requirements, but you’d need to visually look for plan items whose status met your criteria, as the query isn’t editable and thus you can’t add a filter.

image

Traceability views for RQM test artifacts

In CLM 5.x, views of test artifacts can be filtered by the presence (or not) of a linked development item.

image

The same is true in CLM 6.0 but only for projects that don’t have configuration management enabled.  RQM projects that have configuration management enabled don’t include that filter option.  This limitation should be addressed in a future release by work item 134672.

image

From this view, you would then need to visually scan the Test Development Item column for whichever condition you needed.

RTC has some lifecycle queries, such as Plan Items with no Test Case or Plan Items with failing Tests that could help.

image

image

Here again, Report Builder could help as you could construct a report that shows test cases with or without associated work items.  For example, the report below shows test cases without any work item associated.

image

Linking RTC plans to artifact collections

In CLM 5.x, it is possible to link an RTC plan to a DNG requirements collection and/or an RQM test plan.  Use of these grouped artifacts allows for shared scope and constraints across these lifecycle plans and is useful in auto filling gaps in plans or reconciling changes.

image

In CLM 6.0, links to collections and test plans from an RTC plan only resolve to the default configuration in the project they reside in.  In other words, you cannot link an RTC plan to a versioned requirements collection or test plan.  This limitation should be addressed in a future release by work item 355613.  In practical terms, this means you are unable to auto-generate work items for the requirements in a collection when working with projects that have configuration management enabled.

image

Without that capability, the only workaround is to manually create the work items so that every requirement in the collection is linked to a corresponding work item.

A traceability plan view in RTC that includes a color filter will help identify those plan items without requirements links.

image

Such a view will highlight cases where a work item may need to be removed as the scope has changed, e.g. the collection has had requirements removed.

In DNG, view the collection with the Implemented By column included and scan for requirements with no corresponding work item.

image

If your requirements set is too large to view manually, export the collection view to a CSV file, then open the exported file and filter or sort by the Implemented By column to more easily see those requirements without work items.

image
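To avoid even that manual filtering step, a minimal Python sketch of the same check is below.  The file name is a placeholder and the column headers (Implemented By, id, Name) are assumptions about the export layout, so adjust them to match your actual CSV:

import csv

# Placeholder file name; use whatever name you gave the exported collection view.
with open("collection-view.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Keep only the rows whose Implemented By column is empty (no linked work item).
unlinked = [r for r in rows if not (r.get("Implemented By") or "").strip()]

for r in unlinked:
    # "id" and "Name" are assumed column names; change them to match the export.
    print(r.get("id", "?"), r.get("Name", ""))

print(len(unlinked), "requirement(s) without a corresponding work item")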

Conclusion

Of the limitations discussed, I find the first one, the inability to filter by lifecycle status, will be the most problematic for customers, though I’ve found its usage to be mixed.  I’m also not particularly enamored with the workarounds described because they too are limited and involve some manual steps.  I would be interested in hearing how significant these limitations are in your environment or if you have additional ideas on workarounds for them.


Help! My RTC Database is getting big!

Many customers who have been using RTC for some time are seeing their database size grow, in some cases to 100s of GBs if not TBs of data.  While growth is to be expected and poses no issue for modern DBMSes, proactive administrators want to understand the growth and how it can be mitigated, especially in light of anticipated user growth in their environments.  Naturally, they come to us and ask what can be done.  While at this time we don’t have the solutions many customers are asking for (project copy, move, archive, delete, etc.), that isn’t to say we don’t have approaches that may be of value in some situations.

Working with our own self-hosted RTC environment as well as those of our enterprise customers, we generally find that the largest contributors to database size are build results, work item attachments and versioned content.  How would you know that?  Fortunately, we have a couple of out of the box reports you can run: Latest Repository Metrics and Latest Repository Metrics by Namespace.  Below are some samples showing a subset of the available namespaces and item types.

image

image

Looking at all the namespaces and item types raises the question: what do they all mean?  Yeah, they aren’t all obvious to me either.  Luckily, I have access to the smart developers who wrote this stuff and can tell me.  If you find one you don’t know, send me a note/comment or post it on the jazz.net forums.

Once you find the larger contributors to size, the next questions asked are: can they be deleted, and who (that is, which project) is producing them?  In keeping with my team’s No Nonsense Tech Talk theme, I’ll be honest: there’s not much we can delete/archive, we certainly can’t do it at a project level (which would be of greater value), nor can we easily tell who produced it all.  It’s not all doom and gloom, though, because there are some things we can do.

As mentioned earlier, we can delete build results, which often are a huge contributor to size growth.  We can delete work items, even attachments from work items.  Versioned content can be deleted, though you don’t usually want to do that, except for security reasons or to remove binaries versioned by mistake.  Then there are plans, streams, workspaces, etc., that can be deleted, but these don’t tend to take up much space.

So what happens when something is deleted?  Well, in some cases, it’s not really removed from the database; only the reference to it is removed or made less accessible.  For example, work item attachments don’t really go away when removed from a work item.  Try this.  Add an attachment to a work item then save the work item.  Hover over the attachment link and copy it.  Now remove the attachment from the work item then save the work item.  In the browser, paste the copied attachment URL and it will still be found.  Similarly, if you delete a work item that has attachments, the attachment links still remain valid.  However, if you delete (not remove) a work item from the Eclipse client, the work item is actually deleted.

If you find that you’ve removed but not deleted an attachment, it is possible to go back and have it truly deleted.  To do so, using the Eclipse client, paste the URL to the attachment (which should be visible in a discussion comment from when it was first attached) somewhere into the work item (into a comment or the description), right-click over that link and select “add to favorites”.  Once it is in the favorites, you can drag it from Favorites and drop it onto the Attachments section, which re-attaches it to the work item, at which point you can delete it in the normal way.

Some things, like build results and versioned content, once deleted, can truly be removed from the database.

At the repository level there are two key background tasks: one marks content as deletable and the other later deletes it.

  • an “Item Cleanup Task” background task at repository level marks newly orphaned content blobs as needing deletion (runs every 17 mins by default)
  • a “Deleted Items Scrub Task” background task at repository level deletes any content blobs that have been orphaned for more than ~2 hours (runs every 24 hours by default)

Once these both run, any content blobs that were deleted more than 2 hours ago should be fully deleted from the database.

However, DBMSes (particularly those in production) don’t generally release storage allocated for their tables immediately.  A compaction task usually needs to be run to reclaim the disk space.  The DBMS should have tools to indicate in advance how much space can be reclaimed by compaction.  Typical utilities to do this are shown below.  Details for using them should be left to a qualified DBA.

  • Oracle – ALTER TABLE … SHRINK SPACE
  • DB2 – REORG
  • SQL Server – SHRINKDATABASE

My teammate Takehiko Amano has done a very nice job of showing how deleting versioned content and later running DB2 commands reduces database size.  See his article Reducing the size of the Rational Team Concert repository database.

We find the build results often take up a good amount of the RTC database size.  These results often include all the log files from compilations, tests and other activities performed during the build.  Sometimes they will contain downloadable outputs, e.g. application archives and executables.  What happens is these results are often kept around and never deleted.  In some cases, especially for release milestones, they should be kept, but all those personal or interim continuous builds don’t need to be.  Deleting build results causes their database content to be orphaned and subsequently deleted per the aforementioned process.  Rather than manually deleting results, consider setting up a pruning policy to automatically delete old results.  For those results you want to keep around and not be pruned, just mark them as not deletable.

In cases where you know your build results are taking up a lot of space, the natural follow-on question is which builds are responsible and who owns them.  Our development team recently had cause to address that question, which resulted in a very useful script written by Nick Edgar, the RTC Build Component Lead.

Nick created a Groovy script that collects into a CSV file the pruning policy settings and footprint data for each build in all projects across a repository.

image

He further created an interactive HTML report that parses the CSV file for display in a more visual form.

image

With this information you can find out which builds from which projects are taking up the most space and whether they have a pruning policy in place.  Armed with that, an administrator could go to the appropriate release teams and take action.  Imagine running it weekly and posting the results to a dashboard or emailing them to the release teams.  The Groovy script to collect the data and the index.html to render the report are attached to task work item 330478.

For gathering the CSV data you’ll need to 1) install Groovy, 2) install a build system toolkit that’s compatible with (ideally at the same version as) the RTC server, 3) set the environment variables (see the top of the Groovy script), and 4) run the script with: groovy -cp "$BUILD_TOOLKIT/*" <groovy file name> <any arguments needed by script>.
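Once you have the CSV, you don’t have to wait for the HTML report to get a first read on it.  A short Python sketch like the one below could list the largest builds; the file name and column headers are my assumptions, so check them against the header row of the CSV the Groovy script actually writes:

import csv

# Placeholder file name and column names; align them with the real CSV header.
with open("build-footprints.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

def size_bytes(row):
    try:
        return int(row.get("footprintBytes", 0))
    except (TypeError, ValueError):
        return 0

# Print the ten largest builds along with the assumed pruning-policy column.
for row in sorted(rows, key=size_bytes, reverse=True)[:10]:
    gb = size_bytes(row) / (1024 ** 3)
    print(f'{row.get("projectArea", "?")} / {row.get("buildDefinition", "?")}: '
          f'{gb:.1f} GB, pruning={row.get("pruningEnabled", "?")}')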

For the chart, just put the chart index.html file and CSV in the same directory and open the HTML file. Some browsers will require these to be served up by a web server to allow the HTML file to read the CSV file. For my testing, I used Python’s simple server support for this: python -m SimpleHTTPServer.
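Note that python -m SimpleHTTPServer is the Python 2 form of that command; on Python 3 the equivalent is python -m http.server.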

Given I am referencing code samples, I’ll keep our lawyers happy by stating that any code referenced is derived from examples on Jazz.net as well as the RTC SDK. The usage of such code is governed by this license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

Being able to monitor the size and growth of your data, to get granular and actionable information about it and, ultimately, to do something positive about that growth is a key concern for IBM Rational and something we are exploring as part of our Platinum Initiative.  I welcome your input in this area.  Perhaps we can interact at IBM InterConnect 2015.

Tracking the status of your builds and getting their results

Our team was asked recently how we go about getting access to the CLM software downloads.  The primary answers were, of course, the jazz.net product downloads, while others use a ready-made virtual or cloud image.  I tend to support customers who are at different version levels.  I also tend to want to run different versions of CLM natively.  Sometimes I get the download from jazz.net, which is really the official channel (not to mention the IBM Passport Advantage site).  However, the typical download will involve use of IBM Installation Manager, which works great, but when you want to install multiple versions, you end up with multiple similarly named program groups.  I choose a method which allows me to install from a zip archive.

The point of this blog entry is not to advocate my method of getting the download.  In fact I wouldn’t recommend it for any production deployment.  What I want to do is use my method to illustrate some cool features of RTC that you can apply in your environment, not for getting our downloads but to allow others in your organization access to those you produce and to monitor progress along the way.

As I said, I like to use the zip archive installs.  This works for me since I don’t need an enterprise install and can typically get away with Derby and Tomcat.  The jazz.net downloads have zip installs for the RTC client, Jazz Build Engine and a combined JTS/CCM but not a full CLM install (JTS, CCM, RQM, RDNG).  If I want the zip install of CLM, I have to go elsewhere.

So let’s say I want the GA version of CLM 5.0 along with the RTC Eclipse client and Jazz Build Engine.  I will end up using combinations of the following capabilities:

  • Build Queue
  • Build Result
  • Track Build Item

Note once again, this scenario is for illustrative purposes only to highlight those capabilities and not promoting it as how you should go about getting your Jazz downloads.

Firstly, I know that the Jazz CLM development team has a Jazz build definition to produce one of its releases.  Using the Team Concert Web UI, I can browse the Jazz Collaborative ALM project area to see the list of build definitions for the project and in particular, the Jazz Collaborative ALM Development team area.

image

I select the build definition for the build producing the bits for the release I am interested in; in this case, calm.30-50 (aka calm.jcb) for the version 5.0 release I want.

Note the list of most recent build results for the calm.30-50 (aka calm.jcb) build.  See the Tags column.  RTC allows build results to be tagged.  The development team uses tags for different purposes (e.g. passing a certain test stage), but one is to denote which build result is declared final for a particular milestone or release.  Here I can see that build result CALM-I20140509-1731 completed successfully, is green, is tagged 5.0-ga and thus produced the final bits for the 5.0 generally available (GA) release of CLM.

image

I select the result link to navigate to the CALM-I20140509-1731 build result.  Of interest to me here are the Downloads and External Links tabs. 

image

The items on the Downloads tab represent artifacts produced by the build.  These can be published to the build result using the artifactLinkPublisher Ant task, available via the Jazz Build Toolkit.  In this scenario, links to the various platform CLM Installation Manager installers and full CLM zip archive install are published with the build result.  I want a 64-bit Linux install so I will select the Workbench-for-CLM-Linux64-Redhat_5.0.0.0-CALM-I20140509-1731.zip link.

image

Once downloaded, I can expand the archive, navigate to the server folder and start the Jazz Team Server and go through the setup and import the appropriate licenses.

This only gets me the CLM installation.  I also want to get the corresponding RTC Eclipse Client and Jazz Build Engine.  Since this is a GA release, I can get these from the typical jazz.net downloads page, but to further my illustration, I will use the External Links tab.

On this tab, I see the list of contributing builds from other projects.  I am interested in the contributing build from the Team Concert project area, that is, RTC – rtc.jcb RTC-I20140509-1417.  Note these links can be produced using the linkPublisher Ant task.

image

Navigating to the result of the contributing RTC build, I see the build is tagged for 5.0 and has its own Downloads and External Links tabs.

image

For my purposes, I am interested only in the Downloads tab so I can find the download link for the zip install version of the Build System Toolkit and Eclipse Client.

image

Once downloaded, I can expand and run. 

One last feature, which is very useful if I am trying to get access to the milestone or release bits early or just to be aware of how we are progressing at declaring its final build, is the Track Build Item work item type. 

The development team uses these work items to collaborate across the teams and disciplines on the builds being produced, their contributors, schedule, test status, blockers found and finally, which will be designated as final/green for the milestone/release.  For example, the Track Build Item 324131 was used to track the CLM 5.0.1 RC1 milestone release. 

The description section tells me a lot of useful information about the release and its primary build and contributors. 

image

Looking through the discussion comments, I see a great deal of collaboration leading to green declaration of the final build.

A build is ready for test:

image

A problem occurred with the build:

image

A build may be ready to be declared final but the team needs to coordinate on some approvals.

image

The team is declaring the latest build as green and ready to move on to the next release.

image

With all this useful information available, should I want to stay current on the release and pick and choose which builds to get early access to, I would subscribe to the work item and monitor the comments.

In conclusion then, by browsing the build queues to find a build result that has been tagged, then using the result’s Downloads and External Links, coupled with the Track Build Item work item, development teams have a powerful way to monitor and collaborate on builds throughout the release cycle and obtain their results.

Automating changes to a build workspace configuration

Here’s the scenario.  A build engineer gets a request to establish a build workspace and corresponding build definition.  He creates the build workspace, gets the desired components added to it, sets the flow targets, then changes ownership of the workspace to the build user’s functional ID.  After that, he creates the build definition and associates it with the newly created workspace.  All is good and developers can begin to use the new build definition.  After a time, a request comes in to change the configuration of the build workspace, specifically the components to use.  Since the build workspace is owned by the build user, any changes to it must be made by that user.  Many organizations eschew the use of functional IDs, or at least minimize who knows their credentials, and are concerned about the extra maintenance brought on by password expiration rules.  What to do?

This specific scenario came up recently with a customer of mine.  In particular, their build workspaces have three flow targets.  The current/default is the stream with the application source and it is scoped to include a subset of the components in the stream.  The other two streams are build script and configuration related.  At times, the release team needs to change the components included in the scope of the application source stream.  They do so currently by logging in as the build user, something they detest doing.

What they would much prefer is support for team ownership of repository workspaces, but that isn’t currently possible (though requested via 271760).  We instead proposed a solution that puts the workspace configuration change in the build script, which already runs as the build user and owner of the workspace.

As of 4.0.1, the SCM CLI includes the capability to add/change workspace flow targets.  It was later refactored in 4.0.6 to the current verb-oriented form.

${scmExe} --non-interactive set flowtarget -r ${repositoryAddress} -u ${userId} -P ${password} ${team.scm.workspaceUUID} ${targetStreamUUID} --flow-components ${componentsToLoad}

Where:

  • scmExe – path to scm CLI executable
  • repositoryAddress – URL of CCM server
  • userId – build user ID
  • password – password of build user
  • team.scm.workspaceUUID – build repository workspace UUID
  • targetStreamUUID – stream flow target to set component scope
  • componentsToLoad – space delimited list of components

Unfortunately the SCM CLI does not understand the password file format used by the Ant tasks.  You either need to give the password as plain text using -P as shown above, or log in outside the build system with the option to remain logged in (on each build machine).

A simple way to get the UUID of a workspace or stream is to open it in the Eclipse editor and select Copy URL from the workspace editor menu in the view header, or browse to it in the Web UI to get the URL.  The end portion of the URL is the UUID.  For example, the UUID for the repository workspace URL shown below is _GVqXYLRpEeOdavKqgVc36Q

https://clm.jkebanking.net:9443/ccm/resource/itemOid/com.ibm.team.scm.Workspace/_GVqXYLRpEeOdavKqgVc36Q
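If you happen to be scripting this step, the UUID is just the last path segment of that URL; a trivial Python example:

url = "https://clm.jkebanking.net:9443/ccm/resource/itemOid/com.ibm.team.scm.Workspace/_GVqXYLRpEeOdavKqgVc36Q"
uuid = url.rstrip("/").rsplit("/", 1)[-1]
print(uuid)  # _GVqXYLRpEeOdavKqgVc36Q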

For components with spaces in their names, care must be taken to enclose them in appropriate quotes.  For my tests, single quotes worked on Linux and double quotes on Windows.

The command will set the scope of the specified flow target (e.g. application development stream) for the specified workspace (e.g. build workspace) to include only those components in the specified list.  The current configuration of the component(s) in the target stream is used.

Note that if multiple flow targets exist for the workspace and a listed component is included in multiple flow targets, then scoped flows need to be used to avoid conflicts.  See How should my source code be loaded from Jazz SCM?
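Later in this post the command is dropped into an Ant exec statement.  If you would rather drive it from a standalone script, a minimal Python sketch might look like the following; every value shown (the scm executable path, repository URL, credentials, UUIDs and component names) is a placeholder for illustration only:

import subprocess

# Placeholder values; substitute your own environment's settings.
scm_exe = "/opt/buildtoolkit/scmtools/eclipse/scm"
repository = "https://clm.example.com:9443/ccm"
user, password = "builduser", "secret"          # a real setup should avoid plain-text passwords
workspace_uuid = "_workspaceUUID"
target_stream_uuid = "_streamUUID"
components = ["Banking Logic", "Java UI", "Prerequisites", "Database"]

cmd = [scm_exe, "--non-interactive", "set", "flowtarget",
       "-r", repository, "-u", user, "-P", password,
       workspace_uuid, target_stream_uuid,
       "--flow-components", *components]

# Passing each component as its own argument sidesteps the shell quoting
# concerns mentioned above for component names containing spaces.
subprocess.run(cmd, check=True)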

Let’s take a look at an example.  Below, the build.brm.continuous repository workspace, owned by the build user, has six components and three flow targets in its configuration.  Banking Logic, Database, Java UI and Prerequisites are from the BRM Stream, Build comes from Build Scripts, and Build Config comes from the stream of the same name.

image

The BRM Stream is scoped to include only a subset of the components.

image

Assume that we wanted to add the Database component to the scope.  Using an SCM CLI command similar to that shown in the screenshot below, the scope can be changed to add it.

image

This results in the flow target scope being changed.

image

Now to add this to the build script so it can be automated.  Observe in the editor screenshot below that the command has been added to an exec statement in an Ant build script.

image

The targetStreamUUID and componentsToLoad values need to be passed to the build script.  Add these as properties of the build definition.  For example:

image

When the build is requested, the componentsToLoad value can be changed.  In our original example, the Database component can be added in by editing the componentsToLoad build property at the time of the build request.

image

Should you be concerned that adding the ‘set flowtarget’ command to the build script will add unnecessary overhead, albeit very minimal, to every build execution even when the component list isn’t changed, you can create a build script and definition that only performs the ‘set flowtarget’ command and run it when needed.

The example shown was for changing the flow target component scope.  The technique used can be applied for other build workspace manipulations needed and supported by the SCM CLI.

Thanks to Nick Edgar, RTC Build component lead, for making me aware of the ‘set flowtarget’ command and suggesting its application to the problem described.

Do I need that other Jazz server?

I have been working with a team trying to determine if they need a second Change and Configuration Management (CCM) server to support their Rational Team Concert (RTC) deployment.  The performance of the existing CCM isn’t bad.  While the CPU utilization is low, the memory consumption is peaking a little high, so some monitoring would be helpful to understand the usage patterns more.

The main motivation for considering a second CCM server is the expected growth to occur this year.  They know their current average and peak license usage, current count of licensed/registered users and how many new registered users they expect to add.  They also know the recommended range of users for a CCM based on IBM’s testing and analysis as shown in CLM Sizing Strategy.

The slight disconnect is that the user ranges we recommend are based on concurrent users, not how many registered users there are or how many on average have a license in use (checked out).  Unfortunately, as yet, we don’t have a good way to measure the actual number of concurrent users.

So what do we do?  I see two ways to make this estimate.

First, the CLM Sizing Strategy states:

The number of concurrent users can often be estimated from the number of registered users. In an organization located on one continent or spanning just a few time zones, the number of concurrent users can be estimated to be between 20% and 25% of the total number of registered users. In an organization which spans the globe where registered users work in many different time zones, the number of concurrent users can be estimated to be between 10% and 15% of the total number of registered users.

What you can then do is take a percentage of the current plus expected registered users, e.g. 20-25% above when spanning few time zones, and compare that to the RTC ranges based on workload.  That will help you determine if another CCM (or more) is needed.  The percentage you apply may be different based on your experience.  In fact, in my customer case, a large number of users were registered and licensed in anticipation of them being migrated to RTC.  Applying that percentage today would give them a higher number of estimated concurrent users than was being experienced.

Another method would be to look at the average/peak number of licenses in use for RTC versus those that are licensed.  From this ratio, assuming linear growth, you can project how many licenses would be in use in the future.  Again, you would then compare that to the recommended ranges.  This method should be a little more accurate than the anecdotal, gut-feel percentage of registered users in the first method.  We at least know that the number of concurrent users isn’t any more than the number of licenses in use.  Based on what you know of the users to be added, linear growth may not be quite right and you may need to adjust the projection (e.g. the users being added are expected to be heavier users of RTC).
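To make the arithmetic concrete, here is a small Python sketch of both estimates.  The 20-25% range comes from the CLM Sizing Strategy excerpt above; every other number is made up purely for illustration:

# Method 1: estimate concurrency as a percentage of registered users (few time zones).
registered = 1500 + 500          # current registered users plus expected additions
print("Concurrent estimate:", int(registered * 0.20), "to", int(registered * 0.25))

# Method 2: project peak license usage forward from today's in-use ratio,
# assuming roughly linear growth in how the new users will behave.
licensed_now, peak_in_use_now = 1200, 300
ratio = peak_in_use_now / licensed_now           # 0.25 in this made-up case
licensed_future = licensed_now + 400             # expected additional licensed users
print("Projected peak licenses in use:", int(licensed_future * ratio))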

This latter method is what we used.  It seemed more accurate plus, as mentioned, the number of licensed users included many who weren’t onboarded yet.

Now the CLM Sizing Strategy is based on the workloads we generated on a given set of hardware.  Your workload and hardware may be such that you can handle more users (or maybe fewer).  I’ve had customers report that they were able to support more users than our recommended ranges.

This then shows that all the numbers and percentages need to be balanced against real experience with the environment, not only the license usage data but other things like CPU utilization, JVM size, database size, etc.  Monitor your current environment and perform trend analysis to assess if the server is nearing capacity.  Look at Monitoring: Where to Start? and Monitoring and Troubleshooting Guide for IBM Rational Collaborative Lifecycle Management (CLM).

Note that using the built-in Jazz server license usage reports or the Rational License Key Server Administration and Reporting Tool, one can get an idea of license usage trends.  For example, the reporting tool will show license usage by product.  The recent release now includes support for authorized licenses.  Support for showing reports by role is being considered for a future release.

image

As mentioned in the beginning, in this customer’s case, they weren’t experiencing any performance issues with the server.  Their CPU utilization looked fine.  There were memory usage peaks hitting 80% – 90% of the total heap, so I suggested monitoring this and increasing the heap as needed.

In the end, though the straight numbers and percentages suggested additional servers would be needed, this customer didn’t have good monitoring data, wasn’t experiencing performance issues and seemed to have capacity to support more users.  My recommendation was to hold off on a topology decision: put some monitoring in place, adjust the server configuration as needed/able (memory, JVM, CPU number and speed), and assess the server’s performance and capacity against the users yet to be added.  If capacity and performance can’t be improved by adjusting the server configuration and it looks like the growth will be more than the server can handle, then add the additional CCM.

We didn’t want to take the addition of the second CCM lightly because doing so would incur another set of issues, namely the behavioral differences between a single CCM environment and a multiple CCM environment that I’ve blogged about previously and documented on the deployment wiki.  See Planning for Multiple Jazz Application Server Instances.

When assessing whether a multiple CCM topology was needed, one consideration we made was whether to mitigate some of the primary behavioral differences of concern to the customer by making one CCM the hub for all work items and planning, with the other CCMs handling source control and build only.  There are tradeoffs for sure, but that’s another blog discussion for another time.

Cross Project Planning with Rational Team Concert – the good, the bad and the ugly

Creativity is not one of my strong suits, so coming up with catchy blog titles is a challenge for me.  I could incorporate some Clint Eastwood references throughout but, as I said, creativity is in short supply.  What I want to convey in this entry is more than just the concepts of the Cross Project Planning capability of Rational Team Concert (RTC); I also want to get into some of its good parts as well as some of its bad/ugly ones.

I recently gave a talk at an enablement event for some of our field technical resources. My goal was to give them the ‘skinny’ on some of its fundamentals but also its limitations, uses and best practices.  What follows is a summary of what I presented.

I’ll say from the outset that I had high hopes for Cross Project Planning when it first came out in v4.0.  I remember working with a customer in Australia, just after the 4.0 release came out, who was a perfect fit for Cross Project Planning.  I naively made some assumptions about what you would be able to do with these plans.  After proceeding down the path to position use of these plans by that customer, I soon ran into one of its glaring omissions – they don’t roll up progress across all the projects/plans they track.  Despite that missing feature, I do find good uses for Cross Project Plans, which I’ll note below.

Cross Project Plan Fundamentals

Oftentimes development projects involve multiple teams whose progress and completion must be tracked and visible at a comprehensive level.  The Rational CLM project is like this.  High level plan items for CLM are often carried out by two or more of the CLM application teams: RRC, RTC, RQM, RRDI, etc.  This program-of-projects approach is quite common today.  Rather than being limited to a single team/iteration, starting in RTC 4.0, plans support a ‘tracks’ link type for work items, enabling plans to track high-level items that belong to different projects.

Components of a Cross Project Plan

Top level project area tracking effort by one or more other projects

  • Contains one or more Cross Project Plans
  • Work items to ‘track’ the development/execution of work done by other project teams

One project area for each project team contributing to the larger effort

  • Contains Plan level work items and their execution level work items
  • Contains one or more Iteration plans with at least one planned snapshot that are linked to the top level project’s Cross Project Plan

Example:  The Diagnostic Tool Project Plan tracks work carried out in the Web UI, Repository and Hardware projects that contribute to a larger, perhaps cross-project, effort.

image

Setting up a Cross Project Plan

  1. Create Project Areas for the top level project and each of the project teams contributing to the overall effort managed by the top level project
  2. Configure associations between the projects.
  3. Create plan and execution level work items for each of the project teams in their individual project areas
  4. Create Iteration plans, e.g. Sprint Backlog for each of the project teams for the iteration to be tracked/rolled up
  5. Take a planned snapshot for each of the iteration plans
  6. Create a Cross Project Plan
    • A cross-project plan shows all items that belong to it locally (matching the plan query) and the items that are tracked by them.
  7. Create tracking work items in the top level project and use the ‘tracks’ relationship to link them to the project team area plan items that will contribute to/implement them

Using a Cross Project Plan

Once the Cross Project Plan is in place, you can view:

  • Schedule roll up of the entire plan
  • Schedule roll up of each contributing plan item (based on snapshot)
  • Open schedule to the plan containing the tracked work items

image

Note that if no planned snapshot exists, the iteration start/end dates are used.

The default view is Project Schedule.  Create custom views to show:

  • Owned By to show resource assignments
  • Link Types (e.g. Tracks, Children) to show other relationships of interest
  • Status to view progress across the projects at a lifecycle state view
  • Change Tree configuration to show other valid link types and change the depth of links followed

image

Assess the health of a Cross Project Plan by adding the Cross-Project Planning Problems Check plan check.

  • The rolled-up schedule for an item on the plan exceeds the end date of the iteration that the cross project plan is associated with.
  • The “planned for” date of a plan item exceeds the cross project plan’s iteration end date.
  • The rolled-up schedule for an item on the plan exceeds the due date specified on the item.

image

Limitations of Cross Project Plans

What I find customers want from a Cross Project Plan, perhaps planning in general, are schedule consolidation, progress roll-up, common resource allocation and a broad view of status across projects.

Cross Project Plans do roll up schedules based on the planned snapshots of the tracked plans.  Unfortunately, progress isn’t rolled up, though it can be obtained via reporting.  Resource allocation is limited and only available with Traditional Planning, though Cross Project Plans can give a visualization of all resources assigned across all projects.  As for the broad view of project/plan status, Cross Project Plans can give a visualization of the status of each plan level item and those development items contributing to it across all projects.

Below is an itemized list of limitations I’ve collected.

  1. Plans roll up schedule but not progress
  2. Cross Project Plans limit what attributes from work items in other CCMs can be included in their plan views as the plan snapshots used include limited data
  3. Schedule roll up is only based on snapshot and not live data
  4. Plan snapshots do not save work items included by the tracks link
  5. Some performance issues when loading a plan with a tree view and pulling data across CCMs associated with different JTSes
  6. Cross project plan checks look for violations of all (currently 3) related rules.
    • No ability to turn off those that may not be of interest
    • Also, these are only visible in a plan view; would be nice to highlight these specifically on a dashboard widget. Plan View dashboard widget can render a plan view on the dashboard but won’t show the plan check errors (also, this widget can be slow)
  7. Cross-project linking supports the use of any OSLC-based link type supported in RTC, in addition to the Tracks link type specifically designated for cross-project planning.
    • Cross-Project Link Types in RTC:
      • Tracks – com.ibm.team.workitem.linktype.tracksworkitem
      • Contributes To – com.ibm.team.workitem.linktype.trackedworkitem
    • List of OSLC work item link types
      • Related Change Management – com.ibm.team.workitem.linktype.relatedChangeManagement
      • Affects Plan Item – com.ibm.team.workitem.linktype.cm.affectsPlanItem
      • Affected By Defect – com.ibm.team.workitem.linktype.cm.affectedByDefect
    • Other link types can be specified in Cross-Project Plans to create the tree view, but you may not see all elements in the tree, depending upon where the target Project Area lives. In this case, target work items that exist in Project Areas on other Jazz Team Servers will not be displayed, leading to insufficient information in your plan view.

Cross Project Planning Best Practices

  1. Programs of Projects
    • Track Business driven demands/changes/features in a Program level project
      • include both Roadmap and Release plans (cross project)
    • Track Team level execution/delivery on those business functions in one or more projects
      • Include Release and Sprint/Iteration plans
  2. Ensure that the plans which are being tracked have up-to-date “Planned” snapshots.
  3. Normalize unit of measurement for effort between projects, even if following different process
  4. A best-practice for what to use as the tracking work item types has not yet been codified.
    • use whatever makes best sense in the context of the types of items you will be rolling up
  5. Work items in the Cross Project Plan aren’t just those that track work in other projects
    • Consider what other work items would be of interest at a program level, e.g. risks, interface adoptions
  6. Limit tracking of work in projects in a different CCM to avoid potential performance issues loading data across CCM repositories (architect CCMs to have related projects together)
  7. Create additional plan views to show other non-schedule focused perspectives of the cross project plan
  8. Continue to limit the size of your plans to reduce plan load time


Conclusion

Cross Project Plans may not be everything I wanted originally, but they do have some good uses and are providing value to customers.

Impacts of having Multiple CCMs in your RTC deployment

My colleague Ralph Schoon (rsjazz.wordpress.com) and I are working on documenting the motivations, tradeoffs, pros and cons when deploying multiple instances of the same Jazz application.  That is, cases where you have more than one CCM server and/or more than one QM server registered to the same JTS.

Our findings and guidance will eventually make their way to the Rational Deployment Wiki and an Innovate 2014 presentation.  I wanted to begin to surface some of the findings now to perhaps uncover some things we haven’t yet considered.

Jazz allows Multiple CLM Applications

It is possible to deploy multiple instances of the same application, each registered to the same JTS.  Today this is possible for the Change and Configuration Management (CCM) and Quality Management (QM) applications.  The Requirements Management application does not yet support multiple instances; follow RM Plan Item 76315 to track progress towards that.

There are several motivations for having multiple instances.

  • Isolate a department or subcontractor
  • Capacity planning: the number of concurrent users and/or the usage model exceeds what a single application instance supports, requiring multiple application servers
  • An organization would like to separate confidential data from non-confidential data
  • Funding model for projects requires segregation

Multiple CCM Applications

The following sections describe functional/behavioral differences when working with Rational Team Concert capabilities on one instance versus when multiple instances exist.

Impact on Work Items

Copying/moving of Work Items to another Project Area is only possible within the same CCM repository.

The Eclipse “current work item” functionality doesn’t work when the work item is in another repository.

The only link relationships between work items in different CCM repositories are ‘Related Change Request’ and ‘Tracks’.

Impact on Planning

Those working in multiple CCMs will need to maintain their work allocation and scheduled absences in each CCM where they have assigned work. Correct and complete allocation and time off entry ensures proper load and progress calculations in plan views.

Cross project plans allow one relationship type to show as a tree view. You can choose either tracks or parent/child, but not both. As mentioned in the work items section, parent/child relationships are not supported across CCM repositories.  Therefore, cross project plans cannot show parent/child hierarchy relationships.

Impact on SCM

It is possible to add a component located in another CCM repository to a stream.  This creates a new component in the local repository and uses distributed SCM to copy the change sets over.  Integrating local changes back requires using a repository workspace or stream and delivering the changes back to the remote repository.  Flow targets between streams are supported and it is possible to deliver or accept changes from one stream to another in the Pending Changes view.

Snapshots/baselines are local to a repository; it is not currently possible to select an owner in a different repository.

When associating a change set to a work item in another CCM repository, use the ‘associate change request’ gesture to make the association rather than ‘associate work item’.

The precondition requiring a change set to be associated with an approved work item only works for work items in the same CCM repository as the change set.  Similarly, the precondition requiring a descriptive change set cannot be satisfied by a change request link.  Most out of the box deliver preconditions can only be satisfied by work item links.

The locate change set capability only works when finding work items/change sets in the local repository.

Conclusion

Ralph and I welcome your comments to let us know if we have missed anything.  In subsequent posts I plan to cover impacts when you have multiple QM and JTS applications.  One day we’ll have support for multiple RM and I’ll post about that too.  Other relevant topics would also include strategies for distributing projects across multiple instances, as well as how to know how many users/projects one instance can handle.