How many artifacts do I have in my Jazz application repository?

One element of sizing the servers for the IBM Continuous Engineering (CE) solution is the current and projected data scale (along with data shape, user scale and workload). There are also recommended artifact limits for keeping an application performing well, such as 200K artifacts per DNG project area (as of v6.0.5 and noted here).

Whether you are trying to project future growth based on current sizing or ensure you are staying within recommended limits, it is useful to know how many artifacts currently exist in a repository (or other “container” such as a project area). Each application provides a different means of getting this information.
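To make the limit-checking concrete, here is a minimal sketch of the arithmetic involved: comparing a current artifact count against the recommended 200K-per-project-area limit and projecting how long until a growing project area reaches it. The helper names and the constant-growth model are my own assumptions for illustration, not part of any Jazz tooling.

```python
# Recommended DNG project area limit (as of v6.0.5, per the guidance above).
RECOMMENDED_DNG_PA_LIMIT = 200_000

def headroom(current_count, limit=RECOMMENDED_DNG_PA_LIMIT):
    """Artifacts remaining before the recommended limit is reached."""
    return max(limit - current_count, 0)

def months_until_limit(current_count, monthly_growth, limit=RECOMMENDED_DNG_PA_LIMIT):
    """Rough months remaining at a constant monthly growth rate (None if no growth)."""
    if monthly_growth <= 0:
        return None
    # Ceiling division: a partial month still counts as a month of growth.
    return -(-headroom(current_count, limit) // monthly_growth)

print(headroom(150_000))                    # 50000
print(months_until_limit(150_000, 4_000))   # 13
```

Running this periodically against the counts you collect (by the means described below) gives an early warning before a project area drifts past the recommended limit.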

DOORS Next Generation (DNG)

Vaughn Rokosz has written a very good article on the impact of data shape on DNG performance. He provides several SQL and SPARQL queries to monitor artifact counts. I won’t repeat them here; at a minimum, follow the link to get the queries for the total number of artifacts and versions in the repository and the artifacts in each project area.

Rational Team Concert (RTC), Rational Quality Manager (RQM) and Rational Model Manager (RMM)

Since these applications share a common storage service, they offer similar means of getting artifact counts. As a Jazz admin, you can run a repotools command or invoke a web service.

Option 1: use repotools from command line
repotools-<context>.bat -listItemStats adminUserId=<jazz admin ID> adminPassword=<jazz admin password> repositoryURL=https://<server:port>/<context> logFile=<filename>

Option 2: use web service from browser

https://<server:port>/<context>/service/com.ibm.team.repository.migration.internal.stats.IDBTableSizeHttpService/

For <context>, use ccm, qm or am for the Change and Configuration Management, Quality Management or Architecture Management applications, respectively.

Note that both of these options can take some time to execute, so be aware of the load they may put on the server. I suggest running them during lighter-load periods. You can first run them in a test environment with production-like data to get a sense of timing and load.
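As a convenience, the two invocation patterns above can be assembled programmatically, which is handy when scripting collection across several applications. This is just string assembly following the templates shown above; the host name, admin ID and log file name are placeholders, and the password is deliberately left out of the generated command.

```python
# Path of the artifact-count web service, as documented above.
ITEM_STATS_SERVICE = ("service/com.ibm.team.repository.migration."
                      "internal.stats.IDBTableSizeHttpService/")

def item_stats_url(host_port, context):
    """URL for the artifact-count web service (context: ccm, qm or am)."""
    return f"https://{host_port}/{context}/{ITEM_STATS_SERVICE}"

def repotools_command(context, admin_id, host_port, log_file):
    """repotools -listItemStats invocation (password omitted; supply it at run time)."""
    return (f"repotools-{context}.bat -listItemStats adminUserId={admin_id} "
            f"repositoryURL=https://{host_port}/{context} logFile={log_file}")

print(item_stats_url("clm.example.com:9443", "ccm"))
```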

Sample CCM artifact counts output

 

Sample QM artifact counts output

Starting with v6.0.3, administrators can monitor Jazz application metrics through the use of JMX MBeans. One of these MBeans is Item Count Details, which contains similar information to that provided by the listItemStats repotools command and the IDBTableSizeHttpService web service. The Item Count Details MBean, once enabled, can be viewed from RepoDebug or from an enterprise monitoring tool capable of receiving published JMX data. This is the preferred method, as you can capture the data over time, see trends, set alerts and thresholds, and correlate it with other monitored data.

CCM MBeans

Item Count Details MBean Objects

Attachment

Attachment Item details


Monitoring Jazz Applications using JMX MBeans

I recently published a blog post on jazz.net regarding our serviceability strategy and use of JMX MBeans to monitor Jazz applications. If you’ve heard me speak on this topic, you know that I believe having a monitoring strategy is a best practice and essentially imperative for any deployment involving our global configuration management capability. I would extend that to deployments of RTC clustering as well.

Have a look at the blog post here:
Monitoring Jazz Applications using JMX MBeans

Resource-intensive scenarios that can degrade CLM application performance

About a year ago, I was asked to begin considering what scenarios could drive load on a Collaborative Lifecycle Management (CLM) application server, potentially leading to outages or diminishing the end user’s quality of service. These aren’t necessarily long-running scenarios, but ones known to use large amounts of system resources (e.g. high CPU, memory or heap usage). As such, they have been known at times to degrade server performance and negatively impact the user experience.

After reviewing a number of problem reports and escalations, plus several discussions with Support, Services and Development, I identified scenarios for several of the applications. We coined the term ‘expensive scenarios’, though our User Assistance team recently indicated that it could be misconstrued and that a more apt name would be ‘resource-intensive’.

The first set of scenarios was published in v6.0.3 and documented as Known Expensive Scenarios. The title will change in the next release to Known Resource-intensive Scenarios.

For each of the identified scenarios, there is a description of what it is and under what conditions it could become resource-intensive. Further, if there are any known best practices to avoid or mitigate the scenario becoming resource-intensive, these too are captured. These practices could include adjusting application advanced properties that tune the scenario's behavior, or changing work practices for when and how the scenario is invoked.

For example, importing a large number of requirements into DOORS Next Generation (DNG) can consume significant resources because, subsequent to the import, the newly imported artifacts are indexed, which can block other user activity. When the volume of imported data is high and/or several imports occur at once, system performance can degrade. The wiki describes this scenario, identifies the advanced properties that limit the number of concurrent ReqIF imports, and recommends that these imports be kept under 10K requirements or be performed when the system is lightly loaded.
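The "keep imports under 10K requirements" guidance can be sketched as a simple batching calculation: given a large set of requirements to import, plan a series of imports that each stay at or under the recommended ceiling. The batching helper below is my own illustration of the work practice, not a DNG feature.

```python
# Recommended ceiling per import, per the guidance above.
RECOMMENDED_IMPORT_CEILING = 10_000

def import_batches(total_requirements, ceiling=RECOMMENDED_IMPORT_CEILING):
    """Return planned batch sizes, each no larger than the recommended ceiling."""
    batches = []
    remaining = total_requirements
    while remaining > 0:
        size = min(remaining, ceiling)
        batches.append(size)
        remaining -= size
    return batches

print(import_batches(25_000))  # [10000, 10000, 5000]
```

Each batch would then be imported separately, ideally during lightly loaded periods, rather than as one 25K-requirement import.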

Knowing these scenarios helps in a couple of ways. First, as your process and tools teams define usage models for one of these applications, knowing that a particular usage pattern can drive load on the server, leading to degraded performance, allows that usage model to be adjusted to avoid or reduce the likelihood of that occurring. Second, in situations of poor performance or worse, knowing whether these scenarios are occurring can help identify the root cause.

The latter case is helped by the logging of start and stop markers when a resource-intensive scenario occurs. Each marker includes the Scenario ID (from Table 1) and a unique instance ID.
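Because each marker carries a scenario ID and a unique instance ID, start and stop entries can be paired up mechanically to see how long each scenario ran. The sketch below shows that pairing logic; the exact marker line format here is an assumption for illustration, so adapt the pattern to what your application logs actually emit.

```python
import re

# Assumed marker format: "<timestamp> scenario <start|stop> id=<ID> instance=<ID>"
MARKER = re.compile(
    r"(?P<ts>\d+)\s+scenario\s+(?P<event>start|stop)\s+"
    r"id=(?P<scenario>\S+)\s+instance=(?P<instance>\S+)")

def scenario_durations(lines):
    """Map instance ID -> (scenario ID, elapsed) for paired start/stop markers."""
    starts, durations = {}, {}
    for line in lines:
        m = MARKER.search(line)
        if not m:
            continue
        ts = int(m.group("ts"))
        if m.group("event") == "start":
            starts[m.group("instance")] = (m.group("scenario"), ts)
        elif m.group("instance") in starts:
            scenario, begin = starts.pop(m.group("instance"))
            durations[m.group("instance")] = (scenario, ts - begin)
    return durations

log = ["100 scenario start id=DNG_ReqIF_Import instance=abc1",
       "160 scenario stop id=DNG_ReqIF_Import instance=abc1"]
print(scenario_durations(log))  # {'abc1': ('DNG_ReqIF_Import', 60)}
```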

ScenarioStartStop

To get additional details when the scenario occurs and to aid in understanding its characteristics, advanced (verbose) logging can be enabled. This can be done from the Serviceability page of an application’s admin UI. Note that enabling verbose logging does not require a server restart.

ScenarioEnableAdvLogging

Now, when a performance or system anomaly occurs and the application logs are reviewed, should it have occurred during a resource-intensive scenario, you may have a clue as to the cause. The additional logging should at a minimum include the data specified in Table 2.

ScenarioAdvLogging

As part of our serviceability improvements in v6.0.3, the CLM applications publish various JMX MBeans that may be collected and trended by enterprise monitoring tools such as IBM Monitoring, Splunk, LogicMonitor and others.  MBeans exist for several application metrics including counts/occurrences of resource-intensive scenarios.

Each MBean to be published must first be enabled from an application’s admin UI advanced properties page.

MBeansEnable

After doing so, the monitoring application can be configured to capture that data and display it on a dashboard.

MBeansStats

Having a comprehensive enterprise monitoring strategy is essential for a well-managed CLM environment.  Tracking occurrences of these scenarios and correlating them against other environment measurements give administrators (and IBM Support) insight when troubleshooting anomalies or proactively evaluating environment performance.  In a subsequent post, I will talk further about what to monitor.

 

Adopting the IBM Continuous Engineering (CE) solution Configuration Management Capability

Adopting the IBM Continuous Engineering (CE) solution Configuration Management Capability is the title of a webinar that Kathryn Fryer and I recently presented. We’ve been working with the ‘new’ configuration management capability since it was in development, prior to its launch in v6.0. Adopting it takes careful consideration in order to successfully realize its benefits.

Objectives of the presentation

In version 6, the IBM CE solution added exciting new configuration management capabilities across the lifecycle, better enabling parallel development and strategic reuse. Simply enabling these capabilities won’t help you realize their potential; you must consider changes to your process and usage model to achieve results. This presentation describes current considerations, limitations and strategies for adopting configuration management.

  • Configuration management overview
  • Trade-offs and considerations – as of current release (6.0.2)
    • Primary factors
    • Reporting
    • OSLC integrations
    • Linking
    • QM utilities
    • Additional considerations
  • Enabling configuration management
  • Upgrade and migration
  • Adoption path and additional resources

If you are interested in this presentation, you can find the replay of the webinar here in the DOORS Enlightenment Series.

The slides are shared here.

Additional Reading

Getting to a right-sized Jazz environment

You’ve just made the decision to adopt one of the Jazz solutions from IBM. Of course, being the conscientious and proactive IT professional that you are, you want to ensure that you deploy the solution to an environment that is performant and scalable. Undoubtedly you begin scouring the IBM Knowledge Center and the latest System Requirements. You’ll find some help and guidance on Deployment and Installation and even a reference to advanced information on the Deployment wiki. Unlike the incongruous electric vehicle charging station in a no-parking zone, you are looking for definitive guidance but come away scratching your head, still unsure of how many servers are needed and how big they should be.

This is a question I am often asked, especially lately. I’ve been advising customers in this regard for several years now and thought it would be good to start capturing some of my thoughts. As much as we’d like it to be a cut-and-dried process, it’s not. This is an art, not a science.

My aim here is to capture my thought process and some of the questions I ask and references I use to arrive at a recommendation.  Additionally, I’ll add in some useful tips and best practices.  If this proves useful, it will eventually move over to the Deployment wiki.

I find that the topology and sizing recommendations are similar regardless of whether the server is to be physical or virtual, on-prem or in the cloud, managed or otherwise.  These impact other aspects of your deployment architecture to be sure, but generally not the number of servers to include in your deployment or their size.

From the outset, let me say that no matter what recommendation I or one of my colleagues gives you, it’s only a point-in-time recommendation based on the limited information given, the fidelity of which will increase over time. You must monitor your Jazz solution environment. In this way you can watch for trends to know when a given server is at capacity and needs to scale by increasing system resources, changing the distribution of applications in the topology and/or adding a new server. See Monitoring: Where to Start? for some initial guidance. There’s a lot going on in the monitoring area, ranging from publishing additional information to existing monitoring solutions to providing a lightweight appliance with some monitoring capabilities. Keep an eye on work items 386672 and 390245.

Before we even talk about how many servers and their size, the other standard recommendation is to ensure you have a strategy for keeping the Public URI stable, which maximizes your flexibility in changing your topology. We’ve also spent a lot of time deriving standard topologies based on our knowledge of the solution, functional and performance testing, and our experience with customers. Those topologies show a range in the number of servers included. The evaluation topology is really only useful for demonstrations. The departmental topology is useful for a small proof of concept or a sandbox environment for developing your processes and procedures and required configuration and customization. For most production environments, a distributed enterprise topology is needed.

The tricky part is that the enterprise topology specifies a minimum of 8 servers to host just the Jazz-based applications, not counting the Reverse Proxy Server, Database Server, License Server, Directory Server or any of the servers required for non-Jazz applications (IBM or 3rd party). For ‘large’ deployments of 1000 users or more, that seems reasonable. What about smaller deployments of 100, 200, 300, etc. users? Clearly 8+ servers is overkill and will be a deterrent to standing up an environment. This is where some of the ‘art’ comes in. More often than not, I find I am recommending a topology somewhere between the departmental and enterprise topologies. In some cases, a federated topology is needed when a deployment has separate and independent Jazz instances but needs to provide a common view from a reporting perspective and/or for global configurations, in the case of a product line strategy. The driving need for separate instances could be isolation, sizing, reduced exposure to failures, organizational boundaries, merger/acquisition, customer/supplier separation, etc.

The other part of the ‘art’ is recommending the sizing for a given server.  Here I make extensive use of all the performance testing that has been done.

The CLM Sizing Strategy provides a comfortable range of concurrent users that a given Jazz application can support on a given sized server for a given workload. Should your range of users be higher or lower, your server be bigger or smaller, or your workload be more or less demanding, then you can expect your range to be different or to need a different sizing. In other words, judge your sizing or expected range of users up or down based on how closely you match the test environment and workload used to produce the CLM Sizing Strategy. Concurrent use can come from direct use by the Jazz users, but also from 3rd-party integrations as well as build systems and scripts. All such usage drives load, so be sure to factor that into the sizing. There are other factors, such as isolating one group of users and projects from another, that would motivate you to have separate servers even if all those users could be supported on a single server.

Should your expected number of concurrent users be beyond the range for a given application, you’ll likely need an additional application server of that type.  For example, the CLM Sizing Strategy indicates a comfortable range of 400-600 concurrent users on a CCM (RTC) server if just being used for work items (tracking and planning functions).  If you expect to have 900 concurrent users, it’s a reasonable assumption that you’ll need two CCM servers.  As of v6.0.2, scaling a Jazz application to support higher loads involves adding an additional server, which the Jazz architecture easily supports.  Be aware though that there are some behavioral differences and limitations when working with multiple applications (of same type) in a given Jazz instance.  See Planning for multiple Jazz application server instances and its related topic links to get a sense of considerations to be aware of up front as you define your topology and supporting usage models.  Note that we are currently investigating a scalable and highly available clustered solution which would, in most cases, remove the need for distributing projects and users across multiple application servers and thus avoid the behavioral differences mentioned.  Follow this investigation in work item 381515.
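The arithmetic behind the 900-user example above is simple ceiling division, and it is worth making explicit because it applies to any application type once you know its comfortable range. The sketch below uses the figures from the CLM Sizing Strategy example; treat the result as a starting point for planning, not a rule, since the comfortable range itself shifts with workload and hardware.

```python
def servers_needed(expected_concurrent_users, comfortable_max):
    """Minimum servers, assuming load can be distributed evenly (ceiling division)."""
    return -(-expected_concurrent_users // comfortable_max)

# Work-items-only CCM example from the text: comfortable range tops out at 600.
print(servers_needed(900, 600))   # 2, matching the example above
print(servers_needed(1300, 600))  # 3
```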

This post doesn’t address other servers likely needed in your topology such as a Reverse Proxy, Jazz Authorization Server (which can be clustered), Content Caching Proxy and License Key Server Administration and Reporting tool.  Be sure to read up on those so you understand when/how they should be incorporated into your topology.  Additionally, many of the performance and sizing references I listed earlier include recommendations for various JVM settings.  Review those and others included in the complete set of Performance Datasheets and Sizing Guidelines.  It isn’t just critical to get the server sizing right but the JVM properly tuned for a given application.

To get to the crux of the primary question of number of servers and their size, I ask a number of questions.  Here’s a quick checklist of them.

  1. What Jazz applications are you deploying?
  2. What other IBM or 3rd party tools are you integrating with your Jazz applications?
  3. How many total and concurrent users by role and geography are you targeting and expect to have initially?  What is the projected adoption rate?
  4. What is the average latency from each of the remote locations?
  5. How much data (number of artifacts by domain) are you migrating into the environment? What is the projected growth rate?
  6. If adopting Rational Team Concert, which capabilities will you be using (tracking and planning, SCM, build)?
  7. What is your build strategy? frequency/volume?
  8. Do you have any hard boundaries needed between groups of users, e.g. organizational, customer/supplier, etc. such that these groups should be separated onto distinct servers?
  9. Do you anticipate adopting the global or local configuration management capability (released in v6.0)?
  10. What are your reporting needs? document generation vs. ad hoc? frequency? volume/size?

Most of these questions primarily allow me to get a sense of what applications are needed and what could contribute to load on the servers.  This helps me determine whether the sizing guidance from the previously mentioned performance reports need to be judged higher or lower and how many servers to recommend.  Other uses are to determine if some optimization strategies are needed (questions 4 and 7).

As you answer these questions, document them and revisit them periodically to determine if the original assumptions, which led to a given recommended topology and size, have changed and thus necessitate a change in the deployment architecture. Validate them too with a cohesive monitoring strategy to determine if the environment usage is growing slower/faster than expected, or to detect if a server is nearing capacity. Another good best practice is to create a suite of tests to establish a baseline of response times for common day-to-day scenarios from each primary location. As you make changes in the environment, e.g. server hardware, memory or cores, software versions, network optimizations, etc., rerun the tests to check the effect of the changes. The tests can be as simple as a manual run of a scenario with a tool to monitor and measure network activity (e.g. Firebug). Alternatively, you can automate the tests using a performance testing tool. Our performance testing team has begun to capture their practices and strategies in a series of articles, starting with Creating a performance simulation for Rational Team Concert using Rational Performance Tester.
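Whichever way you capture the baseline, the comparison step can be sketched as below: given per-scenario baseline timings and a re-run after an environment change, flag the scenarios that regressed beyond a tolerance. The 20% threshold and scenario names are placeholders I chose for illustration.

```python
def regressions(baseline, rerun, tolerance=0.20):
    """Scenarios whose re-run time exceeds the baseline by more than the tolerance."""
    flagged = {}
    for scenario, base in baseline.items():
        new = rerun.get(scenario)
        if new is not None and new > base * (1 + tolerance):
            flagged[scenario] = (base, new)
    return flagged

# Hypothetical timings in seconds from a baseline run and a post-change re-run.
baseline = {"open_plan": 2.0, "run_query": 1.5}
rerun = {"open_plan": 2.1, "run_query": 2.4}
print(regressions(baseline, rerun))  # {'run_query': (1.5, 2.4)}
```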

In closing, the kind of guidance I’ve talked about often comes out in the context of a larger discussion that looks at the technical deployment architecture from a more holistic perspective, taking into account several of the non-functional requirements for a deployment. This discussion is typically in the form of a Deployment Workshop and covers many of the deployment best practices captured on the Deployment wiki. These non-functional requirements can impact your topology and deployment strategy. Take advantage of the resources on the wiki or engage IBM to conduct one of these workshops.

Configuration Management much improved in CLM 6.0.1

The 6.0 release of the Rational solution for Collaborative Lifecycle Management (CLM) included the addition of new configuration management capabilities. It had some limitations to consider, that is, temporary differences in some CLM capabilities when configuration management is enabled for a project versus not. Some workarounds for these were detailed in Finding suspect links and requirements to reconcile in CLM 6.0 configuration management enabled projects and Alternatives to filtering on lifecycle traceability links in CLM 6.0 configuration management enabled projects.

I am happy to say that the development team worked hard to address these and more in the 6.0.1 release.  A few considerations still remain, but far less than were in the previous release.  I am now comfortable recommending its use by many of my customers.  In this blog entry, I will compare and contrast the configuration management limitations between 6.0 and 6.0.1 and highlight a few other configuration management enhancements.

Considerations from v6.0 and their improvements in v6.0.1:

  1. Will you use the configuration management capabilities in a production environment or a pilot environment?

  2. Will you upgrade all CLM applications to 6.0?

  3. Do your configuration-enabled RM or QM project areas link to other RM or QM project areas?

    While these first three considerations all remain true, they were moved to the ‘Important Factors’ section as they are more recommendations and best practices versus changes in behavior from non-enabled projects.  In v6.0, configuration management was new and we thought it better to draw attention to these recommendations so included them at the top of the v6.0 considerations list.

    Piloting use of configuration management is recommended due to its complexities and to ensure it meets your needs.  It also gives opportunity to try out new usage models/workflows before implementing in production.

    Keeping all the CLM apps at the same v6.0.x rev level is the only practical way to take advantage of all the configuration management capabilities and ensure they work correctly.

    Because of the new linking model in v6.0 when using configuration management, you’ll always want to enable configuration management for all the RM and QM projects between which you’ll be linking artifacts; otherwise, linking will not always work as desired.

  4. Which type of reporting do you need?

    In v6.0, configuration-aware reporting was available for document generation only.  All other options were technology preview only and not to be used in production.  In v6.0.1, document generation continues to be available using built-in, template-based reporting in the CLM application or by IBM Rational Publishing Engine; it now includes interactive reporting through the Jazz Reporting Service Report Builder and Rational Engineering Lifecycle Manager. 

    Most configuration-aware data is now available. v6.0.1 added version-aware data for DOORS Next Generation, versioned data in global and local configurations, and Rational Team Concert enumeration values. This means, for example, that you can construct a report that spans requirements, work items and tests for a particular configuration, or a report that includes artifacts from multiple project areas within the same domain. Some gaps remain in the available configuration-aware data, and some data is only available via custom SPARQL queries. Take a look at the limitations section of Getting started with reporting by using LQE data sources.

    Access control for reporting on LQE data sources is now enforced at the project area level for the RM, CCM and QM applications. Project-area level access control is not yet implemented for DM and Link Validity (it can be set manually in LQE).

  5. Do you need to link or synchronize your CLM project area data with other tools, products, or databases (including IBM, in-house, or other tools)?

    Most OSLC-based integrations outside the CLM applications do not support versioned artifacts.  To do so requires support for OASIS OSLC Configuration Management spec (draft).  We do expect the list of supporting applications to grow over time.  Note that integrations to RTC work items continue to work as expected, because work items aren’t versioned.  Several RQM test execution adapters have been verified to work correctly with enabled projects.  We expect progress to be made in this area throughout 2016 with other IBM and third-party applications.  For information on setting up configuration aware integrations, see Enabling your application to integrate with configuration-management-enabled CLM applications.

  6. Do you rely on suspect links for requirements in RM or for requirement reconciliation in QM?

    In v6.0.1, the new Link Validity Service replaces “suspect links”.  Now you can show and assert the validity of links between requirements, and between requirements and test artifacts.  Automatic “suspect” assertion occurs when attributes change.  Validity status is reused across configurations when the same artifacts with the same content are linked in multiple configurations.

    QM requirements reconciliation against linked requirements collections is now available in configuration management enabled projects. 

  7. Do you need to filter views based on lifecycle traceability links?

    RTC plans can now be linked to versioned requirements collections and test plans in v6.0.1. What remains of this limitation is that, in configuration management enabled projects, it is still not possible to filter RM views based on lifecycle traceability status, nor to filter QM views based on RTC work item and plan traceability links. These should all be addressed in a subsequent release.

  8. (For QM users) Do you use command-line tools, the mobile application for off-line execution, or import from Microsoft Excel or Word?

    In the v6.0.1 release, all command-line tools except the mobile application for offline execution utility are now configuration aware and can be used with configuration management enabled projects.

The v6.0.1 release includes some other configuration management enhancements of note unrelated to the limitations/considerations:

  • One step creation of global configuration baseline hierarchy
  • Bulk create streams for selected baselines in the context of a global stream
  • Requirements change management supported by optionally requiring change sets to edit contents of a stream and further requiring those change sets to be associated with an approved work item to deliver those changes
  • Improved ability to deliver changes across streams that require merging of modules
  • Several improvements to make it easier to work with DNG change sets in personal streams

See v6.0.1 CLM New & Noteworthy for more details on these and other improvements.

One other enhancement not yet fully realized is the provision for Fine Grain Components. Currently each project area is a component, which could lead to a proliferation of project areas for complex product hierarchies. The intent is to eventually support a more granular component breakdown within a project area. More work remains to fully support this. In the meantime, some customers may limit their adoption of configuration management until it is supported.

To wrap up, I believe we’ve made great strides in improving the configuration management capability and addressing the limitations from its initial release. To me, the primary limitation that will constrain a customer’s adoption of the capability is whether the needed integrations to 3rd-party or other IBM applications are configuration aware, and secondarily whether any aspects of the configuration-aware reporting won’t meet their reporting needs.

To give this release a try, you can download CLM v6.0.1 or access one of our sandboxes already populated with sample data (select the latest CLM milestone “with configuration management” when creating your sandbox).

Alternatives to filtering on lifecycle traceability links in CLM 6.0 configuration management enabled projects

Today I’d like to continue on the theme started in Finding suspect links and requirements to reconcile in Collaborative Lifecycle Management (CLM) 6.0 configuration management enabled projects by addressing another consideration from Enabling configuration management in CLM 6.0 applications.

Do you need to filter views based on lifecycle traceability links?

In CLM 6.0 for those projects with configuration management enabled, you can view artifacts with their lifecycle traceability links, but cannot filter by those relationships.  There are three key areas this impacts:

  • Limit by lifecycle status in DOORS Next Generation (DNG)
  • Traceability views for RQM test artifacts
  • Linking RTC plans to artifact collections

I’ll explore some alternative workarounds to these in this blog.

Limit by lifecycle status in DOORS Next Generation (DNG)

In CLM 5.x, views of requirements artifacts can be filtered by the status of lifecycle artifacts linked to them.  The same is true in CLM 6.0 but only for projects that don’t have configuration management enabled.  This limitation should be addressed in a future release by work item 97071.   Below is an example showing all feature requirements with failed test case runs.

image

Similarly, the following shows all feature requirements whose linked development items have been resolved.

image

In CLM 6.0, for configuration management enabled projects, the lifecycle status filter option doesn’t even appear.

image

It is still possible to show a view of requirements with their lifecycle traceability links shown; it’s only the filtering that isn’t possible (at present).

image

Here you could scroll through the Validated By column, for instance, and manually scan for test cases whose icon indicates a failure occurred. This wouldn’t be viable for more than a short list of requirements.

What’s needed then is to display a view/query of the linked artifacts, filtered appropriately, and display, if possible, their linked requirements.

For example, show all failed test case runs in Rational Quality Manager (RQM) and their associated requirements. When displaying all test cases, you can see visually, by the associated icon, whether a test case has been run successfully or not. This isn’t ideal, given you aren’t able to filter by test case status and must instead visually pick out the failed runs. It does, however, show the linked requirement being validated.

image

Alternatively, show a view of test case results and filter by their status.  Below is a list of test case results that are Failed or Blocked and their associated test case.

image

Unfortunately, it doesn’t also show the related requirement.  Instead you would need to drill down into the test case from the failed run and see its linked requirements.

Use of the Jazz Reporting Service Report Builder may be an option in limited cases. First, its use for configuration-management-aware reporting is a Technology Preview in 6.0 and only really contains configuration data for RQM. For DNG, only the default configuration data is included for configuration management enabled DNG projects. If your requirements configuration management needs are basic, where a single configuration/stream is sufficient, this may be an option.

For example, the following report shows all requirements with failed/blocked test case runs.

image

Now, you’ll likely have multiple DNG projects. DNG doesn’t publish project area data to the Lifecycle Query Engine (LQE) using the Configurations data source, so you can’t choose only those requirements artifacts from a given project, limit scope by that project, or set a condition to query by some DNG project area attribute. You can, however, choose test artifacts for a given project area (and configuration), so if there’s a 1:1 relationship between DNG and RQM projects, you can produce a report that shows just the requirements from failed test case runs in the desired RQM project belonging to the desired global configuration (this is what is shown in the previous screenshot).

I tried approaching this from the opposite direction, that is, showing all failed/blocked test case runs and their associated requirements.  You get the right list of failed runs, but it shows all of their associated requirements, not only those that were tested and failed in that run.

image
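If you are comfortable querying the LQE SPARQL endpoint directly, a query along the lines sketched below could produce the same "requirements validated by failed/blocked results" answer outside Report Builder. This is only a sketch: the OSLC QM/RM prefixes are standard, but the exact status identifiers and the precise shape of the data LQE indexes vary by version, so the status values and property usage here are assumptions to adapt, not a drop-in query.

```python
# Sketch: build a SPARQL query for LQE that selects requirements whose
# validating test case results are in a failed or blocked state.
# The oslc_qm/oslc_rm/dcterms prefixes are standard OSLC vocabularies;
# the status identifiers ("failed", "blocked") are illustrative — RQM
# uses its own execution-state identifiers, so substitute the real ones.

def build_failed_run_query(statuses=("failed", "blocked")):
    """Return a SPARQL query string selecting requirements validated by
    test cases whose results carry one of the given status values."""
    values = " ".join('"%s"' % s for s in statuses)
    return """
PREFIX oslc_qm: <http://open-services.net/ns/qm#>
PREFIX dcterms: <http://purl.org/dc/terms/>

SELECT DISTINCT ?requirement ?title WHERE {
  ?result a oslc_qm:TestResult ;
          oslc_qm:status ?status ;
          oslc_qm:reportsOnTestCase ?testCase .
  ?testCase oslc_qm:validatesRequirement ?requirement .
  ?requirement dcterms:title ?title .
  VALUES ?status { %s }
}
""" % values

print(build_failed_run_query())
```

Running such a query against LQE would still be subject to the same configuration-scoping limits described above; it only sidesteps the Report Builder UI, not the data source.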

For the other example I gave earlier (show all requirements whose linked development items were resolved), you could go to Rational Team Concert (RTC) and run a lifecycle query such as Plan Items implementing Requirements, but you’d need to scan visually for plan items whose status met your criteria, as the query isn’t editable and you therefore can’t add a filter.

image

Traceability views for RQM test artifacts

In CLM 5.x, views of test artifacts can be filtered by the presence (or not) of a linked development item.

image

The same is true in CLM 6.0, but only for projects that don’t have configuration management enabled.  RQM projects with configuration management enabled don’t include that filter option.  This limitation should be addressed in a future release by work item 134672.

image

From this view, you would then need to visually scan the Test Development Item column for whichever condition you needed.

RTC has some lifecycle queries, such as Plan Items with no Test Case or Plan Items with failing Tests that could help.

image

image

Here again, Report Builder can help: you can construct a report that shows test cases with or without associated work items.  For example, the report below shows test cases without any associated work item.

image

Linking RTC plans to artifact collections

In CLM 5.x, it is possible to link an RTC plan to a DNG requirements collection and/or an RQM test plan.  Using these grouped artifacts allows for shared scope and constraints across lifecycle plans, and is useful for auto-filling gaps in plans or reconciling changes.

image

In CLM 6.0, links to collections and test plans from an RTC plan only resolve to the default configuration in the project they reside in.  In other words, you cannot link an RTC plan to a versioned requirements collection or test plan.  This limitation should be addressed in a future release by work item 355613.  The primary practical impact is that you are unable to auto-generate work items for the requirements in a collection when working with projects that have configuration management enabled.

image

Without that capability, the only workaround is to manually create the work items so that every requirement in the collection is linked to a corresponding work item.

A traceability plan view in RTC that includes a color filter will help identify those plan items without requirement links.

image

Such a view will also highlight cases where a work item may need to be removed because the scope has changed, e.g. requirements have been removed from the collection.

In DNG, view the collection with the Implemented By column included and scan for requirements with no corresponding work item.

image

If your requirements set is too large to review manually, export the collection view to a CSV file, then open the exported file and filter or sort by the Implemented By column to more easily spot those requirements without work items.

image
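That spreadsheet step can also be scripted. As a minimal sketch (assuming the export contains `id` and `Implemented By` columns; your view's column names may differ), the following lists requirements with no linked work item:

```python
import csv
import io

def requirements_without_work_items(csv_text, link_column="Implemented By",
                                    id_column="id"):
    """Yield the IDs of rows whose link column is empty or blank."""
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        if not (row.get(link_column) or "").strip():
            yield row[id_column]

# Tiny inline example; in practice, read the file DNG exported.
sample = (
    "id,Name,Implemented By\n"
    "REQ-1,Login,WI-100\n"
    "REQ-2,Logout,\n"
    "REQ-3,Audit,WI-101\n"
)
print(list(requirements_without_work_items(sample)))
# → ['REQ-2']
```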

Conclusion

Of the limitations discussed, I find the first, the inability to filter by lifecycle status, will be the most problematic for customers, though I’ve found its usage to be mixed.  I’m also not particularly enamored with the workarounds described, because they too are limited and involve manual steps.  I would be interested in hearing how significant these limitations are in your environment, or whether you have additional ideas on workarounds for them.