First, to keep the offering managers and marketing team happy, I should point out that Rational Team Concert (RTC) is no longer the name; the product is now Engineering Workflow Management (EWM). See Renaming the IBM Continuous Engineering Portfolio for more information.
It is still true that build results, work item attachments and versioned content are the largest contributors to EWM database size. Previously I referenced out-of-the-box reports useful for determining which EWM namespaces occupy the most space. Since then I have documented web services, repotools reports and the Item Count Details JMX MBean that can be used to get the same information. See How many artifacts do I have in my Jazz application repository?.
Once you know which artifacts are taking up the most space, you then need to know what can and cannot be deleted. I point out some of this in the previous blog post, but we now have an article on the deployment wiki that goes into more detail. See Deleting data in Rational Team Concert.
A new technique for reducing the size of your EWM database is using an external content repository, such as Artifactory, for managing large versioned files. See Rational Team Concert: External content repositories. The article describes how to configure EWM to use an external content repository and move content in and out of it, as well as several new JMX MBeans that monitor the size of external content repositories, the size of EWM SCM components and the size of the largest files.
I think the improvements since the original posting make it easier to monitor the growth of your repositories, understand what is causing it and provide better strategies for mitigating growth. As always, your feedback is appreciated. If you have other techniques you find useful, please pass them on.
The following is a refresh of a post originally published in 2016.
You’ve just made the decision to adopt one of the Jazz solutions from IBM. Of course, being the conscientious and proactive IT professional that you are, you want to ensure that you deploy the solution to an environment that is performant and scalable. Undoubtedly you begin scouring the IBM Knowledge Center and the latest System Requirements. You’ll find some help and guidance on Deployment and installation planning and even a reference to advanced information on the Deployment wiki. But much like the mixed signals of an incongruous electric vehicle charging station in a no-parking zone, the definitive guidance you are looking for isn’t there, and you come away scratching your head, still unsure of how many servers are needed and how big they should be.
This is a question I am asked often, especially lately. I’ve been advising customers in this regard for several years now and thought it would be good to start capturing some of my thoughts. As much as we’d like it to be a cut-and-dried process, it’s not. This is an art, not a science.
My aim here is to capture my thought process and some of the questions I ask and references I use to arrive at a recommendation. Additionally, I’ll add in some useful tips and best practices.
I find that the topology and sizing recommendations are similar regardless of whether the server is to be physical or virtual, on-prem or in the cloud, managed or otherwise. These impact other aspects of your deployment architecture to be sure, but generally not the number of servers to include in your deployment or their size. One exception is that managed cloud environments often start lower than the recommended target, since those managing them understand how to monitor the environment, look for indicators that more resources are needed, and can respond quickly to increasing demand.
From the outset, let me say that no matter what recommendation I or one of my colleagues gives you, it’s only a point-in-time recommendation based on the limited information given, the fidelity of which will increase over time. You must monitor your Jazz solution environment. In this way you can watch for trends to know when a given server is at capacity and needs to scale by increasing system resources, changing the distribution of applications in the topology and/or adding a new server. See Deployment Monitoring for some initial guidance. Since 6.0.3, we have added capabilities to monitor Jazz applications using JMX MBeans. Enterprise monitoring is a critical practice to include in your deployment strategy.
Before we even talk about how many servers and their size, the other standard recommendation is to ensure you have a strategy for keeping the Public URI stable, which maximizes your flexibility in changing your topology. We’ve also spent a lot of time deriving standard topologies based on our knowledge of the solution, functional and performance testing, and our experience with customers. Those topologies show a range in the number of servers included. The departmental topology is useful for a small proof of concept or a sandbox environment for developing your processes and procedures and required configuration and customization. For most production environments, a distributed enterprise topology is needed.
The tricky part is that the enterprise topology specifies a minimum of 8 servers to host just the Jazz-based applications, not counting the Reverse Proxy Server, Database Server, License Server, Directory Server or any of the servers required for non-Jazz applications (IBM or 3rd party). For ‘large’ deployments of 1000 users or more, that seems reasonable. What about smaller deployments of 100, 200 or 300 users? Clearly 8+ servers is overkill and will be a deterrent to standing up an environment. This is where some of the ‘art’ comes in. More often than not, I find I am recommending a topology somewhere between the departmental and enterprise topologies. In some cases, a federated topology is needed when a deployment has separate and independent Jazz instances but needs to provide a common view from a reporting perspective and/or for global configurations, in the case of a product line strategy. The driving need for separate instances could be isolation, sizing, reduced exposure to failures, organizational boundaries, merger/acquisition, customer/supplier separation, etc.
The other part of the ‘art’ is recommending the sizing for a given server. Here I make extensive use of all the performance testing that has been done, including the following.
The CLM Sizing Strategy provides a comfortable range of concurrent users that a given Jazz application can support on a given-sized server for a given workload. Should your range of users be higher or lower, your server be bigger or smaller, or your workload be more or less demanding, then you can expect your range to be different or to need a different sizing. In other words, judge your sizing or expected range of users up or down based on how closely you match the test environment and workload used to produce the CLM Sizing Strategy. Concurrent use can come from direct use by the Jazz users but also from 3rd-party integrations as well as build systems and scripts. All such usage drives load, so be sure to factor that into the sizing. There are other factors, such as isolating one group of users and projects from another, that would motivate you to have separate servers even if all those users could be supported on a single server.
Should your expected number of concurrent users be beyond the range for a given application, you’ll likely need an additional application server of that type. For example, the CLM Sizing Strategy indicates a comfortable range of 400-600 concurrent users on a CCM (Engineering Workflow Management) server if it is used just for work items (tracking and planning functions). If you expect to have 900 concurrent users, it’s reasonable to assume you’ll need two CCM servers. Scaling a Jazz application to support higher loads involves adding an additional server, which the Jazz architecture supports through multi-server or clustered topology patterns. Be aware, though, that there are some behavioral differences and limitations when working with the multi-server (not clustered) pattern. See Planning for multiple Jazz application server instances and its related topic links to get a sense of the considerations to be aware of up front as you define your topology and supporting usage models. As of this writing, application clustering is only available with the CCM application.
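To make the arithmetic concrete, here is a minimal sketch of that first-cut calculation in Python. The 600-user ceiling is just the top of the work-item-only CCM range cited above and is an assumption; replace it with the range that matches your own workload, hardware and data shape.

```python
import math

# Illustrative assumption: top of the work-item-only CCM range cited above.
# The comfortable range for your deployment depends on workload and hardware.
COMFORTABLE_MAX_CONCURRENT_USERS = 600

def servers_needed(expected_concurrent_users: int, comfortable_max: int) -> int:
    """Rough first-cut estimate of how many application servers to plan for."""
    return math.ceil(expected_concurrent_users / comfortable_max)

print(servers_needed(900, COMFORTABLE_MAX_CONCURRENT_USERS))  # -> 2 CCM servers
```

This is only a starting point; the non-sizing factors above (isolation, organizational boundaries, etc.) can still push you toward more servers than the arithmetic alone suggests.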
What are your reporting needs? Document generation vs. ad hoc? Frequency? Volume/size?
Most of these questions primarily allow me to get a sense of which applications are needed and what could contribute to load on the servers. This helps me determine whether the sizing guidance from the previously mentioned performance reports needs to be judged higher or lower, and how many servers to recommend. They also help determine whether some optimization strategies are needed.
As you answer these questions, document them and revisit them periodically to determine if the original assumptions that led to a given recommended topology and size have changed and thus necessitate a change in the deployment architecture. Validate them too with a cohesive monitoring strategy to determine if the environment usage is growing slower/faster than expected, or to detect if a server is nearing capacity. Another good best practice is to create a suite of tests to establish a baseline of response times for common day-to-day scenarios from each primary location. As you make changes in the environment, e.g. server hardware, memory or cores, software versions, network optimizations, etc., rerun the tests to check the effect of the changes. The tests can be as simple as a manual run of a scenario with a tool to monitor and measure network activity (e.g. Firebug). Alternatively, you can automate the tests using a performance testing tool. Our performance testing team has begun to capture their practices and strategies in a series of articles, starting with Creating a performance simulation for Rational Team Concert using Rational Performance Tester.
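As an illustration of the simplest end of that spectrum, the following is a minimal Python sketch of a manual baseline run. The scenario URLs are hypothetical placeholders; a real suite would authenticate and exercise your actual day-to-day scenarios behind your stable Public URI.

```python
import statistics
import time

import requests  # third-party HTTP client, used here for brevity

# Hypothetical scenario URLs; substitute requests representing your own
# day-to-day scenarios, run from each primary location.
SCENARIOS = {
    "load_ccm_root": "https://jazz.example.com/ccm/web",
    "load_qm_root": "https://jazz.example.com/qm/web",
}

def median_response_time(url: str, runs: int = 5) -> float:
    """Time repeated GETs to establish a simple response-time baseline."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        requests.get(url, verify=False)  # lab sketch; verify certificates in production
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

for name, url in SCENARIOS.items():
    print(f"{name}: {median_response_time(url):.2f}s")
```

Store each run's results so that, after an environment change, you can compare against the baseline rather than relying on user perception.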
In closing, the kind of guidance I’ve talked about often comes out in the context of a larger discussion which looks at the technical deployment architecture from a more holistic perspective, taking into account several of the non-functional requirements for a deployment. This discussion is typically in the form of a Deployment Workshop and covers many of the deployment best practices captured on the Deployment wiki. These non-functional requirements can impact your topology and deployment strategy. Take advantage of the resources on the wiki or engage IBM to conduct one of these workshops.
As a deployment architect advising customers on their topology, I’m often asked about the supported options for reverse proxies and load balancing. Further, some customers ask about using DNS aliasing rather than a Reverse Proxy, which is supported but not a best practice.
While some of these details can be found in the Knowledge Center and deployment wiki, we’ve not had a central article listing the options with any comparisons. My colleagues in IBM Support have recently published an article with these details here: Reverse Proxies and Load Balancers in CLM Deployment.
The IBM Continuous Engineering (CE) solution supports the notion of N-1 client compatibility. This means a v5.0.x client, such as Rational Team Concert (RTC) for the Eclipse IDE, can connect to a v6.0.x RTC server. Customer environments may have hundreds if not thousands of users with various clients. It is typically not feasible to upgrade them all at the same time a server upgrade occurs. In some cases, upgrading the client requires a corresponding upgrade in development tooling, e.g. the version of Eclipse, which may not yet be possible for other reasons.
In an environment where there is a mix of RTC clients (inclusive of build clients), a data format conversion on the server is required to ensure that the data from the older clients is made compatible with the format of the newer version managed by the RTC server. Reducing these conversions means one less thing for the server to do, especially when multiplied across hundreds or thousands of clients. Additionally, new capabilities from more recent versions often can’t be fully realized until older clients are upgraded.
There are two ways to determine how much N-1 activity is ongoing in your environment.
The sample table above was taken from a 6.0.3 RTC server. It shows that of 75685 client logins since the server was last started, only 782 were from the most recent client; 70% were from 5.x clients. Ideally, these would be upgraded to the latest client.
Compatible client logins MBean
If using an enterprise monitoring application integrated with our JMX MBeans, starting in 6.0.5, the Client Compatible Logins MBean can be used to get details similar to those shown in the ICounterContentService table. The advantage of the MBean is that the login values can be tracked over time, so you can assess whether efforts to reduce use of old clients are succeeding.
Detecting which users (or builds) are using old clients
In order to determine which users, build servers or other integrations are using older clients, you could parse the access log for your reverse proxy. In particular, find all occurrences of versionCompatibility?clientVersion in the log.
As you can see from the sample access log, there are several instances of calls with a 5.0.2 client. The log entries have the IP address of the end client sourcing the call. If this can be associated to a particular user (or build user), you can then have someone to talk to regarding upgrading their client version.
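The following is a minimal Python sketch of that log parsing. It assumes a common access-log layout where the client IP is the first field of each line; adjust the parsing to your reverse proxy's actual log format.

```python
import re
from collections import Counter

# Matches the compatibility check call noted above, capturing the client version.
PATTERN = re.compile(r"versionCompatibility\?clientVersion=([\d.]+)")

clients = Counter()
with open("access.log") as log:  # path to your reverse proxy access log
    for line in log:
        match = PATTERN.search(line)
        if match:
            ip = line.split()[0]  # assumption: first field is the end-client IP
            clients[(ip, match.group(1))] += 1

# Report which IPs are still connecting with old (e.g., 5.x) clients.
for (ip, version), count in clients.most_common():
    print(f"{ip} used client version {version} {count} times")
```

Once you have the IP-to-version mapping, correlating it with DHCP leases, build agent inventories or user records gives you the list of people to contact about upgrading.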
One element of sizing the servers for the IBM Continuous Engineering (CE) solution is the current and projected data scale (along with data shape, user scale and workload). There are also recommended artifact limits to keep an application performing well, such as 200K artifacts per DNG project area (as of v6.0.5 and noted here).
Whether you are trying to project future growth based on current sizing or to ensure you are staying within recommended limits, it is useful to know how many artifacts currently exist in a repository (or other “container”, such as a project area). Each application provides different means of getting this information.
DOORS Next Generation (DNG)
Vaughn Rokosz has written a very good article on the impact of data shape on DNG performance. He provides several SQL and SPARQL queries to monitor artifact counts. I won’t repeat them here; follow the link to get, at a minimum, the queries for the total number of artifacts and versions in the repository and the artifact counts per project area.
Rational Team Concert (RTC), Rational Quality Manager (RQM) and Rhapsody Model Manager (RMM)
Since these applications share a common storage service, they have similar means of getting artifact counts. As a Jazz Admin, you can run a repotools command or call a web service.
For <context>, use ccm, qm or am for the Change and Configuration Management, Quality Management or Architecture Management application, respectively.
Note that both of these options can take some time to execute, so be aware of the possible load placed on the server. I suggest running them during lighter-load times. You can first run them in a test environment with production-like data to get a sense of timing and load.
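As a sketch of the web service route, the snippet below calls the IDBTableSizeHttpService mentioned later in this post. The full service path and the authentication mechanism are assumptions; confirm both against the How many artifacts do I have in my Jazz application repository? article for your version.

```python
import requests

# Jazz web services are typically invoked as
#   https://<server>:<port>/<context>/service/<fully.qualified.service.name>
# The package prefix below is an assumption; check the jazz.net article for
# the exact URL of the IDBTableSizeHttpService in your deployment.
BASE = "https://jazz.example.com:9443"
CONTEXT = "ccm"  # or qm / am, per the note above
SERVICE = (f"{BASE}/{CONTEXT}/service/"
           "com.ibm.team.repository.service.internal.IDBTableSizeHttpService")

session = requests.Session()
session.auth = ("jazzadmin", "secret")  # placeholder Jazz Admin credentials;
# your deployment may require form-based login rather than basic auth.

response = session.get(SERVICE, verify=False)  # verify certificates in production
response.raise_for_status()
print(response.text)  # the table/namespace size report; output can be large
```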
Sample CCM artifact counts output
Sample QM artifact counts output
Starting with v6.0.3, administrators can monitor Jazz application metrics through the use of JMX MBeans. One of these MBeans is Item Count Details, which contains information similar to that provided by the listItemStats repotools command and the IDBTableSizeHttpService web service. The Item Count Details MBean, once enabled, can be viewed from RepoDebug or an enterprise monitoring tool capable of receiving published JMX data. This is the preferred method, as you can capture the data over time, see trends, set alerts and thresholds, and correlate with other monitored data.
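If you want to experiment before wiring up a full monitoring tool, the following Python sketch polls an MBean over WebSphere Liberty's JMX REST connector. It assumes the Liberty restConnector feature is enabled, and the ObjectName is a placeholder; browse /IBMJMXConnectorREST/mbeans (or the RepoDebug mxBeans page) to find the actual name of the Item Count Details MBean in your deployment.

```python
from urllib.parse import quote

import requests

BASE = "https://jazz.example.com:9443"
# Placeholder ObjectName; replace with the real Item Count Details MBean name.
OBJECT_NAME = quote("com.ibm.team.example:name=ItemCountDetails", safe="")

session = requests.Session()
session.auth = ("jazzadmin", "secret")  # placeholder credentials

# Liberty's REST connector exposes MBean attributes as JSON.
attrs = session.get(f"{BASE}/IBMJMXConnectorREST/mbeans/{OBJECT_NAME}/attributes",
                    verify=False)  # verify certificates in production
attrs.raise_for_status()
print(attrs.json())  # item counts, ready to be trended by a monitoring tool
```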
I recently published a blog post on jazz.net regarding our serviceability strategy and use of JMX MBeans to monitor Jazz applications. If you’ve heard me speak on this topic, you know that I believe having a monitoring strategy is a best practice and essentially imperative for any deployment involving our global configuration management capability. I would extend that to deployments of RTC clustering as well.
About a year ago, I was asked to begin considering what scenarios could drive load on a Collaborative Lifecycle Management (CLM) application server that could lead to outages or diminish the end user’s quality of service overall. These aren’t necessarily long-running scenarios, but they are known to use large amounts of system resources (e.g. high CPU, memory or heap usage). As such, they have been known at times to degrade server performance and negatively impact user experience.
After reviewing a number of problem reports and escalations, plus several discussions with Support, Services and Development, I identified scenarios for several of the applications. We coined the term ‘expensive scenarios’, though our User Assistance team recently indicated that it could be misconstrued and that a more apt name would be ‘resource-intensive’.
The first set of scenarios was published in v6.0.3 and documented as Known Expensive Scenarios. The title will change in the next release to Known Resource-intensive Scenarios.
For each of the identified scenarios, there is a description of what it is and under what conditions it could become resource-intensive. Further, if there are any known best practices to avoid or mitigate the scenario becoming resource-intensive, these too are captured. These practices could include adjusting application advanced properties that tune the scenario’s behavior, or changing work practices for when and how the scenario is invoked.
For example, importing a large number of requirements into DOORS Next Generation (DNG) can consume significant resources because, subsequent to the import, indexing of the newly imported artifacts occurs, which can block other user activity. When the volume of imported data is high and/or several imports occur at once, system performance can degrade. The wiki describes this scenario, identifies the advanced properties that limit the number of concurrent ReqIF imports, and recommends that imports be kept under 10K requirements or performed when the system is lightly loaded.
Knowing these scenarios helps in a couple of ways. First, as your process and tools teams define usage models for one of these applications, knowing that a particular usage pattern can potentially drive load on the server, leading to degraded performance, allows that usage model to be adjusted to avoid or reduce the likelihood of that occurring. Second, in situations of poor performance or worse, knowing whether these scenarios are occurring can help identify root cause.
This latter case is helped by the logging of start and stop markers when a resource-intensive scenario occurs. Each marker includes the Scenario ID (from Table 1) and a unique instance ID.
To get additional details when the scenario occurs and to aid in understanding its characteristics, advanced (verbose) logging can be enabled. This can be done from the Serviceability page of an application’s admin UI. Note that enabling verbose logging does not require a server restart.
Now when a performance or system anomaly occurs and the application logs are reviewed, should it have occurred during a resource-intensive scenario, you may have a clue as to the cause. The additional logging should, at a minimum, include the data specified in Table 2.
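As an example of how you might mine the logs after the fact, here is a minimal Python sketch that pairs start and stop markers. The marker format in the regex is hypothetical; adapt it to the actual entries in your application logs, keying on the Scenario ID and instance ID described above.

```python
import re
from collections import defaultdict

# Hypothetical marker format carrying a scenario ID and instance ID;
# adjust the pattern to match your actual ccm.log/qm.log entries.
MARKER = re.compile(
    r"(?P<event>start|stop).*?scenario[ _]?id[=:]\s*(?P<scenario>\S+)"
    r".*?instance[ _]?id[=:]\s*(?P<instance>\S+)",
    re.IGNORECASE,
)

open_scenarios = defaultdict(list)
with open("ccm.log") as log:
    for line in log:
        m = MARKER.search(line)
        if not m:
            continue
        key = (m.group("scenario"), m.group("instance"))
        if m.group("event").lower() == "start":
            open_scenarios[key].append(line)
        else:
            open_scenarios.pop(key, None)  # start/stop pair completed

# Anything left started but never stopped: worth correlating with the
# time window of the performance anomaly you are investigating.
for (scenario, instance), lines in open_scenarios.items():
    print(scenario, instance)
```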
As part of our serviceability improvements in v6.0.3, the CLM applications publish various JMX MBeans that may be collected and trended by enterprise monitoring tools such as IBM Monitoring, Splunk, LogicMonitor and others. MBeans exist for several application metrics including counts/occurrences of resource-intensive scenarios.
Each MBean to be published must first be enabled from an application’s admin UI advanced properties page.
After doing so, the monitoring application can be configured to capture that data and display it on a dashboard.
Having a comprehensive enterprise monitoring strategy is essential for a well-managed CLM environment. Tracking occurrences of these scenarios and correlating them against other environment measurements give administrators (and IBM Support) insight when troubleshooting anomalies or proactively evaluating environment performance. In a subsequent post, I will talk further about what to monitor.