Rhapsody is designed to perform well across a wide variety of interface configurations, message volumes, and hardware, but (as with any software) some combinations of configuration, volume, and operating environment will result in performance that is below expectations. Rhapsody provides a number of features to help identify performance problems, as well as options for resolving them. This section introduces common causes of performance bottlenecks, how to identify them, and how they can be resolved.

Architecture

To understand the performance characteristics of the Rhapsody engine, it is useful to understand the basic architecture of the product, and how it manages messages.

Rhapsody stores all data on hard disk in a purpose-built message store. This message store has been designed and optimized for performance, and provides transactions much faster than an SQL database can.

Message Data

Message data is stored in specific message storage files. Once stored, this data is not changed until removed from the message archive. If the message body is altered during processing, a new copy of the message is stored. This enables the Management Console to display the message body at each stage of processing.

From a performance perspective, it is important to realize that each change to a message body causes another copy to be stored in the message store. Changes to message properties are not considered part of the message body; they are stored as metadata associated with a message body. This metadata follows the same rules as message bodies, but it is usually so small that the changes are not considered a performance risk. Indexed properties, however, store additional information and can cause higher disk activity than non-indexed properties.
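
As a rough illustration of this behavior, the sketch below models an append-only message record. The class and method names are illustrative only, not Rhapsody's API: each body edit appends a full copy, while a property change is a small metadata write.

```java
// Illustrative sketch only -- these class and method names are not Rhapsody's
// actual API. It models why body edits cost more than property changes: every
// body edit appends a full new copy, while a property change is a small
// metadata write.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class MessageRecordSketch {
    // Every body revision is kept until archive cleanup removes the message,
    // which is what lets the Management Console show the body at each stage.
    private final List<byte[]> bodyVersions = new ArrayList<>();
    // Properties are stored as metadata associated with the message body.
    private final Map<String, String> properties = new HashMap<>();

    void setBody(byte[] newBody) {
        bodyVersions.add(newBody.clone());   // a complete new copy is stored
    }

    void setProperty(String name, String value) {
        properties.put(name, value);         // small metadata write, low disk cost
    }

    long storedBodyBytes() {
        return bodyVersions.stream().mapToLong(b -> b.length).sum();
    }

    public static void main(String[] args) {
        MessageRecordSketch msg = new MessageRecordSketch();
        byte[] body = new byte[4_096];
        msg.setBody(body);                    // original message body
        msg.setBody(body);                    // a mapping step edits the body
        msg.setBody(body);                    // a second edit stores a third copy
        msg.setProperty("PatientID", "12345");
        System.out.println("Bytes held for this message: " + msg.storedBodyBytes());
    }
}
```

The practical consequence is that avoiding unnecessary body edits on high-volume routes reduces the amount of data written to the message store.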

Queues

Message processing is performed by moving a pointer to the message between internal queues. These queues are associated with the communication points and routes defined in the Rhapsody configuration.

Each communication point has two queues — one for messages being received (input) and one for messages being sent (output). Routes have a single queue for messages that are currently being processed.

When a message arrives on a communication point, it is placed on the communication point's inbound queue. When a route is available to begin processing the message, it is moved from the communication point queue to the route's processing queue. It remains in this queue until it leaves the route and is passed to a communication point's outbound queue (or sometimes another route's processing queue).
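
A minimal sketch of this handoff, using illustrative Java types rather than Rhapsody's internal classes: each queue holds a reference to a message already persisted in the message store, and only that reference moves from queue to queue.

```java
// Illustrative sketch only -- not Rhapsody's internal classes. Queues hold
// references (pointers) to messages already persisted in the message store;
// a handoff moves the reference, never the message body itself.
import java.util.ArrayDeque;
import java.util.Deque;

class QueueHandoffSketch {
    // A queue entry is just an identifier into the message store.
    record MessageRef(long messageStoreId) {}

    public static void main(String[] args) {
        Deque<MessageRef> inboundQueue  = new ArrayDeque<>();  // communication point (input)
        Deque<MessageRef> routeQueue    = new ArrayDeque<>();  // route processing queue
        Deque<MessageRef> outboundQueue = new ArrayDeque<>();  // communication point (output)

        // Message arrives and is persisted; its reference joins the inbound queue.
        inboundQueue.add(new MessageRef(42L));

        // A route becomes available: move the reference onto the route's queue.
        routeQueue.add(inboundQueue.poll());

        // Route processing finishes: hand the reference to the sending side
        // (or, in some configurations, to another route's processing queue).
        outboundQueue.add(routeQueue.poll());

        System.out.println("Ready to send: " + outboundQueue.peek());
    }
}
```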

Threading

Rhapsody uses a number of threads to perform different tasks within the engine. All threads can be viewed from the Management Console, but the most important are the communication point threads and the route execution threads. For details, refer to Threads in Monitoring Rhapsody.

Communication Point Threads

Controller Thread: Controls the running state of the communication point. Used to stop and start the other threads when connections are made or destroyed.

Connection Thread: Used to send and receive messages on a connection. There can be many connection threads for communication points that support multiple connections.

Route Execution Threads

There are a defined number of route execution threads running within a Rhapsody engine at any one time. These threads are used to process messages on routes.

By default, ten threads are created and assigned to message processing. Because each route execution thread works on only a single message at a time, at most ten messages can be processed at any one time.

A route execution thread takes the next waiting message from any route and is not tied to a specific route (although route priorities affect which routes are processed first). This means that all of the threads could be processing messages from the same route at the same time.
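
The following sketch shows this model in illustrative Java; it is not Rhapsody's implementation, and the route names and priority scheme are assumptions. A fixed pool of route execution threads takes the next waiting message from whichever route has work, with route priority deciding which routes are served first.

```java
// Illustrative sketch only -- not Rhapsody's implementation. A fixed pool of
// route execution threads pulls the next waiting message from any route,
// ordered by route priority; with a pool size of ten, at most ten messages
// are being processed at any instant.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.TimeUnit;

class RouteExecutionSketch {
    // Lower number = higher route priority; all threads draw from this one queue,
    // so every thread could end up working on messages from the same route.
    record Work(int routePriority, String routeName, long messageId) {}

    static final PriorityBlockingQueue<Work> pending = new PriorityBlockingQueue<>(
            64, (a, b) -> Integer.compare(a.routePriority(), b.routePriority()));

    public static void main(String[] args) throws InterruptedException {
        ExecutorService routeThreads = Executors.newFixedThreadPool(10); // default pool size
        for (int i = 0; i < 10; i++) {
            routeThreads.submit(() -> {
                try {
                    while (true) {
                        Work w = pending.take();   // blocks until any route has a message
                        System.out.println(Thread.currentThread().getName()
                                + " processing message " + w.messageId()
                                + " on route " + w.routeName());
                    }
                } catch (InterruptedException stopped) {
                    // engine shutdown: the route execution thread exits
                }
            });
        }

        // Messages leaving communication point queues become work for the pool.
        pending.add(new Work(5, "Lab results", 1001L));
        pending.add(new Work(1, "ADT inbound", 1002L));   // higher priority: preferred when both wait

        TimeUnit.MILLISECONDS.sleep(200);                 // let the sketch drain the queue
        routeThreads.shutdownNow();                       // interrupts the blocked threads
    }
}
```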

Processing Optimization

Rhapsody processing has been optimized to ensure there is no loss of data due to poor communication with external systems. Receiving messages is considered the highest-priority processing the engine can perform; this ensures that messages are received by Rhapsody and safely stored. Route processing runs at a lower priority.

Rhapsody also keeps all queues on disk to ensure an accurate recovery should a system failure occur. Messages resume processing from exactly the same state when the system becomes available again.

I/O Throughput

The Rhapsody data store can be configured so that portions of the store are placed on separate devices in order to enhance performance. This is an advanced configuration and is enabled by editing the rhapsody.properties file.

Generally, two partitions are reasonable: the first containing the message store (message bodies and message metadata), and the second containing the remaining data structures, including queues, transactions, and indexes, amongst other components.

The performance advantages accrue from splitting the I/O requests onto multiple devices to reduce the amount of disk contention on a single device. The expectation is that each element of the data store configured as a separate item is provided as a separate mount point, not as separate paths on a single drive.
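
One way to confirm that separately configured locations really resolve to separate devices is to check which file store each path belongs to. The sketch below uses the standard java.nio.file API; the paths shown are placeholders, not Rhapsody defaults.

```java
// Reports the file store (mount point/device) behind each configured data store
// path. The paths are placeholders -- substitute the locations configured in
// rhapsody.properties. If two paths report the same file store, splitting them
// in the configuration will not reduce contention on that device.
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

class DataStoreMountCheck {
    public static void main(String[] args) throws IOException {
        Path[] configuredPaths = {
                Paths.get("/data/rhapsody/store"),     // bulk of the store (placeholder)
                Paths.get("/data/rhapsody/messages")   // message store (placeholder)
        };
        for (Path p : configuredPaths) {
            FileStore fs = Files.getFileStore(p);      // resolves the underlying mount
            System.out.printf("%s -> %s (%s)%n", p, fs.name(), fs.type());
        }
    }
}
```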

The rhapsody.properties file provides for separation of the following items:

  • Bulk of the store (the default case: all of the store, and the default location for any component with the default path specified)
  • Message store
  • Message events
  • Id generation
  • Index store
  • Logging service
  • Backup service
  • Config
  • Message tables
  • Message tracking service
  • Monitoring service
  • Persistent map service
  • Queue service
  • Statistics store
  • Task scheduler
  • User management
  • Validation service

It is not necessary to separate each of these data stores onto separate devices, as many of them incur only a small I/O cost. However, for deployments where I/O throughput may be a constraint on performance, isolating the raw messaging operations (relatively infrequent, large operations in the message store) from the other data stores (a high volume of small operations) is likely to yield a throughput increase, as it parallelizes I/O requests across the separate devices.

The same result can be achieved by using disk technology that parallelizes the requests. Some sites achieve this with SAN deployments that have multiple disk spindles and dynamic spindle allocation. In this case, the Rhapsody data stores can be deployed on a single logical I/O device and the parallelization is managed at the SAN level.

Archive Cleanup Disk Usage Strategy

When using Rhapsody’s in-built archiving for a limited period, disk capacity should be sized so that Rhapsody always has sufficient disk space to operate. Factors to consider when sizing the disk include the following (a rough sizing sketch is shown after this list):

  • The frequency and size of messages being processed by Rhapsody.
  • The archive cleanup period.
  • The number of properties being indexed and complexity of configuration.
  • The number of message body edits taking place per message (has a multiplicative impact on data store size).
  • The number of message property edits taking place (has a multiplicative impact on data store size).
  • The size of the Rhapsody configuration.
  • The version of Rhapsody (certain older versions include defects in both the transaction and message data stores which lead to increased disk consumption).
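
As a rough, back-of-the-envelope sketch of how these factors combine, the message-body contribution to the data store can be estimated as follows. Every figure below is an assumed placeholder, not a recommendation.

```java
// Rough sizing sketch -- every figure here is an assumed placeholder, not a
// recommendation. It estimates only the message-body contribution to the data
// store before archive cleanup removes messages; indexed properties, metadata
// and configuration add further overhead.
class ArchiveSizingSketch {
    public static void main(String[] args) {
        long messagesPerDay      = 200_000;  // assumed daily message volume
        long averageBodyBytes    = 4_096;    // assumed average message body size
        int  bodyEditsPerMessage = 2;        // each edit stores another full copy
        int  cleanupPeriodDays   = 30;       // archive cleanup period

        // Each message contributes its original body plus one copy per edit.
        long copiesPerMessage = 1L + bodyEditsPerMessage;
        long bytesPerDay      = messagesPerDay * averageBodyBytes * copiesPerMessage;
        long totalBytes       = bytesPerDay * cleanupPeriodDays;

        System.out.printf("Estimated message-body storage before cleanup: %.1f GB%n",
                totalBytes / 1e9);   // ~73.7 GB under these assumptions
    }
}
```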

Diagnosing Performance Issues

Refer to Performance Problems for details on diagnosing performance issues.