Thanks to the underlying Java platform, OpenDJ software runs well on a variety of processor architectures. Many directory service deployments meet their service-level agreements without the very latest or very fastest hardware.
For a server evaluation installation, you need 256 MB of memory (32-bit) or 1 GB of memory (64-bit) available to OpenDJ, with 100 MB of free disk space for the software and a small set of sample data.

For installation in production, read the rest of this section. You need at least 2 GB of memory for OpenDJ, and four times the disk space needed to house initial production data in LDIF format.[1] To estimate the disk space needed more accurately, import a known fraction of the initial LDIF with OpenDJ configured as for production, run tests based on the estimated rates of change and growth in directory data, and then extrapolate from the space actually used in the test environment to the disk space you need in production.
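As a minimal sketch of that extrapolation (the fraction, measured space, and growth factor below are hypothetical example values, not measurements):

```java
// Extrapolate production disk needs from a partial test import.
// All figures are hypothetical examples; substitute your own measurements.
public class DiskEstimate {
    public static void main(String[] args) {
        double importedFraction = 0.10;  // fraction of the initial LDIF imported in the test
        double testDiskUsedGb = 6.0;     // disk space the test database actually used, in GB
        double growthFactor = 1.5;       // expected growth from changes and additions over time

        double fullImportGb = testDiskUsedGb / importedFraction;
        double productionGb = fullImportGb * growthFactor;
        System.out.printf("Plan for at least %.0f GB of database storage.%n", productionGb);
    }
}
```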
OpenDJ directory servers almost always benefit from having enough system memory to cache all directory database files, because reading from and writing to memory is typically much faster than reading from and writing to disk storage. For small data sets, you might not need extra memory. For large directories with millions of user entries, however, the system might not have enough memory slots to hold sufficient RAM to cache everything. To improve performance in such cases, one approach is to add solid-state drives as an intermediate cache between memory and disk storage.
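One quick way to gauge how much memory full caching would take is to total the on-disk size of the database files. The following sketch assumes a Berkeley DB JE backend whose log files (*.jdb) live under a local db/userRoot directory; the path is an assumption to adjust for your own layout:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sum the sizes of Berkeley DB JE log files (*.jdb) to approximate
// how much memory caching the whole database would require.
public class CacheSizeEstimate {
    public static void main(String[] args) throws IOException {
        Path dbDir = Paths.get("/path/to/opendj/db/userRoot"); // example path, adjust as needed
        long totalBytes = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dbDir, "*.jdb")) {
            for (Path file : files) {
                totalBytes += Files.size(file);
            }
        }
        System.out.printf("Database files total %.1f MB; size system memory accordingly.%n",
                totalBytes / (1024.0 * 1024.0));
    }
}
```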
Processor architectures that provide fast single-thread execution tend to help OpenDJ software deliver the lowest response times. For top-end performance, both in terms of sub-millisecond response times and of throughput ranging from tens of thousands to hundreds of thousands of operations per second, the latest x86/x64 architecture chips tend to perform better than others tested. Chip multi-threading (CMT) processors can do very well on directory servers delivering pure search throughput, even though response times can be higher. Yet CMT processors can be slow to absorb hundreds or thousands of write operations per second: their slower threads get blocked waiting on resources, making them suboptimal for topologies with high write throughput requirements.
On systems with fast processors and enough memory to cache directory data completely, the network can become a bottleneck. Even if a single 1 Gbit Ethernet interface offers plenty of bandwidth to handle your average traffic load, it can be too small for peak traffic loads. Furthermore, you might choose to use separate interfaces for administrative traffic and application traffic. To estimate what network hardware you need, calculate the size of the data you return to applications during peak load. For example, if you expect a peak load of 100,000 searches per second, each returning a full 8 KB entry, you need a network that can handle 800 MB/sec (6.4 Gbit/sec) of throughput, not counting any other operations, such as writes that result in replication traffic.
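The general form of that calculation, shown as a worked equation for the example numbers above:

$$
\text{bandwidth} = \text{peak search rate} \times \text{entry size}
= 100{,}000/\text{s} \times 8\ \text{KB} = 800\ \text{MB/s} \approx 6.4\ \text{Gbit/s}
$$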
The storage hardware you choose must house not only directory data, including historical data for replication, but also logs. If you choose to retain access logs for auditing purposes on a heavily used directory, dedicate storage for the log archives as well. Furthermore, your storage must keep pace with the write throughput. Write throughput can arise from modify, modify DN, add, and delete operations, but it can also result from bind operations, for example when the last successful bind is recorded or when account lockout is configured. In a replicated topology, a directory service not only writes entries to disk when they are changed, but also writes changelog data and historical information in order to resolve potential replication conflicts. Just as you base network throughput needs on peak loads, base storage throughput needs on peak loads as well.
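As a rough illustration only (the per-operation breakdown is an assumption for the example, not a measured figure): if each client write in a replicated topology triggers roughly one entry write, one changelog write, and one write of historical metadata, then

$$
\text{peak disk write throughput} \approx 3 \times \text{peak write rate} \times \text{average write size}
$$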
> **Note**
>
> OpenDJ servers do not currently support network file systems such as NFS for database storage. Provide sufficient disk space on local storage such as internal disk or an attached disk array.
[1] OpenDJ stores data in Berkeley DB Java Edition, which is implemented as a rolling log. Berkeley DB appends updates to the end of the last log file and marks old pages as deleted. Berkeley DB cleaner threads monitor the log file occupancy ratio, moving live data forward so that old log files can be removed. Yet with the default occupancy ratio of 50%, log files are cleaned only when they hold less than 50% valid pages. As a result, the database can reach twice its initial size in the worst case.
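Expressed as a worked equation, with live data of size $S$ and the default minimum occupancy ratio $r = 0.5$:

$$
S_{\max} \approx \frac{S}{r} = \frac{S}{0.5} = 2S
$$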
Furthermore, when you import data from LDIF, OpenDJ not only stores the data, but also builds indexes for many of the attributes, resulting in additional growth. Replication historical data and other operational attributes can also take up space.
Finally, it makes sense to leave space for growth in the database size as you modify and add entries over time.
