IBM WebSphere Application Server is a reliable, enterprise-class application server that provides a set of core components, resources, and services for developers to use in their applications. Every application has unique requirements and often uses application server resources in very different ways. To provide a high degree of flexibility and support this wide range of applications, WebSphere Application Server offers a comprehensive set of tuning parameters to help you improve application performance.
The default values of the most commonly tuned parameters have been chosen to ensure that the broadest range of applications performs well out of the box. However, no two applications use the application server in exactly the same way, so no single set of tuning values can suit every application. This underscores the importance of focused performance testing and tuning for your own application.
This article discusses some of the parameters most commonly tuned in WebSphere Application Server V7.0 (and earlier releases) and the methods used to tune them. Rather than the generic recommendations given in other tuning articles, this article uses the Apache DayTrader Performance Benchmark Sample as a case study. With the DayTrader application, you can clearly identify which major server components are in use, focus tuning on those areas, and observe the benefit of each tuning change.
As you read on, keep in mind the following points about application server performance tuning:
- Increased performance often comes at the expense of application or application server features or function. Carefully consider the trade-off between performance and features when weighing tuning changes.
- Factors beyond the application server can affect performance, including hardware and operating system configuration, other processes running on the system, the performance of back-end database resources, network latency, and so on. Take these factors into account when you conduct your own performance evaluations.
- The performance improvements discussed here are specific to the DayTrader application, the workload described, and the supporting hardware and software stack. Any gains from the tuning changes described in this article will certainly differ for your applications and should be validated with your own performance evaluations.
The Apache DayTrader Performance Benchmark Sample application simulates a simple stock trading system that lets users log in and out, view their portfolios, look up stock quotes, buy and sell stock, and manage account information. DayTrader is not only a good functional test application; it also provides a standard workload for characterizing and measuring application server and component-level performance. DayTrader (and the IBM Trade Performance Benchmark Sample application from which it evolved) was not written to deliver optimal performance; rather, its purpose is to compare application server releases and alternative implementation styles and patterns.
DayTrader is built on a core set of Java™ Enterprise Edition (Java EE) technologies, including Java servlets and JavaServer™ Pages (JSPs) for the presentation layer, and Java Database Connectivity (JDBC), Java Message Service (JMS), Enterprise JavaBeans™ (EJBs), and message-driven beans (MDBs) for the back-end business logic and persistence layer. Figure 1 provides a high-level view of the application architecture.
To help you evaluate some common Java EE persistence and transaction-management patterns, DayTrader provides three different implementations of its business services. These implementations (or run-time modes) are shown in Table 1.
| Run-time mode | Pattern | Description |
| --- | --- | --- |
| Direct | Servlet to JDBC | Uses custom JDBC code to perform create, read, update, and delete (CRUD) operations directly against the database. Database connections, commits, and rollbacks are managed manually in the code. |
| Session Direct | Servlet to Stateless Session Bean to JDBC | Uses custom JDBC code to perform CRUD operations directly against the database. Database connections are managed manually in the code. Database commits and rollbacks are managed automatically by the stateless session bean. |
| EJB | Servlet to Stateless Session Bean to Entity Bean | The EJB container assumes responsibility for all queries, transactions, and database connections. |
This article revisits these run-time modes to show how the various tuning changes affect these three common Java EE persistence and transaction implementation styles.
As mentioned earlier, an understanding of the application architecture, the server components, and the resources the application uses is very important when tuning performance. With this knowledge you can quickly filter the tunable parameters and focus on those that directly affect the core of the application.
Performance tuning usually starts with the Java Virtual Machine (JVM), which is the foundation of the application server. From there, tuning is driven primarily by the application server components the application uses. For example, from the architecture diagram (Figure 1) you can identify some of the major tunable application server components for DayTrader:
- Web and EJB containers
- The associated thread pools
- Database connection pools
- The default messaging provider
The rest of this article discusses in detail the tuning options that affect DayTrader performance for the components listed above. The options fall into the following categories:
- Basic tuning: covers the most commonly tuned and used application server components, starting with the JVM. These settings typically yield the largest gains.
- Advanced tuning: a second set of higher-level tuning parameters, usually tied to specific scenarios and most often used to squeeze the last bit of performance out of a system.
- Asynchronous messaging tuning: options specific to applications that use the WebSphere Application Server messaging components for asynchronous messaging.
For each tuning parameter we discuss its applicability, its function, and ultimately its measured impact on performance (where possible, across the various persistence and transaction-management modes). We also introduce some tools that assist with tuning specific parameters. Links to the relevant WebSphere Application Server Information Center documentation and other related resources appear throughout the article and are compiled in the References section.
This section discusses:
- JVM heap size
- Thread pool sizes
- Connection pool size
- Data source statement cache size
- ORB pass-by-reference
The JVM heap size parameters directly influence garbage collection behavior. Increasing the JVM heap size permits more objects to be created before an allocation failure triggers a garbage collection, which usually lengthens the interval between garbage collection (GC) cycles. Unfortunately, a larger heap also increases the time needed to find and process the objects that require collection. JVM heap size tuning therefore usually involves finding a balance between the interval between garbage collections and the pause time needed to perform each collection.
To tune the JVM heap size you need to enable verbose GC. You can do this in the WebSphere Application Server administrative console under Servers => Application servers => server_name => Process definition => Java Virtual Machine. With verbose GC enabled, the JVM prints useful information about each garbage collection, such as the free and used bytes in the heap, the interval between collections, and the pause time. This information is recorded in the native_stderr.log file, which you can then feed into various tools to visualize heap usage.
WebSphere Application Server's default heap settings are an initial heap size of 50 MB and a maximum of 256 MB. Under normal circumstances you should set the minimum and maximum heap sizes to the same value to prevent the JVM from resizing the heap dynamically. Pinning the heap size not only simplifies GC analysis, it also avoids the overhead of repeatedly allocating and deallocating heap memory. The drawback of setting the minimum and maximum heap sizes to the same value is a slower JVM startup, because the JVM must allocate the larger heap up front.
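For example, pinning the heap and turning on verbose GC might look like this in the Generic JVM Arguments field (the 1024 MB value is the size chosen for DayTrader later in this article; substitute your own):

```
-Xms1024m -Xmx1024m -verbose:gc
```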
In some scenarios it is better to set the minimum and maximum heap sizes to different values. One such scenario is hosting multiple application server instances with differing workloads on the same server. The JVM can then respond dynamically to changing workload requirements and use system memory more efficiently.
With verbose GC enabled, we ran four tests with the initial and maximum heap sizes both set to 256 MB, 512 MB, 1024 MB, and 2048 MB. You can analyze verbose GC output by hand, but graphical tools such as the IBM Monitoring and Diagnostic Tools for Java - Garbage Collection and Memory Visualizer (included in the free IBM Support Assistant download) greatly simplify the process. We used this tool to inspect the detailed GC data from the test runs and adjust the heap size appropriately for the DayTrader application.
The first item to monitor in the verbose GC output is the free heap after collection. This metric is commonly used to determine whether the application has a Java memory leak: if it never reaches a steady-state value and instead keeps decreasing over time, the application clearly has a memory leak. The free heap after collection can also be combined with the heap size to calculate the working-set memory (or "footprint") used by the server and the application: simply subtract the free heap value from the total heap size.
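As a rough illustration (not part of DayTrader, and the sample numbers are invented), the two checks described above can be sketched as follows:

```java
import java.util.List;

public class GcHeapCheck {
    // Free heap after each collection, sampled from verbose GC output (MB).
    // A steadily falling series suggests a Java heap leak.
    static boolean looksLikeLeak(List<Long> freeAfterGcMb) {
        for (int i = 1; i < freeAfterGcMb.size(); i++) {
            if (freeAfterGcMb.get(i) >= freeAfterGcMb.get(i - 1)) return false;
        }
        return freeAfterGcMb.size() > 1;
    }

    // Working set ("footprint") = configured heap size - free heap after GC.
    static long footprintMb(long heapMb, long freeAfterGcMb) {
        return heapMb - freeAfterGcMb;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeLeak(List.of(800L, 760L, 710L, 655L))); // true: keeps falling
        System.out.println(looksLikeLeak(List.of(800L, 795L, 801L, 797L))); // false: steady state
        System.out.println(footprintMb(1024, 800));                         // 224 MB footprint
    }
}
```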
Note that although the verbose GC data and the chart in Figure 2 can help detect memory leaks in the Java heap, they cannot detect native memory leaks. Native memory leaks occur when native components (such as vendor native libraries written in C/C++ and invoked through the Java Native Interface (JNI) API) leak memory outside the Java heap. In these situations you need platform tools (such as the top command on Linux® platforms, Perfmon on Windows® platforms, and so on) to monitor the native memory usage of the Java process. The WebSphere Application Server Support pages provide detailed documentation on diagnosing memory leaks.
The next items to consider are the garbage collection intervals and the average pause times. The GC interval is the time between garbage collection cycles; the pause time is the time needed to complete a cycle. Figure 3 plots the garbage collection intervals for the four heap sizes. Their final averages were 0.6 seconds (256 MB), 1.5 seconds (512 MB), 3.2 seconds (1024 MB), and 6.7 seconds (2048 MB).
Figure 4 shows the average pause times for the four heap sizes. In our tests, their final averages were 62 ms (256 MB), 69 ms (512 MB), 83 ms (1024 MB), and 117 ms (2048 MB). This clearly demonstrates the standard trade-off of increasing the Java heap size: as the heap grows, the interval between GCs increases, letting the application complete more work before the JVM pauses to perform a collection. However, a larger heap also means the garbage collector must process more objects, which drives the GC pause times up.
Together, the GC intervals and pause times make up the total time spent in garbage collection. The percentage of time spent in garbage collection is displayed in the IBM Monitoring and Diagnostic Tools for Java - Garbage Collection and Memory Visualizer, and can be calculated with the following formula: % time in GC = average pause time / (average GC interval + average pause time) × 100.
Figure 5 shows the percentage of time spent in garbage collection and the resulting throughput (requests per second) for each case.
For the DayTrader application we settled on initial and maximum heap sizes of 1024 MB. At that point diminishing returns set in: further increases in heap size no longer bring proportional performance improvements. This choice strikes a balance between longer intervals between garbage collections and still-short pause times, thereby reducing the overall time spent in garbage collection.
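To make the trade-off concrete, the sketch below applies the calculation of time spent in garbage collection to the interval and pause averages measured above; treat it as an illustration (the helper names are ours), not part of the benchmark:

```java
public class GcOverhead {
    // % time in GC = pause / (interval + pause) * 100,
    // using the average interval and pause time for each heap size.
    static double percentInGc(double intervalSec, double pauseSec) {
        return 100.0 * pauseSec / (intervalSec + pauseSec);
    }

    public static void main(String[] args) {
        System.out.printf("256 MB:  %.1f%%%n", percentInGc(0.6, 0.062)); // ~9.4%
        System.out.printf("512 MB:  %.1f%%%n", percentInGc(1.5, 0.069)); // ~4.4%
        System.out.printf("1024 MB: %.1f%%%n", percentInGc(3.2, 0.083)); // ~2.5%
        System.out.printf("2048 MB: %.1f%%%n", percentInGc(6.7, 0.117)); // ~1.7%
    }
}
```

The overhead drops quickly up to 1024 MB and only marginally beyond it, which is consistent with the diminishing returns observed above.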
Another important aspect of JVM tuning is the garbage collection policy. The three main GC policies are:
- optthruput: (default) performs the mark and sweep operations while the application is paused, maximizing application throughput.
- optavgpause: performs the mark and sweep operations concurrently while the application runs, minimizing the longest pauses; this provides the best application response times.
- gencon: treats short-lived and long-lived objects differently, providing a combination of shorter pause times and high application throughput.
The DayTrader application does not make heavy use of long-lived objects and performed best with the default GC policy. Applications differ, however, so you should evaluate the GC policies to find the best one for your application. The developerWorks articles on garbage collection policies provide more information.
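On the IBM JVM, the policy is selected with the -Xgcpolicy generic JVM argument, for example:

```
-Xgcpolicy:optthruput   (default: stop-the-world mark and sweep)
-Xgcpolicy:optavgpause  (concurrent mark and sweep, shorter pauses)
-Xgcpolicy:gencon       (generational and concurrent)
```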
Figure 6 shows the performance improvements achieved for the various DayTrader run-time modes by tuning the JVM heap size. In the chart, blue bars represent the baseline throughput and red bars the throughput after adjusting the tuning parameter under discussion. To make the relative throughput differences between the run-time modes easy to see, all results are compared against the EJB-mode baseline.
For example, before tuning, Session Direct and Direct modes were 23% and 86% faster, respectively, than the EJB-mode baseline. The labels above the bars show the overall performance improvement for each run-time mode. In this case, JVM tuning improved the three run-time modes to different degrees, since each exercises a different object allocation pattern. The gains ranged from 3% (JDBC mode) to 9% (EJB mode).
Similar charts appear at the end of each tuning parameter discussion. As the parameters from the preceding sections are adjusted, the application performance improvements accumulate. A chart at the end of the article (Figure 22) shows the overall performance improvement achieved by all of the tuning changes.
- Tuning IBM's Java Virtual Machine
- Java technology, IBM style: garbage collection strategy, Part 1
- IBM Support Assistant
WebSphere Application Server runs the many threads that perform its work on thread pools. A thread pool allows server components to reuse threads, avoiding the cost of creating new threads at run time to service each new request. The three most commonly used (and tuned) application server thread pools are:
- Web container: used when requests arrive over HTTP. In the DayTrader architecture diagram (Figure 1), you can see that most traffic enters DayTrader through the Web container.
- Default: used when requests arrive for a message-driven bean, or when a particular transport chain has not been assigned a specific thread pool.
- ORB: used when remote requests for enterprise beans arrive over RMI/IIOP from an EJB client, a remote EJB interface, or another application server.
The important tuning options associated with thread pools are shown in Table 2.
| Setting | Description |
| --- | --- |
| Minimum size | The minimum number of threads allowed in the pool. When the application server starts, no threads are initially assigned to the thread pool. Threads are added as the application server workload requires them, until the number of threads in the pool equals the value of the minimum size field. Threads are subsequently added and removed as the workload changes. However, the number of threads in the pool never falls below the value of the minimum size field, even if some threads are idle. |
| Maximum size | The maximum number of threads to maintain in the thread pool. |
| Thread inactivity timeout | The period of inactivity (in milliseconds) after which a thread is reclaimed. A value of 0 means no wait; a negative value (less than 0) means wait forever. |
If the machine hosts a single application server instance, a good starting point is 5 threads per server CPU core for the default thread pool, and 10 threads per server CPU for the ORB and Web container thread pools. For machines with more than 4 CPUs, the defaults usually satisfy most applications. If the machine hosts multiple application server instances, reduce these sizes accordingly. Conversely, there are situations where the thread pool sizes need to be increased to cope with slow I/O or long-running back-end connections. Table 3 lists the default thread pool sizes and inactivity timeouts for the most commonly tuned thread pools.
| Thread pool | Minimum size | Maximum size | Inactivity timeout |
| --- | --- | --- | --- |
| Web container | 50 | 50 | 60000 ms |
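The per-CPU rule of thumb above can be sketched as a quick calculation (an illustration only; the helper names are ours):

```java
public class ThreadPoolSizing {
    // Rule of thumb from the text: per CPU core, 5 threads for the default
    // pool and 10 for the Web container and ORB pools; scale down when the
    // machine hosts multiple application server instances.
    static int defaultPool(int cpus, int instances) {
        return Math.max(1, (cpus * 5) / instances);
    }

    static int webOrOrbPool(int cpus, int instances) {
        return Math.max(1, (cpus * 10) / instances);
    }

    public static void main(String[] args) {
        System.out.println(defaultPool(4, 1));  // 20 threads on a 4-CPU box
        System.out.println(webOrOrbPool(4, 1)); // 40 threads
        System.out.println(webOrOrbPool(4, 2)); // 20: two instances share the CPUs
    }
}
```

Treat the results as starting points to validate under load, not fixed answers.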
To modify the thread pool settings, navigate in the administrative console to Servers => Application Servers => server_name => Thread Pool. You can also use the Performance Advisors for recommendations on thread pool sizes and other settings.
IBM Tivoli® Performance Viewer is a tool embedded in the administrative console that lets you view Performance Monitoring Infrastructure (PMI) data for almost any server component. The viewer offers advice to help you tune systems and suggests viable alternatives to inefficient settings. See the WebSphere Application Server Information Center to learn how to start Tivoli Performance Viewer and view PMI data.
Figure 7 shows PMI data for the Web container thread pool during DayTrader application startup and steady-state peak load. The pool size (yellow) is the average number of threads in the pool, and the active count (red) is the number of currently active threads. The figure shows that the default setting of a maximum of 50 Web container threads is well suited here, because not all 50 threads were allocated and the concurrent workload averaged around 18 threads. Since the default thread pool sizes were sufficient, no thread pool changes were needed.
Prior to WebSphere Application Server V6.x, there was a one-to-one mapping between concurrent client connections and Web container thread pool threads. In other words, if 40 clients accessed an application, 40 threads were needed to service the requests. In WebSphere Application Server V6.0 and 6.1, the introduction of Native IO (NIO) and Asynchronous IO (AIO) made it possible to scale to thousands of client connections with a relatively small number of threads. This explains why Figure 7 shows an average of only 18 threads servicing the HTTP load from 50 concurrent client connections driven by the load generator. Based on this information, you could reduce the thread pool size to cut the overhead of managing a larger-than-needed pool. However, doing so would reduce the server's capacity to respond to peak loads, when a large number of threads are needed. Thread pool sizes should be chosen carefully, with both expected average and peak workloads in mind.
Each time an application attempts to access a back-end store (such as a database), it requires resources to create, maintain, and release the connection to that data store. To ease the resulting strain on application resources, the application server lets you establish a pool of back-end connections that applications can share. Connection pooling spreads the connection overhead across several user requests, conserving resources for future requests. The important tuning options associated with connection pools are shown in Table 4.
| Setting | Description |
| --- | --- |
| Minimum connections | The minimum number of physical connections to maintain. If the connection pool size is at or below the minimum connection pool size, the unused timeout thread does not discard physical connections. However, the pool does not create connections solely to maintain the minimum connection pool size. |
| Maximum connections | The maximum number of physical connections that can be created in the pool. These are the physical connections to the back-end database. Once this value is reached, no new physical connections are created; requesters must wait until a physical connection currently in use is returned to the pool, or until a ConnectionWaitTimeoutException is thrown (based on the connection timeout setting). Setting maximum connections too high can flood your back-end resources with connection requests. |
| Unused timeout | The interval (in milliseconds) after which an idle connection is discarded. A value of 0 means no wait; a negative value (less than 0) means wait forever. |
The goal of connection pool tuning is to ensure that each thread that needs a database connection has one, and that requests are not queued waiting for access to the database. For the DayTrader application, each task performs a query against the database. Since each thread performs a task, each concurrent thread needs a database connection. Typically, all incoming HTTP requests run on Web container threads, so the maximum connection pool size should be at least as large as the maximum Web container thread pool size. Sometimes, however, requests run on the default thread pool instead, for example for asynchronous messaging with message-driven beans.
In general, the best practice is to determine which thread pools service tasks that require data source connections and size the connection pools accordingly. In this case, the maximum connection pool size was set to the combined maximum sizes of the default and Web container thread pools (70). To modify the connection pool settings, navigate in the administrative console to Resources => JDBC => Data Sources => data_source => Connection pool properties. Keep in mind that, from a connection management perspective, not all applications behave as well as DayTrader: a single thread may use more than one connection at a time.
Figure 8 shows the connection pool PMI data at steady-state peak load for DayTrader with the default connection pool sizes (minimum 1, maximum 10). FreePoolSize (yellow) is the number of free connections in the pool, and UseTime (green) is the average time (in milliseconds) a connection is in use. The figure shows that all 10 connections were always in use. The accompanying table shows other important metrics: WaitingThreadCount shows 33 threads waiting for a database connection with an average WaitTime of 8.25 ms, and the pool's overall PercentUsed metric shows 100% utilization.
Figure 9 shows the same chart after the connection pool was tuned to a minimum of 10 and a maximum of 70. It shows that plenty of free connections were available and no threads waited for a connection, yielding faster response times.
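The queuing symptom above follows directly from the sizing rule: if every worker thread needs one connection, a pool smaller than the thread pool forces threads to wait. A tiny sketch (helper names are ours; real waiter counts depend on how many threads are simultaneously active):

```java
public class PoolSizingCheck {
    // Upper bound on threads forced to queue for a connection when each
    // active worker thread holds exactly one database connection.
    static int maxWaiters(int workerThreads, int maxConnections) {
        return Math.max(0, workerThreads - maxConnections);
    }

    public static void main(String[] args) {
        System.out.println(maxWaiters(50, 10)); // up to 40 of 50 threads may queue
        System.out.println(maxWaiters(50, 70)); // 0: pool covers the thread pool
    }
}
```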
The data source statement cache size specifies the number of JDBC statements that can be cached per connection. The WebSphere Application Server data source optimizes prepared statements and callable statements by caching statements that are not currently being used by an active connection. If your application, like DayTrader, uses many statements, increasing this parameter can sometimes improve performance. To configure the statement cache size, navigate to Resources => JDBC => Data sources => data_source => WebSphere Application Server data source properties.
Several different methods can be used to size the data source statement cache. One technique is to review the application code (or an SQL trace collected from the database or database driver) for all unique prepared statements, and ensure that the cache size is larger than that number. Another, iterative option is to increase the cache size while running the application at steady-state peak load until the PMI metrics report no more cache discards. Figure 11 shows the same connection pool PMI chart after the data source statement cache size was raised from its default (10) to 60. The PrepStmtCacheDiscardCount metric (red) is the number of statements discarded because the cache was full. Looking back at Figure 9, before the statement cache size was tuned, the number of discarded statements exceeded 1.7 million; Figure 11 shows that no statement discards occurred after tuning the cache size.
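The discard behavior can be illustrated with a toy per-connection LRU cache (a sketch only; WebSphere's actual statement cache implementation differs):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StatementCacheSketch {
    // Models a per-connection prepared-statement cache as an access-order
    // LRU map. When capacity is smaller than the number of distinct
    // statements in the workload, entries are evicted, which is what the
    // PrepStmtCacheDiscardCount metric reports.
    static int discards(int cacheSize, String[] executedSql) {
        final int[] discarded = {0};
        Map<String, Boolean> cache = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                if (size() > cacheSize) { discarded[0]++; return true; }
                return false;
            }
        };
        for (String sql : executedSql) cache.put(sql, Boolean.TRUE);
        return discarded[0];
    }

    public static void main(String[] args) {
        String[] workload = {"q1", "q2", "q3", "q1", "q2", "q3"};
        System.out.println(discards(2, workload)); // 4: cache too small, constant churn
        System.out.println(discards(3, workload)); // 0: cache holds every unique statement
    }
}
```

Sizing the real cache above the number of unique statements eliminates discards in the same way.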
The Object Request Broker (ORB) pass-by-reference option determines whether pass-by-reference or pass-by-value semantics should be used when handling parameter objects in an EJB request. To locate this option in the administrative console, navigate to Servers => Application Servers => server_name => Object Request Broker (ORB). By default the option is disabled, and a copy of each parameter object is passed to the invoked EJB method. This is considerably more expensive than passing a simple reference to the parameter object.
In essence, the ORB pass-by-reference option treats the invoked EJB method as a local call (even for EJBs with remote interfaces) and avoids the otherwise required object copies. If remote interfaces are not absolutely necessary, a slightly simpler alternative that requires no tuning is to use EJBs with local interfaces. However, by using local instead of remote interfaces you lose some of the benefits that come with remote interfaces, such as location transparency in distributed environments and workload management capabilities.
The ORB pass-by-reference option provides a benefit only when the EJB client (that is, the servlet) and the invoked EJB module are located in the same class loader. This requirement means that the EJB client and the EJB module must be deployed in the same EAR file and run in the same application server instance. If the EJB client and EJB module are mapped to different application server instances (often called a split-tier configuration), the EJB module must be invoked remotely using pass-by-value semantics.
Because the DayTrader EAR contains both the WEB and EJB modules, and both are deployed to the same application server instance, the ORB pass-by-reference option can be used to achieve a performance gain. As the measurements in Figure 13 show, DayTrader benefits significantly from this option, because all requests from the servlets pass to the stateless session beans through a remote interface, except in Direct mode, which bypasses the EJB container and manages JDBC transactions manually.
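The semantic difference can be illustrated in plain Java, with serialization standing in for ORB marshalling (an illustration only, not the ORB's actual mechanism):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class OrbPassingSketch {
    static class Quote implements Serializable {
        double price;
        Quote(double price) { this.price = price; }
    }

    // Pass-by-value: the callee receives a copy; here the copy is made by a
    // serialize/deserialize round trip, standing in for ORB marshalling cost.
    static Quote byValue(Quote q) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(q);
            return (Quote) new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray())).readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Quote original = new Quote(100.0);

        Quote copy = byValue(original);     // pass-by-value: independent copy
        copy.price = 50.0;
        System.out.println(original.price); // 100.0 -- caller unaffected

        Quote ref = original;               // pass-by-reference: same object, no copy
        ref.price = 50.0;
        System.out.println(original.price); // 50.0 -- caller sees the change
    }
}
```

Pass-by-reference skips the entire copy step, which is where the performance gain comes from, but it also means the callee can mutate the caller's object.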
This section discusses:
- Servlet caching
- HTTP transport persistent connections
- Large page support
- Disable unused services
- Web server location
WebSphere Application Server's DynaCache provides a general in-memory caching service for server-generated objects and page fragments. The DistributedMap and DistributedObjectCache interfaces let applications cache and share Java objects by storing references to those objects in the cache for later use. Servlet caching, on the other hand, allows servlet and JSP response fragments to be stored and managed by a set of customizable caching rules.
In the DayTrader application, a market summary is displayed every time a user visits their home page. The summary includes a list of the top five gaining and declining stocks, along with the current stock index and trading volume. This activity requires database queries that add a noticeable delay to the rendering of the user's home page. With servlet caching, marketSummary.jsp can be cached, largely eliminating the cost of the expensive database queries and improving the home page response time. The cache refresh interval is configurable and is set to 60 seconds in the example in Listing 1. DynaCache can also be used to cache other servlet/JSP fragments and data in DayTrader. This example demonstrates the improvements that can be achieved by using the cache to avoid complex server operations.
To enable servlet caching, navigate in the administrative console to Servers => Application servers => server_name => Web container settings => Web container. The URI paths of the servlets or JSPs to be cached must be defined in a cachespec.xml file, located inside the Web module's WEB-INF directory. For marketSummary.jsp in DayTrader, cachespec.xml looks similar to Listing 1.
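The original listing is not reproduced here, but a minimal cachespec.xml entry for marketSummary.jsp with a 60-second timeout might look like the following sketch, based on the standard cachespec.xml format (verify the element details against the Information Center):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE cache SYSTEM "cachespec.dtd">
<cache>
    <cache-entry>
        <class>servlet</class>
        <!-- URI of the JSP fragment to cache, relative to the Web module -->
        <name>/marketSummary.jsp</name>
        <cache-id>
            <!-- one shared cached copy, refreshed every 60 seconds -->
            <timeout>60</timeout>
            <priority>1</priority>
        </cache-id>
    </cache-entry>
</cache>
```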
- Task overview: Using the dynamic cache service to improve performance
- Configuring cacheable objects with the cachespec.xml file
The HTTP transport persistent connection setting specifies that an outgoing HTTP response should use a persistent (keep-alive) connection instead of a connection that closes after one request/response exchange. In many cases, raising the maximum number of persistent requests allowed on a single HTTP connection improves performance. Allowing an unlimited number of persistent requests per connection can bring a significant improvement for SSL connections, because each new SSL connection incurs the overhead of the SSL handshake, with its protocol exchange of keys and negotiation. Maximizing the number of requests each connection can handle minimizes the impact of this overhead. In addition, fast, high-throughput applications gain performance by keeping connections open rather than opening and closing a connection for every request. When this property is set to 0 (zero), the connection stays open as long as the application server is running. However, if security is a concern, consider this setting carefully: limiting the number of persistent requests per connection can help prevent denial-of-service attacks in which a client tries to hold a keep-alive connection open indefinitely.
The HTTP transport persistent connection settings can be modified in the administrative console by navigating to Servers => Application servers => server_name => Ports. Then click View associated transports for the port whose HTTP transport channel settings you want to change.
During DayTrader testing, the Maximum persistent requests per connection value was raised from 100 (the default) to unlimited. The chart in Figure 15 shows the resulting throughput of a simple "Hello World" servlet accessed over HTTP (non-SSL) and HTTPS (SSL), before and after setting the maximum persistent requests per connection to unlimited.
Some platforms support establishing large contiguous memory regions that allow memory pages larger than the default page size. Depending on the platform, large page sizes range from 4 MB (Windows) to 16 MB (AIX), whereas the default page size is only 4 KB. Many applications (including Java-based ones) often benefit from large pages, because they reduce the CPU overhead of managing a large number of small pages.
To use large memory pages, you must first define and enable them in the operating system. Each platform has different system requirements that must be configured before large page support can be enabled. The WebSphere Application Server Information Center provides detailed steps for each platform:
Once the operating system has been configured, you can enable large page support in the JVM by specifying -Xlp in the Generic JVM arguments field of the administrative console, under Servers => Application servers => server_name => Process definition => Java Virtual Machine. Note that when large pages are enabled, the operating system sets aside a large contiguous region of memory for the JVM to use. If too little memory remains for the other applications that are running, paging (swapping pages from memory out to disk) can occur, which significantly degrades system performance.
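The same change can also be scripted instead of made through the console. The following is a rough wsadmin/Jython sketch (node and server names are placeholders for your topology; verify attribute names against your release before use):

```python
# wsadmin -lang jython sketch: add -Xlp to the generic JVM arguments
# 'node1' and 'server1' are hypothetical names for your topology.
jvm = AdminConfig.list('JavaVirtualMachine',
        AdminConfig.getid('/Node:node1/Server:server1/')).splitlines()[0]
AdminConfig.modify(jvm, [['genericJvmArguments', '-Xlp']])
AdminConfig.save()
```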
Disabling services that an application does not need can improve performance. PMI is a typical example. It is important to note that PMI must be enabled to see the performance metrics recorded earlier in this article and to receive advice from the performance advisors. Disabling PMI removes the ability to view this information, but it also provides a small performance improvement. PMI can be disabled for an application server in the administrative console by navigating to Monitoring and Tuning => Performance Monitoring Infrastructure (PMI).
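For scripted environments, PMI can be turned off per server with wsadmin as well. This is a hedged sketch (names are placeholders; confirm the PMIService attributes against your release):

```python
# wsadmin/Jython sketch: disable PMI for one server
# 'node1' and 'server1' are hypothetical names for your topology.
pmi = AdminConfig.list('PMIService',
        AdminConfig.getid('/Node:node1/Server:server1/'))
AdminConfig.modify(pmi, [['enable', 'false']])
AdminConfig.save()
```

Remember to re-enable PMI whenever you need the monitoring data for further tuning.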
IBM HTTP Server and other web servers are commonly deployed in front of WebSphere Application Server to serve static content or to provide workload management (WLM) capabilities. In versions prior to WebSphere Application Server V6, a web server was also needed to efficiently handle thousands of incoming client connections, owing to the one-to-one mapping between client connections and web container threads discussed earlier. In WebSphere Application Server V6 and later, the introduction of NIO and AIO removed this requirement. In environments that use a web server, the web server instance should be placed on a dedicated system, separate from the WebSphere Application Server instances. If a web server runs on the same system as a WebSphere Application Server instance, the two effectively share valuable processor resources, reducing overall throughput.
For DayTrader, we ran one test with IBM HTTP Server and WebSphere Application Server on the same local machine, and a second test with the web server running on a separate dedicated machine. Table 5 shows the percentage of CPU cycles consumed by each process when the web server and application server were placed on the same system. As the results show, the HTTP Server process consumed roughly 25% of the CPU cycles, which corresponds to one CPU of the 4-CPU system used in this test.
| Process | % of CPU cycles |
| --- | --- |
| WebSphere Application Server | 66.3 |
| IBM HTTP Server | 26.2 |
Figure 18 compares throughput and response time for the two scenarios, with and without the co-located web server.
So far, this article has focused mainly on the DayTrader application and the core web serving and persistence services of WebSphere Application Server. Now we turn to how DayTrader uses JMS components to perform asynchronous processing of purchase orders and to monitor quote changes. The DayTrader benchmark application contains two messaging features that can be enabled or disabled independently:
- Asynchronous order processing: order transactions are processed asynchronously via a JMS queue and an MDB.
- Quote consistency tracking: a JMS topic and an MDB are used to monitor quote changes associated with stock trading orders.
Two major tuning options have a significant impact on messaging performance: the message store type and the message reliability level. In addition, a more advanced tuning technique that can yield further improvement is placing the transaction log and the file store (if applicable) on faster disks. Each of these topics is discussed in detail below.
The topics discussed are:
- Message store type
- Message reliability levels
- Moving the transaction log and file store to a fast disk
The messaging engines of WebSphere Application Server's default messaging provider maintain the concept of a "data store". The data store serves as the persistent repository for messages handled by the engine. When a messaging engine is created in a single-server environment, the system creates a file-based store as the default data store. In WebSphere Application Server V6.0.x, the default was a local, in-process Derby database. The Derby and file-based stores are convenient for single-server scenarios, but they do not provide the highest levels of performance, scalability, manageability, or high availability. To meet those requirements, you can use a remote database as the data store:
- Local Derby database data store: with this option, a local, in-process Derby database stores the operational information and messages associated with the messaging engine. Although suitable for development purposes, this configuration consumes valuable application server CPU cycles and memory to manage the stored messages.
- File-based data store (default): if the messaging engine is configured to use a file-based data store, operational information and messages are persisted to the file system instead of a database. This is faster than the local Derby database, and when a fast disk subsystem such as a redundant array of independent disks (RAID) is used, its speed can rival that of a remote database. The test results shown below did not use a RAID device for the file-based data store and therefore do not reflect this additional improvement.
- Remote database data store: in this configuration, a database on a remote system serves as the messaging engine data store. This frees the application server JVM process from the CPU cycles previously spent managing the Derby database or file-based store, and allows a more capable production database server (such as IBM DB2® Enterprise Server) to be used. One technical advantage of using a database as the data store is that some J2EE™ applications can share JDBC connections and thus benefit from one-phase commit optimization. See the information on sharing connections to benefit from one-phase commit optimization for more details. File stores do not support this optimization.
DayTrader was run in asynchronous EJB mode against each of the three message store types. During the runs, we enabled the trace specification org.apache.geronimo.samples.daytrader.util.Log=all to capture the time at which TradeBrokerMDB received each message. When measuring asynchronous messaging performance, it is essential to measure the MDB response time on this asynchronous basis, not the actual (synchronous) page response time. The results in Figure 19 indicate that the remote database data store achieved the best performance, delivering the shortest MDB response times and the highest throughput.
Message reliability is a very important factor for any messaging system. The WebSphere Application Server default messaging provider offers five levels of reliability:
- Best effort non-persistent
- Express non-persistent
- Reliable non-persistent
- Reliable persistent
- Assured persistent
Persistent messages are always saved to some form of persistent data store, whereas non-persistent messages are generally held in volatile memory. There is therefore a trade-off between the reliability of message delivery and the speed with which messages are delivered. Figure 20 shows the results of two tests run with a file store at different reliability levels (assured persistent and express non-persistent). The results clearly show that as the reliability level decreases, messages are processed faster.
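The cost difference between the persistent and non-persistent levels comes largely from forcing each message to stable storage. As a WebSphere-independent illustration of that trade-off (a plain Python sketch, not the messaging provider's actual implementation), the following writes the same records once with a disk sync per record, analogous to assured persistent, and once into volatile memory only, analogous to express non-persistent; the durable path does strictly more I/O work for the same payload:

```python
import os
import tempfile

messages = [("order-%d" % i).encode() for i in range(100)]

# "Assured persistent"-style: force every record to stable storage.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    for m in messages:
        f.write(m + b"\n")
        f.flush()
        os.fsync(f.fileno())   # one disk sync per message: survives a crash

with open(path, "rb") as f:
    durable = f.read().splitlines()
os.unlink(path)

# "Express non-persistent"-style: messages live only in volatile memory,
# so they are lost if the process dies, but no disk I/O is paid.
volatile = list(messages)

assert durable == messages == volatile
```

The per-message fsync is what buys crash survival, and it is also exactly the overhead that the lower reliability levels avoid.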
Because disk I/O operations are expensive, storing log files on a fast disk subsystem such as a RAID array can significantly improve performance. In most RAID configurations, the task of writing data to the physical media is shared across multiple drives. This concurrency gives the persistence services much faster access to log data than a single disk would.
You can set the transaction log directory in the administrative console by navigating to Servers => Application Servers => server_name => Container Services => Transaction Service.
The file store log directory can be specified when the SIBus member is created, either by using the -logDirectory option of the AdminTask addSIBusMember command, or through the File store log directory field on the SIBus Member creation panels in the administrative console.
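As a rough wsadmin/Jython sketch of both settings together (all paths, bus, node, and server names below are placeholders; check the addSIBusMember options and TransactionService attributes against your release before relying on them):

```python
# wsadmin/Jython sketch: point the transaction log and the SIBus file store
# log at a fast disk. 'MyBus', 'node1', 'server1' and the paths are
# hypothetical values for your topology.
ts = AdminConfig.list('TransactionService',
        AdminConfig.getid('/Node:node1/Server:server1/'))
AdminConfig.modify(ts, [['transactionLogDirectory', '/fastdisk/tranlog']])

AdminTask.addSIBusMember(['-bus', 'MyBus', '-node', 'node1',
                          '-server', 'server1', '-fileStore',
                          '-logDirectory', '/fastdisk/sibstore'])
AdminConfig.save()
```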
Figure 21 shows the results of two test runs: one with the transaction log and file store kept on a local hard disk, and another with both stored on a RAM disk (essentially, an area of memory treated as a hard disk to achieve faster reads and writes). The tests used the express non-persistent reliability level. The results show that storing the logs on a fast disk yields better response times and higher throughput.
IBM WebSphere Application Server is designed to meet the demands of an ever-growing range of applications, each with its own unique characteristics, requirements, and services. This flexibility acknowledges a simple fact: no two applications use the application server in exactly the same way, and no single set of tuning parameters can provide the best performance for two different applications.
Although DayTrader may differ from your application, the methodology for finding tuning opportunities is the same. The key is to use the application's architecture to identify, and focus on, the principal application server components and resources the application uses. In general, many applications can achieve some degree of performance improvement by tuning three core areas: the JVM, the thread pools, and the connection pools. Other tunables can deliver outstanding results, but they usually relate to specific features of the application and of WebSphere Application Server.
This article discussed these core tunables as well as several other options from which DayTrader benefits. For each option, we showed where to find it, gave some general recommendations, noted the trade-offs to weigh, and pointed out the relevant tools. Figure 22 shows the overall performance improvement for each DayTrader runtime mode after applying these tuning options:
- JVM heap size
- Thread pool size
- Connection pool size
- Data source statement cache size
- ORB pass-by-reference
- Servlet caching
- Unlimited persistent HTTP connections
- Large page support
- PMI disabled
As the figure clearly shows, the tuning described in this article achieved a 169% improvement in EJB mode, a 207% improvement in Session Direct mode, and a 171% improvement in Direct mode. These are considerable gains, and other applications can achieve similar improvements; however, remember that your end results will depend on the factors discussed in this article, as well as others.
We hope the information and tools discussed in this article help you tune the application server for your particular application more easily.
- WebSphere Application Server Support
- Information Center:
- IBM Java Virtual Machine tuning
- Performance advisor types and purposes
- Using the performance advisor in Tivoli Performance Viewer
- Tuning application servers
- Data access tuning parameters
- Why you would use the performance advisors
- Object Request Broker tuning guidelines
- Task overview: Using the dynamic cache service to improve performance
- Configuring cacheable objects with the cachespec.xml file
- HTTP transport channel settings
- Tuning AIX systems
- Tuning Linux systems
- Tuning Solaris systems
- Tuning Windows systems
- Garbage collection with PMI enabled
- Relative advantages of a file store and a data store
- Message reliability levels - JMS delivery mode and service integration quality of service
- Transaction service settings
- addSIBusMember command
- Java technology, IBM style: Garbage collection policies, Part 1
- Java support for large memory pages