Workload Management in WebLogic Server 9.0
by Naresh Revanuru
03/07/2006
What Is a WebLogic WorkManager?
As mentioned above, WebLogic Server 9.0 maps incoming requests to a WorkManager instead of an execute queue. Let's now look at what a WorkManager is and does. A WebLogic WorkManager is the runtime abstraction into which WebLogic Server containers submit requests for asynchronous execution. Multiple WorkManagers can be present, and the choice of which WorkManager to use is determined at deployment time. Administrators can change the WorkManager for any servlet or EJB dispatch by using deployment descriptors (a sketch follows below); this is described in more detail in the WorkManager Configuration section. Each WorkManager can contain four types of components:
- Request class (fair-share, response-time goal, context based)
- Minimum threads constraint
- Maximum threads constraint
- Capacity
It is important to note a couple of things before we proceed. These four components are optional. In addition, the components can be shared between WorkManagers. For example, two or more WorkManagers can share the same request class, which means that they get the same internal priority. Another simple example could be when WorkManagers share the same capacity. This means that the sum total of requests from all WorkManagers that share the capacity will not exceed the specified limit.
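As noted above, administrators select the WorkManager for a servlet or EJB dispatch through deployment descriptors. Here is a minimal sketch, assuming the weblogic.xml wl-dispatch-policy element is used to point a web application at a named WorkManager; that element and its placement are an assumption on my part, not something shown in this article. The WorkManager name refers to the highfairshare_workmanager defined in the next section:
<weblogic-web-app>
  <!-- Dispatch requests for this web application to the named WorkManager -->
  <wl-dispatch-policy>highfairshare_workmanager</wl-dispatch-policy>
</weblogic-web-app>
EJBs can be assigned in a similar way from weblogic-ejb-jar.xml.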
The term WebLogic WorkManager is used to purposefully differentiate it from the Timer and WorkManager specification. The relationship between the two is explained in the Timer and WorkManager Specification Support section.
Now let's look at each of the four components in more detail.
The WorkManager Request Class
A request class defines a class of requests. Requests that share the same runtime execution behavior should be part of the same request class. The "same runtime behavior" could mean many things; examples include multiple invocations of the same servlet, the same module, or the entire application. By default, each application belongs to its own unique request class. All requests that share a request class get the same runtime priority and are treated as a single type by the WebLogic scheduler. Three types of request classes can be defined:
- Fair share-based request class
- Response-time goal-based request class
- Context-based request class
It is important to note that a WorkManager can have at most one request class, of any one type.
Fair share request class
This request class takes a simple integer value that denotes a fair share. The value can range from 1 to 1000. Thread usage becomes higher as the fair share number increases. Fair shares are relative to the other fair shares defined in the system and are reflected in the scheduling logic: as long as multiple fair shares compete, the average thread usage of each is in proportion to its fair share.
For example, consider a situation in which we only have two fair shares, A and B, having a fair share of 80 and 20, respectively. During a period in which both fair shares are sufficiently requested, say, zero think time and more clients than threads, the probability that a thread will work on behalf of A or B will tend toward 80 percent or 20 percent, respectively. The scheduling logic ensures this even when A tends to use a thread for much longer than B.
A WorkManager without a fair share request class gets its own exclusive fair share of 50.
Here is an example of a WorkManager with a fair share request class, defined in weblogic-application.xml. It is also important to note that request classes can be defined outside the WorkManager and then referenced by WorkManagers using the request class name (a sketch of that variant follows the example).
<work-manager>
  <name>highfairshare_workmanager</name>
  <fair-share-request-class>
    <name>high_fairshare</name>
    <fair-share>80</fair-share>
  </fair-share-request-class>
</work-manager>
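As mentioned above, the same request class can also be declared outside the WorkManager and then referenced by name. Here is a minimal sketch of that variant; the request-class-name reference element used here is an assumption on my part, mirroring the reference-by-name style shown in the context request class example later in this article:
<fair-share-request-class>
  <name>high_fairshare</name>
  <fair-share>80</fair-share>
</fair-share-request-class>

<work-manager>
  <name>highfairshare_workmanager</name>
  <request-class-name>high_fairshare</request-class-name>
</work-manager>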
Response-time goal request class
The response-time goal request class takes an integer value that specifies the response-time goal in milliseconds. Response-time goals are relative to other response-time goals and fair shares, and they can be used to differentiate request classes. We do not try to meet the response-time goal for each individual request. Rather, we compute a tolerable waiting time for a request class by subtracting the observed average thread use time from the response-time goal, and then schedule requests so that the average wait for each request class is in proportion to its tolerable waiting time. For example, consider a situation in which we only have two request classes, A and B, with response-time goals of 2000 ms and 5000 ms, respectively, where the time an individual request uses a thread is much smaller than either goal. During a period in which both request classes are sufficiently requested, say, zero think time and more clients than threads, we schedule requests to keep the average response times in the ratio 2:5, so that they are a common fraction or multiple of the stated goals.
Here is an example of a WorkManager with a response-time request class:
<work-manager>
  <name>fast_response_time</name>
  <response-time-request-class>
    <name>fast_response_time</name>
    <goal-ms>2000</goal-ms>
  </response-time-request-class>
</work-manager>
Context-based request class
This is a compound request class that provides a mapping between the request context and the two request class types described above. We currently look at the security name of the user submitting the request and the security groups the user belongs to. During scheduling, the exact request class is determined by looking at this security context information. For example, a context request class can be defined such that all users in a "platinum" group get a high fair share while the rest of the users get a lower fair share. Here is an example configuration:
<work-manager>
  <name>context_workmanager</name>
  <context-request-class>
    <name>test_context</name>
    <context-case>
      <user-name>platinum_user</user-name>
      <request-class-name>high_fairshare</request-class-name>
    </context-case>
    <context-case>
      <user-name>evaluation_user</user-name>
      <request-class-name>low_fairshare</request-class-name>
    </context-case>
  </context-request-class>
</work-manager>
In the example above, a WorkManager is defined with a context request class. The request class has two mappings—one for a platinum user and the other for an evaluation user—and we assume these two users are configured in the WebLogic Server security realm. The mapping states that all platinum users should use the high fair share request class while evaluation users should use the low fair share request class. Note that "high_fairshare" and "low_fairshare" are the names of two fair share request classes defined either at the same level in the application or globally at the server level.
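The context request class above also assumes that a low_fairshare request class exists alongside the high_fairshare class defined earlier. Here is a minimal sketch of that declaration, placed at the same level as the WorkManager; the fair share value of 20 is illustrative:
<fair-share-request-class>
  <name>low_fairshare</name>
  <fair-share>20</fair-share>
</fair-share-request-class>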
The WorkManager Minimum Threads Constraint
The minimum threads constraint is another component of a WorkManager, much like a request class. This constraint should be used only in very special cases and should not be confused with the thread count parameter of execute queues. The minimum threads constraint takes an integer value that specifies the minimum number of threads that should be allocated to requests sharing this constraint in order to prevent server-to-server deadlocks. Usually, all WorkManagers use threads from the common thread pool, and the proportion of thread usage by different WorkManagers is determined by the request class component described in the previous sections. Threads are added to the common thread pool dynamically to maximize overall server throughput.
There are special situations where two servers can deadlock if a WorkManager does not get a minimum number of threads. For example, consider when Server A makes an EJB call to Server B, and Server B in turn makes a callback to Server A as a part of the same request processing. In this case, we know that until Server A responds to the callback from Server B, its original request to Server B will not get serviced. This scenario can be addressed in two ways. Configuring a WorkManager in Server A with a high fair share for callback requests from Server B will ensure that those requests get higher priority than new requests arriving at Server A from clients. In addition, adding a minimum threads constraint clause to the WorkManager will ensure that at least some threads in Server A will be able to pick up callback requests from Server B even when all the threads are busy or deadlocked.
A minimum threads constraint is specified like this:
<work-manager>
  <name>MyWorkManager</name>
  <min-threads-constraint>
    <name>minthreads</name>
    <count>3</count>
  </min-threads-constraint>
</work-manager>
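The two measures described for the callback scenario can be combined in a single WorkManager that handles the callback requests from Server B. Here is a minimal sketch using only elements already shown in this article; the names and values are illustrative:
<work-manager>
  <name>callback_workmanager</name>
  <fair-share-request-class>
    <name>callback_fairshare</name>
    <fair-share>80</fair-share>
  </fair-share-request-class>
  <min-threads-constraint>
    <name>callback_minthreads</name>
    <count>3</count>
  </min-threads-constraint>
</work-manager>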
The WorkManager Maximum Threads Constraint
The maximum threads constraint can be used to limit the number of concurrent threads given to all WorkManagers that share the constraint. Like the minimum threads constraint, the maximum threads constraint should be used only in special circumstances. Normally, the request classes get their fair share of thread resources, and the maximum number of threads in the common thread pool is limited by throughput considerations: threads are not added if there is no benefit to overall server throughput. There are cases, however, in which the administrator or deployer knows that the number of concurrently executing requests is bound by the maximum capacity of a JDBC connection pool. For example, all servlet or EJB requests that use a common JDBC connection pool are bound by its maximum pool size. In such circumstances, it is possible to define a maximum threads constraint that refers to the JDBC connection pool. This maximum threads constraint is then shared by all WorkManagers affected by the connection pool size. Here is an example of a maximum threads constraint that uses a JDBC connection pool:
<work-manager>
  <name>MyDataSourceBoundWorkManager</name>
  <max-threads-constraint>
    <name>pool_constraint</name>
    <pool-name>AppScopedDataSource</pool-name>
  </max-threads-constraint>
</work-manager>
Note that AppScopedDataSource refers to the name of a data source that is either application scoped or defined at the server level. The maximum threads constraint changes dynamically as the maximum connection pool size changes.
The maximum threads constraint can also be specified as a number:
<work-manager>
  <name>MyWorkManager</name>
  <max-threads-constraint>
    <name>ejb_maxthreads</name>
    <count>5</count>
  </max-threads-constraint>
</work-manager>
The WorkManager Capacity Constraint
The capacity constraint defines the maximum number of requests that can be queued or executing at any given point in time; only requests waiting for threads or currently executing are counted. Capacity can be shared across multiple WorkManagers, which means that all WorkManagers that share the capacity are bound by the common limit (a sketch of a shared capacity follows the example below). Once the capacity is reached, an overload action is taken: for example, work is refused with a well-known HTTP response code such as 503, or, in the case of RMI calls, with a RemoteException that permits cluster failover. Overload protection in WebLogic Server 9.0 will be discussed in a forthcoming dev2dev article. Capacity is specified like this:
<work-manager>
  <name>MyWorkManager</name>
  <capacity>
    <name>MyCapacity</name>
    <count>10</count>
  </capacity>
</work-manager>
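Because capacity can be shared, two WorkManagers can be bound by one common limit. Here is a minimal sketch, assuming the capacity is declared once and then referenced by name; the capacity-name reference element and the WorkManager names are assumptions for illustration, following the same reference-by-name pattern as request classes:
<capacity>
  <name>shared_capacity</name>
  <count>10</count>
</capacity>

<work-manager>
  <name>OrderEntryWorkManager</name>
  <capacity-name>shared_capacity</capacity-name>
</work-manager>

<work-manager>
  <name>ReportingWorkManager</name>
  <capacity-name>shared_capacity</capacity-name>
</work-manager>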
Interaction Between Different Types of Request Classes and Constraints
In the previous sections, I explained scheduling based on policies such as fair share and response time by relating the scheduling to other work using the same policy. This section explains how the different request classes and constraints interact with one another.
- A mix of fair-share and response-time policies is scheduled with a marked bias in favor of response-time scheduling.
- A minimum threads constraint may boost a request class beyond its fair share. If the minimum threads constraint is not met and there are pending requests in its work set, it trumps fair share: a request is scheduled from the minimum-threads-constrained work set even if its request class has recently received more than its fair share. That is why it is important to use a minimum threads constraint carefully, to avoid inadvertently giving a fair share a boost. Note that this behavior is subject to change, and the minimum threads constraint may not give a fair share a boost in a future version, but it is guaranteed that the minimum threads constraint will be honored when the server is almost deadlocked and making little progress.
- A maximum threads constraint may, but does not necessarily, prevent a request class from taking its fair share or meeting its response-time goal. Once the maximum threads constraint is reached, the server will not schedule requests of this constraint type until the number of concurrent executions falls below the limit; it then resumes scheduling work based on the fair share or response-time goal (see the sketch below).
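To make the last point concrete, a WorkManager can carry both a fair share and a maximum threads constraint: the fair share governs scheduling only while the number of concurrent executions stays below the constraint. Here is a minimal sketch using elements shown earlier in this article; the names and values are illustrative:
<work-manager>
  <name>db_bound_workmanager</name>
  <fair-share-request-class>
    <name>db_fairshare</name>
    <fair-share>80</fair-share>
  </fair-share-request-class>
  <max-threads-constraint>
    <name>db_maxthreads</name>
    <count>10</count>
  </max-threads-constraint>
</work-manager>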