Use JMS Clients to Utilize Free Computer Resources

by Nimish Doshi

02/20/2006

Message-driven bean recipient

The message-driven bean instances will listen on a reply queue for finished unit of work objects. A skeletal sample follows:

import javax.ejb.MessageDrivenBean;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

public class MessageWorkBean implements MessageDrivenBean,
                                        MessageListener {

  // ... MessageDrivenBean lifecycle methods omitted ...

  // This method receives a finished unit of work object to store
  public void onMessage(Message msg) {
    ObjectMessage om = (ObjectMessage) msg;
    try {
      UnitOfWork unit = (UnitOfWork) om.getObject();
      unit.print();
      unit.store();
    }
    catch (JMSException ex) {
      log("Message Driven Bean: Could not retrieve Unit of Work.");
      ex.printStackTrace();
    }
  }
}

The interesting method here is onMessage(), which simply receives a finished unit of work from the reply queue and calls its print() and store() methods. My goal was for the server to offload the processing of each unit of work onto other computers. The JMS clients accomplish that, and the message-driven bean serves as the means of communicating the results back to the server.
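Both the client and the server rely on the unit-of-work contract behind these print() and store() calls. The real UnitOfWork and SimpleMatrix classes ship with the article's accompanying code; the sketch below is only an illustration of what that contract might look like, and the doWork() method name and the placeholder bodies are assumptions of mine rather than the article's actual code.

// UnitOfWork.java -- the contract shared by the server and the remote clients.
// It must be Serializable so instances can travel as JMS ObjectMessages.
public interface UnitOfWork extends java.io.Serializable {
  void doWork();  // hypothetical name: the computation the remote client runs
  void print();   // called on the server once the finished work comes back
  void store();   // persists the finished result on the server
}

// SimpleMatrix.java -- an illustrative stand-in for the article's sample class.
public class SimpleMatrix implements UnitOfWork {
  private int[][] data = new int[3][3];
  private int[][] result;

  public void doWork() { result = data; /* e.g., multiply the matrices */ }
  public void print()  { System.out.println("Rows in result: " + result.length); }
  public void store()  { /* e.g., write the result to a database or file */ }
}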

Scalability considerations

In a real implementation of this framework, several issues should be addressed to make the example scalable.

  • Consider using a sizable pool of message-driven beans to handle responses.
  • If no foreign consumers are available for the request queues, a few message-driven beans should be created to consume the request queues on the server itself. This goes against the spirit of this article, but it prevents units of work from sitting unconsumed and eventually overflowing the queues when no remote consumers are available.
  • If there are multiple types of units of work, each should have its own request and response queues.
  • For WebLogic Server, consider using JMS paging to prevent out-of-memory problems when there are too many messages on the queues that are not being consumed in a timely manner.
  • For WebLogic Server, consider using the throttling features of WebLogic JMS if the producers (the servlets) are generating work faster than it is being consumed.
  • For WebLogic Server, consider using distributed destinations for the queues as this would distribute the queues to multiple servers. In this case, the servlets themselves should be clustered and coordinated to not create duplicate units of work requests.

The references at the end of the article should also be considered. An additional consideration that goes beyond the server is how to deliver the client piece to the various machines. One way is voluntary distribution, in which each machine owner downloads an installer that can be configured and run on the client machine. Another is to use a commercial software distribution package that automatically downloads the latest version of the client and installs it on each client machine.

Using WebLogic Integration Workflows to Distribute Work

The previous section presented a straightforward approach to distributing units of work to clients using servlets and message-driven beans. Although the approach is easy to implement, it leaves several issues unaddressed, such as how to kick off the process in a self-sustaining manner so that requests are delivered to the request queue(s) at regular intervals. Surely an administrator is not expected to write a shell script that continuously calls the servlet. The number of outstanding requests should also be throttled in a way the application can control a priori. With this in mind, what follows is a more sophisticated example of distributing units of work to remote JMS clients, and responding to the results, in order to make use of underutilized computers.

This approach uses two WebLogic Integration (WLI) workflows developed in BEA WebLogic Workshop, known as Java Process Definition (JPD) files, which are a precursor to BPEL/J (Business Process Execution Language for Java), specified in JSR 207. The first workflow starts in response to a Web service request and performs initialization to subscribe to a JMS request queue through a JMS control. It uses a Timer control to wake up a while loop at set intervals and place more units of work on the request queue. The workflow also uses a custom Java control, supplied in this article's associated code, to browse the request queue and determine whether more requests can be placed on it without overburdening it. Finally, the workflow waits for a stop message from a Web service to end processing. The second workflow performs the same task as the message-driven bean in the previous example: it responds to messages on the response queue by calling the print() and store() methods on each unit of work dequeued from it. This is a short-lived workflow, and WebLogic Integration will spawn as many instances as required.

Browsing a JMS queue

WebLogic Integration is used as a mechanism for constructing and assembling services for remote processes. Assembling off-the-shelf components known as Java controls makes it quite easy to build composite applications without extensive development. Although WebLogic Integration provides JMS controls out of the box that abstract away the internal details of using JMS, in certain situations it is better to write a reusable custom control that gives fine-grained access to lower-level methods. In this example framework, I need to browse the worker request queue and count the items pending on it to determine whether I can place more items on the request queue without overburdening it. To accomplish this, a custom Java control called JMSBrowse was written with one method of interest:

public interface JMSBrowse extends Control {

  int numberOfElementsInQueue(String qFactory, String qName);

}

The implementation of this control uses the JMS QueueBrowser class to look into a given JMS queue through a given JMS connection factory and returns the number of messages pending on the queue. The complete implementation is supplied in the accompanying code.
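That supplied implementation is the authoritative one; the following is only a rough sketch of how such a method could be written against the standard javax.jms QueueBrowser API, with the JNDI names taken as method arguments, error handling simplified, and the Workshop Control plumbing omitted.

import java.util.Enumeration;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

public class JMSBrowseSketch {

  // Counts the messages currently sitting on the named queue.
  public int numberOfElementsInQueue(String qFactory, String qName) {
    int count = 0;
    try {
      InitialContext ctx = new InitialContext();
      QueueConnectionFactory factory =
          (QueueConnectionFactory) ctx.lookup(qFactory);
      Queue queue = (Queue) ctx.lookup(qName);

      QueueConnection connection = factory.createQueueConnection();
      QueueSession session =
          connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
      QueueBrowser browser = session.createBrowser(queue);

      // A QueueBrowser only peeks at messages; it does not consume them.
      for (Enumeration e = browser.getEnumeration(); e.hasMoreElements(); ) {
        e.nextElement();
        count++;
      }

      browser.close();
      session.close();
      connection.close();
    } catch (Exception ex) {
      ex.printStackTrace();
    }
    return count;
  }
}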

Web service to start and stop workflow

In order to start and stop the WebLogic Integration process responsible for distributing the units of work to the request queues, a Java Web Service (JWS), specified in accordance with JSR 181, is created with two methods of interest:

public class ControlWebService implements
                      com.bea.jws.WebService {

  /**
   * @common:control
   */
  private Controls.JMSStopControlMessage JMSStopControl;

  /**
   * @common:control
   */
  private Controls.JMSControlMessage JMSControl;

  static final long serialVersionUID = 1L;

  /**
   * @common:operation
   */
  public void startFlow() {
    JMSControl.subscribe();
    JMSControl.sendTextMessage("start");
    JMSControl.unsubscribe();
  }

  /**
   * @common:operation
   */
  public void stopFlow() {
    JMSStopControl.subscribe();
    JMSStopControl.sendTextMessage("stop");
    JMSStopControl.unsubscribe();
  }
}

Instead of calling the workflow directly, the Web service places a message on a JMS queue, called Worker.Message, to send messages to the distribution JPD. This allows the implementation of the Web service to be decoupled from the workflow to preserve its modularity. In WebLogic Integration, there is a concept called an event generator that is configured with the WebLogic Integration Administration Console. One event generator is configured to take the message off the JMS Worker.Message queue and deliver it to a logical concept known as a Message Broker channel. The distribution workflow listens on the /UnitOfWork/StartWorkflow channel, which is tied to the JMS event generator associated with the JMS Worker.Message queue. As soon as a String "start" message is delivered on this channel, the workflow begins. Similarly, once started, the distribution workflow listens on a Message Broker channel known as /UnitOfWork/StopWorkflow in one of its Event Choice nodes to receive a "stop" message from a Worker.StopMessage JMS queue. Again, an event generator associates the JMS message on the Worker.StopMessage queue to the channel /UnitOfWork/StopWorkflow to deliver the message.

This effectively creates a service-oriented approach decoupled from the implementation to start and stop the distribution workflow. The Web service can easily be tested via a Web service client, or using the supplied WebLogic Integration Workshop Test Browser.

The Distribution Workflow

Figure 2 illustrates the relevant portions of the DistributeFlow.jpd responsible for the distribution of the units of work, our simple matrix objects, to the request queue:

Figure 2. Workflow for distributing units of work

A while loop runs until a stop message changes the value of a boolean variable, breaking out of the loop and completing the workflow. The Event Choice node waits on one of two Control Receive callbacks. The first receives a stop message from a Message Broker channel via the Web service just described. The second responds to a Timer control that has been set, via its property panel, to fire every five seconds. This continues the processing: the next activity calls the custom Java control to browse the Worker.Request queue and get the number of pending requests. A decision node then checks whether the number of pending requests exceeds the maximum, which has been set to 5 in a variable. If it doesn't, a perform node uses a JMS control to place five matrix objects on the request queue as follows.

public void perform() throws Exception {

  // Place maxInQueue (5) new units of work on the request queue
  for (int i = 0; i < maxInQueue; i++) {
    matrix = new SimpleMatrix();
    jmsControl.sendObjectMessage(matrix);
  }
}

JMS clients responding to the workflow

The JMS client that responds to the workflow is almost the same as the one described in the earlier WebLogic Server section. The only difference is that the client now responds back to the response queue using a bytes message instead of an object message. The client converts the SimpleMatrix object into a byte array to pass it on the response queue. The reason for this is that the Message Broker channel associated with the event generator that is tied to the response queue can only listen to a stream of data, which is either a String, an XML Bean, or a byte array. The associated code has been designed to respond to both a WebLogic Integration request message and an ordinary WebLogic Server request message.
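The conversion itself is ordinary Java serialization followed by a JMS bytes message. The article's associated code contains the full client; what follows is only a hedged sketch of the serialize-and-send step, with the queue session, the response queue, and the finished SimpleMatrix instance assumed to be already set up.

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import javax.jms.BytesMessage;
import javax.jms.Queue;
import javax.jms.QueueSender;
import javax.jms.QueueSession;

public class ResponseSenderSketch {

  // Serializes the finished unit of work and sends it as a bytes message,
  // which the event generator can hand to the Message Broker channel.
  public void sendResponse(QueueSession session, Queue responseQueue,
                           SimpleMatrix finishedWork) throws Exception {
    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
    ObjectOutputStream objectStream = new ObjectOutputStream(byteStream);
    objectStream.writeObject(finishedWork);
    objectStream.close();

    BytesMessage message = session.createBytesMessage();
    message.writeBytes(byteStream.toByteArray());

    QueueSender sender = session.createSender(responseQueue);
    sender.send(message);
    sender.close();
  }
}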

The workflow to receive a completed unit of work is shown in Figure 3:

Figure 3. Workflow for the receiver

The important activity here is the perform node that converts a byte array into an Object and calls the print() and store() methods.

public void perform() throws Exception {

  // rawData holds the byte array delivered over the Message Broker channel
  ByteArrayInputStream arrayInputStream =
      new ByteArrayInputStream(rawData.byteValue());
  ObjectInputStream objectInputStream =
      new ObjectInputStream(arrayInputStream);

  UnitOfWork unit = (UnitOfWork) objectInputStream.readObject();
  unit.print();
  unit.store();

  objectInputStream.close();
}

The use of WebLogic Integration workflows

You've seen that the use of workflows, Java controls, and Message Broker channels provides a more sophisticated way to distribute work to underutilized computers. Simply by adding more activity nodes into the process flows, you can make the processing as comprehensive as desired. For example, the workflow could have an auditing control to audit all requests to an internal log file before placing them on a queue. The workflow could redirect requests to other JMS queues simply by changing the JMS control's property values. You can even have the remote Web service start multiple instances of the workflow for scalability. Finally, the Timer control can have a more granular interval based on a business calendar.

Another advantage of using Message Broker channels and event generators is that the event generators can be monitored in the WebLogic Integration Administration Console, including the number of response messages they have handled, for further control. The event generators and channels can also be suspended and resumed via the console in response to production events.

This flexibility makes using WebLogic Integration workflows a compelling methodology.

Download

You can download the source code used in this article: JMSClientApp.zip.

Conclusion

The benefit of using remote JMS clients to offload work is that it puts networked machines to use for certain types of batch processing while placing less of a burden on the original servers. A well-known example of this approach is the Search for Extraterrestrial Intelligence (SETI@home) project, which harnesses the world's PCs to perform units of work. This article sought to generalize that approach with a framework of JMS clients, discussed how to deploy such a solution for scalability, presented multiple approaches to distributing work to remote clients, and offered a service-oriented approach as the preferred methodology.

References

Nimish Doshi works in the systems engineering group for BEA's ISV partners and has worked with various BEA partners in helping them leverage the construction and usage of controls.