Oracle Application Container Cloud Service: Create a Tomcat Cluster with TCP Session Replication

 

Before You Begin

Purpose

This tutorial shows you how to create a Tomcat cluster using TCP Session Replication on the Oracle Application Container Cloud Service.

Time to Complete

90 minutes

Background

What is a Cluster?

A cluster, in this context, consists of several Tomcat servers that work together to appear as a single system. A load balancer ties the servers together, addressing each server by its IP address and network port. Each node in the cluster shares session data by replicating that data to every other node. The result is that a cluster of Tomcat servers appears as a single server to client systems.

What are the Benefits?

There are a number of benefits to using clusters, including:

  • Performance and Scaling: Since more than one server performs a task or unit of work, response times should be lower. In addition, a cluster can be scaled out to handle increasing load by adding servers to the cluster. This is made possible by load balancing.

    • Load Balancing: A load balancer is a special server that allocates work to the servers in the cluster. When requests come into the cluster, the balancer takes each request and passes it off to an individual server. Load balancers can route traffic using anything from complex algorithms to a simple round-robin approach.

  • High Availability: High availability refers to systems that provide high uptime while being resistant to faults. Highly available systems are designed to eliminate any single point of failure. For example, in a cluster, if one of the servers crashes, the other servers continue running and functioning normally. From outside the system, there appears to be no change in the way the cluster operates when a node goes down.

Context

In this OBE, you will create a local cluster using the Apache HTTP server as a load balancer and two Tomcat servers as cluster nodes. After that system is set up, you will convert one of the Tomcat servers for deployment on Oracle Application Container Cloud Service. Then, you deploy that Tomcat node so the cluster runs and can be managed in the cloud.

What Do You Need?

Detailed steps for installing these products are provided on their respective web sites. For brevity, those steps are not covered in this guide, except for Tomcat. The following sections assume the required software is installed:

  • Apache HTTP Server 2.4 and the mod_jk connector module

  • Apache Tomcat 8.5

  • A Java SE 8 JDK

  • Apache Maven

  • The zip file for this OBE, which contains the ClusterJsp Maven project

 

Setting up Apache as a Load Balancer

 

Overview

In this section, we will configure an Apache 2.4 server to act as a load balancer for a Tomcat cluster. The steps outlined are for Windows, but tips related to Linux are included at the end of the section. It is assumed that you have installed or downloaded the required software listed above.

An Apache HTTP server can be used as the load balancing front end for a Tomcat cluster. This is accomplished using the mod_jk module. The mod_jk module serves as a redirector to servlet containers like Tomcat and can also be used to setup a cluster of Tomcat servers. The cluster uses AJP (Apache JServ Protocol) to communicate between the Tomcat servers and Apache. AJP is a binary version of HTTP that is optimized for communication between Apache HTTPD server and Apache Tomcat over a TCP connection.

 

Configuring your Apache Server

Follow these steps to configure your server.

  1. Install the mod_jk Apache module. This module allows Apache to function as a load balancer for our Tomcat instances.

    1. Unzip the mod_jk zip file into a directory.

    2. Copy the mod_jk.so to the ApacheHome\modules directory.

  2. Create a configuration file to store the load balancing information.

    1. Change into the ApacheHome\conf directory.

    2. Create a mod-jk.conf file.

  3. Edit the mod-jk.conf file and add the following information.

    Apache Configuration File

    
    # This file configures mod_jk for creating Tomcat clusters
    
    # Load the module
    LoadModule jk_module modules/mod_jk.so
    
    # Set path to workers file
    JkWorkersFile conf/workers.properties
    
    # Shared memory file - For Unix only. Uncomment the next line on Unix; leave it commented out on Windows.
    # JkShmFile logs/mod_jk.shm
    
    # Path to Logs
    JkLogFile logs/mod_jk.log
    
    # Log level - Info is default. Should work for most situations.
    JkLogLevel info
    
    # Mount workers for mod_jk
    JkMount /status/* status
    JkMount /clusterjsp/* loadbalancer
    

    Each configuration option is described with a comment in the mod-jk.conf file. The last two options, the JkMount directives, require further explanation.

    • JkMount /status/* status - This creates an Apache status page with information about each node in the cluster. Going to http://hostname/status in your browser displays detailed information about the cluster.

    • JkMount /clusterjsp/* loadbalancer - This option indicates that any URL under http://hostname/clusterjsp will be handled by the Tomcat cluster. If you want the cluster to handle all HTTP traffic, change the path to the root, /*, as in the example below.
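
    For example, a variation that routes all HTTP traffic to the cluster might look like this (illustrative only; this tutorial keeps the /clusterjsp mount):

    # Illustrative variation: send every request to the Tomcat cluster
    JkMount /* loadbalancer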

  4. Save the file.

  5. Create the workers.properties file. The workers.properties file identifies the Tomcat instances that will be included in the cluster. Each "worker" is identified by host name and port. This example has two workers for the load balancer, plus a status worker.

  6. Edit the workers.properties file and add the following information.

    workers.properties


    worker.list=loadbalancer,status

    worker.worker1.port=8009
    worker.worker1.host=localhost
    worker.worker1.type=ajp13

    worker.worker2.port=8010
    worker.worker2.host=localhost
    worker.worker2.type=ajp13

    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=worker1,worker2

    worker.status.type=status

    This configuration indicates that two Tomcat servers are in the cluster, listening for AJP connections on ports 8009 and 8010.
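
    By default, the mod_jk lb worker uses sticky sessions, routing requests for an existing session back to the node that created it. If you want to make that setting explicit, the lb worker accepts a sticky_session flag; the value shown here is the mod_jk default:

    # Optional: make session stickiness explicit (true is the mod_jk default)
    worker.loadbalancer.sticky_session=true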

  7. Save the file.

  8. Edit the httpd.conf file. Add the following lines to the bottom of the file.

    httpd.conf Changes

    
    # Enable mod_jk to create a load balancer
    Include conf/mod-jk.conf
    
  9. Save the file.

  10. Start or restart your Apache server to load the new settings:

    • ApacheHome\bin\httpd.exe

 

Installing Apache and the mod_jk Module on Linux

If you wish to install and configure mod_jk on Linux, there are a few configuration file variations depending upon the distribution.

Installing Apache on Oracle Linux

Here are some tips for installing Apache on Oracle Linux (RedHat). Perform the following commands as root or using sudo.

  • To install the Apache HTTP server, use the following command:

    yum install httpd

  • To install mod_jk use the following command:

    yum install mod_jk

The Apache home directory is /etc/httpd. Other than that, the file names and locations are very similar to Windows, and the Windows instructions can be followed with little variation.
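
To start Apache immediately and enable it at boot, you can use the systemd commands (assuming a systemd-based release such as Oracle Linux 7; run as root or using sudo):

    systemctl start httpd
    systemctl enable httpd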

Installing mod_jk on Ubuntu Linux

  • To install Apache HTTP server on Ubuntu use the following command:

    sudo apt-get install apache2

  • To install mod_jk on Ubuntu use the following command:

    sudo apt-get install libapache2-mod-jk

The locations of the files on Ubuntu are different. Installing the libapache2-mod-jk package installs the module along with the required configuration files, so there is no need to create a new mod-jk.conf file. Here are the locations of the pertinent files:

  • Apache home: /etc/apache2

  • mod_jk configuration file: /etc/apache2/mods-available/jk.conf

  • workers.properties file: /etc/libapache2-mod-jk/

Modify the files in these new locations to configure Apache on Ubuntu.

Note: When the mod_jk package is installed, symbolic links are placed in the /etc/apache2/mods-enabled directory for jk.conf and jk.load. This enables the module for Apache on Ubuntu.
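
If those links are ever missing, you can recreate them with Ubuntu's standard module tooling (illustrative commands using the stock Apache utilities):

    sudo a2enmod jk
    sudo service apache2 restart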

 

Configuring Tomcat to be used in a Cluster

 

Installing Tomcat

We will install two copies of Tomcat to create a local cluster. First, one copy of Tomcat is installed and configured. Then that installation is copied and modified to become a second Tomcat instance.

To install Tomcat perform the following steps.

  1. Download the Tomcat 8.5 .zip or .tar.gz file.

  2. Uncompress Tomcat.

  3. Copy or move the uncompressed Tomcat directory to where you want to install Tomcat, for example c:\tom1.

 

Updating the Configuration Files

The next step in the process is to update the configuration files.

  1. Open the server.xml file in the TomcatHome\conf directory and make the following changes:

  2. Remove all the comments from the file.

  3. Change the Engine element about midway down the file by adding the attribute jvmRoute="worker1".

    This change helps identify which Tomcat server has been selected by the load balancer to process the current request. In this example, the server on port 8080 will be identified as worker1.

  4. Add the Cluster element as shown in the following example.

    When complete, the server.xml file should look like this.

    server.xml


    <?xml version="1.0" encoding="UTF-8"?>
    <Server port="8005" shutdown="SHUTDOWN">
      <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
      <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

      <GlobalNamingResources>
        <Resource name="UserDatabase" auth="Container"
                  type="org.apache.catalina.UserDatabase"
                  description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  pathname="conf/tomcat-users.xml" />
      </GlobalNamingResources>

      <Service name="Catalina">

        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

        <Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">

          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
            <Manager className="org.apache.catalina.ha.session.DeltaManager"
                     expireSessionsOnShutdown="false"
                     notifyListenersOnReplication="true"/>

            <Channel className="org.apache.catalina.tribes.group.GroupChannel">
              <Membership className="org.apache.catalina.tribes.membership.McastService"
                          address="228.0.0.4"
                          port="45564"
                          frequency="500"
                          dropTime="3000"/>

              <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
              </Sender>

              <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                        address="auto" port="4000" autoBind="100"
                        selectorTimeout="5000" maxThreads="6"/>

              <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
            </Channel>

            <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>

            <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
          </Cluster>

          <Realm className="org.apache.catalina.realm.LockOutRealm">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                   resourceName="UserDatabase"/>
          </Realm>

          <Host name="localhost" appBase="webapps"
                unpackWARs="true" autoDeploy="true">

            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log" suffix=".txt"
                   pattern="%h %l %u %t &quot;%r&quot; %s %b" />
          </Host>
        </Engine>
      </Service>
    </Server>

    The Cluster tag contains a number of elements. Let's look at some of the highlights.

    • <Cluster>: This element acts as a container for all cluster settings.

      • channelSendOptions: This attribute determines how nodes in the cluster communicate. A value of 8, which is the default, specifies asynchronous communication between nodes. This is the preferred method in most situations.

    • <Manager>: The Manager tag specifies the type of session manager to use for the cluster. The DeltaManager stores session data in memory and replicates that data to every node in the cluster. The main limitation is that DeltaManager requires the cluster members to be homogeneous: all nodes must deploy the same applications and be exact replicas.

      • expireSessionsOnShutdown: Prevents a failing node from destroying sessions on other clustered nodes.

      • notifyListenersOnReplication: Notify other nodes when a session has been updated.

    • <Channel>: Configures the Tribes peer-to-peer communications library and acts as a container for these settings.

    • <Membership>: Specifies the network settings for the nodes in the Tomcat cluster.

      • address: The multicast address of the cluster network. The combination of IP address and port uniquely identifies a cluster. The default IP address is 228.0.0.4.

      • port: The network port for the members of the cluster to listen on. The default value is 45564.

      • frequency: The interval between membership heartbeats. The value of 500 specifies that each node broadcasts a heartbeat every 500 ms.

      • dropTime: The amount of time a node can fail to broadcast before it is dropped from the cluster. The default value is 3000 ms.

    • <Sender>: The component that sends messages to other nodes in the cluster. This example specifies an NIO component for communication.

    • <Receiver>: The component that receives messages from other nodes in the cluster. This example specifies an NIO component for communication.

    • <Interceptor>: Modifies messages sent between nodes.

      • TcpFailureDetector: If a node in the cluster times out, this class intercepts the memberDisappeared message (unless it is a true shutdown message) and connects to the member using TCP.

      • MessageDispatchInterceptor: Enables asynchronous communication through a channel. Note: This class name has changed from MessageDispatch15Interceptor in earlier versions of Tomcat.

    • <Valve>: A component that helps process requests made to the Tomcat server. The ReplicationValve determines whether data in the request needs to be replicated to other nodes in the cluster (an illustrative filter example follows this list).

    • <ClusterListener>: Handles the messages sent to and from the cluster.
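
    As an illustration of the Valve's filter attribute (this tutorial leaves the filter empty), a pattern like the following would exclude requests for static resources from replication checks:

      <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
             filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.css|.*\.txt"/>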

  5. With those changes made, save the server.xml file.

This completes the initial configuration for the Tomcat server.

 

Creating and Deploying the ClusterJSP Application

 

Reviewing the ClusterJSP Application

To ensure that session sharing is working, an application for testing sessions is required. ClusterJsp is a JavaServer Pages (JSP) application that allows you to test sessions in a cluster. The application was originally included with the GlassFish application server samples.

The Maven project for the ClusterJsp application is included in the zip file for this OBE, in the maven directory. The application consists of three JSP files. To see them, navigate to the maven/src/main/webapp directory:

  • index.jsp: Included for test purposes.

  • HaJsp.jsp: The main session testing script. Load this page to test and see session information. In addition, the page allows you to add session data to a session.

  • ClearSession.jsp: This script is called from HaJsp.jsp to reset the session and its data.

The web.xml configuration file contains some important information. Navigate to maven/src/main/webapp/WEB-INF to see the file. It should contain the following information.

web.xml


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
  "http://java.sun.com/dtd/web-app_2_3.dtd">

<web-app>
  <display-name>clusterjsp</display-name>
  <distributable/>
  <servlet>
    <servlet-name>HaJsp</servlet-name>
    <display-name>HaJsp</display-name>
    <jsp-file>/HaJsp.jsp</jsp-file>
  </servlet>
  <servlet>
    <servlet-name>ClearSession</servlet-name>
    <display-name>ClearSession</display-name>
    <jsp-file>/ClearSession.jsp</jsp-file>
  </servlet>
  <session-config>
    <session-timeout>30</session-timeout>
  </session-config>
  <welcome-file-list>
    <welcome-file>HaJsp.jsp</welcome-file>
  </welcome-file-list>
</web-app>

Note: Notice the <distributable/> tag. This indicates to Tomcat that this application can be distributed in a Tomcat cluster. This is required for session sharing.
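
Session replication also requires that any object placed in the session implement java.io.Serializable. The following is a minimal sketch, with hypothetical attribute names, of the kind of call the JSP pages make:

    <%-- Attributes stored in a <distributable/> web app must be
         Serializable so the DeltaManager can replicate them --%>
    <% session.setAttribute("sampleName", "sampleValue"); // String is Serializable %>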

 

Deploying and Testing the ClusterJsp Application

Building and Deploying the Application

Deploying a test application is the next step in the process. To deploy the ClusterJsp application perform the following steps.

  1. Open a command prompt window.

  2. Navigate to the maven directory.

  3. Run the Maven command to build the project: mvn clean package

  4. Copy the target\clusterjsp.war file into your TomcatHome\webapps directory.

Starting Tomcat and Testing the Application

When Tomcat starts, it will unpack the clusterjsp.war file into a clusterjsp directory. Perform the following steps to start Tomcat and test the ClusterJsp application.

  1. Open a command prompt window.

  2. Change into the TomcatHome\bin directory.

  3. Start Tomcat in the foreground with the following command:

    catalina.bat run

    Tomcat should start running after a few seconds.

  4. Open a web browser and verify that Tomcat is running using the URL: http://localhost:8080.

  5. Once you have confirmed Tomcat is running, open the HaJsp.jsp page with the URL: http://localhost:8080/clusterjsp/HaJsp.jsp.

    When the page loads, it should look something like the following:

    HaJsp.jsp Sample Page

    You can fill out the form to add name/value pairs to the session.

Once you have confirmed Tomcat is functioning properly, shut down Tomcat by pressing Control-C in the command prompt window.

 

Adding a Second Tomcat Node to the Cluster

With the first Tomcat server set up and working, you can now make a copy of that server and modify its configuration files to create a second server for the cluster. To set up the second server, perform the following steps.

  1. Copy the contents of c:\tom1 to c:\tom2. The first Tomcat server can be copied and modified in this way to create as many additional cluster members as you need.

  2. After copying the directory, you need to update the new Tomcat server so that it runs on different network ports from the first Tomcat server. Make the following changes to the new server.xml file, in order from top to bottom.

    1. For the Server tag change the port attribute to 8006.

    2. Under the Service tag, find the first Connector tag and change its port attribute to 8081. This is the port the server will listen on for connections.

    3. In the same tag, change the redirectPort attribute to 8444.

    4. In the second Connector tag change the port attribute to 8010 and the redirectPort attribute to 8444.

      Note: The redirectPort attribute must match in the two Connector tags.

    5. In the Engine tag, change the jvmRoute attribute value to "worker2". This change allows you to see which worker was selected by the load balancer to serve your request in the HaJsp.jsp page.

    6. Change the Receiver tag's port value to 4001.

    Tom2 server.xml


    <?xml version="1.0" encoding="UTF-8"?>
    <Server port="8006" shutdown="SHUTDOWN">
      <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
      <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

      <GlobalNamingResources>
        <Resource name="UserDatabase" auth="Container"
                  type="org.apache.catalina.UserDatabase"
                  description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  pathname="conf/tomcat-users.xml" />
      </GlobalNamingResources>

      <Service name="Catalina">

        <Connector port="8081" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8444" />
        <Connector port="8010" protocol="AJP/1.3" redirectPort="8444" />

        <Engine name="Catalina" defaultHost="localhost" jvmRoute="worker2">

          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
            <Manager className="org.apache.catalina.ha.session.DeltaManager"
                     expireSessionsOnShutdown="false"
                     notifyListenersOnReplication="true"/>

            <Channel className="org.apache.catalina.tribes.group.GroupChannel">
              <Membership className="org.apache.catalina.tribes.membership.McastService"
                          address="228.0.0.4"
                          port="45564"
                          frequency="500"
                          dropTime="3000"/>

              <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
              </Sender>

              <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                        address="auto" port="4001" autoBind="100"
                        selectorTimeout="5000" maxThreads="6"/>

              <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
            </Channel>

            <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>

            <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
          </Cluster>

          <Realm className="org.apache.catalina.realm.LockOutRealm">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                   resourceName="UserDatabase"/>
          </Realm>

          <Host name="localhost" appBase="webapps"
                unpackWARs="true" autoDeploy="true">

            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log" suffix=".txt"
                   pattern="%h %l %u %t &quot;%r&quot; %s %b" />
          </Host>
        </Engine>
      </Service>
    </Server>

This completes the configuration updates for the second Tomcat node.

 

Starting and Testing your Local Tomcat Cluster

With the Apache server and two Tomcat servers configured, you are ready to test your cluster.

  1. Change into the TomcatHome\bin directory for each Tomcat server and start the server using the catalina script.

    catalina.bat run

  2. Change into the ApacheHome\bin\ directory and start Apache server.

    httpd.exe

    Now the Apache HTTP server is the front end load balancer for the two Tomcat servers.

  3. Test your basic setup.

    1. Open http://localhost in a browser. If you have a default Apache setup, it should respond with "It Works!".

    2. To test whether your cluster is up, open http://localhost/status in your browser. This will display a mod_jk status page about the nodes in the cluster. The following figure provides an example of what the page looks like.

      Output from the mod_jk Status Page
  4. Test your cluster and session sharing.

    1. Open a browser and connect to the HaJsp.jsp page through the load balancer using this URL: http://localhost/clusterjsp/HaJsp.jsp.

      The session information should appear just like in the earlier example.

    2. Open a second browser or a private browsing window and open the same URL: http://localhost/clusterjsp/HaJsp.jsp.

    3. Repeat the previous step until the page is served by worker 2. When that happens you should get data like the following:

      Sample Worker 2 Session Data

      
      HttpSession Information:
      Served From Server: localhost
      Server Port Number: 80
      Executed From Server: Hostname
      Executed Server IP Address: 192.168.2.21
      Session ID: 23BA587217DD06BD8E7D005E5E0A9A46.worker2
      Session Created: Fri Oct 21 09:56:16 MDT 2016
      Last Accessed: Fri Oct 21 09:57:38 MDT 2016
      Session will go inactive in 1800 seconds
      
    4. Add session data that identifies the worker and the session. For example:

      Sample Worker 2 Session Data

      
      HttpSession Information:
      Served From Server: localhost
      Server Port Number: 80
      Executed From Server: Hostname
      Executed Server IP Address: 192.168.2.21
      Session ID: 23BA587217DD06BD8E7D005E5E0A9A46.worker2
      Session Created: Fri Oct 21 09:56:16 MDT 2016
      Last Accessed: Fri Oct 21 09:57:38 MDT 2016
      Session will go inactive in 1800 seconds
      
      Data retrieved from the HttpSession:
      worker2: 23BA = Session 23BA
      

      The session data appears at the bottom of the page. If a cluster node fails, this data should still be available because it is replicated in memory to the other nodes in the cluster.

    5. Stop the second Tomcat server.

      Now the cluster failover kicks in.

    6. Reload your page. If everything is working, the data you created on worker 2 is now served by worker 1. Except for the time, the data should appear as it did before.

      Sample Worker 2 Session Data from Worker 1

      
      HttpSession Information:
      Served From Server: localhost
      Server Port Number: 80
      Executed From Server: Hostname
      Executed Server IP Address: 192.168.2.20
      Session ID: 23BA587217DD06BD8E7D005E5E0A9A46.worker2
      Session Created: Fri Oct 21 09:56:16 MDT 2016
      Last Accessed: Fri Oct 21 09:57:38 MDT 2016
      Session will go inactive in 1800 seconds
      
      Data retrieved from the HttpSession:
      worker2: 23BA = Session 23BA
      

If a server fails, the other servers in the cluster will still serve up the session data.

 

Preparing a Tomcat Instance for Deployment on Oracle Application Container Cloud Service

 

Overview

With a local Tomcat cluster set up and working, we can now look at preparing a Tomcat instance for deployment on Oracle Application Container Cloud Service. First, be aware that the Tomcat instance will run in an Oracle Linux (RedHat) environment inside a container. Therefore, any modifications need to be made to the Unix versions of the Tomcat scripts, not the Windows versions. The following list is an overview of the steps needed to make a Tomcat instance cloud ready.

  • Make a copy of a working Tomcat cluster instance.

  • Create a launch script to set environment variables and then launch Tomcat.

  • Update the Tomcat configuration for the cloud.

  • Configure Oracle Application Container Cloud Service deployment files for your Tomcat application.

  • Compress your application into an application archive for Oracle Application Container Cloud Service.

Once these steps are complete, you can deploy a Tomcat cluster node to Oracle Application Container Cloud Service. That node can be scaled up or down to act as a single cluster system.

 

Setting up the Launch Script and Passing Environment Data to Tomcat

The first task in making Tomcat ready for the cloud is to create a new launch script. In a cloud environment, the application needs to read environment variables from the container so it can launch with the assigned network port and other values. The following steps provide details on this process.

Note: If you wish to test many of the steps from this point forward, you will need to execute Tomcat inside a Linux virtual machine or Linux machine.

  1. Copy the contents of c:\tom1 to a new directory, for example tom-accs. This creates a Tomcat instance you can modify for the cloud.

  2. You will use a new launch script, start.sh, to launch Tomcat in a container.
  3. The start.sh script checks for the existence of any required environment variables. Then, the script uses the sed string processor to substitute configuration values with the environment variables. The server.template.xml file is used as input. After sed processing, the server.xml file is written to the conf directory. Then, the catalina.sh script is called with the run option to launch the application in the foreground. The sample script code for this is provided in the start directory for the project download.

    start.sh Script


    #!/bin/bash

    # Set defaults if env vars are not set

    if [ "$PORT" = "" ]
    then
        PORT="8080"
        echo "PORT set to default $PORT"
    fi

    if [ "$MULTICAST_IP" = "" ]
    then
        MULTICAST_IP="228.0.0.4"
        echo "MULTICAST_IP set to default $MULTICAST_IP"
    fi

    if [ "$MULTICAST_PORT" = "" ]
    then
        MULTICAST_PORT="45564"
        echo "MULTICAST_PORT set to default $MULTICAST_PORT"
    fi

    echo "======"
    echo "Starting with the following values: "
    echo "PORT set to $PORT"
    echo "MULTICAST_IP set to $MULTICAST_IP"
    echo "MULTICAST_PORT set to $MULTICAST_PORT"
    echo "======"

    # Update server.xml with env vars

    sed "s/__PORT__/${PORT}/g; s/__MULTICAST_IP__/${MULTICAST_IP}/g; s/__MULTICAST_PORT__/${MULTICAST_PORT}/g;" tom-accs/conf/server.template.xml > tom-accs/conf/server.xml

    exec tom-accs/bin/catalina.sh run

    The script looks for three environment variables: PORT, MULTICAST_IP, and MULTICAST_PORT. If these variables are not set, then default values for Tomcat are used. They are 8080, 228.0.0.4, and 45564 respectively.
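
    To try the script locally on Linux, you can export any of these variables before launching. This is an illustrative test only (run it from the directory containing start.sh and tom-accs; the values are arbitrary):

    # Illustrative local test: override two defaults, then launch
    export PORT=8081
    export MULTICAST_PORT=45565
    ./start.sh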

  4. For initial testing, make any needed changes to the start.sh script for your local environment.

  5. Save the file.

 

Making Configuration Changes for ACCS

The launch script is set; now you need to update the Tomcat configuration files to use placeholders for the values passed in to the application. The following steps detail the changes you need to make.

  1. Copy the server.xml file to server.template.xml.

  2. Edit the server.template.xml file.

  3. For the first Connector tag under the Service tag, change the port attribute's value to "__PORT__".

  4. For the Engine tag, remove the jvmRoute attribute and its value. It is no longer needed because Oracle Application Container Cloud Service uses Oracle Traffic Director for load balancing instead of the Apache server.

  5. For the <Membership> tag, change the address and port attributes to __MULTICAST_IP__ and __MULTICAST_PORT__ respectively.

    Your server.template.xml file should now look like this.

    server.template.xml


    <?xml version="1.0" encoding="UTF-8"?>
    <Server port="8005" shutdown="SHUTDOWN">
      <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
      <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

      <GlobalNamingResources>
        <Resource name="UserDatabase" auth="Container"
                  type="org.apache.catalina.UserDatabase"
                  description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  pathname="conf/tomcat-users.xml" />
      </GlobalNamingResources>

      <Service name="Catalina">

        <Connector port="__PORT__" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

        <Engine name="Catalina" defaultHost="localhost">

          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
            <Manager className="org.apache.catalina.ha.session.DeltaManager"
                     expireSessionsOnShutdown="false"
                     notifyListenersOnReplication="true"/>

            <Channel className="org.apache.catalina.tribes.group.GroupChannel">
              <Membership className="org.apache.catalina.tribes.membership.McastService"
                          address="__MULTICAST_IP__"
                          port="__MULTICAST_PORT__"
                          frequency="500"
                          dropTime="3000"/>

              <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
              </Sender>

              <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                        address="auto" port="4000" autoBind="100"
                        selectorTimeout="5000" maxThreads="6"/>

              <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
            </Channel>

            <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>

            <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
          </Cluster>

          <Realm className="org.apache.catalina.realm.LockOutRealm">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                   resourceName="UserDatabase"/>
          </Realm>

          <Host name="localhost" appBase="webapps"
                unpackWARs="true" autoDeploy="true">

            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log" suffix=".txt"
                   pattern="%h %l %u %t &quot;%r&quot; %s %b" />
          </Host>
        </Engine>
      </Service>
    </Server>
  6. Save the file.

That completes the configuration changes for the Tomcat server.

 

Updating the ClusterJsp Application

One change needs to be made to the ClusterJsp application for Oracle Application Container Cloud Service. We need to add a line to the HaJsp.jsp file that identifies the container instance that served the current page. Perform the following steps to make the change.

  1. Change into the maven/src/main/webapp directory of the ClusterJsp application.

  2. Edit the HaJsp.jsp file.

  3. After the line that gets the server IP address, add this JSP code:

    <LI>Executed ACCS Container Name: <b><%= System.getenv("APAAS_CONTAINER_NAME") %></b></LI>

    This will provide the system generated container name for the container that served the request for the current page.

  4. Save the file.

  5. Change into the maven directory.

  6. Rebuild the application.

    mvn clean package

  7. Delete the old clusterjsp.war file and clusterjsp directory from the webapps directory of your Tomcat instance.

  8. Copy the newly generated clusterjsp.war file from the project's target directory to the webapps directory of the Tomcat instance.

  9. Start your test instance with the start.sh script so the new version of the ClusterJsp application is deployed.

  10. Test the Tomcat instance to ensure the application is deployed and working.

 

Creating Configuration Files for Deployment

 

Overview

The application is configured and ready for the cloud. Before it can be deployed to Oracle Application Container Cloud Service, a couple of configuration files are needed for deployment. The manifest.json is a required file that contains your launch command and metadata about your application. The deployment.json is an optional file that includes configuration information about your application; alternatively, that configuration can be performed in the Oracle Application Container Cloud Service GUI.

 

Creating a manifest.json File

The manifest.json file provides the launch command used for the application, the Java version, and some metadata. For Tomcat, the start.sh script must call the tom-accs/bin/catalina.sh run command so that the application executes in the foreground. In addition, to enable clustering support, the "isClustered":"true" setting is required. The following is a sample manifest.json for the Tomcat instance.

manifest.json


{
  "runtime":{"majorVersion": "8"},
  "command":"./start.sh",
  "isClustered":"true",
  "notes":"Tomcat Cluster Test"
}

This manifest file assumes that the Tomcat instance will be installed in the ~/tom-accs directory of the container.

Deploying with the Defaults

Since the start.sh script uses default values when no environment variables are set, the application can be deployed with only the manifest.json file. Simply create an application archive as described in the next section and deploy the application. The application will be deployed using the Oracle Application Container Cloud Service assigned port number and the default multicast IP address and port, 228.0.0.4 and 45564.
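
For example, a defaults-only archive could be packaged like this (the same zip syntax as in the archive section below, minus deployment.json):

    zip -r tom-accs-app.zip tom-accs start.sh manifest.json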

Creating Multiple Clusters

A Tomcat cluster is identified by the combination of multicast IP address and multicast port. To create a new cluster, Tomcat servers must be assigned a new IP address and port combination. This is an important point since your Oracle Cloud network infrastructure is shared by your identity domain. Therefore, to run additional clusters a new IP address and port combination must be passed to any new clusters you wish to add to your account.

Because this application uses environment variables for configuration, changing the cluster configuration is straightforward. To assign new values for the multicast IP address and port, simply add a deployment.json file to the application archive.

 

Creating a deployment.json File

The deployment.json is an optional file you can use to include configuration data with your application. In this case, the deployment.json demonstrates how the multicast IP address and port can be configured for a new and separate Tomcat cluster in the Oracle Application Container Cloud Service.

Set the new environment variables as shown:

deployment.json


{
  "memory": "1G",
  "instances": "2",
  "environment": {
    "MULTICAST_IP":"228.0.0.4",
    "MULTICAST_PORT":"45565"
  }
}

Here is a quick review of the fields in the deployment.json file.

  • memory: The size of each instance. One gigabyte in this example.

  • instances: The initial number of instances to create.

  • environment: A list of environment variables to create for this container.

An application distributed with this deployment.json file will run in a different cluster space than the defaults. Setting up additional clusters requires changing the multicast port (or IP address) in the deployment.json file for each additional cluster, as in the hypothetical example below.
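
For instance, a second, independent cluster could ship a deployment.json that differs only in the multicast values (the values below are hypothetical examples):

{
  "memory": "1G",
  "instances": "2",
  "environment": {
    "MULTICAST_IP":"228.0.0.5",
    "MULTICAST_PORT":"45566"
  }
}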

Note: Each cluster you deploy on Oracle Application Container Cloud Service must be assigned its own unique multicast IP address and port combination. If two different applications use the same combination, the two clusters become fused, which will cause problems for your applications.

 

Deploying your Tomcat Instance for Oracle Application Container Cloud Service

 

Creating the Application Archive

To deploy an application to Oracle Application Container Cloud Service your application must be bundled in a properly formatted archive. The archive can be in the zip or tar.gz formats. The basic requirements are pretty straightforward.

  • The start.sh launch script must be in the root directory of the archive.

  • The manifest.json file must be in the root directory of the archive.

  • The deployment.json file, if present, must be in the root directory of the archive.

When the archive is decompressed, the launch script and configuration files are placed in the home directory of the container. For this Tomcat example, the Tomcat instance is placed in the ~/tom-accs directory. Therefore, the command that start.sh ultimately runs is:

tom-accs/bin/catalina.sh run

The command path is relative to the home directory of the container.

Here are some sample commands to create the archive for this example.

zip -r tom-accs-app.zip tom-accs start.sh manifest.json deployment.json

tar cvfz tom-accs-app.tgz tom-accs start.sh manifest.json deployment.json

Where tom-accs is the directory containing the Tomcat instance.

 

Deploying to Oracle Application Container Cloud Service

To deploy to Oracle Application Container Cloud Service follow these steps.

  1. Open the Oracle Application Container Cloud Service Service Console.

  2. Click Create Application.

  3. Select Java as the platform.

  4. Enter a name for your application. Enter a Description as well if you wish.

  5. Select the Upload Application Archive option.

  6. Navigate to your application archive on your local machine and select your archive.

  7. Click Create.

Your application is deployed to Oracle Application Container Cloud Service. You should now be able to scale the application up or down and sessions will be shared between each instance.

 

Deploying Your Application Using the REST API

Before you can deploy your application, you must copy it to the storage service. You need your Oracle Cloud service credentials (username, password, identity domain) to use the REST API. With your credentials, you create cURL scripts to upload your application to the storage service.

  1. Open a command-line window (or Terminal in Linux).

  2. Create a storage container using the following cURL command:

    Note: Replace the placeholders in angle brackets with your storage credentials.

    curl -i -X PUT \
      -u <Storage-User-ID>:<Storage-Password> \
      https://<hostname>/v1/Storage-<Storage-Id-Domain>/tom-accs
    
  3. Upload your application archive (tom-accs-app.zip) to the storage container:

    curl -i -X PUT \
    -u <Storage-User-ID>:<Storage-Password> \
    https://<hostname>/v1/Storage-<Storage-ID-Domain>/tom-accs/tom-accs-app.zip -T <Path-to-local-file>/tom-accs-app.zip
    
  4. Deploy your application with the following script. The example script shows placeholders for the required information:

    curl -i -X POST \
      -u <Username>:<Password> \
      -H "X-ID-TENANT-NAME:<Identity-Domain>" \
      -H "Content-Type: multipart/form-data" \
      -F "name=Tomcat-Cluster" \
      -F "runtime=java" \
      -F "subscription=Monthly" \
      -F "deployment=<Path-to-local-file>/deployment.json" \
      -F "archiveURL=tom-accs/tom-accs-app.zip" \
      -F "notes=Tomcat Cluster application" \
      https://<hostname>/paas/service/apaas/api/v1.1/apps/<Identity-Domain>
    

    Here are a few key points about this example:

    • -H specifies headers that are added to the HTTP request.
    • -F allows cURL to submit data like it's coming from a form (so, POST as the HTTP method).
    • archiveURL specifies where your archive file is located. The URL consists of your application's name, a slash, and the archive's file name.
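
    To verify the deployment afterward, you can list your applications through the same REST endpoint (an illustrative GET request using the same placeholders as above):

    curl -i -X GET \
      -u <Username>:<Password> \
      -H "X-ID-TENANT-NAME:<Identity-Domain>" \
      https://<hostname>/paas/service/apaas/api/v1.1/apps/<Identity-Domain>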

That completes the setup of the cluster. You should now be able to deploy and test the application.

 


Credits

Curriculum Developer: Michael Williams