OpenSSO Policy Agents (PA) 3.0 on GlassFish Cluster

1.0 Introduction

The goal of this document is to enable the reader to protect a Java EE application deployed on a GlassFish Enterprise Server 2.1 cluster using OpenSSO and Policy Agents 3.0. This document is verified and validated with OpenSSO Policy Agents 3.0 and a GFv2.1 EE cluster as described in the next section.

2.0 Product versions

This procedure is verified with an OpenSSO Server Express 8 build and the corresponding Java EE agents 3.0. The GlassFish version is Sun GlassFish Enterprise Server v2.1 (9.1.1) (build b60f-fcs). It assumes that an OpenSSO server and a GFv2.1 cluster are already set up.

3.0 GlassFish Cluster

For simplicity I have created a simple cluster with one node agent and two instances. These instances are load balanced with a Big-IP Load Balancer virtual IP. Creating a GlassFish cluster is out of scope for this document; there are plenty of resources available on the internet, including The Aquarium.

Typical GFv2.1 Cluster Deployment

You should edit config/asadminenv.conf to set AS_ADMIN_SECURE=false, since the cluster profile sets the admin port as non-SSL.
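For example, a minimal sketch of that edit (assuming GNU sed, and using a stand-in copy of the file here; the real asadminenv.conf lives under the GlassFish install's config directory):

```shell
# Stand-in for <GF_INSTALL_DIR>/config/asadminenv.conf (assumed content)
printf 'AS_ADMIN_SECURE=true\n' > asadminenv.conf

# Flip the flag so asadmin talks plain HTTP to the cluster-profile admin port
sed -i 's/^AS_ADMIN_SECURE=.*/AS_ADMIN_SECURE=false/' asadminenv.conf
```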

Once this cluster is set up, you are pretty much ready to install the agents. For illustration purposes I am going to use 'agents30' as my cluster; it has a corresponding 'agents30-config' node in domain.xml (or simply agents30-config if you view it from the UI). This configuration name is the key piece of information for the OpenSSO Policy Agents configuration.

You can verify the cluster setup by accessing the sample application 'clusterjsp' through the LB URL.

4.0 Installing the OpenSSO Policy Agents

The typical GlassFish cluster scenario is depicted in the image below; I have kept it simple for clarity. A deployment can have multiple remote node agents and many clusters along with server instances, and the same procedure applies irrespective of the complexity of the cluster setup. Protecting clustered Java EE applications using OpenSSO policy agents is a two-step process:

  1. Installing the OpenSSO Policy Agents on the Domain Administration Server (DAS) running on Host A
  2. Performing the OpenSSO Policy Agents specific configuration changes on the GlassFish clustered instances

Installing the OpenSSO Policy Agents on the Domain Administration Server is a straightforward procedure; the policy agents installer facilitates this step. The second step is inherently manual and requires meticulous planning and execution: any erroneous execution could potentially render the cluster unusable. Detailed procedures for these two steps are in the following sections.

4.1 Installation of OpenSSO Policy Agents on DAS

The Domain Administration Server (DAS) is the server that manages the cluster where the Java EE application is deployed. To install the policy agents, first obtain the latest Java EE agents for GlassFish v2/Application Server 9.1.
Unzip the binary to a directory that can be accessed by the DAS process. Follow the Policy Agents installation procedure to install and configure the agent for the DAS instance. During this process make sure the server instance name is the default configuration (server).
Log in to your OpenSSO server and create an agent profile for this agent; let us use 'remotecluster' as the agent identity while installing the agents.

Agents Profile

Here is a sample silent-installation response file to configure the policy agents for the DAS instance. You need to invoke

./agentadmin --custom-install --useResponse filename.inf

where filename.inf is

## Agent User Response File START OF FILE
CONFIG_DIR=/export/sun/gf2.1/domains/telco/config
AGENT_PROFILE_NAME=remotecluster
## Agent User Response File END OF FILE
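For example, the response file can be generated and the installer invoked like this (a sketch; the paths and profile name are the example values above and must match your layout):

```shell
# Write the silent-install response file (values from the example above)
cat > filename.inf <<'EOF'
## Agent User Response File START OF FILE
CONFIG_DIR=/export/sun/gf2.1/domains/telco/config
AGENT_PROFILE_NAME=remotecluster
## Agent User Response File END OF FILE
EOF

# Then, from <BASE_DIR>/j2ee_agents/appserver_v9_agent/bin (not run here):
# ./agentadmin --custom-install --useResponse filename.inf
```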

Remember to stop all the domains, instances, and node agents before starting the policy agents installation process. If you fail to do so, you might lose all the OpenSSO policy agent installation changes in the domain.xml of the DAS instance. This happens because the OpenSSO policy agent installer manipulates domain.xml using file editing tools. (Work is in progress to use asadmin for these changes.)

The policy agents configuration files, the appropriate JARs, and the locale files are then copied to the cluster configuration directory of the domain directory that manages the cluster. GlassFish cluster configuration automatically replicates the policy-agent-specific files to the remote cluster instances. This feature saves us from installing the policy agents on each remote GF server instance.

In essence the policy agents installer makes the following changes in the DAS instance.

  • A Java classpath suffix with the agent JARs and locale files is added in domain.xml for the 'server-config' target only (because we selected the 'server' instance at the time of PA installation), not for default-config or the 'agents30-config' target. This distinction is critical: we must still configure the agents for the 'agents30-config' target to protect the application deployed there (agents30 is our cluster configuration). The suffix value is:
  • ${path.separator}/export/sun/j2ee_agents/appserver_v9_agent/lib/agent.jar${path.separator}/export/sun/j2ee_agents/appserver_v9_agent/lib/openssoclientsdk.jar${path.separator}/export/sun/j2ee_agents/appserver_v9_agent/locale${path.separator}/export/sun/j2ee_agents/appserver_v9_agent/Agent_001/config


  • /export/sun is the base directory (BASE_DIR) where you unzipped the agent binary
  • Agent_001 is the agent instance created in section 4.1
  • A JVM option is added for the target 'server-config' to enable policy agents logging:

-Djava.util.logging.config.file=<BASE_DIR>/j2ee_agents/appserver_v9_agent/config/

  • J2EE permissions to read the agent JARs are added in server.policy; the following policy entry is appended:

grant codeBase "file:<BASE_DIR>/j2ee_agents/appserver_v9_agent/lib/*" {

  • Add the agent realm in config/login.conf

agentRealm {
    com.sun.identity.agents.appserver.v81.AmASLoginModule required;
};

  • A new authentication realm 'agentRealm' will be created for the 'server' instance
  • The default authentication realm for the 'server' instance will be set to 'agentRealm'

That is all that happens under the covers when you run the policy agents installer.
Now we need to apply these changes to the cluster configuration so that applications deployed on the cluster can be protected by the OpenSSO Policy Agents.

4.2 Performing PA Configuration on the Cluster

This step involves running a sequence of GF v2.1 EE administrative commands. The sequence and syntax both matter, so please follow the instructions exactly as given. First make sure you have started the DAS instance in order to run the following commands; start only the DAS instance, not the cluster instances.
Log in to the DAS server (Host A) and make sure the asadmin command-line utility is in the PATH.

4.2.1 Copy the agents configuration to cluster configuration directory

From the DAS host, copy the PA's configuration files and libraries to the GF cluster configuration directory so that these files will be available on the remote instances. If this is not done, the PA would have to be installed on each instance that belongs to the cluster. To avoid this duplicated effort, as well as to manage the policy configuration from a centralized location (in this case the DAS), perform the following steps.
Change directory to <BASE_DIR>/j2ee_agents/appserver_v9_agent and run:

 /bin/cp -r  Agent_001  config lib  locale   ${com.sun.aas.instanceRoot}/config/agents30-config/

Any subsequent change that you make in these directories must be copied to the above location; otherwise the cluster will not pick up the updates you make to the agents configuration files.
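The copy step can be sketched end to end; here scratch directories stand in for the real agent home and domain directory, so only the shape of the copy is authoritative:

```shell
# Scratch stand-ins for the real locations (assumptions for this demo)
AGENT_HOME=$(mktemp -d)   # stands in for <BASE_DIR>/j2ee_agents/appserver_v9_agent
DOMAIN_DIR=$(mktemp -d)   # stands in for the DAS domain directory
CFG_DIR="$DOMAIN_DIR/config/agents30-config"

# Fake agent layout, then the actual copy into the cluster config directory
mkdir -p "$AGENT_HOME/Agent_001" "$AGENT_HOME/config" "$AGENT_HOME/lib" "$AGENT_HOME/locale"
mkdir -p "$CFG_DIR"
( cd "$AGENT_HOME" && cp -r Agent_001 config lib locale "$CFG_DIR/" )
```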

4.2.2 Make the configuration changes

Create a text file, referenced by the $P_FILE variable, containing the GF admin password.

 echo 'AS_ADMIN_PASSWORD=secret12' >> $P_FILE

Make sure the asadmin command is in the PATH:

 export PATH=/export/sun/gf2.1/bin/:$PATH
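The two setup steps above can be combined into a small sketch (the password and install path are the example values; keep the password file readable only by you):

```shell
# Create the password file asadmin reads via --passwordfile
P_FILE=$(mktemp)
echo 'AS_ADMIN_PASSWORD=secret12' >> "$P_FILE"
chmod 600 "$P_FILE"

# Put asadmin on the PATH (example install location)
export PATH=/export/sun/gf2.1/bin/:$PATH
```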

The following sequence of commands adds the necessary PA configuration parameters to the agents30 cluster configuration. Once this process is complete you need to restart the whole cluster setup. At this point only the DAS administration server is running, on port 34848; everything else is shut down. In this example all of the commands are executed in a Unix terminal on the DAS host, assuming the admin server is running on HTTP; adjust the syntax to suit your environment.

Set the logging properties:

asadmin create-jvm-options --port 34848 --user admin --passwordfile $P_FILE --target agents30-config "-Djava.util.logging.config.file=\${com.sun.aas.instanceRoot}/config/agents30-config/config/"

Set the COMPAT mode OFF:

asadmin create-jvm-options --port 34848 --user admin --passwordfile $P_FILE --target agents30-config "-DLOG_COMPATMODE=Off"

Create the agent authentication realm:

asadmin create-auth-realm --port 34848 --user admin --passwordfile $P_FILE --classname com.sun.identity.agents.appserver.v81.AmASRealm --property jaas-context=agentRealm --target agents30-config agentRealm

Set the default realm to the agents realm:

asadmin set --port 34848 --user admin --passwordfile $P_FILE agents30-config.security-service.default-realm=agentRealm

Add the classpath suffix:

asadmin set --port 34848 --user admin --passwordfile $P_FILE agents30-config.java-config.classpath-suffix="\${path.separator}\${com.sun.aas.instanceRoot}/config/agents30-config/lib/agent.jar\${path.separator}\${com.sun.aas.instanceRoot}/config/agents30-config/lib/openssoclientsdk.jar\${path.separator}\${com.sun.aas.instanceRoot}/config/agents30-config/locale\${path.separator}\${com.sun.aas.instanceRoot}/config/agents30-config/Agent_001/config"
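The backslash-escaped $ in the command above can be demonstrated in a plain shell. The point is that ${path.separator} and ${com.sun.aas.instanceRoot} are GlassFish tokens, not shell variables, so the shell must pass them through literally:

```shell
# Escaped: the shell passes the literal token through for GlassFish to
# resolve at runtime; unescaped, bash would try (and fail) to expand it
# as a shell parameter.
token="\${path.separator}"
echo "$token"    # prints ${path.separator}
```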

Note that the $ is escaped with a backslash (\); this escaping is required when the command is executed in a shell environment.

Edit the server.policy

If you have enabled J2EE security for the cluster (that is, if the security-manager JVM option is set), you have to grant permission to read the agent's JARs located in the {com.sun.aas.instanceRoot}/config/agents30-config/lib directory. This is done by editing {com.sun.aas.instanceRoot}/config/server.policy: append the following entry to it.

grant codeBase "file:${com.sun.aas.instanceRoot}/config/agents30-config/lib/-" {


This update will be automatically pushed to the remote instances when you restart the cluster after completing this procedure.

Deploy the agentapp.war on the cluster

This is one of the critical steps that you need to perform. Make sure this application is deployed on the cluster, not just on one instance.
For instance, in this example agentapp.war is deployed using the following command:

 ./asadmin deploy --target agents30  --host  --port 34848 --availabilityenabled=true /export/sun/j2ee_agents/appserver_v9_agent/etc/agentapp.war

This application is required for the agents to receive notifications, and it is also required to perform Cross Domain SSO.

5.0 Verification of PA configuration

Once you complete section 4.x, the cluster is ready to be tested. To test the Java EE policy agents, there is a sample called agentsample.ear shipped with the PA binary. You have to deploy this EAR file into your cluster. This can be done by simply invoking 'asadmin' with the deploy option on the host where the DAS is running.

 ./asadmin deploy --target agents30 --port 34848 --availabilityenabled=true /export/sun/j2ee_agents/appserver_v9_agent/sampleapp/dist/agentsample.ear

Now log in to the OpenSSO server and navigate to the J2EE agent identity 'remotecluster'. In the property labeled Agent Filter Mode, remove the current value ALL and add the value SSO_ONLY. This requires only authentication for resources accessed through the cluster URL. When you access this URL the cluster will redirect you to your OpenSSO server; with a valid user name/password pair you will get access to the page.

You can do much more with this sample, such as exercising Java EE programmatic and declarative security. You can find more by reading the readme under the /export/sun/j2ee_agents/appserver_v9_agent/sampleapp directory.

Make sure to restart the DAS and the cluster together with the node agent to get these configuration changes propagated. Even though the documentation says that changes will be published to the nodes automatically, I needed to supply the --syncinstances=true option while starting the node agent; only then could I see the configuration changes reflected in the remote instances.


Creating the cluster

echo 'AS_ADMIN_PASSWORD=secret12' >> $P_FILE
GF_INSTALL_DIR/bin/asadmin create-domain --adminport 34848 --user admin --passwordfile $P_FILE --interactive=false --profile cluster telco
GF_INSTALL_DIR/bin/asadmin start-domain --user admin --passwordfile $P_FILE telco
GF_INSTALL_DIR/bin/asadmin create-node-agent --user admin --port 34848 --interactive=false --passwordfile $P_FILE telco-nodeagent
GF_INSTALL_DIR/bin/asadmin create-cluster --port 34848 agents30
GF_INSTALL_DIR/bin/asadmin create-instance --port 34848 --nodeagent telco-nodeagent --systemproperties HTTP_LISTENER_PORT=38080 --cluster agents30 sales
GF_INSTALL_DIR/bin/asadmin create-instance --port 34848 --nodeagent telco-nodeagent --systemproperties HTTP_LISTENER_PORT=38081 --cluster agents30 eng
GF_INSTALL_DIR/bin/asadmin start-node-agent --user admin --interactive=false --passwordfile $P_FILE telco-nodeagent
GF_INSTALL_DIR/bin/asadmin deploy --target agents30 --port 34848 --availabilityenabled=true samples/quickstart/clusterjsp/clusterjsp.ear
GF_INSTALL_DIR/bin/asadmin start-cluster --port 34848 --interactive=false --passwordfile $P_FILE agents30

To stop and start the cluster

asadmin stop-cluster agents30
asadmin stop-node-agent
asadmin stop-domain telco
asadmin start-domain telco
asadmin start-node-agent --syncinstances=true
asadmin start-cluster agents30
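The restart sequence above can be wrapped in one small script. DRY_RUN here just echoes the commands so the ordering can be checked without a running GlassFish install; unset it on the real DAS host (domain and cluster names match the example):

```shell
# Run or echo an asadmin command, preserving the required ordering
run() {
    if [ -n "${DRY_RUN:-}" ]; then
        echo "asadmin $*"
    else
        asadmin "$@"
    fi
}

DRY_RUN=1   # unset this on the real DAS host

run stop-cluster agents30
run stop-node-agent
run stop-domain telco
run start-domain telco
run start-node-agent --syncinstances=true
run start-cluster agents30
```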