Wednesday, September 12, 2012

Do I Need to Worry About the Availability and Recovery of WebLogic Transaction Logs?


When planning a WebLogic deployment that places a significant emphasis on High Availability or Disaster Recovery, it may be necessary to preserve WebLogic's Transaction Logs, to enable business-critical I.T. systems to be recovered to a correct and consistent state, following a system crash.

You may ask: What are WebLogic Transaction Logs?

Every WebLogic server has a persistent store (either on a file-system or in a database) to record information about the in-flight global transactions it co-ordinates. This is the Transaction Log, or TLOG for short. In the TLOG, WebLogic records each global transaction that has been flagged to commit but may not yet have committed in all the affected back-end data-stores. A global transaction is a special type of transaction, where the host application has encompassed a set of updates to two or more different data-stores as a seemingly single atomic operation. These data-stores could be relational databases, message queues or enterprise information systems, for example. A global transaction should either succeed or fail as a whole, without leaving any of the incorporated data-stores in an inconsistent state and as such, global transactions have ACID properties. When WebLogic co-ordinates a global transaction, it uses a Two-Phase-Commit (2PC) protocol to interact with the data-store managers (called resource managers). The interface between the transaction manager (e.g. WebLogic) and each resource manager (e.g. a database) is defined by the XA industry standard. To summarise, when processing global transactions, the transaction manager needs to persist its commit decision somewhere, and in WebLogic's case, this is in its TLOG.
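
To make that concrete, here is a minimal sketch (not taken from any particular product) of what a global transaction looks like from application code: a single unit of work that updates a database and sends a JMS message, with WebLogic co-ordinating both via 2PC. The JNDI names used ('jdbc/OrdersXADS', 'jms/XAConnFactory', 'jms/AuditQueue') are illustrative assumptions, standing in for XA-enabled resources configured in your own domain.

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class OrderWriter {
   public void placeOrder(String orderId) throws Exception {
      InitialContext ctx = new InitialContext();
      UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
      DataSource ds = (DataSource) ctx.lookup("jdbc/OrdersXADS");                // XA-enabled data source (assumed name)
      QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/XAConnFactory");  // XA connection factory (assumed name)
      Queue queue = (Queue) ctx.lookup("jms/AuditQueue");

      utx.begin();                                    // WebLogic's transaction manager starts co-ordinating
      try {
         Connection con = ds.getConnection();         // enlists the database in the global transaction
         PreparedStatement ps = con.prepareStatement("INSERT INTO ORDERS (ID) VALUES (?)");
         ps.setString(1, orderId);
         ps.executeUpdate();
         ps.close();
         con.close();

         QueueConnection qc = qcf.createQueueConnection();   // enlists the JMS provider too
         QueueSession session = qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
         session.createSender(queue).send(session.createTextMessage("Order placed: " + orderId));
         qc.close();

         utx.commit();    // 2PC: the commit decision is recorded in the server's TLOG,
                          // then both resource managers are instructed to commit
      } catch (Exception e) {
         utx.rollback();  // either everything commits or everything rolls back
         throw e;
      }
   }
}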


So, what if WebLogic didn't persist transaction commit decisions?

If global transaction commit decisions are not persisted and the system fails, then under heavy load it is very likely that at least some transactions will still be in-flight and, temporarily at least, in-doubt. For each such transaction, the updates to one back-end data-store may have committed, while the updates to another data-store in the same transaction may not yet have been instructed to commit (i.e. those updates are still pending). The system as a whole will have data in an inconsistent state. Once the failed parts of the system have been re-started, the data-stores holding pending updates will have no way of knowing whether those updates should be committed or rolled back. The data in the system will then be permanently in an incorrect and inconsistent state. Even with manual intervention, an administrator will have no way of knowing whether to commit or roll back the pending updates in a data-store, and so the correctness of the complete I.T. system will be forever in doubt.

So, how does WebLogic recover pending transactions?

WebLogic's TLOGs are a key component of avoiding data inconsistency. Following a system crash, WebLogic's built-in Transaction Recovery Service automatically determines the global transactions that are still pending, by reading the TLOG and polling the relevant back-end data-stores. WebLogic is then able to instruct the back-end data-stores to either commit or roll back each pending transaction. Once WebLogic's Transaction Recovery Service completes, the overall system will have been restored to a healthy and consistent state.

So, TLOGs are valuable assets then, that need to be preserved?

If you value and strive to protect and preserve the data in the databases and other data-stores in your enterprise, and your WebLogic hosted applications use global transactions, then you need to value and protect your WebLogic TLOGs equally, as the two are inter-related. You need to ensure the persistent store for your TLOG is located on highly available file-system storage or in a highly available database, and can survive scenarios such as irrevocable damage to a hard-disk platter or even the loss of a whole data-centre. You also need to plan for the ability to restore the WebLogic server referencing its highly available TLOG during system recovery, to enable WebLogic to push the in-flight transactions through to completion and return the overall system to a consistent state.
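
As a rough illustration (a sketch, not a prescription), relocating a server's default persistent store, which is where the TLOG lives when a file-based store is used, onto shared storage can be done with a few lines of WLST. The server name, admin URL, credentials and the '/u01/shared/tlogs' path below are assumptions to substitute with your own:

# Minimal WLST sketch - names, URL, credentials and directory are assumptions.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
# The default persistent store (which holds the TLOG for a file-based configuration)
# is a child of the server MBean in the edit tree.
cd('/Servers/ManagedServer1/DefaultFileStore/ManagedServer1')
cmo.setDirectory('/u01/shared/tlogs/ManagedServer1')
save()
activate()
disconnect()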

For multi-data-centre deployments, it may be necessary to have a TLOG replicated between two data-centres. In the event of a complete data-centre failure, you can bring the WebLogic servers up in the other data centre, referencing the replicated copy of their TLOG, to allow the pending transactions to be correctly committed or rolled-back.

For enterprises that use WebLogic with global transactions, the preservation and recovery of TLOGs will need to be a critical component of the overall disaster recovery process.

So, investing in technologies and processes to preserve and recover TLOGs is absolutely necessary for all deployments?

Before you go ahead and invest in putting in place highly available storage, multi-site replication technologies and disaster recovery practices for TLOGs, it's worth considering that not all WebLogic deployments use global transactions. You need to be cognisant of this and perform an analysis of your WebLogic deployments, because such an investment cost may not be necessary for your particular system.

If your WebLogic deployed applications are bespoke JEE applications, developed in-house or by a partner, then the application's developers will be able to tell you whether global "XA" transactions are employed or not.

If the WebLogic deployed application is built using Oracle Middleware or runs Oracle Applications, then XA global transactions may be used under the covers, depending on the specific product. You may need to consult the Oracle product documentation or contact Oracle Support. For example, Oracle SOA Suite inherently uses global transactions to track activity transitions belonging to running business processes, so if you value the integrity of these business processes and the data-stores they update, you need to value and protect the TLOGs.

If the WebLogic deployed application is provided by an ISV, you will need to study the ISV's product documentation and/or consult the ISV's Support organisation, to determine if global transactions are employed.

Final Words.....

It is worth stating that such transaction persistence and recovery requirements, and the implied investment required, are not unique to WebLogic. A TLOG is just a mechanism that WebLogic uses. Any enterprise that uses global transactions, regardless of technology vendor, will need to make similar considerations and investments, concerning the provision of highly available storage, multi-site replication technologies and disaster recovery practices.


Song for today: Miles Iz Ded by The Afghan Whigs

Monday, January 9, 2012

New Exa and Engineered Systems blog to watch

Just a quick post to say look out for a new technical blog by my friend and colleague, Donald Forbes. Given Don's expertise, inside track and ready access to real Exa* machines, there should be lots of insightful technical information coming over the next few months, especially on Exalogic and SPARC SuperCluster, so watch this (that?) space.


Song for today: Slipping Husband by The National

Wednesday, October 5, 2011

New release of DomainHealth - 1.0

I've just released a new version of DomainHealth which, by virtue of being the next increment after 0.9, makes this the grand 1.0 release! No great fanfare or massive new features, but this should [hopefully] be a nice stable release to rely on and live up to its 1.0 billing! :D

You can download DomainHealth 1.0 from here: http://sourceforge.net/projects/domainhealth

One new feature in 1.0 that is worth highlighting though, is the new optional capability to collect and show Processor, Memory and Network statistics from the underlying host Operating System and Machine that WebLogic is running on. DomainHealth only enables this feature if you've also deployed another small open source JEE application that I've created, called WLHostMachineStats. Below is a screenshot of DomainHealth 1.0 in action, displaying graphs of some of these host machine statistics (in this case it's running on an Exalogic system).

(click image for larger view)
WLHostMachineStats is a small agent (a JMX MBean deployed as a WAR file) that runs in every WebLogic Server in a WebLogic domain. It is used to retrieve OS data from the underlying machine hosting each WebLogic Server instance. For more information, including deployment instructions, and to download it, go to: http://sourceforge.net/projects/wlhostmchnstats

Here's another screenshot, just for fun:

(click image for larger view)
Some things to bear in mind....

...the WLHostMachineStats project is still in its infancy and currently places restrictions on what specific environments are supported. Right now, WLHostMachineStats can only be used for WebLogic domains running on Linux Intel (x86) 64-bit based machines (including Exalogic) and only for versions 10.3.0 or greater of WebLogic. This is partly because WLHostMachineStats relies on the SIGAR open source utility, that uses native C libraries and JNI. I hope to widen the list of supported platforms for WLHostMachineStats in the future.


Song for today: Dynamite Steps by The Twilight Singers

Friday, September 2, 2011

New release of DomainHealth (v0.9.1)

I've just released a new version of DomainHealth (version 0.9.1). This is primarily a maintenance/bug-fix release.

DomainHealth is an open source "zero-config" monitoring tool for WebLogic. It collects important server metrics over time, archives these into CSV files and provides a simple web interface for viewing graphs of current and historical statistics. It also works nicely on Exalogic.

To download (and see release notes) go to the project home (select 'files' menu option) at: http://sourceforge.net/projects/domainhealth/



Song for today: Ascension Day by Talk Talk

Thursday, March 3, 2011

Exalogic DCLI - run commands on all compute nodes at once

Exalogic includes a tool called DCLI (Distributed Command Line Interface) that can be used to run the same commands on all or a subset of compute nodes in parallel. This saves a lot of time and helps avoid the sorts of silly errors that often occur when running a command over and over again. DCLI is a tool that originally came with Exadata (as documented in the Oracle Exadata Storage Server Software User's Guide - E13861-05, chapter 9), and is now incorporated into the new Exalogic product too. It is worth noting that if you are ever involved in performing the initial configuration of a new Exalogic rack, using OneCommand to configure the Exalogic's networking, then under the covers OneCommand uses DCLI to perform a lot of its work.
Introduction to Exalogic's DCLI
The Oracle Enterprise Linux 5.5 based factory image running on each Exalogic compute node has the exalogic.tools RPM package installed. This contains the DCLI tool in addition to other useful Exalogic command line utilities. Running 'rpm -qi exalogic.tools' on a compute node shows the following package information:
Name : exalogic.tools
Version : 1.0.0.0
Release : 1.0
When you run 'rpm -ql exalogic.tools' you will see that the set of command line utilities are all placed in a directory at '/opt/exalogic.tools'. Specifically, the DCLI tool is located at '/opt/exalogic.tools/tools/dcli'.

Running DCLI from the command line with the '-h' argument will present you with a short help summary of DCLI and the parameters it can be given:

# /opt/exalogic.tools/tools/dcli -h

If you look at the contents of the '/opt/exalogic.tools/tools/dcli' file you will see that it is actually a Python script that, essentially, determines the list of compute nodes that a supplied command should be applied to and then runs the supplied command on each compute node using SSH under the covers. Conveniently, the Python script also captures the output from each compute node and prints it out in the shell that DCLI was run from. The output from each individual compute node is prefixed by that particular compute node's name so that it is easy for the administrator to see if something untoward occurred on one of the compute nodes only.
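
To illustrate the pattern (this is not the DCLI script itself), a rough shell equivalent of what DCLI does under the covers might look like the following, assuming a 'nodeslist' file as described below:

#!/bin/bash
# Rough illustration only - the real DCLI is a Python script with many more options and checks.
CMD="$*"
for NODE in $(grep -v '^#' nodeslist); do                      # skip commented-out hostnames
  ssh root@"${NODE}" "${CMD}" 2>&1 | sed "s/^/${NODE}: /" &    # run remotely, prefix output with the node name
done
wait                                                           # wait for all nodes to complete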

A good way of testing DCLI is to SSH to your nominated 'master' compute node in the Exalogic rack (eg. the 1st one), as root user, and create a file (eg. called 'nodeslist') which contains the hostnames of all the compute nodes in the rack (separated by newlines). For example, my nodeslist file has the following entries in the first 3 lines:

el01cn01
el01cn02
el01cn03
....

Note: You can comment out one or more hostnames with a hash ('#') if you want DCLI to ignore particular hostnames.

As a reminder on Exalogic compute node naming conventions, 'el01' is the Exalogic rack's default name and 'cn01' indicates the number of the specific compute node in that rack.

Once you've created the list of target compute nodes for DCLI to distribute commands to, a nice test is to run a DCLI command that just prints the date-time of each compute node to the shell output of your master compute node (using the /bin/date Linux command). For example:

# /opt/exalogic.tools/tools/dcli -t -g nodeslist /bin/date
Example output:

Target nodes: ['el01cn01', 'el01cn02', 'el01cn03',....]
el01cn01: Mon Feb 21 21:11:42 UTC 2011
el01cn02: Mon Feb 21 21:11:42 UTC 2011
el01cn03: Mon Feb 21 21:11:42 UTC 2011
....

When this runs, you will be prompted for the password for each compute node that DCLI contacts using SSH. The '-t' option tells DCLI to first print out the names of all the nodes it will run the operation on, which is useful for double-checking that you are hitting the compute nodes you intended. The '-g' option provides the name of the file that contains the list of nodes to operate on (in this case, 'nodeslist' in the current directory).


SSH Trust and User Equivalence

To use DCLI without being prompted for a password for each compute node that is contacted, it is preferable to first set up SSH Trust between the master compute node and all the other compute nodes. DCLI calls this "user equivalence"; a named user on one compute node will then be assumed to have the same identity as the same named user on all other compute nodes. On your nominated 'master' compute node (eg. 'el01cn01'), as root user, first generate an SSH public-private key pair for the root user. For example:

# ssh-keygen -N '' -f ~/.ssh/id_dsa -t dsa

This places the generated public and private key files in the '.ssh' sub-directory of the root user's home directory (note, '' in the command is two single quotes)

Now run the DCLI command with the '-k' option as shown below, which pushes the current user's SSH public key to each other compute node's '.ssh/authorized_keys' file to establish SSH Trust. You will again be prompted to enter the password for each compute node, but this will be the last time you need to. With the '-k' option, each compute node is contacted sequentially rather than in parallel, to give you a chance to enter the password for each node in turn.

# /opt/exalogic.tools/tools/dcli -t -g nodeslist -k -s "\-o StrictHostKeyChecking=no"

In my example above, I also pass the SSH option 'StrictHostKeyChecking=no' so you avoid being prompted with the standard SSH question "Are you sure you want to continue connecting (yes/no)" for each compute node that is contacted. Each contacted compute node is then added to the master compute node's list of SSH known hosts, so that this yes/no question will never occur again.

Once the DCLI command completes, you have established SSH Trust and User Equivalence. Any subsequent DCLI commands that you issue will run without you being prompted for passwords.

You can then run the original date-time test again, to satisfy yourself that SSH Trust and User Equivalence are indeed established between the master compute node and each other compute node and that no passwords are prompted for.

# /opt/exalogic.tools/tools/dcli -t -g nodeslist /bin/date

Useful Examples

Now let's have a look at some example DCLI commands that you might need to issue for your new Exalogic system.

Example 1 - Add a new OS group to each compute node called 'oracle' with group id 500:

# /opt/exalogic.tools/tools/dcli -t -g nodeslist groupadd -g 500 oracle

Example 2 - Add a new OS user to each compute node called 'oracle' with user id 500 as a member of the new 'oracle' group:

# /opt/exalogic.tools/tools/dcli -t -g nodeslist useradd -g oracle -u 500 oracle

Example 3 - Set the password to 'welcome1' for the OS 'root' user and the new 'oracle' user on each compute node (this uses another feature of DCLI where, if multiple commands need to be run in one go, they can be added to a file, which I tend to suffix with '.scl' in my examples - 'scl' is the convention for 'source command line', and the '-x' parameter is provided to tell DCLI to run commands from the named file):

# vi setpasswds.scl
echo welcome1 | passwd root --stdin
echo welcome1 | passwd oracle --stdin
# chmod u+x setpasswds.scl
# /opt/exalogic.tools/tools/dcli -t -g nodeslist -x setpasswds.scl

Example 4 - Create a new mount point directory and definition on each compute node for mounting the common/general NFS share which exists on Exalogic's ZFS Shared Storage appliance (the hostname of the HA shared storage on Exalogic's internal InfiniBand network in my example is 'el01sn-priv') and then from each compute node, permanently mount the NFS Share:

# /opt/exalogic.tools/tools/dcli -t -g nodeslist mkdir -p /u01/common/general
# /opt/exalogic.tools/tools/dcli -t -g nodeslist chown -R oracle:oracle /u01/common/general
# vi addmount.scl
cat >> /etc/fstab << EOF
el01sn-priv:/export/common/general /u01/common/general nfs rw,bg,hard,nointr,rsize=131072,wsize=131072,tcp,vers=3 0 0
EOF
# chmod u+x addmount.scl
# /opt/exalogic.tools/tools/dcli -t -g nodeslist -x addmount.scl
# /opt/exalogic.tools/tools/dcli -t -g nodeslist mount /u01/common/general


Running DCLI As Non-Root User

In the default Exalogic set-up, DCLI executes as root user when issuing all of its commands, regardless of which OS user's shell you enter the DCLI command from. Although root access is often necessary for creating things like OS users, groups and mount points, it is not desirable if you just want to use DCLI to execute non-privileged commands under a specific OS user on all compute nodes. For example, as a new 'coherence' OS user, you may want the ability to run a script that starts a Coherence Cache Server instance on every one of the compute nodes in the Exalogic rack, in one go, to automatically join the same Coherence cluster.

To enable DCLI to be used under any OS user and to run all its distributed commands on all compute nodes, as that OS user, we just need to make a few simple one-off changes on our master compute node where DCLI is being run from...

1. As root user, allow all OS users to access the Exalogic tools directory that contains the DCLI tool:

# chmod a+x /opt/exalogic.tools/tools

2. As root user, change the permissions of the DCLI tool to be executable by all users:

# chmod a+x /opt/exalogic.tools/tools/dcli

3. As root user, modify the DCLI Python script (/opt/exalogic.tools/tools/dcli) using 'vi' and replace the line....

USER_ID="root"

...with the line...

USER_ID=pwd.getpwuid(os.getuid())[0]

This script line uses some Python functions to set the DCLI user id to the name of the current OS user running the DCLI command, rather than the hard-coded 'root' username.

4. Whilst still editing the file using vi, add the following Python library import command near the top of the DCLI Python script to enable the 'pwd' Python library to be referenced by the code in step 3.

import pwd

Now log on to your master compute node as your new non-root OS user (eg. the 'coherence' user) and, once you've done the one-off setup of your nodeslist file and SSH-Trust/User-Equivalence (as described earlier), you will happily be able to run DCLI commands across all compute nodes as your new OS user.

For example, for a test Coherence project I've been playing with recently, I have a Cache Server 'start in-background' script in a Coherence project located on my Exalogic's ZFS Shared Storage. When I run the script using the DCLI command below, from my 'coherence' OS user shell on my master compute node, 30 Coherence cache server instances are started immediately, almost instantly forming a cluster across the compute nodes in the rack.

# /opt/exalogic.tools/tools/dcli -t -g nodeslist /u01/common/general/my-coh-proj/start-cache-server.sh
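
For reference, the contents of a start script like this might look roughly as follows. It is only a sketch, with the Coherence install location, heap sizes, cluster name and cache configuration path all being assumptions rather than the actual script used here:

#!/bin/bash
# Illustrative start-cache-server.sh sketch - adjust paths and JVM settings for your environment.
COH_HOME=/u01/common/general/coherence
java -server -Xms2g -Xmx2g \
     -Dtangosol.coherence.cluster=testcluster \
     -Dtangosol.coherence.cacheconfig=/u01/common/general/my-coh-proj/cache-config.xml \
     -cp ${COH_HOME}/lib/coherence.jar \
     com.tangosol.net.DefaultCacheServer > /tmp/cache-server-$(hostname).log 2>&1 &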

Just for fun I can run this again to allow 30 more Coherence servers to start-up and join the same Coherence cluster, now containing 60 members.


Summary

As you can see DCLI is pretty powerful yet very simple in both concept and execution!


Song for today: Death Rays by Mogwai

Sunday, January 23, 2011

Exalogic Software Optimisations

[Update 19-March-2011 - this blog entry is actually a short summary of a much more detailed Oracle internal document I wrote in December 2010. A public whitepaper using the content from my internal document has now been published on Oracle's Exalogic home page (see the "White Papers" tab on the right-hand side of the home page); for the public version, a revised introduction, summary and set of diagrams have been contributed by Oracle's Exalogic Product Managers.]

For version 1.0 of Exalogic there are a number of Exalogic-specific enhancements and optimisations that have been made to the Oracle Application Grid middleware products, specifically:
  • the WebLogic application server product;
  • the JRockit Java Virtual Machine (JVM) product;
  • the Coherence in-memory clustered data-grid product.
In many cases, these product enhancements address performance limitations that are simply not apparent on general purpose hardware using Ethernet based networking; typically, these limitations only manifest when running on Exalogic's high-density compute nodes with InfiniBand's fast-networking infrastructure. Most of these enhancements are designed to enable the benefits of the high-end hardware components that are unique to Exalogic to be utilised to the full. This results in a well balanced hardware/software system.

I find it useful to categorise the optimisations in the following way:
  1. Increased server scalability, throughput and responsiveness. Improvements to the networking, request handling, memory and thread management mechanisms, within WebLogic and JRockit, enable the products to scale better on the high-multi-core compute nodes that are connected to the fast InfiniBand fabric. WebLogic will use Java NIO based non-blocking server socket handlers (muxers) for more efficient request processing, multi-core aware thread pools and shared byte buffers to reduce data copies between sub-system layers. Coherence also includes changes to ensure more optimal network bandwidth usage when using InfiniBand networking.
  2. Superior server session replication performance. WebLogic's In-Memory HTTP Session Replication mechanism is improved to utilise the large InfiniBand bandwidth available between clustered servers. A WebLogic server replicates more of the session data in parallel, over the network to a second server, using parallel socket connections (parallel "RJVMs") instead of just a single connection. WebLogic also avoids a lot of the unnecessary processing that usually takes place on the server receiving session replicas, by using "lazy de-serialisation". With the help of the underlying JRockit JVM, WebLogic skips the host node's TCP/IP stack, and uses InfiniBand's faster “native” networking protocol, called SDP, to enable the session payloads to be sent over the network with lower latency. As a result, for stateful web applications requiring high availability, end-user requests are responded to far quicker.
  3. Tighter Oracle RAC integration for faster and more reliable database interaction. For Exalogic, WebLogic includes a new component called “Active Gridlink for RAC” that provides application server connectivity to Oracle RAC clustered databases. This supersedes the existing WebLogic capability for Oracle RAC connectivity, commonly referred to as “Multi-Data-Sources”. Active Gridlink provides intelligent Runtime Connection Load-Balancing (RCLB) across RAC nodes based on the current workload of each RAC node, by subscribing to the database's Fast Application Notification (FAN) events using Oracle Notification Services (ONS). Active Gridlink uses Fast Connection Failover (FCF) to enable rapid RAC node failure detection for greater application resilience (using ONS events as an input). Active GridLink also allows more transparent RAC node location management with support for SCAN and uses RAC node affinity for handling global (XA) transactions more optimally. Consequently, enterprise Java applications involving intensive database work, achieve a higher level of availability with better throughput and more consistent response times.
  4. Reduced Exalogic to Exadata response times. When an Exalogic system is connected directly to an Exadata system (using the built-in Infiniband switches and cabling), WebLogic is able to use InfiniBand's faster “native” networking protocol, SDP, for JDBC interaction with the Oracle RAC database on Exadata. This incorporates enhancements to JRockit and the Oracle Thin JDBC driver in addition to WebLogic. With this optimisation, an enterprise Java application that interacts with Exadata, is able to respond to client requests quicker, especially where large JDBC result sets need to be passed back from Exadata to Exalogic.
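
As a rough illustration of point 4, the SDP path is typically engaged through the data source's connection descriptor rather than through application code. A sketch of what such a JDBC URL can look like is shown below; the host, port and service name are assumptions, and the exact settings (including any JVM flags needed to enable SDP support in the JDBC driver) should be taken from the Exalogic/Exadata documentation:

jdbc:oracle:thin:@(DESCRIPTION=
    (ADDRESS=(PROTOCOL=sdp)(HOST=exadb01-ibvip.example.com)(PORT=1522))
    (CONNECT_DATA=(SERVICE_NAME=myservice)))
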
To summarise, Exalogic provides a high performance, highly redundant hardware platform for any type of middleware application. If the middleware application happens to be running on Oracle's Application Grid software, further significant performance gains will be achieved.


Song for today: Come to Me by 65daysofstatic

Friday, December 10, 2010

Exalogic downloads and documentation links

Now that Exalogic has been released, the main Exalogic documentation is available at: http://download.oracle.com/docs/cd/E18476_01/index.htm

Worth particular attention is the "Machine Owner's Guide" and the "Enterprise Deployment Guide".

The Machine Owner's Guide will give you a good idea of the machine's internal specifications as well as the unit's external dimensions, power consumption needs, cooling needs, multi-rack cabling configurations, etc.

The Enterprise Deployment Guide (EDG) will point you in the right direction if you want to install and configure the Application Grid products on Exalogic in an optimal way for performance and high availability.


If you are about to take shipment of Exalogic and need copies of the software, then these can be accessed from the Oracle eDelivery website, using the following steps:
  • Browse to the eDelivery site at http://edelivery.oracle.com/
  • Press "Continue" link
  • Submit the requested user info when prompted, accepting the restrictions
  • In the resulting search page, for the Product Pack field, select "Oracle Fusion Middleware", and for Platform field select "Linux x86-64"
  • In the results page, press the link for "Oracle Exalogic Elastic Cloud Software 11g Media Pack"
The Exalogic downloads include:
  • Compute Node Base Image for Exalogic (parts 1 and 2) - this is the Oracle Enterprise Linux image including the Unbreakable Enterprise Kernel, OFED drivers for InfiniBand connectivity, and various supporting command line utilities
  • Configuration Utilities for Exalogic - this is the set of "Middleware Machine Configurator" tools, including the spreadsheet and accompanying shell scripts to help users perform the base network configuration for all the compute nodes in an Exalogic rack (a.k.a. "OneCommand")
  • Oracle WebLogic Server 11gR1 (10.3.4) - this is the combined WebLogic/JRockit/Coherence .bin installer for Exalogic (Linux x86-64)

Song for today: Cause = Time by Broken Social Scene

Thursday, December 9, 2010

Exalogic 1.0 is here!

General availability of Oracle's brand new Exalogic Elastic Cloud product has just been publicly announced.


Just in case you've somehow missed the buzz and haven't got a clue what Exalogic is, I'll describe it for you a little here...

Exalogic is an integrated hardware and software system that is engineered, tested, and tuned to run enterprise Java applications, as well as native applications, with an emphasis on high performance and high availability (HA). Exalogic incorporates redundant hardware with dense-computing resources, ZFS Shared Storage and InfiniBand networking. This hardware is sized and customised for optimum use by Oracle 'Application Grid' software products, to provide a balanced hardware/software system. Specifically, the WebLogic Application Server, JRockit Java Virtual Machine (JVM) and Coherence In-Memory Data Grid products have been enhanced to leverage some of the unique features of the underlying hardware, for maximum performance and HA.

If you are familiar with Exadata and it being the "database machine", then think of Exalogic as the "middleware machine". Physically linking the two together in a data-centre gives you the foundation for a very high-end Enterprise Java based OLTP solution.

Exalogic is a system rather than an appliance, where users are able to install, develop or run whatever they want as long as it is Linux/Solaris x86-64 compatible. Even though some of the elements of Exalogic, like InfiniBand, are more often found in the supercomputing world, Exalogic is intended as a general purpose system for running enterprise business applications. Exalogic will just appear to the hosted applications as a set of general purpose operating systems and processors with common standards-based networking. This means that, unlike in the supercomputing world, developers don't have to create bespoke software specifically tailored to run on a high-end proprietary platform.

For further information, see the Oracle introduction to Exalogic whitepaper.


Song for today: Whipping Song by Sister Double Happiness

Saturday, August 28, 2010

A simplified view of WLDF

The WebLogic Diagnostic Framework (WLDF) is a very powerful capability that has been in the product since WebLogic 9.0. WebLogic's product documentation does a good job of describing the fine detail of WLDF, but I believe that the high level overview it provides is too verbose and makes WLDF seem more complicated than it is. This is probably one reason why I don't see WLDF used quite as often as it should be, in place of custom coded monitoring solutions.

A couple of years ago I created my own diagram showing WLDF's composition, in an attempt to demonstrate that it's actually a fairly simple concept. However, I never got around to publishing it. Prompted by a recent request, I thought I'd address this omission now, so here is that diagram...

(click image for larger view)

Hopefully this helps to better show the power of WLDF and that it's not as complex as one might think.
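
To give a flavour of how few moving parts are involved, the following is a minimal sketch of a diagnostic module that harvests one attribute of a standard WebLogic runtime MBean every 30 seconds. The module name is an assumption and the namespace/element names should be checked against the weblogic-diagnostics schema for your WebLogic version:

<wldf-resource xmlns="http://xmlns.oracle.com/weblogic/weblogic-diagnostics">
  <name>SimpleHarvestModule</name>
  <harvester>
    <sample-period>30000</sample-period>  <!-- sample every 30 seconds -->
    <harvested-type>
      <name>weblogic.management.runtime.JVMRuntimeMBean</name>
      <harvested-attribute>HeapFreeCurrent</harvested-attribute>
    </harvested-type>
  </harvester>
</wldf-resource>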


Song for today: Maps by Yeah Yeah Yeahs

Wednesday, August 11, 2010

Review of the new JRockit book

As promised, here's my review of the new JRockit book...


Having used JRockit for years (mostly sitting beneath WebLogic) I've been waiting for something like this book to arrive and lift the lid off the JVM to show what's inside. I'm glad to say that I am not disappointed. This book is a must-have for any serious enterprise Java developer or administrator who uses JRockit and wants to better understand how to tune, manage and monitor the JVM. The book also provides the Java developer with a clearer direction on the dos and don'ts for developing a well-behaved application on top of JRockit, especially in regard to memory management and thread management. Even for users of other JVMs, like Hotspot, much of the first half of the book, which concentrates on the JVM's internals, is relevant and offers insight into the workings of Java virtual machines generally.

The first half of the book concentrates on providing in-depth knowledge of JRockit's internals and focusses on:
  • Code Generation (Java bytecode, generation of native code, JRockit's Just In Time compilation strategy with subsequent optimised code re-generation)
  • Memory Management (heap usage, thread-local allocation, garbage collection strategies, deterministic-GC, 32-bit vs 64-bit memory management, potential pitfalls)
  • Threads and Synchronisation (green vs O.S. threads, synchronisation patterns/anti-patterns, code generation strategies for locking/unlocking, thin and fat locks, lock profiling)
  • Benchmarking and Tuning (throughput vs latency, tuning tips, benchmarking tips)
Don't be put off by the early exposure to Java bytecode samples in the book. The bytecode examples are only used liberally and only in a couple of early chapters. It's worth reading these bytecode samples carefully because the points these chapters make will resonate more strongly if you do.

As I read this book, it became evident that the JRockit engineers are extremely passionate about their work and that they live and breathe it. It is re-assuring to know that they are constantly striving to improve the product, based on deep scientific theory yet always with an eye on pragmatism and real-world usage. I'm sure this is why JRockit is fast and powerful whilst remaining easy to manage and run applications on. Throughout the book, the ethos of JRockit's design is very clear: it's an adaptive JVM, where the internal runtime is constantly and automatically being re-evaluated and re-tuned, according to the current nature of the hosted application's usage and load. Reading the book, I can better appreciate the argument that server-side Java is faster than an equivalent native C application. A native application only has one shot, at compile time, to generate optimal native code. JRockit, however, takes the opportunity to revisit this at runtime, when it has a much better idea of actual usage patterns.

The second half of the book provides an introduction and detailed reference into JRockit's rich and powerful management and monitoring toolset, including:
  • JRockit Mission Control (JRMC), composed of the Management Console, the Flight Recorder (plus Runtime Analyzer which it replaces) and the Memory Leak Detector
  • JRockit Command line tool (JRCMD), including in-depth examples of all the important commands
  • JRockit Management APIs, including the Java API for direct in-line access to the JRockit JVM from hosted Java applications plus the JMX version of the API for remote access to the JRockit JVM
These chapters provide an easy to read introduction to the key features of JRockit's rich tools, lowering the barrier to use for newbies. Also, many of the illustrated examples are use-case driven, acting as a handy reference guide to come back to at a later date. For example, if you suspect that you have a memory leak in your application and want to work out how best to locate the root of the leak, using the tools, the Leak Detector chapter will take you by the hand through this process.

Finally, at the end of the book there is an introductory chapter covering JRockit Virtual Edition (VE). JRockit VE enables the JRockit JVM, accompanied by a small JRockit kernel, to be run in a virtualised environment. This runs directly on top of a hypervisor (eg. Oracle VM) without requiring the use of a conventional, heavy-weight, general-purpose operating system such as Linux, Solaris or Windows. Such a fully functional operating system would otherwise burden the system with a layer of unwanted latency. This chapter in particular makes me realise that the JRockit engineers are proverbial Rocket Scientists (or should that be Rockit Scientists? :D ). I defy anyone not to be impressed by the ambition of JRockit VE and the high level of technical expertise that must have gone into developing it!

To summarise, I highly recommend this book. Easy to read yet very insightful. Once you've read it, it will remain as a handy reference to come back to, especially when needing to use the JRMC tools to diagnose issues and tune the JVM and hosted applications.


Song for today: Speed Trials by Elliot Smith

Wednesday, August 4, 2010

New release of DomainHealth WebLogic monitoring tool

I've just released the latest version of DomainHealth - version 0.9.

DomainHealth is an open source "zero-config" monitoring tool for WebLogic. It collects important server metrics over time, archives these into CSV files and provides a simple web interface for viewing graphs of current and historical statistics.

(click image for larger view)
This release includes a new look and feel and a new page navigation mechanism (based on some contributions from Alain Gregoire). Other new features included are Web-App monitoring, EJB monitoring and various minor improvements and tweaks. For a full list of changes see the Release Notes document listed alongside the DomainHealth download files.

You can download DH from the project home page at http://sourceforge.net/projects/domainhealth.

The help docs for DH are at http://sourceforge.net/apps/mediawiki/domainhealth.

I'd like to say a big thank you to Alain Gregoire for his valuable design and code contributions to this version of DomainHealth.


Song for today: Lets Go For A Ride by Cracker

Friday, June 18, 2010

New JRockit book

A new book, "Oracle JRockit: The Definitive Guide", has just been published.

More info is available on the publisher's main landing page for the book.

I've got the book on order and I'll review it here once I've digested it. Judging by the content and the names of the esteemed JRockit engineers that wrote it, I'm expecting this to be an invaluable tome on all things JRockit.


Song for today: Mr.November by The National

Monday, February 22, 2010

WLDF and Spring-generated Custom MBeans

In last week's blog, I mentioned a problem which occurs when trying to configure the WebLogic Diagnostic Framework (WLDF) to reference Spring-generated custom MBeans. Here I will describe how this issue can be addressed in a fairly simple way.

Originally, after deploying a web-app containing a Spring-generated MBean, I had attempted to configure a WLDF module using WebLogic's Admin Console to harvest an attribute of this MBean. However, as shown in the screen-shot below, the WLDF tooling was not detecting this custom MBean type as something that could be monitored.

(click image for larger view)

The Spring-generated custom MBean type is not listed (it should begin with test.management...).

After doing a little digging around, I realised that WebLogic's WLDF documentation, in the section "Specifying Type Names for WebLogic Server MBeans and Custom MBeans", hints at why this does not work. Specifically it states: "...if the MBean is a ModelMBean and there is no value for the MBean Descriptor field DiagnosticTypeName, then the MBean can't be harvested".

Basically, when you try to define a Collected Metric or Watch Rule in a WLDF Diagnostic Module, WLDF needs to know the MBean's implementation class type, because that is how WLDF categorises the MBeans of interest. Even if we don't use the Admin Console, and instead use WLST or hack the XML for the domain config diagnostic module directly, we still have this problem, because we have to declare the MBean implementation type.

In Sun's JMX standard, 3 of the 4 possible JMX MBean types (Standard, Dynamic and Open) are implemented directly by a Java class, which WLDF can automatically detect. However, for the 4th type (Model MBean), no direct Java class is used to define the MBean. Instead, the MBean is defined using metadata. As there is no MBean implementation class for WLDF to base its configuration on, it needs a hint for what the implementation class should be assumed to be. Spring-generated custom MBeans are Model MBeans and thus are affected by this issue. The WLDF documentation states that the 'implementation hint' should take the form of an explicitly declared MBean Descriptor field called DiagnosticTypeName.

The left-hand side of the screen-shot below shows the Spring-generated custom MBean, as seen via JConsole. The Model MBean descriptor is missing the WebLogic-specific field and hence the MBean can't be used by WLDF. The right-hand side of the screen-shot shows the generated MBean after the WLDF-required field DiagnosticTypeName has been included (the rest of this blog will show you how to achieve this).

(click image for larger view)

So, for an adequate solution, I really needed a way for Spring to generate MBeans with this field automatically set to an appropriate value. Looking at the documentation for Spring's MBeanExporter (which I described in my last blog entry), I found that MBeanExporter has an optional property called assembler, which, if not defined, defaults to org.springframework.jmx.export.assembler.SimpleReflectiveMBeanInfoAssembler. This default assembler uses reflection to generate Model MBeans by introspecting the simple Java-bean style classes. I didn't really want to lose the power and simplicity of this, but needed some way to ensure that the generated MBeans included the extra descriptor field. Then I hit upon the idea of extending this Spring assembler class and overriding its populateMBeanDescriptor() method to add the extra field after first calling the overridden method to have the other descriptor fields created as normal. So I implemented the following one-off class.
package customjmx;

import javax.management.Descriptor;
import org.springframework.jmx.export.assembler.SimpleReflectiveMBeanInfoAssembler;

public class WLDFAwareReflectiveMBeanInfoAssembler 
                    extends SimpleReflectiveMBeanInfoAssembler {
   private static final String WLDF_MBEAN_TYPE_DESCPTR_KEY = 
                                           "DiagnosticTypeName";
   private static final String NAME_MBEAN_DESCPTR_KEY = "name";
   private static final String MBEAN_KEYNAME_SUFFIX = "MBean";

   @Override
   protected void populateMBeanDescriptor(Descriptor descriptor, 
                           Object managedBean, String beanKey) {
      super.populateMBeanDescriptor(descriptor, managedBean, 
                                                       beanKey);
      descriptor.setField(WLDF_MBEAN_TYPE_DESCPTR_KEY, 
                descriptor.getFieldValue(NAME_MBEAN_DESCPTR_KEY) 
                                        + MBEAN_KEYNAME_SUFFIX);
   }
}
In my example code, I just take the original Spring POJO class's name and add the suffix 'MBean' to come up with a name which I feel best conveys the way the MBean is implemented, for the benefit of WLDF. In my example, the DiagnosticTypeName descriptor field is created for the MBean with a value of test.management.TestManagerBeanMBean. You could easily implement this sub-class code differently to generate a field value using your own convention.

In my Spring bean definition file (WEB-INF/applicationContext.xml) I declared my custom assembler class, which extends the Spring default assembler class, and then modified the exporter Spring bean to explicitly reference this assembler, as shown below.
<bean id="assembler" class="customjmx.WLDFAwareReflectiveMBeanInfoAssembler"/>

<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter"
                                                      lazy-init="false">
   <property name="beans">
      <map>
         <entry key="com.test:name=TestMgr" value-ref="testMgrBean"/>
      </map>
   </property>
   <property name="server" ref="jmxServerRuntime"/>
   <property name="assembler" ref="assembler"/>
</bean>

This time when I re-deployed my web-app to WebLogic and used JConsole to view it, the extra DiagnosticTypeName field was present in the MBean (see the right-hand side of the screen-shot above).

I tried again to create a custom Harvested Metric for an attribute in my Spring generated custom MBean, using WebLogic's Admin Console, and this time I was able to find and select my MBean type as shown in the screen-shot below.

(click image for larger view)

On the next page, I was then able to see the available attributes of the MBean type to monitor and choose the 'PropA' one, as shown below.

(click image for larger view)

Finally, I was given the option to select the instance of the MBean to monitor, before pressing finish to save the WLDF module, as shown in the screen-shot below.

(click image for larger view)

Once my WLDF module was activated, I then waited a few minutes before using JConsole to change, dynamically at runtime, the value of PropA on my custom MBean. I then went to the Admin Console | Diagnostics | Log File page, and selected HarvestedDataArchive as the log file, to view the WLDF harvested data. The screen-shot below shows the value of the harvested 'PropA' attribute, taken every 30 seconds, with the new value shown at the base of the page.

(click image for larger view)
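
For completeness, if you'd rather define the same collected metric by editing the diagnostic module's XML than by clicking through the console, the resulting entry would look roughly like the following sketch. The module name is an assumption; the type name is the DiagnosticTypeName value generated by the custom assembler, the instance is the example MBean's ObjectName, and the element names should be checked against the weblogic-diagnostics schema for your WebLogic version:

<wldf-resource xmlns="http://xmlns.oracle.com/weblogic/weblogic-diagnostics">
  <name>SpringMBeanHarvestModule</name>
  <harvester>
    <sample-period>30000</sample-period>
    <harvested-type>
      <!-- the DiagnosticTypeName value generated by the custom assembler -->
      <name>test.management.TestManagerBeanMBean</name>
      <harvested-attribute>PropA</harvested-attribute>
      <harvested-instance>com.test:name=TestMgr</harvested-instance>
    </harvested-type>
  </harvester>
</wldf-resource>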

In summary, it is possible to use WLDF to monitor Spring-generated custom MBeans as long as a WebLogic-specific descriptor field is defined for the generated MBean. In this blog, I have shown one way to achieve this, using a simple class that only has to be written once and then re-used and applied for every MBean required in a deployed app.


Song for today: Saints Around My Neck by Come

Tuesday, February 16, 2010

Creating Custom MBeans for WebLogic using Spring

In my previous blog I discussed how WebLogic provides some features to better integrate Spring-enabled apps into the app-server, including WebLogic auto-generated MBeans to monitor Spring specific elements of an application. In this blog, I instead focus on Spring's built-in ability to let developers create their own custom MBeans and how these can then be published to WebLogic's Runtime MBean Server rather than the underlying JVM's default Platform MBean Server.

Why would a developer want to create a custom MBean? Well the developer may want to provide a standards-based management capability for his/her deployed application, to enable third-party tools to manage and monitor the application remotely.

Why would the developer want to publish these MBeans to WebLogic's Runtime MBean Server? Well, in addition to being visible to generic remote JMX client tools (eg. Sun's JConsole tool), the MBeans are then easily accessible from WebLogic specific tools such as the WebLogic Scripting Tool (WLST) and even the WebLogic Diagnostic Framework (WLDF). Also, WebLogic's capabilities can be leveraged to secure the custom MBeans when associated with WebLogic's MBean Servers. For example, one could secure access to the MBeans using the t3s protocol with a valid username and password.
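
As a simple illustration of that remote accessibility, the following sketch shows a standalone JMX client reading an attribute of a custom MBean from WebLogic's Runtime MBean Server over t3. It assumes weblogic.jar is on the client classpath, and the host, port, credentials and ObjectName are the same example values used later in this post:

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class ReadPropA {
   public static void main(String[] args) throws Exception {
      // URL of the server's Runtime MBean Server, reached via t3 and JNDI
      JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
            "/jndi/weblogic.management.mbeanservers.runtime");
      Hashtable<String, Object> env = new Hashtable<String, Object>();
      env.put(Context.SECURITY_PRINCIPAL, "weblogic");
      env.put(Context.SECURITY_CREDENTIALS, "welcome1");
      env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
      JMXConnector connector = JMXConnectorFactory.connect(url, env);
      MBeanServerConnection mbs = connector.getMBeanServerConnection();
      // Read the custom MBean's attribute by its ObjectName
      System.out.println(mbs.getAttribute(new ObjectName("com.test:name=TestMgr"), "PropA"));
      connector.close();
   }
}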

A few months ago, Philippe Le Mouel wrote a great article on how to create custom MBeans and register them with WebLogic, using pretty much standard JavaEE code. As you can see from his article, it takes quite a lot of effort and boilerplate Java code to define an MBean and then register it with the server at start-up.

In contrast, Spring makes it really easy to generate an MBean as part of your developed JavaEE app and avoid a lot of this coding effort. The Spring Reference (chapter 20) describes how to do this in detail. In essence, just include a Java-bean style POJO in your JavaEE web-app, like the following, for example:
package test.management;

public class TestManagerBean {
   public String getPropA() {
      return propA;
   }
  
   public void setPropA(String propA) {
      this.propA = propA;
      System.out.println("PropA set to: " + propA);
   }

   public int getPropB() {
      return propB;
   }

   public void setPropB(int propB) {
      this.propB = propB;
      System.out.println("PropB set to: " + propB);
   }
 
   private String propA;
   private int propB;
}
This example provides two properties to be JMX-enabled (one String, one int). Obviously, in a real-world application, the getter and setter code would reach inside the rest of the application's code to obtain data or change settings.

In our application's Spring WEB-INF/applicationContext.xml file, we can then define our Spring bean in the normal way with some initial values to be injected into the bean's two properties, e.g.:
<bean id="testMgrBean" class="test.management.TestManagerBean">
   <property name="propA" value="Some text"/>
   <property name="propB" value="1000"/>
</bean>
The real add-value that Spring then provides, is the ability for Spring to auto-generate the MBean for us, for inclusion in our deployed app, by using Spring's MBeanExporter capability. This is enabled by adding the following definition to our applicationContext.xml file, for example:
<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter"
                                                      lazy-init="false">
   <property name="beans">
      <map>
         <entry key="com.test:name=TestMgr" value-ref="testMgrBean"/>
      </map>
   </property>
</bean>
Now, when you deploy the app to WebLogic, the MBean is generated and registered with an MBean Server. However, by default, the Spring runtime only really knows about the standard Platform MBean Server in the underlying JVM that Spring is running on. As a result, upon deployment, Spring registers this generated MBean with the JVM's built-in Platform MBean Server only. Spring has no awareness of the possibility to use one of WebLogic's own MBean Servers. We can demonstrate that the JVM's Platform MBean Server is currently hosting the deployed custom MBean by launching the JDK's 'jconsole' tool, from the same machine that WebLogic is running on, using the following commands:
$ . /opt/oracle/Middleware/wlserver_10.3/server/bin/setWLSEnv.sh
$ jconsole
We then select the local Java process corresponding to WebLogic to connect to - we don't specify a username/password:

(click image for larger view)

We can then traverse the JVM's Platform MBean Server, using JConsole's built-in MBean browser. As you can see below, in the MBean list, we can view the standard JVM java.lang MBeans, like the OperatingSystem MBean and the Memory MBean, together with our custom MBean which has an ObjectName of com.test:name=TestMgr.

(click image for larger view)

We can even click in the shown field for one of the properties (e.g. PropA), change the value to "Hello World!", press enter and the value is changed in the running MBean. If we view the system-output for the WebLogic Server JVM's O.S. process, we will see the following text logged:
   PropA set to: Hello World!
(we coded this println statement in our example POJO earlier)

What we really want to do though, is have this MBean registered with WebLogic's MBean Server, not the JVM's Platform MBean Server. To do this in Java code, we'd have to add a lot of JMX code at application start-up and shut-down time to register/un-register our MBean with the WebLogic MBean Server. However, because we're using Spring, it's much easier. In our Spring applicationContext.xml file, we simply add a bean definition to tell Spring to use JNDI to locate the WebLogic Runtime MBean Server object at the well known WebLogic JNDI path. Then we modify our exporter bean definition to set an optional server property of Spring's MBeanExporter, giving it the handle to the JMX Server which Spring should export MBeans to. The additions to the Spring bean definition file are shown below:
<bean id="jmxServerRuntime" class="org.springframework.jndi.JndiObjectFactoryBean">
   <property name="jndiName" value="java:comp/env/jmx/runtime"/>
</bean>

<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter"
                                                      lazy-init="false">
   <property name="beans">
      <map>
         <entry key="com.test:name=TestMgr" value-ref="testMgrBean"/>
      </map>
   </property>
   <property name="server" ref="jmxServerRuntime"/>
</bean>
Once re-deployed, we can launch JConsole again, but this time we connect to the WebLogic Runtime MBean Server using T3 to communicate remotely, using the following commands:
$ . /opt/oracle/Middleware/wlserver_10.3/server/bin/setWLSEnv.sh
$ jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:
    $JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/weblogic.jar   
    -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote
For the connection URL we specify "service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime" and we provide the WebLogic administrator username/password:

(click image for larger view)

We can now see that our MBean is contained in the same JMX list as all WebLogic's server runtime MBeans.

(click image for larger view)

We can even launch WLST to access our custom MBean by looking up the MBean using the JMX APIs and its ObjectName:
$ /opt/oracle/Middleware/wlserver_10.3/common/bin/wlst.sh
 > connect('weblogic','welcome1','t3://localhost:7001')
 > serverRuntime()
 > myMBean = mbs.getObjectInstance(ObjectName('com.test:name=TestMgr'))
 > print myMBean
test.management.TestManagerBean[com.test:name=TestMgr]
If we want to get or set a property on the custom MBean using JMX in WLST, we can:
 > print mbs.getAttribute(ObjectName('com.test:name=TestMgr'), 'PropA')
Some text
 > mbs.setAttribute(ObjectName('com.test:name=TestMgr'), Attribute('PropA', 'Hello World!'))
 > print mbs.getAttribute(ObjectName('com.test:name=TestMgr'), 'PropA')
Hello World!
Or alternatively, we can use the more convenient WLST commands to navigate and manipulate our custom MBean rather than issuing complicated JMX commands:
 > custom()
 > ls()
drw-   com.test
 > cd('com.test')
 > ls()
drw-   com.test:name=TestMgr
 > cd('com.test:name=TestMgr')
 > ls()
-rw-   PropA        Some text
-rw-   PropB        1000
 > print get('PropA')
Some text
 > set('PropA', 'Hello World!')
 > print get('PropA')
Hello World!
(remember, when in the custom tree we can't use the WLST cmo variable)

In summary, when it comes to creating management interfaces for developed applications hosted on WebLogic, Spring comes into its own, making things much easier and less error prone. Developers can rely on the comfort of a POJO based programming model for development of application management logic in addition to core business logic.

One final word. At the start of this blog entry I stated that by exporting custom MBeans to WebLogic's MBean Servers, we can use WebLogic tools like WLST and WLDF for monitoring these custom MBeans. Whilst this is true for custom MBeans in general, it turns out that for WLDF and Spring generated MBeans specifically, there's a slight hitch which means you can't use WLDF to harvest a Spring-generated custom MBean's properties or define a Watch/Notify for these properties. In my next blog entry, I intend to explain why this problem occurs and highlight a simple and re-usable workaround to address this and thus enable Spring-generated custom MBeans to be fully usable with WLDF.


Song for today: Just Because by Jane's Addiction

Monday, February 1, 2010

WebLogic and Spring

In this blog topic I describe some of the integration points that WebLogic provides for Spring based applications.

I have mixed feelings about Spring. I definitely prefer its primary focus on the Dependency Injection pattern (DI) instead of the more typical JavaEE model of using the Service Locator pattern (think JNDI lookups). If used the right way, both patterns can help promote loosely coupled and easily unit-testable solutions. However, Spring's prescriptive DI approach makes the process of developing loosely-coupled solutions feel more intuitive and natural. Also, Spring offers an easy way to leverage Aspect Oriented Programming (AOP) when needed (and yes I emphasise the word when - excessive sprinkling of aspects can make applications hard to understand, debug and troubleshoot).

On the downside, it sometimes feels like Spring has evolved from a light-weight framework for making J2EE easier, into a vast and competing component model for building enterprise applications. It's not clear to me how much the balance in Spring has shifted from Innovating to Re-inventing, and whether this shift is a good thing or not.

Fortunately, WebLogic makes it pretty easy to deploy Spring based apps, whether your application uses Spring in a just-enough-dependency-injection way or in a use-nearly-every-Spring-feature-under-the-sun way. On most levels it doesn't really matter what version of Spring you use with a particular version of WebLogic. If you have an issue, you use the Oracle Support organisation for help with WebLogic specific problems and any Spring parts to your application are treated just like your own custom code is, from an Oracle Support perspective.

In addition, Oracle provides explicit certification for specific versions of Spring running on specific versions of WebLogic. For example, on WLS 10.3.x, Oracle explicitly certifies the use of Spring version 2.5.3 (and any later 'double-dot' Spring releases). For the official certification matrix, see the spreadsheet titled System Requirements and Supported Platforms for Oracle WebLogic Server 10.3 on the supported configurations page. It's also worth noting that, internally*, WebLogic uses elements of Spring and its AOP capabilities to implement some of WebLogic's newer JavaEE features like EJB 3.0, using the Spring Pitchfork codebase under the covers.

* WebLogic prefixes the package names of its internally bundled Spring classes to avoid potential class-loading clash issues with Spring classes bundled in any deployed applications. Application developers can also separately choose to bundle the classes from the Spring Pitchfork project to enable Spring beans to be injected into Servlets and EJBs in their own developed application.

When using an Oracle certified version of Spring with WebLogic, extra integration features are also available to help Spring-based applications become first-class WebLogic citizens, in the same way that normal JavaEE applications are. This is described in WebLogic's help documentation on Spring. By including some boilerplate text in your web-app's Manifest.mf file to refer to an optional package (sketched just below), and by ensuring WL_HOME/server/lib/weblogic-spring.jar is first deployed to WebLogic as a shared library, the following three WebLogic features are automatically enabled.
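
For reference, the Manifest.mf boilerplate is a standard Java optional-package reference. The logical extension name and version values in this sketch are placeholders of my own, so check WebLogic's Spring documentation for the exact entries to use:
Extension-List: WeblogicSpring
WeblogicSpring-Extension-Name: weblogic-spring
WeblogicSpring-Specification-Version: 2.5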


1. Spring MBeans. WebLogic automatically generates a set of Spring-related MBeans in each server's Runtime Service JMX tree, hanging off the normal WebLogic ApplicationRuntimeMBeans. Examples of these MBeans are SpringRuntimeMBean, SpringApplicationContextRuntimeMBean and SpringTransactionManagerRuntimeMBean; see the WebLogic MBean Reference for more info on them. The Spring MBeans are read-only and give administrators better visibility into what's going on inside the Spring parts of deployed applications. The screenshot below shows the use of WLST to inspect some of these MBeans. If the Manifest.mf file is not correctly defined, WebLogic does not detect the presence of Spring elements in the application and so will not generate the Spring MBeans; SpringRuntimeMBean would not appear in the list of child MBeans shown in the screenshot.

(click image for larger view)
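
As a rough guide to the navigation captured in that screenshot, a WLST session follows a pattern like the one below. The URL, credentials and application name are placeholders, and the exact child folder names should be checked against the WebLogic MBean Reference rather than taken from this sketch:
 > connect('weblogic', 'welcome1', 't3://localhost:7001')   # placeholder credentials and URL
 > serverRuntime()
 > cd('ApplicationRuntimes/MySpringWebApp')                 # placeholder application name
 > ls()    # if the Manifest.mf is set up correctly, Spring MBeans appear among the children here
 > cd('SpringRuntimeMBeans')
 > ls()    # drill down further to reach the individual read-only Spring MBeans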

2. Spring Console Extension. WebLogic provides an Admin Console extension for Spring, giving administrators visual tools for monitoring the Spring parts of deployed applications (first navigate to the WebLogic Admin Console's Preferences | Extension menu option and enable spring-console). This Spring console extension is basically a set of pages added amongst the normal pages of the standard WebLogic admin console, rather than a separate console per se. The extension provides a view onto the values of the WebLogic-generated Spring MBeans (see point 1). If you navigate to the deployed web-app in the Admin Console, select the Configuration tab, and then select the Spring Framework sub-tab, you will see a read-only view of the contents of the application's Spring application context(s), as shown in the example screenshot below.

(click image for larger view)

In the Deployment's Monitoring tab, if you select the Spring Framework sub-tab, as shown in the example below, you can drill into read-only views of the types and numbers of Spring beans that have currently been created in the deployed application's Spring application context(s). It also lets you view the WebLogic-managed transactions that have been initiated via the Spring library code in the deployed application.

(click image for larger view)

By selecting one of the Spring application contexts listed in this table, you will see statistics showing how many beans have been created in the context, what scope they have and their performance metrics, as shown in the screenshot below.

(click image for larger view)

3. WebLogic Injected Spring Beans. During the start-up process for the Spring-enabled web-application (see point 1), WebLogic intercepts the creation of the web-app's normal Spring Application Context (using AOP under the covers) and transparently adds a parent context to the normal context. This parent context is pre-populated with the following three WebLogic-specific beans, ready to be used by the application:
  • A WebLogic Transaction Manager bean (ref="transactionManager") which extends org.springframework.transaction.jta.JtaTransactionManager
  • A WebLogic Edit Server MBean Connection bean (ref="editMBeanServerConnection") which implements javax.management.MBeanServerConnection
  • A WebLogic Runtime Server MBean Connection bean (ref="runtimeMBeanServerConnection") which implements javax.management.MBeanServerConnection
This is mainly a convenience feature: application developers can refer to these WebLogic-specific beans (using the ref ids shown above) and have them injected into application code. For example, we may want to inject a reference to the WebLogic ServerRuntime JMX Server into a piece of our code, so that the code can use JMX to inspect the host server's runtime MBeans, using a Spring declaration similar to the following:
<bean id="myTestBean" class="com.acme.MyTestBean">
   <property name="mbeanSvrConn" ref="runtimeMBeanServerConnection"/>
</bean>
The exact set of WebLogic beans injected into the parent application context can be deduced by unzipping the file WL_HOME/wlserver_10.3/server/lib/weblogic-spring.jar and viewing the contents of the file weblogic/spring/beans/SpringServerApplicationContext.xml.
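
If you'd rather take that peek from a WLST prompt, a quick Jython sketch along the following lines should do it (the jar path is just the location quoted above and will vary with your installation):
 > import zipfile
 > jar = zipfile.ZipFile('/path/to/wlserver_10.3/server/lib/weblogic-spring.jar')   # adjust to your WL_HOME
 > print jar.read('weblogic/spring/beans/SpringServerApplicationContext.xml')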

Using the Spring console extension discussed in point 2, we can navigate to and view the contents of the parent application context at runtime, in addition to the normal app context, as shown in the first two screenshots of point 2.

During the application initialisation process, WebLogic also sets the application's default Spring Transaction Manager to be org.springframework.transaction.jta.WebLogicJtaTransactionManager to enable WebLogic's Transaction Manager to always be used for managing JTA transactions initiated in Spring code.
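
As a concrete illustration, an application's own Spring context XML can lean on the injected "transactionManager" parent-context bean listed above. This is a minimal sketch using standard Spring 2.5 namespace configuration; whether annotation-driven transactions suit your application is, of course, an assumption on my part:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
           http://www.springframework.org/schema/tx
           http://www.springframework.org/schema/tx/spring-tx-2.5.xsd">
   <!-- "transactionManager" resolves to the WebLogic-injected bean in the parent context -->
   <tx:annotation-driven transaction-manager="transactionManager"/>
</beans>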

Note: A fourth bean (a WebLogic System Work Manager bean) is also populated in the parent context. However, this bean is meant for internal WebLogic system usage and is not for application developers to use, as it does not implement commonj.work.WorkManager. Developers who do want to utilise WebLogic Work Managers should just declare the work managers they require in the WebLogic domain's configuration, in the normal way, and then use org.springframework.scheduling.commonj.WorkManagerTaskExecutor in their Spring bean context XML files to enable the work managers to be injected into application code, as sketched below.
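
For instance, a declaration along these lines exposes a domain-defined Work Manager to application code as a Spring TaskExecutor. The JNDI name wm/MyAppWorkManager is a made-up example and must match a Work Manager actually declared for the application:
<bean id="myTaskExecutor"
      class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor">
   <!-- the JNDI name under which the declared Work Manager is bound for this web-app -->
   <property name="workManagerName" value="java:comp/env/wm/MyAppWorkManager"/>
</bean>
Other application beans can then have this TaskExecutor injected (ref="myTaskExecutor") so that their work runs via WebLogic's Work Manager rather than on raw threads.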


In summary, in this blog topic I have described some of the ways that WebLogic handles applications that use Spring. Not all the integration points between WebLogic and Spring have been discussed. For example, I have not described how WebLogic can integrate with Spring Security (a.k.a. Acegi) or how WebLogic's fast RMI/T3 binary wire protocol can be used for Spring Remoting, and I have only mentioned in passing the ability to inject Spring dependencies into Servlets and EJBs.


Footnote: Brand new in WebLogic 10.3.2 is undocumented and unsupported tech-preview support for the Service Component Architecture (SCA) using Spring. This tech-preview uses Spring to wire up the POJOs inside components and composites, and to declaratively specify the invocation protocol bindings for these composites. The WebLogic-Spring capabilities discussed in this blog entry are unrelated to the SCA tech-preview support.


Song for today: Rid of Me by PJ Harvey