Friday, November 30, 2007

Spring 2.5 - Too much auto-wired?

The concept of auto-wiring relationships among Spring-enabled beans has always been there. The idea behind auto-wiring is to get away from the tedious task of specifying, and more importantly maintaining, explicit wiring. Originally it could be done by name, by type, by constructor, or by auto-detect, and you had the option to auto-wire all beans or only specific beans within a context. But now with Spring 2.5 the auto-wiring concept has taken on a whole new meaning, and so has the debate about whether we really want to do auto-wiring at all. Spring 2.5 has a new @Autowired annotation. @Autowired lets us do much finer-grained auto-wiring than was possible before, and it also lets us be much more explicit than in pre-2.5 times. Let's consider a few examples. #1

@Autowired
public void init(AccountDao accountDao, CustomerDao customerDao) {
   this.accountDao = accountDao;
   this.customerDao = customerDao;
}
We do not need single-parameter setters to inject dependencies; any method with any name and any number of parameters will do. The annotation can be applied to fields and constructors as well, and obviously to the favorite setters :) #2
@Autowired
private BaseDao[] daos;

@Autowired
private Set<BaseDao> daos;
Spring creates an array or collection containing all beans of the matching type available in the context. #3
@Autowired(required = false)
private AccountDao accountDao = new AccountDao();
Wire it if you find it; if not, leave it (that is what required = false gives us). Moving on, we can control this even more finely by using another annotation, @Qualifier. Again, let's see some more examples. #4
@Autowired
@Qualifier("bankService")
private BankService bkService;
We are doing by-name auto-wiring by providing a bean name within the annotation. This might help by letting you declare the name of the property differently from the name of the bean. #5 – an example of custom qualifier annotations to take care of the case when we have more than one implementation that we want to auto-wire. Define an annotation like this:
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface Category {
   String value();
}
The custom qualifier can then be applied to auto-wired fields like this:
public class BookList {
   @Autowired
   @Category("Technical")
   private Book technicalBook;

   @Autowired
   @Category("Management")
   private Book managementBook;
}

and then do the bean definitions like this
<bean name="technicalBook" class="example.Book">
    <qualifier type="Category" value="Technical"/>
    <!-- implementation-specific properties -->
</bean>
<bean name="managementBook" class="example.Book">
    <qualifier type="Category" value="Management"/>
    <!-- implementation-specific properties -->
</bean>
And if this is not enough, we can always create a custom auto-wire configurer, or maybe use the new @Resource annotation for further auto-wiring. That is that for auto-wiring, but Spring 2.5 also introduces auto-detection of Spring components on the classpath, where we do not even have to define the beans (with a name and bean class) in the context. Spring does a component scan of the classpath (which can be configured using filters), detects classes that carry stereotype annotations, and registers them as Spring-enabled beans. Yes, all this provides a lot of flexibility and much greater control, as can be seen from the examples above, but I still feel it should be done with a lot of caution and an I-am-going-to-be-consistent attitude.
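As a footnote on the mechanics, the qualifier matching in example #5 ultimately rests on plain JDK annotation reflection, which is easy to see without Spring on the classpath. Here is a minimal sketch; the Category annotation and the Book/BookList classes are re-declared from the examples above, and Spring's @Qualifier meta-annotation is deliberately left out since it is not needed for the demonstration:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class QualifierDemo {

    // Same shape as the custom qualifier from example #5, minus Spring's
    // @Qualifier meta-annotation
    @Target({ElementType.FIELD, ElementType.PARAMETER})
    @Retention(RetentionPolicy.RUNTIME) // must be RUNTIME or the container cannot see it
    public @interface Category {
        String value();
    }

    static class Book {}

    static class BookList {
        @Category("Technical")
        Book technicalBook;

        @Category("Management")
        Book managementBook;
    }

    // What a container conceptually does: read the qualifier value from a field
    // so it can be matched against the <qualifier> element of a bean definition
    static String qualifierOf(String fieldName) throws Exception {
        Field f = BookList.class.getDeclaredField(fieldName);
        return f.getAnnotation(Category.class).value();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(qualifierOf("technicalBook"));  // Technical
        System.out.println(qualifierOf("managementBook")); // Management
    }
}
```

Note the @Retention(RetentionPolicy.RUNTIME); without it the annotation is discarded by the compiler and reflection returns null, which is a classic gotcha with custom qualifiers.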

Sunday, November 18, 2007

Controlling Classloading with Spring

The Spring Framework provides tons of functionality for someone wanting to use dependency injection; so much so that it can be difficult to know everything you can do.

Many times when dealing with a framework or a web application, it becomes important to track down JAR files and class files at runtime, and load them, often times in their own ClassLoader object.

Just to remind everyone out there, classes loaded in one ClassLoader are not visible to a parent or sibling ClassLoader (unless you are using some nightmare ClassLoader object that breaks the hierarchical semantics of core Java classloading).

ClassLoaderA
| (ClassA, ClassB, ClassC)
|__ ClassLoaderB (Child of A)
|   (ClassD, ClassE)
|__ ClassLoaderC (Child of A)
    (ClassF, ClassG)

Using this diagram as an example, ClassLoaderB can see ClassA, ClassB, ClassC, ClassD, and ClassE, but it cannot see ClassF or ClassG. Likewise, ClassLoaderC cannot see the classes loaded by ClassLoaderB. And finally, ClassLoaderA can only see its own classes.
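The visibility rules in the diagram can be poked at with plain JDK classloaders. A small sketch that rebuilds the hierarchy with empty URLClassLoaders (no actual JARs involved; the loader names mirror the diagram):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderHierarchyDemo {

    // Rebuild the diagram: one parent, two sibling children. The URL lists are
    // empty because only the hierarchy matters here, not class loading itself.
    static final ClassLoader loaderA = new URLClassLoader(new URL[0], null);
    static final ClassLoader loaderB = new URLClassLoader(new URL[0], loaderA);
    static final ClassLoader loaderC = new URLClassLoader(new URL[0], loaderA);

    // Parent-first delegation: a class visible above both siblings resolves to
    // the very same Class object through either of them
    static boolean delegatesToSameClass(String name) throws ClassNotFoundException {
        return loaderB.loadClass(name) == loaderC.loadClass(name);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(delegatesToSameClass("java.lang.String")); // true
        System.out.println(loaderB.getParent() == loaderA);           // true
        // There is no API to go "down" or "sideways": loaderB has no way to ask
        // loaderC for its classes, which is exactly the invisibility described above.
    }
}
```

The only navigable direction is upward via getParent(), which is why sibling loaders cannot see each other's classes.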

Thankfully, objects loaded by child/sibling classloaders can still be dealt with by classes loaded by parent/peer classloaders (this can be seen when you put one of your objects into an ArrayList: the ArrayList may not be able to reference your class, but it can still deal with the object instances of that class).

Considering the above, what if we want to use a Spring XmlBeanFactory object (created by ClassLoaderA) to load a bean definition in an XML file that is referring to ClassD, which is inside ClassLoaderB?

Thankfully, it is a fairly simple process; it simply involves understanding the two pieces at work. We have:

  1. A Bean Definition Reader - This is the object that reads the bean definition from the source (redundant, aren't I?); in other words, in this case this is the XML parser and the magical finder of Class objects when you type <bean class="..."/>.
  2. A Bean Factory - This is the implementation of the factory that handles the construction of beans after they have been defined by the reader. This is the stage in the process where AOP, autowiring, property setting, singletons, prototypes, etc is all plugged in.
So now, being infinitely more educated on the Spring bean factories, it is just a short hop to plugging in the correct class loader ClassLoaderB for the task at hand.

The BeanDefinitionReader is where classes are found when Spring is parsing/interpreting a bean definition source (such as a bean XML file). There is a method on the interface - BeanDefinitionReader.getBeanClassLoader() that provides the correct class loader to the BeanFactory (this defaults to Thread.currentThread().getContextClassLoader()). There is also a method on the abstract base implementation - AbstractBeanDefinitionReader.setBeanClassLoader(ClassLoader) so we can set our class loader to the correct implementation.

'But wait R.J.!', you say - 'There is no BeanDefinitionReader that I can see on the XmlBeanFactory class - you're a liar! I'm never reading Coffee-Bytes again!'. Don't pull away from me yet! While it is true that the XmlBeanFactory class doesn't provide visibility to the bean definition reader, that doesn't mean it's not there! In reality, the XmlBeanFactory is just a convenience implementation of two separate objects:

XmlBeanFactory factory = new XmlBeanFactory([xml resource here!]);
// == to
DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
XmlBeanDefinitionReader reader = new XmlBeanDefinitionReader(factory);
reader.loadBeanDefinitions([xml resource here!]);

Do I think it is frustrating that you can't get to the bean definition reader without forgoing the XmlBeanFactory class for its super class? Absolutely! Do I have commit rights to Spring? Unfortunately for them, no (hint, hint). So, that's that... oh, you still don't feel that sweet cut/paste solution beckoning to you? Am I leaving you patient readers hanging without a clean solution to your problems? Ok, ok...

DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
XmlBeanDefinitionReader reader = new XmlBeanDefinitionReader(factory);
reader.setBeanClassLoader([ClassLoaderB here!]);
reader.loadBeanDefinitions([xml resource here!]);

Thursday, November 15, 2007

Runtime Exception in Spring

Spring throws runtime exceptions instead of checked exceptions. Runtime exceptions do not have to be caught; if they happen and are not caught, your application stops running with a standard Java stack trace. Checked exceptions must be caught: if you do not catch a checked exception in your code, it will not compile. Because it is tedious to catch all exceptions (for example when using EJB technology), and because there are bad developers who catch exceptions without giving any feedback to the user, the Spring developers decided to throw runtime exceptions, so that if you do not catch an exception your application breaks and the user sees the application exception. Their second argument is that most exceptions are unrecoverable, so your application logic cannot deal with them anyway. This is true to some extent, but in my opinion the user should at least never see a Java stack trace while using the application. Even if they are right and most exceptions are unrecoverable, I must still catch them at the highest level (the dialogues) of my application if I do not want a user to see a Java exception stack trace. The problem is that IDEs recognize checked exceptions and tell me that I must, for example, catch a DataBaseNotAvailable exception and a doubleEntryForPrimaryKey exception. In that case I can decide that the application will stop for the first exception but continue to the edit dialogue for the second, to let the user enter a different value for the primary key. Eclipse even creates the try/catch block for me. With runtime exceptions I must know which exceptions can happen and write my code manually. I never know whether I have caught all the important runtime exceptions where I could have allowed the application to continue running. And this is what I really dislike about this strategy.
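The catch-at-the-highest-level point can be illustrated in plain Java. Here is a hypothetical dialogue-layer boundary (the names and messages are invented for illustration) that converts an uncaught runtime exception into user feedback instead of a raw stack trace:

```java
public class TopLevelHandlerDemo {

    // Hypothetical service call that fails with an unchecked exception,
    // the way Spring's data access layer would
    static void deleteAccount() {
        throw new IllegalStateException("database not available");
    }

    // Dialogue-level boundary: the one place where we deliberately catch
    // RuntimeException so the user never sees a stack trace
    static String runAction(Runnable action) {
        try {
            action.run();
            return "OK";
        } catch (RuntimeException e) {
            // log e here; then show the user something friendly
            return "Sorry, the operation failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(runAction(TopLevelHandlerDemo::deleteAccount));
        // prints: Sorry, the operation failed: database not available
    }
}
```

The drawback the post complains about is visible here too: nothing in the compiler forces the existence of runAction(), whereas a checked exception would refuse to compile until handled somewhere.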
But anyway, I came across a lot of very nice design patterns in Spring, and it will help even mediocre developers like me ;-) to write nice code.

Sunday, October 21, 2007

Spring-based Architectures

More often than not, the many possibilities of Spring leave people confused. How should an application be designed? What about best practices? In this talk, some of the issues in the architecture of Spring applications are explained in more detail and typical approaches are shown.

This includes typical solutions in areas like

  • the design of a persistence layer
  • the choice of persistence technology
  • how to actually make layering work
  • the design of the service layer
  • how to do distributed applications with Spring.

Some of the answers can be found here.

Saturday, October 20, 2007

Tips for improving Spring’s start up performance

Someone is blathering on again about how Spring sucks, and so startup performance came up in the conversation. CXF uses Spring under the covers by default (it’s optional, don’t worry). Initially there was some major slowness, but I’ve spent a lot of time profiling/improving CXF’s startup time. I thought I’d share what I learned so you can improve your startup time as well:

  1. Use the latest version of Spring (2.0.5+). The latest version of Spring contains many performance improvements (including one I found while profiling CXF).
  2. Reduce the number of configuration files. More configuration == more XML to parse.
  3. Reduce the number of <bean>s in your configuration. Contrary to popular belief, not every friggin class needs to be a <bean>. Startup time is pretty proportional to the number of beans (if you follow the suggestion in #4).
  4. Don’t use classpath*:*.xml type wildcards. These significantly slow down the startup process, as Spring needs to search through all the JARs on the classpath; in some cases it even needs to expand them (I think on certain appservers). Doing a classpath:/foo/*.xml search is much better, as that can delegate to the JDK. Further gains can be made by removing wildcards altogether, I think.
  5. Reduce your number of beans - just because you can make it a <bean> doesn’t mean you should.
  6. lazy-init=”true” is your friend. Don’t load a <bean> until you need to.
  7. Turn off schema validation (Alternately: anyone want to rewrite Xerces?) - this adds about 20% more time to startup in the CXF case.

If all goes well, you should be able to get the Spring startup time pretty low. At that point it's probably other things, like your ORM layer (*cough* Hibernate *cough*), that are slowing things down.
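Tips #3 and #6 can be sketched in XML; a hedged example (the bean ids and classes here are made up for illustration):

```xml
<!-- default-lazy-init avoids eager instantiation of every singleton at startup -->
<beans default-lazy-init="true">

    <!-- loaded only when first requested -->
    <bean id="reportGenerator" class="com.example.ReportGenerator"/>

    <!-- override per bean when something really must be ready at startup -->
    <bean id="connectionPool" class="com.example.ConnectionPool" lazy-init="false"/>

</beans>
```

Note that lazy-init only helps singletons that would otherwise be pre-instantiated; prototypes are already created on demand.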

Random tidbit: I was profiling the CXF startup the other day. About 33% of the time is spent in Introspector.getBeanInfo(), so it's really not all Spring's fault IMO :-). Maybe someone can hack the JDK to be faster there?
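That hotspot is plain JavaBeans introspection, which you can exercise without Spring at all; a small sketch (the Person class is just an illustration):

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class IntrospectorDemo {

    // A trivial bean for Introspector to scan; any getter/setter pair becomes a property
    public static class Person {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // The same call that dominated the CXF startup profile
    static List<String> propertyNames(Class<?> beanClass) throws IntrospectionException {
        BeanInfo info = Introspector.getBeanInfo(beanClass);
        List<String> names = new ArrayList<String>();
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            names.add(pd.getName());
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // every class also implicitly exposes a "class" property via getClass()
        System.out.println(propertyNames(Person.class));
        // Introspector caches BeanInfo per class; Introspector.flushCaches() clears it
    }
}
```

The caching is why the cost shows up once per bean class at startup rather than on every bean instantiation.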

Have any more Spring performance tips? Add them to the comments!

Sunday, September 23, 2007

Dynamic roles management in acegi security

The example that comes with Acegi Security, and the one that comes with AppFuse, define the secure URL patterns and the roles that can access them in the XML configuration file, which is not a very flexible solution. It is a good approach if you have predefined roles and access rights that are never going to change (or change very rarely). But if you have a security module where you define which role can access which URL patterns, this method is the worst thing you could do. The solution to this problem is to define your own data source, which loads the secure URL patterns and the corresponding roles that can access them from somewhere (a database, for example). A simple way to do that is by extending PathBasedFilterInvocationDefinitionMap and loading the secure URL patterns and the roles that can access them in an initialization method or in the constructor. Here's an example:

public class UrlPatternRolesDefinitionSource extends PathBasedFilterInvocationDefinitionMap {

    private UrlPatternDao urlPatternDao;
    private SessionFactory sessionFactory;

    public void init() {
        Session session = sessionFactory.openSession();
        try {
            List<UrlPattern> urlPatternsList = urlPatternDao.listUrlPatterns(session);

            for (UrlPattern pattern : urlPatternsList) {

                ConfigAttributeDefinition configDefinition = new ConfigAttributeDefinition();

                for (Role role : pattern.getRoles()) {
                    ConfigAttribute config = new SecurityConfig(role.getAuthority());
                    configDefinition.addConfigAttribute(config);
                }

                addSecureUrl(pattern.getUrlPattern(), configDefinition);
            }
        } catch (FindException e) {
            // Handle exception
        } finally {
            session.close();
        }
    }

    public void setUrlPatternDao(UrlPatternDao urlPatternDao) {
        this.urlPatternDao = urlPatternDao;
    }

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }
}
In this example I used a Hibernate DAO I created (urlPatternDao) to retrieve the secure URL patterns; it is defined and initialized with the sessionFactory somewhere else in the code (using Spring's dependency injection). You can implement it any way you like (JDBC, any other ORM framework, or any other method). In this example, each URL pattern can have a set of roles that can access it, so I loop over this set to create the ConfigAttributeDefinition. It would be better to have your own implementation of the ConfigAttributeDefinition, but this is going to be discussed in part 2 of this article. Of course, the Role class used in this code implements the GrantedAuthority interface. As for the XML configuration of this entry, it's shown below:
<bean id="filterInvocationInterceptor" class="org.acegisecurity.intercept.web.FilterSecurityInterceptor">
    <property name="authenticationManager" ref="authenticationManager"/>
    <property name="accessDecisionManager" ref="accessDecisionManager"/>
    <property name="objectDefinitionSource" ref="objectDefinitionSource"/>
</bean>

<!-- the class attribute points at the UrlPatternRolesDefinitionSource class shown above -->
<bean id="objectDefinitionSource" class="" init-method="init">
    <property name="convertUrlToLowercaseBeforeComparison">
        <value type="boolean">false</value>
    </property>
    <property name="urlPatternDao" ref="urlPatternDao"/>
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
As for the first bean in the previous snippet, this definition is the default that comes with the sample Acegi application and with AppFuse, except for the reference to the objectDefinitionSource bean, which we added to allow dynamic role management. The second bean maps to the class we created and defines the initialization method that will load the secured URL patterns and the roles associated with them. This definition also sets the DAO and the Hibernate session factory used by the class; of course, you don't need those if you choose some other implementation. The previous implementation seems good as a solution for dynamic role management, but if we look more closely, it's not a very good solution (or, to be more accurate, it's not a complete solution). The problem that is going to face us here is that the secure URL patterns are loaded only ONCE, at application start-up, and any change to the role access rights will not be effective until the application is restarted, which is not dynamic at all. In fact, there is no difference between this solution and having the access rights in the XML configuration file, except that you will not need to redeploy your application, only restart it. There are several solutions to this problem, and this will be the main focus of the next part of this article. By Nassef

Using a Shared Context from EJBs

ContextSingletonBeanFactoryLocator and SingletonBeanFactoryLocator

The basic premise behind ContextSingletonBeanFactoryLocator is that there is a shared application context, which is shared based on a string key. Inside this application context, one or more other application contexts or bean factories are instantiated. The internal application contexts are what the application code is interested in, while the external context is just the bag (for want of a better term) holding them. Consider that a dozen different instances of non-IoC-configured application glue code need to access a shared application context, which is defined in an XML definition on the classpath as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!--
  Service layer ApplicationContext definition for the application.
  Defines beans belonging to the service layer.
-->
<beans>
  <bean id="myService" class="...">
    ...
  </bean>
  ... more bean definitions
</beans>
The glue code cannot just instantiate this context as a ClassPathXmlApplicationContext; each such instantiation would get its own copy. Instead, the code relies on ContextSingletonBeanFactoryLocator, which will load and then cache an outer application context holding the service layer application context above. Let's look at some code that uses the locator to get the service layer context, from which it gets a bean:
BeanFactoryLocator locator = ContextSingletonBeanFactoryLocator.getInstance();
BeanFactoryReference bfr = locator.useBeanFactory("serviceLayer-context");
BeanFactory factory = bfr.getFactory();
MyService myService = (MyService) factory.getBean("myService");
// now use myService
bfr.release();
Let's walk through the preceding code. The call to ContextSingletonBeanFactoryLocator.getInstance() triggers the loading of an application context definition from a file that is named (by default) beanRefContext.xml. We define the contents of this file as follows: beanRefContext.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans>
  <!-- load a hierarchy of contexts, although there is just one here -->
  <bean id="serviceLayer-context"
        class="org.springframework.context.support.ClassPathXmlApplicationContext">
    <constructor-arg>
      <!-- classpath location of the service layer context definition -->
      <value>...</value>
    </constructor-arg>
  </bean>
</beans>
As you can see, this is just a normal application context definition. All we are doing is loading one context inside another. However, if the outer context (keyed to the name beanRefContext.xml) had already been loaded at least once, the existing instance would just have been looked up and used. The locator.useBeanFactory("serviceLayer-context") method call returns the internal application context, which is asked for by name, serviceLayer-context in this case. It is returned in the form of a BeanFactoryReference object, which is just a wrapper used to ensure that the context is properly released when it is no longer needed. The method call BeanFactory factory = bfr.getFactory() actually obtains the context from the BeanFactoryReference. The code then uses the context via a normal getBean() call to get the service bean it needs, and finally releases the context by calling release() on the BeanFactoryReference. Somewhat of a complicated sequence, but necessary, because what is being added here is really a level of indirection so that multiple users can share one or more application context or bean factory definitions. We call this a keyed singleton because the outer context being used as a bag is shared based on a string key. When you get the BeanFactoryLocator via ContextSingletonBeanFactoryLocator.getInstance(), it uses the default name classpath*:beanRefContext.xml as the resource location for the outer context definition. So all the files called beanRefContext.xml that are available on the classpath will be combined as XML fragments defining the outer context. This name (classpath*:beanRefContext.xml) is also the key by which other code will share the same context bag. But using the form ContextSingletonBeanFactoryLocator.getInstance(String selector), for example ContextSingletonBeanFactoryLocator.getInstance("classpath*:myBeanRefContext.xml"), allows the name of the outer context definition to be changed. This allows a module to use a unique name that it knows will not conflict with another module. Note that the outer bag context may define inside it any number of bean factories or application contexts, not just one as in the previous example, and because the full power of the normal XML definition format is available, they can be defined in a hierarchy using the right constructor for ClassPathXmlApplicationContext, if that is desired. The client code just needs to ask for the right one by name with the locator.useBeanFactory() method call. If the contexts are marked as lazy-init="true", they will effectively be loaded only on demand from client code. The only difference between SingletonBeanFactoryLocator and ContextSingletonBeanFactoryLocator is that the latter loads the outer bag as an application context, while the former loads it as a bean factory, using the default definition name of classpath*:beanRefFactory.xml. In practice, it makes little difference whether the outer bag is a bean factory or a full-blown application context, so you may use either locator variant. It is also possible to provide an alias for a context or bean factory, so that one locator.useBeanFactory() call can resolve to the same thing as another locator.useBeanFactory() call with a different ID. For more information on how this works, and to get a better overall picture of these classes, please see the JavaDocs for ContextSingletonBeanFactoryLocator and SingletonBeanFactoryLocator.

Using a Shared Context from EJBs

We're now ready to find out how ContextSingletonBeanFactoryLocator may be used to access a shared context (which can also be the same shared context used by one or more web-apps) from EJBs. This turns out to be trivial. The Spring EJB base classes already use the BeanFactoryLocator interface to load the application context or bean factory to be used by the EJB. By default, they use an implementation called ContextJndiBeanFactoryLocator, which creates an application context based on a classpath location specified via JNDI. All that is required to use ContextSingletonBeanFactoryLocator is to override the default BeanFactoryLocator. In this example from a Session Bean, this is being done by hooking into the standard Session EJB setSessionContext() method:
// see javax.ejb.SessionBean#setSessionContext(javax.ejb.SessionContext)
public void setSessionContext(SessionContext sessionContext) {
    super.setSessionContext(sessionContext);
    setBeanFactoryLocator(ContextSingletonBeanFactoryLocator.getInstance());
    setBeanFactoryLocatorKey("serviceLayer-context");
}
First, because the Spring base classes already implement this method so they may store the EJB SessionContext, super.setSessionContext() is called to preserve that functionality. Then the BeanFactoryLocator is set to the instance returned from ContextSingletonBeanFactoryLocator.getInstance(). If we didn't want to rely on the default outer bag context name of classpath*:beanRefContext.xml, we could use ContextSingletonBeanFactoryLocator.getInstance(name) instead. Finally, for the BeanFactoryLocator.useBeanFactory() method that Spring will call to get the final application context or bean factory, a key value of serviceLayer-context is specified, as in the previous examples. This name would normally be defined as a String constant somewhere, so all EJBs can use the same value easily. For a Message-Driven Bean, the equivalent override of the default BeanFactoryLocator needs to be done in setMessageDrivenContext().

Session Limitation with Spring security

Sometimes it's useful to restrict a user to a single session. This simplifies the logic needed to guarantee certain restrictions. For example, I always want a user to have a minimum of one valid email address. With two parallel sessions and two valid emails, a user could delete one email in each session, and I would need to verify consistency in the database. Restricting the user to one session lets me implement the restriction in the business logic. However, the exact configuration was not obvious. After some experimentation, the following seemed to work. First, you need some way of detecting when sessions expire. This is largely automatic as long as you register the following in web.xml:

  <!-- used to track session events (single user session) -->
  <listener>
    <listener-class>org.acegisecurity.ui.session.HttpSessionEventPublisher</listener-class>
  </listener>

I have all my authentication-related XML in web-authentication.xml (referenced via context-param in web.xml). It includes:
  <bean id="authenticationManager"
        class="org.acegisecurity.providers.ProviderManager">
    <property name="sessionController" ref="singleSession"/>
    <property name="providers">
      <!-- list of authentication providers omitted -->
    </property>
  </bean>

  <bean id="sessionRegistry"
        class="org.acegisecurity.concurrent.SessionRegistryImpl"/>

  <bean id="singleSession"
        class="org.acegisecurity.concurrent.ConcurrentSessionControllerImpl">
    <property name="maximumSessions" value="1"/>
    <property name="exceptionIfMaximumExceeded" value="true"/>
    <property name="sessionRegistry" ref="sessionRegistry"/>
  </bean>
Which is all that is needed (I suspect sessionRegistry is supplied by default anyway). The way it seems to work is as follows:

  • authenticationManager calls the appropriate provider
  • if that succeeds, it calls sessionController
  • sessionController applies the appropriate logic, using the information in sessionRegistry
  • sessionRegistry is correct because of the event system (which includes the listener you registered)
By Andrew

Did You Know: Spring Object Pooling

A subject came up recently on the Spring Framework spring-user mailing list regarding object pooling. The original misconception (triggered by the wording in the Spring documentation) was that Spring doesn't support object pooling. This isn't necessarily true, depending on the release of Spring that you are using. (Note: when I refer to release, I am not referring to the version, but rather to the distribution; e.g. spring-core, spring-web, spring-mvc, etc.) The core bean factory code of Spring does not support object pooling; instead it supports singleton objects and what they refer to as prototype objects, which simply means that Spring returns a new object every time you request the bean with that particular id from the bean factory. (Spring Documentation Reference) For example:

    <!-- This will always be the same bean (singleton is the default) -->
    <bean id="bean1" />

    <!-- This will be a new bean every time you request it -->
    <bean id="bean2" singleton="false"/>
While the documentation is 100% correct in saying that bean factories don't support object pooling (or more specifically saying that they only support singletons and prototypes), there is no mention of the object pooling that is available if you are using a release of Spring that contains Spring AOP support. Spring supports what's called a Target Source. The description in the documentation is somewhat complicated if you aren't familiar with Spring AOP, but long story short, Spring uses a class called ProxyFactoryBean to create an AOP wrapper for a bean. By default, you set the 'target' to the bean you want to wrap with AOP interceptors (target means the bean that is being layered with some advice). However, a target source is an interim factory for a given object type that sits between the ProxyFactoryBean and your target bean. The term 'target source' sounds confusing, but is also self-explanatory. It is a source for the proxy bean to use when retrieving 'target' objects. Here is the XML snippet for Pooling Target Sources directly from the Spring documentation:
    <bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject"
          singleton="false">
        ... properties omitted
    </bean>

    <bean id="poolTargetSource"
          class="org.springframework.aop.target.CommonsPoolTargetSource">
        <property name="targetBeanName"><value>businessObjectTarget</value></property>
        <property name="maxSize"><value>25</value></property>
    </bean>

    <bean id="businessObject"
          class="org.springframework.aop.framework.ProxyFactoryBean">
        <property name="targetSource"><ref local="poolTargetSource"/></property>
        <property name="interceptorNames"><value>myInterceptor</value></property>
    </bean>
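The target-source idea itself can be sketched without Spring at all. In this stripped-down sketch, the SimpleTargetSource interface only mimics the shape of Spring's TargetSource (it is not the real interface), and the pool is a naive stand-in for the Commons Pool that the real CommonsPoolTargetSource delegates to:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class PoolDemo {

    // Mimics the shape of Spring's TargetSource: the AOP proxy asks for a
    // target per invocation and releases it afterwards
    interface SimpleTargetSource<T> {
        T getTarget();
        void releaseTarget(T target);
    }

    // A naive, unbounded pool; the real pooling target source delegates to Commons Pool
    static class PoolingSource<T> implements SimpleTargetSource<T> {
        private final Deque<T> idle = new ArrayDeque<T>();
        private final Supplier<T> factory;

        PoolingSource(Supplier<T> factory) { this.factory = factory; }

        public T getTarget() {
            T t = idle.poll();
            return t != null ? t : factory.get(); // reuse if possible, else create
        }

        public void releaseTarget(T target) {
            idle.push(target); // back to the pool for the next invocation
        }
    }

    public static void main(String[] args) {
        PoolingSource<StringBuilder> source = new PoolingSource<StringBuilder>(StringBuilder::new);
        StringBuilder first = source.getTarget();
        source.releaseTarget(first);
        StringBuilder second = source.getTarget();
        System.out.println(first == second); // true: the released instance was reused
    }
}
```

This is exactly why the proxy, not the caller, must own get/release: the caller only ever sees "a businessObject", while the interim factory decides whether that means a fresh instance or a recycled one.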
By Lorimer

Struts2 + Spring + JUnit

Hopefully this entry serves as some search-engine-friendly documentation on how one might unit test Struts 2 actions configured using Spring, something I would think many, many people want to do. This used to be done using StrutsTestCase in the Struts 1.x days, but Webwork/Struts 2 provides enough flexibility in its architecture to accommodate unit testing fairly easily. I'm not going to go over how the Spring configuration is set up. I'm assuming you have a struts.xml file which has actions configured like this:

 <package namespace="/site" extends="struts-default">
  <action name="deletePerson" class="personAction" method="deletePerson">
   <result name="success">/WEB-INF/pages/person.jsp</result>
  </action>
 </package>
You also might have an applicationContext.xml file where you might define your Spring beans like this.
 <bean id="personAction" class="..."/>
Then of course you also need to have an action which you want to test which might look something like:
public class PersonAction extends ActionSupport {

  private int id;

  public int getId() {
    return id;
  }

  public void setId(int id) {
    this.id = id;
  }

  public String deletePerson() {
    // delete the person with this id here
    return SUCCESS;
  }
}
Remember that in Struts 2, an action is usually called before and after various other interceptors are invoked. Interceptor configuration is usually specified in the struts.xml file. At this point we need to cover three different ways you might want to call your actions:

  1. Specify request parameters which are translated and mapped to the action's domain objects (id in the PersonAction class), and then execute the action while also executing all configured interceptors.
  2. Instead of specifying request parameters, directly specify the values of the domain objects, and then execute the action while also executing all configured interceptors.
  3. Finally, you might just want to execute the action and not worry about executing the interceptors. Here you'll specify the values of the action's domain objects and then execute the action.

Depending on what you're testing and what scenario you want to reproduce, you should pick the one that suits the case. There's an example of all three cases below. The best way I find to test all your action classes is to have one base class which sets up the Struts 2 environment, and then have your action test classes extend it. Here's a class that could be used as one of those base classes; see the comments for a little more detail about what's going on. One point to note is that the class being extended here is junit.framework.TestCase and not org.apache.struts2.StrutsTestCase as one might expect. The reason for this is that StrutsTestCase is not really a well-written class and does not provide enough flexibility in how we want the very core Dispatcher object to be created. Also, the interceptor example shown in the Struts documentation does not compile, as there seems to have been some sort of API change; it's been fixed in this example.
public class BaseStrutsTestCase extends TestCase {

 private Dispatcher dispatcher;
 protected ActionProxy proxy;
 protected MockServletContext servletContext;
 protected MockHttpServletRequest request;
 protected MockHttpServletResponse response;

 /**
  * Creates the action class based on namespace and name.
  */
 protected <T> T createAction(Class<T> clazz, String namespace, String name)
   throws Exception {

  // create a proxy class which is just a wrapper around the action call.
  // The proxy is created by checking the namespace and name against the
  // struts.xml configuration
  proxy = dispatcher.getContainer().getInstance(ActionProxyFactory.class).
    createActionProxy(namespace, name, null, true, false);

  // set to true if you want to process Freemarker or JSP results
  proxy.setExecuteResult(false);

  // by default, don't pass in any request parameters
  proxy.getInvocation().getInvocationContext().
    setParameters(new HashMap());

  // set the action's context to the one which the proxy is using
  ActionContext.setContext(proxy.getInvocation().getInvocationContext());
  request = new MockHttpServletRequest();
  response = new MockHttpServletResponse();
  ServletActionContext.setRequest(request);
  ServletActionContext.setResponse(response);
  return (T) proxy.getAction();
 }

 protected void setUp() throws Exception {
  String[] config = new String[] { "META-INF/applicationContext-aws.xml" };

  // Link the servlet context and the Spring context
  servletContext = new MockServletContext();
  XmlWebApplicationContext appContext = new XmlWebApplicationContext();
  appContext.setServletContext(servletContext);
  appContext.setConfigLocations(config);
  appContext.refresh();
  servletContext.setAttribute(
    WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE, appContext);

  // Use spring as the object factory for Struts
  StrutsSpringObjectFactory ssf = new StrutsSpringObjectFactory(
    null, null, servletContext);
  ssf.setApplicationContext(appContext);

  // Dispatcher is the guy that actually handles all requests.  Pass in
  // an empty Map as the parameters but if you want to change stuff like
  // what config files to read, you need to specify them here
  // (see Dispatcher's source code)
  dispatcher = new Dispatcher(servletContext,
    new HashMap());
  dispatcher.init();
  Dispatcher.setInstance(dispatcher);
 }
}
By extending the above class in our action test classes we can easily simulate any of the three scenarios listed above. I've added three methods to PersonActionTest which illustrate how to test the three cases: testInterceptorsBySettingRequestParameters(), testInterceptorsBySettingDomainObjects() and testActionAndSkipInterceptors(), respectively.
public class PersonActionTest extends BaseStrutsTestCase {

 /**
  * Invoke all interceptors and specify values of the action
  * class' domain objects directly.
  * @throws Exception Exception
  */
 public void testInterceptorsBySettingDomainObjects()
         throws Exception {
  PersonAction action = createAction(PersonAction.class,
                "/site", "deletePerson");
  action.setId(123);
  String result = proxy.execute();
  assertEquals(result, "success");
 }

 /**
  * Invoke all interceptors and specify values of the action class'
  * domain objects through request parameters.
  * @throws Exception Exception
  */
 public void testInterceptorsBySettingRequestParameters()
                     throws Exception {
  createAction(PersonAction.class, "/site", "deletePerson");
  Map params = new HashMap();
  params.put("id", "123");
  proxy.getInvocation().getInvocationContext().setParameters(params);
  String result = proxy.execute();
  assertEquals(result, "success");
 }

 /**
  * Skip interceptors and specify values of the action class'
  * domain objects by setting them directly.
  * @throws Exception Exception
  */
 public void testActionAndSkipInterceptors() throws Exception {
  PersonAction action = createAction(PersonAction.class,
                  "/site", "deletePerson");
  action.setId(123);
  String result = action.deletePerson();
  assertEquals(result, "success");
 }
}
The source code for Dispatcher is probably a good thing to look at if you want to configure your actions more specifically. There are options to specify zero-configuration, alternate XML files and others. Ideally StrutsTestCaseHelper should do a lot more than it does right now (creating a badly configured Dispatcher) and should allow creation of custom dispatchers and object factories. That's the reason I'm not using StrutsTestCase, since all it does is make a couple of calls through StrutsTestCaseHelper. If you want to test your validation, it's pretty easy. Here's a snippet of code that might do that:
 public void testValidation() throws Exception {
  SomeAction action = createAction(SomeAction.class,
                  "/site", "someAction");
  // lets forget to set a required field: action.setId(123);
  String result = proxy.execute();
  assertEquals(result, "input");
  assertTrue("Must have one field error",
                  action.getFieldErrors().size() == 1);
 }
This example uses Struts 2.0.8 and Spring 2.0.5.
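For reference, a struts.xml mapping that would satisfy the tests above might look like the following. This is only a sketch: the action class package, method attribute and result pages are assumptions inferred from the test code, not taken from the original project.

```xml
<!-- Hypothetical struts.xml fragment matching
     createAction(PersonAction.class, "/site", "deletePerson") -->
<package name="site" namespace="/site" extends="struts-default">
    <action name="deletePerson" class="com.example.PersonAction"
            method="deletePerson">
        <result name="success">/personDeleted.jsp</result>
        <result name="input">/personForm.jsp</result>
    </action>
</package>
```

With a mapping like this in place, the proxy lookup by namespace "/site" and name "deletePerson" resolves, and the full interceptor stack from struts-default runs around the action.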

Saturday, July 7, 2007

Step by step Spring-Security (aka. Acegi security)

I recently evaluated the use of Acegi as the security framework for a Web development project. In the end, we decided to move forward with Acegi but in the beginning it took a couple days to come to that decision. The amazing thing is: once you get over the initial learning curve, it's smooth sailing. Hence, I wanted to share my experiences with it because first, I wanted to expose the Acegi security framework to JDJ readers and, second, I wanted to make it easier for JDJ readers to get over the initial learning curve. Once you're over that, you should be well on your way with Acegi.

Exposing Acegi Security Framework

Acegi is an open source security framework that allows you to keep your business logic free from security code. Acegi provides three main types of security:

  1. Web-based access control lists (ACL) based on URL schemes
  2. Java class and method security using AOP
  3. Yale's Central Authentication Service for single sign-on (SSO).
Acegi also provides the option of performing container security.

Acegi uses Spring as its configuration settings, so those familiar with Spring will be at ease with Acegi configuration. If you're not familiar with Spring, it's still easy to learn Acegi configuration by example. You don't have to use SpringMVC to secure your Web application. I have successfully used Acegi with Struts. You can use Acegi with WebWork and Velocity, Struts, SpringMVC, JSF, Web Services, and more.

Why use Acegi instead of JAAS? It can be difficult to stray from well-documented standards like JAAS. However, porting container-managed security realms is not easy. With Acegi, this security layer is an application framework that is easily ported. Acegi will allow you to easily reuse and port your "Remember Me," "I forgot my password," and log-in/log-out security functions to different servlet and EJB containers. If you have a standards-based security layer that you have re-created for numerous Java applications and it is not getting reused, you need to take a good look at Acegi. Besides, why are you spending time on framework coding when you should be focusing on the business logic? Leave the framework development to product developers and the open source community.

Getting Over That Initial Learning Curve

To get you over the initial learning curve, I'll take you through a simple setup using a demonstration application. I'll focus on the first security approach - URL-based security for Web applications because that's the most commonly used.


First things first - we need to install it! I'll use Tomcat 5 as my servlet container to illustrate.

Step 1: Set up a new Tomcat Web context with the "WEB-INF/", "WEB-INF/lib/", and "WEB-INF/classes" folders, per usual. I called my context "/acegi-demo" and access it using http://localhost:8080/acegi-demo/.

Step 2: Add another folder called "/secured," which we'll protect with Acegi.

Step 3: Now let's add the necessary Acegi library files to plug Acegi into our Tomcat context. (Please download the file provided with this article.)

Let's understand the JAR packages we are adding to the lib directory. The most important JAR is acegi-security-0.8.3.jar, the Acegi core library. Acegi leverages Spring for its configuration, so we also need spring-1.2.RC2.jar. The remaining JARs are utility libraries for dealing with collections (commons-collections-3.1.jar), logging (commons-logging-1.0.4.jar, log4j-1.2.9.jar), and regular expressions (oro-2.0.8.jar). Special thanks to Apache Jakarta for these wonderful utility libraries.


Now that we have our core infrastructure in place, let's focus on configuration.

Step 4: Configure the web.xml file to begin tying the Web application to the Acegi security framework.

  1. First, we need to set up two parameters: contextConfigLocation, which will point to Acegi's configuration file, and log4jConfigLocation, which will point to Log4J's configuration file.
  2. Next, we have to set up the Acegi Filter Chain Proxy; this critical proxy allows Acegi to interact with the servlet filtering feature. We will talk about this more in step 5 (configuring applicationContext.xml).
  3. Finally, we want to add three listeners to loosely couple Spring with the Web context, Spring with Log4J and Acegi with the HTTP Session events in the Web context, such as create session and destroy session.
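A minimal web.xml along the lines of the three steps above could look like this. Treat it as a sketch for Acegi 0.8 and Spring 1.2: the class names are the standard ones from those releases, but the two file paths are assumptions.

```xml
<!-- Sketch: wiring the web app to Spring and the Acegi filter chain -->
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/applicationContext.xml</param-value>
</context-param>
<context-param>
    <param-name>log4jConfigLocation</param-name>
    <param-value>/WEB-INF/classes/log4j.properties</param-value>
</context-param>

<!-- Delegates servlet filtering to the filterChainProxy Spring bean -->
<filter>
    <filter-name>Acegi Filter Chain Proxy</filter-name>
    <filter-class>net.sf.acegisecurity.util.FilterToBeanProxy</filter-class>
    <init-param>
        <param-name>targetClass</param-name>
        <param-value>net.sf.acegisecurity.util.FilterChainProxy</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>Acegi Filter Chain Proxy</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

<!-- Spring with the Web context, Spring with Log4J, Acegi with session events -->
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<listener>
    <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
</listener>
<listener>
    <listener-class>net.sf.acegisecurity.ui.session.HttpSessionEventPublisher</listener-class>
</listener>
```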
Step 5: Now we need to configure the applicationContext.xml  to instruct the Acegi framework to perform our security requirements. It is important to note that you typically don't have to write or compile any code to fuse your application with the Acegi security framework. Acegi is almost entirely configuration driven, thanks to a great design by its creator, Ben Alex, and Spring. Okay, enough back patting, let's get to it...

Remember, the Acegi Filter Chain Proxy is critical. This is the backbone of the configuration. Using the servlet filter specification, Acegi is able to plug in its security functionality in a modular way.

I ordered the Spring bean references in the applicationContext.xml file based on the sequence each bean is referenced, starting with the filterChainProxy bean. If you are new to Spring, just know that the order in which a bean is referenced is not important. I ordered it this way to make it as easy as possible to follow along.

<bean id="filterChainProxy" class="net.sf.acegisecurity.util.FilterChainProxy">
    <property name="filterInvocationDefinitionSource">
        <value>
            CONVERT_URL_TO_LOWERCASE_BEFORE_COMPARISON
            PATTERN_TYPE_APACHE_ANT
            /**=httpSessionContextIntegrationFilter,authenticationProcessingFilter,anonymousProcessingFilter,securityEnforcementFilter
        </value>
    </property>
</bean>

In the filterChainProxy bean (see code snippet above), we tell Acegi that we want to use lowercase for all URL comparisons and use the Apache ANT style for pattern matching on the URLs. In our example, we run the filterChainProxy on every single URL by specifying /**=Filter1,Filter2, etc. Next, we set up the filter chain itself, where order is very important. We have four filters in the chain in this simple example, but when you start using Acegi, you'll most likely have more. Viewing applicationContext.xml, please take a few moments to follow all the bean references in detail as you traverse the filter chain. I will walk through each item in the chain at a high level.

The first item in the chain must be the httpSessionContextIntegrationFilter filter. This filter works hand-in-hand with the HTTP Session object and the Web context to see if the user is authenticated and, if so, what roles the user has. We have little to configure for this filter.

The second item in the chain is the authenticationProcessingFilter filter, which watches for any URL that matches /j_acegi_security_check, because this is the URL that our login form will post a username and password to when attempting authentication. This filter also contains the configuration detailing where to send someone if the login succeeds or fails. If it succeeds, you can configure this filter either to direct the user to the page the user originally tried to access, or to a particular start page where you want all authenticated users to land after authentication. I have the latter option configured in my example by setting alwaysUseDefaultTargetUrl to true; set it to false to get the former behavior.
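A sketch of what that filter's bean definition might look like follows. The property names are the Acegi 0.8 ones described above; the specific target and failure URLs are assumptions for this demo.

```xml
<!-- Sketch: login processing; URL values here are assumed for the demo -->
<bean id="authenticationProcessingFilter"
      class="net.sf.acegisecurity.ui.webapp.AuthenticationProcessingFilter">
    <property name="authenticationManager" ref="authenticationManager" />
    <!-- the URL the login form posts to -->
    <property name="filterProcessesUrl" value="/j_acegi_security_check" />
    <!-- where to go when login fails -->
    <property name="authenticationFailureUrl" value="/acegilogin.jsp?login_error=1" />
    <!-- the fixed landing page, used because alwaysUseDefaultTargetUrl is true -->
    <property name="defaultTargetUrl" value="/secured/index.jsp" />
    <property name="alwaysUseDefaultTargetUrl" value="true" />
</bean>
```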

One of the beans configured in the authenticationProcessingFilter is the authenticationManager bean. This bean manages the various providers you configure. A provider is essentially a repository of usernames with corresponding passwords and roles. The authenticationManager will stop iterating through the list of providers once a user is successfully authenticated. In practice, you may have two or three providers; for example, one provider could access an Active Directory for employee credentials, while your second provider might access a database for customer credentials. You will most often need an anonymousAuthenticationProvider because you need it to allow access to pages that do not require authentication, such as the login page or the home page. The demonstration application for this article uses a memory provider and an anonymous provider. Once you get this simple application working, you probably want to add a JDBC or LDAP provider.
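A provider setup along those lines could be sketched like this. It is a hedged example for Acegi 0.8: the user entry and the anonymous key are invented, and other property details may differ in your version.

```xml
<!-- Sketch: provider manager with one in-memory user and an anonymous provider -->
<bean id="authenticationManager" class="net.sf.acegisecurity.providers.ProviderManager">
    <property name="providers">
        <list>
            <ref bean="daoAuthenticationProvider" />
            <ref bean="anonymousAuthenticationProvider" />
        </list>
    </property>
</bean>

<bean id="daoAuthenticationProvider"
      class="net.sf.acegisecurity.providers.dao.DaoAuthenticationProvider">
    <property name="authenticationDao" ref="inMemoryDaoImpl" />
</bean>

<!-- username=password,ROLE format; "admin"/"secret" is a made-up demo user -->
<bean id="inMemoryDaoImpl"
      class="net.sf.acegisecurity.providers.dao.memory.InMemoryDaoImpl">
    <property name="userMap">
        <value>
            admin=secret,ROLE_ADMIN
        </value>
    </property>
</bean>

<bean id="anonymousAuthenticationProvider"
      class="net.sf.acegisecurity.providers.anonymous.AnonymousAuthenticationProvider">
    <property name="key" value="anonymousKey" />
</bean>
```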

The third item in the chain is the anonymousProcessingFilter filter. This will match the value created by the anonymousAuthenticationProvider.

The fourth and final item in the filter chain is the securityEnforcementFilter filter. This filter has two beans: the filterSecurityInterceptor and the authenticationProcessingFilterEntryPoint. The latter bean is used to direct the user to the login form each time the user tries to access a secured page but is not logged in. We can also force the user to use HTTPS. The former bean, filterSecurityInterceptor, does quite a bit of heavy lifting by tying all our filters together.

The filterSecurityInterceptor bean checks that the authenticated user has the right roles (or permissions) to access a particular objectDefinitionSource. Here we are using AffirmativeBased voting, which means the user just has to have one of the roles specified in the objectDefinitionSource. This is most likely what you will use, but Acegi does have a unanimous voter that ensures that a person has every role specified in the objectDefinitionSource before granting access. By now you may have realized that objectDefinitionSource determines who can access what.
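As a toy illustration in plain Java (this is not the Acegi API; the class and method names are invented for the sketch), the difference between the two voting strategies boils down to an any-of versus all-of check over the user's granted roles:

```java
import java.util.List;
import java.util.Set;

// Toy model of Acegi's access decision voting, for illustration only.
public class VotingSketch {

    // AffirmativeBased-style: access is granted if the user holds at least
    // ONE of the roles listed for the URL in the objectDefinitionSource.
    static boolean affirmative(Set<String> userRoles, List<String> requiredRoles) {
        for (String role : requiredRoles) {
            if (userRoles.contains(role)) {
                return true;
            }
        }
        return false;
    }

    // Unanimous-style: access is granted only if the user holds EVERY listed role.
    static boolean unanimous(Set<String> userRoles, List<String> requiredRoles) {
        return userRoles.containsAll(requiredRoles);
    }

    public static void main(String[] args) {
        Set<String> granted = Set.of("ROLE_ADMIN");
        List<String> required = List.of("ROLE_ANONYMOUS", "ROLE_ADMIN");

        // Holding just ROLE_ADMIN passes the affirmative check...
        System.out.println(affirmative(granted, required)); // true
        // ...but fails the unanimous one, since ROLE_ANONYMOUS is missing.
        System.out.println(unanimous(granted, required));   // false
    }
}
```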

The objectDefinitionSource starts off with the same two configuration instructions that filterChainProxy did, namely converting all URLs to lowercase and using the Apache ANT style for pattern matching. Next, we define which roles are allowed to access a particular URL. In our example, we give anonymous access to the /acegilogin.jsp page so that unauthenticated users can arrive at this page to log in. The next line in the objectDefinitionSource provides access to everything below the /secured directory for any user with the ADMIN role. Finally, we add a line that starts with /** to match every URL. The filter stops at the first pattern the URL matches, so make sure you put specific patterns toward the top and broad patterns toward the bottom to ensure you get the desired behavior. If you were working with Struts, you could either set up your Struts modules accordingly or simply specify the Struts action path in the objectDefinitionSource.
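Putting the pieces above together, the interceptor bean might be sketched as follows; the role names and paths are the ones from this demo, while the accessDecisionManager bean it references is assumed to be defined elsewhere in the context.

```xml
<!-- Sketch: role-to-URL rules as described in the text -->
<bean id="filterSecurityInterceptor"
      class="net.sf.acegisecurity.intercept.web.FilterSecurityInterceptor">
    <property name="authenticationManager" ref="authenticationManager" />
    <!-- e.g. an AffirmativeBased manager: one matching role is enough -->
    <property name="accessDecisionManager" ref="accessDecisionManager" />
    <property name="objectDefinitionSource">
        <value>
            CONVERT_URL_TO_LOWERCASE_BEFORE_COMPARISON
            PATTERN_TYPE_APACHE_ANT
            /acegilogin.jsp=ROLE_ANONYMOUS,ROLE_ADMIN
            /secured/**=ROLE_ADMIN
            /**=ROLE_ANONYMOUS,ROLE_ADMIN
        </value>
    </property>
</bean>
```

Note the ordering: the specific /acegilogin.jsp and /secured/** lines come before the catch-all /** line, matching the advice above.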

At this point, we are done with the applicationContext.xml file. To complete our demonstration application, all we need to do now is create a login form and put something in the /secured directory to see that our Acegi authentication and authorization configuration is working. (See the download for /acegilogin.jsp and /secured/index.jsp.)

The login form is very simple; it has input fields for the username and password, j_username and j_password, respectively, and a form action pointing to j_acegi_security_check since that is what the authenticationProcessingFilter filter listens for to capture every login form submission.
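In markup, such a form could be as small as the sketch below; the field names and the form action are the ones Acegi listens for, while the labels and layout are arbitrary.

```html
<!-- Sketch of /acegilogin.jsp -->
<form method="post" action="j_acegi_security_check">
    Username: <input type="text" name="j_username" /><br/>
    Password: <input type="password" name="j_password" /><br/>
    <input type="submit" value="Log in" />
</form>
```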

Test your configuration and inspect the Tomcat logs and the Log4J log file that we configured for this application if you run into problems.

Now That I'm Over the Initial Learning Curve, What's Next?
Once you have this simple Acegi demonstration application running, you will undoubtedly want to increase its sophistication. The first thing I would want to do is to add a JDBC profile in addition to the simple in-memory profile.

I can understand the excitement after getting the initial application up and running, but you still have some reading to do in order to eclipse the initial learning curve. Read through the articles posted in the External Web Articles section of the Acegi Web site. Read through the Reference Documentation provided by Ben Alex, the creator of Acegi. Ben does a good job of providing help through the support forum too. Also, read the well-kept JavaDocs as your main source of information once you get familiar with Acegi. Of course, you can opt to read the source code - it's open source!

Since this is your first time using Acegi, test after each change to the applicationContext.xml file. The process of "one change, then test" will help you understand exactly what change to the applicationContext.xml file caused an error if one should occur. If you make four changes to that file, restart the application and get an error, then you won't know which one of the four changes caused the error.

Note that I kept this application very simple. As you add in features such as Acegi's caching, you will need to add the appropriate libraries (or JARs). Look at the Acegi example application available on the Acegi Web site to get access to all the various libraries. The example application on the Acegi Web site is complex, so it is not the best place to start to get over the initial learning curve, unfortunately, hence my attempt to make it easier with the article!

No Groups in Acegi?
Acegi will let you work with the notion of groups. When you put a person in a group, you are just grouping the permissions (or roles) that the group does or does not have. So, when you set up your LDAP or JDBC profile, you need to make sure that the query returns the roles that the users' groups should have access to.

Acegi is a very configurable, open source security framework that will finally let you reuse and port your security layer components. It can be daunting at first, but this article should easily remove the stress in getting over the learning curve. Remember, you need to get this simple application running, test after each change, and read the recommended readings to fully surmount the initial learning curve. After you follow these steps, you will be well on your way to mastering Acegi.

Spring MVC: How it works

If you are interested in the Spring Framework’s MVC packages, this could be helpful. It’s a unified description of the lifecycle of a web application or portlet request as handled by Spring Web MVC and Spring Portlet MVC. I created this for two reasons: I wanted a quick reference to the way Spring finds handlers for each stage of the request; and I wanted a unified view so I could see the similarities and differences between Web MVC and Portlet MVC.
Spring Web MVC, part of the Spring Framework, has long been a highly-regarded web application framework. With the recent release of Spring 2.0, Spring now supports portlet development with the Spring Portlet MVC package. Portlet MVC builds on Web MVC; even their documentation is similar. My focus right now is on portlet development, and I found it cumbersome to have to read the Web MVC and Portlet MVC documentation simultaneously. So I have re-edited the two together into one unified document. I have also included related information from elsewhere in the documentation, so it’s all in one place.
My idea here is to make it easy to go through your Spring configuration files and ensure that all beans are declared and named as they should be, whether you are using Spring Web MVC or Spring Portlet MVC.


Spring’s Web and Portlet MVC are request-driven web MVC frameworks, designed around a servlet or portlet that dispatches requests to controllers. Spring’s dispatchers (DispatcherServlet and DispatcherPortlet) are also completely integrated with the Spring ApplicationContext and allow you to use every other feature Spring has.
The DispatcherServlet is a standard servlet (extending HttpServlet), and as such is declared in the web.xml of your web application. Requests that you want the DispatcherServlet to handle should be mapped using a URL mapping in the same web.xml file. Similarly, the DispatcherPortlet is a standard portlet (extending GenericPortlet), and as usual is declared in the portlet.xml of your web application. This is all standard J2EE configuration; here are a couple of examples:
From web.xml:

 <servlet>
  <servlet-name>sample</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
 </servlet>

 <!-- all requests ending with ".form" will be handled by Spring. -->
 <servlet-mapping>
  <servlet-name>sample</servlet-name>
  <url-pattern>*.form</url-pattern>
 </servlet-mapping>
From portlet.xml:

 <portlet>
  <portlet-name>sample</portlet-name>
  <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class>
  <supports>
   <mime-type>text/html</mime-type>
   <portlet-mode>view</portlet-mode>
  </supports>
  <portlet-info>
   <title>Sample Portlet</title>
  </portlet-info>
 </portlet>
In the Portlet MVC framework, each DispatcherPortlet has its own WebApplicationContext, which inherits all the beans already defined in the root WebApplicationContext. These inherited beans can be overridden in the portlet-specific scope, and new scope-specific beans can be defined local to a given portlet instance.

Dispatcher Workflow

When a DispatcherServlet or DispatcherPortlet is set up for use and a request comes in for that specific dispatcher, it starts processing the request. The sections below describe the complete process a request goes through when handled by such a dispatcher, from determining the application context right through to rendering the view.

Application context

For Web MVC only, the WebApplicationContext is searched for and bound in the request as an attribute in order for the controller and other elements in the process to use. It is bound by default under the key DispatcherServlet.WEB_APPLICATION_CONTEXT_ATTRIBUTE.


The locale is bound to the request to let elements in the process resolve the locale to use when processing the request (rendering the view, preparing data, etc.). For Web MVC, this is the locale resolver. For Portlet MVC, this is the locale returned by PortletRequest.getLocale(). Note that locale resolution is not supported in Portlet MVC - this is in the purview of the portal/portlet-container and is not appropriate at the Spring level. However, all mechanisms in Spring that depend on the locale (such as internationalization of messages) will still function properly because DispatcherPortlet exposes the current locale in the same way as DispatcherServlet.


For Web MVC only, the theme resolver is bound to the request to let elements such as views determine which theme to use. The theme resolver does not affect anything if you don't use it, so if you don't need themes you can just ignore it. Theme resolution is not supported in Portlet MVC - this area is in the purview of the portal/portlet-container and is not appropriate at the Spring level.

Multipart form submissions

For Web MVC and for Portlet MVC Action requests, if a multipart resolver is specified, the request is inspected for multiparts. If they are found, the request is wrapped in a MultipartHttpServletRequest or MultipartActionRequest for further processing by other elements in the process.
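For Web MVC, enabling this is just a matter of declaring a bean named "multipartResolver" in the dispatcher's context; a common sketch using Commons FileUpload looks like this (the size limit is an assumed example value):

```xml
<!-- Sketch: multipart resolver picked up by name by the DispatcherServlet -->
<bean id="multipartResolver"
      class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
    <!-- optional upper bound on upload size, in bytes -->
    <property name="maxUploadSize" value="100000" />
</bean>
```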

Handler mapping

Spring looks at all handler mappings (beans implementing the appropriate HandlerMapping interface) in the application context. Any that implement the Ordered interface are sorted (lowest order first), and the others are added at the end of the list. The handler mappings are tried in order until one yields a handler. (Note: if the dispatcher's detectAllHandlerMappings attribute is set to false, then this changes: Spring simply uses the handler mapping bean called "handlerMapping" and ignores any others.)
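As a sketch of how the ordering plays out, consider a context with an explicitly ordered mapping; the URL pattern and the controller bean name here are invented for illustration:

```xml
<!-- Sketch: an ordered handler mapping consulted before unordered ones -->
<bean id="primaryMapping"
      class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
    <!-- lowest order value is tried first -->
    <property name="order" value="0" />
    <property name="mappings">
        <props>
            <prop key="/accounts/*.form">accountController</prop>
        </props>
    </property>
</bean>
```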


If a handler is found, the execution chain associated with the handler (pre-processors, controllers and post-processors) will be executed in order to prepare a model for rendering. The handler chain returns a View object or a view name, and normally also returns a model. For example, a pre-processor may block the request for security reasons and render its own view; in this case it will not return a model. Note that the handler chain need not explicitly return a view or view name. If it does not, Spring creates a view name from the request path. For example, the path /servlet/apac/NewZealand.jsp yields the view name "apac/NewZealand". This behaviour is implemented by an implicitly-defined DefaultRequestToViewNameTranslator bean; you can configure your own bean (which must be called "viewNameTranslator") if you want to customise its behaviour.
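Customising that translation is a one-bean affair; for instance (the prefix value is an assumed example):

```xml
<!-- Sketch: overriding the implicit view-name translation -->
<bean id="viewNameTranslator"
      class="org.springframework.web.servlet.view.DefaultRequestToViewNameTranslator">
    <!-- /servlet/apac/NewZealand.jsp would now yield "myapp/apac/NewZealand" -->
    <property name="prefix" value="myapp/" />
</bean>
```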


Exceptions that are thrown during processing of the request go to the handler exception resolver chain. Spring looks at all handler exception resolvers (beans implementing the appropriate HandlerExceptionResolver interface) in the application context. Any that implement the Ordered interface are sorted (lowest order first), and the others are added at the end of the list. The resolvers are tried in order until one yields a model and view. (Note: if the dispatcher's detectAllHandlerExceptionResolvers attribute is set to false, then this changes: Spring simply uses the handler exception resolver bean called "handlerExceptionResolver" and ignores any others.)

View resolver

If the handler chain returns a view name and a model, Spring uses the configured view resolvers to resolve the view name to a View. Spring looks at all view resolvers (beans implementing the ViewResolver interface) in the application context. Any that implement the Ordered interface are sorted (lowest order first), and the others are added at the end of the list. The view resolvers are tried in order until one yields a view. (Note: if the dispatcher's detectAllViewResolvers attribute is set to false, then this changes: Spring simply uses the view resolver bean called "viewResolver" and ignores any others.) If the handler chain returns a View object, then no view resolution is necessary. Similarly, if it does not return a model, then no view will be rendered, so again no view resolution is necessary.
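The most common resolver simply maps view names onto JSPs under a prefix; a sketch (the JSP directory is an assumed convention):

```xml
<!-- Sketch: view name "apac/NewZealand" resolves to /WEB-INF/jsp/apac/NewZealand.jsp -->
<bean id="viewResolver"
      class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/jsp/" />
    <property name="suffix" value=".jsp" />
</bean>
```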


If we now have a view and a model, then Spring uses the view to render the model. This is what the user will see in the browser window or portlet.

Changing Log4j logging levels dynamically

A simple problem, and it may seem oh-not-so-cool: make the log4j level dynamically configurable. You should be able to change from DEBUG to INFO or any of the other levels, all in a running application server.

First, the simple but not-so-elegant approach. Don't get me wrong about the elegance statement: this approach works.

Log4j API

Often applications will have custom log4j properties files. Here we define the appenders and the layouts for the appenders. Somewhere in the java code we have to initialize log4j and point it to this properties file. We can use the following API call to configure and apply the dynamic update.
org.apache.log4j.PropertyConfigurator.configureAndWatch(logFilePath, logFileWatchDelay);
  • Pass it the path to the custom log4j properties file and a delay in milliseconds. Log4j will periodically check the file for changes (after each passage of the configured delay time).

Spring Helpers

If you are using Spring then you are in luck. Spring provides ready-to-use classes to do this job. You can use the support class org.springframework.web.util.Log4jWebConfigurer. Provide it values for log4jConfigLocation and log4jRefreshInterval. For the path you can pass either one that is relative to your web application (this means you need to deploy in expanded WAR form) or an absolute path. I prefer the latter; that way I can keep my WAR file warred and not expanded.

There is also a web application listener class, org.springframework.web.util.Log4jConfigListener, that you can use in the web.xml file. The actual implementation of the Spring class Log4jWebConfigurer makes the call to either configureAndWatch() (when a refresh interval is configured) or plain configure() (when it is not).

Log4j spawns a separate thread to watch the file. Make sure your application has a shutdown hook where you call org.apache.log4j.LogManager.shutdown() to shut down log4j cleanly. The thread unfortunately does not die if your application is undeployed; that's the only downside of using the Log4j configureAndWatch API. In most cases that's not a big deal, so I think it's fine.

JMX Approach

JMX is, in my opinion, the cleanest approach. It involves some legwork initially but is well worth it. This example is run on JBoss 4.0.5. Let's look at a simple class that will actually change the log level.
package com.aver.logging;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Log4jLevelChanger {
   public void setLogLevel(String loggerName, String level) {
      if ("debug".equalsIgnoreCase(level)) {
         Logger.getLogger(loggerName).setLevel(Level.DEBUG);
      } else if ("info".equalsIgnoreCase(level)) {
         Logger.getLogger(loggerName).setLevel(Level.INFO);
      } else if ("error".equalsIgnoreCase(level)) {
         Logger.getLogger(loggerName).setLevel(Level.ERROR);
      } else if ("fatal".equalsIgnoreCase(level)) {
         Logger.getLogger(loggerName).setLevel(Level.FATAL);
      } else if ("warn".equalsIgnoreCase(level)) {
         Logger.getLogger(loggerName).setLevel(Level.WARN);
      }
   }
}
  • Given a logger name and a level to change to, this code does just that. It needs some error handling and could be cleaned up a little, but it works for what I am showing.
  • To change the log level we get the logger for the specified loggerName and set the new level on it.
My application uses Spring so the rest of the configuration is Spring related. Now we need to register this bean as an MBean into the MBeanServer running inside JBoss. Here is the Spring configuration.
<?xml version="1.0" encoding="UTF-8"?>
<beans>

  <bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
    <property name="beans">
      <map>
        <entry key="bean:name=Log4jLevelChanger"
               value-ref="com.aver.logging.Log4jLevelChanger" />
      </map>
    </property>
  </bean>

  <bean id="com.aver.logging.Log4jLevelChanger"
        class="com.aver.logging.Log4jLevelChanger" />

</beans>
  • In Spring we use the MBeanExporter to register beans with the container's running MBean server.
  • I provide the MBeanExporter with references to the beans I want to expose via JMX.
  • Finally, my management bean, Log4jLevelChanger, is registered with Spring.
That's it. With this configuration your bean will get registered in JBoss's MBean server. By default Spring will publish all public methods on the bean via JMX. If you need more control over which methods get published, refer to the Spring documentation. I will probably cover that topic in a separate blog, since I had to do all of that when I set up JMX for a project using Weblogic 8.1. With Weblogic 8.1 things are unfortunately not as straightforward as above. That's for another day, another blog.

One thing to note here is that the parameter names show up as p1 (for loggerName) and p2 (for level). This is because I have not provided any metadata about the parameters. When I do my blog on using JMX+Spring+CommonsAttributes under Weblogic 8.1, you will see how this can be resolved. BTW, for JDK 1.4-based Spring projects you must use the Commons Attributes tags provided by Spring to register and describe your beans as JMX beans. The initial minor learning curve will save you tons of time later.
By Mathew

XFire WebService With Spring

Tried setting up XFire with Spring and thought I'd share the experience. One more place to find this information won't hurt, eh!

Once again I used Maven to build my test application. At the bottom of this article you will find a download link for the entire application.

I have used Axis in the past and wanted to try out some other frameworks. At the same time, I absolutely needed the framework to support JSR 181 (web service annotations), to integrate with Spring, and to have relatively simple configuration. Oh, and I did not want to write any WSDL. This example is an RPC-based web service (unlike my previous article on a document-based web service with Spring-WS). After this article I will also start using Axis2, since I have been an Axis fan for many years.

JSR 181 is important to me. I think annotations are the right way to go for most simple tasks that do not require a lot of input. The web service annotations are good. I have seen examples of annotations where it would be easier and clearer to put the information into the old XML-style configuration. Some folks are anti-annotation, and I think that attitude is not the best. Use annotations where they make sense and reduce configuration in external files.

Lets view the echo service java POJO code.

package com.aver;

public interface EchoService {
    public String printback(java.lang.String name);
}
package com.aver;

import java.text.SimpleDateFormat;
import java.util.Calendar;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebResult;
import javax.jws.WebService;

@WebService(name = "EchoService", targetNamespace = "")
public class EchoServiceImpl implements EchoService {

    @WebMethod(operationName = "echo", action = "urn:echo")
    @WebResult(name = "EchoResult")
    public String printback(@WebParam(name = "text")
    String text) {
        if (text == null || text.trim().length() == 0) {
            return "echo: -please provide a name-";
        }
        SimpleDateFormat dtfmt = new SimpleDateFormat("MM-dd-yyyy hh:mm:ss a");
        return "echo: '" + text + "' received on " + dtfmt.format(Calendar.getInstance().getTime());
    }
}

As you can see above I have made liberal use of JSR 181 web service annotations.

  • @WebService declares the class as exposing a web service method(s).
  • @WebMethod declares the particular method as being exposed as a web service method.
  • @WebParam gives nice-to-read parameter names which will show up in the auto-generated WSDL. Always provide these for the sake of your consumers' sanity.
  • Also you can see that the java method is named 'printback' but exposed as name 'echo' by the @WebMethod annotation.
Here is the web.xml.
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app
    PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">
The web.xml configures the 'XFireSpringServlet' and sets up the Spring listener. Straightforward.
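Since the listing above lost its body, here is a minimal sketch of what such a web.xml typically contains. The servlet name, URL pattern and context-param value are assumptions based on common XFire + Spring setups, not taken from the original article; note that a servlet named 'xfire' will, by Spring convention, load its beans from xfire-servlet.xml.

<web-app>
    <!-- load XFire's own Spring beans (xfire, xfire.typeMappingRegistry, etc.) -->
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:org/codehaus/xfire/spring/xfire.xml</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>xfire</servlet-name>
        <servlet-class>org.codehaus.xfire.spring.XFireSpringServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>xfire</servlet-name>
        <url-pattern>/services/*</url-pattern>
    </servlet-mapping>
</web-app>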
Finally here is the xfire-servlet.xml (this is our spring configuration file).

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans">

    <bean id="webAnnotations"
        class="org.codehaus.xfire.annotations.jsr181.Jsr181WebAnnotations" />

    <bean id="jsr181HandlerMapping"
        class="org.codehaus.xfire.spring.remoting.Jsr181HandlerMapping">
        <property name="typeMappingRegistry">
            <ref bean="xfire.typeMappingRegistry" />
        </property>
        <property name="xfire" ref="xfire" />
        <property name="webAnnotations" ref="webAnnotations" />
    </bean>

    <bean id="echo" class="com.aver.EchoServiceImpl" />

</beans>
  • Sets up the xfire bean to recognize jsr 181 annotations.
  • Last bean is our echo service implementation bean (with annotations).
That's it. Build and deploy this and you should see the WSDL at http://localhost:9090/echoservice/services/EchoServiceImpl?wsdl.
By Mathew

Step by step Spring-WS

Took a look at Spring-WS and came up with a quick example service to describe its use. I decided to build an 'echo' service: send in some text and it will echo it back with the date and time appended.

After building the application I saw that Spring-WS comes with a sample echo service application. Oh well. Since I put in the effort here is the article on it.

Spring-WS encourages document based web services. As you know there are mainly two types of web services:

  • RPC based. 
  • Document based.
In RPC you think in terms of traditional functional programming. You decide what operations you want and then use the WSDL to describe those operations and then implement them. If you look at any RPC based WSDL you will see in the binding section the various operations (or methods).

In the document based approach you no longer think of operations (their parameters and return types). You decide on what XML document you want to send in as input and what XML document you want to return from your web service as a response.

When you think document based the traditional approach thus far has been to draw up the WSDL and then go from there. I see no problem in this approach.

Spring-WS encourages a more practical approach to designing document based web services. Rather than think WSDL, it pushes you to think XSD (or the document schema) and then Spring-WS can auto-generate the WSDL from the schema.

Let's break it up into simpler steps:
  1. Create your XML schema (.xsd file). Inside the schema you will create your request messages and response messages. Bring up your favourite schema editor to create the schema or write sample request and response XML and then reverse-engineer the schema (check if your tool supports it).
  2. You have shifted the focus onto the document (or the XML). Now use Spring-WS to point to the XSD and set up a few Spring managed beans and soon you have the web service ready. No WSDL was ever written.
Spring-WS calls this the contract-first approach to building web services.
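The article never shows the schema itself, but following the XXRequest/XXResponse naming convention it relies on, echo.xsd presumably looks something like the sketch below. The target namespace "http://example.com/echo" is a placeholder; substitute whatever namespace you chose for your documents.

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/echo"
            elementFormDefault="qualified">

    <!-- request document: the name to echo back -->
    <xsd:element name="EchoRequest">
        <xsd:complexType>
            <xsd:sequence>
                <xsd:element name="Name" type="xsd:string"/>
            </xsd:sequence>
        </xsd:complexType>
    </xsd:element>

    <!-- response document: the echoed message with timestamp -->
    <xsd:element name="EchoResponse">
        <xsd:complexType>
            <xsd:sequence>
                <xsd:element name="Message" type="xsd:string"/>
            </xsd:sequence>
        </xsd:complexType>
    </xsd:element>

</xsd:schema>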

Let's see the echo service in action. You will notice that I do not create any WSDL document throughout this article.

Business Case:

Echo service takes in an XML request document and returns an XML document with a response. The response contains the text that was sent in, appended with a timestamp.

Request XML Sample:

 <ec:EchoRequest>
  <ec:Name>Mathew</ec:Name>
 </ec:EchoRequest>

The schema XSD file for this can be found in the WEB-INF folder of the application (echo.xsd).

Response XML Sample:

 <ec:EchoResponse>
  <ec:Message>echo back: name Mathew received on 05-06-2007 06:42:08 PM</ec:Message>
 </ec:EchoResponse>

The schema XSD file for this can be found in the WEB-INF folder of the application (echo.xsd).

If you inspect the SOAP request and response you will see that this XML is what sits inside the SOAP body. That is precisely what document-based web services are about.

Echo Service Implementation:

Here is the echo service Java interface and its related implementation. As you can see this is a simple POJO.
package echo.service;

public interface EchoService {
    public String echo(java.lang.String name);
}

package echo.service;

import java.text.SimpleDateFormat;
import java.util.Calendar;

public class EchoServiceImpl implements EchoService {

    public String echo(String name) {
        if (name == null || name.trim().length() == 0) {
            return "echo back: -please provide a name-";
        }
        SimpleDateFormat dtfmt = new SimpleDateFormat("MM-dd-yyyy hh:mm:ss a");
        return "echo back: name " + name + " received on "
                + dtfmt.format(Calendar.getInstance().getTime());
    }
}

Now the Spring-WS stuff:

Here is the web.xml for the sake of clarity.
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <display-name>Echo Web Service Application</display-name>

    <servlet>
        <servlet-name>spring-ws</servlet-name>
        <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>spring-ws</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

</web-app>

The only thing to note in the web.xml is the Spring-WS servlet.

Next is the all important Spring bean configuration XML.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans">

    <bean id="echoEndpoint" class="echo.endpoint.EchoEndpoint">
        <property name="echoService"><ref bean="echoService"/></property>
    </bean>

    <bean id="echoService" class="echo.service.EchoServiceImpl"/>

    <bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootQNameEndpointMapping">
        <property name="mappings">
            <props>
                <prop key="{}EchoRequest">echoEndpoint</prop>
            </props>
        </property>
    </bean>

    <bean id="echo" class="org.springframework.ws.wsdl.wsdl11.DynamicWsdl11Definition">
        <property name="builder">
            <bean class="org.springframework.ws.wsdl.wsdl11.builder.XsdBasedSoap11Wsdl4jDefinitionBuilder">
                <property name="schema" value="/WEB-INF/echo.xsd"/>
                <property name="portTypeName" value="Echo"/>
                <property name="locationUri" value="http://localhost:9090/echoservice/"/>
            </bean>
        </property>
    </bean>

</beans>
  • Registered the 'echoService' implementation bean.
  • Registered an endpoint class named 'echoEndpoint'. The endpoint is the class that receives the incoming web service request. 
  • The endpoint receives the XML document. It parses the XML data and then calls our echo service implementation bean.
  • The bean 'PayloadRootQNameEndpointMapping' is what maps the incoming request to the endpoint class. Here we set up one mapping: any time we see an 'EchoRequest' tag with the specified namespace we direct it to our endpoint class.
  • The 'XsdBasedSoap11Wsdl4jDefinitionBuilder' class does the magic of converting the schema XSD to a WSDL document for outside consumption. Based on simple naming conventions in the schema (like XXRequest and XXResponse) the bean can generate a WSDL. This rounds out the 'thinking in XSD for document web services' approach. Once deployed, the WSDL is available at http://localhost:9090/echoservice/echo.wsdl.
Finally here is the endpoint class. This is the class, as previously stated, that gets the request XML and can handle the request from there.
package echo.endpoint;

import org.jdom.Document;
import org.jdom.Element;
import org.jdom.Namespace;
import org.jdom.output.XMLOutputter;
import org.jdom.xpath.XPath;
import org.springframework.ws.server.endpoint.AbstractJDomPayloadEndpoint;

import echo.service.EchoService;

public class EchoEndpoint extends AbstractJDomPayloadEndpoint {

    // the namespace URI was elided in the original listing; use the
    // target namespace declared in your echo.xsd here
    private static final String ECHO_NS_URI = "";

    private EchoService echoService;

    public void setEchoService(EchoService echoService) {
        this.echoService = echoService;
    }

    protected Element invokeInternal(Element request) throws Exception {
        // ok now we have the XML document from the web service request
        // lets system.out the XML so we can see it on the console (log4j later)
        System.out.println("XML Doc >> ");
        XMLOutputter xmlOutputter = new XMLOutputter();
        xmlOutputter.output(request, System.out);

        // I am using JDOM for my example....feel free to process the XML in
        // whatever way you best deem right (jaxb, castor, sax, etc.)

        // some jdom stuff to read the document
        Namespace namespace = Namespace.getNamespace("ec", ECHO_NS_URI);
        XPath nameExpression = XPath.newInstance("//ec:Name");
        nameExpression.addNamespace(namespace);

        // lets call a backend service to process the contents of the XML document
        String name = nameExpression.valueOf(request);
        String msg = echoService.echo(name);

        // build the response XML with JDOM
        Namespace echoNamespace = Namespace.getNamespace("ec", ECHO_NS_URI);
        Element root = new Element("EchoResponse", echoNamespace);
        Element message = new Element("Message", echoNamespace);
        message.setText(msg);
        root.addContent(message);
        Document doc = new Document(root);

        // return response XML
        System.out.println("XML Response Doc >> ");
        xmlOutputter.output(doc, System.out);
        return doc.getRootElement();
    }
}
This is a simple class. The important point to note is that it extends 'AbstractJDomPayloadEndpoint', a helper that gives you the XML payload as a JDOM object. There are similar classes for SAX, StAX and others. Most of the code above reads the request XML using the JDOM API and parses out the data so that we can hand it to our echo service.

Finally I build a response XML document to return, and that's it.

Download the sample Application:
Click here to download the jar file containing the application. The application is built using Maven. If you do not have Maven please install it. Once Maven is installed run the following commands:
  1. mvn package (this will generate the web service war file in the target folder).
  2. mvn jetty:run (this will bring up Jetty and you can access the wsdl at http://localhost:9090/echoservice/echo.wsdl).
  3. Finally use some web service accessing tool like the eclipse plug-in soapUI to invoke the web service.
As you can see this is relatively simple. Spring-WS supports the WS-I Basic Profile and WS-Security; I hope to look at the WS-Security support sometime soon. Also interesting to me is the content-based routing feature, which lets you configure which object gets the document based on the request XML content. We did QName-based routing in our example, but I would think content-based routing is of great interest.

While I could not find a roadmap for Spring-WS, depending on the features it adds this could become a very suitable candidate for web service integration projects. Sure, folks will ask where WS-Transactions and the rest are, but how many other stacks actually implement those? I think if Spring-WS grows to support 90% of what folks need in integration projects then it will suffice. I also hope to see some support for content transformation in the future.

Open Session In View

I was part of a team that developed several applications using Struts, Spring and Hibernate together, and one of the problems we faced while using Hibernate was rendering the view. The problem is that when you retrieve an object 'a' of persistent class 'A' that has an instance 'b' of persistent class 'B', and this relation is lazily loaded, 'b' is an uninitialized proxy. This will cause a "LazyInitializationException" while rendering the view (if you need the value of 'b' in the view, of course).

A quick and easy solution is to set the "lazy" attribute to "false" so that 'b' is initialized while fetching 'a', but this is not always a good idea. In the case of many-to-many relationships, non-lazy relations might end up loading a large part of the database into memory through a great number of "select" statements, which results in very poor performance and massive memory consumption.

Another solution is to open another unit of work in the view, which is really bad for several reasons. First of all, as a design concept the layers of your application should be loosely coupled, and this practice couples the presentation layer to your DB layer. It also destroys the separation of concerns.

The solution to this problem is to keep the Hibernate session alive until the view is rendered, which is what Hibernate introduced as the Open Session In View pattern. Since the Hibernate session is still open, trying to retrieve 'b' in the view will cause Hibernate to go and fetch it from the DB. In a web application, this can be done through a filter/interceptor.

Spring framework comes with both a filter and an interceptor, so you don't have to write your own. The problem that might face you, if you're using Spring's HibernateTemplate without doing your own session and transaction management, is that you will not be able to save, edit or delete anything, since both the filter and the interceptor provided by Spring set the flush mode of the session to "NEVER".
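For reference, wiring Spring's stock filter into web.xml looks like the fragment below. The filter name and URL pattern are up to you; by default the filter looks up a Hibernate SessionFactory bean named "sessionFactory" in the root application context.

<filter>
    <filter-name>openSessionInView</filter-name>
    <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>openSessionInView</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>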

A solution to that, which I've learned from a friend of mine recently is to extend the filter provided by spring, override the getSession method to set a different flush mode, and override the closeSession method to flush the session before closing it. The sample code is shown below:

// assuming Hibernate 3 and Spring's hibernate3 support package
import org.hibernate.FlushMode;
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.dao.CleanupFailureDataAccessException;
import org.springframework.dao.DataAccessResourceFailureException;
import org.springframework.orm.hibernate3.SessionFactoryUtils;
import org.springframework.orm.hibernate3.support.OpenSessionInViewFilter;

public class HibernateFilter extends OpenSessionInViewFilter {

    protected Session getSession(SessionFactory sessionFactory) throws DataAccessResourceFailureException {
        Session session = SessionFactoryUtils.getSession(sessionFactory, true);
        // set the FlushMode to AUTO in order to be able to save objects
        session.setFlushMode(FlushMode.AUTO);
        return session;
    }

    protected void closeSession(Session session, SessionFactory sessionFactory) {
        try {
            if (session != null && session.isOpen() && session.isConnected()) {
                // flush pending changes before the session is closed
                session.flush();
            }
        } catch (HibernateException e) {
            throw new CleanupFailureDataAccessException("Failed to flush session before close: " + e.getMessage(), e);
        } finally {
            super.closeSession(session, sessionFactory);
        }
    }
}

By using this filter, you will be able to render the view easily, without having to set the "lazy" attribute to "false" or open a Hibernate session in the view. You do have to take care not to change any values of persistent objects in the view, because those changes will be flushed to the DB at the end of the request. This is the main reason the flush mode is set to "NEVER" in the original filter and interceptor.
By Alaa Nassef

10 Common Misconceptions about Grails

As is usually the case with anything "new", there’s a lot of FUD and confusion out there among people who have not used Grails yet, which may be stopping them from using it. Here’s a quick list of some of the more common falsehoods being bandied about:

  1. "Grails is just a clone of Rails". Ruby on Rails introduced and unified some great ideas. Grails applies some of them to the Groovy/Java world but adds many features and concepts that don’t exist in Ruby, all in a way that makes sense to Groovy/Java programmers.

  2. "Grails is not mature enough for me". The increasing number of live commercial sites is the best answer to that. It’s also built on Hibernate, Spring and SiteMesh, which are well-established technologies, not to mention the Java JDK which is as old as the hills. Groovy is over three years old.

  3. "Grails uses an interpreted language (Groovy)". Groovy compiles to Java VM bytecode at runtime. It is never, ever, ever interpreted. Period. Never. Did I say never ever? Really.

  4. "Grails needs its own runtime environment". Nope, you produce good old WAR files with "grails war" and deploy on your favourite app container. During development Grails uses the bundled Jetty just so you have zero configuration and dynamic reloading without container restarts.

  5. "My manager won’t let me use Grails because it isn’t Java". Smack him/her upside the head then!** Grails code is approximately 85% Java. It runs on the Java VM. It runs in your existing servlet container. Groovy is the greatest complement to Java, and many times more productive. You can also write POJOs for persistence to databases in Java and include Java src and any JARs you like in a Grails application, including EJBs, Spring beans etc. Any new tech can be a hard sell in a cold grey institution, but there’s rarely a more convincing argument than "Hey Jim, I knocked up our new application prototype in 1hr in my lunch break with Grails - here’s the URL". [** comedy violence kids, not the real kind]

  6. "Grails is only for CRUD applications". Many demos focus on CRUD scaffolding, but that is purely because of the instant gratification factor. Grails is an all-purpose web framework.

  7. "Scaffolding needs to be regenerated after every change". Scaffolding is what we call the automatically generated boilerplate controller and view code for CRUD operations. Explicit regeneration is never required unless you are not using dynamic scaffolding. "def scaffold = Classname" is all you need in a controller and Grails will magic everything else and handle reloads during development. You can then, if you want, generate the controller and view code prior to release for full customisation.

  8. "Grails is like other frameworks, ultimately limiting". All Grails applications have a Spring bean context to which you can add absolutely any Java beans you like and access them from your application. Grails also has a sophisticated plugin architecture, and eminently flexible custom taglibs that are a refreshing change from JSP taglibs.

  9. "I can’t find Grails programmers". Any Java developer is easily a Grails developer. Plus there are far fewer lines of code in a Grails application than in a standard Java web application, so getting up to speed will be much quicker.

  10. "Grails will make you popular with women". Sorry quite the opposite, you will be enjoying coding so much you won’t be chasing any women for a while. We should put this as a warning in the README actually, along with a disclaimer about any potential divorce that might result from hours spent playing with your Grails webapps.
By AnyWhere