Saturday, July 7, 2007

Step by step Spring-Security (aka. Acegi security)

I recently evaluated the use of Acegi as the security framework for a Web development project. In the end, we decided to move forward with Acegi, but in the beginning it took a couple of days to come to that decision. The amazing thing is: once you get over the initial learning curve, it's smooth sailing. Hence, I wanted to share my experiences with it because, first, I wanted to expose the Acegi security framework to JDJ readers and, second, I wanted to make it easier for them to get over the initial learning curve. Once you're over that, you should be well on your way with Acegi.

Exposing Acegi Security Framework

Acegi is an open source security framework that allows you to keep your business logic free from security code. Acegi provides three main types of security:

  1. Web-based access control lists (ACL) based on URL schemes
  2. Java class and method security using AOP
  3. Yale's Central Authentication Service for single sign-on (SSO).
Acegi also provides the option of performing container security.

Acegi uses Spring for its configuration settings, so those familiar with Spring will be at ease with Acegi configuration. If you're not familiar with Spring, it's still easy to learn Acegi configuration by example. You don't have to use SpringMVC to secure your Web application. I have successfully used Acegi with Struts. You can use Acegi with WebWork and Velocity, Struts, SpringMVC, JSF, Web Services, and more.

Why use Acegi instead of JAAS? It can be difficult to stray from well-documented standards like JAAS. However, porting container-managed security realms is not easy. With Acegi, this security layer is an application framework that is easily ported. Acegi will allow you to easily reuse and port your "Remember Me," "I forgot my password," and log-in/log-out security functions to different servlet and EJB containers. If you have a standards-based security layer that you have re-created for numerous Java applications and it is not getting reused, you need to take a good look at Acegi. Besides, why are you spending time on framework coding when you should be focusing on the business logic? Leave the framework development to product developers and the open source community.

Getting Over That Initial Learning Curve

To get you over the initial learning curve, I'll take you through a simple setup using a demonstration application. I'll focus on the first security approach, URL-based security for Web applications, because it's the most commonly used.

Installation

First things first - we need to install it! I'll use Tomcat 5 as my servlet container to illustrate.

Step 1: Set up a new Tomcat Web context with the "WEB-INF/", "WEB-INF/lib/", and "WEB-INF/classes" folders per usual. I called my context "/acegi-demo" and access it using http://localhost:8080/acegi-demo/.

Step 2: Add another folder called "/secured," which we'll protect with Acegi.

Step 3: Now let's add the necessary Acegi library files to plug Acegi into our Tomcat context. (Please download the acegi-demo.zip file provided with this article.)

Let's understand the JAR packages we are adding to the lib directory. The most important JAR is acegi-security-0.8.3.jar, the Acegi core library. Acegi leverages Spring for its configuration, so we also need spring-1.2.RC2.jar. The remaining JARs are utility libraries for dealing with collections (commons-collections-3.1.jar), logging (commons-logging-1.0.4.jar, log4j-1.2.9.jar), and regular expressions (oro-2.0.8.jar). Special thanks to Apache Jakarta for these wonderful utility libraries.

Configuration

Now that we have our core infrastructure in place, let's focus on configuration.

Step 4: Configure the web.xml file to begin tying the Web application to the Acegi security framework.

  1. First, we need to set up two parameters: contextConfigLocation, which will point to Acegi's configuration file, and log4jConfigLocation, which will point to Log4J's configuration file.
  2. Next, we have to set up the Acegi Filter Chain Proxy; this critical proxy allows Acegi to interact with the servlet filtering feature. We will talk about this more in step 5 (configuring applicationContext.xml).
  3. Finally, we want to add three listeners to loosely couple Spring with the Web context, Spring with Log4J, and Acegi with the HTTP Session events in the Web context, such as session creation and destruction. (A sketch of the resulting web.xml entries follows.)
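Here is a sketch of what those web.xml additions might look like. Treat it as illustrative rather than as the exact demo file: the parameter values and filter name are my assumptions, though the class names are the standard Acegi 0.8/Spring ones.

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/applicationContext.xml</param-value>
</context-param>
<context-param>
    <param-name>log4jConfigLocation</param-name>
    <param-value>/WEB-INF/classes/log4j.properties</param-value>
</context-param>

<!-- Delegates all filtering to the filterChainProxy bean in applicationContext.xml -->
<filter>
    <filter-name>Acegi Filter Chain Proxy</filter-name>
    <filter-class>net.sf.acegisecurity.util.FilterToBeanProxy</filter-class>
    <init-param>
        <param-name>targetClass</param-name>
        <param-value>net.sf.acegisecurity.util.FilterChainProxy</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>Acegi Filter Chain Proxy</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<listener>
    <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
</listener>
<listener>
    <listener-class>net.sf.acegisecurity.ui.session.HttpSessionEventPublisher</listener-class>
</listener>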
Step 5: Now we need to configure the applicationContext.xml to instruct the Acegi framework to perform our security requirements. It is important to note that you typically don't have to write or compile any code to fuse your application with the Acegi security framework. Acegi is almost entirely configuration driven, thanks to a great design by its creator, Ben Alex, and Spring. Okay, enough back-patting, let's get to it...

Remember, the Acegi Filter Chain Proxy is critical. This is the backbone of the configuration. Using the servlet filter specification, Acegi is able to plug in its security functionality in a modular way.

I ordered the Spring bean references in the applicationContext.xml file based on the sequence in which each bean is referenced, starting with the filterChainProxy bean. If you are new to Spring, just know that the order in which beans are defined is not important. I ordered them this way to make the file as easy as possible to follow.

<bean id="filterChainProxy" class="net.sf.acegisecurity.util.FilterChainProxy">
    <property name="filterInvocationDefinitionSource">
     <value>
     CONVERT_URL_TO_LOWERCASE_BEFORE_COMPARISON
     PATTERN_TYPE_APACHE_ANT
   /**=httpSessionContextIntegrationFilter, authenticationProcessingFilter,
   anonymousProcessingFilter, securityEnforcementFilter
     </value>
    </property>
   </bean>

In the filterChainProxy bean (see the code snippet above), we tell Acegi that we want to use lowercase for all URL comparisons and use the Apache Ant style for pattern matching on the URLs. In our example, we run the filterChainProxy on every single URL by specifying /**=Filter1,Filter2, etc. Next, we set up the filter chain itself, where order is very important. We have four filters in the chain in this simple example, but when you start using Acegi, you'll most likely have more. Viewing applicationContext.xml, please take a few moments to follow all the bean references in detail as you traverse the filter chain. I will walk through each item in the filter chain at a high level.

The first item in the chain must be the httpSessionContextIntegrationFilter filter. This filter works hand in hand with the HTTP Session object and the Web context to see if the user is authenticated and, if so, what roles the user has. We have little to configure for this filter.

The second item in the chain is the authenticationProcessingFilter filter, which watches for any URL that matches /j_acegi_security_check, because this is the URL that our login form will post a username and password to when attempting authentication. This filter also contains the configuration information detailing where to send someone if the login succeeds or fails. If it succeeds, you can configure this filter to direct the user to the page the user originally tried to access, or to a particular start page where you want all authenticated users to land after authentication. My example has the latter option configured by setting alwaysUseDefaultTargetUrl to true; set it to false to get the former behavior.
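As a sketch, the corresponding bean definition might look like the following; the URLs are assumptions based on the demo layout, not the exact values from acegi-demo.zip.

<bean id="authenticationProcessingFilter"
    class="net.sf.acegisecurity.ui.webapp.AuthenticationProcessingFilter">
    <property name="authenticationManager"><ref bean="authenticationManager"/></property>
    <!-- the URL the login form posts to -->
    <property name="filterProcessesUrl"><value>/j_acegi_security_check</value></property>
    <!-- where to send the user after a failed login (assumed URL) -->
    <property name="authenticationFailureUrl"><value>/acegilogin.jsp?login_error=1</value></property>
    <!-- where all users land after login, because alwaysUseDefaultTargetUrl is true -->
    <property name="defaultTargetUrl"><value>/secured/index.jsp</value></property>
    <property name="alwaysUseDefaultTargetUrl"><value>true</value></property>
</bean>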

One of the beans configured in the authenticationProcessingFilter is the authenticationManager bean. This bean manages the various providers you configure. A provider is essentially a repository of usernames with corresponding passwords and roles. The authenticationManager will stop iterating through the list of providers once a user is successfully authenticated. In practice, you may have two or three providers; for example, one provider could access an Active Directory for employee credentials, while your second provider might access a database for customer credentials. You will most often need an anonymousAuthenticationProvider because you need it to allow access to pages that do not require authentication, such as the login page or the home page. The demonstration application for this article uses a memory provider and an anonymous provider. Once you get this simple application working, you will probably want to add a JDBC or LDAP provider.
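Here is a sketch of what the manager plus a memory provider and an anonymous provider could look like (the username, password, and key values are made up for illustration):

<bean id="authenticationManager" class="net.sf.acegisecurity.providers.ProviderManager">
    <property name="providers">
        <list>
            <ref bean="daoAuthenticationProvider"/>
            <ref bean="anonymousAuthenticationProvider"/>
        </list>
    </property>
</bean>

<bean id="daoAuthenticationProvider"
    class="net.sf.acegisecurity.providers.dao.DaoAuthenticationProvider">
    <property name="authenticationDao">
        <bean class="net.sf.acegisecurity.providers.dao.memory.InMemoryDaoImpl">
            <property name="userMap">
                <!-- format: username=password,role1[,role2...] -->
                <value>admin=secret,ROLE_ADMIN</value>
            </property>
        </bean>
    </property>
</bean>

<bean id="anonymousAuthenticationProvider"
    class="net.sf.acegisecurity.providers.anonymous.AnonymousAuthenticationProvider">
    <property name="key"><value>anonymousKey</value></property>
</bean>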

The third item in the chain is the anonymousProcessingFilter filter. The key it creates anonymous tokens with must match the key configured in the anonymousAuthenticationProvider.

The fourth and final item in the filter chain is the securityEnforcementFilter filter. This filter has two beans: the filterSecurityInterceptor and the authenticationProcessingFilterEntryPoint. The latter bean is used to direct the user to the login form each time the user tries to access a secured page but is not logged in. We can also force the user to use HTTPS. The former bean, filterSecurityInterceptor, does quite a bit of heavy lifting by tying all our filters together.

The filterSecurityInterceptor bean checks that the authenticated user has the right roles (or permissions) to access a particular objectDefinitionSource. Here we are using AffirmativeBased voting, which means the user just has to have one of the roles specified in the objectDefinitionSource. This is most likely what you will use, but Acegi does have a unanimous voter that ensures that a person has every role specified in the objectDefinitionSource before granting access. By now you may have realized that objectDefinitionSource determines who can access what.

The objectDefinitionSource starts off with the same two configuration instructions that filterChainProxy did, namely converting all URLs to lowercase and using the Apache Ant style for pattern matching. Next, we define which roles are allowed to access a particular URL. In our example, we give anonymous access to the /acegilogin.jsp page so that unauthenticated users can arrive at this page to log in. The next line in the objectDefinitionSource provides access to everything below the /secured directory for any user with the ADMIN role. Finally, we add a line that starts with /** to match every URL. Matching stops at the first pattern that matches the URL, so make sure you put specific patterns toward the top and broad patterns toward the bottom to get the desired behavior. If you were working with Struts, you could either set up your Struts application in modules (see http://struts.apache.org/struts-core/userGuide/configuration.html#5_4_1_Configure_the_ActionServlet_Instance) or simply specify the Struts Action (e.g., /CustomerAdd.do) in the objectDefinitionSource.
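Putting that together, a sketch of the interceptor configuration could look like this (the role names and URL patterns mirror the demo described above, but are assumptions rather than the literal file contents):

<bean id="filterSecurityInterceptor"
    class="net.sf.acegisecurity.intercept.web.FilterSecurityInterceptor">
    <property name="authenticationManager"><ref bean="authenticationManager"/></property>
    <property name="accessDecisionManager">
        <!-- AffirmativeBased: one matching role is enough to grant access -->
        <bean class="net.sf.acegisecurity.vote.AffirmativeBased">
            <property name="decisionVoters">
                <list>
                    <bean class="net.sf.acegisecurity.vote.RoleVoter"/>
                </list>
            </property>
        </bean>
    </property>
    <property name="objectDefinitionSource">
        <value>
            CONVERT_URL_TO_LOWERCASE_BEFORE_COMPARISON
            PATTERN_TYPE_APACHE_ANT
            /acegilogin.jsp=ROLE_ANONYMOUS,ROLE_ADMIN
            /secured/**=ROLE_ADMIN
            /**=ROLE_ANONYMOUS,ROLE_ADMIN
        </value>
    </property>
</bean>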

At this point, we are done with the applicationContext.xml file. To complete our demonstration application, all we need to do now is create a login form and put something in the /secured directory to see that our Acegi authentication and authorization configuration is working. (See the acegi-demo.zip for /acegilogin.jsp and /secured/index.jsp.)

The login form is very simple; it has input fields for the username and password, j_username and j_password, respectively, and a form action pointing to j_acegi_security_check since that is what the authenticationProcessingFilter filter listens for to capture every login form submission.
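Stripped to its essentials, the form looks something like this sketch:

<form action="j_acegi_security_check" method="POST">
    Username: <input type="text" name="j_username"/><br/>
    Password: <input type="password" name="j_password"/><br/>
    <input type="submit" value="Log in"/>
</form>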

Test your configuration and inspect the Tomcat logs and the Log4J log file that we configured for this application if you run into problems.

Now That I'm Over the Initial Learning Curve, What's Next?
Once you have this simple Acegi demonstration application running, you will undoubtedly want to increase its sophistication. The first thing I would do is add a JDBC provider in addition to the simple in-memory provider.

I can understand the excitement after getting the initial application up and running, but you still have some reading to do in order to fully conquer the initial learning curve. Read through the articles posted in the External Web Articles section of the Acegi Web site http://acegisecurity.sourceforge.net. Read through the Reference Documentation provided by Ben Alex, the creator of Acegi. Ben does a good job of providing help through the support forum too. Also, read the well-kept JavaDocs as your main source of information once you get familiar with Acegi. Of course, you can opt to read the source code - it's open source!

Since this is your first time using Acegi, test after each change to the applicationContext.xml file. The process of "one change, then test" will help you understand exactly which change to the applicationContext.xml file caused an error, should one occur. If you make four changes to that file, restart the application, and get an error, you won't know which of the four changes caused it.

Note that I kept this application very simple. As you add in features such as Acegi's caching, you will need to add the appropriate libraries (or JARs). Look at the Acegi example application available on the Acegi Web site to get access to all the various libraries. The example application on the Acegi Web site is complex, so it is unfortunately not the best place to start when getting over the initial learning curve; hence my attempt to make it easier with this article!

No Groups in Acegi?
Acegi does not have a first-class notion of groups, but it will still let you work with them. When you put a person in a group, you are just grouping the permissions (or roles) that the group does or does not have. So, when you set up your LDAP or JDBC provider, you need to make sure that the query returns the roles implied by the user's group memberships.

Conclusion
Acegi is a very configurable, open source security framework that will finally let you reuse and port your security layer components. It can be daunting at first, but this article should remove much of the stress of getting over the learning curve. Remember: get this simple application running, test after each change, and do the recommended reading to fully surmount the initial learning curve. After you follow these steps, you will be well on your way to mastering Acegi.

Spring MVC: How it works

If you are interested in the Spring Framework’s MVC packages, this could be helpful. It’s a unified description of the lifecycle of a web application or portlet request as handled by Spring Web MVC and Spring Portlet MVC. I created this for two reasons: I wanted a quick reference to the way Spring finds handlers for each stage of the request; and I wanted a unified view so I could see the similarities and differences between Web MVC and Portlet MVC.
Spring Web MVC, part of the Spring Framework, has long been a highly-regarded web application framework. With the recent release of Spring 2.0, Spring now supports portlet development with the Spring Portlet MVC package. Portlet MVC builds on Web MVC; even their documentation is similar. My focus right now is on portlet development, and I found it cumbersome to have to read the Web MVC and Portlet MVC documentation simultaneously. So I have re-edited the two together into one unified document. I have also included related information from elsewhere in the documentation, so it’s all in one place.
My idea here is to make it easy to go through your Spring configuration files and ensure that all beans are declared and named as they should be, whether you are using Spring Web MVC or Spring Portlet MVC.

Dispatcher

Spring’s Web and Portlet MVC are request-driven web MVC frameworks, designed around a servlet or portlet that dispatches requests to controllers. Spring’s dispatchers (DispatcherServlet and DispatcherPortlet) are also completely integrated with the Spring ApplicationContext and allow you to use every other feature Spring has.
The DispatcherServlet is a standard servlet (extending HttpServlet), and as such is declared in the web.xml of your web application. Requests that you want the DispatcherServlet to handle should be mapped using a URL mapping in the same web.xml file. Similarly, the DispatcherPortlet is a standard portlet (extending GenericPortlet), and as usual is declared in the portlet.xml of your web application. This is all standard J2EE configuration; here are a couple of examples:
From web.xml:
<web-app>
    <servlet>
        <servlet-name>example</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
 <!-- all requests ending with ".form" will be handled by Spring. -->
    <servlet-mapping>
        <servlet-name>example</servlet-name>
        <url-pattern>*.form</url-pattern>
    </servlet-mapping>
</web-app>
From portlet.xml:
<portlet>
 <portlet-name>sample</portlet-name>
 <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class>
 <supports>
  <mime-type>text/html</mime-type>
  <portlet-mode>view</portlet-mode>
 </supports>
 <portlet-info>
  <title>Sample Portlet</title>
 </portlet-info>
</portlet>
In the Portlet MVC framework, each DispatcherPortlet has its own WebApplicationContext, which inherits all the beans already defined in the root WebApplicationContext. These inherited beans can be overridden in the portlet-specific scope, and new scope-specific beans can be defined local to a given portlet instance.

Dispatcher Workflow

When a DispatcherServlet or DispatcherPortlet is set up for use and a request comes in for that specific dispatcher, it starts processing the request. The sections below describe the complete process a request goes through when handled by such a dispatcher, from determining the application context right through to rendering the view.

Application context

For Web MVC only, the WebApplicationContext is searched for and bound in the request as an attribute for the controller and other elements in the process to use. It is bound by default under the key DispatcherServlet.WEB_APPLICATION_CONTEXT_ATTRIBUTE.

Locale

The locale is bound to the request to let elements in the process resolve the locale to use when processing the request (rendering the view, preparing data, etc.). For Web MVC, this is the locale resolver. For Portlet MVC, this is the locale returned by PortletRequest.getLocale(). Note that locale resolution is not supported in Portlet MVC - this is in the purview of the portal/portlet-container and is not appropriate at the Spring level. However, all mechanisms in Spring that depend on the locale (such as internationalization of messages) will still function properly because DispatcherPortlet exposes the current locale in the same way as DispatcherServlet.

Theme

For Web MVC only, the theme resolver is bound to the request to let elements such as views determine which theme to use. The theme resolver does not affect anything if you don’t use it, so if you don’t need themes you can just ignore it. Theme resolution is not supported in Portlet MVC - this area is in the purview of the portal/portlet-container and is not appropriate at the Spring level.

Multipart form submissions

For Web MVC and for Portlet MVC Action requests, if a multipart resolver is specified, the request is inspected for multiparts. If they are found, the request is wrapped in a MultipartHttpServletRequest or MultipartActionRequest for further processing by other elements in the process.

Handler mapping

Spring looks at all handler mappings (beans implementing the appropriate HandlerMapping interface) in the application context. Any that implement the Ordered interface are sorted (lowest order first), and the others are added at the end of the list. The handler mappings are tried in order until one yields a handler. (Note: if the dispatcher’s detectAllHandlerMappings attribute is set to false, then this changes: Spring simply uses the handler mapping bean called “handlerMapping” and ignores any others.)
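For example, a SimpleUrlHandlerMapping bean can be declared like this (the URL and the "helloController" bean name are illustrative assumptions):

<bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
    <property name="mappings">
        <props>
            <!-- requests for /hello.form go to the bean named helloController -->
            <prop key="/hello.form">helloController</prop>
        </props>
    </property>
</bean>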

Handler

If a handler is found, the execution chain associated with the handler (pre-processors, controllers and post-processors) will be executed in order to prepare a model for rendering. The handler chain returns a View object or a view name, and normally also returns a model. For example, a pre-processor may block the request for security reasons and render its own view; in this case it will not return a model. Note that the handler chain need not explicitly return a view or view name. If it does not, Spring creates a view name from the request path. For example, the path /servlet/apac/NewZealand.jsp yields the view name “apac/NewZealand”. This behaviour is implemented by an implicitly-defined DefaultRequestToViewNameTranslator bean; you can configure your own bean (which must be called “viewNameTranslator”) if you want to customise its behaviour.
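To make the handler side concrete, here is a minimal Web MVC controller sketch; the class name, view name and model contents are assumptions for illustration:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.Controller;

public class HelloController implements Controller {
    public ModelAndView handleRequest(HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        // Returns a view name plus a one-entry model; the view resolvers
        // described below turn "hello" into an actual View.
        return new ModelAndView("hello", "greeting", "Hello from Spring MVC");
    }
}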

Exceptions

Exceptions that are thrown during processing of the request go to the handler exception resolver chain. Spring looks at all handler exception resolvers (beans implementing the appropriate HandlerExceptionResolver interface) in the application context. Any that implement the Ordered interface are sorted (lowest order first), and the others are added at the end of the list. The resolvers are tried in order until one yields a model and view. (Note: if the dispatcher’s detectAllHandlerExceptionResolvers attribute is set to false, then this changes: Spring simply uses the handler exception resolver bean called “handlerExceptionResolver” and ignores any others.)

View resolver

If the handler chain returns a view name and a model, Spring uses the configured view resolvers to resolve the view name to a View. Spring looks at all view resolvers (beans implementing the ViewResolver interface) in the application context. Any that implement the Ordered interface are sorted (lowest order first), and the others are added at the end of the list. The view resolvers are tried in order until one yields a view. (Note: if the dispatcher’s detectAllViewResolvers attribute is set to false, then this changes: Spring simply uses the view resolver bean called “viewResolver” and ignores any others.) If the handler chain returns a View object, then no view resolution is necessary. Similarly, if it does not return a model, then no view will be rendered, so again no view resolution is necessary.
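A typical view resolver declaration is a one-bean affair; this sketch (with assumed prefix and suffix values) maps the view name “hello” to /WEB-INF/jsp/hello.jsp:

<bean id="viewResolver"
    class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/jsp/"/>
    <property name="suffix" value=".jsp"/>
</bean>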

View

If we now have a view and a model, then Spring uses the view to render the model. This is what the user will see in the browser window or portlet.

Changing Log4j logging levels dynamically

A simple problem, and it may seem oh-not-so-cool: make the log4j level dynamically configurable. You should be able to change from DEBUG to INFO or any of the other levels. All this in a running application server.

First, the simple but not so elegant approach. Don't get me wrong (about the elegance statement): this approach works.

Log4j API

Often applications will have custom log4j properties files. Here we define the appenders and the layouts for the appenders. Somewhere in the Java code we have to initialize log4j and point it to this properties file. We can use the following API call to configure and apply the dynamic update.
org.apache.log4j.PropertyConfigurator.configureAndWatch(logFilePath, logFileWatchDelay);
  • Pass it the path to the custom log4j.properties and a delay in milliseconds. Log4j will periodically check the file for changes (after each passage of the configured delay time). A minimal usage sketch follows.
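A minimal sketch, assuming a made-up file path and a 30-second delay:

import org.apache.log4j.PropertyConfigurator;

public class LoggingInitializer {
    public static void main(String[] args) {
        // Hypothetical location of the log4j properties file.
        String logFilePath = "/opt/myapp/conf/log4j.properties";
        long logFileWatchDelay = 30 * 1000; // check every 30 seconds

        // Configures log4j from the file and spawns a watchdog thread that
        // re-reads it whenever the file's modification date changes.
        PropertyConfigurator.configureAndWatch(logFilePath, logFileWatchDelay);
    }
}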

Spring Helpers

If you are using Spring then you are in luck. Spring provides ready-to-use classes to do this job. You can use the support class org.springframework.web.util.Log4jWebConfigurer. Provide it values for log4jConfigLocation and log4jRefreshInterval. For the path you can pass either one that is relative to your web application (this means you need to deploy in expanded WAR form) or an absolute path. I prefer the latter; that way I can keep my WAR file warred and not expanded.

There is also a web application listener class org.springframework.web.util.Log4jConfigListener that you can use in the web.xml file. The actual implementation of the Spring class Log4jWebConfigurer does the call to either:
org.apache.log4j.PropertyConfigurator.configureAndWatch 
//OR
org.apache.log4j.xml.DOMConfigurator.configureAndWatch

Log4j spawns a separate thread to watch the file. Make sure your application has a shutdown hook where you can call org.apache.log4j.LogManager.shutdown() to shut down log4j cleanly. The watch thread unfortunately does not die if your application is undeployed. That's the only downside of using the Log4j configureAndWatch API. In most cases that's not a big deal, so I think it's fine.
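In a web application, one place to hang that shutdown call is a ServletContextListener; a sketch:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.apache.log4j.LogManager;

public class Log4jShutdownListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent event) {
        // nothing to do at startup
    }

    public void contextDestroyed(ServletContextEvent event) {
        // Flushes and closes all appenders cleanly; note that, as mentioned
        // above, the watchdog thread itself still does not die.
        LogManager.shutdown();
    }
}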

JMX Approach

JMX is, in my opinion, the cleanest approach. It involves some legwork initially but is well worth it. This example is run on JBoss 4.0.5. Let's look at a simple class that will actually change the log level.
package com.aver.logging;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Log4jLevelChanger {
    public void setLogLevel(String loggerName, String level) {
        // Look up the named logger and switch it to the requested level.
        if ("debug".equalsIgnoreCase(level)) {
            Logger.getLogger(loggerName).setLevel(Level.DEBUG);
        } else if ("info".equalsIgnoreCase(level)) {
            Logger.getLogger(loggerName).setLevel(Level.INFO);
        } else if ("error".equalsIgnoreCase(level)) {
            Logger.getLogger(loggerName).setLevel(Level.ERROR);
        } else if ("fatal".equalsIgnoreCase(level)) {
            Logger.getLogger(loggerName).setLevel(Level.FATAL);
        } else if ("warn".equalsIgnoreCase(level)) {
            Logger.getLogger(loggerName).setLevel(Level.WARN);
        }
    }
}
  • Given a logger name and a level to change to, this code does just that. The code needs some error handling (e.g., for unknown level names) and could be cleaned up a little, but it works for what I am showing.
  • To change the log level we get the logger for the specified loggerName and change it to the new level.
My application uses Spring so the rest of the configuration is Spring related. Now we need to register this bean as an MBean into the MBeanServer running inside JBoss. Here is the Spring configuration.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" 
  "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>

  <bean id="exporter"  
        class="org.springframework.jmx.export.MBeanExporter"
 lazy-init="false">
 <property name="beans">
    <map>
  <entry key="bean:name=Log4jLevelChanger"
         value-ref="com.aver.logging.Log4jLevelChanger" />
    </map>
 </property>
  </bean>

  <bean id="com.aver.logging.Log4jLevelChanger"
 class="com.aver.logging.Log4jLevelChanger">
  </bean>

</beans>
  • In Spring we use the MBeanExporter to register your MBeans with the container's running MBean server.
  • I provide the MBeanExporter with references to the beans that I want to expose via JMX.
  • Finally, my management bean, Log4jLevelChanger, is registered as a plain Spring bean.
That's it. With this configuration your bean will get registered into JBoss's MBean server. By default Spring will publish all public methods on the bean via JMX. If you need more control over which methods get published, refer to the Spring documentation. I will probably cover that topic in a separate blog, since I had to do all of that when I set up JMX for a project using Weblogic 8.1. With Weblogic 8.1 things are unfortunately not as straightforward as above. That's for another day, another blog.

One thing to note here is that the parameter names are p1 (for loggerName) and p2 (for level). This is because I have not provided any metadata about the parameters. When I do my blog on using JMX+Spring+CommonsAttributes under Weblogic 8.1, you will see how this can be resolved. BTW, for JDK 1.4-based Spring projects you must use the commons attributes tags provided by Spring to register and describe your beans as JMX beans. The initial minor learning curve will save you tons of time later.
By Mathew

XFire WebService With Spring

Tried setting up XFire with Spring and thought I'd share that experience. One more place to find this information can't hurt, eh!

Once again I used Maven to build my test application. At the bottom of this article you will find a download link for the entire application.

I have used Axis in the past and wanted to try out some other frameworks. At the same time I absolutely needed the framework to support JSR 181 (web service annotations), to integrate with Spring, and to have relatively simple configuration. Oh, and I did not want to write any WSDL. This example is an RPC-based web service (unlike my previous article on a document-based web service with Spring-WS). After this article I will also start using Axis2, since I have been an Axis fan for many years.

JSR 181 is important to me. I think annotations are the right way to go for most simple tasks that do not require a lot of input. The web service annotations are good. I have seen examples of annotations where it would be easier and clearer to put the information into the old XML style configuration. Some folks are anti-annotations, and I think that attitude is not the best. Use annotations where they make sense and reduce configuration in external files.

Let's view the echo service Java POJO code.

package com.aver;

public interface EchoService {
    public String printback(java.lang.String name);
}
package com.aver;

import java.text.SimpleDateFormat;
import java.util.Calendar;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebResult;
import javax.jws.WebService;

@WebService(name = "EchoService", targetNamespace = "http://www.averconsulting.com/services/EchoService")
public class EchoServiceImpl implements EchoService {

    @WebMethod(operationName = "echo", action = "urn:echo")
    @WebResult(name = "EchoResult")
    public String printback(@WebParam(name = "text")
    String text) {
        if (text == null || text.trim().length() == 0) {
            return "echo: -please provide a name-";
        }
        SimpleDateFormat dtfmt = new SimpleDateFormat("MM-dd-yyyy hh:mm:ss a");
        return "echo: '" + text + "' received on " + dtfmt.format(Calendar.getInstance().getTime());
    }
}

As you can see above I have made liberal use of JSR 181 web service annotations.

  • @WebService declares that the class exposes web service methods.
  • @WebMethod declares the particular method as being exposed as a web service method.
  • @WebParam gives nice-to-read parameter names which will show up in the auto-generated WSDL. Always provide these for the sake of your consumers' sanity.
  • Also, you can see that the Java method is named 'printback' but exposed under the name 'echo' by the @WebMethod annotation.
Here is the web.xml.
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app
    PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>
            classpath:org/codehaus/xfire/spring/xfire.xml
            /WEB-INF/xfire-servlet.xml
        </param-value>
    </context-param>

    <listener>
        <listener-class>
            org.springframework.web.context.ContextLoaderListener
        </listener-class>
    </listener>

    <servlet>
        <servlet-name>XFireServlet</servlet-name>
        <servlet-class>
            org.codehaus.xfire.spring.XFireSpringServlet
        </servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>XFireServlet</servlet-name>
        <url-pattern>/servlet/XFireServlet/*</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>XFireServlet</servlet-name>
        <url-pattern>/services/*</url-pattern>
    </servlet-mapping>
</web-app>

The web.xml configures the 'XFireSpringServlet' and sets up the Spring listener. Straightforward.
Finally, here is the xfire-servlet.xml (this is our Spring configuration file).

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
">www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <bean id="webAnnotations"
        class="org.codehaus.xfire.annotations.jsr181.Jsr181WebAnnotations" />

    <bean id="jsr181HandlerMapping"
        class="org.codehaus.xfire.spring.remoting.Jsr181HandlerMapping">
        <property name="typeMappingRegistry">
            <ref bean="xfire.typeMappingRegistry" />
        </property>
        <property name="xfire" ref="xfire" />
        <property name="webAnnotations" ref="webAnnotations" />
    </bean>

    <bean id="echo" class="com.aver.EchoServiceImpl" />
</beans>
  • Sets up the xfire beans to recognize JSR 181 annotations.
  • The last bean is our echo service implementation bean (with annotations).
That's it. Build and deploy this and you should see the WSDL at http://localhost:9090/echoservice/services/EchoServiceImpl?wsdl.
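For completeness, here is a sketch of calling the service from Java with XFire's dynamic proxy. One assumption to flag: ObjectServiceFactory builds the service model from the plain interface, so if the annotation-renamed operation ('echo' instead of 'printback') trips it up, you would switch to XFire's annotation-aware service factory instead.

package com.aver;

import org.codehaus.xfire.client.XFireProxyFactory;
import org.codehaus.xfire.service.Service;
import org.codehaus.xfire.service.binding.ObjectServiceFactory;

public class EchoClient {
    public static void main(String[] args) throws Exception {
        // Build a service model from the interface and create a dynamic proxy.
        Service serviceModel = new ObjectServiceFactory().create(EchoService.class);
        EchoService echo = (EchoService) new XFireProxyFactory().create(
                serviceModel,
                "http://localhost:9090/echoservice/services/EchoServiceImpl");
        System.out.println(echo.printback("Mathew"));
    }
}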
By Mathew

Step by step Spring-WS

Took a look at Spring-WS and came up with a quick example service to describe its use. I decided to build an 'echo' service. Send in a text and it will echo that back with a date and time appended to the text.

After building the application I saw that Spring-WS comes with a sample echo service application. Oh well. Since I put in the effort here is the article on it.

Spring-WS encourages document-based web services. As you know, there are two main types of web services:

  • RPC based. 
  • Document based.
In the RPC style you think in terms of traditional procedural programming. You decide what operations you want, then use the WSDL to describe those operations, and then implement them. If you look at any RPC-based WSDL you will see the various operations (or methods) in the binding section.

In the document based approach you no longer think of operations (their parameters and return types). You decide on what XML document you want to send in as input and what XML document you want to return from your web service as a response.

When you think document based the traditional approach thus far has been to draw up the WSDL and then go from there. I see no problem in this approach.

Spring-WS encourages a more practical approach to designing document based web services. Rather than think WSDL, it pushes you to think XSD (or the document schema) and then Spring-WS can auto-generate the WSDL from the schema.

Lets break it up into simpler steps:
  1. Create your XML schema (.xsd file). Inside the schema you will create your request messages and response messages. Bring up your favourite schema editor to create the schema or write sample request and response XML and then reverse-engineer the schema (check if your tool supports it).
  2. You have shifted the focus onto the document (or the XML). Now use Spring-WS to point to the XSD and set up a few Spring managed beans and soon you have the web service ready. No WSDL was ever written.
Spring-WS calls this the contract-first approach to building web services.

Lets see the echo service in action. You will notice that I do not create any WSDL document throughout this article.

Business Case:

Echo service takes in an XML request document and returns an XML document with a response. The response contains the text that was sent in, appended with a timestamp.


Request XML Sample:
 <ec:EchoRequest>
  <ec:Echo>
   <ec:Name>Mathew</ec:Name>
  </ec:Echo>
 </ec:EchoRequest>

The schema XSD file for this can be found in the WEB-INF folder of the application (echo.xsd).
Response XML Sample:
<ec:EchoResponse>
 <ec:Message>echo back: name Mathew received on 05-06-2007 06:42:08 PM
 </ec:Message>
</ec:EchoResponse>

If you inspect the SOAP request and response you will see that this XML is what's inside the SOAP body. That's precisely what document-based web services are about.
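The echo.xsd itself is not reproduced in this article, but reconstructed from the samples above it would look roughly like this sketch:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
    targetNamespace="http://www.averconsulting.com/echo/schemas"
    xmlns:ec="http://www.averconsulting.com/echo/schemas"
    elementFormDefault="qualified">

    <xs:element name="EchoRequest">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="Echo">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="Name" type="xs:string"/>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    </xs:element>

    <xs:element name="EchoResponse">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="Message" type="xs:string"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>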

Echo Service Implementation:

Here is the echo service Java interface and its related implementation. As you can see this is a simple POJO.
package echo.service;

public interface EchoService {
    public String echo(java.lang.String name);
}
package echo.service;

import java.text.SimpleDateFormat;
import java.util.Calendar;

public class EchoServiceImpl implements EchoService {

    public String echo(String name) {
        if (name == null || name.trim().length() == 0) {
            return "echo back: -please provide a name-";
        }
        SimpleDateFormat dtfmt = new SimpleDateFormat("MM-dd-yyyy hh:mm:ss a");
        return "echo back: name " + name + " received on "
                + dtfmt.format(Calendar.getInstance().getTime());
    }
}

Now the Spring-WS stuff:


Here is the web.xml for the sake of clarity.
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee">

    <display-name>Echo Web Service Application</display-name>

    <servlet>
        <servlet-name>spring-ws</servlet-name>
        <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
    </servlet>

    <servlet-mapping>
        <servlet-name>spring-ws</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

</web-app>

The only thing to note in the web.xml is the Spring-WS servlet.

Next is the all important Spring bean configuration XML.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans">

    <bean id="echoEndpoint" class="echo.endpoint.EchoEndpoint">
        <property name="echoService"><ref bean="echoService"/></property>
    </bean>

    <bean id="echoService" class="echo.service.EchoServiceImpl"/>

    <bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootQNameEndpointMapping">
        <property name="mappings">
            <props>
                <prop key="{http://www.averconsulting.com/echo/schemas}EchoRequest"
                >echoEndpoint</prop>
            </props>
        </property>
        <property name="interceptors">
            <bean
                class="org.springframework.ws.server.endpoint.interceptor.PayloadLoggingInterceptor"
            />
        </property>
    </bean>

    <bean id="echo" class="org.springframework.ws.wsdl.wsdl11.DynamicWsdl11Definition">
        <property name="builder">
            <bean
                class="org.springframework.ws.wsdl.wsdl11.builder.XsdBasedSoap11Wsdl4jDefinitionBuilder">
                <property name="schema" value="/WEB-INF/echo.xsd"/>
                <property name="portTypeName" value="Echo"/>
                <property name="locationUri" value="http://localhost:9090/echoservice/"/>
            </bean>
        </property>
    </bean>
</beans>
  • Registered the 'echoService' implementation bean.
  • Registered an endpoint class named 'echoEndpoint'. The endpoint is the class that receives the incoming web service request.
  • The endpoint receives the XML document. It parses the XML data and then calls our echo service implementation bean.
  • The bean 'PayloadRootQNameEndpointMapping' is what maps the incoming request to the endpoint class. Here we set up one mapping: any time we see an 'EchoRequest' tag with the specified namespace, we direct it to our endpoint class.
  • The 'XsdBasedSoap11Wsdl4jDefinitionBuilder' class is what does the magic of converting the schema XSD to a WSDL document for outside consumption. Based on simple naming conventions in the schema (like XXRequest and XXResponse), the bean can generate a WSDL. This rounds out the 'thinking in XSD for document web services' implementation approach. Once deployed, the WSDL is available at http://localhost:9090/echoservice/echo.wsdl.
Finally here is the endpoint class. This is the class, as previously stated, that gets the request XML and can handle the request from there.
package echo.endpoint;

import org.jdom.Document;
import org.jdom.Element;
import org.jdom.Namespace;
import org.jdom.output.XMLOutputter;
import org.jdom.xpath.XPath;
import org.springframework.ws.server.endpoint.AbstractJDomPayloadEndpoint;

import echo.service.EchoService;

public class EchoEndpoint extends AbstractJDomPayloadEndpoint {
    private EchoService echoService;

    public void setEchoService(EchoService echoService) {
        this.echoService = echoService;
    }

    protected Element invokeInternal(Element request) throws Exception {
        // ok now we have the XML document from the web service request
        // lets system.out the XML so we can see it on the console (log4j
        // later)
        System.out.println("XML Doc >> ");
        XMLOutputter xmlOutputter = new XMLOutputter();
        xmlOutputter.output(request, System.out);

        // I am using JDOM for my example....feel free to process the XML in
        // whatever way you best deem right (jaxb, castor, sax, etc.)

        // some jdom stuff to read the document
        Namespace namespace = Namespace.getNamespace("ec",
                "http://www.averconsulting.com/echo/schemas");
        XPath nameExpression = XPath.newInstance("//ec:Name");
        nameExpression.addNamespace(namespace);

        // lets call a backend service to process the contents of the XML
        // document
        String name = nameExpression.valueOf(request);
        String msg = echoService.echo(name);

        // build the response XML with JDOM
        Namespace echoNamespace = Namespace.getNamespace("ec",
                "http://www.averconsulting.com/echo/schemas");
        Element root = new Element("EchoResponse", echoNamespace);
        // a single Message element inside EchoResponse, matching the
        // response sample shown earlier
        Element message = new Element("Message", echoNamespace);
        message.setText(msg);
        root.addContent(message);
        Document doc = new Document(root);

        // return response XML
        System.out.println();
        System.out.println("XML Response Doc >> ");
        xmlOutputter.output(doc, System.out);
        return doc.getRootElement();
    }
}
This is a simple class. The important point to note is that it extends 'AbstractJDomPayloadEndpoint'. The 'AbstractJDomPayloadEndpoint' class is a helper that gives you the XML payload as a JDOM object. There are similar classes built for SAX, StAX and others. Most of the code above is reading the request XML using the JDOM API and parsing the data out so that we may provide it to our echo service for consumption.

Finally, I build a response XML document to return, and that's it.

Download the sample Application:
Click here to download the jar file containing the application. The application is built using Maven. If you do not have Maven please install it. Once Maven is installed run the following commands:
  1. mvn package (this will generate the web service war file in the target folder).
  2. mvn jetty:run (this will bring up Jetty and you can access the WSDL at http://localhost:9090/echoservice/echo.wsdl).
  3. Finally use some web service accessing tool like the eclipse plug-in soapUI to invoke the web service.
As you can see, this is relatively simple. Spring-WS supports the WS-I Basic Profile and WS-Security. I hope to look at the WS-Security support sometime soon. Also interesting to me is the content-based routing feature. This lets you configure which object gets the document based on the request XML content. We did QName-based routing in our example, but I would think content-based routing is of great interest.

While I could not find a roadmap for Spring-WS, depending on the features it starts supporting, this could become a very suitable candidate for web service integration projects. Sure, folks will ask where WS-Transactions and all of that is, but tell me how many others implement that. I think if Spring-WS grows to support 90% of what folks need in integration projects, then it will suffice. I hope in the future to see some support for content transformation.

Open Session In View

I was part of a team that has developed several applications using Struts, Spring and Hibernate together, and one of the problems we faced while using Hibernate was the rendering of the view. The problem is that when you retrieve an object 'a' of persistence class 'A' that has an instance 'b' of persistence class 'B', and this relation is lazily loaded, the value of 'b' will be 'null'. This will cause a "LazyInitializationException" while rendering the view (if you need the value of 'b' in the view, of course).

A quick and easy solution is to set the "lazy" attribute to "false" so that 'b' is initialized while fetching 'a', but this is not always a good idea. In the case of many-to-many relationships, using non-lazy relations might result in loading the entire database into memory using a great number of "select" statements, which will result in very poor performance and massive memory consumption.

Another solution is to open another unit of work in the view, which is really bad for several reasons. First of all, as a design concept, the layers of your application should be loosely coupled, and by doing the previous practice you have coupled the presentation layer with your DB layer, which is bad. Another thing is that this destroys the separation of concerns concept.

This problem can be solved by keeping the Hibernate session alive until the view is rendered, and this is what Hibernate introduced as the Open Session In View design pattern. Since the Hibernate session will still be open, trying to retrieve 'b' in the view will cause Hibernate to go and fetch it from the DB. In a web application, this can be done through a filter/interceptor.

The Spring framework comes with both a filter and an interceptor, so you don't have to write your own. The problem that might face you, if you're using Spring's HibernateTemplate without doing your own session and transaction management, is that you will not be able to save, edit or delete anything, since both the filter and the interceptor provided by Spring set the flush mode of the session to "NEVER".
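For reference, the stock filter is registered in web.xml along these lines (a sketch assuming Hibernate 3, a session factory bean named "sessionFactory" which is the filter's default, and a Struts-style *.do URL pattern):

<filter>
    <filter-name>hibernateFilter</filter-name>
    <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>hibernateFilter</filter-name>
    <url-pattern>*.do</url-pattern>
</filter-mapping>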

A solution to that, which I recently learned from a friend of mine, is to extend the filter provided by Spring, override the getSession method to set a different flush mode, and override the closeSession method to flush the session before closing it. The sample code is shown below:

import org.hibernate.FlushMode;
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.dao.CleanupFailureDataAccessException;
import org.springframework.dao.DataAccessResourceFailureException;
import org.springframework.orm.hibernate3.SessionFactoryUtils;
import org.springframework.orm.hibernate3.support.OpenSessionInViewFilter;

public class HibernateFilter extends OpenSessionInViewFilter {

    @Override
    protected Session getSession(SessionFactory sessionFactory)
            throws DataAccessResourceFailureException {
        Session session = SessionFactoryUtils.getSession(sessionFactory, true);
        // set the FlushMode to AUTO in order to be able to save objects
        session.setFlushMode(FlushMode.AUTO);
        return session;
    }

    @Override
    protected void closeSession(Session session, SessionFactory sessionFactory) {
        try {
            if (session != null && session.isOpen() && session.isConnected()) {
                try {
                    // flush pending changes before the session is closed
                    session.flush();
                } catch (HibernateException e) {
                    throw new CleanupFailureDataAccessException(
                            "Failed to flush session before close: " + e.getMessage(), e);
                }
            }
        } finally {
            super.closeSession(session, sessionFactory);
        }
    }
}

By using this filter, you will be able to render the view easily, without having to set the "lazy" attribute to "false" or open a Hibernate session in the view. But you have to take care not to change any values of the persistent objects in the view, because those changes will be saved to the DB at the end of the request. This is the main reason why the flush mode is set to "NEVER" in the original filter and interceptor.
By Alaa Nassef

10 Common Misconceptions about Grails

As is usually the case with anything "new", there’s a lot of FUD and confusion out there among people who have not used Grails yet that may be stopping them from using it. Here’s a quick list of some of the more common falsehoods being bandied about:

  1. "Grails is just a clone of Rails". Ruby On Rails introduced and unified some great ideas. Grails applies some of them to the Groovy/Java world but adds many features and concepts that don’texist in Ruby, all in a way that makes sense to Groovy/Java programmers.

  2. "Grails is not mature enough for me". The increasing number of live commercial sites is the best answer to that. Its also built on Hibernate, Spring and SiteMesh which are well-established technologies, not to mention the Java JDK which is as old as the hills. Groovy is over three years old.

  3. "Grails uses an interpreted language (Groovy)". Groovy compiles to Java VM bytecode at runtime. It is never, ever, ever interpreted. Period. Never. Did I say never ever? Really.

  4. "Grails needs its own runtime environment". Nope, you produce good old WAR files with "grails war" and deploy on your favourite app container. During development Grails uses the bundled Jetty just so you have zero configuration and dynamic reloading without container restarts.

  5. "My manager won’t let me use Grails because it isn’t Java". Smack him/her upside the head then!** Grails code is approximately 85% Java. It runs on the Java VM. It runs in your existing servlet container. Groovy is the greatest complement to Java, and many times more productive. You can also write POJOs for persistence to databases in Java and include Java src and any JARs you like in a Grails application, including EJBs, Spring beans etc. Any new tech can be a hard sell in a cold grey institution, but there’s rarely a more convincing argument than "Hey Jim, I knocked up our new application prototype in 1hr in my lunch break with Grails - here’s the URL". [** comedy violence kids, not the real kind]

  6. "Grails is only for CRUD applications". Many demos focus on CRUD scaffolding, but that is purely because of the instant gratification factor. Grails is an all purpose web framework.

  7. "Scaffolding needs to be regenerated after every change". Scaffolding is what we call the automatically generated boilerplate controller and view code for CRUD operations. Explicit regeneration is never required unless you are not using dynamic scaffolding. "def scaffold = Classname" is all you need in a controller and Grails will magic everything else and handle reloads during development. You can then, if you want, generate the controller and view code prior to release for full customisation.

  8. "Grails is like other frameworks, ultimately limiting". All Grails applications have a Spring bean context to which you can add absolutely any Java beans you like and access them from your application. Grails also has a sophisticated plugin architecture, and eminently flexible custom taglibs that are a refreshing change from JSP taglib.

  9. "I can’t find Grails programmers". Any Java developer is easily a Grails developer. Plus there are far fewer lines of code in a Grails application than a standard Java web application, so getting up to speed will be much quicker.

  10. "Grails will make you popular with women". Sorry quite the opposite, you will be enjoying coding so much you won’t be chasing any women for a while. We should put this as a warning in the README actually, along with a disclaimer about any potential divorce that might result from hours spent playing with your Grails webapps.
By AnyWhere

Friday, July 6, 2007

A POJO with annotations is not Plain

Everybody loves POJOs. Ever since Martin Fowler, Rebecca Parsons, and Josh MacKenzie coined the term back in 2000, the idea has taken the Java world by storm. Successful frameworks like Spring and Hibernate have centered on POJOs to make J2EE development a whole lot easier.

Since then JDK 1.5 came along and brought us annotations. And now everybody’s going crazy again. And by claiming to support POJOs, old-skool tech is now trying to become fashionable again.

Although I could not find an official definition of POJO, I think we can generalize Martin Fowler’s original definition, “a POJO is not an EJB”, to “a POJO is a Java object that is not tied to any specific framework, and therefore can be used by multiple frameworks in different situations and configurations”. If you accept this definition, you’d have to agree this annotated POJO craze is silly.

There are four problems with this, especially with the trend of using these annotations to put configuration data in your code:

  • You can’t compile an annotated POJO without having the frameworks that include the annotations.
  • The syntax is horrible when a lot of configuration data is put in. I’m not a big fan of XML configuration either but annotations just replace angle brackets with a whole lot of parentheses and curly braces.
  • By putting configuration data in the class, you have to recompile your code to change the configuration.
  • And finally, by doing this you can’t have multiple configurations for the same class, e.g. map one class to multiple tables in the database. (A sketch of such an annotated class follows.)
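To see the coupling concretely, here is a sketch of such an annotated class, using JPA-style persistence annotations (the class and column names are made up):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Without javax.persistence on the classpath this class no longer compiles,
// and the CUSTOMER table mapping is baked in at compile time: two of the
// problems listed above.
@Entity
@Table(name = "CUSTOMER")
public class Customer {

    @Id
    @Column(name = "CUSTOMER_ID")
    private Long id;

    @Column(name = "FULL_NAME", length = 100)
    private String name;

    // getters and setters omitted
}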

(Of course, you could use AspectJ 5’s declare annotation feature to keep the configuration separate. But we already had that when we used XML configuration files. And if you declaratively add annotations with AspectJ, you’re not only adding parentheses and curly braces but also the whole AspectJ syntax. Makes XML look really nice by comparison.)

So, I propose to call these things a-POJOs from now on. You could imagine that to be short for Annotated Plain Old Java Object, but that’s not what I mean. ;-)

By Vincent Partington

Unit Testing from the trenches

Unit testing is one of the cornerstones of modern software development. Working on several projects with Spring and Hibernate I have come to the conclusion that (in that setting?) there are actually three types of unit testing:

1. Basic Unit testing
2. Dao Layer Unit testing
3. Component Integration testing

As the name of the last type suggests, the categories are actually on a scale towards integration testing. This leads to the conclusion that the boundaries between unit and integration testing are not very sharp. This is interesting, since in our build systems (maven) and best practices, the two are distinct steps.

In this blog I’ll discuss the differences between the three types and describe some guidelines for creating tests.

Basic Unit testing


Basic unit testing is where you test the smallest units in your system in isolation. The units are usually single classes; in rare cases a class can be considered a sub-unit of another class not worthy of a separate test. In my experience a satisfactory level of isolation can only be achieved by mocking. The reason for this is that only mocking ensures that when the functionality of a class changes, the change doesn’t cascade through the tests for all classes using that functionality. This is what isolation is all about. We use RMock, although rumor has it that on a Java5-enabled project the latest version of EasyMock would be superior.

A serious drawback of this approach, in my opinion, is that it doesn’t really inspire TDD. One of the basics of TDD is that the design of the solution should follow from the test and the test should follow directly from a requirement. Most requirements are not easily interpreted as “add this functionality to this class”. To arrive at the class level, some (upfront) design is necessary. So instead of writing a test that encapsulates the requirement, the requirement is broken down into functionality that is assigned to classes (this is the upfront design) and this functionality is tested.

DAO layer testing


When using Hibernate or any other framework for your DAO layer, a lot of business logic ends up in units or artifacts that are not testable by basic unit testing. With Hibernate the business logic gets injected into the mapping files and the queries.
Hibernate mappings basically allow the application to do CRUD operations on entities. A basic test would start with inserting an entity, check that it can be retrieved, update it, check again and then delete it. Envisioning a test framework for this is not that hard. But Hibernate configuration files also define complex properties such as cascading and inverse-ness. Correct configuration of these properties is just as important in your application. Testing these properties is a bit more complicated, involving navigating associations and collections.

Testing queries is a bit more complicated. Based upon an HQL, SQL or Criteria query, Hibernate generates SQL that can be executed against a specific RDBMS. One approach towards testing the query would be checking that the generated SQL is correct. This involves parsing the SQL produced and then checking certain features, such as the presence of tables in the FROM-clause, the correct restrictions in the WHERE-clause and the columns in the SELECT clause. This seems pretty laborious. Another approach would be to actually execute the SQL against a database and check the results. Using an in-memory database such as HSQLDB, this is quite an easy task. One important caveat is that Hibernate will generate different SQL for a different type of RDBMS. When the test setup uses a different type of RDBMS, the queries that are used in the deployed situation are actually never tested during the unit test phase.

Data is another problem when running tests against an actual RDBMS. To be able to check the logic of a query, it should be executed against a number of different datasets. Usually the query is sensitive to its parameters as well as to the actual data in the system; both sensitivities should be tested with different variations. One way of getting data into the system is DBUnit. This framework reads XML files and creates insert statements that can be executed against the RDBMS. The problem with this approach is that the expectations and results of different tests tend to get correlated, since the data for the whole test case is read from the same XML file. This file tends to get very big and hard to understand; the different variations that are tested are hardly documented and not evident from a quick scan of the file.
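
For reference, loading such a dataset with DBUnit typically looks something like this sketch; the dataset file name, the JDBC URL and the test class are all assumptions:

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import junit.framework.TestCase;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.operation.DatabaseOperation;

public class EmployeeQueryTest extends TestCase {

    protected void setUp() throws Exception {
        // Hypothetical in-memory HSQLDB connection; adjust URL and credentials as needed.
        Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);

        // Read the dataset from an XML file on disk.
        IDataSet dataSet = new FlatXmlDataSet(new FileInputStream("employee-dataset.xml"));

        // CLEAN_INSERT first deletes existing rows, so every test starts from a known state.
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }
}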

One solution would be to use a separate file for each test. In this situation tests will not be correlated, and a comment at the top of the file could describe the variation that is tested. This would probably lead to a large number of very similar files in the project: due to foreign key restrictions, inserting a single record in the database often cascades through a lot of related tables containing reference data. This could be resolved by using a hierarchical structure, in which the reference data is loaded first and combined with a number of smaller and more specific datasets.

A different solution would be to create the objects necessary for the test in code and persist them using Hibernate. The resulting tests would be easier to refactor and would be more independent. It could lead to a lot of code duplication, but this might be solved by using the ObjectMother pattern. An extra advantage of this approach is that the mapping files are tested indirectly. This might actually be a bad thing, since when a mapping is corrupted, completely unrelated tests might fail.
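
A minimal sketch of the ObjectMother idea, reusing the hypothetical Employee entity from above plus an equally hypothetical Department:

public class EmployeeMother {

    // Factory methods centralize the construction of valid test objects,
    // so tests share setup code instead of sharing data files.
    public static Employee simpleEmployee() {
        Employee employee = new Employee("Alice");
        employee.setDepartment(simpleDepartment());
        return employee;
    }

    public static Department simpleDepartment() {
        return new Department("Engineering");
    }
}

A test can then simply call session.save(EmployeeMother.simpleEmployee()), which is what exercises the mapping files indirectly, as noted above.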

As always, reality isn't simple and the best solution will be a mix of both. I've actually never tried mixing both strategies (DBUnit and ObjectMother) on the same project, and I'm very curious what the pitfalls will be. A problem that will probably remain with every strategy is that test coverage cannot be determined (or can it?).

Component Integration testing


When using Spring, units are glued together in an application context into 'components' or top-level beans. The application context is never really used in the unit test phase, since its sole purpose is gluing together units that are tested in isolation in that phase. Yet the application context is one of the most important files in your application: starting up an application with a typo in its application context will fail miserably, though luckily it fails early.

Some of the properties and structures that are defined in the application context can have a far-reaching impact on your application. Usually transaction boundaries are defined here, as well as other cross-cutting concerns. Not testing the application context should, in my opinion, be considered a serious shortcoming in the automated testing of an application. Some of the components that are used in the deployed situation (and thus configured in the application context) cannot be used in a test situation; interfaces to backend systems are an example. To facilitate testing, these components should be defined in a separate application context that can be left out or replaced when testing.
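
A sketch of what such a test could look like with Spring's test support (the AbstractDependencyInjectionSpringContextTests base class, which injects beans by type through setters); OrderService and the context file names are assumptions:

import org.springframework.test.AbstractDependencyInjectionSpringContextTests;

public class OrderServiceContextTest extends AbstractDependencyInjectionSpringContextTests {

    private OrderService orderService; // injected by type from the context

    public void setOrderService(OrderService orderService) {
        this.orderService = orderService;
    }

    protected String[] getConfigLocations() {
        return new String[] {
            "classpath:applicationContext.xml",      // the real wiring
            "classpath:applicationContext-test.xml"  // replaces backend interfaces with stubs
        };
    }

    public void testContextLoadsAndServiceIsWired() {
        // Mainly check that the context starts up and the wiring works.
        assertNotNull(orderService);
    }
}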

On the other hand, these types of tests are very similar to integration tests. I think the difference should be that the "component integration" tests hardly check the results of the application: if a certain service method can be executed without running into a NullPointerException, and it outputs something that at first glance appears to be meaningful, the test has succeeded. Further testing should be considered part of the functional and integration tests.

Conclusion


In my opinion there are several steps that should be executed sequentially in the unit test phase. A failure in an early step should terminate the build without executing the later steps, since their failures might simply be consequences of the previously found error. One of my colleagues suggested using TestNG to create test groups, as sketched below. The types of tests described above can be used as starting points for these steps, and there might be other types of tests that deserve their own group. The methods described above for writing the tests seem promising, but they are not yet proven technology. Do you think this grouping of unit tests is useful? Do you recognize the groups and guidelines described above?
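
A sketch of the grouping idea with TestNG annotations; the group and method names are made up for the example:

import org.testng.annotations.Test;

public class GroupedTests {

    @Test(groups = "basic")
    public void serviceLogicInIsolation() { /* basic unit test */ }

    @Test(groups = "dao")
    public void employeeCanBeSavedAndLoaded() { /* DAO layer test */ }

    // Only runs when the earlier groups have passed, so a failure in an
    // early step halts the later steps.
    @Test(groups = "component", dependsOnGroups = { "basic", "dao" })
    public void applicationContextStartsUp() { /* component integration test */ }
}
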
By Maarten Winkels

Tuesday, July 3, 2007

Convention over Configuration in Spring’s MVC

Convention Over Configuration (CoC) is a term often bandied around by Ruby on Rails followers. Wikipedia defines it as meaning "the programmer only needs to specifically configure what is unconventional." Very sensible advice indeed, although it's not without its problems. For example, the use of InternalResourceViewResolver effectively lets programmers forget about configuring resource views for incoming URLs and instead rely on the convention of mapping to a like-named view (it extracts the view name from the URL). In fact, the MultiActionController is a type of CoC too: you don't have to configure an action name; Spring tries to automatically invoke a like-named method on the controller class based on the URL (via its MethodNameResolver, actually). These two essentially enforce the conventions of automatically selecting a view and a method in a controller class, respectively.

Now let's look at another example of CoC: ControllerClassNameHandlerMapping. This one enforces the convention of automatically selecting a controller.

Background

A URL mapper has to map incoming URL requests to particular controllers. Typically we have to configure the URL mapper with each new controller that we add. Turning this on its head, we want to use CoC so that the correct controller is automatically selected if it's registered, without any configuration.

How to do it

Simply use the ControllerClassNameHandlerMapping as your URL mapper in your dispatch-servlet.xml configuration file. For example:
<bean id="urlMapping"
  class="org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping">
</bean>
Define your controllers as usual. For example, I have this:
<bean id="commandController"
   class="com.memestorm.web.CommandController">
</bean>
That's it. You don't have to wire your controllers to URLs; this is now automated. Instead of explicitly configuring mappings between URLs and controllers, you can rely on the convention, which is to take the ClassUtils.getShortName() of the controller class, remove the "Controller" suffix if it exists, lowercase the result, and use that as the mapping. For example, our CommandController would be mapped to /command/.

For MultiActionController controllers it works slightly differently. In this case, say for a hypothetical DispatchController, it will map /dispatch/* to the controller. (Note the extra wildcard.)

Now you can concentrate on writing your controllers and action methods. Everything will be automatically wired in; nice.
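
For instance, the CommandController from the bean definition above could be as plain as this sketch (the view name "command" is an assumption):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.Controller;

public class CommandController implements Controller {

    public ModelAndView handleRequest(HttpServletRequest request,
                                      HttpServletResponse response) throws Exception {
        // Reached via the URL derived from the class name; no explicit
        // URL-to-controller wiring is configured anywhere.
        return new ModelAndView("command");
    }
}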

How it works

It’s actually dead simple. The ControllerClassNameHandlerMapping class simply iterates through the application context looking for beans of type Controller, adding them to the mapping as described earlier.

By Jon

JTA Does Not Equal Automatic Support of Two-Phase Commit!

I find it a little bit distressing how few Java developers understand that using JTA does not automatically get you XA/Two-Phase-Commit capabilities.

Here we've got Matt Raible, who really should know better, or at least should not be blogging about it, posting on Two-Phase Commit in Tomcat with JOTM and Spring. Somebody flew out to see Matt, and they spent some time setting up a system with Tomcat and JOTM, and apparently they went away thinking their app was now XA-capable. I wouldn't want to be using this app in production if they're depending on coordinated rollback across multiple datasources.

Now it looks like JOTM is getting XA recovery (it's in CVS, and part of a new JOnAS release), but the release version doesn't have it. I really welcome a version of JOTM that can do proper XA recovery. It'll be great to have that option, but until it's actually available there's not really much of a reason you'd ever want to use JOTM in a new Spring app deployed in Tomcat.

Without the full XA capabilities, the only thing JOTM really adds is making the standard JTA APIs available (the UserTransaction and TransactionManager interfaces). But Spring itself doesn't need those; it can handle non-XA transactions in a very performant and robust fashion with just a local Spring transaction manager like HibernateTransactionManager or DataSourceTransactionManager. Bringing JOTM into the picture will just add overhead and another potential source of problems (certainly in the past JOTM was not the most robust or spec-complete piece of software around, although from what I know it's gotten a lot better).
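
For comparison, the local (non-XA) setup is just a sketch like this in your application context, with the bean names assumed:

<bean id="transactionManager"
  class="org.springframework.orm.hibernate3.HibernateTransactionManager">
  <property name="sessionFactory" ref="sessionFactory"/>
</bean>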

Now if you have user code that needs to work against the JTA APIs (without needing XA), that _would_ be a reason to bring in JOTM; it's just not needed for Spring-only apps. So if you're deploying a non-Spring app, or perhaps have an app that is partially using Spring transaction management and partially (for legacy reasons) working against JTA, JOTM may make some sense. But you still won't get XA.

Also, aside from the transaction manager, to get proper XA recovery capabilities your JDBC driver and the database itself need to be XA-capable. You're not going to get 2PC with HSQLDB no matter what JTA implementation you use…

One note to people looking for 2PC-like functionality in a Spring environment for other kinds of non-transactional data, such as LDAP access: you can actually pretty easily hook into Spring's transaction synchronization, so that, for example, when a database transaction that writes a partial user record rolls back, you can also do a manual rollback (via the synchronization) of the data you wrote to LDAP.
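
A sketch of that hook, which must be registered from inside an active transaction; the ldapTemplate helper and userDn variable are hypothetical:

import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

// Register a callback on the transaction bound to the current thread.
TransactionSynchronizationManager.registerSynchronization(
        new TransactionSynchronizationAdapter() {
            public void afterCompletion(int status) {
                if (status == STATUS_ROLLED_BACK) {
                    // The database transaction rolled back, so manually undo
                    // the (non-transactional) write we made to LDAP.
                    ldapTemplate.unbind(userDn); // hypothetical LDAP cleanup
                }
            }
        });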


By Colin