Apache (/əˈpætʃiː/; French: [a.paʃ]) is the collective term for several culturally related groups of Native Americans in the United States originally from the Southwestern United States. [read more at http://en.wikipedia.org/wiki/Apache]

  • Speed up your Apache server running PHP

    Since I am facing performance problems due to visitor load and a badly configured server, I decided today to document my findings in this new series of articles.

    Once PHP runs as FastCGI and no longer as an Apache module (mod_php4.so is not multi-threaded), you should be able to switch Apache's default MPM setting from MPM prefork to MPM worker.

    So, what's the difference between prefork and worker?

    Quoting from the Apache MPM Prefork page: http://httpd.apache.org/docs/2.0/mod/prefork.html
    MPM Prefork implements a non-threaded, pre-forking web server that handles requests in a manner similar to Apache 1.3.

    And the Apache MPM Worker page says: http://httpd.apache.org/docs/2.0/mod/worker.html
    MPM Worker implements a hybrid multi-process multi-threaded server. By using threads to serve requests, it is able to serve a large number of requests with less system resources than a process-based server.

    Server: Strato (www.strato.de)
    Operating system: SuSE / openSuSE
    Requirements: root access and basic Unix knowledge

    1. Edit the file
    # vi /etc/sysconfig/apache2

    and change the key to:

    APACHE_MPM="worker"



    2. You can also tune the default parameters in the file /etc/apache2/server-tuning.conf.
    Here are my settings; these are still the Apache defaults:


    # worker MPM

    <IfModule worker.c>
        # upper limit on the number of server processes
        ServerLimit 16
        # initial number of server processes to start
        StartServers         2
        # minimum number of worker threads which are kept spare
        MinSpareThreads     25
        # maximum number of worker threads which are kept spare
        MaxSpareThreads     75
        # maximum number of simultaneous client connections
        MaxClients       150
        # constant number of worker threads in each server process
        ThreadsPerChild     25
        # maximum number of requests a server process serves
        MaxRequestsPerChild  6000
    </IfModule>
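These numbers are related: worker can never serve more than ServerLimit times ThreadsPerChild simultaneous connections, and at MaxClients it needs MaxClients / ThreadsPerChild processes. A plain-Java sketch of that arithmetic (class and method names are mine, values copied from the config above):

```java
public class WorkerCapacity {
    // hard upper bound on simultaneous connections with the worker MPM
    static int capacity(int serverLimit, int threadsPerChild) {
        return serverLimit * threadsPerChild;
    }

    // number of server processes Apache needs when MaxClients is reached
    static int processesAtPeak(int maxClients, int threadsPerChild) {
        return (maxClients + threadsPerChild - 1) / threadsPerChild; // round up
    }

    public static void main(String[] args) {
        System.out.println(capacity(16, 25));         // 400 connections max
        System.out.println(processesAtPeak(150, 25)); // 6 processes at MaxClients
    }
}
```

So with the defaults above there is plenty of headroom: MaxClients (150) stays well below the 400-connection ceiling.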

    3. Run
    # apache2-reconfigure-mpm
    This recreates the file /etc/apache2/sysconfig.d/loadmodule.conf
    and restarts Apache automatically. Test your site to ensure everything still works as expected.

  • Speed up your Apache server with mod_expires

    This module controls the setting of the Expires HTTP header and the max-age directive of the Cache-Control HTTP header in server responses. The expiration date can be set relative either to the time the source file was last modified or to the time of the client access.

    These HTTP headers are an instruction to the client about the document's validity and persistence. If cached, the document may be fetched from the cache rather than from the source until this time has passed. After that, the cache copy is considered "expired" and invalid, and a new copy must be obtained from the source.
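What an "access plus 1 month" policy means in practice can be sketched in a few lines of Java (an illustration, not mod_expires source; the class and method names are mine): take the request time, add the configured offset, and format the result as the RFC 1123 date the Expires header requires.

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

public class ExpiresDemo {
    // Format a date the way the Expires header requires (RFC 1123, always GMT)
    static String httpDate(Calendar cal) {
        SimpleDateFormat fmt =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(cal.getTime());
    }

    // "access plus 1 month": expiry is relative to the time of the request
    static Calendar accessPlusOneMonth(Calendar access) {
        Calendar expires = (Calendar) access.clone();
        expires.add(Calendar.MONTH, 1);
        return expires;
    }

    public static void main(String[] args) {
        Calendar access = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
        System.out.println("Expires: " + httpDate(accessPlusOneMonth(access)));
    }
}
```

A "modification plus ..." policy works the same way, except the offset is added to the file's last-modified time instead of the access time.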

    How to activate mod_expires.so
    # vi /etc/apache2/conf.d/mod_expires.conf

    The config is rough right now, but it is better than nothing. Put the following in the file:

    LoadModule expires_module     /usr/lib/apache2/mod_expires.so
    <IfModule mod_expires.c>
     ExpiresActive On

     ExpiresDefault "access plus 1 month"
     #ExpiresByType text/html "access plus 1 month 15 days 2 hours"
     #ExpiresByType image/gif "modification plus 1 month"
     #ExpiresByType image/png "modification plus 1 month"
     #ExpiresByType image/jpg "modification plus 1 month"
     #ExpiresByType text/css "access plus 1 month 15 days 2 hours"
     #ExpiresByType text/javascript "access plus 1 month 15 days 2 hours"
    </IfModule>

    You can also set the Expires header by type, but that was not working in my case, which is why I use the ExpiresDefault directive.

  • One week of mod_evasive: some nasty bots got blacklisted

    These are my mod_evasive settings:
    LoadModule evasive20_module     /usr/lib/apache2/mod_evasive20.so
    <IfModule mod_evasive20.c>
      DOSHashTableSize 3097
      DOSPageCount 5
      DOSSiteCount 100
      DOSPageInterval 2
      DOSSiteInterval 2
      DOSBlockingPeriod 600
      DOSEmailNotify you@example.com
    </IfModule>

    And this is a small piece of documentation I forgot to add in the previous article:

    • DOSHashTableSize: the size of the hash table of URL and IP combined. The greater this setting, the more memory is required for the lookup table, but the faster the lookups are processed. This value is automatically rounded up to the nearest prime number.
    • DOSPageCount: the number of requests for the same page from the same IP during an interval that will cause that IP to be added to the block list.
    • DOSSiteCount: the number of pages requested of a site by the same IP during an interval that will cause that IP to be added to the block list.
    • DOSPageInterval: the interval, in seconds, for the DOSPageCount threshold.
    • DOSSiteInterval: the interval, in seconds, for the DOSSiteCount threshold.
    • DOSBlockingPeriod: the time, in seconds, for which the IP is blacklisted.
    • DOSEmailNotify: can be used to send a notification email every time an IP is blocked.
    • DOSSystemCommand: the command executed when an IP is blocked. It can be used to block the user at a firewall or router.
    • DOSWhitelist: can be used to whitelist IPs, e.g. DOSWhitelist 127.0.0.*
    So if anybody requests the same page on my homepage 5 times in less than 2 seconds, they will get blacklisted.
    If anybody tries to make more than 100 requests to my homepage in less than 2 seconds, they will also get blacklisted.
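The page-count rule can be sketched in Java (a simplification of mod_evasive's C implementation; the class and method names are mine): requests are keyed by IP plus URI, counted inside an interval window, and blocked once the count exceeds DOSPageCount.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of mod_evasive's page-count rule (not the real implementation):
// the same IP requesting the same URI more than DOSPageCount times within
// DOSPageInterval seconds gets blocked.
public class EvasiveSketch {
    static final int PAGE_COUNT = 5;     // DOSPageCount
    static final int PAGE_INTERVAL = 2;  // DOSPageInterval, in seconds

    private final Map<String, Integer> hits = new HashMap<String, Integer>();
    private final Map<String, Long> windowStart = new HashMap<String, Long>();

    /** Returns true when the request should be refused (HTTP 403). */
    public boolean shouldBlock(String ip, String uri, long nowSeconds) {
        String key = ip + "|" + uri;  // mod_evasive hashes IP and URI together
        Long start = windowStart.get(key);
        if (start == null || nowSeconds - start > PAGE_INTERVAL) {
            windowStart.put(key, nowSeconds);  // new interval: reset the counter
            hits.put(key, 1);
            return false;
        }
        int n = hits.get(key) + 1;
        hits.put(key, n);
        return n > PAGE_COUNT;
    }
}
```

With the settings above, the sixth request for the same page inside a 2-second window is the first one refused.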
    In less than a week, bots from the following locations got blacklisted: Germany (including Muenchen and Heinsberg), Hong Kong, the Netherlands (including Harlingen), the United States (including Google in Mountain View, CA, and Raleigh, NC), India (Hyderabad), Sweden, Belgium (Tournai), Denmark, Italy (Roma), and several unknown countries.

  • A strategy for integration versions with Maven...


    We are currently asking ourselves at INNOVEO whether we need to keep integration versions.
    The main objective of integration versions is to be integrated with other modules to build and test an
    application or a framework. This question quickly becomes essential when working with several
    modules, where you will have to rely on intermediate, non-finalized versions of modules.

    Since we are also following the continuous integration paradigm for all our modules, thanks to
    Apache Maven, these integration versions are produced very frequently by a continuous
    integration server (TeamCity from JetBrains).

    So, how can you deal with these, possibly numerous, integration versions? The answer comes
    from this extract of the Ivy documentation.

    There are basically two ways to deal with them:

    Use a naming convention
    The idea is pretty simple: each time you publish a new integration of your module, you give the same
    name to the version (in the Maven world this is, for example, 1.0-SNAPSHOT). The dependency manager
    should then be aware that this version is special because it changes over time, so that it does not
    trust its local cache if it already has the version, but checks the date of the version on the repository
    to see if it has changed.

    Automatically create a new version for each build
    In this case you use either a build number or a timestamp to publish each new integration version
    with a new version name. Then you can use one of the numerous ways in Ivy to express a version
    constraint. Usually selecting the very latest one (using 'latest.integration' as the version constraint) is enough.

    We usually recommend the second way, because using a new version each time you publish
    better fits the version identity paradigm and can make all your builds reproducible,
    even integration ones. This is interesting because it enables you, with some work in your build
    system, to introduce a mechanism to promote an integration build to a more stable status, like a
    milestone or a release.

    The example given is very interesting...

    Imagine a customer comes in on a Monday morning and asks for the latest version of your
    software, for testing or demonstration purposes. Obviously he needs it for the afternoon :-) Now if
    you have a continuous integration process and good tracking of your changes and your artifacts, it
    may turn out that you are actually able to fulfill his request without needing a DeLorean to
    buy yourself some more time :-) But it may also turn out that your latest version stable enough
    for the customer's purpose was actually built a few days ago, because the very latest just broke
    a feature or introduced a new one you don't want to deliver. In this case, you can deliver this 'stable'
    integration build if you want, but be sure that a few days, weeks, or even months later, the
    customer will ask for a bug fix on this demo-only version. Why? Because it's a customer, and we
    all know how they are :-)

    So, with a build-promotion feature for any build in your repository, the solution becomes pretty easy:
    when the customer asks for the version, you not only deliver the integration build, you also
    promote it, for example to milestone status. This promotion indicates that you should keep track of
    this version over a long period, to be able to come back to it and create a branch if needed.

    Note this is the strategy at Eclipse.org, where a nightly build (N20080420) can be promoted to a Maintenance
    release if its quality is good enough. Below I've put an extract of a presentation document © 2006 by Alex Blewitt,
    made available under the EPL v1.0 | 2006-03-20 | http://www.rcpapps.org/

    We are now using the same naming convention at INNOVEO for our product.

    Eclipse builds are of different types:

    (N) Nightly
    • Built every night (whether successful or not)
    • Used to run quality metrics and check whether tests have passed
    (I) Integration
    • Used to ensure that code works together
    • Used to run quality metrics
    (M) Maintenance
    • Released at the end of each build cycle
    (R) Release
    • Released at the end of each release cycle

    Each product is given a build id:

    • Build Type (N, I, M or R)
    • Build ID (M20060118)
    • Build Label (M20060118-1600)
    • Timestamp of build (16:00 on the 18th Jan, 2006)
    • Each release corresponds to a specific build label
    • May also be known as other aliases in CVS
    • R3_1_2, vI20060118-1000, R3_1_Maintenance
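Generating such build IDs and labels is purely mechanical. A small Java sketch (class and method names are mine, not Eclipse tooling):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class BuildLabel {
    // type is one of N, I, M or R, as in the Eclipse scheme above
    static String label(char type, Date when) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd-HHmm", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return type + fmt.format(when);  // e.g. M20060118-1600
    }

    static String id(char type, Date when) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return type + fmt.format(when);  // e.g. M20060118
    }
}
```

Because the label encodes the build type and timestamp, promotion only changes the leading letter; the timestamp still identifies the exact build.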

    To keep the Eclipse ecosystem in step, everything is tagged

    • Part of the build process tags the current code with vI20060320
    • A build is only promoted from N->I if there are no build failures
    • A build is promoted from I->M if there are no failures and all the
      functionality works to a satisfactory level
    • A build is promoted from M->R at the end of a release cycle and
      the quality is suitably high

    On the other hand, the main drawback of this solution is that it can produce a lot of intermediate
    versions, and you will have to run some cleaning scripts on your repository...

    I will show you later how you can achieve this goal with Maven and TeamCity.

  • Adding mod_security to better protect your webserver

    ModSecurity™ is an open source intrusion detection and prevention engine for web applications (or a web application firewall). Operating as an Apache web server module or standalone, the purpose of ModSecurity is to increase web application security, protecting web applications from known and unknown attacks. (from http://www.modsecurity.org/)
    Installing mod_security as a DSO is easier, and the procedure is the same for both Apache branches. First unpack the distribution somewhere (anywhere will do; I copy the .c files into my home directory):

    # cd
    # wget http://www.modsecurity.org/download/mod_security-1.9.4.tar.gz
    # tar -zxvf mod_security-1.9.4.tar.gz
    # cd mod_security-1.9.4/apache2

    and compile the module with:

    /usr/local/psa/admin/bin/apxs -cia ~/mod_security.c
    (or, with the system apxs2)
    /usr/sbin/apxs2 -cia ~/mod_security.c

    The first problem that may occur is the absence of:
    • Gcc: The GNU Compiler Collection (usually shortened to GCC) is a set of programming language compilers produced by the GNU Project. It is free software distributed by the Free Software Foundation (FSF) under the GNU GPL, and is a key component of the GNU toolchain. It is the standard compiler for the open source Unix-like operating systems, and certain proprietary operating systems derived therefrom such as Mac OS X. [WikiPedia]
    • apache-dev: contains the apxs tool and the Apache headers required to compile a module.
    Both can be installed via YaST2...

    Tip: if your apxs2 is not located at /usr/sbin/apxs2, you can search for it by typing # find / -name apxs2

    # /usr/sbin/apxs2  -cia ~/mod_security.c
    /usr/share/apache2/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -march=i586 -mcpu=i686 -fmessage-length=0 -Wall -g -fPIC -Wall -fno-strict-aliasing -D_LARGEFILE_SOURCE -DAP_HAVE_DESIGNATED_INITIALIZER -DLINUX=2 -D_REENTRANT -D_XOPEN_SOURCE=500 -D_BSD_SOURCE -D_SVID_SOURCE -D_GNU_SOURCE -DAP_DEBUG -Wmissing-prototypes -Wstrict-prototypes -Wmissing-declarations -pthread -I/usr/include/apache2  -I/usr/include/apache2   -I/usr/include/apache2   -c -o /root/mod_security.lo /root/mod_security.c && touch /root/mod_security.slo
    /usr/share/apache2/build/libtool --silent --mode=link gcc -o /root/mod_security.la  -rpath /usr/lib/apache2 -module -avoid-version    /root/mod_security.lo
    /usr/share/apache2/build/instdso.sh SH_LIBTOOL='/usr/share/apache2/build/libtool' /root/mod_security.la /usr/lib/apache2
    /usr/share/apache2/build/libtool --mode=install cp /root/mod_security.la /usr/lib/apache2/
    cp /root/.libs/mod_security.so /usr/lib/apache2/mod_security.so
    cp /root/.libs/mod_security.lai /usr/lib/apache2/mod_security.la
    cp /root/.libs/mod_security.a /usr/lib/apache2/mod_security.a
    ranlib /usr/lib/apache2/mod_security.a
    chmod 644 /usr/lib/apache2/mod_security.a
    PATH="$PATH:/sbin" ldconfig -n /usr/lib/apache2
    Libraries have been installed in:
       /usr/lib/apache2

    If you ever happen to want to link against installed libraries
    in a given directory, LIBDIR, you must either use libtool, and
    specify the full pathname of the library, or use the `-LLIBDIR'
    flag during linking and do at least one of the following:
       - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
         during execution
       - add LIBDIR to the `LD_RUN_PATH' environment variable
         during linking
       - use the `-Wl,--rpath -Wl,LIBDIR' linker flag
       - have your system administrator add LIBDIR to `/etc/ld.so.conf'

    See any operating system documentation about shared libraries for
    more information, such as the ld(1) and ld.so(8) manual pages.
    chmod 755 /usr/lib/apache2/mod_security.so
    apxs:Error: Config file /etc/apache2/httpd2-prefork.conf not found.

    Pay no attention to the apxs error on the last line: the resulting shared library (mod_security.so) has already been copied into /usr/lib/apache2.

    Then copy the desired rule set (modsecurity-general.conf or modsecurity-php.conf) into /etc/apache2.

    Edit /etc/apache2/httpd.conf and add the following lines at the end of the file. It is also recommended to use the rules from www.GotRoot.com:

    LoadModule security_module /usr/lib/apache2/mod_security.so
    SecFilterEngine On
    Include /etc/apache2/modsecurity_rules/modsecurity-general.conf
    Include /etc/apache2/modsecurity_rules/modsecurity-hardening.conf

    #rules set found at http://www.gotroot.com/tiki-index.php?page=mod_security+rules
    Include /etc/apache2/modsecurity_rules/gotroot/apache2-rules.conf
    Include /etc/apache2/modsecurity_rules/gotroot/badips.conf
    Include /etc/apache2/modsecurity_rules/gotroot/blacklist2.conf
    Include /etc/apache2/modsecurity_rules/gotroot/blacklist.conf
    Include /etc/apache2/modsecurity_rules/gotroot/exclude.conf
    Include /etc/apache2/modsecurity_rules/gotroot/jitp.conf
    Include /etc/apache2/modsecurity_rules/gotroot/proxy.conf
    Include /etc/apache2/modsecurity_rules/gotroot/recons.conf
    Include /etc/apache2/modsecurity_rules/gotroot/rootkits.conf
    Include /etc/apache2/modsecurity_rules/gotroot/rules.conf
    Include /etc/apache2/modsecurity_rules/gotroot/useragents.conf

    BUT be careful with modsecurity-hardening.conf:
    1. This file has to be tuned for your server: log file locations, advanced rule sets; read it carefully and uncomment the TODOs if needed.
    2. By default mod_security is in learning mode: it logs and lets requests pass through (the line SecFilterDefaultAction "pass,log"). As soon as you have a good rule set, it is recommended to switch to SecFilterDefaultAction "deny,log,status:500".

    Restart Apache2 by typing:
    # /etc/init.d/apache2 restart

    Now it is time to check whether mod_security is running:

    # tail -f /var/log/apache2/error_log
    [Mon Aug 21 18:43:38 2006] [notice] Apache/2.0.53 (Linux/SUSE) configured -- resuming normal operations
    [Mon Aug 21 19:01:56 2006] [notice] caught SIGTERM, shutting down
    [Mon Aug 21 19:01:57 2006] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
    [Mon Aug 21 19:01:57 2006] [warn] RSA server certificate CommonName (CN) `h790663.serverkompetenz.net' does NOT match server name!?
    [Mon Aug 21 19:01:57 2006] [warn] RSA server certificate CommonName (CN) `plesk' does NOT match server name!?
    [Mon Aug 21 19:01:57 2006] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec2)
    [Mon Aug 21 19:01:57 2006] [notice] mod_security/1.9.4 configured
    [Mon Aug 21 19:01:57 2006] [warn] RSA server certificate CommonName (CN) `h790663.serverkompetenz.net' does NOT match server name!?
    [Mon Aug 21 19:01:57 2006] [warn] RSA server certificate CommonName (CN) `plesk' does NOT match server name!?
    [Mon Aug 21 19:01:57 2006] [notice] Apache/2.0.53 (Linux/SUSE) configured -- resuming normal operations

  • Ant scripts How to...

  • Apache Jmeter


    Work in progress

  • Apache Junit

    In computer programming, a unit test is a method of testing the correctness of a particular module of source code. The idea is to write test cases for every non-trivial function or method in the module so that each test case is separate from the others if possible.

    JUNIT: A testcase framework for Java



    The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. This isolated testing provides several benefits:

    • Encourages change
      Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly (regression testing). This provides the benefit of encouraging programmers to make changes to the code since it is easy for the programmer to check if the piece is still working properly.
    • Simplifies Integration
      Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing style approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes easier.
    • Documentation
      As an added value, all your Testcases can act as a documentation for your set of classes

    Kent Beck (CSLife) and Erich Gamma (OTI Zürich) wrote a very good article:
    "Testing is not closely integrated with development. This prevents you from measuring the progress of development: you can't tell when something starts working or when something stops working. Using JUnit you can cheaply and incrementally build a test suite that will help you measure your progress, spot unintended side effects, and focus your development efforts."

    It is important to realize that unit-testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems and any other system-wide issues. Unit testing is only effective if it is used in conjunction with other software testing activities.

    There are many ways to use JUnit:

    • Write your set of classes, then some test cases that should run and validate the work done,
    • Write test cases first that won't run because no classes exist yet, then write the code that makes them run!
    • Fix a bug in a piece of code, and write a test case to be sure it won't reappear one day.

    JUnit is based on the fact that you want to test your code. Normally you know the expected result; all you have to do is exercise your code (a class, a method, a set of cooperating classes) and test whether the response is correct.
    Let's take an example... I have a class that can replace patterns in a string (like in JDK 1.4.2: "aText".replace("searchPattern","withThisPattern")). Since I wrote the class and know its purpose, I can write some pertinent test cases. I want to protect this object, and all other objects that may use it, from loss of functionality and from bugs which may lead to malfunctions in a complex system.

    Writing good Testcases

    There is no rule for how to write a test, but remember:

    • A test case should be pertinent; otherwise it will have no quality impact and will only waste developer time.
    • Be honest: push your objects to the limits of their usage! Try to describe and test all the functionality of your set of objects.
    • You need to make some dummy/obvious assertions (but sometimes these dummy tests are not so obvious with complex objects and/or runtime environments).
      A constructor should not give back the same instance
      (except if you are using a singleton pattern):
      ClassA classA = new ClassA();
      ClassA classA1 = new ClassA();
      assertNotSame(classA, classA1);

    The JUNIT language

    JUnit uses some primitive methods to achieve regression testing. As of JUnit 3.8.1, the assertion methods are all located in junit.framework.Assert. A lot of third-party tools have been developed to extend the testing possibilities, for example with databases, EJBs, or JSPs.

    • The assert methods test equality of nearly all standard Java types.
    • If these methods are not enough, you can always validate your objects on your own and call fail() if you decide that conditions are not met.

    Write your first Testcase

    A JUnit test is a class which extends junit.framework.TestCase and has some methods beginning with the word "test".

    A trivial example:

    Your first JUnit test case class:
    public class SquareTest extends junit.framework.TestCase {
            public void testConstructor() {
             Square squareA = new Square();
             Square squareB = new Square();
             assertNotSame(squareA, squareB);
            }
            public void testCloneability() {
             Square squareA = new Square();
             Square squareB = (Square) squareA.clone();
             assertEquals(squareA, squareB);
            }
    }

    Writing a Testcase is always more or less the same:

    1. Create one or more classes extending junit.framework.TestCase and implement some test methods.
    2. Create in these methods instances of the objects you want to test or validate.
    3. Use your objects; use setters, getters and constructors to change their internal state (here is the concept of pushing your objects to the limits: use the full range of input data accepted by your objects).
    4. Test the values returned by methods, assuming that you know the correct result.
    5. Write a lot of them to test as much as possible of the functionality provided by your objects.
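The steps above can be walked through on the pattern-replace example mentioned earlier. The Replacer class and its API below are hypothetical, invented for illustration; in a real suite the test methods would live in a junit.framework.TestCase subclass and use assertEquals, but plain checks keep the sketch standalone:

```java
public class ReplacerTest {
    // Step 2: the (hypothetical) object under test
    static class Replacer {
        String replace(String text, String search, String with) {
            StringBuffer out = new StringBuffer();
            int from = 0, at;
            while ((at = text.indexOf(search, from)) >= 0) {
                out.append(text.substring(from, at)).append(with);
                from = at + search.length();
            }
            return out.append(text.substring(from)).toString();
        }
    }

    static void check(String actual, String expected) {
        if (!actual.equals(expected))
            throw new AssertionError(actual + " != " + expected);
    }

    // Steps 3 and 4: exercise the object, test the returned values
    public void testSimpleReplace() {
        check(new Replacer().replace("abcabc", "b", "x"), "axcaxc");
    }

    // Pushing the object to its limits: empty input, pattern not found
    public void testEdgeCases() {
        check(new Replacer().replace("", "b", "x"), "");
        check(new Replacer().replace("abc", "zz", "x"), "abc");
    }

    public static void main(String[] args) {
        ReplacerTest t = new ReplacerTest();
        t.testSimpleReplace();
        t.testEdgeCases();  // step 5: write many of these
        System.out.println("all tests passed");
    }
}
```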

    Run your testcases
    Different TestRunner or how to run your suite of testcases

    A TestRunner is able to run JUnit test cases; there are roughly two categories:

    • Textual TestRunner (console output)
      • The fastest to launch; can be used when you don't need a red/green success indication. This is recommended with Ant.
    • Graphical TestRunners (client-server web GUI, Swing, AWT, in Eclipse...)
      • They show a simple graphical dialog to start/stop tests, display their results, and provide some graphical progress indication.

    A TestRunner can be configured to be either loading or non-loading. In the loading configuration the TestRunner reloads your class from the class path for each run; as a consequence you don't have to restart the TestRunner after you have changed your code. In the non-loading configuration you have to restart the TestRunner after each run. The TestRunner configuration can either be set on the command line with the -noloading switch or in the junit.properties file located in "user.home" by adding an entry loading=false.

    JUnit finds all test cases using the java.lang.reflect package: all methods whose names start with the word "test" will be found and called.

    In a Java main class:
    String[] listUnitTest = {ClassA.class.getName(), ClassB.class.getName()}; // list of class names containing your unit tests
    junit.textui.TestRunner.main(listUnitTest); // text based
    junit.awtui.TestRunner.main(listUnitTest);  // green means all tests successful; in case of error it turns red and you see the stack trace and which test failed
    junit.swingui.TestRunner.main(listUnitTest); // same as above, Swing based
    The JUnit TestRunner in Eclipse is a standard view.

    A TestSuite is a suite of test cases or test methods; you can give this test suite to a TestRunner.

    Some particular TestSuite

    Multi threading test
    If you need multiple threads hitting your class, ActiveTestSuite starts each test in its own thread. However, ActiveTestSuite does not have a constructor which automatically adds all testXXX methods of a class to the test suite. I tried the addTestSuite method with the class name as argument, but it added all tests of the class to run sequentially in the same thread. So I had to add each test name to the ActiveTestSuite manually.
    public static Test suite() {
        TestSuite suite = new ActiveTestSuite();
        suite.addTest(new com.waltercedric.junit.ClassA("testClonability"));
        suite.addTest(new com.waltercedric.junit.ClassA("testSerialization"));
        suite.addTest(new com.waltercedric.junit.ClassA("testRandom"));
        return suite;
    }

    public static void runTest(String[] args) {
        junit.textui.TestRunner.run(suite());
    }

    JUnit can be extended with third-party extensions; if you need some special capabilities, refer to this page: JUnit extensions


  • Apache Log4j

    Log4J: A logging framework for J2EE

    Log4j homepage: http://jakarta.apache.org/log4j/

    Reference book on log4j:

    The Complete Log4j Manual

    by Ceki Gulcu
    Edition: Paperback

    Log4j is an open source tool (OSS) for inserting log statements into your application, developed by people at the Apache Foundation. Its speed and flexibility allow log statements to remain in shipped code while giving the user the ability to enable logging at runtime without modifying any of the application binary, all without incurring a high performance cost.


    • Log4j needs at least a compatible JDK 1.1.x to run.
    • The DOMConfigurator is based on the DOM Level 1 API. The DOMConfigurator.configure(Element) method will work with any XML parser that will pass it a DOM tree. The DOMConfigurator.configure(String filename) method and its variants require a JAXP compatible XML parser, for example Xerces or Sun's parser. Compiling the DOMConfigurator requires the presence of a JAXP parser in the classpath.
    • The org.apache.log4j.net.SMTPAppender relies on the JavaMail API. It has been tested with JavaMail API version 1.2. The JavaMail API requires the JavaBeans Activation Framework package.
    • The org.apache.log4j.net.JMSAppender requires the presence of the JMS API as well as JNDI.
    • Log4j test code relies on the JUnit testing framework in order to maintain quality of release.

    Why insert log statements, or rely on this (old) technology?

    Advantages
    It offers several advantages: it provides precise context about a run of the application, and once inserted into the code:
    • It helps developers develop and fix bugs,
    • Generating logging output requires no human intervention,
    • Log output can be saved to a persistent medium to be studied at a later time,
    • A rich logging package can also be viewed as an auditing tool, for example to measure performance...
    • Debugging statements stay with the program (for years) while debugging sessions are always transient (the lifetime of a bug resolution).
    • Logs can be the glue between developers in a development environment and specialists in a production environment. The know-how and descriptions in log statements can help the production specialist understand how your application works.

    Drawbacks
    • It can/may slow down an application.
    • If the program's verbosity is high, it can pollute the reader's mind or lead to misanalysis of a problem.
      For example:
      - saying something false in a log statement can have tremendous effects...
      - writing too much (irrelevant) info can hide a major error.

    Why choose Log4j? (from apache.org)

    • log4j is optimized for speed. The write path has been rewritten for efficiency and can be made asynchronous (compared to System.err).
    • log4j is based on a named logger hierarchy. (category)
    • log4j is fail-stop but not reliable.
    • log4j is thread-safe: no deadlocking threads or memory leaks.
    • log4j is not restricted to a predefined set of facilities.
    • Logging behavior can be set at runtime using a configuration file. Configuration files can be property files or in XML format.
    • log4j is designed to handle Java Exceptions from the start.
    • log4j can direct its output to a file, the console, a java.io.OutputStream, a java.io.Writer, a remote server using TCP, a remote Unix syslog daemon, a remote listener using JMS, the NT EventLog, or even send e-mail (Appenders).
    • log4j uses 5 levels, namely DEBUG, INFO, WARN, ERROR and FATAL.
    • The format of the log output can be easily changed by extending the Layout class.
    • The target of the log output as well as the writing strategy can be altered by implementations of the Appender interface.
    • log4j supports multiple output appenders per logger
    • log4j supports internationalization.
    • It is used extensively by thousands of Java developers. If a flaw is discovered it gets fixed in the next release.
    • The log4j code is likely to be better than code you'd write yourself and is likely to improve over time.
    • Ports to other languages exist: C++, Eiffel, Perl, .NET, Python, Ruby… more than 57 languages are supported

    Log4j concepts

    Logger  Loggers are responsible for handling the majority of log operations; the logger is the core component of the logging process.
    Levels  By default Log4j can log messages with five priority levels (not including custom levels). More can be defined by subclassing, but this is not recommended.

    debug to write debugging messages which should not be printed when the application is in production.
    log.debug("Starting init of RequestController");

    info for messages similar to the "verbose" mode of many applications.
    log.info("Analyser init successful");

    warn for warning messages which are logged to some log but the application is able to carry on without a problem.
    log.warn("Inconsistent value in conf for key 'debug', line 123 assuming default value true");

    error for application error messages which are also logged somewhere but, still, the application can hobble along, such as when an administrator-supplied configuration parameter is incorrect and you fall back to a hard-coded default value. Use this level in every catch clause where you cannot resolve the exception!
    log.error("The object Account is null");

    fatal for critical messages, after logging of which the application quits abnormally
    log.fatal("Can not get any new connection from database");


    A logger will only output messages that are of a level greater than or equal to its own. If the level of a logger is not set, it will inherit the level of the closest ancestor. So if a logger is created in the package com.waltercedric.account and no level is set for it, it will inherit the level of the logger created in com.waltercedric. If no logger was created in com.waltercedric, it will inherit the level of the root logger; the root logger is always instantiated and available.
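    The inheritance rule above can be illustrated with a minimal configuration fragment (package names taken from the example above):

    ```properties
    # root logger: always present, level ERROR
    log4j.rootLogger=ERROR, stdout
    # com.waltercedric gets an explicit level
    log4j.category.com.waltercedric=DEBUG
    # com.waltercedric.account has no level set:
    # it inherits DEBUG from its closest ancestor, com.waltercedric
    ```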

    Appender  Appenders:
    1. Are responsible for controlling the output of log operations.
    2. Control where and how a logging result is stored.

    The Appenders available are (from the log4j API)

    • ConsoleAppender: appends log events to System.out or System.err using a layout specified by the user. The default target is System.out
    • DailyRollingFileAppender extends FileAppender so that the underlying file is rolled over at a user chosen frequency.
    • FileAppender appends log events to a file.
    • RollingFileAppender extends FileAppender to backup the log files when they reach a certain size.
    • WriterAppender appends log events to a Writer or an OutputStream depending on the user's choice.
    • SMTPAppender sends an e-mail when a specific logging event occurs, typically on errors or fatal errors.
    • SocketAppender sends LoggingEvent objects to a remote log server, usually a SocketNode.
    • SocketHubAppender sends LoggingEvent objects to a set of remote log servers, usually SocketNodes.
    • SyslogAppender sends messages to a remote syslog daemon.
    • TelnetAppender is a log4j appender that specializes in writing to a read-only socket.

    One may also implement the Appender interface to create one's own way of outputting log statements.

    Layout  Layouts:
    1. Are responsible for formatting the output for an Appender.
    2. Are always used by an Appender.
    3. Know how to format the output.

    There are three types of Layout available:

    • HTMLLayout formats the output as an HTML table.
    • PatternLayout formats the output based on a conversion pattern specified, or if none is specified, the default conversion pattern.
    • SimpleLayout formats the output in a very simple manner, it prints the Level, then a dash '-' and then the log message.

    Using Log4j in your code

    It is not recommended to use the log4j API directly, since who knows whether a better logging framework won't come along in the future or whether log4j won't change its APIs. The main idea when you acquire a 3rd-party component is to build a wrapper around it. It is even better if the wrapper contains an abstract factory: in some cases you will have to use a different logging class (because of performance, licence...)

    A simple log4j wrapper
    package com.waltercedric;

    public class LogWrapper {


    Using your newly created wrapper
    import com.waltercedric.LogWrapper;

    public void init() throws com.waltercedric.ApplicationException {

    LogWrapper logger = new LogWrapper(Account.class);
    logger.info("Starting init");

    logger.debug("create an Account");
    up = new Account(new NullObject());
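    A minimal sketch of such a wrapper (hypothetical: java.util.logging is used as a stand-in backend so the example is self-contained and compiles without the log4j jar; in the article's setup the delegate would be org.apache.log4j.Logger.getLogger(clazz)):

    ```java
    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Hypothetical wrapper sketch: java.util.logging stands in for log4j
    // so the snippet compiles with the JDK alone. In a real log4j setup,
    // swap the delegate for org.apache.log4j.Logger.getLogger(clazz).
    public class LogWrapper {

        private final Logger delegate;

        public LogWrapper(Class<?> clazz) {
            this.delegate = Logger.getLogger(clazz.getName());
        }

        // guard method, used to avoid costly message formatting (see guideline 4 below)
        public boolean isDebugEnabled() {
            return delegate.isLoggable(Level.FINE);
        }

        public void debug(String message) { delegate.fine(message); }
        public void info(String message)  { delegate.info(message); }
        public void warn(String message)  { delegate.warning(message); }
        public void error(String message) { delegate.severe(message); }
        public void fatal(String message) { delegate.severe("FATAL: " + message); }
    }
    ```

    Because callers only depend on LogWrapper, swapping the backend later means changing one class instead of every log statement.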

    Log4j Guidelines
    The FAQ of log4J is a must to read, here are the most important points:

    1. Respect Levels!
      Respect levels and categorize logs according to severity and message size. Define a special logger (restricted to a package) that can be switched off and that does not write too many statements to the log output.
    2. Meaningful statements
      Avoid littering code with System.err.println or System.out.println. If you are doing internal reviews of your code, try to write meaningful information in logs. Avoid logs of the type: "I am here", "here 1", "here 2" and so on.
    3. Class-wide static logger
      It is recommended to provide a class-wide logger access point if you need to do a lot of output in a class or hierarchy. Define a protected logger in the parent of the hierarchy
      public class Mammals {
          protected static LogWrapper logger = LogFactory.getLog(Mammals.class);
      }
      and use it in all children
      public class Human extends Mammals {

          public Human() {
              logger.debug("new Human()");
          }
      }
    4. Increase speed
      Log4j is not slow; it is even faster than System.out or System.err (System.err and System.out are synchronous while log4j is not). The cost in time comes mostly from formatting the messages!
      If you know that you must heavily format the output message, do not write the following:
      myLogger.debug("Cash balance is " + cashBalance.toXML());
      use instead
      if (myLogger.isDebugEnabled()) {
          myLogger.debug("Cash balance is " + cashBalance.toXML());
      }
    5. How to name Loggers?
      You can name loggers by locality. It turns out that instantiating a logger in each class, with the logger name equal to the fully-qualified name of the class, is a useful and straightforward approach of defining loggers. This approach has many benefits:
    • It is very simple to implement.
    • It is very simple to explain to new developers.
    • It automatically mirrors your application's own modular design.
    • It can be further refined at will.
    • Printing the logger automatically gives information on the locality of the log statement.

    However, this is not the only way for naming loggers. A common alternative is to name loggers by functional areas. For example, the "database" logger, "RMI" logger, "security" logger, or the "XML" logger. You are totally free in choosing the names of your loggers. The log4j package merely allows you to manage your names in a hierarchy. However, it is your responsibility to define this hierarchy. Note by naming loggers by locality one tends to name things by functionality, since in most cases the locality relates closely to functionality.

    Remote logging over TCP
    Read carefully: http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/net/SocketAppender.html

    Chainsaw
    Chainsaw is a graphical logging client where you can view, sort and filter log data.
    Documentation can be read here: http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/chainsaw/package-summary.html and it is part of log4j.jar

    Starting Chainsaw
    C:\jdk1.4.2\bin\java -Dchainsaw.port=5000 org.apache.log4j.chainsaw.Main
    1. Log4j gives you the ability to send messages to a remote location over a socket for logging purposes. The org.apache.log4j.net.SocketAppender and org.apache.log4j.net.SocketServer classes are the key classes used in remote logging.
    2. Modify all loggers in your log4j configuration to use a SocketAppender as appender. Once you have loaded this configuration, all messages will be written to the machine and port that you specify.
    3. Start the client application (Chainsaw); this program will receive logs and show them in a Swing GUI.
    Example of TCP appender in log4j.properties
    log4j.appender.remote=org.apache.log4j.net.SocketAppender
    log4j.appender.remote.RemoteHost=localhost
    log4j.appender.remote.Port=5000
    log4j.appender.remote.LocationInfo=true
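    For reference, the same appender in log4j.xml form (standard log4j 1.x XML syntax; host and port match the properties above):

    ```xml
    <appender name="remote" class="org.apache.log4j.net.SocketAppender">
        <param name="RemoteHost" value="localhost"/>
        <param name="Port" value="5000"/>
        <param name="LocationInfo" value="true"/>
    </appender>
    ```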

    On the server side (where your application creates logs), you will need to run log4j's SocketServer class. You can create a configuration file similar to the following, which puts the whole application in DEBUG mode.

    Example of socketserver.properties

    log4j.rootLogger=DEBUG, log1

    # log1 is set to be a file (the file path is an example)
    log4j.appender.log1=org.apache.log4j.FileAppender
    log4j.appender.log1.File=c:/temp/socketserver.log
    log4j.appender.log1.append=true
    log4j.appender.log1.layout=org.apache.log4j.PatternLayout
    log4j.appender.log1.layout.ConversionPattern=%p %t %c - %m%n
    1. Set up your CLASSPATH on both the client and the server to contain log4j.jar
    2. Run the SocketServer at the command line. The command line syntax for the SocketServer is as follows:
      java org.apache.log4j.net.SocketServer portNumber configurationFile configurationDirectory
    Start the server
    java org.apache.log4j.net.SocketServer 5000 C:\socketserver.properties C:\temp

    Start your application: without making any change to your code or recompiling it, you can now log data remotely!

    Configuring log4j

    Location of configuration file
    The configuration files of log4j must be in the classpath; if more than one is in the classpath, the first one found will be used. Log4j requires a compatible XML parser in the classpath in order to read the configuration file. By default, log4j uses crimson.jar.

    Location of DTD
    The DTD is needed in order to initialize log4j; 2 solutions are available:

    Public DTD: the file must be on the internet or on a network system path, but with a fixed path (URI)


    Extending log4j

    Defining your application specific loggers, appenders and layouts
    You can look at the Log4j API to see how to implement a logger, appender and layout.


    One of the strengths of log4j is that it does not require recompiling the Java code to change the amount of log output considerably. You can add logging statements in your code and, without changing the shipped code, change the amount of log output at runtime. Thus the major logging strategies are defined in this file (it can be a properties file or an XML file). You should store this file in the classpath of your application.


    Example of configuration files:

    Example of log4j.xml

    Example of log4j.properties
    # log4j properties
    # Documentation can be found at
    # There is no other documentation except forums; a commercial book is due (O'Reilly)
    # To permit reloading during runtime, the LogDecorator tests every 60s whether the file has changed
    # and updates the configuration of log4j if needed
    # Ascending priority: DEBUG < INFO < WARN < ERROR < FATAL
    # a log is visible only if its level >= the defined logger level
    # available layouts: DateLayout, HTMLLayout, PatternLayout, SimpleLayout, XMLLayout

    # Set root logger level to [FATAL|ERROR|WARN|INFO|DEBUG], and provide default appender

    log4j.rootLogger=DEBUG, stdout

    # define categories (their level [INHERITED|FATAL|ERROR|WARN|INFO|DEBUG] and appenders)
    # a category should be a fully qualified class name or a partial package name
    # Note that you inherit from the root logger unless otherwise specified (see the additivity flag)
    # additivity=true (default): all requests are also forwarded to the hierarchy
    # -> logs twice if the same appender is already in the hierarchy
    # additivity=false: do not forward to ancestor appenders
    # INHERITED can optionally be specified, which means that the named category should inherit
    # its priority from the category hierarchy. If you set the additivity flag to false,
    # you do not inherit the appenders

    log4j.category.com.waltercedric.account=INHERITED, log1

    log4j.category.com.waltercedric=DEBUG, log1

    # You can define as many appenders as you want

    # stdout is set to be a ConsoleAppender.
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender

    #see http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/PatternLayout.html
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%d %r [%t] %-5p %c - %m%n

    # log1 is set to be a file, rolled over by date

    # rollover each day at midnight, see the DailyRollingFileAppender class
    # (alternatively, roll over by size with RollingFileAppender)
    log4j.appender.log1=org.apache.log4j.DailyRollingFileAppender
    # the file path is an example
    log4j.appender.log1.File=myproject.log
    log4j.appender.log1.DatePattern='.'yyyy-MM-dd
    log4j.appender.log1.append=true
    # see http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/PatternLayout.html
    # %-4r [%t] %-5p %c %x - %m%n leads to: 331 [main] ERROR com.waltercedric.account - ClassCastException
    log4j.appender.log1.layout=org.apache.log4j.PatternLayout
    log4j.appender.log1.layout.ConversionPattern=%p %t %c - %m%n

    # e-mail logging
    # SMTPAppender stores all the logging events in an
    # internal cache and sends all the messages when
    # the TriggeringEventEvaluator you set with the
    # setEvaluator method or the constructor parameter returns true.
    # By default the evaluator is an instance of
    # DefaultEvaluator, which is a package-private class
    # defined in the same compilation unit as SMTPAppender.
    # This evaluator returns true only when the logging
    # event has a priority greater than or equal to ERROR.

    # the addresses below are placeholders (the originals were obfuscated)
    log4j.appender.email=org.apache.log4j.net.SMTPAppender
    log4j.appender.email.To=admin@example.com
    log4j.appender.email.From=noreply@example.com
    log4j.appender.email.Subject=A fatal error has occurred in your application
    #see http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/PatternLayout.html
    log4j.appender.email.layout=org.apache.log4j.PatternLayout
    log4j.appender.email.layout.ConversionPattern=%d{ABSOLUTE} (%F:%L) - %m%n

    # remote socket server logging
    # The SocketAppender has the following properties:
    # please read: http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/net/SocketAppender.html
    # If you want a server that listens, you can start the Chainsaw utility
    # (Swing GUI); read how at http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/chainsaw/package-summary.html
    # Chainsaw is a particular kind of log server!



  • Apache Maven 3 Cookbook


    First a big thanks to Packt Publishing for having sent me this book to review! I did enjoy going through this book; while I did not learn a lot of new stuff (I have been using Apache Maven daily since 2006!), I found it to be concise and would recommend it anytime to any of my colleagues. But let’s go through my review of this cookbook of over 50 recipes towards optimal Java software engineering with Maven 3:

    Apache Maven 3 Cookbook is a clear, precise, well-written book that gives readers clear recipes for the release process using Apache Maven 3. The authors give a step-by-step account of expectations and hurdles for software development.

    The first few chapters quickly bring you to the point of being comfortable using Maven on straightforward projects, and the later chapters provide even more recipe examples on subjects like running a repository manager, writing plugins, and details of various techniques. The book also covers numerous real-world software delivery issues such as multi-module projects, web/enterprise projects, dependency management, automatic testing and documentation.

    To sum up the key points of this 224-page book in a few bullets:

  • Chapter 1: Basics of Apache Maven: Setting up Apache Maven on Windows/Linux/Mac, Creating a new project, Understanding the Project Object Model, build lifecycle and build profiles,
  • Chapter 2: Software Engineering Techniques: Build automation, modularization, Dependency management, Source code quality check, Test Driven Development (TDD), Acceptance testing automation and Deployment automation,
  • Chapter 3: Agile Team Collaboration: Creating centralized remote repositories, Performing continuous integration with Hudson, Integrating source code management, Team integration with Apache Maven, Implementing environment integration, Distributed development and Working in offline mode,
  • Chapter 4: Reporting and Documentation: javadocs, unit tests, coverage reports and Maven dashboard setup,
  • Chapter 5: Java Development with Maven: Java web application, J2EE, Spring, Hibernate and JBoss SEAM development,
  • Chapter 6: Google Development with Maven: Android and GWT (Google Web Toolkit), Google App Engine deployment,
  • Chapter 7: Scala, Groovy, and Adobe Flex
  • Chapter 8: IDE Integration
  • Chapter 9: Extending Apache Maven: creating plugins using Java, Apache ANT or Ruby,
  • The author, Srirangan, goes into detail in describing each of these themes.

    I recommend you this book if:

  • You need to learn Apache Maven quickly: you can go through the recipes and examples and come away with a good knowledge of Maven.
  • You are implementing Apache Maven for the first time in your development process and feel a bit lost because of the lack of clear examples that just run.
  • You want to use proven solutions to real, common engineering challenges: this book will save you a lot of time!

    If you want to be able to deliver your software to any target environment using continuous delivery processes, chances are high that Apache Maven is the right tool for the job, and this book should be part of your technical library, besides of course the free online Sonatype book Maven: The Complete Reference.

  • Apache Maven 3 Cookbook Review

    Thanks to Packt Publishing for having sent me this book to review. I will publish a review in the coming days.

    • Grasp the fundamentals and extend Apache Maven 3 to meet your needs
    • Implement engineering practices in your application development process with Apache Maven
    • Collaboration techniques for Agile teams with Apache Maven
    • Use Apache Maven with Java, Enterprise Frameworks, and various other cutting-edge technologies
    • Develop for Google Web Toolkit, Google App Engine, and Android Platforms using Apache Maven


    You may also consider reading all my articles related to Apache Maven

  • Apache Maven Archetype for Joomla

    Got this email from Cyprian Sniegota: he developed a Maven archetype to ease the development of Joomla extensions. His archetype currently supports the creation of skeletons for components, modules, plugins and templates.

    I noticed some time ago that you described combination of Joomla! and Maven. Few weeks ago i wrote joomla-maven-plugin with skeleton projects (sources: bitbucket.org/deviapps) based on php-maven.org work.
    Here is short description http://deviapps.com/create-joomla-extension-with-maven and 5 min video (in Polish so far) http://www.youtube.com/watch?v=aE8w9EZciTg
    I hope you will be interested.

    Thanks to him for having written this project. I will also try to Maven-ize what Joomla has done with Ant in the future (I now prefer a crystal-clear software lifecycle).

  • Apache Maven BEA Weblogic 10.3 remote deployment


    In this small post I will show you how to deploy some artifacts of your build automatically into BEA Weblogic 10.3 by using the weblogic-maven-plugin.

    This plugin supports various tasks within the Weblogic 8.1 and 9.x environments. Tasks such as deploy, undeploy, clientgen, servicegen, and appc are supported, as well as many others. The plugin uses exposed APIs that are subject to change but have been tested in 8.1 SP4-6 and 9.0-9.2 MP3. There are two versions of the plugin to support the two environments, based on differences in the JDK. The 9.x version is currently being refactored to support the standard JSR-supported deployment interface.
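    A sketch of a possible configuration (the coordinates are those of the Codehaus Mojo plugin; all connection values are examples, and the parameter names should be checked against the weblogic-maven-plugin documentation for your plugin version):

    ```xml
    <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>weblogic-maven-plugin</artifactId>
        <configuration>
            <!-- all values below are examples, adapt them to your server -->
            <adminServerHostName>localhost</adminServerHostName>
            <adminServerPort>7001</adminServerPort>
            <userId>weblogic</userId>
            <password>weblogic</password>
            <upload>true</upload>
            <remote>true</remote>
            <name>myapp</name>
            <source>${project.build.directory}/${project.build.finalName}.war</source>
        </configuration>
        <executions>
            <execution>
                <!-- deploy the artifact right before integration tests run -->
                <phase>pre-integration-test</phase>
                <goals>
                    <goal>deploy</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
    ```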

  • Apache Maven books


    Questions for the official certification.

     JavaBlackBelt is a community for Java & open source skills assessment. It is dedicated to technical quizzes about Java related technologies. This is the place where Java developers have their technology knowledge and development abilities recognized. Everybody is welcome to take existing and build new exams.



    Maven: The Definitive Guide (Readable HTML alpha release)

    Better Builds with Maven (Free PDF)

    • Covers: Maven 2.0.4
    • Publisher: DevZuz
    • Published: March 2006
    • Authors: John Casey, Vincent Massol, Brett Porter, Carlos Sanchez

      Better Builds with Maven is a comprehensive 'How-to' guide for using Maven 2.0 to better manage the build, test and release cycles associated with software development. The chapters include:

      • An introduction to Maven 2.0
      • Creating, compiling and packaging your first project
      • Best practices and real-world examples
      • Building J2EE Applications
      • Extending builds by creating your own Maven plugins
      • Monitoring the health of source code, testing, dependencies and releases
      • Team collaboration and utilising Continuum for continuous integration
      • Converting existing Ant builds to Maven

    Maven: A Developer's Notebook




  • Apache Maven Cargo deploy with Tomcat 7



    Following the post about deploying to Tomcat 6 using Maven, here is a ready-to-use example, with the main differences explained in the table below

                              Tomcat 7                                Tomcat 6
    containerId               <containerId>tomcat7x</containerId>     <containerId>tomcat6x</containerId>
    URL of Tomcat manager     <cargo.remote.uri>                      <cargo.tomcat.manager.url>
    example                   http://host..com/manager/text/          http://host..com/manager/

    Tomcat 7 tomcat-users.xml:
    <role rolename="manager-gui"/>
    <role rolename="manager-script"/>
    <role rolename="manager-jmx"/>
    <role rolename="manager-status"/>
    <user username="admin" password="admin" roles="manager-gui,manager-script"/>

    Tomcat 6 tomcat-users.xml:
    <role rolename="manager"/>
    <user username="admin" password="admin" roles="manager"/>

    And finally a snippet of an Apache Maven pom.xml, ready to use in a profile so you can reuse the profile like a method call.
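    A sketch of such a profile, based on the Tomcat 7 values from the table above (the profile id and credentials are illustrative; deployable.artifactid and deployable.context are the properties described below; check the Cargo documentation for your plugin version):

    ```xml
    <profile>
        <id>deployTomcat</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.codehaus.cargo</groupId>
                    <artifactId>cargo-maven2-plugin</artifactId>
                    <configuration>
                        <container>
                            <containerId>tomcat7x</containerId>
                            <type>remote</type>
                        </container>
                        <configuration>
                            <type>runtime</type>
                            <properties>
                                <!-- example values, usually supplied by a server profile in settings.xml -->
                                <cargo.remote.uri>http://localhost:8080/manager/text</cargo.remote.uri>
                                <cargo.remote.username>admin</cargo.remote.username>
                                <cargo.remote.password>admin</cargo.remote.password>
                            </properties>
                        </configuration>
                        <deployables>
                            <deployable>
                                <groupId>${project.groupId}</groupId>
                                <artifactId>${deployable.artifactid}</artifactId>
                                <type>war</type>
                                <properties>
                                    <context>${deployable.context}</context>
                                </properties>
                            </deployable>
                        </deployables>
                    </configuration>
                    <executions>
                        <execution>
                            <phase>pre-integration-test</phase>
                            <goals>
                                <goal>redeploy</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
    ```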


    Place as many profiles as you have machines to deploy to in settings.xml, and declare some variables as properties, as shown below:

        <!-- these properties must be defined
           as system property or -D -->
        <!-- - deployable.artifactid:
             artifactId of web application to be deployed -->
        <!-- - deployable.context: web context name -->

    So you can run, and target multiple hosts, by just exchanging the profile name serverA for something else.

    mvn integration-test -PdeployTomcat,serverA
  • Apache Maven copy local file to a remote server using SSH

    I will show you in an Apache Maven configuration file how to copy files to server each time the package phase is executed.

    Solution with Ant SCP task

    This snippet is ready-to-use code that makes use of the Apache Ant scp task. Just put it in the Maven module where the assembly is executed (or anywhere else) to push all tar.gz files to a server when you run mvn package. You can add as many Ant tasks as you like and push the same file to many servers during the reactor build.

                <echo message="Push to server /home/"/>
                <!-- the todir value is an example: define scp.user/scp.password/scp.host yourself -->
                <scp trust="yes" todir="${scp.user}:${scp.password}@${scp.host}:/home">
                    <fileset dir="${basedir}/target">
                        <include name="**/*.tar.gz"/>
                    </fileset>
                </scp>
    Solution with maven-deploy-plugin

    The maven-deploy-plugin allows you to configure the deploy phase to deploy to a server using scp. There is a page in the documentation that describes how it can be done.

    Deploy maven artifact using Maven Wagon SCP

    Another alternative would be to use Maven Wagon SCP like described in this post for example
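    A sketch of the Wagon SCP route (the version number, host and path are examples): register the wagon-ssh extension, then point distributionManagement at an scp:// URL so that mvn deploy copies artifacts over SSH:

    ```xml
    <build>
        <extensions>
            <!-- enables the scp:// protocol for deployment -->
            <extension>
                <groupId>org.apache.maven.wagon</groupId>
                <artifactId>wagon-ssh</artifactId>
                <version>2.10</version>
            </extension>
        </extensions>
    </build>

    <distributionManagement>
        <repository>
            <id>ssh-repository</id>
            <!-- example host and path -->
            <url>scp://server.example.com/home/maven/repo</url>
        </repository>
    </distributionManagement>
    ```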


  • Apache Maven profiles order in multi modules projects


    In which order are Apache Maven profiles executed? Are Apache Maven profiles ordered? How can you ensure that Apache Maven profiles are activated in the right order?

    You normally don’t end up with these questions; issues may only appear if:

    • some profiles depend on each other,
    • some profiles cannot run in an arbitrary order.

    The use case behind this article is very simple, as I have a continuous build where:

    • 5 web applications have to be deployed into a remote tomcat in phase pre-integration-test,
    • 2 databases are created for test cases in phase generate-test-resources,
    • 1 more database is created and needed at runtime, done in phase pre-integration-test,
    • one of these web applications is able to inject data into the database using web services; a profile does this in phase pre-integration-test,
    • Selenium test cases are run in phase integration-test.

    All these steps are done using several Apache Maven pom profiles.

    As it is a bit complicated to explain, let’s first refresh some Apache Maven concepts.

    Apache Maven Goals

    First you’ll have to keep in mind the Apache Maven lifecycle of modules, 21 phases out of the box:

    • validate: validate the project is correct and all necessary information is available
    • generate-sources: generate any source code for inclusion in compilation    
    • process-sources: process the source code, for example to filter any values    
    • generate-resources: generate resources for inclusion in the package    
    • process-resources: copy and process the resources into the destination directory, ready for packaging  
    • compile: compile the source code of the project    
    • process-classes: post-process the generated files from compilation, for example to do byte code enhancement on Java classes    
    • generate-test-sources: generate any test source code for inclusion in compilation
    • process-test-sources: process the test source code, for example to filter any values    
    • generate-test-resources: create resources for testing
    • process-test-resources: copy and process the resources into the test destination directory    
    • test-compile: compile the test source code into the test destination directory    
    • test: run tests using a suitable unit testing framework. These tests should not require the code be packaged or deployed
    • prepare-package: perform any operations necessary to prepare a package before the actual packaging. This often results in an unpacked, processed version of the package
    • package: take the compiled code and package it in its distributable format, such as a JAR
    • pre-integration-test: perform actions required before integration tests are executed. This may involve things such as setting up the required environment
    • integration-test: process and deploy the package if necessary into an environment where integration tests can be run (selenium test cases for example)
    • post-integration-test: perform actions required after integration tests have been executed. This may include cleaning up the environment
    • verify: run any checks to verify the package is valid and meets quality criteria
    • install: install the package into the local repository, for use as a dependency in other projects locally
    • deploy: the code is deployed in an artifact repository or copied with ftp/scp for distribution

    if you run the goal compile

    mvn compile

    on a simple multi-module project, EVERY module, one after the other, will go through these phases:
    validate –> generate-sources –> process-sources –> generate-resources –> process-resources –> compile

    Apache Maven reactor

    The reactor is the part of Apache Maven that allows executing a goal on a set of modules. As mentioned in the Apache Maven 1.x documentation on multi-module builds, while modules are discrete units of work, they can be gathered together using the reactor to build them simultaneously, and:

    The reactor determines the correct build order from the dependencies stated by each project in their respective project descriptors, and will then execute a stated set of goals. It can be used for both building projects and other goals, such as site generation.

    The reactor is what makes multi-module builds possible: it computes the oriented graph of dependencies between modules, derives the build order from this graph and then executes goals on the modules. In other words, a "multi-modules build" is a "reactor build" and a "reactor build" is a "multi-modules build".

    A simple multi-module project

    For the sake of the example, it has module and profile dependencies, in myProject/pom.xml


    or if you prefer the directory layout

        |_ pom.xml
        |_ common
        |_ services
        |_ remoting
        |_ web
        |_ monitoring

    Let’s assume also that I would like to apply a list of profiles named

    • deployWeb, deploy the war module using cargo to a running tomcat instance
    • createDatabase, create a mysql database from scratch
    • runSelenium, run selenium test in phase integration test against web, assume database is created first
    • deployMonitoring, deploy the war module using cargo to a running tomcat instance, querying the web application at startup to get some info.

    Maven calculates the module order in the reactor based on dependencies, as seen in the log file after running

    mvn compile

    [INFO] Reactor build order:
    Unnamed - com.waltercedric:myproject:pom:0.0.1-SNAPSHOT
    Unnamed - com.waltercedric:common:jar:0.0.1-SNAPSHOT
    Unnamed - com.waltercedric:services:jar:0.0.1-SNAPSHOT
    Unnamed - com.waltercedric:remoting:ear:0.0.1-SNAPSHOT
    Unnamed - com.waltercedric:web:war:0.0.1-SNAPSHOT
    Unnamed - com.waltercedric:monitoring:war:0.0.1-SNAPSHOT


    It starts to get complicated when you provide a list of profiles on the Apache Maven command line like this

    mvn post-integration-test -PdeployWeb,createDatabase,runSelenium,deployMonitoring

    Chances are high that profiles will get executed in the wrong order, too early or too late..

    Rule #1 profiles are activated (if found) following reactor modules order

    The first rule is that profiles are activated in module reactor order first, if myProject is first it will go through all 18 phases of  Apache Maven (from validate to post-integration-test in my example). Keep in mind also that the list of profiles will be applied to EVERY modules in EVERY phase starting at the top most module in reactor.

    • On module myproject:
      • Apache Maven will activate the profiles deployWeb, createDatabase, runSelenium and deployMonitoring if one or more from the list are present in myproject/pom.xml
    • On module common:
      • Apache Maven will activate the profiles deployWeb, createDatabase, runSelenium and deployMonitoring if one or more from the list are present in common/pom.xml
    • and so on…
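To check which of the requested profiles are actually active in each module, the Maven help plugin can be used. A quick sketch, assuming Maven is on the path and using the profile names from this example:

```shell
# Print, per module of the reactor, the profiles that are currently active
# (run from the myProject root directory):
mvn help:active-profiles -PdeployWeb,createDatabase,runSelenium,deployMonitoring
```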

    Rule #2 Reactor module order “may” be changed

    And now the tricky part: you normally can NOT change the module order in the reactor, that's OK, but…

    The order you define in myProject/pom.xml in <modules> (module aggregation) is still kept if the Maven dependency resolver doesn't see any problem with it.

    Not clear enough? Look at the two examples below:

    Running mvn post-integration-test can give either of these reactor build orders (seen in the logs), depending on myProject/pom.xml:

    Case 1:

    1. myProject
    2. common
    3. services
    4. remoting
    5. web
    6. monitoring

    Maven adapts the order based on the oriented graph of dependencies between modules.

    Case 2:

    1. myProject
    2. common
    3. services
    4. remoting
    5. monitoring
    6. web

    Swapping modules that have no direct connection to each other and no conflicting dependencies with other modules may result in a different reactor order, and therefore a different profile execution order!

    Since Apache Maven has detected that the modules monitoring and web have no connection, it accepts the “human/natural” order found in myproject/pom.xml.

    You may have to use this technique to distribute your profiles across pom.xml files while still keeping the profile execution order under control.

    Rule #3 Maven profile order is not taken from the command line

    The order of profiles in the Apache Maven -P command-line list is not taken into account; running

    mvn post-integration-test -PdeployWeb,createDatabase,runSelenium,deployMonitoring

    is equivalent to

    mvn post-integration-test -PcreateDatabase,deployMonitoring,deployWeb,runSelenium



    It is a good thing, as a command-line order would simply make no sense across all modules and all Maven phases combined.

    Rule #4 You can force profiles to run in an order if you SORT them accordingly in ONE pom.xml

    Apache Maven recommends placing profiles into the module where they act.

    If I want to ensure that the profiles deployWeb and createDatabase run before the profile runSelenium, I have to keep that order in the pom.xml, even if these profiles act in different Maven phases:

    • createDatabase may run in phase generate-test-resources
    • deployWeb runs in phase pre-integration-test
    • runSelenium runs in phase integration-test

    Considering the module ordering in the reactor, a good pom.xml candidate could be web/pom.xml.






  • Apache POI Speed Optimizations

    The Apache POI Project's mission is to create and maintain Java APIs for manipulating various file formats based upon the Office Open XML standards (OOXML) and Microsoft's OLE 2 Compound Document format (OLE2). In short, you can read and write MS Excel files using Java. In addition, you can read and write MS Word and MS PowerPoint files using Java. Apache POI is your Java Excel solution (for Excel 97-2008). We have a complete API for porting other OOXML and OLE2 formats and welcome others to participate.

    Switch Off logging

    From the documentation at http://poi.apache.org/utils/logging.html

    Logging in POI is used only as a debugging mechanism, not a normal runtime logging system. Logging is ONLY for autopsy type debugging, and should NEVER be enabled on a production system. Enabling logging will reduce performance by at least a factor of 100. If you are not developing POI or trying to debug why POI isn't reading a file correctly, then DO NOT enable logging. You've been warned.

    In order to effectively disable the logging functionality in Apache POI you must use an alternative logger. This is accomplished by providing a property to the POILogFactory to override the default logger. You can add one of these -D switches to your JVM settings:
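As a sketch, assuming a POI 3.x runtime: the logger override is passed as a JVM system property. NullLogger ships with POI itself, while the commons-logging route uses POI's CommonsLogger plus the NoOpLog implementation; the application jar name below is a placeholder.

```shell
# Option 1: POI's built-in no-op logger
java -Dorg.apache.poi.util.POILogger=org.apache.poi.util.NullLogger -jar yourapp.jar

# Option 2: route POI logging through commons-logging, then silence it with NoOpLog
java -Dorg.apache.poi.util.POILogger=org.apache.poi.util.CommonsLogger \
     -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.NoOpLog \
     -jar yourapp.jar
```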


    I found Apache POI to perform slightly better with the NoOpLog of Apache Commons!

    Recompile POI with better-adapted settings

    You can create a custom build of Apache POI 3.8 and alter the following properties to better match the size of the Excel files you are generating or reading:

  • org.apache.poi.hssf.usermodel.HSSFRow#INITIAL_CAPACITY=5;
  • org.apache.poi.hssf.usermodel.HSSFSheet#INITIAL_CAPACITY= 20;    // used for compile-time optimization.  This is the initial size for the collection of rows.  It is currently set to 20.  If you generate larger sheets you may benefit by setting this to a higher number and recompiling a custom edition of HSSFSheet.
  • org.apache.poi.hssf.usermodel.HSSFWorkbook#INITIAL_CAPACITY=3;  // used for compile-time performance/memory optimization.  This determines the initial capacity for the sheet collection.  It is currently set to 3. Changing it in this release will decrease performance since you're never allowed to have more or less than three sheets!
  • http://poi.apache.org/apidocs/org/apache/poi/hssf/usermodel/HSSFWorkbook.html#INITIAL_CAPACITY

    Don’t use xlsx, prefer xls!

    This will only work if you do not hit the xls limitations, which may keep you from using this extreme solution. XLS is not compressed (XLSX is XML-based and compressed), so your workbook may double in size as a result!

    For example, data beyond 256 (IV) columns by 65,536 rows will not be saved in xls! In Excel 2010 and Excel 2007 the worksheet size is 16,384 columns by 1,048,576 rows, but the worksheet size of Excel 97-2003 is only 256 columns by 65,536 rows; data in cells outside these limits is lost in Excel 97-2003. There are a lot more limitations listed at office.com.
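To put those limits in perspective, here is a quick back-of-the-envelope computation of the total addressable cells per worksheet, using the row/column figures quoted above:

```shell
# Total addressable cells per worksheet, xls vs xlsx
awk 'BEGIN { printf "xls:  %.0f cells\n", 256 * 65536 }'
awk 'BEGIN { printf "xlsx: %.0f cells\n", 16384 * 1048576 }'
```

So xlsx can address roughly a thousand times more cells, which is why falling back to xls is only viable for small to medium workbooks.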

    The biggest side effect was that my Excel file went from 354kb to 967kb, but the speed increase was quite interesting: more than 44% less evaluation time.

    Small localized optimization

    I don't think these bring a lot of speed (the JIT should optimize this bad piece of code for us), but it is always worth trying: Speeding up org.apache.poi.hssf.usermodel.HSSFRow.compareTo(), see http://affy.blogspot.ch/2004/04/poi-optimization-speeding-_108265938673224937.html

  • Auto deployment of Maven artifacts to Oracle Weblogic


    This time I found a new way to deploy Maven artifacts using the Oracle Weblogic Ant API!

    If you remember my previous post, there are many ways to deploy your war/ear to Oracle Weblogic:

    1. Using Oracle Weblogic development mode, a mode in which a simple copy of your files into a specific autodeploy directory triggers their update/installation
    2. Using Maven Cargo; this works only if your Oracle Weblogic container is local (see here), on the same machine where Apache Maven is running
    3. Using a very old Maven plugin (2008); local and remote containers are supported, but our builds were sometimes hanging during the pre-integration phase for no apparent reason.

    And now, using the official Ant API of Oracle: by far the MOST stable of all!

  • Avoid Hotlinking or so called bandwidth stealing

    From WikiPedia

    Inline linking (also known as hotlinking, leeching, piggy-backing, direct linking, offsite image grabs and bandwidth theft) is the use of a linked object, often an image, from one site into a web page belonging to a second site. The second site is said to have an inline link to the site where the object is located.

    This is not just bandwidth stealing, as:

    • It costs CPU and bandwidth, which means less performance for your visitors,
    • It costs a lot of money, as you still pay the server cost and lose ad revenue,
    • It drives people away from your reputable homepage, since they will find your pictures or files on any mirror,
    • It may be a security threat, at least for distributable software: anybody may alter (backdoor, ads, privacy-stealing code) any of my open source components without my consent.

    The mod_rewrite module is able to intercept incoming URLs and modify them according to a set of rules that you specify. The basic idea is use the mod_rewrite module to inspect the incoming HTTP header. The field we're looking for is the Referer field - or basically the URL that the current request originated from.


    This optional header field allows the client to specify, for the server's benefit, the address ( URI ) of the document (or element within the document) from which the URI in the request was obtained.
    This allows a server to generate lists of back-links to documents, for interest, logging, etc. It allows bad links to be traced for maintenance.

    So create a file .htaccess at the root of your site with the following content:

    RewriteEngine on
    RewriteCond %{HTTP_REFERER} !^$
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?waltercedric.com(/)?.*$     [NC]
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?wiki.waltercedric.com(/)?.*$     [NC]
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?forums.waltercedric.com(/)?.*$     [NC]
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?bugs.waltercedric.com(/)?.*$     [NC]
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?demo.waltercedric.com(/)?.*$     [NC]
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?demo2.waltercedric.com(/)?.*$     [NC]
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?mirror.waltercedric.com(/)?.*$     [NC]
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?images.google.com(/)?.*$     [NC]
    RewriteRule .*\.(jpg|jpeg|gif|png|bmp|zip|css)$ http://www.waltercedric.com/bandwidthStealing.html [R,NC]


    • I want to allow cross-linking between all my subdomains (wiki, demo, bugs, forums...), so I have a bigger list of allowed Referers than usual,
    • I do not allow hotlinking of the following resources for obvious reasons: jpg|jpeg|gif|png|bmp|zip|css
    • I redirect anybody hotlinking to a fixed file on disk: http://www.waltercedric.com/bandwidthStealing.html
    • You are allowed to copy the template http://www.waltercedric.com/bandwidthStealing.html as long as you keep the bottom link.
    • Note the last RewriteCond: I always allow Google to reference my images
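The allow-list logic above can be sketched outside Apache with grep -E and the same kind of Referer pattern. This is a simplified, hypothetical check covering only the main domain (the real rules are negated RewriteConds, one per subdomain):

```shell
# Simplified stand-in for the RewriteCond allow-list: a Referer matching the
# pattern is allowed, anything else would be redirected by the RewriteRule.
allowed='^http://(www\.)?waltercedric\.com(/)?'
for ref in "http://www.waltercedric.com/page" "http://evil.example.com/steal"; do
  if printf '%s\n' "$ref" | grep -qE "$allowed"; then
    echo "$ref -> allowed"
  else
    echo "$ref -> blocked"
  fi
done
```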

    There is a useful online generator, with a lot more explanation, at the bottom of this page: http://www.htmlbasix.com/disablehotlinking.shtml . This has been active on my server for two weeks, and I've seen an improvement in response time.

    More tips 

    • To get insight into resource stealing in near real time, simply put a statistics marker (for example Google Analytics) on that page to see how many people land on it per week or month!
    • To generate money (better than nothing), don't forget to put advertisements on your redirect hotlinking page
  • Behavior Driven Development with JBehave and Apache Maven



    I won't explain how to write JBehave tests, as the online documentation is more than complete.

    I prefer to show you how to make them run in Eclipse and in Apache Maven, as the examples were not easy to run (scenarios are wrongly placed in src/main/java).


    JBehave is a framework for Behaviour-Driven Development
    Behaviour-driven development (BDD) is an evolution of test-driven development (TDD) and acceptance-test driven design, and is intended to make these practices more accessible and intuitive to newcomers and experts alike.
    It shifts the vocabulary from being test-based to behaviour-based, and positions itself as a design philosophy.
    You can find out more about behaviour-driven development on the BDD wiki, or in the article Introducing BDD

    Features of JBehave include:

    • Pure Java implementation, which plays well with Java-based enterprises.
    • Users can specify and run text-based user stories, which allows "out-in" development.
    • Annotation-based binding of textual steps to Java methods, with auto-conversion of string arguments to any parameter type (including generic types) via custom parameter converters.
    • Annotation-based configuration and Steps class specifications
    • Dependency Injection support allowing both configuration and Steps instances composed via your favourite container (Guice, PicoContainer, Spring).
    • Extensible story reporting: outputs stories executed in different human-readable file-based formats (HTML, TXT, XML). Fully style-able view.
    • Auto-generation of pending steps so the build is not broken by a missing step, but has option to configure breaking build for pending steps.
    • Localisation of user stories, allowing them to be written in any language.
    • IDE integration: stories can be run as JUnit tests or other annotation-based unit test frameworks, providing easy integration with your favourite IDE.
    • Ant integration: allows stories to be run via Ant task
    • Maven integration: allows stories to be run via Maven plugin at given build phase

    To make the online sample run easily without having to check out the whole JBehave tree, I will show you that by slightly altering the pom.xml of a sample (Trader), you can run it against a fixed version of JBehave.

    The whole pom.xml

  • Benchmarking your LAMP server


    The acronym LAMP refers to a solution stack of software, usually free and open source software, used to run dynamic Web sites or servers. It stands for:

    • Linux, for the operating system;
    • Apache, the Web server;
    • MySQL, the database management system (or database server);
    • Perl, Python, and PHP, the programming languages.

    ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving.
    The apache-utils package contains utility programs for web servers and some add-on programs useful for any web server. These include:

    • ab (Apache benchmark tool)
    • Logresolve (Resolve IP addresses to hostname in logfiles)
    • htpasswd (Manipulate basic authentication files)
    • htdigest (Manipulate digest authentication files)
    • dbmmanage (Manipulate basic authentication files in DBM format, using perl)
    • htdbm (Manipulate basic authentication files in DBM format, using APR)
    • rotatelogs (Periodically stop writing to a logfile and open a new one)
    • split-logfile (Split a single log including multiple vhosts)
    • checkgid (Checks whether the caller can setgid to the specified group)
    • check_forensic (Extract mod_log_forensic output from apache log files)

    The apache-utils package can be installed through apt or YaST, depending on whether you are using a Debian-based distro or openSUSE.
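As a sketch, the package providing ab is commonly named apache2-utils on both families (the exact package name may differ per release):

```shell
# Debian-based distro:
apt-get install apache2-utils
# openSUSE (command-line counterpart of YaST software management):
zypper install apache2-utils
```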


    • Define realistic objectives; do not create too many virtual clients if you do not usually have that kind of user traffic.
    • An objective could be, for example, the number of users served, or the percentage of requests served within a certain time.
    • ab does not simulate realistic user behavior; it just hits a page, without being able to simulate a complex workflow (like logging in, navigating and doing the things users usually do).
    • Monitor at the same time the CPU/memory consumed, in order not to make false assumptions about Apache settings (use top -d 1).


    It is an iterative process!

    1. Benchmark,
    2. Change settings and
    3. Restart benchmark.

    It is very important to change only one setting at a time, in order to identify what really brings an improvement. By changing only one setting at a time:

    • You can better see its influence on CPU and memory (you must also watch resources: a server swapping to disk is never good),
    • Keep in mind there are not many universal settings that bring a speed kick (except turning DNS lookups off and keeping keep-alive small); most settings depend on your Linux kernel version, CPU class, disk speed and network latency.

    Other components

    While tuning Apache, you will see that most of the time is actually spent in PHP/MySQL; for MySQL, I recommend running tuning-primer.sh at the same time, read more here


    ab [ -A auth-username:password ] [ -c concurrency ] [ -C cookie-name=value ] [ -d ] [ -e csv-file ] [ -g gnuplot-file ] [ -h ] [ -H custom-header ] [ -i ] [ -k ] [ -n requests ] [ -p POST-file ] [ -P proxy-auth-username:password ] [ -q ] [ -s ] [ -S ] [ -t timelimit ] [ -T content-type ] [ -v verbosity] [ -V ] [ -w ] [ -x <table>-attributes ] [ -X proxy[:port] ] [ -y <tr>-attributes ] [ -z <td>-attributes ] [http://]hostname[:port]/path


    -A auth-username:password
    Supply BASIC Authentication credentials to the server. The username and password are separated by a single : and sent on the wire base64 encoded. The string is sent regardless of whether the server needs it (i.e., has sent an 401 authentication needed).
    -c concurrency
    Number of multiple requests to perform at a time. Default is one request at a time.
    -C cookie-name=value
    Add a Cookie: line to the request. The argument is typically in the form of a name=value pair. This field is repeatable.
    -d
    Do not display the "percentage served within XX [ms] table". (legacy support).
    -e csv-file
    Write a Comma separated value (CSV) file which contains for each percentage (from 1% to 100%) the time (in milliseconds) it took to serve that percentage of the requests. This is usually more useful than the 'gnuplot' file; as the results are already 'binned'.
    -g gnuplot-file
    Write all measured values out as a 'gnuplot' or TSV (Tab separate values) file. This file can easily be imported into packages like Gnuplot, IDL, Mathematica, Igor or even Excel. The labels are on the first line of the file.
    -h
    Display usage information.
    -H custom-header
    Append extra headers to the request. The argument is typically in the form of a valid header line, containing a colon-separated field-value pair (i.e., "Accept-Encoding: zip/zop;8bit").
    -i
    Do HEAD requests instead of GET.
    -k
    Enable the HTTP KeepAlive feature, i.e., perform multiple requests within one HTTP session. Default is no KeepAlive.
    -n requests
    Number of requests to perform for the benchmarking session. The default is to just perform a single request which usually leads to non-representative benchmarking results.
    -p POST-file
    File containing data to POST.
    -P proxy-auth-username:password
    Supply BASIC Authentication credentials to a proxy en-route. The username and password are separated by a single : and sent on the wire base64 encoded. The string is sent regardless of whether the proxy needs it (i.e., has sent an 407 proxy authentication needed).
    -q
    When processing more than 150 requests, ab outputs a progress count on stderr every 10% or 100 requests or so. The -q flag will suppress these messages.
    -s
    When compiled in (ab -h will show you) use the SSL protected https rather than the http protocol. This feature is experimental and very rudimentary. You probably do not want to use it.
    -S
    Do not display the median and standard deviation values, nor display the warning/error messages when the average and median are more than one or two times the standard deviation apart. And default to the min/avg/max values. (legacy support).
    -t timelimit
    Maximum number of seconds to spend for benchmarking. This implies a -n 50000 internally. Use this to benchmark the server within a fixed total amount of time. Per default there is no timelimit.
    -T content-type
    Content-type header to use for POST data.
    -v verbosity
    Set verbosity level - 4 and above prints information on headers, 3 and above prints response codes (404, 200, etc.), 2 and above prints warnings and info.
    -V
    Display version number and exit.
    -w
    Print out results in HTML tables. Default table is two columns wide, with a white background.
    -x <table>-attributes
    String to use as attributes for <table>. Attributes are inserted <table here >.
    -X proxy[:port]
    Use a proxy server for the requests.
    -y <tr>-attributes
    String to use as attributes for <tr>.
    -z <td>-attributes
    String to use as attributes for <td>.

    Some real examples

    time /usr/sbin/ab2 -n 500 -c 30 http://www.waltercedric.com
    This will make 500 requests, 30 at a time, against www.waltercedric.com

    After tuning:

    Benchmarking www.waltercedric.com
    Completed 100 requests
    Completed 200 requests
    Completed 300 requests
    Completed 400 requests
    Finished 500 requests
    Server Software:        NOYB
    Server Hostname:        www.waltercedric.com
    Server Port:            80
    Document Path:          /index.php
    Document Length:        45532 bytes
    Concurrency Level:      30
    Time taken for tests:   38.576375 seconds
    Complete requests:      500
    Failed requests:        19 
       (Connect: 0, Length: 19, Exceptions: 0)
    Write errors:           0
    Total transferred:      23000106 bytes
    HTML transferred:       22762106 bytes
    Requests per second:    12.96 [#/sec] (mean)
    Time per request:       2314.582 [ms] (mean)
    Time per request:       77.153 [ms] (mean, across all concurrent requests)
    Transfer rate:          582.25 [Kbytes/sec] received
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    8  36.9      0     207
    Processing:   394 2239 345.3   2237    6223
    Waiting:      379 2197 340.9   2190    6173
    Total:        397 2247 344.2   2239    6223
    Percentage of the requests served within a certain time (ms)
      50%   2239
      66%   2294
      75%   2327
      80%   2357
      90%   2457
      95%   2560
      98%   2973
      99%   3341
    100%   6223 (longest request)
    real    0m38.617s
    user    0m0.024s
    sys     0m0.240s

    Before tuning:

    Benchmarking www.waltercedric.com
    Completed 100 requests
    Completed 200 requests
    Completed 300 requests
    Completed 400 requests
    Finished 500 requests

    Server Software:        NOYB
    Server Hostname:        www.waltercedric.com
    Server Port:            80

    Document Path:          /index.php
    Document Length:        45532 bytes

    Concurrency Level:      30
    Time taken for tests:   108.897481 seconds
    Complete requests:      500
    Failed requests:        19
       (Connect: 0, Length: 19, Exceptions: 0)
    Write errors:           0
    Total transferred:      23000106 bytes
    HTML transferred:       23000106 bytes
    Requests per second:    4.59 [#/sec] (mean)
    Time per request:       6533.849 [ms] (mean)
    Time per request:       217.795 [ms] (mean, across all concurrent requests)
    Transfer rate:          178.41 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0  114 478.9      0    2276
    Processing:   336 6186 1665.2   6108   16189
    Waiting:    -5148 5982 1982.8   6066   16009
    Total:        391 6301 1580.2   6120   17093

    Percentage of the requests served within a certain time (ms)
      50%   6120
      66%   6453
      75%   6778
      80%   7046
      90%   7861
      95%   8516
      98%  10110
      99%  12418
    100%  17093 (longest request)

    real    1m48.905s
    user    0m0.024s
    sys     0m0.152s
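Comparing the two mean throughput figures above (12.96 vs 4.59 requests per second) gives the overall effect of the tuning:

```shell
# Throughput ratio between the tuned and untuned runs shown above
awk 'BEGIN { printf "%.1fx faster after tuning\n", 12.96 / 4.59 }'
```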


    time /usr/sbin/ab2 -kc 10 -t 30 http://www.waltercedric.com
    This will open 10 connections, using Keep-Alive, and hammer the server for 30 seconds

    Same tests but without mod_security:

    • mod_security is an Apache module which acts as a software firewall
    • Depending on the number of rules, it can greatly affect throughput

    time /usr/sbin/ab2 -kc 10 -t 30 http://www.waltercedric.com
    This will open 10 connections, using Keep-Alive, and hammer the server for 30 seconds

    real    0m39.040s
    user    0m0.020s
    sys     0m0.208s

    Nearly one second more with the mod_security gotroot rules: worth the added security!

    If you want to know more options and how to use Apache ab, check the Apache ab/ab2 man page.

    How to optimize Apache/Joomla/PHP

    I refer you to some of my previous articles:

    And more ideas here Secure, Safe, Fast Linux Hosting

  • Best nginx configuration for Joomla


    nginx (pronounced “engine-x”) is an open source Web server and a reverse proxy server for HTTP, SMTP, POP3 and IMAP protocols, with a strong focus on high concurrency, performance and low memory usage. It is licensed under a BSD-like license and it runs on Unix, Linux, BSD variants, Mac OS X, Solaris, AIX and Microsoft Windows [WikiPedia]

    These are my reusable settings for any Joomla hosting; to the best of my knowledge they are the most secure and fastest settings.

    Configuration files are provided using Gist and are CONSTANTLY updated for added security and speed. Gist is a simple way to share snippets and pastes with others. All gists are git repositories, so they are automatically versioned, forkable and usable as a git repository. I recommend you star them to stay up to date.

    Joomla.conf for nginx

    Create a new directory nginx/conf to be able to place reusable nginx settings:

    mkdir -p /etc/nginx/conf

    vi /etc/nginx/conf/joomla.conf

    Edit or create joomla.conf; you can find the latest documented version of joomla.conf in one of my Gists at https://gist.github.com/1620307

    Adding a new Joomla Site to nginx

    Create the required directories anywhere on your disk; here is an example with a domain www.example.com

    mkdir -p /var/www/vhosts/example.com/httpdocs
    mkdir -p /var/www/vhosts/example.com/logs

    Set the right permissions for the user and group you have defined in nginx.conf

    chown -fR www-data:www-data /var/www/vhosts/example.com/httpdocs

    Copy the nginx template and adapt it to your liking

    cp /etc/nginx/sites-available/default /etc/nginx/sites-available/example
    vi /etc/nginx/sites-available/example

    Edit or create example; you can find the latest documented version of this file in one of my Gists at https://gist.github.com/1620307

    This file includes joomla.conf to avoid duplicating nginx settings.

    Activate the new domain

    ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example
    service nginx restart
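A small precaution worth adding (a standard nginx option): validate the configuration before restarting, so a typo in the new vhost does not take the server down:

```shell
# Test the configuration syntax first; restart only if the check passes
nginx -t && service nginx restart
```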
  • Break Maven build when there is a dependency conflict



    • You want to control Maven during dependency resolution and break the build if some conditions are not met,
    • You want to detect dependency conflicts early during the build,
    • You want to prevent anybody on your team from using dependency x in version y

    This is where the Maven Enforcer Plugin will assist you:

    The Enforcer plugin provides goals to control certain environmental constraints such as Maven version, JDK version and OS family along with many more standard rules and user created rules.

    Add the following to your pom.xml to configure the plugin