tuning

  • Still not enough: we were forced to profile the Java code and make some big changes... (continued from part 1)

    Profiling

    You typically profile an application for speed, memory usage, or memory leaks. Our application is fast enough at the moment; our major concern is optimizing memory usage and thus avoiding swapping to disk.

    Some words about architecture


    It is not possible to profile an application without a deep understanding of the architecture behind it. The Product Catalog is an innovative product: a meta model for storing insurance products in a database. A Product is read only and can derive instances that we call Policies. Policies are holders of user data, containing no text, just values, and sharing a similar structure with the Product. This lets the Product know everything about (cross) default values, (cross) validations, multiple texts, attribute order/length/type, etc., and thus separates definition (Products) from implementation (Policies). Products and Policies can be fully described with Bricks and Attributes in a tree structure.

    Reduce the number of objects created


    Looking at the code, we saw that too many Products were loaded in memory (17 Products amounted to 15000 objects: Attributes, Bricks, Texts, Values and ValueRanges). While this clearly gives a speed advantage on an application server, it simply kills the offline platform with its 1 GB of RAM (remember, the memory really free is 500 MB).
    The problem is that Attributes and Bricks use, or can use, a lot of fields/meta data in the database, which translate into simple Java types in memory (Strings for UUIDs, and for meta data keys and values). We started by looking at the profiler and at the 100 MB used by the product cache.
    Reducing this number of objects was the first priority; a lot of them are meta data which are common and spread across the Product tree in memory. Since avoiding the creation of unneeded objects is always recommended, we decided to remove duplicate elements in the tree by sharing single instances. This is possible because the Product is read only and made of identical meta data keys and values.

    Entropy and cardinality of meta data
    An Attribute may have an unlimited number of meta texts (among other things); common meta data keys are "long", "short" and "help" text descriptions in 4 languages (en_us, fr_ch, de_ch, it_ch). While this is not a problem in the database, it makes the Product object tree quite huge in the Product cache (which contains Product Value Objects). Counting some of them in the database returns stunning results.
    We found 60000 "long" texts, which translate into 60000 String text keys and 60000 String text values (the worst case, since text values may not all be reusable). Reducing this number of objects is done quite easily by not creating new instances of String, Decimal and Integer objects and always returning the same instance: we keep them in a Map and return either a new instance or a previously cached one.
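The interning map can be sketched like this (a hypothetical helper class, not the actual Product Catalog code):

```java
import java.util.HashMap;
import java.util.Map;

// Canonicalizing cache: returns the same instance for equal values, so
// thousands of identical meta data keys/values share one object in memory.
class ValuePool {
    private final Map<Object, Object> pool = new HashMap<Object, Object>();

    @SuppressWarnings("unchecked")
    synchronized <T> T intern(T value) {
        Object cached = pool.get(value);
        if (cached == null) {
            pool.put(value, value);   // first time this value is seen: cache it
            cached = value;
        }
        return (T) cached;           // every later equal value maps to this one
    }
}
```

Because the Product tree is read only, handing back a shared instance is safe: interning a second, equal "de_ch" String simply returns the first instance ever seen.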

    Large object cardinality but poor entropy
    By running two or three SQL statements and counting the truly distinct values, we found that a lot of these meta data are made of a relatively small number of different values. By storing just a limited set of Strings like "0", "1", "2" ... "99", "default", "long", "short", "de_ch", "fr_ch", we reached a cache efficiency and instance reuse of 99%.
    After that "small" change in the way value objects (VO) are created and connected, a Java String that previously contained "de_ch" and existed 10000 times in memory is now replaced across all Attributes/Bricks by the same instance!

     The gain is simply phenomenal: memory usage dropped by more than 50%.

    Reducing the number of objects in memory 
    Instead of storing thousands of Product text Strings in memory, we decided to allocate them on disk using the Java reflection API and a Dynamic Proxy.

    The idea is to save all Strings in one or more files on disk, the position and length of each text being saved in the corresponding Value Object. So we basically gain the space used by a String in memory at the expense of a long (the String's position in the file relative to its start) and an int (the length of the String).
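The scheme can be sketched as follows (an illustrative class of our own, using a plain RandomAccessFile rather than the article's Dynamic Proxy):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Sketch only: Strings live on disk, the Value Object keeps just a
// long (offset into the file) and an int (byte length) per text.
class StringStore {
    private final RandomAccessFile file;

    StringStore(String path) throws IOException {
        this.file = new RandomAccessFile(path, "rw");
    }

    /** Appends the text and returns its offset; store offset + length in the VO. */
    long write(String text) throws IOException {
        long offset = file.length();
        file.seek(offset);
        file.write(text.getBytes(StandardCharsets.UTF_8));
        return offset;
    }

    /** Re-materializes the String on demand from its offset and byte length. */
    String read(long offset, int length) throws IOException {
        byte[] buf = new byte[length];
        file.seek(offset);
        file.readFully(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }
}
```

The trade-off is exactly the one described above: 12 bytes of primitives per text in memory, at the cost of a disk seek whenever the text is actually needed.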

    References:  Proxy  - InvocationHandler
    Resume: Java String disk based allocation
    Code snippet: soon

    Use better data structures
    Java has a lot of quality libraries; the Apache Commons Collections are well known. Javolution is a real-time library that reduces garbage collector impact. We have used FastTable and FastMap where it makes sense.

    For example, the class FastTable has the following advantages over the widely used java.util.ArrayList:
    • No large array allocation (for large collections multi-dimensional arrays are employed). The garbage collector is not stressed with large chunks of memory to allocate (likely to trigger a full garbage collection due to memory fragmentation).
    • Supports concurrent access/iteration without synchronization, as long as collection values are not removed/inserted
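The "no large array allocation" point can be illustrated with a toy chunked list in plain Java (our own sketch of the idea, not Javolution code): elements live in fixed-size chunks, so growing never requires allocating or copying one big contiguous array the way ArrayList's doubling does.

```java
import java.util.ArrayList;

// Toy chunked list: fixed-size chunks instead of one growing array.
class ChunkedList<E> {
    private static final int CHUNK = 1024;
    private final ArrayList<Object[]> chunks = new ArrayList<Object[]>();
    private int size = 0;

    void add(E e) {
        if (size % CHUNK == 0) {
            chunks.add(new Object[CHUNK]); // small, constant-size allocation
        }
        chunks.get(size / CHUNK)[size % CHUNK] = e;
        size++;
    }

    @SuppressWarnings("unchecked")
    E get(int i) {
        return (E) chunks.get(i / CHUNK)[i % CHUNK];
    }

    int size() { return size; }
}
```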

    Different caching strategy
    By design the ProductCatalog can use many caching strategies. One, named "NoCache", limits the number of objects in memory to the bare minimum and redirects all product access to the database. In a single-user environment, and since products reside in only 4 tables (so only 4 SELECTs to read all data from the DB, plus some VOs to rebuild the tree, are needed), the throughput is more than enough.
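Such a strategy boils down to something like the following sketch (all class names are hypothetical; the stand-in DAO just counts round trips to the "database"):

```java
// Pluggable caching strategy for product lookups (illustrative names only).
interface ProductCacheStrategy {
    String get(String productId);
}

// Stand-in for the data access layer (the real one runs the 4 SELECTs
// and rebuilds the Value Object tree).
class FakeProductDao {
    int loads = 0;
    String loadProductTree(String id) {
        loads++;                       // count round trips to the database
        return "product:" + id;
    }
}

// "NoCache": keep nothing in memory, hit the database on every access.
class NoCacheStrategy implements ProductCacheStrategy {
    private final FakeProductDao dao;
    NoCacheStrategy(FakeProductDao dao) { this.dao = dao; }
    public String get(String id) { return dao.loadProductTree(id); }
}
```

Swapping in a caching implementation behind the same interface is then purely a configuration decision, which is what makes the strategy acceptable for the single-user offline platform.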

    More to come...



    References
  • Since my server is still suffering, I've decided today to take some action.

    user: changes can be done on shared hosting with limited user rights.
    root: changes require full access to the server (root access via secure shell, SSH).

    I currently have, per month, 160,000 visitors and 2 million hits, or per day, 8000 visitors and 24000 page views.
    Server has only 1GB RAM.

    UPDATE: I found one/THE reason why my host is slowing down...SPAMMERS!


    user: Joomla! settings
    • I switched gzip compression OFF, since it is meant to reduce bandwidth usage, not the load on my server: the server has to encode all the files before sending them, which only puts additional load on it.
    • I switched Joomla! statistics off, as AWStats does a much better job.
    user: MySQL maintenance

    I optimized (repair, refresh statistics) the MySQL tables through MySQL admin, but it can also be done through Plesk.
    user: Tune Joomla! cache

    I increased the Joomla! cache lifetime from 900 seconds to 24 hours, as it better reflects the way I update my site (daily).
    user: keep pages small

    • Reduced the size of banners using GIMP so they use a web palette; most of them shrank from 40 kB to 7 kB.
    • I removed all unneeded whitespace from the main template file (index.php), an action which helps first-time visitors only.
    • Attention, it is a never ending task...
    user: Hunting software bugs
    By switching the site to debug mode, I noticed some nasty queries (select count(*) from ...), all created by my statistics module (Content Statistics on the right side). In fact Joomla! modules do not inherit from the Joomla! cache automatically. I fixed the value in this module since I do not want to program cache support into it right now.
    root: One more cache

    I decided to install a PHP accelerator: PHPA from http://www.php-accelerator.co.uk/
    " The ionCube PHP Accelerator is an easily installed PHP Zend engine extension that provides a PHP cache, and is capable of delivering a substantial acceleration of PHP scripts without requiring any script changes, loss of dynamic content, or other application compromises."

    Installation is straightforward: just copy the library to /usr/local/lib/php_accelerator_1.3.3r2.so
    and add these lines to /etc/php.ini

    ; PHP Accelerator extension
    zend_extension="/usr/local/lib/php_accelerator_1.3.3r2.so"
    phpa = on
    phpa.c0_size = 64
    phpa.cache_dir = /tmp
    phpa.c0_logging = on


    ;The shm_stats_check_period is the minimum interval between checks of the
    ;cache for expired scripts. The first server request after the interval has
    ;elapsed will trigger a scan of the cache for expired scripts, and remove
    ;any entries that it finds.
    phpa.c0_stats_check_period = 5m

    ;The shm_ttl value is the value used to set the
    ;time-to-expiry value when a script is accessed. Put another way, the shm_ttl
    ;value is the period after which an unaccessed script expires.
    phpa.c0_ttl = 12h

    ;phpa.ignore_files = ""
    ;phpa.ignore_dirs = ""

    I used the tool HTTP Viewer to check that my pages now contain the header X-Accelerated-By: PHPA/1.3.3r2

    Reduce the surface of attack: I found components that were no longer used by Joomla! (very old code and unused components). So go through all directories with FTP/SCP and remove any unneeded code...


    Review table data directly in the database...
    This is how I found 27 000 spam comments in my gallery (Zoom gallery)
    Solution:
    • I removed all entries,
    • Disallowed comment operations (in the Zoom gallery admin panel),
    but spammers were still able to insert comments, so I edited the file components/com_zoom/lib/image.class.php:
    //add because of spammers
    header("HTTP/1.0 403 Forbidden");
    //$database->setQuery("INSERT INTO __zoom_comments (imgid,cmtname,cmtcontent,cmtdate) VALUES ('".mysql_escape_str

    Note: I also recommend you use mod_evasive and mod_security (root access needed); see a previous article on my site.



    Some links where I borrowed some ideas:


    http://www.primakoala.com/tutorials/guides/speeding_up_joomla.html
    http://forum.joomla.org/index.php/topic,50278.0.html
    http://forum.joomla.org/index.php/topic,54175.0.html
  • We have been working for 3 days on tuning a big application: 

    • Client-server enterprise grade application,
    • Runs on 2 JVMs (Tomcat/application server) with 4 GB of RAM each!
    • Runs on 2 dual-core AMD 64-bit servers,
    • Linux 64 bits,
    • Has a lot of parallel users, and > 10000 are registered
    • Uses a product meta model which separates definition from implementation data.
    • JavaServer Faces, Java, Ajax
     This application is just consuming too much memory for the offline version. Our objective is to make that big application run with:

    • The same code as above,
    • In windows XP, 
    • IBM T40,  Intel Pentium M 1.6 GHz,  DDR266/PC2100
    • 1 JVM with 500Mb in Tomcat,
    • 1 GB of physical Ram,
    • 1 Desktop user who may run also Lotus Notes, Microsoft Office at the same time...
    There are already a lot of good resources and valuable advice on the internet (Google is your friend :-)). Before digging into the code, and since the code is already in production, we did some tuning on the components first.
    Tuning each component involved, one after the other, follows the principle: let's get some quick wins first, before changing algorithms and increasing the risk of breaking something...
    To back up each change with some statistics, the first step was to develop a test case with Web Stress Tool (commercial), but Apache JMeter (...replace with your favorite web testing tool) would have done the job.

    At the OS Level

    We tried to convince the company to turn the anti-virus off for some files and directories. It was scanning XHTML, JavaScript, XML, class files, and images (so nearly everything) during EACH file access. Note that the user has no Windows rights to alter files.


    MySQL 5 (we are already using the latest 5.X branch, by luck)

    By removing TCP database access and using named pipes only (+30 to +50% performance),
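On Windows (our offline platform) this means enabling the named pipe and disabling the TCP listener in my.ini; a minimal sketch, with option names worth double-checking against the MySQL manual for your version:

```ini
[mysqld]
skip-networking      # no TCP/IP listener at all
enable-named-pipe    # local clients connect through the named pipe instead
```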

    By installing MySQL Enterprise Advisor and Monitor (you can request a free trial key) and looking at what the advisor recommends. Attention: this tool has been developed for monitoring servers, so some recommendations are simply not always applicable. In our case we are constrained by memory (remember, less than 500 MB), so we did not blindly follow the advice. The basic stuff was done, like adding indexes (where it makes sense, to avoid full table scans and reduce slow queries) and increasing buffers,

    By switching to MyISAM (multi-threaded with table locking) instead of InnoDB (multi-threaded with row locking), and also avoiding having other storage engines with different algorithms running in parallel.

    MyISAM is the default storage engine for the MySQL relational database management system. It is based on the older ISAM code but has many useful extensions. In recent MySQL versions, the InnoDB engine has widely started to replace MyISAM due to its support for transactions, referential integrity constraints, and higher concurrency. Each MyISAM table is stored on disk in three files. The files have names that begin with the table name and have an extension to indicate the file type. MySQL uses a .frm file to store the definition of the table, but this file is not a part of the MyISAM engine, but instead is a part of the server. The data file has a .MYD (MYData) extension. The index file has a .MYI (MYIndex) extension. [WikiPedia]

    InnoDB is a storage engine for MySQL, included as standard in all current binaries distributed by MySQL AB. Its main enhancement over other storage engines available for use with MySQL is ACID-compliant transaction support, similar to PostgreSQL, along with declarative referential integrity (foreign key support). InnoDB became a product of Oracle Corporation after their acquisition of Innobase Oy, in October 2005. The software is dual licensed. It is distributed under the GNU General Public License, but can also be licensed to parties wishing to combine InnoDB in proprietary software. [WikiPedia]

    What are the differences, and why might you also want to use MyISAM for single-user applications?
    1. InnoDB recovers from a crash or other unexpected shutdown by replaying its logs. MyISAM must fully scan and repair or rebuild any indexes or possibly tables which had been updated but not fully flushed to disk. Since the InnoDB approach is approximately fixed time while the MyISAM time grows with the size of the data files, InnoDB offers greater perceived availability and reliability as database sizes grow.
    2. MyISAM relies on the operating system for caching reads and writes to the data rows while InnoDB does this within the engine itself, combining the row caches with the index caches. Dirty (changed) database pages are not immediately sent to the operating system to be written by InnoDB, which can make it substantially faster than MyISAM in some situations.
    3. InnoDB stores data rows physically in primary key order while MyISAM typically stores them mostly in the order in which they are added. This corresponds to the MS SQL Server feature of “Clustered Indexes” and the Oracle feature known as "index organized tables." When the primary key is selected to match the needs of common queries this can give a substantial performance benefit. For example, customer bank records might be grouped by customer in InnoDB but by transaction date with MyISAM, so InnoDB would likely require fewer disk seeks and less RAM to retrieve and cache a customer account history. On the other hand, inserting data in orders that differ substantially from primary key (PK) order will presumably require that InnoDB do a lot of reordering of data in order to get it into PK order. This places InnoDB at a slight disadvantage in that it does not permit insertion order based table structuring.
    4. InnoDB currently does not provide the compression and terse row formats provided by MyISAM, so both the disk and cache RAM required may be larger. A lower overhead format is available for MySQL 5.0, reducing overhead by about 20% and use of page compression is planned for a future version.
    5. When operating in fully ACID-compliant modes, InnoDB must do a flush to disk at least once per transaction, though it will combine flushes for inserts from multiple connections. For typical hard drives or arrays, this will impose a limit of about 200 update transactions per second. If you require higher transaction rates, disk controllers with write caching and battery backup will be required in order to maintain transactional integrity. InnoDB also offers several modes which reduce this effect, naturally leading to a loss of transactional integrity. MyISAM has none of this overhead, but only because it does not support transactions. [WikiPedia]
    For us, the speed of MyISAM clearly outweighs the drawbacks for a desktop application.

    JSF tuning

    The obvious settings are listed here; JSF is lacking more fine-grained tuning settings. Serialization occurs during the model life cycle and consumes memory and CPU. We may dig deeper later.
    • javax.faces.STATE_SAVING_METHOD to server
    • org.apache.myfaces.COMPRESS_STATE_IN_SESSION to true, since memory is the biggest constraint for us
    • org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION to 0
    • facelets.BUFFER_SIZE to 8192
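For reference, these switches go into the web app's WEB-INF/web.xml as context parameters, e.g.:

```xml
<context-param>
  <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
  <param-value>server</param-value>
</context-param>
<context-param>
  <param-name>org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION</param-name>
  <param-value>0</param-value>
</context-param>
```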

    Tomcat tuning

    Nothing big can be done here... For me, Tomcat is really missing a dynamic web application loader: Tomcat simply loads all applications found in /webapps at startup, even if they are not used, and they are never removed from memory or serialized to disk. Tomcat 4.1 seems to have a memory footprint of 22 MB; going to the latest Tomcat 6.0 is too big a change for us now, but we might reconsider it in the future. Removing Java libraries which are not used from WEB-INF/lib by trial and error can save some precious bytes, though, as it is pretty common when you use frameworks to end up with unwanted jars, for example: junit.jar, JDBC drivers, jms.jar, ... Moving common libs to shared/lib may also help remove duplicate jars from webapp class loaders and memory.


    JVM tuning

    Java 1.5 and Java 1.6 have made a lot of progress, and the JIT compiler found in Java 1.5/1.6 is getting more and more aggressive... The basic rule is to turn the JVM GC log on (by adding -Xloggc:<file> [-XX:+PrintGCDetails]) and analyze it offline with a tool like GCViewer (free). The JIT is doing a pretty good job, as the application runs faster and faster over time, but that is just a feeling ;-)
    By analyzing the GC logs we were able to optimize and avoid big misconfiguration mistakes; once more, a lot of articles and books are available on how to tune a JVM. Sadly, Java has no advisor at the moment, nor does it use genetic algorithms to tune itself... It remains a dream for now.

    By using an empirical approach, which means:
    1. change JVM parameters -> run the test cases -> decide whether to give CPU away or minimize RAM usage -> go back to 1

    We came down to the following exotic parameters (Xms and Xmx are not of any help here, since they really depend on your application and how memory is managed internally):

     -XX:+AggressiveOpts -XX:-UseConcMarkSweepGC

    By the way, I have been using them in Eclipse + JDK 1.6 for months. The page "A Collection of JVM Options", compiled by Joseph D. Mocker (Sun Microsystems, Inc.), has been of great help during this stage.

    Still not enough, we were forced to profile the java code and make some big changes....
  • I found this interesting tool (besides tuning-primer.sh) while trying to optimize my server settings for Joomla!

    mysqlreport

    mysqlreport makes an easy-to-read report of important MySQL status values. Unlike SHOW STATUS, which simply dumps over 100 values to screen in one long list, mysqlreport interprets, formats, and then nicely presents the values in a report readable by humans. Numerous example reports are available at the mysqlreport web page.
    The benefit of mysqlreport is that it allows you to very quickly see a wide array of performance indicators for your MySQL server which would otherwise need to be calculated by hand from all the various SHOW STATUS values. For example, the Index Read Ratio is an important value but it is not present in SHOW STATUS; it is an inferred value (the ratio of Key_reads to Key_read_requests).
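For the key buffer, for instance, the read hit rate is derived as 100 × (1 − Key_reads / Key_read_requests); a tiny sketch with made-up counter values:

```java
// Hypothetical helper showing how the ratio is derived from the two
// SHOW STATUS counters: Key_reads (index reads that missed the key
// buffer and went to disk) and Key_read_requests (all index reads).
class KeyReadHit {
    static double hitRatePercent(long keyReads, long keyReadRequests) {
        return 100.0 * (1.0 - (double) keyReads / keyReadRequests);
    }
}
```

With 7 Key_reads out of 10000 Key_read_requests this yields 99.93%, the kind of value mysqlreport prints as "Read hit".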

    Grab it at http://hackmysql.com/mysqlreport

    How to run it (more options are available); it requires Perl to run.

    # ./mysqlreport --user xxxxx --password xxxxxx | more

    Running mysqlreport against my host, gave the following results:

    • very good table lock ratio: 0.02%
    • very good read ratio: 99.93%
    • good query cache, but it could be reduced to 40 MB to avoid wasting memory

    If you need something more professional and can afford it, you can try MySQL Enterprise (free for 30 days, enough to tune any small server bottleneck).


    MySQL 5.0.41-log         uptime 4 10:56:4       Fri Jan  2 22:45:47 2009

    __ Key _________________________________________________________________
    Buffer used     2.44M of   5.00M  %Used:  48.75
      Current       2.97M            %Usage:  59.38
    Write hit      47.41%
    Read hit       99.93%

    __ Questions ___________________________________________________________
    Total           4.07M    10.6/s
      QC Hits       1.93M     5.0/s  %Total:  47.35
      DMS         973.13k     2.5/s           23.89
      Com_        936.64k     2.4/s           23.00
      COM_QUIT    249.35k     0.6/s            6.12
      -Unknown     14.78k     0.0/s            0.36
    Slow 5 s      123.77k     0.3/s            3.04  %DMS:  12.72  Log:  ON
    DMS           973.13k     2.5/s           23.89
      SELECT      589.04k     1.5/s           14.46         60.53
      UPDATE      135.53k     0.4/s            3.33         13.93
      INSERT      125.80k     0.3/s            3.09         12.93
      DELETE      119.91k     0.3/s            2.94         12.32
      REPLACE       2.85k     0.0/s            0.07          0.29
    Com_          936.64k     2.4/s           23.00
      set_option  411.63k     1.1/s           10.11
      change_db   230.65k     0.6/s            5.66
      show_tables  68.89k     0.2/s            1.69

    __ SELECT and Sort _____________________________________________________
    Scan          205.15k     0.5/s %SELECT:  34.83
    Range          27.27k     0.1/s            4.63
    Full join      13.73k     0.0/s            2.33
    Range check         8     0.0/s            0.00
    Full rng join   4.46k     0.0/s            0.76
    Sort scan      76.29k     0.2/s
    Sort range    110.20k     0.3/s
    Sort mrg pass       0       0/s

    __ Query Cache _________________________________________________________
    Memory usage   25.86M of  70.00M  %Used:  36.94
    Block Fragmnt  16.52%

    Hits            1.93M     5.0/s
    Inserts       533.75k     1.4/s
    Insrt:Prune   13.29:1     1.3/s
    Hit:Insert     3.61:1

    __ Table Locks _________________________________________________________
    Waited            269     0.0/s  %Total:   0.02
    Immediate       1.71M     4.5/s

    __ Tables ______________________________________________________________
    Open             1482 of 2000    %Cache:  74.10
    Opened         44.50k     0.1/s

    __ Connections _________________________________________________________
    Max used           16 of   25      %Max:  64.00
    Total         250.45k     0.7/s

    __ Created Temp ________________________________________________________
    Disk table     65.75k     0.2/s
    Table         198.32k     0.5/s    Size: 120.0M
    File                5     0.0/s

    __ Threads _____________________________________________________________
    Running             1 of    3
    Cached             13 of   32      %Hit:  99.99
    Created            16     0.0/s
    Slow                0       0/s

    __ Aborted _____________________________________________________________
    Clients         2.20k     0.0/s
    Connects        3.41k     0.0/s

    __ Bytes _______________________________________________________________
    Sent            1.48G    3.8k/s
    Received      757.33M    2.0k/s

    __ InnoDB Buffer Pool __________________________________________________
    Usage           7.98M of   8.00M  %Used:  99.80
    Read hit       99.80%

    Pages
      Free              1            %Total:   0.20
      Data            510                     99.61 %Drty:   0.00
      Misc              1                      0.20
      Latched           0                      0.00
    Reads           1.03M     2.7/s
      From file     2.10k     0.0/s            0.20
      Ahead Rnd        79     0.0/s
      Ahead Sql         6     0.0/s
    Writes         45.01k     0.1/s
    Flushes        12.42k     0.0/s
    Wait Free           0       0/s

    __ InnoDB Lock _________________________________________________________
    Waits               0       0/s
    Current             0
    Time acquiring
      Total           0 ms
      Average         0 ms
      Max             0 ms

    __ InnoDB Data, Pages, Rows ____________________________________________
    Data
      Reads         2.30k     0.0/s
      Writes       23.18k     0.1/s
      fsync        14.15k     0.0/s
      Pending
        Reads           0
        Writes          0
        fsync           0

    Pages
      Created           5     0.0/s
      Read          2.60k     0.0/s
      Written      12.42k     0.0/s

    Rows
      Deleted         843     0.0/s
      Inserted      2.07k     0.0/s
      Read        107.49k     0.3/s
      Updated       2.83k     0.0/s

  • I optimized my Joomla! homepage a bit over the last few days. This was achieved by:

    • Enabling Joomla! module caching in all 3rd party modules where it was missing or not implemented at all,
    • Starting to offload some assets (JavaScript) to faster hosting.

    Click read more to apply the same for your internet site.

    Use Joomla Module caching

    Not all 3rd party Joomla! modules use caching. This means that, in the worst case, some Joomla! modules may create way too many SQL queries. A way to reduce the load is to activate module caching. You'll have to go through all 3rd party modules and check whether their administrator panel has a setting to enable/disable the cache.


    You’ll see that 90% of all modules (except official Joomla! modules, which are able to deal with caching) do NOT support caching. We will change that now:

    For every module without cache support, open the XML file at /modules/mod_xxxxxxx/mod_xxxxxxx.xml and add between <params> .. </params>:

    <param name="cache" type="radio" default="0" label="Enable Cache" 
           description="Select whether to cache the content of this module">
     <option value="0">No</option>
     <option value="1">Yes</option>
    </param>

    Note that if <params> .. </params> does not exist, just add it like below:

    <params>
     <param name="cache" type="radio" default="0" label="Enable Cache" 
           description="Select whether to cache the content of this module">
      <option value="0">No</option>
      <option value="1">Yes</option>
     </param>
    </params>

    Visit or reload the admin panel of that module and set Enable Cache to Yes. Click Save/Apply at least once.

    Now the output of this module will be saved in /cache and only refreshed when the global Joomla! cache times out (900 seconds by default). Consider also contacting the author of the module so he can patch his code.

    Offload assets

    Offloading assets (JavaScript, static images, static files) can bring tremendous speed gains, at the cost of resolving more DNS names. Using this technique will help your Apache concentrate on PHP instead of streaming static data.

    Offload JavaScript

    When you look at the Joomla! frontend source code, you will see that the JavaScript library mootools.js is 74 kB big. Google offers to host all major AJAX libraries free of charge at http://code.google.com/apis/ajaxlibs/documentation/ so why not profit from their datacenter speed/bandwidth/response time?

    Now the dirty part: you can’t tell Joomla! not to include mootools.js from /media/system/js/mootools.js at rendering time. We will have to patch the code of Joomla!

    Open /libraries/joomla/html/html/behavior.php and search for:

    if ($debug || $konkcheck) {
      JHTML::script('mootools-uncompressed.js', 'media/system/js/', false);
    } else {
     //JHTML::script('mootools.js', 'media/system/js/', false); // old Joomla code
     JHTML::script('mootools-yui-compressed.js', 'http://ajax.googleapis.com/ajax/libs/mootools/1.11/', false);
    }

    Joomla! uses mootools.js in version 1.11; don’t use the latest version (1.2.3), as most Joomla! plugins won’t work (but your mileage may vary).

    To be continued