Archive for January, 2006

Apache Geronimo

Recently we’ve been hearing good news about the Apache Geronimo project. Towards the end of 2005, Apache Geronimo became a fully certified J2EE 1.4 server. Then IBM announced that it was basing WebSphere Application Server Community Edition on Apache Geronimo, and a week later Virtuas announced support for it.

Earlier this week, Covalent started to offer commercial support for Apache Geronimo, stating that their customers have been asking for it. And if IBM and BEA decide to base their flagship application servers on Geronimo, the project will have a guaranteed bright future. This will surely put some pressure on JBoss, currently the most popular open source application server (some will add “despite Marc Fleury” :)).

What’s in it for us? First of all, this type of competition is necessary for a healthy industry. Both projects are run by brilliant people who will continue to improve the tools we use. Secondly, commercial offerings such as the WebSphere or WebLogic application servers may come to be based on open source software, which would increase the credibility and the reach of Apache Geronimo. For a long time, IBM HTTP Server has been based on the Apache HTTP Server. Maybe they think it’s time to do the same for their application server.

- Yagiz Erkan -


Java Scheduling with Redwood CronacleBeans

We have been working with Redwood Software’s CronacleBeans technology for about a year now, and I would like to start spreading the good word about this most useful piece of technology.

CronacleBeans comes as part of the Cronacle Enterprise Scheduler suite and allows full integration between the Java world and the enterprise scheduling world. What made CronacleBeans interesting for us was that our development teams could develop Java-based services (POJOs, EJBs, whatever) and then, via a simple API, turn them into jobs executable by the enterprise scheduler. (The heart of Cronacle is built on PL/SQL, but Redwood provides a Java API that maps directly onto the PL/SQL API.)
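
To make that concrete, here is a minimal sketch of the idea – emphatically not the real CronacleBeans API (Redwood’s actual class and method names differ, and ReportService and ReportJob below are invented for illustration). The point is that the business logic stays a plain POJO, and a thin adapter is all the scheduler needs in order to execute it as a job:

```java
// Illustrative sketch only -- not the actual CronacleBeans API.
// The business logic is an ordinary POJO with no scheduler coupling.
public class ReportService {

    public void generateDailyReport(String region) {
        // ... query, aggregate and write the report ...
        System.out.println("Generating daily report for " + region);
    }
}

// Hypothetical adapter: the class the scheduler would invoke as a job,
// passing in the parameters defined in the scheduler's job definition.
class ReportJob {

    public void execute(java.util.Map parameters) {
        String region = (String) parameters.get("region");
        new ReportService().generateDailyReport(region);
    }
}
```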

Read the rest of this entry »



Convention over Configuration

Henry Ford, founder of the Ford Motor Company in 1903, had a great philosophy: get the laziest man to do a job and he’ll find the quickest way to do it. It turns the negativity of laziness on its head and makes it something positive.

Laziness is not a bad thing in Software Development either, it turns out. I say this because, like most people, I am lazy. I don’t like to write excessive code and I don’t like too much configuration overhead in my frameworks; it slows my creative process. That’s why I am an advocate of Convention over Configuration.
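
As a small illustration of the principle (the names below are mine, not from any particular framework), compare deriving an object-to-table mapping from naming conventions with spelling the same mapping out in configuration files:

```java
import java.lang.reflect.Field;

// A toy convention-based mapper: instead of an XML file declaring
// that Customer maps to CUSTOMER and firstName to FIRST_NAME,
// derive both from the names themselves.
public class ConventionMapper {

    // By convention, class Customer maps to table CUSTOMER.
    public static String tableFor(Class type) {
        String name = type.getName();
        return name.substring(name.lastIndexOf('.') + 1).toUpperCase();
    }

    // By convention, field firstName maps to column FIRST_NAME.
    public static String columnFor(Field field) {
        return field.getName()
                    .replaceAll("([a-z])([A-Z])", "$1_$2")
                    .toUpperCase();
    }
}
```

With this in place, tableFor(Customer.class) returns “CUSTOMER” without a single line of mapping configuration; only the genuine exceptions (say, a legacy table name) ever need to be configured explicitly.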

Read the rest of this entry »


Performance Tuning with JProbe and PerformaSure

JPROBE

As any developer who has created Java applications will tell you, memory and performance issues can be extremely difficult, frustrating and consequently time-consuming to detect, diagnose and ultimately resolve.

DSI employs Quest Software’s market-leading Java performance tuning toolset, the JProbe Suite, to help our development and test teams identify performance, memory, threading and code coverage issues down to the line of code.

The JProbe Suite contains three tools to assist in your investigations:

JProbe Profiler – To identify method and line level performance bottlenecks
JProbe Memory Debugger – To investigate memory leaks and excessive garbage collection
JProbe Coverage – To measure code coverage after testing

DSI has used JProbe Profiler and the Memory Debugger on both highly scalable real-time applications and high-throughput batch applications. Typically, performance bottlenecks or memory issues are identified during load testing of these types of applications. Once an issue has been identified, we re-run the load tests in a JProbe Profiler enabled environment. This allows us to establish a baseline from which a report can be generated. The report highlights method hotspots, indicating the number of method calls, the cumulative time spent in each method and the explicit method time. A graphical representation of the method call stack is also available, displaying the ‘flow’ of the application in an easy-to-understand, colour-coded manner.

This report is analysed by a lead developer or application architect who has an expert understanding of how the application should behave. Problematic code is then identified, down to the line of code if necessary. A code fix is applied and the load test is re-run. We then compare the baseline report against the report generated by the new, improved code; any change in performance is displayed as a plus or minus percentage difference from the baseline. This process continues until the SLAs are met.

We have used JProbe Profiler to identify and resolve the following types of issues (a short illustrative sketch follows the list):

  • Inefficient database connection pooling
  • Inefficient use of the Collections API
  • Inefficient use of the Calendar class, which we eventually replaced with the open source Joda-Time library
  • Inefficient looping
  • Redundant code
  • Inefficient object cycling
  • Inefficient IO
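
As an illustration of the ‘object cycling’ and ‘inefficient looping’ entries above (this is a made-up example, not code from a customer application), here is the shape of hotspot a line-level profiler typically surfaces, together with the fix:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class HotspotExample {

    // Before: a new SimpleDateFormat -- an expensive object -- is
    // constructed on every iteration. A profiler shows the time
    // sinking into the constructor, and the garbage collector churns.
    static void formatSlow(Date[] dates, String[] out) {
        for (int i = 0; i < dates.length; i++) {
            out[i] = new SimpleDateFormat("yyyy-MM-dd").format(dates[i]);
        }
    }

    // After: the formatter is created once and reused. Note that
    // SimpleDateFormat is not thread-safe, so this instance must not
    // be shared across threads.
    static void formatFast(Date[] dates, String[] out) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        for (int i = 0; i < dates.length; i++) {
            out[i] = fmt.format(dates[i]);
        }
    }
}
```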

Thankfully, we have been called upon to use JProbe Memory Debugger only once. During load testing it was observed that the performance of a batch application was decreasing over time, and on inspecting the memory graph generated by the profiler it was clear that the issue was memory related. The graph indicated that memory usage was increasing exponentially, causing excessive garbage collection. After running the application load test through the Memory Debugger and an in-depth analysis of the resulting report, the culprit was found.

The application used multiple levels of caching: Hibernate query caching and two levels of caching implemented with OSCache. The application in question was a batch application, and by its nature each ‘job’ was unique. The report indicated that the cache maintained by Hibernate query caching held hard references to each and every query (a String object) of a certain type invoked by the application. The object graph maintained by this cache got ‘deeper’ and consumed more memory over the lifetime of a batch run. We disabled Hibernate query caching, re-ran the load test, and the memory issue disappeared. This issue would have been nearly impossible to identify without a tool like JProbe Memory Debugger.

As a side effect, we started thinking about the caching approach used in the application. Quite soon we realised that we did not in fact need the two levels of object caching provided by OSCache. The batch application had ‘inherited’ a set of services written for a real-time system; the nature of the real-time application required caching, but this was not the case for the batch application. Object caching was switched off at the ‘inherited’ service level, saving further memory. Lowering memory consumption generally translates into improved performance.
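
For the curious, the fix itself is small. A minimal sketch in the Hibernate 3 style API of the time (the JobItem entity and its property names are invented for illustration): query caching can be disabled globally via the hibernate.cache.use_query_cache property, or per query as shown here.

```java
import java.util.List;

import org.hibernate.Query;
import org.hibernate.Session;

public class BatchJobDao {

    // Each batch job is unique, so a cached query result would never
    // be reused; caching it only pins the query string and its results
    // in memory for the lifetime of the run.
    public List findJobItems(Session session, String jobId) {
        Query q = session.createQuery(
                "from JobItem item where item.jobId = :jobId");
        q.setParameter("jobId", jobId);
        q.setCacheable(false); // no query caching for one-off queries
        return q.list();
    }
}
```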

Tools like JProbe Profiler and Memory Debugger do help you fix the immediate problems you may be having, but they also really get you thinking about your code. You learn that not all solutions are applicable in all cases – ‘horses for courses’, so to speak.

PERFORMASURE

At DSI we use Quest Software’s PerformaSure during the production implementation phase of a project. A lead developer or application architect will go on-site to a customer, install PerformaSure in the production environment and monitor application performance during the rollout of the application.

Depending on the application, we generally recommend a two-week rollout period before the ‘go-live’ date. Our customised licensing agreement allows us to run PerformaSure in any type of production environment: no matter the OS, the CPU count, the number of application servers in the cluster or the number of physical machines. This allows our team of experts to observe the application, under a combination of virtual and real load, in a production environment. Reports can be generated for the consumption of both business and technical management, and the fact that these reports, and the detail contained therein, can be produced gives both a strong sense of confidence in the robustness and performance of an application.

The PerformaSure agent can run passively (with almost 0% overhead) on the production server(s). If and when a potential issue arises, our PerformaSure experts can activate the system agent from a remote workstation. The agent then begins collecting various metrics and relaying them to PerformaSure’s analytical system, the Nexus. When the system agent is active, it represents less than 5% overhead on the server(s). The metrics relayed to the Nexus can then be analysed via the Workstation, whose reports can render specific information on each technology in the stack, e.g. JNDI, Servlets, EJBs, MDBs, Apache Struts, taglibs etc. The ability to have this level of information at your fingertips in a production environment is priceless. It gives the customer a sense of security and gives the development team a ‘third eye’ that allows them to react to issues if and when they happen.

We generally leave PerformaSure running in the production environment for one month, so that even when we are not on-site we can activate the system agent remotely and connect remotely to the Nexus to assist in problem identification and resolution. We feel that this gives real meaning to any warranty we give with our software.

- Jay


Data – Insurance Industry

Having recently read the Next Big Thing article by Anthony O’Donnell, I was delighted to see that ‘the future of the insurance industry seemingly will be shaped by data-related technologies’.

In my opinion, the two key points to take from this article are:
1. Data Visualization Tools
2. Data Integration

Data Visualization is not a new concept at all – we’ve always known that “a picture is worth a thousand words” – and the correct and appropriate chart, graph or diagram is so much more intuitive than a presentation of raw data. It will be interesting to see the long-term impact this will have on the BI areas of the Insurance Industry, in particular CRM, Enterprise Risk Management and Enterprise Performance Management. In the short term, many Enterprises will have to address their data modelling and/or data integration projects first before seeing the true value-add of these additional data visualization tools. I also wonder if some of these new tools will be beyond the realms of use for traditional Business Users and better suited to our Six Sigma Master Black Belts! Time will tell.

Data Integration will continue to play a big part in the Insurance Industry this year, especially for Enterprises that still rely on legacy systems, as they carefully manage the migration of particular core modules to newer technologies as well as the integration between those legacy systems and newer applications. As Enterprises move to these newer applications, I hope they will use the opportunity to develop their BI strategy, if one doesn’t already exist, or improve on it if it does, to better assist our Business Users and ultimately improve the business.

As I mentioned in my previous Dimensional Modelling piece – exciting times ahead – and it looks like the Insurance Industry is one of the key business segments to be working in.

~ Maria


Don’t forget your Babel fish if you want to go to work….

Forgive my rip-off of two cultural icons, folks; I promise the title will be the last reference to the works of Messrs Adams and Moore. Just a few rambling, unconnected and quite possibly irrelevant thoughts…

This may be my first and only foray into this technologists’ blog, speaking as one of only four non-technologists in the company. There are currently 100-odd (and counting – kind of like your Google Mail capacity) other individuals who are right now, to one degree or another, feverishly busy producing quality applications for critical enterprise-wide business solutions (per the marketing blurb).

I had until now considered myself as having grasped at least a rudimentary inkling of what it was all about. ‘It’ being of course the whole software development thing.

This modicum of insight had supposedly been achieved (or so I thought) through a little bit of osmosis, a little bit of active learning and just a smidgen of common (?) sense.

Alas, I was sadly mistaken as to the degree of my tech literacy, and am now, as a result of the entries to the blog so far, coming to the inevitable conclusion that there is a whole other universe operating right outside my office door – a veritable parallel dimension, light years in expanse and constantly expanding away from my static little world at an alarming rate.

This other universe is populated not just by my co-workers (who traverse to my universe at will, e.g. when they want to arrange their annual leave or get paid), but is also filled with strange phenomena straight out of a BBC 4 documentary, such as ‘Web 2.0’ and ‘dimensional modelling’ and all manner of other weird and presumably wonderful wonders.

I realise now, after five years in the software industry and having spent many, many hours in that alternate universe attending meetings where the majority of the subject matter bandied about was, to put it politely, gibberish to my layman’s mind, that I have had to train myself to listen carefully and to recognise and extract the pertinent information. This usually consists of two or three HR-type actions such as ‘Find me a hyper-specialist contractor for a 3-day role who has 25 years’ experience in developing sub-orbital satellite guidance software in blurg on a Unix platform, with at least seven of the following seven development languages – insert random alphanumeric characters in groups of 3 or 4 here ___ ___ ___ ___ etc. …Oh, and I need him/her here by tomorrow or else forget about it…’ or perhaps, ‘buy more tea’.

Right then…

The ‘business user’ referred to in an earlier entry by Maria doesn’t, presumably, care how many rows and columns there are in a particular database, or about table joins, as Maria points out. They ‘just’ want the software to provide them with the information they need in a timely manner and, ultimately, to help them add to their bottom line, improve their service or whatever their business drivers happen to be.

So where should the tech/non-tech interface, or common ground, be? Should business folk get more tech-literate, or should technical people English-up a bit? But you guys have the upper hand, with your own suite of languages and protocols and processes and so on. I still can’t program my VCR. And I don’t have time to go back to college.

I do remember, whilst in college, my IT lecturers proposing a new concept, the ‘hybrid manager’: a tech-savvy business type, neither engineer nor accountant, who could seamlessly integrate (see, I have acquired some of the lexicon) with both worlds – and as you can tell from the plethora of hybrid managers out there, this idea had legs…

We at DSI are fortunate to have excellent analysts, project managers, architects etc. who do a very good job of speaking to clients about their IT needs and producing coherent, comprehensive requirements documentation, eventually leading to the development of high-quality IT business solutions. Still, I suspect it’s not always the technical people in organisations who make the buying decisions. I know I have had my eyes involuntarily glaze over on so many occasions that I have to believe there are other poor souls out there. If not, someone direct me to the nearest asylum.

And as ‘they’ say: as the amount of data once measured in megabytes/gigabytes/terabytes (yes – and petabytes) grows, and as the languages and processes you folks devise to transform it into business-useful information multiply, spare a thought for those of us who still think of Struts as something peacocks do. When you see my eyes glaze over, it’s either because I need you to repeat what you’ve just said very slowly and with pictures, or because I’ve suddenly remembered I’ve forgotten my mother’s birthday – again. Now, where are my osmosis wellies…? The ones with the big holes in…


The End of the Complexity Chain

The software business is different. Not in the never-mind-the-numbers madness of the .com bubble – eventually you have to turn a profit. What I mean is that doing business in the software sphere is a unique experience because of the special place the software industry occupies in the world’s money-go-round: We sit at the very end of the complexity chain.

What’s the Complexity Chain?
When any other industry looks to increase productivity, lower running costs or save time, it looks for suppliers who can suck some complexity out of their operations, ideally leaving only core competency activities behind. Each of these suppliers in turn tries to do the same so they can keep their overheads down and compete on value. The chain goes on from company to company, each extracting complexity from customers and dumping it on suppliers.

But the chain can’t go on to infinity. The complexity has to pool somewhere, and in my opinion it usually ends up in the IT sector. We are there to facilitate or automate every kind of business process imaginable, and allow other industries to concentrate on their core activities. We cannot play the same game – or at least not to the same extent – because in order to provide our particular service, we must make our customer’s competency our own. We have two main jobs to perform:

  • Understand our customer’s business
  • Understand the computing business

We can outsource paper recycling, printer maintenance, travel arrangements and so on, but the first thing we do when we begin a project is absorb the complexity of the business process we are about to automate or facilitate.

What About Software Tooling?
We don’t have to develop in machine code using vi, of course. We can use any number of IDEs to make the job easier, not to mention a raft of frameworks and libraries to save time and trouble. These tools simplify the job of software engineering by allowing us to concentrate on the tough task of translating business requirements into computer programs. But the best a tool vendor can do is to sell finely crafted chisels to experienced artisans. He cannot produce the goods for us (beware those who say they can). He can’t change the bottom line: software companies have to absorb the core competencies of their customers while retaining their own.

Someday the artisans of software development may be replaced by some combination of artificial intelligences, but to me that still looks like the logic of infinite recursion. In the meantime, the strength of a software development company will remain in its people.

The Importance of the Translator
Ask a German person a question that requires a considered answer. Often the first word you’ll hear back will be ‘Ja’ – not because the answer to your question is ‘Yes’, but because ‘Ja’ is a common way to say ‘Well…’ in German. If a translator doesn’t know this, an international meeting can end in tears.

Translators typically understand the source language very well, and speak the target language as their mother tongue. Good software engineers can do the same: They can understand real-world problems well enough to explain them to machines that have zero tolerance for ambiguity.

The job of translation is an atomic one – you can’t use one translator’s brain to understand and another’s to translate. It all happens inside one head. Software engineering cannot be reduced any further. A single software engineer’s brain is the crucible in which the two competencies – the customer’s and the engineer’s – are brought together, right at the end of the complexity chain.

- brendan lawlor

Those likely to be for:
http://www.martinfowler.com/bliki/ModelDrivenArchitecture.html
http://alistair.cockburn.us/

Those likely to be against:
http://ianwij.com/weblog/articles/Writing_Code_Is_Stupid.aspx

