Archive for January, 2006

Apache Geronimo

Recently we’ve been hearing good news about the Apache Geronimo project. Towards the end of 2005, Apache Geronimo became a fully certified J2EE 1.4 server. Then IBM announced that it was basing WebSphere Application Server Community Edition on Apache Geronimo, and a week later Virtuas announced support for it.

Earlier this week, Covalent started to offer commercial support for Apache Geronimo, stating that their customers have been asking for it. And if IBM and BEA decide to base their application servers on Geronimo, the project will have a guaranteed bright future. This will surely put some pressure on JBoss, currently the most popular open source application server (some will add “despite Marc Fleury” :)).

What’s the good news for us in all this? First of all, this type of competition is necessary for a healthy industry: both projects are run by brilliant people who will continue to improve the tools that we use. Secondly, commercial offerings such as the WebSphere or WebLogic application servers may end up being based on open source software, which would increase the credibility and the reach of Apache Geronimo. IBM HTTP Server has long been based on the Apache HTTP Server; maybe IBM thinks it’s time to do the same thing for its application server.

- Yagiz Erkan -

Leave a comment

Java Scheduling with Redwood CronacleBeans

We have been working with Redwood Software’s CronacleBeans technology for about a year now, and I would like to start spreading the good word about this most useful piece of technology.

CronacleBeans comes as part of the Cronacle Enterprise Scheduler suite of software and allows full integration between the Java world and the enterprise scheduling world. What made CronacleBeans interesting for us was the fact that our development teams could develop Java-based services (POJOs, EJBs, whatever) and then, via a simple API (the heart of Cronacle is built on PL/SQL, but Redwood provides a Java API which maps directly to the PL/SQL API), expose them as jobs that the enterprise scheduler can execute.

Read the rest of this entry »


3 Comments

Convention over Configuration

Henry Ford, who founded the Ford Motor Company in 1903, had a great philosophy: get the laziest man to do a job and he’ll find the quickest way to do it. It turns the negativity of laziness on its head and makes it something positive.

Laziness is not a bad thing in software development either, it turns out. I say this because, like most people, I am lazy. I don’t like to write excessive code and I don’t like too much configuration overhead in my frameworks; it slows my creative process. That’s why I am an advocate of Convention over Configuration.
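
To make the idea concrete, here is a toy sketch (my own illustration, not taken from any particular framework) of what a convention can buy you: instead of declaring every class-to-table mapping in a configuration file, a framework can simply derive the table name from the class name.

```java
public class ConventionDemo {

    // Convention: an entity class called CustomerOrder maps to a table
    // called customer_order, with no mapping entry required anywhere.
    static String tableNameFor(Class<?> entityClass) {
        String name = entityClass.getSimpleName();
        StringBuilder table = new StringBuilder();
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if (Character.isUpperCase(c) && i > 0) {
                table.append('_');
            }
            table.append(Character.toLowerCase(c));
        }
        return table.toString();
    }

    static class CustomerOrder {}

    public static void main(String[] args) {
        System.out.println(tableNameFor(CustomerOrder.class)); // prints customer_order
    }
}
```

Configuration is then only needed for the cases that break the convention, which should be the minority.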

Read the rest of this entry »

Leave a comment

Performance Tuning with JProbe and PerformaSure

JPROBE

As any developer who has built Java applications will tell you, memory and performance issues can be extremely difficult, frustrating and time-consuming to detect, diagnose and ultimately resolve.

DSI employs Quest Software’s market-leading Java performance tuning toolset, the JProbe Suite, to help our development and test teams identify performance, memory, threading and code coverage issues down to the individual line of code.

The JProbe Suite contains three tools to assist in your investigations:

JProbe Profiler – To identify method and line level performance bottlenecks
JProbe Memory Debugger – To investigate memory leaks and excessive garbage collection
JProbe Coverage – To measure code coverage after testing

DSI have used JProbe Profiler and the Memory Debugger on both highly scalable real-time applications and high-throughput, highly performant batch-type applications. Typically, performance bottlenecks or memory issues are identified during load testing of these types of applications. Once an issue has been identified, we re-run the load tests in a JProbe Profiler-enabled environment. This allows us to establish a baseline from which a report can be generated. The report highlights method hotspots, indicating the number of method calls, the cumulative time spent in each method and the explicit method time. A graphical representation of the method call stack is also available, displaying the ‘flow’ of the application in an easy-to-understand, colour-coded manner.

This report is analysed by a lead developer or application architect who has an expert understanding of how the application should behave. Problematic code is then identified, down to the line of code if necessary. A code fix is applied and the load test is re-run. We then compare the baseline report against the report generated by the new, improved code. Any change in code performance is displayed as a plus or minus percentage difference from the baseline. This process continues until the SLAs are met.

We have used JProbe Profiler to identify and resolve the following types of issues:

  • Inefficient database connection pooling
  • Inefficient use of the Collections API
  • Inefficient use of the Calendar object, which was eventually replaced with the Joda-Time open source library (see the sketch after this list)
  • Inefficient looping
  • Redundant code
  • Inefficient object cycling
  • Inefficient IO
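
To illustrate the Calendar point above, the sketch below shows the flavour of that kind of replacement (a simplified, hypothetical example rather than the actual application code): Joda-Time’s immutable DateTime avoids repeatedly creating and mutating full Calendar instances for simple date arithmetic.

```java
import java.util.Calendar;
import java.util.Date;

import org.joda.time.DateTime;

public class DateArithmetic {

    // java.util.Calendar version: obtains and mutates a full Calendar
    // instance on every call.
    public static Date thirtyDaysFromNowWithCalendar() {
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.DAY_OF_MONTH, 30);
        return cal.getTime();
    }

    // Joda-Time version: DateTime is immutable and plusDays() returns a
    // new instance, without the Calendar machinery.
    public static Date thirtyDaysFromNowWithJodaTime() {
        return new DateTime().plusDays(30).toDate();
    }
}
```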

Thankfully, we have been called upon to use JProbe Memory Debugger only once. During load testing it was observed that the performance of a batch-type application was decreasing over time, and the memory graph generated by the profiler made it clear that the issue was memory-related: memory usage was increasing exponentially, causing excessive garbage collection. After running the application load test through the Memory Debugger and an in-depth analysis of the resulting report, the culprit was found.

The application used multiple levels of caching: Hibernate query caching and two levels of caching implemented with OSCache. The application in question was a batch-type application, the nature of which was that each ‘job’ was unique. The report indicated that the cache maintained by Hibernate query caching held ‘hard references’ to each and every query (String object) of a certain type invoked by the application. The object graph maintained by this cache got ‘deeper’, and consumed more memory, over the lifetime of a batch run. We disabled Hibernate query caching, re-ran the load test, and the memory issue disappeared. This issue would have been nearly impossible to identify without a tool like JProbe Memory Debugger.

As a side effect, we started thinking about the caching approach used in the application. Quite soon we realised that we did not in fact need the two levels of object caching provided by OSCache. The batch application had ‘inherited’ a set of services that were written for a real-time system. The nature of the real-time application required caching, but this was not the case for the batch-type application. Object caching was switched off at the ‘inherited’ service level, saving further memory. Lower memory consumption generally means better performance.
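
For reference, turning Hibernate’s query caching off is a one-line configuration change. The snippet below is a generic, hypothetical sketch using Hibernate 3 package and property names with programmatic configuration; it is not the actual setup of the application described above. The query cache is disabled by default, so in practice the change usually amounts to removing the property (and the Query.setCacheable(true) calls) that enabled it.

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionFactoryBuilder {

    // Hypothetical sketch: build a SessionFactory with the query cache
    // explicitly switched off. Setting the property to "false" simply
    // documents the decision, since the query cache is off by default.
    public static SessionFactory build() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        cfg.setProperty("hibernate.cache.use_query_cache", "false");
        return cfg.buildSessionFactory();
    }
}
```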

Tools like JProbe Profiler and the Memory Debugger do more than help you fix the immediate problems you may be having; they also really get you thinking about your code. You learn that not all solutions are applicable in all cases – ‘horses for courses’, so to speak.

PERFORMASURE

In DSI we use Quest Software’s PerformaSure software during the production implementation phase of a project. A lead developer or application architect will go on-site to a customer, install PerformaSure in their production environment and monitor application performance during the rollout of the application.

Depending on the application, we generally recommend a two-week rollout period before the ‘go-live’ date. Our customised licensing agreement allows us to run PerformaSure in any type of production environment, regardless of the operating system, CPU count, number of application servers in the cluster or number of physical machines. This allows our team of experts to observe the application under a combination of virtual and real load in a production environment. Reports can be generated for the consumption of both business and technical management, and the level of detail contained in them gives both groups a strong sense of confidence in the robustness and performance of the application.

The PerformaSure agent can run passively (with almost 0% overhead on the servers) in the production environment. If and when a potential issue arises, our PerformaSure experts can activate the system agent from a remote workstation. The agent then begins collecting metrics and relaying them to the PerformaSure analytical system, the Nexus; while the system agent is active it represents less than 5% overhead on the servers. The metrics relayed to the Nexus can then be analysed via the Workstation, whose reports can render specific information on each technology in the stack, e.g. JNDI, servlets, EJBs, MDBs, Apache Struts or tag libraries. The ability to have this level of information at your fingertips in a production environment is priceless. It gives the customer a sense of security and gives the development team a ‘third eye’ that allows them to react to issues if and when they happen.

We generally leave PerformaSure running in the production environment for a month, so that even when we are not on site we can activate the system agent remotely and connect to the Nexus to assist in problem identification and resolution. We feel that this gives real meaning to any warranty we give with our software.

- Jay

1 Comment

Data – Insurance Industry

Having recently read the Next Big Thing article by Anthony O’Donnell, I was delighted to see that ‘the future of the insurance industry seemingly will be shaped by data-related technologies‘.

Two key points to take from this article, in my opinion, are:
1. Data Visualization Tools
2. Data Integration

Data Visualization is not a new concept at all – we’ve always known that “a picture tells a thousand words” – and the correct and appropriate chart, graph or diagram is so much more intuitive than a presentation of raw data. It will be interesting to see the long-term impact this will have on the BI areas of the Insurance Industry, in particular CRM, Enterprise Risk Management and Enterprise Performance Management. In the short term, many Enterprises will have to address their data modelling and/or data integration projects before seeing the true value-add of these additional data visualization tools. I also wonder if some of these new tools will be beyond the realm of traditional Business Users and better suited to our Six Sigma Master Black Belts! Time will tell.

Data Integration will continue to play a big part for the Insurance Industry this year, especially for Enterprises that still rely on legacy systems, as they carefully manage the migration of particular core modules to newer technologies as well as the integration between these legacy systems and newer applications. As Enterprises move to these newer applications, I hope they will use the opportunity to develop their BI strategy (or improve on it, if one already exists) to better assist Business Users and ultimately improve the business.

As I mentioned in my previous Dimensional Modelling piece – exciting times ahead – and it looks like the Insurance Industry is one of the key business segments to be working in.

~ Maria

Leave a comment

Don’t forget your Babel fish if you want to go to work….

Forgive my rip-off of two cultural icons, folks; I promise the title will be the last reference to the works of Messrs Adams and Moore. Just a few rambling, unconnected and quite possibly irrelevant thoughts…

This may be my first and only foray into this technologists’ blog, speaking as one of only 4 non-technologists in the company. There are currently 100-odd (and counting – kinda like your Google Mail capacity) other individuals who are right now, to one degree or another, feverishly busy producing quality applications for critical enterprise-wide business solutions (per the marketing blurb).

I had until now considered myself as having grasped at least a rudimentary inkling of what it was all about. ‘It’ being of course the whole software development thing.

This modicum of insight had supposedly been achieved (or so I thought) through a little bit of osmosis, a little bit of active learning and just a smidgen of common (?) sense.

Alas I was sadly mistaken as to the degree of my tech literacy, and am now, as a result of the entries to the Blog so far, coming to the inevitable conclusion that there is a whole other universe operating right outside my office door, a veritable parallel dimension, light years in expanse and constantly expanding away from my static little world at an alarming rate.

This other universe is populated not just by my co-workers, (who traverse to my universe at will e.g. when they want to arrange their annual leave or get paid), but is also filled with strange phenomena straight out of a BBC 4 documentary, such as ‘Web 2.0’, and ‘dimensional modelling’ and all manner of other weird and presumably wonderful wonders.

I realise now after 5 years in the Software industry, and having spent many many hours in that alternate universe attending meetings where the majority of the subject matter bandied about was, to put it politely, gibberish to my layman’s mind, that I have had to train myself to listen carefully and recognise and extract the pertinent information. This usually consists of two or three HR type actions such as ‘Find me a Hyper specialist contractor for a 3 day role who has 25 years experience in developing sub-orbital satellite guidance software in blurg on a Unix platform, with at least seven of the following seven development languages – insert random alpha numeric characters in groups of 3 or 4 here___ ___ ___ ___ etc. …Oh and I need him/her here by tomorrow or else forget about it…’ or perhaps, ‘buy more tea’.

Right then…

The ‘business user’ referred to in an earlier entry by Maria presumably doesn’t care how many rows and columns there are in a particular database, or about table joins. They ‘just’ want the software to provide them with the information they need in a timely manner, and ultimately to help them add to their bottom line, improve their service, or meet whatever their business drivers happen to be.

So where should the tech/non-tech interface, or common ground, be? Should business folk get more tech-literate, or should technical people English-up a bit? You guys have the upper hand, with your own suite of languages and protocols and processes and so on. I still can’t program my VCR. And I don’t have time to go back to college.

I do remember that, while in college, my IT lecturers were proposing the new concept of the ‘hybrid manager’: a tech-savvy business type, neither engineer nor accountant, who could seamlessly integrate (see, I have acquired some of the lexicon) with both worlds. And as you can tell from the plethora of hybrid managers out there, this idea had legs…

We at DSI are fortunate to have excellent analysts, project managers, architects and so on, who do a very good job speaking to clients about their IT needs and produce coherent and comprehensive requirements documentation, eventually leading to the development of high-quality IT business solutions. Still, I suspect it’s not always the technical people in organisations who make the buying decisions. I have had my eyes involuntarily glaze over on so many occasions that I have to believe there are other poor souls out there. If not, someone direct me to the nearest asylum.

And as ‘they’ say, “as the amount of data once measured in Megabytes/Gigabytes/Terabytes” (yes – and Petabytes) grows, and as the languages and processes you folks devise to transform it into business-useful information multiply, spare a thought for those of us who still think of Struts as something peacocks do. When you see my eyes glaze over, it’s either because I need you to repeat what you’ve just said very slowly and with pictures, or because I’ve suddenly remembered I’ve forgotten my Mother’s birthday – again. Now, where are my osmosis wellies…? The ones with the big holes in…

Leave a comment

The End of the Complexity Chain

The software business is different. Not in the never-mind-the-numbers madness of the .com bubble – eventually you have to turn a profit. What I mean is that doing business in the software sphere is a unique experience because of the special place the software industry occupies in the world’s money-go-round: We sit at the very end of the complexity chain.

What’s the Complexity Chain?
When any other industry looks to increase productivity, lower running costs or save time, it looks for suppliers who can suck some complexity out of their operations, ideally leaving only core competency activities behind. Each of these suppliers in turn tries to do the same so they can keep their overheads down and compete on value. The chain goes on from company to company, each extracting complexity from customers and dumping it on suppliers.

But the chain can’t go on to infinity. The complexity has to pool somewhere, and in my opinion it usually ends up in the IT sector. We are there to facilitate or automate every kind of business process imaginable, and allow other industries to concentrate on their core activities. We cannot play the same game – or at least not to the same extent – because in order to provide our particular service, we must make our customer’s competency our own. We have two main jobs to perform:

  • Understand our customer’s business
  • Understand the computing business

We can outsource paper recycling, printer maintenance, travel arrangements and so on but the first thing we do when we begin a project is to absorb the complexity of the business process we are about to automate or facilitate.

What About Software Tooling?
We don’t have to develop in machine code using vi of course. We can use any number of IDEs to make the job easier, not to mention a raft of frameworks and libraries to save time and trouble. These tools will simplify the job of software engineering by allowing us to concentrate on the tough task of translating business requirements into computer programs. But the best that a tool vendor can do is to sell finely crafted chisels to experienced artisans. He cannot produce the goods for us (beware those that say they can). He can’t change the bottom line: software companies have to absorb the core competencies of their customers while retaining their own.

Someday the artisans of software development may be replaced by some combination of artificial intelligences, but to me that still looks like the logic of infinite recursion. In the meantime, the strength of a software development company will remain in its people.

The Importance of the Translator
Ask a German person a question that requires a considered answer. Often the first word you’ll hear back will be ‘Ja’ – not because the answer to your question is ‘Yes’ but because ‘Ja’ is a common way to say ‘Well…’ in German. If a translator doesn’t know this, an international meeting can end in tears.

Translators typically understand the source language very well, and speak the target language as their mother tongue. Good software engineers can do the same: They can understand real-world problems well enough to explain them to machines that have zero tolerance for ambiguity.

The job of translation is an atomic one – you can’t use one translator’s brain to understand and another’s to translate. It all happens inside one head. Software engineering cannot be reduced any further. A single software engineer’s brain is the crucible in which the two competencies – the customer’s and the engineer’s – are brought together, right at the end of the complexity chain.

- brendan lawlor

Those likely to be for:
http://www.martinfowler.com/bliki/ModelDrivenArchitecture.html
http://alistair.cockburn.us/

Those likely to be against:
http://ianwij.com/weblog/articles/Writing_Code_Is_Stupid.aspx

Leave a comment

Data – Dimensional Modelling

Having read the very interesting article on MediaHype 2.0, I thought I should share a few words about Dimensional Modelling, especially since the in-vogue phrase “Data as the next Intel Inside” was referred to. Yes, everyone knows that data is fundamentally the cornerstone of the Enterprise, but many Enterprises still struggle to maximise the knowledge they can extract from their Megabytes/Gigabytes/Terabytes (and ultimately Petabytes!) of data.

Most applications are built to insert, update and delete records as efficiently as possible, but this unfortunately comes at a cost to querying and reporting. Third Normal Form databases are not easy for Business Users to navigate, and a simple request for a few columns and rows of data can result in complex queries with multiple outer joins across many tables. Business Users often want to compare a KPI at a particular point in time with the same KPI in a different time frame, which adds to the complexity of the queries. Additionally, the database being queried may have many millions of rows, yet we often can’t add indexes to assist those queries because doing so could affect our OLTP application.

It would be so much easier for our Business Users if we modelled the data to support reporting requirements in a dedicated reporting database – a Data Mart or Data Warehouse – and in my experience Dimensional Modelling is the key to delivering what our Business Users need.
Instead of juggling many tables and multiple complex joins, we could have all the necessary values readily available to the Business Users in a Dimensional Model. (Transporting the data into the Dimensional Model, i.e. the ETL process, is a topic for another day.)

Dimensional Modelling removes many of the table joins associated with traditional ER modelling, reducing complexity and the potential for mistakes from incorrect joins, and improving performance, since table joins slow queries down.
Instead of struggling to aggregate measures by Month, we have already organised the data and facts to allow this type of analysis by Month, along with Day, Year, Quarter etc., as well as other predefined date attributes that the business will use, e.g. Holiday or Weekday. (Yes – the Date Dimension is one of my favourites!)
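
To make that concrete, here is an illustrative monthly roll-up against a simple, hypothetical sales star schema, run through plain JDBC (the table and column names are invented for illustration, not taken from a real project): the fact table joins once to the Date Dimension, and the monthly analysis falls out of a single GROUP BY over pre-computed date attributes.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MonthlySalesReport {

    // Hypothetical star schema: SALES_FACT joined to DATE_DIM, whose rows
    // carry pre-computed attributes such as CALENDAR_YEAR and CALENDAR_MONTH.
    private static final String MONTHLY_SALES =
        "SELECT d.calendar_month, SUM(f.sales_amount) AS total_sales "
      + "FROM sales_fact f "
      + "JOIN date_dim d ON f.date_key = d.date_key "
      + "WHERE d.calendar_year = ? "
      + "GROUP BY d.calendar_month "
      + "ORDER BY d.calendar_month";

    public static void printMonthlySales(String jdbcUrl, int year) throws Exception {
        Connection con = DriverManager.getConnection(jdbcUrl);
        try {
            PreparedStatement ps = con.prepareStatement(MONTHLY_SALES);
            ps.setInt(1, year);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getInt("calendar_month") + ": "
                        + rs.getDouble("total_sales"));
            }
        } finally {
            con.close();
        }
    }
}
```

Compare that with the equivalent query against a normalised OLTP schema, which would typically need half a dozen joins before the first SUM could even be written.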

We don’t have to worry about adversely affecting our OLTP application, so we can add the necessary indexes to assist our queries. Additionally, the Business Users don’t have to worry about complex joins, as all records in the fact table have corresponding records in each of the Dimensions. We also have the capability to capture data changes effectively in our Dimensions without having to use complex (or reporting-unfriendly) journal tables or logs.

The two main drivers when deciding to use Dimensional Modelling are:
1. a business need for high-performance query access that you are restricted from obtaining because of OLTP application priorities, and
2. a need for ease of use, in particular for Business Users, who primarily require analytical capabilities as well as transactional reporting.

If this has grabbed your attention, you now need to turn to Ralph Kimball, the original publisher of Dimensional Modelling techniques, for further reading. I’ll let his website speak for him.

Those of you interested in Reporting Tools will know we have had new releases from 4 of the Big Business Intelligence Vendors in Autumn 2005. The OLAP Report has an excellent article by Nigel Pendse comparing each of the 4 releases and also includes a piece on Oracle & IBM’s stance in the BI marketplace today.

So to conclude – yes, data is the fundamental cornerstone of the Enterprise, and those of us lucky enough to work in the reporting area have exciting times to look forward to as more Enterprises discover the opportunities available with Dimensional Modelling.

Maria

1 Comment

MediaHype 2.0

There has always been hype around the technologies labelled as “cool”, either because they were developer-friendly or because they were alternatives to the tools and technologies produced by the dark side in Redmond. Such technologies don’t really need the marketing push created by the big companies or the artificial coverage produced by the IT media: you hear about one from a fellow developer, you try it, you like it, and in your turn you tell people how much you like it. This is the good kind of hype. The Spring Framework is a good example of this type: initially created as a book example, it became one of the most popular frameworks.

There’s a second type of hype. Unfortunately, the market players create this one artificially when a technology cannot generate the expected buzz, and the revenue that goes with it, on its own. To give an example, our hype-o-meters indicated very high figures in the early days of Web Services. However, there was some technology behind that hype: standardization efforts, lists of problems that Web Services were supposed to solve, use cases, discernible technical boundaries and so on. All the big guns of the IT industry jumped on the hype-wagon in order to give life to a frighteningly slowing industry. But even in those days there was something behind all the talk: something for us, the developers, to look at, to play with, to discuss and to improve. After all, we are the innovators in this industry, and the hype is used to create business for us. It’s almost like having a little bit of inflation, which is important in order to encourage consumption. So I’m not totally against this type of hype when it’s backed up by some technology.

However, the recent popularity of the concepts termed Web 2.0 (which I’m unwillingly contributing to by reading, thinking and writing about them) has crossed the line of technical usefulness and started to travel into the land of marketing spectacle.

If you have heard about Web 2.0, then you have probably read Tim O’Reilly’s article on the subject. I find it ironic that it starts with: “The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. Many people concluded that the web was overhyped…” Well, I’m not sure about Web 1.0, but Web 2.0 is definitely overhyped.

In the same article, Tim O’Reilly states that the concept of “Web 2.0” began with a conference brainstorming session between O’Reilly and MediaLive International. That’s why I personally think Web 2.0 is a misnomer: Media 2.0 would convey their intention better, or even MediaControl 2.0 would be more meaningful.

There are strange mantras in Web 2.0. Apparently, “it’s an attitude, not a technology”. Yet using attractive technical terms like “design patterns”, citing new-old-cool technologies like AJAX and mentioning Service-Oriented Architecture apparently doesn’t count as getting into technology.

Another funny one is “Data as the next Intel Inside”. How is that new? We all know that data has always been the most important part of the enterprise, right? This isn’t new, let alone revolutionary. We’ve always said that the data outlives the applications we develop.

It looks to me as though a bunch of individuals got together, made a list of what is cool and successful on the web today, and then decided to name it “Web 2.0” and make it theirs. It doesn’t even seem important for the items on the list to be related. Google Maps is a killer app for me, but it is Web 2.0 for some others. At the same time, blogging and wikis are Web 2.0. Wikipedia is Web 2.0 (even though there have been serious problems recently that restricted public editing). So one may ask, “What is the relation between AJAX and Wikipedia?!”

In his article “Web 2.0 – The Global SOA”, published in the December 2005 issue of the Web Services Journal, Dion Hinchcliffe makes technically unconvincing and contradictory statements. One of them is about the software release cycle: it turns out that Web 2.0 is the end of the software release cycle. Within the same paragraph, he mentions “the end of the release cycle” and “continuous improvements”. So it is possible to improve software without releasing it? Or does it mean no testing? Furthermore, he misses one of the troublesome areas of service orientation: versioning. Everything looks perfect in a demo combining two freely available services from two different companies, but in real life companies upgrade their services. There may be minor upgrades, which are backwards compatible, and major upgrades, which are not. Is your service provider going to offer the new version of the service alongside the old one, or is it going to force you to upgrade? How will your service consumer be notified of the change? Do you notice that the service was upgraded only when your application stops working, or do you get an e-mail from an administration application?

My point is that Web 2.0 is not worth us, the technical people, wasting time on. It isn’t well thought out and it is full of contradictions. Everyone has a different definition of Web 2.0, which shows how abstract this bag of unrelated concepts is. Probably someone, or a group of individuals, owns a big bunch of Google shares, because at some point Web 2.0 starts to sound like a Google advertisement. Using “synergy”, “critical mass” or “collective intelligence” doesn’t explain it all technically. (One of the best I’ve heard is “Web 2.0 has a ballistic trajectory”. *sigh*)

:) In fact, it’s pretty obvious to me that all Web 2.0 really needs is to leverage the repurposing of synergistic, best-of-breed e-markets into more scalable, cross-platform action-items, allowing us to harness the power of the aggregation of one-to-one metrics in a way that will simultaneously optimize and extend several world-class, out-of-the-box web-readiness initiatives and give us the disintermediated mindshare we’re all after. What could be easier? ;)

- Yagiz Erkan -

2 Comments

Key Themes in Building Better Software

Twelve months ago, DSI embarked on a complete re-engineering of its software development processes and infrastructure. The way we make software is being rebuilt from the ground up, beginning with the development infrastructure we use, the processes that guide us, the practices we adopt and the way we communicate. Today, halfway through the rollout of these changes, we are already seeing the results of that effort in the form of reduced timelines, measurably higher-quality output, and an enhanced ability to deliver complex enterprise-level systems. From this vantage point, we have more clearly identified the key characteristics of our process:

  • Simplicity
  • Productivity
  • Stability

Simplicity
The drive towards making things simpler was applied both to our architecture and to our processes. We began by favouring POJO development over EJBs, and supported this direction by adopting Spring and Hibernate. Each of these open source frameworks is excellent in its own right; together they have transformed our development experience – especially in the business tier.

EJB code – particularly Entity Beans – had proven bureaucratic, repetitive and hard to test. The J2EE design patterns that were supposed to address these drawbacks were really workarounds, and some of them actually restricted our ability to make the most of Java’s object-oriented approach. Spring supported a return to basics by allowing simple, testable, object-oriented code to use the kind of services that full-blown J2EE stacks promised (declarative transactions, transparent remoting, etc.).
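
As a small illustration of that return to basics, here is a sketch of the kind of plain POJO service this makes possible (the class and method names are hypothetical, not taken from our codebase): the service has no EJB interfaces, no JNDI lookups and no container base class, and Spring can wrap it with declarative transaction advice, for example via TransactionProxyFactoryBean, purely through configuration.

```java
public class AccountService {

    // Collaborator defined here only to keep the sketch self-contained.
    public interface AccountDao {
        void debit(String accountId, double amount);
        void credit(String accountId, double amount);
    }

    private AccountDao accountDao;

    // Setter injection: Spring populates this from the bean definition.
    public void setAccountDao(AccountDao accountDao) {
        this.accountDao = accountDao;
    }

    // Intended to run inside a single transaction demarcated declaratively
    // by Spring, rather than by hand-written JTA or EJB CMT code.
    public void transfer(String fromAccount, String toAccount, double amount) {
        accountDao.debit(fromAccount, amount);
        accountDao.credit(toAccount, amount);
    }
}
```

Because the class is a POJO, the same code can be unit tested with a stub AccountDao and no container at all.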

We stripped away any documentation overhead that we could do without, using tools (e.g. CVS and Fisheye) to monitor changes to the codebase and generate reports. We have become code-centric: instead of doing things once on paper and once again in code, we go to code earlier and cut out the middleman. A key enabler of this change was a renewed emphasis on readable code, from which we can extract UML documentation where required.

Productivity
Late software is bad software. There is little point in having perfect quality if your client – or your entire market – has moved on. Enterprise applications of the kind we develop in DSI require that the codebase be accessed by many developers at once, sometimes working towards different medium-term goals. To maintain our high productivity, we have to manage this access concurrently – but safely. We chose CVS for its tried and tested functionality and its wide range of clients. CVS’s non-locking checkouts and comprehensive branching support, together with its merging functionality, allow us to develop in parallel, secure in the knowledge that we can splice parallel changes back together with minimal overhead.

To further boost our output, we develop Java projects on rails: we’ve done it all before, and we reuse that experience by classifying new projects by type (we have identified five standard types) and kickstarting them with all the boilerplate code and configuration they need to get going. Our developers can concentrate from day one on coding the business logic. We also relieve developers of as many repetitive and wasteful tasks as possible by providing tooling support, often integrated into our preferred IDE, Eclipse, as plugins.

Stability
Things change: Requirements, technologies, tools, methodologies. But the requirement for stability in the midst of this change remains a constant. If something worked yesterday, it must work today. If we could build version 2.6.2 of a project last month, we must be able to build it tomorrow. How is this possible?

When it comes to build consistency, the magic ingredients are centralized build scripts and rationalized dependency management. The key that unlocks each project, and creates deliverable artifacts from source code, is the build script stored with the project itself in CVS. Each of these build scripts makes use of one centralized library of scripts. Furthermore, our build scripts specify the project’s dependencies by referring to centralized and versioned jars and other artifacts. These mechanisms combine to ensure that build behaviour across all projects remains predictable and reproducible.

By far the most important elements of our process changes to date have been a re-emphasis on unit testing and the use of continuous integration. So much has been written on these subjects that there is little for us to add, except this: thanks to unit testing and continuous integration, we have seen regression errors drop by orders of magnitude without impacting project timelines.
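
By way of illustration, the unit tests in question are ordinary JUnit tests along the lines of the sketch below (JUnit 3.x style; the class under test is a made-up example, not real project code). Continuous integration then runs every such test on each check-in, so a regression is flagged within minutes of the commit that introduced it.

```java
import junit.framework.TestCase;

// A minimal JUnit 3.x-style test. VatCalculator is a made-up class,
// included below so the sketch is self-contained.
public class VatCalculatorTest extends TestCase {

    public void testStandardRateIsApplied() {
        VatCalculator calculator = new VatCalculator(0.21);
        assertEquals(121.0, calculator.grossAmount(100.0), 0.001);
    }

    public void testZeroNetAmountGivesZeroGross() {
        VatCalculator calculator = new VatCalculator(0.21);
        assertEquals(0.0, calculator.grossAmount(0.0), 0.001);
    }

    // Trivial implementation, purely for illustration.
    static class VatCalculator {
        private final double rate;

        VatCalculator(double rate) {
            this.rate = rate;
        }

        double grossAmount(double net) {
            return net * (1 + rate);
        }
    }
}
```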

We look forward to continuing the rollout and building on these three themes in 2006.

- brendan lawlor

Leave a comment
