Archive for category Uncategorized
I thought I’d share a technique I’ve found very useful for any application of significant size with a need for automated testing. A lot of transaction processing tasks are time-sensitive, particularly in the financial or insurance domains.
This can make automated testing brittle, or even impossible, as long as your application obtains the current date and time directly from a system clock. The tip is to implement an application clock abstraction which your application always uses to obtain the current date and time. In any test, the application clock can then be used to reliably simulate an arbitrary time and date, repeatably. It’s a simple but invaluable pattern, making test results repeatable and consistent for years if necessary.
This pattern is really simple to implement and is even relevant for database only operations as the same pattern can be applied there too.
It’s usually worth ensuring your application clock can only simulate a date and time in development environments. It would be nasty for a production system to accidentally start using test values for the time and date.
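A minimal sketch of the pattern, in Python, might look like the following. The class and method names are illustrative, not from the original post; the key points are that production code only ever asks the clock for the time, and that simulation is opt-in so it cannot accidentally leak into production.

```python
from datetime import datetime, timezone

class ApplicationClock:
    """Single source of truth for the current time. Application code asks
    this clock instead of calling datetime.now() directly."""

    def __init__(self, allow_simulation=False):
        # Simulation is opt-in, so a production build can never be frozen.
        self._allow_simulation = allow_simulation
        self._fixed_time = None

    def now(self):
        # Return the simulated instant if one is set, else the real time.
        return self._fixed_time or datetime.now(timezone.utc)

    def freeze(self, fixed_time):
        # Pin the clock to a known instant; only allowed in development.
        if not self._allow_simulation:
            raise RuntimeError("Simulated time is disabled outside development")
        self._fixed_time = fixed_time

# In a test, time can be pinned to a known instant:
clock = ApplicationClock(allow_simulation=True)
clock.freeze(datetime(2012, 3, 7, 12, 0, tzinfo=timezone.utc))
assert clock.now().year == 2012
```

Because every test run sees exactly the same instant, assertions about cut-off dates, billing periods and the like stay stable no matter when the suite executes.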
As a software architect I’m constantly exposed to people quoting high code coverage metrics as a measure of test quality. Eh, no… A code coverage metric is not a measure of test quality! Code coverage provides an ability to determine what parts of your application are not tested. It has no value at all in determining how well tested your application is. After all, it’s not particularly difficult to obtain high code coverage metrics without providing any degree of testing.
Thus the paradox: a lower code coverage metric is actually more valuable than a high one, as it provides more guidance on what needs to be tested. In fact, I would argue it’s hard to imagine a code coverage value of over 70% being of any value whatsoever. Code coverage only measures how much of your application is exercised within the context of a unit test, not whether anything is actually being tested. Some people would argue that merely exercising code within the context of a unit test is testing the code. It isn’t. But it is a mechanism that’s frequently used as a scheme to obtain high code coverage values.
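To make that point concrete, here is a deliberately bad, hypothetical example: a "test" that executes every line of the function under test, so a coverage tool reports it as fully covered, yet asserts nothing, so a wrong result would still pass.

```python
def apply_discount(price, rate):
    """Hypothetical production code under 'test'."""
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

def test_apply_discount_covered_but_untested():
    # Both branches above are executed, so coverage reports 100% for
    # apply_discount, yet nothing is asserted: if the function returned
    # price * (1 + rate) by mistake, this "test" would still pass.
    apply_discount(100.0, 0.25)
    try:
        apply_discount(100.0, 2.0)
    except ValueError:
        pass

test_apply_discount_covered_but_untested()
```

The coverage report for this code is perfect; its value as a test is nil.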
At present, there is no automated mechanism for determining unit test quality. I suspect a mechanism rather like the code quality metrics, such as cyclomatic complexity, will eventually provide some degree of measurement of automated test quality. Right now, there’s nothing. The whole area of automated testing is constantly mutating, with techniques like IoC, mock objects and parameterized tests surfacing every few years to revise what’s considered best practice.
In short, if someone quotes a figure of 100% code coverage for a non-trivial application, one might be forgiven for thinking he’s either a liar or an idiot, and quite possibly both.
Do you use Git? Do you create your .gitignore files manually?
If yes, here is an online tool that can make your life easier.
I’ve discussed using event messages to carry payload data, to help with resynchronizing a failed independent component with its loosely coupled neighbours. However, this could very easily lead to extremely large and inefficient messages for what should ideally be very simple events. That will invariably lead to performance and scalability issues, not to mention increasing the cost of provisioning. Also, since message publishers are unconcerned with how many subscribers there might be, publishing large messages is an irresponsible development practice with potentially unforeseeable consequences. I’ve outlined some alternatives we’ve assessed in attempting to resolve this issue.
The first strategy is, as we’ve discussed, including a data payload within the event itself. For example, consider an order processor component raising an event to indicate it has successfully closed an order. The event might also include some customer data, which is not strictly necessary just to indicate the order has been placed. As I’ve indicated, this strategy is really only viable for small message sizes. If we were to attempt to include an entire order with the event, the overall payload size would dwarf the event notification and result in the issues we’ve already mentioned.
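As a hypothetical illustration of why this only works for small payloads, compare an event carrying a couple of optional fields with one carrying the entire order (all field names and values here are invented for the sketch):

```python
import json

# Strategy one: the payload rides inside the event itself.
# Fine for a handful of fields:
small_event = {
    "event": "OrderClosed",
    "orderId": "12345",
    "customerName": "A. Customer",  # small optional payload
}

# Embedding the entire order, however, turns a notification into a bulk
# data transfer that every subscriber must receive, needed or not.
bloated_event = {
    "event": "OrderClosed",
    "orderId": "12345",
    "order": {
        "customer": {"id": "C-987", "name": "A. Customer", "email": "a@example.com"},
        "lines": [{"sku": f"SKU-{n}", "qty": 1, "price": 9.99} for n in range(50)],
    },
}

small_size = len(json.dumps(small_event))
bloated_size = len(json.dumps(bloated_event))
```

Even with a modest fifty-line order, the serialized event is an order of magnitude larger than the notification it exists to deliver.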
A second, leaner approach is to include a REST URL referencing the payload data. Subscribers can then choose to consume the related payload data as required, using the REST URL. In our example, a downstream payment processor component might need customer data from the order. By requesting the order’s customer details via the supplied REST URL, the payment processor potentially has access to the entire order. I like this pattern as it virtually eliminates the message size overhead of any payload and also leverages the caching benefits of REST, ensuring a flexible, efficient mechanism for very large payloads.
Note: As a tip, include a version-determining parameter in the URL, to ensure the reference within the event remains immutable in the face of any later changes to the data.
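A sketch of strategy two, with invented URLs and field names: the event stays tiny, and the version query parameter pins the reference to the order as it existed when the event was raised.

```python
# Strategy two: the event carries only a REST URL pointing at the payload.
# The "version" parameter makes the reference immutable: even if the order
# is later amended, this URL still resolves to the state at publish time.
order_closed_event = {
    "event": "OrderClosed",
    "orderId": "12345",
    "orderUrl": "https://orders.example.com/orders/12345?version=7",
}

def payload_url(event):
    # A subscriber that needs the data dereferences this URL on demand
    # with any HTTP client; subscribers that don't care ignore it.
    return event["orderUrl"]
```

Because every interested subscriber requests the same versioned URL, an HTTP cache between the subscribers and the order service can serve most of those requests without touching the publisher at all.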
The third strategy is a little more difficult to explain and involves the use of a mediator object. A mediator is an abstraction used to lower direct coupling between interdependent components. The mediator is responsible for abstracting any communication between components and can be implemented in many different ways, depending on its use and the data involved. Communication can be heavily cached, synchronous or asynchronous; it really doesn’t matter. It is sufficient for the consuming component to know that the mediator is responsible for executing a required task which is ultimately under the purview of another component. A mediator may make a variety of communications, but only ever to a single component, so ownership of the mediator is clear.
This strategy should only be considered where some behaviour is required, not just data, and that behaviour is the responsibility of another component. Within our order example, it might be obtaining the customer’s billing address for the payment processor, say. This counts as behaviour because the customer’s billing address is unlikely to be a property of the newly closed order, so it requires a lookup based on the customer (a responsibility of the customer management component). The mediating object might orchestrate this process, acquiring data from the customer management component as needed.
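A minimal sketch of the mediator, with invented class and method names: the payment processor depends only on this one abstraction, while the mediator alone talks to the customer management component (and only to that one component).

```python
class BillingAddressMediator:
    """Owns all communication with the customer management component.
    Consumers neither know nor care whether a result is cached, fetched
    synchronously, or assembled from an orchestrated lookup."""

    def __init__(self, customer_service, cache=None):
        # customer_service stands in for whatever transport is used:
        # a REST client, a message queue facade, an in-process call...
        self._customer_service = customer_service
        self._cache = cache if cache is not None else {}

    def billing_address_for(self, customer_id):
        # Cache lookups so repeated requests don't hit the other component.
        if customer_id not in self._cache:
            self._cache[customer_id] = \
                self._customer_service.lookup_billing_address(customer_id)
        return self._cache[customer_id]

# A stand-in customer management component, purely for illustration:
class FakeCustomerService:
    def lookup_billing_address(self, customer_id):
        return {"customer": customer_id, "street": "1 High Street"}

mediator = BillingAddressMediator(FakeCustomerService())
address = mediator.billing_address_for("C-987")
```

Swapping `FakeCustomerService` for a real client changes nothing for the payment processor, which is exactly the decoupling the mediator exists to provide.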
Just got back from another day at QCon London 2012. The previous day was very good, but this morning I wasn’t quite sure what to expect from the sessions, and sure enough it was a mixed bag. Originally I’d planned to see Architectures You’ve Always Wondered About, but since Martin Thompson’s introduction to the Finance track was so good I thought I’d give it a go.
Today was the first day of the conference, and it was a very good day. I must say it largely met my expectations. I spent part of the day on the Architecture track and part on High Availability. But let’s start at the beginning, with the keynote. Btw, no photos, my phone camera is useless 😦
I am in London this week for QCon London 2012. Expectations are high and I hope it lives up to them. So far so good, I have to say. I have attended two tutorials so far, and if the quality is kept at this level I should be in for a treat.