Friday, September 22, 2006

Open Source vs. Proprietary

It was about 203,563 years ago, on a Tuesday, at about 11 in the morning. The sun was bright in the sky and there was a gentle breeze from the south. Og had a brainwave. So sick was he of burning his hands while trying to cook a rat that he picked up a stick, rammed it through the rat and held it over the fire. Beautiful, he thought! Now Ms Oog would like him, since he'd be able to pick lice from her back without burnt hands! Then the stick burnt through and the rat fell in the fire. Damn.


Brian was watching this and had another brainwave. He soaked the stick in water and did the same. Perfection. Crispy rat, non-crispy hands, and he ran off with Ms Oog, much to Og's disgust.


Right there, at that point, competition between humans began. And so did innovation.


Roll forward 203,563 years and competition is alive and well, and one of the most hotly debated concepts around. Especially in software. Two seemingly polarised camps have emerged, tribes if you will: open source vs. proprietary. Both are powerfully represented and passionately defended. Often the arguments between the camps are vitriolic and personal. From the open source camp comes the charge that the proprietary tribe is evil and self-serving. From the proprietary group come charges of tree hugging, and so on.


But both have solid arguments and neither is truly wrong. Open source brings together large groups of minds that can refine ideas on a huge scale. From proprietary comes innovation driven by competition. Think back to all the technological advances made during wartime. Would jets have developed so quickly? Would computers have become so prevalent?


Imagine a world without competition. Imagine, if you will, that Windows ME was the last desktop operating system. No competition to drive the development of new software. OK, you can stop screaming now. Without competition, would we be driven to develop new advances?


I have never been 100% on one side or the other. As a system integrator I dislike closed systems. As a business person, I like competitive edge.


So what am I talking about? My greatest area of interest is standards. For me, SOA provides an interesting balance between open source and proprietary methodology. Define the interfaces and communications between systems in a standards-based, community-agreed way, and treat the implementation of the service (the black box portion, if you will) as proprietary. Within healthcare we have HL7 as the primary developer of standards. Recently a new Special Interest Group (SIG) has been started: SOA for HL7. Reviewing the draft documentation from the group, I see that the primary intent is to describe the service definition without technological specifics (as far as possible) and then leave the implementation of the service up to individual companies and providers.
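That split between a community-agreed definition and a proprietary implementation can be sketched in code. Here is a minimal Python sketch, assuming a made-up service; the names (`PatientLookupService`, `AcmePatientLookup`) are purely illustrative and not taken from any HL7 artefact:

```python
from abc import ABC, abstractmethod

# The community-agreed service definition: only the interface is standardised.
class PatientLookupService(ABC):
    @abstractmethod
    def find_patient(self, patient_id: str) -> dict:
        """Return demographic data for the given patient identifier."""

# A vendor's proprietary implementation: the "black box" behind the interface.
class AcmePatientLookup(PatientLookupService):
    def __init__(self, records: dict):
        self._records = records  # stand-in for a proprietary data store

    def find_patient(self, patient_id: str) -> dict:
        return self._records.get(patient_id, {})

svc = AcmePatientLookup({"123": {"name": "Jane Doe"}})
print(svc.find_patient("123"))
```

Any vendor can swap in its own class behind the same interface, which is exactly the point: the contract is shared, the internals are a competitive edge.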


For me this is a perfect balance between the two tribes; it allows me to ensure the work I do benefits everyone, whilst leaving me free to innovate and develop a competitive edge. I can satisfy my desire to work to a common good by involvement in the organisation AND still provide benefit to my company.


Within the industry we have discussed standards for years, but it is only recently that I have begun to feel that this is truly becoming a realistic possibility. So I will continue to harp on about standards to my customers and colleagues (sorry guys)!




Powered by Qumana


Friday, September 15, 2006

HL7 Ballot Site

Well, the latest version of the v3 ballot is up on the HL7 site. Actually it might have been up for a while, but I've been on holiday! (A rare thing, I know!)


http://www.hl7.org/v3ballot/html/welcome/introduction/index.htm






Friday, September 08, 2006

Fast or slow

You know, as developers we have become lazy. To be fair, it's not our fault. It makes sense for us to be lazy; it is economically responsible for us to be this way. Consider: as developers, should we spend time shaving 5% off the overall speed of our applications when we can install the application on a fast machine that will not even register the difference? Does that make sense for our customers? In fact, much of my time is spent convincing people it doesn't. Development time costs more than processing power, memory is cheap, and networks are so fast these days, so why bother?


But then, just when you think it will never matter again, circumstances come along that change your viewpoint and you really wish things were more carefully written. For example, a customer in the far north has a number of remote sites connected by satellite phone. And dog sleds. I kid you not. So all the "slow" applications that send large volumes of data suddenly become problematic. Even worse are web apps. Fat clients might be easier, but remember the dog sleds? Good luck installing them.


Many developing countries are bypassing copper and fibre networks and moving directly to wireless and satellite. Heavy network requirements need to be examined in these cases.


So I have been thinking: does AJAX give us a way through? Develop a virtual fat client web app that makes heavy use of client-side caching, and then use AJAX to refresh data at the start and end of a process.
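The pattern boils down to: one round trip to pull data at the start of a process, local work against the cache, one round trip to push changes at the end. In a real web app this would be JavaScript and XMLHttpRequest; the sketch below uses Python purely to keep it runnable, and `CachedSession`/`FakeServer` are invented names:

```python
class CachedSession:
    def __init__(self, server):
        self.server = server  # any object with fetch_all() and push(changes)
        self.cache = {}
        self.dirty = {}

    def start(self):
        # One round trip at the start of the process.
        self.cache = dict(self.server.fetch_all())

    def read(self, key):
        return self.cache.get(key)   # served from cache, no network traffic

    def write(self, key, value):
        self.cache[key] = value      # still no network traffic
        self.dirty[key] = value

    def finish(self):
        # One round trip at the end of the process pushes only the changes.
        self.server.push(self.dirty)
        self.dirty = {}

class FakeServer:
    """Stand-in for the remote end, counting round trips."""
    def __init__(self, data):
        self.data = data
        self.round_trips = 0

    def fetch_all(self):
        self.round_trips += 1
        return self.data

    def push(self, changes):
        self.round_trips += 1
        self.data.update(changes)

server = FakeServer({"patient": "Og"})
session = CachedSession(server)
session.start()
session.write("patient", "Oog")
session.finish()
print(server.round_trips)  # two round trips for the whole process
```

Over a satellite link, collapsing dozens of chatty requests into two exchanges like this is the difference between usable and unusable.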






Wednesday, September 06, 2006

HL7 Object Model

So I can't believe it's been a year since the normative standard of HL7 version 3 was released. It seemed years in the coming, and now here it is.

But that's obviously just the start. So we have the RIM. We have DMIMs. And we have schemas. In fact we have a lot of schemas. Thousands, in fact. And many different data types. So the next big question is: how do we use them?


Well, that probably depends on what you are trying to achieve. (As a side note, that's one of the reasons there are so many: controlled flexibility.) Interested in a detailed healthcare data model around the laboratory? Try the Lab Domain DMIM. Interested in allowing a doctor to request an appointment with a hospital for a patient? Look at the interactions for the Scheduling Domain. And that's where the benefit and pitfall of the Version 3 paradigm lie. It can be used to describe just about anything in the healthcare space, so where do you start?


One area of interest is the generation of class libraries that represent the data contained within the interaction messages. There are many reasons why you would want to do this, such as simplifying the development of software that needs to communicate in the HL7 v3.0 world. But there are also many situations where this is hardly ideal, say when you want to extract a single value from a message.


I have enough reasons to make it worthwhile. So: if you try to generate a series of class libraries directly using a tool, you end up with an awful lot of classes. Not so helpful. With thousands of data types being represented by thousands of classes, you might as well use XPath, given the memory requirements. A more subtle approach may be required.
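To see why path expressions are attractive for the single-value case, here is a sketch using Python's standard library. The `urn:hl7-org:v3` namespace is the real HL7 v3 one, but the message fragment itself is heavily cut down and invented for illustration, not a real interaction:

```python
import xml.etree.ElementTree as ET

# A simplified, made-up fragment in the HL7 v3 namespace -- just enough
# structure to show the point, not a valid interaction.
message = """
<PRPA_IN101001 xmlns="urn:hl7-org:v3">
  <patient>
    <id extension="12345" root="2.16.840.1.113883.19"/>
    <name>Og Ogsson</name>
  </patient>
</PRPA_IN101001>
"""

ns = {"hl7": "urn:hl7-org:v3"}
root = ET.fromstring(message)

# Pulling single values with path expressions: no class library needed.
patient_id = root.find("hl7:patient/hl7:id", ns).get("extension")
name = root.findtext("hl7:patient/hl7:name", namespaces=ns)
print(patient_id, name)
```

Two lines of path navigation against the raw document, versus instantiating an object graph of thousands of generated classes: for a quick extraction, the path wins every time.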


One area I am currently investigating is the generation of base libraries of data types. I think the best approach is to start with the core schemas in the HL7 set. Only data types that have actual implementations will be included (so if an element is cloned but never used, it's out). Once these have been modelled, it's on to the CMETs. Then we can start looking at software factories, or tools to generate domain-specific classes.
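The generation step itself can be sketched very simply: take a description of a data type and emit source for a class. The toy generator below is my own illustration, not any real HL7 tooling, and the cut-down `II` (instance identifier) style type is deliberately minimal compared with the real datatype schemas:

```python
# Toy code generator: given a type name and typed fields, emit Python
# dataclass source. Real HL7 datatype schemas are far richer; this only
# sketches the shape of the "base library" generation step.

def generate_datatype(name, fields):
    lines = ["@dataclass", f"class {name}:"]
    for field_name, field_type in fields:
        lines.append(f"    {field_name}: {field_type}")
    return "\n".join(lines)

# e.g. a cut-down instance-identifier style type -- illustrative only
source = generate_datatype("II", [("root", "str"), ("extension", "str")])
print(source)

# The generated source can be compiled and used straight away.
namespace = {}
exec("from dataclasses import dataclass\n" + source, namespace)
ident = namespace["II"](root="2.16.840.1.113883.19", extension="12345")
```

A real generator would of course read the schemas rather than hand-fed tuples, but the principle is the same: model the core data types once, generate once, reuse everywhere.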


The trick here is going to be namespacing, to prevent redevelopment or multiple representations of the same data type. Once the basic platform is created, tools will hopefully provide the way forward to creating specific, re-usable, and above all fast classes for serialising/deserialising HL7 interactions. Beyond that? Auto-generation of web service endpoints, databases and fully interactive class systems (i.e. the system should be aware of associated messages).


I'll let you know how it goes!


Addendum:
Steve Hart has posted on this subject as well at http://www.hartsteve.com/2006/09/11/hl73-xml-msg-receipt-handling/

