Arnaud’s Open blog

Opinions on open source and standards

More on Linked Data and IBM

For those technically inclined, you can learn more about IBM’s interest in Linked Data as an application integration model and the kind of standard we’d like the W3C Linked Data Platform WG to develop by reading a paper I presented earlier this year at the WWW2012 Linked Data workshop titled: “Using read/write Linked Data for Application Integration — Towards a Linked Data Basic Profile”.

Here is the abstract:

Linked Data, as defined by Tim Berners-Lee’s 4 rules [1], has enjoyed considerable well-publicized success as a technology for publishing data in the World Wide Web [2]. The Rational group in IBM has for several years been employing a read/write usage of Linked Data as an architectural style for integrating a suite of applications, and we have shipped commercial products using this technology. We have found that this read/write usage of Linked Data has helped us solve several perennial problems that we had been unable to successfully solve with other application integration architectural styles that we have explored in the past. The applications we have integrated in IBM are primarily in the domains of Application Lifecycle Management (ALM) and Integration System Management (ISM), but we believe that our experiences using read/write Linked Data to solve application integration problems could be broadly relevant and applicable within the IT industry.

This paper explains why Linked Data, which builds on the existing World Wide Web infrastructure, presents some unique characteristics, such as being distributed and scalable, that may allow the industry to succeed where other application integration approaches have failed. It discusses lessons we have learned along the way and some of the challenges we have been facing in using Linked Data to integrate enterprise applications.

Finally, we discuss several areas that could benefit from additional standard work and discuss several commonly applicable usage patterns along with proposals on how to address them using the existing W3C standards in the form of a Linked Data Basic Profile. This includes techniques applicable to clients and servers that read and write linked data, a type of container that allows new resources to be created using HTTP POST and existing resources to be found using HTTP GET (analogous to things like Atom Publishing Protocol (APP) [3]).

The full article can be found as a PDF file: Using read/write Linked Data for Application Integration — Towards a Linked Data Basic Profile
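For the technically curious, here is a minimal sketch of the read/write container pattern the abstract describes, using Python and the requests library. The container URL, the bug resource, and the Turtle snippet are made up for illustration; the actual resource shapes and server behavior are what the Basic Profile (and now the LDP Working Group) aim to standardize, not anything defined by this snippet.

```python
import requests  # generic HTTP client, used here purely for illustration

# Hypothetical container exposed by an LDP-style server.
CONTAINER = "http://example.org/bugtracker/bugs"

# Create a new resource by POSTing its RDF representation to the container.
new_bug = """
@prefix dcterms: <http://purl.org/dc/terms/> .
<> dcterms:title "Crash when saving a report" .
"""
resp = requests.post(
    CONTAINER,
    data=new_bug,
    headers={"Content-Type": "text/turtle"},
)
print(resp.status_code)               # typically 201 Created
print(resp.headers.get("Location"))   # URL minted for the new resource

# Read the container itself; its representation enumerates member resources,
# which clients can then GET (and update) individually.
resp = requests.get(CONTAINER, headers={"Accept": "text/turtle"})
print(resp.text)
```

The point is that plain HTTP verbs against web resources carry the whole integration style: POST to a container creates a member, GET reads it back, and no application-specific API has to be invented for each pair of tools.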

September 11, 2012 | Posted in linkeddata, standards

Linked Data

Several months ago I edited my “About” text on this blog to add that: “After several years focusing on strategic and policy issues related to open source and standards, including in the emerging markets, I am back to more technical work.”

One of the projects that I have been working on in this context is Linked Data.

It all started over a year ago when I learned from the IBM Rational team that Linked Data was the foundation of Open Services for Lifecycle Collaboration (OSLC), which Rational uses as its platform for application integration. The Rational team was very pleased with the direction they had taken but reported challenges in using Linked Data, and they were looking for help in addressing these.

Fundamentally, the challenges they faced came down to the lack of a formal definition of Linked Data. There is plenty of documentation out there on Linked Data, but not everyone has the same vision or definition. The W3C has a growing collection of standards related to the Semantic Web, but not everyone agrees on how they should be used and combined, or which ones apply to Linked Data.

The problem with how things stand isn’t so much that there is no way to do something. The problem is rather that, more often than not, there are too many ways. Users have to make choices all the time, which makes getting started with Linked Data difficult for beginners and hinders interoperability, because different users make different choices.
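To give a concrete feel for the problem, here is a small sketch using the rdflib Python library (the resource and property are made up): one and the same statement can be written in several perfectly legitimate serializations, and nothing tells two independent implementers which one to pick.

```python
from rdflib import Graph  # third-party library: pip install rdflib

# A single statement about a made-up resource, written in Turtle.
data = """
@prefix dcterms: <http://purl.org/dc/terms/> .
<http://example.org/products/42> dcterms:title "Widget 42" .
"""

g = Graph()
g.parse(data=data, format="turtle")

# The very same triple, serialized in two other common formats.
print(g.serialize(format="xml"))   # RDF/XML
print(g.serialize(format="nt"))    # N-Triples
```

Multiply that by choices about URIs, vocabularies, containers, and update mechanisms, and you can see why two teams that both “use Linked Data” may still not interoperate out of the box.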

I organized a teleconference with the W3C Team in which we explained what IBM Rational was doing with Linked Data and the challenges they were facing. The W3C team was very receptive to what we had to say and offered to organize a workshop to discuss our issues and see who else would be interested.

The Linked Enterprise Data Patterns Workshop took place on December 6 and 7, 2011 and was well attended. After a day and a half of presentations and discussions the participants found themselves largely agreeing and unanimously concluded that the W3C should create a Working Group to produce a Recommendation formally defining a “Linked Data Platform”.

The workshop was followed by a submission by IBM and others of the Linked Data Basic Profile and the launch by W3C of the Linked Data Platform (LDP) Working Group (WG) which I co-chair.

You can learn more about this effort and IBM’s position by reading the “IBM on the Linked Data Platform” interview the W3C posted on their website and reading the “IBM lends support for Linked Data standards through W3C group” article I published on the Rational blog.

On a personal level, I’ve known about the W3C Semantic Web activities since my days as a W3C Team Member but I had never had the opportunity to work in this space before so I’m very excited about this project. I’m also happy to be involved again with the W3C where I still count many friends. 🙂

I will try to post updates on this blog as the WG makes progress.

September 10, 2012 | Posted in ibm, linkeddata, standards

LibreOffice should declare victory and rejoin OpenOffice

When OpenOffice went to the Apache Software Foundation I started writing a post about this topic that I never got to finish and publish.

The post from my colleague Rob Weir on Ending the Symphony Fork prompted me to post this now though.

I should say that I no longer have anything to do with what IBM does with ODF and anything related. I changed positions within IBM in the fall of 2010 and now focus on other things, such as Linked Data, which I may talk about in some other post.

In fact, I’m now so out of touch with the folks involved with ODF that I only learned about OpenOffice going to the Apache Software Foundation when the news hit the wire. I had no knowledge of what was going on and have no insights as to what led to it. Similarly, I only learned after the fact about IBM deciding to merge Symphony with OpenOffice.

So, if anyone wants to blame me for speaking as a person on IBM’s payroll, I’m not even going to bother responding. This is entirely my personal opinion and I haven’t even discussed it with anyone within IBM.

But let me say quite bluntly what Rob is only hinting at: It’s time for LibreOffice to rejoin OpenOffice.

LibreOffice started because a sizable portion of the OpenOffice.org community was tired of Oracle’s control and apparent lack of interest in making it a more open community. I certainly understand that. But now that this problem is solved, what does anyone have to gain from keeping the fork alive? Seriously.

While forks in the open source world can be a tremendous way of shaking things up, they can also be very damaging. In this case, I think it’s a waste of resources and energy to keep this going. Instead of competing with each other the LibreOffice and OpenOffice communities should get together to fight their common and real competitor.

I know a certain level of competition can be healthy but I’m tired of seeing open source communities fight with each other to their own loss.

I know the fork was painful and people still hold a lot of resentment toward one another, but they need to get over that. They need to realize they would do themselves and everyone else a real service by putting all this behind them and uniting. LibreOffice should declare victory and join forces!

February 3, 2012 | Posted in opensource, standards

The value of preparation and going slower

Ok, maybe I’m just rehashing the obvious here, but it occurs to me that, in our ever faster-paced world, we sometimes forget simple wisdom that is worth restating and reflecting on.

I’ve been remodeling my house quite extensively and, while I’ve hired contractors now and then for specific tasks, I’ve done a lot of the work myself. And when I say myself, I mean it literally: just me, alone.

Working alone obviously isn’t always the easiest. Sometimes a helping hand can save you quite a bit of time and trouble, but I’ve come to realize that working by myself forces me to prepare more, and this leads to better results in the end.

One example is nailing something to a wall, be it a sheet of drywall or a piece of framing. If you have a helper, one typically holds the piece in place while the other nails it. It’s fast and easy, but actually not that precise.

In contrast, when you’re alone, you can’t hold the piece and work on it at the same time. So, you have to first figure out some support mechanism that will hold your piece in place and free you to work on it. Using a support mechanism actually allows you to much more precisely position your piece before you move on to nailing it in place.

I’ve done that on several occasions and it never fails. I always end up with a better result that way.

Now, it’s true that it’s a slower process, especially given that I sometimes have to first build some kind of contraption to hold my pieces in place. But you’ve got to wonder about always trying to go faster. Is it really worth it?

Look at OOXML (what? you didn’t see it coming? 🙂 ) What have we gained from having it rushed through the standards process? Now that the dust is starting to settle it’s easy to see that all we end up with is a specification of terrible quality and a lot of collateral damage, including for Microsoft, Ecma, ISO, and IEC. Wouldn’t we have been better off taking the time to do it right?

I sometimes marvel at some of the old buildings and wish we took the time to build more like those. Buildings that are not only functional but also elegant. Buildings that show, from the quality and the level of detailed work they present, how much their makers cared. Something which is unfortunately too rarely seen on modern buildings.

Of course, one has to find the right balance. But it seems that the balance is currently heavily tilted towards always going faster, even if it’s at the cost of producing lower quality. I think we should slow down a bit and give quality another try.

September 15, 2008 | Posted in standards

ISO is being challenged

I’ve meant to blog on several topics but just didn’t have the time to do so. I’m putting an end to more than a month of silence to highlight some interesting news from India.

Like many people I’ve been appalled by the way ISO officials are trying to dismiss the appeals filed against the way OOXML was processed. Once more I’ve discovered a new aspect of the ISO process which has left me puzzled.

Essentially, the ISO and IEC courts of appeal are made up of a jury composed of a subset of the very same parties that judged OOXML in the first place. Now, I’m not a law expert by any means, but it doesn’t take much expertise to figure out that such a setup is bogus. The whole point of an appeal process is to get a second opinion. How can this be achieved by asking the same people?

Of course, ISO officials’ attitude of recommending outright dismissal isn’t helping matters either. Although they are definitely being consistent, I’m afraid in this case they are just being consistently wrong. They remind me of those abusive governments that spend their time trying to shush the opposition rather than understand it. They should know better though.

History is full of governments that were thrown out by oppressed people. If ISO and IEC officials think they are somewhat shielded from this kind of trouble they need to think again.

For proof I suggest you read “ISO/IEC and OOXML: The judge, the jury and the hangman” in which Venkatesh Hariharan calls for the creation of an alternate standards organization for the benefit of the emerging economies.

July 21, 2008 | Posted in standards

A sign of changing times

My lawyer made my day this morning. Not just because he does a great job, I’m used to that and that’s why he’s my lawyer. The reason he made my day today is because the document he just sent me is in ODF. 🙂

We’re talking about a small office of three lawyers with a couple of assistants in the South of France. For several years he’s been sending me all his documents in MS Word format. I’m not sure what made him change but it’s not because I told him to do so. I don’t think I ever mentioned anything about ODF to him or his staff.

In any case, no matter what the actual reason for the change is, I find it uplifting. The fact that such people, who are not part of the industry and not versed in the whole document format debate, are getting equipped with software such as OpenOffice and starting to use ODF is a clear sign of change.

The type of documents they produce in that office, as in many other offices if not most I’m sure, is just pure text with a little formatting. They really have no reason to keep buying licenses for MS Office for this.

I think it is this type of grassroots movement that will make the difference in the end.

June 3, 2008 | Posted in standards

My take on why Microsoft finally decided to support ODF

People are a bit puzzled by today’s announcement that Microsoft will be adding native ODF support to Office 2007 and the timing of the announcement. People are asking: Why? Why now? Why not earlier?

Well, I don’t have any privileged insights, so all I can offer is my own speculation, but I think the answer might just be in the results of the ISO/IEC vote on OOXML.

Indeed, while OOXML has garnered enough votes to pass, several major countries, including China, India, and Brazil among others, voted against it. It is safe to assume that, in accordance with the opinion they expressed through this vote, those countries will not adopt OOXML as a national standard either. India has already decided so, for one. I know the same is true for South Africa. The same will probably be true for others.

Now, think about this for a minute. This is a huge market that Microsoft cannot address with Office as it stands. Can they really disregard a market that size? I don’t think so. If not, what can they do about it?

Well, they can keep trying to fight countries’ decisions not to adopt OOXML, but if they haven’t managed to achieve that already, despite all the efforts they put in, including some rather unethical if not illegal ones, their chances of success on that front are pretty slim.

So, what else can they do? Back down. Finally admit the reality that ODF is here to stay and that there are many people out there who just won’t accept being locked in anymore, and try to save face by making it look like this is in line with their strategy… Fair enough, I suppose. I don’t know how many they will fool but it doesn’t really matter.

Let’s not forget Microsoft is just a business trying to make money. They’ve proven in the past that they are quite resilient and can make radical changes when needed. This might just be one of those occurrences.

Many of us knew that they were only buying time anyway. Like building sand walls against the rising tide.

Let’s just now hope that Microsoft won’t try to play games anymore. Besides their rather poor track record at delivering on the ongoing chain of announcements about becoming open and caring about interoperability (as opposed to intraoperability), there are other reasons one might want to take today’s announcement with caution.

One trick they could try and pull for instance would be to put just enough support for ODF to claim that they support it but not enough for people to really use it systematically. They could then tell customers who complain something isn’t working that it’s because ODF isn’t powerful enough, and if they want the full power of Office they need to use OOXML.

That’d be a sneaky way to fulfill the ODF requirement set by customers and then force people into using OOXML anyway. Sneaky but not unlike Microsoft unfortunately. So, beware.

May 21, 2008 | Posted in standards

A Standards Quality Case Study: W3C

Since I gave a presentation on this topic at the OFE Conference in Geneva at the end of February, I have been meaning to post something about it here. As some of us stated before, if anything, the OOXML debacle has achieved one thing: raising awareness of the need for higher quality standards and standards development processes.

Introduction

Having been primarily involved in W3C, both as a staff member and as a member company representative, I had grown to expect a certain level of quality, which has led me to be genuinely baffled by the whole OOXML experience. I just didn’t know how superior the W3C process was compared to that of Ecma and ISO/IEC. I just didn’t know those organizations had processes so broken that they would allow such a parody of standards development to take place and such a low quality specification to eventually be endorsed as an international standard.

There have been discussions within the W3C for a long time as to whether it should seek to become a PAS submitter and adopt a policy of systematically submitting its standards to ISO/IEC. I used to think it should. I no longer think so. The W3C process is so superior to that of Ecma and ISO/IEC that it’s these organizations that need to learn from W3C, and those who are working for the W3C standard label to be recognized at the international level in its own right have all my support.

Ecma’s value proposition vs W3C’s core principles

Let’s look at what differentiates W3C from these organizations by first examining Ecma’s stated value proposition:

  • A proactive, problem solving experts’ group that ensures timely publication of International standards;
  • Offers industry a “fast track” to global standards bodies, through which standards are made available on time;
  • Balances Technical Quality and Business Value:
    • Quality of a standard is pivotal, but the balance between timeliness and quality as well: Better a good standard today than a perfect one tomorrow!
    • Offers a path which will minimize risk of changes to input specs
    • Solid IPR policy and practice
  • Ecma can be viewed as a reconfigurable hub of TCs

The insistence on timeliness, fast tracking, business value, and minimal risk of changes, ahead of quality, certainly strikes me as odd. Contrast this with some of the key characteristics of W3C, taken from various parts of its documentation:

Mission: To lead the World Wide Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web.

W3C refers to this goal as “Web interoperability.” By publishing open (non-proprietary) standards for Web languages and protocols, W3C seeks to avoid market fragmentation and thus Web fragmentation.

A vendor-neutral forum for the creation of Web standards.

W3C Members, staff, and Invited Experts work together to design technologies to ensure that the Web will continue to thrive in the future, accommodating the growing diversity of people, hardware, and software.

Although W3C is a consortium largely funded by its members, the staff, led by Tim Berners-Lee, has a clear understanding that its mission goes far beyond merely satisfying its members. It is working for the benefit of all, with a long-term vision.

Because of this, W3C is more open than many other organizations. One example is the notion of invited experts, introduced very early on, which allows non-member subject-matter experts to participate in the development process. For the same reason, it also favors quality over speed, knowing that while publishing standards faster might serve some short-term financial interests, it is typically detrimental to the overall stability of the web and contrary to a smooth evolution that will benefit the greater community in the long term.

This is of course not without creating some tensions between its staff and its members at times but, to its credit, I think the staff has been mostly successful at balancing the various forces at play so that no single interest takes priority over the general interest. This was true, for instance, when it adopted a patent policy favoring Royalty Free licensing, forcing major vendors, often more stuck in their old ways than fundamentally against it, to reconsider how they manage their IP with regard to standards.

W3C’s standards development process

Looking at the W3C standards development process also reveals some key characteristics that are fundamental to achieving its greater mission. The typical development of a W3C “standard” – officially called “recommendation” – looks something like this:

  1. Member or Team Submission
  2. Development of a charter / Creation of a Working Group
    • Vote from Members + call for participation
  3. Publication of Member-only and Public Working Drafts (WD).
  4. Last Call announcement.
    • WG believes all requirements are fulfilled
  5. Publication of a Candidate Recommendation (CR)
    • Call for implementations
  6. Publication of a Proposed Recommendation (PR).
    • Call for review
  7. Publication as a Recommendation (REC).

It is particularly important to note that, unlike at Ecma, submissions to W3C in no way constrain what is eventually produced as a standard, and that no guarantee is given regarding how much can be changed. In fact, quite the opposite is to be expected. Yet, I’ve never heard anyone claim that a W3C standard developed from a submission didn’t turn out to be better than the original submission.

It is also worth noting that several phases of the process stress the need for reviews by various interested parties, going from a fairly small group to an ever bigger community as the level of confidence increases over time and the specification gets closer to final approval.

Also worth noting is the “Candidate Recommendation” stage. I’m happy to say that I, along with Lauren Wood, then chair of the DOM Working Group, was behind the introduction of this step in the W3C’s standards development process. The idea behind it is simply to stress the need for implementation experience and to ensure that specifications do not move forward unless they are backed by actual implementations demonstrating that the specification achieves its stated goal.

When first introduced, the success criteria for this phase merely required that, for each feature of the specification, a couple of vendors report a successful implementation. Over time the bar has been raised again and again, to the point of holding “interop fests” during which implementations from various vendors are tested against each other.

Contrast this with Ecma and ISO/IEC publishing international standards without even a single claim of successful implementation from anyone…

More striking yet are the alternate paths a specification may follow within W3C:

  • Alternate ending: Working Group Note
  • Return of a document to a Working Group for further work when:
    • the Working Group makes substantive changes to the technical report at any time, or
    • the Director requires the WG to address important issues raised during a review or as a result of implementation experience.

When not enough implementation experience can be gathered after a while, the specification is basically set aside and recorded as a “Note” rather than let through as a “recommendation”, or standard.

Any time significant changes occur or issues are found, the document is sent back to the beginning. This is simply because it is well understood that 1) all the checks that were made along the way may be jeopardized by any significant change, and 2) any issue found may require significant changes, leading back to 1). In practice this doesn’t always mean a lot more time being spent. Indeed, if the changes turn out not to raise any particular problems, the document will go that much faster through every step the second time around. But this way, no chances are taken.

Contrast this with the ISO/IEC Fast Track process, which allowed OOXML to be modified in ways no one could even fully understand and which went its merry way to a final vote without even a final document to show for it.

W3C’s decision process

Another key differentiator of W3C is its decision process which I’ve talked about in my blog entry called Can you live with it?

  • Consensus is a core value of W3C.
  • Vote is a last resort when consensus cannot be reached.
    • Everyone has one vote (including invited experts)
  • Consensus sets the bar higher than a majority vote.
    • Not only ask whether people agree but also whether anybody dissents
    • Practical way to judge the latter is to ask: “Can you live with it?”
    • Can lead to opposite decision

While the notion of “consensus” isn’t that unique I think W3C differentiates itself from other organizations claiming to make decisions by consensus in the way it defines and assesses whether consensus has been reached.

From what I’ve heard of what went on with OOXML, I believe many of the decisions claimed to have been made by consensus would have failed that test in W3C.

W3C’s constant evolution

Beyond the core principles on which it is founded, the W3C differentiates itself in that it is constantly looking for ways to further improve its process to better achieve its goals.

  • Process is constantly evolving to increase quality and openness
  • More and more Working Groups are public
  • Technical Architecture Group (TAG)
  • Based on the belief that the larger the community the greater the standards produced
  • Patent policy evolved from RAND to RF

I’ve already talked about the introduction of the “Candidate Recommendation” phase to ensure greater quality. The introduction of the TAG, with the mission of ensuring that all W3C recommendations follow some key architectural principles and that the sum of them constitutes a consistent set, is another example of how the W3C has evolved for the better.

I’ve already talked about the notion of invited experts ensuring greater input and more openness. Allowing its Working Groups to be open to the public was yet another bold move from W3C. This was feared to be detrimental to sustaining membership, since one of the incentives of being a member is to do just that: participate in Working Groups. But here again the W3C favored greater openness over its own self-interest and, from what I understand, it is being rewarded in that more and more WGs are becoming public without this having generated a hemorrhage of members.

Contrast this with ISO/IEC’s process which, from what I’m told, has been left untouched for many years, save a few changes to reduce the amount of time allocated to each phase of its process…

(True) Open standards development process increases quality

It is now well understood that the power of open source development comes from its community-driven approach to problem solving. Because open source communities can include people with very different geographical and cultural backgrounds, they are inherently richer than what any single organization can afford. As a result the sum of community innovations thus created far exceeds what any single vendor could create. The same applies to standards development.

  • The benefits of open development apply to standards just as well
  • Greater community input with different background, expertise, culture, interest leads to better standards
  • Example: SOAP
    • SOAP 1.1 submitted to W3C in 2000 by several members
    • SOAP 1.2 Recommendation published in 2003
    • SOAP 1.2 is recognized by all to be superior

As previously stated and demonstrated by the example of SOAP, specifications that go through truly open development improve. Progressive companies that have understood this embrace this openness rather than fight it or merely pretend to, simply because they’ve realized that when everybody benefits from it, so do they.

Conclusion

Not all standards development organizations are the same. Looking forward, I believe that competition between standards organizations will increase and established de jure organizations will be further challenged. In this context, quality will become a differentiator between standards organizations and, just as it is true in the corporate world, standards organizations that do not strive to improve will become irrelevant over time.

The number of ad hoc, community-driven organizations will increase and more standards will be created the way OpenID was: by a group of individuals who share a common interest and decide to address it swiftly, in a somewhat informal way, using the internet to its full advantage.

Customers will learn to differentiate products, solutions, and services based on quality open standards or seek unbiased counsel from firms and partners who can help them tell the difference between good quality and good marketing.

Ultimately, reliance on traditional de jure standards will probably decrease. In the meantime, if they care to survive, standards development organizations will need to start a serious introspection of their processes and look to adopt some of the principles set by exemplary organizations such as W3C.

While no organization is perfect and there always is room for improvement, W3C has indeed set itself apart from the pack by showing the way to much greater quality and openness for the benefit of all.

It only makes me more proud to have its name on my resume. 🙂

April 25, 2008 | Posted in standards

Clarification on what the Fast Track is really about

From the outset of the process several countries pointed out that OOXML was inappropriate for Fast Track processing and that it should be rejected and re-submitted to the formal standards process. This has since been repeated again and again, by me as well as many other people, and I have no interest in rehashing that point once again.

On the other hand it appears to me that some people are getting confused about what the Fast Track is really about and what it’s not designed for.

JTC1’s choice not to listen to the countries that raised contradictions basically led it to try to replace the multi-year standards process with a few months of review and a 5-day BRM. Predictably, this has failed, leaving many issues undiscussed, unresolved, or simply unaccounted for.

I said predictably because the Fast Track process is not designed to fix broken specifications, so it is no surprise that it fell short of achieving that goal.

The Fast Track process is merely designed to ratify specifications that already meet ISO standards criteria or come very close to doing so. OOXML doesn’t, and for this reason alone, if nothing else, it shouldn’t be approved.

People should also remember that voting No to OOXML now doesn’t necessarily mean No forever. It simply means not yet: it is not ready, and there is plenty of evidence that this is the case. By voting No, people are simply giving the world a chance to fix OOXML before ratifying it.

As I stated before the world has nothing to gain from rushing OOXML through ISO. The only urgency here is not to rush into making this broken specification an ISO standard.

For what it’s worth, ISO/IEC officials’ response to criticism over the use of the Fast Track process has been that if people don’t think it is appropriate they should simply vote No. So you can take it from them: Vote No.

March 26, 2008 | Posted in standards

What Microsoft’s track record tells us about OOXML’s future

If the discussion on OOXML were purely technical I don’t think there would be much debate. Apart from Microsoft employees and a few lost souls, whose real motivations one can only wonder about, I have yet to meet any technical person arguing that OOXML is a good specification.

The reality is that, from a technical point of view, OOXML is just plain terrible, and the body of evidence proving this point only keeps growing as people get time to review it. Antonis Christofides of the Greek delegation, in his write-up Some clarifications on the OOXML Ballot Resolution Meeting, sums it all up: “the Ecma responses make the text slightly better, but though slightly better it is still abysmal.”

Given that, the only reason OOXML even has a chance to become an ISO standard is political. On that front I see two factors at play. First and foremost is the extent of Microsoft’s powerful network built over the years, which proves to be enough, at least in some cases, to skew the results in National Bodies. Second appears to be the belief by some people that by approving OOXML as an ISO standard they’re somehow bringing Microsoft to the table and taking control over the format.

It is this second point that I want to talk about here, because I have no doubt that many well-intentioned people fall into that category, and I’m afraid they are badly mistaken. To be clear, I’d be more than happy to be proven wrong on that front, but Microsoft’s track record leaves me no hope of seeing this happen.

Indeed, Microsoft’s track record shows that they never give away control and that they only use the standards process to appease customers’ fears by making them believe that everything is all right because their products are based on standards.

Let me share with you some personal experience on this so you can understand where I’m coming from.

Back when I was a W3C employee, when asked when Microsoft would fully implement CSS2, Microsoft’s representative once replied: “We will never do it. We have implemented what we are interested in. This is it.” Almost ten years later, this is still basically true. Yet, at the same time, Microsoft kept pushing for CSS3 with new features they were interested in. What this told me is that they pick and choose.

Later, when I became IBM’s main representative at W3C and dealt with Microsoft to bring to W3C several of the Web services related specifications we were developing together, I kept being confronted with the same problem: while IBM leaned towards bringing the specifications to W3C early on, Microsoft kept delaying this as much as possible. This intrigued me until one day my peer at Microsoft told me: “For us, the submission to W3C is the end of the road. What happens after doesn’t really matter.”

I didn’t need to ask why. The explanation was obvious. Once the specification is submitted to W3C, Microsoft can tell its customers that it is a standard. Technically it’s not, but if any customer ever cares to ask, it’s easy enough to put their fears to rest by explaining that the process has started and it’s simply a matter of time. By the time the standard eventually comes out, customers are already using Microsoft’s technology and no longer have much choice if they find out that Microsoft doesn’t even bother adhering to the actual standard. They are locked in.

Now, someone could surely point out that they aren’t the only ones failing to be fully compliant with a given standard, and for that matter I’m sure somebody will quickly bring up a case where IBM is at fault. But while I believe such gaps are usually accidental, what I got from Microsoft tells me that in their case it is not. It is simply part of their strategy.

So, back to OOXML. I’d like to know what makes people believe this isn’t going to be the case here. I’d like to understand what makes people think that all of a sudden Microsoft has decided to play nice and relinquish control over the technology that is, mind you, its biggest cash cow.

Just looking at their response to the comments that were filed for the BRM, I can already see where this is going. In response to the lack of use of standard technologies, Microsoft has generously offered to ADD many of these technologies. This is the case for SVG for vector graphics, for instance, or ISO 8601 for dates. But while doing so they were very careful not to require implementation of these technologies, and they were very careful not to remove any of the technologies they actually use.

This is all Microsoft needed. They can, and I predict will, ignore all these additions which are optional and stick to what they have. The only reason they were added was to remove reasons for National Bodies to vote against OOXML.

You’ll note that in this case, because governments aren’t satisfied with a mere submission to ISO, it was important for Microsoft to actually get the ISO standard label. That’s why they chose the shortest path and put all their resources into achieving this goal at all costs.

If OOXML becomes an ISO standard, I predict that they will claim compliance with the standard overnight. Quite an easy thing to do when the standard was custom-made for your product and every modification brought to it was carefully carved out as optional. Then they will either ignore the standards process altogether and dump a new version every once in a while through Ecma+JTC1, or pretend they care about the standards process to appease any possible discontents and participate in its “maintenance” while still ignoring it in their products. They will undoubtedly continue to pick and choose.

And what this means is that, if OOXML becomes an ISO standard next week, all the people who thought they were forcing them to the table will simply have given them carte blanche to abuse the ISO standard label.

I hope we’ll never have to find out, but if we do, I sincerely hope I’m wrong. I like to joke about the fact that I’m always right, but if that were the case here, I challenge Microsoft to prove me wrong. I’d be much happier if they did; I just see no reason to believe they will.

March 25, 2008 | Posted in standards