Arnaud’s Open blog

Opinions on open source and standards

Update on the W3C LDP specification

What just happened

The W3C LDP Working Group just published another Last Call draft of the Linked Data Platform (LDP) specification.

This specification was previously a Candidate Recommendation (CR) and this represents a step back – sort of.

Why it happened

The reason for going back to Last Call, which comes before Candidate Recommendation on the W3C Recommendation track, is primarily that we lack enough implementations of IndirectContainer.

Candidate Recommendation is the stage at which implementers are asked to go ahead and implement the spec, now considered stable, and report on their implementations. To exit CR and move to Proposed Recommendation (PR), which is when the W3C membership is asked to endorse the spec as a W3C Recommendation/standard, every feature in the spec has to have two independent implementations.

Unfortunately, in this case, although most of the spec has been implemented by at least two different implementations (see the implementation report), IndirectContainer has not. Until this happens, the spec is stuck in CR.

We do have one implementation and one member of the WG said he plans to implement it too. However, he couldn’t commit to a specific timeline.

So, rather than taking the chance of having the spec stuck in CR for an indefinite amount of time, the WG decided to republish the spec as a Last Call draft, marking IndirectContainer as a “feature at risk”.
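For readers who haven’t looked at this part of the spec: IndirectContainer is the container variant whose membership triples point at a resource named inside the content a client POSTs (via ldp:insertedContentRelation), rather than at the document the server creates, which is part of what makes it harder to implement. Here is a minimal client-side sketch of the interaction, assuming a hypothetical container URL configured with ldp:insertedContentRelation foaf:primaryTopic and using Python’s requests library; it illustrates the pattern, not any particular implementation.

```python
# Minimal sketch of creating a member through an LDP IndirectContainer.
# The container URL, the Slug value, and the #asset vocabulary are hypothetical;
# only the interaction pattern (POST Turtle, expect 201 + Location) comes from
# the LDP drafts. Assuming the container declares
# ldp:insertedContentRelation foaf:primaryTopic, the server derives the
# membership triple from the foaf:primaryTopic value in the posted content
# rather than from the URI of the document it creates.
import requests

CONTAINER = "http://example.org/assetContainer/"  # hypothetical

payload = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<> foaf:primaryTopic <#asset> .
<#asset> a <http://example.org/vocab#Asset> .
"""

resp = requests.post(
    CONTAINER,
    data=payload.encode("utf-8"),
    headers={"Content-Type": "text/turtle", "Slug": "asset1"},
)
resp.raise_for_status()
print("Created:", resp.status_code, resp.headers.get("Location"))
```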

What it means

When the Last Call review period ends in 3 weeks (on 7 October 2014), either we will have a second implementation of IndirectContainer and the spec will move to PR as is (skipping CR because we will then have two implementations of everything), or we will move IndirectContainer to a separate spec that can stay in CR until it has two implementations and move the remainder of the LDP spec to PR (skipping CR because we already have two implementations).

I said earlier that publishing the LDP spec as a Last Call was a “step back – sort of” because it’s really just a technicality. As explained above, this actually ensures that, either way, we will be able to move to PR (skipping CR) in 3 weeks.

Bonus: Augmented JSON-LD support

When we started 2 years ago, the only standard serialization format for RDF was RDF/XML. Many people disliked this format, which is arguably responsible for the initial lack of adoption of RDF, so the WG decided to require that all LDP servers support Turtle as a default serialization format – Turtle was then in the process of becoming a standard. The WG got praised for this move which, at the time, seemed quite progressive.

Yet, a year and a half later, having seen the standardization of JSON-LD in the meantime, requiring Turtle while leaving out JSON-LD no longer appeared so “bleeding edge”. At the LDP WG Face to Face meeting in the spring, I suggested we encourage support for JSON-LD by adding it as a “SHOULD”. The WG agreed. Some WG members would have liked to make it a MUST, but this would have required going back to Last Call and, as chair of the WG responsible for keeping it on track to deliver a standard on time, I didn’t think this was reasonable.

Fast forward to September: we found ourselves having to republish our spec as a Last Call draft anyway (because of the IndirectContainer situation). We seized the opportunity to increase support for JSON-LD by requiring LDP servers to support it (making it a MUST).
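In practice this means a conforming LDP server must now answer requests for JSON-LD as well as Turtle. A small sketch of what a client can expect, against a hypothetical resource URL; the media types are the registered ones for the two serializations:

```python
# Sketch of content negotiation against an LDP server that supports both
# Turtle (required from the start) and JSON-LD (now a MUST as well).
# The resource URL is hypothetical.
import requests

RESOURCE = "http://example.org/ldp/resource1"  # hypothetical

for media_type in ("text/turtle", "application/ld+json"):
    resp = requests.get(RESOURCE, headers={"Accept": media_type})
    resp.raise_for_status()
    # A conforming server should honor both Accept values and return the
    # same RDF graph in the requested serialization.
    print(media_type, "->", resp.headers.get("Content-Type"))
```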

Please, send comments to public-ldp-comments@w3.org and implementation reports to public-ldp@w3.org.

Lesson learned

I wish we had marked IndirectContainer as a feature at risk when we moved to Candidate Recommendation back in June. Already then we knew we might not have enough implementations of it to move to PR. If we had marked it as a feature at risk we could now just go to PR without it and without any further delay.

This is something to remember: when in doubt, just mark things “at risk”. There is really not much downside to it and it’s a good safety valve to have.


September 16, 2014 | standards

More on Linked Data and IBM

For those technically inclined, you can learn more about IBM’s interest in Linked Data as an application integration model and the kind of standard we’d like the W3C Linked Data Platform WG to develop by reading a paper I presented earlier this year at the WWW2012 Linked Data workshop titled: “Using read/write Linked Data for Application Integration — Towards a Linked Data Basic Profile”.

Here is the abstract:

Linked Data, as defined by Tim Berners-Lee’s 4 rules [1], has enjoyed considerable well-publicized success as a technology for publishing data in the World Wide Web [2]. The Rational group in IBM has for several years been employing a read/write usage of Linked Data as an architectural style for integrating a suite of applications, and we have shipped commercial products using this technology. We have found that this read/write usage of Linked Data has helped us solve several perennial problems that we had been unable to successfully solve with other application integration architectural styles that we have explored in the past. The applications we have integrated in IBM are primarily in the domains of Application Lifecycle Management (ALM) and Integration System Management (ISM), but we believe that our experiences using read/write Linked Data to solve application integration problems could be broadly relevant and applicable within the IT industry.

This paper explains why Linked Data, which builds on the existing World Wide Web infrastructure, presents some unique characteristics, such as being distributed and scalable, that may allow the industry to succeed where other application integration approaches have failed. It discusses lessons we have learned along the way and some of the challenges we have been facing in using Linked Data to integrate enterprise applications.

Finally, we discuss several areas that could benefit from additional standard work and discuss several commonly applicable usage patterns along with proposals on how to address them using the existing W3C standards in the form of a Linked Data Basic Profile. This includes techniques applicable to clients and servers that read and write linked data, a type of container that allows new resources to be created using HTTP POST and existing resources to be found using HTTP GET (analogous to things like Atom Publishing Protocol (APP) [3]).

The full article can be found as a PDF file: Using read/write Linked Data for Application Integration — Towards a Linked Data Basic Profile
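To make the container pattern the abstract describes a little more concrete, here is a minimal read/write sketch; the container URL and the payload vocabulary are hypothetical, and it only illustrates the general HTTP interaction style, not any specific OSLC or LDP API.

```python
# Minimal sketch of the read/write Linked Data pattern described in the paper:
# GET a container to find existing resources, POST RDF to it to create a new one.
# The container URL and the payload vocabulary are hypothetical.
import requests

CONTAINER = "http://example.org/bugs/"  # hypothetical container of bug reports

# Read: fetch the container's representation to discover its members.
listing = requests.get(CONTAINER, headers={"Accept": "text/turtle"})
listing.raise_for_status()
print(listing.text)

# Write: create a new member by POSTing RDF to the container, much like the
# Atom Publishing Protocol posts entries to a collection.
new_bug = """
@prefix dcterms: <http://purl.org/dc/terms/> .
<> dcterms:title "Crash when saving a report" .
"""
created = requests.post(
    CONTAINER,
    data=new_bug.encode("utf-8"),
    headers={"Content-Type": "text/turtle"},
)
created.raise_for_status()
print("New resource at:", created.headers.get("Location"))
```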

September 11, 2012 | linkeddata, standards

Linked Data

Several months ago I edited my “About” text on this blog to add that: “After several years focusing on strategic and policy issues related to open source and standards, including in the emerging markets, I am back to more technical work.”

One of the projects that I have been working on in this context is Linked Data.

It all started over a year ago when I learned from the IBM Rational team that Linked Data was the foundation of Open Services for Lifecycle Collaboration (OSLC), which Rational uses as their platform for application integration. The Rational team was very pleased with the direction they were headed in but reported challenges in using Linked Data. They were looking for help in addressing these.

Fundamentally, the crux of the challenges they faced came down to a lack of formal definition of Linked Data. There is plenty of documentation out there on Linked Data but not everyone has the same vision or definition. The W3C has a growing collection of standards related to the Semantic Web but not everyone agrees on how they should be used and combined, and which ones apply to Linked Data.

The problem with how things stand isn’t so much that there isn’t a way to do something. The problem is rather that, more often than not, there are too many ways. This means users have to make choices all the time. This makes starting to use Linked Data difficult for beginners and it hinders interoperability because different users make different choices.

I organized a teleconference with the W3C Team in which we explained what IBM Rational was doing with Linked Data and the challenges they were facing. The W3C team was very receptive to what we had to say and offered to organize a workshop to discuss our issues and see who else would be interested.

The Linked Enterprise Data Patterns Workshop took place on December 6 and 7, 2011 and was well attended. After a day and a half of presentations and discussions the participants found themselves largely agreeing and unanimously concluded that the W3C should create a Working Group to produce a Recommendation that formally defines a “Linked Data Platform”.

The workshop was followed by a submission by IBM and others of the Linked Data Basic Profile, and the launch by W3C of the Linked Data Platform (LDP) Working Group (WG), which I co-chair.

You can learn more about this effort and IBM’s position by reading the “IBM on the Linked Data Platform” interview the W3C posted on their website and reading the “IBM lends support for Linked Data standards through W3C group” article I published on the Rational blog.

On a personal level, I’ve known about the W3C Semantic Web activities since my days as a W3C Team Member but I had never had the opportunity to work in this space before so I’m very excited about this project. I’m also happy to be involved again with the W3C where I still count many friends. 🙂

I will try to post updates on this blog as the WG makes progress.

September 10, 2012 | ibm, linkeddata, standards

What consensus means

Over the last year I’ve noticed that quite a few people are using the word “consensus” in a way which differs from my understanding of what consensus is about.

Looking at Wikipedia, I see that it defines consensus as “general agreement”. This is pretty vague obviously, and if it were left to that there wouldn’t be much more to talk about. But Wikipedia quite rightly points out that consensus is also used to refer to the process used to reach this agreement. I think that’s where the problem lies.

From what I can tell, some people are happy to just use consensus as if it were synonymous with agreement. On that basis, any decision, no matter how it is made, can pretty much be said to be the consensus.

For instance, in the case of the BRM for OOXML, it has been stated by several people, ISO/IEC officials in particular, that the decisions were made by “consensus”. Was it so, though? I certainly don’t think so.

I’ll admit that my expectations are heavily rooted in my background with the W3C which inherited from IETF the goal of making all its decisions by consensus. As I explained in my previous entry A Standards Quality Case Study: W3C, the W3C leaves it to the chair to decide whether consensus is reached or not.

Interestingly enough, this is not any different from ISO/IEC’s directives which read in Part 1:

It is the responsibility of the chairman […] to judge whether there is sufficient support bearing in mind the definition of consensus given in ISO/IEC Guide 2:1996.

“consensus: General agreement, characterized by the absence of sustained opposition to substantial issues by any important part of the concerned interests and by a process that involves seeking to take into account the views of all parties concerned and to reconcile any conflicting arguments.
NOTE Consensus need not imply unanimity.”

The way the “absence of sustained opposition” is typically assessed at W3C is by simply asking whether everybody can live with the proposed decision. As I indicated before, asking this question can sometimes lead to a completely different decision. I’ve witnessed it myself. So, this is not just some sort of polite gesture, it is at the very heart of consensus building.

I wish that rule were more broadly adopted. For instance, to my knowledge this was never asked in any way at the BRM for OOXML.

Instead, the BRM proceeded by making its decisions by simple majority vote, leaving no room “to reconcile any conflicting arguments”. In fact, as is well known, the vast majority of the issues were “addressed” in bulk via a simple majority vote. So much for ISO/IEC directives. It seems to me that the chair failed to “bear in mind the definition of consensus” or was instructed to forgo that golden rule for the sake of expediency.

As if that were not enough, ISO/IEC officials went on to recommend to their respective boards of directors (SMB & TMB) that the appeals filed by Brazil, India, South Africa, and Venezuela be dismissed. Now, I don’t know what an appeal is if it’s not a clear expression of “sustained opposition”.

You would think that, given ISO/IEC’s own directives, its officials would have recommended creating a conciliation panel “to take into account the views of all parties concerned and to reconcile any conflicting arguments.” It looks to me like ISO/IEC officials, whose mission, mind you, is to ensure the directives are followed, have completely lost sight of this fundamental principle. Either that or they should revise their directives to acknowledge that decisions are not actually made by consensus.

As Wikipedia puts it: “Consensus decision-making is a group decision making process that not only seeks the agreement of most participants, but also to resolve or mitigate the objections of the minority to achieve the most agreeable decision.”

We’d all be better off if everybody were “to bear that in mind”.

August 7, 2008 | standards

A Standards Quality Case Study: W3C

Since I gave a presentation on this topic at the OFE Conference in Geneva at the end of February, I have been meaning to post something about it here. As some of us stated before, if anything, the OOXML debacle has achieved one thing: raising awareness of the need for higher quality standards and standards development processes.

Introduction

Having been primarily involved in W3C, both as a staff member and as a member company representative, I had grown to expect a certain level of quality, which has left me genuinely baffled by the whole OOXML experience. I just didn’t know how superior the W3C process was compared to that of Ecma and ISO/IEC. I just didn’t know those organizations had processes so broken that they would allow such a parody of standards development to take place and such a low-quality specification to eventually be endorsed as an international standard.

There have been discussions within the W3C for a long time as to whether it should seek to become a PAS submitter and adopt a policy of systematically submitting its standards to ISO/IEC. I used to think it should. I no longer think so. The W3C process is so superior to that of Ecma and ISO/IEC that it’s these organizations that need to learn from W3C, and those who are working to have the W3C standard label recognized at the international level in its own right have all my support.

Ecma’s value proposition vs W3C’s core principles

Let’s look at what differentiates W3C from these organizations by first having a look at Ecma’s stated value:

A proactive, problem solving experts’ group that ensures timely publication of International standards;

Offers industry a “fast track“, to global standards bodies, through which standards are made available on time;

Balances Technical Quality and Business Value:

  • Quality of a standard is pivotal, but the balance between timeliness and quality as well: Better a good standard today than a perfect one tomorrow!
  • Offers a path which will minimize risk of changes to input specs
  • Solid IPR policy and practice

Ecma can be viewed as a reconfigurable hub of TCs

The insistence on time, fast track, business value, and minimal risk of changes over quality certainly strikes me as odd. Contrast this with some of the key characteristics of W3C, taken from various parts of its documentation:

Mission: To lead the World Wide Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web.

W3C refers to this goal as “Web interoperability.” By publishing open (non-proprietary) standards for Web languages and protocols, W3C seeks to avoid market fragmentation and thus Web fragmentation.

A vendor-neutral forum for the creation of Web standards.

W3C Members, staff, and Invited Experts work together to design technologies to ensure that the Web will continue to thrive in the future, accommodating the growing diversity of people, hardware, and software.

Although W3C is a consortium which for a large part is funded by its members, the staff, led by Tim Berners-Lee, has a clear understanding that its mission goes far beyond merely satisfying its members. It is working for the benefit of all, with a long-term vision.

Because of this, W3C is more open than many other organizations. One piece of evidence is the notion of invited experts, introduced very early on, which allows non-member subject-matter experts to participate in the development process. For the same reason, it also favors quality over time, knowing that while publishing standards faster might serve some short-term financial interests, it is typically detrimental to the overall stability of the Web and contrary to a smooth evolution that will benefit the greater community in the long term.

This is of course not without creating some tensions between its staff and its members at times but, to its credit, I think the staff has been mostly successful at balancing the various forces at play so that no single interest takes priority over the general interest. This was true, for instance, when it adopted a patent policy favoring Royalty Free licensing, forcing major vendors, often more stuck in their old ways than fundamentally against it, to reconsider how they manage their IP with regard to standards.

W3C’s standards development process

Looking at the W3C standards development process also reveals some key characteristics that are fundamental to achieving its greater mission. The typical development of a W3C “standard” – officially called “recommendation” – looks something like this:

  1. Member or Team Submission
  2. Development of a charter / Creation of a Working Group
    • Vote from Members + call for participation
  3. Publication of Member-only and Public Working Drafts (WD).
  4. Last Call announcement.
    • WG believes all requirements are fulfilled
  5. Publication of a Candidate Recommendation (CR)
    • Call for implementations
  6. Publication of a Proposed Recommendation (PR).
    • Call for review
  7. Publication as a Recommendation (REC).

It is particularly important to note that, unlike at Ecma, submissions to W3C in no way constrain what is eventually produced as a standard, and that no guarantee is given regarding how much can be changed. In fact, quite the opposite is said to be expected. Yet, I’ve never heard anyone claim that any W3C standard developed from a submission didn’t turn out to be better than the original submission.

It is also worth noting that several phases of the process stress the need for reviews by various interested parties, going from a fairly small group to an ever bigger community as the level of confidence increases over time and the specification gets closer to final approval.

Also worth noting is the “Candidate Recommendation” stage. I’m happy to say that I, along with Lauren Wood, then chair of the DOM Working Group, am at the origin of the introduction of this step in the W3C’s standards development process. The idea behind it is simply to stress the need for implementation experience and to ensure that specifications do not move forward unless they are backed by actual implementations demonstrating that the specification achieves its stated goal.

When first introduced, the success criteria for this phase merely relied on having, for each feature of the specification, a couple of vendors reporting successful implementation. Over time the bar has been raised repeatedly, now going as far as holding “interop fests” during which implementations from various vendors are tested against each other.

Contrast this with Ecma and ISO/IEC publishing international standards without even a single claim of successful implementation from anyone…

More striking yet are the alternate paths a specification may follow within W3C:

  • Alternate ending: Working Group Note
  • Return of a Document to a Working Group for Further Work when:
    • the Working Group makes substantive changes to the technical report at any time
    • the Director requires the WG to address important issues raised during a review or as the result of implementation experience.

When not enough implementation experience can be gathered after a while, the specification is basically set aside and recorded as a “Note” rather than let through as a “Recommendation” or standard.

Any time significant changes occur or issues are found, the document is sent back to the beginning. This is simply because it is well understood that 1) all the checks that were made along the way may be jeopardized by any significant change, and 2) any issue found may require significant changes, leading back to 1). In practice this doesn’t always mean a lot more time being spent. Indeed, if the changes turn out not to raise any particular problems, the document will go that much faster through every step the second time around. But this way, no chances are taken.

Contrast this with the ISO/IEC Fast Track process, which allowed OOXML to be modified in ways no one could even fully understand and which went its merry way to a final vote without even a final document to show for it.

W3C’s decision process

Another key differentiator of W3C is its decision process which I’ve talked about in my blog entry called Can you live with it?

  • Consensus is a core value of W3C.
  • Vote is a last resort when consensus cannot be reached.
    • Everyone has one vote (including invited experts)
  • Consensus sets the bar higher than a majority vote.
    • Not only ask whether people agree but also whether anybody dissents
    • Practical way to judge the latter is to ask: “Can you live with it?”
    • Can lead to opposite decision

While the notion of “consensus” isn’t that unique I think W3C differentiates itself from other organizations claiming to make decisions by consensus in the way it defines and assesses whether consensus has been reached.

From what I’ve heard of what went on with OOXML, I believe many of the claims of decisions made by consensus would have failed at W3C.

W3C’s constant evolution

Beyond the core principles on which it is founded, the W3C differentiates itself in that it is constantly looking for ways to further improve its process to better achieve its goals.

  • Process is constantly evolving to increase quality and openness
  • More and more Working Groups are public
  • Technical Architecture Group (TAG)
  • Based on the belief that the larger the community the greater the standards produced
  • Patent policy evolved from RAND to RF

I’ve already talked about the introduction of the “Candidate Recommendation” phase to ensure greater quality. The introduction of the TAG with the mission to ensure that all W3C recommendations follow some key architectural principles and that the sum of all of them constitute a consistent set is another example of how the W3C evolved for the better.

I’ve already talked about the notion of invited experts ensuring greater input and more openness. Allowing its Working Groups to be open to the public was yet another bold move from W3C. This was feared to be detrimental to sustaining membership, since one of the incentives for being a member is to do just that: participate in Working Groups. But here again the W3C favored greater openness over its own self-interest and, from what I understand, it is being rewarded in that more and more WGs are becoming public without this having generated a hemorrhage of members.

Contrast this with ISO/IEC’s process which, from what I’m told, has been left untouched for many years, save a few changes to reduce the amount of time allocated to each phase of its process…

(True) Open standards development process increases quality

It is now well understood that the power of open source development comes from its community-driven approach to problem solving. Because open source communities can include people with very different geographical and cultural backgrounds, they are inherently richer than what any single organization can afford. As a result the sum of community innovations thus created far exceeds what any single vendor could create. The same applies to standards development.

  • The benefits of open development apply to standards just as well
  • Greater community input with different background, expertise, culture, interest leads to better standards
  • Example: SOAP
    • SOAP 1.1 submitted to W3C in 2000 by several members
    • SOAP 1.2 Recommendation published in 2003
    • SOAP 1.2 is recognized by all to be superior

As previously stated and demonstrated by the example of SOAP, specifications that go through truly open development improve. Progressive companies that have understood this embrace this openness rather than fight it or merely pretend to, simply because they’ve realized that when everybody benefits from it, so do they.

Conclusion

Not all standards development organizations are the same. Looking forward, I believe that competition between standards organizations will increase and established de jure organizations will be further challenged. In this context, quality will become a differentiator between standards organizations and, just as it is true in the corporate world, standards organizations that do not strive to improve will become irrelevant over time.

The number of ad hoc, community-driven organizations will increase and more standards will be created the way OpenID was: by a group of interested individuals who share a common problem and decide to solve it swiftly in a somewhat informal way, using the internet to its full advantage.

Customers will learn to differentiate products, solutions, and services based on quality open standards or seek unbiased counsel from firms and partners who can help them tell the difference between good quality and good marketing.

Ultimately, reliance on traditional de jure standards will probably decrease. In the meantime, if they care to survive, standards development organizations will need to start a serious introspection of their processes and look to adopt some of the principles set by exemplary organizations such as W3C.

While no organization is perfect and there always is room for improvement, W3C has indeed set itself apart from the pack by showing the way to much greater quality and openness for the benefit of all.

It only makes me more proud to have its name on my resume. 🙂

April 25, 2008 | standards

How many bad standards does one need in a given domain? Zero.

A lot of the debate around OOXML has focused on whether it is good to have competing standards or not. The debate started from the simple fact that Microsoft decided to create its own standard for office applications rather than adopt the established ISO standard for office applications: ODF.

While there is clearly a need for evolution, and there are times when it makes sense to introduce a new standard to replace an old one, there is no doubt in my mind that, in general, there is much more to lose than to gain from having multiple standards rather than a single one.

Interestingly enough, I should point out that Microsoft defends that very point at times. In the case of XML, for instance, when the W3C introduced XML 1.1 to address some internationalization limitations of XML 1.0, something important to many non-western countries, Microsoft voted against XML 1.1, arguing that the introduction of a new version of XML would be too disruptive!

Yet the tactic of introducing a competing standard to disrupt the status quo is common practice for the Redmond company. For instance, in the Web services management area, an area not as visible to the public as office applications but still very important to the IT industry, they did the same. Microsoft consistently refused for years to join the ongoing industry effort around Web Services Distributed Management (WSDM) at OASIS, claiming they had no interest in this topic. Yet, in 2005, after WSDM became an OASIS standard supported by a large segment of the industry mind you, Microsoft introduced its own technology named WS-Management, with support from some well-chosen partners. Three years later the industry is still trying to figure out what to do with the mess they thus created.

But all this debate around multiple standards is somewhat of a distraction from the real issue at hand. In the end, what National Bodies (NBs) around the world are really being asked isn’t to choose between ODF and OOXML, or between ODF alone and ODF plus OOXML. The question NBs are asked to answer is whether OOXML deserves to become an ISO standard in its own right.

The reality is that if the OOXML specification weren’t of such poor quality it most certainly would have had a much easier ride through the Fast Track process. If all that could be argued against it were that it is too big, that the IP license has gaps, and that multiple standards aren’t good, it might not even have made headlines, no matter how true those arguments are.

What is appalling about OOXML is that it is fundamentally a VERY BAD specification and I just can’t understand what process would allow this garbage to even be presented as an ISO standard up for vote. OOXML is from a technical point of view just terrible. All I’ve seen from it and all I’ve read about it only confirms this. And I have yet to meet any technical person arguing in all honesty that OOXML is a good specification. The latest facts reported by Rob Weir speak for themselves.

So, again, the real question isn’t so much whether the world would benefit from having several competing standards or not. The real question is how many bad standards do we need? And the answer is zero.

OOXML must be voted down simply because it is a bad standard.

March 19, 2008 | standards