Arnaud’s Open blog

Opinions on open source and standards

Update on the W3C LDP specification

What just happened

The W3C LDP Working Group just published another Last Call draft of the Linked Data Platform (LDP) specification.

This specification was previously a Candidate Recommendation (CR) and this represents a step back – sort of.

Why it happened

The reason for going back to Last Call, which is before Candidate Recommendation on the W3C Recommendation track, is primarily because we are lacking implementations of the IndirectContainer.

Candidate Recommendation is the stage at which implementers are asked to go ahead and implement the spec, now considered stable, and report on their implementations. To exit CR and move to Proposed Recommendation (PR), which is when the W3C membership is asked to endorse the spec as a W3C Recommendation/standard, every feature in the spec has to have two independent implementations.

Unfortunately, in this case, although most of the spec has at least two independent implementations (see the implementation report), IndirectContainer does not. Until it does, the spec is stuck in CR.

We do have one implementation and one member of the WG said he plans to implement it too. However, he couldn’t commit to a specific timeline.

So, rather than taking the chance of having the spec stuck in CR for an indefinite amount of time, the WG decided to republish the spec as a Last Call draft, marking IndirectContainer as a “feature at risk“.

What it means

When the Last Call review period ends in 3 weeks (on 7 October 2014), either we will have a second implementation of IndirectContainer and the spec will move to PR as is (skipping CR because we will then have two implementations of everything), or we will move IndirectContainer to a separate spec that can stay in CR until there are two implementations of it, and move the rest of the LDP spec to PR (skipping CR because we already have two implementations).

I said earlier that publishing the LDP spec as a Last Call draft was a “step back – sort of” because it’s really just a technicality. As explained above, this actually ensures that, either way, we will be able to move to PR (skipping CR) in 3 weeks.

Bonus: Augmented JSON-LD support

When we started 2 years ago, the only standard serialization format for RDF was RDF/XML. Many people disliked this format, which is arguably responsible for the initial lack of adoption of RDF, so the WG decided to require that all LDP servers support Turtle as a default serialization format – Turtle was then in the process of becoming a standard. The WG was praised for this move which, at the time, seemed quite progressive.

Yet, a year and a half later, with JSON-LD having since become a standard, requiring Turtle while leaving out JSON-LD no longer appeared so “bleeding edge”. At the LDP WG Face to Face meeting in Spring, I suggested we encourage support for JSON-LD by adding it as a “SHOULD”. The WG agreed. Some WG members would have liked to make it a MUST, but this would have required going back to Last Call and I, for one, as chair responsible for keeping the WG on track to deliver a standard on time, didn’t think that was reasonable.

Fast forward to September: we found ourselves having to republish the spec as a Last Call draft anyway (because of the IndirectContainer situation), so we seized the opportunity to strengthen support for JSON-LD by requiring LDP servers to support it (making it a MUST).
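For readers less familiar with what this means in practice, here is a minimal sketch of the content negotiation involved, assuming a purely hypothetical server URL and using the Python requests library; nothing here comes from the spec itself, it just illustrates the two media types a conforming LDP server now has to be able to serve.

```python
# Minimal sketch of content negotiation against a hypothetical LDP server.
# The URL is made up for illustration only.
import requests

resource = "http://example.org/ldp/container/"

# Ask for Turtle (required of LDP servers from the start)
turtle = requests.get(resource, headers={"Accept": "text/turtle"})
print(turtle.headers.get("Content-Type"))  # e.g. text/turtle

# Ask for JSON-LD (now also a MUST for LDP servers in the new draft)
jsonld = requests.get(resource, headers={"Accept": "application/ld+json"})
print(jsonld.headers.get("Content-Type"))  # e.g. application/ld+json
```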

Please, send comments to public-ldp-comments@w3.org and implementation reports to public-ldp@w3.org.

Lesson learned

I wish we had marked IndirectContainer as a feature at risk when we moved to Candidate Recommendation back in June. Even then, we knew we might not have enough implementations of it to move to PR. If we had marked it as a feature at risk, we could now just go to PR without it and without any further delay.

This is something to remember: when in doubt, just mark things “at risk”. There is really not much downside to it and it’s a good safety valve to have.

September 16, 2014 | standards

Lucerne Foods, get your act together already!

It’s already confusing enough to me that “butter” in the US is actually salted and what should really be called “butter” is called “unsalted butter”.

It’s not like butter is naturally salted and you have to remove salt from it to make “unsalted butter”. Salt is added to butter to make it salted butter. So the current names defy logic.

But that’s not all. To make matters worse, Lucerne’s unsalted butter sticks come in a blue wrapper packaged in green cartons, while its salted butter (a.k.a. “butter”) sticks come in a red wrapper in blue cartons! Now, if that’s not madness, what is??

Seriously, how hard is it to have matching colors??

Given the absurd but unrelenting resistance against abandoning the Imperial system in favor of the Metric system despite all the good reasons to do so, I don’t expect the US to adopt the right names for butter, but at least Lucerne could sort out its packaging and make it easier on us. :-)


May 6, 2014 | standards

Box Wine Rules!

For once I’ll post on something that has nothing to do with my work. It’s not exactly news either, but I feel like adding my piece on this.

When my wife and I were on vacation last year we were given a box of wine – and I don’t mean a box of bottles of wine, but literally a box of wine, as in “box wine”. :-)

It’s fair to say that we were pretty skeptical at first but we ended up agreeing that it was actually quite nice. Based on that experience we decided to try a few box wines back in California. We tried the Bota Box first and stuck with it for a while but we eventually grew tired of it and switched to Black Box which seemed significantly better. This has become our table wine. We’ve compared the Cabernet against the Shiraz and the Merlot and the Shiraz won everyone’s vote, although I like to change every now and then.

Last weekend I had some friends over for lunch and ended up with a bottle that was barely started. On my own for the week it took me several days to get to the bottom of it. All along I kept the bottle on the kitchen counter with just the cork on.

At the rate of about a glass a day I noticed the quality was clearly declining from one day to the next. Tonight, as I finally reached the bottom of the bottle I drank the last of it without much enjoyment. This is when I decided to get a bit more from the Black Box wine that had now been sitting on the counter for about two weeks.

Well, box wine is known for staying good longer: the bag it’s stored in, within the box, deflates as you pour out wine without letting any air in. As a result the wine doesn’t deteriorate as fast.

If there was any doubt, tonight’s experience cleared it up for me. Although the wine from the bottle was initially of better quality than any box wine I’ve tried to date, after a week, it was not anywhere near as good.

I’ve actually found box wine to have several advantages. It’s said to be less environmentally unfriendly. (They say more environmentally friendly but, with its plastic bag and valve, it doesn’t quite measure up against drinking water ;-).) And because it can last much longer than an open bottle of wine, you can have different ones “open” at the same time and enjoy some variety.

So, all I can say is: More power to box wine! :-)

[Additional Note: This is not to say that one should give up on bottled wine of course. The better wines only come in bottles and I’ll still get those for the “special” days. But for table wine you use on a day-to-day basis, box wine rules.]

October 3, 2013 | standards

Linked Data Platform Update

Since the launch of the W3C Linked Data Platform (LDP) WG in June last year, the WG has made a lot of progress.

It took a while for WG members to get to understand each other and sometimes it still feels like we don’t! But that’s what it takes to make a standard. You need to get people with very different backgrounds and expectations to come around and somehow find a happy medium that works for everyone.

One of the most difficult issues we had to deal with had to do with containers, their relationship to member resources, and what to expect from the server when a container gets deleted. After investigating various possible paths the WG finally settled on a simple design that is probably not going to make everyone happy but that all WG members can live with. One reason for this is that it can possibly be grown into something more complex later on if we really want to. In some ways, we went full circle on that issue, but in the process we have all gained a much greater understanding of what’s in the spec and why it is there, so this was by no means a useless exercise.
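For those who haven’t looked at the spec yet, here is a rough sketch of the basic container interaction the design revolves around: a client creates a member resource by POSTing to the container, and the container then lists it as a member. The URL is made up, the code uses the Python requests library, and the details are deliberately simplified; see the spec for the normative behavior, including what happens when the container itself is deleted.

```python
# Rough sketch of the basic LDP container pattern, with a made-up URL.
# POSTing a representation to a container creates a new member resource;
# GETting the container afterwards shows it among the members.
import requests

container = "http://example.org/ldp/assets/"  # hypothetical container

new_member = """@prefix dcterms: <http://purl.org/dc/terms/> .
<> dcterms:title "A new asset" ."""

# Create a member by POSTing its representation to the container
created = requests.post(
    container,
    data=new_member,
    headers={"Content-Type": "text/turtle"},
)
member_url = created.headers.get("Location")  # URL assigned to the new member

# Reading the container again should now list the new member
listing = requests.get(container, headers={"Accept": "text/turtle"})
print(member_url)
print(listing.text)  # membership triples should now reference member_url
```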

Per our charter, we’re to produce the Last Call specification this month. This is when the WG thinks it’s done – all issues are closed – and external parties are invited to comment on the spec (not to say that comments aren’t welcome all the time). I’m sorry to say that this isn’t going to happen this month. We just have too many issues still left open and the draft still has to incorporate some of the decisions that were made. This will need to be reviewed and may generate more issues. However, the WG is planning to meet face to face in June to tackle the remaining issues. If everything goes to plan this should allow us to produce our Last Call document by the end of June.

Anyone familiar with the standards development arena knows that one month behind is basically “on time”. :-)

In the meantime, next week I will be at the WWW2013 conference where I will be presenting on LDP. It’s a good opportunity to come and learn about what’s in the spec if you don’t know it yet! If you can’t make it to Rio, you’ll have another chance at the SemTech conference in June, where I will also be presenting on LDP. Jennifer Zaino from SemanticWeb.com wrote a nice piece based on an interview I gave her.

May 6, 2013 | linkeddata, standards

More on Linked Data and IBM

For those technically inclined, you can learn more about IBM’s interest in Linked Data as an application integration model and the kind of standard we’d like the W3C Linked Data Platform WG to develop by reading a paper I presented earlier this year at the WWW2012 Linked Data workshop titled: “Using read/write Linked Data for Application Integration — Towards a Linked Data Basic Profile”.

Here is the abstract:

Linked Data, as defined by Tim Berners-Lee’s 4 rules [1], has enjoyed considerable well-publicized success as a technology for publishing data in the World Wide Web [2]. The Rational group in IBM has for several years been employing a read/write usage of Linked Data as an architectural style for integrating a suite of applications, and we have shipped commercial products using this technology. We have found that this read/write usage of Linked Data has helped us solve several perennial problems that we had been unable to successfully solve with other application integration architectural styles that we have explored in the past. The applications we have integrated in IBM are primarily in the domains of Application Lifecycle Management (ALM) and Integration System Management (ISM), but we believe that our experiences using read/write Linked Data to solve application integration problems could be broadly relevant and applicable within the IT industry.

This paper explains why Linked Data, which builds on the existing World Wide Web infrastructure, presents some unique characteristics, such as being distributed and scalable, that may allow the industry to succeed where other application integration approaches have failed. It discusses lessons we have learned along the way and some of the challenges we have been facing in using Linked Data to integrate enterprise applications.

Finally, we discuss several areas that could benefit from additional standard work and discuss several commonly applicable usage patterns along with proposals on how to address them using the existing W3C standards in the form of a Linked Data Basic Profile. This includes techniques applicable to clients and servers that read and write linked data, a type of container that allows new resources to be created using HTTP POST and existing resources to be found using HTTP GET (analogous to things like Atom Publishing Protocol (APP) [3]).

The full article can be found as a PDF file: Using read/write Linked Data for Application Integration — Towards a Linked Data Basic Profile

September 11, 2012 | linkeddata, standards

Linked Data

Several months ago I edited my “About” text on this blog to add that: “After several years focusing on strategic and policy issues related to open source and standards, including in the emerging markets, I am back to more technical work.”

One of the projects that I have been working on in this context is Linked Data.

It all started over a year ago when I learned from the IBM Rational team that Linked Data was the foundation of Open Services for Lifecycle Collaboration (OSLC), which Rational uses as their platform for application integration. The Rational team was very pleased with the direction they were on but reported challenges in using Linked Data. They were looking for help in addressing these.

Fundamentally, the crux of the challenges they faced came down to a lack of formal definition of Linked Data. There is plenty of documentation out there on Linked Data but not everyone has the same vision or definition. The W3C has a growing collection of standards related to the Semantic Web but not everyone agrees on how they should be used and combined, and which ones apply to Linked Data.

The problem with how things stand isn’t so much that there isn’t a way to do something. The problem is rather that, more often than not, there are too many ways. This means users have to make choices all the time. This makes starting to use Linked Data difficult for beginners and it hinders interoperability because different users make different choices.

I organized a teleconference with the W3C Team in which we explained what IBM Rational was doing with Linked Data and the challenges they were facing. The W3C team was very receptive to what we had to say and offered to organize a workshop to discuss our issues and see who else would be interested.

The Linked Enterprise Data Patterns Workshop took place on December 6 and 7, 2011 and was well attended. After a day and a half of presentations and discussions the participants found themselves largely agreeing and unanimously concluded that the W3C should charter a Working Group to produce a Recommendation that formally defines a “Linked Data Platform”.

The workshop was followed by a submission by IBM and others of the Linked Data Basic Profile and the launch by W3C of the Linked Data Platform (LDP) Working Group (WG) which I co-chair.

You can learn more about this effort and IBM’s position by reading the “IBM on the Linked Data Platform” interview the W3C posted on their website and reading the “IBM lends support for Linked Data standards through W3C group” article I published on the Rational blog.

On a personal level, I’ve known about the W3C Semantic Web activities since my days as a W3C Team Member but I had never had the opportunity to work in this space before so I’m very excited about this project. I’m also happy to be involved again with the W3C where I still count many friends. :-)

I will try to post updates on this blog as the WG makes progress.

September 10, 2012 | ibm, linkeddata, standards

LibreOffice should declare victory and rejoin OpenOffice

When OpenOffice went to the Apache Software Foundation I started writing a post about this topic that I never got to finish and publish.

The post from my colleague Rob Weir on Ending the Symphony Fork prompted me to post this now though.

I should say that I no longer have anything to do with what IBM does with ODF or anything related. I changed positions within IBM in the Fall of 2010 and now focus on other things, such as Linked Data, which I may talk about in some other post.

In fact, I’m now so out of touch with the folks involved with ODF that I only learned about OpenOffice going to the Apache Software Foundation when the news hit the wire. I had no knowledge of what was going on and have no insights as to what led to it. Similarly, I only learned after the fact about IBM deciding to merge Symphony with OpenOffice.

So, if anyone wants to blame me for speaking as a person on IBM payroll, I’m not even going to bother responding. This is entirely my personal opinion and I’ve not even talked about it to anyone within IBM.

But let me say quite bluntly what Rob is only hinting at: It’s time for LibreOffice to rejoin OpenOffice.

LibreOffice started from a sizable portion of the OpenOffice.org community being tired of Oracle’s control and apparent lack of interest in making it a more open community. I certainly understand that. But now that this problem is solved, what does anyone have to gain from keeping the fork alive?? Seriously.

While forks in the open source world can be a tremendous way of shaking things up, they can also be very damaging. In this case, I think it’s a waste of resources and energy to keep this going. Instead of competing with each other the LibreOffice and OpenOffice communities should get together to fight their common and real competitor.

I know a certain level of competition can be healthy but I’m tired of seeing open source communities fight with each other to their own loss.

I know the fork was painful and people still hold a lot of resentment against one another, but they need to get over that. They need to realize they would do themselves and everyone else a real service by putting all this behind them and uniting. LibreOffice should declare victory and join forces!

February 3, 2012 | opensource, standards

A little trick to make your presentation document smaller

Presentation documents have become an essential part of our work life and our mailboxes are typically filled and even clogged with messages containing presentations. Some of these attachments are unnecessarily big and there is something very simple you can do to make them smaller. Here is what I found.

I was working on a presentation which had already gone through the hands of a couple of my colleagues when I noticed it contained 40 master slides, or templates, while only 3 of them were used and most of the others were just duplications.

From what I understand this typically happens when you paste in slides copied from other presentations. Usually the software pastes the slide in along with its template to ensure it remains unchanged. Given how common it is to develop presentations by pulling slides from different sources, this can quickly lead to a situation with many templates, many of them duplicates.

I went on to delete all the useless copies I had in my document and saved the file to discover that its size had gone from 1.6MB to a mere 800KB. Even though disk space and bandwidth are getting cheaper every day, I think anybody will agree that a 50% size reduction remains significant!

So, here you have it. Tip of the day: to make your presentation file smaller and avoid clogging your colleagues’ mailboxes unnecessarily, check the master view of your presentation and consolidate your templates.

Of course the actual size reduction you’ll get depends on your file. In this case the presentation contained about 20 slides, with only 3 of them including graphics.

For what it’s worth I experimented both with ODF and PPT using Lotus Symphony, as well as with PPT using Microsoft PowerPoint 2003, and the results were similar in all cases.

March 1, 2011 | standards

The cost of wifi and wifi security (continued)

Just to clarify my post on the cost of wifi and wifi security, I’m not advising anyone to turn security off just because it may be significantly slowing their connection. For one thing, I didn’t.

Just as the old saying “safety first” goes, “security first” ought to prevail here. Indeed, there are several reasons for which you should bear the cost of security and keep it on no matter what.

If you need more speed, like I do for my media center, the solution for now is to use a cable and avoid wifi altogether. For a media center it’s not so bad given that I don’t really need to move it around, it’s just that there already are so many cables, one fewer would have been nice…

In the future, the upcoming 802.11n wifi standard should alleviate that problem by providing faster speed all around. You can actually already get products that support draft versions of the spec but I prefer waiting for the technology to stabilize.

The intent of my post was merely to highlight something that doesn’t seem to be getting much coverage on the web and which I think people should be aware of.

Also, I should note that both devices – the router and your computer – play a role in this. So, the loss in speed doesn’t necessarily only come from the router. The wifi card in your computer may be just as guilty. To figure this out you’d have to make tests with different computers, which I haven’t done (yet).

November 29, 2010 | standards

The cost of wifi and wifi security

I break a long silence to write about something I just found out about wifi and wifi security. Admittedly it may not be an earthshaking discovery but having searched for info on the subject it doesn’t seem like it is covered much so it seems worth a blog post (plus, for once, I can give this higher priority than everything else.)

There is a lot of info out there on how to set up your home wifi and set it up to be secured. However, little is said about what this will cost you. I mean in loss of speed.

I did some tests over the weekend and here is what I found, using speedtest.net, with a cable internet connection:

Directly connected to the cable modem (no router, no wifi): ~23Mbps download

Connected via cable through my router (“Belkin Wireless G Plus Router”), no wifi: ~17Mbps download. Gasp, that’s a 25% loss right there. I’m no network expert so I don’t know if that’s normal but I sure didn’t expect to lose that much just going through the router. But that’s actually nothing. Read on.

Connected via wifi through my router, with an open connection, no security: ~14Mbps download. Ouch. Here goes another 18%. Unfortunately that’s not even close to being the end of it.

Connected via wifi through my router, with security set to WPA-PSK TKIP: ~8Mbps download. Wow! That’s yet another 42% loss just for turning the security on, which every website out there says you MUST do.

The loss due to the security setting motivated me to run tests against the various security options my router supports. It turns out that all WPA options and WEP 128bits basically lead to the same poor results.

Setting security to WEP 64bits is the only security option that doesn’t severely impact performance: ~13Mbps.

Sad state of affairs!

WEP is known to be very weak and easy to break in minutes by a knowledgeable hacker. 64bits is that much faster to break than 128bits obviously.

So here you have it. The choice is between fast and unsecured or secured and slow. Stuck between a rock and a hard place.

Obviously results will vary depending on the router you use, but here is the rub: when shopping for routers I found very little to no info on the impact of turning security on. Most product claims are for optimal circumstances, as in “up to xxx”, and relative, as in “10x faster than xxx”. This is of no help in determining what performance you will actually get.

One thing that plays a role in the performance you get is the CPU your router is equipped with. Yet, from what I’ve seen, this is not a piece of information that is readily available.

To make matters worse, from what I’ve seen, websites such as CNET don’t highlight that aspect either. So, you’re pretty much on your own to figure it out.

Beware. Run some tests and see for yourself what you get.
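If you would rather script those tests than click through speedtest.net for every router and security configuration, here is a minimal sketch. It assumes the third-party speedtest-cli Python package, which wraps speedtest.net; any other speed test tool would do just as well.

```python
# Minimal sketch: measure download/upload speed from a script instead of
# the speedtest.net web page. Assumes the third-party "speedtest-cli"
# package (pip install speedtest-cli).
import speedtest

st = speedtest.Speedtest()
st.get_best_server()                 # pick the closest test server

down = st.download() / 1_000_000     # bits/s -> Mbps
up = st.upload() / 1_000_000

print(f"download: {down:.1f} Mbps, upload: {up:.1f} Mbps")
```

Run it once for each setup (wired, open wifi, WPA, WEP) and compare the numbers, as I did by hand above.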

November 28, 2010 | standards
