The leaked updated document of the European Interoperability Framework (EIF) is generating a lot of noise and for good reason. It is taking back what could be considered one of the most advanced features of the previous document: its insistence on the use of open standards.
In particular, the new document contains the following puzzling piece instead:
interoperability can also be obtained without openness, for example via homogeneity of the ICT systems, which implies that all partners use, or agree to use, the same solution to implement a European Public Service.
I don’t know about you, but to me this statement simply makes no sense. And I wonder to whom it could truly make sense.
Indeed, interoperability is defined in Wikipedia as “a property referring to the ability of diverse systems and organizations to work together”. That seems about right to me.
So, how could “homogeneity” possibly qualify as a way of obtaining “interoperability”? Aren’t “homogeneity” and “diverse” opposites?
Saying that “interoperability can be obtained [...] via homogeneity” is equivalent to saying that diverse systems and organizations can work together via homogeneity. Or in other words that diversity can be dealt with via homogeneity. This doesn’t make sense, does it?
The only way to make sense of this is obviously to read it as saying that one can actually avoid the need for interoperability by adopting a homogeneous system or solution. That is actually true. And it’s something many organizations have tried before. But everybody has learned by now that this is a losing proposition. It just doesn’t work.
It may work on a short-term basis, but in the long term it never does, because the world is fundamentally heterogeneous. And resistance in this domain is futile: one way or another, the heterogeneous aspect of nature will eventually kick in. One of the most common sources of heterogeneity in IT simply comes from mergers and acquisitions, which happen all the time.
Furthermore, isn’t the whole point of the European Interoperability Framework about enabling heterogeneity? Isn’t it all about providing choice? So, why would the EU endorse the notion of having everybody select one specific solution or system? Isn’t that in total contradiction with its very goal?
Why would the EU promote the use of one specific system or solution that would bind governments and their constituents to a specific vendor rather than allowing diversity and choice? I seriously wonder.
And one has to wonder who stands to gain from such an idea… For sure, anyone who has a monopoly or quasi-monopoly would love it. Do you know anyone?
I seriously hope the EU realizes how misguided this move was and takes it back.
Especially because this flies in the face of the current trend in favor of open standards and open source that has recently made Europe so interesting in the field of standards. This is what has led several other countries, such as Japan, to reach out to Europe to discuss standards-related policy issues. It’d be a shame to kill that momentum.
I’ve talked about the Eco-Patent Commons a couple of times before, including in a recent announcement of a presentation I gave yesterday at the Licensing Executives Society (LES) USA-Canada Annual Meeting.
Fortunately, my presentation happened to coincide with a press release issued yesterday announcing two new members, Dow Chemical and Fuji-Xerox, as well as a new pledge by Xerox. This brings the number of members to 11 and the number of patents in the commons to 100.
As I stated before, while these numbers do not demonstrate an explosion in membership and pledged patents, I’m pleased to see a continuous increase on both fronts. In particular, the fact that existing members keep adding patents, beyond the initial pledge made to become a member, demonstrates a real commitment to the commons.
Based on the feedback I got and the people who came to meet me afterward, my talk at LES was very well received and sparked quite a bit of interest. I have to say it must have been quite a change for these people, who otherwise pretty much only hear about generating more money out of their IP for three full days!
I hope all this helps spread the word about this important project. To give an example of the tangible impact this project has already had: IBM pledged a patent on the substitution of a toxic solvent used in the manufacturing of electronic chips with a mixture of alcohol and water that is better for the environment. We’ve been informed that Yale University is now using this for its quantum computing device research. How cool is that?
Please, look into it if you haven’t done so already. Participating in such a project is a great opportunity to show leadership in the protection of the environment, and looking for ways to foster sustainable development not only makes good sense, it is simply part of our social responsibility.
I’m attaching the slides I used at the conference.
I’ve talked about the Eco-Patent Commons before on this blog and I just want to advertise the fact that I will be speaking about it at the upcoming LES Annual Meeting in San Francisco.
I’m always shy about pointing out any recordings or podcasts that get published on the web because I find it a very humbling experience to hear myself speak. This may be true for just about everybody, but I guess even more so for those of us who have to speak in a language different from our native one.
But in an effort not to let this get in the way of shedding more light on what I think is a great project I want to say that I gave a short interview that is now posted on the meeting website. In this interview I introduce what the Eco-Patent Commons is about.
I encourage everybody to attend the meeting and learn more about this initiative. In the meantime, please, listen to the podcast and check the Eco-Patent Commons website.
As the world faces unprecedented global environmental challenges, there has never been a better time for every one of us to look at what we can do to help. The Eco-Patent Commons provides a new and unique way to make an impact. I urge everybody to look into it and give it some serious consideration. It is good for everybody, including you.
And if you come to the LES meeting in a few weeks, please come and meet me there.
I look forward to this event.
It’s been a while since I last posted, but that is to be expected at times. First, because I don’t want to force myself to post just for the sake of it. Second, because I keep all my private stuff away from this blog. Last, because I’ve been working on things that aren’t public and that I can’t talk about here, and when I did have something I felt like talking about I just didn’t have the time.
This being said, I recently stumbled over a piece of information about Facebook that has left me baffled enough that I want to post about it here.
Most people I know have some access restrictions on their FB profile. It is typically open to just friends, friends of friends, or maybe networks, but it is rarely completely public. Did you know, though, that when you send a message to someone through FB you effectively give that person access to your profile for 30 days? I’m not kidding.
When this was pointed out to me I just didn’t believe it. It made no sense to me at all. How could they possibly silently override your privacy settings? Sure enough, though, a posting on Yahoo! Answers seems to confirm that claim.
I searched FB’s official documentation and found nothing. I did find a bunch of information elsewhere, though, mostly from other confused users desperately trying to figure out what the real story is.
I eventually found what appeared to be the “official” answer in FB’s help center Q&A which I’ll reproduce here:
When you contact someone through a poke, message, or friend request, Facebook temporarily allows that person to see certain parts of your profile, even if your privacy and network settings would usually prevent him or her from seeing your full profile. The only parts of your profile that are made visible are your Basic Info, Work Info, Education Info, your profile pictures album, and your Friends List. A poke allows the user to see this information for one week, a message enables visibility for one month, and a friend request allows the user to see this information until the request is either confirmed or denied.
However, judging from the various experiments reported by users, it’s actually not certain whether access is given to only parts of your profile, and what exactly this includes. Reports actually contradict each other, some saying this has been fixed and others saying it hasn’t.
So, I decided to test it myself. I created a bogus FB account to which I sent a message from my own account. I then logged in with the bogus account, and when I tried to access my profile I got access to almost nothing. What I got access to was basically the almost empty profile that people who are not my friends get to see, in accordance with my privacy settings.
This is somewhat reassuring but it makes you wonder about the “official answer” quoted above.
There is one effective workaround to this problem: reply to the person’s message, then immediately block them, then immediately unblock them again. This reverts their status to being able to message you back, but not to see any aspect of your profile, just like before you ever messaged them in the first place.
It’s unfortunate that FB doesn’t seem to care enough to fully document the actual and current behavior though. If anyone has additional information on this please let me know. Thanks.
For obvious reasons what’s happening in the emerging markets such as India and China is getting a lot of attention but it seems worthwhile to underscore that Europe is really playing a major role in getting our industry to move forward.
If you’re not convinced I suggest you take a look at the news section of the Open Source Observatory and Repository Europe website.
I find it fascinating to see the prominent place open source and standards issues are taking in the political arena, the number of decisions various European administrations are making in favor of open source and standards, and the cost savings some of these administrations are reporting.
This is demonstrated by the following few examples:
The French candidate for the European Parliament Marielle de Sarnez says public administrations’ interest in free software is essential. “This is an issue of competitiveness for the EU in the information technologies sector, as well as the condition of our technological independence.”
Two members of parliament from the Italian Democratic Party want Italy’s public bodies to favour free software. By 2012 all IT systems should be based on such software, MPs Vincenzo Vita and Luigi Vimercati proposed in a bill last month.
The city council of Amsterdam decided on Wednesday that OpenOffice and Firefox should become the default applications on all 15,000 desktops in use by the administration.
Nine Swedish municipalities have asked ten software application firms to start supporting OpenOffice.
The Danish municipality of Gribskov has saved two million DKK, about 270,000 euro, over the past two years by switching the public administration and schools to OpenOffice, says Michel van den Linden, responsible for IT in the municipality, in an interview with the Danish IT news site Computerworld.
The French Gendarmerie’s gradual migration to a complete open source desktop and web applications has saved millions of euro, says Lieutenant-Colonel Xavier Guimard. “This year the IT budget will be reduced by 70 percent. This will not affect our IT systems.”
And the list goes on and on. Good luck to those who think they can still stick to the old model of proprietary software and vendor lock-in. This is like standing in front of a train coming at full speed in my opinion.
On a personal level I’m obviously interested because I’m European and still have strong ties to the old continent, but this, along with many other changes I observe, makes it clear to me that the United States is no longer where “things” are happening. At least not the way it used to be. The change is coming from other places in the world, like Europe.
As I indicated in a previous post, I was invited to participate in the SDForum Open Source Colloquium on Monday. This year the event ended up being jointly held with Microsoft’s third annual Open Source ISV Forum, which was taking place the same day. Unfortunately, due to conflicting schedules, I couldn’t attend many of Microsoft’s sessions.
The next two days the InfoWorld OSBC conference was held in the same hotel and I attended that event as well. So, I just participated in two and a half days of presentations, panels, and hallway discussions on open source and I want to share some of my impressions.
First, I think it’s fair to say that the most obvious thing to come out of all this is that there isn’t any discussion anymore about whether open source is real. It is clearly accepted that it has become part of our industry and is here to stay. As several speakers commented, the fact that even Microsoft seems to finally be recognizing this is a clear sign that the question isn’t really up for debate anymore.
I should point out that Microsoft’s message remains somewhat twisted though. Yes, they recognize that open source is part of our industry. They talk about interoperability with Linux for instance and this appears to be real, simply because it is motivated by customer demand and even Microsoft has to listen to its customers sometimes.
However, this doesn’t mean they are embracing open source for that matter. You’ve probably heard that the financial crisis is said to be an incentive for companies to look at open source solutions as a way to cut costs. I don’t know about you but it makes sense to me.
In the little I heard during Microsoft’s event, one of their executives claimed that, contrary to what is being said, there is no real move toward open source. According to him, this is because the last thing companies want to do in the current situation is take risks, and moving toward open source is too big a risk.
While the point may seem to have some validity, it reminds me of the same old FUD Microsoft has been spreading for years to try to keep people away from open source. And one has to balance it against claims from Mindtouch’s CEO and the like about the ease and record speed with which companies can deploy their open source offerings.
The second thing I noticed is that a lot of the sessions were about sharing information on how to use open source, how to manage open source activities in your company, how to successfully launch an open source project and create a community around it, the legal intricacies of the various open source licenses and their interaction with proprietary code.
There seemed to be a large number of lawyers, actually, both presenting and attending, as well as geeks and business people. Bringing these people together seems to be characteristic of what open source does.
I should mention one talk from a lawyer on how to separate or “shim” proprietary code from copyleft code (typically under the GPL). He mentioned that some people got the feeling he was just helping companies avoid complying with the obligations brought by the copyleft license, and he gave some explanation of why this wasn’t the case. But I have to admit that I did not understand it.
All I can say is that having listened to all the techniques he suggests one use to avoid license contamination left me with the exact feeling he tried to dispel. That is: 1) these are just tricks to avoid complying with the license, and 2) this clearly goes against the spirit of the license.
In summary, I think a lot of very practical information was being shared, rather than the general debates on the good or bad of open source we used to have. I think this is very good and a clear sign of maturity.
As some of us remarked during the conference(s), we probably won’t have conferences dedicated to open source for much longer, for that matter. Open source is poised to become business as usual.
Although I’ve never talked about the Eco-Patent Commons before I’ve actually been involved in this project since its inception. You might wonder why, given that it doesn’t really have anything to do with standards and open source which are my primary focus.
It’s one of those “special projects” we have at IBM which do not necessarily fall within the scope of anyone’s responsibilities, and for which we pull in people with various skills to help out.
What I brought to the project was experience with patent pledges and policies, as well as with organizations and associations of various interested parties (regarding process, governance, etc.).
The initial idea came out of IBM’s Global Innovation Outlook program, in which we invite people from all around the world to meet and discuss “the most vexing challenges on earth” and what might be done about them.
In this case the idea that came out of the GIO was to share patents to help protect the environment and foster innovation in that field.
For several years IBM had been experimenting with non-traditional ways to use our patent portfolio. We thus made several patent pledges in support of Linux, Open Source, Web services based healthcare and education related standards, and others. So, the idea of allowing one to use our patents on a royalty-free basis for a specific purpose was not a foreign concept to IBM, and doing so in support of the protection of the environment fit within this trend. It was therefore agreed upon within IBM without much controversy.
Because we didn’t want this to be just an IBM thing however, we looked for a neutral host and invited other companies to participate in the creation of a patent commons.
We investigated several possible hosts and eventually settled on the World Business Council for Sustainable Development (WBCSD) which welcomed us with enthusiasm.
WBCSD is perfect for this because the project fits well within its mission, and WBCSD is an international organization with participation from companies all around the world.
The Eco-Patent Commons was launched in January 2008 with the participation of Nokia, Pitney Bowes, and Sony, in addition to IBM.
While we haven’t seen an explosion in participation, the commons exists and keeps growing, both in number of patents and in number of members. This is very encouraging. The idea of pledging patents is still brand new, so it’s no surprise that it takes time for companies to get comfortable with it. The fact that several companies have already done so leaves me with no doubt that more are to join us.
I invite you to familiarize yourself with the Eco-Patent Commons. Investigate whether your company might join, and spread the word around you. Feel free to contact me if you have any questions.
I realize this is late notice, but there is a whole set of events happening in San Francisco, so if you’re in the area you may be interested in this.
I participated in last year’s event and thought the discussions were very interesting, so I’m looking forward to it and invite people to come and join us there.
If you’re interested check out the registration page.
See you there.
I received an invitation to a Symposium on Peer Reviewing, which is motivated by the following:
Only 8% of members of the Scientific Research Society agreed that “peer review works well as it is”. (Chubin and Hackett, 1990; p. 192).
“A recent U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research.” (Horrobin, 2001)
Horrobin concludes that peer review “is a non-validated charade whose processes generate results little better than does chance.” (Horrobin, 2001). This has been statistically proven and reported by an increasing number of journal editors.
After a short introduction, the invitation goes on to explain how one should go about submitting a paper and what the selection process will be. This is what it says:
All Submitted papers will be reviewed using a double-blind (at least three reviewers), non-blind, and participative peer review.
Some people have humor.
I noticed that several e-government policy documents being developed around the world touch on the difficult issue of dealing with extensions to standards. This problem always seems to be a source of controversy and is hard to deal with, so I decided to investigate it further.
I asked for input from several standards people and ended up writing, with my colleague Jochen Friedrich, the following short document.
In this document we tried to describe the problem, capture the main points that have been made to us so far, and further define the concept of “Open extensions”. This concept can be found in the draft policy of the government of India (PDF).
It would be great if this could constitute the beginning of a discussion and help move the industry in the right direction on this issue. I would appreciate it if you could use the comment feature to give us your opinion.
This is very much a work in progress and I may publish updates in the future.
As with everything on this blog, this is my/our personal opinion and does not necessarily represent IBM’s position.
Extensions to standards are common practice, in some areas more than in others. Extensions can help meet specific requirements in a given domain which are not covered by the applicable standard. On the other hand, extensions carry the risk of breaking interoperability and thus violating one of the basic objectives of having a standard.
How to deal with extensions is therefore a major challenge – in particular for organizations which consider standards, and in particular open standards, as the basis for their technology and procurement decisions. Fundamentally, the goal of procurement and open standards policies ought to be to prevent vendor lock-in via proprietary extensions which effectively break interoperability.
Open Standards and the Public Sector
In the globally integrated economy, open technical standards are integral to enabling the delivery of everything from disaster relief services and health care, to business services and consumer entertainment. They allow governments to create economic development platforms and deliver services to their citizens. Open standards enable electronic devices and software programs to interoperate with one another, which is a prerequisite for efficient electronic data processing and transaction handling.
Worldwide, more and more governments are implementing open standards policies. They require open standards and interoperability in public tenders and thus leverage the benefits of open standards for the public sector. Open standards ensure that users have flexibility and choice, by keeping exit costs low where technologies need to be replaced by better, more efficient, or more cost-effective ones. Open standards are key to guaranteeing fair competition, for open source offerings as well as for other stakeholders in the market.
The challenge of extensions
Extensions to open standards are a challenge because they are to some extent contrary to the whole point of requiring the use of standards in the first place. However, both the complexity of the various situations technology has to address and the need for evolution make extensions a necessity that needs to be taken into account. And, after all, extensions don’t necessarily have to break the idea behind using a standard.
Indeed, the key criterion regarding extensions is not whether some technology or product extends a given standard or not, but rather whether in doing so it breaks interoperability.
Some standards provide built-in extension mechanisms with ways to communicate what extensions, if any, are being used and how they must be dealt with. For instance, the standard may prevent interaction altogether if the integrity of the information being exchanged depends on an extension which is not understood by the other component.
In such a case a product is deemed compliant even when it supports extensions – assuming they are supported according to the mechanism defined by the standard.
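To make the idea of a built-in extension mechanism concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not taken from any particular standard (the names `critical`, `Message`, and `process` are invented for this example); the must-understand rule it implements is loosely inspired by the way X.509 flags certificate extensions as critical.

```python
# Hypothetical sketch of a "must-understand" extension mechanism:
# each extension on a message is flagged as critical or not. A receiver
# refuses a message that depends on a critical extension it does not
# implement, and safely ignores unknown non-critical ones.

from dataclasses import dataclass, field


@dataclass
class Extension:
    name: str
    critical: bool  # True: receiver must understand it or reject the message


@dataclass
class Message:
    body: str
    extensions: list[Extension] = field(default_factory=list)


class UnsupportedCriticalExtension(Exception):
    """Raised when a message relies on a critical extension we don't know."""


def process(message: Message, supported: set[str]) -> str:
    """Return the message body, honoring the must-understand rule."""
    for ext in message.extensions:
        if ext.critical and ext.name not in supported:
            # The integrity of the exchange depends on this extension,
            # so a compliant receiver refuses to interact rather than guess.
            raise UnsupportedCriticalExtension(ext.name)
        # Non-critical extensions the receiver doesn't know are ignored.
    return message.body


msg = Message("hello", [Extension("compression", critical=False),
                        Extension("signature", critical=True)])
print(process(msg, supported={"signature"}))  # prints "hello"
```

Under such a scheme, a receiver that supports only `signature` accepts the message above, while one that supports neither extension rejects it outright, which is exactly the behavior that lets a product support extensions and still be deemed compliant.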
Procurement policies – support for Open Extensions
In general, products that support extensions to a standard ought to strictly adhere to the standard by default and only make use of extensions through some explicit request mechanism. This is necessary to avoid unintentional reliance on proprietary extensions, which is typically discovered only later, when trying to connect components from different vendors or to replace a component from one vendor with that of another.
When extensions are supported they should be “Open Extensions”. Open extensions are to extensions what open standards are to standards. To be “open” extensions must meet the same criteria with regard to openness, transparency, availability, implementability, etc.
Open extensions are best developed as part of the evolution of the standard in an open standards development organization. This has proved to be effective and efficient in ensuring that such extensions are indeed open which is a major element to preserving interoperability.
Open extensions provide for the required flexibility while keeping the spirit and technical benefits of open standards and openness. Where procurement policies – especially in the public sector – support extensions they should make clear that it is limited to open extensions. Procurement policies which require open standards and allow, in exceptional cases, for open extensions are optimally set up to leverage the full benefits of open standardization.