I break a long silence to write about something I just found out about wifi and wifi security. Admittedly it may not be an earth-shaking discovery, but having searched for info on the subject it doesn’t seem to be covered much, so it seems worth a blog post (plus, for once, I can give this higher priority than everything else).
There is a lot of info out there on how to set up your home wifi and how to secure it. However, little is said about what this will cost you in terms of speed.
I did some tests over the weekend and here is what I found, using speedtest.net, with a cable internet connection:
Directly connected to the cable modem (no router, no wifi): ~23Mbps download
Connected via cable through my router (“Belkin Wireless G Plus Router”), no wifi: ~17Mbps download. Gasp, that’s a 25% loss right there. I’m no network expert so I don’t know if that’s normal, but I sure didn’t expect to lose that much just by going through the router. But that’s actually nothing. Read on.
Connected via wifi through my router, with an open connection, no security: ~14Mbps download. Ouch, there goes another 18%. Unfortunately that’s not even close to being the end of it.
Connected via wifi through my router, with security set to WPA-PSK TKIP: ~8Mbps download. Wow! That’s yet another 42% loss just for turning on security, which every website out there says you MUST do.
The loss due to the security setting motivated me to run tests against the various security options my router supports. It turns out that all the WPA options and 128-bit WEP lead to basically the same poor results.
Setting security to 64-bit WEP is the only security option that doesn’t severely impact performance: ~13Mbps.
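For the curious, each loss percentage I quote is just the drop relative to the previous setup. A quick sketch using the approximate figures above (exact rounding gives 26%, 18%, and 43%; my runs fluctuated, so the percentages in the text are rounded loosely):

```python
# Approximate download speeds from speedtest.net (Mbps), per setup.
measurements = [
    ("cable modem, direct", 23),
    ("router, wired", 17),
    ("router, wifi, open", 14),
    ("router, wifi, WPA-PSK TKIP", 8),
]

# Loss of each step relative to the previous one, in percent.
for (prev_name, prev), (name, speed) in zip(measurements, measurements[1:]):
    loss = (prev - speed) / prev * 100
    print(f"{prev_name} -> {name}: {loss:.0f}% loss")
```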
Sad state of affairs!
WEP is known to be very weak and breakable in minutes by a knowledgeable hacker, and 64-bit is obviously even faster to break than 128-bit.
So here you have it. The choice is between fast and unsecured or secured and slow. Stuck between a rock and a hard place.
Obviously results will vary depending on the router you use but here is the rub: when shopping for routers I found very little to no info on the impact of turning security on. Most product claims are for optimal circumstances, as in “up to xxx”, or relative, as in “10x faster than xxx”. This is of no help in determining what performance you will actually get.
One thing that plays a role in the performance you get is the CPU your router is equipped with. Yet, from what I’ve seen, this is not a piece of information that is readily available.
To make matters worse, from what I’ve seen, websites such as CNET don’t highlight that aspect either. So, you’re pretty much on your own to figure it out.
Beware. Run some tests and see for yourself what you get.
I’ve thought about posting here several times over the past several months, and I even have several drafts that never saw the light of day, but this just keeps getting pushed too far down my priority list to happen. However, I have to react to the buzz I’m discovering on my return from a week off.
Indeed, it is with quite a bit of astonishment that I read about Alex Brown’s frustration over Microsoft’s lack of interest in implementing ISO/IEC 29500 (OOXML). In the burgeoning comment section following his post, a commenter points out that this is “the outcome that many had predicted, yet you insisted would not occur”, to which Alex replies:
> Oh? I don’t recall making predictions about Microsoft’s behaviour? URL please!
Well, let me give you a link to a prediction I made! In my post “What Microsoft’s track record tells us about OOXML’s future” of March 25, 2008, I wrote:
> They can, and I predict will, ignore all these additions which are optional and stick to what they have. The only reason they were added was to remove reasons for National Bodies to vote against OOXML.
So, here we are. Two years later, Microsoft has done exactly that and Alex Brown is finally seeing the light.
One can only hope that the standards community will have at least learned a lesson from this sad story: you simply cannot take control away from a vendor who has a monopoly and isn’t willing to give it up through a mere standardization process.
The leaked updated document of the European Interoperability Framework (EIF) is generating a lot of noise and for good reason. It is taking back what could be considered one of the most advanced features of the previous document: its insistence on the use of open standards.
In particular, the new document contains the following puzzling piece instead:
> interoperability can also be obtained without openness, for example via homogeneity of the ICT systems, which implies that all partners use, or agree to use, the same solution to implement a European Public Service.
I don’t know about you but, to me, this statement simply makes no sense. And I wonder to whom it could truly make sense.
Indeed, interoperability is defined on Wikipedia as “a property referring to the ability of diverse systems and organizations to work together”. That seems about right to me.
So, how could “homogeneity” possibly qualify as a way of obtaining “interoperability”? Aren’t “homogeneity” and “diversity” opposites?
Saying that “interoperability can be obtained […] via homogeneity” is equivalent to saying that diverse systems and organizations can work together via homogeneity. Or in other words that diversity can be dealt with via homogeneity. This doesn’t make sense, does it?
The only way to make sense of this is obviously to read it as saying that one can actually avoid the need for interoperability by adopting a homogeneous system or solution. That is true, actually, and it is something many organizations have tried before. But everybody has learned by now that this is a losing proposition. It just doesn’t work.
It may work on a short-term basis, but in the long term it never does, because the world is fundamentally heterogeneous and resistance in this domain is futile. One way or another the heterogeneous nature of the world will eventually kick in. Some of the most common sources of heterogeneity in IT come simply from mergers and acquisitions, which happen all the time.
Furthermore, isn’t the whole point of the European Interoperability Framework about enabling heterogeneity? Isn’t it all about providing choice? So, why would the EU endorse the notion of having everybody select one specific solution or system? Isn’t that in total contradiction with its very goal?
Why would the EU promote the use of one specific system or solution that would bind governments and their constituents to a specific vendor rather than allowing diversity and choice? I seriously wonder.
And one has to wonder who has to gain from such an idea… For sure anyone who has a monopoly or quasi-monopoly would love that. Do you know anyone?
I seriously hope the EU realizes how misguided this move was and takes it back.
Especially because this flies in the face of the current trend in favor of open standards and open source that has recently made Europe so interesting in the field of standards. This is what has led several other countries such as Japan to reach out to Europe to discuss standards related policy issues. It’d be a shame to kill that momentum.
I’ve talked about the Eco-Patent Commons a couple of times before, including in a recent announcement of a presentation I gave yesterday at the Licensing Executives Society (LES) USA-Canada Annual Meeting.
Fortunately my presentation happened to coincide with a press release that was issued yesterday and which announces two new members: Dow Chemical and Fuji-Xerox, as well as a new pledge by Xerox. This brings the number of members to 11 and the number of patents in the commons to 100.
As I stated before, while these numbers do not demonstrate an explosion in membership and pledged patents, I’m pleased to see a continuous increase on both fronts. In particular, the fact that existing members keep adding patents beyond the initial pledge made to become a member demonstrates a real commitment to the commons.
Based on the feedback I got and the people who came to meet me afterward, my talk at LES was very well received and sparked quite a bit of interest. I have to say that it must have been quite a change for an audience that otherwise spends three full days hearing about little other than how to generate more money out of their IP!
I hope all this helps spread the word about this important project. Just to give an example of a tangible impact this project has already had: IBM pledged a patent on substituting a mixture of alcohol and water, which is better for the environment, for a toxic solvent used in the manufacturing of electronic chips. We’ve been informed that Yale University is now using this in its quantum computing device research. How cool is that?
Please, look into it if you haven’t done so already. Participating in such a project is a great opportunity to show leadership in the protection of the environment, and looking for ways to foster sustainable development not only makes good sense, it is simply part of our social responsibility.
I’m attaching the slides I used at the conference.
I’ve talked about the Eco-Patent Commons before on this blog and I just want to advertise the fact that I will be speaking about it at the upcoming LES Annual Meeting in San Francisco.
I’m always shy about pointing out any recordings or podcasts that get published on the web because I find it a very humbling experience to hear myself speak. This may be true for just about everybody, but I guess even more so for those of us who have to speak in a language other than our native one.
But in an effort not to let this get in the way of shedding more light on what I think is a great project, I want to mention that I gave a short interview that is now posted on the meeting website. In this interview I introduce what the Eco-Patent Commons is about.
I encourage everybody to attend the meeting and learn more about this initiative. In the meantime, please, listen to the podcast and check the Eco-Patent Commons website.
As the world faces unprecedented global environmental challenges, there hasn’t been a better time for every one of us to look at what we can do to help. The Eco-Patent Commons provides a new and unique way to make an impact. I urge everybody to look into it and give it some serious consideration. It is good for everybody, including you.
And if you come around the LES meeting in a few weeks please come and meet me there.
I look forward to this event.
It’s been a while since I last posted, but that is to be expected at times. First, because I don’t want to force myself to post just for the sake of it. Second, because I keep all my private stuff away from this blog. Last, because I’ve been working on things that aren’t public and can’t be talked about here, and when I did have something I felt like talking about, I just didn’t have the time.
This being said, I recently stumbled over a piece of information about Facebook that has left me baffled enough that I want to post about it here.
Most people I know have some access restrictions on their FB profile. It is typically open to just friends, friends of friends, or maybe networks, but it is rarely completely public. Did you know, though, that by sending someone a message through FB you effectively give that person access to your profile for 30 days? I’m not kidding.
When this was pointed out to me I just didn’t believe it. It made no sense to me at all. How could they possibly silently override your privacy settings? Sure enough, a posting on Yahoo! Answers seems to confirm that claim.
I searched FB’s documentation and didn’t find anything. Then I found a bunch of information elsewhere, mostly from other confused users desperately trying to figure out what the real story is.
I eventually found what appeared to be the “official” answer in FB’s help center Q&A which I’ll reproduce here:
> When you contact someone through a poke, message, or friend request, Facebook temporarily allows that person to see certain parts of your profile, even if your privacy and network settings would usually prevent him or her from seeing your full profile. The only parts of your profile that are made visible are your Basic Info, Work Info, Education Info, your profile pictures album, and your Friends List. A poke allows the user to see this information for one week, a message enables visibility for one month, and a friend request allows the user to see this information until the request is either confirmed or denied.
However, judging from the various experiments reported by users, it’s actually not certain whether access is given to only parts of your profile, and what exactly that includes. Reports actually contradict each other, with some saying this has been fixed and others saying it hasn’t.
So, I decided to test it myself. I created a bogus FB account and sent it a message from my own account. I then logged in with the bogus account and, when I tried to access my profile, I got access to almost nothing: basically the nearly empty profile that non-friends get to see, in accordance with my privacy settings.
This is somewhat reassuring but it makes you wonder about the “official answer” quoted above.
There is one effective workaround to this problem: reply to the person’s message, then immediately BLOCK them, then immediately UNBLOCK them again. This reverts their status to being able to message you back but not see any aspect of your profile, just like before you ever messaged them in the first place.
It’s unfortunate that FB doesn’t seem to care enough to fully document the actual, current behavior though. If anyone has additional information on this, please let me know. Thanks.
For obvious reasons what’s happening in emerging markets such as India and China is getting a lot of attention, but it seems worthwhile to underscore that Europe is really playing a major role in moving our industry forward.
If you’re not convinced I suggest you take a look at the news section of the Open Source Observatory and Repository Europe website.
I find it fascinating to see the prominent place open source and standards issues are taking in the political arena, the number of decisions various European administrations are making in favor of open source and standards, and the cost savings some of these administrations are reporting.
This is demonstrated by the following few examples:
The French candidate for the European Parliament Marielle de Sarnez says public administrations’ interest in free software is essential. “This is an issue of competitiveness for the EU in the information technologies sector, as well as the condition of our technological independence.”
Two parliament members of the Italian Democratic Party want Italy’s public bodies to favour free software. By 2012 all IT systems should be based on such software, MPs Vincenzo Vita and Luigi Vimercati proposed in a bill last month.
The city council of Amsterdam decided on Wednesday that OpenOffice and Firefox should become the default applications on all 15,000 desktops in use by the administration.
Nine Swedish municipalities have asked ten software application firms to start supporting OpenOffice.
The Danish municipality of Gribskov has saved two million DKK, about 270,000 euro, over the past two years by switching the public administration and schools to OpenOffice, Michel van den Linden, responsible for IT in the municipality, says in an interview with the Danish IT news site Computerworld.
The French Gendarmerie’s gradual migration to a complete open source desktop and web applications has saved millions of euro, says Lieutenant-Colonel Xavier Guimard. “This year the IT budget will be reduced by 70 percent. This will not affect our IT systems.”
And the list goes on and on. Good luck to those who think they can still stick to the old model of proprietary software and vendor lock-in. This is like standing in front of a train coming at full speed in my opinion.
On a personal level I’m obviously interested because I’m European and still have strong ties to the old continent, but this, along with many other changes I observe, makes it clear to me that the United States is no longer where “things” are happening. At least not the way it used to be. The change is coming from other places in the world, like Europe.
As I indicated in a previous post, I was invited to participate in the SDForum Open Source Colloquium on Monday. This year the event ended up being held jointly with Microsoft’s third annual Open Source ISV Forum, which was taking place the same day. Unfortunately, due to conflicting schedules, I couldn’t attend many of Microsoft’s sessions.
The next two days the InfoWorld OSBC conference was held in the same hotel and I attended that event as well. So, I just participated in two and a half days of presentations, panels, and hallway discussions on open source and I want to share some of my impressions.
First, I think it’s fair to say that the most obvious takeaway from all this is that there isn’t any discussion anymore about whether open source is real. It is clearly accepted that it has become part of our industry and is here to stay. As several speakers commented, the fact that even Microsoft seems to finally be recognizing this is a clear sign that the question isn’t really up for debate anymore.
I should point out that Microsoft’s message remains somewhat twisted though. Yes, they recognize that open source is part of our industry. They talk about interoperability with Linux for instance and this appears to be real, simply because it is motivated by customer demand and even Microsoft has to listen to its customers sometimes.
However, this doesn’t mean they are embracing open source for that matter. You’ve probably heard that the financial crisis is said to be an incentive for companies to look at open source solutions as a way to cut costs. I don’t know about you but it makes sense to me.
In the little I heard during Microsoft’s event, one of their executives claimed that, contrary to what is being said, there is no real move toward open source. According to him, this is because the last thing companies want to do in the current situation is take risks, and moving toward open source is too big a risk.
While the point may seem to have some validity, it reminds me of the same old FUD Microsoft has been spreading for years to try to keep people away from open source. And one has to balance that claim against those from MindTouch’s CEO and the like about the ease and record speed at which companies can deploy their open source offerings.
The second thing I noticed is that a lot of the sessions were about sharing practical information: how to use open source, how to manage open source activities in your company, how to successfully launch an open source project and create a community around it, and the legal intricacies of the various open source licenses and their interaction with proprietary code.
There seemed to be a large number of lawyers actually, both presenting and attending, as well as geeks and business people. Bringing these people together seems to be characteristic of what open source does. 🙂
I should mention one talk from a lawyer on how to separate or “shim” proprietary code from copyleft code (typically under the GPL). He mentioned that some people got the feeling he was just helping companies avoid complying with the obligations of the copyleft license, and he gave some explanation of why this wasn’t the case. But I have to admit that I did not understand it.
All I can say is that listening through all the techniques he suggests for avoiding license contamination left me with exactly the feeling he tried to dispel. That is: 1) these are just tricks to avoid complying with the license, and 2) they clearly go against the spirit of the license.
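To give a concrete idea of the kind of separation being discussed, here is my own minimal sketch, not the speaker’s material; the executable name `gpltool` is made up. Instead of linking against a copyleft library, the proprietary side talks to the GPL program across a process boundary:

```python
import json
import subprocess


def call_gpl_tool(payload, cmd=("gpltool", "--json")):
    """Call a (hypothetical) GPL-licensed program across a process boundary.

    The proprietary caller exchanges JSON over stdin/stdout with the GPL
    program instead of linking against it. Keeping the two in separate
    processes is the kind of separation these "shim" techniques rely on;
    whether that actually respects the license is the very debate above.
    """
    result = subprocess.run(
        list(cmd),                  # "gpltool" is a made-up executable name
        input=json.dumps(payload),  # send the request as JSON on stdin
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)
```

For instance, `call_gpl_tool({"size": 42})` would pipe the JSON to the external program and parse whatever JSON it prints back. Whether a process boundary is enough to escape copyleft obligations is, again, precisely what the debate is about.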
In summary, I think there was a lot of very practical information being shared rather than general debates on the good or bad of open source we used to have. I think this is very good and a clear sign of maturity.
As some of us remarked during the conference(s), we probably won’t have conferences dedicated to open source for much longer anyway. Open source is poised to become business as usual.
Although I’ve never talked about the Eco-Patent Commons before, I’ve actually been involved in this project since its inception. You might wonder why, given that it doesn’t really have anything to do with standards and open source, which are my primary focus.
It’s one of those “special projects” we have at IBM which do not necessarily fall within the scope of anyone’s responsibilities, and for which we pull in people with various skills to help out.
What I brought to the project was experience with patent pledges and policies, as well as with organizations and associations of various interested parties (process, governance, etc.).
The initial idea came out of IBM’s Global Innovation Outlook (GIO) program, in which we invite people from all around the world to meet and discuss “the most vexing challenges on earth” and what might be done about them.
In this case the idea that came out of the GIO was to share patents to help protect the environment and foster innovation in that field.
For several years IBM had been experimenting with non-traditional ways to use our patent portfolio. We thus made several patent pledges in support of Linux, open source, web-services-based healthcare and education-related standards, and others. So, the idea of allowing one to use our patents on a royalty-free basis for a specific purpose was not a foreign concept to IBM, and doing so in support of protecting the environment fit within this trend. It was therefore agreed upon within IBM without much controversy.
Because we didn’t want this to be just an IBM thing however, we looked for a neutral host and invited other companies to participate in the creation of a patent commons.
We investigated several possible hosts and eventually settled on the World Business Council for Sustainable Development (WBCSD) which welcomed us with enthusiasm.
WBCSD is perfect for this because the project fits well within its mission and because WBCSD is an international organization with participation from companies from all around the world.
The Eco-Patent Commons was launched in January 2008 with the participation of Nokia, Pitney Bowes, and Sony, in addition to IBM.
While we haven’t seen an explosion in participation, the commons exists and keeps on growing, both in number of patents and in number of members. This is very encouraging. The idea of pledging patents is still brand new, so it’s no surprise that it takes time for companies to get comfortable with it. The fact that several companies have already done so leaves me in no doubt that more are to join us.
I invite you to familiarize yourself with the Eco-Patent Commons. Investigate whether your company might join and talk about it around you. Feel free to contact me if you have any questions.