Arnaud's Blog

Opinions on open source, standards, and other things

Paradigm shifts and tug wars over the web – Part 2/2

In the first part of this discussion I focused on the shift that occurred around who controls the rendering of a page. I will now discuss how control of the user interaction shifted from the user to the page author/developer.

Web pages initially contained no information about how the user would interact with the page or how the web browser would behave. Users relied on functions provided by the browser to move back to a previous page, print a page, open a page in a new window, etc. The user was therefore in full control of the interaction.

This simple paradigm was first challenged, however, by the introduction of the infamous frames and link targets in HTML. These mechanisms let the web author dictate how the browser should behave when the user clicks a link. Aside from making it impossible to reliably bookmark a page, this marked an important departure from the original concept of having the user in control.
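For example, a link target lets the markup, not the user, decide where a link opens. A minimal sketch (the URLs and frame name are made up):

```html
<!-- The author decides this link opens in a new window,
     whatever the user's preference -->
<a href="https://example.com/page" target="_blank">Read more</a>

<!-- With framesets, a target routes the link into a named frame;
     the address bar then no longer reflects what is shown, which is
     why bookmarking frame-based pages is unreliable -->
<a href="chapter2.html" target="content">Chapter 2</a>
```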

Yet this was just the beginning, and it was nothing compared to what is now done on all the websites full of JavaScript that make up the so-called “Web 2.0”.

With JavaScript, web pages no longer merely contain content to be displayed along with some layout information; they are in effect programs that want to dictate how the browser should behave and how the user should interact with them. And that’s where the problems creep in.

In reality we end up with two separate entities trying to control the same thing: the web author on one side, and the user with the browser on the other, both trying to control how to manipulate and interact with the same information. It’s no wonder there is a clash.

This can take some rather benign forms, like the close button I often see on some web pages. I always wonder: who needs that? Why isn’t the close button in the corner of the window enough? I use neither anyway, favoring a quick Alt-F4; admittedly, other than a bit of wasted real estate on the page, it doesn’t really hurt.

Equally useless is the print button I often see on web pages. Some websites use it to render the page differently; that’s not truly necessary with proper use of stylesheets, but at least it does something useful. More often than not, though, it does nothing different from my browser’s print command. But, again, it doesn’t really hurt.
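The print-friendly rendering can indeed come from a stylesheet alone, applied automatically when the user invokes the browser's own print command; no button needed. A minimal sketch (the file name is made up):

```html
<!-- The browser applies this stylesheet only when printing,
     so the user's own print command already gets the print layout -->
<link rel="stylesheet" media="print" href="print.css">
```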

What really hurts is when it interferes with basic browser functions like the navigation history, or with whether a new page should be rendered in a new window or not. The reality is that in the vast majority of cases this is done for bad reasons – or at least for reasons that serve the web author rather than the web user.

These reasons include: the authors don’t want you to move off their site, they want to force some information onto you (remember those lovely pop-up ads?), they think they know what’s best for you or that you don’t know how to use your browser, etc.

The list of bad web programming practices hidden behind all this is endless. How many times do you see websites that urge you not to use the back button or reload the page, informing you that doing so may lead to multiple charges on your credit card? This one typically has nothing to do with JavaScript and more to do with the fact that they didn’t program their state machine well enough to support you going back and forth between pages, or reloading them.
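The usual fix is server-side, not in the page: make the submission idempotent, so that replaying it (via back, reload, or a double click) cannot charge twice. A minimal sketch with made-up names, using a one-time token carried by the order form:

```javascript
// Each rendered order form carries a fresh token; the server
// charges at most once per token, so resubmitting the same form
// (after back/reload) returns the original result instead of
// charging again.
function createCheckout() {
  return { processed: new Map() }; // token -> result of first submission
}

function submitOrder(checkout, token, amount) {
  // Replay of an already-processed token: return the original result.
  if (checkout.processed.has(token)) {
    return checkout.processed.get(token);
  }
  const result = { charged: amount, token: token }; // charge happens once
  checkout.processed.set(token, result);
  return result;
}
```

The same idea underlies the Post/Redirect/Get pattern: after a successful POST, redirect to a plain GET page so that back and reload only ever replay a harmless read.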

In fact, most of this only exists because websites are poorly designed. There are mechanisms to ensure the browser history gets updated so that the back button works, for instance. Fundamentally, though, it remains that there is a clash between two different paradigms that are both at play: 1) the user is in control, 2) the web author is in control.
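One such mechanism, common at the time, is to mirror the application state in the URL fragment so that every state gets a real history entry and the back button keeps working. A minimal sketch with made-up helpers; in a browser you would assign the result to `window.location.hash` and restore state on the `hashchange` event, so the functions here are kept pure:

```javascript
// Encode an application state object into a URL fragment.
function stateToHash(state) {
  return "#" + encodeURIComponent(JSON.stringify(state));
}

// Decode a URL fragment back into the state it encodes,
// or null when there is no fragment to restore from.
function hashToState(hash) {
  if (!hash || hash === "#") return null;
  return JSON.parse(decodeURIComponent(hash.slice(1)));
}
```

In a browser, a JavaScript-heavy page would set `window.location.hash = stateToHash(state)` on each state change, and rebuild its view from `hashToState(window.location.hash)` when the user navigates with back and forward.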

Users have control via their browser; authors have control via the JavaScript and other techniques they stuff their pages with.

While I enjoy many of the benefits modern websites bring, as a user, I regret the loss of control that often seems to come with them. I hope that we eventually reconcile the two forces at play, and authors/developers get better at designing their websites so that users can get the advantages of modern websites without losing basic functionality and control.


January 28, 2009 - Posted by | standards

1 Comment »

  1. This echoes the split of responsibilities many years ago, when I learned how to produce documentation at/for IBM.

    The ‘knowledgeable person’ was expected to key in their knowledge, with fairly basic mark-up indicating ‘This is a chapter’, ‘This is a heading’, ‘This is a paragraph’.

    Then the editor, or ‘text programmer’, would take over. Chapter, heading, and paragraph would be described in the ‘house style’, and the editor would make minor revisions to the text to make it more suitable for the target audience; hopefully not obscuring the technical meaning.

    There were two participants then; the one who knew what he wanted to say, and the one who didn’t know what to say but did know what it should look like.

    I lean to the view that the Web Browser is the user’s agent. If my web browser doesn’t do what I want, I will drop it in the trash and get another one. If that means I cannot see some publisher’s web site, it’s the publisher’s problem, not mine.

    Comment by Chris Ward | February 7, 2009 | Reply
