A few thoughts on standardisation, W3C, and the IDPF

Baldur Bjarnason

The news that the World Wide Web Consortium and the International Digital Publishing Forum are planning to merge has prompted many to reassess the state of publishing industry standardisation.

Over the weekend I jotted down a few of my own thoughts, initially in response to Peter Brantley’s post celebrating the idea.

Now, I’m generally not a kumbaya kind of guy, so it was never likely that I’d jump up and cheer what is essentially a feel-good “We Can Do It!” post, but I also see the general value in such a piece of writing. Sometimes people just need to be cheered on.

The problem is that people have been cheering ebook and digital book standards on for more than a decade and none of the fundamental problems have been addressed. We need more than happy thoughts to solve these problems and we need to be aware of the risks we are facing.

My first response was a question. (Additions and commentary are in square brackets.)

(And, because I regularly get angry emails/tweets from people who don’t read posts to the end, I actually do propose a solution at the end.)


Let’s say we convince publishers and related publishing industry companies and they all enthusiastically join the ebooks-on-the-web standards effort.

Given that these are organisations with little experience in long-term software development (they focus on one-off projects) and no experience in cross-industry software collaboration (the last time they tried, they got sued), that habitually underpay their staff, and whose management and executive staff have, both publicly and privately, expressed disdain for digital publishing (even relief that ebook sales are going down) [and, I might add, they legitimately have other priorities in maintaining their print businesses]…

… does their addition to the standardisation effort increase or decrease its odds of success?

Portable web publications and related specs–the whole effort to get the web community to solve the ebook mess–already smell way too much like XHTML2 for my taste; the IDPF/W3C merger doesn’t change the odour one bit.

I’m not at all convinced that the increased involvement of the IDPF and publishers in the process will push web-book-related standardisation in a healthy direction.

Everybody involved seems entirely oblivious to the possibility of portable web publications and related specs ending up as unimplemented failures.

There’s a substantial opportunity cost to adding features this complex to the web stack. For most browser vendors (or any vendor for that matter), the rational course of action is to prioritise implementing features with the highest marginal profit (marginal profit here being the marginal value added minus marginal cost of implementation). This is why MathML and CSS Regions languished even with the support they had.
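
To make the vendor calculus concrete, here is a toy sketch of that prioritisation logic. The feature names and numbers are entirely invented for illustration; only the arithmetic (marginal profit as value added minus implementation cost) is the point.

```python
# A toy model of the prioritisation argument above. Feature names and
# numbers are invented for illustration; the point is the arithmetic:
# marginal profit = marginal value added - marginal cost of implementation.

features = {
    "popular layout fix":        {"value": 90, "cost": 30},
    "MathML":                    {"value": 25, "cost": 60},
    "portable web publications": {"value": 20, "cost": 70},
}

def marginal_profit(feature):
    return feature["value"] - feature["cost"]

# A rational vendor works down this list and may never reach the
# features whose marginal profit is negative.
for name, feature in sorted(features.items(),
                            key=lambda kv: -marginal_profit(kv[1])):
    print(f"{name}: {marginal_profit(feature):+d}")
```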

From what I have read, the flagship concepts of the W3C publishing IG represent a low or unclear marginal profit to the web developer community in general.

Moreover, given that there are many other pressing issues plaguing web development, implementing ebook-related features has a non-trivial opportunity cost for both browser vendors and web developers.

Nobody seems to have taken it upon themselves to make a solid economic case for why this particular standardisation effort should take priority over the many others that are trying to solve pressing problems for the web.

The onus is on those who want these specs implemented to make the case for their implementation.

I’m not sure that case can be made. But it’s strange that nobody has really tried.


While I still believe in all of the points I made in that post, I also feel that it laid the blame a little too much on the publishing side. It underplayed a few issues that are core to why things are the way they are:

  1. The needs of the publishing industry and the web community do not coincide particularly well. Neither is to blame for this.
  2. There isn’t a strong case for substantially investing in building digital publishing tech. The economic story surrounding digital publishing is murky.
  3. The specification processes we have been using for ebook tech (and related) aren’t working.

This led to another post of mine on Medium which hews closer to the actual problem but doesn’t quite hit it:


Rhetorical questions about standards #

Let’s say we have two people.

One works for a publisher and has been tasked with working in a standards organisation to get technology specified that is strategically important to the publisher.

Another works for an organisation whose goal is to democratise access to longform text and to improve the lot of authors—something that’s only possible if publishers (education or trade) are forced to reform, which is incompatible with the publisher’s extant strategy.

If you get both to work on specifying a piece of technology, is the project:

a) More likely to fail (neither is willing to give up on their incompatible goals)?

b) More likely to be complicated and messy and therefore less adopted and buggy when implemented?

c) More likely to take such a long time to finish that it will be irrelevant when done?

d) All of the above?

Both the publishing industry and the web community are incredibly diverse groups. The idea that a single specification (the nebulous portable web publications idea) can serve them all without intolerable compromises is not plausible. The idea that a single organisation can serve them all is not a given.

Why should the web community work with the publishing industry when the financial strategies of most publishers do not coincide with those of most people who work on and for the web?

From the web perspective, the strategy that’s most likely to succeed is to work separately, without having to worry about the issues, business models, dynamics, and strategies of the publishing industries. Why shouldn’t the web community try to just improve reading in browsers at their own pace, focusing on their own needs and use cases without having to solve the self-inflicted problems of publishing?

Why is it a given that the publishers who have so far not participated in the web community must have something constructive to contribute to the web stack?

If the strategic goals of big publishers were compatible with the web, they’d have participated ages ago, as many, many other tech-savvy publishers already have. Why do the IDPF and the W3C need to merge for the publishing industry to get involved in the web?

Why should the web community be happy about being forced to be involved in—and required to solve—the utter mess that is the ebook space?

Why do the IDPF and the W3C just assume that everybody will be happy about the proposed merger?

Why do people think that it’s a good idea?


Tim Flem then asked this here question:

Can you specify what you consider to be the “intolerable compromises”? I can certainly see how DRM could be a sticking point, but what else?

Which led to my reply:


  1. Do not underestimate how problematic DRM will be in this collaboration. At the moment and as proposed, web DRM is limited to a plugin system for time-based media (which is bad enough). Adding publishers and a publishing industry interest group to the W3C will dramatically increase the pressure to extend that system to regular text. Extending that DRM system to general markup would be an intolerable compromise.
  2. The portability concept isn’t easily compatible with the web’s current security model (see my post A few simplified points on web and document security for an overview; a minimal sketch of the mismatch follows this list). Fixing that incompatibility requires compromises in the web stack that I’m not excited about. Compromising web security to cater to the business models of publishers would be an intolerable compromise.
  3. The publishing industry is deep into EPUB. Maintaining compatibility with their EPUB stack is one of the stated goals of many publishers and publishing industry companies. However, compatibility with something as complex and flawed as EPUB is expensive, hard, and fraught with challenges, and it has no value to the web community.
  4. There is an opportunity cost to all of these features. The web stack is flawed and needs improvement on many levels. The work going into fixing ebooks for the publishing industry is often work that could go into fixing pressing issues elsewhere. Not fixing something important because of this would be an intolerable compromise.
  5. One of Tim Berners-Lee’s stated goals with the merger is tracking (see What the Inventor of the World Wide Web Sees for the Future of Ebooks) and, from what I can tell, this is also an explicit goal of many organisations in the publishing industry. Pervasive tracking, however, is one of the biggest problems plaguing the web industry. Settling on an acceptable compromise for tracking is already hard enough.
  6. [Both] the web and the publishing industry are more heterogeneous than people make them out to be. Designing a single portable document format that serves the digital publication needs of everybody involved is, IMO, next to impossible without somebody making major compromises in their use cases. Somebody will inevitably find the necessary compromises intolerable.
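
As promised above, here is a minimal sketch of the mismatch in point 2. The web keys its security decisions to an origin, roughly the (scheme, host, port) triple of a document’s URL; the reduction below is my simplification, and the URLs are placeholders.

```python
# The web's security model keys sandboxing, storage, and permissions to an
# origin: roughly the (scheme, host, port) triple of the document's URL.
from urllib.parse import urlsplit

def origin(url):
    parts = urlsplit(url)
    if not parts.hostname:
        return None  # no host means nothing to key security decisions to
    port = parts.port or {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

# A publication hosted on the web has a well-defined origin...
print(origin("https://publisher.example/books/chapter-1.html"))
# -> ('https', 'publisher.example', 443)

# ...but the same publication, downloaded and opened as a portable file,
# does not, so same-origin checks and permissions have no anchor.
print(origin("file:///home/reader/book/chapter-1.html"))
# -> None
```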

Improving reading on the web can be done without introducing DRM, re-implementing the web’s security model to cater to full-featured portability, making exhausting EPUB compatibility compromises, or making the online tracking situation even more complicated and consumer-hostile.


I also posted a few tweets that are relevant (serial tweets merged for clarity):

“Do nothing and see what gets implemented” is actually a valid course of action for a standards group. One that gets dismissed too easily. If nothing then gets implemented, the problem isn’t too few standards but that the value of implementation does not exceed its cost.

Collaboration is not a good in and of itself. When the goals of the collaborators are incompatible, it instead decreases the odds of success. When promoting the idea that two groups should collaborate, you need to have a convincing argument for why it will increase the odds of success. Saying “because collaboration is good” is not convincing, especially when the track record of both organisations (IDPF/W3C) has been spotty.

Creating a specification for tech that does not yet exist (i.e. you aren’t normalising variations between vendors) is product development. I am extremely sceptical of the idea that the traditional W3C process is likely to succeed under those circumstances.


Finally, with my last post, I think I explained the problems, as I see them, a bit more clearly and followed it up with a proposed solution.

It was prompted by a tweet from Dave Cramer.


Different economics require different standards processes #

@fakebaldur I think of the w3c process as the great advantage over idpf process—requiring multiple implementations before a spec is final — Dave Cramer

This is utterly true. Procedurally, the W3C process favours implementation. But it doesn’t present the whole picture.

First off, the W3C is usually dealing with specifications that have very particular economic dynamics.

Second, the W3C has, off the top of my head, three different processes, two of which have evolved explicitly because the standard W3C process wasn’t working well.

1. The economics #

Historically, the W3C has tried to solve two different problems.

The first is to harmonise the implementation of a particular feature of the web stack across vendors. For example, several browser vendors see the value in addressing a particular problem area and need to make sure that the solution is compatible across browsers. This has sort of worked because browser vendors realised that incompatible APIs and features reduced the value of their browsers to pretty much everybody. Compatibility maximises adoption and value because they share a platform.

The key here is that you already have more than one browser vendor who has figured that the marginal profit (marginal value added minus marginal cost of implementing) of a feature is high enough to be worthwhile.

The second problem the W3C has tried to solve is when it sees the web stack as incomplete in some way or tries to push it in a particular ideological direction. These tend to be efforts that are high on ideology and low on economic rationale. Examples include early flavours of RDF, various flavours of XHTML, XHTML2 (a big enough flop to mention on its own), and similar messes.

The W3C has generally been much less successful at solving the second type of problem than the first (those efforts often end up as utter disasters). The reasons should become clear when you look at the processes.

2. The processes #

The W3C’s normal specification process, in broad strokes: a committee of interested vendors assigns editors they trust to write a specification; that specification then becomes a firmer and firmer recommendation as more and more vendors implement it and hash out incompatibilities.

This works well when the economic picture is clear: the value of the feature is obvious to all participants and there’s no particular economic advantage for any one of them to hold the others back.

The process doesn’t work well under a few circumstances:

  1. The marginal profit of a feature is unclear, i.e. browser vendors either aren’t sure if the web community will value the feature or aren’t certain about how hard it will be to implement.

  2. The feature clearly has no near term marginal profit but the W3C (or somebody else) is driving the feature because they firmly believe in some long term picture.

  3. Vendors are divided because the marginal profit of the feature varies a lot from vendor to vendor. For example, Apple often doesn’t have the same economic dynamic as Google, and Mozilla’s values tend to differ considerably from the rest.

The second process at the W3C came about because of tensions between vendors and the W3C. The consortium pushed for XHTML2 while vendors wanted a more pragmatic approach. (This simplifies a complex situation a lot, I know.) This led to the WHATWG and its set of specs, followed by a reconciliation and harmonisation process of sorts. The end result is a complicated mess that is extremely confusing to outsiders. Obviously, HTML5 walked its own walk when it came to W3C specification processes.

Reusing the WHATWG/W3C hybrid process for other specs is clearly inadvisable, but it is evidence that there has been considerable dissatisfaction in the past with how the W3C has done things.

The third process is, in my view, a consequence of two realisations:

  1. The W3C is institutionally a really bad judge of what’s a good idea.

  2. The marginal profit for a lot of potential web features is very hard to gauge at the outset. I.e. the ‘is it worth implementing?’ question is hard to answer.

This has led to the rise of a community-oriented standards process in which a community incubates and debates a feature before it graduates to a standards track at the W3C. IndieWebCamp has done this with Webmention and Micropub. And it has been done informally with several specs that were developed in ad hoc outside communities before being brought to the W3C. Chromium/Blink has formalised this as its preferred specification path by adopting an incubation-first standards policy, directing participants to take proposed web specs through the Web Platform Incubator Community Group. It should be noted that since this is a community group, participation is open to all, unlike in the W3C’s standard working group format.
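
As an aside, part of the appeal of community-incubated specs is how small they tend to be. Here is a hedged sketch of a Webmention sender using the Python requests library; it checks only the HTTP Link header, skipping the spec’s HTML-based endpoint discovery, and the URLs are placeholders.

```python
# A deliberately minimal Webmention sender: one discovery step, one
# form-encoded POST. Real senders also look for the endpoint in the
# target's HTML and handle errors; this sketch omits both.
import requests

def discover_endpoint(target):
    """Find the Webmention endpoint advertised in the target's Link header."""
    response = requests.head(target, allow_redirects=True)
    links = requests.utils.parse_header_links(response.headers.get("Link", ""))
    for link in links:
        if "webmention" in link.get("rel", "").split():
            return requests.compat.urljoin(target, link["url"])
    return None

def send_webmention(source, target):
    """Tell the target page that the source page links to it."""
    endpoint = discover_endpoint(target)
    if endpoint is not None:
        return requests.post(endpoint, data={"source": source, "target": target})

send_webmention(
    source="https://example.com/my-reply",        # placeholder URLs
    target="https://example.org/original-post",
)
```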

The community incubation process is a quick, iterative way for developers and vendors to discover exactly how hard a feature is to implement, which design for the feature is the most viable, and how much value it would add to the platform: it allows for dramatically simpler discovery of a feature’s marginal profit than the other processes do.

My suggestion #

As my prior series of posts and tweets should make abundantly obvious, I think that a merger of the IDPF and the W3C is a bad idea and I’m sceptical of the direction that the specs driven by the publishing industry at the W3C are taking.

The digital side of publishing suffers from a very hazy economic picture. Layoffs have been abundant in the past year. A lot of its digital talent, including people with standardisation expertise, is now unemployed. Sales have been going down. Amazon’s dominance makes the ROI on tech investment unclear. Publishing industry processes aren’t particularly compatible with web industry processes. We don’t know if any of these issues can be addressed by new specifications at all. We don’t know if the various publishing industries will adopt new specifications even if they get specified. We don’t know if any vendor will implement them. We don’t know if publishers will use them even if they do get implemented. The marginal profit of new specifications is utterly unknown. Not acknowledging these uncertainties and the risk they pose to tech specification is irrational.

The W3C’s standard process is not going to work under these circumstances. Adding the IDPF and its cluster-mess of a format into the mix will not improve the odds one bit.

The only viable process under these circumstances is the community incubation model. Take the features that the publishing industry needs, and that might also interest browser vendors, through the Web Platform Incubator Community Group. In parallel, the W3C should set up a Publication Incubator Community Group, structured in the same way, for specifications that are less interesting to browser vendors but are publishing-industry specific and relevant to non-browser vendors.

Going the community route is more democratic, more transparent (which has been lacking in publishing industry standards work), better suited to publishing industry budget requirements, and—in my opinion—more likely to succeed.

Excluding the community by continuing with a closed process is not a good idea when the digital side of publishing is facing so many uncertainties in its near and distant future.


I think the community incubator approach is the only viable option for taking ebook and web book specifications forward. It has the advantages of openness, transparency, iteration, and economy: it makes it possible to involve a wider range of people from the publishing industry who have been excluded by the sheer expense of participation, which in turn makes it easier to assess the economic viability of the proposed standards. The open community approach lets staff work on standards in ways that bypass the internal structures of their publisher. It lets smaller publishers into the discussion without their having to risk their low-margin business with a cash outlay. It lets in authors and freelancers who, as it happens, are the ones who actually do most of the work in the publishing industry. The community approach is much more compatible with the actual structure and nature of the various publishing industries than either the IDPF process or the W3C’s standard specification process.


If people don’t find the community approach palatable…

Then there’s another alternative which should be seriously considered:

Letting publishing and the web go their separate ways.

It isn’t guaranteed that industries as divergent as publishing (which is varied enough on its own) and the web will find usable common solutions.

Maybe it was a mistake for the publishing industry to base their ebook formats on web tech. Maybe they should have just built on OOXML instead.

Maybe it’s best that web books be just a web thing, done by and for the web community as it attempts to solve its problems, and let publishing go off and solve its problems separately.

If people can’t make the community approach work, or refuse to go down the open path, then maybe the economics of collaboration between publishing and the web just don’t work.


ETA: I gathered links to the entire debate and wrote down my final thoughts on the subject over on Medium: Link compilation, final thoughts, and compromises