Leftover Thoughts From 2017
I’ve never been much of a fan of the ‘What I learned in this year doing X, Y, and Z’ tradition of many bloggers.
Not because the posts are uninteresting but because tallying the latest additions to your finite set of skill atoms should be done more regularly than just once a year.
Although, I guess if you only blog once a year, that topic is as good as any other.
I have a stronger tendency to use the opportunity to look back over the various ‘idées fixes’ of the year and, instead of picking out individual lessons I’ve learned, go over the topics I have been thinking about regularly over the past few months. Some years only a single topic feels important enough to write about at the start of the new year. Other years it’s a smorgasbord of cluttered ideas and half-baked concepts.
Given how much has been going on, it’s no surprise that this year leans more towards the smorgasbord side of the spectrum.
The following are the non-political, work-related topics that have been plaguing me this year—following me around and hanging off the edges of my mind like a kitten on new jeans. They are by their very nature half-baked as they are all ideas I’ll be digesting for a long while (some have been following me for almost 20 years) so they don’t warrant blog posts of their own.
Hypertext is still the fundamental model of the web #
Related to most of the below topics is the fact that the web has, at its heart, a pretty simple and flexible architecture.
24 years in and the web’s basic conceptual model still seems remarkably robust.
It is:
- A group of resources written in HTML
- That refer to and identify each other using URLs
- That are fetched and modified using HTTP
- That are supported by other resources, also identified using URLs and fetched over HTTP
- Oh, and HTML pages can export specific fragments of themselves that can be referred to using URLs and fetched over HTTP (sketched below).
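To make that last point concrete, here’s a tiny sketch (the file names and URLs are made up): one page refers to a specific fragment of another, and the other page ‘exports’ that fragment simply by giving it an id.

```html
<!-- page-one.html refers to a specific fragment of another resource: -->
<p>See the <a href="https://example.com/page-two.html#figure-3">third figure</a>.</p>

<!-- page-two.html "exports" that fragment just by giving it an id: -->
<figure id="figure-3">
  <img src="kitten.jpg" alt="A kitten clinging to a pair of new jeans">
</figure>
```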
Even with all of the changes over the past few years, this is still the basic model of the web. The new JS module system, for instance, eschews the function-call structure of CommonJS modules: each ‘ES6’ module script lists the code fragments it exports and the code fragments it imports from a set of URLs, all fetched over HTTP.
In short, it’s a hypertext system for code.
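A minimal sketch of what that looks like (the module names and URL are made up for illustration):

```js
// price-rules.js — a module exports named fragments of code:
export function discountFor(customer) {
  return customer.isReturning ? 0.1 : 0;
}
```

```js
// checkout.js — another module imports that fragment by URL,
// and the browser fetches it over HTTP like any other resource:
import { discountFor } from "https://example.com/modules/price-rules.js";

console.log(discountFor({ isReturning: true })); // 0.1
```

The importing module doesn’t copy or bundle anything; it just points at a URL and lets HTTP do the rest.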
Given that, it shouldn’t come as a surprise that people are contemplating extending the JS module hypertext system to HTML, making the two hypertexts more interoperable. HTML links to modules which in turn can link to and use not just HTML files but specific fragments of those files.
As a fan of hypertext in general, I’m in favour of this idea.
JS modules aren’t the only place where hypertextuality serves as a basic operating principle:
- The Link is a first class object in Activity Streams 2.0 and the Activity* family of social media specifications is very hypertextual in how it works.
- In fact, most of the social web protocols coming out of the W3C these days use hypertext as their foundation.
- Using links with a specific `rel` type to add behaviours declaratively to a web page without resorting to JS is a basic idiom of modern web specifications (see the sketch below).
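As a hedged sketch of that idiom (the URLs are illustrative): a page can advertise a Webmention endpoint and point at an Activity Streams 2.0 representation of itself using nothing but link relations.

```html
<!-- Declarative behaviour via link relations; no JS involved. -->
<!-- Advertise a Webmention endpoint for this page: -->
<link rel="webmention" href="https://example.com/webmention">

<!-- Point at this page's Activity Streams 2.0 representation: -->
<link rel="alternate" type="application/activity+json"
      href="https://example.com/notes/1.json">
```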
Now, hypertext isn’t used in content as much as I’d like but it’s also easy to forget just how much it is used because it all feels so natural today.
Remote work is a completely different beast #
Traditional ways of collaborating in the workplace aren’t translating well to remote work. Most of the tactics you’d apply when everybody is in the same room just don’t work when the team is distributed.
I can’t help but think that trying to leverage more and more technology to make remote work function better with locally-oriented methodologies is a mistake.
One thought that has been sticking in my mind over the past few months is that remote work bears a much stronger resemblance to academic and professional discourse than to traditional teamwork. If that’s the case, then more literate methods of communication and collaboration would work better than recreating in-office staples using software:
- ‘Let’s recreate a meeting room using video’ (Hangouts)
- ‘Let’s recreate the printout handed around in the office’ (Google Docs)
- ‘Let’s recreate the ephemeral water-cooler vibe’ (Slack)
I really do think that Amazon is onto something with their heavy reliance on the six-page memo as a recurring form of communication, debate, and argument. Structuring an argument using non-conversational text is a tactic that works just as well remotely as it does locally.
The downside is that a lot of people really can’t handle text well, either as readers or as writers, and many people (especially managers) don’t seem to have the discipline to even habitually read a one-pager before a meeting.
Current methods just don’t work that well for the workplace reality we’re facing: most of our workplaces are mixed local and remote now and our current methods regularly exclude one group or another of our colleagues.
Collaboration as discourse, as the exchange of written arguments, is the most promising approach to this problem I can think of at the moment.
(I also have to say that all of the current trends and fashions in collaboration, teamwork, and organisation completely lose sight of the value of human relationships, which has to be the starting point before you apply anything else.)
Just business logic #
"In the future, all the code you ever write will be business logic." Werner Vogels wrote that six years ago and reiterated it [on stage](https://dzone.com/articles/the-20-most-interesting-things-werner-vogels-said) a few weeks ago.
It’s increasingly obvious that coding as a profession is diverging into a number of specialised roles with little in common. You have the infrastructure programmers who work for a big tech company and are responsible for keeping the ‘cloud’ infrastructure running. And, more and more, you have business logic coders who are responsible for very little of the actual application code.
Web server? All of that’s done by the API Gateway and CDN front end.
Authentication? Cloud-provided, just plug it into your Lambda/Cloud function.
Image processing? Just use a service.
Machine learning? Plug the APIs into your cloud function.
Service coordination? Just use Step Functions and other cloud-provided coordination services.
We aren’t there yet but it’s getting very close. Already, you can easily think of individual businesses where the vast majority of the code you write is just what you’d call business logic.
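As a sketch of what that end state looks like, assuming something like an AWS Lambda function sitting behind API Gateway (the order and discount rules are invented for illustration), the only code that is genuinely ours is the business rule in the middle. Everything around it (HTTP termination, authentication, scaling, logging) is the platform’s problem.

```js
// A cloud function that is nothing but business logic.
exports.handler = async (event) => {
  // API Gateway hands us the request body as a string.
  const order = JSON.parse(event.body);

  // The actual business rules: total the order, apply a loyalty discount.
  const total = order.items.reduce((sum, item) => sum + item.price * item.qty, 0);
  const discount = order.isReturningCustomer ? 0.1 : 0;

  return {
    statusCode: 200,
    body: JSON.stringify({ total: total * (1 - discount) }),
  };
};
```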
The key feature that enables more structured and strategic use of all of these cloud services isn’t technical but financial: most of these services have pricing models that are based on invocations or actual use. This makes many varieties of organisational and strategic issues financially transparent and therefore organisationally visible.
What’s less clear to me is how this will play out in front end web development. My main concern is that single-page-apps, which currently dominate the web development community, suck from a business perspective. They have a much greater number of things that can go wrong, which absolutely results in a much higher frequency of bugs, many of which are harder to solve than their RESTful HTML equivalents. They open up the possibility of a host of new classes of XSS bugs that won’t appear in your logs. Calling their state management requirements non-trivial is an understatement. And, finally, even though they are often used to integrate micro-services, they are by their very nature extremely monolithic, which has substantial implications for development and deployment.
SPAs have high ongoing costs and are harder to fix when they break. But, more importantly, they make it harder to separate out concerns into individual layers that can then be outsourced into services. Instead they are the pinnacle of Conway’s Law: an entire development methodology that is exclusively focused on building software architectures that replicate the organisational structure of the company making them.
Concerns are only separated on the component level where each team or department provides an opaque blob of code with no hope of discerning which code affects which individual concrete cross-departmental business problem.
This blocks off most (business) strategic approaches to front end development. The biggest impact is in reduced visibility into costs. Most companies vastly underestimate the costs of their Single-Page-App front ends. They don’t see the increased bug costs, the increased cost of accessibility compliance, the user churn resulting from accessibility non-compliance, the reduced awareness of XSS and similar security bugs, or the increased developer load due to state management.
In many ways this makes Single-Page-Apps the antithesis of cloud functions. One both increases and obfuscates costs. The other decreases costs and increases visibility.
So, for purely financial reasons, I’m not convinced that this particular web development architecture can survive beyond the current tech startup micro-bubble or outside of the U.S. west coast tech mega-corporation enclave.
It’s not the frameworks that are a problem but the architectural model of the applications we use them to make. Maybe frameworks will come up with some revolutionary solution that solves or makes up for the downsides.
Maybe I’m just not seeing the awesome.
(RESTful server-rendered HTML that uses JS for behaviour and CSS for design is a comparatively inexpensive web front end architecture, by the way. Especially when you look at long term costs.)
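To illustrate what I mean, here’s a minimal sketch of that architecture, assuming Node with Express (the route, markup, and data access are all hypothetical): the server returns complete HTML, CSS carries the design, and a small optional script adds behaviour afterwards.

```js
const express = require("express");
const app = express();

// Hypothetical data access; in a real app this would hit a database or API.
async function loadOrder(id) {
  return { id, total: "£42.00" };
}

// Server-rendered, RESTful HTML: each URL returns a complete page.
app.get("/orders/:id", async (req, res) => {
  const order = await loadOrder(req.params.id);
  res.send(`<!doctype html>
<html>
  <head>
    <title>Order ${order.id}</title>
    <link rel="stylesheet" href="/styles.css">
  </head>
  <body>
    <h1>Order ${order.id}</h1>
    <p>Total: ${order.total}</p>
    <!-- JS only enhances behaviour; the page works without it. -->
    <script src="/enhance.js" defer></script>
  </body>
</html>`);
});

app.listen(3000);
```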
Percentages, scale, and forgotten spaces #
Tech and media obsess over percentages and pay no attention to the underlying absolute numbers. They regularly bleat headlines like ‘new thing totally dominates old thing!’ and most of us don’t notice that the old thing has actually grown in use, value, and reliability. Too often we assume that every new piece of tech is replacing the old when it is just as often additive.
Desktop versus mobile is just one example. There are many more.
Listen to pundits, developers and journalists and you’d think that nobody uses forums anymore or that not one email has been sent since 2010.
But email, forums, and even blogs are still roughly as big as they were ten years ago, modulo an unsustainable VC-subsidised blip or two.
The tech trends that do tend to die off are those that aren’t sustained by communities. The generic trendsetter blog phenomenon of fifteen years ago wasn’t particularly sustainable because each blog ‘island’ generally only had one inhabitant: the blogger. The ones that survived tend to have strong communities.
Any part of the web or internet that takes on the characteristics of a small township—a small community united around shared needs—is going to last, no matter what the overall percentages of the market look like.
The next evolution of the web #
Strongly related to the above, I’m starting to think of the past ten years, beginning with Facebook’s rise to dominance, as a deviation from the natural evolution of the web.
One of the strongest features of Web 2.0 was interoperability and syndication. Your interactions might be locked into a silo but the content you created wasn’t: it was syndicated via RSS to any other service that spoke the format. FriendFeed was probably the apotheosis of Web 2.0, but it was unlucky enough to launch in 2007, just as Facebook first began its transformation into the juggernaut it is today.
One of the key issues with Web 2.0 was the vulnerability of the phenomenon to being co-opted by silos. Syndication and interoperability let silos consume data from all of the other services without any form of reciprocation. Twitter especially benefited but managed to squander whatever advantage it had through mismanagement and strategic blunders.
For the longest time I just thought that Web 2.0 had led us into an evolutionary dead end and that open services would have to survive by being small, nimble, and hard to notice—much like early mammals trying to survive the age of dinosaurs.
But Mastodon and the excitement surrounding ActivityPub have changed my mind. This is the true Web 3.0. It takes the focus on interoperability and syndication further by making the exchange bidirectional. It is the web growing to take on some of the characteristics of email and it is the natural evolution from the semi-cooperative services of Web 2.0.
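To sketch what that bidirectional exchange looks like (the servers, actors, and note here are all made up), one instance can deliver an Activity Streams 2.0 activity straight to another instance’s inbox over plain HTTP, and receive deliveries the same way.

```js
// An Activity Streams 2.0 activity: the Link/URL-centric vocabulary
// the social web specs are built on.
const activity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://social.example/users/alice",
  "to": ["https://other.example/users/bob"],
  "object": {
    "type": "Note",
    "content": "Hypertext all the way down.",
  },
};

// Server-to-server delivery: POST the activity to the recipient's inbox.
// (Real implementations also sign the request.)
await fetch("https://other.example/users/bob/inbox", {
  method: "POST",
  headers: { "Content-Type": "application/activity+json" },
  body: JSON.stringify(activity),
});
```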
And, going further, completely distributed architectures that remove the need for servers feel like the next step beyond that, say Web 4.0, in a few years’ time.
The key aspect of federated communities like Mastodon, GNU Social, postActiv, and their relatives is that they are fundamentally built around communities. There’s no hint of the radical individualism that drove most of the blogging phenomenon. Instead we have many clusters of loosely connected communities, each of which is sustainable in its own right. They grow very slowly by the standards of VC-funded unicorns but that growth is actually sustainable and will continue with minimal investment.
It starts with Twitter-like communities and other social media but I think there’s a good chance that the model will end up working well across broad swathes of the web.
Writing tools and writing space #
This is the one thing that I’ve been obsessing about over the past year that I hope to blog more about in the near future. So I hope you don’t mind if I just leave you with this quote from a book that blew my mind completely when I first read it 18 years ago.
2018 #
Not-even-half-baked thoughts that are lurking in the back of my mind at the moment and might hang on long enough to be running themes in 2018:
- Software security is having multiple simultaneous crises: excessive trust in code from random strangers (OSS and package managers), fundamentally insecure hardware, half-baked standards, the field’s inability to shed insecure development practices, and our inability to make secure practices easy for the end user. It’s only going to get worse.
- Insecurity, hubris, and—in some cases—outright hostility towards end users are becoming an ongoing competitive advantage for those who are peddling analogue alternatives. Even though the recent uptick in print book sales is in many ways a statistical blip caused by price increases and a shift away from traditional publishers on the ebook side, the form, along with a variety of other ‘retro’ tools, has a window of opportunity at the moment. They benefit from the fact that modern software companies don’t actually have an incentive to make a better product. They have an incentive to create more invasive, trackable products and until that changes, analogue products will have an ongoing competitive advantage.
- At the same time the publishing industries of various languages/countries seem to be having a crisis of their own. They are dominated by a very small number of companies (in some cases outright monopolies or duopolies) and seem to be completely disconnected from the buying public: catering to an ever-diminishing group of known customers who just happen to share the cultural and social background of your average publishing industry employee. This would be a greater concern for publishers if it weren’t for the fact that tech companies have set themselves up to have exactly the same problem down the line.
- Our education systems are deeply dysfunctional and most of open education focuses, by necessity, on replicating that dysfunction. There isn’t much else you can do, as the dysfunction stems from the industrialisation of a process that fundamentally does not lend itself to industrialisation. True learning does not scale, cannot be homogenised over an entire country, and is a deeply personal relationship between a teacher and a small number of students. Our society cannot fix education when it doesn’t actually value it as a public good or as a state of being that touches all citizens of a country. Technology cannot reverse this course while its focus remains on scale and depersonalisation.