The Digital Swamp
How our metaphors for the internet determine success and failure in the messy world of software design.
I consider myself a designer. That’s what I write on the arrivals card when I touch down in a foreign country. I mainly design websites and software but the term ‘web design’ generally suggests a firm grasp of programming and development. I’ve managed to avoid acquiring those skills. Too many weird brackets, too much math. So, in the world of LinkedIn, people like me have ended up with the vague title of ‘User-Experience Designer’ – which has always struck me as an awkward term for what is, essentially, a customer advocate who can draw. Even the word ‘user’ seems a bit euphemistic. The only other industry that describes its customers as ‘users’ is based in Sinaloa and, much like Silicon Valley, it has very little regard for the wider community.
When it comes to software design the most valuable insights are usually gained in the first few weeks – when you don’t know much about the industry you’re working in and everything they take for granted seems strange and confusing. That initial period offers the best insight into how the average person relates to the organisation and the applications it builds. Inevitably you have to learn the ins and outs of the industry or you’ll annoy your coworkers but it always helps to remember what it felt like to be a foreigner.
The benefits of this ignorance are difficult for most project managers to appreciate. Usually they expect designers to provide immediate answers to questions about how people behave and how they’ll react to some new product or idea. They’re generally somewhat put out when you explain that you don’t know, and can only speculate, until you test the idea with their target audience. That disappointment is understandable. In most circumstances companies hire senior staff based on their ‘domain knowledge’ and salaries are supposed to reflect the value of all that hard-won experience – especially the ability to ‘speak the language’ of the industry they’ve found themselves in. By contrast a good designer tends to set aside conventions, filter out jargon and conduct their own research. Rather than expertise, interface design relies on methods for measuring ‘usability’ that should apply just as well to the design of a 747 cockpit as to a TV remote or a website that sells coyote piss.
The scope and demand for ‘user-experience design’ mean that those doing the work get exposed to a whole range of businesses and organisations and can easily end up as a jack-of-all-trades but master of none. In that respect I’m no different. Just when I’ve felt close to some level of authority I get whisked away to work on something entirely different. But despite years of bouncing between different industries I have managed to accrue some expertise when it comes to digital applications. Specifically I’ve developed a fairly solid understanding of how and why they fail.
When it comes to failure my credentials are rock solid. Over the years I’ve worked on all sorts of digital failures – websites, games, mobile apps, calculators, booking platforms and software. You might have encountered some of these defective apps and, if so, I’m sorry. I did all I could. Thankfully most have since been recalled. Websites and software can take months to build but success can usually be determined in a matter of days. It’s pretty obvious if the registration page isn’t registering or the sales funnel isn’t funnelling. It gets a little trickier if the application is something people are compelled to use – like a banking app or anything called ‘MyAccount’. In those cases failure only becomes apparent when complaints start flowing in and the company’s star rating melts away like a urinal cake under a deluge of pissy reviews.
In those circumstances most sensible companies decide to quietly remove the offending app – cutting their losses and adding another 404 page to the internet’s graveyard. That’s actually the best case scenario. In an effort to save face a lot of failed apps are kept on life support – abandoned by their developers but still available to the public, annoying new people every day like some sort of smouldering industrial accident.
As a designer it’s tempting to suggest that apps fail due to bad interface design. But, while some interfaces are undoubtedly very poorly thought-out (witness the rage-inducing UI for the music software Sibelius), they rarely make or break an application. To take some high-profile examples: the messaging app Snapchat ignores many of the basic rules of interface design and still manages to retain its market share. Likewise Facebook continues to dominate the social media world despite being a nightmare to use for anything other than scrolling through the timeline. Apple is often held up as a leader in interface design but its iOS Podcasts app still baffles me after years of use. What can we conclude from this? Only that failure doesn’t always hinge on ‘usability’. An interface might be confusing or awkward but, with sufficient motivation, people tend to work out what to do.
So why do so many websites and applications fail to provide any value – either to the organisation that commissioned them or to the people they’re supposedly aimed at? Conventional wisdom suggests that failures are the result of an essentially Bad Idea™. Instead of providing a solution to an existing problem or responding to some unmet need these applications attempt to impose themselves on an indifferent public. Bad ideas, we’re told, come from people operating under the Field of Dreams assumption that ‘if you build it, they will come’. This assumption is then compounded by a pattern of behaviour known as ‘escalation of commitment’ whereby people cling to their initial idea even after it becomes untenable. In economics this is known as the ‘sunk cost’ fallacy.
In my experience, however, truly bad ideas are quite rare. Most proposed applications – if followed through – would solve a real problem or help people with a real task. Nevertheless they end up failing due to some combination of the following factors.
- A failure to appreciate the underlying systems
- A failure to appreciate what’s at stake
- A failure to appreciate what humans can do
One of the reasons we have difficulty explaining the success and failure of software projects is that we lack a good metaphor for digital technology. In particular we’re missing a shared mental model for the internet and all its backstage systems. In the 90s we started out discussing the internet in terms of ‘webs’ and ‘networks’ – which helped us visualise how sites are linked to one another. For some reason the anglosphere seized on the term ‘surfing’ to describe getting around on the web. The French, on the other hand, opted for ‘naviguer’, meaning ‘sail’ – which seems more apt but still implies a relatively sedate pace. Meanwhile the exchange of data between sites was described in terms of an ‘information superhighway’ or, to much hilarity, as a ‘series of tubes’.
Long before most people knew anything about the internet Steven Lisberger’s Tron (1982) provided one of the first depictions of a parallel digital world. Tron’s ‘grid’ was populated by people in neon crash helmets brandishing glow-in-the-dark frisbees and riding light-trail motorcycles. When William Gibson penned his novel Neuromancer in 1984 he provided a much darker vision of what he called ‘cyberspace’ but, by the end of the 90s, science fiction had come full circle with the Wachowskis’ depiction of The Matrix showing us a vision of the internet that looked remarkably like Sydney’s CBD. Since then the internet has lost a lot of its mystique and tech writers have largely given up on metaphors altogether. Nowadays you might hear buzzwords like ‘cloud computing’ or the ‘internet of things’ but these terms only describe certain aspects of the digital world and they can’t help us pin down the factors that lead to a good ‘user experience’ or tell us why so many new applications fail. For that we need a mental model of digital technology.
Perhaps the most useful metaphor for the internet and all its attendant technology is the concept of an ecosystem. While it sounds a little buzzwordy it does help clarify a few things. Like its natural equivalent the digital ecosystem is made up of many thousands of different organisms that interact with one another in complex ways. This environment is also teeming with bugs. Instead of nutrients the digital ecosystem feeds on data (anecdotally this appears to be mostly YouTube comments, Instagram photos and cat memes) which then gets distributed and repurposed in various ways. According to this metaphor armies of bots and email spam are like bouts of disease or algae blooms that have to be filtered out or absorbed by other organisms within the system.
So far so good. We’ve moved away from neon grids and glowing frisbees but we’re still left with a fairly vague analogy because referring to the mass of digital technologies as an ecosystem is really just exchanging one abstraction for another. But perhaps it’s possible to be more specific without getting lost in the metaphor. After all, natural ecosystems come in all sorts of different forms – rainforest, savannah, prairie, reef, mangrove – each with its own unique features.
At the risk of offending my developer friends I would suggest that the digital ecosystem is a lot like a swamp. If that sounds insulting it shouldn’t. Swamps and wetlands are very important environments – they exist as natural filters between inland waterways and oceans. Although it’s often hard to tell, the water in swamps is actually moving – flowing down a slight gradient and draining into channels and waterways downstream. The deeper you go, the less things tend to move, but nothing in the swamp is entirely permanent. Nutrients find their way into the swamp in the form of decaying organisms and vegetation, and plants take root in the mud and grow up to take advantage of the sunlight. For our purposes the mud represents all those underlying systems that make the visible part of the internet possible – things like coding languages, file formats, frameworks, script libraries, databases, operating systems, plugins, APIs, codecs, encryption keys and all sorts of abandoned and unfinished open-source experiments.
To understand why some applications manage to take root and grow up out of the swamp while others never see the light of day you first have to imagine each application as a plant loosely rooted in this shared mass of technology. The people who work on these applications already have some sense of this arrangement. When an app or a website fails to load developers say that it’s ‘down’ but, when it only works some of the time, they generally refer to it as being ‘unstable’. That’s a useful expression – one that conveys more about the basis for the application than the application itself.
Randall Munroe, the artist behind the webcomic xkcd, highlighted that same essential instability in his single-panel comic ‘Dependency’. Presumably the big blocks are enterprise systems built by companies like IBM but those big pieces often rest on all sorts of obscure open-source and proprietary programs that remain lost in the weeds. Despite this fragile arrangement it’s clearly possible to build impressive structures in the digital swamp. Even if we set aside the mixed blessings of social media and YouTube we can all appreciate the usefulness of tools like Google Maps or resources like Wikipedia.
In the deeper levels of the swamp you can find the more solid technologies that everything else rests on. This layer includes the physical infrastructure of the net – servers, data centres, computers, electrical lines, fibre optic cables, copper wiring (thanks a bunch, Malcolm), relays and transmitters. Like the software it supports this hardware also changes and shifts over time but these changes are much more gradual than those that occur in the turbulent upper layers of frameworks and operating systems. Fibre optic cables appear to represent a permanent upper limit for data transfer speeds (nothing goes faster than light) but that doesn’t mean our bedrock technology is static and it doesn’t mean that it’s literally underground. Many countries are currently transitioning their cellular networks from 4G to 5G transmitters and Elon Musk’s Starlink has already begun putting up another ‘web’ of micro-satellites designed to form an all-encompassing orbital network to increase internet speeds by some fractional percentage.
Other organisations are less concerned with speed but place an enormous premium on stability. Up until 2019 the US military’s Strategic Automated Command and Control System still used hardware from the 70s and software from the 90s to run its early warning and nuclear launch systems because they had proved their reliability and the risk of an upgrade was deemed too great. Rather than employing the 3.5-inch floppy disks (which now exist only in spirit as the ‘save’ icon) the system used eight-inch floppy disks to move updates between computers at intercontinental ballistic missile sites and airfields. If Russian hackers had wanted to paralyse America’s nuclear arsenal they would have been forced to comb through antique shops for the necessary equipment. They’d also have had to confine their version of Stuxnet to a measly 80 kB.
Below the physical infrastructure of the internet is another layer of systems that we normally don’t even think of as technologies at all. These are more like cultural conventions – languages, religions, currency markets, legal codes and international agreements. Like the upper layers these deeper substrates are also shifting but the changes that occur at this level are usually too gradual to have an immediate effect on the systems above.
The upshot of this arrangement is that success is often more dependent on timing than any particular merits of the idea itself. The cycling/fitness app Strava provides a nice test case for biding your time. In 1996 two Harvard alumni – Mark Gainey and Michael Horvath – started a company which provided email and marketing services. The two friends had met in the competitive rowing scene at university and had discussed the idea of creating a website that would allow athletes scattered across the country to connect with one another and offer encouragement by sharing results and personal achievements. At the time web technology was still in its infancy and there was no easy way for people to record and upload athletic activities. In an interview with CyclingTips Horvath explained:
“There was no javascript let alone GPS and the concept of a dynamically rendered webpage did not exist. The folks we spoke to about this couldn’t wrap their heads around building a site like this. What we were talking about was building a combination of a social network and a quantified-self site before there were even terms like that.”
The ability to pinpoint a receiver’s position via satellite had been established in the 1960s but the full capabilities of GPS were long reserved for the US military. Concerned that the technology might be used by foreign adversaries to target weapons, the US government handicapped commercially available GPS receivers with a system called ‘Selective Availability’ which intentionally introduced errors to reduce accuracy. Eventually commercial interests won out over security concerns and President Bill Clinton signed a directive in 2000 to provide civilians with the same GPS accuracy afforded to the military.
By 2009 the whole raft of technologies that Strava required was firmly established in the digital swamp. Google was well on its way to mapping the world and, for a fee, its maps API could be tapped into by apps like Strava for route setting and navigation. Strava was also able to connect with third-party devices thanks to the widespread adoption of the Bluetooth standard which had been established in the 90s. Bluetooth allowed users to synchronise data from cadence sensors and heart rate monitors – the miniaturised descendants of the ‘portable’ EKG devices pioneered in the 1980s.
Nowadays Strava lays claim to more than 70 million users and is by far the most popular fitness app for cyclists. Even with lockdowns and restrictions 2020 saw the upload of more than a billion activities. It’s an incredible achievement in its own right but it’s easy to forget that Strava’s success depended on the accretion of thousands of technological discoveries and innovations – starting with the spoked wheel and ending with microchips and radio transmitters.
Returning to the idea of the swamp – why does it matter that all these technological sediments are constantly shifting? How does thinking about it in those terms help?
For businesses trying to offer their services on the internet the digital swamp demands back-end systems that are firmly rooted and strong enough to support the applications they want to build. This seems obvious but, in many large corporates, designers spend most of their time drafting plans for applications that can’t be built and testing features that can’t be implemented. The resulting wastage in time, money and morale is immense.
While technical delusions are part of the problem project teams also have to contend with the deep aversion to risk that characterises most large organisations. In the opening to his grand essay ‘What is Code?’ Paul Ford captures the dread felt by corporate leaders faced with the prospect of overhauling legacy systems. Writing from the perspective of an executive responsible for a typical ‘digital transformation’ project he offers a counterpoint to the impatience of software developers and UX designers.
“What no one in engineering can understand is that what they perceive as static, slow-moving, exhausting, the enemy of progress—the corporate world that surrounds them, the world in which they work—is not static. Slow-moving, yes, but so are battleships when they leave port. What the coders aren’t seeing, you have come to believe, is that the staid enterprise world that they fear isn’t the consequence of dead-eyed apathy but rather détente… They can’t see how hard-fought that stability is. Where they see obstacles and intransigence, you see a huge, complex, dynamic system through which flows a river of money and where people are deeply afraid to move anything that would dam that river.”
Right now most applications are being built by a generation that grew up in the digital swamp while authority and responsibility remain vested in a generation who have, at best, an uneasy relationship with the internet. But without an understanding of the swamp corporate leadership tends to expect miracles from both designers and developers. They think if they find the right pool of talent those people will be able to design their way out of the digital quagmire that the business has found itself in. Clearly it doesn’t work that way, but that doesn’t stop designers from presenting beautiful, but totally implausible, concepts and prototypes. Likewise developers are often encouraged to ignore any technical constraints that sit outside their immediate responsibility.
The combined effect is that companies end up repeating the same mistakes – never learning that a good interface is useless if the technology doesn’t support what’s been proposed. During development there are always telltale signs of the impending failure. A development team preoccupied with ‘error states’ and ‘exception scenarios’ usually indicates that the front-end and back-end systems aren’t properly connected and the people involved have abandoned any hope of success. Companies that ignore those warning signs only end up adding another expensive 404 page to the digital swamp.
All sorts of systems have been devised to avoid this outcome but project teams are often incentivised to ignore larger problems in favour of addressing smaller ones. The ‘agile’ software development method uses the concept of ‘blockers’ – issues that prevent individual developers from progressing with their tasks. But while blockers are effective at highlighting day-to-day issues (e.g. something doesn’t work in a particular browser) that same warning system tends to come unstuck at the project-management level, where the pressure to meet deadlines and avoid uncomfortable conversations leaves managers reluctant to raise any issues. Theoretically most large software projects involving multiple teams have some sort of stoplight system to draw attention to potential issues (orange) or show-stopping problems (red). Unfortunately these stoplight systems are often treated as a box-ticking exercise.
It turns out that alert systems are only as useful as the workplace culture that governs their use. To be effective not only do managers need to encourage people to raise the alarm they also need to listen when they do. According to manufacturing folklore this willingness to stop and immediately address issues as they arise is one of the great strengths of Japanese industrial firms like Toyota. On Toyota’s assembly lines all employees are encouraged to pull the ‘andon’ cord nearest their station whenever they come across a defect or a problem. Andon is a Japanese loanword that originally referred to traditional paper lanterns but Toyota’s andon is an alarm designed to temporarily halt the entire production line and alert team leaders. When the andon is pulled supervisors are instructed to immediately thank the worker in question before even clarifying the issue and are strictly forbidden from penalising staff for false alarms. It’s that attitude, rather than the alarm itself, that makes the system effective.
But when companies try to paper over their technical problems best practices are quickly discarded. Another concept popular within software development is ‘graceful degradation’. It refers to the practice of offering a cut-down version of an application when non-essential data fails to load. For example if a weather app isn’t able to load the rain radar it should, at least, display the current temperature or the five-day forecast. Failsafes like these mean the whole program doesn’t just crash when one input goes AWOL.
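To make that concrete, here’s a minimal sketch of what graceful degradation might look like in TypeScript. The weather service endpoints, response shapes and function names are all hypothetical – invented for illustration rather than taken from any real API.

```typescript
// A minimal sketch of graceful degradation for a hypothetical weather app.
// Endpoints, response shapes and helper names are invented for illustration.

interface Forecast {
  temperature: number; // current temperature in °C
  fiveDay: string[];   // one summary line per day
}

// Essential data: without a forecast there's nothing worth showing,
// so a failure here is allowed to propagate.
async function fetchForecast(): Promise<Forecast> {
  const res = await fetch('https://weather.example.com/forecast');
  if (!res.ok) throw new Error(`forecast failed: ${res.status}`);
  return res.json();
}

// Peripheral data: the rain radar is a nice-to-have, so any failure
// resolves to null rather than throwing.
async function fetchRadarImage(): Promise<Blob | null> {
  try {
    const res = await fetch('https://weather.example.com/radar.png');
    return res.ok ? await res.blob() : null;
  } catch {
    return null;
  }
}

async function renderWeather(): Promise<void> {
  const forecast = await fetchForecast(); // may throw: essential
  const radar = await fetchRadarImage();  // never throws: peripheral

  console.log(`Now: ${forecast.temperature}°C`);
  console.log(forecast.fiveDay.join(' | '));

  if (radar) {
    console.log(`Radar tile loaded (${radar.size} bytes)`);
  }
  // If the radar is missing, the panel is simply omitted:
  // no error message, no crash.
}

renderWeather().catch((err) => console.error(err));
```

The whole trick is in the asymmetry: essential requests are allowed to fail loudly while peripheral ones fail silently, so the interface shrinks instead of collapsing into an error screen.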
But instead of gracefully omitting peripheral features product managers sometimes call upon designers to come up with error states to compensate for a lack of basic functionality. This is madness. Case in point: if you build a banking app but you can’t provide an account balance there’s no error message that will satisfy your users. When I get these requests I’m always reminded of the episode of Black Books where Bernard and Manny inadvertently drink their friend’s £7,000 bottle of wine.
B: Could we burn down the house? No, that’s absurd. Think, Bernard, think! What about a gift?
M: Oh, that’s a much better idea.
B: But it’d have to be perfect.
M: …what about a really nice box of pencils?
B: No.
M: I mean, a REALLY nice box, you know?
B: No! I think, you know, if you’re gonna give the guy pencils for drinking his wine, you’re talking about, you know, magic pencils. You draw a cow, the cow comes to life! Those kind of pencils.
The metaphors we use are partially to blame for the misunderstandings that lead to failure. What often gets sold to corporate executives as a ‘technology stack’ – something solid and permanent – is actually more like a living organism that needs to be nurtured and fed and doesn’t actually belong to any single entity. Those that insist on imagining a machine will end up with one. But it’ll be some sort of Rube Goldberg contraption – slow, costly and prone to failure. In software development this sad state of affairs is sometimes referred to as ‘technical debt’ and, just like its monetary equivalent, it accumulates interest over time. This technical debt is part of the reason large corporates struggle to attract or retain talented developers despite the lure of permanent roles and generous salaries.
A large part of their recruitment challenge is overcoming the sheer despair provoked by grotesque legacy systems. This reluctance on the part of potential job applicants leads hiring managers to confuse the superficial perks offered by tech companies and start-ups (think office hammocks and open bars) with the actual appeal of technology-driven companies – a stable platform to work on and the prospect of tackling interesting mathematical problems rather than endlessly troubleshooting a patchwork of shitty code.
After years of watching projects get shelved halfway through development or having them fail on release I have only one piece of advice for corporate CTOs and executives determined to force through some sort of ‘digital transformation’ – fix your back-end. It may not be glamorous and I’m sure it seems expensive but until you find that stability you won’t be able to build anything that actually works. Much like Churchill I have nothing to offer but other people’s blood, toil, tears and sweat. My strategic advice, frustrating as it may be, is to simply do the dirty work required to establish yourself in the swamp and be prepared to do it all again when everything underneath inevitably shifts.
Notes:
Paul Ford – What is Code?
Explain XKCD – Dependency
Martin Keary – Music Software & Bad Interface Design: Avid’s Sibelius
Liam Stack/NYT – Update Complete: U.S. Nuclear Weapons No Longer Need Floppy Disks
CyclingTips.com – STRAVA From the Beginning
Title Image – ‘Swamp’ by Jakub Skop