During FWD50 2020, our speakers—nearly 240 of them!—brought up several vital topics we want to explore in 2021. As we launch our fifth annual content survey (you can see the 2020 results here on our blog) these are some of the things we’ll be diving into, both online throughout the year, and at the November event itself.
Several 2020 speakers brought up the need to question long-held assumptions. Technology has changed so much about government work that many things we think are true simply aren’t. What’s costly or hard in the physical world is often trivial or easy in the digital one, and vice-versa.
For example, personalization is expensive when you’re making something out of atoms. If you want custom furniture, it’s far more expensive than off-the-shelf flat-packed IKEA. The twentieth century saw the rise of mass production and the assembly line, precisely because economies of scale are an efficient way to get something to many people. There were only a few printed newspapers per city because that’s how publishing turned a profit.
In the digital world, each of us gets a personal news feed. Facebook alone claims 1.85B daily active users, and several years ago the average user checked their feed 13.8 times a day. That works out to 25.53 billion personalized news feeds a day, each tailored to a specific user. Setting aside, for a moment, the epistemic crisis that comes from each of us having our own set of news: clearly, personalization is easy in digital.
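For the skeptical reader, the multiplication checks out. A quick sketch, using only the figures quoted in the text:

```python
# Back-of-the-envelope check of the feed figures quoted above.
daily_active_users = 1.85e9   # Facebook's claimed daily active users
checks_per_day = 13.8         # average feed checks per user per day

feeds_per_day = daily_active_users * checks_per_day
print(f"{feeds_per_day / 1e9:.2f} billion personalized feeds per day")
```

Running this prints 25.53 billion, matching the number above.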
How do we know what’s real? In the physical world, we can be fairly certain that the person in front of us is who they claim to be, if they have the proper identification. Similarly, we know that a piece of art is the original, because we can detect imperfections in the copy. But that’s not the case in a digital world, where copies are identical and algorithms can impersonate us.
From chatbots to deepfakes, veracity is a casualty of technology. This, too, is a challenge for progressive governments who want their citizens to make informed decisions about society.
Now think about how much folklore exists around these two assumptions. Government services built on the premise that truth is easy to verify and personalization is hard are outdated, and we need to look at those problems with fresh eyes. How do we retain decades of wisdom while revisiting our underlying assumptions?
Government as Big Tech
One of the themes that kept coming up at the 2020 event was the notion that government is big tech. Governments are the largest buyers of information technology, and have been since the time of punched cards and mainframes.
The private sector is loyal to its shareholders. It focuses on features that will give it a competitive advantage, and on the most lucrative customers—often leaving marginal users stranded. It wants to keep its methods secret, and lock in its customers to maximize their lifetime value and discourage them from switching vendors. So there’s plenty about for-profit tech firms that has no place in the public sector.
But there’s also lots to love. Service-centric delivery that reduces user friction. Easy onboarding and compelling user interfaces, with nudges and incentives to encourage use. Lean product management, agile software development, and continuous deployment based on DevOps approaches allow tech firms to ship updates to billions of users continuously. Tech firms embrace experimentation and testing to find what works. The career paths for technologists, and the future-of-work innovations they put into place, could find a welcome home in governments of the future.
Retooling and decommissioning
We talk a lot about adding new services and launching new products. We seldom focus on decommissioning. But every launch needs a landing. When someone wants to work on a new initiative, they must justify it based on cost and effort—and yet, in a world that’s constantly changing, we don’t ask the owners of legacy products to justify those products’ continued existence.
To avoid becoming a hoarder of brittle, outdated tech, we need to spend as much time turning things off as we do turning them up. Understanding dependencies, and considering the cost of inertia, is vital for digital transformation.
Listening at scale
The Great Depression of the 1930s had an outsized impact on the lives of Americans, in part because government was making decisions without accurate information. In the years that followed, the US embarked on a massive project of civic data collection, pulling in economic and social data with which to steer the economy.
Today, we can collect information instantly from sensors, social media, and myriad sources both public and private. Yet most Western democracies ask their citizens what they think once every four years or so. To react and adapt to changes at a digital pace, governments need to learn to listen at scale to their stakeholders.
The tools to do this exist, but aren’t widespread. There’s a tension between representation and direct democracy; there’s disagreement over free speech; and survey responses distort reality, because polls naturally select for certain voices. With the rise of natural language processing and advanced algorithms for sentiment analysis, new approaches to listening at scale will change how governments learn and communicate.
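To make the idea concrete, here is a deliberately naive sketch of automated sentiment analysis applied to citizen feedback. The word lists and messages are invented for illustration; real listening-at-scale systems use trained language models rather than keyword lookups.

```python
# Naive keyword-based sentiment scoring of citizen feedback.
# Word lists and feedback messages are hypothetical examples.
POSITIVE = {"helpful", "fast", "easy", "clear"}
NEGATIVE = {"slow", "confusing", "broken", "unfair"}

def sentiment(text: str) -> int:
    """Score = positive word count minus negative word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = [
    "The new portal is fast and easy to use",
    "Renewal process is slow and confusing",
]
for msg in feedback:
    print(sentiment(msg), msg)
```

Even this toy version hints at the policy problem: whoever chooses the word lists (or trains the model) decides which voices register as signal.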
Rethinking public infrastructure
Freedom of speech assumed vocal cords and a soapbox. But digital speech demands a platform, and whatever your political stripe, the privatization of the fundamental infrastructure citizens need to exercise their basic rights should concern you.
We’re seeing private platforms make and enforce policy around issues like free speech, independent of governments and unaccountable to voters. Where do we draw the line between public regulation and infrastructure creation? Why do we let governments build highways, but not the foundations of digital free speech?
When the Curiosity rover touched down on the red planet, it took 13 minutes and 48 seconds for a message to reach Earth from Mars. The delay varies as the two planets orbit the sun, from roughly 3 minutes at closest approach to an interminable 22 at the farthest. This means that if you’re landing on Mars, you need to get things right ahead of time. The same is true when you’re building a big, irreversible project. Once you’ve launched a battleship, you can’t easily upgrade its physical design.
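The delay figure is just distance divided by the speed of light. A quick sketch, where the Earth–Mars distance at landing is an assumed approximation:

```python
# One-way signal delay is distance divided by the speed of light.
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_s(distance_km: float) -> float:
    return distance_km / SPEED_OF_LIGHT_KM_S

# Approximate Earth-Mars distance when Curiosity landed (assumed value).
distance_at_landing_km = 248e6
delay = one_way_delay_s(distance_at_landing_km)
print(f"{delay // 60:.0f} min {delay % 60:.0f} s")
```

With that assumed distance, the delay comes out to just under 14 minutes, consistent with the figure above.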
Digital doesn’t work that way. You can undo a change, rolling back to an earlier, more stable version. You can move your code from one cloud to another with virtually identical behaviour. And you can experiment with several versions of a thing before settling on a final one, because copies are free. Software and the processes we build atop it are free from the constraints of time and space that burden physical things.
And yet we haven’t changed our calculus of risk as we move to digital. We need to adjust our idea of risk based on circumstance, and leaders need to provide air cover for imperfect prototypes and open experimentation despite the potential for political fallout.