Ten years ago this week I started a job that was to change my life, heading up a new BBC audience research team in what was then called the ‘New Media’ division.
And it really was new: BBC Online was a mere two-and-a-half years old, it had 3 million monthly unique users (it now has nearly ten times that) and fewer than 4 in 10 people in the UK had ever used the internet. What’s more, the kinds of services and devices that are now integral to our lives, such as broadband, MP3 players and even DVDs, were the domain of the innovators at the very foot of the adoption curve.
But it was a time of unbelievable excitement and discovery. Although the dot com boom had officially ended a few months earlier, the bubble had certainly not burst in Bush House. During my first year, the division’s budget doubled to around £100m and I was pleasantly surprised to discover that my research budget was around a million quid. Pretty much everything we did was a step into uncharted territory.
The pace of media change over the past ten years has been so explosive that you might expect most of what we learnt at the time to be consigned to distant history.
But remarkably, a lot of it continues to hold true today.
We learnt a lot about the twin pitfalls of researching innovative new concepts: on the one hand, audiences can’t articulate a need for something which they don’t yet understand, so they may reject some very sound ideas. (Focus groups have famously rejected disruptive new media over the years, from television to mobile phones.) On the other hand, a ‘wow’ factor can occur, where respondents are so bowled over by a whizzy new concept that they claim it will become indispensable. And then no-one buys it. We overcame this by using research techniques that could identify unfulfilled needs (from diaries to ethnography) and placing these needs at the heart of the production process – something we still subscribe to today.
We learnt that when it comes to concept development, not all respondents are equal. Rather than screening out innovators and ‘experts’, we actively encouraged their involvement in early co-creation, before sense-checking the emergent prototypes with ‘real’ audiences.
We learnt that technology was converging, but not the circumstances in which content was consumed. We understood that TVs, PCs and mobile phones would increasingly have the same capabilities and connectivity, but that the different screens would retain their own core values and audience expectations. This was something of a heretical view at the time, when platform-neutral content creation was all the rage, but it explained why Top of the Pops karaoke through the red button was a real success for us and web on TV was a complete disaster. Today it explains why Nintendo Wii games belong on the living room screen but Facebook doesn’t.
In essence, the advance of technology was allowing each screen to fulfil its real potential: the living room screen got bigger and brought families back together; the desk screen allowed faster and easier completion of tasks and sharing with distant friends; and the handheld screen became an intensely personal device where content could be truly engaging, but only delivered with the user’s permission.
We knew that the internet would be an important social medium and suspected that user-generated content would become as important as that created by the big media owners. But we didn’t get everything right. What we didn’t know was that video would become such a key element of internet use, or that peer-to-peer file sharing would fundamentally change content distribution, or that Google would reinvent everything from web navigation to the online advertising model, or that a very skinny man in a black polo-neck would persuade millions of people to spend £500 on a media tablet through which you could only watch video that you bought from the very skinny man.
But even a million quid can’t buy you that kind of foresight.