News

Can we have confidence in our data?

Our Indigo Share: Subscription post-visit tracker surveys are used by organisations all over the UK, and our days are full of bespoke projects which often involve some audience surveying. Whatever the context, the questions from organisations are so often the same:

  • How can I be sure this data is representative of my audience?
  • Don't the same types of people always fill in surveys?
  • How do we avoid survey fatigue?
  • How do we make sure the surveys show us the truth of our audience profile?

Well, the bad news is that survey fatigue is very real. In a world where arts organisations need to measure everything to prove their worth to funders and stakeholders, as well as wanting to gather meaningful insight to inform decisions, sometimes audiences are at risk of drowning in surveys.

This is actually one of the reasons we created Indigo Share: Subscription. It aims to gather everything you might need in one survey, as concisely as possible, managed carefully so as not to over-survey regular attenders, and with live insight so you can respond quickly.

So how much can we trust our post-visit surveys?

What's the question?

None of us should be doing research with audiences unless we know why we're doing it. Unless you have the budget and time to complete a full census, you can only take a sample, so what do you actually want to know? Is it progress year on year? Is it what one group of audiences felt compared to another? Post-visit surveys are pretty good at answering these questions.

Or is it exactly how many people on low incomes bought a ticket? Your ticketing data might do better at finding this out, based on postcode. And if you want more accuracy, you might need to select audiences differently; more about that later.
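For anyone curious what that postcode approach might look like in practice, here's a rough, hypothetical sketch in Python, assuming a box office export with a postcode column and a published postcode-to-deprivation lookup (for example, the English Indices of Deprivation). The file names, column names and decile cut-off below are all invented for illustration, not taken from any real ticketing system.

```python
import csv

# Hypothetical files: 'bookings.csv' (one row per booker, with a 'postcode' column)
# and 'imd_lookup.csv' (postcode -> IMD decile, where 1 = most deprived, 10 = least deprived).
with open("imd_lookup.csv", newline="") as f:
    imd_by_postcode = {row["postcode"]: int(row["imd_decile"]) for row in csv.DictReader(f)}

low_income_area, total = 0, 0
with open("bookings.csv", newline="") as f:
    for row in csv.DictReader(f):
        decile = imd_by_postcode.get(row["postcode"].strip().upper())
        if decile is None:
            continue  # postcode missing from the lookup, so skip it
        total += 1
        if decile <= 3:  # crude proxy: treat deciles 1-3 as lower-income areas
            low_income_area += 1

print(f"{low_income_area / total:.0%} of matched bookers live in IMD deciles 1-3")
```

Bear in mind this measures the area a booker lives in, not their household income, which is exactly why it complements rather than replaces asking people directly.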

Triangulate to interrogate

Of course, surveying is only one way of measuring audiences. As with many methodologies, there is in-built bias: surveys only go to bookers, not all attenders; some groups (for example, your megafans) are more likely to take the time to feed back; and surveys don't tell you why people think what they think as well as a depth interview would. We firmly believe you should always triangulate with other methods: ticketing data if you have it, social media comments, anecdotal conversations with audiences, perhaps reviews, and so on.

Apples with apples

Then there's benchmarking. The great thing about the Indigo Share benchmark is that everyone is in the same boat: all those organisations face the same challenges and the same biases. So while the pursuit of “truth” can sometimes be a frustrating conversation internally, at least you're comparing apples with apples, and internal and external reporting can be done in the context of a comparative bigger picture. If certain types of people are typically more or less likely to complete surveys, that's the same for every organisation in the benchmark. A benchmark gives useful context and balance.

Have confidence

In April 2025, as we published our annual benchmark reports, we took a quick look at the stats to see how reliable the Indigo Share results were. Response rates to surveys varied, but the more important measure is not the response rate itself: it's the number of responses you've received in a year compared with the total number of people in your audience over that year.

Firstly, everyone got more than enough responses for a sample size calculator (like this one) to say their results were representative. We then double-checked a sample from a few organisations with some basic statistical checks: the confidence level and the margin of error. Together, these tell us how sure we can be that our data reflects the "true" answer and can therefore represent audiences as a whole (again, bearing in mind the biases we've already mentioned).

For all the organisations we looked at, the margin of error was less than 3% at a 99% confidence level. This means we can be really confident that the data being collected is a good representation of audiences as a whole.
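For anyone who wants to run the numbers themselves, here is a minimal Python sketch of that kind of check: the standard margin-of-error formula for a proportion, with a finite population correction so the total audience size is taken into account. The response and audience figures below are invented for illustration, not real Indigo Share data.

```python
import math

def margin_of_error(responses: int, audience: int, z: float = 2.576) -> float:
    """Margin of error for a proportion (worst case p = 0.5) at the given z-score,
    with a finite population correction for the total audience size.
    z = 2.576 corresponds to a 99% confidence level."""
    p = 0.5  # worst-case proportion gives the widest possible margin
    standard_error = math.sqrt(p * (1 - p) / responses)
    fpc = math.sqrt((audience - responses) / (audience - 1))  # finite population correction
    return z * standard_error * fpc

# Hypothetical figures: 2,500 survey responses from an annual audience of 80,000
moe = margin_of_error(2_500, 80_000)
print(f"Margin of error at 99% confidence: ±{moe:.1%}")  # roughly ±2.5%
```

The more responses you have relative to your total audience, the smaller this margin becomes, which is why responses-versus-audience is a more useful figure than response rate alone.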

But sampling matters

This does assume, however, that we are truly sampling our audience at random. Most of us are not: we're taking a sample based on convenience. We're sending out an email and accepting that those who reply form our sample. And this is where the bias creeps in.

Imagine collecting responses is like shooting at an archery target. If you're getting enough responses, the cluster of arrows gets pretty tight, so we know the variability in answers will reduce. However, that cluster of arrows could still be far from the bullseye if you're not sampling the right audiences: more responses reduce variability, but they can't remove bias. We can increase the quality of our sampling (i.e. get closer to the bullseye of "truth") in a whole range of ways. If you really want to improve the quality of your data, this is where to focus your energies. Get in touch if we can help.
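To make the archery analogy concrete, here is a small, purely illustrative simulation (the audience size, the 30% "true" figure and the response weights are all invented): a convenience sample in which frequent attenders are more likely to reply produces a tight estimate that still sits a long way from the true figure, while a genuinely random sample of the same size lands near the bullseye.

```python
import random

random.seed(42)

# Hypothetical audience of 80,000 people, of whom 30% are first-time attenders (the "truth").
audience = [1] * 24_000 + [0] * 56_000
random.shuffle(audience)
true_rate = sum(audience) / len(audience)

# Random sample: every attender is equally likely to respond.
random_sample = random.sample(audience, 2_500)

# Convenience sample: frequent attenders (coded 0) are three times as likely to reply,
# so first-timers end up under-represented among respondents.
weights = [1 if is_first_timer else 3 for is_first_timer in audience]
convenience_sample = random.choices(audience, weights=weights, k=2_500)

print(f"True first-timer rate:       {true_rate:.1%}")
print(f"Random sample estimate:      {sum(random_sample) / 2_500:.1%}")
print(f"Convenience sample estimate: {sum(convenience_sample) / 2_500:.1%}")
# Both estimates are "tight" (low variability), but only the random one is near the bullseye.
```

The convenience estimate here comes out around half the true figure, which is the archery problem in numbers: a neat cluster of arrows, nowhere near the centre of the target.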

Indigo Share: Subscription can help