There are quite a lot of misleading panel metrics being used in our industry. People are often confused when we tell them that we don't pay much attention to them. You can certainly ask us about our panel sizes or attrition rates, but in the end these are not numbers we worry about.
Let us explain what we mean by that. From our point of view, these are typically bad panel metrics:
- Panel size: Anyone can pile up a huge number of email addresses, but that doesn't mean you can gather valuable information from their owners. The addresses might not be in use anymore, or they are in use but survey invitations go straight to the spam folder. And even if the owner notices your invitation, that doesn't mean they respond to it. Size alone means nothing. On the contrary, a huge database may even indicate poor maintenance, with old and unresponsive accounts that have never been deleted.
- Panel churn: The same is true for panel attrition. Only a minority of our panelists actively unsubscribe; most of them simply stop responding when invited. When you exclude these addresses from your panel, you get a higher churn rate and a smaller panel, even though the quality of the remaining addresses is far better. Without database maintenance you would have low attrition rates and bigger panels, but you would also have low response rates.
So, let’s talk about what we consider good panel metrics:
- Net reach: Though a calculated value, net reach tells you the maximum number of interviews you can expect within a specific socio-demographic group. Having 10,000 male panelists in the database means nothing if you don't know their response rates. In contrast, knowing that you can expect up to 5,000 interviews among men is clear and comprehensible. Size multiplied by response rate is the number you should care about.
- Number of invitations per panelist per month: Big panel suppliers with offices all around the world run far more projects than local companies. Their panels have to deliver more interviews and thus have to be bigger. However, bigger does not automatically mean that a single panelist gets invited less often, or that learning effects are lower. The number of invitations per panelist in a given period of time is the number you should care about. Just register with different panels (if they allow open registration, of course) and count the invitations you receive. You'll see huge differences!
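The two metrics above boil down to simple arithmetic. Here is a minimal sketch; the function names and all figures are illustrative assumptions, not real panel data:

```python
def net_reach(panel_size: int, response_rate: float) -> int:
    """Maximum expected interviews in a group: size x response rate."""
    return round(panel_size * response_rate)

def invitations_per_panelist_per_month(total_invitations: int,
                                       active_panelists: int,
                                       months: int) -> float:
    """Average invitation frequency per panelist over a period."""
    return total_invitations / active_panelists / months

# Hypothetical example: 10,000 male panelists with a 50% response rate
print(net_reach(10_000, 0.5))  # 5000 expected interviews

# Hypothetical example: 90,000 invitations sent to 10,000 panelists
# over 3 months
print(invitations_per_panelist_per_month(90_000, 10_000, 3))  # 3.0
```

Note that the second figure is exactly what you can estimate yourself by registering with a panel and counting the invitations you receive per month.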
These are just some examples. In any case, we would be happy to be asked more often about the numbers that really matter.
Do you know other misleading metrics? Let us know!