We’re frequently asked how to calculate the feasibility of research projects with our online panels. Are the panels big enough to cover a required sample size? And what happens when the desired target group is very specific? In this post we discuss how to calculate the feasibility of online projects.
What’s the respondent’s final status?
These kinds of questions often require starting from the end and working our way backwards to the beginning. For the sake of clarity, however, we’ll first briefly walk you through the interviewing process in the right order and explain the final statuses a respondent can get.
It all starts with sending invitations to our panellists. Of all invited panel members, only a portion will actually click on the link and start the survey. That’s what we describe with the Response Rate. Then, at the beginning of a survey, we typically have some screening questions to identify the desired target group. The percentage of eligible respondents at this stage is reflected in the Incidence Rate. Once we have made sure we have the right target group, we assess possible quotas and end the interview for those respondents whose quotas have already been filled. Quotas are usually assessed after the screener so that we can measure the true incidence rate without the interference of quotas. If respondents fit into an open quota, they can participate in the main survey. Nonetheless, some may break off during the interview and never reach the end page. Finally, those reaching the end of the survey are counted as completed interviews.
As indicated before, calculating the feasibility starts from the number of required completes and consists of working our way backwards to the number of invitations. So, let’s say we strive for 1,000 interviews in total. The first step is estimating the number of break-offs during the main interview (also referred to as “drop-outs”, “partials” or “abandonments”).
So, what’s a reasonable assumption for the break-off rate? It mainly depends on the survey itself. If the questionnaire is lengthy, repetitive or about a topic that is not very relevant to the respondents, more break-offs can be expected. Technology also plays an important role: if the survey relies on outdated technology (e.g. Flash) or is not mobile friendly (i.e. not responsive), users may have a hard time completing it. Our experienced project managers will be happy to help you optimize your questionnaire to keep the number of break-offs as low as possible.
Now, let’s assume a drop-out rate of 2% in our example; that means we’ll need about 1,020 respondents starting the main interview.
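This single step can be sketched as a simple back-calculation. Here is a minimal illustration in Python, using the rates assumed in the running example:

```python
# Hypothetical single-step back-calculation: with a 2% break-off rate,
# how many respondents need to start the main interview to end up
# with 1,000 completed interviews?
target_completes = 1000
break_off_rate = 0.02  # assumed share of starters who abandon the survey

starters_needed = target_completes / (1 - break_off_rate)
print(round(starters_needed))  # -> 1020
```

The same pattern, dividing by the share of respondents who survive a stage, repeats at every step of the calculation below.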
The next step is estimating the number of quota fails. This is probably the hardest task of all and requires a lot of experience on the part of the project manager.
Quota definitions can be quite complex. They can include numerous variables, they can be interlocking or non-interlocking, and sometimes respondents are even assigned to them by chance (think of monadic tests). In theory, the profile variables available for our panellists should help us invite only the right participants and avoid any quota fails. In practice, however, this isn’t possible. We don’t always have all the necessary profile data available, and if the field period is too short, we don’t even have time to fill the quotas in small, careful steps.
So, all in all, quota fails are almost inevitable in most cases. Their extent depends a lot on the specifications of the study (e.g. the quota plan and field period), but also on the project manager’s experience. Meeting all the quotas in due time while still treating the panel with care can be quite challenging, and it is what distinguishes experienced samplers from inexperienced ones.
Let’s assume 20% quota fails in our example, so we’ll need 1,276 screened respondents.
Estimating the number of screen-outs is relatively easy, as the incidence rate is usually part of the proposal. This incidence rate should ideally equal the proportion of respondents who make it through the screener and is typically independent of any other factors.
Let’s assume an incidence rate of 50% for our example; that gives us a required 2,552 starters.
The last step in our calculation is answering the question of how many panellists we’ll have to invite in order to get 2,552 starters. The response rate depends slightly on external factors (such as the time of day, day of the week, weather, holiday season, etc.). In addition, the quality of the panel (i.e. the panellists’ motivation) plays a role, and, last but not least, so do the parameters of the study itself: if the survey is suitable for mobile devices, we can push the invitation to our panel app and thereby boost the response rate.
You will find our average response rates in our panel book. If we take 45% for our example, we would need a total panel size of 5,669 panellists. That’s the minimum panel size required to meet the specifications of this exemplary study. And as you’ll see in our panel book, even our smallest online panel is big enough to carry out this kind of study.
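Putting all four steps together, the whole back-calculation can be sketched as follows. The rates are the assumptions from this post’s running example, not general benchmarks:

```python
import math

# Back-calculate the required panel size from the desired number of
# completes by dividing by the "survival rate" of each funnel stage.
target_completes = 1000
break_off_rate = 0.02   # share of starters abandoning the main interview
quota_fail_rate = 0.20  # share of screened respondents hitting a full quota
incidence_rate = 0.50   # share of starters passing the screener
response_rate = 0.45    # share of invited panellists starting the survey

overall_yield = (1 - break_off_rate) * (1 - quota_fail_rate) \
    * incidence_rate * response_rate
panel_size = math.ceil(target_completes / overall_yield)
print(panel_size)  # -> 5669
```

Because of rounding, the intermediate figures quoted above may differ by a respondent or two from this exact calculation, but the resulting panel size is the same.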
So, how big is big enough?
Clearly, estimating the feasibility of online studies is not an exact science. Some influencing factors are beyond our control, while others can be optimized by our project managers.
But here is what’s even more interesting: while feasibility seems to be a matter of mere panel size at first sight, panel quality is just as important in this discussion. In fact, a lot of panels are big enough to cover your need for a specific sample size, but not all will try to get the best out of your project and take care of the quality of your study. And at the end of the day, this is where we make the difference.