21st September 2018, Volume 131 Number 1482

Michael Thomson, Megan Pledger, Richard Hamblin, Jackie Cumming, Essa Tawfiq

The decline in survey response rates in most developed countries over the past few decades has prompted serious concern about the validity of survey findings.1 A low response rate is thought to increase the risk of non-response bias, where significant differences exist between the characteristics of people who responded to a survey and those who did not. These systematic differences can bias results, particularly if characteristics that distinguish responders from non-responders are correlated with the outcome of interest.

A number of scholars have benchmarked response rates below 50% or 60% as highly likely to produce biased results, particularly when the sample size is small and there are associations between attributes of non-responders and the outcome variable.2–5 However, such rules of thumb are imperfect proxies for the complications of non-response bias,5 and indeed several meta-analyses have found response rate to be a poor predictor of non-response bias.3,6

Theories of non-response bias offer more precise understandings of how non-response produces error. One influential theory—‘leverage-salience theory’—conceptualises non-response bias as error resulting from endogenous relationships between individual characteristics and survey characteristics (for instance, the survey’s length, mode or subject matter).7 The extent of bias will increase with the salience of endogenous survey characteristics at the time of the survey request.6,7 The results of a meta-analysis by Groves and Peytcheva6 suggest such relationships can significantly moderate the beneficial effects of a high response rate. Other theories of non-response also provide useful insights; for example, continuum of resistance theory predicts non-responders will be more similar to late responders than to initial responders, lending support to inferences about non-responders in general made from follow-up responders.8–11 Both theories seek to identify endogenous relationships between individual characteristics and the model error term. Accordingly, researchers are increasingly encouraged to supplement response rates with alternative indicators of non-response bias,1 and to pay close attention to the specific relationship between the survey and the participants.6,7

Empirical studies assessing non-response bias have tended to compare non-responders and responders on observed variables such as sociodemographic characteristics, administrative health records, and health conditions reported or observed during screening procedures. Sociodemographic characteristics tend to differ between responders and non-responders,12–22 although a smaller number of studies find no significant differences between groups.23–27

Fewer studies have followed up with participants to measure response behaviour directly. These studies are split between those finding significantly different responses between groups28,29 and those that do not,30,31 which likely reflects the heterogeneity of survey topics and sampled populations. Longitudinal drop-out studies contribute evidence similar to that of follow-up studies, again with mixed results.32–34 The first of these studies examined a prescription drug database, while the latter two were large-scale cohort studies of health and disease. Finally, studies comparing early and late responders to surveys find that late responders exhibit more extreme behaviours than early responders, as continuum of resistance theory predicts.8–11 The results to date suggest that the presence of bias is heterogeneous across surveys and sample populations, as leverage-salience theory would suggest, and that those who are more difficult to contact generally tend to have more extreme results.

It is surprising that relatively few studies have followed up with non-responders to directly measure differences in response behaviours, given that comparisons of sociodemographic characteristics are limited to providing inferences of such responses. If we are to maintain confidence in the validity of surveys as instruments in spite of low response rates, it is necessary to measure as directly as possible whether non-responders are genuinely likely to respond differently to initial responders. This study contributes to the literature in this field by attempting to answer the following research questions:

  1. Do follow-up responders to an inpatient survey have a significantly different sociodemographic structure to initial responders?
  2. Do follow-up responders significantly differ from initial responders in how they answer survey questions?
  3. What factors are correlated with non-response?

We re-contacted non-responders to a nationally representative, cross-sectional inpatient survey in New Zealand and asked them to respond to a subset of seven items drawn from the initial survey, and to disclose their reasons for non-response. Differences among groups were tested for significance using chi-square tests of association and logistic regression. The study ultimately determines whether there are observable differences between initial and follow-up responders, and discusses how this relates to broader evidence of non-response bias.

Methods

Study design

This study analyses cross-sectional primary data collected in January 2016 in New Zealand. Eligible participants were non-responders to a nationally representative, cross-sectional survey conducted in December 2015. Data are compared across surveys to sample both initial responders and responders to follow-up.

Study population and data

The Adult Inpatient Experience Survey is a quarterly 20-item online survey designed by the Health Quality & Safety Commission in August 2014. The survey runs in all 20 health administration regions (district health boards, or ‘DHBs’) across New Zealand. In each DHB, 400 patients aged 15 years or above who spent at least one night in hospital within the two-week study period were invited via email, SMS or post to participate in the survey. To maximise survey response and minimise cost to DHBs, patients who had provided email addresses were preferentially selected, with patients contactable only by mobile phone or post sampled at random thereafter. Preferential selection of patients with email addresses may skew survey participation towards groups with greatest access to the associated technologies, and is therefore considered a limiting trade-off between cost and the representativeness of data.35 Reminders were sent after seven days via email or SMS if available, or via post if not. Where DHBs had fewer than 400 eligible patients within the two-week period, all patients were contacted. A complete methodology document is available on the Health Quality & Safety Commission’s website.36
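A minimal sketch of this selection rule, assuming eligible patients arrive as a pandas DataFrame; the column name `email` and the function itself are illustrative, not taken from the Commission's methodology document:

```python
import pandas as pd

def select_sample(patients: pd.DataFrame, n: int = 400,
                  seed: int = 0) -> pd.DataFrame:
    """Preferential selection as described above: take patients with an
    email address first, then top up at random from those contactable
    only by SMS or post. Where fewer than n patients are eligible,
    everyone is selected."""
    with_email = patients[patients["email"].notna()]
    if len(with_email) >= n:
        return with_email.sample(n, random_state=seed)
    rest = patients.drop(with_email.index)
    top_up = rest.sample(min(n - len(with_email), len(rest)),
                         random_state=seed)
    return pd.concat([with_email, top_up])
```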

During the survey wave ending December 2015, approximately 14,000 patients were eligible to participate in the study. Of these, 6,089 individuals were selected to participate; 1,668 returned a completed survey, while 4,421 did not start or did not complete the survey, giving a 27% response rate. Results were subsequently weighted to account for non-representative demographic composition.

The present study conducts a comparative follow-up survey, examining the sociodemographic characteristics and response behaviour of non-responders who obtained services in a sample of 10 DHB catchments: Capital and Coast, Counties Manukau, Hutt Valley, Northland, Southern, Taranaki, Wairarapa, Waitemata, West Coast and Whanganui. Cases were selected to target DHB catchments where response rates were particularly low while covering a spread of geographical regions. Eligible patients were convenience-sampled by the software-as-a-service firm Cemplicity from lists of individuals in each DHB catchment who had been sent the inpatient experience survey via email or SMS but did not complete it. While data collected through convenience sampling are limited by the potential for non-random bias,37 the ease and affordability of this method aligns with the data’s function in enabling the Health Quality & Safety Commission to provide continuous feedback on DHB performance. Where cell phone numbers were available, patients were called and asked about their reasons for non-response to the initial survey. They were further invited to participate in a truncated seven-item version of the Adult Inpatient Experience Survey, either at the time of the call or later, and either over the phone or online. Where cell phone information was not available, participants were contacted with a short email containing a link to the truncated survey and an invitation to reply with their reasons for non-response.

Particular methodological attention was given to maximising responses to the follow-up survey. Sampled individuals were telephoned up to six times before being recorded as non-responsive, with calls spread across different days and times of day. The interviewers also offered opportunities to make alternative appointments, and allowed time-pressed responders to opt out of the seven-item survey and instead comment briefly on their reasons for non-response. Market research firm Buzz Channel conducted all interviewing.

Of the 2,209 eligible individuals who did not respond to the initial survey and were treated in one of the 10 selected DHBs, 163 were recruited through convenience sampling, surpassing the target of 150 respondents recommended by power analysis. The flow of participant recruitment and selection is summarised in Figure 1. Individuals who did not respond to the original Adult Inpatient Survey but did respond to the follow-up survey are henceforth referred to as follow-up responders. To allow for valid comparisons, follow-up responders were compared with respondents to the Adult Inpatient Survey from the same 10 DHBs (henceforth termed initial responders). Finally, these two groups were compared with the overall pool of 6,581 individuals from which they were drawn, comprising all individuals eligible to participate in the Adult Inpatient Survey in the 10 DHBs selected for the follow-up study. Table 1 summarises the characteristics and inclusion/exclusion criteria of the three comparison groups.

Figure 1: Flow of participant recruitment and selection.


Table 1: Summary of comparison groups.

Group | Sample size | Data source | Inclusion/exclusion criteria
Pool | n=6,581 | Demographic data requested from district health boards | Potential responders to the Patient Experience Survey, aged 15+, who were discharged from hospitals during the two-week study period, from selected DHBs.
Initial responders | n=910 | Adult Inpatient Experience Survey, December 2015 | Individuals drawn from the pool who were contacted and completed the Adult Inpatient Experience Survey.
Follow-up responders | n=163 | Follow-up survey, January 2016 | Individuals drawn from the pool who were contacted and either did not start or did not complete the Adult Inpatient Experience Survey, but were re-contacted and started or completed the follow-up survey.

Measures

Demographic variables

Both surveys contained questions eliciting the respondent’s demographic characteristics. Participants identified as either male or female, and age was reported as a continuous variable. Ethnicity was reported following Statistics New Zealand’s prioritised ethnicity categorisations.38 Patients identifying as Pacific peoples, Asian, MELAA (Middle-Eastern/Latin-American/African), Other, or any residual responses were grouped into the category ‘Other’ to resolve issues with the size of the sub-sample. Thus, ethnicity was compared on the basis of the groups ‘European’, ‘Māori’ and ‘Other’.
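As an illustration only (the label strings below paraphrase the groupings described above, not Statistics New Zealand's published code lists), collapsing prioritised ethnicity into the three analysis groups might look like:

```python
# Collapse prioritised ethnicity responses into the three analysis
# groups used in this study. Label strings are illustrative.
ETHNICITY_GROUP = {
    "European": "European",
    "Māori": "Māori",
    "Pacific peoples": "Other",
    "Asian": "Other",
    "MELAA": "Other",
    "Other": "Other",
}

def analysis_group(prioritised_ethnicity: str) -> str:
    # Residual or unrecognised responses also fall into "Other".
    return ETHNICITY_GROUP.get(prioritised_ethnicity, "Other")
```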

Survey items

The Adult Inpatient Experience Survey consists of 20 items derived by the Health Quality & Safety Commission from licensed access to the Picker library of 200-plus questions. Items of importance were established via testing conducted in tandem with KPMG. Some questions were slightly reworded to suit a New Zealand cultural context. Participants in the follow-up survey were asked to respond to a seven-item survey consisting of core questions from the initial survey. Six of these were multiple-choice questions, and the last question allowed free response.

Given that the comparison groups came from distinct surveys and potentially had different sociodemographic structures, non-responders were weighted to match the characteristics of responders. To determine which demographic variables were important, the two groups were compared on age, sex and ethnicity, individually and as two-way interactions, using logistic regression. Age and ethnicity were found to be associated with being a responder or non-responder. These two variables were subsequently used individually to weight the non-responders to the responders. Responses are reported in both raw and weighted terms for comparison.
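A minimal sketch of this two-step procedure follows, using simulated pandas DataFrames in place of the actual survey records. Treating the weighting step as simple post-stratification ratios, applied multiplicatively, is our reading of “used individually to weight”, not a procedure stated in the paper:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate(n: int) -> pd.DataFrame:
    # Stand-in for one comparison group's demographic records.
    return pd.DataFrame({
        "age_group": rng.choice(["15-44", "45-64", "65+"], n),
        "ethnicity": rng.choice(["European", "Māori", "Other"], n),
        "sex": rng.choice(["female", "male"], n),
    })

initial, followup = simulate(910), simulate(163)  # group sizes from the study

# Step 1: screen demographic predictors of group membership with
# logistic regression (the paper tested age, sex and ethnicity alone
# and as two-way interactions; only main effects are shown here).
stacked = pd.concat([initial.assign(is_followup=0),
                     followup.assign(is_followup=1)])
fit = smf.logit("is_followup ~ age_group + sex + ethnicity",
                data=stacked).fit()
print(fit.summary())

# Step 2: weight follow-up responders so each retained variable's
# distribution matches the initial responders: the weight for a
# category is its share among initial responders divided by its
# share among follow-up responders.
def cell_weights(var: str) -> pd.Series:
    return (initial[var].value_counts(normalize=True)
            / followup[var].value_counts(normalize=True))

followup["weight"] = (
    cell_weights("age_group").reindex(followup["age_group"]).to_numpy()
    * cell_weights("ethnicity").reindex(followup["ethnicity"]).to_numpy()
)
```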

Reasons for non-response

Follow-up responders were asked why they did not take part in the survey using two open-ended questions—“Why did you not take part in the survey?” and “Is there anything we could have done differently to make you take part?” The data from these questions were thematically coded into appropriate categories. Categories that were meaningful and had a sufficient number of responders were investigated to see whether they depended on demographic variables such as age, sex and ethnicity.

Results

Demographic differences between responders and non-responders

Table 2 displays the distribution of demographic variables across follow-up responders, initial responders and the pool of all patients discharged. The distribution of males and females was similar in each of the three groups. Initial responders were less likely to be very young (aged 15–24) and more likely to be in early old age (aged 65–74) than the pool. Follow-up responders had a proportion of those aged 15–24 similar to the pool, but were more likely to be in middle age (aged 45–64) than the pool and less likely to be in the two oldest age groups.

Table 2: Demographics of follow-up responders, initial responders, and all patients discharged (pool). Values are N (%).

 | Follow-up responders | Initial responders | All patients discharged (pool)
Sex: Female | 97 (60) | 556 (61) | 3,907 (59)
Sex: Male | 66 (40) | 354 (39) | 2,674 (41)
Age 15–24 years | 18 (11) | 40 (4) | 667 (10)
Age 25–44 years | 45 (28) | 220 (24) | 1,748 (27)
Age 45–64 years | 59 (36) | 235 (26) | 1,628 (25)
Age 65–74 years | 24 (15) | 207 (23) | 1,084 (16)
Age 75–84 years | 11 (7) | 145 (16) | 911 (14)
Age 85+ years | 6 (4) | 63 (7) | 543 (8)
Ethnicity: European | 130 (80) | 677 (74) | 4,452 (68)
Ethnicity: Māori | 19 (12) | 76 (8) | 845 (13)
Ethnicity: Other | 14 (9) | 157 (17) | 1,284 (20)

The initial responders group was more likely to be New Zealand European than the pool, and the follow-up responders group even more so. Māori were less likely to be in the initial responders group than in the pool, but equally likely to be follow-up responders. The combined ethnic group, Other, was less likely to be in the initial responders group than in the pool, and even less likely to be in the follow-up responders group.

Differences in response behaviour between responders and non-responders

Initial and follow-up responders were compared on six controlled-response items measuring patient experience (see Table 3). Where the number of respondents answering “No” was low, sensitivity analyses were conducted that combined these responders with the weakly affirmative group (ie, those responding “Yes, sometimes”). Where a notable number of participants did not answer a question or answered “not applicable”, these responses were removed and the percentages recalculated.
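A minimal sketch of these tests follows, with illustrative counts rather than the study’s data (the paper reports only percentages), and scipy as our choice of tool rather than necessarily the authors’:

```python
from scipy.stats import chi2_contingency

# Illustrative 2x3 contingency table: rows are groups (follow-up,
# initial responders); columns are "Yes, always", "Yes, sometimes",
# "No". Counts are made up for the example.
table = [[124, 32, 5],
         [652, 190, 18]]
chi2, p, df, expected = chi2_contingency(table)
print(f"chi-squared={chi2:.4f}, df={df}, p={p:.4f}")

# Sensitivity analysis for a sparse "No" cell: merge "No" into the
# weakly affirmative "Yes, sometimes" column and re-test.
merged = [[r[0], r[1] + r[2]] for r in table]
chi2, p, df, _ = chi2_contingency(merged)
print(f"combined categories: chi-squared={chi2:.4f}, df={df}, p={p:.4f}")
```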

Table 3: Responses to the follow-up survey. Follow-up responder percentages are shown raw and weighted; initial responder percentages are shown in the final column.

Response | Follow-up raw % | Follow-up weighted % | Initial %

Question 1: When you had important questions to ask a doctor, did you get answers that you could understand?
Yes, always | 76 | 76 | 72
Yes, sometimes | 20 | 20 | 21
No | 3 | 3 | 2
I had no need to ask | - | - | 3
Not answered | 1 | 2 | 1
Chi-squared, unadjusted response categories: χ²=0.44437, df=2, p=0.8008; combined “No” and “Yes, sometimes”: χ²=0.1371, df=1, p=0.7110.

Question 2: Did a member of staff tell you about medication side effects to watch for when you went home?
Yes, completely | 41 | 40 | 37
Yes, to some extent | 15 | 15 | 22
No | 12 | 14 | 16
I did not need an explanation | 17 | 18 | 14
Not applicable | - | - | 10
Not answered | 15 | 13 | 1
Chi-squared: χ²=4.4057, df=2, p=0.1105.

Question 2, selected responses (non-answers and “not applicable” removed):
Yes, completely | 48 | 46 | 41
Yes, to some extent | 17 | 17 | 25
No | 14 | 16 | 18
I did not need an explanation | 20 | 21 | 16

Question 3: Were you involved as much as you wanted to be in decisions about your care and treatment?
Yes, completely | 75 | 75 | 66
Yes, to some extent | 17 | 18 | 25
No | 5 | 5 | 6
I was unable or did not want to be involved | 2 | 2 | 2
Not answered | - | - | 1
Chi-squared: χ²=5.1568, df=2, p=0.0759.

Question 4: Did you feel you received enough information from the hospital on how to manage your conditions after discharge?
Yes, completely | 59 | 60 | 55
Yes, to some extent | 21 | 23 | 28
No | 12 | 11 | 12
I did not need any help in managing my condition | 8 | 6 | 4
Not answered | - | - | 2
Chi-squared: χ²=3.05, df=2, p=0.2170.

Question 5: Overall, did you feel staff treated you with respect and dignity while you were in the hospital?
Yes, always | 81 | 83 | 84
Yes, sometimes | 14 | 13 | 12
No | 5 | 4 | 2
Not answered | - | - | 2
Chi-squared, unadjusted response categories: χ²=3.5782, df=2, p=0.1671; combined “No” and “Yes, sometimes”: χ²=1.4193, df=1, p=0.2335.

Question 6: Did you have confidence and trust in the nurses treating you?
Yes, always | 81 | 81 | 68
Yes, sometimes | 15 | 14 | 13
No | 3 | 4 | 1
Not applicable | - | - | 1
Not answered | 1 | 1 | 16

Question 6, selected responses (non-answers and “not applicable” removed):
Yes, always | 81 | 82 | 83
Yes, sometimes | 16 | 14 | 16
No | 3 | 4 | 1
Chi-squared, unadjusted response categories: χ²=3.5394, df=2, p=0.1704; combined “No” and “Yes, sometimes”: χ²=0.051907, df=1, p=0.7624.

Across all questions, follow-up responders gave answers that were near identical to (Questions 5 and 6) or slightly more positive than (Questions 1, 2, 3 and 4) those of initial responders. Chi-squared tests did not find any significant differences between groups. This interpretation does not change in sensitivity analyses assessing the impact of small subsamples of negative responders (Questions 1, 5 and 6) or removing participants who did not answer a given question (Questions 2 and 6).

Reasons for non-response

Participants of the follow-up survey were asked to briefly comment on why they did not take part in the initial inpatient survey. Table 4 displays the categorised results.

Table 4: Reasons for not completing the Adult Inpatient Survey.

Reason | n | %
Total people | 163 | 100
People responding to question¹ | 161 | 99
Total responses | 191 | 117
Don’t remember/can’t remember | 53 | 33
Social | 49 | 30
- Too busy | 31 | 19
- Bad timing | 10 | 6
- Forgot to do it | 4 | 2
- Just didn’t do it | 2 | 1
- Nothing untoward to report | 1 | 1
- Old age | 1 | 1
Objecting | 8 | 5
- Negative feeling towards hospital | 6 | 4
- Objected to survey content | 2 | 1
Medical reasons | 16 | 10
- Medical condition intervened | 13 | 8
- Still needing more medical care | 3 | 2
Human-survey technology breakdown | 54 | 33
- Did not receive invite | 41 | 25
- Thought they had completed survey | 7 | 4
- SMS survey attempted but not completed | 3 | 2
- Too many solicitations | 1 | 1
- Filtered to spam | 1 | 1
- Lost the paperwork | 1 | 1
Unwilling/unable to use survey technology | 11 | 7
- Not confident with SMS | 3 | 2
- Not confident with computers | 3 | 2
- Dislikes computers | 1 | 1
- Don’t do online surveys | 1 | 1
- Don’t do SMS surveys | 1 | 1
- Don’t have computer | 1 | 1
- Don’t trust phone interviews | 1 | 1

¹The information was collected from two questions—“Why did you not take part in the survey?” and “Is there anything we could have done differently to make you take part?” Information that was not relevant to the question being asked is not reported. The questions were open-ended and participants could give as many responses as they liked.

The most common response was that participants did not or could not remember why they did not take part in the survey (33%), followed by not receiving a survey invite (25%) and being too busy (19%).

Tree-based methods were used to analyse whether any demographic variables were associated with the reasons given for non-response. The two reasons considered were i) not receiving the survey invite and ii) being too busy. The most common reason, not being able to remember why they did not do the survey, was not analysed because it was deemed unlikely to provide useful information.

The variables considered in the tree models were age group (15–44, 45–64 and 65+), ethnicity (New Zealand European, Māori, Other) and gender (male, female). None of these variables was associated with respondents reporting that they did not receive the survey invitation. However, responders who were young (15–44 years old), female, and of Māori or Other ethnicity were more likely than other responders to say they were too busy to respond to the survey.

Only 15% of New Zealand European responders (who made up 80% of the sample) said they were too busy, compared with 53% of young, female responders of Māori or Other ethnicity.
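A minimal sketch of such a tree analysis follows, using simulated data in place of the study’s records; scikit-learn is our choice of tool, as the authors do not name their software:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 163  # number of follow-up responders in the study

# Simulated stand-in for the follow-up data: demographics plus a
# binary flag for giving "too busy" as a reason for non-response.
df = pd.DataFrame({
    "age_group": rng.choice(["15-44", "45-64", "65+"], n),
    "ethnicity": rng.choice(["European", "Māori", "Other"], n),
    "gender": rng.choice(["female", "male"], n),
    "too_busy": rng.integers(0, 2, n),
})

# One-hot encode the categorical predictors, then fit a shallow tree;
# min_samples_leaf guards against splits on tiny subgroups.
X = pd.get_dummies(df[["age_group", "ethnicity", "gender"]])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10,
                              random_state=0)
tree.fit(X, df["too_busy"])
print(export_text(tree, feature_names=list(X.columns)))
```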

Discussion

This study contributes to the literature on survey non-response by following up directly with initial non-responders to an inpatient survey. We find that despite differences in the age and ethnic composition of initial and follow-up responders, responses do not appear significantly different between groups. Our results align with previous New Zealand research suggesting younger age groups8,22 and Māori8,11,22 are associated with non-response, and supplement sociodemographic, longitudinal and early-and-late responder study designs with evidence from direct follow-up with non-responders. The most common reasons for non-response were reportedly not receiving the invite or being too busy, although a notable group reported they could not recall why they had not responded. The reliability and generalisability of these findings are subject to a number of caveats, as follows.

The reliance on convenience-sampled data is a notable limitation of this study, exacerbating the issue of non-random bias in study participation by further selecting easy-to-reach participants. The data are hence limited in assessing the true extent of non-response bias, particularly among those theory would predict are most likely to have extreme values.9,10 The small sample size, and associated limited statistical power, of the follow-up group represents another data limitation, although this is somewhat ameliorated through the pooling of small subgroups and sensitivity analyses. While population weighting is a commonly applied solution to issues of non-representation, previous empirical analysis in New Zealand demonstrates that weighting procedures may still underestimate extreme behaviours by simply magnifying unrepresentative values for small subpopulation groups rather than more comprehensively representing those who did not respond.8 However, in this study repeating the analyses unweighted did not change the interpretation of results.

Another source of potential bias arises from modality effects. Initial survey respondents answered primarily online, with those who could not be contacted through email responding via post, while the equivalent questions in the follow-up survey were answered either online or via phone. While mixed-modality methods were chosen to maximise response rates among harder-to-reach groups and reduce non-response bias,39 error may be introduced if there are systematic differences in response behaviour associated with different modalities, as the literature suggests.40,41 For example, the over-sampling of New Zealand Europeans may have partially resulted from correlations with the survey distribution methods: access to the preferentially selected email and SMS technologies has historically been differentially distributed among ethnicities in favour of New Zealand Europeans.35 However, very few follow-up respondents indicated that technology was a barrier. Finally, the non-equivalent psychosocial dynamics of responding to an initial survey versus responding to follow-up may have introduced some degree of measurement error.

Self-reported data are subject to a well-studied range of limitations, including failure to accurately recall events, a tendency to present oneself positively, and a tendency to withhold sensitive information.42 Recall bias seems likely in this study given that a third of follow-up participants reported they could not recall why they did not participate, despite this study following up with participants within a month, a similar29 or shorter43 timeframe than other follow-up studies. Alternatively, participants might feign poor memory to avoid disclosing a sensitive or socially undesirable reason for non-participation, such as having little interest in participating.42

The notable proportion of follow-up responders claiming they did not receive the survey invitation (25% of those who responded to the question) may be indicative of technical issues in survey distribution. If these individuals did not in fact receive a survey invitation, then they may behave more like responders than non-responders, thereby biasing the composition of each group. To test whether these ambiguous non-responders constituted a problem, we removed them from the dataset and re-ran the analyses. This made little change to the results and, more often than not, the change made the amended non-responder group more like the responder group.

The reason for non-response with the most substantive implications for survey design was being too busy (19%). This finding is in line with previous research, which suggests participants report being too busy due to “lack of time to dedicate to a topic seen as low priority, overestimated perception of time for study commitments and the inappropriate timing of the request”43 (p. 57). In the framework of leverage-salience theory, future iterations of the survey should seek to reduce the salience of perceived time costs given the presumed negative correlation with response propensity.

In light of the above discussion, the findings of this study contribute modest evidence of similarities in response behaviour between initial and follow-up responders, but do not enable confident inference about residual non-responders. While theory would suggest follow-up responders are likely to be more similar to the remaining non-responders than those who responded to the initial survey request,9 the data are insufficient to draw strong inferences about their likely behaviour. Future monitoring of inpatient experience undertaken by the Health Quality & Safety Commission and other sources should track and compare patients by the number of attempts made to contact them, in line with continuum of response methods, to allow broader inferences to be made about the total pool of non-responders. Case studies of groups who are least likely to respond to surveys could elucidate whether their patient experience pathways diverge from those of survey responders.

Summary

Researchers are concerned that surveys might not accurately represent the populations they are interested in if people who respond to surveys differ from those who do not. We investigated whether there was any indication of such biases in a hospital inpatient experience survey by conducting follow-up interviews with non-responders and comparing them to initial responders. We found no evidence of differences between initial and follow-up responders in how they reported their hospital experience, despite younger age groups and Māori being more associated with non-response than other groups. The results suggest responders to follow-up have similar experiences of inpatient care in New Zealand to initial responders.

Abstract

Aim

This study investigates non-response bias in an inpatient experience survey with a low response rate by comparing sociodemographic characteristics and response behaviours of initial responders with responders to follow-up, and further explores the factors contributing to non-response. Prior research suggests non-response may be endogenously related to patient characteristics.

Method

We re-contacted a convenience sample of non-responders to a nationally representative, cross-sectional inpatient survey conducted in New Zealand. Participants were given a subset of seven items drawn from the initial survey and the opportunity to disclose reasons for non-response. Responders to follow-up (n=163) were subsequently compared with responders to the initial survey (n=910) using chi-squared tests of association and logistic regression to assess differences in sociodemographic variables and substantive responses.

Results

We find no significant differences in the responses given by initial and follow-up responders. The most common reasons for non-response were “can’t remember” (33%), not receiving the survey (25%) or being too busy at the time (19%).

Conclusion

Responders to follow-up have similar experiences of inpatient care in New Zealand to initial responders. Further study is needed to strengthen inferences regarding hard-to-reach patients.

Author Information

Michael Thomson, Research Assistant, Health Services Research Centre, Victoria University of Wellington, Wellington; Megan Pledger, Senior Research Fellow, Health Services Research Centre, Victoria University of Wellington, Wellington; Richard Hamblin, Director, Health Quality Intelligence, Health Quality & Safety Commission, Wellington;
Jackie Cumming, Director, Health Services Research Centre, Victoria University of Wellington, Wellington; Essa Tawfiq, Research Fellow, School of Population Health, Department of Epidemiology and Biostatistics, The University of Auckland, Auckland.

Acknowledgements

We are grateful to KPMG, Cemplicity and Buzz Channel for their contributions to survey design, sampling and data collection, respectively. We also extend gratitude to the participants of both surveys for their time.

Correspondence

Michael Thomson, Research Assistant, Health Services Research Centre, Victoria University of Wellington, Wellington 6011.

Correspondence Email

michael.thomson@vuw.ac.nz

Competing Interests

Nil.

References

  1. Nishimura R, Wagner J, Elliott M. Alternative indicators for the risk of non-response bias: A simulation study. Int Stat Rev. 2016; 84:43–62.
  2. Hartge P. Raising response rates: Getting to yes. Epidemiology. 1999; 10:105–107.
  3. Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opinion Q. 2006; 70:646–675.
  4. Draugalis JR, Plaza CM. Best practices for survey research reports revisited: Implications of target population, probability sampling, and response rate. Am J Pharm Educ. 2009; 73:142.
  5. Johnson TP, Wislar JS. Response rates and nonresponse errors in surveys. J Am Med Assoc. 2012; 307:1805–1806.
  6. Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Q. 2008; 72:167–189.
  7. Groves RM, Singer E, Corning A. Leverage-saliency theory of survey participation: Description and an illustration. Public Opinion Q. 2000; 64:299–308.
  8. Meiklejohn J, Connor J, Kypri K. The effect of low survey response rates on estimates of alcohol consumption in a general population survey. PLoS One. 2012; 7:e35527.
  9. Lin I-F, Schaeffer NC. Using survey participants to estimate the impact of nonparticipation. Public Opinion Q. 1995; 59:236–258.
  10. Boniface S, Scholes S, Shelton N, Connor J. Assessment of non-response bias in estimates of alcohol consumption: Applying the continuum of resistance model in a general population survey in England. PloS one. 2017; 12:e0170892.
  11. Kypri K, Samaranayaka A, Connor J, et al. Non-response bias in a web-based health behaviour survey of New Zealand tertiary students. Prev Med. 2011; 53:274–277.
  12. Christensen AI, Ekholm O, Gray L, et al. What is wrong with non-respondents? Alcohol-, drug-and smoking-related mortality and morbidity in a 12-year follow-up study of respondents and non-respondents in the Danish health and morbidity survey. Addiction. 2015; 110:1505–1512.
  13. Demarest S, Van der Heyden J, Charafeddine R, et al. Socio-economic differences in participation of households in a Belgian national health survey. Eur J Public Health. 2012; 23:981–985.
  14. Volken T. Second-stage non-response in the Swiss health survey: Determinants and bias in outcomes. BMC Public Health. 2013; 13:167.
  15. Uusküla A, Kals M, McNutt L-A. Assessing non-response to a mailed health survey including self-collection of biological material. Eur J Public Health. 2010; 21:538–542.
  16. Knapp CA, Madden VL, Curtis C, et al. Assessing non-response bias in pediatric palliative care research. Palliat Med. 2010; 24:340–347.
  17. Strandhagen E, Berg C, Lissner L, et al. Selection bias in a population survey with registry linkage: Potential effect on socioeconomic gradient in cardiovascular risk. Eur J Epidemiol 2010; 25:163–172.
  18. Larsen SB, Dalton SO, Schüz J, et al. Mortality among participants and non-participants in a prospective cohort study. Eur J Epidemiol. 2012; 27:837–845.
  19. Shortreed SM, Von Korff M, Thielke S, et al. Electronic health records to evaluate and account for non-response bias: A survey of patients using chronic opioid therapy. Observational Studies. 2016; 2:24–38.
  20. Wolf H, Wulff C, Ekelund C, et al. Characteristics of first trimester screening non-responders in a high uptake population. Ultrasound Obstet Gynecol. 2015; 63:A5219.
  21. Ahacic K, Kåreholt I, Helgason AR, Allebeck P. Non-response bias and hazardous alcohol use in relation to previous alcohol-related hospitalization: Comparing survey responses with population data. Subst Abuse Treatment Prev Policy. 2013; 8:10.
  22. Mannetje At, Eng A, Douwes J, et al. Determinants of non-response in an occupational exposure and health survey in New Zealand. Aust N Z J Public Health. 2011; 35:256–263.
  23. Locke GR, Schleck CD, Ziegenfuss JY, et al. A low response rate does not necessarily indicate non-response bias in gastroenterology survey research: A population-based study. J Public Health. 2013; 21:87–95.
  24. Yousaf-Khan U, Horeweg N, van der Aalst C, et al. Baseline characteristics and mortality outcomes of control group participants and eligible non-responders in the NELSON lung cancer screening study. J Thorac Oncol. 2015; 10:747–753.
  25. Lundervold AJ, Posserud M-B, Ullebø A-K, et al. Teacher reports of hypoactivity symptoms reflect slow cognitive processing speed in primary school children. Eur Child Adolesc Psychiatry. 2011; 20:121–126.
  26. Knopman D, Roberts R, Pankratz V, et al. Risk of dementia in persons who refused to participate in a longitudinal study of cognitive aging. Neurology. 2012; 78:P01.083.
  27. Yu S, Brackbill RM, Stellman SD, et al. Evaluation of non-response bias in a cohort study of world trade center terrorist attack survivors. BMC Res Notes. 2015; 8:42.
  28. Nohlert E, Öhrvik J, Helgason ÁR. Non-responders in a quitline evaluation are more likely to be smokers– a drop-out and long-term follow-up study of the Swedish national tobacco quitline. Tob Induced Dis. 2016; 14:5.
  29. Studer J, Baggio S, Mohler-Kuo M, et al. Examining non-response bias in substance use research—are late respondents proxies for non-respondents? Drug Alcohol Depend. 2013; 132:316–323.
  30. Eriksson A-K, Ekbom A, Hilding A, Östenson C-G. The influence of non-response in a population-based cohort study on type 2 diabetes evaluated by the Swedish Prescribed Drug Register. Eur J Epidemiol. 2012; 27:153–162.
  31. Hansen E, Fonager K, Freund KS, Lous J. The impact of non-responders on health and lifestyle outcomes in an intervention study. BMC Res Notes. 2014; 7:632.
  32. Wastesson JW, Rasmussen L, Oksuzyan A, et al. Drug use among complete responders, partial responders and non-responders in a longitudinal survey of nonagenarians: Analysis of prescription register data. Pharmacoepidemiol Drug Saf. 2017; 26:152–161.
  33. Ng S-K, Scott R, Scuffham PA. Contactable non-responders show different characteristics compared to lost to follow-up participants: Insights from an Australian longitudinal birth cohort study. Matern Child Health J. 2016; 20:1472–1484.
  34. Multone E, Vader J-P, Mottet C, et al. Characteristics of non-responders to self-reported questionnaires in a large inflammatory bowel disease cohort study. Scand J Gastroenterol. 2015; 50:1348–1356.
  35. Crothers C, Smith P, Urale P, Bell A. The internet in New Zealand. Auckland, NZ: 2016.
  36. Health Quality & Safety Commission. Patient experience survey -adult inpatients: Methodology and procedures. Wellington: 2014.
  37. Etikan I, Musa SA, Alkassim RS. Comparison of convenience sampling and purposive sampling. Am J Theor Appl Stat. 2016; 5:1–4.
  38. Statistics New Zealand. Statistical standard for ethnicity. Wellington: Statistics New Zealand; 2017.
  39. Phillips AW, Reddy S, Durning SJ. Improving response rates and evaluating nonresponse bias in surveys: AMEE Guide No. 102. Med Teacher. 2016; 38:217–228.
  40. Dillman DA. Mail and internet surveys: The tailored design method, 2007 update with new internet, visual, and mixed-mode guide. John Wiley & Sons; 2011.
  41. Kays K, Gathercoal K, Buhrow W. Does survey format influence self-disclosure on sensitive question items? Comput Hum Behav. 2012; 28:251–256.
  42. Donaldson SI, Grant-Vallone EJ. Understanding self-report bias in organizational behavior research. J Business Psychol. 2002; 17:245–260.
  43. Barratt R, Levickis P, Naughton G, et al. Why families choose not to participate in research: Feedback from non-responders. J Paediatr Child Health. 2013; 49:57–62.