7th September 2012, Volume 125 Number 1361

Phillippa Poole, Boaz Shulruf, Ben Harley, John Monigatti, Mark Barrow, Papaarangi Reid, Caitlin Prendergast, Warwick Bagg

Medical schools develop selection processes to serve multiple aims. Among these are to identify those with the potential to become good doctors, and to screen out those with unfavourable traits or who are unlikely to complete the programme. Furthermore, any process needs to rank eligible applicants fairly in order to offer the limited number of places. The student body resulting from any selection process must be competent to practise effectively as junior doctors, with a base for further training in any branch of medicine.1 Finally, the future specialist workforce must be sufficiently diverse to meet future community health needs.2,3

To assist with meeting the final aim, both NZ schools have two affirmative entry pathways—one for Māori or Pacific students, and one for students from a rural background. Regardless of pathway, the choice of tools for the process of medical student selection remains complex and controversial.4–6

At the University of Auckland, the standard process is that medical school applicants are ranked for the offer of a place by combining scores from their Grade Point Average (GPA, weighted 60%), the Undergraduate Medicine and Health Sciences Admission Test (UMAT,7 weighted 15%), and a structured interview. The interview is weighted at 25% for the majority of applicants, other than the few deemed 'must have' or 'must not have'.
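The weighted combination above can be sketched as follows. This is an illustrative reconstruction rather than the faculty's actual implementation, and the 0–100 scaling of each component score is an assumption made here for simplicity.

```python
# Illustrative sketch of the Auckland weighting scheme (60% GPA, 15% UMAT,
# 25% interview). Score scales are hypothetical: each component is assumed
# to be pre-scaled to 0-100 before weighting.

WEIGHTS = {"gpa": 0.60, "umat": 0.15, "interview": 0.25}

def composite_score(gpa, umat, interview):
    """Weighted composite used to rank applicants (components on 0-100)."""
    parts = {"gpa": gpa, "umat": umat, "interview": interview}
    return sum(WEIGHTS[k] * v for k, v in parts.items())

def rank_applicants(applicants):
    """Rank applicants (dicts of component scores) from highest to lowest."""
    return sorted(
        applicants,
        key=lambda a: composite_score(a["gpa"], a["umat"], a["interview"]),
        reverse=True,
    )
```

Because GPA carries 60% of the weight, a strong GPA with a modest interview can outrank the reverse, which is consistent with GPA being the dominant ranking tool.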

To be eligible for consideration, an applicant must have achieved an average of at least a B+ grade (GPA of 6) across their eight courses in Overlapping Year 1 (OLY1) at The University of Auckland, or over the last 2 years of an acceptable prior degree. After a ranking based on GPA, interviews are offered to about twice as many applicants as there are places; for example, 420 interviews were conducted in 2011 for 2012 entry. The presence of an interview is the most significant difference in process from that used by the University of Otago.

In early 2011, driven in part by increasing medical student numbers, a review of the place of the interview in Auckland medical student selection was requested by the then Dean, Professor Iain Martin. An Interview Working Party (see Appendix 1) was established.

This paper describes the main deliberations and recommendations of this Working Party, with the dual aims of disseminating this information to stakeholders and assisting schools debating the place of interview in medical student selection.

Evaluation of the current Auckland interview

The interview has been an intrinsic part of Auckland medical student selection since the first cohort was selected in 1967. The first Dean of the medical school introduced an interview at the outset, reportedly to identify 'bad buggers'.8 The format of the interview has varied over time, based upon the decisions of those responsible for admissions: formats have included a single longer interview with one or two interviewers; two short interviews; and an observed group task. Since 2000, there has been a single semi-structured interview with two interviewers.

In general, interviews are acknowledged to have low reliability. Among the reasons for this are low agreement between raters, rater variability, rater-student effects, and potential for candidates to adopt socially desirable stances in response to questions.6 More structured interviews show higher reliability,9 and seeking examples of past behaviour may produce more honest responses than asking candidates what they would do if faced with a particular scenario.10

To this end, the current 25-minute interview is qualitative and semi-structured, with applicants assessed on five domains: maturity; communication; awareness and knowledge; career choice; and well-roundedness. Scoring is based on seven categorical descriptions in each domain, as this is associated with less discrepancy between interviewers.8 In addition, the interviewers make a global judgement about the suitability of the applicant using five categories: 'exceptional' (= 'must have'); 'highly desirable'; 'acceptable'; 'uncertain acceptability'; and 'unacceptable' (= 'must not have').

Other measures used at Auckland to enhance reliability include: calibration sessions for interviewers; new interviewers being paired with more experienced interviewers; each interviewer scoring independently in the first instance, before the final score is agreed. Interviewer performance is reviewed each year, with poor interviewers not invited back. Discrepant scores (>1 category) result in an interviewee being offered a second interview (2–3 each year). In the uncommon event of a formal appeal, this is very rarely upheld. Despite these measures, scores still vary considerably from year to year.11

In terms of validity, the interview domains have not changed substantially since the school started in 1968, which may suggest a degree of face and construct validity. Yet we have found recently that the interview does not predict later achievement in the medical programme, withdrawal, or failure to complete Year 4 on time.11 Further, the effect of the interview on ethnicity and gender is neutral, with most change in ethnicity occurring through the admixture of students from affirmative pathways. Interview scores correlate negatively with UMAT and GPA, suggesting the interview measures something different,11 although what that is remains unclear.

The number of applicants deemed to be ‘must have’ by each interviewer is as high as 25 per year. Of these, about half would have been selected anyway, leaving around 10–12 who move up the rankings and receive an offer. The Working Party found little evidence that the performance of these students was distinguishable from others in their class during medical school.

By contrast, the 'must not have' group comprises 2–4 students per year, mostly due to an inability to adjust to the interview situation rather than to frank mental health or dysfunctional personality issues; it is acknowledged that the interview can never check for these fully.12 To date, the outcomes of these students have not been tracked. In addition to the reliability and validity considerations above, the Working Party identified other potential benefits or disadvantages of the current interview, and foreseeable harms in dropping it. These are summarised in Table 1.

Table 1. Considerations for and against including an interview as part of medical student selection

Shaping of applicant pool
  • Benefit of current interview: Sends an important signal to applicants of commitment to excellence in communication and interpersonal skills.
  • Disadvantage of current interview: Otherwise acceptable students may avoid applying, fearing that the interview may disadvantage them.
  • Harm if no interview: No differentiating feature from schools without an interview.

Richness and diversity of cohort
  • Benefit of current interview: Allows selection of some applicants with an acceptable GPA plus exceptional qualities or life experience. May favour women,13 hence correcting for historical male predominance.
  • Disadvantage of current interview: Interviewers may be biased towards selecting 'people like me', thus discounting more diverse candidates.
  • Harm if no interview: Several applicants with exceptional qualities or life experiences will be missed.

Commitment to local priorities
  • Benefit of current interview: May explore commitment, e.g. to rural practice.
  • Disadvantage of current interview: Students may rehearse desirable answers.
  • Harm if no interview: No opportunity to explore commitment.

Screening for poor interpersonal skills
  • Benefit of current interview: May deter or exclude people with very poor interpersonal skills; the interview may have already done its job before scores are used for ranking.
  • Disadvantage of current interview: Interviewers may be 'charmed' by highly unsuitable candidates, who are then scored highly.
  • Harm if no interview: Would need another measure of personal qualities, or else risk admitting those with poor interpersonal skills who are currently excluded.

Community engagement
  • Benefit of current interview: Wider community feels engaged in selecting future doctors, e.g. a rural doctor or student may sit in on rural applicant interviews.
  • Disadvantage of current interview: A narrow pool of interviewers could undermine notions of true community engagement.
  • Harm if no interview: Loss of engagement and transparency.

Student engagement
  • Benefit of current interview: Strong medical student support of, and pride in, the medical interview.
  • Disadvantage of current interview: Cost, stress and inconvenience of the interview.
  • Harm if no interview: Loss of early engagement with faculty and programme.

Staff engagement
  • Benefit of current interview: Faculty feels engaged in selecting future doctors and the medical programme.
  • Disadvantage of current interview: Takes staff away from other academic endeavours.
  • Harm if no interview: Loss of opportunity for engagement between student and staff.

Professional induction
  • Benefit of current interview: Applicants must present professionally. Allows professional judgement to be exerted by faculty members.
  • Disadvantage of current interview: Interview is situation-specific and may miss those who would be professional in a clinical setting.
  • Harm if no interview: Missed opportunity; may not happen until much later in the programme.

Student withdrawal
  • Benefit of current interview: The occasional applicant may fail the interview on purpose, avoiding loss of face to their community.
  • Disadvantage of current interview: Interview may be difficult for very shy students.
  • Harm if no interview: Applicant may feel obligated to take a place that is offered, with resultant opportunity costs.

Weighting of interview
  • Benefit of current interview: Allows a lower weighting of UMAT (or any other new tools) until shown to be valid.14
  • Disadvantage of current interview: Continuous quality improvement approach needed to justify methods and weighting.
  • Harm if no interview: If UMAT weighting were increased, there may be cost implications for applicants (resits, preparation courses).

Cost
  • Benefit of current interview: Cost is relatively low compared with the cost of admitting a student who does not complete, or is highly unsuitable for medicine.
  • Disadvantage of current interview: Cost per successful applicant is about $75 (mainly casual staff, parking, catering), plus opportunity costs of interviewers' time.
  • Harm if no interview: Savings in the order of $12,500, and staff time; other options such as the Multiple Mini Interview15,16 are likely to be more expensive.
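As a back-of-envelope check, the two cost figures above are mutually consistent. The $75 and $12,500 figures come from the table; the implied number of successful applicants per year is our own inference, not a figure reported anywhere in the paper.

```python
# Sanity check of the Table 1 cost figures. The dollar amounts are from the
# table; the implied cohort size is inferred here, not reported.

COST_PER_SUCCESSFUL_APPLICANT = 75   # NZ$, mainly casual staff, parking, catering
REPORTED_ANNUAL_SAVINGS = 12_500     # NZ$ saved if the interview were dropped

# Dividing the savings by the per-applicant cost gives roughly how many
# successful applicants the savings figure assumes.
implied_successful_applicants = REPORTED_ANNUAL_SAVINGS / COST_PER_SUCCESSFUL_APPLICANT
print(round(implied_successful_applicants))  # → 167
```

An implied intake of roughly 167 is plausible for the Auckland programme of the period, which supports the internal consistency of the table.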

What options exist to improve the selection process?

The current selection process bears some resemblance to current best practice,17 namely a battery of tools involving informed self-selection (in Auckland's case, undertaking a university health sciences first year; website information; an interview question about what the applicant has done to find out about a medical career); academic achievement (GPA); general cognitive ability (UMAT); and aspects of personality and interpersonal skills (interview).

How tools are combined to rank students will be the subject of ongoing debate, as internationally there is no agreement on the best method.18 Notwithstanding this, the selection system currently in use at Auckland is associated with a low attrition rate (fewer than 3% of mainstream admissions), much lower than in the early 1990s when the rate was about 10%.19 At that time, half of the students who left did so through academic failure, and the other half withdrew for other reasons.

The Working Party believes that one reason for the low attrition rate is that, to be selected, applicants must undertake and achieve sufficiently at a number of sequential steps over at least a year (viz. enter and complete a first year or degree programme; sit UMAT during that time; attend an interview). This observation is consistent with data from a controlled experiment in The Netherlands, which found attrition was far lower in medical students selected with dedicated medical school admission tests (admission GPA as a threshold, then grading on an interview, a written motivational statement, a CV, and a general knowledge test) than in students selected on GPA alone (OR 0.56, 95% CI 0.39–0.80).20
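To make the reported odds ratio concrete, it can be converted into an attrition rate under an assumed baseline. The 10% baseline used below is illustrative only (chosen to echo Auckland's early-1990s rate), not a figure from the Dutch study.

```python
# Translate the reported odds ratio (OR 0.56) into an attrition rate,
# given a hypothetical baseline rate. The 10% baseline is illustrative only.

def apply_odds_ratio(baseline_rate, odds_ratio):
    """Return the event rate implied by applying an odds ratio to a baseline."""
    baseline_odds = baseline_rate / (1 - baseline_rate)
    new_odds = odds_ratio * baseline_odds
    return new_odds / (1 + new_odds)

baseline = 0.10  # hypothetical attrition under GPA-only selection
rate = apply_odds_ratio(baseline, 0.56)
print(f"{rate:.1%}")  # about 5.9% attrition under test-based selection
```

Note that an odds ratio applies to odds, not directly to rates, so the implied rate is slightly above 0.56 × 10%; at low baseline rates the two are nearly the same.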

On the other hand, whether or not dropping the interview would make any difference to the attrition rate is uncertain. The attrition rate at Otago, which does not have an interview, is also relatively low.

Multiple Mini Interviews (MMIs) are increasingly used by medical schools to select students, as they have been shown to have higher reliability, derived from increasing the number of stations rather than the number of raters at any one station.15 MMIs have some predictive power for students' later clinical examination results in medical school21 and licensing examinations.22

Briefly, applicants attend several stations, at each of which they respond for 8–10 minutes to a specific scenario placed in front of them or to specific questions posed by an interviewer. While referred to as 'interviews', the activities at each station may be wide-ranging, and candidates are rated on their performance using a standard rating scale. Thus, MMIs allow broad sampling of a candidate's abilities, dilute the effects of chance and interviewer bias, and allow a candidate to recover from a poor station with an independent interviewer.15

Other advantages may include fewer problems with security violations,23 and being less subject to influence by coaching.24 Disadvantages include the time and expense to develop and deliver the MMI process. MMIs already form part of the Māori and Pacific pathway selection process for Auckland's health programmes. However, the information gained during these MMIs is used to determine at which level a student should be advised to enter tertiary education to maximise their chances of success, rather than to whom places in medicine should be offered.

GPA is by far the most reliable and most predictive tool for future performance,14,25 with UMAT far behind in this regard.19 Furthermore, there is a scarcity of tools with which to test interpersonal skills and personality reliably. In the past, a principal’s report was used to inform decision-making for school-leaver applicants, but this was dropped when selection became based on university, not school, performance.

This decision is in accordance with the literature that shows personal references and statements to be unreliable and of no predictive value.26,27 A potentially more robust tool is the personal qualities assessment (PQA), which is a portfolio of psychometric tests designed to predict performance in medical school and professional progress.28 This was developed in Newcastle by the same group that developed UMAT, but has yet to be validated.29

A lottery, once a certain GPA has been achieved, is appealing as it might be fairer. Against this, students selected using a lottery system are more likely to drop out of their medical programme than those selected using a combination of academic and non-cognitive tools.30 Although the reasons are not entirely clear, this may relate to student commitment to the programme. In the NZ setting, this would be unacceptable for funding and workforce reasons, as universities cannot backfill the places of students who leave.

Conclusions and future directions

Members of the Working Party acknowledged how their own conflicts of interest might lead to bias; in particular, students and graduates of the Auckland medical programme had benefitted from having an interview in the selection process and were not likely to advocate dropping it. They agreed with the previous Dean's comment (I. Martin, personal communication, 2011) that any selection policy would be based on a 'fusion of culture, beliefs and evidence'. This review did not canvass a wider stakeholder perspective (for example, through surveys): first, for reasons of expediency; second, because it seemed unlikely this would add much to the debate in terms of considerations or ways forward.

Accepting the limitations above, the main findings of the Working Party were that the Auckland interview in its current format is not particularly valid or reliable in terms of its ability to predict future success at medical school, but at least it is not as resource-intensive as initially thought. The group identified additional considerations when deciding the possible benefits and harms of retaining or dropping the interview, as outlined in Table 1. While testing of the constructs in Table 1 would require future research, it may act as a checklist for others evaluating admission policies.

The Working Party suggested alternative ways that might improve interview validity, reliability, feasibility and acceptability, but highlighted that any format would inevitably result in ‘trade-offs’ among these.

Among the scenarios thought to be viable in the Auckland setting were:

  • Status quo, but reduce the number of students offered an interview to, say, 1.5 times the number of places, rather than 2. This would preserve current feasibility, reliability, validity, local culture and beliefs, but reduce further the very small chance of being able to move to the top of the rankings.
  • Have two shorter interviews, each with one interviewer, assessing overall attributes but with different specific foci. This may increase reliability as there is more sampling, but could decrease construct validity, and introduce other quality control problems. Furthermore, having shorter stations will discriminate against nervous students who take some time to ‘warm up’.
  • Use an MMI format with 6–8 stations. This could be expensive and hard to staff if used for all students; some types of stations may ameliorate this (e.g. a personal statement or videoed interaction with a standardised patient who might score the performance).
  • Use the interview dichotomously, in that applicants are either 'acceptable' or 'not acceptable'. Using a ranking based on GPA and UMAT, the highest-ranking candidates are offered a relatively short interview with one experienced faculty member, who decides if they are 'acceptable' or 'uncertain'. A major effort, in the form of an MMI, is then dedicated to further assessment of those about whom there is uncertainty from this first interview, or whose ranking scores are nearer the cut-off point for an offer of a place. This would retain many of the qualitative benefits of the interview, remove the ability to move to the top of the rankings, and potentially allow resources to be used more effectively; that is, not wasted on obtaining information of no benefit. Admission would then be offered, in order, to applicants ranked on GPA and UMAT and deemed 'acceptable' after the interview process.

At the Board of Studies (Medical Programme) meeting in February 2012, it was decided that as the pros of continuing with an interview outweighed the cons, an interview would remain as part of Auckland medical student selection. In particular, the Board favoured further exploration of the last scenario. Interestingly, this is not that far from the initial use of the interview to screen out unsuitable candidates, proposed by Dr Cecil Lewis.8

We wish to notify potential applicants and other stakeholders that the present interview format will continue until such time as a ‘better’ format and process is determined.


Medical schools need to justify their range of selection tools and processes. This paper describes the selection tools used at one university in New Zealand (Auckland), which combine a measure of academic achievement, score on a test of general cognitive ability, and score in a structured interview. Further, it describes considerations in justifying the decision to continue with an interview as part of the selection process. This information may be of use to stakeholders in the Auckland medical programme, and to other schools evaluating their admission tools.

Competing Interests

None declared.


  1. Australian Medical Council. Assessment and Accreditation of Medical Schools: Standards and Procedures. In: Part 3; 2009.
  2. Donnon T, Paolucci EO. A generalizability study of the medical judgment vignettes interview to assess students' noncognitive attributes for medical school. BMC Med Educ 2008;8:58.
  3. Gorman D, Monigatti J, Poole P. On the case for an interview in medical student selection. Int Med J 2008;38:621–3.
  4. Prideaux D, Roberts C, Eva K, et al. Assessment for selection for the health care professions and specialty training: Consensus statement and recommendations from the Ottawa 2010 Conference. Medical Teacher 2011;33:215–23.
  5. Edwards JC, Elam CL, Wagoner NE. An Admission Model for Medical Schools. Academic Medicine 2001;76:1207–12.
  6. Poole P, Moriarty H, Wearn A, Wilkinson T, Weller J. Medical student selection in New Zealand: looking to the future. N Z Med J 2009;122(1306):88–100. http://journal.nzma.org.nz/journal/122-1306/3884/content.pdf
  7. UMAT Undergraduate Medicine & Health Sciences Admission Test. 2010. (Accessed 12 August 2010, at http://umat.acer.edu.au/)
  8. Gorman D, Poole P, Monigatti J. On the case for an interview in medical student selection. Int Med J 2008;38:621–2.
  9. Lievens F, Buyse T, Sackett P. The operational validity of a video-based situational judgement test for medical college admissions: illustrating the importance of matching predictor and criterion construct domains. J Appl Psych 2005;90:442–52.
  10. Taylor P, Small B. Asking applicants what they would do versus what they did do: a meta-analytic comparison of situational and past behavioral employment interview questions. J Occup Organ Psychol 2002;75:277–94.
  11. Shulruf B, Poole P, Wang G, Rudland J, Wilkinson T. How well do selection tools predict performance later in a medical programme? Adv Health Sci Educ Theory Pract 2011;Sep 3.
  12. Knights J, Kennedy B. Medical school selection: screening for dysfunctional tendencies. Med Educ 2006;40:1058–64.
  13. Puddey I, Mercer A, Carr S, Louden W. Potential influence of selection criteria on the demographic composition of students in an Australian medical school. BMC Med Educ 2011;11:97.
  14. Poole P, Shulruf B, Rudland J, Wilkinson T. Comparison of UMAT scores and GPA in prediction of performance in medical school: a national study. Med Educ 2012;46:163–71.
  15. Eva K, Rosenfeld J, Reiter H, Norman G. An admissions OSCE – the Multiple Mini-Interview. Med Educ 2004;38:314–26.
  16. Rosenfeld J, Reiter H, Trinh K, Eva K. A cost efficiency comparison between the multiple mini-interview and traditional admissions interviews. Adv Health Sci Educ Theory Pract 2008;13:43–58.
  17. Bore M, Munro D, Powis D. A comprehensive model for the selection of medical students. Med Teach 2009;31:1066–72.
  18. Adam J, Dowell J, Greatrix R. Use of UKCAT scores in student selection by U.K. medical schools, 2006-2010. BMC Med Educ 2011;11:98.
  19. Collins J, White G. Selection of Auckland medical students over 25 years: a time for change? Med Educ 1993;27:321–7.
  20. O’Neill L, Hartvigsen J, Wallstedt B, Korsholm L, Eika B. Medical school dropout - testing at admission versus selection by highest grades as predictors. Med Educ 2011;45:1111–20.
  21. Eva K, Reiter H, Rosenfeld J, Norman G. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med 2004;79:S40–S2.
  22. Reiter H, Eva K, Rosenfeld J, Norman G. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ 2007;41:378–84.
  23. Reiter H, Salvatori P, Rosenfeld J, Trinh K, Eva K. The effect of defined violations of test security on admissions outcomes using multiple mini-interviews. Med Educ 2006;40:36–42.
  24. Griffin B, Harding D, Wilson I, Yeomans N. Does practice make perfect? The effect of coaching and retesting on selection tests used for admission to an Australian medical school. MJA 2008;189:270–3.
  25. McManus I, Smithers E, Partridge P, Keeling A, Fleming P. A levels and intelligence as predictors of medical careers in UK doctors: a 20 year prospective study. BMJ 2003;327:139–42.
  26. Wagoner N. Admission to medical school: selecting applicants with the potential for professionalism. In: Stern D, ed. Measuring Medical Professionalism. Oxford: Oxford University Press; 2006.
  27. Powis D. How to do it: select medical students. BMJ 1998;317:1149–50.
  28. Powis D, Bore M, Munro D. Selecting medical students: Evidence based admissions procedures for medical students are being tested. BMJ 2006;332:1156.
  29. Dowell J, Lumsden M, Powis D, et al. Predictive validity of the personal qualities assessment for selection of medical students in Scotland. Med Teach 2011;33:e485–8.
  30. Urlings-Strop L, Stijnen T, Themmen A, Splinter T. Selection of medical students: a controlled experiment. Med Educ 2009;43:175–83.