The IELTS Research Report, Benchmarking English Standards Across Professions and Professional University Degrees by Müller and Brenner, investigates the English proficiency standards set for professional registration and university degree admissions across six English-speaking countries: Australia, Canada, Ireland, New Zealand, the United Kingdom, and the United States.
The study examines how minimum English language proficiency test scores are benchmarked for admission into linguistically demanding professions, including education, law, medicine, nursing, psychology, and social work, and highlights inconsistencies in test score equivalence practices.
English proficiency standards: discrepancies in practice
Key findings reveal that minimum English proficiency scores required by professional bodies for registration are generally higher than those set by universities for corresponding degree programs.
For instance, health professions such as medicine and nursing typically require an average IELTS score of around 7.0 for professional registration, whereas university admission standards for the corresponding degrees are often lower, averaging around 6.5.
This discrepancy raises concerns about whether graduates are adequately prepared for the communication demands of professional practice.
Inconsistencies in test equivalencies
The study also explores how different English language tests, such as IELTS, TOEFL, PTE, C1 Advanced, OET, and the Duolingo English Test (DET), are used by universities and professional bodies, and it highlights substantial inconsistencies in how these tests are perceived and equated.
For example, a TOEFL score considered equivalent to IELTS 7.0 by one institution might be equated to 6.5 or 7.5 by another; PTE and DET scores likewise showed wide variability in equivalency practices.
Such inconsistencies undermine the validity and fairness of using multiple tests for high-stakes purposes such as university admissions and professional licensing.
Factors driving variability across countries and institutions
The report identifies several factors contributing to these inconsistencies, including differences in institutional priorities, country-specific norms, and variations in test acceptance policies.
Higher-ranked universities often set stricter English proficiency requirements, potentially reflecting a focus on maintaining academic performance standards. By contrast, the United States was notable for lower average professional registration standards than the other countries, highlighting the impact of decentralised regulation.
Consequences for test takers
Poor test-to-test equivalency practices pose challenges for international students and professionals navigating the system. The report argues that inconsistent equivalence tables can lead to inequitable access, as test takers may seek "easier" pathways by choosing tests with more favourable equivalency outcomes.
This variability also undermines the credibility of English proficiency testing and sends mixed messages about proficiency expectations.
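To make the equivalency problem concrete, it can be sketched as a pair of lookup tables. The institution names and score thresholds below are purely illustrative assumptions, not figures from the report; the point is only that the same TOEFL score can be read as different IELTS bands depending on whose table is consulted.

```python
# Hypothetical equivalence tables: the minimum TOEFL iBT score each
# institution treats as equivalent to a given IELTS band.
# All names and numbers here are illustrative, not from the report.
equivalence_tables = {
    "University A": {6.5: 79, 7.0: 94, 7.5: 102},
    "University B": {6.5: 88, 7.0: 100, 7.5: 110},
}

def ielts_band_for_toefl(table: dict, toefl: int):
    """Return the highest IELTS band whose TOEFL threshold is met, or None."""
    met = [band for band, threshold in table.items() if toefl >= threshold]
    return max(met) if met else None

# The same TOEFL score of 95 is read as IELTS 7.0 by one institution
# and only 6.5 by the other.
for name, table in equivalence_tables.items():
    print(name, ielts_band_for_toefl(table, 95))
```

A candidate comparing these two (hypothetical) tables has an obvious incentive to apply wherever their existing score converts most favourably, which is exactly the "easier pathway" effect the report describes.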
Recommendations for unified standards
The authors advocate for a more unified and evidence-based approach to setting minimum English proficiency scores and applying test equivalencies. Greater collaboration among universities, professional bodies, and test developers is recommended to enhance consistency and fairness. This would ensure that proficiency standards accurately reflect the communication demands of academic and professional contexts, ultimately improving outcomes for all stakeholders.
In conclusion, the report underscores the critical need to address misalignments and inconsistencies in English proficiency requirements. A systematic and standardised approach would better prepare students for professional practice, reduce inequities in access, and enhance the credibility and transparency of language testing systems.