MEDICAL EDUCATION
Year : 2018  |  Volume : 31  |  Issue : 5  |  Page : 293-295

Performance of medical students in final professional examination: Can in-course continuous assessments predict students at risk?


1 Department of Obstetrics and Gynaecology, International Medical University, Seremban, Malaysia
2 Department of Internal Medicine, International Medical University, Seremban, Malaysia
3 Department of Paediatrics, International Medical University, Seremban, Malaysia
4 Department of Family Medicine, International Medical University, Seremban, Malaysia
5 Examination Division, International Medical University, Seremban, Malaysia

Date of Web Publication: 24-Jun-2019

Correspondence Address:
K Nagandla
Department of Obstetrics and Gynaecology, International Medical University, Seremban
Malaysia

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0970-258X.261197

  Abstract 

Background. Assessment drives students’ learning and measures the level of students’ understanding. We aimed to determine whether performance in continuous assessment can predict failure in the final professional examination.
Methods. We retrieved the in-course continuous assessment (ICA) and final professional examination results of 3 cohorts of medical students (n=245) from the examination unit of the International Medical University, Seremban, Malaysia. The ICA comprised 3 sets of composite marks derived from course work, which included a summative theory paper with short answer questions and one-best-answer questions. The clinical examination included an end-of-posting practical examination. These examinations are conducted every 6 months in semesters 6, 7 and 8 and are graded as pass/fail for each student. The final professional examination, comprising modified essay questions (MEQs), an 18-question objective structured practical examination (OSPE) and a 16-station objective structured clinical examination (OSCE), was graded as pass/fail. Whether failure in the continuous assessment predicted failure in each component of the final professional examination was tested using the chi-square test and presented as odds ratios (OR) with 95% confidence intervals (CI).
Results. Failure in ICA in semesters 6-8 strongly predicted failure in the MEQs, OSPE and OSCE of the final professional examination, with OR of 3.8-14.3 (all analyses p<0.001) and OR of 2.4-6.9 (p<0.05). However, the correlation was stronger with the MEQs and OSPE than with the OSCE.
Conclusion. ICA with theory and clinical examination had a direct relationship with students’ performance in the final examination and is a useful assessment tool.


How to cite this article:
Nagandla K, Gupta E D, Motilal T, Teng C L, Gangadaran S. Performance of medical students in final professional examination: Can in-course continuous assessments predict students at risk?. Natl Med J India 2018;31:293-5





Introduction


Assessment drives students’ learning; it measures the level of students’ understanding and is used to analyse the extent of knowledge acquired and students’ learning skills.[1] Methods of assessment include continuous or in-course assessment, formative assessment, and final or summative assessment. While formative assessment provides feedback that assists students in the preparation of topics, continuous assessments are associated with a more distributed learning effort throughout the course and are thought to promote deeper learning, greater motivation and, consequently, improved understanding of course material.[2] They attempt to ensure that students do not wait until the end of the semester or term to make efforts to study.[3]

Continuous assessments contribute to entry criteria for the final examination and affect the overall performance of the student. The weight of the in-course continuous assessments (ICAs) towards the final examination varies from 10% to 60%.[1],[4],[5] The ICAs at the International Medical University, Seremban, Malaysia are 3 sets of composite marks derived from course work, a summative theory paper and a clinical examination, conducted 6-monthly in semesters 6, 7 and 8, in which students are graded as pass/fail. The theory paper includes short answer questions and one-best-answer questions. The clinical examination includes an end-of-posting practical examination in the form of a single long case examination. The ICA contributes 30% to the final examination. The final professional examination comprises modified essay questions (MEQs), an 18-question objective structured practical examination (OSPE) and a 16-station objective structured clinical examination (OSCE), each graded as pass/fail. MEQs are short clinical scenarios with a series of questions scored against a structured format; they assess the organization of knowledge, reasoning and problem-solving. The OSPE includes 18 stations in which students perform tasks such as obtaining and interpreting data, solving problems and communicating, thus demonstrating competency in skills and/or attitudes. The 16-station OSCE samples a wide range of clinical competencies, with short OSCE stations of 5 minutes and long OSCE stations of 15 minutes. The weightage of the assessments contributing to 100% of the final score is: ICA 30%, and in the final professional examination MEQs 20%, OSPE 10% and OSCE 40%. Failure is defined as <50% of marks in an assessment component.
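The weighting scheme above can be sketched as a short calculation. This is an illustrative sketch only, not the university’s actual marking code, and the component marks shown for the student are hypothetical.

```python
# Weights of each assessment component towards the final score, as stated in the text.
WEIGHTS = {"ICA": 0.30, "MEQ": 0.20, "OSPE": 0.10, "OSCE": 0.40}
PASS_MARK = 50.0  # failure is defined as <50% in an assessment component

def final_score(marks):
    """Combine component marks (each on a 0-100 scale) using the stated weights."""
    return sum(WEIGHTS[c] * marks[c] for c in WEIGHTS)

def failed_components(marks):
    """Return the components in which the student scored below the pass mark."""
    return [c for c in WEIGHTS if marks[c] < PASS_MARK]

# Hypothetical student marks, for illustration only.
marks = {"ICA": 62.0, "MEQ": 48.0, "OSPE": 55.0, "OSCE": 58.0}
print(final_score(marks))        # 0.3*62 + 0.2*48 + 0.1*55 + 0.4*58 = 56.9
print(failed_components(marks))  # ['MEQ'] — failed one component despite passing overall
```

Note that, as the example shows, a student can pass on the weighted total while failing an individual component, which is why component-level pass/fail grading is tracked separately.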
We aimed to investigate the correlation between ICA and final examination results, and whether the ICA can identify students at risk of failure so that early remedial measures can be instituted.


Methods


ICA and final professional examination results of 3 cohorts of medical students (n=245) were retrieved from the examination unit of the International Medical University. Whether failure in ICAs predicted failure in each component of the final professional examination was tested using the chi-square test and presented as odds ratios (OR) with 95% confidence intervals (CIs).
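The association analysis can be sketched as follows: for each ICA/final-examination component pair, a 2×2 table of pass/fail counts yields an odds ratio with a Wald 95% confidence interval. This is a minimal sketch of the standard OR calculation; the counts used are hypothetical, not taken from the paper’s Table 1.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table of counts:

                     failed final   passed final
        failed ICA        a              b
        passed ICA        c              d
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for the Wald interval
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for illustration: small cell counts give a wide CI,
# as the Results section notes for this study's sample size.
or_, lo, hi = odds_ratio_ci(8, 12, 15, 210)
print(f"OR = {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```

With small cell counts (here only 8 students failing both), the interval around the OR is wide, which mirrors the wide CIs reported in the Results.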


Results


Failure in all ICAs was statistically significantly associated with failure in the MEQ and OSPE, with OR of 6.9-25.1 [Table 1]. In view of the small sample size, the confidence intervals of the ORs were rather wide. In contrast, there was no statistically significant relationship between failure in ICAs and the long and short OSCE, with the exception of the ICA for semester 6 and the short OSCE. The MEQ and OSPE assess mainly theoretical knowledge and problem-solving, while the long and short OSCE assess clinical skills. The analysis shows that failure in ICA strongly predicts failure in the MEQ/OSPE but not the OSCE. It is likely that the ICA, MEQ and OSPE assess a fairly similar domain, i.e. theoretical knowledge.
Table 1: Association between performance in in-course assessment and final professional examination component




Discussion


The aim of any medical educator is to identify students requiring remediation at the earliest possible stage. This exploratory study investigated the potential of ICAs to identify ‘at-risk’ students early and at different stages of their studies, allowing attention and possible remediation. Through the ICA system, students are continuously under assessment and in an ongoing process of learning, which can be used to predict students at risk of failure in their final examinations.[6],[7] According to Yoloye, continuous assessment aims at getting the truest possible picture of each student’s ability and at the same time helps each student to develop his or her abilities to the fullest.[8] Continuous assessment takes into account, in a systematic manner, the performance of students over a given period of time in medical school. It also has the characteristics of being comprehensive, by making use of many evaluation instruments, and cumulative, by taking into consideration all past records to compute the final grades of the students.[8]

We observed that the ICA scores correlated well with the final examination theory (MEQs and OSPE) and clinical examination (OSCE) marks of students across all 3 semesters (6-8), with OR of 3.8-14.3 (all analyses p<0.001) and OR of 2.4-6.9 (p<0.05). This supports the good predictive validity of our ICA examinations. The correlation was stronger with the MEQ and OSPE than with the OSCE. This can be attributed to external factors such as stress among students examined by external examiners who have little or no contact with them, resulting in decreased performance.

Although we found ICA scores to be a valuable predictor of high-risk students, we do not consider them the only means of detecting all potential strugglers. Nevertheless, at this stage a workable system in the form of ICA scores for early detection and remediation of high-risk students may be valuable. Academic remediation instituted at this stage may help to overcome the stigma of failure.[9] Timely intervention for poor performance helps students deal with adverse learning and behaviour patterns promptly, before these adversely affect their clinical performance. It has also been observed that weak medical students become doctors, highlighting the importance of early identification of underperforming students.[10]

Remediation measures for failing students have been developed in many medical schools. These measures are tailored to the nature of the problems students encounter and to the availability and interest of staff. Most involve 3 steps: (i) identification of the problem; (ii) implementing remedial measures; and (iii) assessing the effectiveness of remedial measures.[11],[12] However, the evidence on the effectiveness of such measures is mixed. A study of assessment-focused revision and reassessment among poor performers in the OSCE identified no improvement in subsequent performances; however, the data were derived from 1 medical school.[11] This finding differs from a few studies in which remedial measures for theory or clinical examinations were evaluated and found to be effective.[13],[14] A recent systematic review by Cleland et al. found that the majority of remedial interventions are targeted at learners in the latter years of medical school.[10] The remedial measures undertaken at our university include activities such as case presentations, small group discussions, tutorials and feedback, offered when students have failed their professional examinations at the end of semester 9, which is close to the time of completing their course. We believe that earlier remedial interventions have the potential to improve the performance of many struggling students. Identifying struggling students with ICAs will help initiate early remedial measures, giving students a sense of control over their learning and performance. While remediation serves to improve strategic learning, resource management and internal motivation, there must also be a place to tackle non-academic problems such as language, communication and interprofessional skills. This will enable early remedial action through targeted individual academic and pastoral support.[15]

Limitations

Our study was done at a single medical school and the results may not be generalizable to other medical schools with different curricula and student profiles. We are mindful that, besides continuous assessment, there are variables such as learning styles and pre-academic entry qualifications that we did not observe, which may influence the final results. With regard to the predictive ability of pre-admission grades, there are mixed results, with some studies showing low discriminatory power and suggesting that individual mathematics, biology or other science grades may be more appropriate potential predictors than comprehensive high school grades.[16] This contrasts with our finding that students with better grades in their pre-university examinations performed better in their final examinations, regardless of the science subjects they took at the pre-university level.[17]

Conclusions

We have shown that there is a definite correlation between the performance of medical students in ICA scores and final summative examination scores. More importantly, this allows medical educators to identify ‘at-risk’ students so that remediation measures can be undertaken. The earlier students are ‘flagged’, the more likely we are to obtain a positive outcome. We believe that we can do this during the early semesters of the clinical years, and we are relatively confident of identifying ‘at-risk’ students 2 years in advance of the final exit examination. We believe that this correlation between ICA scores and the final examination scores is a true reflection of future student performance at our university. This predictor will assist medical educators in identifying ‘at-risk’ students at key points in the programme so that effective remediation measures can be instituted. We propose that ICAs be developed to identify underperforming students so that medical schools can initiate early remedial measures for those who fail and, for students who continue to fail despite adequate remedial interventions, advise them to seek alternative careers.


Acknowledgements


We sincerely thank the International Medical University and the examination unit, Clinical School, for their valuable assistance in entering all the data in SPSS format.

Conflicts of interest. None declared.



References

1. Yahya SA, Yamin SB. Differences and similarity of continuous assessment in Malaysian and Nigerian universities. J Edu Pract 2014;2:73-82.
2. O’kwu EI, Orum CC. Effect of continuous assessment scores on the final examination scores obtained by students at the Junior Secondary School (JSS) level in mathematics. Educ Res 2012;3:706-9.
3. Cruickshank JK, Barritt PW, Mcbesag F, Waterhouse N, Goldman LH. Student views on continuous assessment at Birmingham University Medical School. Br Med J 1975;4:265-7.
4. Kaddam L, Elnimeiri MKM. Students’ perceptions about the impact of continuous assessment in learning physiology in Sudanese faculty of medicine and health sciences. Int J Educ Res Dev 2013;2:228-32.
5. Adeniyi OS, Ogli SA, Ojabo CO, Musa DI. The impact of various assessment parameters on medical students’ performance in first professional examination in physiology. Niger Med J 2013;54:302-5.
6. Carrillo MT, Pérez J. Continuous assessment improved academic achievement and satisfaction of psychology students in Spain. Teach Psychol 2012;39:45-7.
7. Ward C. Designing a scheme of assessment. London: Stanley Thornes; 1980.
8. Yoloye EA. Continuous assessment: A simple guide for teachers. London: Cassell; 1984.
9. Winston KA, Van der Vleuten CP, Scherpbier AJ. At-risk medical students: Implications of students’ voice for the theory and practice of remediation. Med Educ 2010;44:1038-47.
10. Cleland J, Leggett H, Sandars J, Costa MJ, Patel R, Moffat M. The remediation challenge: Theoretical and methodological insights from a systematic review. Med Educ 2013;47:242-51.
11. Frellsen SL, Baker EA, Papp KK, Durning SJ. Medical school policies regarding struggling medical students during the internal medicine clerkships: Results of a national survey. Acad Med 2008;83:876-81.
12. White CB, Ross PT, Gruppen LD. Remediating students’ failed OSCE performances at one school: The effects of self-assessment, reflection, and feedback. Acad Med 2009;84:651-4.
13. Sayer M, Chaput De Saintonge M, Evans D, Wood D. Support for students with academic difficulties. Med Educ 2002;36:643-50.
14. Winston KA, Van der Vleuten CP, Scherpbier AJ. An investigation into the design and effectiveness of a mandatory cognitive skills programme for at-risk medical students. Med Teach 2010;32:236-43.
15. James D, Chilvers C. Academic and non-academic predictors of success on the Nottingham undergraduate medical course 1970-1995. Med Educ 2001;35:1056-64.
16. Al-Nasir FA, Robertson AS. Can selection assessments predict students’ achievements in the premedical year? A study at Arabian Gulf University. Educ Health (Abingdon) 2001;14:277-86.
17. Radhakrishnan AK, Lee N, Young ML. The influence of admission qualifications on the performance of first and second year medical students at the International Medical University. Int e-J Sci Med Educ 2012;6:10-17.



 
 