Favorable Indices for Assessing the Performance of University Students in the Humanities and the Extent of Attention They Receive

Document Type : Research Paper

Authors

1 Ph.D. in Curriculum Studies in Higher Education, University of Isfahan, Faculty of Education and Psychology, Isfahan, Iran.

2 Professor, Department of Education, University of Isfahan, Faculty of Education and Psychology, Isfahan, Iran.

3 Associate Professor, Department of Curriculum Planning, University of Tehran, Faculty of Educational Sciences and Psychology, Tehran, Iran.

4 Associate Professor, Department of Higher Education Management, Higher Education Research and Planning Institute.

Abstract

The present study aimed to identify the indices of desirable evaluation of students' performance in university humanities courses and the extent of attention paid to them. We used a sequential exploratory mixed-methods design of the instrument-development type. In the qualitative phase, a qualitative case study method was employed; in the quantitative phase, a descriptive survey method. For the qualitative phase, semi-structured interviews were conducted with 20 distinguished and qualified experts and university faculty members specializing in evaluation. The quantitative statistical population comprised 360 university students, selected using a multistage cluster sampling method. The results indicated that the interviewees outlined fourteen salient evaluation indices, and it can be argued that the recognized indices can be used to evaluate and revise course evaluation across the different majors of the humanities.
Introduction
Assessment of students’ learning has been introduced as an influential element of the curriculum, such that any weakness in assessment, which is mainly placed at the end of the curriculum, can lead to the failure of a curriculum. Assessment is defined as value-based judgment. It includes two main components: 1) collecting data using measurement tools that include relevant criteria; 2) using the results to judge or decide on goals based on agreed standards (Alderman et al., 2014). The effects of assessment on students’ learning have been widely discussed. Cohen and Sampson (1999) suggested that assessment has a great impact on learning in formal courses. It plays an important role both in what students learn (the content) and in how they approach learning (how much and in what way) (Dai, Matthews & Reyes, 2020).
The research results of Murillo and Hidalgo (2020), Rasooli, Zandi and DeLuca (2018), Xu and Brown (2016), DeLuca et al. (2016), and DeLuca (2012) showed that observing the principle of justice is one of the important principles in professors’ methods of assessment. Furthermore, Tierney (2014, 2016) considers fairness in assessment necessary to ensure justice, which supports the compatibility of assessment with students’ needs and characteristics. Assessing how students learn is considered an essential component of effective education and a key path to achieving important improvements in students’ abilities. However, we should know which indices are relevant for determining assessment quality. Considering the lack of research in this field, as well as limitations such as the inadequacy of existing studies, it appears necessary to conduct such research.
Methodology
This applied study used an exploratory sequential mixed-methods design to develop an instrument. Qualitative data were collected and analyzed to inform the development of a questionnaire. The qualitative phase was conducted as a qualitative case study, and the quantitative phase as a descriptive survey. Experts and key informants in assessment were selected via purposive sampling, which continued until theoretical saturation of the data; 20 experienced professors in the field of assessment were thus recruited. The statistical population in the quantitative phase included all third- and fourth-year undergraduate students of the University of Isfahan in the 2019-2020 academic year. Sampling in the quantitative phase was multi-stage cluster sampling, with the sample size determined from the table of Krejcie and Morgan (1970).
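The multi-stage cluster sampling described above can be sketched in code. This is a minimal illustration only; the faculty names, student identifiers, cluster counts, and `seed` below are hypothetical and not taken from the study.

```python
import random

def multistage_cluster_sample(population, n_clusters, n_per_cluster, seed=0):
    """Stage 1: randomly draw clusters (e.g. faculties or class groups).
    Stage 2: randomly draw students within each chosen cluster."""
    rng = random.Random(seed)
    chosen_clusters = rng.sample(sorted(population), n_clusters)
    sample = []
    for cluster in chosen_clusters:
        students = population[cluster]
        sample.extend(rng.sample(students, min(n_per_cluster, len(students))))
    return sample

# Hypothetical sampling frame: 4 faculties with 10 students each.
population = {f"faculty_{i}": [f"f{i}_s{j}" for j in range(10)] for i in range(4)}
sample = multistage_cluster_sample(population, n_clusters=2, n_per_cluster=5)
```

In a real multi-stage design the stages would mirror the university's structure (faculty, then department, then class), but the two-stage sketch captures the core idea: randomness is applied at the cluster level first, then within clusters.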
To collect data, semi-structured interviews were used in the qualitative phase, and a questionnaire extracted from the interviews was used in the quantitative phase. Reliability and validity techniques were applied for qualitative validation. To analyze the data, structural and interpretative methods were used in the qualitative phase, and descriptive and inferential statistics in the quantitative phase.
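A common way to check the internal consistency of a questionnaire of this kind is Cronbach's alpha. The study itself reports qualitative validation techniques, so the sketch below is purely illustrative; the score matrix is invented.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 5-respondent, 3-item score matrix with perfectly consistent items.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]
alpha = cronbach_alpha(scores)  # perfectly consistent items give alpha = 1.0
```

Values of alpha near 1 indicate that the items measure a common construct; questionnaire studies typically treat values above roughly 0.7 as acceptable.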
Results
The findings related to the first research question showed that the most important assessment indices are: suitability of assessment tasks with course objectives; observance of scientific and professional principles when designing questions and conducting exams; giving feedback on students’ assignments and guiding them; diversity of assessment methods; suitability and coordination of assessment with course content; informing students of assessment criteria and methods; observing justice and fairness in question design and grading; attention to continuous and developmental assessment; assessment of students’ knowledge, attitudes, and skills regarding the course; and activity-oriented assessment.
The findings of the second research question showed that the mean of the assessment indices was higher than the criterion score (3) in the courses “teaching methodology”, “development management”, and “research method in law”, and lower than the criterion score in the other courses. The findings of the third research question showed that the mean for the major of “educational sciences” was higher than the criterion score (3), the mean for “public administration” was equal to it, and the means for the other majors were lower. The findings of the fourth research question showed that the index of suitability of assessment tasks with lesson objectives had the highest mean and the index of activity-oriented assessment the lowest. Notably, the means of all assessment indices were lower than the assumed population mean (3). To test whether major had a significant effect on the rate of using assessment indices, MANOVA was used; given Wilks’s lambda = 0.752 (F = 1.84, eta = 0.055) and the obtained significance level (p < 0.001), there is a significant difference between majors. The findings of the sixth research question showed that the means of the male and female groups differed significantly at the 95% confidence level in the rate of using lesson assessment indices.
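The repeated comparison of an index mean against the criterion score of 3 is, in effect, a one-sample test of location. A minimal standard-library sketch follows; the scores are invented, not the study's data, and in practice one would use a proper one-sample t-test (e.g. `scipy.stats.ttest_1samp`) to obtain a p-value.

```python
import math
import statistics

CRITERION = 3.0  # the study's criterion score on a 1-5 scale

# Invented per-student ratings for one assessment index.
scores = [2.5, 2.8, 3.1, 2.6, 2.9, 2.7, 3.0, 2.4]

mean = statistics.mean(scores)
std_err = statistics.stdev(scores) / math.sqrt(len(scores))
t_stat = (mean - CRITERION) / std_err  # negative: sample mean below the criterion
```

A negative t statistic mirrors the paper's finding that most index means fell below the criterion; whether the gap is significant then depends on comparing the statistic against the t distribution with n - 1 degrees of freedom.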
Discussion and conclusion
The results indicated that passing or failing a course should not be based solely on the students’ final exam; rather, different assessment methods are needed to accurately gauge their academic progress and reveal their abilities and capabilities. Professors’ use of varied assessment content shifts assessment away from output- and result-oriented approaches toward continuous, process-oriented ones. In addition, a process-oriented approach to evaluation improves problem-solving ability, critical thinking, and the application of knowledge in real situations. The traditional view of the assessment process leads to very little use of novel technologies, superficial learning, poor interaction with students, and an emphasis on mere memorization. Finally, it is suggested that professors not rely on written assessment alone for each course, but use a variety of new assessment methods.



Abrami, PC, Rosenfeld, S, & Dedic, H. (2007). The dimensionality of student ratings of instruction: an update on what we know, do not know, and need to do, in RP Perry & JC Smart (eds). The scholarship of teaching and learning in higher education: an evidence-based approach, Springer, Dordrecht, The Netherlands, 446–456. https://doi.org/10.1007/1-4020-5742-3_10
Azizi Mahmoodabad, M., & Nili, M. R. (2019). Evaluating elementary school math curriculum: providing a suggested model. Journal of New Thoughts on Education, 15(2), 123–146. https://doi.org/10.22051/JONTOE.2019.17959.2006 (Text in Persian).
Abu-Alhija, F. N. (2007). Large-scale testing: Benefits and pitfalls. Studies in Educational Evaluation, 33(1), 50-68. https://doi.org/10.1016/j.stueduc.2007.01.005
van den Akker, J. (2003). Curriculum perspectives: An introduction. In J. van den Akker, U. Hameyer, & W. Kuiper (Eds.), Curriculum landscapes and trends (pp. 1-10). Dordrecht, the Netherlands: Kluwer. https://doi.org/10.1007/978-94-017-1205-7_1
Alderman, L, Towers, SJ & Bannah, S. (2012). Student feedback systems in higher education: a focused literature review and environmental scan. Quality in Higher Education, 18(3), 261–280. https://doi.org/10.1080/13538322.2012.730714
Alderman, L., Towers, S., Bannah, S., & Phan, L. H. (2014). Reframing evaluation of learning and teaching: An approach to change. Evaluation Journal of Australasia, 14(1), 24-34. https://doi.org/10.1177/1035719X1401400104
Alton-Lee, A. (2003). Quality teaching for diverse students in schooling: best evidence synthesis. Wellington: Ministry of Education.
Arreola, RA. (2007). Developing a comprehensive faculty evaluation system: a guide to designing, building, and operating large-scale faculty evaluation systems. 3rd edn, Anker, San Francisco.
Beauchamp, G. (1981). Curriculum theory (4th Ed.). Itasca, Ill.: F.E. Peacock Publishers.
Berk, RA. (2005). Survey of 12 strategies to measure teaching effectiveness. International Journal of Teaching and Learning in Higher Education, 17(1), 48–62.
Blackmore, J. (2009). Academic pedagogies, quality logics and performative universities: evaluating teaching and what students want. Studies in Higher Education, 34(8), 857–872. https://doi.org/10.1080/03075070902898664
Brooman, S., Darwent, S., & Pimor, A. (2015). The student voice in higher education curriculum design: is there value in listening?. Innovations in Education and Teaching International, 52(6), 663-674.‏ https://doi.org/10.1080/14703297.2014.910128
Brown, G, Bull, J, & Pendlebury, M. (1997). Assessing student learning in higher education, London, Routledge.
Brown, G. T. L. (2002). Teachers’ conceptions of assessment. Unpublished dissertation, New Zealand, University of Auckland.
Brown, G. T. L. (2006). Teachers’ conceptions of assessment: Validation of an abridged version. Psychological reports, 99, 166–170. https://doi.org/10.2466/pr0.99.1.166-170
Cannon, R. (2001). Broadening the context for teaching evaluation. New Directions for Teaching and Learning, 88, 87–97.
Cheng, B., Liu, Y., & Jia, Y. (2024). Evaluation of students' performance during the academic period using the XG-Boost Classifier-Enhanced AEO hybrid model. Expert Systems with Applications, 238, 122136. https://doi.org/10.1016/j.eswa.2023.122136
Cook, S., Watson, D., & Webb, R. (2024). Performance evaluation in teaching: Dissecting student evaluations in higher education. Studies in Educational Evaluation, 81, 101342. https://doi.org/10.1016/j.stueduc.2024.101342
Cornelius-White, J. (2007). Learner-Centered teacher-student relationships are effective: a meta-analysis. Review of Educational Research, 77,113–143. https://doi.org/10.3102/003465430298563
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods research. London: Sage.
Creswell, J. W. (2011). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (4th ed.). Boston: Pearson.
Dai, K., Matthews, K. E., & Reyes, V. (2020). Chinese students' assessment and learning experiences in a transnational higher education programme. Assessment & Evaluation in Higher Education, 45(1), 70-81. https://doi.org/10.1080/02602938.2019.1608907
Darwin, S. (2011). Moving beyond face value: re-envisioning higher education evaluation as a generator of professional knowledge. Assessment & Evaluation in Higher Education, 37 (6),733–745. https://doi.org/10.1080/02602938.2011.565114
DeLuca, C. (2012). Preparing teachers for the age of accountability: Toward a framework for assessment education. Action in Teacher Education, 34, 576–591. https://doi.org/10.1080/01626620.2012.730347
DeLuca, C., Coombs, A., & LaPointe-McEwan, D. (2019). Assessment mindset: Exploring the relationship between teacher mindset and approaches to classroom assessment. Studies in Educational Evaluation, 61, 159-169. https://doi.org/10.1016/j.stueduc.2019.03.012
DeLuca, C., LaPointe-McEwan, D., & Luhanga, U. (2016). Teacher assessment literacy: A review of international standards and measures. Educational Assessment, Evaluation and Accountability, 1–22. https://doi.org/10.1007/s11092-015-9233-6
European Students’ Union (2015). Overview on student-centred learning in higher education in Europe: Research study. Retrieved from Brussels, Belgium: European Students’ Union.
Gerritsen-van Leeuwenkamp, K. J., Joosten-ten Brinke, D., & Kester, L. (2019). Students’ perceptions of assessment quality related to their learning approaches and learning outcomes. Studies in Educational Evaluation, 63, 72-82. https://doi.org/10.1016/j.stueduc.2019.07.005
Gibbs, G. (1999). Using assessment strategically to change the way students learn. In S. Brown & A. Glasner (Eds.), Assessment matters in higher education: Choosing and using diverse approaches (pp. 41- 53). Buckingham: Open University Press.
Harlen, W. (2007). Criteria for evaluating systems for student assessment. Studies in Educational Evaluation, 33(1), 15-28. https://doi.org/10.1016/j.stueduc.2007.01.003
Johnson, TD & Ryan, KE. (2000). A comprehensive approach to the evaluation of college teaching. New Directions for Teaching and Learning, 83(9), 109–123.
Keinänen, M., Ursin, J., & Nissinen, K. (2018). How to measure students’ innovation competences in higher education: Evaluation of an assessment tool in authentic learning environments. Studies in Educational Evaluation, 58, 30-36. https://doi.org/10.1016/j.stueduc.2018.05.007
Lattuca, L., & Stark, J. (2009). Shaping the college curriculum: Academic plans in context. San Francisco: Jossey-Bass.
Marsh, H. W. (2007). Students’ evaluations of university teaching: dimensionality, reliability, validity, potential biases and usefulness. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: an evidence-based approach. Springer, Dordrecht, The Netherlands.
Pereira, D., Flores, M. A., Simão, A. M. V., & Barros, A. (2016). Effectiveness and relevance of feedback in Higher Education: A study of undergraduate students. Studies in Educational Evaluation, 49, 7-14.
Peters, R., Kruse, J., Buckmiller, T., & Townsley, M. (2017). It’s just not fair! Making sense of secondary students’ resistance to standards-based grading. American Secondary Education, 45(3), 9.
Pettifor, J. L., & Saklofske, D. H. (2012). Fair and ethical student assessment practices. In C. F. Webber, & J. L. Lupart (Eds.). Leading student assessment (pp. 87–106). Dordrecht, The Netherlands: Springer.
Rasooli, A., Zandi, H., & DeLuca, C. (2018). Re-conceptualizing classroom assessment fairness: A systematic meta-ethnography of assessment literature and beyond. Studies in Educational Evaluation, 56, 164-181.‏ https://doi.org/10.1016/j.stueduc.2017.12.008
Roorda, M., & Gullickson, A. M. (2019). Developing evaluation criteria using an ethical lens. Evaluation Journal of Australasia, 19(4), 179-194. https://doi.org/10.1177/1035719X19891991
Rust, C., O’Donovan, B., & Price, M. (2005). A social constructivist assessment process model: how the research literature shows us this could be best practice. Assessment & Evaluation in Higher Education, 30, 231–240. https://doi.org/10.1080/02602930500063819
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78, 153–189. https://doi.org/10.3102/0034654307313795
Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. Assessment and Evaluation in Higher Education, 30, 325–341. https://doi.org/10.1080/02602930500099102
Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models & applications. San Francisco: Jossey-Bass.
Tierney, R. D. (2014). Fairness as a multifaceted quality in classroom assessment. Studies in Educational Evaluation, 43, 55–69. https://doi.org/10.1016/j.stueduc.2013.12.003
Tierney, R. D. (2016). Fairness in educational assessment. In M. A. Peters (Ed.). Encyclopedia of educational philosophy and theory (pp. 1–6). Singapore: Springer Science.
Murillo, F. J., & Hidalgo, N. (2020). Fair student assessment: A phenomenographic study on teachers’ conceptions. Studies in Educational Evaluation, 65, 100860. https://doi.org/10.1016/j.stueduc.2020.100860
Van den Bergh, V., Mortelmans, D., Spooren, P., Van Petegem, P., Gijbels, D., & Vanthournout, G. (2006). New assessment modes within project-based education: the stakeholders. Studies in Educational Evaluation, 32(4), 345-368. https://doi.org/10.1016/j.stueduc.2006.10.005
Xu, Y., & Brown, G. T. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149–162. https://doi.org/10.1016/j.tate.2016.05.010
Jin, X., & Ruan, Z. (2023). University students’ perceptions of their lecturer's use of evaluative language in oral feedback. Linguistics and Education, 78, 101233. https://doi.org/10.1016/j.linged.2023.101233