Purpose
The purpose of this paper is to find appropriate forms of analysis of multiple-choice questions (MCQ) to obtain an assessment method that is as fair as possible to students. The authors intend to ascertain whether it is possible to control the quality of the MCQ contained in a bank of questions implemented in Moodle, presenting some evidence with Item Response Theory (IRT) and Classical Test Theory (CTT). The techniques used can be considered a type of Descriptive Learning Analytics, since they allow the measurement, collection, analysis and reporting of data generated from students' assessment.

Design/methodology/approach
A representative data set of students' grades from tests, randomly generated from a bank of questions implemented in Moodle, was used for the analysis. The data were extracted from the Moodle database using MySQL with an ODBC connector and collected in MS Excel worksheets by means of macros programmed in VBA. The CTT analysis was carried out with appropriate MS Excel formulas, and the IRT analysis with an MS Excel add-in.

Findings
The Difficulty and Discrimination Indexes were calculated for all questions with enough answers. The majority of the questions presented acceptable values for these indexes, which leads to the conclusion that they are of good quality. The analysis also showed that the bank of questions presents some internal consistency and, consequently, some reliability. Groups of questions with similar features were obtained, which is very important for teachers to develop tests that are as fair as possible.

Originality/value
The main contribution and originality of this research is the definition of groups of questions with similar features regarding their difficulty and discrimination properties. These groups allow the identification of difficulty levels among the questions in the bank, thus enabling teachers to build tests, randomly generated with Moodle, that include questions at several difficulty levels, as should be done. To the best of the authors' knowledge, there are no similar results in the literature.
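The CTT statistics mentioned in the Findings rest on a few simple formulas. The following is a minimal sketch, in Python rather than the MS Excel formulas the authors used, and with hypothetical toy score data (not the paper's data set), of how the Difficulty Index (proportion correct), an upper-lower Discrimination Index and Cronbach's alpha can be computed:

```python
# Rows = students, columns = MCQ items; 1 = correct, 0 = incorrect.
# Toy data for illustration only -- not the data set analysed in the paper.
scores = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 1, 1],
]

n_students = len(scores)
n_items = len(scores[0])

def difficulty(item):
    """Difficulty Index: proportion of students answering the item correctly."""
    return sum(row[item] for row in scores) / n_students

def discrimination(item, frac=1/3):
    """Upper-lower Discrimination Index: p(upper group) - p(lower group),
    with the groups taken from the tails of the total-score ranking."""
    ranked = sorted(scores, key=sum, reverse=True)
    k = max(1, int(n_students * frac))
    upper = sum(row[item] for row in ranked[:k]) / k
    lower = sum(row[item] for row in ranked[-k:]) / k
    return upper - lower

def cronbach_alpha():
    """Internal consistency:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var([row[i] for row in scores]) for i in range(n_items))
    total_var = var([sum(row) for row in scores])
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

for i in range(n_items):
    print(f"item {i}: difficulty={difficulty(i):.2f}, "
          f"discrimination={discrimination(i):.2f}")
print(f"Cronbach's alpha = {cronbach_alpha():.2f}")
```

Items with difficulty near 0 or 1, or with low (or negative) discrimination, would be the candidates for revision in the bank; the grouping by similar difficulty and discrimination values described in the paper could then be built on top of these per-item statistics.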
The International Journal of Information and Learning Technology – Emerald Publishing
Published: Jul 23, 2019
Keywords: Analytics; e-assessment; Item Response Theory; Classical Test Theory; Learning and technology; Multiple-choice questions