Adaptive Assessment of programming logic skills

Authors

Lucas Montagnani Calil Elias, Universidade Federal do ABC
Francisco de Assis Zampirolli, Universidade Federal do ABC

DOI:

https://doi.org/10.47236/2594-7036.2026.v10.1905

Keywords:

Adaptive Assessment, Education, Maximum likelihood estimation, Programming logic, Item response theory

Abstract

Adaptive Assessment (AA) improves learning outcomes by tailoring assessments to each student's proficiency. This article presents adaptive methods for assessing programming logic skills in education, implemented in an open-source system called MCTest, in which teachers create personalized AAs for their students. Three adaptive methods were developed: Semi-Adaptive Testing (SAT), Weighted Probability of Correctness (WPC), and Maximum Likelihood Estimation (MLE). Six tests were designed, including a non-adaptive baseline, with multiple-choice questions classified according to Bloom's Taxonomy. These tests validated the item calibrations using Item Response Theory. The method was applied in two classes totaling 72 students, and a final questionnaire with 17 respondents statistically confirmed its perceived effectiveness.
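To illustrate the kind of MLE ability estimation mentioned in the abstract, the snippet below is a minimal sketch, not the MCTest implementation: it assumes a two-parameter logistic (2PL) IRT model, hypothetical item parameters, and a made-up response vector, and estimates the student's ability by maximizing the response log-likelihood.

# Minimal sketch (assumptions: 2PL IRT model, hypothetical item parameters);
# this is NOT the MCTest code, only an illustration of MLE ability estimation.
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    # 2PL item characteristic curve: probability of a correct answer
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta, a, b, responses):
    p = p_correct(theta, a, b)
    p = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0) at extreme abilities
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Hypothetical calibrated item parameters (discrimination a, difficulty b)
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.7, 1.2])
responses = np.array([1, 1, 0, 0])  # 1 = correct, 0 = incorrect

result = minimize_scalar(neg_log_likelihood, bounds=(-4, 4),
                         args=(a, b, responses), method="bounded")
print(f"Estimated ability (theta): {result.x:.2f}")

In an adaptive test, an estimate like this would be updated after each answered item so that the next question can be chosen to match the student's current proficiency.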

Author Biographies

Lucas Montagnani Calil Elias, Universidade Federal do ABC

Bachelor's degree in Computer Science from Universidade Federal do ABC. Santo André, São Paulo, Brazil. Email: lucas.montagnani@aluno.ufabc.edu.br. ORCID: https://orcid.org/0009-0006-4746-1551. Lattes CV: http://lattes.cnpq.br/3276534519963546.

Francisco de Assis Zampirolli, Universidade Federal do ABC

PhD in Electrical Engineering from Universidade Estadual de Campinas. Full Professor of Computer Science at Universidade Federal do ABC. Santo André, São Paulo, Brazil. Email: fzampirolli@ufabc.edu.br. ORCID: https://orcid.org/0000-0002-7707-1793. Lattes CV: http://lattes.cnpq.br/4127260763254001.

References

ALVES, Laura Filállepe et al. Continuous assessment, a teaching methodology for reducing retention and dropout rates in higher education calculus courses. Revista Sítio Novo, Palmas, v. 6, n. 4, p. 51-60, 2022. DOI: https://doi.org/10.47236/2594-7036.2022.v6.i4.51-60p

ALVES, Lynn et al. Remote education: between illusion and reality. Interfaces Científicas-Educação, v. 8, n. 3, p. 348-365, 2020. DOI: https://doi.org/10.17564/2316-3828.2020v8n3p348-365

ALVES, Welington Domingos; SANTOS, Luiz Gustavo Fernandes dos. Playing with mathematics: using games to mediate the teaching and learning of mathematical content. Revista Sítio Novo, Palmas, v. 6, n. 4, p. 84-93, 2022. DOI: https://doi.org/10.47236/2594-7036.2022.v6.i4.84-93p

BAKER, Frank B. et al. The basics of item response theory using R. v. 969. Springer, 2017. DOI: https://doi.org/10.1007/978-3-319-54205-8

BAYLARI, Ahmad; MONTAZER, Gh A. Design a personalized e-learning system based on item response theory and artificial neural network approach. Expert Systems with Applications, v. 36, n. 4, p. 8013-8021, 2009. DOI: https://doi.org/10.1016/j.eswa.2008.10.080

BECKER, Samantha Adams et al. NMC horizon report: 2018 higher education edition. Louisville, CO: Educause, 2018.

BINH, Hoang Tieu; DUY, Bui The. Student ability estimation based on IRT. In: National Foundation For Science And Technology Development Conference On Information And Computer Science (NICS), 3., 2016. [S. l.], 2016. p. 56-61. DOI: https://doi.org/10.1109/NICS.2016.7725667

CAI, Li et al. Item response theory. Annual Review of Statistics and Its Application, v. 3, p. 297-321, 2016. DOI: https://doi.org/10.1146/annurev-statistics-041715-033702

CHEN, Keyu. A comparison of fixed item parameter calibration methods and reporting score scales in the development of an item pool. 2019. Thesis (PhD) - University of Iowa, [S. l.], 2019.

CHOI, Younyoung; MCCLENEN, Cayce. Development of adaptive formative assessment system using computerized adaptive testing and dynamic bayesian networks. Applied Sciences, v. 10, n. 22, p. 8196, 2020. DOI: https://doi.org/10.3390/app10228196

COHEN, Jacob. Statistical power analysis for the behavioral sciences. 2. ed. Hillsdale, NJ: Erlbaum, 1988.

COSTA, Rebeca Soler et al. Personalized and adaptive learning: educational practice and technological impact. Texto Livre, v. 14, p. e33445, 2022. DOI: https://doi.org/10.35699/1983-3652.2021.33445

GALVAO, Ailton Fonseca et al. An intelligent model for item selection in computerized adaptive testing. 2013. Dissertation (Master's) - Federal University of Juiz de Fora (UFJF), Juiz de Fora, MG, 2013.

GHAVIFEKR, Simin; ROSDY, Wan Athirah Wan. Teaching and learning with technology: Effectiveness of ICT integration in schools. International journal of research in education and science, v. 1, n. 2, p. 175-191, 2015. DOI: https://doi.org/10.21890/ijres.23596

GROS, Begoña. The design of smart educational environments. Smart learning environments, v. 3, p. 1-11, 2016. DOI: https://doi.org/10.1186/s40561-016-0039-x

HAMDARE, S. An adaptive evaluation system to test student caliber using item response theory. International Journal of Modern Trends in Engineering and Research, v. 1, n. 5, p. 329-333, 2014.

HAMMOND, Flora et al. Handbook for clinical research: design, statistics, and implementation. Demos Medical Publishing, 2014.

HOLM, Sture. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, v. 6, n. 2, p. 65-70, 1979.

JOHNSON, Amy M. et al. Challenges and solutions when using technologies in the classroom. In: Adaptive educational technologies for literacy instruction. Routledge, 2016. p. 13-30. DOI: https://doi.org/10.4324/9781315647500-2

KARINO, Camila Akemi; SOUSA, Eduardo Carvalho. Understanding your ENEM score - Participant's Guide. [S. l.: s. n.], 2012.

KRATHWOHL, David R. A Revision of Bloom's Taxonomy: An Overview. Theory Into Practice, v. 41, n. 4, p. 212-218, 2002. DOI: https://doi.org/10.1207/s15430421tip4104_2

LAZARINIS, Fotis et al. Creating personalized assessments based on learner knowledge and objectives in a hypermedia Web testing application. Computers & Education, v. 55, n. 4, p. 1732-1743, 2010. DOI: https://doi.org/10.1016/j.compedu.2010.07.019

LORD, Frederic M. Applications of item response theory to practical testing problems. Routledge, 1980.

MEIJER, Rob R.; NERING, Michael L. Computerized adaptive testing: Overview and introduction. Applied psychological measurement, 1999. DOI: https://doi.org/10.1177/01466219922031310

MIN, Shangchao; ARYADOUST, Vahid. A systematic review of item response theory in language assessment: Implications for the dimensionality of language ability. Studies in Educational Evaluation, v. 68, p. 100963, 2021. DOI: https://doi.org/10.1016/j.stueduc.2020.100963

MORAN, José. Hybrid education: a key concept for education today. In: Hybrid teaching: personalization and technology in education. Porto Alegre: Penso, 2015.

MOREIRA, José António; SCHLEMMER, Eliane. Towards a new concept and paradigm for onlife digital education. Revista uFG, v. 20, 2020.

NEVES, Rogério; ZAMPIROLLI, Francisco de Assis. Processing information: a practical book on language-independent programming. São Bernardo do Campo: EdUFABC, 2017.

OLIVEIRA, Plínio Cardoso de; SOUZA, Wallysonn Alves de; ALVES, José Robson Mariano. Contesting: software to stimulate critical thinking, engagement, and promote autonomy in professional and technological education. Revista Sítio Novo, Palmas, v. 9, p. e1611, 2025. DOI: https://doi.org/10.47236/2594-7036.2025.v9.1611

PARAMYTHIS, Alexandros; LOIDL-REISINGER, Susanne. Adaptive learning environments and e-learning standards. In: European Conference On E-learning, 2., 2003. [S. l.], 2003. p. 369-379.

PELLEGRINO, James W.; QUELLMALZ, Edys S. Perspectives on the integration of technology and assessment. Journal of Research on Technology in Education, v. 43, n. 2, p. 119-134, 2010. DOI: https://doi.org/10.1080/15391523.2010.10782565

PONTES, Paulo Ricardo da Silva; VICTOR, Valci Ferreira. Educational robotics: a practical approach to teaching programming logic. Revista Sítio Novo, Palmas, v. 6, n. 1, p. 57-71, 2022. DOI: https://doi.org/10.47236/2594-7036.2022.v6.i1.57-71p

PUGLIESE, Lou. Adaptive learning systems: Surviving the storm. Educause review, v. 10, n. 7, 2016.

ROSENTHAL, Robert. Meta-analytic procedures for social research. Beverly Hills: Sage, 1984.

SHAPIRO, Samuel S.; WILK, Martin B. An analysis of variance test for normality (complete samples). Biometrika, v. 52, n. 3-4, p. 591-611, 1965. DOI: https://doi.org/10.1093/biomet/52.3-4.591

SOARES, Ronald Ruan Pereira et al. Development of a virtual assistant as academic support for the bachelor's degree program in Computer Science using customized generative AI and RAG. Revista Sítio Novo, Palmas, v. 9, p. e1757, 2025. DOI: https://doi.org/10.47236/2594-7036.2025.v9.1757

TUKEY, John W. Box-and-whisker plots. In: Exploratory data analysis. [S. l.: s. n.], 1977. p. 39-43.

WAINER, Howard et al. Computerized adaptive testing: A primer. Lawrence Erlbaum Associates, Inc, 1990.

WANG, Feng-Hsu. Application of componential IRT model for diagnostic test in a standard-conformant eLearning system. In: IEEE International Conference On Advanced Learning Technologies (ICALT'06), 6., 2006. [S. l.], 2006. p. 237-241. DOI: https://doi.org/10.1109/ICALT.2006.1652414

WATERS, John K. The great adaptive learning experiment. Campus Technology, v. 16, 2014.

WILCOXON, Frank. Individual comparisons by ranking methods. Biometrics Bulletin, v. 1, n. 6, p. 80-83, 1945. DOI: https://doi.org/10.2307/3001968

YANG, Albert C. M. et al. Adaptive formative assessment system based on computerized adaptive testing and the learning memory cycle for personalized learning. Computers and Education: Artificial Intelligence, v. 3, p. 100104, 2022. DOI: https://doi.org/10.1016/j.caeai.2022.100104

ZAMPIROLLI, Francisco de Assis. MCTest: How to create and correct automatically parameterized exams. Brazil: Independently Published, 2023.

ZAMPIROLLI, Francisco de Assis et al. An experience of automated assessment in a large-scale introduction programming course. Computer Applications in Engineering Education, p. 1284-1299, 2021. DOI: https://doi.org/10.1002/cae.22385

ZAMPIROLLI, Francisco de Assis et al. Evaluation process for an introductory programming course using blended learning in engineering education. Computer Applications in Engineering Education, p. 1-13, 2018.

ZHENG, Yi. New methods of online calibration for item bank replenishment. 2014. Thesis (PhD) - University of Illinois at Urbana-Champaign, [S. l.], 2014.

Published

2026-02-02

How to Cite

ELIAS, Lucas Montagnani Calil; ZAMPIROLLI, Francisco de Assis. Adaptive Assessment of programming logic skills. Revista Sítio Novo, Palmas, v. 10, p. e1905, 2026. DOI: https://doi.org/10.47236/2594-7036.2026.v10.1905. Available at: https://sitionovo.ifto.edu.br/index.php/sitionovo/article/view/1905. Accessed: 3 Feb. 2026.

Issue

v. 10 (2026)

Section

Scientific Article