DOI: https://doi.org/10.7203/relieve.22.1.8280

Contextualized PISA science items: The link between the items' cognitive demands and their context characteristics


Abstract


The frequent use of contexts in test items rests on the premise that contextualizing items is an effective strategy for testing whether students can apply or transfer their knowledge. In this paper we follow a line of research focused on testing this premise. We present a study of the context characteristics of a sample of 2006 and 2009 PISA science items, and of how these characteristics, as well as student performance, may be related to the items' cognitive demands. The study addresses two research questions: (1) What are the cognitive demands of the contextualized items in the PISA sample, and what is student performance on these items? (2) Are the items' cognitive demands associated with certain characteristics of the item contexts that have been shown to be related to student performance? Using 52 released PISA items, we gathered information on three dimensions of item context (level of abstraction, resources, and nature of the context) and on the items' cognitive demands. A multinomial logistic regression with cognitive demand as the outcome variable, context characteristics as predictors, and percentage of correct responses as a covariate indicated that certain context characteristics are linked to the items' cognitive demands. For example, item contexts involving only concrete ideas were found to be associated with items of low cognitive demand; such items are unlikely to require content knowledge to be answered.
We also found that the type of resource (e.g., tables, graphs) is associated with the items' cognitive demands: schematic representations appear to be linked to items tapping procedural knowledge rather than to items tapping declarative or schematic knowledge. We conclude that more research is needed to better understand the influence that context characteristics have on the cognitive processes that students are asked to engage in.
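The analysis described above can be sketched as a multinomial (softmax) logistic regression in which each item's cognitive demand is predicted from its context characteristics, with percent correct as a covariate. The sketch below is illustrative only: the data are synthetic, and the variable coding (a binary "concrete context" flag, a binary "schematic resource" flag) is an assumption that merely echoes the associations reported in the abstract, not the authors' actual PISA item coding.

```python
# Illustrative softmax regression fit from scratch on synthetic "items".
# Variables and the generative pattern are assumptions, not PISA data.
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def fit(X, y, n_classes, lr=0.5, epochs=400):
    """Fit multinomial logistic regression by batch gradient descent."""
    n_feat = len(X[0])
    W = [[0.0] * n_feat for _ in range(n_classes)]
    for _ in range(epochs):
        grad = [[0.0] * n_feat for _ in range(n_classes)]
        for xi, yi in zip(X, y):
            p = softmax([sum(w * x for w, x in zip(W[k], xi))
                         for k in range(n_classes)])
            for k in range(n_classes):
                err = p[k] - (1.0 if k == yi else 0.0)  # dCE/dscore_k
                for j in range(n_feat):
                    grad[k][j] += err * xi[j]
        for k in range(n_classes):
            for j in range(n_feat):
                W[k][j] -= lr * grad[k][j] / len(X)
    return W

def predict(W, xi):
    scores = [sum(w * x for w, x in zip(wk, xi)) for wk in W]
    return scores.index(max(scores))

random.seed(42)
X, y = [], []
for _ in range(200):
    concrete = 1.0 if random.random() < 0.5 else 0.0   # context uses only concrete ideas
    schematic = 1.0 if random.random() < 0.5 else 0.0  # resource is a schematic representation
    pct_correct = random.random()                      # covariate: percent correct
    # Assumed pattern echoing the reported associations: concrete contexts
    # tend toward low-demand (declarative) items; schematic resources
    # toward procedural items; the rest toward schematic knowledge.
    if concrete and random.random() < 0.8:
        target = 0  # declarative
    elif schematic and random.random() < 0.8:
        target = 1  # procedural
    else:
        target = 2  # schematic knowledge
    X.append([1.0, concrete, schematic, pct_correct])
    y.append(target)

W = fit(X, y, n_classes=3)
acc = sum(predict(W, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")  # well above the 1/3 chance level
```

In practice a study like this would use a statistical package rather than a hand-rolled fit; the point of the sketch is only to make concrete what "cognitive demand as outcome, context characteristics as predictors, percent correct as covariate" means as a model specification.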

Keywords


PISA; science items; item context characteristics; cognitive demands; validity.

Full text:

PDF (English)





