DOI: https://doi.org/10.7203/relieve.22.1.8164

Evaluative research in the 21st century: an increasingly relevant instrument for educational and social development


Abstract


Following an extensive review of recent publications on the subject, this article analyses and assesses the current state of evaluative research as a strategic instrument for decision-making aimed at the development and improvement of society and of citizens' quality of life, in fields as diverse as education, health, the economy, culture, social protection and public policy. The scientific identity of present-day evaluative research is described and substantiated, with emphasis on its transdisciplinary character, on the rise of the evaluation of organisations and institutions, on its reliance on diverse methodologies, and on the importance of participatory strategies. The usefulness and appropriate use of evaluations is also highlighted as a priority objective of this type of research, always grounded in ethical and scientific-quality principles and standards and in the corresponding meta-evaluative studies.

Keywords


Evaluative research, Social development, Transversal discipline, Diverse methodologies, Participatory strategies, Utility and use of evaluation, Ethical-scientific standards, Meta-evaluation


References


  1. Abelson, J., Forest, P-G., Eyles, J., Smith, P., Martin, E., & Gauvin, F.P. (2003). Deliberations about deliberative methods: issues in the design and evaluation of public participation processes. Social Science & Medicine, 57, 239-251. doi: http://dx.doi.org/10.1016/S0277-9536(02)00343-X

  2. Abma, T. A. (2000). Stakeholder conflict: a case study. Evaluation and Program Planning, 23, 199-210. doi: http://dx.doi.org/10.1016/S0149-7189(00)00006-9

  3. Aguilar, M. (2001). La evaluación institucional de las universidades. Tendencias y desafíos. Revista de Ciencias Sociales (Cr), II-III, 93-92, 23-34.

  4. American Evaluation Association (2008). Guiding Principles for Evaluators. American Journal of Evaluation, 29(4), 397-398.

  5. Askew, K., Green Beverly, M. & Jay, M. L. (2012). Aligning collaborative and culturally responsive evaluation approaches. Evaluation and Program Planning, 35, 552-557. doi: http://dx.doi.org/10.1016/j.evalprogplan.2011.12.011

  6. Azzam, T. & Levine, B. (2015). Politics in evaluation: Politically responsive evaluation in high stakes environments. Evaluation and Program Planning, 53, 44-56. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.07.002

  7. Betzner, A., Lawrenz, F. P. & Thao, M. (2016). Examining mixing methods in an evaluation of a smoking cessation program. Evaluation and Program Planning, 54, 94-101. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.06.004

  8. Brandon, P. R. & Fukunaga, L. L. (2014). The state of the empirical research literature on stakeholder involvement in program evaluation. American Journal of Evaluation, 35 (1), 26–44. doi: http://dx.doi.org/10.1177/1098214013503699

  9. Bredo, E. (2006). Philosophies of Educational Research. In J. L. Green, G. Camilli & P. B. Elmore (Eds.), Handbook of Complementary Methods in Education Research (pp. 3-31). London: Lawrence Erlbaum Associates / AERA.

  10. Calderon, A. J. (2004). Institutional Research at RMIT. A case study. Paper presented at the 26th EAIR Forum, Barcelona, September 5-8.

  11. Chelimsky, E. (2008). A Clash of Cultures: Improving the “Fit” Between Evaluative Independence and the Political Requirements of a Democratic Society. American Journal of Evaluation, 29(4), 400-415. doi: http://dx.doi.org/10.1177/1098214008324465

  12. Chouinard, J. A. & Milley, P. (2016). Mapping the spatial dimensions of participatory practice: A discussion of context in evaluation. Evaluation and Program Planning, 54, 1-10. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.09.003

  13. Christie, C. A. (2003). The practice-theory relationship in evaluation. New Directions for Evaluation, 97. San Francisco, CA: Jossey-Bass.

  14. Christie, C. A. (2007). Reported influence of evaluation data on decision makers' actions: An empirical examination. American Journal of Evaluation, 28(1), 8-25. doi: http://dx.doi.org/10.1177/1098214006298065

  15. Christie, C. A. & Fleischer, D. N. (2010). Insight Into Evaluation Practice: A Content Analysis of Designs and Methods Used in Evaluation Studies Published in North American Evaluation-Focused Journals. American Journal of Evaluation, 31(3), 326-346. doi: http://dx.doi.org/10.1177/1098214010369170

  16. Christie, C. A., Ross, R. M. & Klein, B. M. (2004). Moving toward collaboration by creating a participatory internal-external evaluation team: a case study. Studies in Educational Evaluation,36(2), 107-117. doi: http://dx.doi.org/10.1016/j.stueduc.2004.06.002

  17. Claverie, J., Gonzalez, G. & Perez, L. (2008). El Sistema de Evaluación de la Calidad de la Educación Superior en la Argentina: El Modelo de la CONEAU. Alcances y Límites para Pensar la Mejora. Revista Iberoamericana de Evaluación Educativa, 1(2), 149-164.

  18. Cook, J. R. (2015). Using Evaluation to Effect Social Change: Looking Through a Community Psychology Lens. American Journal of Evaluation, 28(1), 107-117. doi: http://dx.doi.org/10.1177/1098214014558504

  19. Cousins, J. B. (2004). Commentary: Minimizing evaluation misuse as principled practice. American Journal of Evaluation, 25(3), 391-397. doi: http://dx.doi.org/10.1177/109821400402500311

  20. Cousins, J. B., Goh, S. C., Elliot, C. J. & Bourgeois, I. (2014). Framing the Capacity to Do and Use Evaluation. New Directions for Evaluation, 141(1), 7-23. doi: http://dx.doi.org/10.1002/ev.20076

  21. Daigneault, P. (2014). Taking stock of four decades of quantitative research on stakeholder participation and evaluation use: A systematic map. Evaluation and Program Planning, 45, 171–181. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.04.003

  22. Donaldson, S. I. (2007). Program theory-driven evaluation science. Mahwah, NJ: Erlbaum

  23. Donaldson, S. I. & Gooler, L. E. (2003). Theory-driven evaluation in action: lessons from a $20 million statewide Work and Health Initiative. Evaluation and Program Planning, 26, 355-366. doi: http://dx.doi.org/10.1016/S0149-7189(03)00052-1

  24. Donaldson, S. I. & Lipsey, M. W. (2006). Roles for theory in contemporary evaluation practice: Developing practical knowledge. In I. Shaw, J. C. Greene & M. M. Mark (Eds.), The Handbook of Evaluation: Policies, Programs, and Practices (pp. 56-75). London: Sage.

  25. Donaldson, S. I. & Scriven, M. (2003). Diverse visions for evaluation in the new millennium. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 3-16). Mahwah, NJ: Erlbaum.

  26. Escudero, T. (2000). Evaluación de centros e instituciones educativas: las perspectivas del evaluador. In D. González, E. Hidalgo & J. Gutiérrez (Coords.), Innovación en la escuela y mejora de la calidad educativa (pp. 57-76). Granada: Grupo Editorial Universitario.

  27. Escudero, T. (2002). Evaluación institucional: algunos fundamentos y razones. In V. Álvarez & A. Lázaro, Calidad de las Universidades y Orientación Universitaria (pp. 103-138). Málaga: Ediciones Aljibe.

  28. Escudero, T. (2003). Desde los tests hasta la investigación evaluativa actual. Un siglo, el XX, de intenso desarrollo de la evaluación en educación. [English version: From tests to current evaluative research. One century, the XXth, of intense development of evaluation in education]. RELIEVE, 9(1). Retrieved from http://www.uv.es/RELIEVE/v9n1/RELIEVEv9n1_1.htm

  29. Escudero, T. (2005-2006). Claves identificativas de la investigación evaluativa: análisis desde la práctica. Contextos Educativos. Revista de Educación, 8-9, 179-199.

  30. Escudero, T. (2006). Evaluación y mejora de la calidad en educación. In T. Escudero & A. D. Correa, Investigación en innovación educativa: algunos ámbitos relevantes (pp. 269-325). Madrid: La Muralla, S. A.

  31. Escudero, T. (2007). Evaluación institucional de la calidad universitaria en España: Breve pero interesante historia. Anuario de Pedagogía, 9, 103-115.

  32. Escudero, T. (2009). Some relevant topics in educational evaluation research. In M. Asorey, J. V. García Esteve, M. Rañada & J. Sesma, Mathematical Physics and Field Theory. Julio Abad, in Memoriam (pp. 223-230). Zaragoza: Prensas Universitarias de Zaragoza.

  33. Escudero, T., Pino, J. L. & Rodríguez, C. (2010). Evaluación del profesorado universitario para incentivos individuales: revisión metaevaluativa. Revista de Educación, 351, 513-537.

  34. Escudero, T. (2011). La construcción de la investigación evaluativa. El aporte desde la educación. Prensas Universitarias-Universidad de Zaragoza.

  35. Escudero, T. (2013). Utilidad y uso de las evaluaciones. Un asunto relevante. Revista de Evaluación Educativa, 2(1). Retrieved from http://revalue.mx/revista/index.php/revalue/issue/current

  36. European Commission/EACEA/EURYDICE (2015). Assuring Quality in Education: Policy and Approaches to School Evaluation in Europe. Eurydice Report. Luxembourg: Publication Office of the European Union. Retrieved from http://eacea.ec.europa.eu/education/eurydice/

  37. Exposito, J., Olmedo, E. & Fernandez-Cano, A. (2004). Patrones metodológicos en la investigación española sobre evaluación de programas educativos. RELIEVE, 10(2). Retrieved from http://www.uv.es/RELIEVE/v10n2/RELIEVEv10n2_2.htm

  38. Ferrandez, R. (2008). Programas de Auditoría Institucional Universitaria. Comparación de la Propuesta Española con el Sistema Británico. Revista Iberoamericana de Evaluación Educativa, 1(1), 156-170.

  39. Fetterman, D. M. (2001a). The Transformation of Evaluation into a Collaboration: A Vision of Evaluation in the 21st Century. American Journal of Evaluation, 22(3), 381-385. doi: http://dx.doi.org/10.1177/109821400102200315

  40. Fetterman, D. M. (2001b). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.

  41. Fetterman, D. M., Kaftarian, S. J. & Wandersman, A. (Eds.) (2015). Empowerment Evaluation: Knowledge and Tools for Self-Assessment, Evaluation Capacity Building, and Accountability (2nd ed.). Thousand Oaks, CA: Sage Publications. doi: http://dx.doi.org/10.1016/b978-0-08-097086-8.10572-0

  42. Fitzpatrick, J. L. (2012). Commentary - Collaborative evaluation within the larger evaluation context. Evaluation and Program Planning, 35, 558-563. doi: http://dx.doi.org/10.1016/j.evalprogplan.2011.12.012

  43. Geist, M. R. (2010). Using the Delphi method to engage stakeholders: A comparison of two studies. Evaluation and Program Planning, 33, 147-154. doi: http://dx.doi.org/10.1016/j.evalprogplan.2009.06.006

  44. Henry, G. T. (2003). Influential evaluations. American Journal of Evaluation, 24(4), 515-524. doi: http://dx.doi.org/10.1177/109821400302400409

  45. Henry, G. T. & Mark, M. M. (2003). Beyond use: Understanding evaluation's influence on attitudes and actions. American Journal of Evaluation, 24(3), 293-314. doi: http://dx.doi.org/10.1177/109821400302400302

  46. House, E. R. (2008). Blowback. Consequences of Evaluation for Evaluation. American Journal of Evaluation, 29(4), 416-426. doi: http://dx.doi.org/10.1177/1098214008322640

  47. Jacob, S. (2008). Cross-Disciplinarization: A New Talisman for Evaluation? American Journal of Evaluation, 29(2), 175-194. doi: http://dx.doi.org/10.1177/1098214008316655

  48. Johnson, K. (2009). Research on Evaluation Use: A Review of the Empirical Literature From 1986 to 2005. American Journal of Evaluation, 30(3), 377-410. doi: http://dx.doi.org/10.1177/1098214009341660

  49. Joint Committee on Standards for Educational Evaluation (2003). The student evaluation standards. Thousand Oaks, CA: Corwin.

  50. Kirkhart, K. (2000). Reconceptualizing evaluation use: An integrated theory of influence. In V. Caracelli & H. Preskill (Eds.), The expanding scope of evaluation use. New Directions for Evaluation, 88. San Francisco, CA: Jossey-Bass.

  51. La Velle, J. M. & Donaldson, S. I. (2010). University-Based Evaluation Training Programs in the United States 1980-2008: An Empirical Examination. American Journal of Evaluation, 31(1), 9-23. doi: http://dx.doi.org/10.1177/1098214009356022

  52. Leviton, L. C. (2003). Evaluation use: Advances, challenges and applications. American Journal of Evaluation, 24(4), 525-535.

  53. Ledermann, S. (2012). Exploring the Necessary Conditions for Evaluation Use in Program Change. American Journal of Evaluation, 33(2), 159-178. doi: http://dx.doi.org/10.1177/1098214011411573

  54. Makrakis, V. & Kostoulas-Makrakis, N. (2016). Bridging the qualitative–quantitative divide: Experiences from conducting a mixed methods evaluation in the RUCAS programme. Evaluation and Program Planning, 54, 144-151. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.07.008

  55. Mark, M. M. (2003). Toward an integrative view of the theory and practice of program and policy evaluation. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 183-204). Mahwah, NJ: Erlbaum.

  56. Maxcy, S. J. (2003). Pragmatic threads in mixed methods research in social sciences: An emerging theory in support of practice. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 51-89). Thousand Oaks, CA: Sage.

  57. May, H. (2004). Making statistics more meaningful for policy research and program evaluation. American Journal of Evaluation, 25(4), 525-540. doi: http://dx.doi.org/10.1177/109821400402500408

  58. McClintock, C. (2003). Commentary: The evaluator as scholar/practitioner/change agent. American Journal of Evaluation, 24(1), 91-96. doi: http://dx.doi.org/10.1177/109821400302400110

  59. Mira, G. E., Meneses, R. M. & Rincón, D. A. (2012). La Investigación Evaluativa y su perspectiva en la Acreditación y Evaluación de Programas e Instituciones en Educación Superior. XIII Asamblea General de la Asociación Latinoamericana de Facultades y Escuelas de Contaduría y Administración (ALAFEC) (pp. 1-26). Buenos Aires, Argentina.

  60. Muñoz, A., Perez Zabaleta, A., Muñoz, A. & Sanchez, C. (2013). La evaluación de políticas públicas: una creciente necesidad en la Unión Europea. Revista de Evaluación de Programas y Políticas Públicas, 1, 1-30.

  61. Neuman, A., Shahor, N., Shina, I., Sarid, A. & Saar, Z. (2013). Evaluation utilization research - Developing a theory and putting it to use. Evaluation and Program Planning, 36, 64-70. doi: http://dx.doi.org/10.1016/j.evalprogplan.2012.06.001

  62. Nicoletti, J. A. (2013). La evaluación de la calidad educativa. Investigación de base evaluativa en centros de educación superior. Revista Argentina de Educación Superior, 6, 189-202.

  63. Nitsch, M., Waldherr, K., Denk, E., Griebler, U., Marent, B. & Forster, R. (2013). Participation by different stakeholders in participatory evaluation of health promotion: A literature review. Evaluation and Program Planning, 40(1), 42-54. doi: http://dx.doi.org/10.1016/j.evalprogplan.2013.04.006

  64. Paricio, J. (2015). Análisis de los modelos de calidad de la educación superior. Diseño de una metodología de análisis multidimensional. Doctoral dissertation, Universidad de Zaragoza.

  65. Patel, M. (2002a). A meta-evaluation, or quality assessment, of the evaluations in this issue, based on the African Evaluation Guidelines: 2002. Evaluation and Program Planning, 25, 329-332. doi: http://dx.doi.org/10.1016/S0149-7189(02)00043-5

  66. Patel, M. (2002b). The African Evaluation Guidelines: 2002. A checklist to assist in planning evaluations, negotiating clear contracts, reviewing progress and ensuring adequate completion of an evaluation. Evaluation and Program Planning, 25, 481–492.

  67. Patton, M. Q. (2012). A utilization-focused approach to contribution analysis. Evaluation, 18, 364–377. doi: http://dx.doi.org/10.1177/1356389012449523

  68. Perassi, Z. (2009). Evaluar un Programa Educativo: Una Experiencia Formativa Compleja. Revista Iberoamericana de Evaluación Educativa, 2(2), 172-195.

  69. Perez Juste, R. (2002). La evaluación de programas en el marco de la educación de calidad. XXI Revista de Educación, 4, 43-76.

  70. Perrin, B. (2001). Commentary: Making yourself - and evaluation - useful. American Journal of Evaluation, 22(2), 252-259. doi: http://dx.doi.org/10.1177/109821400102200209

  71. Pinkerton, S. D., Johnson-Massoti, A. P., Derse, A. & Layde, P. M. (2002). Ethical issues in cost-effectiveness analysis. Evaluation and Program Planning, 25, 71-83. doi: http://dx.doi.org/10.1016/S0149-7189(01)00050-7

  72. Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29, 443–459. doi: http://dx.doi.org/10.1177/1098214008324182

  73. Renger, R. & Hurley, C. (2006). From theory to practice: Lessons learned in the application of the ATM approach to developing logic models. Evaluation and Program Planning, 29, 106-119. doi: http://dx.doi.org/10.1016/j.evalprogplan.2006.01.004

  74. Rodriguez-Campos, L. (2012). Stakeholder involvement in evaluation: Three decades of the American Journal of Evaluation. Journal of Multidisciplinary Evaluation, 17, 57-79.

  75. Rodríguez-Campos, L. (2012). Advances in collaborative evaluation. Evaluation and Program Planning, 35, 523-528. doi: http://dx.doi.org/10.1016/j.evalprogplan.2011.12.006

  76. Roseland, D., Lawrenz, F. & Thao, M. (2015). The relationship between involvement in and use of evaluation in multi-site evaluations. Evaluation and Program Planning, 48, 75-82. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.10.003

  77. Ryan, K. E. (2004). Serving public interests in educational accountability: Alternative approaches to democratic evaluation. American Journal of Evaluation, 25 (4), 443-460. doi: http://dx.doi.org/10.1177/109821400402500403

  78. Scheerens, J. (2004). The evaluation culture. Studies in Educational Evaluation, 30(2), 105-124. doi: http://dx.doi.org/10.1016/j.stueduc.2004.06.001


  79. Schwandt, T. A. (2002). Evaluation Practice Reconsidered. New York, NY: Peter Lang Publishing.

  80. Schwartz, R. & Mayne, J. (2005). Assuring the quality of evaluative information: theory and practice. Evaluation and Program Planning, 28(1), 1-14. doi: http://dx.doi.org/10.1016/j.evalprogplan.2004.10.001

  81. Schweigert, F. J. (2007). The priority of justice: A framework approach to ethics in program evaluation. Evaluation and Program Planning, 30, 394-399. doi: http://dx.doi.org/10.1016/j.evalprogplan.2007.06.007

  82. Scriven, M. (2000). The logic and methodology of checklists. Retrieved from www.wmich.edu/evalctr/checklists/

  83. Scriven, M. (2003). Evaluation in the new millennium: The transdisciplinary vision. In S. I. Donaldson & M. Scriven (Eds.), Evaluating Social Programs and Problems: Visions for the New Millennium (pp. 19-42). Mahwah, NJ: Lawrence Erlbaum Associates.

  84. Smith, M. J. (2010). Handbook of Program Evaluation for Social Work and Health Professionals. New York: Oxford University Press.

  85. Sondergeld, T. & Koskey, K. (2011). Evaluating the impact of an urban comprehensive school reform: An illustration of the need for mixed methods. Studies in Educational Evaluation, 37, 94-107. doi: http://dx.doi.org/10.1016/j.stueduc.2011.08.001

  86. Stake, R. (2006). Evaluación comprensiva y evaluación basada en estándares. Barcelona: Editorial Graó.

  87. Stufflebeam, D. L. (2000). Guidelines for developing evaluation checklists. Retrieved from www.wmich.edu/evalctr/checklists/

  88. Stufflebeam, D. L. (2001a). Interdisciplinary Ph.D. Programming in Evaluation. American Journal of Evaluation, 22(3), 445-455. doi: http://dx.doi.org/10.1177/109821400102200323

  89. Stufflebeam, D. L. (2001b). The metaevaluation imperative. American Journal of Evaluation, 22(2), 183-209. doi: http://dx.doi.org/10.1177/109821400102200204

  90. Stufflebeam, D. L. (2004). A note on the purposes, development, and applicability of the Joint Committee Evaluation Standards. American Journal of Evaluation, 25(1), 99-102. doi: http://dx.doi.org/10.1177/109821400402500107

  91. Taut, S. (2008). What have we learned about stakeholder involvement in program evaluation. Studies in Educational Evaluation, 34, 224-230. doi: http://dx.doi.org/10.1016/j.stueduc.2008.10.007

  92. Thomas, V. G. & Madison, A. (2010). Integration of Social Justice Into the Teaching of Evaluation. American Journal of Evaluation, 31(4), 570-583. doi: http://dx.doi.org/10.1177/1098214010368426

  93. Urban, J. B., Hargraves, M. & Trochim, W. M. (2014). Evolutionary Evaluation: Implications for evaluators, researchers, practitioners, funders and the evidence-based program mandate. Evaluation and Program Planning, 45, 127-139. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.03.011

  94. Urban, J. B. & Trochim, W. M. (2009). The role of evaluation in research-practice integration: Working toward the “Golden spike”. American Journal of Evaluation, 30(4), 538-553. doi: http://dx.doi.org/10.1177/1098214009348327

  95. Vanhoof, J. & Van Petegem, P. (2010). Evaluating the quality of self-evaluations: The (mis)match between internal and external meta-evaluation. Studies in Educational Evaluation, 36, 20-26. doi: http://dx.doi.org/10.1016/j.stueduc.2010.10.001

  96. Walton, M. (2014). Applying complexity theory: A review to inform evaluation design. Evaluation and Program Planning, 45, 119-126. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.04.002

  97. Wasserman, D. L. (2010). Using a systems orientation and foundational theory to enhance theory-driven human service program evaluations. Evaluation and Program Planning, 33, 67-80. doi: http://dx.doi.org/10.1016/j.evalprogplan.2009.06.005

  98. Weiss, C. H. (2004). On theory-based evaluation: Winning friends and influencing people. The Evaluation Exchange, 9(4), 1-5.

  99. White, H. (2013). The Use of Mixed Methods in Randomized Control Trials. New Directions for Evaluation, 138(2), 61-73. doi: http://dx.doi.org/10.1002/ev.20058

  100. Youker, B. W., Ingraham, A. & Bayer, N. (2014). An assessment of goal-free evaluation: Case studies of four goal-free evaluations. Evaluation and Program Planning, 46, 10-16. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.05.002

  101. Yusa, A., Hynie, M. & Mitchell, S. (2016). Utilization of internal evaluation results by community mental health organizations: Credibility in different forms. Evaluation and Program Planning, 54, 11-18. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.09.006

