Development and Implementation of a Comprehensive Evaluation Plan: the Cornerstone for Centers for Interprofessional Practice and Education
Rigorous evaluations of interprofessional education (IPE) initiatives across the learning continuum are needed. The recently published Guidance on Developing Quality Interprofessional Education for the Health Professions endorses the requirement for a “coordinated strategy for assessing learners on their development and mastery of interprofessional collaborative competencies”. It also notes that it is “critical to monitor and evaluate the process of IPE plan implementation”. However, consistent models or strategies for assessing the collective impact of IPE programs relative to the overall effectiveness of an IPE Center are lacking. This lack of systematic evaluation grounded in a theoretical framework limits the ability to determine a Center’s overall impact.
The conference theme ‘Quality Interprofessional Education and Accreditation’ will be addressed through sharing one Center’s systematic process for creating a comprehensive evaluation plan. Participants will engage in this process so they are equipped to replicate it at their home institutions. Evaluation planning engages the faculty and staff coordinating each individual IPE program with evaluation experts and an IPE evaluation committee with collective expertise in assessment pedagogy and research design. The outputs of this process, a program-specific logic model and evaluation plan, provide the framework for a proactive, systematic, longitudinal program evaluation, replacing evaluations that are reactive, driven by faculty or student interests, and prone to vary from year to year.
An overarching logic model guides the Center’s implementation and evaluation of all IPE programs, measuring the impact of the Center on matriculation, collaborative practice behaviors during training and following graduation, and benefits to the community. This information can also be used by individual healthcare professions’ programs as they seek assessment data to report to their accrediting bodies. Further, it can assist in securing both institutional and philanthropic funding of continued and emerging IPE programs and other Center initiatives.
The workshop sequence will include a 15-minute background/overview describing the rationale for developing a systematic evaluation process for IPE programs, inclusive of a description of the types of evaluation tools recommended in the Guidance document, and logic model development. A 30-minute guided activity will follow in which individuals at each table will share an IPE activity at their institution; the table will select one activity for which to create a sample logic model. Ten minutes of discussion of how individual logic models tie into a comprehensive logic model for a Center will follow. The hour will conclude with 5 minutes of Q&A. Active learning strategies will include individual reflection on where participants’ institutions are relative to individual and Center-wide evaluation planning and where they wish to be; think/pair/share relative to the development of a logic model; and open discussion/Q&A.
Learning objectives:
1. Differentiation between formative and summative evaluation approaches
2. Identification of data collection strategies for evaluating IPE programs
3. Description of how program-specific evaluation plans contribute to an IPE Center’s overall evaluation plan (i.e., collective programming)
4. Development of a program-specific logic model to guide evaluation planning