Evaluation and Research

Evaluation should begin during the initial stages of program development and implementation. Although that is the ideal, it is never too late to put an evaluation infrastructure in place. Systematic inquiry drives the evaluation through the analysis and reporting of program impact over time. Because evaluation serves all stakeholders, Xcalibur uses a Participatory Evaluation approach.

Xcalibur uses three distinct approaches within the structure of Formative and Summative evaluation:

  • Formative
      • Process Evaluation
  • Summative
      • Outcome Evaluation
      • Impact Evaluation and Research
Process Evaluation, sometimes referred to as intervention fidelity evaluation, is used to determine the integrity of implementation and service delivery. It is also important in the interpretation of outcome data: if the project is not achieving expected results, it may be because there are problems with program delivery. Process evaluation is formative and is not meant to report on the results of interventions.
Outcome Evaluation investigates whether changes have occurred for participants in the project, measures the degree and direction of those changes, and seeks to tie them to specific services. This is a way of testing that the logic model and/or rationale for the program is valid. Outcome evaluation determines the effectiveness of the project in achieving its intended outcomes and supports the case for continued resource allocation. It is summative and reflects the short-term results of interventions.
Impact Evaluation and Research examines systemic change and longitudinal value and generally includes a summative evaluation study. It addresses the "Why It Matters," "So What," or "What Works and What Doesn't" questions. The central theme is to improve the design and delivery of programs based on evidence acquired through rigorous investigation.

GEAR UP Evaluation

GEAR UP evaluation research takes place within an educational context and thus requires sensitivity to students, families, educators, and the community. Therefore, Xcalibur adheres to the American Evaluation Association Program Evaluation Standards and the Code of Federal Regulations, Part 46 - Protection of Human Subjects.

In the past seven years, much has changed in the GEAR UP world in terms of how we think about, conduct, and share the results of evaluation and research. Interventions have become more standardized and clearly defined, challenges and barriers more entrenched, and the burden of proof for accomplishments more demanding. These pressures and changes require GEAR UP grantees to respond with more rigorous methods for providing evidence of success.

Xcalibur’s Participatory Evaluation Contrasted with Conventional Evaluation

Who drives the evaluation?
  Participatory: Partners (e.g., school districts and institutions of higher education), grant project staff, families, and other stakeholders
  Conventional: Funders and program managers

Who determines the indicators of program progress?
  Participatory: Stakeholders, project staff, Xcalibur evaluator(s), and industry experts
  Conventional: Professional evaluators and outside experts

Who is responsible for data collection, analysis, and preparing final reports?
  Participatory: Shared responsibility of the evaluator and participating stakeholders
  Conventional: Professional evaluators and outside experts

What is the role of the evaluator?
  Participatory: Coach, facilitator, negotiator, “critical friend”, expert, and leader
  Conventional: Expert and leader

When is this type of evaluation most useful?
  Participatory:
  • There are questions about program implementation difficulties
  • There are questions about program effects on students and families
  • There is a need for local context and knowledge
  Conventional:
  • There is a need for independent judgment
  • Specialized information is needed that only experts can provide
  • Program indicators are standardized, rather than particular to a program

What are the costs?
  Participatory:
  • Time, energy, and commitment from stakeholders and project staff
  • Coordination of many players
  • Training, skills development, and support for key players
  Conventional:
  • Consultant and expert fees
  • Loss of critical information that only stakeholders can provide

What are the benefits?
  Participatory:
  • Local context and knowledge are incorporated into the evaluation
  • Verification of information from key players (validity)
  • Builds knowledge, skills, and relationships among stakeholders and project staff
  Conventional:
  • Independent judgment
  • Standardized indicators allow comparison with other research findings