May 2025
Form Approved
OMB Control No: 0970-0XXX
Expiration Date: XX/XX/XXXX
PREIS Final Evaluation Report Template and Guidance
PREIS Final Evaluation Report Template
Authors
Randall Juras, Eleanor Harvill, Liz Yadav, and Michelle Blocklin (Abt Global), with support from the PREP Local Evaluation Support team. We’d also like to acknowledge the thoughtful contributions of the PREIS Steering Committee.
Abt Global LLC | 6130 Executive Boulevard | Rockville, MD 20852
THE PAPERWORK REDUCTION ACT OF 1995
The purpose of this information collection is to provide a standardized template and accompanying instructions to the 12 Personal Responsibility Education Program (PREP) Innovative Strategies (PREIS) grant recipients carrying out impact evaluations. This template will help them document their evaluation’s research questions, measures, study design, planned and actual implementation of the program, analytic methods, and findings. Public reporting burden for this collection of information is estimated to average 40 hours per respondent, including the time for reviewing instructions, gathering and maintaining the data needed, and reviewing the collection of information. This collection of information is required to systematically document details of the PREIS program evaluations, as authorized and appropriated under Section 513 of the Social Security Act. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information subject to the requirements of the Paperwork Reduction Act of 1995, unless it displays a currently valid OMB control number. The OMB # is 0970-0XXX and the expiration date is X/XX/XXXX. If you have any comments on this collection of information, please contact Selma Caal (Selma.Caal@acf.hhs.gov).
CONTENTS
Final Evaluation Report Components
1. Title Page
2. Executive Summary
3. Introduction
4. Description of Intervention and Comparison Conditions
5. Impact Evaluation Design and Methods
6. Findings
7. Discussion
8. References
9. Appendices
Final Evaluation Report Components
This template is provided to assist Personal Responsibility Education Innovative Strategies (PREIS) program grant recipients in producing final evaluation reports that clearly communicate both the evaluation findings and key details about the research methods used to produce those findings. The goal of the final evaluation report is to document the evidence of effectiveness of the innovative interventions and/or approaches to behavior change implemented under the PREIS program and to share that evidence with the Family and Youth Services Bureau (FYSB) and the public.
The template is organized around an outline of sections that should be included in a report on a study of the effectiveness of a PREIS program. Each section of the outline includes detailed guidance on what information to include, paying particular attention to the required components of any report as delineated in the Notice of Funding Opportunity (NOFO), the expectations described in the PREP Evaluation Standards for Rigor, and information necessary for the Teen Pregnancy Prevention Evidence Review (TPPER)1. In each section, applicable expectations from the PREP Evaluation Standards for Rigor are shown in red-outlined text boxes.
PREIS grantees and evaluators should work together to complete the template and draft the final report. To save time when completing this template, we encourage the use of text already prepared for evaluation plans. We have noted in green specific places where text from evaluation plans, including your analysis plans, can be used.
The following sections present a template for each section of your final evaluation report, including:
Title Page
Executive Summary
Introduction
Description of Intervention and Comparison Conditions
Impact Evaluation Design and Methods
Findings
Discussion
References
Appendices
1. Title Page
The final evaluation report should have a cover page with the title of the report and a list of all authors.
On the cover page, you should also:
Disclose any conflicts of interest, financial or otherwise. For example, if someone on the evaluation team received a grant from an organization included in the evaluation, you could include language such as “[insert name] reports receipt of a grant from [organization name] during the evaluation period.”
Include the attribution to FYSB as follows: “This publication was prepared under Grant Number [Insert Grant Number] from the Family and Youth Services Bureau (FYSB) within the Administration for Children and Families (ACF), U.S. Department of Health and Human Services (HHS). The views expressed in this report are those of the authors and do not necessarily represent the policies of HHS, ACF, or FYSB.”
2. Executive Summary
The one-page executive summary should provide a high-level overview of the study’s background, research design, and findings. The summary should include brief descriptions of:
Name of the program and any substantive adaptations of the program or service that were made for the study.
Study design (such as randomized controlled trial [RCT] or quasi-experimental design [QED]) and the level of assignment (e.g., youth, classroom, school).
Comparison condition (such as no or minimal intervention, treatment as usual, or other intervention).
Setting of the study (e.g., middle schools or community organizations; geographic location).
Key characteristics of the study sample. Include sociodemographic characteristics and any specific risks or issues.
Key outcomes measured in the study.
Analytic methods.
Study’s findings.
Conclusions.
3. Introduction
The report’s introduction should orient the reader to the study’s background and research questions.
Background
In this section, summarize the rationale for the local evaluation and how the local evaluation will help to inform current and future programming and expand the evidence base on adolescent pregnancy prevention. Make sure to identify:
The issue or problem the intervention addresses.
The rationale for selecting the intervention.
The intervention’s overall approach and goal.
The intervention’s target population.
The current knowledge base relevant to the specific intervention being evaluated.
Most of this information can be drawn from Section 1 of your evaluation plan, but if more recent references and statistics are available, you should update accordingly.
Expectation 2C.1: Target population is clearly defined.
Research Questions
In this section, articulate the study’s prespecified research questions about the impacts of the program on youth sexual behavior and other outcomes. Each research question should be designated as either primary or secondary, as it was specified in the evaluation plan.
Each research question should include the following:
The name of the intervention or component(s) tested,
The target population (e.g., 9th and 10th grade students),
The counterfactual condition (i.e., summarize the business-as-usual or other condition to which the treatment was compared; the condition/services the control or comparison group was offered),
The outcome domain,
An outcome domain is a general, or high-level category of outcome that may be affected by the treatment. Each domain may be measured using more than one outcome measure. Sexual risk behavior, knowledge, and intentions are examples of three outcome domains. The domain of knowledge might include measures like knowledge of how to prevent STIs or knowledge of reproductive health.
The length of time of exposure to the intervention condition.
Research questions can be copied from Section 2.1.1 of your evaluation plan.
4. Description of Intervention and Comparison Conditions
Section 4 of the report summarizes the intended or planned intervention and comparison conditions and describes how study participants were identified and selected into each of these conditions. Implementation findings presented in Section 6 will describe what each group actually received.
Intervention Condition
Summarize the program tested. This section should describe the intervention condition as intended (or what the intervention group was supposed to receive).
Identify the program or service by name and describe its key (core) components.
Discuss program activities and content.
Describe the intended implementation location or setting (for example, schools or clinics), intended duration and dosage, and intended staffing.
If there are multiple conditions of the program or service tested in the study (as in a multi-arm study with two different intervention conditions and a comparison condition), provide a clear description of each condition and, if applicable, how conditions differ or were modified from the manual, books, or writings describing the program or service model.
The description of the intervention’s activities should include enough detail so that readers can understand what components or activities were expected to lead to the observed outcomes. Key program components and activities can be described in a narrative form, or you can refer to your logic model illustrating how those activities/inputs are hypothesized to affect key mediators and ultimately cause improvements in outcomes. The level of detail should provide enough information to indicate the resources that would be needed to replicate the intervention. Include your logic model as an appendix in your report.
Text in this section can be copied or adapted from Sections 1.1 and 2.2.1 of your evaluation plan.
Expectation 3: Evaluation provides information about the key elements and the approach of the intervention to facilitate testing, development, or replication in other settings. Intervention has a fully specified logic model that identifies all key components of the intervention and mediators through which the intervention affects outcomes, including specific outcome domains.
Control/Comparison Condition
Provide a description of the control or comparison condition. The description should include:
A comprehensive description of the condition the intervention was being compared to, beyond identifying whether it is an alternative intervention, a “business-as-usual” condition, or a “no-treatment” condition. List any other services related to adolescent health that are available to youth in the communities where your intervention was conducted. Describe the source of information about the programs and services offered to or received by the comparison condition, if known. If this information is not fully known, explain what is known.
If the comparison condition received no or minimal treatment, specify whether the participants had an opportunity to participate in the program or service later (waitlist), opted out of participating in the program or service, or never had the opportunity to receive the program or service. If the comparison condition is a waitlist group, clearly indicate when the group was offered the intervention.
Text in this section can be copied or adapted from Section 2.2.2 of your evaluation plan.
5. Impact Evaluation Design and Methods
Section 5 of the report summarizes the evaluation design, sample, data collection, and analytic methods for the impact evaluation.
Independence
Include a short statement affirming the impact evaluation was independent of the intervention developer and entities implementing the intervention. Please describe the roles the independent evaluator played, including who was responsible for conducting random assignment, collecting outcome data, and analyzing findings.
Text in this section can be copied and adapted from Section 6 of your evaluation plan.
Expectation 1A: Evaluation is independent of the intervention developer and the entities responsible for implementing the intervention. Evaluators have an independent affiliation and are conducting the following activities independently: assignment of participants to treatment and control (RCTs only); outcome data collection; and impact analysis.
Pre-Registration
Please describe the study’s approach to pre-specification, referencing the public registry where the study was pre-registered and when, in relation to data collection.
Confirm that research questions and analyses were posed in advance of the study or describe any changes to the research questions after data collection or changes to analyses after analysis plan approval.
When reporting findings, consider adding a notation or symbol (such as ‘+’) to indicate findings that correspond to pre-specified analyses.
Describe any differences between planned and actual execution of the study, even if the analyses were not pre-registered.
Your registration plan was described in Section 2.1.2 of your evaluation plan.
Expectation 1C.1: Evaluation prespecifies planned impact analyses of participant outcomes.
Expectation 1C.2: Study was registered prior to data collection.
Research Design
Describe the research design used to assess the effectiveness of the intervention.
Specify the study design (e.g., randomized controlled trial with blocking; cluster randomized controlled trial; quasi-experimental design using propensity score matching; quasi-experimental design using another approach to matching).
Clearly describe the timing (i.e., month and year) of all key milestones of the study, including assignment, consent, intervention beginning and end, and data collection points. Note if these differed by condition.
Specify the unit of assignment (i.e., youth, class, teacher, school).
Note any limitations of the design or how it was implemented. For example, were there concerns about knowledge of study assignment during the consent process (if consent was gathered after random assignment in a clustered RCT) or concerns that different types of schools volunteered for intervention versus comparison in a QED? Consider addressing relevant issues that you have discussed with your LES liaison over the course of the study.
Text in this section can be copied and adapted, as needed, from Section 2.1 of your evaluation plan.
Evaluation Sample
In this section, you will describe the impact evaluation sample(s) (i.e., the individuals and/or groups that are contributing data to the evaluation, which may be a subset of the individuals or communities who are eligible for the intervention overall), and how they were identified and enrolled into the study.
Indicate how and when participants were recruited for participation in the study and, if applicable, any differences in recruitment procedures between conditions. Discuss recruitment separately for all applicable levels of the study sample (e.g., districts, schools, teachers, youths).
Describe whether the evaluation sample is different from the individuals, communities, or settings that received the intervention. If the evaluation sample is different, describe how it is different (e.g., were certain counties, community centers, facilitators, or individuals that received the intervention excluded from the evaluation, and why?) and whether it is a random sample (e.g., of the counties, community centers, facilitators, or individuals that received the intervention). If it is a non-random sample, indicate the percentage of settings and youth that were excluded (e.g., 10 percent of the community centers and 5 percent of youth who received the intervention were excluded from the evaluation).
Specify study participant inclusion and exclusion criteria for all levels of the study sample. For youths, this might include age, grade range, and/or specific demographic characteristics. For schools or districts, this might include average achievement, adolescent pregnancy rates, or percentage of students who are low income.
Text in this section can be adapted from Sections 2.3.2 and 2.3.3 of your evaluation plan.
Expectation 2C.2: Sample description provided. Grantees describe the universe of cases, the evaluation sample (if not the full universe), and sampling plan and eligibility criteria for data collection for the evaluation.
Expectation 2C.3: Evaluation is based on a sample that is representative of the populations and settings that receive the intervention.
Describe in detail how individuals or clusters of individuals (such as schools or classrooms) were assigned to conditions (e.g., random, matched comparison).
If random assignment was used:
Specify when random assignment was performed (e.g., before or after baseline measures completed, before or after consent).
Describe any anomalies in random assignment or ways that random assignment was compromised, and any solutions used.
If randomization was performed within blocks, sites, or strata, describe the process of randomization for each, including differences in assignment across blocks and how this was handled in the impact analyses.
If cluster randomization was used, include information about whether any participants joined a cluster after random assignment. If applicable, describe how and when they joined. Also, discuss whether the individual joining the cluster or the person making the assignment to the cluster knew the condition of the cluster at the time of joining.
If a matched comparison group was used:
Describe the procedure used to construct the groups, including the method and software used. Specify the characteristics that were used to construct the matched groups; if an equation or model was used in matching, specify the variables used in the model.
Describe how matching was handled in baseline and impact analyses, including how weights were applied (if applicable).
Text in this section can be copied or adapted from 2.3.2 and/or 2.3.3 of your evaluation plan.
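If it is helpful to describe the matching procedure in detail, the sketch below illustrates one common approach: estimating propensity scores with a logistic regression and forming 1:1 nearest-neighbor matches on the score. The data, variable names, and choice of method are hypothetical; report the procedure and software your evaluation actually used.

```python
# Illustrative sketch (simulated data): propensity score estimation and
# 1:1 nearest-neighbor matching. Hypothetical example only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 3))                    # matching covariates (e.g., age, pretest, % low income)
treated = rng.integers(0, 2, n).astype(bool)   # observed group membership

# Step 1: propensity score = predicted probability of being in the treatment group.
scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: for each treated unit, find the comparison unit with the closest score.
comparison_scores = scores[~treated].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(comparison_scores)
_, idx = nn.kneighbors(scores[treated].reshape(-1, 1))
matched_comparison = np.flatnonzero(~treated)[idx.ravel()]

print(f"{treated.sum()} treated units matched; first matched comparison indices: "
      f"{matched_comparison[:5]}")
```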
Expectation 2A.2a: Assignment in RCTs must involve a functionally random process.
In this section, describe your evaluation sample sizes and attrition according to your evaluation design. Note that some of the instructions below apply only to RCTs, some only to QEDs, and some to all studies.
Report the number of participants (and clusters, if applicable) randomized to each condition, including any who were dropped from or left the study after randomization. If cluster randomization was used, indicate the total number of participants in each condition at the time of randomization2. If the study analyzes a subset of participants, report the full randomized sample size, and describe how the subset was selected.
Include the number of participants (and clusters) by intervention and comparison condition who were randomized but were excluded or dropped from the study for reasons other than non-response/attrition (e.g., randomized in error, did not meet enrollment criteria). Provide numbers dropped by reason for dropping.
Report participant and cluster sample sizes by condition for each outcome separately at each measurement point (i.e., pre-test, post-test, and follow-up).
Calculate overall and differential attrition for each outcome measure at each follow-up measurement point that is used for analysis3.
Expectation 2A.2b: Study anticipates attrition in RCTs. Study calculates attrition appropriately. The sample used for the calculation of attrition should be defined as the number of individuals who are present for the follow-up outcome measurement as a percentage of the total number of members in the sample at the time that individuals learned the condition to which they were randomly assigned. This evaluation should include an assessment of both overall attrition (total sample loss between randomization and the post-test) and differential attrition (percentage difference in attrition between the treatment and control group).
Expectation 2A.2c: Study anticipates attrition in cluster RCTs. In cluster-level designs (e.g., schools are randomly assigned) with individual-level analysis (e.g., students), attrition should be assessed for both cluster-level units and for individual units; however, attrition should not be double-counted across levels of analysis. Cluster-level studies that involve different probabilities of individual-level assignment should control for the differential probability of assignment in the analysis.
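To make the attrition calculation described above concrete, the sketch below computes overall and differential attrition for one outcome at one follow-up point. The counts are hypothetical; substitute the numbers from your own sample tracking.

```python
# Illustrative sketch (hypothetical counts): overall and differential attrition
# for one outcome at one follow-up point, following the definitions above.

randomized_treatment = 250    # sample at the time participants learned their condition
randomized_comparison = 250
analyzed_treatment = 205      # present for the follow-up outcome measurement
analyzed_comparison = 190

# Overall attrition: total sample loss between randomization and follow-up.
overall_attrition = 1 - (analyzed_treatment + analyzed_comparison) / (
    randomized_treatment + randomized_comparison
)

# Differential attrition: difference in attrition rates between the two groups.
attrition_treatment = 1 - analyzed_treatment / randomized_treatment
attrition_comparison = 1 - analyzed_comparison / randomized_comparison
differential_attrition = abs(attrition_treatment - attrition_comparison)

print(f"Overall attrition: {overall_attrition:.1%}")             # 21.0%
print(f"Differential attrition: {differential_attrition:.1%}")   # 6.0%
```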
We strongly suggest using tables to report this information. Below are two examples of what these tables might look like for individual and cluster RCTs.
Example Table Shell for Reporting Sample Sizes at Randomization and in Analytic Sample Needed to Assess Attrition for an RCT with Individual-Level Assignment
| Outcome Measure | Follow-Up Measurement Point | Comparison Group: # Randomized | Comparison Group: # Analytic Sample | Treatment Group: # Randomized | Treatment Group: # Analytic Sample | Attrition: Overall | Attrition: Differential |
| --- | --- | --- | --- | --- | --- | --- | --- |
|  |  |  |  |  |  |  |  |
|  |  |  |  |  |  |  |  |
Example Table Shell for Reporting Sample Sizes at Randomization and in Analytic Sample Needed to Assess Attrition for an RCT with Cluster-Level Assignment
| Outcome Measure | Follow-Up Measurement Point | Comparison Group, Clusters a: # Randomized | Comparison Group, Clusters a: # Analytic Sample | Comparison Group, Youths b: # Randomized | Comparison Group, Youths b: # Analytic Sample | Treatment Group, Clusters a: # Randomized | Treatment Group, Clusters a: # Analytic Sample | Treatment Group, Youths b: # Randomized | Treatment Group, Youths b: # Analytic Sample | Attrition, Clusters a: Overall | Attrition, Clusters a: Differential | Attrition, Youths: Overall | Attrition, Youths: Differential |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  |  |  |  |  |  |  |  |  |  |  |  |  |  |
|  |  |  |  |  |  |  |  |  |  |  |  |  |  |
a Reported only for cluster-assignment evaluations. Not applicable for individual-assignment evaluations.
b Report the number of youths in non-attrited clusters only, for cluster-assignment evaluations.
Provide analytic sample sizes by condition for each outcome at each measurement point (i.e., pre-test, post-test, and follow-up).
Illustrate the flow of participants through the study with a CONSORT diagram4, such as the diagram below. If your evaluation uses cluster assignment, include sample sizes for both clusters and individuals in each box. If your evaluation uses individual assignment, include sample sizes for individuals in each box. You should tailor the diagram to align with the flow of your evaluation as needed (e.g., include any reasons for exclusion from the analysis such as exclusions due to exogenous variables). For additional guidance or support in tailoring the CONSORT diagram to meet your needs, consult your LES liaison.
[Example CONSORT diagram shell: flow of participants through the study]
Enrolled in study: n = ___
Assigned to Treatment or Comparison/Control groups
Assigned to treatment: n = ___ | Assigned to control/comparison: n = ___
Baseline survey (each group): Completed n = ___; Analyzed n = ___; Excluded from analysis n = ___
Short-term follow-up survey (each group): Completed n = ___; Analyzed n = ___; Lost to follow-up n = ___; Excluded from analysis n = ___
Long-term follow-up survey (each group): Completed n = ___; Analyzed n = ___; Lost to follow-up n = ___; Excluded from analysis n = ___
The information needed to complete this section can be found in your internal tracking files, your analytic files, and/or the final version of your Sample Progress Reporting Tool.
Data Collection
Indicate how data on outcomes of interest as well as key covariates, including baseline measures and demographics, were obtained from sample members. Include information on:
timing of data collection
mode of administration (e.g., paper surveys, web-based surveys, administrative records)
data collection procedures (e.g., who collected the data, in-person or remote administration, group or individual administration, length of survey)
incentives
If there were any differences between intervention and comparison conditions in timing, mode, and procedures used for data collection, describe them in this section.
Text in this section can be copied and adapted as needed from Section 2.4.1 of your evaluation plan.
Expectation 2A.1e: Measurement of outcomes is consistent between treatment and comparison groups. Data are collected the same way in each group. The time between baseline and follow-up measures does not systematically differ between groups.
Measures
Define the outcomes that the primary and secondary research questions examine. Briefly explain how you operationalized and constructed each outcome measure. If you constructed a measure from multiple items, document the source items and explain how you combined them to create an outcome for analysis. For non-binary outcomes constructed from multiple items, provide reliability information for each measure. Describe any differences in measure construction between the treatment and comparison groups. If a detailed description of measure construction is necessary, include that information in an appendix.
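As one way to report the reliability information requested above, the sketch below computes Cronbach's alpha for a hypothetical multi-item scale. The items and responses are illustrative only; report whichever reliability statistic your plan specifies.

```python
# Illustrative sketch (hypothetical items): Cronbach's alpha as one common
# reliability statistic for a scale constructed from multiple survey items.
import numpy as np

# Rows are respondents, columns are items on a hypothetical 4-item scale.
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
], dtype=float)

k = items.shape[1]
sum_item_variances = items.var(axis=0, ddof=1).sum()   # variance of each item, summed
total_score_variance = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```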
Text in this section can be copied and adapted from Section 2.4.2 and Appendix C of your evaluation plan.
Expectation 2A.1a: Outcome measures have face validity.
Expectation 2A.1b: Outcome measures are reliable.
Expectation 2A.1e: Measurement of outcomes is consistent between treatment and comparison groups. Measures are constructed the same way for both treatment and comparison groups.
Expectation 2A.1f: Outcome measures are not over-aligned.
Analytic Methods
Describe the analytic methods you used to answer the study’s primary research questions. For the secondary research questions, note whether the analytic methods differ from the analytic methods for the primary research questions, and if so, briefly describe those differences. These analytic methods should match the approved approach from your evaluation plan; if you are making changes from your evaluation plan, consult your LES liaison.
For the main impact analysis, briefly summarize (1) the analytic model, including the covariates included; (2) prespecified cutoffs for statistical significance; (3) how you handled missing data; and (4) if applicable, information on sample weights, multiple comparisons, and other items related to study design (for example, clustering corrections or indicators for strata).
Include equations for estimating impacts in the appendix for transparency, along with any technical details not included in the body of the text.
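For illustration only, the sketch below shows a minimal impact regression of the kind summarized above, using simulated data, the minimum covariate set named in Expectation 2A.1d, and cluster-robust standard errors at the unit of assignment. All variable names and the model form are hypothetical; your approved analysis plan governs the actual specification.

```python
# Illustrative sketch (simulated data): impact regression with baseline outcome
# and demographic covariates, and cluster-robust standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),       # 1 = intervention, 0 = comparison
    "pretest": rng.normal(0, 1, n),           # baseline measure of the outcome
    "age": rng.integers(14, 18, n),
    "female": rng.integers(0, 2, n),
    "race_minority": rng.integers(0, 2, n),
    "school_id": rng.integers(0, 20, n),      # cluster identifier (unit of assignment)
})
df["outcome"] = 0.2 * df["treatment"] + 0.5 * df["pretest"] + rng.normal(0, 1, n)

model = smf.ols("outcome ~ treatment + pretest + age + female + race_minority",
                data=df)
# Cluster-robust standard errors at the unit of assignment (e.g., school).
results = model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

# Report the treatment coefficient, its standard error, and the exact p-value.
print(results.summary().tables[1])
```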
If you use different methods for your secondary research questions or additional research questions, add a subheading and then describe those methods.
Describe any details about data cleaning in an appendix. In addition, if you employed alternate approaches to handling missing data, or if you tested alternate model specifications, include that information in an appendix and reference the appendix in this section.
Text in this section can be copied and adapted from Sections 2.5.1 (missing data), 2.5.2 (hypothesis testing), 2.5.4 (analytic approach), and 2.5.5 (differences for secondary contrasts) of your evaluation plan.
Expectation 1C.1: Evaluation prespecifies planned impact analyses of participant outcomes. Include the cutoff for statistical significance. If there is more than one primary contrast, describe the strategy for minimizing or adjusting for multiple comparisons.
Expectation 2A.1d: Analytic models must include a minimum set of covariates. At a minimum, regression models used to estimate program impacts control for a baseline measure of the outcome of interest, if available, as well as baseline measures of three key demographic characteristics: age or grade level, sex, and race/ethnicity.
Expectation 2A.1g: Evaluation uses acceptable practices for addressing missing data.
6. Findings
Section 6 of the report describes findings from the impact evaluation.
Baseline Equivalence and Sample Descriptive Characteristics
For QEDs and high-attrition RCTs, provide information on how you assessed baseline equivalence for each primary contrast in the analytic sample(s) and present the results of the assessments. (Assessing baseline equivalence is optional for low-attrition RCTs5). Then, describe the sample characteristics for the reader.
Remember that establishing baseline equivalence is required for primary contrasts in QEDs and high-attrition RCTs. Although it is not a requirement of the Standards for Rigor, the LES team recommends also establishing baseline equivalence for secondary contrasts, to ensure that any findings are reviewable by the TPPER and also to strengthen any journal article submissions.
Describe how the baseline mean difference between the intervention and comparison group was calculated (e.g., a simple mean difference in sample means, or a model-based approach using a statistical model that adjusts for blocking, weighting, and/or clustering). A minimal illustrative calculation appears after this list.
If a model-based approach was used:
Describe the statistical model used to calculate the baseline mean difference (e.g., linear regression, multilevel model).
Clearly indicate the unit of analysis (individual or cluster) and, if applicable, explain how clustering was addressed.
Describe adjustments for blocking or weighting.
Provide descriptive statistics by condition for each analytic sample6 in the study:
Demographic characteristics required for assessing baseline equivalence, which are age or grade level, sex, and race/ethnicity7.
Baseline measures of outcomes.
If it was not feasible to measure the outcome at baseline, report on other baseline constructs in the same or similar domain to the outcome and report the correlation with the outcome. Pre-test alternatives should be correlated with the outcome and/or may be a common precursor to the outcome (i.e., knowledge, intentions, skills, or attitudes).
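As a minimal illustration of the simple (unadjusted) approach to the baseline difference and the standardized difference reported in the table below, the sketch uses hypothetical pretest scores; a model-based approach that adjusts for blocking, weighting, or clustering would differ.

```python
# Illustrative sketch (hypothetical data): simple mean difference and
# standardized difference for a continuous baseline measure.
import numpy as np

treatment_pretest = np.array([2.1, 3.4, 2.8, 3.0, 2.5])   # hypothetical scores
comparison_pretest = np.array([2.6, 3.1, 2.9, 3.3, 2.7])

diff = treatment_pretest.mean() - comparison_pretest.mean()

# Pooled standard deviation (simple two-group version).
n_t, n_c = len(treatment_pretest), len(comparison_pretest)
pooled_sd = np.sqrt(
    ((n_t - 1) * treatment_pretest.var(ddof=1) +
     (n_c - 1) * comparison_pretest.var(ddof=1)) / (n_t + n_c - 2)
)

standardized_difference = diff / pooled_sd
print(f"Treatment - comparison difference: {diff:.3f}")
print(f"Standardized difference: {standardized_difference:.3f}")
```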
You can use the table below to report baseline descriptive statistics8. See Part 4 of the Guide for Calculating Attrition and Baseline Equivalence for additional guidance on providing evidence of baseline equivalence in the table below. You should also include a narrative description of the sample characteristics for the reader.
Table Shell for Reporting Results from Baseline Equivalence Assessment
| Baseline Measure | Comparison Group: Sample Size | Comparison Group: Mean | Comparison Group: (SD) | Treatment Group: Sample Size | Treatment Group: Mean | Treatment Group: (SD) | Treatment – Control Difference | Standardized Difference | p-value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Contrast 1 (Outcome/time point) |  |  |  |  |  |  |  |  |  |
| Pretest for Outcome 1 |  |  |  |  |  |  |  |  |  |
| Age or Grade |  |  |  |  |  |  |  |  |  |
| Sex (% female) |  |  |  |  |  |  |  |  |  |
| Race/ethnicity (% minority) |  |  |  |  |  |  |  |  |  |
| Contrast 2 (Outcome/time point) |  |  |  |  |  |  |  |  |  |
| Pretest for Outcome 2 |  |  |  |  |  |  |  |  |  |
| Age or Grade |  |  |  |  |  |  |  |  |  |
| Sex (% female) |  |  |  |  |  |  |  |  |  |
| Race/ethnicity (% minority) |  |  |  |  |  |  |  |  |  |
Text describing how baseline differences were calculated can be copied or adapted from Section 2.5.3 of your evaluation plan.
Expectation 2A.3a: Quasi-experimental designs (QEDs) have baseline equivalence in the analytic sample.
Expectation 2A.3b: Cluster QEDs have baseline equivalence in the analytic sample.
Implementation Findings
PREIS evaluations were not required to design or conduct a process/implementation study of their PREIS-funded interventions. However, FYSB expects that the final evaluation report will document and report on implementation. If your evaluation included a more comprehensive implementation study, consider developing a separate, stand-alone implementation report.
This section should provide information on the program as youth received it and the context in which it was delivered. If any unplanned adaptations to implementation occurred during the program, you should describe these adaptations here. This section should tell the story of implementation, providing context for the impacts and key lessons learned from implementation. This section should also provide information on the comparison group experience.
We encourage the use of subheadings in the text of this section to discuss findings related to:
Fidelity of implementation (i.e., the extent to which the intervention was delivered as intended)9 and dosage/attendance (e.g., the percent of youth who attended 75% or more of the sessions); a minimal dosage calculation sketch follows this list.
Quality of implementation and engagement (e.g., successes, challenges, and solutions; youth satisfaction and reports or observations on engagement; facilitator reflections).
Experiences of the comparison group and context (e.g., youth satisfaction and engagement; facilitator reflections).
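As a minimal sketch of the dosage summary mentioned above, the example below computes the percent of youth attending at least 75 percent of sessions from hypothetical attendance counts.

```python
# Illustrative sketch (hypothetical attendance data): percent of youth who
# attended at least 75% of sessions, one common dosage summary.
attendance = {"y01": 10, "y02": 6, "y03": 12, "y04": 9, "y05": 4}  # sessions attended
total_sessions = 12

met_threshold = [y for y, n in attendance.items() if n / total_sessions >= 0.75]
pct = 100 * len(met_threshold) / len(attendance)
print(f"{pct:.0f}% of youth attended 75% or more of the sessions")
```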
Expectation 2B: Evaluation accurately describes the intervention as evaluated. Study authors note any substantial variations in implementation from the intended model, even if these variations were purposeful adaptations to the target population or setting.
Impact Evaluation Findings
Present impacts of the program in tables using the table shells specified in your analysis plan (i.e., Section 2.5.7 of your evaluation plan) and then describe the findings in the text (e.g., The intervention group was significantly less likely to have recently engaged in unprotected sex at the 6-month follow-up than the comparison group). The LES team recommends that one subsection (and table) shows impacts for primary contrasts and a separate subsection (and table) shows impacts for secondary contrasts. Make sure each finding corresponds to a prespecified contrast and answers a prespecified research question.
For each finding:
Report descriptive statistics (e.g., adjusted and unadjusted means, standard deviations, proportions) and sample sizes10 by condition for the outcome measure at the time point that corresponds to the contrast. Use the table shells specified in your analysis plan (i.e., Section 2.5.7 of your evaluation plan).
Report the model coefficients for the treatment effects, their standard errors, and exact p-values from impact analyses.
If outcome data were imputed and/or baseline data were imputed or missing:
Report the sample sizes, means, and standard deviations in both conditions for samples with and without missing data11.
Report the correlation between pretest and posttest, calculated using only non-imputed data3.
If you conducted sensitivity analyses (e.g., alternate approaches to missing data, inconsistent data, alternative model specifications, and so on), report the results from those analyses in an appendix. Refer to the appendix in this section and explain the rationale for those analyses and whether the findings differ from those presented here. You may have proposed sensitivity analyses in your analysis plan, or it is also possible that ideas for additional sensitivity analyses emerged during your analysis phase and you can include them in the appendix.
7. Discussion
Summarize the impact and implementation findings and discuss their implications, clearly addressing each of the main hypotheses and prespecified research questions.
Revisit the issue or problem the intervention addresses. Discuss the extent to which the intervention as delivered benefited youth. Compare the magnitude and scope of the observed effects to the magnitude and scope of the problem identified in the background section.
Present the implications of your evaluation and findings for the broader field. Discuss important lessons learned that explain the impacts or that could help others replicate the program or serve the same target population.
Discuss any limitations of the study (for example, issues with randomization, study power, or implementation) and any related caveats the readers should keep in mind.
Describe next steps for both the specific program and the field more generally to continue the research and continue to improve outcomes for youth.
8. References
Provide the full reference for any work cited in the report.
9. Appendices
Based on the guidance for the report sections, your report might include the following appendices. It might not be necessary to include appendices for all of these items, or you may choose to include additional appendices. Please label your appendices sequentially:
Logic Model
Detailed Specification of Measures
Methods Used to Clean and Prepare Data
Model Specifications
Missing Data
Sensitivity Analyses
1 Note that PREIS grantees and evaluators may choose to include additional information in their final evaluation reports to meet the needs of their teams and any intended audiences of the report.
2 Note that the timing of randomization is considered to be the time when individuals learned the condition to which they were randomly assigned.
3 For additional guidance on calculating attrition, see the Guide for Calculating Attrition and Baseline Equivalence.
4 Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 Statement: Updated guidelines for reporting parallel group randomized trials. BMJ, 340, c332. http://www.consort-statement.org/
5 See Exhibit 2 on p. 9 of the PREP Evaluation Standards for Rigor for boundaries of what is considered “low attrition”.
6 The analytic sample is the sample of participants included in an analysis of the impact of the program or service on an outcome. Studies may have multiple analytic samples because the number of participants available for analysis may differ for different outcomes and different time points within a study.
7 You may also establish baseline equivalence on additional baseline characteristics of your sample if you choose.
8 If your contrasts have the same sample/sample sizes, you do not need to report baseline equivalence for demographic variables separately for each contrast as this information would be the same. If you have many contrasts with different samples/sample sizes, you may consider reporting on baseline equivalence for secondary contrasts in an appendix.
9 If you are assessing the fidelity of implementation using the fidelity matrix provided by the LES team, see this resource for additional guidance, and we suggest including the fidelity matrix in an appendix.
10 If you are using complete case analysis, you can report the sample size once in the table header or a table note.
11 This information can be included in an appendix. The Prevention Services Clearinghouse has helpful guidance on missing data bias calculations and corresponding table templates here (pp. 18-27): Title IV-E Prevention Services Clearinghouse Reporting Guide for Study Authors, Handbook of Standards and Procedures, Version 2.0. You may also consult your LES liaison for support.