SUPPORTING STATEMENT – PART B
B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
1. Description of the Activity
The respondent universe includes Marine Service members (E-1–E-3) in student status during the School of Infantry – Marines Awaiting Training (SOI-MAT) holding period. Marine Service members will be recruited from Marine Corps Base Camp Lejeune in North Carolina. The respondent universe also includes the Department of Defense Prime for Life 4.5 (PFL 4.5) instructors from Camp Lejeune who will administer the training to Marine Service members in the intervention condition.
This is a new data collection and evaluation effort. The evaluation will use a quasi-experimental design (QED). QEDs involve statistically matching intervention participants (personnel who receive PFL 4.5) at the time of analysis to similar individuals who do not receive the training. This QED will produce methodologically rigorous evidence of the effects of PFL 4.5 on key outcomes, yielding the necessary information for U.S. Marine Corps (USMC) leadership to make informed decisions and policies regarding the potential universal implementation of PFL 4.5. Marine Service members in student status awaiting training will be assigned to either the intervention condition (in which they will receive PFL 4.5) or the comparison condition (in which they will not receive PFL 4.5). In addition to looking for differences between intervention and comparison conditions, the evaluation will assess potential differences between Marine Service members assigned to infantry and non-infantry Military Occupational Specialties (MOS). The evaluation design, therefore, is a 2 (intervention vs. comparison) x 2 (infantry MOS vs. non-infantry MOS) QED within a single site (Camp Lejeune) to evaluate the outcomes of interest.
At baseline, the evaluation will enroll 1,200 Marine Service members as participants. Response rates and data quality will be monitored throughout the data collection period to ensure the evaluation meets the target sample sizes. The evaluation will collect up to 1,200 surveys at the 90-day follow-up. Accounting for possible attrition, if we achieve at least a 30% retention rate at follow-up, we expect a minimum of 360 participants with completed baseline and follow-up surveys. Participants in the intervention condition will be asked to complete a participant feedback form at the conclusion of their PFL 4.5 session. Up to 800 participants will be asked to complete the form; if we achieve a 75% response rate, we expect a minimum sample of 600 participants with completed participant feedback forms.
The evaluation will enroll 15 PFL 4.5 instructors as participants. Trained PFL 4.5 instructors at USMC will administer the training. Prior to facilitating PFL 4.5 sessions, instructors will be given guidance to complete an instructor fidelity worksheet after each PFL 4.5 session. A high response rate is expected because we anticipate that instructors will view completion of the fidelity worksheet as part of their responsibilities as a PFL instructor. However, the minimum expected response rate is 90% because an instructor may forget to complete the worksheet or encounter technical difficulties on site that prevent them from accessing it. If the expected number of worksheets is completed (approximately 3 per PFL 4.5 instructor), up to 45 worksheets will be collected. Lastly, interviews will be held with PFL 4.5 instructors. The expected response rate for these interviews is 100%, and 15 instructor interviews will be held during the data collection period.
TABLE 1. SAMPLE SIZE ESTIMATES FOR DATA COLLECTION

|  | Baseline Surveys | Participant Feedback Forms | Follow-Up Surveys | Instructor Fidelity Worksheets | Instructor Interviews |
| --- | --- | --- | --- | --- | --- |
| Overall | N = 1,200 | n = 800 | n = 1,200 | n = 45 | n = 15 |
| Intervention Group | n = 800 | n = 800 | n = 800 | N/A | N/A |
| Comparison Group | n = 400 | N/A | n = 400 | N/A | N/A |
| Prime for Life 4.5 Instructors | N/A | N/A | N/A | n = 45 | n = 15 |
2. Procedures for the Collection of Information
Statistical methodologies for stratification and sample selection
The evaluation will recruit all Marine Service members in student status during the SOI-MAT holding period from January 2025 to June 2026. Marine Service members will be recruited in groups of up to 30 until the sample size is sufficient to support the planned outcome analyses. To minimize the risk of timing effects, the intervention condition will be divided into two cohorts of 400 Marine Service members each (n = 800 at baseline). Between the two intervention cohorts, 400 Marine Service members will be recruited into a comparison condition that will not receive PFL 4.5; that is, recruitment of the comparison cohort will occur between recruitment of the two intervention cohorts to reduce the likelihood of timing effects. Propensity score weighting will be used to balance the intervention and comparison samples for the outcome analyses. All 15 PFL 4.5 instructors will be invited to complete instructor fidelity worksheets and participate in an interview.
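The specific propensity model will be finalized during analysis. Purely as an illustrative sketch, the Python example below shows one common form of propensity score weighting (inverse probability of treatment weighting from a logistic propensity model); the covariate names (age, paygrade, infantry_mos) and file name are hypothetical placeholders rather than the actual baseline variables.

```python
# Illustrative sketch only: inverse-probability-of-treatment weighting (IPTW)
# with a logistic propensity model. Covariate names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def add_iptw_weights(df: pd.DataFrame, treatment_col: str, covariates: list[str]) -> pd.DataFrame:
    """Estimate propensity scores and attach IPTW weights for outcome analyses."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treatment_col])

    # Predicted probability of receiving PFL 4.5 given baseline covariates
    propensity = model.predict_proba(df[covariates])[:, 1]

    df = df.copy()
    df["propensity"] = propensity
    # Treated units weighted by 1/p, comparison units by 1/(1 - p)
    df["iptw"] = df[treatment_col] / propensity + (1 - df[treatment_col]) / (1 - propensity)
    return df

# Hypothetical usage: 'treated' = 1 for intervention, 0 for comparison
# baseline = pd.read_csv("baseline_survey.csv")
# weighted = add_iptw_weights(baseline, "treated", ["age", "paygrade", "infantry_mos"])
```

The weighted sample would then be carried into the outcome models so that intervention and comparison groups are balanced on the measured baseline characteristics.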
Estimation procedures
No sample estimation procedures are needed in this evaluation design. That is, all eligible cases during the evaluation period will be entered into the evaluation if the participants agree to participate.
Degree of accuracy needed for the purpose discussed in the justification
Statistical power is an estimate of the probability that a significant statistical test will identify an effect when, in fact, such an effect exists.1 Power estimates were calculated using G*Power 3.1 software.
We will attempt to enroll 1,200 participants in the evaluation. Response rates and data quality will be monitored throughout the data collection period to ensure the evaluation meets the target sample sizes. After accounting for possible attrition, if we achieve a 30% retention rate at follow-up, we expect a minimum of 360 participants with completed baseline and follow-up surveys.
We expect that up to 60 additional participants may be excluded from analyses due to propensity score matching and response-level missingness. Based on these assumptions, we performed a power analysis for a final sample size of 300. With this sample size, we will be able to detect small effect sizes (f = 0.19) at 80% power and an alpha level of 0.05. To the extent that a greater proportion of the baseline sample is retained, NORC will be able to detect smaller effects of PFL 4.5 on the evaluation outcomes.
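The estimates above were produced in G*Power 3.1. As a rough cross-check only, a comparable calculation can be run in Python with statsmodels, under the assumption that the 2 (condition) x 2 (MOS) design is analyzed as a four-group F test with Cohen's f = 0.19, alpha = 0.05, and 80% power.

```python
# Rough cross-check of the G*Power calculation using statsmodels.
# Assumes the 2 (condition) x 2 (MOS) design is analyzed as a four-group F test.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Total N required to detect Cohen's f = 0.19 at alpha = 0.05 with 80% power
n_required = analysis.solve_power(effect_size=0.19, alpha=0.05, power=0.80, k_groups=4)
print(f"Required total N: {n_required:.0f}")  # approximately 300

# Achieved power for the expected analytic sample of 300
power_at_300 = analysis.solve_power(effect_size=0.19, nobs=300, alpha=0.05, k_groups=4)
print(f"Power at N = 300: {power_at_300:.2f}")  # close to 0.80
```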
Unusual problems requiring specialized sampling procedures
We do not anticipate a need for specialized sampling procedures given the evaluation design.
Use of periodic or cyclical data collections to reduce respondent burden
Two data points from each Marine Service member are necessary to identify change over time. Surveys will be administered at baseline and 90 days later. Without a follow-up data collection, NORC will be unable to assess the outcomes associated with PFL 4.5. Given the need to identify change over time, a minimum of two surveys is required.
3. Maximization of Response Rates, Non-response, and Reliability
Because the sampling frame includes all Marine Service members in student status awaiting training during the SOI-MAT holding period, the findings of the evaluation will be generalizable to Marine Service members in this period.
There are several facets of the evaluation design that will contribute to a strong response rate for this data collection and, thus, to the overall reliability and validity of the PFL 4.5 evaluation effort. NORC has worked in collaboration with an Evaluation Working Group (EWG) that consists of evaluation and program experts from the Department of Defense's Sexual Assault Prevention and Response Office (SAPRO), the U.S. Marine Corps, and the Prevention Research Institute (PRI). Including key collaborators who are involved and invested in reducing high-risk drinking and preventing sexual harassment helped inform appropriate participant recruitment and retention strategies. In addition to the EWG, discussions were held with a small group of Marine Service members to ensure that the recruitment and survey language is understandable, relatable, and acceptable to the target population.
NORC has developed the data collection protocols to be consistent with best practices in the field for online survey design and implementation, including multiple contact attempts across various contact methods and a variety of recruitment messages.2 At baseline, NORC will recruit participants in both the intervention and comparison groups to complete the survey in person via a Quick Response (QR) code or an abbreviated link. In the baseline survey, respondents will be asked to provide their .mil email address, a personal email address, and their mobile phone number so they can receive an invitation to the follow-up survey. Invitations to complete the follow-up survey will then be sent to participants' military email addresses and personal email addresses, and by text message to their mobile phone numbers.
Military response rates for survey data collection are generally low. Response rates tend to be particularly low for young, active-duty Marine Service members between the ages of 18 and 24,3 which is the key population demographic for the PFL 4.5 evaluation. While lack of motivation has been cited as a barrier to survey participation among young active-duty members,4 small financial incentives have been effective at boosting response rates in other DoD evaluations.5,6 Respondents will therefore be offered modest financial incentives for participation, based on guidance from USMC and following established practices in the field of survey research for reliable and valid data collection. To bolster participation, respondents will receive a $10 digital gift card after completing the baseline survey and a $20 digital gift card after completing the follow-up survey. Data show that incentives of varying monetary values have varying effects on response rates.7,8 For example, in a sexual assault victimization survey of university students testing the impact of different incentive values, the response rate was significantly higher for students offered $25 than for students offered $10.8 While attrition at the follow-up stage is common in survey data collection, the increased incentive at follow-up will help motivate participation in the second survey.
To address unit-level missing data, responders and non-responders will be compared at baseline and follow-up using basic aggregated demographic and background information (e.g., demographics, years of service, rank, and position in the Marine Corps) available for all eligible personnel receiving PFL 4.5 during the evaluation intake period. Any statistically significant differences in demographic or background variables will be addressed as covariates in later outcome models. To address item-level missing data (i.e., respondents skipping some questions), NORC will first assess the amount of missing data and whether the data are missing at random. If there is little missing data (e.g., under 10%), NORC will assess whether listwise deletion of these cases is statistically appropriate. If necessary, NORC will examine the impact of employing various methods to fill in missing values for partially completed surveys; for example, NORC may use imputation methods such as nearest-neighbor "hot deck" imputation or multiple imputation.
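The final approach to item-level missingness will depend on the patterns observed in the data. As an illustrative sketch only, the Python example below shows one way the missingness summary and a model-based imputation step could be implemented; the survey item and file names are hypothetical, and a hot-deck or multiple-imputation procedure could be substituted for the single imputer shown.

```python
# Illustrative sketch of the item-level missing-data checks described above.
# Column and file names are hypothetical placeholders for actual survey items.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def summarize_missingness(df: pd.DataFrame) -> pd.Series:
    """Percent missing per item, used to choose between listwise deletion and imputation."""
    return df.isna().mean().sort_values(ascending=False) * 100

def impute_items(df: pd.DataFrame, items: list[str], seed: int = 2025) -> pd.DataFrame:
    """Model-based imputation of partially completed surveys (one imputed dataset per call)."""
    imputer = IterativeImputer(random_state=seed, max_iter=10)
    imputed = df.copy()
    imputed[items] = imputer.fit_transform(df[items])
    return imputed

# Hypothetical usage:
# surveys = pd.read_csv("follow_up_survey.csv")
# pct_missing = summarize_missingness(surveys)
# if pct_missing.max() < 10:
#     analytic = surveys.dropna()  # listwise deletion if missingness is low
# else:
#     analytic = impute_items(surveys, ["drinking_freq", "risk_score", "attitudes"])
```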
4. Tests of Procedures
The evaluation recruitment language and baseline survey instrument were reviewed during a feedback discussion with five Marine Service members at Camp Lejeune. In addition, NORC has engaged a team of stakeholders in the EWG to review all study instruments and protocols. Stakeholders included USMC personnel, DoD PFL instructors, and PRI, the organization that developed PFL 4.5. The EWG provided feedback on PFL 4.5 training context, content, and the development of instruments and protocols.
5. Statistical Consultation and Information Analysis
a. Provide names and telephone number of individual(s) consulted on statistical aspects of the design.
b. Provide name and organization of person(s) who will actually collect and analyze the collected information.
Elizabeth Mumford (NORC)
Bruce Taylor (NORC)
Cynthia Simko (NORC)
1 Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Routledge; 1988. Accessed September 19, 2023. https://www.taylorfrancis.com/books/mono/10.4324/9780203771587/statistical-power-analysis-behavioral-sciences-jacob-cohen
2 Dillman DA, Smyth JD, Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th ed. Wiley; 2024.
3 Miller LL, Aharoni E. Understanding Low Survey Response Rates Among Young US Military Personnel. RAND Corporation; 2015. https://www.rand.org/pubs/research_reports/RR881.html
4 Newell CE, Rosenfeld P, Harris RN, Hindelang RL. Reasons for nonresponse on US Navy surveys: A closer look. Military Psychology. 2004;16(4):265-276.
5 Powell TM, Geronimo-Hara TR, Tobin LE, et al. Pre-incentive Efficacy in Survey Response Rates in a Large Prospective Military Cohort. Field Methods. Published online 2023. doi:10.1177/1525822X231163668
6 Berry-Cabán CS, Orchowski LM, Wimsatt M, et al. Perceived and Collective Norms Associated with Sexual Violence among Male Soldiers. Journal of Family Violence. 2020;35(4):339-347. doi:10.1007/s10896-019-00096-6
7 Krebs C, Lindquist C, Richards A, et al. The Impact of Survey Incentive Amounts on Response Rates and Estimates of Sexual Assault Victimization. 2016.
8 Zheng G, Oksuzyan S, Hsu S, et al. Self-reported interest to participate in a health survey if different amounts of cash or non-monetary incentive types were offered. Journal of Urban Health. 2018;95(6):837-849.