Original Article | Volume 151, P65-74, November 2022

The RIPI-f (Reporting Integrity of Psychological Interventions delivered face-to-face) checklist was developed to guide reporting of treatment integrity in face-to-face psychological interventions

  • Jesus Lopez-Alcalde¹
    Institute for Complementary and Integrative Medicine, University Hospital Zurich and University of Zurich, Zurich, Switzerland; Faculty of Health Sciences, Universidad Francisco de Vitoria (UFV), Madrid, Spain; Instituto Ramón y Cajal de Investigación Sanitaria (IRYCIS), Unidad de bioestadística clínica, Hospital Universitario Ramón y Cajal, (CIBERESP), Madrid, Spain
  • Ninib Yakoub¹
    Institute for Complementary and Integrative Medicine, University Hospital Zurich and University of Zurich, Zurich, Switzerland
  • Markus Wolf
    Department of Psychology, University of Zurich, Zurich, Switzerland
  • Thomas Munder
    Department of Psychology, University of Zurich, Zurich, Switzerland
  • Erik von Elm
    Cochrane Switzerland, Centre for Primary Care and Public Health (Unisanté), University of Lausanne, Lausanne, Switzerland
  • Christoph Flückiger
    Department of Psychology, University of Zurich, Zurich, Switzerland
  • Christiane Steinert
    International Psychoanalytic University Berlin (IPU), Berlin, Germany; Department of Psychotherapy and Psychosomatics, Justus Liebig University Giessen, Giessen, Germany
  • Sarah Liebherz
    Department of Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
  • Jenny Rosendahl
    Institute of Psychosocial Medicine, Psychotherapy and Psychooncology, Jena University Hospital, Jena, Germany
  • Claudia M. Witt
    Institute for Complementary and Integrative Medicine, University Hospital Zurich and University of Zurich, Zurich, Switzerland; Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Social Medicine, Epidemiology and Health Economics, Berlin, Germany; Center for Integrative Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
  • Jürgen Barth (corresponding author)
    Institute for Complementary and Integrative Medicine, University Hospital Zurich and University of Zurich, Zurich, Switzerland
    Correspondence: Institute for Complementary and Integrative Medicine, University Hospital Zurich, Sonneggstrasse 6, CH-8091 Zurich, Switzerland. Tel.: +41-44-255-4896; fax: +41-44-255-43-94.

  ¹ Both share the role of first authors.
Open Access | Published: August 01, 2022 | DOI: https://doi.org/10.1016/j.jclinepi.2022.07.013

      Abstract

      Objectives

      Intervention integrity is the degree to which the study intervention is delivered as intended. This article presents the RIPI-f checklist (Reporting Integrity of Psychological Interventions delivered face-to-face) and summarizes its development methods. RIPI-f proposes guidance for reporting intervention integrity in evaluative studies of face-to-face psychological interventions.

      Study Design and Setting

      We followed established procedures for developing reporting guidelines. We examined 56 documents (reporting guidelines, bias tools, and methodological guidance) for aspects relevant to the integrity of face-to-face psychological interventions. Eighty-four items were identified and grouped according to the template for intervention description and replication (TIDieR) domains. Twenty-nine experts from psychology, medicine, and other disciplines rated the relevance of each item in a single-round Delphi survey. A multidisciplinary panel of 11 experts discussed the survey results in three online consensus meetings and drafted the final version of the checklist.

      Results

      We propose RIPI-f, a checklist with 50 items. Our checklist enhances TIDieR with important extensions, such as therapeutic alliance, providers' allegiance, and the adherence of providers and participants.

      Conclusion

      RIPI-f can improve the reporting of face-to-face psychological interventions. The tool can help authors, researchers, systematic reviewers, and guideline developers. We suggest using RIPI-f alongside other reporting guidelines.

      Graphical abstract


      What is new?

        Key findings

      • Intervention integrity is the degree to which a study intervention is delivered as intended.
      • The CONSORT-SPI and TIDieR guidelines already address the reporting of psychological interventions, but more guidance on describing integrity is needed.

        What this adds to what was known?

      • The RIPI-f checklist (see Table 1) proposes the first critical set of items to be reported to allow for an evaluation of intervention integrity in face-to-face psychological interventions.
      • The checklist considers the peculiarities of psychological interventions. Examples are the providers' allegiance to the intervention, their motivation, the therapeutic alliance, and the participants' receipt and enactment of the intervention and their expectations.
      • The checklist applies to any evaluative study, such as randomized or nonrandomized trials or observational studies.

        What is the implication, what should change now?

      • RIPI-f integrates TIDieR and complements other relevant reporting guidelines, such as CONSORT-SPI, SPIRIT, and TREND.
      • Adherence to RIPI-f may substantially enhance the reporting of studies evaluating face-to-face psychological interventions.
      • RIPI-f should be considered a living document. We encourage piloting, feedback, and further discussion.

      1. Introduction

      Intervention integrity (hereafter “integrity”) is the degree to which the study intervention is delivered as intended [Yeaton and Sechrest; Perepletchikova; Perepletchikova, Treat, and Kazdin; Perepletchikova, Hilt, Chereji, and Kazdin]. It comprises aspects such as what intervention was delivered, how, and to which study participants. The terminology varies among disciplines and includes integrity, fidelity, and adherence [Carroll et al.]. See the glossary in Appendix 1.
      Systematic and transparent reporting of integrity is crucial for several reasons. First, intervention integrity is necessary for the internal validity of studies determining the efficacy of psychological interventions. Compromised integrity can bias the study results, making it difficult to judge whether the observed effects can be causally attributed to the interventions applied [Yeaton and Sechrest; Bellg et al.; Leeuw et al.; Mayo-Wilson; Leichsenring et al.]. Second, varying integrity levels may reduce statistical power and thus lead to nonsignificant results [Borrelli et al.; Dumas et al.]. Third, integrity informs the external validity of a study, that is, the degree to which its findings generalize to the real world. Readers must therefore know the intervention's integrity to judge whether the study findings apply to their practice [Hoffmann et al.; Montgomery et al.]. Along these lines, it is essential to know whether a psychological intervention is effective only when delivered with high levels of treatment integrity [Goense et al.]. Fourth, knowing the integrity of the psychological interventions delivered in a trial, in both the intervention and comparator groups, is needed to judge the fairness of treatment comparisons. Understanding the comparison group helps explain the observed efficacy; minimal care in the control group is associated with greater effect sizes [de Bruin et al.]. However, there is much room for improvement in the reporting of treatment as usual (TAU), the most frequently used control group in psychotherapy trials for depression [Munder et al.]. Fifth, information about integrity is essential in evidence synthesis, particularly for assessing performance bias (bias due to deviations from intended interventions) and the external validity of the evidence [Guyatt et al.].
      Psychological interventions are interpersonal or informational activities that target biological, behavioral, cognitive, emotional, interpersonal, social, or environmental factors to improve health and wellbeing [England, Butler, and Gonzalez, Institute of Medicine (U.S.)]. Examples are health education, lifestyle interventions, and psychotherapy. Face-to-face psychological interventions are delivered in the same physical space where the provider and the participant interact [Suh et al.].
      Assessing integrity in face-to-face psychological interventions is more complex than in many medical interventions. First, face-to-face psychological interventions are collaborative [Yeaton and Sechrest; Craig et al.], involving therapist–patient interactions at different levels, including individual, couple, family, and group settings. Second, integrity depends on several factors, such as the providers' and recipients' behaviors, skills, and experiences [Montgomery et al.; Durlak and DuPre]. Notably, patients' motivation and therapists' allegiance to the treatment are assumed to be necessary conditions for effective treatments [van der Helm, Kuiper, and Stams; Wampold and Imel]. Third, psychological interventions rest on theories different from those underpinning medical interventions, and established medical concepts (like the dose–response relationship) do not always apply [Montgomery et al.; Michie et al.]. Fourth, while a framework for assessing intervention delivery in behavioral intervention trials is available [Leeuw et al.], there is neither an equivalent for psychological interventions nor corresponding guidance for reporting.
      Although integrity is central to evaluating, comparing, and implementing psychological interventions [Dumas et al.], and researchers in psychology recognize its relevance, it is rarely verified and reported [Perepletchikova et al.; Borrelli et al.; Hoffmann et al.; Glasziou et al.; Hoffmann, Erueti, and Glasziou]. Integrity is described in just 6% of psychological intervention articles [Borrelli et al.], so its reporting requires urgent improvement. Several factors can explain this poor reporting. First, integrity definitions in psychology research fail to include all relevant components [Leeuw et al.; Borrelli et al.]. Second, there are no precise editorial requirements [Perepletchikova et al.], and integrity reporting guidelines specific to psychological interventions are lacking. Although CONSORT-SPI 2018 and TIDieR [Hoffmann et al.; Montgomery et al.] provide helpful general advice, in our opinion they do not cover all the key actors and components relevant to the integrity of psychological interventions, for example, nonspecific intervention aspects such as therapeutic alliance or expectations.
      The RIPI-f (Reporting Integrity of Psychological Interventions delivered face-to-face) checklist was developed by consensus. RIPI-f is intended as a helpful tool for health researchers writing manuscripts of studies evaluating the effects of face-to-face psychological interventions. It proposes a clear list of items that should be reported so that readers can judge the integrity of face-to-face psychological interventions. This article aims to (1) describe the methods used to develop the checklist and (2) present the checklist.

      2. Development of the checklist

      We implemented a five-step procedure (Fig. 1) following established methods [Moher et al.]. First, the steering group (SG) (J.B., N.Y., and J.L.A.) defined the checklist's purpose. Second, we consulted the following sources for risk of bias/quality tools and guidance for measuring and reporting integrity: (a) EQUATOR until June 2020 (psychology, psychiatry, public health, and behavioral medicine); (b) quality tools used in systematic reviews published in Clin Psychol Rev from 2015 to January 2019; and (c) bibliographies of relevant articles and forward snowballing. The SG assessed 56 documents (Appendix 2), identified 84 candidate aspects potentially relevant to intervention integrity, and organized them into 13 domains and five subdomains (Appendix 3) following the TIDieR structure [Hoffmann et al.]. Third, a multidisciplinary group of 29 experts (psychologists with different psychotherapy orientations and physicians, all with research experience) completed a single-round online Delphi survey (experts invited: 304; response rate: 9.5%). They rated the relevance of each item on a 9-point Likert scale as very important (7–9 points), important (4–6 points), or not important (1–3 points). The respondents indicated their confidence in each rating (high, moderate, or low) and could comment or propose new items. The SG analyzed the survey results and classified the relevance of each item as very critical, critical, not critical, or unclear (Box 1). Fifty-six percent (47/84) of the items were classified as ‘very critical’, 38% (32/84) as ‘critical’, and 6% (5/84) as unclear. Because no item was deemed ‘not critical’, only one Delphi round was conducted.
      Fig. 1. Stepwise procedure to develop the RIPI-f checklist. ¹Delphi exercise: July 20 to August 13, 2020; ²Online consensus meetings: September 23, October 13, and December 8, 2020 (4 h each).
      Box 1. Item relevance for the integrity of face-to-face psychological interventions.
      Very critical item: At least 50% of respondents rated the item as very important with high confidence in that rating.
      Critical item: At least 80% of respondents rated the item as important or very important with moderate or high confidence in that rating.
      Not critical item: At least 80% of respondents rated the item as not important (regardless of their confidence in the rating).
      Unclear: The survey results did not allow the item's relevance to be classified.
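      The item-relevance rules above amount to a small decision procedure. As an illustrative sketch only (not part of the published methods; the rule precedence and the representation of responses as rating/confidence pairs are our assumptions):

```python
# Sketch of the item-relevance classification rules, assuming each Delphi
# response is a pair (rating on a 1-9 Likert scale, confidence level).

def classify_item(responses):
    """Classify an item's relevance from Delphi responses.

    responses: list of (rating, confidence) tuples, where confidence is
    "high", "moderate", or "low". Returns "very critical", "critical",
    "not critical", or "unclear".
    """
    n = len(responses)
    # Share rating the item very important (7-9) with high confidence
    very_imp_high = sum(1 for r, c in responses if r >= 7 and c == "high") / n
    # Share rating it important or very important (4-9) with moderate/high confidence
    imp_mod_high = sum(1 for r, c in responses
                       if r >= 4 and c in ("high", "moderate")) / n
    # Share rating it not important (1-3), regardless of confidence
    not_imp = sum(1 for r, _ in responses if r <= 3) / n

    if very_imp_high >= 0.5:
        return "very critical"
    if imp_mod_high >= 0.8:
        return "critical"
    if not_imp >= 0.8:
        return "not critical"
    return "unclear"
```

      For instance, an item rated very important with high confidence by half of the respondents would be classified as very critical under these rules.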
      The SG held three online consensus meetings. The 11 attendees (nine of whom had also participated in the survey) were psychologists and physicians, all with research experience. They discussed the survey procedures and results, including the decision to conduct only one Delphi round. They also commented on, reorganized, and added items to the initial list and approved its final version. Decisions were adopted by consensus and voting (two-thirds of the votes needed for approval). The manuscript draft was circulated via email among the meeting participants. The SG incorporated their feedback and approved the final version of the manuscript.

      3. The RIPI-f checklist

      3.1 Scope

      RIPI-f aims to improve the reporting of intervention integrity in evaluative studies of face-to-face psychological interventions. Our tool focuses on the face-to-face setting and excludes digital interventions, such as cognitive-behavioural therapy delivered via smartphone, because in face-to-face psychological interventions the direct interaction between provider and participant is vital. Digital interventions may merit a separate extension of the checklist [Hrynyschyn and Dockweiler; Agarwal et al.].
      The checklist includes critical items that authors should report separately for each study arm involving a face-to-face psychological intervention, including control groups such as those defined as TAU. Thus, although our checklist does not itself assess the fairness of treatment comparisons, straightforward reporting of intervention integrity in both study arms is a necessary step toward this aim.
      RIPI-f applies to protocols of evaluative studies or their full reports, such as randomized controlled trials, nonrandomized trials, or observational studies. RIPI-f complements relevant reporting guidelines, particularly TIDieR [Hoffmann et al.], the CONSORT-SPI 2018 Extension [Montgomery et al.], SPIRIT 2013 [Chan et al.], and TREND [Des Jarlais et al.].

      3.2 Contents

      Table 1 presents RIPI-f. The checklist includes 50 items, divided into 12 domains and 16 subdomains. We tried to maintain the TIDieR domains but elaborated on or added domains, subdomains, and items where necessary.
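      The checklist records, for each item, a "Planned" and an "Observed" answer (R = reported, NR = not reported, NA = not applicable). As an illustrative sketch of this structure only (the data model and the completeness summary are our assumptions, not part of RIPI-f):

```python
# Hypothetical data model for a filled-in RIPI-f checklist: each item number
# maps to a (planned, observed) pair of answers, each one of R, NR, or NA.

VALID = {"R", "NR", "NA"}

def completeness(answers):
    """Return the proportion of applicable (non-NA) entries answered R.

    answers: dict mapping item number -> (planned, observed).
    """
    applicable = reported = 0
    for planned, observed in answers.values():
        for a in (planned, observed):
            if a not in VALID:
                raise ValueError(f"invalid answer: {a}")
            if a != "NA":
                applicable += 1
                reported += a == "R"
    return reported / applicable if applicable else 1.0
```

      A reviewer could use such a summary to flag arms of a study whose integrity reporting is largely incomplete.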
      Table 1The RIPI-f checklist
      DomainItemItem brief name and explanationPlanned
      Possible answers: R, NR or NA. Also, detail the location in the text.
      Observed
      Possible answers: R, NR or NA. Also, detail the location in the text. Detail if there were relevant differences between the observed situation and the plan.
      1. Intervention brief name
      1Intervention brief name: A name or phrase that describes the intervention.NA
      2. Why
      2Rationale: Justification of why the intervention can work (any rationale, theory, or goal of the components essential to the intervention/s).NA
      3. What
      3Main intervention/s: (a) Main intervention/s and components; (b) Manual (or protocol) for delivery (with reference, online appendix, or URL, preferably with a permanent link).
      4Materials: Physical or informational materials used by providers and participants (with reference, online appendix, or URL, preferably with a permanent link).
      5Co-interventions: Provision of additional care (content, materials, and procedures).
      4. Who: Intervention provider (person/s delivering the intervention/s)
      Nr. Providers
      6Nr. providers: (a) Total nr. providers in the study; (b) Nr. participants per provider; (c) Nr. providers per participant; (d) Delivery in individual or group sessions (if applicable, nr. participants and providers per group).
      Professional competencies: A competency is a knowledge, skill, or attitude shown by the provider that enables to effectively perform an activity to the expected standard.
      7Professional competencies at study entry: (a) General qualification
      (a) Academic background, such as degree, master, or PhD; (b) Profession, such as psychologist, nurse, etc.; (c) General professional experience, such as years or nr. participants attended; (d) % of providers in training.
      ; (b) Experience with the study intervention, such as years or nr. participants treated.
      8Training for the intervention: (a) Training of providers
      (a) Presence (yes/no); (b) Content; (c) Trainer: internal or external, experience, etc. (d) Modality: indirect training (didactic instructions and written materials) or direct training (opportunities for practice, such as role-playing); (e) When: punctually or continuous during the trial; (f) Intensity: nr. and duration of sessions; (g) Materials; and (h) Standardization of training across providers.
      ,
      Observed aspects: (a) Overall % of providers fully trained; (b) Differences in levels of training among providers, centers (in multicentric studies), and over time.
      ; (b) Assessment of providers' competencies post-training
      (a) Presence of assessment (yes/no); (b) Content: what is assessed; (c) How the assessment is done, such as the measurement tool; (d) Who measures: self-assessment, internal or external observer and blinding status to the allocated intervention; and (e) When: punctually or continuously during the trial.
      ,
      Observed aspects: (a) Overall level per study arm; (b) Differences in levels among providers, centers (in multicentric studies), and over time.
      ; (c) Acceptable competencies threshold.
      9Providers' supervision during the study: (a) Supervision during the study
      (a) Presence of supervision or monitoring (yes/no); (b) Content: what is considered; (c) How the assessment is done, such as the measurement tool; (d) Who assesses: self-assessment, internal or external observer and blinding status to the allocated intervention; and (e) When: punctually or continuously during the trial.
      ; (b) Acceptable performance threshold.
      10Strategies for handling providers below standards
      (a) Presence of correction procedures (yes/no). For example, strategies for providers dropping out, such as having a pool of trained providers ready to join, or strategies for participants missing a session, such as instruction on how to use the booklet session and practice that session content; (b) Observed overall % of corrected providers or participants; (c) Observed differences in the % of corrected providers or participants among centers (in multicentric studies) and over time.
      Providers' allegiance to the intervention: Provider's professional preference for the intervention model, which deems it superior to other models of intervention.
      11Providers' allegiance to the intervention
      (a) Presence of assessment (yes/no); (b) Content: what is assessed; (c) How the assessment is done, such as the measurement tool; (d) Who measures: self-assessment, internal or external observer and blinding status to the allocated intervention; and (e) When: punctually or continuously during the trial.
      ,
      Observed aspects: (a) Overall level per study arm; (b) Differences in levels among providers, centers (in multicentric studies), and over time.
      Providers' motivation: Extent to which a provider is inclined to work with a particular participant.
12. Providers' motivation f,g
      Therapeutic alliance: A cooperative working relationship between the provider and participant. It consists of 3 components: agreement on the treatment goals, agreement on the tasks, and development of a personal bond.
13. Therapeutic alliance f,g
      Providers' awareness of being observed: If the providers know they are observed, they may change their behavior.
14. Providers' awareness of being observed: (a) Providers' knowledge of being observed; (b) Observation methods, if applicable (recording, direct observation, etc.).
      5. Who: Participant (person receiving the intervention)
      Participants' receipt of the intervention: Degree to which the participants understand and can use the intervention skills during the study.
15. Participants' comprehension of the intervention f,g
16. Strategies to increase participants' comprehension
      Participants' enactment of the intervention: Extent to which the participants can use the intervention skills in a relevant real-life setting.
17. Participants' enactment f,g
18. Strategies to increase participants' enactment
      Participants' expectations: Cognitions about treatment-related health outcomes in the future after a specific intervention.
19. Participants' expectations f,g
20. Strategies to increase participants' expectations
      Participants' awareness of being observed: If the participants know they are being observed, they may change their behavior.
21. Participants' awareness f,g: (a) Participants' knowledge of being observed; (b) Detail observation methods, if applicable (recording, observation, etc.).
      Participants' monitoring during the study (assessment and handling of noncompliant participants)
22. Participants' monitoring during the study: (a) Monitoring during the study h; (b) Acceptable performance threshold.
      Strategies for handling participants below standards
23. Strategies for handling noncompliant participants i
      6. How: How the main intervention is delivered
24. How: (a) Procedures: Activities or processes for the intervention; (b) Enabling or supporting activities; (c) Mode of delivery: State that the intervention is delivered face-to-face.
7. Where: A description of the location can help account for site effects (interventions may be implemented differently across sites).
25. Locations of the interventions: (a) Nr. of sites involved in the study (monocentric/multicentric; if multicentric, detail the nr. of sites); (b) Nr. of sites that each participant had to attend; (c) Types of sites, such as an outpatient or inpatient setting or the participant's home; (d) Necessary infrastructure or relevant features.
      8. When and how much
26. When (intervention timing): (a) Intervention period (such as from March to August); (b) Scheduling of sessions (time between sessions); (c) Total intervention period (days or months). If possible, include a graphical presentation depicting the flow and timing of the sessions.
27. How much (dose of the intervention): (a) Length of each session (minutes); (b) Total nr. of sessions; (c) Minimal nr. of sessions to attend (if applicable).
      9. Tailoring of the intervention: The intervention includes elements adapted to the individual needs of each participant. Tailoring occurs at the participant level, so not all the participants receive an identical intervention. The provider can adhere to the manual but still incorporate flexibility in therapeutic technique and style by adjusting certain features as per the participant's individual needs.
28. Tailoring characteristics j
      10. Legitimate intervention modifications: Allowed changes at the study level (not individual tailoring).
29. Legitimate modifications j: For example, report if the trial allowed substantial variation across sites in multicentric studies.
      11. How well (planned): The plan to assess the integrity and how it was finally assessed.
30. Critical items for intervention integrity (as defined by the study authors). NA
31. Assessment of the providers' adherence k: Degree to which the providers deliver the planned intervention procedures (and avoid proscribed procedures).
32. Assessment of the intervention differentiation k: Extent to which the interventions under investigation differ from each other over critical dimensions in the intended manner.
33. Assessment of the participants' adherence k: Degree to which the participants perform the planned intervention (and avoid proscribed procedures). Any intervention change agreed upon with care providers or investigators but not permitted by the trial protocol is also considered a deviation.
      12. How well (actual)
      Actual adherence of providers
34. Deviations in professional competencies: (a) Deviations in training (more/less intense than planned); (b) Deviations in supervision (more/less intense than planned). NA
35. Errors of commission or omission: (a) Errors of commission: Adding interventions (or cointerventions) not specified by the protocol; (b) Errors of omission: Deleting interventions (or cointerventions) that were specified by the protocol. NA
36. Deviations in the numbers of providers: (a) Deviation in the nr. of providers per participant (such as a lower nr. of providers per participant); (b) Providers who decided to discontinue the intervention; (c) Delivery to a group of participants (instead of individually) or vice versa. NA
37. Deviations in the mode of delivery: For example, internet-based instead of face-to-face. NA
38. Deviations in the intervention location: For example, if the intervention was planned to be delivered at the hospital but was ultimately delivered at home. NA
39. Deviations in the intervention timing: (a) Intervention period; (b) Scheduling of contact sessions (time between sessions); (c) Total duration of the intervention period (days or months). NA
40. Deviations in the intervention dose: (a) Length of each session (minutes); (b) Total nr. of sessions. NA
41. Deviations in the planned tailoring and accepted modifications during the study. NA
      Actual intervention differentiation
42. Actual intervention differentiation. NA
43. Contamination across treatment/control conditions l. NA
      Actual adherence of study participants
44. Deviation in the preparation for the intervention m: More/less intense than planned. NA
45. Errors of commission or omission: (a) Errors of commission: Adding interventions, cointerventions, or behavior not specified by the protocol; (b) Errors of omission: Deleting interventions (or cointerventions) that were specified by the protocol. NA
46. Deviations in the numbers of participants: Nr. of participants who decided to discontinue the intervention.
      Overall summary of the actual intervention integrity (as per the critical items defined in item 30)
47. Overall % of providers with compromised intervention integrity within each study arm. NA
48. Overall % of participants with compromised intervention integrity within each study arm. NA
49. Verification of intervention differentiation (whether the treatment conditions differed in the intended manner). NA
50. Overall judgment on the intervention integrity. NA
      NA, not applicable; NR, not reported; Nr, number; R, reported; %, percentage.
      a Possible answers: R, NR or NA. Also, detail the location in the text.
      b Possible answers: R, NR or NA. Also, detail the location in the text. Detail if there were relevant differences between the observed situation and the plan.
      c (a) Academic background, such as degree, master, or PhD; (b) Profession, such as psychologist, nurse, etc.; (c) General professional experience, such as years or nr. participants attended; (d) % of providers in training.
d (a) Presence (yes/no); (b) Content; (c) Trainer: internal or external, experience, etc.; (d) Modality: indirect training (didactic instructions and written materials) or direct training (opportunities for practice, such as role-playing); (e) When: punctually or continuously during the trial; (f) Intensity: nr. and duration of sessions; (g) Materials; and (h) Standardization of training across providers.
      e Observed aspects: (a) Overall % of providers fully trained; (b) Differences in levels of training among providers, centers (in multicentric studies), and over time.
      f (a) Presence of assessment (yes/no); (b) Content: what is assessed; (c) How the assessment is done, such as the measurement tool; (d) Who measures: self-assessment, internal or external observer and blinding status to the allocated intervention; and (e) When: punctually or continuously during the trial.
      g Observed aspects: (a) Overall level per study arm; (b) Differences in levels among providers, centers (in multicentric studies), and over time.
      h (a) Presence of supervision or monitoring (yes/no); (b) Content: what is considered; (c) How the assessment is done, such as the measurement tool; (d) Who assesses: self-assessment, internal or external observer and blinding status to the allocated intervention; and (e) When: punctually or continuously during the trial.
      i (a) Presence of correction procedures (yes/no). For example, strategies for providers dropping out, such as having a pool of trained providers ready to join, or strategies for participants missing a session, such as instruction on how to use the booklet session and practice that session content; (b) Observed overall % of corrected providers or participants; (c) Observed differences in the % of corrected providers or participants among centers (in multicentric studies) and over time.
      j (a) Presence (yes/no); (b) Why: justify the need of tailoring/modification; (c) What (content): elements tailored/modified; (d) How; (e) When.
      k (a) Presence (yes/no); (b) Content: what elements are assessed; (c) Who assesses: self-assessment, internal or external observer, observer's experience and blinding status to the allocated intervention; (d) How the assessment is done: direct observation or video or audio recording, indirect assessments, such as providers' self-reports, interviews with providers or participants, completed homework; (e) When the assessment is done: punctually or continuous during the trial.
      l Participants in one group receive the treatment or are exposed to the intervention meant solely for the other group, thereby minimizing any real difference between the groups.
m For example, the study protocol required that patients attend an informative session prior to the start of the intervention. However, some patients did not attend that session.

      3.3 Use of the RIPI-f checklist

RIPI-f proposes critical aspects that authors should report to describe the integrity of face-to-face psychological interventions, whether they are part of the experimental condition or the control condition. A thorough description of the comparator will help explain the magnitude of the observed effects [Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide]. We suggest that authors consider the checklist at the study planning stage.
Users of RIPI-f for a given study report or protocol may judge whether each checklist item is "reported", "not reported", or "not applicable". In addition, the checklist addresses both the intervention's delivery plan and the observed delivery, which helps in judging whether the intervention differed from the plan, a critical component of intervention integrity [Yeaton WH, Sechrest L. Critical dimensions in the choice and maintenance of successful treatments: strength, integrity, and effectiveness; Perepletchikova F. On the topic of treatment integrity; Perepletchikova F, Treat TA, Kazdin AE. Treatment integrity in psychotherapy research: analysis of the studies and examination of the associated factors; Perepletchikova F, Hilt LM, Chereji E, Kazdin AE. Barriers to implementing treatment integrity procedures: survey of treatment outcome researchers]. It may not be possible to report all the information in the printed report. In that case, authors may present an expanded description in locations beyond the primary article, such as supplementary material with a stable location [Hoffmann TC, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide; Montgomery P, Grant S, Mayo-Wilson E, et al. Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension; Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials; Tetzlaff JM, Moher D, Chan AW. Developing a guideline for clinical trial protocol content: Delphi consensus survey]. Journals and publishers may endorse the use of RIPI-f and refer to the checklist in the "Instructions to authors".
As RIPI-f applies to any evaluative study, we suggest using it with the relevant reporting guideline, particularly CONSORT-SPI, SPIRIT, TREND, and STROBE [Montgomery P, et al. Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension; Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials; Des Jarlais DC, Lyles C, Crepaz N, TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement; von Elm E, Altman DG, Egger M, et al. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies]. When authors consider aspects corresponding to interventions, for example, item 5 of the CONSORT-SPI checklist, they could refer to RIPI-f. RIPI-f enhances TIDieR [Hoffmann TC, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide]; there is no need to use both. Box 2 and Appendix 4 describe the main differences from TIDieR. Appendix 5 presents terminology applied to a behavior change intervention trial.
Main changes of RIPI-f relative to TIDieR.
      • 1
        RIPI-f is more comprehensive and specific than TIDieR: It allows for fine-grained reporting and unambiguous description of face-to-face psychological intervention integrity.
• 2
  Integration of relevant methodological guidance on psychological intervention integrity [Bellg AJ, Borrelli B, Resnick B, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium; Leeuw M, Goossens ME, de Vet HC, Vlaeyen JW. The fidelity of treatment delivery can be assessed in treatment outcome studies: a successful illustration from behavioral medicine; Mayo-Wilson E. Reporting implementation in randomized trials: proposed additions to the consolidated standards of reporting trials statement; Borrelli B, Sepinwall D, Ernst D, et al. A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research; Dumas JE, Lynch AM, Laughlin JE, Phillips Smith E, Prinz RJ. Promoting intervention fidelity. Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial; Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews?; Perepletchikova F, Kazdin AE. Treatment integrity and therapeutic change: issues and research recommendations; Capin P, Walker MA, Vaughn S, Wanzek J. Examining how treatment fidelity is supported, measured, and reported in K–3 reading intervention research; McCambridge J, Witton J, Elbourne DR. Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects; Miller S, Binder J. The effects of manual-based training on treatment fidelity and outcome: a review of the literature on adult individual psychotherapy; Robins JL, Jallo N, Kinser PA. Treatment fidelity in mind-body interventions; Boutron I, Moher D, Tugwell P, et al. A checklist to evaluate a report of a nonpharmacological trial (CLEAR NPT) was developed using consensus].
      • 3
        Differentiation between participants' and providers' contributions to integrity.
      • 4
        Distinction between planned and observed intervention delivery.
      • 5
Consideration of peculiarities of psychological interventions, such as the provider's allegiance to the intervention, motivation, and the therapeutic alliance, as well as the participants' receipt, enactment, and expectations.
      • 6
        More detailed description of the intervention provider.
      • 7
        Differentiation between the integrity assessment plan and how integrity was finally assessed.
      • 8
        Clear guidance to describe the integrity of the cointerventions.

      4. Discussion

      4.1 Summary

We developed guidance for reporting intervention integrity in evaluative studies of face-to-face psychological interventions. RIPI-f covers critical aspects concerning integrity: the intervention providers, the participants receiving the intervention, how and where the intervention was delivered, when and how often the intervention was delivered, whether the intervention was tailored or modified during the study, methods to assess adherence, and the actual adherence. RIPI-f enhances TIDieR and can be used in conjunction with other reporting guidelines, such as CONSORT-SPI [Montgomery P, et al. Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension], STROBE [von Elm E, et al. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies], SPIRIT [Chan AW, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials], and TREND [Des Jarlais DC, Lyles C, Crepaz N, TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement].

      4.2 Potential beneficiaries of the checklist

The use of RIPI-f can help several stakeholders. First, authors can submit manuscripts with a better description of integrity. Second, journal editors and peer reviewers will probably receive better submissions after completion of the checklist. Third, RIPI-f can help researchers improve their study designs and adequately plan intervention integrity. Fourth, knowing integrity can help readers interpret the study results, such as explaining whether the absence of an effect could be due to low integrity or whether unplanned interventions contributed to effectiveness [Bellg AJ, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium; Moncher FJ, Prinz RJ. Treatment fidelity in outcome studies]. Fifth, researchers can consider the observed integrity to design future interventions to meet participants' needs [Mayo-Wilson E. Reporting implementation in randomized trials: proposed additions to the consolidated standards of reporting trials statement; Dumas JE, et al. Promoting intervention fidelity. Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial].
The use of RIPI-f can also help systematic reviewers extract relevant information in several ways. The first is to assess performance bias when using a risk of bias tool; RIPI-f can help reviewers verify whether the participants received the planned intervention within each study arm, which is critical to judge whether the comparison was fair [Leeuw M, Goossens ME, de Vet HC, Vlaeyen JW. The fidelity of treatment delivery can be assessed in treatment outcome studies: a successful illustration from behavioral medicine]. The second is to interpret heterogeneity in the study results [Perepletchikova F, Kazdin AE. Treatment integrity and therapeutic change: issues and research recommendations] and assess their external validity, which is essential to grading the certainty of the evidence [Guyatt GH, Oxman AD, Kunz R, et al. GRADE guidelines: 8. Rating the quality of evidence--indirectness].
Adequate integrity reporting will also benefit readers of the article (providers, consumers, policymakers, guideline developers, and other decision-makers) in terms of assessing the trustworthiness and replicability of the intervention in their setting. Integrity relates to the feasibility and acceptability of interventions [Mayo-Wilson E. Reporting implementation in randomized trials: proposed additions to the consolidated standards of reporting trials statement; Guyatt GH, et al. GRADE guidelines: 8. Rating the quality of evidence--indirectness], information needed to make meaningful comparisons of the available interventions and informed decisions [Dumas JE, et al. Promoting intervention fidelity. Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial]. Thus, it can also help guideline developers formulate appropriate recommendations for practice [US Preventive Services Task Force. Behavioral counseling in primary care to promote physical activity: recommendation and rationale; Davidson KW, Goldstein M, Kaplan RM, et al. Evidence-based behavioral medicine: what is it and how do we achieve it?] and, consequently, facilitate the implementation of research findings in clinical practice [Bellg AJ, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium]. Finally, better reporting of psychological intervention integrity may alleviate the replicability crisis in clinical psychology research [Leichsenring F, Abbass A, Hilsenroth MJ, et al. Biases in research: risk factors for non-replicability in psychotherapy and pharmacotherapy research; Open Science Collaboration. Psychology. Estimating the reproducibility of psychological science]. For example, variation in TAU intensity can impact the outcome of trials and bias estimates of psychotherapy efficacy [Munder T, Geisshusler A, Krieger T, et al. Intensity of treatment as usual and its impact on the effects of face-to-face and internet-based psychotherapy for depression: a preregistered meta-analysis of randomized controlled trials; Faltinsen E, Todorovac A, Staxen Bruun L, et al. Control interventions in randomised trials among people with mental health disorders]; RIPI-f can also help improve the reporting of these comparators and therefore help study their impact on the intervention effects.

      4.3 Strengths and limitations

We developed RIPI-f using rigorous methods [Moher et al., 2010], and our comprehensive searches likely captured the relevant components of psychological intervention integrity.
The Delphi response rate (9.5%) was lower than that of similar reporting guidelines, such as TIDieR-Placebo (31%) [Howick et al., 2020]. Consequently, the checklist may have been unduly influenced by the authors' perspectives (the authors also served as experts in the consensus meetings). The specificity of our topic and the long list of items in the survey could explain this low participation. We invited colleagues based on their general background in clinical psychology research and systematic reviews rather than their potential interest in intervention integrity; some experts may therefore have declined because the topic fell outside their interests or competence. We did not invite a broader community, such as full-time practitioners, consumers, or experts from related intervention fields such as educational science or social work. Instead, we deliberately assembled a group specialized in research methods and representing various theoretical orientations in clinical psychology, in order to capture their views on methodological considerations.
The main limitation of the RIPI-f checklist is its length, which may reduce its feasibility. As with other reporting guidelines [Montgomery et al., 2018], the survey participants rated most items as potentially relevant (n = 79; 94%). Several factors may explain this high proportion of relevant items. The survey drew many items from risk of bias tools and published reporting guidelines, so other researchers had already judged them relevant. Moreover, there is no well-established definition of the integrity of psychological interventions [Leeuw et al., 2009]; given this conceptual ambiguity, participants may have refrained from excluding items. On the other hand, the RIPI-f checklist may be criticized for treating psychological interventions like drugs, thus disregarding their dynamic and interactional nature. While a researcher's allegiance toward a specific intervention may be seen as a source of performance bias (i.e., exaggeration of effects because the provider knows which intervention is being delivered), the therapist's allegiance is a vital component of delivering a specific intervention. Similarly, bias from deviations from the intended intervention is likely to occur if therapists have no allegiance to the treatment.
It can be argued that the therapeutic alliance, or expectations, are not part of treatment integrity per se but rather intermediate outcomes at the process level. The inclusion of such process-like aspects was a topic of discussion during our consensus meetings. We decided to include these constructs because a breakdown in any of them can seriously compromise treatment integrity. Moreover, we aimed to compile all relevant items in one checklist; therefore, following other authors' approaches and previous frameworks [Perepletchikova et al., 2007; Bellg et al., 2004; Leeuw et al., 2009; Borrelli et al., 2005], we incorporated these constructs into the RIPI-f checklist.

      4.4 Implications for practice and research

We will make RIPI-f available on the Open Science Framework and submit the checklist for inclusion in the EQUATOR Library and on goodreports.org to enhance dissemination. We will also disseminate the tool through our institutions' channels.
The RIPI-f checklist is a work in progress, and we encourage feedback and further discussion, particularly proposals to reduce its length. We will test the tool for feasibility and invite stakeholders to provide feedback. Once the next version of the checklist is available, we will prepare a report explaining each checklist item with examples of transparent reporting. We will welcome translations and empirical studies evaluating the impact of the final tool.
Adhering to RIPI-f might be time-consuming and increase manuscript length. However, we consider that not investing effort in reporting intervention integrity is more costly than doing so [Bellg et al., 2004]. Better reporting is vital to addressing the complexity of intervention integrity, increasing the internal and external validity of studies, and enabling replicability, a critical challenge in research on psychological interventions [Leichsenring et al., 2017]. On the other hand, manuscripts can remain concise through efficient use of the tool and by providing intervention details as supplementary material.
We acknowledge an urgent need to define valid measurements for several checklist items, such as the therapist's allegiance or expectations. Along the same lines, a framework for quantitatively assessing the integrity of psychological interventions is also needed. For example, a score per study arm summarizing intervention integrity would help identify fair comparisons. The task is challenging, as the model's variables and their relative weights are not agreed upon [Leeuw et al., 2009; Wampold and Imel, 2015]. In addition, the importance of each item can vary depending on the study, intervention, and outcome [Perepletchikova and Kazdin, 2005]. For example, supervision may be less critical in simple interventions or pragmatic trials.
Finally, further research is required to demonstrate whether the completion of RIPI-f by authors improves reporting, for example, whether the tool reduces the number of risk of bias domains flagged as unclear in systematic reviews due to poor reporting.

      5. Conclusion

The RIPI-f checklist provides guidance for reporting intervention integrity in evaluative studies of face-to-face psychological interventions. It extends TIDieR by specifically addressing psychological interventions. The checklist can help trialists plan their studies, derive valid conclusions, and facilitate the transfer of effective interventions into clinical practice. RIPI-f is a work in progress that may require feedback and revision in the future.

      CRediT authorship contribution statement

Jesus Lopez-Alcalde and Ninib Simon Yakoub share the role of first authors. Jürgen Barth is the senior author. J.B. administered the project and obtained funding. J.B., J.L.A., and N.Y. conceived this article and defined its methodology. J.B. and N.Y. designed and conducted the literature review. J.B., J.L.A., and N.Y. designed the survey. N.Y. administered the survey and analyzed the data. J.B., J.L.A., and N.Y. prepared the meetings. J.B. moderated and presented proposals at the meetings. Christiane Steinert, Christoph Flückiger, Erik von Elm, J.B., Jenny Rosendahl, Markus Wolf, Sarah Liebherz, and Thomas Munder responded to the survey. All authors attended the consensus meetings. J.L.A. and N.Y. took notes from the meetings. J.L.A. led the drafting and editing of the article. All authors revised the article critically for important intellectual content. All authors approved the final version of the article. J.B. is the guarantor of this work and is the corresponding author. J.L.A. attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

      Acknowledgments

      We thank the survey participants for their valuable contribution and Kayley Basler for her help with identifying relevant literature.

      Supplementary data

      References

        • Yeaton W.H.
        • Sechrest L.
        Critical dimensions in the choice and maintenance of successful treatments: strength, integrity, and effectiveness.
        J Consult Clin Psychol. 1981; 49: 156-167
        • Perepletchikova F.
        On the topic of treatment integrity.
        Clin Psychol (New York). 2011; 18: 148-153
        • Perepletchikova F.
        • Treat T.A.
        • Kazdin A.E.
        Treatment integrity in psychotherapy research: analysis of the studies and examination of the associated factors.
        J Consult Clin Psychol. 2007; 75: 829-841
        • Perepletchikova F.
        • Hilt L.M.
        • Chereji E.
        • Kazdin A.E.
        Barriers to implementing treatment integrity procedures: survey of treatment outcome researchers.
        J Consult Clin Psychol. 2009; 77: 212-218
        • Carroll C.
        • Patterson M.
        • Wood S.
        • Booth A.
        • Rick J.
        • Balain S.
        A conceptual framework for implementation fidelity.
        Implement Sci. 2007; 2: 40
        • Bellg A.J.
        • Borrelli B.
        • Resnick B.
        • Hecht J.
        • Minicucci D.S.
        • Ory M.
        • et al.
        Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium.
        Health Psychol. 2004; 23: 443-451
        • Leeuw M.
        • Goossens M.E.
        • de Vet H.C.
        • Vlaeyen J.W.
        The fidelity of treatment delivery can be assessed in treatment outcome studies: a successful illustration from behavioral medicine.
        J Clin Epidemiol. 2009; 62: 81-90
        • Mayo-Wilson E.
        Reporting implementation in randomized trials: proposed additions to the consolidated standards of reporting trials statement.
        Am J Public Health. 2007; 97: 630-633
        • Leichsenring F.
        • Salzer S.
        • Hilsenroth M.J.
        • Leibing E.
        • Leweke F.
        • Rabung S.
        Treatment integrity: an unresolved issue in psychotherapy research.
        Curr Psychiatry Rev. 2011; 7: 313-321
        • Borrelli B.
        • Sepinwall D.
        • Ernst D.
        • Bellg A.J.
        • Czajkowski S.
        • Breger R.
        • et al.
        A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research.
        J Consult Clin Psychol. 2005; 73: 852-860
        • Dumas J.E.
        • Lynch A.M.
        • Laughlin J.E.
        • Phillips Smith E.
        • Prinz R.J.
        Promoting intervention fidelity. Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial.
        Am J Prev Med. 2001; 20: 38-47
        • Hoffmann T.C.
        • Glasziou P.P.
        • Boutron I.
        • Milne R.
        • Perera R.
        • Moher D.
        • et al.
        Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide.
        BMJ. 2014; 348: g1687
        • Montgomery P.
        • Grant S.
        • Mayo-Wilson E.
        • Macdonald G.
        • Michie S.
        • Hopewell S.
        • et al.
        Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension.
        Trials. 2018; 19: 407
        • Goense P.B.
        • Assink M.
        • Stams G.-J.
        • Boendermaker L.
        • Hoeve M.
        Making ‘what works’ work: a meta-analytic study of the effect of treatment integrity on outcomes of evidence-based interventions for juveniles with antisocial behavior.
        Aggression Violent Behav. 2016; 31: 106-115
        • de Bruin M.
        • Viechtbauer W.
        • Hospers H.J.
        • Schaalma H.P.
        • Kok G.
        Standard care quality determines treatment outcomes in control groups of HAART-adherence intervention studies: implications for the interpretation and comparison of intervention effects.
        Health Psychol. 2009; 28: 668-674
        • Munder T.
        • Geisshusler A.
        • Krieger T.
        • Zimmermann J.
        • Wolf M.
        • Berger T.
        • et al.
        Intensity of treatment as usual and its impact on the effects of face-to-face and internet-based psychotherapy for depression: a preregistered meta-analysis of randomized controlled trials.
        Psychother Psychosom. 2022; 91: 200-209
        • Guyatt G.H.
        • Oxman A.D.
        • Kunz R.
        • Woodcock J.
        • Brozek J.
        • Helfand M.
        • et al.
        GRADE guidelines: 8. Rating the quality of evidence--indirectness.
        J Clin Epidemiol. 2011; 64: 1303-1310
        • England M.J.
        • Butler A.S.
        • Gonzalez M.L.
        • Institute of Medicine (U.S.), Board on Health Sciences Policy, Committee on Developing Evidence-Based Standards for Psychosocial Interventions for Mental Disorders
        Psychosocial interventions for mental and substance use disorders: a framework for establishing evidence-based standards.
        The National Academies Press, Washington, DC; 2015
        • Suh H.
        • Sohn H.
        • Kim T.
        • Lee D.G.
        A review and meta-analysis of perfectionism interventions: comparing face-to-face with online modalities.
        J Couns Psychol. 2019; 66: 473-486
        • Craig P.
        • Dieppe P.
        • Macintyre S.
        • Michie S.
        • Nazareth I.
        • Petticrew M.
        Developing and evaluating complex interventions: the new Medical Research Council guidance.
        Int J Nurs Stud. 2013; 50: 587-592
        • Durlak J.A.
        • DuPre E.P.
        Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation.
        Am J Community Psychol. 2008; 41: 327-350
        • van der Helm G.H.P.
        • Kuiper C.H.Z.
        • Stams G.J.J.M.
        Group climate and treatment motivation in secure residential and forensic youth care from the perspective of self determination theory.
        Child Youth Serv Rev. 2018; 93: 339-344
        • Wampold B.E.
        • Imel Z.E.
        The great psychotherapy debate: The evidence for what makes psychotherapy work. 2nd ed. Routledge/Taylor & Francis Group, New York, NY; 2015
        • Michie S.
        • Wood C.E.
        • Johnston M.
        • Abraham C.
        • Francis J.J.
        • Hardeman W.
        Behaviour change techniques: the development and evaluation of a taxonomic method for reporting and describing behaviour change interventions (a suite of five studies involving consensus methods, randomised controlled trials and analysis of qualitative data).
        Health Technol Assess. 2015; 19: 1-188
        • Glasziou P.
        • Meats E.
        • Heneghan C.
        • Shepperd S.
        What is missing from descriptions of treatment in trials and reviews?.
        BMJ. 2008; 336: 1472-1474
        • Hoffmann T.C.
        • Erueti C.
        • Glasziou P.P.
        Poor description of non-pharmacological interventions: analysis of consecutive sample of randomised trials.
        BMJ. 2013; 347: f3755
        • Moher D.
        • Schulz K.F.
        • Simera I.
        • Altman D.G.
        Guidance for developers of health research reporting guidelines.
        PLoS Med. 2010; 7: e1000217
        • Hrynyschyn R.
        • Dockweiler C.
        Effectiveness of smartphone-based cognitive behavioral therapy among patients with major depression: systematic review of health implications.
        JMIR Mhealth Uhealth. 2021; 9: e24703
        • Agarwal S.
        • LeFevre A.E.
        • Lee J.
        • L'Engle K.
        • Mehl G.
        • Sinha C.
        • et al.
        Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist.
        BMJ. 2016; 352: i1174
        • Chan A.W.
        • Tetzlaff J.M.
        • Altman D.G.
        • Laupacis A.
        • Gotzsche P.C.
        • Krleza-Jeric K.
        • et al.
        SPIRIT 2013 statement: defining standard protocol items for clinical trials.
        Ann Intern Med. 2013; 158: 200-207
        • Des Jarlais D.C.
        • Lyles C.
        • Crepaz N.
        • the TREND Group
        Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement.
        Am J Public Health. 2004; 94: 361-366
        • Moher D.
        • Hopewell S.
        • Schulz K.F.
        • Montori V.
        • Gotzsche P.C.
        • Devereaux P.J.
        • et al.
        CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials.
        J Clin Epidemiol. 2010; 63: 1-37
        • Tetzlaff J.M.
        • Moher D.
        • Chan A.W.
        Developing a guideline for clinical trial protocol content: Delphi consensus survey.
        Trials. 2012; 13: 176
        • von Elm E.
        • Altman D.G.
        • Egger M.
        • Pocock S.J.
        • Gøtzsche P.C.
        • Vandenbroucke J.P.
        Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies.
        BMJ. 2007; 335: 806-808
        • Perepletchikova F.
        • Kazdin A.E.
        Treatment integrity and therapeutic change: issues and research recommendations.
        Clin Psychol Sci Pract. 2005; 12: 365-383
        • Capin P.
        • Walker M.A.
        • Vaughn S.
        • Wanzek J.
        Examining how treatment fidelity is supported, measured, and reported in K–3 reading intervention research.
        Educ Psychol Rev. 2018; 30: 885-919
        • McCambridge J.
        • Witton J.
        • Elbourne D.R.
        Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects.
        J Clin Epidemiol. 2014; 67: 267-277
        • Miller S.
        • Binder J.
        The effects of manual-based training on treatment fidelity and outcome: a review of the literature on adult individual psychotherapy.
        Psychotherapy: Theor Res Pract Train. 2002; 39: 184-198
        • Robins J.L.
        • Jallo N.
        • Kinser P.A.
        Treatment fidelity in mind-body interventions.
        J Holist Nurs. 2019; 37: 189-199
        • Boutron I.
        • Moher D.
        • Tugwell P.
        • Giraudeau B.
        • Poiraudeau S.
        • Nizard R.
        • et al.
        A checklist to evaluate a report of a nonpharmacological trial (CLEAR NPT) was developed using consensus.
        J Clin Epidemiol. 2005; 58: 1233-1240
        • Moncher F.J.
        • Prinz R.J.
        Treatment fidelity in outcome studies.
        Clin Psychol Rev. 1991; 11: 247-266
        • U.S. Preventive Services Task Force
        Behavioral counseling in primary care to promote physical activity: recommendation and rationale.
        Ann Intern Med. 2002; 137: 205-207
        • Davidson K.W.
        • Goldstein M.
        • Kaplan R.M.
        • Kaufmann P.G.
        • Knatterud G.L.
        • Orleans C.T.
        • et al.
        Evidence-based behavioral medicine: what is it and how do we achieve it?.
        Ann Behav Med. 2003; 26: 161-171
        • Leichsenring F.
        • Abbass A.
        • Hilsenroth M.J.
        • Leweke F.
        • Luyten P.
        • Keefe J.R.
        • et al.
        Biases in research: risk factors for non-replicability in psychotherapy and pharmacotherapy research.
        Psychol Med. 2017; 47: 1000-1011
        • Open Science Collaboration
        Estimating the reproducibility of psychological science.
        Science. 2015; 349: aac4716
        • Faltinsen E.
        • Todorovac A.
        • Staxen Bruun L.
        • Hróbjartsson A.
        • Gluud C.
        • Kongerslev M.T.
        • et al.
        Control interventions in randomised trials among people with mental health disorders.
        Cochrane Database Syst Rev. 2022; 4: MR000050
        • Howick J.
        • Webster R.K.
        • Rees J.L.
        • Turner R.
        • Macdonald H.
        • Price A.
        • et al.
        TIDieR-Placebo: a guide and checklist for reporting placebo and sham controls.
        PLoS Med. 2020; 17: e1003294