Title:
|
Efficacy vs effectiveness trial results of an indicated "model" substance abuse program: Implications for public health. (2006)
|
Authors:
|
HALLFORS (Denise) ;
BAUER (Daniel) : USA. Psychology Department. University of North Carolina. Chapel Hill. ;
CHO (Hyunsan) ;
KIM (Hyung Min) ;
KHATAPOUSH (Shereen) : USA. Daniel Bryant Youth and Family Treatment Center. Council on Alcoholism and Drug Abuse. Santa Barbara. CA. ;
SANCHEZ (Victoria) : USA. School of Nursing. University of North Carolina. Chapel Hill.
|
Document type:
|
Article
|
In:
|
American Journal of Public Health (vol. 96, no. 12, 2006)
|
Pagination:
|
2254-2259
|
Languages:
|
English
|
Keywords:
|
Comparative study; Preventive trial; Comparative trial; Therapeutic trial; Result; Synthetic drug; Health program; United States; America; Human; Critical study; Adolescent; Health prevention; North America
|
Abstract:
|
[BDSP. Record produced by INIST-CNRS 8K9yR0x2. Distribution subject to authorization]. Objectives. The US Department of Education requires schools to choose substance abuse and violence prevention programs that meet standards of effectiveness. The Substance Abuse and Mental Health Services Administration certifies "model" programs that meet this standard. We compared findings from a large, multisite effectiveness trial of 1 model program to the efficacy trial findings upon which its certification was based. Methods. A total of 1370 high-risk youths were randomized to experimental or control groups across 9 high schools in 2 large urban school districts. We used intent-to-treat and on-treatment approaches to examine baseline equivalence, attrition, and group differences in outcomes at the end of the program and at a 6-month follow-up. Results. The positive efficacy trial findings were not replicated in the effectiveness trial. All main effects were either null or worse for the experimental group than for the control group. Conclusions. These findings suggest that small efficacy trials conducted by program developers provide insufficient evidence of effectiveness. Federal agencies and public health scientists must work together to raise standards of evidence and ensure that data from new trials are incorporated into ongoing assessments of program effects.
|
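Note: As a minimal sketch of the intent-to-treat versus on-treatment comparison described in the abstract, the Python below contrasts the two analysis definitions on a simulated data set. The column names (assigned, received_program, outcome_6mo), the simulated values, and the use of a Welch t-test are illustrative assumptions and do not reflect the authors' actual data or statistical models.

    # Illustrative sketch (not the authors' analysis): comparing experimental and
    # control groups under intent-to-treat (all randomized youths, as assigned)
    # versus on-treatment (only youths who actually received the program)
    # definitions, using a two-sample t-test on a follow-up outcome score.
    import numpy as np
    import pandas as pd
    from scipy import stats

    # Hypothetical columns: 'assigned' (1 = experimental, 0 = control),
    # 'received_program' (True = attended the program), 'outcome_6mo' (follow-up score).
    rng = np.random.default_rng(0)
    n = 1370
    df = pd.DataFrame({
        "assigned": rng.integers(0, 2, n),
        "outcome_6mo": rng.normal(50, 10, n),
    })
    df["received_program"] = (df["assigned"] == 1) & (rng.random(n) < 0.8)

    def compare_groups(data: pd.DataFrame) -> tuple[float, float]:
        """Return (mean difference, p-value) between experimental and control."""
        exp = data.loc[data["analysis_group"] == 1, "outcome_6mo"]
        ctl = data.loc[data["analysis_group"] == 0, "outcome_6mo"]
        t, p = stats.ttest_ind(exp, ctl, equal_var=False)
        return exp.mean() - ctl.mean(), p

    # Intent-to-treat: analyze everyone according to randomized assignment.
    itt = df.assign(analysis_group=df["assigned"])

    # On-treatment: keep all controls, but count only treated youths as experimental.
    ot = df[(df["assigned"] == 0) | df["received_program"]]
    ot = ot.assign(analysis_group=ot["assigned"])

    print("ITT difference, p-value:", compare_groups(itt))
    print("On-treatment difference, p-value:", compare_groups(ot))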