Gregory Samsa

Overview:

Greg Samsa is an applied statistician whose primary interests are in study design, instrument development, information synthesis, practice improvement, effective communication of statistical results, and teaching. He is a believer in the power of statistical thinking, as broadly defined.

Positions:

Professor of Biostatistics & Bioinformatics

Biostatistics & Bioinformatics
School of Medicine

Director, Research Integrity Office

Biostatistics & Bioinformatics
School of Medicine

Member of the Duke Cancer Institute

Duke Cancer Institute
School of Medicine

Education:

Ph.D. 1988

University of North Carolina - Chapel Hill

Grants:

Inter-disciplinary Program for Training and Mentoring in CER Methods and Practice

Administered By
Biostatistics & Bioinformatics
Awarded By
National Institutes of Health
Role
Program Coordinator

Integrating Palliative Care in Oncology Practice

Administered By
Duke Clinical Research Institute
Awarded By
American Society of Clinical Oncology
Role
Statistician

Caregiver-Guided Pain Management Training in Palliative Care

Administered By
Psychiatry & Behavioral Sciences, Behavioral Medicine & Neurosciences
Awarded By
National Institutes of Health
Role
Statistician

Hypertension Improvement Project (HIP)

Administered By
Medicine, Nephrology
Awarded By
National Institutes of Health
Role
Co-Investigator

Increasing Colorectal Cancer Screening Among Carpenters

Administered By
Duke Cancer Institute
Awarded By
National Institutes of Health
Role
Investigator

Publications:

Institutional approaches to preventing questionable research practices.

Questionable research practices (QRP) are actions taken by researchers that span a range of concern related to violation of research best practices, and ultimately expose institutions and research participants to risk. Numerous studies have shown that QRP are common. The continued prevalence of QRP indicates that existing approaches for dealing with QRP are falling short. In this editorial we discuss the risks associated with QRP and propose mitigation strategies at the institutional level, using a common QRP as an example: questionable treatment of subgroup analyses. We argue that the need for institutional intervention in cases such as this is particularly pressing when both the investigator and the institution have a substantial financial conflict of interest related to intellectual property that requires the investigator's expertise to continue developing. To address this, we propose an expansion of the traditional conflict of interest management process.
Authors
Troy, JD; Rockhold, F; Samsa, GP
MLA Citation
Troy, Jesse D., et al. “Institutional approaches to preventing questionable research practices.” Account Res, Oct. 2021, pp. 1–8. Pubmed, doi:10.1080/08989621.2021.1986017.
URI
https://scholars.duke.edu/individual/pub1497208
PMID
34569387
Source
pubmed
Published In
Account Res
Published Date
October 2021
Start Page
1
End Page
8
DOI
10.1080/08989621.2021.1986017

The biomedical research pyramid: A model for the practice of biostatistics

Biostatisticians apply statistical methods to solve problems in the biological sciences. Successful practitioners of biostatistics have advanced technical knowledge, are skilled communicators, and can seamlessly integrate with interdisciplinary scientific teams. Despite the breadth of skills required for success in this field, most biostatistics education programs place heavier emphasis on the development of technical skills than on the skills necessary for collaborative work, including critical thinking, writing, and public speaking. Our master's degree program in biostatistics aims for stronger integration of education in collaborative work alongside development of technical knowledge in biostatistics. Toward that end, we propose a model that provides students with a mental map for practicing biostatistics, and that can serve as a tool for faculty to create hands-on learning experiences for biostatistics students. The model helps students organize their knowledge of biostatistics, unifying the technical and collaborative aspects of the discipline in a single framework that can be applied across the broad array of activities that biostatisticians engage in. In this article we describe the model in detail and provide an initial assessment of whether the model might meet its intended purpose by applying the model to a common task for practicing biostatisticians and biostatistics students: describing the results of a medical research study.
Authors
Troy, JD; Neely, ML; Grambow, SC; Samsa, GP
MLA Citation
Troy, J. D., et al. “The biomedical research pyramid: A model for the practice of biostatistics.” Journal of Curriculum and Teaching, vol. 10, no. 1, Feb. 2021, pp. 10–17. Scopus, doi:10.5430/jct.v10n1p10.
URI
https://scholars.duke.edu/individual/pub1475473
Source
scopus
Published In
Journal of Curriculum and Teaching
Volume
10
Published Date
February 2021
Start Page
10
End Page
17
DOI
10.5430/jct.v10n1p10

Evolution of a qualifying examination from a timed closed-book format to an open-book collaborative take-home format: A case study and commentary

Objective: Our master's program in biostatistics requires a qualifying examination (QE). A curriculum review led us to question whether to replace a closed-book format with an open-book one. Our goal was to improve the QE. Methods: This is a case study and commentary, where we describe the evolution of the QE, both in its goals and its content. The result was a week-long, open-book, collaborative, take-home examination structured around the analysis of two types of studies commonly encountered in biostatistical practice. Our evaluation of the revised format includes its fairness, student performance, and student feedback. Results: The new format has a number of advantages: (1) it has a specific educational goal; (2) it provides sufficient time for students to produce their best work; (3) it encourages students to review elements of the first-year curriculum as needed; and (4) it can be administered remotely, even during a pandemic. Potential concerns pertaining to cheating and rigor can be adequately addressed. The results of our evaluation of the examination have been encouraging. The QE is intended to be a "fair" examination that covers important material which is beneficial to students, and does so in a way that is transparent and puts everyone in a position to perform their best work. Conclusions: An examination using this format has much to recommend it. When designing an examination, it is important to (a) match its format with clearly specified educational goals; and (b) distinguish between the distinct constructs of difficulty and rigor.
Authors
Samsa, G
MLA Citation
Samsa, G. “Evolution of a qualifying examination from a timed closed-book format to an open-book collaborative take-home format: A case study and commentary.” Journal of Curriculum and Teaching, vol. 10, no. 1, Feb. 2021, pp. 47–55. Scopus, doi:10.5430/jct.v10n1p47.
URI
https://scholars.duke.edu/individual/pub1501442
Source
scopus
Published In
Journal of Curriculum and Teaching
Volume
10
Published Date
February 2021
Start Page
47
End Page
55
DOI
10.5430/jct.v10n1p47

Aerobic Versus Resistance Training Effects on Ventricular-Arterial Coupling and Vascular Function in the STRRIDE-AT/RT Trial.

Background: The goal was to study the differential effects of aerobic training (AT) vs. resistance training (RT) on cardiac and peripheral arterial capacity, specifically cardiopulmonary (CP) and peripheral vascular (PV) function, in sedentary and obese adults. Methods: In a prospective randomized controlled trial, we studied the effects of 6 months of AT vs. RT in 21 subjects. Testing included cardiac and vascular ultrasonography and serial CP testing for ventricular-arterial coupling (Ees/Ea), strain-based variables, brachial artery flow-mediated dilation (BAFMD), and peak VO2 (pVO2; mL/kg/min) and peak O2-pulse (O2p; mL/beat). Results: Within the AT group (n = 11), there were significant increases in rVO2 of 4.2 mL/kg/min (SD 0.93) (p = 0.001), O2p of 1.9 mL/beat (SD 1.3) (p = 0.008), and brachial artery post-hyperemia peak diameter of 0.18 mm (SD 0.08) (p = 0.05). Within the RT group (n = 10) there were significant increases in left ventricular end diastolic volume of 7.0 mL (SD 9.8; p = 0.05) and percent flow-mediated dilation (1.8%) (SD 0.47) (p = 0.004). Comparing the AT and RT groups post exercise, rVO2 (2.97, SD 1.22, p = 0.03), O2p (0.01, SD 1.3, p = 0.01), and peak hyperemic blood flow volume (1.77 mL, SD 140.69, p = 0.009) were higher in AT, but LVEDP (115 mL, SD 7.0, p = 0.05) and Ees/Ea (0.68 mmHg/ml, SD 0.60, p = 0.03) were higher in RT. Discussion: The differential effects of AT and RT in this hypothesis-generating study have important implications for exercise modality and clinical endpoints.
Authors
Lekavich, CL; Allen, JD; Bensimhon, DR; Bateman, LA; Slentz, CA; Samsa, GP; Kenjale, AA; Duscha, BD; Douglas, PS; Kraus, WE
MLA Citation
Lekavich, Carolyn L., et al. “Aerobic Versus Resistance Training Effects on Ventricular-Arterial Coupling and Vascular Function in the STRRIDE-AT/RT Trial.” Front Cardiovasc Med, vol. 8, 2021, p. 638929. Pubmed, doi:10.3389/fcvm.2021.638929.
URI
https://scholars.duke.edu/individual/pub1479028
PMID
33869303
Source
pubmed
Published In
Frontiers in Cardiovascular Medicine
Volume
8
Published Date
2021
Start Page
638929
DOI
10.3389/fcvm.2021.638929

Two Questions About the Design of Cluster Randomized Trials: A Tutorial.

This is a short tutorial on two key questions that pertain to cluster randomized trials (CRTs): 1) Should I perform a CRT? and 2) If so, how do I derive the sample size? In summary, a CRT is the best option when you "must" (e.g., the intervention can only be administered to a group) or you "should" (e.g., because of issues such as feasibility and contamination). CRTs are less statistically efficient and usually more logistically complex than individually randomized trials, and so reviewing the rationale for their use is critical. The most straightforward approach to the sample size calculation is to first perform the calculation as if the design were randomized at the level of the patient and then to inflate this sample size by multiplying by the "design effect", which quantifies the degree to which responses within a cluster are similar to one another. Although trials with large numbers of small clusters are more statistically efficient than those with a few large clusters, trials with large clusters can be more feasible. Also, if results are to be compared across individual sites, then sufficient sample size will be required to attain adequate precision within each site. Sample size calculations should include sensitivity analyses, as inputs from the literature can lack precision. Collaborating with a statistician is essential. To illustrate these points, we describe an ongoing CRT testing a mobile-based app to systematically engage families of intensive care unit patients and help intensive care unit clinicians deliver needs-targeted palliative care.
Authors
Samsa, GP; Winger, JG; Cox, CE; Olsen, MK
MLA Citation
Samsa, Gregory P., et al. “Two Questions About the Design of Cluster Randomized Trials: A Tutorial.” J Pain Symptom Manage, vol. 61, no. 4, Apr. 2021, pp. 858–63. Pubmed, doi:10.1016/j.jpainsymman.2020.11.019.
URI
https://scholars.duke.edu/individual/pub1466306
PMID
33246075
Source
pubmed
Published In
J Pain Symptom Manage
Volume
61
Published Date
April 2021
Start Page
858
End Page
863
DOI
10.1016/j.jpainsymman.2020.11.019
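
The sample size approach sketched in the abstract above (compute the sample size for an individually randomized trial, then inflate it by the design effect) can be illustrated with a short worked example. This is a minimal sketch, not code from the tutorial; it assumes the standard design-effect formula 1 + (m - 1) * ICC, and the effect size, standard deviation, intraclass correlation (ICC), and cluster size below are illustrative values only.

```python
# Minimal sketch (illustrative values only): sample size for a cluster
# randomized trial obtained by inflating an individually randomized
# sample size by the design effect 1 + (m - 1) * ICC.
from math import ceil
from statistics import NormalDist

def n_per_arm_individual(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample comparison of means,
    using the usual normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

def design_effect(cluster_size, icc):
    """Variance inflation factor when randomizing clusters of a given size."""
    return 1 + (cluster_size - 1) * icc

# Illustrative inputs: detect a 5-point mean difference, SD 15,
# average cluster size 20, ICC 0.05.
n_ind = n_per_arm_individual(delta=5, sd=15)    # about 141 per arm
de = design_effect(cluster_size=20, icc=0.05)   # 1.95
n_crt = ceil(n_ind * de)                        # about 276 per arm
clusters_per_arm = ceil(n_crt / 20)             # about 14 clusters per arm
print(n_crt, clusters_per_arm)
```

As the abstract notes, inputs such as the ICC are often imprecise, so in practice this calculation would be repeated over a plausible range of ICC and cluster-size values as a sensitivity analysis, in collaboration with a statistician.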