CTD Module 5 Clinical Study Reports: US Data Presentation, Tables & Appendices

Published on 19/12/2025

Authoring CTD Module 5: US-Style Clinical Study Reports, Data Tables, and Appendices

Why Module 5 Matters: Turning Clinical Evidence into a Reviewable, Decision-Ready Record

CTD Module 5 is where efficacy and safety evidence becomes a regulatory-grade narrative. While Modules 2 and 3 set the context and quality foundation, it is the Clinical Study Report (CSR) that convinces reviewers your study design was fit for purpose, analyses were pre-specified and executed correctly, and results are robust, reproducible, and clinically meaningful. In the US, reviewers expect a disciplined application of ICH E3 structure, clear linkage to protocol and statistical analysis plan (SAP), and traceable Tables, Listings, and Figures (TLFs) that allow independent verification. Strong Module 5 writing shortens argument time: it clarifies what was planned, what actually happened, and how deviations were handled—then points unambiguously to the evidence.

For sponsors and CROs operating at speed, the temptation is to “write by export.” That approach produces large but incoherent CSRs—TLFs pasted without interpretation, protocol deviations dumped without classification, and appendices that are difficult to navigate. US-style Module 5 writing works the other way around: begin with the decision (does the study support the indication and dose?), then present the design logic and analysis rigor, and finally link to TLFs and appendices that prove it. When done well, the Clinical Overview (Module 2.5) becomes a faithful summary; when done poorly, Module 2.5 is forced to compensate, creating inconsistencies that trigger information requests.

Anchor your content on harmonized guidance (CTD and E3) and agency expectations. Keep the ICH site bookmarked for E3 and E6(R3) principles; consult the U.S. Food & Drug Administration for US-specific expectations on submission content, formatting, and electronic standards; and use the European Medicines Agency pages when preparing multinational filings. These sources define “good CSR anatomy,” but your craft—clear prose, consistent terminology, tight cross-referencing—determines whether the evidence persuades on first pass.

Key Concepts & Definitions: CSR Anatomy, TLFs, Protocol Deviations, and Traceability

CSR (Clinical Study Report). The E3-structured report that documents objectives, design, conduct, analyses, and results. It includes a Synopsis; Ethics; Study Administrative Structure; Study Methods (design, randomization, blinding, populations, endpoints, sample size, statistical methods); Results (participant disposition, baseline characteristics, protocol deviations, efficacy, safety); Discussion/Conclusions; and Appendices (protocol/SAP and amendments, sample CRF, investigator list and credentials, audit certificates if applicable, randomization documentation, relevant publications).

TLFs (Tables, Listings, Figures). The quantitative backbone of the CSR. Tables summarize key endpoints and safety incidence; Listings provide subject-level transparency (e.g., adverse events, concomitant medications); Figures illustrate effects and diagnostics (e.g., Kaplan–Meier curves, forest plots, exposure–response). For US readability, each TLF should carry a stable ID, match the SAP’s planned outputs, and be cross-referenced precisely in text (“Table 14-1, Primary Endpoint”).

Populations. Define the ITT/Full Analysis Set (all randomized participants, with any pre-specified exclusions justified), Per-Protocol, Safety (all treated participants), and any biomarker-defined or PK-enriched sets. Specify inclusion rules, handling of missing data, and the impact of protocol deviations on each set. Population clarity is foundational for reviewer trust.

Protocol deviations. Departures from the protocol categorized as major or minor, pre-specified in the SAP or deviation plan. Best practice is to define categories a priori, apply consistently, and present adjudicated counts by site and treatment arm with impact rationale. Unstructured deviation dumps are a frequent US deficiency.

Traceability. Every number in the Synopsis and body should be traceable to a TLF, which in turn traces to analysis datasets (e.g., ADaM) derived from SDTM. Although datasets are submitted elsewhere, your CSR prose must align with those derivations; mismatches between text and TLFs or between TLFs and datasets erode credibility.

ISS/ISE. The Integrated Summary of Safety and Integrated Summary of Efficacy roll up multiple studies. Your single-study CSR should call out when results will be integrated in Module 5.3.5/5.3.6 and use consistent endpoint naming so cross-study analyses don’t require harmonization after the fact.

Applicable Guidelines & Global Frameworks: Using E3, E6(R3) and US Conventions

ICH E3 (Structure & Content of CSR). E3 is your CSR blueprint. Use its section order and numbering so reviewers do not relearn your structure. Place the Synopsis immediately up front (with key efficacy/safety results and exposure) and maintain the canonical sequence for Methods and Results. Do not invent new layouts unless justified by study design (e.g., platform or master protocol); even then, keep an E3-to-your-layout mapping table in the preface.

ICH E6(R3) (Good Clinical Practice). E6 principles—prospective protocols, documented approvals, investigator responsibilities, data integrity—inform your CSR’s credibility. US reviewers look for “GCP breadcrumbs” in Ethics, Informed Consent, Monitoring/Audit, and Data Handling. E6(R3)’s quality-by-design ideas should surface as design justifications and risk mitigation reflections in the Methods and Discussion sections.

US presentation conventions. Beyond E3, FDA reviewers expect transparent SAP alignment (clearly mark which analyses are primary, secondary, exploratory, or sensitivity), accountable multiplicity control, handling of intercurrent events (treatment adherence, rescue, discontinuations), and crisp adverse event coding summaries. Label effect sizes with confidence intervals and state whether analyses are pre-specified or post hoc. Keep the CSR prose shy of promotion; it must read as a technical record, not marketing.

Cross-referencing. Use tight links between text and TLFs, and between CSR and appendices (protocol/SAP version, amendments, sample CRF). In the eCTD context, links should land on caption-level anchors rather than covers or section starts to aid navigation, consistent with the expectations described by the FDA and the formatting practices encouraged by the EMA.

US vs EU/UK vs Global Variations: What Changes and What Shouldn’t

US (FDA-first posture). Emphasize statistical clarity and clinical meaningfulness. US assessors will scrutinize how you defined estimands/analysis populations, handled missingness, controlled multiplicity, and interpreted exposure–response or subgroup signals. The CSR should make regulatory-grade claims in words that mirror your labeling proposals, with a clean handoff to the integrated summaries (ISS/ISE) across studies.

EU/UK. The same E3 skeleton applies, but EU reviewers often expect deeper narrative around risk context (benefit–risk reasoning in light of alternative therapies and patient-centric outcomes) and a regional pharmacovigilance perspective. Device components (for combination products) and Patient-Reported Outcomes may receive extra attention. Maintain the same CSR but supplement Module 2.5 for regional nuance; do not fork the single-study CSR unless unavoidable.

Japan/other agencies. The CSR content remains E3-aligned. If you intend to localize the Synopsis or certain appendices (e.g., investigator credentials), keep ASCII-safe filenames and stable figure/table IDs for eCTD publishing. Regional statistical conventions (e.g., fixed vs random effects in meta-analyses) mostly affect ISS/ISE; keep single-study CSRs neutral and precise.
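The ASCII-safe filename discipline above is easy to automate at publishing time. A minimal sketch, assuming a simple lowercase/hyphen naming convention (the exact pattern is an illustrative assumption, not an eCTD specification):

```python
import re

# Illustrative pattern only: lowercase alphanumerics and hyphens,
# ending in .pdf or .xml. Real eCTD naming rules come from the
# target agency's validation criteria, not from this regex.
SAFE_NAME = re.compile(r"^[a-z0-9][a-z0-9\-]*\.(pdf|xml)$")

def is_ectd_safe(filename: str) -> bool:
    """Return True if the filename is ASCII-only and matches the pattern."""
    return filename.isascii() and bool(SAFE_NAME.match(filename))

print(is_ectd_safe("csr-synopsis-ja.pdf"))   # True
print(is_ectd_safe("CSR_Synopsis_日本.pdf"))  # False (non-ASCII, uppercase)
```

Running a check like this over the final publishing folder catches localization side effects (non-ASCII characters, renamed figures) before the validator does.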

What must not change. The traceable story: pre-specified endpoints, clear populations, reproducible analyses, and TLFs that match the SAP and datasets. Harmonize endpoint names across studies to avoid re-labeling in ISS/ISE. Keep deviation categories and adjudication rules stable to preserve comparability.

Processes & Workflow: From Lock to CSR, Without Losing Scientific Signal

1) Pre-lock readiness. Freeze the protocol/SAP and amendments; pre-approve the TLF shells with IDs and footnote conventions; define protocol deviation categories and major/minor thresholds; and lock the terminology catalog (endpoints, populations, visit names). This creates a “no surprises” environment when data lock arrives.

2) Data lock & programming. After database lock, produce the pre-specified TLFs and sensitivity sets. Apply SAP flags for analysis populations, censoring rules for time-to-event outcomes, and coding dictionaries (e.g., MedDRA) for adverse events. Program traceability footnotes (dataset variables/derivations) in tables where helpful but avoid drowning the reader—save full derivations for the define/analysis data reviewer’s guide.

3) Synopsis first. Draft the Synopsis from final TLFs, not from memory. Include study design, populations, exposure, primary and key secondary results with confidence intervals, and key safety signals. Every number must cite a TLF ID. The Synopsis is the most-read section; make it dense, accurate, and consistent.

4) Methods and protocol deviations. Describe what you planned (estimands, hierarchy, success criteria) and what you actually did (any departures). Present a deviation summary table (major/minor by category, arm, and site) and a listing for major deviations with impact notes. State how deviations influenced analysis populations (e.g., Per-Protocol exclusions), referencing the SAP rules.

5) Efficacy. Present primary endpoint first, state effect size and uncertainty (CI), and interpret clinical relevance, not just statistical significance. Follow with key secondaries respecting multiplicity. Provide supportive sensitivity and subgroup analyses, but label exploratory work clearly. Link each claim to a specific TLF.

6) Safety. Summarize exposure, overall adverse events (AEs), serious AEs, discontinuations due to AEs, deaths, and special interests. Show pattern recognition (dose, time-to-onset, demographic subgroups). Provide laboratory, vital signs, ECG summaries as appropriate. Integrate narrative cases for notable risks sparingly and point to listings for details. Use consistent MedDRA versions and coding practices across studies.

7) Discussion & alignment. Conclude whether the study met its objectives, contextualize effect sizes versus clinical meaningfulness and standard of care, and identify residual uncertainties. Cross-align statements with Module 2.5 (Clinical Overview) and labeling proposals. Do not oversell; reviewers trust measured conclusions.

Tools, Templates & Authoring Aids: Make CSRs Fast, Consistent, and Navigable

CSR master template (E3-aligned). Maintain a controlled Word/XML template with locked headings and auto-numbered sections that mirror E3. Include placeholders for Synopsis tables, protocol deviation categorizations, primary/secondary endpoint blocks, and standardized safety summaries. Auto-insert boilerplate that reminds authors to cite TLF IDs at every numeric claim.

TLF library & IDs. Pre-approve table shells (e.g., “Table 14-1 Primary Endpoint—Change from Baseline in XYZ at Week 12”), figure shells (“Figure 14-3 KM Curve—Time to Event”), and listing shells (“Listing 16-2 Major Protocol Deviations”). Lock numbering rules and footnote grammar. Maintain a cross-reference manifest that maps each CSR paragraph to TLF IDs for eCTD hyperlinking.
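The cross-reference manifest described above lends itself to a simple automated check: every TLF ID cited in CSR text should exist in the approved shell library. A minimal sketch, assuming IDs appear verbatim in the text in the "Table 14-1" style used above (the ID convention and function names are illustrative):

```python
import re

# Matches IDs in the "Table 14-1" / "Figure 14-3" / "Listing 16-2" style
# used in this article; adjust to your own numbering grammar.
TLF_ID = re.compile(r"(?:Table|Figure|Listing)\s+\d+-\d+")

def missing_tlf_refs(csr_text: str, tlf_library: set) -> set:
    """Return TLF IDs cited in the text but absent from the shell library."""
    cited = {m.group(0) for m in TLF_ID.finditer(csr_text)}
    return cited - tlf_library

library = {"Table 14-1", "Figure 14-3", "Listing 16-2"}
text = "The primary endpoint (Table 14-1) and KM curve (Figure 14-4) ..."
print(missing_tlf_refs(text, library))  # {'Figure 14-4'}
```

The same manifest can drive eCTD hyperlink generation, so text citations and link targets never drift apart.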

Terminology catalog & style guide. Fix terms for analysis sets, visit windows, estimands, endpoints, and safety categories. Provide language patterns (“We pre-specified…,” “Exploratory analysis suggests…,” “Sensitivity analysis confirmed robustness…”) to keep tone objective and consistent.

Deviation adjudication workbook. Build a simple adjudication tool that classifies deviations by pre-defined categories, applies major/minor thresholds, and outputs both a site-level dashboard and patient-level listing. Consistency here prevents late-stage debates.
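The adjudication logic above can be sketched in a few lines. Category names, the major/minor rule, and the record structure here are illustrative assumptions; the point is that severity assignment and the site-level tally come from one pre-defined rule set, not from ad hoc judgment at write-up:

```python
from collections import Counter

# Assumed category-to-severity rule: these category names are
# hypothetical examples, defined a priori in the deviation plan.
MAJOR_CATEGORIES = {"informed_consent", "eligibility", "dosing_error"}

def adjudicate(deviations):
    """Apply the pre-defined major/minor rule and tally by (site, severity)."""
    tally = Counter()
    for dev in deviations:
        dev["severity"] = "major" if dev["category"] in MAJOR_CATEGORIES else "minor"
        tally[(dev["site"], dev["severity"])] += 1
    return deviations, tally

devs = [
    {"site": "101", "category": "eligibility"},
    {"site": "101", "category": "visit_window"},
    {"site": "102", "category": "dosing_error"},
]
adjudicated, tally = adjudicate(devs)
print(tally[("101", "major")])  # 1
```

The same output feeds both the summary table (by site/arm) in the CSR body and the major-deviation listing in the appendices, so the two can never disagree.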

Programmer–writer handshake. Hold standing scrums between statisticians/programmers and writers. Resolve discrepancies (e.g., N mismatch) before drafting text. Enforce a “TLF freeze” milestone that triggers final line-editing; avoid version churn.

Publishing-aware anchors. Require caption-level named destinations in final PDFs and verify links with a crawler on the final zip. This eCTD-friendly habit saves reviewers time and prevents “link-to-cover” errors.


Common Challenges & Best Practices: What Trips US Reviews—and How to Avoid It

CSR says one thing; TLFs say another. Numeric claims that don’t match TLFs cause immediate trust erosion. Best practice: draft from TLFs; lock a TLF-to-text manifest; run automated number checks on near-final drafts.
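The automated number check above can start very simply: extract numeric tokens from a paragraph and diff them against the values in the linked TLF. This is a minimal sketch under assumed structures (a manifest that maps each paragraph to its TLF's key values); in practice you would whitelist labels like CI levels:

```python
import re

def numbers_in(text):
    """Extract numeric tokens (integers and decimals) as strings."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def check_claim(paragraph, tlf_values):
    """Return numbers cited in text that do not appear in the linked TLF."""
    return numbers_in(paragraph) - tlf_values

para = "Mean change from baseline was 4.2 (95% CI 2.1, 6.3)."
tlf = {"4.2", "2.1", "6.3"}
print(check_claim(para, tlf))  # {'95'}: the CI level, a label to whitelist
```

Even this crude diff, run over a near-final draft, surfaces the text-versus-TLF mismatches that erode reviewer trust fastest.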

Uncontrolled exploratory analyses. Explorations without clear labels or multiplicity context inflate perceived evidence. Best practice: segregate pre-specified vs exploratory; provide rationale; avoid over-interpretation; keep exploratory outputs in appendices or supplemental figures.

Protocol deviations dumped, not adjudicated. Long lists without categories or impact statements are unreviewable. Best practice: pre-define categories; adjudicate major/minor; summarize by site/arm; list only major deviations with impact notes in the body; put the rest in appendices.

Population fog. Ambiguous ITT/Per-Protocol definitions or inconsistent counts across sections confuse interpretation. Best practice: define analysis populations up front with rules; use a disposition diagram that reconciles randomization, treatment, analysis, and safety populations with exact Ns.
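The reconciliation behind a disposition diagram is mechanical enough to automate. An illustrative sketch, assuming a simple parallel design where the nesting rules below hold (Safety and FAS within randomized, Per-Protocol within FAS); the population names and rules are assumptions to adapt to your SAP:

```python
def reconcile(n):
    """Flag population counts that violate the assumed nesting rules."""
    issues = []
    if n["safety"] > n["randomized"]:
        issues.append("Safety set exceeds randomized N")
    if n["fas"] > n["randomized"]:
        issues.append("FAS exceeds randomized N")
    if n["per_protocol"] > n["fas"]:
        issues.append("Per-Protocol exceeds FAS")
    return issues

counts = {"randomized": 240, "fas": 238, "per_protocol": 221, "safety": 239}
print(reconcile(counts))  # [] — counts are internally consistent
```

Running this per arm and per section of the CSR catches the inconsistent Ns that otherwise surface as reviewer queries.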

Effect size without clinical meaning. Stat-sig results that fail to translate to patient benefit invite queries. Best practice: tie effect to minimal clinically important difference (MCID), responder analyses, or time-to-event benefits; state external validity and comparative context.

Safety presented as a wall of counts. Count tables alone hide patterns. Best practice: analyze dose/exposure–response, onset timing, and severity; show treatment-emergent adverse events (TEAEs) leading to discontinuation; include adverse-event-of-special-interest narratives with cross-links to listings.

Appendix chaos. Missing SAP versions, inconsistent protocol numbering, unlabeled sample CRFs, or out-of-order randomization documents delay review. Best practice: use an appendix inventory with E3 numbering; include version dates; keep randomization documentation sealed but referenced; ensure investigator lists have credentials and site identifiers.

Latest Updates & Strategic Insights: Designing Today’s CSR for Tomorrow’s Lifecycle

Estimand-aware reporting. Modern US reviews benefit when CSRs articulate estimands (treatment effect targets) and how intercurrent events were handled (treatment discontinuation, rescue, death). Even if your trial pre-dated estimand guidance, explain alignments post hoc without rewriting history; clarity here prevents misreads and makes integrated summaries cleaner.

Integration-ready outputs. Write single-study CSRs with ISS/ISE in mind. Harmonize endpoint labels, visit windows, and response definitions across studies. Include standard subgroup structures (age, sex, region, baseline severity) in TLFs so integration doesn’t require new programming.

Benefit–risk signaling. Bridge to Module 2.5: in the Discussion, explicitly state the benefit–risk balance for the studied population, the uncertainties that remain, and the proposed monitoring or labeling guardrails. This pre-stages Advisory Committee or labeling conversations without turning the CSR into advocacy.

Data standards alignment. While datasets live outside the CSR, make your text consistent with SDTM/ADaM derivations and variable definitions. Use the same analysis flags and endpoint names readers will see in the data reviewer’s guide. Consistency accelerates independent replication.

Graphics that clarify, not decorate. Favor figures that illuminate decisions—KM curves with numbers at risk; forest plots with CIs; exposure–response overlays—each with clear footnotes and TLF IDs. Keep graphic exports legible at 100% zoom and ensure fonts embed cleanly for eCTD.

US-first, globally portable. Keep E3 skeletons intact, SAP-anchored logic transparent, and TLFs traceable. Then adjust Module 2.5 and national modules (Module 1) for regional nuance. With this discipline, your clinical story will remain coherent from NDA/BLA through global submissions—saving cycles, preventing avoidable queries, and keeping reviewer attention on what matters: patient-relevant benefit with acceptable risk.