QA for ACTD Dossiers: File Integrity, Cross-References, and Leaf-Title Checks

Published on 17/12/2025

Quality-Assuring ACTD Dossiers: Integrity, Navigation, and Naming That Accelerate Review

The QA Mission for ACTD: From “Looks Fine” to “Provably Correct” Files

In ASEAN Common Technical Dossier (ACTD) markets, many first-cycle delays come from problems that have nothing to do with your science. They arise because reviewers cannot open, search, or navigate your files quickly—or because the same concept is named three different ways across forms, labels, and filenames. A disciplined quality assurance (QA) program converts those soft spots into predictable wins. Its job is to transform “the PDF looks fine on my screen” into provable integrity—searchable text, embedded fonts for non-Latin scripts, caption-level bookmarks, hyperlinks that land on the right table, and filenames that behave correctly when you replace a leaf later. When these foundations are solid, your clinical, nonclinical, and CMC evidence becomes discoverable, and assessors can verify claims in seconds rather than minutes.

Think in three layers. Layer 1—File Integrity: PDFs must be technically healthy, with embedded fonts (including Thai/Khmer/Lao), selectable text, lossless figures where legibility matters, and clean metadata (no password protection, expected page sizes, correct page counts). Layer 2—Navigation: bookmarks that reach captions, named destinations for the tables/figures cited in Module 2, and a hyperlink manifest to inject links consistently. Layer 3—Governance: a leaf-title catalog and filename rules so lifecycle “replace” operations work in portals that may not implement full eCTD logic. Across all three layers, use harmonized vocabulary from the International Council for Harmonisation for definitions, and align authoring and readability conventions with practices visible at the European Medicines Agency and the U.S. Food & Drug Administration. QA is not an afterthought; it is the operating system of a smooth ACTD review.

A high-performing QA function behaves like a regulated build shop. It runs gates (pre-QC, publishing, shipment), generates logs (font/embed checks, link crawls, checksums), and owns a compact defect taxonomy (identity drift, link breakage, non-searchable scans, bookmark depth, filename mismatch). The output is a “ship-set” that is identical across countries except for Module 1 wrappers—forms, translations, legalizations, and labeling—so you can localize with confidence without touching the science.

File Integrity First: Searchability, Embedded Fonts, Images, and Checksums

Technical integrity is the lowest-effort, highest-impact prevention against technical rejection. ACTD portals differ in sophistication, but three defects recur: image-only scans, missing fonts, and mutated PDFs from round-tripping through different authoring tools. Your QA baseline should enforce:

  • Searchable text: All narrative and table text must be selectable. If a source is scanned (e.g., legalized documents), create a searchable layer via OCR while retaining the original appearance. Never ship a science leaf as an image-only file; reviewers cannot copy numbers or navigate to captions reliably.
  • Embedded fonts: Non-Latin scripts require embedded fonts to display correctly across systems. Validate that every PDF embeds its fonts; failures often surface only after portal processing. Test especially for Thai/Khmer/Lao in bilingual files.
  • Legibility at 100%: Figures and plots must remain legible without zoom gymnastics. Save vector graphics where possible; for rasters, use resolution that preserves axis labels and confidence bands. Crop excess margins to reduce file size while keeping content intact.
  • Sanity of metadata: Confirm page size, orientation, and page count match expectations; remove passwords; ensure the document title field matches the leaf title; and strip authoring artifacts that could confuse automated checks.
  • Checksums & lineage: Produce SHA-256 (or comparable) hashes for each file and the final archive so you can prove a replacement leaf is exactly what you claim. Store hashes in your shipment log and reference them in your “What Changed” note when lifecycle updates occur.
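
The checksum-and-lineage step above is easy to script. A minimal sketch, assuming a Python QA toolchain and standard-library only (`build_ledger` and `sha256_of` are illustrative names, not part of any portal API):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large CSR appendices are hashed without loading into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_ledger(shipment_dir: Path) -> dict:
    """Map each relative filename in the shipment to its SHA-256 for the shipment log."""
    return {
        str(p.relative_to(shipment_dir)): sha256_of(p)
        for p in sorted(shipment_dir.rglob("*"))
        if p.is_file()
    }
```

Run the ledger on the final, regenerated shipment (not the working folder) so the recorded hashes describe exactly what the portal receives.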

Balancing size and quality matters. Large clinical reports and CSR appendices can exceed portal caps if you rely on uncompressed images; at the same time, over-compression can render key annotations unreadable. QA should own profiles—lossless for data-dense pages and light optimization for narrative—to keep bundles within size constraints without sacrificing legibility. Always regenerate a final shipment and test that shipment, not the working folder; many link and font issues appear only in the last mile.

Cross-References That Always Land: Named Destinations, Link Manifests, and Coverage Audits

ACTD lacks a universal XML backbone, so the PDF is the interface. Cross-references must therefore be explicit and resilient. Build a hyperlink manifest—a controlled list that maps every claim in Module 2 (quality overall summary, clinical overview/summaries) to a named destination on a caption in Modules 3–5. QA should verify three things before shipment:

  • Anchor existence: Every cited table/figure has a caption-level named destination, not just a page-level bookmark. Links must land on the caption so reviewers can immediately confirm they are in the right place.
  • Coverage ratio: A simple metric—claims with links / total claims—should be 100% for Module 2. Any claim without a link must be justified (e.g., a narrative concept with no specific figure).
  • Post-pack link crawl: Run an automated crawl on the final shipment to confirm that every link resolves and each destination exists. The crawl report (pass/fail and broken link list) belongs in the QA log.
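
The coverage-ratio check above can be computed directly from the hyperlink manifest. A sketch, assuming the manifest is loaded into simple records (the `ManifestRow` shape and the claim-ID scheme are hypothetical; substitute your own manifest columns):

```python
from dataclasses import dataclass
from typing import Optional, List, Tuple

@dataclass
class ManifestRow:
    claim_id: str                 # e.g. "QOS-3.2.P.8-claim-12" (illustrative ID scheme)
    target_anchor: Optional[str]  # caption-level named destination, or None if unlinked
    justification: str = ""       # required whenever target_anchor is None

def coverage_report(rows: List[ManifestRow]) -> Tuple[float, List[str]]:
    """Return (links/claims ratio, claim IDs that are unlinked AND unjustified)."""
    linked = [r for r in rows if r.target_anchor]
    violations = [r.claim_id for r in rows
                  if r.target_anchor is None and not r.justification]
    ratio = len(linked) / len(rows) if rows else 1.0
    return ratio, violations
```

A gate can then fail the build when `violations` is non-empty or the ratio falls below the agreed threshold for Module 2.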

Make link integrity robust against layout changes. If a figure is re-exported, named destinations can vanish even when bookmarks survive; your SOP must re-generate both bookmarks and named destinations during any PDF update. Avoid “hard-typed” page numbers (“see page 317”) in Module 2; numbers drift, and you cannot QA them at scale. Use descriptive links (“see Stability Fig. 5: 30 °C/75% RH”) that remain accurate even when pagination shifts, because the caption title is stable. For bilingual or localized files, ensure that the anchor IDs remain identical across languages; links from the English overview should land on the destination in the English scientific file, not the localized leaflet.

Finally, test links without your authoring plugins. Open the final PDFs in a standard reader used by authorities. If a link only works in your authoring suite, it will not work for reviewers. QA’s role is to simulate the assessor experience and ensure there are no surprises.

Leaf-Title & Filename Governance: Replacement-Friendly Names and Lifecycle Stability

Portals that accept ACTD often implement sorting and replacement behavior based on filename or a simple index. If titles and names are inconsistent, you risk duplicates and broken lifecycle continuity. QA should own a leaf-title catalog with canonical titles and ASCII-safe filenames that will not change across sequences or countries. Robust naming rules include:

  • Padded numerals: Use “01_”, “02_” prefixes for natural sort order; do not trust alphabetical sorting of “1, 10, 2.”
  • ASCII-safe characters: Avoid diacritics and punctuation that some gateways strip or mutate. Replace spaces with underscores; document the convention.
  • Stable grammar: Decide once between “3.2.P.5_Specification” and “P5_Specifications” and never mix styles. Stability helps replacement behave predictably.
  • Title ↔ filename linkage: Ensure the PDF’s internal title field matches the visible leaf title and filename stem. Reviewers appreciate coherence when triaging large packages.
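
The padding and ASCII-safety rules above can be enforced mechanically. A minimal sketch using only the standard library (`canonical_filename` is an illustrative helper, not a portal requirement):

```python
import re
import unicodedata

def canonical_filename(stem: str, index: int) -> str:
    """Apply a padded-numeral prefix and ASCII-safe normalization to a leaf name."""
    # Decompose accented characters, then drop the non-ASCII remainder.
    ascii_stem = unicodedata.normalize("NFKD", stem).encode("ascii", "ignore").decode()
    ascii_stem = ascii_stem.replace(" ", "_")
    # Keep only characters gateways are unlikely to strip or mutate.
    ascii_stem = re.sub(r"[^A-Za-z0-9._-]", "", ascii_stem)
    return f"{index:02d}_{ascii_stem}"
```

The padding matters because plain string sort puts "10_x" before "1_x" (and "1, 10, 2" in that order); "01_x", "02_x", "10_x" sorts naturally everywhere.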

QA must also control version notes. Do not encode dates or versions into filenames if the portal does not require them; that breaks replacement logic. Instead, maintain a shipment ledger with filename, hash, and sequence ID and provide a one-page “What Changed” note that lists the leaves touched, the exact paragraphs or captions changed, and the before/after hashes. This preserves lifecycle clarity while keeping filenames stable. For country variants, use a short, controlled suffix (e.g., “_IDN”, “_THA”) only when absolutely necessary and never for scientific leaves that are shared across countries.
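
The ledger-plus-note pattern described above can be modeled as a small record per touched leaf. A sketch under the assumption that the "What Changed" note is generated from structured data rather than hand-written (the `ChangeRecord` fields mirror the ledger columns named in the text; the class itself is hypothetical):

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class ChangeRecord:
    filename: str      # stable name: no dates or versions encoded
    sequence_id: str   # e.g. "0004" (illustrative)
    location: str      # the exact paragraph or caption changed
    hash_before: str
    hash_after: str

def what_changed_note(records: List[ChangeRecord]) -> str:
    """Render a one-page 'What Changed' note from the shipment ledger entries."""
    lines = ["What Changed"]
    for r in records:
        lines.append(
            f"- {r.filename} (seq {r.sequence_id}): {r.location}; "
            f"{r.hash_before[:8]} -> {r.hash_after[:8]}"
        )
    return "\n".join(lines)
```

Keeping the note machine-generated guarantees it never disagrees with the ledger it summarizes.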

Finally, retire ad-hoc renames. Renaming a file to “fix” a cosmetic issue can explode into a cascade of broken links and duplicate leaves. Any rename request should be evaluated by QA against the impact on links, bookmarks, and country packs already in flight; if accepted, it should occur at a defined lifecycle boundary with all anchors regenerated and the catalog updated.

Bookmarks, Table of Contents, and Caption Practices: Teaching the PDF to Behave Like an Index

Good bookmarks are more than a convenience; they are a review accelerator. ACTD reviewers expect deep navigation, especially where Module 2 statements cite specific tables and figures. QA should enforce:

  • Depth to captions: Bookmarks must reach H2/H3 and extend to caption level for numbered tables and figures. A “Tables and Figures” super-bookmark is not enough; reviewers should not have to hunt inside a long file.
  • Consistent caption grammar: Use a single style (“Figure 5. Long-term stability at 30 °C/75% RH”) and keep numbering stable across regenerations. Caption titles double as link targets; QA verifies they are unique and descriptive.
  • Named destinations on captions: Every caption has a named destination so links land precisely; bookmarks alone are insufficient when multiple anchors live on a page.
  • Table of contents (TOC) where helpful: For very large CSRs or validation reports, include a clickable TOC synchronized with bookmarks. QA cross-checks that TOC page numbers and bookmark targets align.
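
The "consistent caption grammar" and uniqueness checks above lend themselves to a simple audit pass over extracted caption strings. A sketch assuming captions have already been pulled from the PDFs as plain text (the pattern encodes the single agreed style, "Figure 5. Title"; adapt the regex if your house grammar differs):

```python
import re
from typing import List

CAPTION_RE = re.compile(r"^(Figure|Table) (\d+)\. \S.*$")

def audit_captions(captions: List[str]) -> List[str]:
    """Flag captions that break the agreed grammar or reuse a number."""
    problems, seen = [], set()
    for c in captions:
        m = CAPTION_RE.match(c)
        if not m:
            problems.append(f"bad grammar: {c!r}")
            continue
        key = (m.group(1), m.group(2))  # ("Figure", "5") must be unique per file
        if key in seen:
            problems.append(f"duplicate number: {c!r}")
        seen.add(key)
    return problems
```

Because caption titles double as link targets, running this audit before manifest injection catches drift while it is still cheap to fix.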

For figures that are data-dense (e.g., Q1E regressions, dissolution profiles), ensure labels are legible at default zoom and that color-only distinctions include shape or pattern cues for accessibility. Where bilingual constraints exist, do not cram captions until they become unreadable; move translation to a parallel caption only where legibility remains high. QA should confirm that caption references in Module 2 match the exact titles—small wording drift is a common source of “cannot locate” feedback even when numbering is correct.

Lastly, test cross-document navigation. If Module 2 links to a stability figure inside a multi-file bundle, ensure the bookmark and named destination survive when the target file is opened directly from the portal context. Some readers reset view on open; set the destination to zoom to the caption rather than the top of the page to minimize reviewer scrolling.

Pre-Shipment QC Gates and Defect Taxonomy: From Spot Checks to Measurable Control

A repeatable QA program runs gates that produce evidence of control. Three gates cover most risks:

  • Pre-QC Gate (content): Regulatory writing confirms that each Module 2 claim has an anchor in Modules 3–5; CMC/clinical leads confirm that the numbers shown in Module 2 match the underlying tables (no re-typing).
  • Publishing Gate (navigation): PDF linter verifies searchability and embedded fonts; bookmark audit checks depth to captions; hyperlink manifest injection completes; a post-pack link crawl passes with 100% links resolving.
  • Shipment Gate (governance): Leaf-title catalog is unchanged; filenames are ASCII-safe and padded; checksums are captured; “What Changed” note is present if replacing leaves; identity parity across Module 1 forms, legalized documents, and labeling is signed off.

Defects must be classified to improve the system, not just the file. A light taxonomy helps: Identity drift (names/addresses, signatory titles), Navigation (bookmarks, anchors, hyperlinks), Integrity (fonts, searchability, password), Naming (leaf titles, filenames), Label–data parity (storage/in-use vs stability), and Lifecycle (duplicates, missed replacements). QA should publish weekly metrics: gateway pass rate, anchor coverage, broken links per 100 pages, and first-pass acceptance. Over time, you will see which defects pay the highest ROI when fixed upstream (typically fonts/searchability and link coverage).
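
The weekly metrics above are trivial to compute once defects carry taxonomy labels. A sketch, assuming defects are logged as category strings (the category keys below paraphrase the taxonomy in the text; the function name is illustrative):

```python
from collections import Counter
from typing import Dict, List

TAXONOMY = {"identity", "navigation", "integrity", "naming", "label_parity", "lifecycle"}

def weekly_metrics(defects: List[str], broken_links: int, pages: int) -> Dict:
    """Tally defects by category and normalize link breakage per 100 pages."""
    unknown = [d for d in defects if d not in TAXONOMY]
    if unknown:
        # Force every defect into the taxonomy so trends stay comparable week to week.
        raise ValueError(f"unclassified defects: {unknown}")
    return {
        "by_category": dict(Counter(defects)),
        "broken_links_per_100_pages": round(100 * broken_links / pages, 2) if pages else 0.0,
    }
```

Rejecting unclassified defects is the design choice that keeps the taxonomy honest: ad-hoc labels would quietly erode the trend lines.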

Remember that ACTD is a wrapper. Your gates must not mutate science late in the process. If a numeric mismatch appears, fix it at the source table and regenerate anchors; do not “patch” Module 2 prose. Gate logs are not bureaucracy—they are your negotiation leverage if a portal mis-orders files or a reviewer experiences a rendering glitch. Evidence beats argument when schedules are tight.

Issue Response & Lifecycle: Micro-Corrections, “What Changed,” and Country Waves

Even strong QA programs face late surprises: a consulate returns a document with a different stamp position; a portal truncates long filenames; a link breaks during a last-minute figure update. Treat fixes as micro-corrections with visible lineage. The “What Changed” note should list the leaves touched, the specific paragraphs or captions edited, and the before/after checksums. Re-run the post-pack link crawl and font/searchability checks on the updated shipment; attach logs to your response so reviewers can trust the replacement without re-auditing the entire file.

Lifecycle management in ACTD resembles eCTD discipline without the XML. Keep filenames and leaf titles stable; avoid appending “_v2” unless the portal requires it. Instead, rely on sequence IDs and logs. If a change affects multiple countries mid-wave, decide explicitly whether to (1) hold the wave and update wrappers, (2) move the country to the next wave with the corrected ship-set, or (3) execute a controlled fork (rare) with a documented rationale. The worst option is silent divergence—different numbers or titles in different markets with no record of why.

For label-driven changes (e.g., storage statement refined after new IVb points), synchronize the copy deck and verify numeric parity across bilingual files. Regenerate anchors where the label cites Module 3. When a query asks “where did this number come from?”, respond with a claim→anchor map rather than narrative. Fast, precise answers shorten queue time and build confidence that subsequent lifecycle updates will also be tidy.

People, Tools, and SOPs: The Minimal Stack for Repeatable Quality

You do not need a sprawling tech footprint to deliver excellent ACTD QA. You need a small set of enforcing tools and clear SOPs. Minimum stack:

  • PDF linter: checks fonts, searchability, page sizes, bookmark depth, and flags image-only pages. Run on final shipments.
  • Link crawler & injector: reads the hyperlink manifest and confirms every Module 2 link resolves to a caption-level named destination in Modules 3–5.
  • Leaf-title catalog & identity sheet: controlled spreadsheets (or a lightweight DB) that freeze names, filenames, and identity strings for forms and artwork.
  • Checksum generator: hashes each file and archive; outputs a ledger stored with the submission record and attached to responses when you replace leaves.
  • Terminology & numeric rules: a bilingual glossary and decimal/precision standards that translation vendors must follow; enforce via a numeric parity scan.
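
The numeric parity scan in the last bullet can be as simple as comparing the ordered numeric tokens of the source and translated strings. A minimal sketch (real pipelines must also normalize decimal separators, since some target locales use a comma; that step is omitted here):

```python
import re
from typing import List

NUM_RE = re.compile(r"\d+(?:\.\d+)?")

def numbers_in(text: str) -> List[str]:
    """Extract numeric tokens as strings so precision (e.g. '5.0' vs '5') is preserved."""
    return NUM_RE.findall(text)

def numeric_parity(source: str, translation: str) -> bool:
    """True when both language versions carry the same numbers in the same order."""
    return numbers_in(source) == numbers_in(translation)
```

Comparing tokens as strings rather than floats is deliberate: it flags precision drift ("5.0" silently becoming "5"), which your decimal/precision standards forbid.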

Staffing hinges on clear roles. Publishing owns linting, bookmarks, links, and catalogs; Regulatory Writing owns the manifest and claim coverage; QA owns the gates, logs, and defect taxonomy; Local Agents validate Module 1 etiquette and portal quirks. Train with golden packs—de-identified examples that passed quickly—and build checklists from them. Update SOPs after each wave so improvements stick. Over time, your QA program becomes a quiet engine: it prevents avoidable defects, accelerates review, and lets your science speak for itself with minimal friction.