Pharmacovigilance (PV) & Safety Reporting in Clinical Trials: Timelines, Workflows, and Audit-Proof Documentation
Safety governance that stands up to questions
In inspections, the most common safety-system weakness is not the lack of a procedure—it’s the inability to show who did what, when, and why, consistently across partners (Sponsor, CRO, labs, sites). A practical way to prevent this is to make governance visible and traceable through a small, controlled set of documents that align with your protocol and Safety Management Plan (SMP). This section is operational guidance only and not legal advice.
1) Build a RACI that matches reality (and your contracts)
A one-page RACI (Responsible/Accountable/Consulted/Informed) is more useful than a 30-page narrative—if it’s accurate and used. Inspectors frequently probe “Accountable” responsibilities: medical review, reporting decision-making, oversight of delegated activities, and reconciliation.
| Process step | Sponsor Safety Lead | Medical Monitor | CRO Case Processing | Data Management/EDC | QPPV/Local Safety Officer* |
|---|---|---|---|---|---|
| Safety intake & triage | A | C | R | I | I |
| Medical review of seriousness/causality | A | R | C | I | C |
| Expedited reporting decision & submission | A | R | R | I | C |
| Follow-up requests & tracking | A | C | R | I | I |
| SAE/AE reconciliation | A | C | R | R | I |
*Titles vary by region and organizational model. Document what applies to your trial and markets.
2) Minimum “inspection-ready” safety governance set
Keep these artifacts current, version-controlled, and easy to retrieve (ideally in your TMF/eTMF with clear filing conventions; see TMF/eTMF Excellence):
- Safety Management Plan (SMP): case intake channels, clock-start definitions, reporting rules, role mapping, reconciliation approach, metrics, escalation.
- Vendor oversight package: qualification summary, KPIs, quality agreement excerpts, audit plan (see Vendor Oversight).
- System assurance evidence for safety database and interfaces (see CSV vs CSA).
- Training matrix for safety roles, including refresh cadence and competency checks.
- Safety governance meeting minutes with decisions and action tracking (e.g., signal discussion, trend review, reconciliation issues).
End-to-end ICSR (Individual Case Safety Report) workflow (what to standardize, what to evidence)
Well-run PV teams standardize the steps that create repeatable quality, then focus expertise on exceptions (complex narratives, medical assessment, meaningful follow-up). The key is to document decisions so another qualified reviewer can reconstruct the case logic.
1) Define “clock start” and data cut rules
Before first patient first visit, ensure your SMP clearly defines:
- Clock start: which organization’s receipt counts as “Day 0,” how weekends/holidays are handled, and how late follow-up is managed.
- Minimum information for a valid case (e.g., identifiable patient, identifiable reporter, suspect product, adverse event).
- Data cut-off times for reporting windows and what constitutes “receipt” when information arrives via EDC, email, hotline, or vendor portal.
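As an illustration only, the validity and clock-start rules above can be encoded so they are applied identically regardless of intake channel. The field names, the calendar-day rule, and the 15-day window here are assumptions; substitute your SMP's actual definitions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical field names; adapt to your safety database schema.
@dataclass
class CaseIntake:
    received_at: datetime          # first receipt by any contracted party
    patient_identifiable: bool
    reporter_identifiable: bool
    suspect_product: Optional[str]
    adverse_event: Optional[str]

def is_valid_case(c: CaseIntake) -> bool:
    """Four minimum criteria: identifiable patient, identifiable reporter,
    suspect product, adverse event."""
    return (c.patient_identifiable and c.reporter_identifiable
            and bool(c.suspect_product) and bool(c.adverse_event))

def reporting_due(clock_start: datetime, window_days: int = 15) -> datetime:
    """Day 0 = first valid receipt; calendar days with no weekend pause
    (an assumption -- encode your SMP's actual weekend/holiday rule here)."""
    return clock_start + timedelta(days=window_days)
```

The value of the sketch is not the arithmetic but the single place where "receipt" and "valid case" are defined, which is exactly what inspectors probe.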
2) A practical, auditable ICSR processing flow
- Intake & triage: document source, receipt time, and initial assessment (serious/non-serious; expectedness framework).
- Data entry: use controlled fields; avoid free-text duplication; capture references to source documents (visit note, discharge summary, lab reports).
- Narrative drafting: write a concise timeline (baseline status → exposure → event onset → key diagnostics → treatment → outcome). Include relevant negatives if clinically meaningful.
- Medical review: capture rationale for serious criteria, causality, and expectedness decisions. If “not related,” document reasoning (alternative etiology, temporal relationship, dechallenge/rechallenge).
- Quality control (QC): independent check using a structured checklist (see below).
- Submission: keep evidence of what was submitted, when, and through which gateway; retain acknowledgements and error-handling records.
- Follow-up: track outstanding information with due dates; document attempts and responses (including non-response).
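The processing flow above can be sketched as a minimal status model where every transition records who acted, when, and why, producing the reconstructable trail the medical-review step depends on. The stage names and record shape are illustrative assumptions, not a regulatory standard.

```python
from datetime import datetime, timezone

# Assumed stage sequence mirroring the flow above; adapt to your process.
STAGES = ["intake", "data_entry", "narrative", "medical_review",
          "qc", "submission", "follow_up"]

class ICSRCase:
    def __init__(self, case_id: str):
        self.case_id = case_id
        self.stage = "intake"
        self.audit_log = []   # who did what, when, and why

    def advance(self, to_stage: str, user: str, rationale: str) -> None:
        """Only allow the next defined stage, and record the decision trail."""
        expected = STAGES[STAGES.index(self.stage) + 1]
        if to_stage != expected:
            raise ValueError(f"cannot move {self.stage} -> {to_stage}")
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": user,
            "from": self.stage,
            "to": to_stage,
            "rationale": rationale,  # e.g. causality reasoning at medical review
        })
        self.stage = to_stage
```

Forcing a rationale on every transition is the design point: a stage change with no documented reasoning is exactly the gap inspectors find.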
ICSR QC checklist (example)
- Valid case criteria met and correctly documented
- Event term coded appropriately; seriousness criteria consistent across fields and narrative
- Suspect product exposure dates and dose are coherent with protocol/records
- Concomitant meds and relevant medical history captured (as needed to interpret causality)
- Lab/diagnostic values included when they drive seriousness classification
- Narrative provides a clear timeline with dates (or clearly states unknown)
- Medical assessment rationale recorded (not just a tick-box)
- Attachments are complete, readable, and traceable to source
- Submission package includes version/date of reference safety information used for expectedness
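One way to keep QC evidence structured, sketched here under assumed item names, is to record a pass/fail per checklist item rather than a single tick-box, so rework rates by defect type (used in the metrics section below) fall out of the data.

```python
# Illustrative item identifiers mapping to the checklist above;
# not a standard vocabulary.
QC_CHECKLIST = [
    "valid_case_criteria_documented",
    "coding_and_seriousness_consistent",
    "exposure_dates_coherent",
    "conmeds_history_captured",
    "labs_support_seriousness",
    "narrative_timeline_dated",
    "medical_rationale_recorded",
    "attachments_traceable",
    "rsi_version_recorded",        # reference safety information version
]

def qc_result(answers: dict[str, bool]) -> dict:
    """Summarize a QC pass: every item must be answered, and answered True."""
    missing = [item for item in QC_CHECKLIST if item not in answers]
    failed = [item for item, ok in answers.items() if not ok]
    return {"complete": not missing,
            "passed": not failed and not missing,
            "missing": missing,
            "failed": failed}
```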
Reconciliation: where PV, ClinOps, and Data Management must align
Reconciliation is one of the fastest ways inspectors detect weak oversight. If your safety database, EDC, and TMF tell different stories about the same participant, the discussion escalates quickly. A “good” reconciliation process is scheduled, documented, and closes discrepancies with traceable decisions.
1) Reconciliation types you should plan for
- SAE ↔ EDC reconciliation: ensure all SAEs in EDC exist in safety DB and vice versa; align onset dates, seriousness criteria, and outcomes.
- Deaths: cross-check all deaths against discontinuation forms, end-of-study forms, and narratives; ensure death certificates/autopsy (if applicable) are handled per plan.
- Pregnancy: verify pregnancy reporting workflow, follow-up, and outcome documentation; ensure appropriate classification and reporting rules are applied.
- Laboratory signals: if central labs detect critical values, document how alerts trigger AE/SAE evaluation and follow-up.
2) Operational checklist for reconciliation cycles
- Run extracts from both systems using the same cut-off timestamp
- Match on participant ID + event onset date + event term (allow controlled tolerance rules)
- Triage mismatches by category (missing case, field discrepancy, seriousness mismatch, date mismatch)
- Assign owners and due dates; log queries and resolutions
- Document closure and trend recurring root causes (training, form design, site workflow)
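The matching step in the cycle above can be sketched as follows, assuming illustrative field names and a one-day onset-date tolerance; align both with your actual extracts and controlled tolerance rules.

```python
from datetime import date

def reconcile(safety_db: list[dict], edc: list[dict],
              tolerance_days: int = 1) -> dict:
    """Match on participant ID + event term, allowing onset dates to
    differ by a controlled tolerance. Returns the 'missing case' buckets;
    field-level discrepancies would be a second pass over matched pairs."""
    def matches(a: dict, b: dict) -> bool:
        return (a["participant_id"] == b["participant_id"]
                and a["event_term"] == b["event_term"]
                and abs((a["onset"] - b["onset"]).days) <= tolerance_days)

    missing_in_edc = [a for a in safety_db
                      if not any(matches(a, b) for b in edc)]
    missing_in_safety = [b for b in edc
                         if not any(matches(a, b) for a in safety_db)]
    return {"missing_in_edc": missing_in_edc,
            "missing_in_safety_db": missing_in_safety}
```

Running both extracts at the same cut-off timestamp, per the checklist, is what makes the two "missing" buckets meaningful rather than artifacts of timing.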
Trend outputs should feed your monitoring strategy and CAPA system (see RBM That Works and Protocol Deviations and CAPA).
What “audit-proof documentation” looks like in practice
Audit-proof does not mean perfect. It means complete traceability, timely actions, and a quality system that detects and corrects issues. Use the following as an evidence map for your trial:
Safety evidence map (suggested)
- Process evidence: SMP, relevant SOPs, work instructions, decision trees, escalation pathways.
- People evidence: training records, role-based onboarding checklists, delegation/authorization evidence.
- System evidence: validation/assurance summary, access control reviews, audit trail policies (see ALCOA+ Data Integrity).
- Oversight evidence: vendor KPIs, reconciliation logs, QC sampling plans, management review minutes.
- Issue evidence: deviations, CAPAs, effectiveness checks, and documented learning.
Example: documenting a late follow-up without creating a finding
If follow-up information arrives after an expedited report is submitted, your record should clearly show:
- Date/time follow-up was received (and by whom)
- Assessment of impact on seriousness/causality/expectedness
- Decision on whether a follow-up report was submitted and timeline rationale
- Root cause if lateness was preventable (e.g., site training gap) and what you changed
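A minimal record shape covering those four points might look like the following; the field names are hypothetical, not a standard schema, and the point is that every element of the story has a dedicated, retrievable home.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative evidence record for a late follow-up; adapt field names
# to your safety database or tracking tool.
@dataclass
class LateFollowUpRecord:
    received_at: datetime           # date/time follow-up was received
    received_by: str                # and by whom
    impact_assessment: str          # effect on seriousness/causality/expectedness
    follow_up_report_submitted: bool
    timeline_rationale: str         # why submitted (or not) on that timeline
    root_cause: Optional[str] = None        # only if lateness was preventable
    corrective_action: Optional[str] = None # what you changed
```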
Keep the story consistent across systems, and make it easy to retrieve within your inspection readiness approach (see Inspection Readiness).
Ongoing safety oversight: metrics and governance that show control
Beyond individual case processing, inspectors often test whether the sponsor has active oversight of safety performance across the study. That oversight is easiest to demonstrate when you define a small set of metrics, review them routinely, and document actions taken when performance drifts.
Safety performance metrics (examples)
- Timeliness: median and 90th percentile time from receipt to triage, medical review, and submission (split by source: site, call center, vendor).
- Quality: QC rejection/rework rate, most common QC defects, and corrective actions implemented.
- Follow-up effectiveness: % follow-up requests closed within X days; non-response patterns by site/region.
- Reconciliation health: discrepancy counts by category (missing case, date mismatch, seriousness mismatch) and closure cycle time.
- Signal surveillance inputs: emerging trends (event clustering, lab critical values, product complaint linkage) and the governance decisions made.
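The timeliness metrics above (median and 90th percentile) can be computed with the standard library alone; the only real design decision, noted in the comment, is fixing one quantile method so trends stay comparable across reporting periods.

```python
import statistics

def timeliness_metrics(hours: list[float]) -> dict:
    """Median and 90th percentile of receipt-to-action times in hours.
    Uses the 'inclusive' quantile method; whichever method you choose,
    keep it stable across periods so drift reflects performance, not math."""
    deciles = statistics.quantiles(hours, n=10, method="inclusive")
    return {"median": statistics.median(hours),
            "p90": deciles[8]}   # 9th cut point = 90th percentile
```

Split the input list by source (site, call center, vendor), as the bullet suggests, before calling this, so a slow channel is visible rather than averaged away.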
Governance outputs to file
File dated governance minutes that capture: what was reviewed, what changed vs last period, what decisions were made (site contact, training, process change), and how effectiveness will be checked. These records support both vendor oversight and inspection readiness (see Vendor Oversight and Inspection Readiness).
Inspection questions to pre-answer (and where the evidence lives)
- “How do you define clock start?” → SMP and intake channel definitions.
- “How do you ensure completeness across systems?” → reconciliation logs and trend reports.
- “How do you oversee the CRO?” → KPIs, governance minutes, and audit/CAPA evidence.
- “How do you control system access and audit trails?” → assurance summaries and ALCOA+ controls (see ALCOA+).
Narratives and medical assessment: make your reasoning reconstructable
Even when timelines are met, case quality can fail if the narrative and medical assessment do not allow a second reviewer to understand the logic. A practical standard is that another qualified safety professional should be able to reconstruct: what happened, why it was considered serious (or not), why causality and expectedness were assigned, and what follow-up was pursued.
- Narrative timeline: baseline → exposure → onset → key diagnostics → treatment → outcome, with dates (or explicit “unknown”).
- Rationale fields: document why “not related” or “expected” was concluded (alternative etiology, temporal relationship, dechallenge/rechallenge).
- Follow-up logic: what information was missing, why it mattered, and how attempts were documented.
Build narrative expectations into training and QC checklists so quality is consistent across internal staff and CRO partners (see Vendor Oversight).