Streamline ATMP Data Analysis Workflows to Save Experiments & Accelerate Commercialization

Advanced therapy teams generate more data than manual tools can handle. In this session, Körber experts demonstrate how Savvy, a domain-specific analytics environment for bioprocesses, automates data capture and alignment, detects process phases, extracts KPIs, and builds integrated process models and digital twins. The result is fewer physical runs, stronger control strategies aligned with ICH Q10 and Q12, and a quicker path from R&D to commercialization.

05 Sep 2023
| Ashley Alderson

Speakers

  • Georgie Makin, Host

  • Christoph Herwig, Körber Business Area Pharma

  • Younge Qu, Körber Business Area Pharma

What this webinar covers

  • The ATMP data challenge: heterogeneous sources, process variability, and knowledge silos across partners and the product lifecycle.

  • A practical, four-part framework: data management, information extraction, knowledge generation, and knowledge management.

  • How Savvy operationalizes this framework with automated contextualization, phase detection, KPI and feature extraction, integrated process modeling, and digital twins purpose-built for cell and gene therapy manufacturing.

Four-part framework, applied

  • Data management

    • Automated data collection and alignment across instruments, scales, and sites.

    • Domain-specific data models replace error-prone spreadsheets and manual time alignment.

  • Information extraction

    • Automated phase detection of steps such as induction or transfection.

    • KPI extraction for consistent, time-aligned analysis of cell growth, productivity, and quality signals.

    • Feature extraction turns time-series data into analytics-ready matrices for multivariate statistics and machine learning.

  • Knowledge generation

    • Integrated process modeling that links critical process parameters (CPPs) to critical quality attributes (CQAs) across unit operations.

    • Monte Carlo simulations to understand variability propagation and to set scientifically justified limits.

    • Hybrid digital twins that combine mechanistic understanding with data-driven models for virtual experimentation and scale-up.

  • Knowledge management

    • Systematic capture, deployment, and adaptation of process knowledge using ontologies, models, and twins.

    • Reuse across programs, sites, and partners to accelerate lifecycle management and tech transfer.
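The information-extraction steps above — phase detection, KPI extraction, and turning time series into feature matrices — can be sketched in a few lines of plain Python. This is an illustrative sketch only, not Savvy's implementation: the threshold rule, the chosen KPIs, and the data layout are assumptions made for the example.

```python
# Illustrative sketch of phase detection and feature extraction for
# bioprocess time series. Not Savvy's implementation: the threshold
# rule, KPI names, and data layout are assumptions for this example.
from statistics import mean

def detect_phase(signal, threshold):
    """Return (start, end) indices of the first contiguous run where
    the signal exceeds the threshold (e.g. an induction phase)."""
    start = end = None
    for i, v in enumerate(signal):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            end = i
            break
    if start is not None and end is None:
        end = len(signal)
    return start, end

def extract_features(signal, threshold):
    """Collapse one time series into a fixed-length feature vector,
    so many batches stack into an analytics-ready matrix."""
    start, end = detect_phase(signal, threshold)
    phase = signal[start:end] if start is not None else []
    return [
        max(signal),                    # peak value (e.g. max titer)
        mean(signal),                   # overall mean level
        len(phase),                     # phase duration (in samples)
        mean(phase) if phase else 0.0,  # mean level within the phase
    ]

# Two hypothetical batches -> a 2 x 4 feature matrix ready for
# multivariate statistics or machine learning.
batches = [
    [0.1, 0.2, 1.5, 2.0, 1.8, 0.3],
    [0.2, 0.1, 1.2, 2.4, 2.1, 0.2],
]
matrix = [extract_features(b, threshold=1.0) for b in batches]
```

Because every batch collapses to the same fixed-length vector, batches of different durations become directly comparable rows of one matrix — the property that makes clone selection and batch comparison tractable.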

Key insights

  • Automate contextualization to keep pace with ATMP data. A domain-specific model enables rapid, accurate, and scalable handling across the lifecycle, reducing delays and errors.

  • Phase detection and KPI extraction improve comparability. Consistent, time-aligned metrics make clone selection, batch comparison, and deviation analysis more reliable.

  • Feature engineering unlocks advanced analytics. Dimension-reduced feature sets support robust multivariate statistics, outlier detection, and early signal discovery.

  • Integrated models strengthen control strategies. Linking CPPs to CQAs across the chain supports justified acceptance criteria, not arbitrary statistical rules.

  • Digital twins cut physical experimentation. Virtual testing of changes and scale-up scenarios reduces cost and de-risks tech transfer.

  • Start in R&D for maximum ROI. Early data discipline and model building compound benefits through characterization, validation, and commercial readiness.

  • Manage knowledge as a strategic asset. Standardized capture and reuse speed continuous improvement and scale-out.

FAQs

When is ROI typically realized?

ROI is typically realized within one to three months, driven by time saved in data handling and analysis, fewer physical experiments, and faster decision-making. Commercial terms (subscription vs. hosting) can shift timelines, but savings often start immediately.

How does automated contextualization help with heterogeneous ATMP data?

Automated contextualization rapidly aligns heterogeneous sources and detects process phases without manual intervention, enabling faster, more accurate analysis and reducing the errors and delays common to Excel-based workflows.

When should teams start applying this approach?

As early as R&D. Early foundations yield richer datasets, stronger process understanding, and smoother characterization and validation, accelerating later phases.

How do digital twins reduce physical experimentation?

They simulate end-to-end process behavior and show how variation impacts quality, supporting science-based limits, fewer experimental runs, and lower risk during scale-up and tech transfer.
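As a toy illustration of how Monte Carlo simulation propagates CPP variability into a quality attribute and yields science-based limits — the linear CPP-to-CQA model, setpoints, and variability figures below are invented for the example, not taken from the webinar:

```python
# Toy Monte Carlo sketch: propagate variability in two hypothetical
# CPPs (temperature, pH) through an assumed linear model of one CQA,
# then read off percentile-based limits. All numbers are illustrative.
import random

random.seed(42)  # reproducible draws for the example

def simulated_cqa(temp_c, ph):
    """Assumed linear CPP -> CQA relationship (illustrative only)."""
    return 50.0 + 2.0 * (temp_c - 37.0) - 5.0 * (ph - 7.0)

# Sample CPPs around their setpoints with assumed normal variability.
samples = [
    simulated_cqa(random.gauss(37.0, 0.3), random.gauss(7.0, 0.05))
    for _ in range(10_000)
]

# Science-based limits: e.g. the central 95% of simulated outcomes,
# rather than an arbitrary statistical rule.
samples.sort()
lower = samples[int(0.025 * len(samples))]
upper = samples[int(0.975 * len(samples))]
```

In a real workflow the toy linear function would be replaced by the integrated process model spanning unit operations, but the mechanism — sample inputs, propagate, read off quantiles — is the same.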

How should teams begin building process models?

Begin with simple data-driven correlations, then iteratively layer in domain knowledge and mechanistic elements. This lifecycle approach builds capability without delaying today's decisions.
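That "correlation first, mechanism later" progression can be sketched in miniature. The data points, the least-squares fit, and the Arrhenius-style temperature factor below are all assumptions invented for illustration:

```python
# Minimal sketch of the "start data-driven, then layer mechanism"
# lifecycle. The data points, the temperature correction, and all
# constants are invented for illustration.
from math import exp

# Step 1: purely data-driven correlation -- least squares on
# hypothetical historical feed-rate vs. titer pairs.
feed = [1.0, 1.5, 2.0, 2.5, 3.0]
titer = [2.1, 3.0, 4.2, 4.9, 6.1]
n = len(feed)
mx, my = sum(feed) / n, sum(titer) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(feed, titer))
         / sum((x - mx) ** 2 for x in feed))
intercept = my - slope * mx

def predict_data_driven(feed_rate):
    """First-pass model: correlation only."""
    return intercept + slope * feed_rate

# Step 2: layer in a mechanistic element without discarding the
# correlation -- here an assumed Arrhenius-style temperature factor.
def predict_hybrid(feed_rate, temp_c, ref_temp_c=37.0):
    factor = exp(0.06 * (temp_c - ref_temp_c))  # assumed kinetics
    return predict_data_driven(feed_rate) * factor
```

The hybrid model reduces to the data-driven one at the reference temperature, so each iteration refines rather than replaces the previous decision basis — the lifecycle property the answer above describes.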
