Atomscale monitors your process data for anomalies using learned embeddings to catch subtle patterns across instruments, correlate anomalies across data sources, and predict developing problems. Depending on your integration level, the platform can recommend or execute corrective actions.

What Gets Detected

Anomaly detection covers three general categories:
  • Unusual patterns in individual timeseries. The system encodes timeseries windows into a learned representation using a timeseries model fine-tuned on process data. Outliers in this latent space correspond to patterns that deviate from what the model has learned as normal for your process. This captures both sudden anomalies and gradual drift, and because detection operates on learned embeddings rather than fixed thresholds, it catches subtle structural changes that raw value comparisons miss.
  • Recipe-data drift. For metrology sources, measured values are compared against the expected recipe setpoints.
  • Correlated and predicted anomalies. Related anomalies are correlated across data sources, and forecasts flag problems that are still developing.
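
To make the latent-space idea concrete, here is a minimal sketch of embedding-based outlier scoring. It assumes a window has already been encoded; the Mahalanobis distance, the 64-dimensional embedding, and the threshold of 3.0 are illustrative assumptions, not Atomscale's actual model or calibrated thresholds.

```python
# Illustrative sketch, not Atomscale's implementation: scoring a window
# embedding as an outlier in the learned latent space. The encoder output,
# baseline statistics, and threshold below are hypothetical placeholders.
import numpy as np

def anomaly_score(window_embedding: np.ndarray,
                  baseline_mean: np.ndarray,
                  baseline_cov_inv: np.ndarray) -> float:
    """Mahalanobis distance of one window embedding from the learned baseline."""
    delta = window_embedding - baseline_mean
    return float(np.sqrt(delta @ baseline_cov_inv @ delta))

# A window whose embedding sits far from the baseline distribution is flagged,
# even if every raw value stays inside fixed limits.
score = anomaly_score(np.zeros(64), np.zeros(64), np.eye(64))
is_anomalous = score > 3.0  # hypothetical threshold
```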

Viewing Anomalies

On Timeseries Charts

Anomalies appear directly on timeseries plots:
Anomaly Type        Visual Indicator
Point anomaly       Marker at the detected timestamp
Drift anomaly       Highlighted region spanning the detection window
Correlated group    Highlighted region spanning the detection window
Forecast anomaly    Dashed region in the projected portion of the timeseries
Markers are color-coded by severity: Warning and Error. Click any marker to see the anomaly type, classification label, score, and detection metadata. The growth quality score displays as a running overlay on the monitoring dashboard.

Anomaly Summary

Each run includes an anomaly summary with:
  • Counts by type and severity
  • Growth quality score for the run
  • Anomaly classification labels
  • Correlated anomaly groups
Zero errors with a few warnings and a high quality score likely means a normal run. Multiple errors or a low quality score warrants investigation.
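
As a rough illustration of what a summary contains, the dictionary below mirrors the bullet list above. The field names and values are assumptions chosen for readability, not the documented schema.

```python
# Hypothetical shape of a per-run anomaly summary; field names are assumptions.
run_summary = {
    "counts": {
        "point":             {"warning": 2, "error": 0},
        "drift":             {"warning": 1, "error": 0},
        "recipe_data_drift": {"warning": 0, "error": 0},
    },
    "growth_quality_score": 0.94,          # high score, zero errors: likely normal
    "classification_labels": ["example_root_cause_category"],
    "correlated_groups": [],
}
```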

Filtering and Querying

You can filter anomalies by:
Filter                Options
Data source           Any data source (e.g., characterization or tool state)
Anomaly type          Point anomaly, drift, recipe-data drift, predicted anomaly
Classification        Root cause category
Severity              Warning or error
Correlated groups     Only anomalies correlated across sources
Time range            Before or after a given time
Property              Specific timeseries property (e.g., pyrometer channel, RHEED spot spacing)
Quality score impact  How much the anomaly affected the growth quality score
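
If you export anomalies for offline analysis, the same filters can be reproduced with ordinary dataframe operations. The sketch below assumes a hypothetical export with the column names shown; the platform UI exposes these filters directly.

```python
# Illustrative only: filtering an exported anomaly table with pandas.
# Column names and values are assumptions, not a documented export format.
import pandas as pd

anomalies = pd.DataFrame([
    {"source": "tool_state", "type": "drift", "severity": "error",
     "property": "pyrometer_channel_1", "correlated_group": 3,
     "quality_score_impact": -0.12},
])

flagged = anomalies[
    (anomalies["severity"] == "error")
    & (anomalies["type"].isin(["drift", "recipe_data_drift"]))
    & (anomalies["quality_score_impact"] < -0.05)
]
```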

Real-Time Alerts

During streaming runs, anomaly detections are delivered in real time to any of the following channels:

Platform

Alerts appear in the monitoring dashboard and notification panel during active runs.

Email

Email notifications include anomaly details and a link to the affected growth session.

Slack

Route alerts as messages to Slack channels.

Custom

Configure custom alerting channels.
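
For a custom channel, one common pattern is forwarding each alert as JSON to an endpoint you operate. The function below is a minimal sketch under that assumption; the payload fields and the webhook URL are hypothetical, not a documented Atomscale interface.

```python
# Minimal sketch of a custom alert forwarder; fields and URL are hypothetical.
import json
import urllib.request

def forward_alert(alert: dict, webhook_url: str) -> None:
    body = json.dumps({
        "severity": alert.get("severity"),
        "anomaly_type": alert.get("type"),
        "property": alert.get("property"),
        "run_url": alert.get("run_url"),
    }).encode("utf-8")
    request = urllib.request.Request(
        webhook_url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request, timeout=5)
```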

Per-Project Thresholds

Each project can override detection thresholds. Tighten sensitivity for critical parameters or relax thresholds for parameters with known noise. Configuration is available under project settings. Default thresholds are learned baselines calibrated from your organization’s historical data. Per-project overrides build on top of these defaults.
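
Conceptually, an override adjusts sensitivity relative to the learned baseline rather than replacing it. The snippet below is only a mental model, assuming a per-parameter multiplier; the real configuration lives under project settings and may use different knobs.

```python
# Mental model only: per-project sensitivity multipliers applied on top of
# learned baseline thresholds. Parameter names and values are assumptions.
project_threshold_overrides = {
    "substrate_temperature": 0.8,  # tighter: flag smaller deviations
    "chamber_pressure": 1.5,       # relaxed: tolerate known sensor noise
}

def effective_threshold(parameter: str, baseline_threshold: float) -> float:
    return baseline_threshold * project_threshold_overrides.get(parameter, 1.0)
```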

Responding to Anomalies

During an Active Run

When an anomaly is detected during a live run, Atomscale surfaces it alongside a recommended corrective action.
1. Review the alert and recommended action
Check which parameter triggered the detection, along with its severity, classification, and detection window. The system shows a recommended corrective action with its expected effect and confidence level.
2. Check the safety envelope
Review the safety envelope: the hard constraints for affected parameters that no corrective action can exceed. The recommended action always stays within these bounds (a sketch of this idea follows these steps).
3. Approve, modify, or dismiss
Accept the recommended action, modify its parameters, or dismiss it. Approved or modified actions take effect according to your integration level (e.g., assist or control). Dismissed anomalies are logged as acknowledged, with no intervention.
4. Monitor the intervention
Track actions and outcomes in the intervention history log. The log records the detection, the action taken, and the subsequent parameter behavior, which feeds back to improve future recommendations.
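
The safety envelope referenced in step 2 can be pictured as hard per-parameter bounds that any corrective action is clipped against before it can be applied. The sketch below illustrates that idea only; it is not the platform's implementation, and the parameter name and limits are hypothetical.

```python
# Illustration of the safety-envelope idea; parameter names and bounds are
# hypothetical, and this is not Atomscale's implementation.
SAFETY_ENVELOPE = {
    "substrate_temperature_setpoint": (540.0, 580.0),  # hard limits, degrees C
}

def clamp_to_envelope(action: dict) -> dict:
    """Keep every proposed setpoint inside its hard bounds."""
    bounded = {}
    for parameter, target in action.items():
        low, high = SAFETY_ENVELOPE[parameter]
        bounded[parameter] = min(max(target, low), high)
    return bounded

# A recommendation that would overshoot the envelope is held at the bound.
clamp_to_envelope({"substrate_temperature_setpoint": 584.0})
# -> {'substrate_temperature_setpoint': 580.0}
```
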
For an overview of the progression from monitoring to autonomous control, see Integrating Process Control.

After Historical Data Upload

For historical data, anomaly detection provides diagnostic value. Review the anomaly timeline against your process log and ex-situ characterization to see whether detected anomalies correlate with quality outcomes. Confirmed and dismissed anomalies feed back into the classification model, and anomaly-outcome pairs contribute to growth quality score training.

How Anomaly Detection Works

Detection Process

As timeseries data arrives, characterization workflows extract features and embed them into the model’s latent space. Outlier detection scores each embedding against learned baselines, then correlates related anomalies across instruments. A classification model assigns root cause categories, forecasts predicted anomalies, and routes to alert channels based on severity.
For more details, see the Anomaly Detection - Technical Reference.
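
As orientation, the ordering of those stages can be sketched as follows. Every function here is a stand-in stub, not an Atomscale API; only the sequencing mirrors the description above.

```python
# Orientation-only sketch of the stage ordering; every function is a stand-in
# stub, not an Atomscale API.
def extract_and_embed(chunk):
    return [0.0] * 8                      # feature extraction + latent embedding

def score_against_baselines(embedding):
    return 0.4                            # outlier score vs. learned baselines

def correlate_across_sources(anomalies):
    return anomalies                      # group related anomalies across instruments

def classify_forecast_and_route(anomalies):
    return [dict(a, label="example_root_cause") for a in anomalies]

def process_chunk(chunk, score_threshold=3.0):
    embedding = extract_and_embed(chunk)
    score = score_against_baselines(embedding)
    anomalies = [{"score": score}] if score > score_threshold else []
    return classify_forecast_and_route(correlate_across_sources(anomalies))

process_chunk([570.1, 570.3, 570.2])      # -> [] for an unremarkable chunk
```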

Batch Processing

When you upload historical data, anomaly detection runs automatically after the analysis pipeline completes. Each timeseries is evaluated independently, then cross-source correlation runs after all sources are processed. Anomalies appear in the UI once processing finishes.

Live Streaming

During a live run, anomaly detection runs incrementally with each incoming data chunk. The system maintains a rolling detection context across chunks to detect both sudden anomalies and emerging trends. Forecasts and the growth quality score update with each chunk. Error-level anomalies trigger real-time alerts.
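
A rolling detection context can be pictured as a bounded buffer of recent windows handed to the detector along with each new chunk. The class below is a minimal sketch of that idea under assumed names; it is not the platform's implementation.

```python
# Minimal sketch (assumed, not the platform's code) of a rolling detection
# context: a bounded buffer of recent windows kept across streaming chunks.
from collections import deque

class RollingContext:
    def __init__(self, max_windows: int = 50):
        self.windows = deque(maxlen=max_windows)  # oldest windows drop off

    def update(self, chunk_values):
        self.windows.append(list(chunk_values))

    def history(self):
        # Recent history handed to the detector alongside each new chunk,
        # so slow trends remain visible as well as sudden spikes.
        return [value for window in self.windows for value in window]

context = RollingContext()
context.update([570.1, 570.3, 570.2])  # e.g. a chunk of temperature readings
recent_values = context.history()
```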

Severity Levels

Warning: Score is elevated but within tolerance, or the quality score impact is minor; may self-correct. Review after the run, or sooner if warnings accumulate.
Error: Score exceeds the critical threshold, or the quality score has dropped significantly. Investigate immediately; triggers real-time alerts in streaming runs.
An anomaly that significantly drops the quality prediction is more likely to be assigned Error-level, even if its individual score is moderate.
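
Put as a rule of thumb, severity combines the anomaly's own score with its effect on the quality prediction. The function below sketches that combination; the threshold values and names are assumptions, not Atomscale's calibrated settings.

```python
# Rule-of-thumb sketch of severity assignment; threshold values and names are
# assumptions, not Atomscale's calibrated settings.
def assign_severity(anomaly_score: float, quality_score_drop: float,
                    critical_score: float = 3.0,
                    significant_drop: float = 0.10) -> str:
    if anomaly_score >= critical_score or quality_score_drop >= significant_drop:
        return "error"
    return "warning"

# A moderate score can still be Error-level if it drags the quality prediction down.
assign_severity(anomaly_score=2.1, quality_score_drop=0.15)  # -> "error"
```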

Thresholds and Sensitivity

The system uses learned baselines calibrated from your organization’s historical data, adapted to the patterns of each data source, recipe, and instrument combination. Per-project threshold overrides let you adjust sensitivity: relax thresholds for parameters with natural variation, or tighten them for critical parameters.

Next Steps