When a run produces unexpected results, the first question is always: what’s different? Atomscale provides two complementary approaches to answer this, depending on the level of detail you need.

Two Approaches to Diagnosis

Growth Monitoring compares a specific run against chosen reference samples: known good runs, known bad runs, or runs representing particular outcomes. It shows time-resolved detail on how similarity evolves, which metrics diverge and when, and what the tool was doing at the moment of divergence. Use it when you need to understand exactly when and how a run departs from expected behavior. What you see:
  • Similarity trajectory showing how the tracked sample’s process fingerprint evolves relative to each reference over time
  • Growth metrics compared side by side between tracked and reference samples as time series
  • Tool state: instrument parameter logs providing context for any observed divergence

Global Similarity compares a run against your full dataset without pre-selecting references, showing where it sits relative to runs with known outcomes across your history. Use it when you want to find what a problem run is most similar to.

Setting Up

1. Create a Growth Monitoring project
   On the Project page, create a new Growth Monitoring project.

2. Configure reference samples
   Add reference samples that represent known outcomes. Assign each reference a categorical label or a continuous value, depending on whether you’re monitoring for pass/fail or tracking a specific physical property.

3. Add the run to diagnose
   Add the run you’re investigating as a tracked sample. For a live run, stream data directly. For a completed run, add the existing sample to the project. (A configuration sketch follows these steps.)

4. Review the monitoring dashboard
   The dashboard displays the similarity trajectory, growth metrics, and tool state views synchronized in real time. Look for inflection points where the tracked sample diverges from references.
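
If you prefer to script the setup rather than use the Project page, the sketch below mirrors steps 1–3. It is a hypothetical sketch only: the AtomscaleClient class and the create_project, add_reference, and add_tracked_sample calls are illustrative assumptions, not Atomscale’s documented API.

```python
# Hypothetical sketch: names below are assumptions used to illustrate the steps above.
from atomscale import AtomscaleClient  # hypothetical client package

client = AtomscaleClient(api_key="YOUR_API_KEY")

# Step 1: create a Growth Monitoring project
project = client.create_project(name="diagnose-run-0421", kind="growth_monitoring")

# Step 2: configure reference samples with categorical labels or continuous values
project.add_reference(sample_id="run-0389", label="known-good")
project.add_reference(sample_id="run-0397", label="known-bad")
project.add_reference(sample_id="run-0402", value=2.1e10)  # e.g. a measured defect density

# Step 3: add the run under investigation as a tracked sample
# (stream a live run, or attach a completed sample by ID)
project.add_tracked_sample(sample_id="run-0421")
```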

What to Look For

Both approaches follow the same logic: identify where the run diverges, assess how much, and connect that to what it means for your outcome.

Identifying Divergence

In Growth Monitoring, watch the similarity trajectory for inflection points where the tracked sample’s fingerprint shifts away from the reference cluster. The synchronized tool state view shows whether a specific instrument event coincides with the divergence, and the growth metrics view shows which derived properties are changing and by how much. In Global Similarity, look at where the problem run sits relative to runs with known outcomes. A run that clusters with failures but used a “good” recipe suggests the issue is in execution, not design. A run between clusters may indicate a borderline process condition.
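
If the similarity trajectory can be exported for offline analysis, a simple baseline-and-persistence check puts a timestamp on sustained divergence. A minimal sketch, assuming the trajectory is available as NumPy arrays on a common time axis; the baseline window, 3-sigma threshold, and persistence count are illustrative choices, not Atomscale defaults.

```python
import numpy as np

def find_divergence(t, similarity, baseline_window=50, n_sigma=3.0, persist=5):
    """First time the similarity stays n_sigma below its early-run baseline for `persist` samples."""
    baseline = similarity[:baseline_window]
    threshold = baseline.mean() - n_sigma * baseline.std()
    run = 0
    for i, below in enumerate(similarity < threshold):
        run = run + 1 if below else 0
        if run >= persist:                      # sustained shift, not a one-point excursion
            return t[i - persist + 1], threshold
    return None, threshold

# Synthetic example: similarity holds steady, then drifts away after t = 600 s.
rng = np.random.default_rng(0)
t = np.arange(0, 1200, 2.0)                     # seconds
sim = 0.95 + 0.01 * rng.standard_normal(t.size)
sim[t > 600] -= np.linspace(0.0, 0.2, (t > 600).sum())

t_div, thr = find_divergence(t, sim)
print(f"sustained divergence begins near t = {t_div:.0f} s (threshold {thr:.3f})")
```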

Assessing Significance

Not every difference matters. Growth Monitoring shows the magnitude of divergence over time: a brief excursion that self-corrects is different from a sustained drift. Global Similarity scores quantify how far a run sits from its nearest neighbors, which you can compare against the natural spread of your process. Cross-reference with anomaly detection results: if a run looks anomalous in similarity analysis and also triggered drift or point anomalies, that strengthens the case that the deviation is real.
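
One way to quantify how much divergence matters is to compare the problem run’s distance against the natural spread of distances among runs you already trust. A minimal sketch, assuming pairwise embedding distances can be exported; the numbers are illustrative only.

```python
import numpy as np

def significance(problem_distance, good_to_good_distances):
    """Rank the problem run's distance against the spread of distances among trusted runs."""
    d = np.asarray(good_to_good_distances, dtype=float)
    pct = (d < problem_distance).mean() * 100.0      # % of good-good pairs closer than this run
    z = (problem_distance - d.mean()) / d.std()      # rough z-score against the natural spread
    return pct, z

# Illustrative numbers: distances among known-good runs vs. the problem run's distance.
good_spread = np.array([0.12, 0.15, 0.11, 0.18, 0.14, 0.16, 0.13])
pct, z = significance(0.31, good_spread)
print(f"problem run is farther than {pct:.0f}% of good-good pairs (z = {z:.1f})")
```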

Connecting to Outcomes

The diagnostic value comes from linking process divergence to outcome differences. When Growth Monitoring shows a run diverged from references during a specific phase, check whether that phase historically correlates with the outcome property that degraded. When Global Similarity shows a run clustering with failures, examine what those failures have in common. Runs that are “close” in embedding space tend to produce similar outcomes, making the distance itself a meaningful diagnostic signal.
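
To make the last point concrete: with exported embeddings and recorded outcomes, you can list the problem run’s nearest historical neighbors and inspect what they have in common. A minimal sketch with assumed data shapes; none of it is a documented Atomscale API.

```python
import numpy as np

def nearest_outcomes(problem_vec, history_vecs, outcomes, k=5):
    """Return the recorded outcomes of the k historical runs closest to the problem run."""
    dists = np.linalg.norm(history_vecs - problem_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    return [(outcomes[i], float(dists[i])) for i in nearest]

# Illustrative data: 2-D embeddings and outcome labels for six historical runs.
history = np.array([[0.10, 0.20], [0.20, 0.10], [0.90, 0.80],
                    [1.00, 0.90], [0.85, 0.95], [0.15, 0.25]])
labels = ["pass", "pass", "fail:temp-drift", "fail:temp-drift", "fail:flux", "pass"]

print(nearest_outcomes(np.array([0.92, 0.88]), history, labels, k=3))
```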

Choosing the Right Approach

  • You have specific reference runs to compare against: Growth Monitoring, for time-resolved detail on exactly how and when the run diverges from chosen references.
  • You want to find what a problem run is most similar to: Global Similarity, which searches your full dataset without pre-selecting references.
  • Active run drifting from expectation: Growth Monitoring, whose real-time granularity lets you decide whether to intervene before the run completes.
  • Process transfer to a new tool: Global Similarity, which shows how new-tool behavior compares to original-tool signatures across your history.
  • Recurring intermittent failures: both; Global Similarity identifies what failing runs share, and Growth Monitoring catches the pattern with full detail.

Documenting Findings

When you identify the source of divergence, record it:
  • Tag the run with descriptive labels (e.g., “temperature-drift”, “flux-excursion”) so it’s findable in future similarity analyses.
  • Add notes documenting which parameters diverged, your hypothesis, and any corrective action taken.
  • Update your reference set with more informative growth runs (good or bad) over time to improve future comparisons.
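
If your setup allows scripting, the same record can be written programmatically. The calls below (get_project, get_sample, add_tags, add_note, add_reference) are hypothetical, continuing the assumed client from the setup sketch, and the note text is only an example of the level of detail worth capturing.

```python
# Hypothetical sketch: illustrative names, not Atomscale's documented API.
from atomscale import AtomscaleClient  # hypothetical client package

client = AtomscaleClient(api_key="YOUR_API_KEY")
project = client.get_project("diagnose-run-0421")
tracked = project.get_sample("run-0421")

# Descriptive tags make the run findable in future similarity analyses.
tracked.add_tags(["temperature-drift"])

# A short note recording what diverged, the hypothesis, and the corrective action.
tracked.add_note(
    "Similarity to known-good references fell past baseline mid-run; growth metrics "
    "and tool state point to a substrate temperature drift. Thermocouple recalibrated "
    "before the next run."
)

# Promote informative runs (good or bad) into the reference set over time.
project.add_reference(sample_id="run-0421", label="known-bad")
```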

Next Steps