Two Approaches to Diagnosis
- Growth Monitoring (Detailed)
- Global Similarity (Broad)
Compare a specific run against chosen reference samples.

Growth Monitoring is for when you have specific reference samples to compare against: known good runs, known bad runs, or runs representing particular outcomes. It shows time-resolved detail on how similarity evolves, which metrics diverge and when, and what the tool was doing at the moment of divergence. Use this when you need to understand exactly when and how a run departs from expected behavior.

What you see:
- Similarity trajectory showing how the tracked sample’s process fingerprint evolves relative to each reference over time
- Growth metrics compared side-by-side between tracked and reference samples as time series
- Tool state and instrument parameter logs providing context for any observed divergence
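The similarity trajectory described above can be thought of as a rolling comparison between the tracked sample’s recent fingerprint and each reference. A minimal sketch (the cosine metric, the windowed mean, and all function names here are illustrative assumptions, not the product’s actual fingerprint math):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length metric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def similarity_trajectory(tracked, reference, window=3):
    """At each time step, average the last `window` metric vectors of the
    tracked run into a fingerprint and score it against the reference."""
    traj = []
    for t in range(window, len(tracked) + 1):
        cols = zip(*tracked[t - window:t])
        fingerprint = [sum(c) / window for c in cols]
        traj.append(cosine_similarity(fingerprint, reference))
    return traj
```

A flat trajectory near 1.0 means the tracked run is holding close to that reference; a falling curve marks the onset of divergence.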
Setting Up
Configure reference samples
Add reference samples that represent known outcomes. Assign each reference a categorical label
or a continuous value, depending on whether you’re monitoring for pass/fail or tracking a
specific physical property.
Add the run to diagnose
Add the run you’re investigating as a tracked sample. For a live run, stream data directly. For
a completed run, add the existing sample to the project.
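Conceptually, the setup above pairs each reference sample with either a categorical label or a continuous value. A minimal sketch of that structure (all names here are hypothetical, not the tool’s API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReferenceSample:
    name: str
    label: Optional[str] = None    # categorical outcome, e.g. "good" / "bad"
    value: Optional[float] = None  # continuous outcome, e.g. a measured property

# Known-outcome references the tracked run will be compared against.
references = [
    ReferenceSample("known-good-run", label="good"),
    ReferenceSample("known-bad-run", label="bad"),
    ReferenceSample("property-run", value=3.7),
]
```

Use labels when monitoring for pass/fail, values when tracking a specific physical property.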
What to Look For
Both approaches follow the same logic: identify where the run diverges, assess how much, and connect that to what it means for your outcome.

Identifying Divergence
In Growth Monitoring, watch the similarity trajectory for inflection points where the tracked sample’s fingerprint shifts away from the reference cluster. The synchronized tool state view shows whether a specific instrument event coincides with the divergence, and the growth metrics view shows which derived properties are changing and by how much. In Global Similarity, look at where the problem run sits relative to runs with known outcomes. A run that clusters with failures but used a “good” recipe suggests the issue is in execution, not design. A run between clusters may indicate a borderline process condition.

Assessing Significance
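Not every dip in similarity matters equally: one way to separate a brief, self-correcting excursion from sustained drift is a run-length check on the similarity trajectory. A minimal sketch (the similarity threshold and duration cutoff are illustrative assumptions, not product defaults):

```python
def classify_deviations(similarity, threshold=0.8, sustain=5):
    """Label each dip below `threshold` as an 'excursion' if it
    self-corrects within `sustain` steps, else as 'drift'.
    Returns (start, end, kind) tuples of trajectory indices."""
    events, start = [], None
    for i, s in enumerate(similarity):
        if s < threshold and start is None:
            start = i                      # deviation begins
        elif s >= threshold and start is not None:
            kind = "excursion" if i - start < sustain else "drift"
            events.append((start, i, kind))
            start = None                   # deviation recovered
    if start is not None:                  # still deviating at end of run
        kind = "excursion" if len(similarity) - start < sustain else "drift"
        events.append((start, len(similarity), kind))
    return events
```

A drift event that persists to the end of the trajectory is exactly the sustained pattern worth cross-referencing against anomaly detection results.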
Not every difference matters. Growth Monitoring shows the magnitude of divergence over time: a brief excursion that self-corrects is different from a sustained drift. Global Similarity scores quantify how far a run sits from its nearest neighbors, which you can compare against the natural spread of your process. Cross-reference with anomaly detection results: if a run looks anomalous in similarity analysis and also triggered drift or point anomalies, that strengthens the case that the deviation is real.

Connecting to Outcomes
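Because runs that sit close together in embedding space tend to share outcomes, a nearest-neighbor lookup against labeled history is a natural diagnostic. A minimal sketch (Euclidean distance, plain tuples, and k=3 are illustrative assumptions, not the product’s embedding math):

```python
import math

def nearest_outcomes(query, labeled_runs, k=3):
    """Return the k labeled runs closest to `query` in embedding space,
    as (distance, label) pairs sorted nearest-first."""
    scored = []
    for embedding, label in labeled_runs:
        dist = math.sqrt(sum((q - e) ** 2 for q, e in zip(query, embedding)))
        scored.append((dist, label))
    scored.sort(key=lambda pair: pair[0])
    return scored[:k]
```

A problem run whose nearest neighbors are all labeled failures, despite using a good recipe, points at execution rather than design.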
The diagnostic value comes from linking process divergence to outcome differences. When Growth Monitoring shows a run diverged from references during a specific phase, check whether that phase historically correlates with the outcome property that degraded. When Global Similarity shows a run clustering with failures, examine what those failures have in common. Runs that are “close” in embedding space tend to produce similar outcomes, making the distance itself a meaningful diagnostic signal.

Choosing the Right Approach
| Scenario | Approach | Why |
|---|---|---|
| You have specific reference runs to compare against | Growth Monitoring | Time-resolved detail on exactly how and when the run diverges from chosen references |
| You want to find what a problem run is most similar to | Global Similarity | Searches your full dataset without pre-selecting references |
| Active run drifting from expectation | Growth Monitoring | Real-time granularity lets you decide whether to intervene before the run completes |
| Process transfer to a new tool | Global Similarity | Shows how new-tool behavior compares to original-tool signatures across your history |
| Recurring intermittent failures | Both | Global Similarity identifies what failing runs share; Growth Monitoring catches the pattern with full detail |
Documenting Findings
When you identify the source of divergence, record it:
- Tag the run with descriptive labels (e.g., “temperature-drift”, “flux-excursion”) so it’s findable in future similarity analyses.
- Add notes documenting which parameters diverged, your hypothesis, and any corrective action taken.
- Update your reference set with more informative growth runs (good or bad) over time to improve future comparisons.
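Tags pay off when they can be queried later. A minimal sketch of tag-based retrieval (the run records and tag names are hypothetical):

```python
runs = [
    {"id": "run-101", "tags": {"temperature-drift"}},
    {"id": "run-102", "tags": {"flux-excursion", "temperature-drift"}},
    {"id": "run-103", "tags": set()},
]

def runs_with_tag(runs, tag):
    """Return the ids of runs carrying a given diagnostic tag."""
    return [r["id"] for r in runs if tag in r["tags"]]
```

Consistent tag vocabulary is what makes a future query like “every run with a flux excursion” answerable at all.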