- How is this run the same as, or different from, previous runs? Compare any run against your full process history using learned similarity embeddings, instead of parameter-by-parameter matching.
- How uniform is this run internally? Assess consistency within a single run, from whole-recipe sequences down to individual atomic layers and segment-to-segment variation.
Exploring Run Similarity
Comparing successful and failed runs is one of the most challenging tasks in process engineering. Run-to-run similarity uses learned embeddings to compare runs across their full process signatures, surfacing relationships that individual parameters wouldn’t reveal.
How It Works
Select a workflow and metric
On the Explore Similarity page, choose a metric, transformation, and time-scale
window for the similarity calculation.
These settings control what aspects of the process signature are used for comparison.
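As a rough mental model, these settings act as a preprocessing step over the raw metric series before runs are compared. The sketch below is illustrative only; the function name, options, and normalization are assumptions, not Atomscale’s actual API or algorithm.

```python
import numpy as np

# Hypothetical illustration of how a metric series might be prepared before
# comparison. Names and options are assumptions, not Atomscale's API.
def prepare_signature(series: np.ndarray, transform: str = "log",
                      window: tuple[float, float] = (0.0, 60.0),
                      sample_rate_hz: float = 1.0) -> np.ndarray:
    """Apply a transformation and restrict to a time-scale window (seconds)."""
    start, stop = (int(t * sample_rate_hz) for t in window)
    windowed = series[start:stop]
    if transform == "log":
        # Log transform to stabilize dynamic range (assumed option).
        windowed = np.log1p(np.clip(windowed, 0.0, None))
    # Normalize so comparisons emphasize shape rather than absolute scale.
    return (windowed - windowed.mean()) / (windowed.std() + 1e-9)
```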
Explore the similarity map
A 2D scatter plot shows your data items positioned by similarity. Runs that were similar cluster
together, while runs that diverged are far apart. Points are colored by the selected metric
value.
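Conceptually, the map is a 2D projection of high-dimensional run embeddings, with color carrying the metric. The sketch below uses PCA and random data purely as stand-ins; Atomscale’s embeddings and projection method are not specified here, so every name and choice in it is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Stand-ins for learned per-run embeddings and the selected metric values.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(40, 64))      # 40 runs, 64-dim embeddings (made up)
metric_values = rng.uniform(0, 1, size=40)  # e.g., a growth-rate metric (made up)

# Project to 2D so that similar runs land near one another.
coords = PCA(n_components=2).fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], c=metric_values, cmap="viridis")
plt.colorbar(label="selected metric")
plt.title("Run similarity map (sketch)")
plt.show()
```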
When to Use This
Run-to-run similarity is useful when you need to diagnose an unexpected result. Instead of manually comparing numerous parameter logs, you can start from a problem run and immediately find its closest matches, both good and bad, to isolate what changed. It’s also valuable for process transfer: when bringing up a recipe on a new tool, similarity analysis shows you how closely the new tool’s behavior matches the original across the full process fingerprint rather than just a handful of setpoints.
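The closest-match lookup can be pictured as a nearest-neighbor query over run embeddings. A minimal sketch, assuming cosine similarity (the distance Atomscale actually uses isn’t specified here):

```python
import numpy as np

# Sketch of "start from a problem run and find its closest matches" as a
# top-k nearest-neighbor query over run embeddings. Names are illustrative.
def top_k_similar(query: np.ndarray, embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k runs whose embeddings are closest to `query`."""
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
    sims = embeddings @ query / (norms + 1e-12)  # cosine similarity per run
    return np.argsort(sims)[::-1][:k]            # highest similarity first
```

Growth Monitoring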
During a live run, Growth Monitoring tracks how the current sample evolves in real time relative to reference samples. This makes analysis actionable: you watch the process unfold as it happens and compare it against outcomes you care about.
Setting Up
Configure reference samples
Add reference samples that represent known outcomes. Assign each reference a categorical label
or a continuous value, depending on whether you’re monitoring for pass/fail or tracking a
specific physical property.
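One way to picture a reference set is as records that carry either a categorical label or a continuous value. The dataclass below is purely illustrative; field names and types are assumptions, not Atomscale’s data model.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a reference sample carrying either a categorical label
# (pass/fail monitoring) or a continuous value (a physical property).
@dataclass
class ReferenceSample:
    sample_id: str
    label: Optional[str] = None    # e.g., "pass" or "fail"
    value: Optional[float] = None  # e.g., roughness in nm

# Typically a project would use one kind consistently; both shown for shape.
references = [
    ReferenceSample("run-0412", label="pass"),
    ReferenceSample("run-0398", label="fail"),
]
```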
Add a tracked sample
Stream live data or add an existing sample to the project as a tracked sample. The tracked
sample is compared against the reference samples to show how the growth evolves over time.
Reading the Dashboard
The monitoring dashboard displays three synchronized views in real time:
- Similarity Trajectory: How the tracked sample’s process fingerprint evolves relative to the reference samples over time. This shows you where the current growth is heading holistically.
- Growth Metrics: Time series comparison of derived metrics between the tracked and reference samples. This shows you the specific metrics that are changing.
- Tool State: Instrument parameter logs during the growth, providing context for any changes in the similarity trajectory. This shows you how the tool is affecting the growth metrics and overall trajectory.
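A similarity trajectory can be thought of as a per-timestep comparison between the tracked sample’s fingerprint so far and a reference’s fingerprint at the same point in its growth. The sketch below assumes cosine similarity over feature matrices; Atomscale’s actual computation may differ.

```python
import numpy as np

# Sketch of a similarity trajectory between one tracked sample and one
# reference. The cosine-similarity choice is an assumption.
def similarity_trajectory(tracked: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """tracked, reference: (timesteps, features). Returns one score per step."""
    steps = min(len(tracked), len(reference))
    out = np.empty(steps)
    for t in range(steps):
        # Compare everything observed up to and including timestep t.
        a, b = tracked[: t + 1].ravel(), reference[: t + 1].ravel()
        out[t] = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return out
```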
Assess Within-Run Uniformity
Explore Similarity and Growth Monitoring can also be applied to segments of a single run to assess its internal consistency, identifying variation between segments, layers, or regions of the process timeline. This catches issues that run-to-run analysis alone can miss: a growth that looks acceptable overall but has early-stage drift, mid-run excursions, or layer-to-layer inconsistency that will affect device performance.
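As a sketch of the idea, you can split one run’s metric series into segments (for example, per layer) and score how far each segment sits from the run’s mean segment profile; spikes flag drift or excursions. The segmentation and distance below are illustrative assumptions, not Atomscale’s method.

```python
import numpy as np

# Sketch of within-run uniformity: per-segment deviation from the mean
# segment profile of the same run.
def segment_uniformity(series: np.ndarray, n_segments: int) -> np.ndarray:
    segments = np.array_split(series, n_segments)
    length = min(len(s) for s in segments)
    stack = np.stack([s[:length] for s in segments])     # (segments, time)
    mean_profile = stack.mean(axis=0)
    return np.linalg.norm(stack - mean_profile, axis=1)  # deviation per segment

deviation = segment_uniformity(np.sin(np.linspace(0, 20, 1000)), n_segments=10)
print(deviation.round(2))  # large values flag drifting or inconsistent segments
```

Visualize and Explore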
Whenever you open a data item, physical sample, or project, Atomscale displays interactive charts for the associated data. These charts show both raw tool parameters and Atomscale’s derived metrics, all synchronized on the same timeline.
- Series Selection
- Multi-Chart Layout
- Live Streaming
Open chart settings to choose which series to display. Series are organized by data stream
(RHEED, optical, metrology, etc.), then by series type (e.g., specular intensity, lattice
spacing), then by individual data item. Toggle series individually or in bulk.
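The hierarchy described above (data stream, then series type, then data item) can be pictured as nested groups. The snippet below only illustrates that shape; the stream names and item IDs are invented, not Atomscale identifiers.

```python
# Illustrative shape of a series selection: stream -> series type -> items.
selected_series = {
    "RHEED": {
        "specular intensity": ["item-001", "item-002"],
    },
    "metrology": {
        "lattice spacing": ["item-014"],
    },
}
# Toggling in bulk would correspond to selecting every item under a
# stream or series type at once.
```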