Once your data is connected to Atomscale, the next step is to analyze it. The Analyze step turns raw process data into actionable insights by answering two core questions:
  1. How is this run the same or different from previous runs? Compare any run against your full process history using learned similarity embeddings, instead of parameter-by-parameter matching.
  2. How uniform is this run internally? Assess consistency within a single run, from whole-recipe sequences down to individual atomic layers and segment-to-segment variation.
Everything in this section feeds directly into the Act step, where these metrics, comparisons, and similarity scores power real-time alerts, anomaly detection, and process control.

Exploring Run Similarity

Comparing successful and failed runs is one of the most challenging tasks in process engineering. Run-to-run similarity uses learned embeddings to compare runs across their full process signatures, surfacing relationships that individual parameters wouldn’t reveal.
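Under the hood, the idea is that each run's full process signature is compressed into an embedding vector, so similarity becomes a distance in that vector space. Here is a minimal Python sketch of the concept using cosine similarity; the vectors and the embedding model behind them are hypothetical stand-ins, not Atomscale's actual implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two run embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: each run's full process signature
# compressed into a fixed-length vector by a learned model.
run_a = np.array([0.12, -0.40, 0.88, 0.05])
run_b = np.array([0.10, -0.38, 0.91, 0.02])   # a near-identical run
run_c = np.array([-0.70, 0.55, -0.10, 0.60])  # a divergent run

print(cosine_similarity(run_a, run_b))  # close to 1.0 -> similar runs
print(cosine_similarity(run_a, run_c))  # much lower -> divergent runs
```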

How It Works

  1. Select a workflow and metric: On the Explore Similarity page, choose a metric, transformation, and time scale window for the similarity calculation. These settings control which aspects of the process signature are used for comparison.
  2. Explore the similarity map: A 2D scatter plot shows your data items positioned by similarity. Runs that behaved similarly cluster together, while runs that diverged sit far apart. Points are colored by the selected metric value.
  3. Investigate specific runs: Click any point and select View Similar Data to see ranked matches with similarity scores. Filter results by project, physical sample, or tags to focus your comparison. (A sketch of this projection and ranking follows these steps.)
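To make steps 2 and 3 concrete, here is a small sketch of the computations they correspond to, assuming per-run embeddings are already available. PCA stands in for whatever 2D projection the similarity map actually uses, and all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 16))   # 50 hypothetical run embeddings
metric = rng.uniform(size=50)            # metric values used for coloring

# Step 2: project embeddings to 2D for the similarity map.
# PCA via SVD here; the actual projection method is not specified.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T          # (50, 2) scatter-plot positions

# Step 3: rank all runs by cosine similarity to a selected run.
query = 7
norms = np.linalg.norm(embeddings, axis=1)
scores = embeddings @ embeddings[query] / (norms * norms[query])
ranked = np.argsort(scores)[::-1]        # most similar first; index 0 is the query itself
print(ranked[1:6], scores[ranked[1:6]])  # top 5 matches with similarity scores
```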

When to Use This

Run-to-run similarity is useful when you need to diagnose an unexpected result. Instead of manually comparing numerous parameter logs, you can start from a problem run and immediately find its closest matches, both good and bad, to isolate what changed. It’s also valuable for process transfer: when bringing up a recipe on a new tool, similarity analysis shows you how closely the new tool’s behavior matches the original across the full process fingerprint rather than just a handful of setpoints.

Growth Monitoring

During a live run, Growth Monitoring tracks how the current sample evolves in real time relative to reference samples. Rather than reviewing results after the fact, you watch the process unfold and compare it against outcomes you care about.

Setting Up

  1. Create a Growth Monitoring project: On the Project page, create a new Growth Monitoring project.
  2. Configure reference samples: Add reference samples that represent known outcomes. Assign each reference a categorical label or a continuous value, depending on whether you're monitoring for pass/fail or tracking a specific physical property (see the sketch after these steps).
  3. Add a tracked sample: Stream live data or add an existing sample to the project as a tracked sample. Atomscale compares it against the reference samples to see how it evolves over time.
  4. Start monitoring: Once configured, the monitoring dashboard appears automatically. Stream new growth data to begin live tracking.
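As a concrete picture of the configuration in step 2, here is an illustrative data model for reference and tracked samples. All names, fields, and IDs are hypothetical, not Atomscale's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReferenceSample:
    """One known-outcome run used as a comparison target.

    Set exactly one of `label` (categorical, e.g. pass/fail) or
    `value` (continuous, e.g. a measured film property), mirroring
    the two labeling modes described in step 2.
    """
    sample_id: str
    label: Optional[str] = None    # categorical outcome
    value: Optional[float] = None  # continuous physical property

# Hypothetical project setup: sample IDs and values are invented.
references = [
    ReferenceSample("run-0412", label="pass"),
    ReferenceSample("run-0398", label="fail"),
    ReferenceSample("run-0401", value=2.37),  # e.g. measured roughness (nm)
]
tracked_sample_id = "run-0457"  # the live run compared against the references
```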

Reading the Dashboard

The monitoring dashboard displays three synchronized views in real time:
  • Similarity Trajectory: How the tracked sample's process fingerprint evolves relative to the reference samples over time. This shows where the current growth is heading as a whole.
  • Growth Metrics: A time-series comparison of derived metrics between the tracked and reference samples. This shows which specific metrics are changing.
  • Tool State: Instrument parameter logs during the growth, providing context for changes in the similarity trajectory. This shows how the tool is affecting the growth metrics and overall trajectory.
During active streaming, all dashboard panels refresh automatically. Latency indicators show how current the displayed data is.
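A rough sketch of what a Similarity Trajectory computes, assuming each time window of the tracked run is embedded the same way as the references; all data and names here are hypothetical:

```python
import numpy as np

def trajectory(tracked_windows: np.ndarray,
               reference_embeddings: dict) -> dict:
    """Similarity of each partial-run embedding to every reference.

    tracked_windows: (T, D) array, one embedding per time window of
    the growth so far. Returns, per reference, a length-T similarity
    series you could plot as the Similarity Trajectory panel.
    """
    out = {}
    for name, ref in reference_embeddings.items():
        num = tracked_windows @ ref
        den = np.linalg.norm(tracked_windows, axis=1) * np.linalg.norm(ref)
        out[name] = num / den
    return out

# Hypothetical data: 30 time windows, 8-dimensional embeddings.
rng = np.random.default_rng(1)
tracked = rng.normal(size=(30, 8))
refs = {"pass": rng.normal(size=8), "fail": rng.normal(size=8)}
traj = trajectory(tracked, refs)
print(traj["pass"][-1], traj["fail"][-1])  # where the run is trending now
```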

Assess Within-Run Uniformity

Explore Similarity and Growth Monitoring can also be applied to segments of a single run to assess how internally consistent it is, identifying variation between segments, layers, or regions of the process timeline. This catches issues that run-to-run analysis alone can miss: a growth that looks acceptable overall but has early-stage drift, mid-run excursions, or layer-to-layer inconsistency that will affect device performance (see the sketch below).
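One way to picture within-run uniformity: embed each segment of the run and compare every segment against every other. The sketch below does this with cosine similarity on synthetic embeddings; the segmentation and embedding model are assumptions, not Atomscale's actual method:

```python
import numpy as np

def uniformity_matrix(segment_embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between segments of one run.

    A uniform run yields values near 1.0 everywhere; a row with low
    values flags a segment (e.g. one layer) that diverged from the rest.
    """
    norms = np.linalg.norm(segment_embeddings, axis=1, keepdims=True)
    unit = segment_embeddings / norms
    return unit @ unit.T

# Hypothetical run split into 6 segments (e.g. one embedding per layer),
# all close to a shared baseline except segment 2.
rng = np.random.default_rng(2)
baseline = rng.normal(size=8)
segments = baseline + 0.1 * rng.normal(size=(6, 8))
segments[2] += rng.normal(size=8)   # simulate a mid-run excursion
sim = uniformity_matrix(segments)
print(np.round(sim[2], 2))          # segment 2's row stands out as low
```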

Visualize and Explore

Whenever you open a data item, physical sample, or project, Atomscale displays interactive charts for the associated data. These charts show both raw tool parameters and Atomscale's derived metrics, all synchronized on the same timeline.
Open chart settings to choose which series to display. Series are organized by data stream (RHEED, optical, metrology, etc.), then by series type (e.g., specular intensity, lattice spacing), then by individual data item; the sketch below illustrates this nesting. Toggle series individually or in bulk.
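As an illustration of that nesting (stream, then series type, then data item), here is a hypothetical representation; the stream names and item IDs are invented for the example:

```python
# Illustrative only: how the series hierarchy in chart settings nests,
# from data stream, to series type, to individual data items.
series_tree = {
    "RHEED": {
        "specular intensity": ["item-101", "item-102"],
        "lattice spacing":    ["item-101"],
    },
    "optical": {
        "reflectance": ["item-203"],
    },
}

# Toggling in bulk = enabling every data item under one series type.
enabled = set(series_tree["RHEED"]["specular intensity"])
print(enabled)
```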

Next Steps

The analysis capabilities on this page (similarity scoring, uniformity assessment, and growth monitoring) are the foundation for everything in the Act step. The same models that let you compare runs historically also power real-time anomaly detection and alerting.