Common questions about Atomscale’s platform, technology, and how it fits into your workflow.

How Atomscale Works

Atomscale is an AI platform that extracts 1000x more information from your existing process data and makes it available in real time. We connect to your instruments, transform raw data into quantitative materials-state information, and surface insights that help you detect anomalies, diagnose drift, and optimize processes during the run, not after it. We call this the Integrated Process Environment (IPE): a unified system that brings real-time and asynchronous measurements into a single feedback loop for materials processing.
Atomscale operates through three stages:
  1. Connect: Tool-specific adapter models ingest and unify raw data across your make and measure instruments. Data flows in through real-time streaming interfaces, file watchers, or our programmatic client, with subsecond inference.
  2. Analyze: A timeseries foundation model maps adapter model outputs into similarity embeddings that provide a quantitative basis for comparison. This answers questions like: How is the current run similar to or different from previous runs? and How uniform is this run from segment to segment? — with resolution from individual atomic layers up to whole-recipe sequences.
  3. Act: Process intelligence flags out-of-distribution runs, identifies trends toward anomalies, and provides the foundation for real-time recipe adjustments and closed-loop process control.
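As a rough illustration of the Analyze stage, run-to-run comparison over embeddings reduces to a similarity computation. The embedding values below are invented for illustration and are not Atomscale's actual model outputs:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two run embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for three runs
baseline = np.array([0.90, 0.10, 0.30])   # known-good reference run
current  = np.array([0.85, 0.15, 0.32])   # in-progress run, on track
drifted  = np.array([0.10, 0.90, 0.50])   # run exhibiting drift

# "How similar is this run to previous runs?" becomes a number
print(cosine_similarity(baseline, current))  # high, near 1.0
print(cosine_similarity(baseline, drifted))  # noticeably lower
```

A run whose similarity to known-good references falls can be flagged as trending toward an anomaly before the run completes.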
Adapter models are proprietary pipelines customized to specific data types (e.g., RHEED video, ellipsometry signals, tool sensor logs). They transform raw source data in real time into generalized, comprehensive fingerprints of the data stream. This enables frictionless use of the data for large-scale pattern recognition and automation, without requiring you to build custom analysis for each data type.
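In spirit, "fingerprinting" a raw stream means reducing each window of signal to a compact, comparable representation. The windowing and features below are illustrative placeholders, not Atomscale's proprietary adapter pipeline:

```python
import numpy as np

def stream_fingerprint(signal: np.ndarray, window: int = 50) -> np.ndarray:
    """Summarize a raw 1-D data stream as per-window statistics
    (mean, spread, dominant frequency bin), averaged into one
    fixed-length vector usable for pattern recognition."""
    n_windows = len(signal) // window
    features = []
    for i in range(n_windows):
        seg = signal[i * window:(i + 1) * window]
        spectrum = np.abs(np.fft.rfft(seg - seg.mean()))
        features.append([seg.mean(), seg.std(), float(spectrum.argmax())])
    return np.asarray(features).mean(axis=0)

# A synthetic oscillating signal stands in for a real sensor stream
t = np.linspace(0, 10, 500)
fp = stream_fingerprint(np.sin(2 * np.pi * 3 * t))
```

Because every data type is reduced to the same kind of fixed-length vector, downstream comparison and pattern recognition work uniformly across instruments.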
Traditional process control relies on parametric statistical models built from a design of experiments, with fixed setpoints and periodic manual adjustments. This approach has several limitations: the design space explodes in complexity for modern tools, most process data goes unused because it isn’t actionable in real time, and feedback only happens after deposition is complete. Atomscale works directly with raw data instead of point-solution outputs, eliminates rigid statistical assumptions, and provides real-time, short-loop feedback at scale. In demonstrated cases, Atomscale detects changes in material state 40 seconds earlier than experienced human operators — with higher consistency, no noise artifacts, and no reliance on classical image processing.
What Atomscale enables, and what it replaces:
  • Real-time physical property extraction across material systems, replacing manual analysis with point solutions
  • Information extraction from file artifacts into a unified data model, replacing catalogs of files in proprietary formats
  • Rapid generation of internally consistent, machine-readable datasets, replacing noisy, subjective conclusions from manual analysis
  • Recognition of proxy relationships mapping external measurements to real-time feedback, replacing feedback between trials only after full measurement sets
  • Reactive process control informed by direct materials feedback, replacing indirect process control informed only by tool controllers
Results depend on your process and data, but demonstrated outcomes include:
  • Earlier anomaly detection: Detecting nucleating surface reconstructions 40 seconds before expert operators
  • High-accuracy predictions: Surrogate models for wafer uniformity achieving >96% accuracy from recipe and sensor data alone
  • In-situ composition estimation: Correlating diffraction features with ex-situ composition measurements to enable real-time composition predictions on future runs
  • Trial success prediction: Predicting growth success or failure in 90% of cases from an initial set of roughly 10 labeled samples
  • Quantitative layer-by-layer comparisons: Automatically differentiating growth conditions and doping compositions from raw ellipsometry data in ALD workflows
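One way to picture the proxy relationships behind the in-situ composition estimation above: fit a mapping from in-situ features to measured ex-situ values on completed runs, then apply it in real time on future runs. All numbers below are labeled toy data, and the two-parameter least-squares fit is a stand-in for the platform's learned models:

```python
import numpy as np

# Toy historical data: one in-situ diffraction feature per run (plus a
# bias column), and the composition later measured ex-situ for that run.
features = np.array([[1.0, 0.2],
                     [1.0, 0.5],
                     [1.0, 0.8],
                     [1.0, 1.1]])
composition = np.array([0.10, 0.25, 0.40, 0.55])

# Learn the proxy relationship by least squares
coeffs, *_ = np.linalg.lstsq(features, composition, rcond=None)

# Predict composition for a new run from its in-situ feature alone
predicted = np.array([1.0, 0.65]) @ coeffs  # ≈ 0.325 on this toy data
```

Once validated, such a mapping turns a slow ex-situ measurement into a real-time in-situ estimate.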
More broadly, customers see improved yield consistency, faster process optimization cycles, and the ability to identify process–outcome correlations that might otherwise take months to discover.

Supported Tools & Data

Atomscale supports the following deposition methods:
  • Molecular beam epitaxy (MBE): most mature capabilities
  • Chemical vapor deposition (CVD / MOCVD)
  • Atomic layer deposition (ALD)
  • Physical vapor deposition (PVD)
  • Sputtering
  • Atomic layer etch: in active development
The platform is designed to be flexible across tool types. If you’re working with a deposition method not listed here, reach out to discuss your setup.
Atomscale integrates data from a wide range of in-situ and ex-situ characterization techniques:
  • Diffraction: RHEED, XRD, LEED
  • Spectroscopy: XPS, Raman, Ellipsometry, NMR
  • Microscopy: SEM, AFM, TEM, STEM
Our adapter model architecture is designed to extend to new data types. If your instrument generates structured or streaming data, we can likely support it.
Our customers work across advanced materials applications including:
  • Silicon photonics — barium titanate on silicon, perovskite/silicon tandems
  • III-V compound semiconductors — GaN, GaAs, InP, SiC, and combinations for photonics, optoelectronics, and quantum devices
  • Next-generation transistors — controlling chemical composition for 2D FET channel materials
  • Quantum cascade lasers — real-time feedback for dynamic process control across alternating layer stacks
  • Advanced magnets and functional materials
If you’re working on thin film deposition of active electronic or photonic materials, Atomscale can likely help — contact us to discuss your specific use case.

Integration & Getting Started

Atomscale connects to your existing instruments and data sources — we work alongside your current setup rather than replacing anything. The platform ingests data through multiple interfaces including real-time streaming, file watchers, and a programmatic API client. We support standard control interfaces including recipe files, control commands, and SECS/GEM API. Insights are delivered through our web interface, API, and real-time alerts.
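As an illustration of the file-watcher style of ingestion, a minimal polling loop might look like the sketch below. The function name and polling approach are assumptions for illustration, not Atomscale's actual connectors:

```python
import os

def poll_new_files(directory: str, seen: set) -> list:
    """Return files that appeared since the last poll, updating `seen`."""
    current = set(os.listdir(directory))
    new_files = sorted(current - seen)
    seen.update(new_files)
    return new_files

# Usage sketch: call on a timer and hand each new file to ingestion
# seen = set()
# for path in poll_new_files("/data/tool_output", seen):
#     ingest(path)   # hypothetical ingest call
```

This pattern lets instruments that only write files to disk participate in the same pipeline as streaming sources.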
We use a three-stage approach:
  1. Proof of concept with historical data: We onboard your existing data at no cost to demonstrate value on your specific process. This validates that our models extract meaningful information from your data.
  2. Integration with live data: We configure the platform for your production environment with real-time data connections, alerting, and controls integration.
  3. Always-on monitoring: Based on demonstrated value, we ramp to continuous operation with every-run monitoring, operator assistance, and (where appropriate) automated intervention.
Our serial model architecture is designed for adapting to new tools and processes: adapter models handle tool-specific data transformation, while the foundation model provides generalized pattern recognition. This means the platform adapts to your specific processes without requiring massive retraining. Even with small labeled datasets — as few as 10 samples in some cases — we can produce practical models for continuous tunability over your process design space.
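A toy sketch of how a useful model can come from roughly 10 labeled runs: a nearest-centroid classifier over run embeddings. This is a stand-in for illustration, not Atomscale's actual few-shot method, and the embeddings and labels are invented:

```python
import numpy as np

def nearest_centroid(X_train, y_train, x_new):
    """Predict the label whose class centroid is closest to x_new."""
    labels = sorted(set(y_train))
    centroids = {lab: X_train[np.array(y_train) == lab].mean(axis=0)
                 for lab in labels}
    return min(labels, key=lambda lab: np.linalg.norm(x_new - centroids[lab]))

# Ten hypothetical labeled runs: embedding -> growth success/failure
X = np.array([[0.90, 0.10], [0.80, 0.20], [0.85, 0.10], [0.90, 0.20],
              [0.75, 0.15], [0.20, 0.90], [0.10, 0.80], [0.15, 0.85],
              [0.25, 0.90], [0.20, 0.75]])
y = ["success"] * 5 + ["failure"] * 5

print(nearest_centroid(X, y, np.array([0.82, 0.18])))  # "success"
```

Because the embeddings already encode the relevant structure, even a very simple classifier on top of them can be practically useful with few labels.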
Atomscale offers three deployment options:
  • Cloud (Web UX) — Hosted platform accessible through your browser
  • API — Programmatic integration for automation workflows and custom tooling
  • On-premises — Local deployment for environments with strict data requirements
Your process data stays within the agreed deployment boundary. We work with your security and IT teams to meet your organization’s requirements.

Pricing & Engagement

Atomscale uses a usage-based model tied to the volume of data ingested and processed. This aligns our pricing with the value you receive — you pay for what you use, and costs scale with your operations rather than requiring a large upfront commitment. The initial proof of concept uses historical data and is provided at no cost.
Atomscale serves several personas:
  • Process engineers and module owners: Primary users who develop, maintain, and diagnose the process of record. They interact with the platform daily during runs.
  • Metrology engineers: Use the platform to bridge in-situ and ex-situ measurements, building stronger connections between characterization and process.
  • Engineering managers and fab directors: Track KPIs including yield improvement, downtime reduction, and product variance. Typically the decision makers and economic buyers.

Company Background

Advanced, application-specific materials are critical for the next generation of technological progress — but compelling materials often get stuck at the lab scale. We can propose new materials faster than we can prove they work in production. Atomscale is building purpose-specific AI to bring real-time feedback, visibility, and automation to advanced materials manufacturing. Our product vision progresses through three stages:
  1. Real-time analysis: A platform to find an edge in your data by using 100% of your signal in real time
  2. Virtual characterization: Intelligence layer models that predict the state of the process relative to past runs or in absolute terms
  3. Self-driving process of record: Adaptive, active process control that optimizes for consistency of output rather than consistency of tool state
By enabling dynamic control of complex process physics, we’re creating a new standard for how advanced materials get made, with the repeatability and precision needed to scale electronics, photonics, quantum technologies, and energy storage.
Atomscale was founded by Chris Price and Jason Munro, two materials scientists with a shared goal of turning decade-long synthesis campaigns into tractable computational problems. Chris Price has spent nearly a decade at the intersection of quantum mechanics, AI, and physics-based modeling. His career spans deep technical research and enterprise data products, focused on translating advanced science into commercially viable technologies. He holds a PhD from the University of Pennsylvania. Jason Munro earned his PhD in computational materials science from Penn State and continued as a staff scientist at Lawrence Berkeley National Laboratory. There, he served as a lead developer of the Materials Project, the world’s largest open database of simulated materials data.
The team brings expertise spanning materials science, physics, AI/ML, high-performance computing, scientific software, and life sciences. Personal backgrounds, advisors, and investors span institutions including Duke, Penn, Berkeley, The Materials Project, Notre Dame, MIT, and the Semiconductor Research Corporation.