Manufacturing test analytics is the practice of collecting structured test data across production runs and analyzing it for yield patterns, failure clustering, and measurement drift. If you're trying to figure out how to analyze PCB test results beyond pass/fail, this is the approach that works: capture per-step measurements, aggregate them, and look for trends. Teams that do this catch batch-level quality problems — component lot shifts, process variations, fixture-induced measurement error — before those problems reach customers. Teams that don't are flying blind between individual pass/fail results.
The gap between "we test boards" and "we learn from test data" is smaller than most teams assume. If your tests already run through a structured framework, the data exists. What's missing is the pipeline to aggregate it and the dashboards to make patterns visible.
What Manufacturing Test Data Actually Contains#
A structured test framework like pytest-f3ts captures more than pass/fail. Each test run produces:
- Per-step measurements — actual voltage, current, or timing values, not just whether they fell within limits
- Limit thresholds — the pass/fail boundaries for each measurement, so you can see how close results run to the edge
- Timestamps and duration — when each test executed and how long it took
- Serial and part numbers — traceability back to specific boards and component lots
- Step-level pass/fail — which specific test failed, not just "board failed"
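Concretely, one run's worth of that data can be modeled as a small record per test step. The shape below is illustrative only, not pytest-f3ts's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical per-step record -- illustrates the fields listed above,
# not the real pytest-f3ts data model.
@dataclass
class StepResult:
    serial_number: str    # traceability to a specific board
    part_number: str      # assembly / component lot traceability
    step_name: str        # which test produced this measurement
    measured: float       # actual value, e.g. volts
    limit_low: float      # lower pass/fail boundary
    limit_high: float     # upper pass/fail boundary
    started_at: datetime
    duration_s: float

    @property
    def passed(self) -> bool:
        return self.limit_low <= self.measured <= self.limit_high

    @property
    def margin(self) -> float:
        """Distance to the nearest limit: how close the result runs to the edge."""
        return min(self.measured - self.limit_low,
                   self.limit_high - self.measured)
```

With records like this, "how close does this rail run to its lower limit?" becomes a one-line query instead of a question nobody can answer.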
Compare this to the alternative: a test operator watches a green or red indicator, marks a spreadsheet, and moves on. The measurement data — the part that actually tells you why things pass or fail — disappears.
The difference matters at scale. When you test 50 prototype boards, you can eyeball the results. When you test 5,000 production boards across six months, you need structured data to answer basic questions: Is yield improving or degrading? Which test step catches the most failures? Did something change when your CM switched component suppliers?
From Pass/Fail to Trend Analysis#
Individual test results tell you whether a specific board works. Aggregate test data tells you whether your process works. That's the shift.
Here's what becomes visible when you plot test data over time:
Yield drift. First-pass yield that gradually drops from 97% to 93% over three weeks doesn't show up in individual test reports. Each board either passes or fails, and a 93% yield still means most boards pass. But that 4-point drop might indicate a component tolerance shift or a process change at your CM that's worth investigating before it gets worse.
Measurement distributions shifting toward limits. A 3.3V rail measurement that averages 3.28V and slowly trends toward 3.20V (your lower limit) is a problem you can catch months before it causes failures. Without trend data, you won't see it until boards start failing.
Failure clustering. When 80% of your failures come from two test steps out of forty, that's a signal. Maybe those steps have overly tight limits. Maybe the component under test has a supplier quality issue. Maybe the fixture contact at that test point is wearing out. You can't diagnose what you can't see.
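The rail-drift scenario above is easy to check mechanically once per-day measurement means are queryable. A minimal sketch, where the function and data are illustrative (not a TofuPilot API):

```python
def detect_drift(daily_means, lower_limit, alert_margin):
    """Return the index of the first day the daily mean comes within
    alert_margin of the lower limit, or None if it never does."""
    for day, mean in enumerate(daily_means):
        if mean - lower_limit < alert_margin:
            return day
    return None

# A 3.3 V rail drifting from 3.28 V toward a 3.20 V lower limit
means = [3.28, 3.27, 3.26, 3.24, 3.23, 3.22]
detect_drift(means, lower_limit=3.20, alert_margin=0.025)
# flags day 5 (3.22 V, within 25 mV of the limit) -- before any board fails
```

The same idea generalizes to rolling averages or SPC control rules; the point is that the alert fires while every individual board still passes.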
The belief shift here is specific: you don't need a data science team to do this. You need structured test output and a platform that knows how to display manufacturing data. The analysis itself is straightforward once the data is visible.
Building an Analytics Pipeline#
A manufacturing test analytics pipeline has three layers. None of them requires custom data engineering.
Test execution framework. pytest-f3ts structures your test results automatically. When a test measures a voltage, the framework captures the measurement value, the limits, the step name, and the pass/fail result in a format that analytics platforms can ingest. You write normal pytest tests — the plugin handles data capture.
Data aggregation platform. TofuPilot collects structured test results and stores them in a queryable format. The pytest-f3ts integration sends data to TofuPilot at the end of each test session using two pytest hooks:
- `pytest_runtest_logreport()` captures results as each test completes
- `pytest_terminal_summary()` transmits the collected data to TofuPilot's API
Setup takes roughly an hour on an existing test plan.
Visualization and analysis. TofuPilot provides dashboards for the metrics that matter in manufacturing test: yield over time, failure Pareto charts, measurement distributions, and SPC on parametric data. You don't build these dashboards — they're built into the platform.
The alternative — building this yourself with Jupyter notebooks, a database, and custom visualization code — typically takes weeks of engineering time and creates a system that only the person who built it can maintain. For most teams, that's not a reasonable investment when purpose-built tools exist.
What to Measure and Why#
Not all metrics are equally useful. A hardware quality assurance plan that tracks everything measures nothing well. These are the metrics that actually drive action:
First-pass yield (FPY). The percentage of boards that pass all tests on the first attempt. This is your headline number. Track it daily and weekly. A sustained drop means something changed in your process, your components, or your fixture. FPY below 95% on a mature product usually warrants investigation.
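Computed from per-board first-attempt results, FPY is a few lines. The `runs` structure below is illustrative:

```python
def first_pass_yield(runs):
    """FPY = boards passing every step on the first attempt / boards tested.

    `runs` maps serial number to the list of step outcomes from that
    board's first test attempt (illustrative structure)."""
    if not runs:
        return 0.0
    passed = sum(1 for steps in runs.values() if all(steps))
    return passed / len(runs)

runs = {
    "SN001": [True, True, True],
    "SN002": [True, False, True],   # failed a step on first attempt
    "SN003": [True, True, True],
}
first_pass_yield(runs)   # 2 of 3 boards passed first time -> ~0.667
```

Note that retest passes deliberately do not count; FPY measures the process, and a board that needed a second attempt is a process signal.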
Cumulative yield by test step. Break FPY down by individual test. If your power rail test catches 60% of all failures, that's where to focus improvement effort. This is also how you spot tests with limits set too tightly — a step that fails 8% of boards while all other steps fail less than 1% is probably measuring variation, not defects.
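Ranking steps by failure count is a one-liner with a `Counter`. The step names below are made up for illustration:

```python
from collections import Counter

def failure_pareto(failed_steps):
    """Rank test steps by how many failures they account for.

    `failed_steps` is one entry per failure, naming the step that failed."""
    counts = Counter(failed_steps)
    total = sum(counts.values())
    return [(step, n, n / total) for step, n in counts.most_common()]

failures = ["power_rail"] * 6 + ["usb_enum"] * 3 + ["led_test"]
failure_pareto(failures)
# [('power_rail', 6, 0.6), ('usb_enum', 3, 0.3), ('led_test', 1, 0.1)]
```

Here one step accounts for 60% of failures, which is exactly the kind of concentration that tells you where to spend improvement effort.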
Parametric Cpk on critical measurements. Process capability indices tell you how much margin you have between your measurement distribution and your pass/fail limits. A Cpk below 1.0 means your process variation is wider than your spec — you'll see intermittent failures even if nothing is actually wrong. A Cpk above 1.33 means you have healthy margin.
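For a two-sided spec, Cpk = min(USL - mean, mean - LSL) / (3 * sigma). A sketch using the standard library (the example measurements are made up):

```python
import statistics

def cpk(values, lsl, usl):
    """Process capability index: margin between the measurement
    distribution and the spec limits, in 3-sigma units."""
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)   # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Tightly clustered 3.3 V rail readings against a 3.2-3.4 V spec
readings = [3.29, 3.30, 3.31, 3.30, 3.29, 3.31]
cpk(readings, lsl=3.2, usl=3.4)   # well above the 1.33 healthy-margin threshold
```

The same readings against a 3.29-3.31 V spec would score far below 1.0, which is the "intermittent failures even though nothing is wrong" regime described above.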
Failure Pareto by component. When failures cluster around specific components (not just test steps), that points to supplier quality issues or design margin problems that test limits alone won't solve.
Common Analytics Mistakes#
Measuring only final pass/fail. If your test system records "board passed" or "board failed" without per-step data, you can calculate yield but you can't diagnose why it changes. Per-step results are the minimum useful granularity.
Setting limits without understanding distributions. Pass/fail limits should reflect your actual measurement distributions, not just the component datasheet spec. A ±5% limit on a measurement with ±1% actual variation wastes diagnostic resolution. A ±1% limit on a measurement with ±3% process variation creates false failures. Measurement distribution data tells you where your limits should be.
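One distribution-aware way to sanity-check limits is to center them on the observed mean at a multiple of the observed sigma. The function and the choice of k below are a sketch, not a standard:

```python
import statistics

def suggest_limits(values, k=4.0):
    """Suggest pass/fail limits at mean +/- k * sigma of observed data.

    k is a judgment call per measurement: large enough to tolerate normal
    process variation, small enough to keep diagnostic resolution.
    This is an illustrative heuristic, not an industry rule."""
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)
    return (mu - k * sigma, mu + k * sigma)

readings = [3.29, 3.30, 3.31, 3.30, 3.29, 3.31]
low, high = suggest_limits(readings)   # roughly 3.26 V to 3.34 V
```

Compare the suggestion against the datasheet spec: if the data-driven band is much narrower, the datasheet limit is wasting diagnostic resolution; if it is wider, the limit will generate false failures.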
Treating test data as a compliance artifact. Some teams collect test data because their customer or quality system requires it, then never look at the data again. The data has value beyond compliance — but only if someone reviews the dashboards regularly. Set a cadence: weekly yield review, monthly Pareto review, immediate investigation when FPY drops below a threshold.
Ignoring fixture-induced variation. Test fixtures contribute their own measurement uncertainty — contact resistance, probe wear, alignment repeatability. If your analytics show a measurement slowly drifting over thousands of test cycles, the root cause might be fixture maintenance, not a component or process issue. Track fixture-correlated variation separately when possible.
Analytics without action is just data storage#
Dashboards are only useful if someone acts on what they show. Assign ownership: who reviews yield data, how often, and what thresholds trigger investigation. Without this, you'll build a beautiful analytics pipeline that nobody looks at.
Getting Started#
Where you start depends on where you are:
Need a test framework? pytest-f3ts is open-source and structures your test results for analytics from the start. If you're writing pytest-based functional tests, adding the plugin is the first step.
Have tests but no analytics? Configure the TofuPilot integration on your existing pytest-f3ts test plan. The TofuPilot documentation covers the setup process.
Need to design a test system first? Start with test system design to understand the full architecture — mechanical, electrical, and software — before focusing on the analytics layer.
Writing test specifications? The analytics pipeline needs well-defined test steps with explicit measurements and limits. See how to write a test specification for the upstream document that feeds your test framework.
Need fixtures? Analytics requires consistent, reliable test data. That starts with a bed-of-nails fixture that makes solid contact every time. Inconsistent fixture contact creates measurement noise that corrupts your analytics.