Category:

Hardware Test Software

Last Updated:

January 9, 2025

Marzieh Barnes

Introduction

Manufacturing test data often goes underutilized, representing missed opportunities for quality improvement and process optimization. While test systems generate valuable measurements during production, many manufacturers struggle to transform this data into actionable insights. This article demonstrates how to leverage pytest-f3ts and Tofu Pilot to implement automated test analytics, using FixturFab's control board production as a practical example.

Manufacturing test analytics

A Tale of Unused Data

So, let’s say you’ve just released your product and finished setting up your brand-new test fixture on your manufacturing line. Boards are running through, tests are passing, units are getting stamped for quality, and they’re leaving the factory. Your test fixture is taking thousands of electrical measurements as part of its internal process. The sad reality is that, a lot of the time, this measurement data doesn’t go anywhere else. Your factory technicians might download it to a hard drive or an internal cloud storage system, but usually it will remain untouched until there’s a problem with a board and you need to track down what went wrong in production.

However, this measurement data is incredibly valuable. It could be that, despite passing all the tests at the factory, a new batch of boards has some minor manufacturing defect that wasn’t caught by your automated test system. Maybe an on-board voltage regulator is underperforming significantly compared to past batches, and while it’s still within acceptable limits, it could cause power draw issues down the line. A test fixture that only measures and validates a single board at a time can never catch these kinds of larger-scale manufacturing anomalies.
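To make the regulator scenario concrete, here is a hypothetical sketch; the limits and readings below are made up for illustration, not real FixturFab data:

```python
from statistics import mean

# Hypothetical regulator output limits and per-board readings, in volts
LIMITS = (3.20, 3.40)
batch_a = [3.31, 3.30, 3.32, 3.29, 3.31]  # an earlier production batch
batch_b = [3.22, 3.21, 3.23, 3.22, 3.21]  # a newer batch, still "passing"

# Every individual board passes, so a per-board fixture flags nothing
all_passing = all(LIMITS[0] <= v <= LIMITS[1] for v in batch_a + batch_b)

# Comparing batch averages reveals a shift only cross-run analytics can see
shift_v = mean(batch_a) - mean(batch_b)

print(all_passing)        # True
print(round(shift_v, 3))  # ~0.088 V drift between batches
```

Every board here would leave the factory stamped as good, yet the batch-level average has drifted noticeably toward the lower limit, which is exactly the kind of trend a per-board pass/fail check cannot surface.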

So why do many manufacturers do nothing with this test data? The answer is almost always time and complexity. You may technically have access to a test fixture’s measurement data, but turning that into analytics and insights can be very difficult and time-consuming, depending on how your test fixture manages, stores, and collects that data. With the right tools and architecture, however, it can take just a few hours.

FixturFab Control Board Example

Test Automation Software using pytest-f3ts

Recently, I set up test automation software for a production test fixture that we use at FixturFab. I previously wrote an article detailing how to develop test software with pytest and another outlining how to use our open-source pytest plugin, pytest-f3ts, to do so. If you haven’t already, read through how pytest-f3ts uses pytest metadata to log and store test data during a run, since this article builds on that feature to send test data to a data analytics platform.

Plug and Play Analytics with Tofu Pilot

To set up test analytics on our existing test plans, we used a test analytics platform called Tofu Pilot. The first time I did this, it took no more than an hour, and I was able to view the test data almost immediately. All you need to do is set up your Tofu Pilot API key, then collect and send your measurement data using the TofuPilotClient.
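As a minimal sketch of that setup step: the environment variable name below is my assumption of the usual convention, so check the Tofu Pilot documentation for the exact name your client version expects.

```python
import os

# Assumed convention: the client reads its API key from an environment
# variable, so the fixture PC or CI runner can set it once outside the code.
os.environ.setdefault("TOFUPILOT_API_KEY", "your-api-key-here")

try:
    # pip install tofupilot; with the key in the environment, the client
    # can be constructed with no arguments
    from tofupilot import TofuPilotClient
    client = TofuPilotClient()
except Exception:
    client = None  # tofupilot not installed (or key invalid) in this environment
```

Keeping the key in the environment rather than in the test code means the same test suite can run on multiple fixtures, each reporting under its own credentials.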

To automatically populate this data with pytest, I made some changes to the pytest hooks in the test suite’s pytest configuration file (conftest.py):

"""pytest configuration for the FixturFab FixturCtrl Test Fixture."""

import datetime
import logging

import pytest
from pytest_f3ts.utils import SerialNumber
from tofupilot import TofuPilotClient

logger = logging.getLogger(__name__)

def pytest_configure():
    pytest.dut_serial_num = ""
    pytest.tofu_collected_steps = []


@pytest.fixture(scope="session")
def serial_number(backend_api) -> SerialNumber:
    """Fixture for accessing the serial number of the device under test (DUT)."""
    sn = SerialNumber(backend_api)

    yield sn

    # Collect and update the serial number at teardown
    pytest.dut_serial_num = sn.serial_number


def pytest_runtest_logreport(report):
    """Collect and log data at end of each test case"""

    # Only send results for "call" actions
    if report.when == "call":

        duration = datetime.timedelta(seconds=report.duration)
        end = datetime.datetime.now()
        
        step = {
            "name": report.head_line,
            "started_at": end - duration,
            "duration": duration,
            "step_passed": bool(report.passed),
        }

        # Add keys to the step object from user_properties recorded during the test run
        user_properties_dict = dict(report.user_properties)
        
        # Collect limits from the pytest-f3ts test_config object:
        if "test_config" in user_properties_dict:
            config_dict = dict(user_properties_dict["test_config"])

            if "min_limit" in config_dict:
                step["limit_low"] = config_dict["min_limit"]

            if "max_limit" in config_dict:
                step["limit_high"] = config_dict["max_limit"]

        # Collect from user-defined properties:
        if "meas" in user_properties_dict:
            step["measurement_value"] = user_properties_dict["meas"]

        if "min_limit" in user_properties_dict:
            step["limit_low"] = user_properties_dict["min_limit"]

        if "max_limit" in user_properties_dict:
            step["limit_high"] = user_properties_dict["max_limit"]

        pytest.tofu_collected_steps.append(step)

def pytest_terminal_summary(terminalreporter, exitstatus):
    """Send collected run information to Tofu Pilot API."""
    # Initialize the TofuPilot client.
    client = TofuPilotClient()

    # Create a test run for the unit under test, using the serial number
    # collected during the session and the board's part number
    client.create_run(
        procedure_id="FVT2",
        unit_under_test={
            "serial_number": str(pytest.dut_serial_num),
            "part_number": "415-0047-00r2",
        },
        run_passed=(exitstatus == 0),
        steps=pytest.tofu_collected_steps,
    )
  

All this does is check for the appropriate pytest-f3ts metadata at the end of each test case and store it in a global pytest attribute, tofu_collected_steps. At the end of the run, it sends everything in tofu_collected_steps to the Tofu Pilot web platform.
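For context on where those user_properties come from: inside the test cases themselves, measurement values and limits are recorded with pytest’s built-in record_property fixture. A minimal sketch of such a test might look like this; the measure_3v3_rail helper and its values are hypothetical stand-ins for real instrument I/O:

```python
def measure_3v3_rail() -> float:
    """Hypothetical stand-in for a real instrument reading on the fixture."""
    return 3.29

def test_3v3_rail(record_property):
    """Record the keys the logreport hook looks for."""
    min_limit, max_limit = 3.1, 3.5
    meas = measure_3v3_rail()
    # record_property appends (name, value) tuples to report.user_properties,
    # which pytest_runtest_logreport then folds into the step dict
    record_property("meas", meas)
    record_property("min_limit", min_limit)
    record_property("max_limit", max_limit)
    assert min_limit <= meas <= max_limit
```

record_property is a built-in pytest fixture: each call adds a (name, value) tuple to the test report’s user_properties, which is exactly the list the hook converts into a dict before building the step.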

Now, whenever we run this test suite within our test fixture, it automatically sends our test data to the Tofu Pilot web platform. Just like that, we can see analytics and data insights for all the boards we run through our fixture. Below is an example of what a single test run looks like in the Tofu Pilot web platform:

If you want to look at the overall measurement trends for an individual measurement, click on the test step that you want to view, and it’ll show insights about the trends in the factory for that particular test:

Tofu Pilot is still in development, but I’ve found that the ease of setting up this tool has made it a great choice for the low-volume board production line we set up at FixturFab. I highly recommend trying it out and seeing if this is the right platform for your team.

Conclusion

Integrating test analytics doesn't need to be complex or time-consuming. By combining pytest-f3ts's data logging capabilities with Tofu Pilot's analytics platform, manufacturers can:

  • Automatically collect and analyze test measurements
  • Track production trends and identify potential issues early
  • Make data-driven decisions about manufacturing processes
  • Set up analytics workflows in hours rather than weeks

This approach turns unused test data into valuable manufacturing intelligence with minimal development overhead.

