Written by: Kristian Husevåg Krohn (Senior Development Engineer)
Hardware can be faulty. Sorting the bad units from the good ones at an early stage reduces waste in production, because assembly does not continue with faulty components.
Design Goals
When developing a production test application for high-volume production, it is wise to map out some design goals beforehand. They usually look something like this:
- Fast – Production time is expensive
- Early – The earlier you can discard faulty hardware, the fewer resources are wasted on continuing to produce a device that is going to fail anyway
- Flexible – What you want to test will change over time as you discover things that don’t need to be tested and things that do
- Scope – We want to test the hardware, not the software
- Coverage – You want to exercise as much of the hardware as possible, including the parts not actively in use
- Pass/Fail – In the end, everything needs to boil down to a pass or a fail. It can be beneficial to make this decision in the test machine, which can take extra parameters into account, e.g. voltages measured by a bed-of-nails test fixture.
Abstraction
A reasonable abstraction to start at is just above the driver level. This abstracts away the nitty-gritty details of each peripheral, but is sufficiently low-level not to be much affected by the quirks of your application code. It also helps keep the production tests self-contained, so they hopefully won't break when a trigger-happy application developer bumps a library.

Another good idea is to build a framework to put all the tests in. This makes it easier to enforce standardized test execution, test documentation with coverage and usage, logging, and communication with the test machine. All of this makes scripting the test procedure a breeze, either on the test machine or on target, and keeps things easy to maintain in the future.
Defining features
When defining the set of features included in the production test framework, I prefer not to get in the way of the developer who is actually going to write tests (myself). Each test is a function pointer to the good old int test_whatever(const char **argv, int argc), taking arguments just like a regular application; I like to think of the framework as a program of programs. When registering a test in the framework, I try to do so without cluttering up the framework itself: everything related to one test is contained in a single file, and which tests get built is controlled in the Makefile/CMakeLists.txt. When it comes to logging, I prefer to supply my own print functions so I can print to both stdout and a file.
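The registration idea above can be sketched as a table of name/function-pointer pairs with a framework-supplied log function. This is a minimal illustration, not the author's actual framework; the entry struct, the dummy test, and the run_test/test_log helpers are all hypothetical names.

```c
#include <stdio.h>
#include <string.h>

/* Each test is a small "program" with its own argv/argc, matching the
 * signature described above. Returns 0 on pass, non-zero on fail. */
typedef int (*test_fn)(const char **argv, int argc);

struct test_entry {
    const char *name; /* name used to invoke the test from a script */
    test_fn     run;
};

/* Illustrative test; normally each test lives in its own file and is
 * selected for the build in the Makefile/CMakeLists.txt. */
static int test_dummy(const char **argv, int argc)
{
    (void)argv; (void)argc;
    return 0; /* pass */
}

static const struct test_entry tests[] = {
    { "dummy", test_dummy },
};

/* Framework-supplied print: writes to stdout and, if given, a log file. */
static void test_log(FILE *logf, const char *msg)
{
    fputs(msg, stdout);
    if (logf)
        fputs(msg, logf);
}

/* Look up a test by name and run it with the remaining arguments. */
static int run_test(const char *name, const char **argv, int argc, FILE *logf)
{
    for (size_t i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
        if (strcmp(tests[i].name, name) == 0) {
            int rc = tests[i].run(argv, argc);
            test_log(logf, rc == 0 ? "PASS\n" : "FAIL\n");
            return rc;
        }
    }
    test_log(logf, "UNKNOWN TEST\n");
    return -1;
}
```

A thin main() that parses a test name and forwards the rest of argv is then enough to drive the whole suite from a script on the test machine.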
A typical case for a production test is peripherals connected to the CPU. You don't need to test every function of the peripheral: the chip or module itself is most likely good and already tested by the chip manufacturer; the big unknown is your own PCBA. If you can communicate with the peripheral, it's probably good, so try to keep your tests at that level. An exception is when the peripheral you're communicating with is connected to something more. Say you have an external Ethernet controller, which in turn is connected to an Ethernet PHY, through some magnetics, and out to the RJ45 jack. To have some confidence that Ethernet is working, you'll have to test the whole chain, either with a loopback cable or by connecting it to a switch and checking that you get link. Having link is, however, enough to say that Ethernet is working; anything more will only slow down the test execution.
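On a Linux target, the "do we have link" check at the end of that chain can be as small as reading the carrier flag from sysfs. A minimal sketch, assuming a Linux sysfs layout; the interface name and the decision to treat a missing interface as a failure are assumptions for illustration:

```c
#include <stdio.h>

/* Link check sketch (Linux): read /sys/class/net/<ifname>/carrier.
 * The interface must be administratively up for carrier to be
 * meaningful; a missing interface is treated as a failed test here. */
static int eth_link_up(const char *ifname)
{
    char path[128];
    char c = '0';

    snprintf(path, sizeof(path), "/sys/class/net/%s/carrier", ifname);
    FILE *f = fopen(path, "r");
    if (!f)
        return 0;                /* no such interface: fail */
    if (fread(&c, 1, 1, f) != 1)
        c = '0';
    fclose(f);
    return c == '1';             /* '1' means link detected */
}
```

With a loopback cable or a switch on the jack, a single call like eth_link_up("eth0") exercises controller, PHY, magnetics, and connector in one go.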
Be careful
This will become a really handy tool for the other developers, testers, and other personnel on the project. Although it might be OK to add tests that aren't going to be used in production, it's important not to lose focus on the target here, especially once project management or sales finds out about this amazing tool that can make the hardware do stuff long before the product is finished.
One final note: you probably don't want to leave the tool on target when shipping the product to customers. First, it doesn't look good to have your internal production tools on a device you're shipping to customers, and second, it might have functionality you don't want customers to fiddle with, like setting power rails to the point where the blue smoke starts to come out.
So to summarize: test hardware, not software, and plan for scaling, scripting, and rapid change of the test set.
Want to know more? Reach out to:
Marianne Holmstrøm
R&D Manager
The post Developing a production test application appeared first on Data Respons R&D Services.