Our CTO and Co-founder David Mendels answers our most commonly asked question.
One of the questions we are asked most often is “how accurate is xRapid?”. It’s a good question, but it can be surprisingly difficult to answer. The short answer is that we have created an automated, optical test that performs within 0.1% of perfect accuracy. The longer answer, however, takes us down a harder path, because there are no solid, universal standards for these sorts of measurements; one could argue that there aren’t for most diagnostics, as they tend to be very distinct from traditional engineering measurements.
Take material stiffness, for instance: its value can be determined from the displacement response to a known applied force. Both displacement and force are calibrated against a known standard, and the measurement, or “diagnostic”, is fully codified: how many repetitions of the measurement to take, how to extract the data from the response curve, and so on. Unfortunately, the process is not so simple for biological diagnostics. We do have guidelines from the WHO, but they are not enough to ensure the robustness of our product. This means we have to build our own internal standards, with the help of our scientific partners, that not only adhere to WHO guidelines but also enable consistent results across machines, operators, preparation methods and the other variables affecting the accuracy of our test.
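To make the stiffness analogy concrete, here is a minimal sketch of such a codified measurement: repeated force–displacement readings are fitted with a straight line, and the stiffness is the slope. The numbers are invented for illustration (a sample with a true stiffness of 2000 N/m plus a little measurement noise).

```python
import numpy as np

# Hypothetical calibrated measurements: known applied forces (N)
# and the resulting displacements (m) of a sample whose true
# stiffness is 2000 N/m, with small measurement noise added.
forces = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # N
displacements = forces / 2000.0 + np.array(
    [1e-6, -2e-6, 1.5e-6, -1e-6, 0.5e-6])     # m

# The "diagnostic" is codified: fit the slope of the
# force-displacement curve; stiffness is force per unit displacement.
slope, intercept = np.polyfit(displacements, forces, 1)
print(f"estimated stiffness: {slope:.0f} N/m")
```

Because both axes are calibrated against known standards, any laboratory running the same protocol should recover the same slope to within the stated uncertainty.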
So how can we ensure our test is so accurate?
The past 18 months have been a learning experience for all of us at the company, in many different ways. One area where we have raised our standards considerably is experimental testing. These tests are vital for building acceptable internal standards and legitimising the accuracy of our tool.
Let’s take one of our recent tests as an example. We were looking at the combined influence of staining time, operators and equipment on the parasite count from a thick-smear experiment. These are labour-intensive experiments, which we run in a fully randomized statistical design; this ensures that the results are representative of the widest possible range of experimental conditions. We perform the diagnostics on a set of slides from the same blood, prepared by our partner, the Pitié-Salpêtrière Hospital in Paris, France’s reference centre for tropical diseases. All external factors are tested simultaneously, and we use statistical analysis to determine the relative importance of each one and their combined influence. These tests, spanning a large number of experimental conditions, are also valuable because they give us the absolute and relative error of the diagnostic within a known range of uncertainty.
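A fully randomized design of this kind can be sketched as follows. The factor levels below (staining times, operator and device names) are invented for illustration, not our actual protocol: every combination of factors is scheduled, and the run order is shuffled so that drift over the session is not confounded with any single factor.

```python
import itertools
import random

# Hypothetical factor levels for a thick-smear experiment.
staining_times = [8, 10, 12]        # minutes (invented values)
operators = ["op_A", "op_B"]
devices = ["scope_1", "scope_2"]

# Enumerate every combination of factors, then fully randomize
# the run order so that, e.g., stain ageing over the session
# cannot be mistaken for an operator or equipment effect.
runs = list(itertools.product(staining_times, operators, devices))
random.seed(42)                     # reproducible randomization
random.shuffle(runs)

for i, (t, op, dev) in enumerate(runs, 1):
    print(f"run {i:2d}: stain {t} min, {op}, {dev}")
```

With all combinations present, a factorial analysis (e.g. ANOVA) can then separate the main effect of each factor from their interactions.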
Comparing ourselves to the Gold Standard.
One of the difficulties of quality control is that we need a traceable reference against which to compare our diagnostic results. xRapid is more than software; it is also a piece of equipment (the mobile microscope and the iPhone) and a set of procedures. Eventually, after building an extensive data bank of images for analysis, the software testing can be fully automated. However, this is not the case for everything else, nor for the reference material. We use the parasitemia determined by trained microscopists as our reference, which means that every parasitemia value has been checked by at least two expert microscopists.
One difference between our process and the traditional measurement is that our statistical analysis delivers a confidence interval. For example, we determined on our set of slides that the parasitemia was 0.41% with a confidence interval of +/- 0.07%, which compares well with the value quoted by our expert partners (0.45%). Based on that set of experiments, we do not need to adjust the method further, but we must ensure that the results remain consistent whichever counting method is selected and across the widest range of sample preparation conditions.
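The confidence-interval calculation itself is standard. As a minimal sketch, using invented replicate measurements rather than our actual data, the mean parasitemia and its 95% interval can be computed like this (a normal approximation is used for simplicity; a t-distribution would give a slightly wider interval for so few replicates):

```python
import statistics
from statistics import NormalDist

# Invented replicate parasitemia measurements (%) from repeated
# counts of slides prepared from the same blood sample.
parasitemia = [0.38, 0.44, 0.40, 0.47, 0.36, 0.43, 0.41, 0.39]

mean = statistics.fmean(parasitemia)
sem = statistics.stdev(parasitemia) / len(parasitemia) ** 0.5

# 95% confidence interval via the normal approximation.
z = NormalDist().inv_cdf(0.975)      # roughly 1.96
half_width = z * sem
print(f"parasitemia: {mean:.2f}% +/- {half_width:.2f}%")
```

Reporting the interval, not just the point estimate, is what lets the automated result be compared honestly against the expert microscopists’ value.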
The near future:
In the immediate future we need to keep working on both the staining methods and the software. The goal is eventually to split these two quality assurance processes so that they can be run separately, once we have determined that each is robust on its own.
Our most recent tests are very positive; as mentioned earlier, we are working with a highly respected institution in the field, the Pitié-Salpêtrière, and our recent results have exceeded expectations when compared with the diagnoses made by their experts on the same samples. As we move forward we expect new challenges and variables to spring up, but the quality of testing being done both internally and externally gives us confidence that these obstacles can be quickly and efficiently overcome.