The world's scientific and social network for malaria professionals

WHO should measure the prevalence of malaria in Africa

March 8, 2012 - 19:56 -- Bart G.J. Knols

The article below was written by Dr. Bill Jobin and first posted as a comment under the most recent poll. I elevated it to a Guest editorial.


It is unfortunate that we have recently seen a great deal of confusion about the amount of malaria in Africa. The confusion arises because most of the people making the estimates are not scientists but artists: computer artists. It would be better if we relied on scientists. Computer artists, using their own data and their own inspirations, get varying answers and generate conflicting maps and graphs. But scientists, using standardized techniques and randomized sampling, get the same answers no matter who is doing the work. We urgently need accurate numbers on malaria...

To determine how much malaria there really is, WHO in Geneva should take on the task of scientifically measuring the prevalence of malaria on a regular basis.  For Africa, they should probably measure prevalence every 5 years, to minimize costs.


Look at the confusion.  Chris Murray in Seattle - with support from the Gates Foundation - said there were 1.1 million deaths from malaria in Africa in 2010 (e), but Rob Newman from WHO in Geneva maintained there were only 0.7 million deaths (h).  Murray criticized Newman for the way he handled the data, and likewise Newman criticized Murray.  They are both right; and they are both wrong.  Neither one measured the number of deaths.  Instead they both estimated the numbers, and by different techniques.  The artists painted different pictures.


Switching parameters, Cibulskis and WHO colleagues in Geneva said that the number of people infected with malaria in Africa was 173 million in 2009 if you use national surveillance reports, or 261 million in 2007 if you use their WHO mapping technique (a). Gething at Oxford and others in the Malaria Atlas Project used their own mapping technique to develop an estimate of 248 million infected people in 2010 (c). We can’t tell who is right, because these differences are due to artistic license by the estimators.


Who should we believe?


The cause of this confusion is the difference between solid data and complex extrapolation of poor data. We need to be careful not to be swayed by impressionistic estimates based on artistic computer extrapolations. An additional problem with such complex interpolation of scattered data is that few people other than the authors have the time to understand what was actually done, and thus few people can analyze it critically. We have little idea of how much the artists influenced the outcomes by their artistry.


It has always been difficult to evaluate the amount of malaria in Africa and the trends in its transmission. But it will help us understand these disparate numbers if we realize that there are really just two ways to go at the problem. The first method is to estimate parameters from scattered sources of limited data. The second method is to directly measure the prevalence of the infection in carefully selected human populations at a specified season of the year, with a sample large enough to give precise answers.


The first method - artistic estimations


The first method, artistic estimation of the amount of malaria, was used by WHO and other groups. However, their estimates of prevalence or incidence - or of various transmission factors - were not based on carefully planned measurements or sampling. Instead, their estimates were developed from routine administrative reports of the number of malaria cases encountered by national health care systems, or sometimes by combining these reports with limited field measurements. These incidence data can be manipulated in a variety of ways, and estimates of the number of deaths can be made in similar ways. But these are highly impressionistic estimates.


Besides estimating the number of deaths from malaria, several reports in the last few years have estimated the incidence of malaria (the number of new cases reported) from annual surveillance reports submitted by ministries of health through administrative channels. However, because of the administrative nature of these reports, several adjustments had to be made to correct for sporadic and incomplete reporting, the limited scope of the public health services, and crude diagnostic methods. These estimates were then subjected by the artists to several more adjustments based on the assumed effects of control efforts and other intangible factors. No two artists used the same adjustments to develop these estimates, nor even the same data sources. Thus we should again not be surprised that no two artists painted the same picture.


Unfortunately, routine administrative reports from national malaria programs in Africa do not provide reliable data for assessing the amount of malaria, no matter how sophisticated the subsequent computer manipulations might be. They rest on unreliable clinical diagnoses made from symptoms such as fever and headache. When clinical diagnoses were compared with microscopic examination of blood slides for malaria parasites, Sudanese scientists found the proportion of false positives in incidence data reported at health posts around Khartoum to be 76% (b). In Angola, scientists from CDC found the false positive proportion to be 96% (f). Of 864 patients diagnosed in a large number of clinics around Luanda, only 31 actually had malaria! Trying to interpret such unreliable data is a waste of time.
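The Luanda figure follows directly from the two numbers quoted above; a minimal check of the arithmetic:

```python
# Of 864 clinically diagnosed malaria patients around Luanda,
# only 31 were confirmed by blood slide (reference f).
clinically_diagnosed = 864
slide_confirmed = 31

false_positives = clinically_diagnosed - slide_confirmed
false_positive_proportion = false_positives / clinically_diagnosed

print(f"{false_positive_proportion:.0%}")  # → 96%
```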


In the mapping techniques - which produced impressively colored maps - estimates of various transmission parameters were made by geographical interpolation. These estimates manipulated data from a wide variety of sources while trying to compensate for their inherent inaccuracies and defects. For instance, the data came from different times of year, from different age groups, from various laboratories using uncalibrated techniques, and from areas of differing transmission risk.


Additional imprecision was generated when malaria was reported in terms of numbers of deaths. Although the number of deaths appears to be a solid and meaningful figure, and might be a useful concept for comparing the health burden from malaria with that from other diseases, further examination indicated that it could only be a general estimate, not a precise number. The weakness lay in specifying the cause of death: diagnoses of death in Africa were beset by inaccuracies.


Even less precise data were generated by retroactive mortality surveys, in which heads of households were asked whether anyone had died in the previous year, and whether someone in the household could recall if the death was due to malaria. Apparently the US President's Malaria Initiative was trying to use this method (g). Such vague methods for determining cause of death have no place in the scientific evaluation of malaria.


The second method - scientific measurement of prevalence


The second method for monitoring malaria is direct measurement of the point prevalence. Prevalence is a precise epidemiological term meaning the proportion of a population infected at a single point in time. The Gold Standard in direct measurement of malaria prevalence is to take blood samples, during the same month each year, from a carefully selected and statistically representative human population of a given age composition. The size of the sample is calculated from the desired precision of the results. The parasite prevalence in the blood samples is determined by staining and preserving glass slides for microscopic examination. This process is statistically sound and uses verifiable laboratory techniques (d). The measurements are thus scientific, and the results will be reliable.
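As an illustration of how sample size follows from the desired precision, the standard formula for estimating a proportion is n = z²p(1-p)/d². A minimal sketch - the prevalence guess, margin of error, and design effect below are hypothetical values for illustration, not figures from this editorial:

```python
import math

def prevalence_sample_size(expected_prevalence, margin_of_error,
                           confidence_z=1.96, design_effect=1.0):
    """Minimum sample size to estimate a prevalence to within the
    given absolute margin of error (95% confidence by default).
    A design effect > 1 inflates n for cluster sampling."""
    p, d = expected_prevalence, margin_of_error
    n = (confidence_z ** 2) * p * (1 - p) / (d ** 2)
    return math.ceil(n * design_effect)

# Hypothetical survey: expected prevalence 30%, +/- 5 percentage points
print(prevalence_sample_size(0.30, 0.05))                     # → 323
# Same survey, but with a cluster-sampling design effect of 2
print(prevalence_sample_size(0.30, 0.05, design_effect=2.0))  # → 646
```

Cluster designs are the norm in household malaria surveys, which is why the design effect matters: a simple-random-sample formula alone would understate the required number of blood slides.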


In Africa, the only data worth developing to monitor malaria and to assess control efforts will come from such scientific and statistically designed measurements of point prevalence.  


WHO might be able to regain its central responsibility for malaria in Africa if it developed a periodic measurement program to scientifically determine prevalence in sentinel populations in key epidemiological strata, perhaps every 5 years.  The selection of strata and the design of the sampling program must be done by epidemiologists with a refined knowledge of statistics.  Then we might know how much malaria there is, at last....




a. Cibulskis et al 2011 Worldwide incidence of malaria in 2009, PLoS Medicine 8(12)

b. El-Gayoum et al 2009 Malaria overdiagnosis and burden of misdiagnosis in central Sudan, Diagn Microbiol Infect Dis 64(1) p20

c. Gething et al 2011 A new world malaria map, Malaria Journal 10:378

d. Jobin 2010 A realistic strategy for fighting malaria in Africa, Boston Harbor Publishers

e. Murray et al 2012 Global malaria mortality between 1980 and 2010, Lancet 379(9814) pp413-431

f. Thwing et al 2009 How much malaria occurs in urban Luanda, Angola? Am J Trop Med Hyg 80(3) p48

g. USAID 2011 President's Malaria Initiative five-year evaluation, Washington DC

h. WHO 2011 World Malaria Report 2011: the global burden of disease, WHO Geneva



Submitted by William Jobin

It is very gratifying that the malaria people in Niger measured malaria, instead of blindly estimating it as WHO and Chris Murray from the Gates Foundation did. The paper by Doudou and colleagues from Niamey followed exactly the procedures I recommend in this editorial.

Once we get into the practice of measuring malaria, we will be able to devise sound strategies, evaluate them properly, and make progress in the fight.

Let us hope Geneva and Seattle learn from Niamey.  Congratulations to the malaria people in Niger.


William Jobin Director of Blue Nile Associates