
‘I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.’ Lord Kelvin, 1882

The papers in this issue were selected from a conference on sampling and blending held last year. Most production industries regard this topic as a necessary, often costly, but mundane exercise. In an introductory paper titled ‘The Good, the bad and the ugly’, the author, Holmes, observes that taking samples is often left to those who do the mundane jobs and who have little appreciation of the finer points of sampling statistics. I maintain that, of all the production industries, it is the mining and minerals industry that has the greatest stake in undertaking sampling and statistical evaluation with as much discipline and diligence as it can muster.

Consider one of the most common statistical sampling exercises: obtaining a representative sample of a bulk product on a belt, from a truck, or as it is loaded into a ship. Where enormous tonnages are handled, as with coal and iron ore, the economic penalties for delivering substandard product are immense. The loss of credibility can have disastrous implications even in the humblest of all mining operations (often carried out by illegal operators): supplying large tonnages of ‘river sand’, subject to demanding quality specifications for reinforced concrete, to the civil engineering contractors who build soccer stadia, skyscrapers, and bridges.

In South Africa, with our prominent precious metals activity, the importance of proper sampling and interpretation is paramount. From exploration drilling, to proving a payable ore reserve, to the operation of a mine and recovery plant, one is involved in the evaluation of a number of micro-sized gold or platinum metal particles dispersed in a volume of gangue impurities several million times greater.

At the time of my first contact, in the 1950s, with the statistical problem of interpreting the results of fire assays of composites of drill cores or stope-face samples to derive an anticipated average grade for many tons of mined rock, the statistical methods were elementary. It may be of historical interest to record that in those early days of illicit gold sales, the law of the land required that if the mine records showed a ‘significant’ discrepancy in the amount of gold recovered, the mine manager was obliged to report it to the police, a theft dossier had to be registered, and an investigation opened. The mine call factor was invented to provide a convenient way to regularize the inevitable discrepancies arising from the many random factors in sampling and assaying. It was much more credible to convince the bureaucratic government authorities that the monthly production was within the statistical standard-deviation limits of a mine call factor of, say, 0.7, than to attempt to justify a negative discrepancy of 30% of predicted gold production.
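
To restate the arithmetic behind that example in symbols (the figures are simply those quoted above, not new data), the mine call factor is the ratio of gold accounted for at the surface to gold called for by the underground sampling:

$$\mathrm{MCF} = \frac{\text{gold accounted for at the surface}}{\text{gold called for by underground sampling}}, \qquad \mathrm{MCF} = 0.7 \;\Rightarrow\; \text{apparent shortfall} = 1 - 0.7 = 30\%.$$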

In 1960 Danie Krige, in South Africa, undertook pioneering work on mathematical geostatistics. From this work emerged the ‘kriging’ methodology, a rigorous method of interpolating and extrapolating the analyses of adjacent samples to obtain greater precision in estimating the average gold content of bulk areas of the mines, and in resource evaluation from drill cores and other small-sample situations such as the stope faces in narrow-reef mining. This resulted in much improved correlation between the different mine call factors. In spite of these advances, there were many shafts where the call factors were of the order of 0.7. This was shrugged off as being due to statistical inaccuracies, even though many of those involved believed that gold losses of this order were being incurred. Such unsatisfactory situations endured for many decades.
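
As a minimal illustration of the idea (a textbook sketch rather than Krige's original formulation), the kriged estimate of the grade at an unsampled point $x_0$ is a weighted average of the $n$ neighbouring sample grades $Z(x_i)$:

$$Z^{*}(x_0) = \sum_{i=1}^{n} \lambda_i\, Z(x_i), \qquad \sum_{i=1}^{n} \lambda_i = 1,$$

where the weights $\lambda_i$ are chosen from the spatial correlation (variogram) of the samples so as to minimize the estimation variance.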

However, in a paper in this issue, ‘Sampling error or nugget effects’, Clarke states: ‘This name was chosen to reflect the large differences found between neighbouring samples in ‘nuggety’ mineralizations such as Wits gold reefs.’

A second paper in this issue provides extensive references to ‘nugget effects’. The new concept of the ‘nugget theory’, extending the ‘kriging’ methodology, was of interest. Although the mathematics was difficult for me to understand, it could explain discrepancies that have persisted for many decades and which, I believe, require further work and explanation.

The most obvious of these anomalies occurs in the mining of narrow gold-bearing reefs in the Witwatersrand conglomerates. These are often referred to as the ‘carbon leader’ reefs, which occur in many mining areas. The gold usually occurs, together with uraninite and carbonaceous material such as thucholite, as a very narrow band in the conglomerate structure, which is itself narrow and of the order of a few decimetres. Both the gold and the uraninite are believed to be of secondary genesis and thus particularly finely structured. The gold content is high and, in spite of the narrow width, is usually highly payable with normal mining practice. These reefs often extend to great depth and are candidates for future ultra-deep mining.

A common feature of the mining operations on such reefs is that the mine call factor is invariably below 1 and generally only approximately 0.7. This implies that of the gold estimated by the mine surveyor to have been taken to the surface from the stope faces, only 70% can be accounted for in the surface reconciliation.

A recent paper by Fourie (general manager) and Zaniewski (project leader) of the Kopanang mine of AngloGold Ashanti showed conclusively that there were indeed serious losses of gold in mining the carbon leader reefs. They associated these losses with ‘fragmentation’ effects of the blasting explosives used, but failed to identify exactly where the missing gold was to be found, in spite of careful examination of all sources of gold losses. Future work must, we believe, focus not only on the mathematics of geostatistics but also on the vitally important sampling methodology.

Many of us believe that the lost gold is carried away in the enormous volumes of explosion gases emitted in the microseconds after detonation of the blast. The gold released is likely to contain a high proportion of ultra-fine particles, perhaps even in the nano-sized range, which I have dubbed ‘nano gold’.

The velocity of the gases emitted is supersonic and well above the Stokes settling velocity of these fine particles. The proof of this theory will lie in obtaining representative samples of the content of these explosion gases, which can be expected to remain in the stoping areas for only a few milliseconds. This is an extremely difficult assignment, but not an impossible one, taking into account the modern technology available to sampling specialists. If this theory can be proved, then there are several known mining techniques to obviate these losses, even at ultra-deep levels.
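
For context, the Stokes settling velocity of a small sphere of radius $r$ and density $\rho_p$ in a fluid of density $\rho_f$ and viscosity $\mu$ is

$$v_s = \frac{2\, r^{2} (\rho_p - \rho_f)\, g}{9\, \mu}.$$

As a purely illustrative calculation (the particle size is an assumption of mine, not a figure from any of the papers), a gold sphere of radius 100 nm ($\rho_p \approx 19\,300$ kg/m$^3$) settling in air ($\rho_f \approx 1.2$ kg/m$^3$, $\mu \approx 1.8\times10^{-5}$ Pa·s) falls at only about $2\times10^{-5}$ m/s, many orders of magnitude slower than the blast gases move, so such particles would be swept along with the gases rather than settling out.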

This situation suggests a similarity to one of the few television programmes I watch, the popular BBC programme ‘Who Wants to Be a Millionaire?’. In it, contestants who reach the jackpot level are required to give a single answer to a multiple-choice question with four, or sometimes only two, options. If the answer is correct, the contestant wins £1 million.

In our situation, only one question, with two possible answers, is needed:
Nugget effect or nano gold?