ORE RESERVE/RESOURCE ESTIMATION
Ore reserve estimates are assessments of the quantity and
tenor of a mineral that may be profitably and legally extracted
from a mineral deposit through mining and/or mineral beneficia-
tion. Estimation of ore reserves involves not only evaluation of
the tonnage and grade of a deposit but also consideration of the
technical and legal aspects of mining the deposit, of beneficiating
the ores, and of selling the product. Thus a number of professional disciplines may be involved in ore reserve estimation, including geology, geostatistics, mining engineering, mineral process engineering, mineral economics, and land and legal issues.
This chapter, however, addresses only the aspects of ore
reserve estimation that include determination of the tonnage,
grade, size, shape, and location of mineral deposits. Although
these are often referred to as ore reserve estimates, the term resource estimation is used rather than ore reserve estimation to emphasize that not all aspects of the ore reserve estimate are considered here.
5.6.1 RESOURCE ESTIMATION METHODOLOGY
A resource estimate is based on prediction of the physical
characteristics of a mineral deposit through collection of data,
analysis of the data, and modeling the size, shape, and grade of
the deposit. Important physical characteristics of the ore body
that must be predicted include (1) the size, shape, and continuity
of ore zones, (2) the frequency distribution of mineral grade,
and (3) the spatial variability of mineral grade. These physical
characteristics of the mineral deposit are never completely
known, but are inferred from sample data. The sample data
consist of one or more of the following:
1. Physical samples taken by drilling, trenching, test pitting,
and channel sampling.
2. Measurement of the quantity of mineral in the samples
through assaying or other procedures.
3. Direct observations such as geologic mapping and drillhole logging.
Estimation of the resource requires analysis and synthesis of
these data to develop a resource model. Methods used to develop
the resource model may include
1. Compilation of the geologic and assay data into maps,
reports, and computer databases.
2. Delineation of the physical limits of the deposit based
on geologic interpretation of the mineralization controls at a
reasonable range of mining cutoff grades.
3. Compositing of samples into larger units such as mining
bench height, seam thickness, or minable vein width.
4. Modeling of the grade distribution based on histograms
and cumulative frequency plots of grades.
5. Evaluation of the spatial variability of grade using experimental variograms.
6. Selection of a resource estimation method and estimation
of quantity and grade of the mineral resource.
The estimation procedure must be made with at least mini-
mal knowledge of the proposed mining method since different
mining methods may affect the size, shape, and/or grade of
the potentially minable ore reserve. The most important mining
factors for consideration in evaluation of the ore reserve from
the resource are
1. The range of likely cutoff grades.
2. The degree of selectivity and the size of the selective
mining unit for likely mining methods.
3. Variations in the deposit that affect the ability to mine
and/or process the ore.
These mining factors often determine the degree of detail
that is required for the resource model and thus the degree of
difficulty to develop a resource model for estimating ore reserves.
For example, a disseminated gold deposit may be continuous
and regular in shape, if mined by bulk, open pit methods. The
same deposit may be discontinuous and difficult to estimate,
however, if mined by more selective underground methods at a
higher cutoff grade. Such large differences in deposit shape due
to variations in cutoff grade and mining method may require
different ore reserve estimation methods for different mining methods.
5.6.2 DATA COLLECTION AND GEOLOGIC DATABASE
Data that must be collected and compiled for the resource
estimate are as follows:
1. Reliable assays from an adequate number of representative samples.
2. Coordinate locations for the sample data.
3. Consistently recorded geologic data that describe the mineralization.
4. Cross sections or plan maps with the geologic interpreta-
tion of the mineralization controls.
5. Tonnage factors or specific gravities for the various ore
and waste rock categories.
6. Surface topographic map, especially for deposits to be mined by surface methods.
Although small deposits may be evaluated manually using
data on maps and in reports, the amount of data required for a
resource estimate is often large, and data may be more efficiently
evaluated if they are entered into a computer database. Computer
programs can then be used to retrieve the data for printing
reports, plotting on digital plotters, statistical analysis, and re-
source estimation. The minimum information that should be included in a drillhole database is as follows:
1. Drillhole number or other identification.
2. Hole length, collar coordinates, and down-hole surveys.
3. Sample intervals and assay data.
4. Geologic data such as lithology, alteration, oxidation, etc.
5. Geotechnical data such as RQD (rock quality desig-
Entry of data into a computer database is a process that is
subject to a high error rate if not carefully controlled and
checked. Some procedures that may be used to ensure that the
data have been entered correctly are
1. Verification of the data using independent entry by two
persons. This is a standard procedure at many commercial data-
entry shops that may dramatically reduce data-entry errors.
2. Manual comparison of a random sample of the original
data sheets to a print-out of the database.
3. Scanning the data for outlier values. For example: drill
locations outside the project limits, high and low assays, and
sample intervals that overlap or are not continuous.
4. Comparison of computer-plotted data with manually plot-
ted maps of the same data. Collar location maps and cross sec-
tions are especially useful to rapidly locate inconsistent collar
locations and down-hole surveys.
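The interval checks in item 3 can be sketched as a short scan over one drillhole's sample intervals; the function name, the (from, to) data layout, and the tolerance are illustrative assumptions rather than conventions from this chapter.

```python
def find_interval_errors(intervals, tol=0.01):
    """Scan the (from, to) sample intervals of one drillhole and flag
    consecutive pairs that overlap or leave a gap (are not continuous)."""
    errors = []
    ivs = sorted(intervals)
    for (f1, t1), (f2, t2) in zip(ivs, ivs[1:]):
        if f2 < t1 - tol:
            errors.append(("overlap", (f1, t1), (f2, t2)))
        elif f2 > t1 + tol:
            errors.append(("gap", (f1, t1), (f2, t2)))
    return errors

# 0-10 ft overlaps 9-20 ft; 20-30 ft followed by 32-40 ft leaves a gap
print(find_interval_errors([(0, 10), (9, 20), (20, 30), (32, 40)]))
```

The same kind of scan extends naturally to the other checks in the list, such as collar coordinates outside the project limits or assays outside a plausible range.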
Additional care and attention to detail and accuracy during
data entry are essential. A database with a large number of errors
may result in a resource estimate that is inaccurate and requires
a complete revision to provide defensible results.
5.6.3 GEOLOGIC INTERPRETATION
The sample database represents a large three-dimensional
array of point locations in a deposit. The sample data are quanti-
tative and have been subjected to minimal reinterpretation after
the original measurements. There is another body of geologic
knowledge, however, that does not fit this description. This is
the interpretation resulting from the geologist’s assimilation of
the large quantity of geologic data. These interpretative data
are often represented on plan maps or cross sections that show
outlines of the extent of geologic features or iso-grade contours
that define ore zones. These interpretations combine to provide
an interpretative geologic model that is one of the most critical
factors in the resource estimation. Failure to develop an appro-
priate geologic ore body model is the most common reason for
large errors in the resource estimates. As shown in Fig. 5.6.1, an
inappropriate geologic model may lead to errors greater than an
order of magnitude.
The geologist’s interpretation of the ore body should be used
as much as possible in developing the resource estimate. There
are, however, practical limits to the amount of complexity that
can be included in the resource model, and the geologic interpre-
tation will be limited to critical inputs that define the shape and
trends of the mineral zones at different cutoff grades and the
character of the mineral zone contacts.
Examples of geologic features that are often modeled include
1. Receptive vs. nonreceptive host rocks.
2. Alteration types that accompany mineralization or create
problems in beneficiation.
3. Faulting, folding, and other structural modifications.
4. Multiple phases of mineralization.
5. Post-mineral features such as oxidation and leaching.
Changes in lithology are often important variables in re-
source estimation because mineralization can vary due to physi-
cal or chemical attributes of the rocks. The differences may be
distinct, such as the sharp contact between a skarn ore body
and an unmineralized hornfels country rock. They also may be
gradational, such as the gradual decrease in grade that is often
observed between a favorable and slightly less favorable host in
a porphyry copper deposit. Other important lithologic controls
include barren post-mineral intrusive rocks, nonreceptive shale
beds, and other unmineralized materials that are contained
within the mineralized zone.
The effects of faulting will vary according to whether the
faulting occurred before or after the mineralization, and to what
processes accompanied the faulting. A simple post-ore displace-
ment may create a discontinuity in the ore trends, preventing
simple interpolation across the fault. The same type of fault
occurring prior to mineralization may have little or no effect on
the mineralization or may localize high-grade, vein-type mineral-
ization that must be modeled independently of a more uniform
disseminated ore body. It is also important to determine whether
the fault is a thin, well-defined structure or many smaller struc-
tures in a complex, wide shear zone. In the first case, the fault
is modeled as a simple surface with no thickness; in the second,
the fault zone must be defined and modeled apart from the
adjoining rock units.
Folding is particularly significant in sedimentary and stra-
tabound deposits. Modeling of folding depends on whether fold-
ing happened before or after ore deposition, on the tendency of
the ore zoning to follow the stratigraphy, on any remobilization
that occurred with the folding, and on the creation of traps or
other favorable structures. In addition to defining the shape of
the folds, it is important to determine whether the mineralization
follows the contours of the folds or is independent of the fold structure.
Multiple phases of mineralization must be defined, particu-
larly where they complicate the ore zoning pattern through over-
lapping, discordant trends, and through post-mineral oxidation
or leaching. Secondary enrichment and oxidation will almost
always require delineation of the modified ore zones.
The character of the ore zone contact must be determined
and input into the resource model. A sharp contact will be
handled as a discontinuity and the data used strictly indepen-
dently on either side of the contact. A transitional contact, how-
ever, is a broad, gradational boundary that may require data
selection from zones of tens of feet (meters) to over 100 ft (30
m) to achieve true differentiation between the different grade
zones. As a transitional zone becomes thinner, it will eventually
approach a sharp contact. For practical purposes, any transi-
tional boundary thinner than the smallest selective mining unit
will be modeled as a discontinuity.
In addition to definition of these physical ore controls and
post-mineral modifications, a clear understanding of ore genesis
will always be beneficial in creating a resource model. In the
simplest case, the ore genesis will give clues to the behavior of the
grade distributions and variograms; in other cases, the genetic
structure is so dominant that it can be used as a direct control
in the estimation of mineral resources.
5.6.4 COMPOSITING
Compositing is a procedure in which sample assay data are
combined by computing a weighted average over longer intervals
to provide a smaller number of data with greater length for use
in developing the resource estimate. Compositing is usually a
length-weighted average. If density is extremely variable (e.g.,
massive sulfides), however, compositing must be weighted by
length times density (or specific gravity).
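The weighting rule just described can be sketched as follows; the sample data layout is an assumption for illustration.

```python
def composite_grade(samples, use_density=False):
    """Length-weighted (or length-times-density-weighted) average grade of
    a list of samples given as dicts with 'length', 'grade', and 'density'."""
    weights = [s["length"] * (s["density"] if use_density else 1.0)
               for s in samples]
    return sum(w * s["grade"] for w, s in zip(weights, samples)) / sum(weights)

samples = [
    {"length": 5.0, "grade": 0.10, "density": 2.7},
    {"length": 3.0, "grade": 0.40, "density": 4.0},  # denser sulfide interval
]
print(round(composite_grade(samples), 4))                    # 0.2125
print(round(composite_grade(samples, use_density=True), 4))  # 0.2412
```

With the denser high-grade interval, the density-weighted composite is noticeably higher than the length-weighted one, which is why density weighting matters for massive sulfides.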
Some of the reasons for and benefits of compositing include
1. Irregular length assay samples must be composited to
provide equal-sized data for geostatistical analysis.
2. Compositing reduces the number of data and may signifi-
cantly reduce computational time, which is often proportional
to the square of the number of data.
3. Compositing incorporates dilution such as that from min-
ing constant height benches in an open-pit mine or from mining
a minimum height/width in an underground mine.
4. Compositing reduces erratic variation due to a high nugget
effect caused by erratic high-grade values.
There are several different methods for compositing that may be used depending on the nature of the mineralization and the type of mining. Common compositing methods are (1) bench compositing, (2) constant-length compositing, and (3) ore-zone compositing.
346 MINING ENGINEERING HANDBOOK
Fig. 5.6.1. Overestimation of ore reserves based on a geologic model that is less continuous than the actual ore zones.
Bench compositing is a method often used for resource mod-
eling for open pit mining and is most useful for large, uniform
deposits. Composite intervals for bench compositing are chosen
at the crest and toe of the mining benches. Bench compositing
has the advantage of providing constant elevation data that are
simple to plot and interpret on plan maps. In addition, the
dilution from mining a constant-height, constant-elevation bench
is approximated by the bench composite.
Down-hole composites are computed using constant length
intervals starting from the collar of the drillhole or the top of
the first assayed interval. Down-hole composites are used when
the holes are drilled at oblique angles (45° or less) to the mining
benches, and bench composites would be excessively long.
Down-hole composites should also be used when the length of the sample interval is greater than one-third the length of the composite interval, to prevent overdilution when the sum of the lengths of the samples is much greater than the length of the composite.
Ore-zone compositing is a method of compositing that is used
to prevent dilution of the composite when the width of the
contact between waste and ore (or low grade and high grade) is
less than the length of a composite. Use of bench compositing
or down-hole compositing in this case may distort the grade
distributions by adding low grade to the ore population and high
grade to the waste population, resulting in underestimation of
ore grade and overestimation of waste grades.
Ore-zone composites are computed by first identifying the
interval containing each ore zone in the drillhole. Each ore zone
is then composited individually as follows: (1) the length of the
ore zone is divided by the desired length of the composite; (2)
this ratio is rounded up and down to determine the number of
composites that provide a length nearest the desired length when
divided into the length of the ore zone; and (3) the ore zone is
composited using length composites starting at the beginning of
the ore zone and length as determined in the previous step.
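The three steps above can be sketched as a short routine; the function and argument names are illustrative.

```python
import math

def ore_zone_composites(zone_from, zone_to, desired_len):
    """Split an ore-zone interval into equal-length composites whose
    length is nearest the desired composite length (steps 1 to 3)."""
    zone_len = zone_to - zone_from
    ratio = zone_len / desired_len                      # step 1
    lo, hi = max(1, math.floor(ratio)), math.ceil(ratio)
    # step 2: pick the count giving a composite length nearest the target
    n = min((lo, hi), key=lambda k: abs(zone_len / k - desired_len))
    step = zone_len / n                                 # step 3
    return [(zone_from + i * step, zone_from + (i + 1) * step)
            for i in range(n)]

# a 23-ft ore zone with a 10-ft target splits into two 11.5-ft composites
print(ore_zone_composites(100.0, 123.0, 10.0))  # [(100.0, 111.5), (111.5, 123.0)]
```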
A special case of ore-zone compositing is encountered in a
vein or bedded deposit in that the width of the ore zone is
determined by a combination of minimum mining thickness
(height) and assay limits. In these situations, composites must
be recomputed for each combination of assay cutoff grade and
minimum mining thickness.
Geologic codes are usually assigned to composites according
to the rock type, ore zone, or other geologic feature. This is often
a simple procedure, since most composites will be computed
from samples taken from a single geologic unit. Assignment of
geologic codes to composites that cross geologic contacts is more
complex, since the composite will be computed using data from
multiple geologic units.
If the geologic contact is transitional and does not separate
contrasting grade distributions, it is appropriate to assign the
geologic codes according to the majority rule. If the composite
crosses a sharp boundary between contrasting grade distribu-
tions, it is best to use geologic unit compositing or to assign the
composite to the geologic unit with the most similar grade.
If some sample intervals in the data are missing assays, it is
important to determine the reason for the missing data and
account for it appropriately. Typical examples are
1. The missing zone was not assayed because it was low
grade or barren by visual inspection, or the sample was missing
because of poor core recovery in a barren zone.
Action: Composite using the average of the barren unit or
zero grade for the grade of the missing assay.
2. The sample was missing because of poor core recovery in a narrow post-mineral fault.
Action: Ignore the missing interval when computing composites. The volume of the fault zone is small, and the grade will be similar to the grades in the country rock.
3. The sample was missing because of poor core recovery in a vein that is higher grade and less competent than the surrounding rock.
Action: Ignore the missing interval when computing composites, but retain the length of the interval for use in estimating the grade of the vein.
5.6.5 BASIC STATISTICS AND GRADE DISTRIBUTIONS
Computation of basic statistics and evaluation of grade dis-
tributions are the first quantitative analyses of the grade data
and are basic tools to provide both feedback to the geologic
analysis and input to the resource modeling. Important factors
in these basic studies include
1. Detection of high-grade or low-grade outlier values.
2. Evaluation of the favorability of different lithologies as hosts for mineralization.
3. Differentiation of complex grade distributions into simple
populations for resource modeling.
4. Identification of highly skewed and/or highly variable
grade distributions that will be difficult to estimate.
Basic statistics should be computed for sample and/or com-
posite grades in each geologic domain that is suspected to have
different characteristics. This may include different lithologies,
alteration types, structural domains, grade zones, or other group-
ing of data that has been recognized (or suspected) to have
different grade distributions. Statistics that should be compiled include:
1. Number of data (samples or composites).
2. Average grade, thickness, etc. (mean)
3. Standard deviation (std. dev.) and/or variance.
4. Coefficient of variation (COV), the standard deviation
divided by average grade.
5. Histogram of grades.
6. Cumulative frequency distribution (probability plot).
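Items 1 to 4 can be compiled with the standard library alone; the grade values here are invented for illustration, and the histogram and probability plot (items 5 and 6) would normally be produced with a plotting package.

```python
import statistics

def basic_stats(grades):
    """Compile the basic statistics listed above for one geologic domain."""
    mean = statistics.mean(grades)
    std = statistics.stdev(grades)          # sample standard deviation
    return {
        "n": len(grades),
        "mean": mean,
        "std_dev": std,
        "cov_pct": 100.0 * std / mean,      # coefficient of variation, %
    }

grades = [0.10, 0.15, 0.12, 0.30, 0.08, 0.22, 0.18, 0.11]
s = basic_stats(grades)
print(s["n"], round(s["mean"], 4), round(s["cov_pct"], 1))
```

A COV near 47%, as here, falls in the 25% to 100% band discussed below, suggesting a skewed, probably lognormal distribution of moderate estimation difficulty.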
The first item reviewed is the number of data; generally, at
least 25 data are required to make comparisons between different
geologic domains. If sufficient data are available, average grades
and coefficients of variation will be compared among the various
geologic domains. General rules for evaluating differences in
average grade are as follows:
0% to 25%: Grade populations that do not usually require differentiation for resource modeling.
25% to 100%: Grade populations that require differentiation for resource modeling if divided by a discontinuity such as a fault, or if variograms or grade trends are dissimilar.
Above 100%: Grade distributions must be separated for modeling. Differences of 1000% or more may be observed when barren, mineralized, and/or high-grade populations are present.
Rules for analyzing the coefficient of variation are as follows:
0% to 25%: Simple, symmetrical grade distribution. Resource estimation is easy; many methods will work well.
25% to 100%: Skewed distributions with moderate difficulty in resource estimation. Distributions are typically lognormal.
100% to 200%: Highly skewed distributions with a large grade range. Difficulty in estimating local resources is indicated.
Above 200%: Highly erratic, skewed data or multiple populations. Local grades are difficult or impossible to estimate.
Distributions with COV greater than 25% often have a log-
normal grade distribution, and the basic statistics will also be
compiled for the natural logarithms of grades. For a perfectly lognormal distribution, the lognormal statistics are related to the normal statistics as follows:
mean = exp(α + β^2/2)
standard deviation = mean × sqrt(exp(β^2) − 1)
where α is the average of the natural logarithms of grades, and β is the standard deviation of the natural logarithms of grades.
Close agreement between the mean, standard deviation, and
coefficient of variation when estimated using both normal and
lognormal statistics is indicative of a lognormal population and
is required to use lognormal statistics.
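These relations can be checked numerically. The sketch below simulates lognormal grades and compares the directly computed mean and standard deviation with the values implied by the log statistics; the moment formulas used are the standard lognormal ones, and the parameter values are arbitrary.

```python
import math
import random

random.seed(1)
alpha, beta = math.log(0.5), 0.8   # mean and std dev of the log grades
grades = [random.lognormvariate(alpha, beta) for _ in range(200_000)]

n = len(grades)
mean = sum(grades) / n
std = math.sqrt(sum((g - mean) ** 2 for g in grades) / (n - 1))

# values implied by the lognormal statistics
mean_ln = math.exp(alpha + beta ** 2 / 2)
std_ln = mean_ln * math.sqrt(math.exp(beta ** 2) - 1)

print(round(mean, 3), round(mean_ln, 3))   # the two means should nearly agree
print(round(std, 3), round(std_ln, 3))     # likewise the standard deviations
```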
5.6.6 GRADE DISTRIBUTION
The grade histogram and cumulative frequency distribution
are used to study the relationship between the statistical grade
distribution and geologic parameters. The analysis is usually
begun with a histogram of sample or composite grades. If the
histogram is bell-shaped and symmetrical, a normal distribution
is indicated, and the cumulative frequency will be plotted on
normal probability paper. Normal distributions are not usually
found in mineral deposits except those with sedimentary origins.
If the histogram is skewed to the right, a lognormal distribu-
tion is indicated and the cumulative frequency distribution will
be plotted on lognormal probability paper. Lognormal distribu-
tions are frequently observed in most hydrothermal precious and
base metal deposits.
Normal probability paper is a special graph paper in that
the y-axis is a cutoff grade and the x-axis is the percentage of
samples above (or below) the cutoff grade. The x-axis is scaled
such that a normal distribution will plot as a straight line, the slope of the line is proportional to the standard deviation of the distribution, and the 50th percentile is the average grade.
Fig. 5.6.2. Lognormal probability plotting paper.
Lognormal probability paper is similar to normal probability paper except that the y-axis is scaled according to the logarithm of cutoff grade. The slope of the line is proportional to the standard deviation of the logarithms of grade, and the 50th percentile is the average of the logarithms of grades. An example of lognormal probability paper is shown in Fig. 5.6.2.
The probability graph may be used to estimate the standard deviation based on probabilities from the normal probability distribution as follows:
s = (g2.3% − g97.7%) / 4
which is based on ±2 standard deviations, or
s = (g2% − g98%) / 4.1
which is based on ±2.05 standard deviations, where gp% is the grade exceeded by p% of the samples.
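Read off numerically instead of graphically, the same estimate can be sketched as below; the percentile levels assume the ±2 standard deviation rule just described, and the helper name is illustrative.

```python
import random
import statistics

def std_from_probability_plot(grades, pct=2.3):
    """Estimate the standard deviation from the tails of the distribution:
    with pct = 2.3, the grades exceeded by 2.3% and by 97.7% of samples
    sit near +2 and -2 standard deviations, so their spread is about 4s."""
    qs = statistics.quantiles(grades, n=1000)   # cut points at 0.1% steps
    hi = qs[round((100 - pct) * 10) - 1]        # grade at the upper tail
    lo = qs[round(pct * 10) - 1]                # grade at the lower tail
    return (hi - lo) / 4.0

random.seed(2)
grades = [random.gauss(1.0, 0.25) for _ in range(100_000)]
print(round(std_from_probability_plot(grades), 3))   # close to the true 0.25
```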
Often the probability graph is not a straight line, but will be
composed of multiple straight lines or curves. A typical deviation
from a straight line is a downward curve at the low end of
the graph as shown in Fig. 5.6.3. This curve represents excess
low-grade samples, and in porphyry systems is often attributed
to weakly mineralized late intrusions or to post-mineral, barren
dikes. In deposits with a low coefficient of variation, this type of graph may also represent a normal distribution that has been plotted on lognormal probability paper. The data should be examined to
determine the source of the low-grade material and to determine
whether that population has been or can be mapped geologically
and estimated separately.
Another common deviation from a straight line on the prob-
ability plot is a steeper slope at the upper end of the curve
as shown in Fig. 5.6.4. This represents excess material in the
high-grade population and may be caused by two superimposed
populations, such as high-grade veins within lower-grade dissem-
inated or stockwork mineralization. Other causes of excess high-
grade assays include small zones that are highly favorable to
mineralization because of higher permeability, favorable chemi-
cal properties, secondary enrichment, or metamorphic remobili-
zation. Since the high-grade mineralization usually has less conti-
nuity than the lower-grade mineralization, the source of the high
grade must usually be identified and estimated separately from
the remaining mineralization.
5.6.7 VARIOGRAM MODELING
The variogram is the fundamental tool used by the geostatis-
tician and geologist to measure spatial continuity of grade data.
The variogram is a graph of the average variability between
samples vs. the distance between samples. A variogram is com-
puted by averaging the squared differences between pairs of
samples that are a given distance apart as follows:
γ(h) = [1/(2N)] Σ [g(xi) − g(xi + h)]^2
where N is the number of pairs at distance h, h is the distance between the samples, and g(x) is the grade at location x.
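A minimal omnidirectional version of this computation (ignoring the directional tolerances discussed later in this section) might look like the following; the data and cell size are illustrative.

```python
import math
import random
from itertools import combinations

def experimental_variogram(points, cell=10.0, n_cells=8):
    """Omnidirectional experimental variogram of (x, y, grade) data: for
    each distance cell, average one-half the squared grade differences
    over the N pairs whose separation falls in that cell."""
    sums = [0.0] * n_cells
    counts = [0] * n_cells
    for (x1, y1, g1), (x2, y2, g2) in combinations(points, 2):
        h = math.hypot(x2 - x1, y2 - y1)
        k = int(h // cell)
        if k < n_cells:
            sums[k] += (g1 - g2) ** 2
            counts[k] += 1
    # (cell midpoint, gamma(h), number of pairs) for each populated cell
    return [(cell * (k + 0.5), sums[k] / (2 * counts[k]), counts[k])
            for k in range(n_cells) if counts[k] > 0]

# illustrative data: uncorrelated grades on a 5 x 5 grid at 10-ft spacing,
# so the variogram fluctuates around the sample variance (pure nugget)
random.seed(3)
points = [(x, y, 1.0 + random.gauss(0, 0.2))
          for x in range(0, 50, 10) for y in range(0, 50, 10)]
for h, gamma, n in experimental_variogram(points):
    print(f"h={h:5.1f}  gamma={gamma:.4f}  pairs={n}")
```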
Fig. 5.6.3. Deviation from a lognormal distribution that is caused by excess low-grade samples.
Fig. 5.6.4. Deviation from a lognormal distribution that is caused by excess high-grade samples.
The variogram function γ(h) is computed for a number of different sample distances to provide an experimental variogram
that typically looks like the graph in Fig. 5.6.5. The most impor-
tant features of the variogram are the nugget, range, and sill.
The nugget value is identified as the y-intercept of the variogram
curve and represents random and short-distance variability fac-
tors such as sampling error, assaying error, and erratic mineral-
ization. High nugget values are commonly found in ore bodies
where short distance variability is extremely high, where accu-
rate sampling and assaying of ore is difficult, or where poor
sampling and assaying techniques are employed. High nugget
effects are found in many gold deposits because of random gold
nuggets that cause large grade changes over small distances.
Similar high nugget values are often found in molybdenum de-
posits; these are caused by small pockets of pure molybdenite in
a disseminated or stockwork mineralization.
Small nugget values indicate an ideal situation reflecting
good sampling techniques and locally continuous mineralization.
Fig. 5.6.5. Typical experimental variogram plot.
A small nugget value on a variogram confirms that the assays
can be reliably used for geologic interpretation and resource
estimation. Low nugget values are typically found in many types
of deposits, including hypogene porphyry copper, iron ore, and
coal. High nugget values have also been found for each of these
types of deposits so each deposit must be analyzed individually.
Most variograms increase in value from the nugget for some
distance and then level off to a constant value. This distance is
called the range of the variogram, and the variogram value is
called the sill. The range is equivalent to the geologist’s concept
of range of influence, that is, the distance beyond which samples
are not correlated with other samples and beyond which grade
trends should not be projected. The sill value is usually equal to
the sample variance. If the sill is higher or lower than the variance, zonal effects or multiple grade distributions are usually indicated.
The slope and shape of the variogram often vary in different
directions, with the range increasing in the direction of greatest
continuity of the mineralization. This behavior is referred to as
a geometric anisotropy.
5.6.7.1 Computing an Experimental Variogram
Computing an experimental variogram from a set of ran-
domly spaced data involves finding pairs of data that are oriented
in the required direction, determining the distance between the
samples, then summing the squared differences of the grades.
Since the data are usually sparse, it is necessary to use a tolerance
when locating samples in the desired direction and to use a
distance increment to classify samples by distance. The direc-
tional tolerance is usually achieved with a window angle, or a
fixed distance, as shown in Fig. 5.6.6. These methods may be
combined and/or generalized into three dimensions as shown in
Fig. 5.6.7. The distance tolerance is a fixed distance increment
(cell size), selected so a reasonable number of samples fall in
each cell. Some guidelines to aid in computing experimental variograms are:
1. Variograms must be computed within continuous zones of mineralization. Do not cross contacts between different geologic units.
2. The maximum distance used should be less than one-half the length of the mineralized zone in the direction of the variogram.
3. The maximum search distance perpendicular to the direction of the variogram must be less than one-half the range of the variogram in the perpendicular direction.
Fig. 5.6.6. Angular and fixed-distance tolerance methods for selecting variogram pairs.
Fig. 5.6.7. Composite and three-dimensional methods for selecting variogram pairs.
Fig. 5.6.8. Experimental variogram modeled with a spherical variogram model.
4. The distance increment should be approximately equal to the average spacing between samples in the direction of the variogram.
5. At least 30 pairs of samples are required to compute a
valid variogram. More pairs produce a more stable variogram.
6. All samples must be the same size and should be obtained
by the same or similar sampling methods.
7. Data should be declustered before computing the vario-
gram. In particular, a few twin holes may give a misleading
impression of the nugget effect.
A model, or equation, is fitted to the experimental variogram
for further geostatistical evaluations such as kriging. The most
common variogram model is the spherical model shown in Fig.
5.6.8. This model has the equation
γ(h) = C0 + C[1.5(h/a) − 0.5(h/a)^3] for h ≤ a
γ(h) = C0 + C for h > a
where C0 is the nugget, C is the sill, and a is the range.
A spherical variogram model may be constructed graphically by drawing a horizontal line at the variogram value equal to the variance of the samples. This value is equal to the nugget plus the sill, C0 + C. A line is drawn through the points at the short-distance end of the curve. The nugget C0 is estimated where the line intersects the y-axis, and the range a is estimated as 1.5 times the distance where the line intersects the variance.
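As a sketch, the spherical model can be evaluated directly; the parameter values are arbitrary.

```python
def spherical_variogram(h, nugget, sill, a):
    """Spherical variogram model: zero at the origin, rising from the
    nugget, and flat at nugget + sill once h reaches the range a."""
    if h == 0:
        return 0.0
    if h >= a:
        return nugget + sill
    r = h / a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

# nugget 0.1, sill 0.4, range 100 ft: check the key values
print(spherical_variogram(0, 0.1, 0.4, 100))                 # 0.0 at the origin
print(round(spherical_variogram(50, 0.1, 0.4, 100), 4))      # 0.375 partway up
print(spherical_variogram(150, 0.1, 0.4, 100))               # 0.5 beyond the range
```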
Other variogram models used in resource estimation include
the exponential, linear, hole effect, and various combinations of
“nested” structures. Examples of some of these variograms are
shown in Figs. 5.6.9 to 5.6.11.
5.6.7.2 Relative Variograms
Lognormally distributed data often exhibit a proportional
effect where the standard deviation of grades increases with
grade. This results in variograms with higher values in high-
grade areas than in low-grade areas. This may be corrected by
dividing each cell in the experimental variogram by the square
of the mean of the samples that were used in the variogram for that cell. The resulting variogram is known as a relative variogram.
Fig. 5.6.9. Experimental variogram modeled with a linear variogram model.
Fig. 5.6.10. Experimental variogram modeled with an exponential variogram model.
5.6.7.3 Lognormal Variograms
If data are clearly lognormal, a variogram may be computed
using the logarithms of sample grades. The resulting lognormal
variogram is often less erratic and more easily interpreted than
the variogram of untransformed values. This variogram may be
used directly for lognormal geostatistics or may be transformed to a relative variogram as follows:
γR(h) = exp(β^2)[1 − exp(−γL(h))]
where β^2 is the variance of the natural logarithms of grades and γL(h) is the lognormal variogram.
Caution must be exercised when using the lognormal vario-
gram since small deviations from lognormality may have large
effects on the transformation to a relative variogram.
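A numerical sketch of the transformation follows. The specific form used, γR(h) = exp(β^2)[1 − exp(−γL(h))] for strictly lognormal grades with log-variance β^2, is stated here as an assumption and should be checked against a geostatistics reference before use.

```python
import math

def relative_from_lognormal(gamma_log, log_var):
    """Transform a lognormal variogram value gamma_log into a relative
    variogram value, assuming strictly lognormal grades whose natural
    logarithms have variance log_var (assumed form of the transform)."""
    return math.exp(log_var) * (1.0 - math.exp(-gamma_log))

# consistency check: at the sill, gamma_log = log_var, and the relative
# variogram should equal exp(log_var) - 1, the squared coefficient of
# variation of lognormal data
log_var = 0.64
at_sill = relative_from_lognormal(log_var, log_var)
print(round(at_sill, 4), round(math.exp(log_var) - 1.0, 4))
```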
Fig. 5.6.11. Experimental variogram modeled with nested spherical
and linear variogram models.
5.6.8 RESOURCE ESTIMATION (MODELING)
Methods for resource estimation or modeling are generally
divided into (1) traditional, geometric methods that are done
manually on plans or sections and (2) interpolation methods
such as inverse-distance-weighting and kriging that require the
use of a computer.
5.6.8.1 Geometric Methods
Manual resource estimations are usually done on plan maps
or cross-section maps that cut the deposit into sets of parallel
slices. Data plotted on the maps include drillhole locations, assay
values, and the geologic interpretation of the mineralization con-
trols. The geometric methods are based on geometric weighting of assays and include the area-averaging, polygonal, cross-sectional, and triangular methods.
Area Averaging: The area-averaging method is among the
simplest of all reserve estimation methods, involving only a geo-
logic interpretation of the shape of the ore and averaging of the
grades within that shape as follows:
1. Draw the outline of the ore body on each map; these are
the ore blocks and may be regular or irregular shapes. If several
ore zones or ore types are present, each is drawn individually.
2. Measure the area of each ore block (usually by planime-
tering). Multiply the area times the thickness of the ore and
divide the resulting volume (cubic feet) by the tonnage factor
(cubic feet per ton) to compute tons of ore; in SI units, multiply
the volume (cubic meters) by the density (tonnes per cubic meter)
to compute the tonnes of ore.
3. Compute the average grade of samples within each block.
4. Calculate the sum of the tonnage in the individual blocks.
Average grade is the tonnage-weighted average grade of the
individual blocks.
Despite its simplicity, the area-averaging method provides
excellent estimates where the drilling pattern is uniform, grades
are continuous, and ore boundaries are distinct and sharp. Prob-
lems may arise, however, when the drill pattern is nonuniform.
With a nonuniform drill pattern, a cluster of holes in a high-
grade zone will cause overestimation of grade. Area-averaging
methods also may be difficult to implement on deposits with
discontinuous or spotty ore zones, especially if the ore contacts
are gradational, and multiple cutoff grades are desired.
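The four-step area-averaging calculation can be sketched as follows. The block areas, thickness, and tonnage factor below are hypothetical, and the helper names are illustrative.

```python
def block_tons(area_ft2, thickness_ft, tonnage_factor):
    """Tons in one ore block: area x thickness gives volume in cubic
    feet, which is divided by the tonnage factor (cubic feet per ton)."""
    return area_ft2 * thickness_ft / tonnage_factor

def area_average_estimate(blocks):
    """blocks: list of (tons, average grade) pairs, one per ore block.
    Returns total tons and the tonnage-weighted average grade."""
    total_tons = sum(t for t, g in blocks)
    avg_grade = sum(t * g for t, g in blocks) / total_tons
    return total_tons, avg_grade

# Hypothetical example: two ore blocks outlined on a plan map,
# both 20 ft thick, with a tonnage factor of 12.5 ft^3/ton.
b1 = (block_tons(50_000, 20, 12.5), 0.30)   # 80,000 tons at 0.30
b2 = (block_tons(30_000, 20, 12.5), 0.18)   # 48,000 tons at 0.18
tons, grade = area_average_estimate([b1, b2])
```

In SI units, step 2 of the procedure would instead multiply cubic meters by density (tonnes per cubic meter) rather than divide by a tonnage factor.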
Hole   Grade   Area
 1     0.12    39.4
 2     0.21    37.6
 3     0.17    42.0
 4     0.50    37.7
 5     0.33    33.8
 6     0.05    50.1
 7     0.26    46.8

Total Area = 333.7
Total (Area × Grade) = 71.39

Fig. 5.6.12. Computation of an estimate using the polygonal method.
Polygonal and Cross-sectional Methods: Polygonal and
cross-sectional methods are related methods in that each ore
interval is assigned its own polygon of influence. Tonnage and
grade is then computed using the same procedure as was used for
the area-average method, except that the areas used to compute
tonnage are the area of each individual polygon. Polygons are
drawn on plan maps based on the perpendicular bisectors of the
line between each drillhole as shown in Fig. 5.6.12. The size and
shape of the polygons may be limited, if desired, by a maximum
distance from each hole. On cross sections, the polygons are
usually drawn one-half the distance from each drillhole as shown
in Fig. 5.6.13. The distance from a drillhole may also be limited
to a maximum distance in the cross-sectional method.
A computer approximation of the polygonal method is the
nearest neighbor estimation. This method requires superposition
of a rectangular grid of blocks over the drilled area as shown in
Fig. 5.6.14. The grade of the nearest sample is then assigned to
each block. This method will closely approximate the polygonal
method if the block size is no more than 25% of the average
drillhole spacing.
Triangular Method: The triangular method is similar to the
polygonal method except that areas of triangles are estimated,
and the grade of each triangle is based on the average of the
grades at each of the corners of the triangles as shown in Fig.
5.6.15.
The geometric methods all have the advantage of simplicity
and ease of implementation. In addition, they will provide an
unbiased estimate of the average grade of a deposit at a zero
cutoff grade. A resource estimate using a geometric method
provides a quick, inexpensive check to verify nonbias of a more
complicated, computer-generated resource model.
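The nearest-neighbor approximation described above can be sketched as a brute-force search over block centers; coordinates and grades below are hypothetical, and the function name is illustrative.

```python
def nearest_neighbor_model(block_centers, samples):
    """Assign to each block the grade of the nearest sample, a
    computer approximation of the polygonal method.

    block_centers -- list of (x, y) block-center coordinates
    samples       -- list of (x, y, grade) drillhole intercepts
    """
    model = []
    for bx, by in block_centers:
        # Squared distance is sufficient for finding the minimum.
        nearest = min(samples, key=lambda s: (s[0] - bx) ** 2 + (s[1] - by) ** 2)
        model.append(nearest[2])
    return model

# Hypothetical 2 x 2 grid of block centers with two drillholes.
centers = [(0, 0), (10, 0), (0, 10), (10, 10)]
holes = [(1, 1, 0.30), (9, 8, 0.10)]
grades = nearest_neighbor_model(centers, holes)
```

For large models this linear search is usually replaced by a spatial index, but the assignment rule is the same: each block takes the grade of its closest sample.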
Fig. 5.6.13. Computation of an estimate using the cross-sectional method.

Fig. 5.6.14. Computation of an estimate using the nearest-neighbor method (total blocks = 114; sum of blocks × grade = 24.21; average grade = 0.2124).
Fig. 5.6.15. Computation of an estimate using the triangular method.

The most common problem with geometric methods is that
they may imply more selective mining than can be achieved by
the mining method. This results from estimating the resource
from samples the size of a drillhole but mining larger, less selec-
tive volumes. High-grade blocks usually include lower-grade ma-
terial when they are mined, and low-grade blocks usually include
some higher-grade material. The resulting mined grades are dif-
ferent from the predicted distribution; for cutoff grades below
the average grade of the deposit, the mined grade will be lower
and the tons will be higher. If the cutoff grade is significantly
higher than the average grade of the deposit, however, both
the mined grade and tons can be lower, resulting in a severe
overestimation of contained metal.
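This smoothing effect can be demonstrated with a small synthetic simulation: skewed "point" (drillhole-sized) grades are compared against block grades formed by averaging groups of points. The distribution parameters, group size, and cutoff are all hypothetical; the simulation only illustrates the support effect described above.

```python
import random

random.seed(7)
# Simulate 4,000 skewed "point" grades (drillhole-sized samples).
points = [random.lognormvariate(-1.5, 0.8) for _ in range(4000)]
# Form block grades by averaging groups of 16 points; this mimics the
# larger, less selective volumes actually mined.
blocks = [sum(points[i:i + 16]) / 16 for i in range(0, len(points), 16)]

def grade_above_cutoff(values, cutoff):
    """Fraction of values above cutoff and their average grade."""
    above = [v for v in values if v >= cutoff]
    frac = len(above) / len(values)
    avg = sum(above) / len(above) if above else 0.0
    return frac, avg

# A cutoff above the mean grade (about 0.3 for these parameters)
# illustrates the severe-overestimation case from the text.
cutoff = 0.4
frac_pts, g_pts = grade_above_cutoff(points, cutoff)
frac_blk, g_blk = grade_above_cutoff(blocks, cutoff)
# The smoothed block grades yield both a smaller fraction above
# cutoff and a lower average grade above cutoff than the point
# samples predict.
```

The block distribution has the same mean as the point distribution but a much smaller spread, which is exactly why estimates made from drillhole-sized samples overstate the selectivity achievable by mining.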
5.6.8.2 Moving Average Methods
The moving average methods, inverse-distance weighting and
kriging, are the most widely used procedures for computer-
assisted resource estimation. The basic procedure for both of
these methods is as follows:
1. Divide the ore body into a matrix of rectangular blocks
as shown in Fig. 5.6.16.
2. If geologic controls are present and will be used to control
or modify grade assignment, a geologic code must be assigned
to each block.
3. Estimate the grade of each block by searching the database
for the samples surrounding each block and computing the
weighted average of those samples. The weighted average is
computed using the following equation:
g* = (Σᵢ₌₁ⁿ wᵢ gᵢ) / (Σᵢ₌₁ⁿ wᵢ)

where g* is the estimated grade, gᵢ is the grade of sample i, wᵢ is
the weight given to sample i, and n is the number of samples used
in the estimate.
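A minimal sketch of this weighted average for one block, using inverse-distance weights (wᵢ = 1/dᵢᵖ), is shown below. The coordinates, grades, and power are hypothetical, and the function name is illustrative; kriging would use the same equation with weights derived from the variogram instead.

```python
def inverse_distance_estimate(block_xy, samples, power=2.0):
    """Weighted-average grade of one block with weights w_i = 1/d_i**power.

    block_xy -- (x, y) of the block center
    samples  -- list of (x, y, grade), already selected by the search
    """
    bx, by = block_xy
    num = 0.0
    den = 0.0
    for sx, sy, g in samples:
        d2 = (sx - bx) ** 2 + (sy - by) ** 2
        if d2 == 0.0:
            return g  # sample falls exactly on the block center
        w = 1.0 / d2 ** (power / 2.0)
        num += w * g
        den += w
    return num / den

# Hypothetical block at the origin with three surrounding samples.
est = inverse_distance_estimate(
    (0.0, 0.0),
    [(3.0, 4.0, 0.20),    # distance 5
     (6.0, 8.0, 0.50),    # distance 10
     (0.0, 5.0, 0.10)])   # distance 5
```

With power = 2, the two samples at distance 5 each carry four times the weight of the sample at distance 10, so the estimate is pulled toward their grades.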
5.6.8.3 Practical Considerations for Moving Average Methods
The determination of the block size, anisotropies, and the
sample selection criteria are common considerations for either