Getting started and unique ID numbers
A point layer must be loaded into ArcMap in order to use this tool. The point attribute table must contain an integer field that
represents unique ID numbers for groups of points. This number forms the basis for the batch processing. For instance, if your
points represent telemetry locations then the unique ID number would represent different animals, or different animals in different
seasons (depending on how you want to partition your data). Each kernel is calculated using only the points corresponding to
that unique ID number. Kernels can be calculated with as few as one point, so you need not be concerned that a low sample size will
cause this tool to fail. If you wish to use point weights in the kernel density estimate,
ensure that the attribute table contains a numerical field with a weight for each point. Note that a weight of 1 is neutral, and a
weight of 0 effectively eliminates the point from the dataset (the point contributes nothing to the density estimate). Negative
weights will produce nonsensical output or errors during processing.
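The effect of weights on the density estimate can be illustrated with a small sketch (a hypothetical weighted Gaussian kernel in Python for one cell; this is not the tool's actual implementation):

```python
import math

def weighted_kde(points, weights, h, cell):
    """Weighted 2-D Gaussian kernel density at a single raster cell.

    points  : list of (x, y) tuples for one unique ID
    weights : per-point weights (1 = neutral, 0 = point ignored)
    h       : smoothing factor (bandwidth)
    cell    : (x, y) centre of the cell being evaluated
    Hypothetical helper for illustration only.
    """
    cx, cy = cell
    total_w = sum(weights)
    density = 0.0
    for (x, y), w in zip(points, weights):
        d2 = (cx - x) ** 2 + (cy - y) ** 2
        density += w * math.exp(-d2 / (2.0 * h * h))
    # normalise so the surface integrates to 1 over the plane
    return density / (total_w * 2.0 * math.pi * h * h)
```

A weight of 0 contributes nothing to the sum, which is why such points are effectively eliminated, and a negative weight can drive the density below zero, which is meaningless for a probability density.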
Extent of output rasters
The extent of each output raster is determined automatically (i.e. any Spatial Analyst extent settings you may have configured are
ignored, as this tool is independent of Spatial Analyst). However, there are two options that control the extent of the rasters:
- FULL EXTENT: all of the rasters have identical extents and the extent is calculated as: the extent of all the point data +
the smoothing factor. This option is very useful if you wish to subsequently combine the rasters (because if the extents do not
overlap, then the output of Raster Calculator expressions is limited to the area of overlap of all the input layers, and all other cells
receive a NoData value in the output).
- SMALLEST EXTENT: in this case, the extent of the output raster is minimized for each unique ID using this calculation:
the extent of the points for only that unique ID + the smoothing factor. The benefit of this option is that there is the potential to
create much smaller output files.
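The two extent options amount to the following calculation (an illustrative Python sketch; the tool's internal computation may differ in details such as cell alignment):

```python
def raster_extent(points, h):
    """Bounding box of a set of points, expanded by the smoothing factor h."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - h, min(ys) - h, max(xs) + h, max(ys) + h)

def extents(groups, h, full_extent):
    """Extent per unique ID.

    groups      : dict mapping unique ID -> list of (x, y) points
    full_extent : True  -> FULL EXTENT (one shared box for all IDs)
                  False -> SMALLEST EXTENT (one box per ID)
    """
    if full_extent:
        all_pts = [p for pts in groups.values() for p in pts]
        box = raster_extent(all_pts, h)
        return {uid: box for uid in groups}
    return {uid: raster_extent(pts, h) for uid, pts in groups.items()}
```

Under FULL EXTENT every raster shares one bounding box, which is what makes the outputs directly combinable in Raster Calculator.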
Output folder and raster naming
As this tool can generate many different output layers, the user is asked to specify an output folder (preferably a new, empty folder).
An empty folder ensures that the program does not encounter naming conflicts with pre-existing data layers.
The first and preferred naming convention this tool will attempt to implement is: the prefix you specify + the unique ID. If the length
of this name is greater than 14 characters, or a raster of that name already exists (for ANY of the output rasters) then the tool
switches to an automated naming convention: the prefix you specify + an arbitrary number that results in a unique file name. This
arbitrary number begins at 1, and increments until a unique file name results. Because it would be difficult to associate an arbitrarily
named raster file with the corresponding unique ID, the tool also then creates a text file called rasternames.txt in the output folder
that maps each arbitrary raster name to the input data file and the unique ID number. This is a much less convenient naming system
to work with, so it is highly recommended that you define a short prefix name and use unique ID numbers that are less than 6 digits long.
This will enable the first, more intuitive naming convention to be used.
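The naming logic described above amounts to something like this sketch (hypothetical Python; `existing` stands for the set of lowercased raster names already present in the output folder):

```python
def raster_name(prefix, unique_id, existing):
    """Return (name, fallback_used) for one output raster.

    Preferred: prefix + unique ID, if at most 14 characters and not taken.
    Fallback:  prefix + an arbitrary number starting at 1, incremented
               until the name is unique.
    (Simplified illustration: does not re-check the length of fallback
    names, and is not the tool's actual code.)
    """
    name = "%s%s" % (prefix, unique_id)
    if len(name) <= 14 and name.lower() not in existing:
        return name, False
    n = 1
    while ("%s%d" % (prefix, n)).lower() in existing:
        n += 1
    return "%s%d" % (prefix, n), True
```

When the fallback fires for any raster, the tool writes rasternames.txt so the arbitrary names can still be traced back to their unique IDs.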
The smoothing factor
The smoothing factor (also referred to as the bandwidth or h statistic) controls the degree of smoothing applied to the kernel density
estimate. There are a number of ways of determining what the smoothing factor should be. Two objective approaches include the href
estimate and least-squares cross validation (LSCV). The appropriateness of these estimates depends to a large degree on the nature of
your data, and there is some evidence that neither of these estimates performs particularly well. In most of the applications I deal
with, an estimate of the smoothing factor based on expert biological knowledge and careful inspection of the resulting kernel density
estimate is the best approach.
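For reference, one common formulation of the href estimate for a two-dimensional Gaussian kernel (Worton's reference bandwidth) is h = sigma * n^(-1/6), with sigma derived from the variances of the x and y coordinates. A sketch of that formulation, with no claim that it matches this tool's exact formula:

```python
import math

def href(points):
    """Reference bandwidth (h_ref) for a 2-D Gaussian kernel.

    One common formulation: h = sigma * n**(-1/6), where
    sigma = sqrt((var_x + var_y) / 2). Illustrative only.
    """
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    var_x = sum((x - mx) ** 2 for x in xs) / n
    var_y = sum((y - my) ** 2 for y in ys) / n
    sigma = math.sqrt(0.5 * (var_x + var_y))
    return sigma * n ** (-1.0 / 6.0)
```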
The scaling factor
Kernel density estimates often produce very small numbers, e.g. 0.000000147. The scaling factor simply multiplies these small values
by a constant (e.g. 1000000). It is usually very wise to use a scaling factor, especially because Grids only permit the storage of
single precision floating point numbers. If you do not use a scaling factor, you are likely to lose a great deal of precision in the
kernel density estimate as a result of the density values being truncated. The important thing to note about the scaling factor is
that the relative values in the output cells are the same; it is simply the units of the density estimate that change. There is
therefore no downside to using a scaling factor, and it is highly recommended that you do so. (The scaling factor does not affect the
percent volume contours either.)
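The claim that scaling changes only the units of the estimate, leaving the relative cell values (and therefore the contours) untouched, is easy to verify (the density values here are purely illustrative):

```python
# Illustrative unscaled cell densities, not from any real dataset.
densities = [1.47e-7, 2.9e-8, 9.1e-8]

# The same cells after applying a scaling factor of 1,000,000.
scaled = [d * 1_000_000 for d in densities]

# Each cell's value relative to the maximum cell is unchanged,
# which is why percent volume contours are unaffected by scaling.
ratios_raw = [d / max(densities) for d in densities]
ratios_scaled = [s / max(scaled) for s in scaled]
```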
Percent volume contours
Note that a percent volume contour is not the same as the simple contours that are typically produced in tools like Spatial Analyst. A percent
volume contour represents the boundary of the area that contains x% of the volume of a probability density distribution. A simple
contour (like the ones produced in Spatial Analyst) represents only the boundary of a specific value of the raster data, and does not
in any way relate to probability. For applications like animal home range delineation it is the percent volume contour that is required. The 95%
volume contour would therefore on average contain 95% of the points that were used to generate the kernel density estimate.
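One way a percent volume contour threshold could be derived from a density raster is to rank the cells by density and accumulate the highest-density cells until they account for the requested share of the total volume (an illustrative Python sketch assuming equal-area cells, not necessarily this tool's algorithm):

```python
def volume_threshold(cell_densities, cell_area, percent):
    """Density threshold whose upper set contains `percent`% of the
    total volume under the density surface.

    Cells with density at or above the returned value form the
    percent volume contour region. Illustrative sketch only.
    """
    total = sum(cell_densities) * cell_area
    target = total * percent / 100.0
    acc = 0.0
    for d in sorted(cell_densities, reverse=True):
        acc += d * cell_area  # volume contributed by this cell
        if acc >= target:
            return d
    return 0.0
```

Tracing the boundary of the cells at or above this threshold yields the percent volume contour.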