Considerations in Particle Sizing

Part 2: Specifying a Particle Size Analyzer

Particle Sciences - Technical Brief: 2009: Volume 7

In Part 1 of this guide (Technical Brief 2009 Volume 6) we stated that the aim is to provide a pathway through the decision-making process of choosing a particle sizing analyzer by means of asking and answering three general questions:

  1. How do I classify the various techniques?
  2. How do I set specifications (quantitative or qualitative)?
  3. Which technique(s) have the best chance of solving my problems?

We started by classifying the different particle sizing techniques in four ways: (i) size range, (ii) degree of separation (i.e., fractionation), (iii) imaging vs. non-imaging methods and (iv) weighting: intensity, volume, surface and number.

Information Content

A fifth way to classify a particle sizer is by information content. This final major classification revolves around the amount of information required to solve a particular problem. There are two key questions to ask that determine which techniques are useful.

  1. What do you want? Averages, widths, tables & graphs, etc.
  2. How will you use it? Process control, QC or R&D applications

If all that is needed is an average size, then a single-moment instrument is sufficient. For average and width, an ensemble averaging instrument is sufficient. However, the more information needed, the more resolution is required. But beware the "zero-to-infinity" trap set by over-hyped marketing claims made for many instruments.

Answering the first question (What do you want?) may not be easy but often follows from the answer to the second question. For example, in most process control environments, varying a single parameter is reasonable; varying multiple parameters is difficult. In this case, opt for one piece of information, which might be 90% of particles less than a stated size. For QC, an average and a measure of distribution width is often sufficient, though sometimes the second piece of information is nothing more than a spec such as d90 < 2 µm. Generally, only in an R&D environment is it useful to consider asking for more information. Additional size distribution information, often hard to come by reliably, might be the skewness of a single, broad distribution, the size and relative amounts of several peaks in a multi-modal distribution, or the existence of a few particles at one extreme of a distribution. Where the distribution has several closely spaced features, a true high-resolution technique is imperative.
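A specification such as d90 < 2 µm can be checked directly from a cumulative volume distribution. The sketch below is a minimal Python illustration using invented bin data (each bin's size is treated as its upper edge); real instruments report many more size channels:

```python
# Hypothetical volume-weighted size distribution: (upper bin edge in um, volume fraction).
bins = [(0.5, 0.05), (1.0, 0.25), (1.5, 0.40), (2.0, 0.25), (3.0, 0.05)]

def d_percentile(bins, p):
    """Interpolate the size below which fraction p of the volume lies (p=0.9 gives d90)."""
    cum = 0.0
    prev_size, prev_cum = 0.0, 0.0
    for size, frac in bins:
        cum += frac
        if cum >= p:
            # Linear interpolation between the previous and current bin edges.
            return prev_size + (size - prev_size) * (p - prev_cum) / (cum - prev_cum)
        prev_size, prev_cum = size, cum
    return bins[-1][0]

d90 = d_percentile(bins, 0.90)
print(f"d90 = {d90:.2f} um")  # with these assumed bins, d90 = 1.90 um, so d90 < 2 um passes
```

The same interpolation gives d10 or d50 by changing `p`, which is how a width spec such as the span (d90 - d10)/d50 would be evaluated.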

Specifications

Specifications are of two types: quantitative and qualitative.

Quantitative

Specifications of this type comprise size range, throughput and definitions: accuracy, precision, reproducibility and resolution.

Size Range

This was discussed in Part 1 in the section on Classification of Techniques (Figure 1).

Throughput

The novice often mistakenly assumes that the measurement duration is sufficient to characterize the typical time per sample. Sometimes the measurement duration is only a fraction of the actual time per cycle. Throughput is the total sum over all of the following: sample preparation, analysis, data reduction/printout/interpretation, and cleanup.

Throughput is probably most important to a QC laboratory where, often, large numbers of samples must be run in one day. Speed of analysis is sometimes a major consideration even for one measurement in process control applications. Sample preparation may be as short as a few minutes or require overnight. Warm-up, calibration or instrument adjustment all add to the overall time. Generally, with most modern instruments the actual measurement or analysis time can be short. Yet, for broad distributions, sieving and sedimentation techniques (including field-flow fractionation) are relatively slow compared to most forms of light scattering. Single particle counting (SPC) is fast for narrow distributions but can be slow for broad distributions. Data reduction and printout are fast given modern computers. The time to interpret the data depends on the analyst and what criteria have been set. Cleanup time is often seriously underestimated.

Finally, it is wise to consider whether a fast measurement or analysis time is worth it if the sum of all the other times is considerable. If the total throughput time is not much different, a higher-resolution but slower technique is a better choice.
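The throughput arithmetic is simple but worth making explicit. The sketch below uses invented per-step times to show how small a fraction of the cycle the headline measurement time can be:

```python
# Hypothetical per-sample times (minutes) for one complete analysis cycle.
cycle = {
    "sample preparation": 15.0,
    "measurement": 2.0,  # the headline "analysis time" quoted in brochures
    "data reduction/printout/interpretation": 5.0,
    "cleanup": 10.0,
}

total = sum(cycle.values())  # the true time per sample
share = cycle["measurement"] / total
print(f"total cycle: {total} min; measurement is {share:.0%} of it")
# With these assumed numbers the measurement is ~6% of the cycle, so even
# halving it would barely change how many samples fit in a working day.
samples_per_8h_day = (8 * 60) // total
```

Under these assumptions a QC lab gets about 15 samples per 8-hour day per instrument, and the limiting steps are preparation and cleanup, not the measurement itself.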

Definitions

Accuracy is a measure of how close an experimental result is to the "true" value. For irregularly shaped particles, for techniques that cannot be calibrated, or for any other set of conditions where a "true" value is either unknown or not well defined, accuracy has no meaning. For spheres and other simple shapes, accuracy can be established between several techniques. Surprisingly, below one µm, absolute accuracy is typically no better than 3%.

Precision is a measure of the variation in repeated measurements under the same conditions (instrument, sample, and operator). Accuracy (associated with systematic error) and precision (associated with random error) are related: the results of many measurements may group tightly together (high precision, low random error) while the mean of the group lies far from the true value (low accuracy, high systematic error). Conversely, an accurate mean may come from measurements of either high or low precision; when the precision is low, it limits the confidence that can be placed in the accuracy. Precision also limits resolution and reproducibility, and it is a useful criterion by which to assess instruments even when accuracy cannot be determined.
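The distinction between the two error types can be made concrete in a few lines of Python. The readings below are invented for illustration: tightly grouped (high precision) yet biased away from an assumed "true" value (low accuracy):

```python
import statistics

true_size_um = 1.00  # assumed "true" value, e.g. a certified sphere standard
# Hypothetical repeated measurements on the same sample, instrument and operator.
readings = [1.12, 1.13, 1.11, 1.12, 1.14, 1.12]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)  # random error -> precision
bias = mean - true_size_um           # systematic error -> accuracy

print(f"mean {mean:.3f} um, stdev {spread:.3f} um, bias {bias:+.3f} um")
# Here the stdev (~0.01 um) is an order of magnitude smaller than the
# bias (~0.12 um): precise, but not accurate.
```

Reversing the situation (readings scattered widely around 1.00 µm) would be accurate but imprecise, which is why both figures belong in a specification.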

Resolution is a measure of the minimum detectable difference between distinct features in a size distribution. For broad, unimodal distributions, resolution is still an important concept: if the measured breadth of a distribution is meaningful, then the instrument that produces it should be able to separate narrow size peaks spaced closer than or equal to that breadth. Otherwise, the measured breadth is really an instrumental broadening effect. Generally, SPCs and fractionators produce high-resolution size distributions, while ensemble averaging devices (light scattering and diffraction instruments) produce medium- to low-resolution size distributions. Resolution is limited by the signal-to-noise ratio of the instrument; reporting finer detail than that ratio supports merely magnifies the noise, yielding more numbers that are meaningless. The particle size of many APIs is typically above one µm and the size distribution is very broad, so it is commonly asserted that resolution is unimportant. However, if the fundamental resolution of an instrument is undetermined, how can one know whether the broad distribution is hiding practical and possibly significant information?
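Instrumental broadening is easy to demonstrate numerically. The sketch below (hypothetical peak positions and an assumed Gaussian instrument response) shows two narrow peaks 0.5 µm apart merging into a single apparent peak when the response is broad:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def n_peaks(ys):
    """Count local maxima in a sampled curve."""
    return sum(1 for i in range(1, len(ys) - 1) if ys[i - 1] < ys[i] >= ys[i + 1])

xs = [i * 0.01 for i in range(301)]  # hypothetical size axis, 0-3 um
# True distribution: two narrow peaks at 1.0 and 1.5 um (sigma 0.05 um).
true_dist = [gaussian(x, 1.0, 0.05) + gaussian(x, 1.5, 0.05) for x in xs]
# Same sample seen through a low-resolution instrument (assumed response sigma 0.4 um):
blurred = [gaussian(x, 1.0, 0.4) + gaussian(x, 1.5, 0.4) for x in xs]

print(n_peaks(true_dist), "peaks before broadening,", n_peaks(blurred), "after")
# The two features merge into one broad hump; its width reflects the
# instrument response, not the sample.
```

This is the trap described above: the broad, unimodal curve looks like a legitimate wide distribution, and only a technique whose own resolution is finer than 0.5 µm in this region could reveal the bimodality.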

Reproducibility is a measure of the variation between different machines, operators, sample preparations, etc. It becomes most important when comparing the results produced on two different machines of the same type. Such a situation is quite common where multiple particle sizers of one make and model have been purchased for use in different laboratories and/or locations. It is surprising how often the reproducibility, expressed as a range of values, exceeds the basic precision of any one of the machines. In such cases, it is useful to have round-robin tests conducted on the same sample, under the same set of prescribed conditions, to isolate any machine-to-machine variations. A classic example is the large differences obtained on FD instruments with high-angle light scattering detectors from the same manufacturer because of evolving software variations in how best to handle the necessary light scattering (Mie) corrections.

Qualitative

In addition to quantitative specifications, there are qualitative ones that are important considerations in the purchase of any analytical instrument. These include the following:

Support: Is training, service, and applications assistance available during the installation, warranty period and for as long as the instrument is still serviceable? An instrument might be available at a lower price from a supplier in another country, but check that it comes with the expected type and level of support. Ask for references to verify any claims that are made. Ask also about any continuing program of development to guard against obsolescence.

Ease-of-Use: This is a very subjective concept. Will the instrument be used by experts or by inexperienced users? Although the goal of a "one button" device is admirable, it is rarely achieved, if for no other reason than that sampling and sample preparation are not amenable to one-button operation. If this concept is important then, initially, be sure to watch measurements being made: the entire process from sample preparation to clean-up.

Versatility: This is defined as the ability to measure a wide variety of samples under a wide variety of conditions. Does the instrument handle samples in air, liquids, or both? Does the instrument work with polar as well as nonpolar liquids? Does the instrument work with dilute samples or concentrates or both? Try to estimate a realistic range of sample types and the corresponding size ranges intended to be measured. Experience has shown that it is usually better to choose dedicated instruments that do a good job for their intended purpose rather than a poor job on a wide variety of samples.

Life-Cycle Cost: The basic instrument cost is only one factor to consider. The total price is best judged in terms of the life-cycle cost. This includes purchase price, operating cost, maintenance, and repair costs. Every instrument needs some type of maintenance. It may be as simple as cleaning air filters once a month; it may be as difficult as replacing mechanical parts or aligning an optical system. And every instrument will, sooner or later, require repairs. If repairs are labor-intensive, the life-cycle cost can be quite high. If special solvents or expensive environmental costs are involved, the life-cycle cost may be high enough to consider alternate choices.
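A simple life-cycle sum makes the point; all figures below are invented, including the assumed seven-year service life:

```python
# Hypothetical life-cycle cost over an assumed 7-year service life (USD).
purchase = 60_000
per_year = {
    "operating (consumables, solvents, disposal)": 3_000,
    "maintenance": 4_500,
    "repairs (averaged over the life)": 1_500,
}
years = 7

life_cycle_cost = purchase + years * sum(per_year.values())
print(f"life-cycle cost: ${life_cycle_cost:,}")  # $123,000 with these assumptions
# Running costs here add more than 100% to the purchase price, which is why
# comparing instruments on list price alone can be misleading.
```

Even these modest assumed running costs roughly double the purchase price, so a cheaper instrument with higher consumable or service costs can easily be the more expensive choice.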

Of all these qualitative considerations, support is, perhaps, the most important. When choosing between vendors of similar equipment, the one with better support may tip the scale in its favor. Do not assume that the largest vendor, or the one with the fanciest brochure, will provide the best support. Today, many companies use representatives to sell and service instruments. Just as you would choose any professional service, asking for references and getting second opinions should be an integral part of the purchase process.

CONCLUSION TO PARTS 1 AND 2

Narrow down the possibilities and then make a choice.

Start with Figure 1 and find the overlap of your expected size range with the various techniques that purport to measure that range. Identify techniques whose mid-range covers your expected size range. Don't know your size range? Get some preliminary measurements made, but pay attention to sampling and sample preparation. The biggest mistake at this point is to choose the apparent zero-to-infinity devices.

Given the list, narrow it further by deciding if you need imaging (irregular particle shapes that correlate with end-product performance) or not, single particle counting (absolute concentration) or not, and what degree of information you require.

Now carefully consider the quantitative and qualitative specifications, giving the most weight to those aspects that pertain to your situation. While automated, high-throughput instrumentation is convenient, think carefully before accepting it if it sacrifices the resolution you need to make good decisions.

Accuracy, precision, resolution and reproducibility are functions of the size range. Errors are always greatest at the extremes. A common mistake is to check an instrument in its midrange and then proceed to use it at one or the other of the extremes. Be skeptical of performance claims that refer only to the average size. The average of any distribution is least subject to variation: even instruments with poor resolution and instrument-to-instrument reproducibility can yield results with 2% precision in the average. Higher moments, such as the measure of width or skewness, are much more sensitive to uncertainties, so pay particular attention to the variance in these statistics. If it is not clear from the manufacturer's literature, then ask for clarification.
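The greater sensitivity of higher moments can be shown by simulation. The sketch below (an assumed lognormal population; all numbers illustrative) compares the run-to-run scatter of the mean with that of the skewness over repeated samples:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is repeatable

def skewness(xs):
    """Sample skewness: third central moment over cubed population stdev."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Draw repeated 100-particle samples from the same hypothetical population
# and record how much each statistic varies from run to run.
means, skews = [], []
for _ in range(200):
    sample = [random.lognormvariate(0.0, 0.5) for _ in range(100)]
    means.append(statistics.mean(sample))
    skews.append(skewness(sample))

cv_mean = statistics.stdev(means) / statistics.mean(means)
cv_skew = statistics.stdev(skews) / statistics.mean(skews)
print(f"relative scatter: mean {cv_mean:.1%}, skewness {cv_skew:.1%}")
# The skewness scatters several times more than the mean from run to run.
```

A stable average therefore says little about the reliability of reported width or skewness, which is exactly why claims quoting only the average deserve skepticism.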

Finally, before purchasing ask the vendor for a list of users who have had the instrument for at least one year. Contact them and ask for their experience with maintenance and repairs.


Particle Sciences is a leading integrated provider of formulation and analytic services and both standard and nanotechnology approaches to drug development and delivery.

3894 Courtney Street, Suite 180
Bethlehem, PA 18017
610-861-4701
www.particlesciences.com