Particle size distribution analyzers that measure Brownian motion can be broadly classified as based on either autocorrelators or power spectra. For the purpose of this document, systems using autocorrelators will be called Photon Correlation Spectroscopy (PCS) systems. Power spectrum analyzers such as the HORIBA LB-550 are designed to examine the frequency differences of light scattered off particles.
The LB-550 technique analyzes fluctuations in the intensity of light scattered from the sample relative to the incident light; this is also referred to as the frequency analysis method. PCS analyzers, by contrast, count the number of photons arriving per unit time, treating the scattered light as a series of photons.
As described above, a PCS instrument characterizes moving particles by counting photons. It must therefore measure fast- and slow-moving particles simultaneously: fast-moving particles must be sampled at high speed, while slow-moving ones must be tracked over extended periods of time. In actual practice, however, it is very difficult to create an instrument that combines these functions with continuous data multiplication capability.
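Both signal-processing routes ultimately recover the particles' diffusion coefficient, which is converted to a hydrodynamic diameter through the Stokes-Einstein relation. A minimal sketch of that final step (standard physics, not HORIBA's implementation; the default viscosity assumes water at 25°C):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(diff_coeff_m2_s, temp_k=298.15, viscosity_pa_s=8.9e-4):
    """Stokes-Einstein relation: d = kT / (3 * pi * eta * D)."""
    return K_B * temp_k / (3 * math.pi * viscosity_pa_s * diff_coeff_m2_s)

# Example: a diffusion coefficient of ~4.9e-12 m^2/s in water maps to ~100 nm
print(f"{hydrodynamic_diameter(4.9e-12) * 1e9:.0f} nm")
```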
The particle size experts at HORIBA Scientific share their knowledge of the various methods of measuring particle size and discuss why it is so important to control and measure particulate materials in a variety of industries.
Friday, August 13, 2010
Conclusions on Building a State of the Art Laser Diffraction Analyzer
The HORIBA LA-950 particle size analyzer uses the laser diffraction method to measure size distributions. This technique calculates size from first principles using light scattered off the particle (edge diffraction) and through the particle (refraction, also called secondary scattering). The LA-950 incorporates the full Mie scattering theory to cover the widest size range currently available. Wide measurement ranges, fast analyses, exceptional precision, and reliability have made laser diffraction the most popular modern sizing technique in both industry and academia.
Thursday, August 5, 2010
Building a State of the Art Laser Diffraction Analyzer
There’s a wide gulf between bare minimum and state of the art. The latter is always the industry leader in accuracy, repeatability, usability, flexibility, and reliability. The current state of the art in laser diffraction is the Partica LA-950 featuring two high intensity light sources, a single, continuous cast aluminum optical bench (see the figure below), a wide array of sample handling systems, and expert refinements expected from the fifth revision in the 900 series.
Using two light sources of different wavelengths is of critical importance because the measurement accuracy of small particles is wavelength dependent. Figure A (below) shows the 360° light scattering patterns from 50nm and 70nm particles as generated by a 650nm red laser. The patterns are practically identical across all angles, and the algorithm cannot accurately distinguish the two particle sizes. Figure B (below) shows the same experiment using a 405nm blue LED. Distinct differences are now seen on the wide-angle detectors, which allows accurate calculation for these materials. Integrating a second, shorter wavelength light source is the primary means of improving nano-scale performance beyond the bare minimum laser diffraction analyzer.
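One way to see why the shorter wavelength helps: the shape of the scattering pattern is governed by the dimensionless size parameter x = πd/λ, and a blue source stretches the gap between two nearby sizes. A quick illustration (values chosen to mirror the figures above):

```python
import math

def size_parameter(diameter_nm, wavelength_nm):
    """Dimensionless size parameter x = pi * d / lambda governing the pattern."""
    return math.pi * diameter_nm / wavelength_nm

for wavelength in (650, 405):  # red laser vs. blue LED
    x50 = size_parameter(50, wavelength)
    x70 = size_parameter(70, wavelength)
    print(f"{wavelength} nm source: x(50nm)={x50:.3f}  x(70nm)={x70:.3f}  gap={x70 - x50:.3f}")
```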
Wednesday, August 4, 2010
The Importance of Optical Model
In the beginning there was the Fraunhofer Approximation and it was good. This
model, which was popular in older laser diffraction instruments, makes certain
assumptions (hence the approximation) to simplify the calculation. Particles are
assumed…
- to be spherical
- to be opaque
- to scatter with equal efficiency at wide and narrow angles
- to interact with light in a different manner than the medium
Practically, these restrictions render the Fraunhofer Approximation a very poor choice for particle size analysis, as measurement accuracy below roughly 20 microns is compromised. The Mie scattering theory overcomes these limitations. Gustav Mie developed a closed-form solution (not an approximation) to Maxwell's electromagnetic equations for scattering from spheres. This solution surpasses Fraunhofer by adding sensitivity to smaller sizes (wide-angle scatter) and handling a wide range of opacity (i.e. light absorption), while the user need only provide the refractive indices of the particle and the dispersing medium. Accounting for light that refracts through the particle (a.k.a. secondary scatter) allows accurate measurement even in cases of significant transparency. The Mie theory likewise makes certain assumptions: that the particle…
- is spherical
- ensemble is homogeneous
- refractive index of particle and surrounding medium is known
These figures show a graphical representation of the Fraunhofer and Mie models in terms of scattering intensity, scattering angle, and particle size (ref. 13). The two models begin to diverge around 20 microns, and the differences become pronounced below 10 microns. Put simply, the Fraunhofer Approximation introduces a magnitude of error for micronized particles that is typically unacceptable to the user. A measurement of spherical glass beads is shown in Figure 19, calculated using the Mie (red) and Fraunhofer (blue) models. The Mie result meets the material specification while the Fraunhofer result fails the specification and splits the peak. The over-reporting of small particles (where Fraunhofer error is significant) is a typical comparison result.
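For the curious, the Fraunhofer model itself is compact enough to write down: an opaque sphere produces the classic Airy pattern. The sketch below (illustrative values only; a real analyzer inverts the full Mie solution) also shows how larger spheres pull the pattern to narrower angles:

```python
import numpy as np
from scipy.special import j1

def fraunhofer_intensity(theta_rad, diameter_um, wavelength_um=0.650):
    """Airy pattern of an opaque sphere: I ~ [2*J1(u)/u]^2, u = (pi*d/lambda)*sin(theta)."""
    u = np.maximum((np.pi * diameter_um / wavelength_um) * np.sin(theta_rad), 1e-12)
    return (2 * j1(u) / u) ** 2

# The first Airy minimum sits near sin(theta) = 1.22 * lambda / d, so larger
# spheres concentrate their scatter at narrower angles.
for d_um in (5.0, 20.0, 50.0):
    theta_min_deg = np.degrees(np.arcsin(1.22 * 0.650 / d_um))
    ratio = fraunhofer_intensity(np.radians(2.0), d_um) / fraunhofer_intensity(np.radians(0.1), d_um)
    print(f"d = {d_um:4.0f} um: first minimum ~{theta_min_deg:5.2f} deg, I(2deg)/I(0.1deg) = {ratio:.2e}")
```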
Tuesday, August 3, 2010
Bench-top Instruments
Bench-top laser diffraction instruments became practical with the advent of high-intensity, reasonably priced lasers and sufficient computing power to process the scattered light data. Once these barriers to market entry were eliminated, the advantages of laser diffraction over other techniques were apparent: speed of analysis, application flexibility, small-particle accuracy, and ease of use. The ability to measure nano-, micro-, and macro-sized powders, suspensions, and emulsions, and to do it within one minute, explains how laser diffraction displaced popular techniques such as sieving, sedimentation, and manual microscopy.
Such an instrument consists of at least one source of high-intensity, monochromatic light, a sample handling system to control the interaction of particles and incident light, and an array of high-quality photodiodes to detect the scattered light over a wide range of angles. Detection is the primary function of a laser diffraction instrument: to record the angle and intensity of scattered light. This information is then fed into an algorithm which, while complex, reduces to the following basic truth:
LARGE PARTICLES SCATTER INTENSELY AT NARROW ANGLES
SMALL PARTICLES SCATTER WEAKLY AT WIDE ANGLES
The algorithm, at its core, consists of an optical model with the mathematical transformations necessary to get particle size data from scattered light. However, not all optical models are created equal.
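Conceptually, that transformation treats the measurement as a matrix equation: each size bin contributes a known scattering pattern, and the detector signal is their weighted sum. A schematic sketch using a synthetic kernel and non-negative least squares (real instruments build the matrix from an optical model such as Mie theory, as discussed above):

```python
import numpy as np
from scipy.optimize import nnls

# Schematic only: detector signal = scattering matrix @ size distribution.
# The kernel here is random; a real instrument computes each column from
# an optical model for one size bin.
rng = np.random.default_rng(0)
n_detectors, n_bins = 40, 10
kernel = rng.random((n_detectors, n_bins))

true_dist = np.zeros(n_bins)
true_dist[3], true_dist[4] = 0.7, 0.3        # a narrow two-bin distribution
signal = kernel @ true_dist                  # ideal, noise-free detector data

recovered, _ = nnls(kernel, signal)          # enforce non-negative bin amounts
print(np.round(recovered, 3))                # ~[0 0 0 0.7 0.3 0 ...]
```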
Monday, July 26, 2010
Laser Diffraction Technique
The central idea in laser diffraction is that a particle will scatter light at an angle determined by that particle’s size. Larger particles will scatter at small angles and smaller particles scatter at wide angles. A collection of particles will produce a pattern of scattered light defined by intensity and angle that can be transformed into a particle size distribution result.
The knowledge that particles scatter light is not new. Rayleigh scattering of light from
particles in the atmosphere is what gives the sky a blue color and makes sunsets yellow, orange, and red. Light interacts with particles in any of four ways: diffraction, reflection, absorption, and refraction. The figure below shows the idealized edge diffraction of an incident plane wave on a spherical particle. Scientists discovered more than a century ago that light scattered differently off of differently sized objects. Only the relatively recent past, however, has seen the science of particle size analysis embrace light scattering as not only a viable technique, but the backbone of modern sizing.
Friday, July 23, 2010
Including the Error
The reproducibility errors discussed in yesterday's blog should be investigated and minimized because they play an important role in the final setting of a specification. Once the specification based on product performance has been determined, the final specification must be narrowed by the error range. In the example shown below, the specification for the D50 is 100μm +/- 20% (or 80–120μm) based on product performance. If the total measurement error is +/- 10% (using USP <429> guidelines for the D50 value), the specification must be tightened to ~90–110μm (the exact limits are roughly 89–109μm, rounded here for simplicity) in order to assure the product is never out of the performance specification. For example, if the D50 is measured to be 109μm, we are certain the true D50 is below 120μm even with a maximum 10% error.
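The tightening arithmetic is simple enough to script (the helper below is our own illustration, not part of USP <429>):

```python
def tightened_spec(low, high, error_fraction):
    """Narrow a performance spec so any passing measurement, even off by the
    stated error fraction, still lies within the original limits."""
    return low / (1 - error_fraction), high / (1 + error_fraction)

low, high = tightened_spec(80, 120, 0.10)
print(f"internal limits: {low:.1f} - {high:.1f} um")  # ~88.9 - 109.1 um
```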
This is why it is important to create robust standard operating procedures for any
material we wish to set a published specification for. Any combination of high
measurement error (usually stemming from non-optimized method development)
and tight specifications will make meeting that specification more difficult.
Why make life harder than it need be?
Thursday, July 22, 2010
Testing Reproducibility
There are currently two internationally accepted standards written on the use of laser diffraction: ISO 13320 and USP <429>. Both standards state that samples should be measured at least three times and that reproducibility must meet specified guidelines. Note that this means three independent measurements (i.e. prepare the sample, measure the sample, empty the instrument, and repeat). The coefficient of variation (COV, or (std dev/mean)*100) for the measurement set must be less than 3% at the D50 and less than 5% at the D10 and D90 to pass the ISO 13320 requirements. These limits relax to less than 10% at the D50 and less than 15% at the D10 and D90 when following the USP <429> requirements. Finally, all of these limits double when the D50 of the material is less than 10μm.
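The pass/fail arithmetic is easy to check (the run values below are invented for illustration):

```python
import statistics

def cov_percent(values):
    """Coefficient of variation: (std dev / mean) * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

d50_runs = [101.2, 99.8, 100.5]  # three independent D50 results, in um
cov = cov_percent(d50_runs)
limit = 3.0  # ISO 13320 at the D50; 10% under USP <429>, both double below 10 um
print(f"COV = {cov:.2f}% -> {'PASS' if cov <= limit else 'FAIL'} against ISO 13320")
```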
While following the ISO or USP guidelines to test reproducibility is suggested, it is typically part of an internal specification or procedure. The specifications shown to potential customers typically don’t include the reproducibility values.
Wednesday, July 21, 2010
More about setting particle size specifications
The task of setting a particle size specification for a material requires knowledge
of which technique will be used for the analysis and how size affects product
performance. Sources of error must be investigated and incorporated into the final
specification. Be aware that, in general, different particle sizing techniques will
produce different results for a variety of reasons including: the physical property
being measured, the algorithm used, the basis of the distribution (number,
volume, etc.) and the dynamic range of the instrument. Therefore, a specification
based on using laser diffraction is not easily compared to expectations from other
techniques such as particle counting or sieving. One exception to this rule is the ability of dynamic image analysis to match sieve results.
Attempting to reproduce PSD results to investigate whether a material is indeed
within a stated specification requires detailed knowledge of how the measurement
was acquired including variables such as the refractive index, sampling
procedure, sample preparation, amount and power of ultrasound, etc. This
detailed information is almost never part of a published specification and would
require additional communications between the multiple parties involved.
Monday, July 19, 2010
Setting particle size specifications
The creation of a meaningful and product-appropriate particle size
specification requires knowledge of its effect on product performance in
addition to an understanding of how results should be interpreted for
a given technique. Today's blog provides guidelines for setting particle size
specifications on particulate materials—primarily when using the laser diffraction
technique, but also with information about dynamic light scattering (DLS), acoustic
spectroscopy, and image analysis.
DISTRIBUTION BASIS
Different particle sizing techniques report primary results based on number,
volume, weight, surface area, or intensity. As a general rule specifications should
be based in the format of the primary result for a given technique. Laser diffraction
generates results based on volume distributions and any specification should be
volume based. Likewise, an intensity basis should be used for DLS specifications,
volume for acoustic spectroscopy, and number for image analysis. Conversion to
another basis such as number—although possible in the software—is inadvisable
because significant error is introduced. The exception to this guideline is converting a number based result from a technique such as image analysis into a volume basis. The error involved is generally very low in this scenario.
DISTRIBUTION POINTS
While it is tempting to use a single number to represent a particle size distribution
(PSD), and thus the product specification, this is typically not a good idea. In
nearly every case, a single data point cannot adequately describe a distribution of
data points. This can easily lead to misunderstandings and provides no information
about the width of the distribution. Less experienced users may believe that the
“average particle size” can adequately describe a size distribution, but this implies
expecting a response based on a calculated average (or mean). If forced to use a
single calculated number to represent the mid-point of a particle size distribution,
then the common practice is to report the median and not the mean. The median
is the most stable calculation generated by laser diffraction and should be the
value used for a single point specification in most cases.

Rather than use a single point in the distribution as a specification, it is suggested to include other size parameters in order to describe the width of the distribution. The span is a common calculation to quantify distribution width: (D90 – D10)/D50. However, it is rare to see span as part of a particle size specification. The more common practice is to include two points which describe the coarsest and finest parts of the distribution. These are typically the D90 and D10. Using the same convention as the D50, the D90 describes the diameter where ninety percent of the distribution has a smaller particle size and ten percent has a larger particle size. The D10 diameter has ten percent smaller and ninety percent larger. A three point specification featuring the D10, D50, and D90 will be considered complete and appropriate for most particulate materials.

How these points are expressed may vary. Some specifications use a format where the D10, D50, and D90 must not be more than (NMT) a stated size.
Example:
D10 NMT 20μm
D50 NMT 80μm
D90 NMT 200μm
Although only one size is stated for each point there is an implied range of
acceptable sizes (i.e. the D50 passes if between 20 and 80μm). Alternatively, a range of values can be explicitly stated.
Example:
D10 10 – 20μm
D50 70 – 80μm
D90 180 – 200μm
This approach better defines the acceptable size distribution, but may be perceived as overly complicated for many materials.

It may also be tempting to include a requirement that 100% of the distribution is smaller than a given size. This implies calculating the D100, which is not recommended. The D100 result (and to a lesser degree the D0) is the least robust calculation from any experiment. Any slight disturbance during the measurement, such as an air bubble or thermal fluctuation, can significantly influence the D100 value. Additionally, the statistics involved in calculating this value (and other "extreme" values such as the D99, D1, etc.) are not as robust because there may not be very many of the "largest" and "smallest" particles. Given the possible broad spread of D100 results, building a specification around a statement that 100% of the particles are below a stated size is not recommended.
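For reference, the D-values and span used throughout this post fall out of the cumulative (volume) distribution by simple interpolation. A sketch with invented data (not any vendor's exact interpolation scheme):

```python
import numpy as np

# Cumulative volume-undersize curve (invented for illustration)
sizes_um = np.array([10, 20, 40, 60, 80, 100, 150, 200])
cum_pct = np.array([2, 8, 25, 45, 62, 78, 93, 100])

def d_value(pct):
    """Diameter at a given cumulative percentage, by linear interpolation."""
    return np.interp(pct, cum_pct, sizes_um)

d10, d50, d90 = d_value(10), d_value(50), d_value(90)
span = (d90 - d10) / d50
print(f"D10={d10:.1f}um  D50={d50:.1f}um  D90={d90:.1f}um  span={span:.2f}")
```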
Thursday, July 15, 2010
Transforming results
Number-based systems, such as microscopes or image analyzers, construct their primary result as a number distribution. Laser diffraction and acoustic attenuation systems construct their primary result as a volume distribution. The software for many of these systems includes the ability to transform the result from number to volume or vice versa. It is perfectly acceptable to transform image analysis results from a number to a volume basis; in fact, the pharmaceutical industry has concluded that it prefers results to be reported on a volume basis for most applications. On the other hand, converting a volume result from laser diffraction to a number basis can lead to undefined errors and is only suggested when comparing to results generated by microscopy.
The figure below shows an example where a laser diffraction result is transformed from volume to both a number and a surface area based distribution. Notice the large change in median from 11.58μm to 0.30μm when converted from volume to number.
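The transformation itself is conceptually simple: divide each bin's volume fraction by the volume of one particle of that size (proportional to d³) and renormalize. A minimal sketch with invented bins shows why the median collapses toward the fines:

```python
import numpy as np

diameters_um = np.array([1.0, 5.0, 10.0, 20.0])   # bin centers (invented)
volume_pct = np.array([5.0, 20.0, 45.0, 30.0])    # volume-based distribution

number_weights = volume_pct / diameters_um**3     # divide out per-particle volume
number_pct = 100 * number_weights / number_weights.sum()
print(np.round(number_pct, 2))                    # fines dominate on a number basis
```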
Wednesday, July 14, 2010
Number vs. volume distributions
Interpreting results of a particle size measurement requires an understanding
of which technique was used and the basis of the calculations.
Each technique generates a different result since each measures different
physical properties of the sample. Once the physical property is measured a
calculation of some type generates a representation of a particle size distribution.
Some techniques report only a central point and spread of the distribution; others provide greater detail across the upper and lower particle sizes detected.
The particle size distribution can be calculated based on several models: most
often as a number or volume/mass distribution.
NUMBER VS. VOLUME DISTRIBUTION
The easiest way to understand a number distribution is to consider measuring
particles using a microscope. The observer assigns a size value to each particle
inspected. This approach builds a number distribution—each particle has equal
weighting once the final distribution is calculated. As an example, consider the
nine particles shown in the image above. Three particles are 1μm, three are 2μm, and
three are 3μm in size (diameter). Building a number distribution for these particles
will generate the result shown in the first graph below, where each particle size accounts for one third of the total. If this same result were converted to a volume distribution, the result would appear as shown in the second graph below where 75% of the total volume comes from the 3μm particles, and less than 3% comes from the 1μm particles. When presented as a volume distribution it becomes more obvious that the majority of the total particle mass or volume comes from the 3μm particles.
Nothing changes between the two graphs except for the basis of the distribution calculation.
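Those volume percentages can be verified directly, since the volume of a sphere scales with the cube of its diameter (the common factor of π/6 cancels out):

```python
# Nine particles: three each at 1, 2, and 3 um
diameters = [1] * 3 + [2] * 3 + [3] * 3
total = sum(d ** 3 for d in diameters)            # 3 + 24 + 81 = 108

for d in (1, 2, 3):
    share = 100 * 3 * d ** 3 / total
    print(f"{d} um particles: 33.3% by number, {share:.1f}% by volume")
# -> 2.8%, 22.2%, and 75.0% by volume
```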
Tuesday, July 13, 2010
Particle Size Characterization
All particle size analysis instruments provide the ability to measure and report the
particle size distribution of the sample. There are very few applications where a
single value is appropriate and representative. The modern particle scientist often
chooses to describe the entire size distribution as opposed to just a single point
on it. (One exception might be extremely narrow distributions such as latex size
standards where the width is negligible.) Almost all real world samples exist as
a distribution of particle sizes and it is recommended to report the width of the
distribution for any sample analyzed. The most appropriate option for expressing
width is dependent on the technique used. When in doubt, it is often wise to refer
to industry accepted standards such as ISO or ASTM in order to conform to
common practice.
HORIBA Instruments, Inc. distributes particle characterization tools based on several principles including laser diffraction, dynamic light scattering, acoustic attenuation, and image analysis. Each of these techniques generates results in both
similar and unique ways. Most techniques can describe results using standard
statistical calculations such as the mean and standard deviation. But commonly
accepted practices for describing results have evolved for each technique.
For a more in-depth analysis, please view The HORIBA PSA Guidebook.
Monday, July 12, 2010
Which Size to Measure?
A spherical particle can be described using a single number—the diameter—because every dimension is identical. Non-spherical particles can be described using multiple length and width measures (horizontal and vertical projections are shown here). These descriptions provide greater accuracy, but also greater complexity. Thus, many techniques make the useful and convenient assumption that every particle is a sphere. The reported value is typically an equivalent spherical diameter. This is essentially taking the physical measured value (i.e. scattered light, acoustic attenuation, settling rate) and determining the size of the sphere that could produce the data. Although this approach is simplistic and not perfectly accurate, the shapes of particles generated by most industrial processes are such that the spherical assumption does not cause serious problems. Problems can arise, however, if the individual particles have a very large aspect ratio, such as fibers or needles.
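Two of the most common equivalent-diameter definitions are easy to state exactly (a small illustration; the helper names are ours, not from any instrument software):

```python
import math

def area_equivalent_diameter(projected_area_um2):
    """Diameter of the sphere whose circular projection has the same area."""
    return 2 * math.sqrt(projected_area_um2 / math.pi)

def volume_equivalent_diameter(volume_um3):
    """Diameter of the sphere having the same volume."""
    return (6 * volume_um3 / math.pi) ** (1 / 3)

# Example: an elongated particle with a projected area of 50 um^2
print(f"{area_equivalent_diameter(50.0):.2f} um")  # ~7.98 um
```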
Shape factor causes disagreements when particles are measured with different
particle size analyzers. Each measurement technique detects size through the
use of its own physical principle. For example, a sieve will tend to emphasize the
second smallest dimension because of the way particles must orient themselves
to pass through the mesh opening. A sedimentometer measures the rate of
fall of the particle through a viscous medium, with the other particles and/or the
container walls tending to slow their movement. Flaky or plate-like particles will
orient to maximize drag while sedimenting, shifting the reported particle size in
the smaller direction. A light scattering device will average the various dimensions
as the particles flow randomly through the light beam, producing a distribution of
sizes from the smallest to the largest dimensions.
The only techniques that can describe particle size using multiple values are microscopy and automated image analysis. An image analysis system can describe a non-spherical particle using the longest and shortest diameters, perimeter, projected area, or again by equivalent spherical diameter. When reporting a particle size distribution, the most common format, even for image analysis systems, is equivalent spherical diameter on the x axis and percent on the y axis. It is only for elongated or fibrous particles that the x axis is typically displayed as length rather than equivalent spherical diameter.
Friday, July 9, 2010
Why is Particle Size Important?
Particle size influences many properties of particulate materials and is
a valuable indicator of quality and performance. This is true for powders,
suspensions, emulsions, and aerosols. The size and shape of powders influences
flow and compaction properties. Larger, more spherical particles will typically flow
more easily than smaller or high aspect ratio particles. Smaller particles dissolve
more quickly and lead to higher suspension viscosities than larger ones. Smaller
droplet sizes and higher surface charge (zeta potential) will typically improve
suspension and emulsion stability. Powders or droplets in the range of 2–5μm aerosolize better and penetrate deeper into the lungs than larger sizes. For these
and many other reasons it is important to measure and control the particle size
distribution of many products.
Measurements in the laboratory are often made to support unit operations taking
place in a process environment. The most obvious example is milling (or size
reduction by another technology) where the goal of the operation is to reduce
particle size to a desired specification. Many other size reduction operations and
technologies also require lab measurements to track changes in particle size
including crushing, homogenization, emulsification, microfluidization, and others.
Separation steps such as screening, filtering, cyclones, etc. may be monitored by
measuring particle size before and after the process. Particle size growth may be
monitored during operations such as granulation or crystallization. Determining the
particle size of powders requiring mixing is common since materials with similar
and narrower distributions are less prone to segregation.
There are also industry/application specific reasons why controlling and
measuring particle size is important. In the paint and pigment industries particle
size influences appearance properties including gloss and tinctorial strength.
Particle size of the cocoa powder used in chocolate affects color and flavor.
The size and shape of the glass beads used in highway paint impacts reflectivity.
Cement particle size influences hydration rate & strength. The size and shape
distribution of the metal particles impacts powder behavior during die filling,
compaction, and sintering, and therefore influences the physical properties of
the parts created. In the pharmaceutical industry the size of active ingredients
influences critical characteristics including content uniformity, dissolution and
absorption rates. Other industries where particle size plays an important role
include nanotechnology, proteins, cosmetics, polymers, soils, abrasives,
fertilizers, and many more.
This is taken from the first page of the new HORIBA Particle Size Analysis Guidebook. The complete guide can be found here.
Thursday, July 8, 2010
HORIBA Scientific joins the blogging world!
We've done it! We've joined the social networking world!
From now on, you can read excerpts from our monthly newsletter, receive reminders of our webinars and seminars, and find out which tradeshows we'll be attending right here. We'll also be posting our FAQs and inviting readers to share their experiences with our analyzers. Since this is a new technology for us, we're looking forward to hearing feedback from the particle size community. So, stay in touch - there is more to come!