Monday, February 13, 2012

Scanning Probe Microscopy SPM

 Scanning Probe Microscopy (SPM): 
Scanning probe microscopy is a general term for a wide variety of microscopic 
techniques that measure the morphology and properties of surfaces on the atomic scale. 
These include the following: 

Scanning Tunneling Microscopy (STM) – studies surface topography and electronic structure; 
Atomic Force Microscopy (AFM) – studies surface topography, surface hardness and elastic modulus; 
Lateral Force Microscopy (LFM) – studies relative frictional properties; 
Scanning Thermal Microscopy (SThM) – studies thermal conductivity; 
Magnetic Force & Electric Force Microscopy (MFM & EFM) – study magnetic and electric properties. 


The techniques of STM and AFM are discussed below, since these are the most widely used: 
   
Principle:  The general principle for all scanning probe microscopes is that a sharp 
probe (a very fine tip) is used to scan the surface of the sample with a very low force to 
obtain topographic and morphological information. 

Scanning tunneling microscope: When a sharp tip made of a conducting material is brought 
close to a conducting sample, the electron clouds of the two surfaces overlap. If a 
potential is applied between them, a current of electrons flows, which is often referred 
to as the “tunneling” current, and the effect is known as the “tunneling” effect. This 
effect depends strongly on the distance between the tip and the sample material. Hence, if 
the scanning tip is controlled by a high-precision motion device made of piezo-electric 
material, the distance between the tip and the sample can be measured during a scan 
through a feedback loop controlling the piezo-electric element. In this way the sample can be 
scanned with sub-angstrom precision. 
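The strong distance dependence mentioned above can be illustrated with a short calculation. This is a sketch under the simple one-dimensional barrier model, with an assumed work function of 4 eV; the barrier model and numbers are illustrative, not a description of any particular instrument.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # J per electron-volt

phi = 4.0 * eV  # assumed tunneling barrier height (~work function of many metals)
kappa = math.sqrt(2 * m_e * phi) / hbar  # inverse decay length of the wavefunction

def relative_current(gap_m):
    """Tunneling current relative to zero gap, I ~ exp(-2*kappa*d)."""
    return math.exp(-2 * kappa * gap_m)

# Moving the tip 1 angstrom closer changes the current by nearly an order of magnitude
ratio = relative_current(5e-10) / relative_current(6e-10)
print(f"Current ratio for a 1 A change in gap: {ratio:.1f}")
```

This exponential sensitivity is what lets the feedback loop hold the gap to sub-angstrom precision.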

Atomic force microscope:  This technique operates by measuring the forces between the 
sample and the tip, and the sample need not be a conducting material. Here, the tip is brought 
close enough to the sample surface to detect the repulsive force between the atoms of the tip 
material and the sample. The probe tip is mounted at the end of a cantilever of a low spring 
constant and the tip-to-sample spacing is held fixed by maintaining a constant and very low 
force on the cantilever. Hence, if the tip is brought close to the sample surface, the repulsive 
force will induce a bending of the cantilever. This bending can be detected by a laser beam, 
which is reflected off the back of the cantilever. Thus by monitoring the deflection of the 
cantilever, the surface topography of the sample can be tracked. Since the force maintained 
on the cantilever is in the range of inter-atomic forces (about 10⁻⁹ N), the technique 
derives its name “atomic force” microscopy. 

AFM operates in two modes: 
Repulsive or contact mode – which detects the repulsive forces between the tip and sample; 
Attractive or non-contact mode – which detects the van der Waals forces that act between the 
tip and sample. 
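The cantilever's response can be sketched with Hooke's law. The spring constant below is an assumed typical value for a soft contact-mode cantilever; the point is only that a nano-newton force produces an optically detectable deflection.

```python
# Hooke's law sketch of the cantilever bending described above.
k = 0.1   # N/m, assumed spring constant of a soft contact-mode cantilever
F = 1e-9  # N, a force in the inter-atomic range quoted in the text

deflection = F / k  # metres
print(f"Deflection: {deflection * 1e9:.0f} nm")  # 10 nm, easily resolved by the laser scheme
```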

Instrumentation:  
Scanning tunneling microscope: It mainly consists of a scanner, a probe-motion sensor 
composed of piezo-electric material, a micro-probe, etc. 


Atomic force microscope:  It mainly consists of a scanner, cantilever, laser source, photo-
diode detector, micro-probe, etc.

Applications: Both STM and AFM find wide application in materials science, especially 
for surface studies at the nanometre scale. While STM finds its applications in the 
characterization of surface structure (including the electronic structure), AFM finds its 
applications in measuring the hardness of materials. AFM can also be used to study the 
“depth profile” of an oxide layer deposited onto a material. 

Disadvantages: A limitation of STM is that it can study only conducting samples, since 
the technique is based on the tunneling current between two conducting surfaces. Hence, it 
doesn’t lend itself to the study of non-conducting materials; in fact, the AFM was 
developed to overcome this problem. These methods require special, tedious sample-preparation 
techniques, such as thin sectioning, electro-polishing, and various mechanical 
cutting and polishing methods. 

Transmission Electron Microscopy TEM

Transmission Electron Microscopy (TEM): 
Principle:    In this technique, a beam of high-energy electrons (typically 100-400 keV) is 
collimated by magnetic lenses and allowed to pass through a specimen under high vacuum. 
The transmitted beam and a number of diffracted beams can form a resultant diffraction 
pattern, which is imaged on a fluorescent screen kept below the specimen. The diffraction 
pattern gives the information regarding lattice  spacing and symmetry of the structure under 
consideration. Alternatively, either the transmitted beam or one of the diffracted beams can 
be made to form a magnified image of the sample on the viewing screen (the bright-field and 
dark-field imaging modes respectively), which gives information about the size and shape of 
the micro-structural constituents of the material. A high-resolution image, which contains 
information about the atomic structure of the material, can be obtained by recombining the 
transmitted and diffracted beams. 
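The resolving power behind these imaging modes comes from the very short de Broglie wavelength of the electrons. A minimal sketch using the relativistic expression for the wavelength at the accelerating voltages quoted above:

```python
import math

h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
e = 1.602176634e-19     # elementary charge, C
c = 2.99792458e8        # speed of light, m/s

def electron_wavelength(volts):
    """Relativistic de Broglie wavelength (in metres) for a given accelerating voltage."""
    p_squared = 2 * m_e * e * volts * (1 + e * volts / (2 * m_e * c**2))
    return h / math.sqrt(p_squared)

for kv in (100, 200, 400):
    lam = electron_wavelength(kv * 1e3)
    print(f"{kv} kV -> {lam * 1e12:.2f} pm")
```

At 200 kV the wavelength is about 2.5 pm, far below typical lattice spacings, which is why lattice-scale detail is accessible in principle.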

Instrumentation:    It comprises a tungsten filament, LaB6 crystal or field-emission gun as 
the source of the electron beam, objective lens, imaging lenses, CCD camera, monitor, etc. 




Applications:  Transmission electron microscopy is used to study the local structures, 
morphology, and dispersion of multi-component polymers, cross sections & crystallization of 
metallic alloys and semiconductors, microstructure of composite  materials, etc. The 
instrument can be extended to include other detectors such as an Energy Dispersive Spectrometer 
(EDS) or Energy Loss Spectrometer (ELS) to study the local chemistry of the material, 
as in the SEM technique. 

Disadvantages:  The instrumentation is complicated and needs high vacuum. Sample 
preparation is very time consuming. Some materials, especially polymers, are sensitive to 
electron beam irradiation which results in the loss of crystallinity and/or mass. 

Scanning Electron Microscopy SEM

Scanning Electron Microscopy (SEM): 
Principle:  In this technique, an electron beam is focused onto the surface of a sample kept 
in vacuum by electro-magnetic lenses (since the electron possesses a dual nature, with 
properties of both particle and wave, an electron beam can be focused or condensed like 
ordinary light). The beam is then rastered, or scanned, over the surface of the sample. The 
electrons scattered from the sample are fed to a detector and then to a cathode ray tube 
through an amplifier, where the image is formed, giving information about the surface of the 
sample. 

Instrumentation:  It comprises a heated filament as the source of the electron beam, condenser 
lenses, aperture, an evacuated chamber for placing the sample, electron detector, amplifier, 
and CRT with image-forming electronics, etc. 




Applications: Scanning electron microscopy has been applied to the surface studies of metals, 
ceramics, polymers, composites and biological materials for both topography as well as 
compositional analysis. An extension (or sometimes conjunction to SEM) of this technique is 
Electron Probe Micro Analysis (EPMA), where the emission of X-rays, from the sample 
surface, is studied upon exposure to a beam of high-energy electrons. Depending on the type 
of detector used, the method is classified into two types: Energy Dispersive Spectrometry 
(EDS) and Wavelength Dispersive Spectrometry (WDS). The technique is used extensively 
in the analysis of metallic and ceramic inclusions, inclusions in polymeric materials, and 
diffusion profiles in electronic components. 

Disadvantages:  The instrumentation is complicated and needs high vacuum for optimum 
performance.

Analysis Through Microscopy

ANALYSIS THROUGH MICROSCOPY 
The techniques described here are not the simple, ordinary optical microscopies, which use 
light for magnification. These techniques employ electron beams and mechanical probes to 
magnify the surfaces under study. 

Tuesday, February 7, 2012

X-Ray Diffractometry XRD

 X-Ray Diffractometry (XRD):
Principle:  In this technique the primary X-rays are made to fall on the sample under study.
Because of their wave nature, like light waves, they are diffracted at certain angles.
These angles of diffraction, which differ from that of the incident beam, give
information regarding the crystalline nature of the substance. The wavelength of the X-rays
can be selected for the application by using a crystal monochromator.
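The relation between the diffraction angle and the lattice spacing is Bragg's law, nλ = 2d sinθ. A minimal sketch, assuming a Cu Kα laboratory source and an illustrative d-spacing:

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta).
wavelength = 1.5406  # angstrom, Cu K-alpha (a common, assumed laboratory source)
d_spacing = 2.82     # angstrom, illustrative value (e.g. the NaCl (200) planes)

# Recover the first-order (n = 1) diffraction angle for this spacing
theta = math.degrees(math.asin(wavelength / (2 * d_spacing)))
print(f"2-theta = {2 * theta:.1f} degrees")
```

In practice the measurement runs the other way: the diffractometer records the angles, and the d-spacings are solved for from them.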

Instrumentation :  It consists of X-ray tube for the  source, monochromator and a rotating
detector.



Applications : The diffraction of X-rays is a good tool to study the nature of the crystalline
substances. In crystals the ions or molecules are arranged in well-defined positions in planes
in three dimensions. The impinging X-rays are reflected by each set of crystal planes. Since the
spacing between the atoms, and hence between the planes, is not identical for any two
chemical substances, this technique provides vital information regarding the arrangement of
the atoms and the spacing between them, and can also be used to determine the chemical
composition of crystalline substances. The sample under study can be either a thin crystal
layer or a powder. Since the intensity of a diffracted beam depends on the quantity of the
corresponding crystalline substance, it is also possible to carry out quantitative
determinations.

X-Ray Photo-emission Spectrometry XPS

 X-Ray Photo-emission Spectrometry (XPS)
Principle: When a primary X-ray beam of precisely known energy impinges on sample 
atoms, inner shell electrons are ejected and the energy of the ejected electrons  is measured. 
The difference in the energy of the impinging  X-ray and the ejected  electrons gives the 
binding energy (Eb) of the electron to the atom. Since this binding energy depends on the 
energy of the electronic orbital and on the element, it can be used to identify the element 
involved. Further, the chemical form or environment of the atom affects the binding energy to 
a considerable extent, giving rise to a chemical shift that can be used to identify the 
valence state of the atom and its exact chemical form. This technique is usually referred to 
as Electron Spectroscopy for Chemical Analysis (ESCA). 
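The energy balance described above can be sketched in a few lines. The photon energy is that of a common Al Kα laboratory source; the measured kinetic energy is a hypothetical value chosen for illustration, and the small spectrometer work-function correction is neglected:

```python
# ESCA energy balance: binding energy = photon energy - measured kinetic energy.
h_nu = 1486.6     # eV, Al K-alpha photon energy (assumed common source)
kinetic = 1202.0  # eV, hypothetical measured kinetic energy of the ejected electron

binding = h_nu - kinetic
print(f"Binding energy: {binding:.1f} eV")  # ~284.6 eV, characteristic of a C 1s electron
```

Shifts of this binding energy by a few eV are what reveal the chemical state of the carbon.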
 
An associated process is that when an electron is ejected from an inner orbital, a vacancy is 
left behind. An electron from an outer orbital may then fall to fill the vacancy, emitting 
X-ray fluorescence in the process. The energy of this X-ray fluorescence is sometimes 
transferred to a second electron, causing it to be ejected. This second emitted electron is 
termed an Auger electron, and the method Auger Spectroscopy (after the French physicist 
Pierre Auger). Its applications are broadly similar to those of ESCA, and the two methods are 
often used in conjunction, since in both cases the energies involved are similar. 
 
Instrumentation:  It consists of a radiation source for primary X-rays, monochromator, the 
energy analyzer (to resolve the electrons generated from the samples by energy) and detector 
to measure the intensity of the resolved electrons. The analysis is done in high vacuum. 



Applications : It is mainly used for surface analysis, especially in the qualitative identification 
of the elements in a sample. Based on the chemical shifts, the chemical environment around 
the atoms can also be estimated. This measurement is useful in determining the valence states 
of the atoms present in various moieties in a sample. Quantitative measurements can be made 
by determining the intensity of the ESCA lines of each element. 
 
Disadvantages : High vacuum is necessary to prevent the low-energy electrons from colliding 
with residual gas molecules, which would result in low sensitivity. It is not possible to 
detect impurities at the ppm or ppb levels. The whole instrumentation is highly 
complicated.

X-Ray Fluorescence Spectrometry XFS

X-Ray Fluorescence Spectrometry (XFS): 
Principle: When a sample is placed in a beam of primary X-rays, part of the beam is absorbed 
and the atoms become excited through the ejection of electrons from the K and L shells. On 
relaxing, they re-emit X-rays of characteristic wavelength. These re-emitted X-rays are 
called secondary or fluorescent X-rays, hence the name of the technique. Since the 
wavelength of the fluorescence is characteristic of the element being excited, measurement of 
the wavelength and intensity enables qualitative and quantitative analyses to be carried out. 
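Why the fluorescence identifies the element can be sketched with Moseley's law, which predicts that the Kα photon energy grows roughly as (Z - 1)^2. This is an approximation given for illustration only:

```python
# Moseley's-law sketch: K-alpha energy ~ (3/4) * Rydberg * (Z - 1)^2,
# so each element fluoresces at its own characteristic energy.
RYDBERG = 13.6057  # eV

def k_alpha_energy_eV(Z):
    """Approximate K-alpha emission energy for atomic number Z."""
    return 0.75 * RYDBERG * (Z - 1) ** 2

for name, Z in (("Fe", 26), ("Cu", 29)):
    print(f"{name}: ~{k_alpha_energy_eV(Z) / 1000:.2f} keV")
```

The measured values (about 6.40 keV for Fe and 8.05 keV for Cu) sit close to these estimates, which is why line position alone identifies the element.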

Instrumentation:  It comprises a source of primary X-rays, collimators, an analyzing crystal 
and a detector. 



Applications:    It is a non-destructive method for the elemental analysis of solid or 
liquid samples for major and minor constituents. Most of the elements in the periodic table, 
both metals and nonmetals, respond to this technique. The detection limit is between 10 and 
100 ppm. One significant use of this method is in the medical field, in identifying and 
determining sulfur in proteins. 


Disadvantages:  Sensitivity falls off for elements with lower atomic numbers; elements with 
atomic numbers below 15 are particularly difficult to analyze. Sensitivity is also limited by 
matrix absorption, secondary fluorescence and scattering. Instruments are often large, 
complicated and costly. 

Generation of X-Rays

Generation of X-Rays:  When metals such as copper, molybdenum, tungsten, etc., are 
bombarded directly with a stream of high-energy electrons or radioactive particles, X-rays 
(wavelengths of the order 0.1-100 Å) are emitted because of transitions involving K-shell and 
L-shell electrons. This can be expressed simply as follows: 

A cathode in the form of a metal wire gives off electrons when electrically heated. If a 
positively charged anode (a target made of one of the metals mentioned above) is placed near 
these electrons, they are accelerated toward the anode. Upon striking the anode, the 
electrons transfer their energy to the metallic surface, which then gives off X-ray 
radiation. These are referred to as primary X-rays. 
The following is the schematic diagram for the process:



Note:  The wavelength of the emitted X-ray is characteristic of the element being 
bombarded. Hence with some modifications this process can be used as a tool for 
qualitative and quantitative elemental analysis by measuring the wavelength and emission 
intensity of the X-rays respectively. This forms the basis for Electron Probe Microanalysis! 
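One quantitative consequence of this process: the continuum radiation from the tube has a short-wavelength (Duane-Hunt) limit, reached when an incoming electron converts all of its kinetic energy into a single photon. A minimal sketch:

```python
# Duane-Hunt limit: lambda_min = h*c / (e*V), i.e. no photon can carry more
# energy than one electron delivered. Constant below is h*c/e in keV*angstrom.
HC_OVER_E = 12.3984  # keV * angstrom

def lambda_min_angstrom(tube_kV):
    """Shortest wavelength emitted by an X-ray tube at the given voltage."""
    return HC_OVER_E / tube_kV

print(f"30 kV tube: lambda_min = {lambda_min_angstrom(30):.3f} A")
```

The characteristic lines (Kα, Kβ) of the target metal then sit on top of this continuum.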

Differential Scanning Calorimeter DSC

Differential Scanning Calorimeter (DSC): 
This technique is more or less similar to DTA except that it measures the amount of heat 
absorbed or released by a sample as it is heated or cooled or kept at constant temperature  
(isothermal). Here the sample and reference material are simultaneously heated or cooled at a 
constant rate. The difference in temperature between them is proportional to the difference in 
heat flow (from the heating source, i.e. the furnace) between the two materials. It is well 
suited to the detection and further study of liquid crystals. The technique is applied to 
most polymers in evaluating the curing process of thermoset materials as well as in 
determining the heat of melting and melting point of thermoplastic polymers, the glass 
transition temperature (Tg), and endothermic & exothermic behaviour. Through the adjunct 
process of isothermal crystallization it provides information regarding molecular weight 
and structural differences between very similar materials. The instrumentation is essentially 
the same as that of DTA, differing only in how the results are obtained.
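The heat of melting mentioned above is obtained by integrating the heat-flow peak over time and dividing by the sample mass. A minimal sketch with invented example data:

```python
# DSC sketch: integrate a baseline-corrected melting endotherm (heat flow vs
# time) and normalise by sample mass. All numbers are invented for illustration.
time_s = [0, 10, 20, 30, 40, 50, 60]   # seconds
heat_flow_mW = [0, 2, 8, 12, 8, 2, 0]  # mW above the baseline
mass_mg = 5.0

# Trapezoidal integration: mW * s gives mJ
area_mJ = sum((heat_flow_mW[i] + heat_flow_mW[i + 1]) / 2 * (time_s[i + 1] - time_s[i])
              for i in range(len(time_s) - 1))
delta_H = area_mJ / mass_mg  # mJ/mg, i.e. J/g
print(f"Heat of melting: {delta_H:.0f} J/g")
```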


Differential Thermal Analysis DTA

 Differential Thermal Analysis (DTA):
This technique measures the temperature difference between a sample and a reference
material as a function of temperature as they  are heated or cooled or kept at a constant
temperature (isothermal). Here the sample and reference material are simultaneously heated

or cooled at a constant rate. Reaction or transition temperatures are then measured as a
function of the temperature difference between  the sample and reference.  It provides vital
information of the materials regarding their  endothermic and exothermic behaviour at high
temperatures. It finds most of its applications in analysing and characterising clay materials,
ceramics, ores, etc.

Thermomechanical Analysis TMA

Thermomechanical Analysis (TMA): 
Thermomechanical analysis is the measurement of a material’s behaviour, i.e. expansion or 
contraction, as temperature and load are applied. A scan of dimensional changes related 
to temperature (at constant load) or load  (at constant temperature) provides valuable 
information about the sample’s mechanical properties. One of the most important 
applications of TMA is the characterization of composite and laminate materials, through the 
study of glass transition temperature and the expansion coefficient. Another application is in 
the quantitative measurement of extension and  contraction observed in textile fibres, thin 
films and similar materials.

Thermogravimetric Analysis TGA

Thermogravimetric Analysis (TGA):
In this technique the change in sample weight is measured while the sample is heated at a
constant rate (or at constant temperature), under air (oxidative) or nitrogen (inert)
atmosphere. This technique is effective for quantitative analysis of thermal reactions that are
accompanied by mass changes, such as evaporation, decomposition, gas absorption,
desorption and dehydration. The following is the simplified diagram for the instrumentation:



The micro-balance plays a significant role: during measurement, a change in sample mass
upsets the equilibrium of the balance. This imbalance is fed back to a force coil, which
generates an additional electromagnetic force to restore equilibrium. The additional
electromagnetic force is proportional to the mass change. During the heating process the
temperature inside the furnace may go as high as 1500 °C.
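The resulting thermogram is usually read as percent mass change against temperature. A minimal sketch with invented data for a sample that first loses water and later decomposes:

```python
# TGA sketch: express each recorded mass as a percent loss relative to the
# starting mass. Temperatures and masses are invented for illustration.
initial_mg = 20.0
readings = {25: 20.0, 150: 18.2, 400: 18.2, 600: 9.1}  # degC -> remaining mass, mg

def percent_loss(mg):
    """Percent of the starting mass lost at this point in the run."""
    return 100 * (initial_mg - mg) / initial_mg

for temp, mg in readings.items():
    print(f"{temp:4d} C: {percent_loss(mg):5.1f} % mass loss")
```

The plateau between the two steps is what lets each mass-loss event be assigned to a separate reaction.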

THERMAL ANALYSIS

THERMAL ANALYSIS 
The technique of thermal analysis actually comprises a series of methods that detect 
changes in the physical and mechanical properties of a given substance upon the application 
of heat or thermal energy. The physical properties include mass, temperature, enthalpy, 
dimension, dynamic characteristics, etc. It finds application in determining the purity, 
integrity, crystallinity and thermal stability of the chemical substances under study. 
Sometimes it is used in determining the composition of complex mixtures. The technique has 
been adopted as a testing standard for quality control in production, process control and 
material inspection. It is applied in wide-ranging fields, including polymers, glass, 
ceramics, metals, explosives, semiconductors, medicines and foods.

Gas Chromatography GC

 GAS CHROMATOGRAPHY (GC)  
Principle: Here an inert carrier gas (helium or nitrogen) acts as the mobile phase. It 
carries the components of the analyte mixture through the column, which usually contains an 
immobilized stationary phase. The technique can be categorised by the type of stationary 
phase as follows: 

Gas Solid Chromatography (GSC) -  here the stationary phase is a solid which has a large 
surface area at which adsorption of the analyte components takes place. Separation is based 
on differences in the adsorption strength and diffusion of the gaseous analyte molecules. 
The application of this method is limited; it is mostly used in the separation of 
low-molecular-weight gaseous species such as carbon monoxide, oxygen, nitrogen and the lower 
hydrocarbons.

Gas Liquid Chromatography (GLC) - this is the most important and widely used method for
separating and determining the chemical components of volatile organic mixtures. Here the
stationary phase is a liquid that is immobilized on the surface of a solid support by adsorption
or by chemical bonding. The mixture is separated into individual components according to the
distribution ratio (partition) of the analyte components between the gaseous phase and the
immobilized liquid phase. Because of its wide applications, most GCs are configured
for the GLC technique.  
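The partition idea can be sketched numerically: a component's retention time follows from its distribution constant K between the stationary liquid and the gas phase. The dead time and phase ratio below are assumed, illustrative values:

```python
# GLC partition sketch: t_R = t_M * (1 + k), where the retention factor
# k = K / beta relates the distribution constant K = C_stationary / C_mobile
# to the column's phase ratio beta. Numbers are illustrative assumptions.
t_M = 1.0   # min, dead time (elution time of an unretained species)
beta = 250  # phase ratio V_mobile / V_stationary, a typical capillary value

def retention_time(K):
    """Retention time (min) for a component with distribution constant K."""
    k = K / beta
    return t_M * (1 + k)

for name, K in (("weakly retained component", 250), ("strongly retained component", 1000)):
    print(f"{name}: t_R = {retention_time(K):.1f} min")
```

Components with larger K spend proportionally more time dissolved in the stationary liquid and therefore elute later, which is the basis of the separation.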

Instrumentation :  The instrumentation for GC differs from that of HPLC in that the
injection port, column and detector are heated to pre-specified temperatures. Since the
mobile phase here is a gas (the carrier gas), the components present in the analyte mixture
must be vaporised so that they can be carried effectively through the column. The basic
instrumentation for GC includes a carrier gas cylinder with regulator, a flow controller for
the gas, an injection port for introducing the sample, the column, the detector and the
recorder. An outline is as follows:


In the above illustration the injection port, column oven and detector are hot zones. The
success of this technique requires appropriate selection of the column and of the temperature
conditions at which it is to be maintained throughout the analysis. Basically, the columns
for GC are classified as analytical columns and preparative columns. Analytical columns are
of two types, packed columns and open-tubular or capillary columns, which differ in the way
the stationary phase is packed inside.

Detectors play a unique role in GC instrumentation. There are a number of detectors, which
vary in design, sensitivity and selectivity. Detectors in GC are designed to generate an
electronic signal when a gas other than the carrier gas elutes from the column. A few
examples and applications of these detectors are:


Thermal Conductivity Detector (TCD) - this operates on the principle that gases eluting from
the column have a thermal conductivity different from that of the carrier gas. It is a
universal detector (it responds to most analytes) and is non-destructive, and hence is used
with preparative GC, but it is less sensitive than the other detectors.

Flame Ionization Detector (FID) -  it is one of the important detectors where the column
effluent is passed into a hydrogen flame and the flammable components are burned. In this
process a fraction of the molecules gets fragmented into charged species as positive and negative.

While positively charged ions are drawn to a collector, negatively charged ions are
attracted to the positively charged burner head; this completes an electric circuit and the
signal is amplified. The FID is very sensitive, but it destroys the sample by burning, and it
only detects organic substances that burn and fragment in a hydrogen flame (e.g.
hydrocarbons). Hence it is unsuitable for preparative GC and for inorganic substances that do
not burn.  

Electron Capture Detector (ECD)  -  this is another type of ionization detector, which
utilises the beta emissions of a radioactive source, often nickel-63, to ionize the carrier
gas molecules, generating electrons that constitute an electrical current. This
detector is used for environmental and bio-medical applications. It is especially useful for
large halogenated hydrocarbons and hence in the analysis of halogenated pesticide residues
found in environmental and bio-medical samples. It is extremely sensitive. It does not destroy
the sample and thus may be used for the preparative work.

Nitrogen/Phosphorus Detector (NPD) -  the design of this detector is the same as that of the
FID except that a bead of an alkali-metal salt is positioned just above the flame. It is also
known as the ‘Thermionic Detector’. It is useful for phosphorus- and nitrogen-containing
pesticides, the organophosphates and carbamates. The sensitivity for these compounds is
very high, since the fragmentation of other organic compounds is minimized.

Flame Photometric Detector (FPD) -  here a flame photometer is incorporated into the
instrument. The principle is that sulfur or phosphorus compounds burn in the hydrogen
flame and produce light-emitting species. This detector is specific for organic compounds
containing sulfur or phosphorus. It is very selective and very sensitive.

Electrolytic Conductivity Detector (ECD Hall) -  this detector, otherwise known as the ‘Hall
detector’, converts the eluting gaseous components into ions in liquid solution and then
measures the
electrolytic conductivity of the solution in a conductivity cell. The conversion to ions is done
by chemically oxidizing or reducing the components with a “reaction gas” in a small reaction
chamber. This detector is used in the analysis of organic halides and has excellent sensitivity
& selectivity, but is a destructive detector.

Recent developments allow the GC to be coupled with other analytical techniques such as
Infra-Red Spectrometry (Gas Chromatography-Infrared Spectrometry, GC-IR) and Mass
Spectrometry (Gas Chromatography-Mass Spectrometry, GC-MS). These are termed ‘hyphenated
techniques’ and are very efficient for qualitative analysis, since accurate and precise
information such as the mass or IR spectrum of each sample component is readily obtained as
it elutes from the GC column. This saves time and reduces the steps involved in separating
and analysing a component.

Disadvantages: Samples must be volatile and thermally stable below about 400 °C. No single
universal detector is available, and the most commonly used detectors are non-selective. Much
care must be taken in the analytical steps, starting from the selection of the column and
the detector, and the temperatures of all three zones (injection port, column oven and
detector) must be defined. Improper programming of these will lead to erratic results.




Sunday, February 5, 2012

High Performance Size Exclusion Chromatography


High Performance Size Exclusion Chromatography:  This technique separates dissolved 
species on the basis of their size and is particularly applicable to high-molecular-weight 
species such as oligomers and polymers, to determine their relative sizes and molecular 
weight distributions. Here the stationary phase is a polymer resin containing small pores. 
When the components to be separated are passed through the column, the small particles can 
easily enter these pores and their mobility is retarded, whereas the large particles, which 
cannot enter the pores, pass through the column fast and elute first. Thus the separation of 
the various sized particles is achieved through variations in elution time. The technique is 
classified into two categories based on the nature of the columns and their packing:


Gel Filtration Chromatography - which uses hydrophilic packing  to separate polar species 
and uses mostly aqueous mobile phases. This  technique is mostly used to identify the 
molecular weights of large sized proteins & bio-molecules.  

Gel Permeation Chromatography -  which uses hydrophobic packing to separate nonpolar 
species and uses nonpolar organic solvents. This technique is used to identify the molecular 
weights of polymers.
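The elution order described above (largest species first) can be sketched in a couple of lines; the species names and molecular weights are purely illustrative:

```python
# Size-exclusion sketch: species too large to enter the pores elute first,
# so the elution order is simply by decreasing molecular size/weight.
sample = {"oligomer": 2_000, "low-MW polymer": 50_000, "high-MW polymer": 500_000}

elution_order = sorted(sample, key=sample.get, reverse=True)
print("Elution order:", " -> ".join(elution_order))
```

Calibrating elution time against standards of known molecular weight is what turns this qualitative ordering into a molecular weight distribution.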

Instrumentation :  The basic HPLC system consists of  a solvent (mobile phase) reservoir, 
pump, degasser, injection device, column and  detector. The pump draws the mobile phase 
from the reservoir and pumps it to the column through the injector. At the end of the column 
(effluent end), a detector is positioned; mostly a UV absorption detector is used. In 
analytical studies, the eluent is sent to waste after detection; in preparative studies, it 
is collected fractionally for further study. The basic HPLC design is the same for all four 
main groups previously described; however, the specific detector chosen can differ with the 
type of analysis. With ion-exchange chromatography, for example, conductivity detectors are 
commonly used, for obvious reasons. Other important detectors for HPLC separations include 
the refractive index detector, fluorescence detector and mass-selective detector. The 
following is the most generalised outline of the HPLC system:   



Disadvantages :  Column performance is very sensitive to the method of packing. Further, no 
universal and sensitive detection system is available.

High Performance Ion-Exchange Chromatography

High-Performance Ion-Exchange Chromatography:  This method is used to separate 
mixtures of ions (organic or inorganic),  and finds its application mostly in protein 
separations. The stationary phase consists of very small polymer resin “beads” which have 
many ionic bonding sites on their surface, termed as Ion Exchange Resins. This resin can be 
either an anion-exchange resin, which possesses positively charged sites to attract negative 
ions, or a cation-exchange resin, which possesses negatively charged sites to attract 
positive ions. If an analyte mixture containing a mixture of ions is introduced into a column 
packed with a suitable ion-exchange resin, selected ions will attach or bond to the resin, 
thus being separated from the other species that do not bond. Later, these attached ions can 
be dislodged from the column by repeated elution with a solution containing an ion that 
competes for the charged groups on the resin surface, in other words, one that has a higher 
affinity for the charged sites on the resin than the analyte ions. Thus the analyte ions are 
exchanged and separated from the column. 

High Performance Partition Chromatography

High-Performance Partition Chromatography:  This is the most widely used liquid 
chromatographic procedure for separating most kinds of organic molecules. Here the 
components present in the analyte mixture distribute (or partition) themselves between the 
mobile phase and stationary phase as the  mobile phase moves through the column. The 
stationary phase actually consists of a thin liquid film either adsorbed or chemically bonded 
to the surface of finely divided solid particles. Of these, the chemically bonded phase is 
considered more important and has a distinct stability advantage: it is not removed from the 
solid support either by reaction or by heat, and hence it is more popular. It finds wide 
applications in various 
fields, viz., pharmaceuticals, bio-chemicals, food products, industrial chemicals, pollutants, 
forensic chemistry, clinical medicine, etc.