Introduction to Common Laboratory Assays and Technology

in Basic Skills in Interpreting Laboratory Data


After completing this chapter, the reader should be able to

  • Discuss the current and developing roles of laboratory testing in accurately diagnosing diseases

  • Describe the basic elements of photometry and the major components of a spectrophotometer

  • Explain the principles of turbidimetry and nephelometry as applied to laboratory testing

  • Review the analytic techniques of electrochemistry based on potentiometry, coulometry, voltammetry, and conductometry

  • Describe the major electrophoresis techniques and their applications

  • Describe the major analytic techniques of chromatography and compare gas- and high-performance liquid chromatography with respect to equipment and methodology

  • Explain the basic principles of immunoassays and compare the underlying principles, methods, and tests performed with radioimmunoassay, enzyme-linked immunosorbent assay, enzyme-multiplied immunoassay, fluorescent polarization immunoassay, and agglutination immunoassay tests

  • Identify the basic components of a mass spectrometry system

  • Explain the basic principles of the commonly used cytometry systems

  • Describe the impact of genomics, epigenetics, and proteomics on the personalization of medical practice and the newer roles that laboratory tests will play in the future

  • Review the basic principles of molecular diagnostics

  • Diagram the basic techniques of the polymerase chain reaction


Traditionally, the physician bases a clinical diagnosis and patient management protocol on the patient’s family and medical history, clinical signs and symptoms, and data derived from laboratory and imaging diagnostic procedures. An accurate history and physical examination of the patient are still considered among the most informative procedures in establishing accurate diagnoses of disease, with clinical laboratory test results playing important roles in confirming and ruling out certain diseases.

Pharmaceutical companies have developed drugs based on these collective observations and known disease mechanisms. Some common examples include medications for high cholesterol, which modify the absorption, metabolism, and generation of cholesterol. Agents have been developed that are aimed at improving insulin release from the pancreas and the sensitivity of muscle and fat tissues to insulin action. Antibiotics are based on the observation that microbes produce substances that inhibit other species. Antihypertensive medications have typically been designed to act on physiologic pathways involved in hypertension (such as renal salt and water absorption, vascular contractility, and cardiac output). This has often been a reactive approach, with appropriate treatments and therapy starting only after signs and symptoms appear.

The past 30 years have seen remarkable progress in the role of the laboratory in personalizing medicine, a consequence of advances in human and medical genetics. These advances have enabled a more detailed understanding of the impact of genetics in disease and have led to new disciplines: genomics, epigenetics, proteomics, and metabolomics. It is anticipated that further discoveries in these emerging domains will have a profound impact on the practice of medicine, moving it toward a more personalized approach enabled by precision diagnostics.

Many of the traditional laboratory procedures and tests described in the following parts of this chapter provide the framework upon which these potential advances will be built, as researchers simplify them and improve their throughput and analytic performance. As these tests become more automated, they will take their place alongside current testing procedures. In the United States alone, approximately 12 billion laboratory assays are performed in clinical laboratories annually. Although most laboratory testing is not performed by clinicians themselves, it is essential that they understand the more common, as well as the newer, emerging methods and techniques used to generate these clinical data. This understanding underpins the selection of the correct diagnostic assay and, most importantly, the correct interpretation of test results. This chapter provides an introduction to these methods and techniques.

Clinical laboratory testing represents a vast array of diverse procedures, ranging from the microscopic examination of tissue specimens (histopathology) to the measurement of cellular components to the amplification and detection of nucleic acids, such as the detection of a gene mutation or fusion for malignancies or the identification of antimicrobial resistance genes in bacteria. A consideration of all diverse methodologies used in these procedures is beyond the scope of this chapter, but all share some of the common characteristics of automation and mechanization. The two often intertwine; automation commonly involves the mechanization of basic manual laboratory techniques or procedures, such as those described throughout this chapter. The common goals of total laboratory automation (TLA) are increased efficiency and throughput, which lead to decreased turnaround times, reduced errors, and the ability to integrate various quality assurance and improvement processes in the laboratory.


This trend toward automation in the hospital and clinical laboratory is, in part, motivated by the drive toward higher productivity and cost efficiency.1 Another key driver for clinical laboratories is federal reimbursement policy. According to a report issued by the Office of the Inspector General (OIG) of the U.S. Department of Health and Human Services, Medicare could have saved $910 million (38%) on laboratory test reimbursement if it had lowered the reimbursement rate for the top 20 laboratory tests.2 The report concluded with recommendations to consider reintroducing competitive bidding and adjusting the reimbursement rates for these laboratory tests. Clinical laboratories, like many other departments in hospitals and other healthcare facilities, are facing the pressure of providing more services while maintaining high-quality standards with a reduced revenue stream. In its most comprehensive sense, TLA encompasses all procedures from receipt of the specimen to the reporting of results. System designs and functionality can vary depending on the specific application and manufacturer. They can involve consolidated analyzers, individual or integrated, and automated devices that address specific tasks, coupled to specimen processing and transportation systems, as well as process control software (ie, middleware) that automates each stage of the system. One plausible vision of the future is that the centralized hospital and clinical laboratory will consist mainly of automated laboratory systems capable of performing high-volume and esoteric testing operated by skilled medical laboratory scientists.3,4

Laboratory automation involves a variety of steps and generally begins with processes that are manual in nature: obtaining the specimen, identifying the patient, transporting the specimen, and conducting any preanalytic specimen processing. Once in the laboratory, a quality control (QC) process begins with a check of the received specimen against the preorder to ensure that specimens have correct identification labels and bar codes, that the correct tube was used for the blood test ordered, and that material of appropriate quality and adequate quantity is provided for the testing requested. The TLA systems are currently capable of performing only some of the previously listed preanalytic checks. Determining whether, for example, a specimen is grossly hemolyzed, icteric, or lipemic usually requires examination by a laboratory scientist.
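The label, tube-type, and volume checks described above can be expressed as a simple rule set. The sketch below is illustrative only: the order-to-tube mapping, minimum volumes, and function names are assumptions for this example, not an actual middleware configuration.

```python
# Sketch of an automated preanalytic specimen check. The mappings below
# are assumed for illustration; real TLA rules are laboratory specific.

REQUIRED_TUBE = {"basic metabolic panel": "lithium heparin",
                 "CBC": "EDTA"}                      # assumed mapping
MIN_VOLUME_ML = {"basic metabolic panel": 1.0, "CBC": 0.5}

def check_specimen(order, tube_type, volume_ml, barcode_ok):
    """Return a list of QC failures for one received specimen."""
    failures = []
    if not barcode_ok:
        failures.append("missing or unreadable identification label/barcode")
    if REQUIRED_TUBE.get(order) != tube_type:
        failures.append(f"wrong tube for {order!r}: got {tube_type!r}")
    if volume_ml < MIN_VOLUME_ML.get(order, 0.5):
        failures.append("insufficient specimen volume")
    return failures

print(check_specimen("CBC", "EDTA", 0.6, True))   # [] -> passes all checks
print(check_specimen("CBC", "serum", 0.2, True))  # two failures reported
```

Checks that need visual judgment (gross hemolysis, icterus, lipemia) fall outside such rules, as the text notes.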

In many divisions of the centralized laboratory, three major areas (eg, chemistry analyzers, hematology analyzers, and automated microbial identification systems) generate information in almost completely automated ways. Using the example of a chemistry analyzer, introduction of a specimen begins with aspiration of the sample into a continuous-flow system. Each specimen passes through the same continuous stream and is subjected to the same analytic reactions. In some systems, repeated flushing and washing of probes prevents carryover between specimens, while other systems use discrete specimen sampling with disposable pipet tips. Many results generated by automated chemistry analyzers rely on reactions based on principles of photometry, which are discussed later in this chapter. In addition to the more commonly requested serum or plasma chemistry analytes, enzymes, therapeutic drugs, hormones, and other substances can also be measured using these techniques.

All modern automated analyzers rely on computers and sophisticated software to perform these sample processing functions. Calculation (statistics on patient or control values), monitoring (linearity and QC), and display (acquisition and collation of patient results, warning messages, and delta checks) functions are routinely performed by these instruments once the specimen has been processed. Automation does not end at this stage. Many centralized laboratories have electronic interfaces that link separate analyzers to the laboratory information system (LIS). In turn, the LIS is interfaced with the hospital electronic medical record system. This interface allows vital two-way connectivity between the two systems. Laboratory orders are automatically sent from the electronic medical record to the LIS. This type of automation can prevent the errors that occur when a manual requisition system is used. Also, laboratory diagnostic information can be immediately uploaded into the patient chart for review by the clinician once results are verified, either manually by a medical laboratory scientist or through automated rule systems developed by the laboratory. Then, based on the results generated, some laboratories have created electronic rules that can automatically order repeat and reflex testing, track samples and results through the system, and manage storage and, when necessary, retrieval of specimens for repeat or additional testing.
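A delta check of the kind mentioned above compares a new result against the patient's previous value and holds the result for manual review when the change exceeds an allowed limit. The thresholds in this sketch are assumed, illustrative values, not validated limits.

```python
# Minimal delta-check rule: flag a result when it differs from the
# patient's previous value by more than an allowed absolute delta.
# Limits below are illustrative assumptions only.

DELTA_LIMITS = {"potassium": 1.0, "sodium": 8.0}  # assumed limits, mmol/L

def delta_check(analyte, previous, current):
    """Return True if the result should be held for manual review."""
    limit = DELTA_LIMITS.get(analyte)
    if limit is None or previous is None:
        return False            # no rule or no prior result: auto-verify
    return abs(current - previous) > limit

print(delta_check("potassium", 4.1, 6.2))  # True -> hold for review
print(delta_check("sodium", 140, 142))     # False -> auto-verify
```

Real rule engines also weigh the time elapsed between specimens and may use percentage rather than absolute deltas; this sketch shows only the core comparison.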

Standardization within the laboratory automation arena is an essential means of assuring QC and quality assurance for the diagnostic data. The Clinical and Laboratory Standards Institute is an organization that uses a consensus-based approach in developing a series of comprehensive standards and guidelines that serves as the “gold standard” for laboratory operation and automation.5

The discipline of informatics is a parallel component of laboratory automation. As generators and collectors of a large volume of information, laboratories provide relevant clinical information to a wide network of physicians and other healthcare professionals in an efficient manner. Informatics in the laboratory involves the use of collected data for the purposes of healthcare decision making. Modern LISs have the capability of analyzing data in a variety of ways that enhance patient care. The ability to transmit and share such information over the Internet is becoming as indispensable a function of the laboratory as performing the tests themselves. Some laboratories and healthcare systems have implemented patient access portals where patients can have limited access to their healthcare information, including laboratory test results after physicians have reviewed these results. The portals will become centers of information management for hospital-based medicine practice as well as for the community. In parallel with the development of the highly automated core laboratory, technological advances in the miniaturization of analyzers continue to enhance point-of-care (POC) testing platforms. Further progress in this area will allow greater opportunities for community engagement and outreach by laboratories and integrated health systems that are attempting to increase access to essential healthcare services and diagnostics in communities where health inequities exist.


Photometry is the measurement of light. Light is radiant energy from the ultraviolet (UV), visible, and infrared portions of the electromagnetic spectrum, and its wavelength is often expressed in nanometers (nm). Humans can naturally perceive only a limited range of about 380 to 750 nm (Table 2-1). Modern clinical laboratory instruments, however, can accurately measure absorbance or emittance between 150 nm (the low UV) and 2,500 nm (the near-infrared region).7 These instruments are classified by the source of light as well as whether the light is absorbed or emitted. Four types of photometric instruments are currently in use in laboratories: molecular absorption spectrophotometers, molecular emission spectrophotometers (fluorometers), atomic emission (flame) photometers, and atomic absorption (AA) spectrophotometers.

TABLE 2-1.

Wavelength Characteristics of Ultraviolet, Visible, and Infrared Light

Region         Wavelength (nm)    Visibility

Ultraviolet    <380               Not visible
Visible        380-750            Visible
Infrared       >750               Not visible


Molecular Absorption Spectrophotometers

Molecular absorption spectrophotometers, usually referred to simply as spectrophotometers, are commonly employed in conjunction with other methodologies, such as nephelometry, which is discussed below, and enzyme immunoassay (EIA). In spectrophotometry, analyzers measure the intensity of light at selected wavelengths. Spectrophotometers are easy to use, have relatively high specificity, produce highly accurate results, and can generate both qualitative and quantitative data. The high specificity and accuracy are obtained by reacting the isolated analyte with substances that produce colorimetric reactions.

The basic components of two types of spectrophotometers (single and double beam) are depicted in Figure 2-1. Single-beam instruments have a light source (I) (eg, a tungsten bulb or laser), which passes through an entrance slit that minimizes stray light. Specific wavelengths of light are selected using a monochromator (II). Light of a specific wavelength then passes through the exit slit and illuminates the contents of the analytical cell or cuvette (III). After passing through the test solution, the light strikes a detector, usually a photomultiplier tube (IV). This tube amplifies the electronic signal, which is then sent to a recording device (V). The result is then compared with a standard curve to yield a specific concentration of analyte.


FIGURE 2-1. Schematic of single-beam (upper portion) and double-beam (lower portion) spectrophotometers. I = radiant light source; II = monochromator; III = analytical cuvette; IV = photomultiplier; V = recording device; VI = mirror.

The double-beam instrument, similar in design to single-beam instruments, is designed to compensate for changes in absorbance of the reagent blank and light source intensity. It utilizes a mirror (VI) to split the light from a single source into two beams, one passing through the test solution and one through the reagent blank. By doing so, it automatically corrects optical errors that may be introduced in the blank as the wavelength changes.

Most measurements are made in the visible range of the spectrum, although sometimes measurements in the UV and infrared ranges are employed. The greatest sensitivity is achieved by selecting the wavelength of light in the range of maximum absorption. If substances are known to interfere at this wavelength, measurements may be made at a different wavelength in the absorption spectrum. This modified procedure allows detection or measurement of the analyte with minimal interference from other substances.
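The conversion of a measured absorbance into an analyte concentration via a standard curve, described above, can be sketched as follows. This assumes the analyte obeys the Beer-Lambert law (absorbance proportional to concentration) over the calibration range; the calibrator values are illustrative.

```python
# Converting a measured absorbance to a concentration via a linear
# standard curve, assuming Beer-Lambert behavior (A = e*l*c) over the
# calibration range. Calibrator values below are illustrative.

standards = [(0.0, 0.00), (2.0, 0.21), (4.0, 0.42), (8.0, 0.84)]  # (conc, A)

# simple least-squares line through the calibrators
n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(a for _, a in standards)
sxy = sum(c * a for c, a in standards)
sxx = sum(c * c for c, _ in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def concentration(absorbance):
    """Invert the calibration line to recover analyte concentration."""
    return (absorbance - intercept) / slope

print(round(concentration(0.42), 2))  # 4.0 concentration units
```

Commercial analyzers apply the same idea with validated, often nonlinear, curve-fitting routines.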

Molecular Emission Spectrophotometers

Molecular emission spectrophotometry is usually referred to as fluorometry. The technology found in these instruments is based on the principle of luminescence: an energy exchange process that occurs when electrons absorb electromagnetic radiation and then emit this excitation energy at a lower level (ie, at a longer wavelength). An atom or molecule that fluoresces is termed a fluorophore. Three types of luminescence (fluorescence, phosphorescence, and chemiluminescence) form the principles on which these sensitive clinical laboratory instruments operate.

Fluorescence results from a three-stage process that occurs in fluorophores. The first stage involves the absorption of radiant energy by a ground-state electron, creating an excited singlet state. During the very short lifetime of this state (on the order of nanoseconds), energy from the electronic-vibrational excited state is partially dissipated through a radiationless transfer of energy that results from interactions with the molecular environment and leads to the formation of a relaxed excited singlet state. This is followed by relaxation to the electronic ground state by the emission of radiation (fluorescence). Because energy is dissipated, the energy of the emitted photon is lower, and its wavelength longer, than those of the absorbed photon. The difference between these two energies is known as the Stokes shift. This principle is the basis for the sensitivity of the different fluorescence techniques because the emission photons can be detected at a different wavelength than the excitation photons. Consequently, the background is lower than with absorption spectrophotometry, where the transmitted light is detected against a background of incident light at the same wavelength.7
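The Stokes shift can be illustrated numerically with the photon energy relation E = hc/wavelength: because part of the excitation energy is dissipated, the emitted photon carries less energy and therefore has a longer wavelength. The excitation and emission wavelengths below are illustrative, fluorescein-like values.

```python
# Photon energy vs wavelength, illustrating why fluorescence emission
# occurs at a longer wavelength than excitation (Stokes shift).
# Wavelengths below are illustrative assumptions.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a photon of the given wavelength (E = h*c/lambda)."""
    return H * C / (wavelength_nm * 1e-9)

excitation_nm, emission_nm = 495.0, 519.0   # assumed fluorophore values
e_ex = photon_energy_joules(excitation_nm)
e_em = photon_energy_joules(emission_nm)

assert e_em < e_ex          # emitted photon carries less energy
stokes_shift_nm = emission_nm - excitation_nm
print(stokes_shift_nm)      # 24.0 nm
```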

The phenomenon of phosphorescence is similar to fluorescence because it also results from the absorption of radiant energy by a molecule; however, it is also a competitive process. Unlike fluorescence, which results from a singlet-singlet transition, phosphorescence is the result of a triplet-singlet transition. When a pair of electrons occupies a molecular orbital in the ground or excited state, a singlet state is created. When the electrons are no longer paired, three different arrangements are possible, each with a different magnetic moment, creating the triplet state. The electronic energy of a triplet state is lower than a singlet state; therefore, when the relaxed excited singlet state overlaps with a higher triplet state, energy may be transferred through a process called intersystem crossing. As in the case of an excited singlet state, energy may be dissipated through several radiationless mechanisms to the electronic ground state; however, when a triplet-singlet transition occurs, the result is phosphorescence. The probability of this type of transition is much lower than a singlet-singlet transition (fluorescence), and the emission wavelength and decay times are also longer than for fluorescence emission. Because the various forms of radiationless energy transfer compete so effectively, phosphorescence is generally limited to certain molecules, such as many aromatic and organometallic compounds, at very low temperatures or in highly viscous solutions.8,9

The phenomenon of chemiluminescence is also similar to that of fluorescence in that it results from light emitted from an excited singlet state. However, unlike both fluorescence and phosphorescence, the excitation energy is caused by a chemical or electrochemical reaction. The energy is typically derived from the oxidation of an organic compound, such as luminol, luciferin, or an acridinium ester. Light is derived from the excited products that are formed in the reaction.

Different instruments have been developed that use these basic principles of luminescence. These devices use similar basic components along the following pathway: a light source (laser or mercury arc lamp), an excitation monochromator, a sample cuvette, an emission monochromator, and a photodetector.7 Although the principles of these instruments are relatively straightforward, various modifications have been developed for specific applications.

An important example is fluorescent polarization in fluorometers. Fluorescent molecules (fluorophores) become excited by polarized light when the plane of polarization is parallel to their absorption transition vector, provided the molecule remains relatively stationary throughout the excited state. If the molecules rotate rapidly, light will be emitted in a different plane than the excitation plane. The intensity of light emitted by the molecules in the excitation polarization plane and at 90° permits the fluorescence polarization to be measured. The degree to which the emission intensity varies between the two planes of polarization is a function of the mobility of the fluorophore. Large molecules move slowly during the excited state and will remain highly polarized. Small molecules that rotate faster will emit light that is depolarized relative to the excitation plane.11
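The measurement described above is conventionally summarized as the polarization P = (Ipar - Iperp) / (Ipar + Iperp), where the two intensities are measured in the excitation plane and at 90 degrees to it. The intensity values in this sketch are illustrative.

```python
# Fluorescence polarization from emission intensities measured parallel
# and perpendicular to the excitation plane. Intensities are illustrative.

def polarization(i_parallel, i_perpendicular):
    """P = (Ipar - Iperp) / (Ipar + Iperp); ranges from depolarized ~0 up."""
    return (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

# A large, slowly rotating fluorophore (eg, antibody bound) stays polarized:
print(round(polarization(800, 200), 2))   # 0.6 -> highly polarized
# A small, rapidly rotating fluorophore emits nearly depolarized light:
print(round(polarization(520, 480), 2))   # 0.04 -> mostly depolarized
```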

One of the most common applications of fluorescence polarization is competitive immunoassays, used to measure a wide range of analytes, including therapeutic and illicit drugs, hormones, antigens, and antibodies. This important methodology involves the addition of a known quantity of fluorescent-labeled analyte molecules to a serum antibody (specific to the analyte) mixture. The labeled analyte will emit depolarized light because its motion is not constrained. However, when it binds to an antibody, its motion will decrease, and the emitted light will be more polarized. When an unknown quantity of an unlabeled analyte is added to the mixture, competitive binding for the antibody will occur and reduce the polarization of the labeled analyte. By using standard curves of known drug concentrations versus polarization, the concentration of the unlabeled analyte can be determined.9
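Reading an unknown concentration off such a standard curve can be sketched by linear interpolation between calibrators. The calibrator concentrations and polarization values below are illustrative assumptions; real instruments use validated curve-fitting routines.

```python
# Sketch of reading an unknown drug concentration off an FPIA standard
# curve. In a competitive FPIA, polarization falls as unlabeled analyte
# displaces the labeled tracer from antibody. Calibrators are illustrative.

curve = [(0.0, 0.30), (5.0, 0.24), (10.0, 0.18), (20.0, 0.10)]  # (conc, P)

def conc_from_polarization(p):
    """Linearly interpolate concentration between bracketing calibrators."""
    for (c1, p1), (c2, p2) in zip(curve, curve[1:]):
        if p2 <= p <= p1:                      # P decreases with conc
            return c1 + (p1 - p) * (c2 - c1) / (p1 - p2)
    raise ValueError("polarization outside calibrated range")

print(round(conc_from_polarization(0.21), 2))  # 7.5, midway between 5 and 10
```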

Atomic Emission and Atomic Absorption Spectrophotometers

Atomic absorption (AA) spectrophotometry has limited use in most modern clinical laboratories; it is currently associated mainly with toxicology laboratories, where poisonous substances, such as lead and arsenic, need to be identified. In this technique, the element is dissociated from its chemical bonds (atomized) and placed into an unexcited ground state (neutral atom). In this state, the element is in its lowest energy state and capable of absorbing energy in a narrow range that corresponds to its line spectrum.10 Generally speaking, AA spectrophotometry methods have greater sensitivity compared with flame emission methods. Furthermore, due to the specificity of the wavelength from the cathode lamp, AA methods are much more specific for the element being measured.31


When light passes through a solution, it can be either absorbed or scattered. The measurement of light scatter has been applied to various immunoassays for specific proteins or haptens. Turbidimetry is the technique for measuring the reduction in intensity of the transmitted light: the turbidity of a solution decreases the intensity of the incident light beam as it passes through particles in the solution. A major advantage of turbidimetry is that measurements can be made with laboratory instruments (eg, a spectrophotometer) used for other procedures in laboratory testing. Errors associated with this method usually involve sample and reagent preparation. For example, because the amount of light blocked depends on both the concentration and size of each particle, a difference in particle size between the sample and the standard is one cause of error. The length of time between sample preparation and measurement, another cause of error, should be consistent because particles settle to varying degrees, allowing more or less light to pass. Relatively high particle concentrations are necessary because this method measures a small decrease in a large transmitted-light signal.
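Because turbidimetric readings are taken on an ordinary spectrophotometer, the blocked light is usually reported as absorbance, which relates to percent transmittance by the standard expression A = 2 - log10(%T):

```python
# Relation between percent transmittance and absorbance as read on a
# spectrophotometer used for turbidimetry: A = -log10(T) = 2 - log10(%T).
# Transmittance values below are illustrative.

import math

def absorbance_from_percent_t(percent_t):
    """Absorbance corresponding to a percent-transmittance reading."""
    return 2.0 - math.log10(percent_t)

print(round(absorbance_from_percent_t(100.0), 3))  # 0.0, no light blocked
print(round(absorbance_from_percent_t(50.0), 3))   # 0.301
print(round(absorbance_from_percent_t(10.0), 3))   # 1.0
```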

Nephelometry, which is similar to turbidimetry, is a technique used for measuring the scatter of light by particles. The main differences are that (1) the light source is usually a laser and (2) the detector, used to measure scattered light, is at a right angle to the incident light. The light scattered by the particles is a function of their size and number. Nephelometric measurements are more precise than turbidimetric ones because the smaller signal generated at low analyte concentrations is more easily detected against a very low background.11 Because antigen–antibody complexes are easily detected by this method, it is commonly employed in combination with EIAs. Nephelometers are routinely used in clinical microbiology laboratories to prepare a standardized inoculum of a bacterium for antimicrobial susceptibility testing.


Refractometry measurements are based on the principle that light bends as it passes through different media. The ability of a liquid to bend light depends on several factors: the wavelength of the incident light, the temperature, the physical characteristics of the medium, and the solute concentration in the medium. By keeping the first three parameters constant, refractometers can measure the total solute concentration of a liquid. This procedure is particularly useful as a rapid screening test because no chemical reagents or reactions are involved.7

Refractometers are commonly used to measure total dissolved plasma solids (mostly proteins) and urine specific gravity. In the refractometer, light is passed through the sample and then through a series of prisms. The refracted light is projected on an eyepiece scale. The scale is calibrated in grams per deciliter for serum protein, and in the case of urine, for specific gravity. In the eyepiece, a sharp line of demarcation is apparent and represents the boundary between the sample and distilled water. In the case of plasma samples, the refraction angle is proportional to the total dissolved solids. Although proteins are the predominant chemical species, other substances such as electrolytes, glucose, lipids, and urea contribute to the refraction angle. Therefore, measurements made on plasma do not correlate exactly to the true protein concentrations, but as the nonprotein solutes contribute to the total solutes in a predictable manner, accurate corrections are possible.12


In the clinical laboratory, osmometer readings are interpreted as a measure of the total concentration of solute particles and are used to measure the osmolality of biological fluids such as serum, plasma, or urine. When osmotically active particles are dissolved in a solvent (water, in the case of biological fluids), four physical properties of the water are affected: the osmotic pressure and the boiling point are increased, and the vapor pressure and the freezing point are decreased. Because these colligative properties are interrelated, each can be expressed mathematically in terms of the others and in terms of osmolality. Osmolarity is the number of solute particles per liter of solution, whereas osmolality is the number of solute particles per kilogram of solvent. Consequently, several methods can be used to measure osmolality, with freezing-point depression and vapor pressure osmometry being used most routinely.13

The most commonly used devices to measure osmolality or other colligative properties of a solution are freezing-point depression osmometers. In these analyzers, the sample is rapidly cooled several degrees below its freezing point in the cooling chamber. The sample is stirred to initiate freezing of the supercooled solution. When the freezing point of the solution is reached (the point where the rate of the heat of fusion released by ice formation comes into equilibrium with the rate of heat removal by the cooling chamber), the osmolality can be calculated.7
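The calculation at the freezing point uses the cryoscopic constant of water, approximately 1.858 degrees C of depression per osmol of solute particles per kilogram of water. The measured freezing point in this sketch is an illustrative value for a normal serum.

```python
# Osmolality from the measured freezing-point depression, using the
# cryoscopic constant of water (~1.858 deg C per osmol/kg). The measured
# freezing point below is an illustrative, roughly normal serum value.

KF_WATER = 1.858   # deg C of freezing-point depression per osmol/kg

def osmolality_mosm_per_kg(freezing_point_c):
    """Freezing point of the sample in deg C (pure water freezes at 0)."""
    depression = 0.0 - freezing_point_c
    return depression / KF_WATER * 1000.0   # osmol/kg -> mOsm/kg

print(round(osmolality_mosm_per_kg(-0.53)))  # ~285 mOsm/kg
```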

In certain situations, it is important to measure the colloid osmotic pressure (COP), a direct measure of the contribution of plasma proteins to the osmolality. Because of the large molecular weight of plasma proteins, their contribution to the total osmolality is very small as measured by freezing-point depression and vapor pressure osmometers. Because a low COP favors a shift of fluid from the intravascular compartment to the interstitial compartment, measuring the COP is particularly important in monitoring intravascular volume and useful in guiding fluid therapy in different circumstances to prevent peripheral and pulmonary edema.

The COP osmometer, also known as a membrane osmometer, consists of two fluid-filled chambers separated by a semipermeable membrane. One chamber is filled with a colloid-free physiologic saline solution that is in contact with a pressure transducer. When the plasma or serum is placed in the sample chamber, fluid moves by osmosis from the saline chamber to the sample chamber, thus causing a negative pressure to develop in the saline chamber. The resultant pressure is the COP.13


In the clinical laboratory, analytic electrochemical techniques involve the measurement of the current or voltage produced by the activity of different types of ions. These analytic techniques are based on the fundamental electrochemical phenomena of potentiometry, coulometry, voltammetry, and conductometry.


Potentiometry involves the measurement of electrical potential differences between two electrodes in an electrochemical cell at zero current flow. This method is widely used in both laboratory-based analyzers and POC analyzers to measure pH, pCO2, and electrolytes in whole blood samples. This electrochemical method is based on the Nernst equation, which relates the potential to the concentration of an ion in solution, to measure analyte concentrations.14 Each electrode or half-cell in an electrochemical cell consists of a metal conductor that is in contact with an electrolyte solution. One of the electrodes is a reference electrode with a constant electric potential; the other is the measuring or indicator electrode. The boundaries between the ion conductive phases in the cell determine the type of potential gradients that exist between the electrodes and are defined as redox (oxidation reduction), membrane, and diffusion potentials.

A redox potential occurs when the two electrolyte solutions in the electrochemical cell are brought into contact with each other by a salt bridge so that the two solutions can achieve equilibrium. A potentiometer may be used to measure the potential difference between the two electrodes. This is known as the redox potential difference because the reaction involves the transfer of electrons between substances that accept electrons (oxidant) and substances that donate electrons (reductant). Junctional potentials rather than redox potentials occur when either a solid state or liquid interface exists between the ion conductive phases. These produce membrane or diffusion potentials, respectively. In each case the concentration of an ion in solution can be measured using the Nernst equation, which relates the electrode potential to the activity of the measured ions in the test solution7:

E = E0 + (2.303RT/zF) log(Cox/Cred)

where E = the total potential (in mV), E0 = the standard reduction potential, R = the gas constant, T = the absolute temperature (in kelvins), F = the Faraday constant, z = the number of electrons involved in the reduction reaction, Cred = the molar concentration of the ion in the reduced form, and Cox = the molar concentration of the ion in the oxidized form.
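The Nernst relationship can be evaluated numerically at 25 degrees C (298.15 K); the example standard potential and concentrations below are illustrative.

```python
# Evaluating the Nernst equation at 25 deg C (298.15 K), in millivolts:
# E = E0 + (2.303*R*T/(z*F)) * log10(Cox/Cred). Example values are
# illustrative, not a specific electrode system.

import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # absolute temperature, K
F = 96485.0    # Faraday constant, C/mol

def nernst_mv(e0_mv, z, c_ox, c_red):
    slope_mv = 2.303 * R * T / (z * F) * 1000.0  # ~59.2 mV per decade at 25 C
    return e0_mv + slope_mv * math.log10(c_ox / c_red)

# A tenfold excess of the oxidized form shifts a one-electron couple
# about +59 mV from its standard potential:
print(round(nernst_mv(0.0, 1, 10.0, 1.0)))  # 59
```

The roughly 59 mV-per-decade slope at 25 degrees C is the familiar figure behind pH and other ion-selective electrode calibrations.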

Ion-selective electrodes (ISEs) consisting of a membrane that separates the reference and test electrolyte solutions are very selective and sensitive for the ions that they measure. For this reason, further discussion on potentiometry will focus on these types of electrodes. The ISE method, having comparable or better sensitivity than flame photometry, has become the principal test for determining urine and serum electrolytes in the clinical laboratory. Typically, ion concentrations such as sodium, potassium, chloride, calcium, and lithium are measured using this method (Table 2-2).

TABLE 2-2.

Common Laboratory Tests Performed with Various Assays

ISE
Electrolytes (sodium, potassium, chloride, calcium, lithium, total carbon dioxide)
Primary testing method

GC
Toxicologic screens, organic acids, drugs (eg, benzodiazepines and TCAs)
Primary testing method

HPLC
Toxicologic screens, vanilmandelic acid, hydroxy-vanilmandelic acid, amino acids, drugs (eg, indomethacin, anabolic steroids, cyclosporine)
Primary and secondary or confirmatory testing methods

ELISA
Serologic tests (eg, ANA, rheumatoid factor, hepatitis B, cytomegalovirus, and human immunodeficiency virus antigens/antibodies)
Primary testing method

EMIT
Therapeutic drug monitoring (eg, aminoglycosides, vancomycin, digoxin, antiepileptics, antiarrhythmics, theophylline), toxicology/drugs of abuse testing (acetaminophen, salicylate, barbiturates, TCAs, amphetamines, cocaine, opiates)
Primary testing method

FPIA
Therapeutic drug monitoring (eg, aminoglycosides, vancomycin, antiepileptics, antiarrhythmics, theophylline, methotrexate, digoxin, cyclosporine), thyroxine, triiodothyronine, cortisol, amylase, cholesterol, homocysteine
Primary testing method

PCR
Microbiologic and virologic markers of organisms and genetic markers
Primary testing method

ANA = antinuclear antibody; ELISA = enzyme-linked immunosorbent assay; EMIT = enzyme-multiplied immunoassay technique; FPIA = fluorescent polarization immunoassay; GC = gas chromatography; HPLC = high-performance liquid chromatography; ISE = ion-selective electrode; PCR = polymerase chain reaction; TCAs = tricyclic antidepressants.

The principle of the ISE involves the generation of a small electrical potential when a particular ion makes contact with an electrode. The electrode selectively binds the ion to be measured. To measure the concentration, the circuit must be completed with a reference electrode. The three types of electrodes are ion-selective glass membranes, solid-state electrodes, and liquid ion-exchange membranes. As shown in Figure 2-2, ion-selective glass membranes preferentially allow hydrogen (H+), sodium (Na+), and ammonium (NH4+) ions to cross a hydrated outer layer of glass. The H+ glass electrode or pH electrode is the most common electrode for measuring H+. Electrodes for Na+, potassium (K+), lithium (Li+), and NH4+ are also available. An electrical potential is created when these ions diffuse across the membrane.


The pH meter is an example of a test that uses an ISE to measure the concentration of hydrogen ions. An electric potential develops when hydrogen ions come in contact with the ISE (A). The circuit is completed using a reference electrode (B) submerged in the same liquid as the ISE (also known as the liquid junction). The concentration can then be read on a potentiometer (C).

Solid-state electrodes consist of halide-containing crystals for measuring specific ions. An example is the silver–silver chloride electrode for measuring chloride.7 Liquid ion-exchange membranes contain a water-insoluble, inert solvent that can dissolve an ion-selective carrier. Ions outside the membrane produce a concentration-related potential with the ions bound to the carrier inside the membrane.7 The electrodes are separated from the sample by a liquid junction or salt bridge. Because the liquid junction generates its own voltage at the sample interface, it is a source of error. This error is overcome by adjusting the composition of the liquid junction.15 Overall, this method is simple to use and more accurate than flame photometry for samples having low plasma water due to conditions such as hyperlipoproteinemia.16

Compared with other techniques, such as flame photometry, ISEs are relatively inexpensive and simple to use and have an extremely wide range of applications and wide concentration range. They are also very useful in biomedical applications because they measure the activity of the ion directly in addition to the concentration.


Coulometry is an analytical method for measuring an unknown concentration of an analyte in solution by completely converting the analyte from one oxidation state to another. This is accomplished through a form of titration where a standardized concentration of the titrant is reacted with the unknown analyte, requiring no chemical standards or calibration. The point at which all of the analyte has been converted to the new oxidation state is called the endpoint and is determined by some type of indicator that is also present in the solution.

This technique is based on the Faraday law, which relates the quantity of electric charge passed through the cell to the amount of substance produced or consumed in the redox process. It is expressed as znF = It = Q, where z is the number of electrons involved in the reaction, n is the quantity (moles) of the analyte, F is the Faraday constant (96,487 C/mol), I is the current, t is the time, and Q is the amount of charge that passes through the cell.

Coulometry is often used in clinical applications to determine the concentration of chloride in clinical samples. The chloridometer is used to measure the chloride ion (Cl−) concentration in sweat, urine, and cerebrospinal fluid samples.7 The device applies a constant current across two silver electrodes. The silver ions (Ag+) that are generated at a constant rate react with the Cl− ions in the sample. The reaction that produces insoluble AgCl ceases once excess Ag+ ions are detected by the indicator and reference electrodes. Because the quantity of Ag+ ions generated is known, the quantity of Cl− ions may be calculated using the Faraday law.
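The chloridometer arithmetic follows directly from the Faraday law above (Q = It = znF, solved for n). A minimal sketch, using a hypothetical current and titration time rather than values from the text:

```python
# Coulometric titration sketch based on the Faraday law: n = I*t / (z*F).
F = 96487.0  # Faraday constant, C/mol, as given in the text

def moles_titrated(current_A, time_s, z=1):
    """Moles of Ag+ generated (and thus Cl- precipitated as AgCl)."""
    return (current_A * time_s) / (z * F)

# Hypothetical run: 1 mA held for 96.487 s generates 1 µmol of Ag+,
# so 1 µmol of Cl- was present before the endpoint was reached.
print(moles_titrated(1e-3, 96.487))  # 1e-06
```

Because the current and time are both known precisely, no chemical calibrators are needed, which is the point made above about coulometry requiring no chemical standards.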


Voltammetry encompasses a group of electrochemical techniques in which a potential is applied to an electrochemical cell with the simultaneous measurement of the resulting current. By varying the potential of an electrode, it is possible to oxidize and reduce analytes in a solution. At more positive potentials, the electrons within the electrode become lower in energy and the oxidation of species in a solution becomes more likely. At lower potentials, the opposite occurs. By monitoring the current of an electrochemical cell at varying electrode potentials, it is possible to determine several parameters, such as concentration, reaction kinetics, and thermodynamics of the analytes.13

This technique differs from potentiometry in several important ways. Voltammetric techniques use an externally applied force (potential) to generate a signal (current) in a way that would not normally occur, whereas in potentiometric techniques, the analytical signal is produced internally through a redox reaction. The electrode arrangement is also quite different between the two techniques. To analyze both the potential and the resulting current, three electrodes are employed in voltammetric devices: the working, auxiliary, and reference electrodes, which (when connected through a voltmeter) permit the application of specific potential functions. Measurement of the resulting current can yield information about ionic concentrations, conductivity, and diffusion. The ability to apply different types of potential functions or waveforms has led to the development of several voltammetric techniques: linear potential sweep polarography, pulse polarography, cyclic voltammetry, and anodic stripping voltammetry.7 These analytical methods, though not commonly used in clinical laboratories, are very sensitive (detection limits as low as the parts-per-billion range) and can identify trace elements in patient tissues such as hair and skin.


Conductometry is the measurement of current flow (proportional to conductivity) between two nonpolarized electrodes across which a known potential has been established. Clinical applications include urea estimation through measurement of the rate of change of conductance that occurs with the urease-catalyzed formation of ammonium (NH4+) and bicarbonate (HCO3−) ions. The technique is limited at low concentrations because of the high background conductance of biological fluids. Perhaps the most important application of impedance (inversely proportional to conductance) measurements in the clinical laboratory involves the Coulter principle for the electronic counting of blood cells. This method is discussed in detail in the cytometry section.


Electrophoresis is a common laboratory technique with applications in various clinical laboratory disciplines. Routine diagnostic applications of electrophoresis technology exist for infectious diseases, malignancies, genetic diseases, paternity testing, forensic analysis, and tissue typing for transplantation. Electrophoresis involves the separation (ie, migration) of charged solutes or particles in an electric field. Briefly, samples are applied to a solution or a support medium (eg, agarose gel) and exposed to an electric field for a set duration of time. The migration of molecules within the support medium when exposed to the electrical field depends on the overall molecular charge, shape, and size of the molecule being studied.17 Because most molecules of biologic importance are both water-soluble and charged, this analytical tool is one of the most important techniques for molecular separation in the clinical laboratory. The main types of electrophoresis techniques used in both clinical and research laboratories include cellulose acetate, agarose gel, polyacrylamide gel, isoelectric focusing (IEF), two-dimensional, and capillary electrophoresis (CE). Because of the many clinical applications, electrophoresis apparatus, cellulose acetate and agarose gels, and reagents are available from commercial suppliers for each of these specific applications.

The primary application of electrophoresis is the analysis and purification of very large molecules such as proteins and nucleic acids. Electrophoresis also can be applied to the separation of smaller molecules, including charged sugars, amino acids, peptides, nucleotides, and simple ions. Through the proper selection of the medium for electrophoretic separations, extremely high resolution and sensitivity of separation can be achieved. Electrophoretic systems are usually combined with highly sensitive detection methods to monitor and analyze the separations that suit the specific application.18

The basic electrophoresis apparatus consists of a high-voltage direct-current power supply that provides the electrical current, electrodes, a buffer, and a support for the buffer or a capillary tube. The support medium provides a matrix that facilitates separation of the particles. Common support matrices include filter paper, cellulose acetate membranes, agarose, and polyacrylamide gels. When an electric potential is applied across the electrophoresis apparatus, the charged molecules migrate to the anode or the cathode of the system depending on their charge. The force that acts on these molecules is proportional to the net charge on the molecular species and the applied voltage (electromotive force). This relationship is expressed as F = qE/d, where F is the force exerted on the charged molecule, q is its net charge, E is the electromotive force, and d is the distance across the electrophoretic medium.13

Although the basic principles are simple, procedures employed in the electrophoresis process are considerably more complex. For molecules to be separated, they must be dissolved in a buffer that contains electrolytes, which carry the applied current and fix the pH. The mobility of the molecules will be affected locally by the charge of the electrolytes, the viscosity of the medium, their size, and degree of asymmetry. These factors are related by the following equation:
µ = q/(6πηr)

where µ is the electrophoretic mobility of the charged molecule, q is its net charge, η is the viscosity of the medium, and r is the ionic radius.19
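A small sketch of this mobility relationship, using the Stokes-drag form µ = q/(6πηr); the charge, viscosity, and ionic radius below are hypothetical, order-of-magnitude values, not data from the text.

```python
import math

# Electrophoretic mobility sketch: mu = q / (6*pi*eta*r).
# Units: q in coulombs, eta in Pa*s, r in meters -> mu in m^2/(V*s).
def mobility(q, eta, r):
    return q / (6 * math.pi * eta * r)

# Hypothetical small ion: one elementary charge, water-like viscosity
# (~1 mPa*s), radius of about 0.2 nm.
e = 1.602e-19  # elementary charge, C
mu = mobility(e, 1.0e-3, 0.2e-9)
print(mu)  # on the order of 1e-8 m^2/(V*s)
```

The inverse dependence on viscosity and radius is what the paragraph above describes: larger or more asymmetric molecules, or a more viscous medium, reduce mobility.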

The conditions in which this process occurs are further complicated by the use of a support medium, which is necessary to minimize diffusion and convective mixing of the bands (caused by the heat generated as current flows through the buffer). The most common media include polysaccharides such as cellulose and agarose and synthetic media (eg, polyacrylamide). The porosity of these media will, to a large extent, determine the resistance to movement of different ionic species. Therefore, the type of support medium used depends on the application. The factors cited above affecting the electrophoresis process are controllable and provide optimal resolution for each specific application.

Gel Electrophoresis

Cellulose Acetate and Agarose Gel Electrophoresis

Cellulose acetate and agarose gel electrophoresis are methods commonly used in many clinical laboratories for both serum protein and hemoglobin separations. Serum protein electrophoresis is often used as a screening procedure for the detection of disease states, such as inflammation, protein loss, monoclonal gammopathies, and other dysproteinemias. When the molecules have been separated into bands, specific stains can then be used to visualize them. Densitometry is typically used to quantify each band. When a monoclonal immunoglobulin (Ig) pattern is identified, another technique, immunofixation electrophoresis, is used to quantify IgG, IgA, IgM, IgD, and IgE that are present in the specimen. Once these proteins are separated on an agarose gel, specific antibodies are directed at the immunoglobulins. The sample is then fixed and stained to visualize and quantify the bands.20 Separation of proteins may also be accomplished with IEF where the proteins migrate through a stable pH gradient with the pH varying in the direction of migration. Each protein moves to its isoelectric point (ie, the point where the protein’s charge becomes zero and migration ceases). This technique is often used for separating isoenzymes and hemoglobin variants.

Hemoglobin electrophoresis is the most common method for screening for the presence of abnormal hemoglobin protein variants (hemoglobinopathies), of which more than 1,000 have been described. These variant forms of hemoglobin are often the result of missense mutations in the various globin genes (α, β, γ, or δ) due to a single nucleotide substitution. Many of these abnormal hemoglobin variants are benign, without any clinical signs or symptoms, and are identified only incidentally. However, certain mutations can have different manifestations, including altering the structure, stability, synthesis, or function of the globin protein. Hemoglobin S disease (sickle cell disease), the most common hemoglobinopathy, is caused by a single nucleotide substitution of valine for glutamic acid at the sixth position of the β globin chain. Hemoglobin S disease results in changes that affect the shape and deformability of red blood cells, which ultimately leads to vaso-occlusive disease and hemolysis.

Normal adult hemoglobin is composed of two α-subunits and two β-subunits (α2β2) and comprises approximately 97% of the total hemoglobin. Hemoglobin proteins are separated on a cellulose acetate membrane at an alkaline pH (8.6) initially and then on an agarose gel at an acid pH (6.2). Electrophoresis at both pH conditions is performed for optimal resolution of comigrating hemoglobin bands that occur at either of the pH conditions. For example, hemoglobin S, which causes patients to have sickle cell disease or sickle cell trait, comigrates with hemoglobins D and G at pH 8.6, but it can be separated at pH 6.2. The choice of support media is determined by the resolution of the hemoglobin bands that are achieved. Following electrophoresis, the bands are stained for visualization and the relative proportions of the hemoglobins are obtained by densitometry.21
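The densitometric step described above reduces to simple proportions of band areas. A sketch with hypothetical scanner readouts (the ~97% hemoglobin A figure mirrors the normal adult composition noted above):

```python
# Relative hemoglobin proportions from densitometric band areas.
# Band areas are hypothetical readouts, not reference values.
areas = {"Hb A": 9700.0, "Hb A2": 250.0, "Hb F": 50.0}

total = sum(areas.values())
proportions = {hb: round(100.0 * a / total, 1) for hb, a in areas.items()}
print(proportions)  # each band as a percentage of total hemoglobin
```

In practice the densitometer software performs this normalization automatically after integrating each stained band.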

Electrophoresis is also an important laboratory technique used to separate deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein fragments. Three common techniques are the Southern, northern, and western blots. These techniques differ in the target molecules that are separated. Southern blots separate DNA that is cut with restriction endonucleases and then identified with a labeled (usually radioactive) DNA probe. Northern blots separate fragments of RNA that are probed with labeled DNA or RNA. Western blots separate proteins that are probed with radioactive or enzymatically tagged antibodies. The initial confirmation of human immunodeficiency virus (HIV) infection in patients who were repeatedly seroreactive for HIV antibodies was performed using western blots (Figure 2-3). Western blots are largely no longer performed in clinical laboratories, but confirmatory assays employ the same targets in an immunochromatographic format.

HIV-1 viral structure. (A) The complete structure of a single HIV virion, with key portions of the virus labeled. (B) A diagram of the HIV western blot confirmatory assay. Separated HIV-1 proteins are transferred from the gel to a nitrocellulose membrane. If antibodies specific to viral antigens are present, they bind to the membrane and remain visible following a series of wash steps and application of a horseradish peroxidase conjugate, which results in the appearance of bands seen visually on the nitrocellulose strip.

Each method involves a series of steps that leads to the detection of the various targets. Following electrophoresis, typically performed with an agarose or polyacrylamide gel, the molecules are transferred to a solid stationary support during the probe hybridization, washing, and detection stages of the assay. The DNA, RNA, or protein in the gel may be transferred onto nitrocellulose paper through electrophoresis or capillary blotting. In the former method, the molecules, by virtue of their negative charge, are transferred by electrophoresis. The latter method involves layering the gel on wet filter paper with the nitrocellulose paper on top. Dry filter paper is placed on the nitrocellulose paper and the molecules are transferred with the flow of buffer from the wet to dry filter paper via capillary action. Following the transfer, the nitrocellulose paper is soaked in a blocking solution containing high concentrations of DNA, RNA, or protein. This prevents the probe from randomly sticking to the paper during hybridization. During the hybridization stage, the labeled DNA, RNA, or antibody is incubated with the blot where binding with the molecular target occurs. The probe-target hybrids are detected following a wash step to remove any unbound probe.

Two-Dimensional Electrophoresis

Two-dimensional electrophoresis is a powerful and widely used method in proteomics for the analysis of complex protein mixtures extracted from cells, tissues, or other biological samples. Proteins are sorted according to two independent properties: IEF, which separates proteins according to their isoelectric points, and sodium dodecylsulfate–polyacrylamide gel electrophoresis, which separates proteins according to their molecular weights. Each spot on the resulting two-dimensional array corresponds to a single protein species in the sample.22 Using this technique, thousands of different proteins can be separated through the use of Coomassie dyes, silver staining, radiography, or fluorographic analysis; quantified; and characterized. Additionally, this technology can be used to explore protein families and search for differences, either genetic or disease based.

Capillary Electrophoresis

Capillary electrophoresis (CE) includes diversified analytical techniques, such as capillary zone electrophoresis (CZE), capillary gel electrophoresis (CGE), capillary chromatography, capillary IEF, micelle electrokinetic capillary chromatography, and capillary isotachophoresis. Currently, only the first two in the previous list have practical applications in the clinical laboratory. Although historically a research tool, CE is being adapted for various applications in the clinical laboratory because of its rapid and high-efficiency separation power, diverse applications, and potential for automation. The possibility of CE becoming an important technology in the clinical laboratory is illustrated by its use in the separation and quantification of a wide spectrum of biological components ranging from macromolecules (proteins, lipoproteins, and nucleic acids) to small analytes (amino acids, organic acids, or drugs).

The CE apparatus consists of a small-bore, fused-silica capillary (25 to 75 µm), approximately 50 to 100 cm in length, connected to a detector at one end and, via buffer reservoirs, to a high-voltage power supply (25 to 35 kV) at the other end.23 Because the small capillaries efficiently dissipate heat, high voltages can be used to generate intense electric fields across the capillary, producing efficient separations in short times. A CE separation requires only a very small amount of sample (0.1 to 10 nL). When the sample solution is injected into the apparatus, the molecules in the solution migrate through the capillary due to their charge in the electric field (electrophoretic mobility) or due to electroosmotic flow (EOF). The negatively charged surface of the silica capillary attracts positively charged ions in the buffer solution, which in turn migrate toward the cathode and carry solvent molecules in the same direction. The overall movement of the solvent is called EOF. The separated proteins are eluted from the cathode end of the capillary. Quantitative detectors such as fluorescence, absorbance, and electrochemical detectors and mass spectrometry (MS) can be used to identify and quantify the proteins in the solution in amounts as small as 10^−20 mol of substance in the injected volume.23 Two major advantages of CE platforms are the ability to apply higher voltages than traditional electrophoresis platforms and ease of automation.
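The field strengths that make CE separations fast follow directly from the applied voltage and capillary length (field = voltage/length). A quick sketch using the voltage and length ranges quoted above:

```python
# Electric field strength across a CE capillary: field = voltage / length.
# Voltage and length values are taken from the ranges quoted in the text.
def field_V_per_cm(voltage_V, length_cm):
    return voltage_V / length_cm

print(field_V_per_cm(30_000, 50))   # 600.0 V/cm across a short capillary
print(field_V_per_cm(25_000, 100))  # 250.0 V/cm across a longer one
```

Fields of hundreds of V/cm are far beyond what a slab gel can tolerate without overheating, which is why the capillary's efficient heat dissipation matters.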

Capillary Zone Electrophoresis

Capillary zone electrophoresis (CZE) is the most widely used type of CE and separates both anionic and cationic solutes, usually in a single analysis. In CZE, the anions and cations migrate in different directions, but both are rapidly carried toward the cathode by EOF, which is usually significantly faster than the solute velocity. Therefore, all molecules, regardless of their charge, will migrate to the cathode, and the negative, neutral, and positive species can be detected and separated. Common clinical applications include high-throughput separation of serum and urine proteins and hemoglobin variants. Other applications are likely to become commonplace in the future; however, because these systems are expensive, conventional methods are currently used.

Capillary Gel Electrophoresis

Capillary gel electrophoresis (CGE) is the CE analog of traditional gel electrophoresis and is used for the size-based separation of biological macromolecules such as oligonucleotides, DNA restriction fragments, and proteins. The separation is performed by filling the capillary with a sieve-like matrix, such as polyacrylamide or agarose, to reduce the EOF. Larger molecules such as DNA therefore move more slowly, resulting in better separation. Although CGE is primarily used in research, clinical applications are being developed.

Pulsed-Field Gel Electrophoresis

In pulsed-field gel electrophoresis (PFGE), the current is alternately applied to different pairs of electrodes so that the electric field is cycled through different directions. As the field changes direction, molecules reorient themselves to the new field before migration continues through the agarose gel. This technique has been widely used to separate very large molecules, such as DNA fragments larger than 50 kb. PFGE has also been used for typing various strains of bacterial DNA after the genetic material has been cut with a particular restriction enzyme (the cut DNA provides a unique bacterial fingerprint). In the past, PFGE was commonly used to assess bacterial strain relatedness in outbreak investigations, when investigators attempt to determine whether multiple bacterial isolates arise from a common source.24 More recently, whole genome sequencing (discussed later) has largely replaced PFGE for outbreak investigations.


Densitometry is a specialized form of spectrophotometry used to evaluate electrophoretic patterns. Densitometers can perform measurements in an absorbance optical mode and a fluorescence mode, depending on the type of staining of the electrophoretic pattern. An absorbance optical system consists of a light source, a filter system, a movable carriage to scan the electrophoretic medium, focusing optics, and a photodetector (silicon photocell). When a densitometer is operated in the absorbance mode, an electrophoretic pattern located on the carriage system is moved across a focused beam of incident light. After the light passes through the pattern, it is converted to an electronic signal by the photocell to indicate the amount of light absorbed by the pattern. The absorbance is proportional to the sample concentration. The filter system provides a narrow band of visible light for better sensitivity and resolution of the different densities. This mode of operation is commonly used to evaluate hemoglobin and protein electrophoresis patterns and applications of molecular diagnostics (MDx), including one- and two-dimensional, DNA, RNA, and polymerase chain reaction (PCR) gel electrophoresis bands; dot blots; slot blots; image analysis; and chromosome analysis.

The fluorescence method is used in the case of electrophoretic patterns that fluoresce when radiated by UV light (340 nm). Densitometers used in this mode include a UV light source and a photomultiplier tube instead of the silicon photocell. When the pattern located on the carriage moves across a focused beam of UV light, the pattern absorbs the light and emits visible light. The light is focused by a collection of lenses onto a UV-blocking filter and then to a photomultiplier tube, where the visible light is converted into an electronic signal that is proportional to the intensity of the light. In each case, the electrophoretic patterns are evaluated by comparison of peak heights or peak areas of the sample and the standards. Current densitometry systems employ sophisticated software to provide analysis of the signal intensities with high resolution and sensitivity.7


Chromatography is another method used primarily for separating and identifying various compounds. In this procedure, components (solutes) from a mixture are separated by the differential distribution between mobile and stationary phases. In routine clinical practice, paper chromatography has been replaced by three other types of chromatography: thin layer chromatography (TLC), gas chromatography (GC), and high-performance (or pressure) liquid chromatography (HPLC). Chromatographic assays require more time for specimen preparation and performance; they are usually performed only when another assay type is not available or when interferences are suspected with an immunoassay. Chromatographic assays do not require premanufactured antibodies and, therefore, afford better flexibility than an immunoassay.

Thin Layer Chromatography

Thin layer chromatography (TLC) is commonly used for drug screening and analysis of clinically important substances such as oligosaccharides and glycosaminoglycans (eg, dermatan sulfate, heparan sulfate, and chondroitin sulfate). In this method, a thin layer of gel (sorbent) is applied to glass or plastic, forming the stationary phase. The sorbent may be composed of silica, alumina, polyacrylamide, or starch. The choice of sorbent depends on the specific application because compounds have different relative affinities for the solvent (mobile phase) and the stationary phase. These factors affect the separation of a mixture into its different components. Silica gel is the most commonly used sorbent because it may be used to separate a broad range of compounds, including amino acids, alkaloids, sugars, fatty acids, lipids, and steroids.

Used for identification and separation of multiple components of a sample in a single step, TLC is also used in initial component separation prior to analysis by another technique. Quantification of various substances is possible with TLC; each spot can be scraped off and analyzed individually.7 Although TLC is a useful screening technique, it has lower sensitivity and resolution than either gas or high-performance chromatography. Another disadvantage, as with gas and high-performance chromatography, is that someone with skill and expertise must interpret the results.

Gas Chromatography

Gas chromatography (GC) is a subtype of column chromatography. This technique is used to identify and quantify volatile substances, such as alcohols, steroids, and drugs in the picogram range (Table 2-1). It is based on the same principles as paper chromatography and TLC but has better sensitivity. Instead of a solvent, GC uses an inert gas (eg, nitrogen or helium) as the carrier in the mobile phase for the volatile substance being analyzed.

A column packed with inert material, coated with a thin layer of a liquid phase, is substituted for paper or gel. The sample is injected into the column (contained in a heated compartment) where it is immediately volatilized and picked up by the carrier gas. Heating at precise temperature gradients is essential for good separation of the analytes. The gas carries the sample through the column where it contacts the liquid phase, which has a high boiling point. Analytes with lower boiling points migrate faster than those with higher boiling points, thus fractionating the sample components. When the sample leaves the column, it is exposed to a detector. The most common detector consists of a hydrogen flame with a platinum loop mounted above it. When the sample is exposed to the flame, ions collect on the platinum loop and generate a small current. This current is amplified by an electrometer, and the signal is sent on to an integrator or recorder. The recorder produces a chromatogram with various peaks being recorded at different times. Because each sample component is retained for a different length of time, the peak produced at a particular retention time is characteristic for a specific component (Figure 2-4). The amount of each component present is determined by the area of the characteristic peak or by the ratio of the peak heights calibrated against a standard curve.


Gas chromatogram. The area under the curve or peak height of an analyte (eg, drug or toxin) is compared with the area under the curve or peak height of an internal standard, and then the ratio is calculated. This ratio is compared with a standard curve of peak area ratios to give the concentration of the analyte.
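The ratio-based quantitation described in the caption can be sketched numerically. This is a minimal illustration: the calibrator concentrations, peak-area ratios, and sample ratio below are all hypothetical.

```python
# Peak-area-ratio quantitation against a linear standard curve.
# All numeric values are hypothetical calibration data.

def fit_line(xs, ys):
    """Least-squares slope and intercept for a linear standard curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Calibrators: analyte/internal-standard peak-area ratios measured at
# known concentrations (mg/L).
cal_conc = [1.0, 2.0, 5.0, 10.0]
cal_ratio = [0.11, 0.20, 0.52, 1.01]
slope, intercept = fit_line(cal_conc, cal_ratio)

# Unknown sample: observed peak-area ratio vs the internal standard.
sample_ratio = 0.40
conc = (sample_ratio - intercept) / slope
print(round(conc, 2))  # estimated concentration in mg/L
```

Using the ratio against an internal standard, rather than the raw peak area, compensates for run-to-run variation in injection volume and detector response.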

This technique has many advantages, including high sensitivity and specificity. However, it requires sophisticated and expensive equipment. In addition, one or more compounds may produce peaks with the same retention time as the analyte of interest. In cases of such interference, the temperature and composition of the liquid phase can be adjusted for better peak resolution.

High-Performance Liquid Chromatography

High-performance liquid chromatography (HPLC) is widely used, especially in forensic laboratories for toxicologic screening and to measure various drugs (Table 2-2). This is the most widely utilized form of liquid chromatography in clinical laboratories. Its basic principles are similar to those of GC, but it is useful for nonvolatile or heat-sensitive substances.

Instead of gas, HPLC utilizes a liquid solvent (mobile phase) and a column packed with a stationary phase, usually with a porous silica base. The mobile phase is pumped through the column under high pressure to decrease the assay time. The sample is injected onto the column at one end and migrates to the other end in the mobile phase. Various components move at different rates, depending on their solubility characteristics and the amount of time spent in the solid versus liquid phases. As the mobile phase leaves the column, it passes through a detector that produces a peak proportional to the concentration of each sample component. The detector is usually a spectrophotometer with variable wavelength capability in the UV and visible ranges. A signal from the detector is sent to a recorder or integrator, which plots peaks for each component as it elutes from the column (Figure 2-5). Each component has its own characteristic retention time, so each peak represents a specific component. As with GC, interferences may occur with compounds of similar structure or solubility characteristics; the peaks may fall on top of each other. Better resolution can be obtained by using a column packing with different characteristics or by changing the composition and pH of the mobile phase. Compounds are identified by their retention times and quantified either by computing the area of the peak or by comparing the peak height or area to an internal standard to obtain a peak height or peak area ratio. This ratio is then used to calculate a concentration by comparison with a predetermined standard curve.


High-performance liquid chromatography chromatogram. The appearance of this chromatogram is similar to the gas chromatogram, and the area or peak height ratio is used to quantify the analyte in a sample.

Although HPLC offers both high sensitivity and specificity, it requires specialized equipment and personnel. Furthermore, because the substance being determined is usually in a body fluid (eg, urine or serum), one or more extraction steps are needed to isolate it. Another concern is that because many assays require a mobile phase composed of volatile and possibly toxic solvents, Occupational Safety and Health Administration guidelines must be followed. In addition, assays developed for commercial use may be costly as modifications to published methods are almost always required.

Immunoassays

Immunoassays are based on a reaction between an antigenic determinant (ie, hapten) and a labeled antibody.25 The label may consist of a radioisotope, an enzyme, an enzyme substrate, a fluorophore, or a chromophore. The reaction may be measured by several detection methods, including liquid scintillation, UV absorbance, fluorescence, fluorescent polarization, and turbidimetry or nephelometry. The immunoassay method is commonly used for determining drug concentrations in serum.

Immunoassays can be divided into two general categories: heterogeneous and homogeneous. In heterogeneous assays, the free and bound portions of the determinant must be separated before either or both portions can be assayed. This separation can be accomplished by various methods, including protein precipitation, double antibody technique, adsorption of free drug, and removal by immobilized antibody on a solid phase support. Homogeneous assays do not require a separation step and, therefore, can be easily automated. The binding of the labeled hapten to the antibody alters its signal in a way (color change or reduction in enzymatic activity) that can then be used to measure the analyte concentration.

Early immunoassays used polyclonal antibodies (pAbs), generated as a result of an animal’s natural immune response. Typically, an antigen is injected into an animal. The animal’s immune system then recognizes the material as foreign and produces antibodies against it. These antibodies are then isolated from the blood. Many different antibodies may be generated in response to a single antigen. The numbers as well as the specificities of the antibodies depend on the size and number of antigenic sites on the antigen. In general, the larger and more complex the antigen (eg, cell or protein), the more antigenic sites (epitopes) it has and the greater the variety of antibodies formed.

Although pAbs have been used successfully, both specificity and response may vary greatly because of their heterogeneous nature. The result is a high degree of cross-reactivity with similar substances. This cross-reactivity problem was largely overcome with the development of monoclonal antibodies (moAbs). Prior to 1975, the only moAbs available came from patients suffering from multiple myeloma, a cancer of the blood and bone marrow in which uncontrolled numbers of malignant plasma cells are produced. Usually, these tumor cells produce a single (monoclonal) type of antibody. In 1975, a technique was developed to make moAbs in the laboratory.26 The technique is based on the fusion of (1) plasma cells that produce an antibody but cannot reproduce indefinitely, and (2) myeloma cells that do not produce an antibody but can reproduce limitlessly. The plasma cells and myeloma cells are cultured together, resulting in a mixture of both parent cells and hybrid cells. The hybrid cells produce the specific antibody and reproduce indefinitely. The mixture is incubated in a special medium that kills the parent cells and leaves only the hybrid antibody-producing cells alive. The hybrid cells can then be grown using conventional cell culture techniques, yielding large amounts of the moAb. The development of moAbs has allowed for high sensitivity and specificity in immunoassay technology.

Radioimmunoassay

Today radioimmunoassay (RIA) is rarely used in the clinical laboratory and is discussed from a historical perspective. A heterogeneous immunoassay, RIA was developed in the late 1950s and has been primarily used for endocrinology testing purposes.27 This technique takes advantage of the fact that certain atoms can be either incorporated directly into the analyte’s structure or attached to antibodies.

The primary atoms used in the clinical laboratory fall into two classes: γ-emitters and β-emitters. The γ-emitters (125I and 57Co) are generally incorporated into compounds such as thyroid hormone and cyanocobalamin (vitamin B12).11 These types of isotopes can be counted directly with standard γ-counters that utilize a sodium iodide–thallium crystal. When the γ-ray hits the crystal, it gives off a flash of light. This light, in turn, stimulates a photomultiplier tube to amplify the signal. The β-emitters (14C and 3H) are primarily used to measure steroid concentrations.6 Because endogenous substances tend to absorb the radiation, β-rays cannot be counted directly. Therefore, this technique requires a scintillation cocktail with an organic compound capable of absorbing the β-radiation and reemitting it as a flash of light. This light is then amplified by a photomultiplier tube and counted.

Extremely sensitive, RIA has been made more specific with the introduction of moAbs. Unfortunately, this technique also has several significant disadvantages11: a short shelf-life for labeled reagents and requirements for lead shielding, radioactive waste disposal, monitoring of personnel for radiation exposure, strict record keeping, and special licensing. Because enzyme-linked immunoassays have none of these problems and can perform essentially the same tests as RIA, the clinical use of RIA has decreased in recent years.

Agglutination

The simplest immunoassay is agglutination. Typical tests that can be performed using this assay include tests for human chorionic gonadotropin, rheumatoid factor, antigens from infectious agents such as bacteria and fungi, and antinuclear antibodies. The agglutination reaction, used to detect either antigens or antibodies, results when multivalent antibodies bind to antigens with more than one binding site. This reaction occurs through the formation of cross-linkages between antigen and antibody particles. When enough complexes form, clumping results and a visible mass is formed (Figure 2-6). Because the reaction depends on the number of binding sites on the antibody, the greater that number, the stronger the reaction. For example, IgM produces better agglutination than IgG because it has more binding sites.


Schematic of latex agglutination immunoassay. The specimen (cerebrospinal fluid, serum, etc.) contains the analyte (in this case, antigens to bacteria) that causes an easily readable reaction. (Source: Adapted with permission from Power DA, McCuen PJ, eds. Manual of BBI Products and Laboratory Procedures. Cockeysville, MD: Becton Dickinson Microbiology Systems; 1998. Courtesy ©Becton, Dickinson, and Company.)

The agglutination reaction is also affected by other factors25: avidity and affinity of the antibody; number of binding sites on the antigen as well as the antibody; relative concentrations of the antigen and antibody; zeta potential (the electrostatic interaction that causes particles in solution to repel each other); and viscosity of the medium. The two types of agglutination reactions are direct and indirect. Direct agglutination occurs when the antigen and antibody are mixed together, resulting in visible clumping. An example of this reaction is the test for Salmonella typhi antibody. Indirect agglutination (also known as passive or particle agglutination) uses a carrier for either the antibody or antigen. Originally, erythrocytes were selected as the carrier (as described for hemolytic anemia tests). However, latex-coated particles are now commonly used, and the latex agglutination method is simpler and less expensive than the erythrocyte immunoassay. In addition, latex particles allow titration of the amount of antibody bound to the particle, thus reducing variability. Other advantages include a rapid performance time with no separation step, allowing full automation. Disadvantages include expensive equipment and lower sensitivity than either RIA or enzyme immunoassay (EIA). The use of an automated particle counter increases the sensitivity of the test 10 to 1,000 times.28

Enzyme Immunoassays

Enzyme immunoassays (EIAs) employ enzymes as labels for specific analytes. When antibodies bind to the antigen-enzyme complex, a defined reaction occurs (eg, color change, fluorescence, radioactivity, or altered activity). This altered enzyme activity is used to quantitate the analyte. The advantages of EIAs include commercial availability at a relatively low cost, long shelf life, good sensitivity, automation, and none of the specific requirements mentioned for RIA.

Enzyme-Linked Immunosorbent Assay

Enzyme-linked immunosorbent assay (ELISA) is a heterogeneous EIA. This assay employs the same basic principles as RIA except that enzyme activity rather than radioactivity is measured. The ELISA is commonly used to determine antibodies directed against a wide range of antigens, such as rheumatoid factor, hepatitis B antigen, and bacterial and viral antigens in the serum (Table 2-2).

In a competitive ELISA, the specific antibody is adsorbed to a solid phase. Enzyme-labeled antigen is incubated together with the sample containing unlabeled antigen and the antibodies attached to the solid phase. After a specified time, equilibrium is reached between the binding of the enzyme-labeled and unlabeled antigens to the solid phase antibody, and the solid phase is washed with buffer. The remaining product is measured with a spectrophotometer or fluorometer. The amount of the reaction product will be inversely proportional to the amount of unlabeled antigen in the sample because an increasing amount of unlabeled antigen will displace enzyme-labeled antigen from antibody binding.
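The inverse relationship in a competitive assay can be illustrated with a toy binding model. This is a deliberately simplified sketch (simple proportional competition for a fixed number of antibody sites), not the dose-response function of any real ELISA kit.

```python
# Illustrative model of a competitive ELISA signal. Assumes labeled and
# unlabeled antigen partition antibody sites in proportion to their
# amounts -- a simplification for teaching purposes only.

def competitive_signal(unlabeled, labeled=1.0, sites=1.0):
    """Fraction of antibody sites occupied by enzyme-labeled antigen.

    The measured reaction product is proportional to the amount of
    labeled antigen that remains bound after competition.
    """
    total = labeled + unlabeled
    return sites * labeled / total

# More unlabeled antigen in the sample -> less labeled antigen bound
low_analyte_signal = competitive_signal(0.5)   # little unlabeled antigen
high_analyte_signal = competitive_signal(5.0)  # much unlabeled antigen
print(low_analyte_signal, high_analyte_signal)
```

The signal falls as the sample's unlabeled antigen rises, which is why competitive formats report concentration from an inverted calibration curve.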

Enzyme-Multiplied Immunoassay

Enzyme-multiplied immunoassay technique (EMIT) is a homogeneous EIA; the enzyme is used as a label for a specific analyte (eg, a drug). Many drugs commonly assayed using EMIT are also measured by fluorescence polarization immunoassay (FPIA) (eg, digoxin, quinidine, procainamide, N-acetylprocainamide, and aminoglycoside antibiotics) (Table 2-2). With the EMIT assay, the enzyme retains its activity after attaching to the analyte. For example, to determine a drug concentration, an enzyme is conjugated to the drug and incubated with antidrug antibody.

As shown in Figure 2-7, the test drug is covalently bound to an enzyme that retains its activity and acts as a label. When this complex is combined with antidrug antibody, the enzyme is inactivated. If the antibody and enzyme-bound drug are combined with serum that contains unbound drug, competition occurs. Because the amount of antidrug antibody is limited, the free drug in the sample and the enzyme-linked drug compete for binding to the antibody. When the antibody binds to the enzyme-linked drug, enzyme activity is inhibited. The result is that the serum drug concentration is proportional to the amount of active enzyme remaining. Because no separation step is required, this assay has been automated.


Enzyme-multiplied immunoassay technique. This assay is used in quantifying the drug concentration in a serum sample, as described in the text.

Fluorescent Polarization Immunoassay

Fluorescent polarization immunoassay (FPIA), the most common form of immunoassay, is used to measure concentrations of many serum analytes, such as blood urea nitrogen and creatinine. It is also commonly employed for determining serum drug concentrations of aminoglycoside antibiotics, vancomycin, and theophylline (Table 2-2).

Molecules having a ring structure and a large number of double bonds, such as aromatic compounds, can fluoresce when excited by a specific wavelength of light. These molecules must have a particular orientation with respect to the light source for electrons to be raised to an excited state. When the electrons return to their original lower energy state, some light is reemitted as a flash with a longer wavelength than the exciting light. Fluorescent immunoassays take advantage of this property by conjugating an antibody or analyte to a fluorescent molecule. The concentration can be determined by measuring either the degree of fluorescence or, more commonly, the decrease in the amount of fluorescence present.11,28 In FPIA, a polarizing filter is placed between the light source and the sample and between the sample and the detector. The first filter assures that the light exciting the molecules is in a particular orientation; the second filter assures that only fluorescent light of the appropriate orientation reaches the detector.

The fluorescent polarization of a small molecule is low because it rotates rapidly and is not in the proper orientation long enough to give off an easily detected signal. To decrease this molecular motion, the molecule is complexed with an antibody. Because this larger complex rotates at a slower rate, it stays in the proper orientation long enough to be excited by the incident light. When unlabeled analyte is mixed with a fixed amount of antibody and fluorescent-labeled analyte, a competitive binding reaction occurs between the labeled and unlabeled analytes. The result is a decrease in the measured polarization. Thus, the concentration of unlabeled analyte is inversely proportional to the polarization signal.28

Because of their simplicity, automation, and low cost, FPIA assays with relatively high sensitivity have been developed for many drugs (eg, antiepileptics, antiarrhythmics, and antibiotics). The primary difficulty is interference from endogenous substances (lipids and bilirubin) or drug metabolites within the patient specimen.

Mass Spectrometry

Mass spectrometry (MS) involves ionizing and fragmenting molecules in the gas phase and separating the resulting ions according to their mass-to-charge ratio (m/z). The resulting mass fragments are displayed on a mass spectrum, a bar graph that plots the relative abundance of each ion versus its m/z ratio. Because the mass spectrum is characteristic of the parent molecule, an unknown molecule can be identified by comparing its mass spectrum with a library of known spectra.
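Library matching of the kind described above can be sketched as a similarity search over spectra. The snippet below compares an unknown spectrum against reference spectra using cosine similarity over shared m/z bins; the spectra and compound names are invented for illustration, not real compound data.

```python
# Toy illustration of mass-spectral library matching by cosine similarity.
# Spectra are represented as {m/z: relative abundance}; values are invented.
import math

def cosine(a, b):
    """Cosine similarity between two sparse spectra."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

unknown = {43: 100, 58: 85, 71: 20}
library = {
    "compound A": {43: 98, 58: 90, 71: 15},   # shares all major peaks
    "compound B": {91: 100, 65: 40, 39: 30},  # no peaks in common
}

best_match = max(library, key=lambda name: cosine(unknown, library[name]))
print(best_match)   # -> compound A
```

Production library-search algorithms use more elaborate scoring (peak weighting, intensity scaling), but the principle of ranking references by spectral similarity is the same.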

A wide array of MS systems has been developed to meet the increasing demands of the biomedical field. However, the basic principles and components of mass spectrometers are essentially the same. These include an inlet unit, an ion source, a mass analyzer, an ion detector, and a data/recording system. Compounds introduced into a mass spectrometer must first be isolated. This is accomplished with separation techniques such as GC, liquid chromatography, and CE, which are used in tandem with mass spectrometers. In a GC/MS system, an interface between the GC and MS components that restricts the gas flow from the GC column into the mass spectrometer is required to prevent a mismatch in the operating pressures between the two instruments. The unit must also be heated to maintain the volatile compounds in the vapor state and remove most of the carrier gas from the GC effluent entering the ion source unit.29

Ionization Methods

The ionization of the molecules introduced into MS is accomplished by several methods. In each case, the ion sources are maintained at high temperatures and high vacuum conditions necessary for ionizing vaporized molecules. The electron ionization (EI) method, a form of gas-phase ionization, consists of a beam of high-energy electrons that bombard the incoming gas molecules. The energy used is sufficiently high to not only ionize the gas molecules, but also cause them to fragment through the breaking of their chemical bonds. This process yields ion fragments in addition to intact molecular ions that appear in the mass spectra. The EI method is most useful for low molecular weight compounds (<400 Da) because of problems with excessive fragmentation and thermal decomposition of large molecules during vaporization.30,31 Therefore, EI is typically used in GC/MS systems that are suitable for applications, including the analysis of synthetic organic chemicals, hydrocarbons, pharmaceutical compounds, organic acids, and drugs of abuse.

Chemical ionization (CI) is another form of gas-phase ionization. Because the sample molecule is ionized by a reagent such as methane or ammonia that is first ionized by an electron beam, CI is a less energetic technique than EI. Less fragmentation is produced by this method, making it useful for determining the molecular weights of many organic compounds and for enhancing the abundance of intact molecular ions.

Electrospray ionization (ESI), a form of atmospheric pressure ionization, generates ions directly from solution, permitting it to be used in combination with HPLC and CE systems. This method involves the creation of a fine spray in the presence of a strong electric field. As the droplets become declustered, the force of the surface tension of the droplet is overcome by the mutual repulsion of like charges, allowing the ions to leave the droplet and enter the mass analyzer. This technique will yield multiple ionic species, especially for high molecular weight ions that have a large distribution of charge states, thus making it a very sensitive technique for small, large, and labile molecules.32 This ionization method is well-suited for the analysis of peptides, proteins, carbohydrates, DNA fragments, and lipids. Other common ionization techniques include fast atom bombardment, which uses high velocity atoms such as argon to ionize molecules in a liquid or solid, and matrix-assisted laser desorption/ionization (MALDI), which uses high energy photons to ionize molecules embedded on a solid organic matrix.32

Mass Analyzers

Following ionization, the gas phase ions enter the mass analyzer. This component of the mass spectrometer separates the ions by their m/z ratios. Commonly used mass analyzers include the double-focusing magnetic sector analyzer, quadrupole mass spectrometers, quadrupole ion trap mass spectrometers, and tandem mass spectrometers.

The double-focusing magnetic sector analyzer uses a magnetic field perpendicular to the direction of the ion motion to deflect the ions into a circular path with a radius dependent on the m/z ratio and the velocity of the ion; the ions are thereby separated by their m/z ratios before reaching the detector. However, because the kinetic energy (or velocity) of the molecules leaving the ion source is not necessarily constant, the path radii depend on both the velocity and the m/z ratio. To enhance the resolution, an electrostatic analyzer or electric sector is used to pass only molecules with a specific kinetic energy through its field. That is, for a particular kinetic energy, the radius of curvature is directly related to the m/z ratio. This type of analyzer is commonly used in combination with EI and fast atom bombardment ionization systems.

Quadrupole mass spectrometers act as a filter for molecules or fragments with a specific m/z ratio. This is accomplished by using four equally spaced parallel rods with direct current (DC) and radio frequency (RF) potentials on opposing rods of the quadrupole. The field produced is along the x- and y-axis. The RF oscillation causes the ions to be attracted or repelled by the rods. Only ions with a specific m/z ratio will have a trajectory along the z-axis, allowing them to pass to the detector, while others will be trapped by the rods of the quadrupole. By varying the RF field, other m/z ranges are selected, thus resulting in the mass spectrum.31 The quadrupole mass spectrometer, commonly combined with the EI ionization system, is perhaps the most commonly used type of mass spectrometer because of its relatively low cost, ability to analyze m/z ratios up to 3,000, and compatibility with ESI ionization systems.

The ion trap analyzer is another form of quadrupole mass spectrometer, consisting of a ring electrode, to which an RF voltage is applied, positioned between two end-cap electrodes held at ground potential. This arrangement generates a quadrupole field that traps ions injected into the chamber or generated within it. As the RF field is scanned, ions with specific and successive m/z ratios are ejected from the trap through holes in the end caps to the ion detector.31 The quadrupole ion trap mass spectrometer is notable for its high sensitivity and compact size.

Tandem mass spectrometers use multiple stages of mass analysis on successive generations of ion fragments. This is accomplished by preselecting an ion from the first analysis and colliding it with an inert gas, such as argon or helium, to induce further fragmentation. Each subsequent stage analyzes the fragments generated by the previous one. The abbreviation MSn denotes the number of stages: MS for the initial ions, MS2 for the first generation of ion fragments, and MS3, MS4, and so on for subsequent generations. These techniques can be tandem in space (two or more instruments) or tandem in time. In the former case, many instrument combinations have been used; in the latter, quadrupole ion trap devices are often used and can achieve multiple MSn measurements.33 Tandem mass analysis is primarily used to obtain structural information on peptide sequences, small DNA/RNA oligomers, fatty acids, and oligosaccharides. Other mass analyzers, such as time-of-flight and Fourier transform mass spectrometers, are less commonly used for routine clinical applications.

Ion Detector

The ion detector is the final element of the mass spectrometer. Once an ion passes through the mass analyzer, a signal is produced in the detector. The detector consists of an electron multiplier that converts the energy of the ion into a cascade of secondary electrons (similar to a photomultiplier tube), resulting in about a million-fold amplification of the signal. Due to the rapid rate at which data are generated, computerized data systems are indispensable components of all modern mass spectrometers. The introduction of rapid processors, large storage capacities, and spectral databases has led to automated high throughput. Miniaturization of components has also led to the development of bench-top systems practical for routine clinical laboratory analysis.

Clinical applications include newborn screening for metabolic disorders, hemoglobin analysis, drug testing, and microbial identification. Pharmaceutical applications include drug discovery, pharmacokinetics, and drug metabolism. Clinical microbiology has seen a radical shift in the last decade in the approach used to identify microorganisms. Traditional identification is based on biochemical testing and relies on well-isolated colonies from microbiologic media. These methods are time-consuming, and much progress has been made to reduce turnaround time, providing clinicians with more rapid identification and enabling more targeted antimicrobial therapy. Some laboratories have adopted matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) systems for more rapid identification of bacterial and yeast pathogens. Two FDA-cleared MALDI-TOF systems are currently available for use in clinical laboratories. Both systems have robust databases capable of identifying a variety of aerobic and anaerobic bacteria and yeasts.
Identification of more complex organisms, including mycobacterial species and molds, is also possible, and both manufacturers continue to improve their systems by adding spectra to their databases.

In this application, a colony from an agar plate or an aliquot taken directly from a positive blood culture bottle is applied to a target plate, overlaid with a matrix, and allowed to dry. The plate is loaded onto the analyzer, where a laser ionizes the sample. The masses of the ions generated are analyzed in a flight tube: lighter ions travel faster and reach the detector before heavier, slower ions. Detection of the ionized targets (typically bacterial ribosomal proteins) generates a unique mass spectrum in which the mass-to-charge ratio is plotted against signal intensity. In practice, the system detects only highly abundant proteins that are of low mass and readily ionized by the laser. In effect, the mass profile generated is a bacterial fingerprint. The spectrum is compared with a comprehensive database of spectra for well-characterized bacterial or fungal pathogens. Depending on the similarity of the peaks, laboratories can accurately identify a colony to the genus or species level.34
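The time-of-flight behavior described above (lighter ions arrive first) follows from t = L / v with v = sqrt(2zeV/m), where V is the accelerating voltage and L the flight length. The snippet below illustrates this relation; the flight length, voltage, and protein masses are arbitrary example values.

```python
# Sketch of the time-of-flight relation: all singly charged ions gain the
# same kinetic energy z*e*V, so flight time scales with sqrt(mass).
# Voltage, flight length, and masses are illustrative values only.
import math

E_CHARGE = 1.602e-19      # elementary charge, C
DALTON = 1.661e-27        # atomic mass unit, kg

def flight_time(mass_da, charge=1, voltage=20_000.0, length=1.0):
    """Flight time (s) of an ion of given mass (Da) down a field-free tube."""
    velocity = math.sqrt(2 * charge * E_CHARGE * voltage / (mass_da * DALTON))
    return length / velocity

t_light = flight_time(5_000)     # small ribosomal protein, ~5 kDa
t_heavy = flight_time(15_000)    # larger protein, ~15 kDa
print(t_light < t_heavy)         # lighter ion reaches the detector first
```

Because t is proportional to sqrt(m/z), a 15 kDa ion takes sqrt(3) times longer than a 5 kDa ion at the same charge, which is how the arrival-time axis maps onto the mass axis of the spectrum.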

Cytometry

Cytometry is defined as a process of measuring physical, chemical, or other characteristics of (usually) cells or other biological particles. Although this definition encompasses the fields of flow cytometry and cellular image analysis, many additional methods are now used to study the vast spectrum of cellular properties. Consequently, the term cytomics has been introduced. Cytomics is defined as the science of cell-based analysis that integrates genomics and proteomics with dynamic functions of cells and tissues. The technology used includes techniques discussed in this chapter, such as flow cytometry and MS, and others that are beyond the scope of this chapter.

Flow Cytometry

Flow cytometry is the technology used to measure properties of cells as they move or flow in liquid suspension.35 It is a technique used to measure multiple characteristics of individual cells within heterogeneous populations. Instruments generally referred to as flow cytometers are based on the principles of laser-induced fluorometry and light scatter. The terminology can become confusing as various conventions have taken root over the years. However, regardless of the principles of detection or measurement, the term flow cytometry may in general be applied to technologies that rely on cells moving in a fluid stream for analysis.

The hematology analyzer, an instrument employing flow cytometry, also incorporates the principles of impedance, absorbance, and laser light scatter to measure cell properties and generate a complete blood count laboratory report. The basis of cell counting and sizing in hematology analyzers is the Coulter principle, which relates counting and sizing of particles to changes in electrical impedance across an aperture in a conductive medium (created when a particle or cell moves through it). The basic system consists of a smaller chamber within a larger chamber, both filled with a conductive medium and each containing an electrode, across which a constant direct current (DC) is applied. The fluids within each chamber communicate through a small aperture (100 µm), or sensing zone. When a nonconductive particle or cell passes through the aperture, it displaces an equivalent volume of conductive fluid. This momentarily increases the impedance across the aperture, creating a voltage pulse for each cell counted, the amplitude of which is proportional to the cell volume.14

In hematology analyzers, blood is separated into two samples for measurement. One volume is mixed with a diluent and delivered to a chamber where platelet and erythrocyte counts are performed. Particles with volumes between 2 and 20 femtoliter (fL) are counted as platelets, and particles with volumes >36 fL are counted as erythrocytes. The other volume is mixed with a diluent, and an erythrocyte lysing reagent is used to permit leukocyte (>36 fL) counts to be performed. The number of cells in this size range may be subtracted from the erythrocyte count performed in the other chamber.
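The size thresholds above can be sketched as a simple pulse classifier: each voltage pulse's amplitude maps to a cell volume, and volume windows assign the count. This is an illustrative model only; the pulse volumes are invented, and real analyzers apply proprietary corrections (eg, for coincident cells in the aperture).

```python
# Hedged sketch of impedance-based cell counting using the volume
# thresholds described in the text (2-20 fL platelets, >36 fL RBC/WBC).
# The list of pulse volumes is an invented example.

def count_cells(pulse_volumes_fl):
    """Classify voltage pulses (expressed as cell volumes in fL)."""
    counts = {"platelets": 0, "rbc_wbc": 0, "uncounted": 0}
    for v in pulse_volumes_fl:
        if 2 <= v <= 20:
            counts["platelets"] += 1     # 2-20 fL counted as platelets
        elif v > 36:
            counts["rbc_wbc"] += 1       # >36 fL: erythrocytes (or leukocytes)
        else:
            counts["uncounted"] += 1     # between windows, or sub-2 fL debris
    return counts

pulses = [5, 12, 90, 88, 95, 25, 1, 87]
print(count_cells(pulses))
```

In the lysed-sample channel the same >36 fL window counts leukocytes, which is why that count can be subtracted from the unlysed channel's erythrocyte count.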

Modern hematology analyzers employ additional technologies to enhance the resolution of blood cell analysis. The RF energy is used to assess important information about the internal structure of cells such as nuclear volume. Laser light scatter is used to obtain information about cell shape and granularity. The combination of these and other technologies—such as light absorbance for hemoglobin measurements—provide accurate blood cell differentials, counts, and other important blood cell indices. These basic principles are common to many hematology analyzers used in clinical laboratories. However, each uses different proprietary detection, measurement and software systems, and ways of displaying the data.

Flow cytometers incorporate the principles of fluorometry and light scatter to the analysis of particles or cells that pass within a fluid stream. This technology provides multiparametric measurements of intrinsic and extrinsic properties of cells. Intrinsic properties, including cell size and cytoplasmic complexity, are properties that can be assessed directly by light scatter and do not require the use of any type of probe. Extrinsic cellular properties, such as cell surface or cytoplasmic antigens, enzymes or other proteins, and DNA/RNA, require the use of a fluorescent dye or probe to label the components of interest and a laser to induce the fluorescence (older systems used mercury arc lamps as a light source) to be detected.

The basic flow cytometer consists of four types of components: fluidics, optics, electronics, and data analysis. Fluidics refers to the apparatus that directs the cells in suspension to the flow cell where they will be interrogated by the laser light. Fluidics systems use a combination of air pressure and vacuum to create the conditions that allow the cells to pass through the flow chamber in single file. The optical components include the laser (or other light source), flow chamber, monochromatic filters, dichroic mirrors, and lenses. These are used to direct the scattered or fluorescent light to detectors, which measure the signals that are subsequently analyzed.35

The light scattered by the cell when it reaches the flow chamber is used to measure its intrinsic properties. Forward-scattered light (FSC) is detected by a diode and reflects the size of the passing cell. Side-scattered light (SSC) is detected by a photomultiplier tube at an angle approximately 90 degrees to the laser beam. The SSC is a function of the cytoplasmic complexity of the cell, including the granularity of the cell. The correlated measurements and analysis of FSC and SSC can allow for differentiation among cell types (ie, leukocytes) and can be depicted on a scattergram.
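The FSC/SSC differentiation described above can be illustrated with a toy rectangular-gate classifier. The gate boundaries and channel values below are invented examples; real instruments use calibrated, instrument-specific regions, often drawn interactively on the scattergram.

```python
# Illustrative gating of leukocytes on a forward-/side-scatter plot.
# Gate boundaries (arbitrary channel units) are invented examples.

GATES = {
    "lymphocytes":  {"fsc": (200, 450), "ssc": (50, 200)},   # small, low complexity
    "monocytes":    {"fsc": (450, 700), "ssc": (150, 350)},  # larger cells
    "granulocytes": {"fsc": (400, 800), "ssc": (350, 800)},  # high granularity
}

def classify(fsc, ssc):
    """Assign an event to the first gate containing its (FSC, SSC) point."""
    for cell_type, g in GATES.items():
        if g["fsc"][0] <= fsc < g["fsc"][1] and g["ssc"][0] <= ssc < g["ssc"][1]:
            return cell_type
    return "ungated"

print(classify(300, 100))   # small cell, low granularity
print(classify(600, 500))   # large cell, high granularity
```

Gating on intrinsic scatter properties like this is typically the first analysis step; fluorescence parameters from labeled moAbs are then examined within each gated population.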

The analysis of extrinsic properties is more complicated. The measurement of DNA or RNA, for example, requires the use of intercalating nucleic acid dyes such as propidium iodide. The detection of antigenic determinants on cells can be performed with fluorescent-labeled moAbs directed at these antigens. In each case, the principle of detection involves the use of laser light to excite the fluorescent dye and detect its emitted signal. Fluorescent dyes are characterized by their excitation (absorption) and emission wavelength spectra and by the difference between the maxima of these spectra or Stokes shift (discussed in the spectrophotometry section). These properties permit the use of multiple fluorescent probes on a single cell.

To illustrate the operation of a flow cytometer, consider a four-color, six-parameter (FSC and SSC) configuration (Figure 2-8).36 An argon gas laser with a wavelength of 488 nm is commonly used because it simultaneously excites several different dyes that possess different emission wavelengths. Fluorochromes conjugated with moAbs that may be used include fluorescein isothiocyanate, phycoerythrin (PE), energy-coupled dye, and Cy5PE (tandem dye composed of the carbocyanine derivative Cy5 and PE) with peak emission wavelengths of approximately 520, 578, 613, and 670 nm, respectively. The emitted light at each of these wavelengths is detected at an angle of 90 degrees. The array of optical filters selects light in each wavelength region and directs it to a different photomultiplier tube where it is detected, amplified, and converted into an electronic signal. This measurement can be made on thousands of cells in a matter of seconds. The result is a histogram that identifies distinct cell populations based on light scatter and extrinsic properties. In the case of blood, a histogram will distinguish lymphocytes, monocytes, and granulocytes by light scatter. The B cell, T cell, T-cell subsets, and natural killer cell populations can all be distinguished.


Schematic of a four-color flow cytometry system. The laser beam is focused onto the flow cell through which the cell suspension is directed. Scattered light is detected by the forward and side scatter detectors. Emitted light from specific moAbs labeled with fluorochromes is detected. Appropriate dichroic long-pass filters direct the specific wavelength of light through a narrow band-pass filter and then to the appropriate PMT. (Courtesy of Beckman Coulter.)

This important method of cell analysis has found many applications in medicine, making it a relatively common clinical laboratory instrument. Flow cytometry analysis is routinely used to assist in classifying the type of leukemia and lymphoma, derive prognostic information in these and other malignancies, monitor immunodeficiency disease states such as HIV/AIDS, enumerate stem cells by cluster differentiation (CD34), and assess various functional properties of cells.

Image Cytometry

Image cytometry, more commonly known as histology, is a laboratory method that uses instruments and techniques to analyze tissue specimens. Examining individual cells, rather than the collection of cells that make up a tissue, is referred to as cytology. The basic components of an image cytometry system may include a microscope, camera, computer, and monitor. Variations and complexity of these systems exist, which are beyond the scope of this chapter. However, the essence of these instruments is the ability to acquire images in two or three (confocal microscopy) dimensions to study the distribution of various components within cells or tissues. The high optical resolution of these systems is an important determinant in obtaining morphometric information and precise data about cell and tissue constituents through the use of fluorescence/absorbance-based probes, as in flow cytometry.37 Specific applications of image cytometry generally involve unique methods of cell or tissue preparation and other modifications. This versatility yields applications such as the measurement of DNA content in nuclei to assess prognosis in cancer and the detection of specific nucleic acid sequences to diagnose genetic disorders.

In Situ Hybridization

Among the methods of image cytometry, in situ hybridization is perhaps the most commonly used in the clinical laboratory, particularly in molecular cytogenetics laboratories. In situ hybridization is used to localize nucleic acid sequences (entire chromosomes or parts, including genes) in cells or tissues through the use of probes, which consist of a nucleic acid sequence that is complementary to the target sequence and labeled in some way that makes the hybridized sequence detectable. These principles are common to all methods of in situ hybridization, but the methods differ in the type of probe that is used. Fluorescent probes, which provide excellent spatial resolution, have become a preferred method of in situ hybridization for many applications. (Radioactive probes have also been used for this application; however, their spatial resolution is limited, and artifacts are often produced during detection.)

Fluorescent in situ hybridization (FISH) is a powerful molecular cytogenetics technique used for detecting genes and genetic anomalies and monitoring different diseases at the genetic level. These assays are more sensitive and can detect chromosomal abnormalities that cannot be appreciated by routine chromosome analysis (ie, karyotyping). Typically, metaphase chromosomes or interphase nuclei are denatured on a slide along with a fluorescent-labeled DNA probe. The probe and chromosomes are hybridized, and the slide is washed, counterstained, and analyzed by fluorescent microscopy. Various types of FISH probes can be utilized, such as DNA probes labeled by nick translation or RNA probes generated by in vitro transcription. An appropriate arrangement of filters is used to direct the relevant wavelength of light from the light source to excite the fluorescent molecule on the probe. All but the emission wavelength of light is blocked with a special filter, permitting the signal from the probe to be visualized.38 In molecular cytogenetics, these assays are commonly used to identify gene fusions or translocations.


Molecular diagnostics (MDx) were initially introduced into the clinical laboratories as manual, labor-intensive techniques. This discipline has experienced an overwhelming period of maturation in the past several years. Testing has moved quickly from highly complex, labor-intensive procedures to more user-friendly, semiautomated protocols, and the application potential of MDx continues to evolve. Nucleic acid amplification technologies are among the procedures that have most revolutionized MDx testing.

Nucleic Acid Amplification

Polymerase chain reaction (PCR) is the most frequently used of these technologies. Other amplification techniques used in clinical laboratory procedures include ligase chain reaction, transcription-mediated amplification, branched DNA amplification, isothermal amplification, and nucleic acid sequence-based amplification.

The PCR technology is used principally for detecting microbiologic organisms and genetic diseases (Table 2-2). Examples of microorganisms identified by this process include chlamydia, cytomegalovirus, Epstein-Barr virus, HIV, mycobacteria, and herpes simplex virus. Because PCR amplifies nucleic acid sequences, an assay can be developed and optimized for patient testing whenever the sequence of interest is known. While the list of cleared and approved targets is limited to the most commonly encountered microorganisms, new assays are being developed by many diagnostic companies. PCR can often identify organisms with greater speed and sensitivity than conventional methods. For clinical microbiology laboratories, PCR methods are attractive because they are rapid, sensitive, and specific. Many laboratories have moved from culture-based methods to molecular amplification methods for the rapid identification of patients who may be colonized with multidrug-resistant organisms, such as Clostridioides difficile, methicillin-resistant Staphylococcus aureus, or vancomycin-resistant enterococci. Rapid identification of these patients is crucial in healthcare settings, which often place these patients on contact precautions to reduce the spread of these organisms. The PCR applications in microbiology can also be used to identify organisms carrying antibiotic resistance genes, such as the Klebsiella pneumoniae carbapenemase blaKPC gene that confers resistance to all β-lactam antibiotics among members of the Enterobacterales and other gram-negative bacilli.

Genetic diseases diagnosed using PCR include α-1 antitrypsin deficiency, cystic fibrosis, sickle cell anemia, fragile X syndrome, Tay-Sachs disease, drug-induced hemolytic anemia, and Von Willebrand disease. In addition, cancer research has benefited from PCR through the diagnosis of various cancers (eg, chronic myeloid leukemia and pancreatic and colon cancers) as well as through the detection of residual disease after treatment.39 This technique is used to amplify specific DNA and RNA sequences enzymatically.

PCR takes advantage of the normal DNA replication process. In vivo, DNA replicates when the double helix unwinds and the two strands separate. A new strand forms on each separated strand through the coupling of specific base pairs (eg, adenine with thymine and cytosine with guanine). The PCR cycle is similar and consists of three separate steps (Figure 2-9)28:

  1. Denaturation: The reaction tube is heated, causing the double-stranded DNA to separate.

  2. Primer annealing: Sequence-specific primers are allowed to bind to opposite strands flanking the region of interest by decreasing the temperature.

  3. Primer extension: DNA polymerase then extends the hybridized primers, generating a copy of the original DNA template.


Stages of a single PCR reaction cycle. Beginning with the DNA template, the sample is added to a microtube with DNA polymerase, forward and reverse sequence-specific primers that will bind to and amplify the region of interest within the template, and excess amounts of deoxynucleotide triphosphates (dNTPs). During Step 1 (denaturation), the sample is heated to between 93°C and 96°C to separate the double-stranded DNA into two single-stranded templates. During Step 2 (annealing), the forward and reverse sequence-specific primers bind to complementary sequences within the template to facilitate replication. Annealing occurs between 50°C and 70°C. During Step 3 (extension), extension of the template occurs between 68°C and 75°C. During extension, DNA polymerase catalyzes the addition of complementary dNTPs to the primer using the sample DNA as the template. This completes one cycle of the PCR reaction, yielding two copies of the amplified region of interest.

The efficiency of the extension step can be increased by raising the temperature. Typical temperatures for the three steps are 94°C (201.2°F) for denaturation, 50°C to 65°C (122°F to 149°F) for annealing, and 72°C (161.6°F) for extension. Note that cycle temperatures are influenced by the specific enzyme used, the primer sequence, and the genomic sample. Because one cycle is typically completed in less than three minutes, many cycles can occur within a short time, resulting in the exponential production of millions of copies of the target sequence.40 The amplified genetic material is then identified by agarose gel electrophoresis.
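The three-step cycle and the resulting exponential doubling can be sketched as follows. This is a minimal illustration under the assumption of a fixed per-cycle efficiency; the temperatures are the typical values from the text, and the cycle counts are arbitrary examples.

```python
# Sketch of PCR amplification: each thermal cycle (denature, anneal, extend)
# ideally doubles the number of copies of the target sequence.
CYCLE_STEPS = [
    ("denaturation", 94),  # °C: separate the double-stranded template
    ("annealing", 55),     # °C: primers bind sequences flanking the target
    ("extension", 72),     # °C: polymerase copies each template strand
]

def amplify(start_copies, cycles, efficiency=1.0):
    """Copies after n cycles, assuming a fixed per-cycle efficiency
    (efficiency=1.0 means perfect doubling each cycle)."""
    copies = start_copies
    for _ in range(cycles):
        copies *= (1 + efficiency)
    return int(copies)

# 30 perfect cycles from a single template yield 2**30 copies, over a billion.
print(amplify(1, 30))        # 1073741824
print(amplify(1, 30, 0.9))   # sub-100% efficiency yields fewer copies
```

The doubling explains why PCR reaches detectable quantities so quickly: at perfect efficiency, n cycles multiply the starting material by 2**n.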

One potential disadvantage of this method is contamination of the amplification reaction with products of a previous PCR (carryover), exogenous DNA, or other cellular material. Contamination can be reduced by prealiquoting reagents, using dedicated positive-displacement pipettes, and physically separating the reaction preparation from the area where the product is analyzed. In addition, multiple negative controls are necessary to monitor for contamination. Also common in clinical laboratories are instrument platforms that can perform real-time (q)PCR as well as multiplex PCR, which allows amplification of two or more products in parallel in a single reaction tube.40 In real-time detection methods, a labeled oligonucleotide probe containing a fluorophore on the 5ʹ end and a quencher on the 3ʹ end binds to the DNA template. With the probe bound, the quencher prevents the fluorophore from emitting light. During DNA synthesis, the extending forward primer causes strand displacement. As the DNA polymerase continues reading along the template, its exonuclease activity cleaves the probe, separating the fluorophore from the quencher and generating light that is detected by the instrument. As each cycle of PCR continues, more fluorescent molecules are released, resulting in increasing fluorescence proportional to the amount of amplicon present. Several in vitro diagnostic companies, such as BD Diagnostics, BioFire, Cepheid, and Nanosphere, have U.S. Food and Drug Administration (FDA)-approved platforms that allow for simultaneous detection of multiple microorganism targets. Use of these multiplex assays is attractive because they require minimal sample volumes to generate multiple results. These tests are typically referred to as syndromic panels; the most common specimens tested include upper and lower respiratory tract specimens, stool, blood, and cerebrospinal fluid (to detect CNS infections). Other companies are developing similar panels to aid in the detection of prosthetic joint infections, which are routinely diagnosed using culture methods that suffer from decreased analytic sensitivity.
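The real-time readout can be sketched numerically: because fluorescence is proportional to the amount of amplicon, the cycle at which the signal first crosses a threshold (the cycle threshold, Ct) reflects how much template the sample started with. The threshold and efficiency values below are invented for illustration only.

```python
# Sketch of real-time (q)PCR: amplicon grows each cycle, and Ct is the
# first cycle at which the accumulated product crosses a fixed threshold.
def cycle_threshold(start_copies, threshold_copies, efficiency=1.0, max_cycles=45):
    """Return the first cycle at which copies >= threshold_copies,
    or None if the threshold is never crossed (no amplification)."""
    copies = start_copies
    for cycle in range(1, max_cycles + 1):
        copies *= (1 + efficiency)
        if copies >= threshold_copies:
            return cycle
    return None

# A sample with 100x more starting template crosses the threshold earlier,
# by about log2(100) ≈ 6.6 cycles at perfect doubling efficiency.
print(cycle_threshold(1, 1e9))     # 30 (low starting template, late Ct)
print(cycle_threshold(100, 1e9))   # 24 (high starting template, early Ct)
```

This inverse relationship between starting template and Ct is what makes qPCR quantitative rather than merely qualitative.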


Newly developed techniques capable of examining the DNA, messenger RNA (mRNA), and proteins of cells have provided a framework for detailed molecular classifications and treatments of diseases. Genetic analysis of cystic fibrosis, for example, has shown the disease to be the result of more than 1,500 different mutations in the gene cystic fibrosis transmembrane conductance regulator.41 The most common mutation accounts for two-thirds of cystic fibrosis cases. Several related developments, especially in the area of tumor classification, are based on the fields of genomics, epigenetics, and proteomics. The most important laboratory procedures are array-based comparative genomic hybridization and bioinformatics, the analysis of the data derived from these studies.


The study of all the genes of a cell, its DNA sequences, and the fine-scale mapping of genes is the science of genomics. A genome is the sum total of all genes of an individual organism. Knowledge of full genomes has created multiple possibilities, mainly concerned with patterns of gene expression associated with various diseases.42,43


Epigenetics refers to modifications of the genome that are functionally relevant but do not involve a change in the nucleotide sequence. Histone deacetylation and DNA methylation are examples of such changes, both of which serve to suppress gene expression without altering the sequence of the silenced genes. Such changes may continue to exist for many cell divisions and even the remainder of the cell’s life, as well as for future generations of cells. Because the underlying DNA sequence of the organism is unchanged, it is nongenetic factors that cause the organism’s genes to be expressed differently.44


The study of the full complement of proteins in a cell or tissue is called proteomics and includes the comprehensive analysis and characterization of all proteins encoded by the human genome, including their structure and function. Protein-based assays were among the first assays to be approved by the FDA, mostly using immunohistochemistry techniques. Most important biological functions are controlled by signal transduction, a set of processes governed by the enzymatic activities of proteins. Diseases such as cancer, while fundamentally the result of genomic mutations, manifest as dysfunctional protein signal transduction. Many pharmaceuticals are now being developed that aim at modulating the aberrant protein activity, not the genetic defect.45-47

Proteomics will eventually have a great impact on the practice of medicine. Although the genome is the source of basic cellular information, the functional aspects of the cell are controlled by and through proteins, not genes. The main challenge in proteomics stems from the proteome’s complexity compared with that of the genome. The human genome encodes approximately 23,000 genes, approximately 21,000 of which encode proteins. However, the total number of proteins in human cells is estimated to be between 250,000 and 1 million. Furthermore, proteins are dynamic and constantly undergo changes, synthesis, and breakdown. Currently, most of the FDA-approved targeted therapeutics are directed at proteins and not genes.


Similar to proteomics, metabolomics is a rapidly emerging field that combines multiple strategies and techniques to identify and quantify metabolites. Metabolites are small molecule substrates (ie, intermediate substances and the products of our metabolism). Like the Human Genome Project, which launched in 1990 and was completed in April 2003, the Human Metabolome Project, funded by Genome Canada, was launched in 2005 to describe and understand the complete collection of small molecules in a sample, including endogenous and exogenous compounds. The project led to the development of the Human Metabolome Database, a freely available web-accessible database that contains detailed information about small molecule metabolites found in the human body. To date, the database contains more than 114,000 metabolite entries with links to more than 5,700 protein sequences associated with the metabolites. Metabolomics is useful for the future identification of biomarkers that could serve as diagnostic or prognostic indicators for any number of diseases.


Molecular profiles of cells can now be determined using array-based comparative hybridization.48 This technique is especially useful in profiling tumor cells. Until recently, changes occurring in cancer cells were studied one at a time or in small groups in small sets of tumors. New array comparative hybridization or microarray technology (“gene chips”) has enabled investigators to simultaneously detect and quantify the expression of large numbers of genes (potentially all genes) in different tumors using mRNA levels. In this technique, samples are obtained from tissues embedded in paraffin blocks and serve as the sources to prepare new blocks that may contain up to thousands of tissue fragments. These multiple samples are then used to test the expression of potential tumor markers by mRNA expression profiling. The mRNA levels, however, do not always correspond to changes in tumor cell proteins. The quantity of protein within a cell depends not only on the amount and rate of transcription and translation, but also on protein breakdown and the rate of transport out of the cell. Although tissue used for mRNA profiling may include both tumor and stromal cells, adding immunohistochemistry methods allows specific proteins originating from normal as well as tumor cells to be identified in tissue sections.

As a specific example, several types of breast cancer cells, which were previously identified only by morphology, are now being studied by array-based comparative hybridization techniques. Combined with immunohistochemistry staining and protein expression levels, new subtypes that were not previously well defined have been identified (eg, the basal-like carcinomas).49 As a consequence, new treatment modalities have been developed. Array-based comparative hybridization methods have also identified new subtypes of other tumors, such as lymphomas and prostate cancer, with implications for susceptibility and prognosis.50


Nanotechnology refers to the emerging science that studies interactions of cellular and molecular components at the most elemental level of biology, typically clusters of atoms, molecules, and molecular fragments. Nanoscale objects have dimensions smaller than 100 nm. At this dimension, smaller than human cells (which vary from 10,000 to 20,000 nm in diameter), small clusters of molecules and their interactions can be detected. Nanoscale devices smaller than 50 nm can easily enter most cells, while those smaller than 20 nm can move out of blood vessels, offering the possibility that these devices will be able to enter biological chambers, such as the blood–brain barrier or the gastrointestinal epithelium, and identify tumors, abnormalities, and deficiencies in enzymes and cellular receptor sites. Within these biological chambers, they will be able to interact with an individual cell in real time and in that cell’s native environment.

Despite their small size, nanoscale devices can also hold tens of thousands of small molecules, such as a magnetic resonance imaging contrast agent or a multicomponent diagnostic system capable of assaying a cell’s metabolic state. A good example of this approach will capitalize on existing “lab-on-a-chip” and microarray technologies developed at the micron scale. Widely used in biomedical research and to a lesser extent for clinical diagnostic applications today, these technologies will find new uses when shrunk to nanoscale. (In some instances, nanotechnology has already taken advantage of previous clinically relevant technological developments on larger scales.)

Currently, innovative testing is available for many different viruses, mutation analyses, and hematologic and solid tumors. With continuing advances and developments in nanotechnology, it is difficult to predict what this new area of testing holds for the future of the clinical laboratory.


This chapter presents a brief overview of the more common and some emerging laboratory methodologies, including their potential advantages and pitfalls. Some historical methods have been discussed to provide a basis and description of the simple principles on which the more complex methods are based. A summary of some of the most common assay methods performed for routine laboratory tests is provided in Table 2-2.

Because of its simplicity and improved sensitivity, ISE has replaced flame photometry as the principal method for measuring serum and urine electrolytes in clinical specimens. Some methods, including turbidimetry, nephelometry, and spectrophotometry, are used in conjunction with other tests such as immunoassays. With these methods, concentrations of substances such as immune complexes can be determined.

Mass spectrometry is the gold standard for the identification of unknown substances, including drugs of abuse. Many of the newest designer drugs and bath salts are only identifiable based on this technique, as no other methodologies exist to detect them in clinical specimens. The two principal forms of chromatography are liquid and gas. Both depend on differences among analytes, in solubility for liquid chromatography or in boiling point for gas chromatography, to separate the components of a sample. Another group of important tests are the immunoassays: EIA, EMIT, ELISA, and FPIA. These methods depend on immunologically mediated reactions and offer increased sensitivity and specificity over RIA. These assays are commonly used to determine routine clinical chemistries and drug concentrations. PCR and other nucleic acid amplification techniques are used to amplify specific DNA and RNA sequences, primarily in the areas of microbiology and detection of genetic diseases. Finally, with the potential advances envisioned in the area of nanotechnology, the laboratory will be able to provide clinicians with information and access to the patient’s cellular and molecular environments, thus providing the ability to target therapies at the exact site of the pathologic process.

The rapid technological advancement of laboratory instrumentation has led to the implementation of new and enhanced clinical laboratory methodologies, including MS, cytometry, laboratory automation, and POC testing. Although laboratory medicine endeavors to keep pace with the burgeoning developments in biomedical sciences, especially with an increase in the sophistication of the tests, it is essential that today’s clinicians have a basic understanding of the more common and esoteric tests to select the most appropriate one in each case. All of these developments will translate directly into improved patient care.


  • 1. Smith T. Quality automated. In: Advance for Administrators of the Laboratory. King of Prussia, PA: Merion Publications Inc; 2007:44-48.

  • 2. US Department of Health and Human Services, Office of the Inspector General. Comparing lab test payment rates: Medicare could achieve substantial savings. (accessed 2015 October 1).

  • 3. Felder RA. Automation: survival tools for the hospital laboratory. Paper presented at The Second International Bayer Diagnostics Laboratory Testing Symposium. New York; 1998 Jul 17.

  • 4. Felder RA, Graves S, Mifflin T. Reading the future: increasing the relevance of laboratory medicine in the next century. MLO Med Lab Obs. 1999;31:20-21, 24-26.

  • 5. Clinical and Laboratory Standards Institute. Standards documents for automation and informatics. (accessed 2015 Oct 20).

  • 6. Imants RL. Microfabricated biosensors and microanalytical systems for blood analysis. Acc Chem Res. 1998;31:317-324.

  • 7. Nguyen A. Principles of instrumentation. In: McPherson RA, Pincus MR, eds. Henry’s Clinical Diagnosis and Management by Laboratory Methods. 21st ed. Philadelphia: WB Saunders; 2006:60-79.

  • 8. Wehry EA. Molecular fluorescence and phosphorescence spectrometry. In: Settle FA, ed. Handbook of Instrumental Techniques for Analytical Chemistry. Upper Saddle River, NJ: Prentice-Hall; 1997:507-539.

  • 9. Tiffany TO. Fluorometry, nephelometry, and turbidimetry. In: Burtis CA, Ashwood ER, eds. Tietz Fundamentals of Clinical Chemistry. 5th ed. Philadelphia: WB Saunders; 2001:74-90.

  • 10. Evenson MA. Photometry. In: Burtis CA, Ashwood ER, eds. Tietz Fundamentals of Clinical Chemistry. 5th ed. Philadelphia: WB Saunders; 2001:56-73.

  • 11. Moore RE. Immunochemical methods. In: McClatchey KD, ed. Clinical Laboratory Medicine. Baltimore: Williams & Wilkins; 1994:213-238.

  • 12. George JW, O’Neill SL. Comparison of refractometer and biuret methods for total protein measurement in body cavity fluids. Vet Clin Pathol. 2001;30(1):16-18.

  • 13. Freier ES. Osmometry. In: Burtis CA, Ashwood ER, eds. Clinical Chemistry. 2nd ed. Philadelphia: WB Saunders; 1994:184-190.

  • 14. Durst RA, Siggaard-Andersen O. Electrochemistry. In: Burtis CA, Ashwood ER, eds. Tietz Fundamentals of Clinical Chemistry. 5th ed. Philadelphia: WB Saunders; 2001:104-120.

  • 15. Burnett W, Lee-Lewandrowski E, Lewandrowski K. Electrolytes and acid-base balance. In: McClatchey KD, ed. Clinical Laboratory Medicine. Baltimore: Williams & Wilkins; 1994:331-354.

  • 16. Ladenson JH, Apple FS, Koch DD. Misleading hyponatremia due to hyperlipemia: a method-dependent error. Ann Intern Med. 1981;95(6):707-708.

  • 17. Southern EM. Detection of specific sequences among DNA fragments separated by gel electrophoresis. J Mol Biol. 1975;98(3):503-517.

  • 18. Hoefer Scientific Instruments. Protein Electrophoresis Applications Guide. San Francisco: Hoefer Scientific Instruments; 1994.

  • 19. Christenson RH, Azzazy HME. Amino acids and proteins. In: Burtis CA, Ashwood ER, eds. Tietz Fundamentals of Clinical Chemistry. 5th ed. Philadelphia: WB Saunders; 2001:300-351.

  • 20. Chang R. Physical Chemistry With Applications to Biological Systems. New York: MacMillan; 1977.

  • 21. Fairbanks VF, Klee GG. Biochemical aspects of hematology. In: Burtis CA, Ashwood ER, eds. Clinical Chemistry. 2nd ed. Philadelphia: WB Saunders; 1994:1974-2072.

  • 22. Görg A, Postel W, Günther S. The current state of two-dimensional electrophoresis with immobilized pH gradients. Electrophoresis. 1988;9(9):531-546.

  • 23. Karcher RE, Nuttall KL. Electrophoresis. In: Burtis CA, Ashwood ER, eds. Tietz Fundamentals of Clinical Chemistry. 5th ed. Philadelphia, PA: WB Saunders; 2001:121-132.

  • 24. Tenover FC, Arbeit RD, Goering RV, et al. Interpreting chromosomal DNA restriction patterns produced by pulsed-field gel electrophoresis: criteria for bacterial strain typing. J Clin Microbiol. 1995;33(9):2233-2239.

  • 25. Slagle KM. Immunoassays: tools for sensitive, specific, and accurate test results. Lab Med. 1996;27:177.

  • 26. Köhler G, Milstein C. Continuous cultures of fused cells secreting antibody of predefined specificity. Nature. 1975;256(5517):495-497.

  • 27. Berson SA, Yalow RS, Bauman A, et al. Insulin-I131 metabolism in human subjects: demonstration of insulin binding globulin in the circulation of insulin treated subjects. J Clin Invest. 1956;35(2):170-190.

  • 28. Ashihara Y, Kasahara Y, Nakamura RM. Immunoassays and immunochemistry. In: McPherson RA, Pincus MR, eds. Henry’s Clinical Diagnosis and Management by Laboratory Methods. 21st ed. Philadelphia, PA: WB Saunders; 2001.

  • 29. Kitson FG, Larsen BS, McEwen CN. Gas Chromatography and Mass Spectrometry: A Practical Guide. San Diego: Academic Press; 1996.

  • 30. Siuzdak G. Mass Spectrometry for Biotechnology. San Diego: Academic Press; 1996.

  • 31. Bowers LD, Ullman MD, Burtis CA. Chromatography. In: Burtis CA, Ashwood ER, eds. Tietz Fundamentals of Clinical Chemistry. 5th ed. Philadelphia: WB Saunders; 2001:133-156.

  • 32. Van Bramer SE. An introduction to mass spectrometry (1997). (accessed 2015 Oct 18).

  • 33. Busch KL, Glish GL, McLuckey SA. Mass Spectrometry/Mass Spectrometry: Techniques and Applications of Tandem Mass Spectrometry. New York: VCH Publishers Inc; 1988.

  • 34. van Veen SQ, Claas EC, Kuijper EJ. High-throughput identification of bacteria and yeast by matrix-assisted laser desorption ionization-time of flight mass spectrometry in conventional medical microbiology laboratories. J Clin Microbiol. 2010;48(3):900-907.

  • 35. Melnick SJ. Acute lymphoblastic leukemia. Clin Lab Med. 1999;19(1):169-186.

  • 36. Alamo AL, Melnick SJ. Clinical application of four and five-color flow cytometry lymphocyte subset immunophenotyping. Cytometry. 2000;42(6):363-370.

  • 37. Raap AK. Overview of fluorescent in situ hybridization techniques for molecular cytogenetics. Current Protocols in Cytometry. 1997.

  • 38. Wilkinson DG. The theory and practice of in situ hybridization. In: Wilkinson DG, ed. In Situ Hybridization—A Practical Approach. Oxford: Oxford University Press; 1992:1-13.

  • 39. Erlich HA, Gelfand D, Sninsky JJ. Recent advances in the polymerase chain reaction. Science. 1991;252(5013):1643-1651.

  • 40. Remick DG. Clinical applications of molecular biology. In: McClatchey KD, ed. Clinical Laboratory Medicine. Baltimore: Williams & Wilkins; 1994:165-174.

  • 41. Ratjen F, Döring G. Cystic fibrosis. Lancet. 2003;361(9358):681-689.

  • 42. Bloom MV, Freyer GA, Micklos DA. Laboratory DNA Science: An Introduction to Recombinant DNA Techniques and Methods of Genome Analysis. Menlo Park, CA: Addison-Wesley; 1996.

  • 43. Russo VEA, Martienssen RA, Riggs AD. Epigenetic Mechanisms of Gene Regulation. Plainview, NY: Cold Spring Harbor Laboratory Press; 1996.

  • 44. Anderson NL, Anderson NG. Proteome and proteomics: new technologies, new concepts, and new words. Electrophoresis. 1998;19(11):1853-1861.

  • 45. Blackstock WP, Weir MP. Proteomics: quantitative and physical mapping of cellular proteins. Trends Biotechnol. 1999;17(3):121-127.

  • 46. Wilkins MR, Pasquali C, Appel RD, et al. From proteins to proteomes: large scale protein identification by two-dimensional electrophoresis and amino acid analysis. Biotechnology (N Y). 1996;14(1):61-65.

  • 47. Shinawi M, Cheung SW. The array CGH and its clinical applications. Drug Discov Today. 2008;13(17-18):760-770.

  • 48. Peppercorn J, Perou CM, Carey LA. Molecular subtypes in breast cancer evaluation and management: divide and conquer. Cancer Invest. 2008;26(1):1-10.

  • 49. Rosenwald A, Wright G, Chan WC, et al. The use of molecular profiling to predict survival after chemotherapy for diffuse large-B-cell lymphoma. N Engl J Med. 2002;346(25):1937-1947.

  • 50. Eeles RA, Kote-Jarai Z, Giles GG, et al. Multiple newly identified loci associated with prostate cancer susceptibility. Nat Genet. 2008;40(3):316-321.