Biotechnology
Biotechnology is the use of living systems and organisms to develop or make products, or "any
technological application that uses biological systems, living organisms or derivatives thereof, to
make or modify products or processes for specific use" (UN Convention on Biological Diversity, Art.
2). Depending on the tools and applications, it often overlaps with the (related) fields
of bioengineering, biomedical engineering, biomanufacturing, etc.
For thousands of years, humankind has used biotechnology in agriculture, food production,
and medicine. The term is largely believed to have been coined in 1919 by
Hungarian engineer Károly Ereky. In the late 20th and early 21st century, biotechnology has
expanded to include new and diverse sciences such as genomics, recombinant gene techniques,
applied immunology, and development of pharmaceutical therapies and diagnostic tests.
Definitions
The broad concept of "biotech" or "biotechnology" encompasses a wide range of procedures for modifying living organisms according to human purposes, going back to the domestication of animals, the cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms such as pharmaceuticals, crops, and livestock. According to the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services.[4] Biotechnology also draws on the pure biological sciences (animal cell culture, biochemistry, cell biology, embryology, genetics, microbiology, and molecular biology). In many instances, it is also dependent on knowledge and methods from outside the sphere of biology, including:
bioinformatics, a new branch of computer science
bioprocess engineering
biorobotics
chemical engineering
Conversely, modern biological sciences (including even concepts such as molecular ecology) are
intimately entwined and heavily dependent on the methods developed through biotechnology and
what is commonly thought of as the life sciences industry. Biotechnology also encompasses laboratory research and development that uses bioinformatics for the exploration, extraction, exploitation and production of products from living organisms and other sources of biomass by means of biochemical engineering. High value-added products can be planned (reproduced by biosynthesis, for example), forecast, formulated, developed, manufactured and marketed with the aims of sustainable operations (to recover the large initial investment in R&D) and of gaining durable patent rights (exclusive rights to sell, which, especially in the pharmaceutical branch of biotechnology, first require national and international approval of the results of animal and human trials to rule out undetected side effects or safety concerns in using the products).
By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes
higher systems approaches (not necessarily the altering or using of biological materials directly) for
interfacing with and utilizing living things. Bioengineering is the application of the principles of
engineering and natural sciences to tissues, cells and molecules. This can be considered as the use
of knowledge from working with and manipulating biology to achieve a result that can improve
functions in plants and animals. Relatedly, biomedical engineering is an overlapping field that often
draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of
biomedical and/or chemical engineering such as tissue engineering, biopharmaceutical engineering,
and genetic engineering.
History
Brewing was an early application of biotechnology
Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit
the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation
of plants may be viewed as the earliest biotechnological enterprise.
Agriculture has been theorized to have become the dominant way of producing food since
the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the
best suited crops, having the highest yields, to produce enough food to support a growing
population. As crops and fields became increasingly large and difficult to maintain, it was discovered
that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control
pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their
crops through introducing them to new environments and breeding them with other plants — one of
the first forms of biotechnology.
These processes were also used in the early fermentation of beer, which was introduced in early Mesopotamia, Egypt, China and India and still relies on the same basic biological methods. In brewing, malted grains (containing enzymes) convert starch from grains into sugar, and then specific yeasts are added to produce beer. In this process, carbohydrates in the grains are broken down into alcohols such as ethanol. Later, other cultures developed the process of lactic acid fermentation, which allowed the fermentation and preservation of other forms of food, such as soy sauce. Fermentation was also used in this time period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert a food source into another form.
Before the time of Charles Darwin's work and life, animal and plant scientists had already used
selective breeding. Darwin added to that body of work with his scientific observations about the
ability of science to change species. These accounts contributed to Darwin's theory of natural
selection.
For thousands of years, humans have used selective breeding to improve production of crops and
livestock to use them for food. In selective breeding, organisms with desirable characteristics are
mated to produce offspring with the same characteristics. For example, this technique was used with
corn to produce the largest and sweetest crops.
In the early twentieth century scientists gained a greater understanding of microbiology and explored
ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological
culture in an industrial process: using Clostridium acetobutylicum to ferment corn starch and produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.
Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered
the mold Penicillium. His work led to the purification of the antibiotic compound formed by the mold
by Howard Florey, Ernst Boris Chain and Norman Heatley, forming what we today know as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in
humans.
The field of modern biotechnology is generally thought of as having been born in 1971 when Paul
Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at
San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972
by transferring genetic material into a bacterium, such that the imported material would be
reproduced. The commercial viability of a biotechnology industry was significantly expanded on June
16, 1980, when the United States Supreme Court ruled that a genetically
modified microorganism could be patented in the case of Diamond v. Chakrabarty. Indian-born
Ananda Chakrabarty, working for General Electric, had modified a bacterium (of
the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating
oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire plasmids between strains of the Pseudomonas bacterium.)
Revenue in the industry is expected to grow by 12.9% in 2008. Another factor influencing the
biotechnology sector's success is improved intellectual property rights legislation—and
enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products
to cope with an ageing, and ailing, U.S. population.
Rising demand for biofuels is expected to be good news for the biotechnology sector, with
the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel
consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry
to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing
genetically modified seeds which are resistant to pests and drought. By boosting farm productivity,
biotechnology plays a crucial role in ensuring that biofuel production targets are met.
Examples
A rose plant that began as cells grown in a tissue culture
Biotechnology has applications in four major industrial areas, including health care (medical), crop
production and agriculture, non-food (industrial) uses of crops and other products
(e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.
For example, one application of biotechnology is the directed use of organisms for the manufacture
of organic products (examples include beer and milk products). Another example is using naturally
present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat
waste, clean up sites contaminated by industrial activities (bioremediation), and also to
produce biological weapons.
A series of derived terms have been coined to identify several branches of biotechnology; for example:
Bioinformatics is an interdisciplinary field which addresses biological problems using computational techniques, and makes the rapid organization as well as analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale."[16] Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component of the biotechnology and pharmaceutical sector.
Blue biotechnology is a term that has been used to describe the marine and aquatic applications of biotechnology, but its use is relatively rare.
Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby ending the need for external application of pesticides; Bt corn is an example. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate.
Red biotechnology is applied to medical processes. Some examples are the designing of organisms to produce antibiotics, and the engineering of genetic cures through genetic manipulation.
White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than the traditional processes used to produce industrial goods. The investment and economic output of all of these types of applied biotechnologies is termed the "bioeconomy".
Medicine
In medicine, modern biotechnology finds applications in areas such as pharmaceutical
drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening).
DNA microarray chip – some can do as many as a million blood tests at once
Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses
how genetic makeup affects an individual's response to drugs.[17] It deals with the influence
of genetic variation on drug response in patients by correlating gene expression or single-nucleotide
polymorphisms with a drug's efficacy or toxicity.[18] By doing so, pharmacogenomics aims to develop
rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum
efficacy with minimal adverse effects.[19] Such approaches promise the advent of "personalized medicine", in which drugs and drug combinations are optimized for each individual's unique genetic makeup.
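As an illustration of the kind of association pharmacogenomics looks for, the following minimal Python sketch groups patients by their genotype at a single hypothetical SNP and compares the average drug response per group. The genotypes, response values and the simple mean comparison are invented for illustration; a real study would use proper statistics and far larger cohorts.

# Minimal pharmacogenomics-style sketch with invented data: group patients by
# genotype at one SNP and compare their measured drug response per group.
from collections import defaultdict
from statistics import mean

# (genotype at one SNP, measured drug response) for a handful of patients
patients = [
    ("AA", 0.82), ("AA", 0.91), ("AG", 0.64),
    ("AG", 0.70), ("GG", 0.38), ("GG", 0.45),
]

by_genotype = defaultdict(list)
for genotype, response in patients:
    by_genotype[genotype].append(response)

# A large difference in mean response between genotype groups suggests the SNP
# may be associated with the drug's efficacy or toxicity.
for genotype, responses in sorted(by_genotype.items()):
    print(genotype, round(mean(responses), 2))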
Computer-generated image of insulin hexamers highlighting the threefold symmetry, the zinc ions holding it
together, and the histidine residues involved in zinc binding.
Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology (biopharmaceutics). Modern biotechnology can be used to manufacture existing medicines relatively
easily and cheaply. The first genetically engineered products were medicines designed to treat
human diseases. To cite one example, in 1978 Genentech developed synthetic
humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia
coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas
of abattoir animals (cattle and/or pigs). The resulting genetically engineered bacterium enabled the
production of vast quantities of synthetic human insulin at relatively low cost.[22][23] Biotechnology has
also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic
science (for example through the Human Genome Project) has also dramatically improved our
understanding of biology, and as our scientific knowledge of normal and disease biology has
increased, our ability to develop new medicines to treat previously untreatable diseases has
increased as well. Genetic testing allows the genetic diagnosis of vulnerabilities to
inherited diseases, and can also be used to determine a child's parentage (genetic mother and
father) or in general a person's ancestry. In addition to studying chromosomes to the level of
individual genes, genetic testing in a broader sense includes biochemical tests for the possible
presence of genetic diseases, or mutant forms of genes associated with increased risk of developing
genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins.[24] Most of
the time, testing is used to find changes that are associated with inherited disorders. The results of a
genetic test can confirm or rule out a suspected genetic condition or help determine a person's
chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests
were in use. Since genetic testing may open up ethical or psychological problems, genetic testing is
often accompanied by genetic counseling.
Agriculture
Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture,
the DNA of which has been modified with genetic engineering techniques. In most cases the aim is
to introduce a new trait to the plant which does not occur naturally in the species.
Examples in food crops include resistance to certain pests, diseases or stressful environmental conditions, resistance to chemical treatments (e.g. resistance to a herbicide), reduction of spoilage, or improvement of the nutrient profile of the crop. Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as bioremediation.
Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of
land cultivated with GM crops had increased by a factor of 94, from 17,000 square kilometers
(4,200,000 acres) to 1,600,000 km2 (395 million acres). 10% of the world's crop lands were planted
with GM crops in 2010.[38] As of 2011, 11 different transgenic crops were grown commercially on 395
million acres (160 million hectares) in 29 countries such as the USA, Brazil, Argentina, India,
Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines,
Myanmar, Burkina Faso, Mexico and Spain.
Genetically modified foods are foods produced from organisms that have had specific changes
introduced into their DNA with the methods of genetic engineering. These techniques have allowed
for the introduction of new crop traits as well as a far greater control over a food's genetic structure
than previously afforded by methods such as selective breeding and mutation
breeding.[39] Commercial sale of genetically modified foods began in 1994, when Calgene first
marketed its Flavr Savr delayed-ripening tomato.[40] To date, most genetic modification of foods has primarily focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cottonseed oil. These have been engineered for resistance to pathogens and herbicides and
better nutrient profiles. GM livestock have also been experimentally developed, although as of
November 2013 none are currently on the market.
There is broad scientific consensus that food on the market derived from GM crops poses no greater
risk to human health than conventional food. GM crops also provide a number of ecological benefits,
if not used in excess.[47] However, opponents have objected to GM crops per se on several grounds,
including environmental concerns, whether food produced from GM crops is safe, whether GM crops
are needed to address the world's food needs, and economic concerns raised by the fact that these organisms are subject to intellectual property law.
Industrial biotechnology
Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of
biotechnology for industrial purposes, including industrial fermentation. It includes the practice of
using cells such as micro-organisms, or components of cells like enzymes, to
generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper
and pulp, textiles and biofuels. In doing so, biotechnology uses renewable raw materials and may
contribute to lowering greenhouse gas emissions and moving away from a petrochemical-based
economy.
Regulation
Main articles: Regulation of genetic engineering and Regulation of the release of genetically modified organisms
The regulation of genetic engineering concerns approaches taken by governments to assess and
manage the risks associated with the use of genetic engineering technology, and the development
and release of genetically modified organisms (GMO), including genetically modified
crops and genetically modified fish. There are differences in the regulation of GMOs between
countries, with some of the most marked differences occurring between the USA and
Europe.[50] Regulation varies in a given country depending on the intended use of the products of the
genetic engineering. For example, a crop not intended for food use is generally not reviewed by
authorities responsible for food safety.[51] The European Union differentiates between approval for
cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing.[52] The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, the incentives for cultivation of GM crops differ.[53]
Biological engineering
Modeling of the spread of disease using Cellular Automata and Nearest Neighbor Interactions
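The figure caption above refers to modeling the spread of disease with a cellular automaton and nearest-neighbor interactions. A minimal Python sketch of such a model is given below; the grid size, infection probability and one-step recovery rule are illustrative assumptions rather than details from the source.

# Minimal nearest-neighbor cellular-automaton sketch of disease spread.
# States: 'S' susceptible, 'I' infected, 'R' recovered. Parameters are invented.
import random

SIZE = 20          # grid is SIZE x SIZE cells
P_INFECT = 0.25    # chance an infected neighbor infects a susceptible cell
STEPS = 10

grid = [["S"] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = "I"   # seed one infection in the center

def step(grid):
    new = [row[:] for row in grid]
    for y in range(SIZE):
        for x in range(SIZE):
            if grid[y][x] == "I":
                new[y][x] = "R"    # infected cells recover after one step (simplification)
            elif grid[y][x] == "S":
                # look at the four nearest neighbors
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < SIZE and 0 <= nx < SIZE and grid[ny][nx] == "I":
                        if random.random() < P_INFECT:
                            new[y][x] = "I"
                            break
    return new

for _ in range(STEPS):
    grid = step(grid)

print("recovered cells after", STEPS, "steps:",
      sum(row.count("R") for row in grid))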
Biological engineering or bioengineering (including biological systems engineering) is the application of concepts and methods of biology (and secondarily of physics, chemistry, mathematics, and computer science) to solve real-world problems related to the life sciences or the application thereof, using engineering's own analytical and synthetic methodologies and also its traditional sensitivity to the cost and practicality of the solution(s) arrived at. In this context, while traditional engineering applies physical and mathematical sciences to analyze, design and manufacture inanimate tools, structures and processes, biological engineering primarily uses the rapidly developing body of knowledge known as molecular biology to study and advance applications of organisms and to create biotechnology.
An especially important application is the analysis and cost-effective solution of problems related
to human health, but the field is much more general than that. For example, biomimetics is a branch
of biological engineering which strives to find ways in which the structures and functions of living
organisms can be used as models for the design and engineering of materials and
machines. Systems biology, on the other hand, seeks to exploit the engineer's familiarity with
complex artificial systems, and perhaps the concepts used in "reverse engineering", to facilitate the
difficult process of recognition of the structure, function, and precise method of operation of complex
biological systems.
The differentiation between biological engineering and biomedical engineering can be unclear, as
many universities loosely use the terms "bioengineering" and "biomedical engineering"
interchangeably.[1] Biomedical engineers are specifically focused on applying biological and other
sciences toward medical innovations, whereas biological engineers are focused principally on
applying engineering principles to biology - but not necessarily for medical uses. Hence neither
"biological" engineering nor "biomedical" engineering is wholly contained within the other, as there
can be "non-biological" products for medical needs as well as "biological" products for nonmedical needs (the latter including notably biosystems engineering).
Biological engineering is a science-based discipline founded upon the biological sciences in the
same way that chemical engineering, electrical engineering, and mechanical engineering[2] can be
based upon chemistry, electricity and magnetism, and classical mechanics, respectively.[3]
Biological engineering can be differentiated from its roots of pure biology or other engineering fields.
Biological studies often follow a reductionist approach in viewing a system on its smallest possible
scale which naturally leads toward the development of tools like functional genomics. Engineering
approaches, using classical design perspectives, are constructionist, building new devices,
approaches, and technologies from component parts or concepts. Biological engineering uses both
approaches in concert, relying on reductionist approaches to identify, understand, and organize the
fundamental units, which are then integrated to generate something new.[4] In addition, because it is
an engineering discipline, biological engineering is fundamentally concerned not just with the basic science, but with the practical application of that scientific knowledge to solve real-world problems in a cost-effective way.
Although engineered biological systems have been used to manipulate information, construct
materials, process chemicals, produce energy, provide food, and help maintain or enhance human
health and our environment, our ability to quickly and reliably engineer biological systems that
behave as expected is at present less well developed than our mastery over mechanical and
electrical systems.[5]
ABET,[6] the U.S.-based accreditation board for engineering B.S. programs, makes a distinction between biomedical engineering and biological engineering, though there is much overlap (see above). Foundational courses are often the same and include thermodynamics, fluid and mechanical dynamics, kinetics, electronics, and materials properties.[7][8] According to Professor Doug Lauffenburger of MIT,[10] biological engineering (like biotechnology) has a broader base which applies engineering principles to an enormous range of sizes and complexities of systems, ranging from the molecular level - molecular biology, biochemistry, microbiology, pharmacology, protein chemistry, cytology, immunology, neurobiology and neuroscience (often but not always using biological substances) - to cellular and tissue-based methods (including devices and sensors), whole macroscopic organisms (plants, animals), and up increasing length scales to whole ecosystems.
The word bioengineering was coined by British scientist and broadcaster Heinz Wolff in 1954.[11] The
term bioengineering is also used to describe the use of vegetation in civil engineering construction.
The term bioengineering may also be applied to environmental modifications such as surface soil
protection, slope stabilization, watercourse and shoreline protection, windbreaks, vegetation barriers
including noise barriers and visual screens, and the ecological enhancement of an area. The first
biological engineering program in the United States was created at Mississippi State University in 1967.[12] More recent programs have been launched
at MIT and Utah State University.[13]
Description
Biological engineers or bioengineers are engineers who use the principles of biology and the tools of
engineering to create usable, tangible, economically viable products.[14] Biological engineering
employs knowledge and expertise from a number of pure and applied sciences,[15] such as mass and
heat transfer, kinetics, biocatalysts, biomechanics, bioinformatics, separation and purification
processes, bioreactor design, surface science, fluid mechanics, thermodynamics, and polymer
science. It is used in the design of medical devices, diagnostic equipment, biocompatible materials,
renewable bioenergy, ecological engineering, agricultural engineering, and other areas that improve
the living standards of societies.
In general, biological engineers attempt to either mimic biological systems to create products or
modify and control biological systems so that they can replace, augment, sustain, or predict
chemical and mechanical processes.[16] Bioengineers can apply their expertise to other applications
of engineering and biotechnology, including genetic modification of plants and microorganisms,
bioprocess engineering, and biocatalysis.
Because other engineering disciplines also address living organisms (e.g., prosthetics in biomechanical engineering), the term biological engineering can be applied more broadly to
include agricultural engineering and biotechnology, which notably can address non-healthcare objectives as well (unlike biomedical engineering). In fact, many old agricultural engineering departments in universities around the world have rebranded themselves as agricultural and biological engineering or agricultural and biosystems engineering. Biological engineering is also called bioengineering by some colleges, and biomedical engineering is called bioengineering by
others, and is a rapidly developing field with fluid categorization. Depending on the institution and
particular definitional boundaries employed, some major fields of bioengineering may be categorized
as (note these may overlap):
biological systems engineering
biomedical engineering: biomedical technology, biomedical diagnostics, biomedical therapy, biomechanics, biomaterials;
genetic engineering (involving both of the above, although in different applications): synthetic biology, horizontal gene transfer;
bioprocess engineering: bioprocess design, biocatalysis, bioseparation, bioinformatics, bioenergy;
cellular engineering: cell engineering, tissue engineering, metabolic engineering;
biomimetics: the use of knowledge gained from reverse engineering evolved living systems to solve difficult design problems in artificial systems.
Biotechnology High School
Biotechnology High School (nicknamed Biotech) is a comprehensive vocational public high
school serving students in ninth through twelfth grades in Freehold Township, in Monmouth
County, New Jersey, United States, as part of the Monmouth County Vocational School
District (MCVSD). Its curriculum boasts a rigorous science course which consists of eight different
science classes over four years, designed to prepare students to pursue further education
in biotechnology. Emphasis is placed on research, laboratory skills, critical thinking, problem solving,
technology, and teamwork. Over 90% of the 2009 graduates selected college majors in the life
sciences.[2] The school has been accredited by the Middle States Association of Colleges and
Schools Commission on Secondary Schools since 2005.[3]
As of the 2012-13 school year, the school had an enrollment of 311 students and 23.0 classroom
teachers (on an FTE basis), for a student–teacher ratio of 13.52:1. There were 3 students (1.0% of
enrollment) eligible for free lunch and 1 (0.3% of students) eligible for reduced-cost lunch.[1]
As of April 2007, it became an official International Baccalaureate Organization school, offering
the IB Diploma Programme.[4] Most of the courses taken in the junior and senior years are IB
courses, which are comparable to Advanced Placement (AP) courses.[5] In their junior year, students
declare for either IB Chemistry or IB Physics concentration which requires two full years of study.[2][6]
Mission
"Biotechnology High School, an International Baccalaureate (IB) World School, integrates life
science, technology, and engineering into a rigorous curriculum that inspires students toward open-minded participation in the global community and prepares them for higher education and leadership
in an increasingly demanding workplace through scholarly research, original investigations, and
interactive partnerships."
Awards, recognition, and rankings
The school was recognized by the National Blue Ribbon Schools Program in 2014, as one of 11
schools to be honored in the state that year.
Biotechnology High School was ranked the eleventh best high school in the United States and 1st in New Jersey by U.S. News and World Report in 2014.[8] In 2015, the school was ranked nineteenth in the nation while still retaining its spot at the top of the list of best high schools in New Jersey.
In its 2013 report on "America's Best High Schools", The Daily Beast ranked the school 24th in the
nation among participating public high schools and 2nd among schools in New Jersey. The school
was ranked 20th in the nation and first in New Jersey on the list of "America's Best High Schools
2012" prepared by The Daily Beast / Newsweek, with rankings based 25% each on graduation rate,
matriculation rate for college and number of Advanced Placement / International
Baccalaureate courses taken per student, with 10% based on average scores on the SAT / ACT,
10% on AP/IB scores and an additional 5% based on the number of AP/IB courses available to
students.[10]
In 2011, Biotech was ranked 25th out of more than 27,000 U.S. high schools by Newsweek, using a
six component rating system: graduation rate (25%), college matriculation rate (25%), AP/IB tests
taken per graduate (25%), average SAT/ACT scores (10%), average AP/IB scores (10%), and AP
courses offered per graduate (5%). It was ranked 18th for average SAT scores (2008–2010) and
26th for number of AP/IB tests given per graduate in 2010.[11]
In 2010, Biotechnology High School was categorized as a "Public Elite" by Newsweek.[12] In
September 2010, Biotechnology High School entered into a partnership with Monmouth University,
which allows graduates to enter into the University's bachelor of science program, in addition to
allowing students to participate in research projects with MU faculty.
2009-2010 average SAT scores were 654 reading, 674 math, and 680 writing.
Schooldigger.com ranked the school as one of 16 schools tied for first out of 381 public high schools
statewide in its 2011 rankings (unchanged from the 2010 rank) which were based on the combined
percentage of students classified as proficient or above proficient on the language arts literacy
(100.0%) and mathematics (100.0%) components of the High School Proficiency
Assessment (HSPA).
On October 29, 2009, the school was featured in an Asbury Park Press special feature "A Day in the
life of Biotechnology High School", which went into the classroom to examine the day-to-day life of
some of the students.
Administration History
Sean Meehan - Principal (January 2015 – Present). Meehan was a history teacher at Biotechnology High School until he took a job in 2013 to work for the Monmouth County Vocational School District.
Dr. Linda Eno - Principal (2005 – December 2014). Eno is a former pre-medical teacher at the AAHS.
Faculty
Biotech's faculty includes four teachers holding doctorates in their fields from graduate schools such
as UNC Chapel Hill and Princeton University. Other staff members hold bachelor's and master's degrees, many from Rutgers-New Brunswick.
Architecture
The Biotechnology High School facility was completed in December 2005 and cost approximately
$16.9 million to construct.[16] Designed by The Thomas Group, the building consists of two floors which account for more than 76,000 total square feet and can hold a maximum student population of 280. It meets all requirements for LEED certification[17] and features:
Two dedicated biotech labs with state-of-the-art equipment
General-purpose labs for biology, chemistry and physics
A tissue culture lab
Hookups for alternative fuel vehicles
An Energy Star compliant EPDM roof
Admission criteria
As with all of the career academies in the Monmouth County Vocational School District, a student
wishing to attend Biotechnology High School must apply in a competitive admissions process. Only
those who possess high 7th grade final scores and 8th grade first semester scores (based on the
core subjects: mathematics, English, science, and history), and achieve high scores in an admission
exam that encompasses language arts literacy and mathematics, as compared to others applying,
will be offered admission. Monmouth County students may only apply to one of the five MCVSD
schools, so they must consider their application choice carefully.
Paperwork must also be submitted in the admissions process, including an application that is
received upon visiting the school during a designated open-house day. The application includes a
short writing task that is not considered in the admissions process.
Curriculum
All students at Biotechnology High School study under a rigorous curriculum that includes eight
mandatory laboratory science classes. All students enter the International Baccalaureate Diploma
Program upon starting junior year, at which point they declare for either an IB Physics or an IB
Chemistry concentration. All students must take a biology course in each of their four years, in addition to at least one more science class. Four years of humanities are also taught. In all career academies in the Monmouth County Vocational School District, 160 credit-hours are required to graduate.
Biomedical engineering
Ultrasound representation of the urinary bladder (black butterfly-like shape) and a hyperplastic prostate. An example of engineering science and medical science working together.
Example of an approximately 40,000 probe spotted oligo microarray with enlarged inset to show detail.
Biomedical engineering (BME) is the application of engineering principles and design concepts to
medicine and biology for healthcare purposes (e.g. diagnostic or therapeutic). This field seeks to
close the gap between engineering and medicine: It combines the design and problem solving skills
of engineering with medical and biological sciences to advance health care treatment,
including diagnosis, monitoring, and therapy.[1] Biomedical engineering has only recently emerged as
its own study, compared to many other engineering fields. Such an evolution is common as a new
field transitions from being an interdisciplinary specialization among already-established fields, to
being considered a field in itself. Much of the work in biomedical engineering consists of research
and development, spanning a broad array of subfields (see below). Prominent biomedical
engineering applications include the development of biocompatible prostheses, various diagnostic
and therapeutic medical devices ranging from clinical equipment to micro-implants, common imaging
equipment such as MRIs and EEGs, regenerative tissue growth, pharmaceutical drugs and
therapeutic biologicals.
History
Biomedical engineering has existed for centuries, perhaps even thousands of years. In 2000,
German archaeologists uncovered a 3,000-year-old mummy from Thebes with a wooden prosthetic
tied to its foot to serve as a big toe. Researchers said the wear on the bottom surface suggests that
it could be the oldest known limb prosthesis.
1816: Modesty prevented French physician René Laennec from placing his ear next to a young woman’s bare chest, so he rolled up a newspaper and listened through it, triggering the idea for his invention that led to today’s ubiquitous stethoscope.
The roots of biomedical engineering reach back to early developments in electrophysiology, which originated about 200 years ago.
1848: An early landmark occurred in electrophysiology: Hermann von Helmholtz is credited with applying engineering principles to a problem in physiology and identifying the resistance of muscle and nervous tissues to direct current.
1895: Wilhelm Roentgen accidentally discovered that a cathode-ray tube could make a sheet of paper coated with barium platinocyanide glow, even when the tube and the paper were in separate rooms. Roentgen decided the tube must be emitting some kind of penetrating rays, which he called “X” rays. This set off a flurry of research into the tissue-penetrating and tissue-destroying properties of X-rays, a line of research that ultimately produced the modern array of medical imaging technologies and virtually eliminated the need for exploratory surgery.
Origin
Biomedical engineering originated during World War II, when work on advances in radar technology led to new electronic developments in medicine. The next generation of biologists could not benefit fully from this technology because they did not understand it, so a bridge was needed to fill the gap between technical knowledge and biology. Doctors and biologists who were interested in engineering, and electrical engineers interested in biology, became the first bioengineers. Those primarily concerned with medicine became the first biomedical engineers. The unique mix of engineering, medicine and science in biomedical engineering emerged alongside biophysics and medical physics early in the twentieth century.
Major milestones
Biomedical engineering achievements range from early devices such as crutches, platform shoes, and wooden teeth to more modern equipment, including pacemakers, heart-lung machines, dialysis machines, diagnostic equipment, imaging technologies of every kind, artificial organs, medical implants and advanced prosthetics.
1895: Wilhelm Conrad Roentgen (Germany) discovered the X-ray using gas discharge tubes.
1896: Henri Becquerel (France) discovered that penetrating rays (later recognized as radioactivity) were emitted from uranium ore.
1901: Roentgen received the Nobel Prize for the discovery of X-rays.
1903: Willem Einthoven invented the electrocardiogram (ECG).
1921: The first formal training in biomedical engineering was started at the Oswalt Institute for Physics in Medicine, Frankfurt, Germany.
1927: Invention of the Drinker respirator.
1929: Hans Berger invented the electroencephalogram (EEG).
1930s: X-rays were being used to visualize most organ systems using radio-opaque materials; refrigeration permitted blood banks.
Mid 1930s – early 1940s: Antibiotics, sulfanilamide and penicillin, reduced cross-infection in hospitals.
1940: Cardiac catheterization.
1943: The International Bio-Physical Society was formed.
1948: The first conference on Engineering in Medicine & Biology was held in the United States.
1950: Electron microscope.
1950s – early 1960s: Nuclear medicine.
1953: Cardiopulmonary bypass (heart–lung machine).
1969: Case Western Reserve created the first MD/PhD program.
1970: Computed tomography (CT) and magnetic resonance imaging (MRI).
1975: The Whitaker Foundation was founded.
1980: Gamma camera, positron emission tomography (PET) and SPECT.
1997: The first indigenous endovascular coronary stent (Kalam-Raju stent) was developed by the Care Foundation.
Biomedical engineering has provided advances in medical technology to improve human health. According to the National Academy of Engineering, about 32,000 biomedical engineers are currently working in various areas of health care technology.
Bioinformatics
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data. Bioinformatics is both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single-nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organisational principles within nucleic acid and protein sequences.
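As a concrete illustration of the routine sequence analysis that such pipelines automate, the following minimal Python sketch computes GC content and scans two aligned sequences for single-base differences (candidate SNPs). The sequences are invented, and the code is only a toy example, not a real bioinformatics pipeline.

# Toy sequence analysis: GC content and a naive scan for single-base
# differences between two aligned sequences of equal length (invented data).
reference = "ATGGCGTACGTTAGC"
sample    = "ATGGCGTACATTAGC"

def gc_content(seq):
    """Fraction of bases that are G or C."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def point_differences(a, b):
    """Positions where two aligned sequences differ (candidate SNPs)."""
    return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

print("GC content of reference:", round(gc_content(reference), 2))
print("single-base differences:", point_differences(reference, sample))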
Biomechanics
Biomaterial
A biomaterial is any matter, surface, or construct that interacts with living systems. As a
science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials
science or biomaterials engineering. It has experienced steady and strong growth over its history,
with many companies investing large amounts of money into the development of new products.
Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and
materials science.
Biomedical optics
Tissue engineering
Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material) for
patients that need organ transplants. Biomedical engineers are currently researching methods of
creating such organs. Researchers have grown solid jawbones[2] and tracheas from human stem
cells towards this end. Several artificial urinary bladders have been grown in laboratories and
transplanted successfully into human patients.[3] Bioartificial organs, which use both synthetic and
biological components, are also a focus area in research, such as with hepatic assist devices that use
liver cells within an artificial bioreactor construct.[4]
Micromass cultures of C3H-10T1/2 cells at varied oxygen tensions stained with Alcian blue.
Genetic engineering
Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and
gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike
traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern
tools such as molecular cloning and transformation to directly alter the structure and characteristics
of target genes. Genetic engineering techniques have found success in numerous applications.
Some examples include the improvement of crop technology (not a medical application, but
see biological systems engineering), the manufacture of synthetic human insulin through the use of
modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of
new types of experimental mice such as the oncomouse (cancer mouse) for research.
Neural engineering
Neural engineering (also known as neuroengineering) is a discipline that uses engineering
techniques to understand, repair, replace, or enhance neural systems. Neural engineers are
uniquely qualified to solve design problems at the interface of living neural tissue and non-living
constructs.
Pharmaceutical engineering
Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis. It may be deemed a part of pharmacy due to its focus on applying technology to chemical agents to provide better medicinal treatment. The ISPE is an international body that certifies this now rapidly emerging interdisciplinary science.
Medical devices
This is an extremely broad category—essentially covering all health care products that do not
achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological
(e.g., vaccines) means, and do not involve metabolism.
A medical device is intended for use in:
the diagnosis of disease or other conditions, or
the cure, mitigation, treatment, or prevention of disease.
Some examples include pacemakers, infusion pumps, the heart-lung
machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear
implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.
Biomedical instrumentation amplifier schematic used in monitoring low voltage biological signals, an example of a biomedical engineering application of electronic engineering to electrophysiology.
Stereolithography is a practical example of medical modeling being used to create physical objects.
Beyond modeling organs and the human body, emerging engineering techniques are also currently
used in the research and development of new devices for innovative therapies,[5] treatments,[6] and patient monitoring[7] of complex diseases.
Medical devices are regulated and classified (in the US) as follows (see also Regulation):
1. Class I devices present minimal potential for harm to the user
and are often simpler in design than Class II or Class III
devices. Devices in this category include tongue depressors,
bedpans, elastic bandages, examination gloves, and hand-held
surgical instruments and other similar types of common
equipment.
2. Class II devices are subject to special controls in addition to the
general controls of Class I devices. Special controls may
include special labeling requirements, mandatory performance
standards, and postmarket surveillance. Devices in this class
are typically non-invasive and include X-ray machines, PACS,
powered wheelchairs, infusion pumps, and surgical drapes.
3. Class III devices generally require premarket approval (PMA) or
premarket notification (510k), a scientific review to ensure the
device's safety and effectiveness, in addition to the general
controls of Class I. Examples include replacement heart valves,
hip and knee joint implants, silicone gel-filled breast implants,
implanted cerebellar stimulators, implantable pacemaker pulse
generators and endosseous (intra-bone) implants.
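For illustration only, the class descriptions above can be restated as a small lookup table in Python. This is a rough sketch that simply mirrors the text (general controls, special controls, premarket review) and the example devices listed above; it is not an authoritative encoding of FDA rules.

# Hedged sketch: a lookup table restating the US device classes described above.
DEVICE_CLASSES = {
    "I":   {"controls": "general controls",
            "examples": ["tongue depressor", "elastic bandage"]},
    "II":  {"controls": "general plus special controls (labeling, performance standards, postmarket surveillance)",
            "examples": ["X-ray machine", "infusion pump"]},
    "III": {"controls": "general controls plus premarket review (PMA or, for some devices, 510(k))",
            "examples": ["replacement heart valve", "implantable pacemaker pulse generator"]},
}

def summarize(device_class):
    info = DEVICE_CLASSES[device_class]
    return f"Class {device_class}: {info['controls']}; e.g. {', '.join(info['examples'])}"

for cls in ("I", "II", "III"):
    print(summarize(cls))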
Medical imaging
Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling
clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size,
and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
An MRI scan of a human head, an example of a biomedical engineering application of electrical
engineering to diagnostic imaging.
Imaging technologies are often essential to medical diagnosis, and are typically the most complex
equipment found in a hospital, including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.
Implants
An implant is a kind of medical device made to replace and act as a missing biological structure (as
compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants
that contact the body might be made of a biomedical material such as titanium, silicone or apatite
depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial
pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug
delivery devices in the form of implantable pills or drug-eluting stents.
Artificial limbs: The right arm is an example of a prosthesis, and the left arm is an example of myoelectric
control.
A prosthetic eye, an example of a biomedical engineering application of mechanical
engineering and biocompatible materials to ophthalmology.
Bionics
Artificial body part replacements are one of the many applications of bionics. Concerned with the
intricate and thorough study of the properties and function of human body systems, bionics may be
applied to solve some engineering problems. Careful study of the different functions and processes
of the eyes, ears, and other organs paved the way for improved cameras, television, radio
transmitters and receivers, and many other useful tools. These developments have indeed made our
lives better, but the best contribution that bionics has made is in the field of biomedical engineering
(the building of useful replacements for various parts of the human body). Modern hospitals now
have available spare parts to replace body parts badly damaged by injury or disease. Biomedical
engineers work hand in hand with doctors to build these artificial body parts.
Clinical engineering
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation
of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical
engineers include training and supervising biomedical equipment technicians (BMETs), selecting
technological products/services and logistically managing their implementation, working with
governmental regulators on inspections/audits, and serving as technological consultants for other
hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and
collaborate with medical device producers regarding prospective design improvements based on
clinical experiences, as well as monitor the progression of the state of the art so as to redirect
procurement patterns accordingly.
Their inherent focus on practical implementation of technology has tended to keep them oriented
more towards incremental-level redesigns and reconfigurations, as opposed to revolutionary
research & development or ideas that would be many years from clinical adoption; however, there is
a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory
of biomedical innovation. In their various roles, they form a "bridge" between the primary designers
and the end-users, by combining the perspectives of being both 1) close to the point-of-use, while 2)
trained in product and process engineering. Clinical Engineering departments will sometimes hire
not just biomedical engin
eers, but also industrial/systems engineers to help address operations research/optimization, human
factors, cost analysis, etc. Also see safety engineering for a discussion of the procedures used to
design safe systems.
Rehabilitation engineering
Rehabilitation engineering is the systematic application of engineering sciences to design,
develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted
by individuals with disabilities. Functional areas addressed through rehabilitation engineering may
include mobility, communications, hearing, vision, and cognition, and activities associated with
employment, independent living, education, and integration into the community.[1]
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a
subspecialty of Biomedical engineering, most rehabilitation engineers have undergraduate or
graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A
Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation
Engineering and Accessibility.[2][3] Qualification to become a rehabilitation engineer in the UK is possible via a university BSc Honours degree course, such as the one offered by the Health Design & Technology Institute, Coventry University.[4]
The rehabilitation process for people with disabilities often entails the design of assistive devices
such as walking aids intended to promote inclusion of their users into the mainstream of society,
commerce, and recreation.
Schematic representation of a normal ECG trace showing sinus rhythm; an example of widely used clinical
medical equipment (operates by applying electronic engineering to electrophysiology and medical diagnosis).
Regulatory issues
Regulatory requirements have constantly increased in recent decades in response to the many incidents caused by devices to patients. For example, from 2008 to 2011, in the US, there were 119 FDA recalls of medical devices classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with “a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death”.[8]
Regardless of the country-specific legislation, the main regulatory objectives coincide worldwide. For example, in the medical device regulations, a product must be 1) safe and 2) effective, and 3) this must hold for all the manufactured devices.
A product is safe if patients, users and third parties do not run unacceptable risks of physical hazards (death, injuries, …) in its intended use. Protective measures have to be introduced on the devices to reduce residual risks to an acceptable level compared with the benefit derived from the use of the device.
A product is effective if it performs as specified by the manufacturer in the intended use.
Effectiveness is achieved through clinical evaluation, compliance with performance standards or demonstration of substantial equivalence with an already marketed device.
The previous features have to be ensured for all the manufactured items of the medical device. This requires that a quality system be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and
practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys
and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory
authority in the United States, having jurisdiction over medical devices, drugs, biologics, and
combination products. The paramount objectives driving policy decisions by the FDA are safety and
effectiveness of healthcare products, which have to be assured through a quality system in place as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510(k) "clearance" (typically for Class II devices) or premarket "approval" (typically for drugs and Class III devices).
In the European context, safety, effectiveness and quality are ensured through the "Conformity
Assessment" that is defined as "the method by which a manufacturer demonstrates that its device
complies with the requirements of the European Medical Device Directive". The directive specifies
different procedures according to the class of the device ranging from the simple Declaration of
Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality
assurance (Annex V), Product quality assurance (Annex VI) and Full quality assurance (Annex II).
The Medical Device Directive specifies detailed procedures for certification. In general terms, these
procedures include tests and verifications that are to be contained in specific deliverables such as the
risk management file, the technical file and the quality system deliverables. The risk management file is
the first deliverable and conditions the subsequent design and manufacturing steps. The risk management
stage shall drive the product so that product risks are reduced to a level that is acceptable with respect to
the benefits expected for the patients from the use of the device. The technical file contains all the
documentation data and records supporting medical device certification. The FDA technical file has
similar content, although organized in a different structure. The quality system deliverables usually
include procedures that ensure quality throughout the whole product life cycle. The same standard (EN
ISO 13485) is usually applied for quality management systems in the US and worldwide.
Implants, such as artificial hip joints, are generally extensively regulated due to the invasive nature of such
devices.
In the European Union, there are certifying entities named "Notified Bodies", accredited by European
Member States. The Notified Bodies must ensure the effectiveness of the certification process for all
medical devices apart from the class I devices where a declaration of conformity produced by the
manufacturer is sufficient for marketing. Once a product has passed all the steps required by the
Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is
believed to be safe and effective when used as intended, and, therefore, it can be marketed within
the European Union area.
The different regulatory arrangements sometimes result in particular technologies being developed
first in either the U.S. or Europe, depending on the more favorable form of regulation. While
nations often strive for substantive harmony to facilitate cross-national distribution, philosophical
differences about the optimal extent of regulation can be a hindrance; more restrictive regulations
seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to
life-saving developments.
RoHS II
Directive 2011/65/EU, better known as RoHS 2, is a recast of legislation originally introduced in 2002.
The original EU legislation “Restrictions of Certain Hazardous Substances in Electrical and
Electronics Devices” (RoHS Directive 2002/95/EC) was replaced and superseded by 2011/65/EU
published in July 2011 and commonly known as RoHS 2. RoHS seeks to limit the dangerous
substances in circulation in electronics products, in particular toxins and heavy metals, which are
subsequently released into the environment when such devices are recycled.
The scope of RoHS 2 is widened to include products previously excluded, such as medical devices
and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk
assessments and test reports – or explain why they are lacking. For the first time, not only
manufacturers, but also importers and distributors share a responsibility to ensure that Electrical and
Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and
bears a CE mark.
IEC 60601
The international standard IEC 60601 for home healthcare electro-medical devices defines the
requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must
now be incorporated into the design and verification of a wide range of home use and point-of-care
medical devices, along with other applicable standards in the IEC 60601 3rd edition series.
The mandatory date for implementation of the EN European version of the standard is June 1, 2013.
The US FDA requires the use of the standard on June 30, 2013, while Health Canada recently
extended the required date from June 2012 to April 2013. The North American agencies will only
require these standards for new device submissions, while the EU will take the more severe
approach of requiring that all applicable devices placed on the market take the home
healthcare standard into account.
Training and certification
Education
Biomedical engineers require considerable knowledge of both engineering and biology, and typically
have a Master's (M.S., M.Tech, M.S.E., or M.Eng.) or a Doctoral (Ph.D.) degree in BME (Biomedical
Engineering) or another branch of engineering with considerable potential for BME overlap. As
interest in BME increases, many engineering colleges now have a Biomedical Engineering
Department or Program, with offerings ranging from the undergraduate (B.Tech, B.S., B.Eng or
B.S.E.) to doctoral levels. Biomedical engineering has only recently been emerging as its own
discipline rather than a cross-disciplinary hybrid specialization of other disciplines; and BME
programs at all levels are becoming more widespread, including the Bachelor of Science in
Biomedical Engineering which actually includes so much biological science content that many
students use it as a "pre-med" major in preparation for medical school. The number of biomedical
engineers is expected to rise as both a cause and effect of improvements in medical technology.[10]
In the U.S., an increasing number of undergraduate programs are also becoming recognized
by ABET as accredited bioengineering/biomedical engineering programs. Over 65 programs are
currently accredited by ABET.[11][12] Case Western Reserve created the first MD/PhD program in 1969 to
develop engineers with a physician’s perspective.
In Canada and Australia, accredited graduate programs in biomedical engineering are common, for
example at universities such as McMaster University; the first Canadian undergraduate BME
program, at Ryerson University, offers a four-year B.Eng program.[13][14][15][16] The Polytechnique in
Montreal also offers a bachelor's degree in biomedical engineering.
As with many degrees, the reputation and ranking of a program may factor into the desirability of a
degree holder for either employment or graduate admission. The reputation of many undergraduate
degrees is also linked to the institution's graduate or research programs, which have some tangible
factors for rating, such as research funding and volume, publications and citations. With BME
specifically, the ranking of a university's hospital and medical school can also be a significant factor
in the perceived prestige of its BME department/program.
Graduate education is a particularly important aspect in BME. While many engineering fields (such
as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level
job in their field, the majority of BME positions prefer or even require it.[17] Since most BME-related professions involve scientific research, such as in pharmaceutical and medical
device development, graduate education is almost a requirement (as undergraduate degrees
typically do not involve sufficient research training and experience). This can be either a Masters or
Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it
is hardly ever the majority (except in academia). In fact, the perceived need for some kind of
graduate credential is so strong that some undergraduate BME programs will actively discourage
students from majoring in BME without an expressed intention to also obtain a master's degree or
apply to medical school afterwards.
Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs
may emphasize certain aspects within the field. They may also feature extensive collaborative efforts
with programs in other fields (such as the University's Medical School or other engineering
divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically
require applicants to have an undergraduate degree in BME, or another engineering discipline (plus
certain life science coursework), or life science (plus certain engineering coursework).
Education in BME also varies greatly around the world. By virtue of its extensive biotechnology
sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed
a great deal in its development of BME education and training opportunities. Europe, which also has
a large biotechnology sector and an impressive education system, has encountered trouble in
creating uniform standards as the European community attempts to supplant some of the national
jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to
develop BME-related education and professional standards.[18] Other countries, such as Australia,
are recognizing and moving to correct deficiencies in their BME education.[19] Also, as high
technology endeavors are usually marks of developed nations, some areas of the world are prone to
slower development in education, including in BME.
Licensure/certification
Engineering licensure in the US is largely optional, and rarely specified by branch/discipline. As with
other learned professions, each state has certain (fairly similar) requirements for becoming licensed
as a registered Professional Engineer (PE), but in practice such a license is not required to practice
in the majority of situations (due to an exception known as the private industry exemption, which
effectively applies to the vast majority of American engineers). This is notably not the case in many
other countries, where a license is as legally necessary to practice engineering as it is for law or
medicine.
Biomedical engineering is regulated in some countries, such as Australia, but registration is typically
only recommended and not required.[20]
In the UK, mechanical engineers working in the areas of Medical Engineering, Bioengineering or
Biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical
Engineers. The Institution also runs the Engineering in Medicine and Health Division.[21] The Institute
of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in
Biomedical Engineering, and Chartered Engineer status can also be sought through IPEM.
The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations
for most U.S. jurisdictions—does now cover biology (although technically not BME). For the second
exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates
may select a particular engineering discipline's content to be tested on; there is currently not an
option for BME with this, meaning that any biomedical engineers seeking a license must prepare to
take this examination in another category (which does not affect the actual license, since most
jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering
Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific
version of this exam to facilitate biomedical engineers pursuing licensure.
Beyond governmental registration, certain private-sector professional/industrial organizations also
offer certifications with varying degrees of prominence. One such example is the Certified Clinical
Engineer (CCE) certification for Clinical engineers.
Biomanufacturing
Biomanufacturing is a type of manufacturing or biotechnology that utilizes biological systems to
produce commercially important biomaterials and biomolecules for use in medicines, food and
beverage processing, and industrial applications. Biomanufacturing products are recovered from
natural sources, such as blood, or from cultures of microbes, animal cells, or plant cells grown in
specialized equipment. The cells used during the production may have been naturally occurring or
derived using genetic engineering techniques.
Products
There are thousands of biomanufacturing products on the market today. Some examples of general
classes are listed below:
Medicine
 Amino acids
 Biopharmaceuticals
 Cytokines
 Fusion proteins
 Growth factors
 Monoclonal antibodies
 Vaccines
Food and beverage
 Amino acids
 Enzymes
 Protein supplements
Industrial applications that employ cells and/or enzymes
 Biocementation
 Bioremediation
 Detergents
 Plastics
Unit operations
A partial listing of unit operations utilized during biomanufacturing includes the following:
 Blood plasma fractionation
 Cell culture
 Cell separation, such as filtration and centrifugation
 Fermentation
 Homogenization
 Column chromatography
 Ultrafiltration and/or diafiltration
 Clarification, such as filtration
 Formulation
 Filling vials or syringes for injectable medicines
Equipment and facilities
Equipment and facility requirements are dictated by the product(s) being manufactured. Process
equipment is typically constructed of stainless steel or plastic. Stainless steel equipment can be
cleaned and reused. Some plastic equipment is disposed of after a single use.
Stainless Steel Bioreactors
Products manufactured for medical or food use must be produced in facilities designed and operated
according to Good Manufacturing Practice (GMP) regulations. Cleanrooms are often required to
control the levels of particulates and microorganisms. Sterilization and aseptic processing equipment
are required for production of injectable products.
Bioinformatics
Bioinformatics is an interdisciplinary field that develops methods and software tools for
understanding biological data. As an interdisciplinary field of science, bioinformatics
combines computer science, statistics, mathematics, and engineering to analyze and
interpret biological data.
Bioinformatics is both an umbrella term for the body of biological studies that use computer
programming as part of their methodology and a reference to specific analysis "pipelines" that
are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the
identification of candidate genes and single-nucleotide polymorphisms (SNPs). Often, such identification is made with the
aim of better understanding the genetic basis of disease, unique adaptations, desirable properties
(esp. in agricultural species), or differences between populations. In a less formal way,
bioinformatics also tries to understand the organisational principles within nucleic
acid and protein sequences.
Introduction
Bioinformatics has become an important part of many areas of biology. In experimental molecular
biology, bioinformatics techniques such as image and signal processing allow extraction of useful
results from large amounts of raw data. In the field of genetics and genomics, it aids in sequencing
and annotating genomes and their observed mutations. It plays a role in the text mining of biological
literature and the development of biological and gene ontologies to organize and query biological
data. It also plays a role in the analysis of gene and protein expression and regulation.
Bioinformatics tools aid in the comparison of genetic and genomic data and more generally in the
understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps
analyze and catalogue the biological pathways and networks that are an important part of systems
biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, and protein
structures as well as molecular interactions.
History
Historically, the term bioinformatics did not mean what it means today. Paulien Hogeweg and Ben
Hesper coined it in 1970 to refer to the study of information processes in biotic systems.[1][2][3] This
definition placed bioinformatics as a field parallel to biophysics (the study of physical processes in
biological systems) or biochemistry (the study of chemical processes in biological systems).[1]
Sequences
Sequences of genetic material are frequently used in bioinformatics and are easier to manage using computers
than manually.
Computers became essential in molecular biology when protein sequences became available
after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple
sequences manually turned out to be impractical. A pioneer in the field was Margaret Oakley
Dayhoff, who has been hailed by David Lipman, director of the National Center for Biotechnology
Information, as the "mother and father of bioinformatics."[4] Dayhoff compiled one of the first protein
sequence databases, initially published as books[5] and pioneered methods of sequence alignment
and molecular evolution.[6] Another early contributor to bioinformatics was Elvin A. Kabat, who
pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody
sequences released with Tai Te Wu between 1980 and 1991.[7]
Goals
To study how normal cellular activities are altered in different disease states, the biological data
must be combined to form a comprehensive picture of these activities. Therefore, the field of
bioinformatics has evolved such that the most pressing task now involves the analysis and
interpretation of various types of data. This includes nucleotide and amino acid sequences, protein
domains, and protein structures.[8] The actual process of analyzing and interpreting data is referred to
as computational biology. Important sub-disciplines within bioinformatics and computational biology
include:
 Development and implementation of computer programs that enable efficient access to, use and
management of, various types of information
 Development of new algorithms (mathematical formulas) and statistical measures that assess
relationships among members of large data sets. For example, there are methods to locate
a gene within a sequence, to predict protein structure and/or function, and to cluster protein
sequences into families of related sequences.
The primary goal of bioinformatics is to increase the understanding of biological processes. What
sets it apart from other approaches, however, is its focus on developing and applying
computationally intensive techniques to achieve this goal. Examples include: pattern
recognition, data mining, machine learning algorithms, and visualization. Major research efforts in
the field include sequence alignment, gene finding, genome assembly, drug design, drug
discovery, protein structure alignment, protein structure prediction, prediction of gene
expression and protein–protein interactions, genome-wide association studies, the modeling
of evolution and cell division/mitosis.
Bioinformatics now entails the creation and advancement of databases, algorithms, computational
and statistical techniques, and theory to solve formal and practical problems arising from the
management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research
technologies and developments in information technologies have combined to produce a
tremendous amount of information related to molecular biology. Bioinformatics is the name given to
these mathematical and computing approaches used to glean understanding of biological
processes.
Common activities in bioinformatics include mapping and analyzing DNA and protein sequences,
aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of
protein structures.
Relation to other fields
Bioinformatics is a science field that is similar to but distinct from biological
computation and computational biology. Biological computation uses bioengineering and biology to
build biological computers, whereas bioinformatics uses computation to better understand biology.
Bioinformatics and computational biology have similar aims and approaches, but they differ in scale:
bioinformatics organizes and analyzes basic biological data, whereas computational biology builds
theoretical models of biological systems, just as mathematical biology does with mathematical
models.
Analyzing biological data to produce meaningful information involves writing and running software
programs that use algorithms from graph theory, artificial intelligence, soft computing, data
mining, image processing, and computer simulation. The algorithms in turn depend on theoretical
foundations such as discrete mathematics, control theory, system theory, information theory,
and statistics.
Sequence analysis
The sequences of different genes or proteins may be aligned side-by-side to measure their similarity. This
alignment compares protein sequences containing WPP domains.
Since the Phage Φ-X174 was sequenced in 1977, the DNA sequences of thousands of organisms
have been decoded and stored in databases. This sequence information is analyzed to determine
genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive
sequences. A comparison of genes within a species or between different species can show
similarities between protein functions, or relations between species (the use of molecular
systematics to construct phylogenetic trees). With the growing amount of data, it long ago became
impractical to analyze DNA sequences manually. Today, computer programs such as BLAST are
used daily to search sequences from more than 260 000 organisms, containing over 190
billion nucleotides.[10] These programs can compensate for mutations (exchanged, deleted or inserted
bases) in the DNA sequence, to identify sequences that are related, but not identical. A variant of
this sequence alignment is used in the sequencing process itself. The so-called shotgun
sequencing technique (which was used, for example, by The Institute for Genomic Research to
sequence the first bacterial genome, Haemophilus influenzae)[11] does not produce entire
chromosomes. Instead it generates the sequences of many thousands of small DNA fragments
(ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of
these fragments overlap and, when aligned properly by a genome assembly program, can be used
to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task
of assembling the fragments can be quite complicated for larger genomes. For a genome as large as
the human genome, it may take many days of CPU time on large-memory, multiprocessor
computers to assemble the fragments, and the resulting assembly usually contains numerous gaps
that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes
sequenced today, and genome assembly algorithms are a critical area of bioinformatics research.
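As a rough illustration of the assembly step described above, the following Python sketch greedily merges overlapping reads. It is a toy under strong assumptions (error-free reads, unique overlaps, no repeats); the fragments and the minimum-overlap length are invented, and real assemblers rely on far more sophisticated graph-based algorithms.

# Toy illustration of greedy overlap-based assembly of shotgun reads.
# Real assemblers use far more sophisticated graph-based methods; the
# reads and the minimum-overlap cutoff here are made-up examples.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` matching a prefix of `b`."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads: list) -> str:
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j:
                    olen = overlap(reads[i], reads[j])
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:          # no overlaps left; just concatenate
            return "".join(reads)
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads[0]

fragments = ["ATGCGTAC", "CGTACGGA", "ACGGATTT"]   # overlapping toy reads
print(greedy_assemble(fragments))                  # -> ATGCGTACGGATTT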
Following the goals that the Human Genome Project left to achieve after its closure in 2003, a new
project developed by the National Human Genome Research Institute in the U.S. appeared. The so-called ENCODE project is a collaborative data collection of the functional elements of the human
genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays,
technologies able to generate automatically large amounts of data with lower research costs but with
the same quality and viability.
Another aspect of bioinformatics in sequence analysis is annotation. This involves
computational gene finding to search for protein-coding genes, RNA genes, and other functional
sequences within a genome. Not all of the nucleotides within a genome are part of genes. Within the
genomes of higher organisms, large parts of the DNA do not serve any obvious purpose.
See also: sequence analysis, sequence mining, sequence profiling tool and sequence motif
Genome annotation
In the context of genomics, annotation is the process of marking the genes and other biological
features in a DNA sequence. This process needs to be automated because most genomes are too
large to annotate by hand, not to mention the desire to annotate as many genomes as possible, as
the rate of sequencing has ceased to pose a bottleneck. Annotation is made possible by the fact that
genes have recognisable start and stop regions, although the exact sequence found in these regions
can vary between genes.
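The recognisable start and stop signals mentioned above can be illustrated with a minimal open-reading-frame scan. This is only a sketch: it scans the forward strand of an invented sequence in three frames, whereas real gene finders additionally use codon statistics, splicing models, homology and RNA evidence.

# Minimal open-reading-frame (ORF) scan to illustrate computational gene
# finding. The sequence and length cutoff are illustrative only.

START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_codons: int = 10):
    """Yield (start, end) of ORFs on the forward strand, all three frames."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == START:
                start = i
            elif start is not None and codon in STOPS:
                if (i + 3 - start) // 3 >= min_codons:
                    yield (start, i + 3)
                start = None

dna = "CCATGAAATTTGGGCCCAAATTTGGGCCCAAATAGCC"
print(list(find_orfs(dna, min_codons=5)))   # -> [(2, 35)]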
The first genome annotation software system was designed in 1995 by Owen White, who was part of
the team at The Institute for Genomic Research that sequenced and analyzed the first genome of a
free-living organism to be decoded, the bacterium Haemophilus influenzae.[11] White built a software
system to find the genes (fragments of genomic sequence that encode proteins), the transfer RNAs,
and to make initial assignments of function to those genes. Most current genome annotation
systems work similarly, but the programs available for analysis of genomic DNA, such as
the GeneMark program trained and used to find protein-coding genes in Haemophilus influenzae,
are constantly changing and improving.
Computational evolutionary biology
Evolutionary biology is the study of the origin and descent of species, as well as their change over
time. Informatics has assisted evolutionary biologists by enabling researchers to:
 trace the evolution of a large number of organisms by measuring changes in their DNA, rather
than through physical taxonomy or physiological observations alone,
 more recently, compare entire genomes, which permits the study of more complex evolutionary
events, such as gene duplication, horizontal gene transfer, and the prediction of factors
important in bacterial speciation,
 build complex computational models of populations to predict the outcome of the system over
time,[12]
 track and share information on an increasingly large number of species and organisms
Future work endeavours to reconstruct the now more complex tree of life.
The area of research within computer science that uses genetic algorithms is sometimes confused
with computational evolutionary biology, but the two areas are not necessarily related.
Comparative genomics
The core of comparative genome analysis is the establishment of the correspondence
between genes (orthology analysis) or other genomic features in different organisms. It is these
intergenomic maps that make it possible to trace the evolutionary processes responsible for the
divergence of two genomes. A multitude of evolutionary events acting at various organizational
levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a
higher level, large chromosomal segments undergo duplication, lateral transfer, inversion,
transposition, deletion and insertion.[13] Ultimately, whole genomes are involved in processes of
hybridization, polyploidization and endosymbiosis, often leading to rapid speciation. The complexity
of genome evolution poses many exciting challenges to developers of mathematical models and
algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques,
ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on
parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based
on probabilistic models.
Many of these studies are based on homology detection and the computation of protein families.[14]
Pan genomics
Pan genomics is a concept introduced in 2005 by Tettelin and Medini which eventually took root in
bioinformatics. The pan genome is the complete gene repertoire of a particular taxonomic group:
although initially applied to closely related strains of a species, it can be applied to a larger context
such as genus or phylum. It is divided into two parts: the core genome, the set of genes common to all the
genomes under study (these are often housekeeping genes vital for survival), and the
dispensable or flexible genome, the set of genes present in only one or some of the genomes under
study.
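Once genes have been grouped into orthologous families, the core/dispensable split reduces to set operations, as in the following sketch; the per-strain gene lists are invented placeholders.

# Sketch of the core/dispensable split of a pan genome using set
# operations. Real analyses first cluster genes into orthologous families.

genomes = {
    "strain_A": {"geneA", "geneB", "geneC", "geneD"},
    "strain_B": {"geneA", "geneB", "geneC", "geneE"},
    "strain_C": {"geneA", "geneB", "geneF"},
}

pan_genome = set().union(*genomes.values())          # every gene seen anywhere
core_genome = set.intersection(*genomes.values())    # genes shared by all strains
dispensable = pan_genome - core_genome                # present in only some strains

print("pan:", sorted(pan_genome))
print("core:", sorted(core_genome))
print("dispensable:", sorted(dispensable))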
Genetics of disease
With the advent of next-generation sequencing we are obtaining enough sequence data to map the
genes of complex diseases such as diabetes,[15] infertility,[16] breast cancer[17] or Alzheimer's
Disease.[18] Genome-wide association studies are a useful approach to pinpoint the mutations
responsible for such complex diseases.[19] Through these studies, thousands of DNA variants have
been identified that are associated with similar diseases and traits.[20] Furthermore, the possibility of
using genes in prognosis, diagnosis or treatment is one of the most essential applications.
Many studies discuss both the promising ways to choose the genes to be used and the
problems and pitfalls of using genes to predict disease presence or prognosis.[21]
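A genome-wide association study essentially repeats a simple association test, such as the 2x2 chi-squared comparison of allele counts in cases and controls sketched below, across millions of variants and then corrects for multiple testing. The counts are fabricated for illustration.

# Toy single-variant association test of the kind repeated genome-wide in
# a GWAS: a 2x2 chi-squared test of allele counts in cases vs controls.
# Real studies apply a stringent multiple-testing threshold (commonly 5e-8).

import math

def chi2_2x2(a, b, c, d):
    """Chi-squared statistic and p-value (1 d.o.f.) for a 2x2 table."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))   # survival function of chi2 with 1 dof
    return chi2, p

# rows: cases, controls; columns: risk allele, other allele (invented counts)
chi2, p = chi2_2x2(240, 760, 180, 820)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")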
Analysis of mutations in cancer
In cancer, the genomes of affected cells are rearranged in complex or even unpredictable ways.
Massive sequencing efforts are used to identify previously unknown point mutations in a variety
of genes in cancer. Bioinformaticians continue to produce specialized automated systems to
manage the sheer volume of sequence data produced, and they create new algorithms and software
to compare the sequencing results to the growing collection of human genome sequences
and germline polymorphisms. New physical detection technologies are employed, such
as oligonucleotide microarrays to identify chromosomal gains and losses (called comparative
genomic hybridization), and single-nucleotide polymorphism arrays to detect known point mutations.
These detection methods simultaneously measure several hundred thousand sites throughout the
genome, and when used in high-throughput to measure thousands of samples,
generate terabytes of data per experiment. Again the massive amounts and new types of data
generate new opportunities for bioinformaticians. The data is often found to contain considerable
variability, or noise, and thus Hidden Markov model and change-point analysis methods are being
developed to infer real copy number changes.
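The following sketch gives a deliberately simplified flavour of change-point detection on copy-number data: it flags positions where the means of the preceding and following windows differ sharply. Production tools use hidden Markov models or circular binary segmentation; the log2 ratios here are synthetic.

# Very simplified change-point scan for copy-number data. The data are
# synthetic log2 ratios with one gained segment.

import numpy as np

rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.0, 0.2, 100),    # normal copy number
                         rng.normal(0.8, 0.2, 60)])    # gained segment

def window_breakpoints(x, w=20, threshold=0.5):
    """Indices where left- and right-window means differ by > threshold."""
    hits = []
    for i in range(w, len(x) - w):
        if abs(x[i - w:i].mean() - x[i:i + w].mean()) > threshold:
            hits.append(i)
    return hits

candidates = window_breakpoints(signal)
print("candidate breakpoints near index:", candidates[:3], "...")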
However, with the breakthroughs that next-generation sequencing technology is bringing to the
field of bioinformatics, cancer genomics may change drastically. These new methods and software
allow bioinformaticians to sequence many cancer genomes quickly and affordably. This could
mean a more flexible process for classifying types of cancer by analysis of cancer-driven mutations in
the genome. Furthermore, tracking individual patients during the progression of the disease may
be possible in the future by sequencing cancer samples.[22]
Another type of data that requires novel informatics development is the analysis of lesions found to
be recurrent among many tumors.
Gene and protein expression
Analysis of gene expression
The expression of many genes can be determined by measuring mRNA levels with multiple
techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis
of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of
multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to
bias in the biological measurement, and a major research area in computational biology involves
developing statistical tools to separate signal from noise in high-throughput gene expression studies.
Such studies are often used to determine the genes implicated in a disorder: one might compare
microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the
transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
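For a single transcript, such a comparison can be as simple as a two-sample t-test on expression values from cancerous and non-cancerous samples, as sketched below with invented values; real analyses use moderated statistics and control for multiple testing across all transcripts.

# Minimal differential-expression comparison for one transcript.
# Values are invented log2 expression measurements.

from scipy.stats import ttest_ind

tumour = [8.1, 7.9, 8.4, 8.6, 8.2]   # cancerous cells
normal = [6.9, 7.1, 7.0, 6.8, 7.2]   # non-cancerous cells

t_stat, p_value = ttest_ind(tumour, normal)
print(f"t = {t_stat:.2f}, p = {p_value:.1e}")   # large positive t => up-regulated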
Analysis of protein expression
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of
the proteins present in a biological sample. Bioinformatics is very much involved in making sense of
protein microarray and HT MS data; the former approach faces similar problems as with microarrays
targeted at mRNA, the latter involves the problem of matching large amounts of mass data against
predicted masses from protein sequence databases, and the complicated statistical analysis of
samples where multiple, but incomplete peptides from each protein are detected.
Analysis of regulation
Regulation is the complex orchestration of events starting with an extracellular signal such as
a hormone and leading to an increase or decrease in the activity of one or more proteins.
Bioinformatics techniques have been applied to explore various steps in this process. For example,
promoter analysis involves the identification and study of sequence motifs in the DNA surrounding
the coding region of a gene. These motifs influence the extent to which that region is transcribed into
mRNA. Expression data can be used to infer gene regulation: one might compare microarray data
from a wide variety of states of an organism to form hypotheses about the genes involved in each
state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress
conditions (heat shock, starvation, etc.). One can then apply clustering algorithms to that expression
data to determine which genes are co-expressed. For example, the upstream regions (promoters) of
co-expressed genes can be searched for over-represented regulatory elements. Examples of
clustering algorithms applied in gene clustering are k-means clustering, self-organizing
maps (SOMs), hierarchical clustering, and consensus clustering methods such as Bi-CoPaM.
The latter, Bi-CoPaM, has been proposed to address various issues specific to gene
discovery problems such as consistent co-expression of genes over multiple microarray datasets.
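A minimal sketch of expression-based gene clustering with k-means is shown below; the expression matrix is random stand-in data with two built-in co-expression groups, whereas real inputs would be normalised microarray or RNA-Seq measurements.

# Sketch of clustering genes by expression profile with k-means.
# The expression matrix is random stand-in data.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 30 genes x 6 conditions: two artificial co-expression groups
group1 = rng.normal(0, 0.3, (15, 6)) + np.array([1, 1, 1, 0, 0, 0])
group2 = rng.normal(0, 0.3, (15, 6)) + np.array([0, 0, 0, 1, 1, 1])
expression = np.vstack([group1, group2])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expression)
print(labels)   # genes with the same label are treated as co-expressed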
Structural bioinformatics
3-dimensional protein structures such as this one are common subjects in bioinformatic analyses.
Protein structure prediction is another important application of bioinformatics. The amino
acid sequence of a protein, the so-called primary structure, can be easily determined from the
sequence on the gene that codes for it. In the vast majority of cases, this primary structure uniquely
determines a structure in its native environment. (Of course, there are exceptions, such as
the bovine spongiform encephalopathy – a.k.a. Mad Cow Disease – prion.) Knowledge of this
structure is vital in understanding the function of the protein. Structural information is usually
classified as one of secondary, tertiary and quaternary structure. A viable general solution to such
predictions remains an open problem. Most efforts have so far been directed towards heuristics that
work most of the time.
One of the key ideas in bioinformatics is the notion of homology. In the genomic branch of
bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose
function is known, is homologous to the sequence of gene B, whose function is unknown, one could
infer that B may share A's function. In the structural branch of bioinformatics, homology is used to
determine which parts of a protein are important in structure formation and interaction with other
proteins. In a technique called homology modeling, this information is used to predict the structure of
a protein once the structure of a homologous protein is known. This currently remains the only way
to predict protein structures reliably.
One example of this is the similar protein homology between hemoglobin in humans and the
hemoglobin in legumes (leghemoglobin). Both serve the same purpose of transporting oxygen in the
organism. Though both of these proteins have completely different amino acid sequences, their
protein structures are virtually identical, which reflects their near identical purposes.[25]
Other techniques for predicting protein structure include protein threading and de novo (from
scratch) physics-based modeling.
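Sequence-level homology is often summarised by percent identity over an alignment, as in the toy function below (the aligned sequences are invented). In practice, homology inference relies on alignment scores and their statistical significance rather than raw identity, and, as the hemoglobin/leghemoglobin example shows, structure can be conserved even when sequence identity is low.

# Toy measure of sequence homology: percent identity between two already-
# aligned sequences (gaps as '-'). The sequences here are invented.

def percent_identity(aln_a: str, aln_b: str) -> float:
    """Identical aligned positions / aligned (non-double-gap) positions."""
    pairs = [(x, y) for x, y in zip(aln_a, aln_b) if not (x == "-" and y == "-")]
    matches = sum(1 for x, y in pairs if x == y and x != "-")
    return 100.0 * matches / len(pairs)

gene_a = "MKT-AYIAKQR"   # function known
gene_b = "MKTWAYLAKQR"   # function unknown
print(f"{percent_identity(gene_a, gene_b):.1f}% identical")
# A high identity would support transferring gene A's annotation to gene B.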
Network and systems biology
Network analysis seeks to understand the relationships within biological networks such
as metabolic or protein-protein interaction networks. Although biological networks can be
constructed from a single type of molecule or entity (such as genes), network biology often attempts
to integrate many different data types, such as proteins, small molecules, gene expression data, and
others, which are all connected physically, functionally, or both.
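A minimal network analysis of this kind can be sketched with the networkx library: build a graph from interaction pairs and rank proteins by degree to suggest hubs. The edges below are invented; real interaction sets come from databases such as BioGRID or STRING.

# Minimal protein-protein interaction network analysis with networkx.
# Edges are invented placeholders.

import networkx as nx

interactions = [("P1", "P2"), ("P1", "P3"), ("P1", "P4"),
                ("P2", "P3"), ("P4", "P5")]

g = nx.Graph()
g.add_edges_from(interactions)

hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
print(hubs)   # e.g. [('P1', 3), ...] -- P1 is the best-connected candidate hub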
Systems biology involves the use of computer simulations of cellular subsystems (such as
the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways
and gene regulatory networks) to both analyze and visualize the complex connections of these
cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes
via the computer simulation of simple (artificial) life forms.
Molecular interaction networks
Interactions between proteins are frequently visualized and analyzed using networks. This network is made up
of protein-protein interactions from Treponema pallidum, the causative agent of syphilis and other diseases.
Main articles: Protein–protein interaction prediction and interactome
Tens of thousands of three-dimensional protein structures have been determined by X-ray
crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central
question in structural bioinformatics is whether it is practical to predict possible protein–protein
interactions only based on these 3D shapes, without performing protein–protein
interaction experiments. A variety of methods have been developed to tackle the protein–protein
docking problem, though it seems that there is still much work to be done in this field.
Other interactions encountered in the field include protein–ligand (including drug) and protein–
peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the
fundamental principle behind computational algorithms, termed docking algorithms, for
studying molecular interactions.
Others
Literature analysis
The growth in the number of published literature makes it virtually impossible to read every paper,
resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and
statistical linguistics to mine this growing library of text resources. For example:
 Abbreviation recognition – identify the long-form and abbreviation of biological terms
 Named entity recognition – recognizing biological terms such as gene names
 Protein–protein interaction – identify which proteins interact with which proteins from text
The area of research draws from statistics and computational linguistics.
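A crude flavour of named entity recognition can be given with a regular expression that picks out gene-symbol-like tokens, as below; the pattern and sentence are illustrative only, and real biomedical text-mining systems use curated dictionaries and machine-learned models.

# Crude illustration of named-entity recognition for gene symbols using a
# regular expression. It will over- and under-match on real text.

import re

sentence = "Mutations in BRCA1 and TP53 are frequent in breast cancer."
gene_like = re.compile(r"\b[A-Z][A-Z0-9]{2,}\b")   # e.g. BRCA1, TP53

print(gene_like.findall(sentence))   # ['BRCA1', 'TP53']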
High-throughput image analysis
Computational technologies are used to accelerate or fully automate the processing, quantification
and analysis of large amounts of high-information-content biomedical imagery. Modern image
analysis systems augment an observer's ability to make measurements from a large or complex set
of images, by improving accuracy, objectivity, or speed. A fully developed analysis system may
completely replace the observer. Although these systems are not unique to biomedical imagery,
biomedical imaging is becoming more important for both diagnostics and research. Some examples
are:
 high-throughput and high-fidelity quantification and sub-cellular localization (high-content
screening, cytohistopathology, Bioimage informatics)
 morphometrics
 clinical image analysis and visualization
 determining the real-time air-flow patterns in breathing lungs of living animals
 quantifying occlusion size in real-time imagery from the development of and recovery during
arterial injury
 making behavioral observations from extended video recordings of laboratory animals
 infrared measurements for metabolic activity determination
 inferring clone overlaps in DNA mapping, e.g. the Sulston score
High-throughput single cell data analysis
Computational techniques are used to analyse high-throughput, low-measurement single cell data,
such as that obtained from flow cytometry. These methods typically involve finding populations of
cells that are relevant to a particular disease state or experimental condition.
Biodiversity informatics
Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic
databases, or microbiome data. Examples of such analyses include phylogenetics, niche
modelling, species richness mapping, or species identification tools.
Databases
Main articles: List of biological databases and Biological database
Databases are essential for bioinformatics research and applications. There is a huge number of
available databases covering almost everything from DNA and protein sequences, molecular
structures, to phenotypes and biodiversity. Databases generally fall into one of three types. Some
contain data resulting directly from empirical methods such as gene knockouts. Others consist of
predicted data, and most contain data from both sources. There are meta-databases that
incorporate data compiled from multiple other databases. Some others are specialized, such as
those specific to an organism. These databases vary in their format, means of access and whether
they are public or not. Some of the most commonly used databases are listed below. For a more
comprehensive list, please check the link at the beginning of the subsection.
 Used in motif finding: GenomeNet MOTIF Search
 Used in gene ontology analysis: ToppGene, FuncAssociate, Enrichr, GATHER
 Used in gene finding: Hidden Markov Model-based tools
 Used in finding protein structures/families: PFAM
 Used for next-generation sequencing: the FASTQ format (a data format rather than a database)
 Used in gene expression analysis: GEO, ArrayExpress
 Used in network analysis: interaction analysis databases (BioGRID, MINT, HPRD, Curated
Human Signaling Network), functional networks (STRING, KEGG)
 Used in design of synthetic genetic circuits: GenoCAD
Note that this is a quick sampling, and most computational data is generally supported
by wet-lab data as well.
Software and tools
Software tools for bioinformatics range from simple command-line tools, to more complex graphical
programs and standalone web-services available from various bioinformatics companies or public
institutions.
Open-source bioinformatics software
Many free and open-source software tools have existed and continued to grow since the
1980s.[26] The combination of a continued need for new algorithms for the analysis of emerging types
of biological readouts, the potential for innovative in silico experiments, and freely available open
code bases have helped to create opportunities for all research groups to contribute to both
bioinformatics and the range of open-source software available, regardless of their funding
arrangements. The open-source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared
object models for assisting with the challenge of bioinformation integration.
The range of open-source software packages includes titles such
as Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET
Bio, Apache Taverna, UGENE and GenoCAD. To maintain this tradition and create further
opportunities, the non-profit Open Bioinformatics Foundation[26] has supported the
annual Bioinformatics Open Source Conference (BOSC) since 2000.
Web services in bioinformatics
SOAP- and REST-based interfaces have been developed for a wide variety of bioinformatics
applications allowing an application running on one computer in one part of the world to use
algorithms, data and computing resources on servers in other parts of the world. The main
advantages derive from the fact that end users do not have to deal with software and database
maintenance overheads.
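A typical interaction with such a service is a simple HTTP request, as in the hedged Python sketch below using the requests library; the URL and parameters are placeholders rather than a real endpoint, and the provider's documentation defines the actual routes and fields.

# Hedged sketch of calling a REST-style bioinformatics web service.
# The URL and parameters below are hypothetical placeholders.

import requests

BASE_URL = "https://example.org/api/sequence-search"   # hypothetical endpoint

response = requests.get(BASE_URL,
                        params={"query": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"},
                        timeout=30)
response.raise_for_status()
results = response.json()          # most services return JSON or XML
print(results)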
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search
Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis).[29] The
availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data
format under a single, standalone or web-based interface, to integrative, distributed and
extensible bioinformatics workflow management systems.
Bioinformatics workflow management systems
A Bioinformatics workflow management system is a specialized form of a workflow management
system designed specifically to compose and execute a series of computational or data manipulation
steps, or a workflow, in a Bioinformatics application. Such systems are designed to
 provide an easy-to-use environment for individual application scientists themselves to create
their own workflows,
 provide interactive tools for the scientists enabling them to execute their workflows and view
their results in real-time,
 simplify the process of sharing and reusing workflows between the scientists, and
 enable scientists to track the provenance of the workflow execution results and the workflow
creation steps.
Some of the platforms offering this service are Galaxy, Kepler, Taverna, UGENE and Anduril.
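The sketch below gives a minimal picture of what such systems automate: a declared chain of steps whose outputs feed the next step, with each executed step recorded for simple provenance. The step names and toy data are invented; real platforms add parallel execution, graphical interfaces, sharing and much richer provenance tracking.

# Minimal picture of a workflow: chained steps with simple provenance.
# Step names and data are invented placeholders.

provenance = []

def step(name):
    """Decorator that logs each executed step for simple provenance."""
    def wrap(fn):
        def run(*args, **kwargs):
            provenance.append(name)
            return fn(*args, **kwargs)
        return run
    return wrap

@step("trim_reads")
def trim(reads): return [r[:50] for r in reads]

@step("align_reads")
def align(reads): return {r: "chr1:%d" % i for i, r in enumerate(reads)}

@step("call_variants")
def call_variants(hits): return ["chr1:0 A>G"] if hits else []

variants = call_variants(align(trim(["ACGT" * 30, "TTGA" * 30])))
print(variants, provenance)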
Biorobotics
Biorobotics is a term that loosely covers the fields of cybernetics, bionics and even genetic
engineering as a collective study.
Biorobotics is often used to refer to a real subfield of robotics: studying how to make robots that
emulate or simulate living biological organisms mechanically or even chemically. The term is also
used in a reverse definition: making biological organisms as manipulatable and functional as robots,
or making biological organisms as components of robots.
In the latter sense, biorobotics can be referred to as a theoretical discipline of comprehensive
genetic engineering in which organisms are created and designed by artificial means. The creation
of life from non-living matter for example, would be biorobotics. The field is in its infancy and is
sometimes known as synthetic biology or bionanotechnology.
In fiction, biorobotics makes many appearances, like the biots of the novel Rendezvous with
Rama or the replicants of the film Blade Runner.
Biorobotics in fiction
Biorobotics has made many appearances in works of fiction, often in the form of synthetic biological
organisms.
The robots featured in Rossum's Universal Robots (the play that originally coined the term robot) are
presented as synthetic biological entities closer to biological organisms than the mechanical objects
that the term robot came to refer to. The replicants in the film Blade Runner are biological in nature:
they are organisms of living tissue and cells created artificially. In the novel The Windup Girl by
Paolo Bacigalupi, "windups" or "New People" are genetically engineered organisms.
The word biot, a portmanteau of "biological robot", was originally coined by Arthur C. Clarke in his
1972 novel Rendezvous with Rama. Biots are depicted as artificial biological organisms created to
perform specific tasks on the space vessel Rama. This affects their physical attributes and cognitive
abilities.
Humanoid Cylons in the 2004 television series Battlestar Galactica are another example of biological
robots in fiction. Almost indistinguishable from humans, they are artificial creations made by the
mechanical Cylons.
The authors of the Star Wars book series The New Jedi Order reprised the term biot. The Yuuzhan
Vong use the term to designate their biotechnology.
The term bioroid, or biological android, has also been used to designate artificial biological organisms,
as in the manga Appleseed. In 1985 the animated Robotech television series popularized the term
when it reused it from the 1984 Japanese series The Super Dimension Cavalry Southern
Cross. The term was also popularized in the 1988 computer game Snatcher, which features bioroids
that kill, or "snatch", humans whose place they then take in society.
The term reploid, or replicate android, is used in the Capcom video game series Mega Man X. Much
like Blade Runner's replicants, these beings are more human in thinking and emotional awareness
than previous generations, despite some being made into animal shapes to deliberately distinguish
them from humans. A further future form of these robots, seen in Mega Man Zero, is
so close to humans that they can actually eat, sleep, and reproduce. The only thing setting
them apart is their semi-inorganic skeletal systems. In a yet more distant future version and
evolution of these bio-machines, in the series Mega Man ZX, humans and robots have merged
so closely that they are no longer separate races but in fact two parts of a whole new race.
The final future of the game series is Mega Man Legends, where humans and robots are one being
to the point that they can get together and produce offspring without effort.
In Warhammer 40,000, the Imperium uses servitors: vat-grown humans with cybernetic parts replacing most
parts of the body. They lack emotions or intelligence and are only capable of simple tasks. More
advanced imperial robots are also used by the Imperium; instead of using humans, they are mostly
machine, with a biological brain and other biological parts. Unlike servitors, they are capable of
having programmed instincts for combat, but they still lack any true intelligence.
Chemical engineering
Chemical engineers design, construct and operate process plants (distillation columns pictured)
Chemical engineering is a branch of engineering that applies physical sciences
(physics and chemistry) and life sciences (microbiology and biochemistry) together with applied
mathematics and economics to produce, transform, transport, and properly use chemicals, materials
and energy. Essentially, chemical engineers design large-scale processes that convert chemicals,
raw materials, living cells, microorganisms and energy into useful forms and products.
Etymology
George E. Davis
A 1996 British Journal for the History of Science article cites James F. Donnelly for mentioning an
1839 reference to chemical engineering in relation to the production of sulfuric acid.[1] In the same
paper, however, George E. Davis, an English consultant, was credited with having coined the
term.[2] The History of Science in the United States: An Encyclopedia puts this at around
1890.[3] "Chemical engineering", describing the use of mechanical equipment in the chemical
industry, became common vocabulary in England after 1850.[4] By 1910, the profession, "chemical
engineer," was already in common use in Britain and the United States.[5]
History
Main article: History of chemical engineering
Chemical engineering emerged upon the development of unit operations, a fundamental concept of
the discipline of chemical engineering. Most authors agree that Davis invented unit operations, if he did not
substantially develop the concept.[6] He gave a series of lectures on unit operations at the Manchester
Technical School (later part of the University of Manchester) in 1887, considered to be among the
earliest lectures on chemical engineering.[7] Three years before Davis' lectures, Henry Edward
Armstrong taught a degree course in chemical engineering at the City and Guilds of London
Institute. Armstrong's course "failed simply because its graduates ... were not especially attractive to
employers." Employers of the time would have rather hired chemists and mechanical
engineers.[3] Courses in chemical engineering offered by Massachusetts Institute of
Technology (MIT) in the United States, Owens College in Manchester, England, and University
College London suffered under similar circumstances.
Students inside an industrial chemistry laboratory at MIT
Starting from 1888, Lewis M. Norton taught at MIT the first chemical engineering course in the
United States. Norton's course was contemporaneous with and essentially similar to Armstrong's
course. Both courses, however, simply merged chemistry and engineering subjects. "Its practitioners
had difficulty convincing engineers that they were engineers and chemists that they were not simply
chemists."[3] Unit operations was introduced into the course by William Hultz Walker in 1905.[10] By
the early 1920s, unit operations became an important aspect of chemical engineering at MIT and
other US universities, as well as at Imperial College London.[11] The American Institute of Chemical
Engineers (AIChE), established in 1908, played a key role in making chemical engineering
considered an independent science, and unit operations central to chemical engineering. For
instance, it defined chemical engineering to be a "science of itself, the basis of which is ... unit
operations" in a 1922 report; and with which principle, it had published a list of academic institutions
which offered "satisfactory" chemical engineering courses.[12] Meanwhile, promoting chemical
engineering as a distinct science in Britain lead to the establishment of the Institution of Chemical
Engineers (IChemE) in 1922.[13] IChemE likewise helped make unit operations considered essential
to the discipline.[14]
New concepts and innovations
By the 1940s, it became clear that unit operations alone were insufficient for developing chemical
reactors. While the predominance of unit operations in chemical engineering courses in Britain and
the United States continued until the 1960s, transport phenomena started to receive greater
focus.[15] Along with other novel concepts, such as process systems engineering (PSE), a "second
paradigm" was defined.[16][17] Transport phenomena gave an analytical approach to chemical
engineering[18] while PSE focused on its synthetic elements, such as control system and process
design.[19] Developments in chemical engineering before and after World War II were mainly incited
by the petrochemical industry,[20] however, advances in other fields were made as well.
Advancements in biochemical engineering in the 1940s, for example, found application in
the pharmaceutical industry, and allowed for the mass production of various antibiotics,
including penicillin and streptomycin.[21] Meanwhile, progress in polymer science in the 1950s paved
the way for the "age of plastics".[22]
Safety and hazard developments
Concerns regarding the safety and environmental impact of large-scale chemical manufacturing
facilities were also raised during this period. Silent Spring, published in 1962, alerted its readers to
the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the
United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby
villages. The 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These
incidents, along with others, affected the reputation of the trade as industrial
safety and environmental protection were given more focus.[23] In response, the IChemE required
safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and
monitoring agencies were instituted in various countries, such as France, Germany, and the United
States.[24]
Recent progress
Advancements in computer science found applications in designing and managing plants, simplifying
calculations and drawings that previously had to be done manually. The completion of the Human
Genome Project is also seen as a major development, not only advancing chemical engineering
but genetic engineering and genomics as well.[25] Chemical engineering principles were used to
produce DNA sequences in large quantities.[26]
Concepts
Chemical engineering involves the application of several principles. Key concepts are presented
below.
Chemical reaction engineering
Chemical engineering involves managing plant processes and conditions to ensure optimal plant
operation. Chemical reaction engineers construct models for reactor analysis and design using
laboratory data and physical parameters, such as chemical thermodynamics, to solve problems and
predict reactor performance.
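As a worked example of such a design calculation, the sketch below computes the conversion of a first-order reaction A -> products in an ideal CSTR and in a plug-flow reactor of equal volume, using the standard design equations; the rate constant, volume and flow rate are assumed values for illustration.

# Basic reactor design calculation for a first-order reaction A -> products.
# Rate constant, volume and flow rate are assumed illustrative values.

import math

k = 0.25          # 1/min, first-order rate constant (assumed)
V = 200.0         # L, reactor volume (assumed)
Q = 50.0          # L/min, volumetric flow rate (assumed)
tau = V / Q       # residence time, min

X_cstr = k * tau / (1 + k * tau)      # CSTR:  X = k*tau / (1 + k*tau)
X_pfr = 1 - math.exp(-k * tau)        # PFR:   X = 1 - exp(-k*tau)

print(f"tau = {tau:.1f} min, CSTR conversion = {X_cstr:.2f}, PFR conversion = {X_pfr:.2f}")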
Plant design and construction
Chemical engineering design concerns the creation of plans, specification, and economic analyses
for pilot plants, new plants or plant modifications. Design engineers often work in a consulting role,
designing plants to meet clients' needs. Design is limited by a number of factors, including funding,
government regulations and safety standards. These constraints dictate a plant's choice of process,
materials and equipment.
Plant construction is coordinated by project engineers and project managers depending on the size
of the investment. A chemical engineer may do the job of project engineer full-time or part of the
time, which requires additional training and job skills, or act as a consultant to the project group. In
the USA, baccalaureate chemical engineering programs accredited by ABET do not usually stress
project engineering education, which can be obtained through specialized training, electives, or
graduate programs. Project engineering jobs are some of the largest
employers for chemical engineers.[29]
Process design and analysis
A unit operation is a physical step in an individual chemical engineering process. Unit operations
(such as crystallization, filtration, drying and evaporation) are used to prepare reactants, purify
and separate products, recycle unspent reactants, and control energy transfer in
reactors.[30] On the other hand, a unit process is the chemical equivalent of a unit operation. Along
with unit operations, unit processes constitute a process operation. Unit processes (such
as nitration and oxidation) involve the conversion of material by biochemical, thermochemical and
other means. Chemical engineers responsible for these are called process engineers.[31]
Process design requires the definition of equipment types and sizes, how they are connected, and
the materials of construction. Details are often printed on a Process Flow Diagram, which is used to
control the capacity and reliability of a new or modified chemical factory.
Education for chemical engineers in the first college degree (three or four years of study) stresses the
principles and practices of process design. The same skills are used in existing chemical plants to
evaluate efficiency and make recommendations for improvements.
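As a small illustration of the material-balance bookkeeping that underlies such design work, the Python sketch below tracks a two-component feed through a single hypothetical separation unit; the stream values and split fractions are invented for the example and stand in for data that would normally come from thermodynamic models or pilot-plant tests.

```python
# Minimal steady-state material balance around a single separation unit,
# the kind of calculation that underpins equipment sizing on a process
# flow diagram.  All stream values and split fractions are hypothetical.

feed = {"ethanol": 120.0, "water": 880.0}   # kmol/h entering the unit

# Assumed recovery of each component to the overhead (distillate) stream.
split_to_overhead = {"ethanol": 0.92, "water": 0.08}

overhead = {c: feed[c] * split_to_overhead[c] for c in feed}
bottoms = {c: feed[c] - overhead[c] for c in feed}

for name, stream in (("feed", feed), ("overhead", overhead), ("bottoms", bottoms)):
    total = sum(stream.values())
    ethanol_frac = stream["ethanol"] / total
    print(f"{name:9s} total = {total:7.1f} kmol/h, ethanol mole fraction = {ethanol_frac:.3f}")

# Overall balance check: everything that enters must leave.
assert all(abs(feed[c] - overhead[c] - bottoms[c]) < 1e-9 for c in feed)
```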
Transport phenomena
Transport phenomena occur frequently in industrial problems. These include fluid dynamics, heat
transfer and mass transfer, which mainly concern momentum transfer, energy transfer and transport
of chemical species, respectively. The basic equations describing the three phenomena at
the macroscopic, microscopic and molecular levels are very similar. Thus, understanding transport
phenomena requires a thorough understanding of mathematics.
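As a brief illustration of that similarity (standard textbook relations, not equations drawn from this text), the one-dimensional molecular flux laws for momentum, heat and mass transfer all take the form of a flux proportional to the negative gradient of a driving quantity:

```latex
% Newton's law of viscosity, Fourier's law of heat conduction,
% and Fick's first law of diffusion, respectively.
\tau_{yx} = -\mu \frac{d v_x}{d y}, \qquad
q_y = -k \frac{d T}{d y}, \qquad
J_{A,y} = -D_{AB} \frac{d c_A}{d y}
```

Here μ is the viscosity, k the thermal conductivity and D_AB the binary diffusion coefficient; this shared structure is what allows the three transfer processes to be treated within one mathematical framework.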
Applications and practice
Chemical engineers use computers to control automated systems in plants.
Operators in a chemical plant using an older analog control board, seen in East Germany, 1986.
Chemical engineers "develop economic ways of using materials and energy". Chemical engineers
use chemistry and engineering to turn raw materials into usable products, such as medicine,
petrochemicals and plastics on a large-scale, industrial setting. They are also involved in waste
management and research. Both applied and research facets could make extensive use of
computers.[33]
A chemical engineer may be involved in industry or university research, where they are tasked with
designing and performing experiments to create new and better ways of production, controlling
pollution, conserving resources and making these processes safer. They may be involved in
designing and constructing plants as a project engineer. In this field, the chemical engineer uses
their knowledge in selecting plant equipment and the optimum method of production to minimize
costs and increase profitability. After a plant's construction, they may help in upgrading its equipment
and may also be involved in its daily operations.[35] Chemical engineers may be permanently
employed at chemical plants to manage operations. Alternatively, they may serve in a consultant
role to troubleshoot problems, manage process changes and otherwise assist plant operators.
History of biotechnology
Biotechnology is the application of scientific and engineering principles to the processing of
materials by biological agents to provide goods and services.[1] From its inception, biotechnology has
maintained a close relationship with society. Although now most often associated with the
development of drugs, historically biotechnology has been principally associated with food,
addressing such issues as malnutrition and famine. The history of biotechnology begins with
zymotechnology, which commenced with a focus on brewing techniques for beer. By World War I,
however, zymotechnology would expand to tackle larger industrial issues, and the potential
of industrial fermentation gave rise to biotechnology. However, both the single-cell protein and
gasohol projects failed to progress due to varying issues including public resistance, a changing
economic scene, and shifts in political power.
Yet the formation of a new field, genetic engineering, would soon bring biotechnology to the forefront
of science in society, and the intimate relationship between the scientific community, the public, and
the government would ensue. These debates gained exposure in 1975 at the Asilomar Conference,
where Joshua Lederberg was the most outspoken supporter for this emerging field in biotechnology.
By as early as 1978, with the development of synthetic human insulin, Lederberg's claims would
prove valid, and the biotechnology industry grew rapidly. Each new scientific advance became a
media event designed to capture public support, and by the 1980s, biotechnology grew into a
promising real industry. In 1988, only five proteins from genetically engineered cells had been
approved as drugs by the United States Food and Drug Administration (FDA), but this number would
skyrocket to over 125 by the end of the 1990s.
The field of genetic engineering remains a heated topic of discussion in today's society with the
advent of gene therapy, stem cell research, cloning, and genetically modified food. While it seems
only natural nowadays to look to pharmaceutical drugs as solutions to health and societal problems, this
relationship of biotechnology serving social needs began centuries ago.
Origins of biotechnology
Biotechnology arose from the field of zymotechnology or zymurgy, which began as a search for a
better understanding of industrial fermentation, particularly beer. Beer was an important industrial,
and not just social, commodity. In late 19th century Germany, brewing contributed as much to the
gross national product as steel, and taxes on alcohol proved to be significant sources of revenue to
the government.[2] In the 1860s, institutes and remunerative consultancies were dedicated to the
technology of brewing. The most famous was the private Carlsberg Institute, founded in 1875, which
employed Emil Christian Hansen, who pioneered the pure yeast process for the reliable production
of consistent beer. Less well known were private consultancies that advised the brewing industry.
One of these, the Zymotechnic Institute, was established in Chicago by the German-born chemist
John Ewald Siebel.
The heyday and expansion of zymotechnology came in World War I in response to industrial needs
to support the war. Max Delbrück grew yeast on an immense scale during the war to meet 60
percent of Germany's animal feed needs.[3] Compounds of another fermentation product, lactic acid,
made up for a lack of hydraulic fluid, glycerol. On the Allied side the Russian chemist Chaim
Weizmann used starch to eliminate Britain's shortage of acetone, a key raw material in explosives,
by fermenting maize to acetone. The industrial potential of fermentation was outgrowing its
traditional home in brewing, and "zymotechnology" soon gave way to "biotechnology."
With food shortages spreading and resources fading, some dreamed of a new industrial solution.
The Hungarian Károly Ereky coined the word "biotechnology" in Hungary during 1919 to describe a
technology based on converting raw materials into a more useful product. He built a slaughterhouse
for a thousand pigs and also a fattening farm with space for 50,000 pigs, raising over 100,000 pigs a
year. The enterprise was enormous, becoming one of the largest and most profitable meat and fat
operations in the world. In a book entitled Biotechnologie, Ereky further developed a theme that
would be reiterated through the 20th century: biotechnology could provide solutions to societal
crises, such as food and energy shortages. For Ereky, the term "biotechnologie" indicated the
process by which raw materials could be biologically upgraded into socially useful products.[4]
This catchword spread quickly after the First World War, as "biotechnology" entered German
dictionaries and was taken up abroad by business-hungry private consultancies as far away as the
United States. In Chicago, for example, the coming of prohibition at the end of World War I
encouraged biological industries to create opportunities for new fermentation products, in particular a
market for nonalcoholic drinks. Emil Siebel, the son of the founder of the Zymotechnic Institute,
broke away from his father's company to establish his own called the "Bureau of Biotechnology,"
which specifically offered expertise in fermented nonalcoholic drinks.[5]
The belief that the needs of an industrial society could be met by fermenting agricultural waste was
an important ingredient of the "chemurgic movement."[6] Fermentation-based processes generated
products of ever-growing utility. In the 1940s, penicillin was the most dramatic. While it was
discovered in England, it was produced industrially in the U.S. using a deep fermentation process
originally developed in Peoria, Illinois. The enormous profits and the public expectations penicillin
engendered caused a radical shift in the standing of the pharmaceutical industry. Doctors used the
phrase "miracle drug", and the historian of its wartime use, David Adams, has suggested that to the
public penicillin represented the perfect health that went together with the car and the dream house
of wartime American advertising.[7] In the 1950s, steroids were synthesized using fermentation
technology. In particular, cortisone promised the same revolutionary ability to change medicine as
penicillin had.
Penicillin was viewed as a miracle drug that brought enormous profits and public expectations.
Single-cell protein and gasohol projects
Even greater expectations of biotechnology were raised during the 1960s by a process that grew
single-cell protein. When the so-called protein gap threatened world hunger, producing food locally
by growing it from waste seemed to offer a solution. It was the possibilities of growing
microorganisms on oil that captured the imagination of scientists, policy makers, and
commerce.[8] Major companies such as British Petroleum (BP) staked their futures on it. In 1962, BP
built a pilot plant at Cap de Lavera in southern France to publicize its product, Toprina. Initial
research work at Lavera was done by Alfred Champagnat.[10] In 1963, construction started on BP's
second pilot plant at Grangemouth Oil Refinery in Britain.[10]
As there was no well-accepted term to describe the new foods, in 1966 the term "single-cell protein"
(SCP) was coined at MIT to provide an acceptable and exciting new title, avoiding the unpleasant
connotations of microbial or bacterial.
The "food from oil" idea became quite popular by the 1970s, when facilities for growing yeast fed by
n-paraffins were built in a number of countries. The Soviets were particularly enthusiastic, opening
large "BVK" (belkovo-vitaminny kontsentrat, i.e., "protein-vitamin concentrate") plants next to their oil
refineries in Kstovo (1973)[11][12][13] and Kirishi (1974).[14]
By the late 1970s, however, the cultural climate had completely changed, as the growth in SCP
interest had taken place against a shifting economic and cultural scene. First, the price of
oil rose catastrophically in 1974, so that its cost per barrel was five times greater than it had been
two years earlier. Second, despite continuing hunger around the world, anticipated demand also
began to shift from humans to animals. The program had begun with the vision of growing food for
Third World people, yet the product was instead launched as an animal food for the developed
world. The rapidly rising demand for animal feed made that market appear economically more
attractive. The ultimate downfall of the SCP project, however, came from public resistance.[15]
This was particularly vocal in Japan, where production came closest to fruition. For all their
enthusiasm for innovation and traditional interest in microbiologically produced foods, the Japanese
were the first to ban the production of single-cell proteins. The Japanese ultimately were unable to
separate the idea of their new "natural" foods from the far from natural connotation of oil.[15] These
arguments were made against a background of suspicion of heavy industry in which anxiety over
minute traces of petroleum was expressed. Thus, public resistance to an unnatural product led to the
end of the SCP project as an attempt to solve world hunger.
In the USSR, public environmental concerns led the government in 1989 to decide to close
down (or convert to different technologies) all eight of the paraffin-fed yeast plants that the Soviet
Ministry of Microbiological Industry operated by that time.[14]
In the late 1970s, biotechnology offered another possible solution to a societal crisis. The escalation
in the price of oil in 1974 increased the cost of the Western world's energy tenfold.[16] In response,
the U.S. government promoted the production of gasohol, gasoline with 10 percent alcohol added,
as an answer to the energy crisis.[7] In 1979, when the Soviet Union sent troops to Afghanistan, the
Carter administration cut off supplies of agricultural produce in retaliation, creating an agricultural
surplus in the U.S. As a result, fermenting the agricultural surpluses to synthesize fuel seemed to
be an economical solution to the shortage of oil threatened by the Iran-Iraq war. Before the new
direction could be taken, however, the political wind changed again: the Reagan administration
came to power in January 1981 and, with the declining oil prices of the 1980s, ended support for the
gasohol industry before it was born.[17]
Biotechnology seemed to be the solution for major social problems, including world hunger and
energy crises. In the 1960s, radical measures would be needed to meet world starvation, and
biotechnology seemed to provide an answer. However, the solutions proved to be too expensive and
socially unacceptable, and solving world hunger through SCP food was dismissed. In the 1970s, the
food crisis was succeeded by the energy crisis, and here too, biotechnology seemed to provide an
answer. But once again, costs proved prohibitive as oil prices slumped in the 1980s. Thus, in
practice, the implications of biotechnology were not fully realized in these situations. But this would
soon change with the rise of genetic engineering.
Genetic engineering
The origins of biotechnology culminated with the birth of genetic engineering. There were two key
events that have come to be seen as scientific breakthroughs beginning the era that would unite
genetics with biotechnology. One was the 1953 discovery of the structure of DNA, by Watson and
Crick, and the other was the 1973 discovery by Cohen and Boyer of a recombinant DNA technique
by which a section of DNA was cut from the plasmid of an E. coli bacterium and transferred into the
DNA of another.[18] This approach could, in principle, enable bacteria to adopt the genes and produce
proteins of other organisms, including humans. Popularly referred to as "genetic engineering," it
came to be defined as the basis of new biotechnology.
Genetic engineering proved to be a topic that thrust biotechnology into the public scene, and the
interaction between scientists, politicians, and the public defined the work that was accomplished in
this area. Technical developments during this time were revolutionary and at times frightening. In
December 1967, the first heart transplant by Christiaan Barnard reminded the public that the physical
identity of a person was becoming increasingly problematic. While poetic imagination had always
seen the heart at the center of the soul, now there was the prospect of individuals being defined by
other people's hearts.[19] During the same month, Arthur Kornberg announced that he had managed
to biochemically replicate a viral gene. "Life had been synthesized," said the head of the National
Institutes of Health.[19] Genetic engineering was now on the scientific agenda, as it was becoming
possible to identify genetic characteristics with diseases such as beta thalassemia and sickle-cell
anemia.
Responses to scientific achievements were colored by cultural skepticism. Scientists and their
expertise were looked upon with suspicion. In 1968, an immensely popular work, The Biological
Time Bomb, was written by the British journalist Gordon Rattray Taylor. The author's preface saw
Kornberg's discovery of replicating a viral gene as a route to lethal doomsday bugs. The publisher's
blurb for the book warned that within ten years, "You may marry a semi-artificial man or
woman…choose your children's sex…tune out pain…change your memories…and live to be 150 if
the scientific revolution doesn’t destroy us first."[20] The book ended with a chapter called "The Future
– If Any." While it is rare for current science to be represented in the movies, in this period of "Star
Trek", science fiction and science fact seemed to be converging. "Cloning" became a popular word
in the media. Woody Allen satirized the cloning of a person from a nose in his 1973 movie Sleeper,
and cloning Adolf Hitler from surviving cells was the theme of the 1976 novel by Ira Levin, The Boys
from Brazil.[21]
In response to these public concerns, scientists, industry, and governments increasingly linked the
power of recombinant DNA to the immensely practical functions that biotechnology promised. One of
the key scientific figures that attempted to highlight the promising aspects of genetic engineering
was Joshua Lederberg, a Stanford professor and Nobel laureate. While in the 1960s "genetic
engineering" described eugenics and work involving the manipulation of the human genome,
Lederberg stressed research that would involve microbes instead.[22] Lederberg emphasized the
importance of focusing on curing living people. Lederberg's 1963 paper, "Biological Future of Man"
suggested that, while molecular biology might one day make it possible to change the human
genotype, "what we have overlooked is euphenics, the engineering of human
development."[23] Lederberg constructed the word "euphenics" to emphasize changing
the phenotype after conception rather than the genotype which would affect future generations.
With the discovery of recombinant DNA by Cohen and Boyer in 1973, the idea that genetic
engineering would have major human and societal consequences was born. In July 1974, a group of
eminent molecular biologists headed by Paul Berg wrote to Science suggesting that the
consequences of this work were so potentially destructive that there should be a pause until its
implications had been thought through.[24] This suggestion was explored at a meeting in February
1975 at California's Monterey Peninsula, forever immortalized by the location, Asilomar. Its historic
outcome was an unprecedented call for a halt in research until it could be regulated in such a way
that the public need not be anxious, and it led to a 16-month moratorium until National Institutes of
Health (NIH) guidelines were established.
Joshua Lederberg was the leading exception in emphasizing, as he had for years, the potential
benefits. At Asilomar, in an atmosphere favoring control and regulation, he circulated a paper
countering the pessimism and fears of misuses with the benefits conferred by successful use. He
described "an early chance for a technology of untold importance for diagnostic and therapeutic
medicine: the ready production of an unlimited variety of human proteins. Analogous applications
may be foreseen in fermentation process for cheaply manufacturing essential nutrients, and in the
improvement of microbes for the production of antibiotics and of special industrial chemicals."[25] In
June 1976, the 16-month moratorium on research expired with the Director's Advisory Committee
(DAC) publication of the NIH guidelines of good practice. They defined the risks of certain kinds of
experiments and the appropriate physical conditions for their pursuit, as well as a list of things too
dangerous to perform at all. Moreover, modified organisms were not to be tested outside the
confines of a laboratory or allowed into the environment.[18]
Synthetic insulin crystals synthesized using recombinant DNA technology
Atypical as Lederberg was at Asilomar, his optimistic vision of genetic engineering would soon lead
to the development of the biotechnology industry. Over the next two years, as public concern over
the dangers of recombinant DNA research grew, so too did interest in its technical and practical
applications. Curing genetic diseases remained in the realm of science fiction, but it appeared that
producing simple human proteins could be good business. Insulin, one of the smaller, best
characterized and understood proteins, had been used in treating type 1 diabetes for half a century.
It had been extracted from animals in a chemically slightly different form from the human product.
Yet, if one could produce synthetic human insulin, one could meet an existing demand with a
product whose approval would be relatively easy to obtain from regulators. In the period 1975 to
1977, synthetic "human" insulin represented the aspirations for new products that could be made
with the new biotechnology. Microbial production of synthetic human insulin was finally announced in
September 1978 and was produced by a startup company, Genentech,[26] although that company did
not commercialize the product itself; instead, it licensed the production method to Eli Lilly and
Company. 1978 also saw the first application for a patent on a gene, the gene that
produces human growth hormone, by the University of California, thus introducing the legal principle
that genes could be patented. Since that filing, almost 20% of the more than 20,000 genes in the
human genome have been patented.
The radical shift in the connotation of "genetic engineering" from an emphasis on the inherited
characteristics of people to the commercial production of proteins and therapeutic drugs was
nurtured by Joshua Lederberg. His broad concerns since the 1960s had been stimulated by
enthusiasm for science and its potential medical benefits. Countering calls for strict regulation, he
expressed a vision of potential utility. Against a belief that new techniques would entail
unmentionable and uncontrollable consequences for humanity and the environment, a growing
consensus on the economic value of recombinant DNA emerged.
Biotechnology and industry
A Genentech-sponsored sign declaring South San Francisco to be "The Birthplace of Biotechnology."
With ancestral roots in industrial microbiology that date back centuries, the new biotechnology
industry grew rapidly beginning in the mid-1970s. Each new scientific advance became a media
event designed to capture investment confidence and public support. Although market expectations
and social benefits of new products were frequently overstated, many people were prepared to see
genetic engineering as the next great advance in technological progress. By the 1980s,
biotechnology characterized a nascent real industry, providing titles for emerging trade organizations
such as the Biotechnology Industry Organization (BIO).
The main focus of attention after insulin was the potential profit makers in the pharmaceutical
industry: human growth hormone and what promised to be a miraculous cure for viral
diseases, interferon. Cancer was a central target in the 1970s because increasingly the disease was
linked to viruses.[29] By 1980, a new company, Biogen, had produced interferon through recombinant
DNA. The emergence of interferon and the possibility of curing cancer raised money in the
community for research and increased the enthusiasm of an otherwise uncertain and tentative
society. Moreover, to the 1970s plight of cancer was added AIDS in the 1980s, offering an enormous
potential market for a successful therapy, and more immediately, a market for diagnostic tests based
on monoclonal antibodies.[30] By 1988, only five proteins from genetically engineered cells had been
approved as drugs by the United States Food and Drug Administration (FDA):
synthetic insulin, human growth hormone, hepatitis B vaccine, alpha-interferon, and tissue
plasminogen activator (TPa), for lysis of blood clots. By the end of the 1990s, however, 125 more
genetically engineered drugs would be approved.[30]
Genetic engineering also reached the agricultural front. There has been tremendous progress
since the market introduction of the genetically engineered Flavr Savr tomato in 1994.[31] Ernst &
Young reported that in 1998, 30% of the U.S. soybean crop was expected to be from genetically
engineered seeds. In 1998, about 30% of the U.S. cotton and corn crops were also expected to be
products of genetic engineering.[31]
Genetic engineering in biotechnology stimulated hopes for both therapeutic proteins, drugs and
biological organisms themselves, such as seeds, pesticides, engineered yeasts, and modified
human cells for treating genetic diseases. From the perspective of its commercial promoters,
scientific breakthroughs, industrial commitment, and official support were finally coming together,
and biotechnology became a normal part of business. No longer were the proponents for the
economic and technological significance of biotechnology the iconoclasts.[32] Their message had
finally become accepted and incorporated into the policies of governments and industry.
Fermentation
Fermentation in progress: Bubbles of CO2 form a froth on top of the fermentation mixture.
Overview of ethanol fermentation. One glucose molecule breaks down into two pyruvate molecules (1). The
energy from this exothermic reaction is used to bind inorganic phosphates to ADP (forming ATP) and convert NAD+ to NADH.
The two pyruvates are then broken down into two acetaldehyde molecules and give off two CO2 molecules as
a waste product (2). The acetaldehyde is then reduced into ethanol using the energy and hydrogen from
NADH; in this process the NADH is oxidized into NAD+ so that the cycle may repeat (3).
Fermentation is a metabolic process that converts sugar to acids, gases or alcohol. It occurs
in yeast and bacteria, and also in oxygen-starved muscle cells, as in the case of lactic acid
fermentation. Fermentation is also used more broadly to refer to the bulk growth
of microorganisms on a growth medium, often with the goal of producing a specific chemical product.
French microbiologist Louis Pasteur is often remembered for his insights into fermentation and its
microbial causes. The science of fermentation is known as zymology.
Fermentation takes place in the absence of oxygen (when the electron transport chain is unusable) and
becomes the cell's primary means of ATP (energy) production.[1] It
turns NADH and pyruvate produced in the glycolysis step into NAD+ and various small molecules
depending on the type of fermentation (see examples below). In the presence of O2, NADH and
pyruvate are used to generate ATP in respiration. This is called oxidative phosphorylation, and it
generates much more ATP than glycolysis alone. For that reason, cells generally benefit from
avoiding fermentation when oxygen is available, the exception being obligate anaerobes which
cannot tolerate oxygen.
The first step, glycolysis, is common to all fermentation pathways:
C6H12O6 + 2 NAD+ + 2 ADP + 2 Pi → 2 CH3COCOO− + 2 NADH + 2 ATP + 2 H2O + 2 H+
Pyruvate is CH3COCOO−. Pi is phosphate. Two ADP molecules and
two Pi are converted to two ATP and two water molecules
via substrate-level phosphorylation. Two molecules of NAD+ are
also reduced to NADH.[2]
In oxidative phosphorylation the energy for ATP formation is derived
from an electrochemical proton gradient generated across the inner
mitochondrial membrane (or, in the case of bacteria, the plasma
membrane) via the electron transport chain. Glycolysis has
substrate-level phosphorylation (ATP generated directly at the point
of reaction).
Humans have used fermentation to produce food and beverages
since the Neolithic age. For example, fermentation is used for
preservation in a process that produces lactic acid as found in such
sour foods as pickled
cucumbers, kimchi and yogurt (see fermentation in food
processing), as well as for producing alcoholic beverages such
as wine (see fermentation in winemaking) and beer. Fermentation
can even occur within the stomachs of animals, such as
humans. Auto-brewery syndrome is a rare medical condition in which
the stomach contains brewer's yeast that breaks down starches into
ethanol, which then enters the bloodstream.[3]
Definitions
To many people, fermentation simply means the production of
alcohol: grains and fruits are fermented to produce beer and wine. If
a food soured, one might say it was 'off' or fermented. Here are
some definitions of fermentation. They range from informal, general
usage to more scientific definitions.[4]
1. Preservation methods for food via microorganisms (general
use).
2. Any process that produces alcoholic beverages or acidic
dairy products (general use).
3. Any large-scale microbial process occurring with or without
air (common definition used in industry).
4. Any energy-releasing metabolic process that takes place
only under anaerobic conditions (becoming more scientific).
5. Any metabolic process that releases energy from a sugar or
other organic molecules, does not require oxygen or an
electron transport system, and uses an organic molecule as
the final electron acceptor (most scientific).
Examples
Fermentation does not necessarily have to be carried out in
an anaerobic environment. For example, even in the presence of
abundant oxygen, yeast cells greatly prefer fermentation to aerobic
respiration, as long as sugars are readily available for consumption
(a phenomenon known as the Crabtree effect).[5] The antibiotic
activity of hops also inhibits aerobic metabolism in yeast[citation needed].
Fermentation reacts NADH with an endogenous, organic electron
acceptor.[1] Usually this is pyruvate formed from the sugar during the
glycolysis step. During fermentation, pyruvate is metabolized to
various compounds through several processes:
 ethanol fermentation, also known as alcoholic fermentation, is the
production of ethanol and carbon dioxide
 lactic acid fermentation refers to two means of producing lactic
acid:
1. homolactic fermentation is the production of lactic acid
exclusively
2. heterolactic fermentation is the production of lactic acid
as well as other acids and alcohols.
Sugars are the most common substrate of fermentation, and typical
examples of fermentation products are ethanol, lactic acid, carbon
dioxide, and hydrogen gas (H2). However, more exotic compounds
can be produced by fermentation, such as butyric
acid and acetone. Yeast carries out fermentation in the production
of ethanol in beers, wines, and other alcoholic drinks, along with the
production of large quantities of carbon dioxide. Fermentation occurs
in mammalian muscle during periods of intense exercise where
oxygen supply becomes limited, resulting in the creation of lactic
acid.[6]
Chemistry
Comparison of aerobic respiration and the most common fermentation types in a eukaryotic cell.[7] Numbers in
circles indicate counts of carbon atoms in molecules: C6 is glucose (C6H12O6), C1 is carbon
dioxide (CO2). The mitochondrial outer membrane is omitted.
Fermentation products contain chemical energy (they are not fully
oxidized), but are considered waste products, since they cannot be
metabolized further without the use of oxygen.
Ethanol fermentation
The chemical equation below shows the alcoholic fermentation
of glucose, whose chemical formula is C6H12O6.[8] One glucose
molecule is converted into two ethanol molecules and two carbon
dioxide molecules:
C6H12O6 → 2 C2H5OH + 2 CO2
C2H5OH is the chemical formula for ethanol.
Before fermentation takes place, one glucose molecule is
broken down into two pyruvate molecules. This is known
as glycolysis.[8]
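As a quick worked example of what this stoichiometry implies (a back-of-the-envelope sketch using standard molar masses, not a calculation from the text), the theoretical mass yield of ethanol from glucose follows directly from the balanced equation:

```python
# Theoretical ethanol yield from glucose via C6H12O6 -> 2 C2H5OH + 2 CO2.
# Molar masses in g/mol (standard values, rounded).
M_glucose = 180.16
M_ethanol = 46.07
M_co2 = 44.01

ethanol_per_glucose = 2 * M_ethanol / M_glucose   # g ethanol per g glucose
co2_per_glucose = 2 * M_co2 / M_glucose

print(f"theoretical ethanol yield: {ethanol_per_glucose:.3f} g/g glucose")  # ~0.511
print(f"theoretical CO2 yield:     {co2_per_glucose:.3f} g/g glucose")      # ~0.489

# Mass balance closes: all glucose mass ends up as ethanol + CO2.
assert abs(ethanol_per_glucose + co2_per_glucose - 1.0) < 0.01
```

The roughly 0.51 g of ethanol per gram of glucose is a theoretical ceiling; real fermentations yield somewhat less, since part of the sugar is diverted into yeast biomass and by-products.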
Lactic acid fermentation
Homolactic fermentation (producing only lactic acid) is the
simplest type of fermentation. The pyruvate from
glycolysis[10] undergoes a simple redox reaction, forming lactic
acid.[2][11] It is unique because it is one of the only respiration
processes that does not produce a gas as a byproduct. Overall, one
molecule of glucose (or any six-carbon sugar) is converted to
two molecules of lactic acid:
C6H12O6 → 2 CH3CHOHCOOH
It occurs in the muscles of animals when they need energy
faster than the blood can supply oxygen. It also occurs in some
kinds of bacteria (such as lactobacilli) and some fungi. It is this
type of bacteria that converts lactose into lactic acid in yogurt,
giving it its sour taste. These lactic acid bacteria can carry out
either homolactic fermentation, where the end-product is mostly
lactic acid, or heterolactic fermentation, where some lactate is further
metabolized, resulting in ethanol and carbon dioxide[2] (via
the phosphoketolase pathway), acetate, or other metabolic
products, e.g.:
C6H12O6 → CH3CHOHCOOH + C2H5OH + CO2
If lactose is fermented (as in yogurts and cheeses), it is first
converted into glucose and galactose (both six-carbon sugars
with the same atomic formula):
C12H22O11 + H2O → 2 C6H12O6
Heterolactic fermentation is in a sense intermediate between
lactic acid fermentation, and other types, e.g. alcoholic
fermentation (see below). The reasons to go further and convert
lactic acid into anything else are:
 The acidity of lactic acid impedes biological processes. This
can be beneficial to the fermenting organism, as it drives out
competitors unadapted to the acidity; as a result
the food will have a longer shelf-life (part of the reason
foods are purposely fermented in the first place). However,
beyond a certain point, the acidity starts affecting the
organism that produces it.
 The high concentration of lactic acid (the final product of
fermentation) drives the equilibrium backwards (Le
Chatelier's principle), decreasing the rate at which
fermentation can occur and slowing down growth.
 Ethanol, to which lactic acid can easily be converted, is
volatile and will readily escape, allowing the reaction to
proceed easily. CO2 is also produced, but it is only
weakly acidic and even more volatile than ethanol.
 Acetic acid (another conversion product) is acidic and not
as volatile as ethanol; however, in the presence of limited
oxygen, its creation from lactic acid releases a lot of
additional energy. It is a lighter molecule than lactic acid
that forms fewer hydrogen bonds with its surroundings (due
to having fewer groups that can form such bonds), and it is thus
more volatile, which also allows the reaction to move
forward more quickly.
 If propionic acid, butyric acid and longer monocarboxylic
acids are produced (see mixed acid fermentation), the
amount of acidity produced per glucose consumed will
decrease, as with ethanol, allowing faster growth.
Aerobic respiration
In aerobic respiration, the pyruvate produced by glycolysis is
oxidized completely, generating additional ATP and NADH in
the citric acid cycle and by oxidative phosphorylation. However,
this can occur only in the presence of oxygen. Oxygen is toxic
to organisms that are obligate anaerobes, and is not required
by facultative anaerobic organisms. In the absence of oxygen,
one of the fermentation pathways occurs in order to
regenerate NAD+; lactic acid fermentation is one of these
pathways.[2]
Hydrogen gas production in fermentation
Hydrogen gas is produced in many types of fermentation
(mixed acid fermentation, butyric
acid fermentation, caproate fermentation, butanol fermentation,
glyoxylate fermentation), as a way to regenerate NAD+ from
NADH. Electrons are transferred to ferredoxin, which in turn is
oxidized by hydrogenase, producing H2.[8] Hydrogen gas is
a substrate for methanogens and sulfate reducers, which keep
the concentration of hydrogen low and favor the production of
such an energy-rich compound,[12] but hydrogen gas at a fairly
high concentration can nevertheless be formed, as in flatus.
As an example of mixed acid fermentation, bacteria such
as Clostridium pasteurianum ferment glucose
producing butyrate, acetate, carbon dioxide and hydrogen
gas:[13] The reaction leading to acetate is:
C6H12O6 + 4 H2O → 2 CH3COO− + 2 HCO3− + 4 H+ + 4 H2
Glucose could theoretically be converted into just CO2 and
H2, but the global reaction releases little energy.
Methane gas production in fermentation
Acetic acid can also undergo a dismutation reaction to
produce methane and carbon dioxide:[14][15]
CH3COO− + H+ → CH4 + CO2
ΔG° = -36 kJ/reaction
This disproportionation reaction is catalysed
by methanogen archaea in their fermentative
metabolism. One electron is transferred from
the carbonyl function (e− donor) of the carboxylic group
to the methyl group (e− acceptor) of acetic acid to
respectively produce CO2 and methane gas.
History
The use of fermentation, particularly for beverages, has
existed since the Neolithic and has been documented
dating from 7000–6600 BCE in Jiahu, China,[16] 6000
BCE in Georgia,[17] 3150 BCE in ancient Egypt,[18] 3000
BCE in Babylon,[19] 2000 BCE in pre-Hispanic
Mexico,[19] and 1500 BC in Sudan.[20] Fermented foods
have a religious significance
in Judaism and Christianity. The Baltic god Rugutis was
worshiped as the agent of fermentation.[21][22]
The first solid evidence of the living nature of yeast
appeared between 1837 and 1838, when three
publications appeared by C. Cagniard de la Tour, Theodor
Schwann, and F. Kuetzing, each of whom independently
concluded as a result of microscopic investigations that
concluded as a result of microscopic investigations that
yeast is a living organism that reproduces by budding. It
is perhaps because wine, beer, and bread were each
basic foods in Europe that most of the early studies on
fermentation were done on yeasts, with which they
were made. Soon, bacteria were also discovered; the
term was first used in English in the late 1840s, but it
did not come into general use until the 1870s, and then
largely in connection with the new germ theory of
disease.[23]
Louis Pasteur in his laboratory
Louis Pasteur (1822–1895), in a series of investigations
during the 1850s and 1860s, showed that fermentation is
initiated by living organisms.[11] In 1857,
Pasteur showed that lactic acid fermentation is caused
by living organisms.[24] In 1860, he demonstrated that
bacteria cause souring in milk, a process formerly
thought to be merely a chemical change, and his work
in identifying the role of microorganisms in food
spoilage led to the process of pasteurization.[25] In 1877,
working to improve the French brewing industry,
Pasteur published his famous paper on fermentation,
"Etudes sur la Bière", which was translated into English
in 1879 as "Studies on fermentation".[26] He defined
fermentation (incorrectly) as "Life without air", but
correctly showed that specific types of microorganisms
cause specific types of fermentations and specific end products.
Although showing fermentation to be the result of the
action of living microorganisms was a breakthrough, it
did not explain the basic nature of the fermentation
process, or prove that it is caused by the
microorganisms that appear to be always present.
Many scientists, including Pasteur, had unsuccessfully
attempted to extract the fermentation enzyme from yeast.
Success came in 1897, when the German chemist Eduard
Buechner ground up yeast, extracted a juice from it, and
then found to his
amazement that this "dead" liquid would ferment a
sugar solution, forming carbon dioxide and alcohol
much like living yeasts. Buechner's results are
considered to mark the birth of biochemistry. The
"unorganized ferments" behaved just like the organized
ones. From that time on, the term enzyme came to be
applied to all ferments. It was then understood that
fermentation is caused by enzymes that are produced
by microorganisms.[29] In 1907, Buechner won the Nobel
Prize in chemistry for his work.[30]
Advances in microbiology and fermentation technology
have continued steadily up until the present. For
example, in the late 1970s, it was discovered that
microorganisms could be mutated with physical and
chemical treatments to be higher-yielding, faster-growing, tolerant of less oxygen, and able to use a
more concentrated
medium.[31] Strain selection and hybridization developed
as well, affecting most modern food fermentations.
Other approaches to advancing the fermentation
industry have been taken by companies such as BioTork,
a biotechnology company that
naturally evolves microorganisms to improve
fermentation processes. This approach differs from the
more popular genetic modification, which has become
the current industry standard.
Biotechnology
Biotechnology is the use of living systems and organisms to develop or make products, or "any
technological application that uses biological systems, living organisms or derivatives thereof, to
make or modify products or processes for specific use" (UN Convention on Biological Diversity, Art.
2).[1] Depending on the tools and applications, it often overlaps with the (related) fields
of bioengineering, biomedical engineering, biomanufacturing, etc.
For thousands of years, humankind has used biotechnology in agriculture, food production,
and medicine.[2] The term is largely believed to have been coined in 1919 by
Hungarian engineer Károly Ereky. In the late 20th and early 21st century, biotechnology has
expanded to include new and diverse sciences such as genomics, recombinant gene techniques,
applied immunology, and development of pharmaceutical therapies and diagnostic tests.[2]
Definitions
The wide concept of "biotech" or "biotechnology" encompasses a wide range of procedures for
modifying living organisms according to human purposes, going back to domestication of animals,
cultivation of plants, and "improvements" to these through breeding programs that employ artificial
selection and hybridization. Modern usage also includes genetic engineering as well
as cell and tissue culture technologies. The American Chemical Society defines biotechnology as
the application of biological organisms, systems, or processes by various industries to learning about
the science of life and the improvement of the value of materials and organisms such as
pharmaceuticals, crops, and livestock.[3] As per European Federation of Biotechnology,
Biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular
analogues for products and services.[4] Biotechnology also draws on the pure biological sciences
(animal cell culture, biochemistry, cell biology, embryology, genetics, microbiology, and molecular
biology). In many instances, it is also dependent on knowledge and methods from outside the
sphere of biology including:
 bioinformatics, a new branch of computer science
 bioprocess engineering
 biorobotics
 chemical engineering
Conversely, modern biological sciences (including even concepts such as molecular ecology) are
intimately entwined and heavily dependent on the methods developed through biotechnology and
what is commonly thought of as the life sciences industry. Biotechnology is the research and
development in the laboratory using bioinformatics for exploration, extraction, exploitation and
production from any living organisms and any source of biomass by means of biochemical
engineering where high value-added products could be planned (reproduced by biosynthesis, for
example), forecasted, formulated, developed, manufactured and marketed for the purpose of
sustainable operations (for the return from bottomless initial investment on R & D) and gaining
durable patents rights (for exclusives rights for sales, and prior to this to receive national and
international approval from the results on animal experiment and human experiment, especially on
the pharmaceutical branch of biotechnology to prevent any undetected side-effects or safety
concerns by using the products).[5][6][7]
By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes
higher systems approaches (not necessarily the altering or using of biological materials directly) for
interfacing with and utilizing living things. Bioengineering is the application of the principles of
engineering and natural sciences to tissues, cells and molecules. This can be considered as the use
of knowledge from working with and manipulating biology to achieve a result that can improve
functions in plants and animals.[8] Relatedly, biomedical engineering is an overlapping field that often
draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of
biomedical and/or chemical engineering such as tissue engineering, biopharmaceutical engineering,
and genetic engineering.
History
Brewing was an early application of biotechnology
Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit
the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation
of plants may be viewed as the earliest biotechnological enterprise.
Agriculture has been theorized to have become the dominant way of producing food since
the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the
best-suited crops, those with the highest yields, to produce enough food to support a growing
population. As crops and fields became increasingly large and difficult to maintain, it was discovered
that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control
pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their
crops through introducing them to new environments and breeding them with other plants — one of
the first forms of biotechnology.
These processes were also part of the early fermentation of beer, introduced
in early Mesopotamia, Egypt, China and India, and they still rely on the same basic biological methods.
In brewing, malted grains (containing enzymes) convert starch from grains into sugar, and
specific yeasts are then added to produce beer. In this process, carbohydrates in the grains are broken
down into alcohols such as ethanol. Later, other cultures developed the process of lactic acid
fermentation, which allowed the fermentation and preservation of other forms of food, such as soy
sauce. Fermentation was also used in this time period to produce leavened bread. Although the
process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first
use of biotechnology to convert a food source into another form.
Before the time of Charles Darwin's work and life, animal and plant scientists had already used
selective breeding. Darwin added to that body of work with his scientific observations about the
ability of science to change species. These accounts contributed to Darwin's theory of natural
selection.[10]
For thousands of years, humans have used selective breeding to improve production of crops and
livestock to use them for food. In selective breeding, organisms with desirable characteristics are
mated to produce offspring with the same characteristics. For example, this technique was used with
corn to produce the largest and sweetest crops.[11]
In the early twentieth century scientists gained a greater understanding of microbiology and explored
ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological
culture in an industrial process, using Clostridium acetobutylicum to ferment corn starch and
produce acetone, which the United Kingdom desperately needed to
manufacture explosives during World War I.[12]
Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered
the mold Penicillium. His work led to the purification, by Howard Florey, Ernst Boris Chain and
Norman Heatley, of the antibiotic compound formed by the mold, what we today know
as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in
humans.[11]
The field of modern biotechnology is generally thought of as having been born in 1971 when Paul
Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at
San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972
by transferring genetic material into a bacterium, such that the imported material would be
reproduced. The commercial viability of a biotechnology industry was significantly expanded on June
16, 1980, when the United States Supreme Court ruled that a genetically
modified microorganism could be patented in the case of Diamond v. Chakrabarty.[13] Indian-born
Ananda Chakrabarty, working for General Electric, had modified a bacterium (of
the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating
oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire
organelles between strains of the Pseudomonas bacterium.)
Revenue in the industry is expected to grow by 12.9% in 2008. Another factor influencing the
biotechnology sector's success is improved intellectual property rights legislation—and
enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products
to cope with an ageing, and ailing, U.S. population.[14]
Rising demand for biofuels is expected to be good news for the biotechnology sector, with
the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel
consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry
to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing
genetically modified seeds which are resistant to pests and drought. By boosting farm productivity,
biotechnology plays a crucial role in ensuring that biofuel production targets are met.[15]
Examples
A rose plant that began as cells grown in a tissue culture
Biotechnology has applications in four major industrial areas: health care (medical), crop
production and agriculture, non-food (industrial) uses of crops and other products
(e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.
For example, one application of biotechnology is the directed use of organisms for the manufacture
of organic products (examples include beer and milk products). Another example is using naturally
present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat
waste, clean up sites contaminated by industrial activities (bioremediation), and also to
produce biological weapons.
A series of derived terms have been coined to identify several branches of biotechnology; for
example:
 Bioinformatics is an interdisciplinary field which addresses
biological problems using computational techniques, and makes the
rapid organization as well as analysis of biological data possible.
The field may also be referred to as computational biology, and can
be defined as, "conceptualizing biology in terms of molecules and
then applying informatics techniques to understand and organize
the information associated with these molecules, on a large
scale."[16] Bioinformatics plays a key role in various areas, such
as functional genomics, structural genomics, and proteomics, and
forms a key component in the biotechnology and pharmaceutical
sector. (A small illustrative sketch follows this list.)
 Blue biotechnology is a term that has been used to describe the
marine and aquatic applications of biotechnology, but its use is
relatively rare.
 Green biotechnology is biotechnology applied to agricultural
processes. An example would be the selection and domestication of
plants via micropropagation. Another example is the designing
of transgenic plants to grow under specific environments in the
presence (or absence) of chemicals. One hope is that green
biotechnology might produce more environmentally friendly
solutions than traditional industrial agriculture. An example of this is
the engineering of a plant to express a pesticide, thereby ending the
need for external application of pesticides. An example of this would
be Bt corn. Whether or not green biotechnology products such as
this are ultimately more environmentally friendly is a topic of
considerable debate.
 Red biotechnology is applied to medical processes. Some
examples are the designing of organisms to produce antibiotics,
and the engineering of genetic cures through genetic manipulation.
 White biotechnology, also known as industrial biotechnology, is
biotechnology applied to industrial processes. An example is the
designing of an organism to produce a useful chemical. Another
example is the use of enzymes as industrial catalysts to either
produce valuable chemicals or destroy hazardous/polluting
chemicals. White biotechnology tends to consume less in resources
than traditional processes used to produce industrial goods.[citation needed]
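To give a concrete, if deliberately tiny, flavor of the bioinformatics entry above (the sketch promised at the end of that bullet), the Python snippet below computes the GC content and codon counts of a short DNA string; the sequence is invented for illustration and merely hints at the genome-scale analyses the field actually performs.

```python
from collections import Counter

# Hypothetical short DNA sequence, used only to illustrate the kind of
# bookkeeping bioinformatics automates at genome scale.
seq = "ATGGCGTACGTTGACATGGCTTAA"

# GC content: fraction of bases that are G or C.
gc_content = sum(base in "GC" for base in seq) / len(seq)
print(f"GC content: {gc_content:.2%}")

# Codon counts: split the reading frame into consecutive triplets.
codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
print("codon counts:", Counter(codons))
```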
The investment and economic output of all of these types of applied biotechnologies is termed the
bioeconomy.
Medicine
In medicine, modern biotechnology finds applications in areas such as pharmaceutical
drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening).
DNA microarray chip – some can do as many as a million blood tests at once
Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses
how genetic makeup affects an individual's response to drugs.[17] It deals with the influence
of genetic variation on drug response in patients by correlating gene expression or single-nucleotide
polymorphisms with a drug's efficacy or toxicity.[18] By doing so, pharmacogenomics aims to develop
rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum
efficacy with minimal adverse effects.[19] Such approaches promise the advent of "personalized
medicine"; in which drugs and drug combinations are optimized for each individual's unique genetic
makeup.[20][21]
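As a toy illustration of the correlation idea described above, the Python sketch below groups patients' drug responses by genotype at a single hypothetical SNP and compares group means; all names and numbers are invented, and a real pharmacogenomic analysis would involve far larger cohorts and formal statistics.

```python
from statistics import mean

# Hypothetical patients: genotype at one SNP and a measured drug response
# (e.g. percent reduction in some clinical marker).  All values invented.
patients = [
    ("AA", 42.0), ("AA", 38.5), ("AA", 45.1), ("AA", 40.2),
    ("AG", 27.3), ("AG", 31.0), ("AG", 25.8),
    ("GG", 12.4), ("GG", 15.9), ("GG", 10.7),
]

# Group responses by genotype and compare the mean response per group.
by_genotype = {}
for genotype, response in patients:
    by_genotype.setdefault(genotype, []).append(response)

for genotype, responses in sorted(by_genotype.items()):
    print(f"{genotype}: n={len(responses)}, mean response = {mean(responses):.1f}")

# A real study would test many SNPs against efficacy or toxicity with
# proper statistics before any dosing recommendation could be made.
```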
Computer-generated image of insulin hexamers highlighting the threefold symmetry, the zinc ions holding it
together, and the histidine residues involved in zinc binding.
Biotechnology has contributed to the discovery and manufacturing of traditional small
molecule pharmaceutical drugs as well as drugs that are the product of biotechnology: biopharmaceutics. Modern biotechnology can be used to manufacture existing medicines relatively
easily and cheaply. The first genetically engineered products were medicines designed to treat
human diseases. To cite one example, in 1978 Genentech developed synthetic
humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia
coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas
of abattoir animals (cattle and/or pigs). The resulting genetically engineered bacterium enabled the
production of vast quantities of synthetic human insulin at relatively low cost.[22][23] Biotechnology has
also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic
science (for example through the Human Genome Project) has also dramatically improved our
understanding of biology and as our scientific knowledge of normal and disease biology has
increased, our ability to develop new medicines to treat previously untreatable diseases has
increased as well.[23]
Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be
used to determine a child's parentage (genetic mother and father) or in general a person's ancestry.
In addition to studying chromosomes to the level of individual genes, genetic testing in a broader
sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of
genes associated with increased risk of developing genetic disorders. Genetic testing identifies
changes in chromosomes, genes, or proteins.[24] Most of the time, testing is used to find changes that
are associated with inherited disorders. The results of a genetic test can confirm or rule out a
suspected genetic condition or help determine a person's chance of developing or passing on
a genetic disorder. As of 2011 several hundred genetic tests were in use.[25][26] Since genetic testing
may open up ethical or psychological problems, genetic testing is often accompanied by genetic
counseling.
Agriculture
Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture,
the DNA of which has been modified with genetic engineering techniques. In most cases the aim is
to introduce a new trait to the plant which does not occur naturally in the species.
Examples in food crops include resistance to certain pests, diseases, or stressful environmental conditions,[29] resistance to chemical treatments (e.g. resistance to a herbicide[30]), reduction of spoilage,[31] or improving the nutrient profile of the crop.[32] Examples in non-food crops include production of pharmaceutical agents,[33] biofuels,[34] and other industrially useful goods,[35] as well as bioremediation.[36][37]
Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops increased by a factor of 94, from 17,000 square kilometers (4,200,000 acres) to 1,600,000 km2 (395 million acres).[38] In 2010, 10% of the world's croplands were planted with GM crops.[38] As of 2011, 11 different transgenic crops were grown commercially on 395
million acres (160 million hectares) in 29 countries such as the USA, Brazil, Argentina, India,
Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines,
Myanmar, Burkina Faso, Mexico and Spain.[38]
Genetically modified foods are foods produced from organisms that have had specific changes
introduced into their DNA with the methods of genetic engineering. These techniques have allowed
for the introduction of new crop traits as well as a far greater control over a food's genetic structure
than previously afforded by methods such as selective breeding and mutation
breeding.[39] Commercial sale of genetically modified foods began in 1994, when Calgene first
marketed its Flavr Savr delayed-ripening tomato.[40] To date, most genetic modification of foods has focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cottonseed oil. These have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have also been developed experimentally, although as of November 2013 none were on the market.[41]
There is broad scientific consensus that food on the market derived from GM crops poses no greater
risk to human health than conventional food. GM crops also provide a number of ecological benefits,
if not used in excess.[47] However, opponents have objected to GM crops per se on several grounds,
including environmental concerns, whether food produced from GM crops is safe, whether GM crops
are needed to address the world's food needs, and economic concerns raised by the fact that these organisms are subject to intellectual property law.
Industrial biotechnology
Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of
biotechnology for industrial purposes, including industrial fermentation. It includes the practice of
using cells such as micro-organisms, or components of cells like enzymes, to
generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper
and pulp, textiles and biofuels.[48] In doing so, biotechnology uses renewable raw materials and may
contribute to lowering greenhouse gas emissions and moving away from a petrochemical-based
economy.[49]
Regulation
The regulation of genetic engineering concerns approaches taken by governments to assess and
manage the risks associated with the use of genetic engineering technology, and the development
and release of genetically modified organisms (GMO), including genetically modified
crops and genetically modified fish. There are differences in the regulation of GMOs between
countries, with some of the most marked differences occurring between the USA and
Europe.[50] Regulation varies in a given country depending on the intended use of the products of the
genetic engineering. For example, a crop not intended for food use is generally not reviewed by
authorities responsible for food safety.[51] The European Union differentiates between approval for
cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number have been approved for import and processing.[52] The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops; depending on the coexistence regulations, the incentives for cultivating GM crops differ.[53]
Biopharmaceutical
A biopharmaceutical, also known as a biologic(al) medical product, biological,[1] or biologic, is
any medicinal product manufactured in, extracted from, or semisynthesized from biological sources.
Different from chemically synthesized pharmaceuticals, they include vaccines, blood or blood components, allergenics, somatic cells, gene therapies, tissues, recombinant therapeutic proteins,
and living cells used in cell therapy. Biologics can be composed of sugars, proteins, or nucleic acids
or complex combinations of these substances, or may be living cells or tissues. They are isolated
from natural sources—human, animal, or microorganism.
Terminology surrounding biopharmaceuticals varies between groups and entities, with different
terms referring to different subsets of therapeutics within the general biopharmaceutical category.
Some regulatory agencies use the terms biological medicinal products or therapeutic biological
product to refer specifically to engineered macromolecular products like protein- and nucleic acid–
based drugs, distinguishing them from products like blood, blood components, or vaccines, which
are usually extracted directly from a biological source.[2][3][4] Specialty drugs, a recent classification of
pharmaceuticals, are high-cost drugs that are often biologics.[5][6][7]
Gene-based and cellular biologics, for example, often are at the forefront of biomedical research,
and may be used to treat a variety of medical conditions for which no other treatments are
available.[8]
In some jurisdictions, biologics are regulated via different pathways from small molecule drugs and medical devices.
The term biopharmacology is sometimes used to describe the branch of pharmacology that studies
biopharmaceuticals.
Major classes
Blood plasma is a type of biopharmaceutical directly extracted from living systems.
Extracted from living systems
Some of the oldest forms of biologics are extracted from the bodies of animals, and especially from other humans. Important biologics include:
 Whole blood and other blood components
 Organs and tissue transplants
 Stem cell therapy
 Antibodies for passive immunization (e.g., to treat a virus infection)
Some biologics that were previously extracted from animals, such as insulin, are now more
commonly produced by recombinant DNA.
Produced by recombinant DNA
As indicated, the term "biologics" can be used to refer to a wide range of biological products in
medicine. However, in most cases, the term "biologics" is used more restrictively for a class of
therapeutics (either approved or in development) that are produced by means of biological
processes involving recombinant DNA technology. These medications are usually one of three
types:
1. Substances that are (nearly) identical to the body's own key
signalling proteins. Examples are the blood-production
stimulating protein erythropoietin, or the growth-stimulating
hormone named (simply) "growth hormone" or biosynthetic
human insulin and its analogues.
2. Monoclonal antibodies. These are similar to the antibodies that
the human immune system uses to fight off bacteria and
viruses, but they are "custom-designed"
(using hybridoma technology or other methods) and can
therefore be made specifically to counteract or block any given
substance in the body, or to target any specific cell type;
examples of such monoclonal antibodies for use in various
diseases are given in the table below.
3. Receptor constructs (fusion proteins), usually based on a
naturally-occurring receptor linked to the immunoglobulin frame.
In this case, the receptor provides the construct with detailed
specificity, whereas the immunoglobulin-structure imparts
stability and other useful features in terms of pharmacology.
Some examples are listed in the table below.
Biologics as a class of medications in this narrower sense have had a profound impact on many
medical fields, primarily rheumatology and oncology, but
also cardiology, dermatology, gastroenterology, neurology, and others. In most of these disciplines,
biologics have added major therapeutic options for the treatment of many diseases, including some
for which no effective therapies were available, and others where previously existing therapies were
clearly inadequate. However, the advent of biologic therapeutics has also raised complex regulatory
issues (see below), and significant pharmacoeconomic concerns, because the cost for biologic
therapies has been dramatically higher than for conventional (pharmacological) medications. This
factor has been particularly relevant since many biological medications are used for the treatment
of chronic diseases, such as rheumatoid arthritis or inflammatory bowel disease, or for the treatment
of otherwise untreatable cancer during the remainder of life. The cost of treatment with a typical
monoclonal antibody therapy for relatively common indications is generally in the range of €7,000–
14,000 per patient per year.
Older patients who receive biologic therapy for diseases such as rheumatoid arthritis, psoriatic
arthritis, or ankylosing spondylitis are at increased risk for life-threatening infection, adverse
cardiovascular events, and malignancy.[10]
The first such substance approved for therapeutic use was biosynthetic "human" insulin made
via recombinant DNA. Sometimes referred to as rHI and sold under the trade name Humulin, it was developed by Genentech but licensed to Eli Lilly and Company, which manufactured and marketed it starting in 1982.
Major kinds of biopharmaceuticals include:
 Blood factors (Factor VIII and Factor IX)
 Thrombolytic agents (tissue plasminogen activator)
 Hormones (insulin, glucagon, growth hormone, gonadotrophins)
 Haematopoietic growth factors (erythropoietin, colony stimulating factors)
 Interferons (interferon-α, -β, -γ)
 Interleukin-based products (interleukin-2)
 Vaccines (hepatitis B surface antigen)
 Monoclonal antibodies (various)
 Additional products (tumour necrosis factor, therapeutic enzymes)
Research and development investment in new medicines by the biopharmaceutical industry stood at
$65.2 billion in 2008.[11] A few examples of biologics made with recombinant DNA technology include:
USAN/INN | Trade name | Indication | Technology | Mechanism of action
abatacept | Orencia | rheumatoid arthritis | immunoglobulin CTLA4 fusion protein | T-cell deactivation
adalimumab | Humira | rheumatoid arthritis, ankylosing spondylitis, psoriatic arthritis, psoriasis, ulcerative colitis, Crohn's disease | monoclonal antibody | TNF antagonist
alefacept | Amevive | chronic plaque psoriasis | immunoglobulin G1 fusion protein | incompletely characterized
erythropoietin | Epogen | anemia arising from cancer chemotherapy, chronic renal failure, etc. | recombinant protein | stimulation of red blood cell production
etanercept | Enbrel | rheumatoid arthritis, ankylosing spondylitis, psoriatic arthritis, psoriasis | recombinant human TNF-receptor fusion protein | TNF antagonist
infliximab | Remicade | rheumatoid arthritis, ankylosing spondylitis, psoriatic arthritis, psoriasis, ulcerative colitis, Crohn's disease | monoclonal antibody | TNF antagonist
trastuzumab | Herceptin | breast cancer | humanized monoclonal antibody | HER2/neu (erbB2) antagonist
ustekinumab | Stelara | psoriasis | humanized monoclonal antibody | IL-12 and IL-23 antagonist
denileukin diftitox | Ontak | cutaneous T-cell lymphoma (CTCL) | engineered protein combining interleukin-2 and diphtheria toxin | interleukin-2 receptor binder
golimumab | Simponi | rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, ulcerative colitis | monoclonal antibody | TNF antagonist
Vaccines
Gene therapy
Viral gene therapy involves artificially manipulating a virus to include a desirable piece of genetic
material.
Biosimilars
With the expiration of numerous patents for blockbuster biologics between 2012 and 2019, the
interest in biosimilar production, i.e., follow-on biologics, has increased.[12] Compared to small
molecules that consist of chemically identical active ingredients, biologics are vastly more complex
and consist of a multitude of subspecies. Due to their heterogeneity and the high process sensitivity,
neither originators nor follow-on manufacturers produce reliably constant quality profiles over
time.[13] The process variations are monitored by modern analytical tools (e.g., liquid
chromatography, immunoassays, mass spectrometry, etc.) and describe a unique design space for
each biologic.
Thus, biosimilars require a different regulatory framework compared to small-molecule generics.
Legislation in the 21st century has addressed this by recognizing an intermediate ground of testing
for biosimilars. The filing pathway requires more testing than for small-molecule generics, but less
testing than for registering completely new therapeutics.[14]
In 2003, the European Medicines Agency introduced an adapted pathway for biosimilars,
termed similar biological medicinal products. This pathway is based on a thorough demonstration of
"comparability" of the "similar" product to an existing approved product.[15] Within the United States,
the Patient Protection and Affordable Care Act of 2010 created an abbreviated approval pathway for
biological products shown to be biosimilar to, or interchangeable with, an FDA-licensed reference
biological product.[14][16] A major hope linked to the introduction of biosimilars is a reduction in costs for patients and the healthcare system.[12]
Commercialization
When a new biopharmaceutical is developed, the company will typically apply for a patent, which is
a grant for exclusive manufacturing rights. This is the primary means by which the developer of the
drug can recover the investment cost for development of the biopharmaceutical. The patent laws in the United States and Europe differ somewhat on the requirements for a patent, with the European requirements perceived as more difficult to satisfy. The total number of patents granted for biopharmaceuticals has risen significantly since the 1970s. In 1978, 30 such patents were granted; this climbed to 15,600 in 1995, and by 2001 there were 34,527 patent applications.[17]
Large-scale production
Biopharmaceuticals may be produced from microbial cells (e.g., recombinant E. coli or yeast
cultures), mammalian cell lines (see cell culture) and plant cell cultures (see plant tissue culture)
and moss plants in bioreactors of various configurations, including photo-bioreactors.[18] Important issues of concern are the cost of production (low-volume, high-purity products are desirable) and microbial contamination (by bacteria, viruses, or mycoplasma). Alternative production platforms being tested include whole plants (plant-made pharmaceuticals).
Transgenics
A potentially controversial method of producing biopharmaceuticals involves transgenic organisms,
particularly plants and animals that have been genetically modified to produce drugs. This
production is a significant risk for the investor, due to the possibility of production failure or scrutiny from regulatory
bodies based on perceived risks and ethical issues. Biopharmaceutical crops also represent a risk of
cross-contamination with non-engineered crops, or crops engineered for non-medical purposes.
One potential approach to this technology is the creation of a transgenic mammal that can produce
the biopharmaceutical in its milk, blood, or urine. Once an animal is produced, typically using
the pronuclear microinjection method, it becomes efficient to use cloning technology to create
additional offspring that carry the favorable modified genome.[19] The first such drug manufactured
from the milk of a genetically modified goat was ATryn, but marketing permission was blocked by
the European Medicines Agency in February 2006.[20] This decision was reversed in June 2006 and
approval was given August 2006.[21]
Pharmaceutical drug
A pharmaceutical drug (also referred to as a pharmaceutical, pharmaceutical
preparation, pharmaceutical product, medicinal product, medicine, medication, medicament,
or simply a drug) is a drug used to diagnose, cure, treat, or prevent disease.[1][2][3] Drug therapy
(pharmacotherapy) is an important part of the medical field and relies on the science
of pharmacology for continual advancement and on pharmacy for appropriate management.
Drugs are classified in various ways. One of the key divisions is by level of control, which
distinguishes prescription drugs (those that a pharmacist dispenses only on the order of a physician, physician assistant, or qualified nurse) from over-the-counter drugs (those that
consumers can order for themselves). Another key distinction is between traditional small
molecule drugs, usually derived from chemical synthesis, and biopharmaceuticals, which
include recombinant proteins, vaccines, blood products used therapeutically (such as IVIG), gene
therapy, and cell therapy (for instance, stem cell therapies). Other ways to classify medicines are by
mode of action, route of administration, biological system affected, or therapeutic effects. An
elaborate and widely used classification system is the Anatomical Therapeutic Chemical
Classification System (ATC system). The World Health Organization keeps a list of essential
medicines.
Drug discovery and drug development are complex and expensive endeavors undertaken
by pharmaceutical companies, academic scientists, and governments. Governments generally
regulate what drugs can be marketed, how drugs are marketed, and in some jurisdictions, drug
pricing. Controversies have arisen over drug pricing and disposal of used drugs.
Definition
In Europe, the term is "medicinal product", and it is defined by EU law as: "(a) Any substance or
combination of substances presented as having properties for treating or preventing disease in
human beings; or
(b) Any substance or combination of substances which may be used in or administered to human
beings either with a view to restoring, correcting or modifying physiological functions by exerting a
pharmacological, immunological or metabolic action, or to making a medical diagnosis."[4]:36
In the US, a "drug" is:
 A substance recognized by an official pharmacopoeia or formulary.
 A substance intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease.
 A substance (other than food) intended to affect the structure or any function of the body.
 A substance intended for use as a component of a medicine but not a device or a component, part, or accessory of a device.
 Biological products are included within this definition and are generally covered by the same laws and regulations, but differences exist regarding their manufacturing processes (chemical process versus biological process).[5]
Classification
Pharmaceuticals or drugs are classified on the basis of their origin:
1. Drugs of natural origin: herbal, plant, or mineral origin; some drug substances are of marine origin.
2. Drugs of chemical as well as natural origin: derived from partly herbal and partly chemical synthesis, for example steroidal drugs.
3. Drugs derived from chemical synthesis.
4. Drugs derived from animal origin: for example, hormones and enzymes.
5. Drugs derived from microbial origin: antibiotics.
6. Drugs derived by biotechnology: for example, genetic engineering and hybridoma techniques.
7. Drugs derived from radioactive substances.
One of the key classifications is between traditional small molecule drugs, usually derived from
chemical synthesis, and biologic medical products, which include recombinant
proteins, vaccines, blood products used therapeutically (such as IVIG), gene therapy, and cell
therapy (for instance, stem cell therapies).
Pharmaceuticals or medicines are classified into various other groups besides their origin, on the basis of pharmacological properties such as mode of action and pharmacological action or activity,[6] or by chemical properties, mode or route of administration, biological system affected, or therapeutic effects. An elaborate and widely used classification system is the Anatomical Therapeutic Chemical Classification System (ATC system). The World Health Organization keeps a list of essential medicines.
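To make the hierarchical nature of the ATC system concrete, the short sketch below splits a complete seven-character ATC code into its five levels; the example code N02BE01 (paracetamol) is used only for illustration, and the helper function is ours, not part of any WHO tool.

```python
# Illustrative sketch: decompose a seven-character ATC code into its five levels.
def atc_levels(code: str) -> dict:
    """Return the five hierarchical levels of a complete ATC code."""
    if len(code) != 7:
        raise ValueError("A complete ATC code has exactly 7 characters")
    return {
        "anatomical main group": code[:1],     # e.g. 'N' (nervous system)
        "therapeutic subgroup": code[:3],      # e.g. 'N02' (analgesics)
        "pharmacological subgroup": code[:4],  # e.g. 'N02B'
        "chemical subgroup": code[:5],         # e.g. 'N02BE'
        "chemical substance": code,            # e.g. 'N02BE01' (paracetamol)
    }

print(atc_levels("N02BE01"))
```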
A sampling of classes of medicine includes:
1. Antipyretics: reducing fever (pyrexia/pyresis)
2. Analgesics: reducing pain (painkillers)
3. Antimalarial drugs: treating malaria
4. Antibiotics: inhibiting germ growth
5. Antiseptics: preventing germ growth near burns, cuts, and wounds
6. Mood stabilizers: lithium and valpromide
7. Hormone replacements: Premarin
8. Oral contraceptives: Enovid, "biphasic" pill, and "triphasic" pill
9. Stimulants: methylphenidate, amphetamine
10. Tranquilizers: meprobamate, chlorpromazine, reserpine, chlordiazepoxide, diazepam, and alprazolam
11. Statins: lovastatin, pravastatin, and simvastatin
Pharmaceuticals may also be described as "specialty", independent of other classifications; this is an ill-defined class of drugs that might be difficult to administer, require special handling during
administration, require patient monitoring during and immediately after administration, have
particular regulatory requirements restricting their use, and are generally expensive relative to other
drugs.[7]
Types of medicines
For the gastrointestinal tract (digestive system)
 Upper digestive tract: antacids, reflux suppressants, antiflatulents, antidopaminergics, proton pump inhibitors (PPIs), H2-receptor antagonists, cytoprotectants, prostaglandin analogues
 Lower digestive tract: laxatives, antispasmodics, antidiarrhoeals, bile acid sequestrants, opioids
For the cardiovascular system
 General: β-receptor blockers ("beta blockers"), calcium channel blockers, diuretics, cardiac glycosides, antiarrhythmics, nitrates, antianginals, vasoconstrictors, vasodilators
 Affecting blood pressure (antihypertensive drugs): ACE inhibitors, angiotensin receptor blockers, beta-blockers, α-blockers, calcium channel blockers, thiazide diuretics, loop diuretics, aldosterone inhibitors
 Coagulation: anticoagulants, heparin, antiplatelet drugs, fibrinolytics, anti-hemophilic factors, haemostatic drugs
 HMG-CoA reductase inhibitors (statins) for lowering LDL cholesterol: hypolipidaemic agents
For the central nervous system
See also: Psychiatric medication and Psychoactive drug
Drugs affecting the central nervous system include: psychedelics, hypnotics, anaesthetics, antipsychotics, antidepressants (including tricyclic antidepressants, monoamine oxidase inhibitors, lithium salts, and selective serotonin reuptake inhibitors (SSRIs)), antiemetics, anticonvulsants/antiepileptics, anxiolytics, barbiturates, movement disorder (e.g., Parkinson's disease) drugs, stimulants (including amphetamines), benzodiazepines, cyclopyrrolones, dopamine antagonists, antihistamines, cholinergics, anticholinergics, emetics, cannabinoids, and 5-HT (serotonin) antagonists.
For pain and consciousness (analgesic drugs)
The main classes of painkillers are NSAIDs, opioids, and local anesthetics.
For musculo-skeletal disorders
The main categories of drugs for musculoskeletal disorders are: NSAIDs (including COX-2 selective
inhibitors), muscle relaxants, neuromuscular drugs, and anticholinesterases.
For the eye
 General: adrenergic neurone blocker, astringent, ocular lubricant
 Diagnostic: topical anesthetics, sympathomimetics, parasympatholytics, mydriatics, cycloplegics
 Antibacterial: antibiotics, topical antibiotics, sulfa drugs, aminoglycosides, fluoroquinolones
 Antiviral drugs
 Anti-fungal: imidazoles, polyenes
 Anti-inflammatory: NSAIDs, corticosteroids
 Anti-allergy: mast cell inhibitors
 Anti-glaucoma: adrenergic agonists, beta-blockers, carbonic anhydrase inhibitors/hyperosmotics, cholinergics, miotics, parasympathomimetics, prostaglandin agonists/prostaglandin inhibitors, nitroglycerin
For the ear, nose and oropharynx
Antibiotics, sympathomimetics, antihistamines, anticholinergics, NSAIDs, corticosteroids, antiseptics,
local anesthetics, antifungals, cerumenolytic
For the respiratory system
bronchodilators, antitussives, mucolytics, decongestants, inhaled and systemic corticosteroids, beta2-adrenergic agonists, anticholinergics, mast cell stabilizers, leukotriene antagonists
For endocrine problems
androgens, antiandrogens, estrogens, gonadotropin, corticosteroids, human growth hormone, insulin, antidiabetics (sulfonylureas, biguanides/metformin, thiazolidinediones, insulin), thyroid hormones, antithyroid drugs, calcitonin, diphosphonate, vasopressin analogues
For the reproductive system or urinary system
antifungal, alkalinizing
agents, quinolones, antibiotics, cholinergics, anticholinergics, antispasmodics, 5-alpha reductase
inhibitor, selective alpha-1 blockers, sildenafils, fertility medications
For contraception
 Hormonal contraception
 Ormeloxifene
 Spermicide
For obstetrics and gynecology
NSAIDs, anticholinergics, haemostatic drugs, antifibrinolytics, Hormone Replacement
Therapy (HRT), bone regulators, beta-receptor agonists, follicle stimulating hormone, luteinising
hormone, LHRH, gamolenic acid, gonadotropin release inhibitor, progestogen, dopamine agonists, oestrogen, prostaglandins, gonadorelin, clomiphene, tamoxifen, diethylstilbestrol
For the skin
emollients, anti-pruritics, antifungals, disinfectants, scabicides, pediculicides, tar products, vitamin A
derivatives, vitamin D analogues, keratolytics, abrasives, systemic
antibiotics, topical antibiotics, hormones, desloughing agents, exudate
absorbents, fibrinolytics, proteolytics, sunscreens, antiperspirants, corticosteroids, immune
modulators
For infections and infestations
antibiotics, antifungals, antileprotics, antituberculous
drugs, antimalarials, anthelmintics, amoebicides, antivirals, antiprotozoals, probiotics,
prebiotics, antitoxins and antivenoms.
For the immune system
vaccines, immunoglobulins, immunosuppressants, interferons, monoclonal antibodies
For allergic disorders
anti-allergics, antihistamines, NSAIDs, corticosteroids
For nutrition
Tonics, electrolytes and mineral preparations (including iron preparations and magnesium
preparations), parenteral nutrition, vitamins, anti-obesity drugs, anabolic drugs, haematopoietic
drugs, food product drugs
For neoplastic disorders
cytotoxic drugs, therapeutic antibodies, sex hormones, aromatase inhibitors, somatostatin inhibitors,
recombinant interleukins, G-CSF, erythropoietin
For diagnostics
contrast media
For euthanasia
A euthanaticum is used for euthanasia and physician-assisted suicide. Euthanasia is not permitted
by law in many countries, and consequently medicines will not be licensed for this use in those
countries.
Administration
Administration is the process by which a patient takes a medicine. There are three major categories of drug administration: enteral (by mouth), parenteral (into the bloodstream), and other (which includes giving a drug through intranasal, topical, inhalation, and rectal routes).[8]
It can be performed in various dosage forms such as pills, tablets, or capsules.
There are many variations in the routes of administration, including intravenous (into the blood through a vein) and oral administration (through the mouth).
Drugs can be administered all at once as a bolus, at frequent intervals, or continuously. Frequencies are often abbreviated from Latin; for example, "Q8H" means every 8 hours, from quaque VIII hora.
Drug discovery
In the fields of medicine, biotechnology and pharmacology, drug discovery is the process by which
new candidate drugs are discovered.
Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery. Later, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that have a desirable therapeutic effect, in a process known as classical pharmacology.
Since the sequencing of the human genome, which allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high-throughput screening of large compound libraries against isolated biological targets that are hypothesized to be disease-modifying, in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy. More recently, scientists have been able to determine the shape of biological molecules at the atomic level and to use that knowledge to design drug candidates (see drug design).
Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase affinity, selectivity (to reduce the potential for side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, the process of drug development can begin prior to clinical trials. One or more of these steps may, but need not, involve computer-aided drug design.
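As a highly simplified illustration of early hit triage, the sketch below filters hypothetical screening hits by a potency cutoff and ranks the survivors by a crude selectivity ratio; the compound names, IC50 values, and thresholds are all invented and do not represent any real screening campaign.

```python
# Illustrative sketch only: invented compounds, IC50 values, and cutoffs.
from dataclasses import dataclass

@dataclass
class Hit:
    name: str
    ic50_target_nM: float      # potency against the intended target (lower is better)
    ic50_off_target_nM: float  # potency against an off-target (higher is better)

hits = [
    Hit("CMP-001", 45.0, 9000.0),
    Hit("CMP-002", 800.0, 1200.0),
    Hit("CMP-003", 12.0, 350.0),
]

POTENCY_CUTOFF_NM = 100.0  # keep only reasonably potent hits

# Selectivity here is a crude ratio of off-target to on-target IC50.
selected = [h for h in hits if h.ic50_target_nM <= POTENCY_CUTOFF_NM]
selected.sort(key=lambda h: h.ic50_off_target_nM / h.ic50_target_nM, reverse=True)

for h in selected:
    ratio = h.ic50_off_target_nM / h.ic50_target_nM
    print(f"{h.name}: IC50 {h.ic50_target_nM} nM, selectivity ratio {ratio:.0f}x")
```

In practice, medicinal chemists weigh many more properties (metabolic stability, solubility, oral bioavailability) before a hit advances, so a single ranking like this is only a starting point.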
Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with a low rate of new therapeutic discovery. In
2010, the research and development cost of each new molecular entity (NME) was approximately
US$1.8 billion.[10] Drug discovery is done by pharmaceutical companies, with research assistance
from universities. The "final product" of drug discovery is a patent on the potential drug. The drug
requires very expensive Phase I, II and III clinical trials, and most of them fail. Small companies have
a critical role, often then selling the rights to larger companies that have the resources to run the
clinical trials.
Development
Drug development is a blanket term used to define the process of bringing a new drug to the market
once a lead compound has been identified through the process of drug discovery. It includes preclinical research (microorganisms/animals) and clinical trials (on humans) and may include the step
of obtaining regulatory approval to market the drug.
Regulation
The regulation of drugs varies by jurisdiction. In some countries, such as the United States, they are
regulated at the national level by a single agency. In other jurisdictions they are regulated at the
state level, or at both state and national levels by various bodies, as is the case in Australia. The role
of therapeutic goods regulation is designed mainly to protect the health and safety of the population.
Regulation is aimed at ensuring the safety, quality, and efficacy of the therapeutic goods which are
covered under the scope of the regulation. In most jurisdictions, therapeutic goods must be
registered before they are allowed to be marketed. There is usually some degree of restriction of the
availability of certain therapeutic goods depending on their risk to consumers.
Depending upon the jurisdiction, drugs may be divided into over-the-counter drugs (OTC) which may
be available without special restrictions, and prescription drugs, which must be prescribed by a
licensed medical practitioner. The precise distinction between OTC and prescription depends on the
legal jurisdiction. A third category, "behind-the-counter" drugs, is implemented in some jurisdictions.
These do not require a prescription, but must be kept in the dispensary, not visible to the public, and
only be sold by a pharmacist or pharmacy technician. Doctors may also prescribe prescription drugs
for off-label use, that is, for purposes for which the drugs were not originally approved by the regulatory agency. The Classification of Pharmaco-Therapeutic Referrals helps guide the referral process
between pharmacists and doctors.
The International Narcotics Control Board of the United Nations administers the international treaties that prohibit certain drugs. It publishes a lengthy list of chemicals and plants whose trade and consumption (where applicable) is forbidden. OTC drugs are sold without restriction as they are considered safe enough that most people will not hurt themselves accidentally by taking them as instructed. Many countries, such as the United Kingdom, have a third category of "pharmacy medicines", which can only be sold in registered pharmacies by or under the supervision of a pharmacist.
Drug pricing
In many jurisdictions drug prices are regulated.
United Kingdom
In the UK the Pharmaceutical Price Regulation Scheme is intended to ensure that the National
Health Service is able to purchase drugs at reasonable prices.[citation needed]
Canada
In Canada, the Patented Medicine Prices Review Board examines drug pricing, compares the
proposed Canadian price to that of seven other countries and determines if a price is excessive or
not. In these circumstances, drug manufacturers must submit a proposed price to the appropriate
regulatory agency.[citation needed]
Brazil
In Brazil, prices have been regulated since 1999 through legislation known as Medicamento Genérico (generic drugs).[citation needed]
India
In India, drug prices are regulated by the National Pharmaceutical Pricing Authority.
United States
In the United States, drug costs are largely unregulated and are instead the result of negotiations between drug companies and insurance companies.[citation needed]
Blockbuster drug
Drug | Trade name | Type | Indication | Company | Sales[14] ($billion/year)*
Atorvastatin | Lipitor | Small molecule | Hypercholesterolemia | Pfizer | 12.5
Clopidogrel | Plavix | Small molecule | Atherosclerosis | Bristol-Myers Squibb, Sanofi | 9.1
Fluticasone/salmeterol | Advair | Small molecule | Asthma | GlaxoSmithKline | 8.7
Esomeprazole | Nexium | Small molecule | Gastroesophageal reflux disease | AstraZeneca | 8.3
Rosuvastatin | Crestor | Small molecule | Hypercholesterolemia | AstraZeneca | 7.4
Quetiapine | Seroquel | Small molecule | Bipolar disorder, schizophrenia, major depressive disorder | AstraZeneca | 7.2
Adalimumab | Humira | Biologic | Rheumatoid arthritis | AbbVie | 6.6
Etanercept | Enbrel | Biologic | Rheumatoid arthritis | Amgen, Pfizer | 6.5
Infliximab | Remicade | Biologic | Crohn's disease, rheumatoid arthritis | Johnson & Johnson | 6.4
Olanzapine | Zyprexa | Small molecule | Schizophrenia | Eli Lilly and Company | 6.2
History
Prescription drug history
Antibiotics first arrived on the medical scene in 1932 thanks to Gerhard Domagk[15] and were dubbed "wonder drugs". The introduction of the sulfa drugs led the U.S. mortality rate from pneumonia to drop from 0.2% each year to 0.05% by 1939.[16] Antibiotics inhibit the growth or the metabolic activities of
bacteria and other microorganisms by a chemical substance of
microbial origin. Penicillin, introduced a few years later, provided a
broader spectrum of activity compared to sulfa drugs and reduced
side effects. Streptomycin, found in 1942, proved to be the first drug
effective against the cause of tuberculosis and also came to be the
best known of a long series of important antibiotics. A second
generation of antibiotics was introduced in the 1940s: aureomycin
and chloramphenicol. Aureomycin was the best known of the
second generation.
Lithium was discovered in the 19th century for nervous disorders
and its possible mood-stabilizing or prophylactic effect; it was cheap
and easily produced. As lithium fell out of favor in France,
valpromide came into play. This anticonvulsant was the origin of the drug class that eventually created the mood stabilizer category. Valpromide
had distinct psychotropic effects that were of benefit both in the treatment of acute manic states and in the maintenance treatment of manic-depressive illness. Psychotropics can either be sedative or
stimulant; sedatives aim at damping down the extremes of behavior.
Stimulants aim at restoring normality by increasing tone. Soon
arose the notion of a tranquilizer which was quite different from any
sedative or stimulant. The term tranquilizer took over the notions of
sedatives and became the dominant term in the West through the
1980s. In Japan, during this time, the term tranquilizer produced the
notion of a psyche-stabilizer and the term mood stabilizer
vanished.[17]
Premarin (conjugated estrogens, introduced in 1942) and Prempro
(a combination estrogen-progestin pill, introduced in 1995)
dominated the hormone replacement therapy (HRT) during the
1990s. HRT is not a life-saving drug, nor does it cure any disease.
HRT has been prescribed to improve one's quality of life. Doctors
prescribe estrogen for their older female patients both to treat short-term menopausal symptoms and to prevent long-term diseases. In the 1960s and early 1970s, more and more physicians began to prescribe estrogen for their female patients. Between 1991 and 1999, Premarin was listed as the most popular prescription and best-selling drug in America.[17]
The first oral contraceptive, Enovid, was approved by the FDA in 1960.
Oral contraceptives inhibit ovulation and so prevent conception.
Enovid was known to be much more effective than alternatives
including the condom and the diaphragm. As early as 1960, oral
contraceptives were available in several different strengths by every
manufacturer. In the 1980s and 1990s an increasing number of
options arose including, most recently, a new delivery system for
the oral contraceptive via a transdermal patch. In 1982, a new
version of the Pill was introduced, known as the "biphasic" pill. By
1985, a new triphasic pill was approved. Physicians began to think
of the Pill as an excellent means of birth control for young women.[17]
Stimulants such as Ritalin (methylphenidate) came to be pervasive
tools for behavior management and modification in young children.
Ritalin was first marketed in 1955 for narcolepsy; its potential users were middle-aged and the elderly. It was not until the 1980s, with the rise of the diagnosis of hyperactivity in children, that Ritalin came into widespread use for that purpose. Medical use of methylphenidate is predominantly for symptoms of attention deficit/hyperactivity disorder (ADHD). Consumption of methylphenidate in the U.S. outpaced all other countries between 1991 and 1999. Significant growth in consumption was also evident in Canada, New Zealand, Australia, and Norway. Currently, 85% of the world's methylphenidate is consumed in America.[17]
The first minor tranquilizer was Meprobamate. Only fourteen
months after it was made available, meprobamate had become the
country's largest-selling prescription drug. By 1957, meprobamate
had become the fastest-growing drug in history. The popularity of
meprobamate paved the way for Librium and Valium, two minor
tranquilizers that belonged to a new chemical class of drugs called
the benzodiazepines. These were drugs that worked chiefly as antianxiety agents and muscle relaxants. The first benzodiazepine was
Librium. Three months after it was approved, Librium had become
the most prescribed tranquilizer in the nation. Three years later,
Valium hit the shelves and was ten times more effective as a
muscle relaxant and anti-convulsant. Valium was the most versatile
of the minor tranquilizers. Later came the widespread adoption of
major tranquilizers such as chlorpromazine and the drug reserpine.
In 1970 sales began to decline for Valium and Librium, but sales of
new and improved tranquilizers, such as Xanax, introduced in 1981
for the newly created diagnosis of panic disorder, soared.[17]
Mevacor (lovastatin) was the first and most influential statin in the
American market. The 1991 launch of Pravachol (pravastatin), the
second available in the United States, and the release of Zocor
(simvastatin) made Mevacor no longer the only statin on the market.
In 1998, Viagra was released as a treatment for erectile
dysfunction.[17]
Ancient pharmacology
Using plants and plant substances to treat all kinds of diseases and
medical conditions is believed to date back to prehistoric medicine.
The Kahun Gynaecological Papyrus, the oldest known medical text
of any kind, dates to about 1800 BC and represents the first
documented use of any kind of drug.[18][19] It and other medical
papyri describe Ancient Egyptian medical practices, such as
using honey to treat infections and the legs of bee-eaters to treat
neck pains.
Ancient Babylonian medicine demonstrates the use of prescriptions in the first half of the 2nd millennium BC. Medicinal
creams and pills were employed as treatments.[20]
On the Indian subcontinent, the Atharvaveda, a sacred text
of Hinduism whose core dates from the 2nd millennium BC,
although the hymns recorded in it are believed to be older, is the
first Indic text dealing with medicine. It describes plant-based drugs
to counter diseases.[21] The earliest foundations of ayurveda were
built on a synthesis of selected ancient herbal practices, together
with a massive addition of theoretical conceptualizations,
new nosologies and new therapies dating from about 400 BC
onwards.[22] The student of Āyurveda was expected to know ten arts
that were indispensable in the preparation and application of his
medicines: distillation, operative skills, cooking, horticulture,
metallurgy, sugar manufacture, pharmacy, analysis and separation
of minerals, compounding of metals, and preparation of alkalis.
The Hippocratic Oath for physicians, attributed to 5th century BC
Greece, refers to the existence of "deadly drugs", and ancient
Greek physicians imported drugs from Egypt and elsewhere.[23]
Medieval pharmacology
Al-Kindi's 9th-century book De Gradibus and Ibn Sina (Avicenna)'s The Canon of Medicine cover a range of drugs known to medicine in the medieval Islamic world.
Medieval medicine saw advances in surgery, but few truly effective
drugs existed, beyond opium (found in such extremely popular
drugs as the "Great Rest" at the time)[24] and quinine. Folklore cures
and potentially poisonous metal-based compounds were popular
treatments. Theodoric Borgognoni (1205–1296), one of the most significant surgeons of the medieval period, was responsible for introducing and promoting important surgical advances, including basic antiseptic practice and the use of anaesthetics. Garcia de
Orta described some herbal treatments that were used.
Modern pharmacology
For most of the 19th century, drugs were not highly effective,
leading Oliver Wendell Holmes, Sr. to famously comment in 1842
that "if all medicines in the world were thrown into the sea, it would
be all the better for mankind and all the worse for the fishes".[25]:21
During the First World War, Alexis Carrel and Henry
Dakin developed the Carrel-Dakin method of treating wounds with an irrigation of Dakin's solution, a germicide which helped prevent gangrene.
In the inter-war period, the first anti-bacterial agents such as
the sulpha antibiotics were developed. The Second World War saw
the introduction of widespread and effective antimicrobial therapy
with the development and mass production of penicillin antibiotics,
made possible by the pressures of the war and the collaboration of
British scientists with the American pharmaceutical industry.
Medicines commonly used by the late 1920s
included aspirin, codeine, and morphine for
pain; digitalis, nitroglycerin, and quinine for heart disorders,
and insulin for diabetes. Other drugs included antitoxins, a few
biological vaccines, and a few synthetic drugs. In the 1930s
antibiotics emerged: first sulfa drugs, then penicillin and other
antibiotics. Drugs increasingly became "the center of medical
practice".[25]:22 In the 1950s other drugs emerged
including corticosteroids for inflammation, rauwolfia alkaloids as tranquilizers and antihypertensives, antihistamines for nasal
allergies, xanthines for asthma, and typical antipsychotics for
psychosis.[25]:23–24 As of 2007, thousands of approved drugs have
been developed. Increasingly, biotechnology is used to
discover biopharmaceuticals.[25] Recently, multi-disciplinary
approaches have yielded a wealth of new data on the development
of novel antibiotics and antibacterials and on the use of biological
agents for antibacterial therapy.[26]
In the 1950s new psychiatric drugs, notably the
antipsychotic chlorpromazine, were designed in laboratories and
slowly came into preferred use. Although often accepted as an
advance in some ways, there was some opposition, due to serious
adverse effects such as tardive dyskinesia. Patients often opposed
psychiatry and refused or stopped taking the drugs when not
subject to psychiatric control.
Governments have been heavily involved in the regulation of drug
development and drug sales. In the U.S., the Elixir Sulfanilamide disaster led to the passage of the 1938 Federal Food, Drug, and Cosmetic Act, which gave the Food and Drug Administration authority over new drugs and required manufacturers to file them with the FDA. The 1951 Durham-Humphrey Amendment required certain drugs to be sold by prescription. In 1962 a subsequent amendment required new drugs
to be tested for efficacy and safety in clinical trials.[25]:24–26
Until the 1970s, drug prices were not a major concern for doctors
and patients. As more drugs became prescribed for chronic
illnesses, however, costs became burdensome, and by the 1970s
nearly every U.S. state required or encouraged the substitution
of generic drugs for higher-priced brand names. This also led to Medicare Part D, a U.S. drug benefit law that took effect in 2006 and offers Medicare coverage for drugs.[25]:28–29
As of 2008, the United States is the leader in medical research,
including pharmaceutical development. U.S. drug prices are among
the highest in the world, and drug innovation is correspondingly
high. In 2000, U.S.-based firms developed 29 of the 75 top-selling
drugs; firms from the second-largest market, Japan, developed
eight, and the United Kingdom contributed 10. France, which
imposes price controls, developed three. Throughout the 1990s
outcomes were similar.[25]:30–31
Controversies
Controversies concerning pharmaceutical drugs include patient
access to drugs under development and not yet approved, pricing,
and environmental issues.
Access to unapproved drugs
Main articles: Named patient programs and Expanded access
Governments worldwide have created provisions for granting
access to drugs prior to approval for patients who have exhausted
all alternative treatment options and do not match clinical trial entry
criteria. Often grouped under the labels of compassionate
use, expanded access, or named patient supply, these programs
are governed by rules which vary by country defining access
criteria, data collection, promotion, and control of drug distribution.
Within the United States, pre-approval demand is generally met
through treatment IND (investigational new drug) applications
(INDs), or single-patient INDs. These mechanisms, which fall under
the label of expanded access programs, provide access to drugs for
groups of patients or individuals residing in the US. Outside the US,
Named Patient Programs provide controlled, pre-approval access to
drugs in response to requests by physicians on behalf of specific, or
"named", patients before those medicines are licensed in the
patient's home country. Through these programs, patients are able
to access drugs in late-stage clinical trials or approved in other
countries for a genuine, unmet medical need, before those drugs
have been licensed in the patient's home country.
Patients who have not been able to get access to drugs in
development have organized and advocated for greater access. In
the United States, ACT UP formed in the 1980s, and eventually
formed its Treatment Action Group in part to pressure the US
government to put more resources into discovering treatments
for AIDS and then to speed release of drugs that were under
development.
The Abigail Alliance was established in November 2001 by Frank
Burroughs in memory of his daughter, Abigail.[29] The Alliance seeks
broader availability of investigational drugs on behalf of terminally ill
patients.
In 2013, BioMarin Pharmaceutical was at the center of a high profile
debate regarding expanded access of cancer patients to
experimental drugs.[30][31]
Access to medicines and drug pricing
Main articles: Essential medicines and Societal views on patents
Essential medicines as defined by the World Health
Organization (WHO) are "those drugs that satisfy the health care
needs of the majority of the population; they should therefore be
available at all times in adequate amounts and in appropriate
dosage forms, at a price the community can afford."[32] Recent
studies have found that most of the medicines on the WHO
essential medicines list, outside of the field of HIV drugs, are not
patented in the developing world, and that the lack of widespread access to these medicines arises from issues fundamental to economic development - lack of infrastructure and poverty.[33] Médecins Sans Frontières also runs the Campaign for Access to Essential Medicines, which includes advocacy for greater resources to be devoted to currently untreatable
diseases that primarily occur in the developing world. The Access to
Medicine Index tracks how well pharmaceutical companies make
their products available in the developing world.
World Trade Organization negotiations in the 1990s, including
the TRIPS Agreement and the Doha Declaration, have centered on
issues at the intersection of international trade in pharmaceuticals
and intellectual property rights, with developed world nations
seeking strong intellectual property rights to protect investments
made to develop new drugs, and developing world nations seeking
to promote their generic pharmaceuticals industries and their ability
to make medicine available to their people via compulsory licenses.
Some have raised ethical objections specifically with respect to
pharmaceutical patents and the high prices for drugs that they
enable their proprietors to charge, which poor people in both the developed and the developing world cannot afford.[34][35] Critics
also question the rationale that exclusive patent rights and the
resulting high prices are required for pharmaceutical companies to
recoup the large investments needed for research and
development.[34] One study concluded that marketing expenditures
for new drugs often doubled the amount that was allocated for
research and development.[36] Other critics claim that patent
settlements would be costly for consumers, the health care system, and state and federal governments because they would result in delayed access to lower-cost generic medicines.[37]
Novartis fought a protracted battle with the government of India over
the patenting of its drug, Gleevec, in India, which ended up in
India's Supreme Court in a case known as Novartis v. Union of India
& Others. The Supreme Court ruled narrowly against Novartis, but
opponents of patenting drugs claimed it as a major victory
Environmental issues
Main article: Environmental impact of pharmaceuticals and personal
care products
The environmental impact of pharmaceuticals and personal care
products is controversial. PPCPs are substances used by
individuals for personal health or cosmetic reasons and the
products used by agribusiness to boost growth or health of
livestock. PPCPs comprise a diverse collection of thousands of
chemical substances, including prescription and over-the-counter
therapeutic drugs, veterinary drugs, fragrances, and cosmetics.
PPCPs have been detected in water bodies throughout the world
and ones that persist in the environment are called Environmental
Persistent Pharmaceutical Pollutants. The effects of these
chemicals on humans and the environment are not yet known, but
to date there is no scientific evidence that they have an impact on
human health.[39]
Pharmacogenomics
Pharmacogenomics (a portmanteau of pharmacology and genomics) is the study of the role of
genetics in drug response. It deals with the influence of acquired and inherited genetic variation on
drug response in patients by correlating gene expression or single-nucleotide polymorphisms with
drug absorption, distribution, metabolism and elimination, as well as drug receptor target
effects.[1][2][3] The term pharmacogenomics is often used interchangeably with pharmacogenetics.
Although both terms relate to drug response based on genetic influences, pharmacogenetics
focuses on single drug-gene interactions, while pharmacogenomics encompasses a more genome-wide association approach, incorporating genomics and epigenetics while dealing with the effects of
multiple genes on drug response.[4][5][6]
Pharmacogenomics aims to develop rational means to optimize drug therapy, with respect to the
patients' genotype, to ensure maximum efficacy with minimal adverse effects.[7] Through the
utilization of pharmacogenomics, it is hoped that drug treatments can deviate from what is dubbed the "one-dose-fits-all" approach. It attempts to eliminate the trial-and-error method of prescribing,
allowing physicians to take into consideration their patient’s genes, the functionality of these genes,
and how this may affect the efficacy of the patient’s current and/or future treatments (and where
applicable, provide an explanation for the failure of past treatments).[4] Such approaches promise the advent of "personalized medicine", in which drugs and drug combinations are optimized for each individual's unique genetic makeup.[8] Whether used to explain a patient's response (or lack thereof) to a treatment or as a predictive tool, the goal is to achieve better treatment outcomes, greater efficacy, and minimization of drug toxicities and adverse drug reactions (ADRs). For patients who lack a therapeutic response to a treatment, alternative therapies can be prescribed that best suit their requirements. In order to provide pharmacogenomic-based
recommendations for a given drug, two possible types of input can be used: genotyping or exome or
whole genome sequencing.[10] Sequencing provides many more data points, including detection of
mutations that prematurely terminate the synthesized protein (early stop codon).[10]
History
Pharmacogenomics was first recognized by Pythagoras around 510 BC, when he made a connection between the dangers of fava bean ingestion and hemolytic anemia and oxidative stress. This identification was later validated in the 1950s, when the condition, called favism, was attributed to a deficiency of G6PD.[11][12] Although the first official publication dates back to 1961,[13] the 1950s marked the unofficial beginnings of this science. Reports of prolonged paralysis and fatal reactions
linked to genetic variants in patients who lacked butyryl-cholinesterase (‘pseudocholinesterase’)
following administration of succinylcholine injection during anesthesia were first reported in
1956.[1][14] The term pharmacogenetic was first coined in 1959 by Friedrich Vogel of Heidelberg,
Germany (although some papers suggest it was 1957). In the late 1960s, twin studies supported the
inference of genetic involvement in drug metabolism, with identical twins showing remarkably similar drug responses compared to fraternal twins.[15] The term pharmacogenomics first began
appearing around the 1990s.[11]
The first FDA approval of a pharmacogenetic test was in 2005[16] (for alleles in CYP2D6 and
CYP2C19).
Drug-metabolizing enzymes
There are several known genes which are largely responsible for variances in drug metabolism and
response. For brevity, the focus here remains on the genes that are most widely accepted and utilized clinically:
 Cytochrome P450s
 VKORC1
 TPMT
Cytochrome P450
The most prevalent drug-metabolizing enzymes (DME) are the Cytochrome P450 (CYP) enzymes.
The term cytochrome P450 was coined by Omura and Sato in 1962 to describe the membrane-bound, heme-containing protein characterized by a 450 nm spectral peak when complexed
with carbon monoxide.[17] The human CYP family consists of 57 genes, with 18 families and 44
subfamilies. CYP proteins are conveniently arranged into these families and subfamilies on the basis
of similarities identified between the amino acid sequences. Enzymes that share 35-40% identity are
assigned to the same family by an Arabic numeral, and those that share 55-70% make up a
particular subfamily with a designated letter.[18] For example, CYP2D6 refers to family 2, subfamily D,
and gene number 6.
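The naming rule just described can be expressed directly in code; the small sketch below splits a CYP gene symbol into family, subfamily, and gene number. The regular expression is ours, written only to illustrate the convention, and is not part of any official nomenclature tool.

```python
# Illustrative sketch: parse a CYP gene symbol into family, subfamily, and gene number.
import re

def parse_cyp(symbol: str) -> dict:
    match = re.fullmatch(r"CYP(\d+)([A-Z])(\d+)", symbol)
    if not match:
        raise ValueError(f"Not a symbol of the form CYP<family><subfamily><gene>: {symbol}")
    family, subfamily, gene = match.groups()
    return {"family": int(family), "subfamily": subfamily, "gene": int(gene)}

print(parse_cyp("CYP2D6"))  # {'family': 2, 'subfamily': 'D', 'gene': 6}
print(parse_cyp("CYP3A4"))  # {'family': 3, 'subfamily': 'A', 'gene': 4}
```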
From a clinical perspective, the most commonly tested CYPs
include: CYP2D6, CYP2C19, CYP2C9, CYP3A4 and CYP3A5. These genes account for the
metabolism of approximately 80–90% of currently available prescription drugs.[19][20] Some of the medications that take these pathways are noted in the sections that follow.
CYP2D6
Also known as debrisoquine hydroxylase (named after the drug that led to its discovery), CYP2D6 is
the most well-known and extensively studied CYP gene.[23] It is a gene of great interest also due to its highly polymorphic nature and its involvement in the metabolism of a large number of medications (as both a major and a minor pathway). More than 100 CYP2D6 genetic variants have been identified.[22]
CYP2C19
Discovered in the early 1980s, CYP2C19 is the second most extensively studied and well
understood gene in pharmacogenomics.[21] Over 28 genetic variants have been identified for CYP2C19,[24] which affect the metabolism of several classes of drugs, such as antidepressants and proton pump inhibitors.[25]
CYP2C9
CYP2C9 constitutes the majority of the CYP2C subfamily, representing approximately 20% of the
liver content. It is involved in the metabolism of approximately 10% of all drugs, which include
medications with narrow therapeutic windows such as warfarin and tolbutamide.[25][26] There are
approximately 57 genetic variants associated with CYP2C9.[24]
CYP3A4 and CYP3A5
The CYP3A family is the most abundant in the liver, with CYP3A4 accounting for 29% of the liver content.[21] These enzymes also metabolize between 40 and 50% of current prescription drugs, with CYP3A4 accounting for 40–45% of these medications.[12] More than 11 genetic variants of CYP3A5 had been identified at the time of this publication.[24]
VKORC1
The vitamin K epoxide reductase complex subunit 1 (VKORC1) is responsible for the pharmacodynamics of warfarin. VKORC1, along with CYP2C9, is useful for identifying the risk of bleeding during warfarin administration. Warfarin works by inhibiting VKOR, which is encoded by the VKORC1 gene. Individuals with polymorphisms in this gene have an altered response to warfarin treatment.
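The direction of this gene–drug relationship can be sketched in code. The snippet below is purely illustrative: the scoring, the allele set and the category labels are simplified assumptions for exposition, not a dosing algorithm or clinical guidance.

# Illustrative sketch only: a qualitative warfarin-sensitivity flag combining a
# CYP2C9 diplotype with the VKORC1 -1639G>A genotype. Scoring and categories are
# simplified assumptions, not clinical rules.
REDUCED_FUNCTION_CYP2C9 = {"*2", "*3"}   # common reduced-function alleles

def warfarin_sensitivity(cyp2c9_alleles: tuple, vkorc1_minus1639: str) -> str:
    score = sum(allele in REDUCED_FUNCTION_CYP2C9 for allele in cyp2c9_alleles)
    score += vkorc1_minus1639.count("A")   # each A allele increases sensitivity
    if score == 0:
        return "typical sensitivity"
    if score <= 2:
        return "increased sensitivity (lower dose requirement likely)"
    return "high sensitivity (greatest bleeding risk at standard doses)"

print(warfarin_sensitivity(("*1", "*1"), "GG"))  # typical sensitivity
print(warfarin_sensitivity(("*1", "*3"), "AA"))  # high sensitivity (greatest bleeding risk at standard doses)

In practice, validated algorithms and product labeling, not a toy score like this, guide genotype-informed warfarin dosing.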
TPMT
Thiopurine methyltransferase (TPMT) catalyzes the S-methylation of thiopurines, thereby regulating
the balance between cytotoxic thioguanine nucleotide and inactive metabolites in hematopoietic
cells.[29] TPMT is heavily involved in 6-mercaptopurine (6-MP) metabolism, and both TPMT activity and TPMT genotype are known to affect the risk of toxicity. Excessive levels of 6-MP can cause myelosuppression and myelotoxicity.[30]
Codeine, clopidogrel, tamoxifen, and warfarin are a few examples of medications that follow the above metabolic pathways.
Predictive prescribing
Patient genotypes are usually categorized into the following predicted phenotypes:
 Ultra-Rapid Metabolizer: Patients with substantially increased metabolic activity;
 Extensive Metabolizer: Normal metabolic activity;
 Intermediate Metabolizer: Patients with reduced metabolic activity; and
 Poor Metabolizer: Patients with little to no functional metabolic activity.
The two extremes of this spectrum are the Poor Metabolizers and Ultra-Rapid Metabolizers. Efficacy
of a medication is not only based on the above metabolic statuses, but also the type of drug
consumed. Drugs can be classified into two main groups: active drugs and pro-drugs. Active drugs are drugs that are inactivated during metabolism, whereas pro-drugs are inactive until they are metabolized.
An overall view of how pharmacogenomics functions in clinical practice: the raw genotype results are translated into the physical trait, the phenotype, and based on these observations optimal dosing is evaluated.[29]
For example, consider two patients who are taking codeine for pain relief. Codeine is a pro-drug, so it requires conversion from its inactive form to its active form. The active form of codeine is morphine, which provides the therapeutic effect of pain relief. If person A receives one *1 allele each from mother and father to code for the CYP2D6 gene, then that person is considered to have an extensive metabolizer (EM) phenotype, as the *1 allele is considered to have normal function (this would be represented as CYP2D6 *1/*1). If person B, on the other hand, received one *1 allele from the mother and a *4 allele from the father, that individual would be an intermediate metabolizer (IM) (the genotype would be CYP2D6 *1/*4). Although both individuals are taking the same dose of codeine, person B could potentially lack the therapeutic benefits of codeine due to the decreased conversion rate of codeine to its active counterpart, morphine.
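The genotype-to-phenotype step in this example can be written down as a small sketch. The allele activity values and cut-offs below are simplified, illustrative assumptions (loosely modelled on published activity-score approaches), not clinical rules; only the two alleles from the example are included.

# Minimal sketch of the genotype-to-phenotype step in the codeine example above.
# Allele activity values and cut-offs are simplified, illustrative assumptions.
CYP2D6_ACTIVITY = {"*1": 1.0,   # normal function
                   "*4": 0.0}   # no function

def cyp2d6_phenotype(allele_a: str, allele_b: str) -> str:
    score = CYP2D6_ACTIVITY[allele_a] + CYP2D6_ACTIVITY[allele_b]
    if score == 0:
        return "Poor Metabolizer"
    if score < 1.5:
        return "Intermediate Metabolizer"
    if score <= 2.0:
        return "Extensive Metabolizer"
    return "Ultra-Rapid Metabolizer"   # e.g. extra functional gene copies

print(cyp2d6_phenotype("*1", "*1"))   # person A: Extensive Metabolizer
print(cyp2d6_phenotype("*1", "*4"))   # person B: Intermediate Metabolizer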
Each phenotype is based upon the allelic variation within the individual genotype. However, several genetic events can influence the same phenotypic trait, and establishing genotype-to-phenotype relationships can thus be far from straightforward for many enzymatic patterns. For instance, the influence of the CYP2D6 *1/*4 allelic variant on the clinical outcome in patients treated with tamoxifen remains debated today. In oncology, genes coding for DPD, UGT1A1, TPMT and CDA, involved in the pharmacokinetics of 5-FU/capecitabine, irinotecan, 6-mercaptopurine and gemcitabine/cytarabine, respectively, have all been described as being highly polymorphic. A strong body of evidence suggests that patients affected by these genetic
polymorphisms will experience severe/lethal toxicities upon drug intake, and that pre-therapeutic
screening does help to reduce the risk of treatment-related toxicities through adaptive dosing
strategies.[31]
The genetic basis for polymorphic expression of a gene is identified through intronic or exonic SNPs, which removes the need for separate mechanisms to explain the variability in drug metabolism. SNP-based variations in membrane receptors can lead to multidrug resistance (MDR) and drug–drug interactions. Drug-induced toxicity and many adverse effects can likewise be explained by genome-wide association studies (GWAS).[32]
Applications
The list below provides a few of the more commonly known applications of pharmacogenomics:[33]
 Improve drug safety, and reduce ADRs;
 Tailor treatments to meet patients' unique genetic pre-disposition, identifying optimal dosing;
 Improve drug discovery targeted to human disease; and
 Improve proof of principle for efficacy trials.
Pharmacogenomics may be applied to several areas of medicine, including Pain
Management, Cardiology, Oncology, and Psychiatry. A place may also exist in Forensic Pathology,
in which pharmacogenomics can be used to determine the cause of death in drug-related deaths
where no findings emerge using autopsy.[34]
In cancer treatment, pharmacogenomics tests are used to identify which patients are most likely to
respond to certain cancer drugs. In behavioral health, pharmacogenomic tests provide tools for physicians and caregivers to better manage medication selection and side-effect amelioration. Pharmacogenomics is also known as companion diagnostics, meaning tests are bundled with drugs. Examples include the KRAS test with cetuximab and the EGFR test with gefitinib. Besides efficacy, germline pharmacogenetics can help to identify patients likely to experience severe toxicities when given cytotoxics whose detoxification is impaired by genetic polymorphism, the canonical example being 5-FU.[35]
In cardiovascular disorders, the main concern is response to drugs including warfarin, clopidogrel, beta blockers, and statins.[10]
Example case studies
Case A – Antipsychotic Adverse Drug Reaction
Patient A suffers from schizophrenia. Their treatment included a combination of ziprasidone, olanzapine, trazodone and benztropine. The patient experienced dizziness and sedation, so they were tapered off ziprasidone and olanzapine and transitioned to quetiapine. Trazodone was discontinued. The patient then experienced excessive sweating, tachycardia and neck pain, gained considerable weight and had hallucinations. Five months later, quetiapine was tapered and discontinued, and ziprasidone was re-introduced into their treatment due to the excessive weight gain. Although the patient lost the excess weight they had gained, they then developed muscle stiffness, cogwheeling, tremor and night sweats. When benztropine was added they experienced blurry vision. After an additional five months, the patient was switched from ziprasidone to aripiprazole. Over the course of 8 months, patient A gradually experienced more weight gain and sedation, and developed difficulty with their gait, stiffness, cogwheeling and dyskinetic ocular movements. A pharmacogenomic test later showed that the patient had a CYP2D6 *1/*41 genotype, which has a predicted phenotype of IM, and a CYP2C19 *1/*2 genotype, also with a predicted phenotype of IM.
Case B – Pain Management [37]
Patient B is a woman who gave birth by caesarian section. Her physician prescribed codeine for
post-caesarian pain. She took the standard prescribed dose but experienced nausea and dizziness while she was taking codeine. She also noticed that her breastfed infant was lethargic and feeding poorly. When the patient mentioned these symptoms to her physician, they recommended that she discontinue codeine use. Within a few days, both the patient's and her infant's symptoms were no longer present. It is assumed that, had the patient undergone a pharmacogenomic test, it would have revealed a duplication of the gene CYP2D6, placing her in the ultra-rapid metabolizer (UM) category and explaining her ADRs to codeine use.
Case C – FDA Warning on Codeine Overdose for Infants[38]
On February 20, 2013, the FDA released a statement addressing a serious concern regarding the connection between children who are CYP2D6 UMs and fatal reactions to codeine following tonsillectomy and/or adenoidectomy (surgery to remove the tonsils and/or adenoids). The agency released its strongest Boxed Warning to highlight the dangers of CYP2D6 UMs consuming codeine. Codeine is converted to morphine by CYP2D6, and those who have UM phenotypes are at risk of producing large amounts of morphine due to the increased function of the gene. Morphine can rise to life-threatening or fatal levels, as became evident with the deaths of three children in August 2012.
Polypharmacy
A potential role for pharmacogenomics would be to reduce the occurrence of polypharmacy. It is theorized that with tailored drug treatments, patients will not need to take several medications intended to treat the same condition. In doing so, they could potentially minimize the occurrence of ADRs, have improved treatment outcomes, and save costs by avoiding the purchase of extraneous medications. An example of this can be found in Psychiatry, where patients tend to receive more medications than even age-matched non-psychiatric patients. This has been associated with an increased risk of inappropriate prescribing.[39]
The need for pharmacogenomics-tailored drug therapies may be most evident from a survey conducted by the Slone Epidemiology Center at Boston University from February 1998 to April 2007. The study found that an average of 82% of adults in the United States were taking at least one medication (prescription or nonprescription drug, vitamin/mineral, or herbal/natural supplement) and 29% were taking five or more. The study suggested that those aged 65 years or older continue to be the biggest consumers of medications, with 17–19% in this age group taking at least ten medications in a given week. Polypharmacy has also been shown to have increased since 2000, from 23% to 29%.[40]
Drug labeling
The U.S. Food and Drug Administration (FDA) appears to be heavily invested in the science of pharmacogenomics,[41] as demonstrated by the more than 120 FDA-approved drugs that include pharmacogenomic biomarkers in their labels.[42] On May 22, 2005, the FDA issued its first Guidance for Industry: Pharmacogenomic Data Submissions, which clarified the type of pharmacogenomic data required to be submitted to the FDA and when.[43] Experts recognized the importance of the FDA's acknowledgement that pharmacogenomics experiments will not bring negative regulatory consequences.[44] The FDA released its latest guidance, Clinical Pharmacogenomics (PGx): Premarket Evaluation in Early-Phase Clinical Studies and Recommendations for Labeling, in January 2013. The guidance is intended to address the use of genomic information during drug development and regulatory review processes.
Challenges
Consecutive phases and associated challenges in Pharmacogenomics.[45]
Although there appears to be a general acceptance of the basic tenet of pharmacogenomics
amongst physicians and healthcare professionals,[46] several challenges exist that slow the uptake,
implementation, and standardization of pharmacogenomics. Some of the concerns raised by
physicians include:[6][46][47]
 Limitations on how to apply the test in clinical practice and treatment;
 A general feeling of a lack of availability of the test;
 The understanding and interpretation of evidence-based research; and
 Ethical, legal and social issues.
Issues surrounding the availability of the test include:[45]
 The lack of availability of scientific data: although a considerable number of DMEs are involved in the metabolic pathways of drugs, only a fraction have sufficient scientific data to validate their use within a clinical setting; and
 Demonstrating the cost-effectiveness of pharmacogenomics: publications on the pharmacoeconomics of pharmacogenomics are scarce, so sufficient evidence does not at this time exist to validate the cost-effectiveness and cost-consequences of the test.
Although other factors contribute to the slow progression of pharmacogenomics (such as developing
guidelines for clinical use), the above factors appear to be the most prevalent.
Controversies
Some alleles that vary in frequency between specific populations have been shown to be associated with differential responses to specific drugs. The beta blocker atenolol is an antihypertensive medication that has been shown to lower blood pressure more significantly in Caucasian patients than in African American patients in the United States. This observation suggests
that Caucasian and African American populations have different alleles governing oleic
acid biochemistry, which react differentially with atenolol.[48] Similarly, hypersensitivity to the
antiretroviral drug abacavir is strongly associated with a single-nucleotide polymorphism that varies
in frequency between populations.[49]
The FDA approval of the drug BiDil, with a label specifying African-Americans with congestive heart failure, produced a storm of controversy over race-based medicine[50] and fears of genetic
stereotyping, even though the label for BiDil did not specify any genetic variants but was based on
racial self-identification.[51][52]
Future of Pharmacogenomics
Computational advances in pharmacogenomics have proven to be a boon for research. As a simple example, growth in data-storage capacity over the past decade has enabled a human genome sequence to be investigated more cheaply and in more detail with regard to the effects, risks, and safety concerns of drugs and other substances. Such computational advances are expected to continue.[53] The aim is to use genome sequence data to make decisions that minimise negative impacts on, say, a patient or the health industry in general. Much recent biomedical research on pharmacogenomics stems from combinatorial chemistry,[54] genomic mining, omic technologies and high-throughput screening. In order for the field to grow, knowledge enterprises and businesses must work more closely together and adopt simulation strategies. Consequently, more importance must be placed on the role of computational biology in safety and risk assessments. Hence the growing need for, and importance of, managing large, complex data sets and extracting information by integrating disparate data, so that advances can be made in improving human health.
Microorganism
A cluster of Escherichia coli bacteria magnified 10,000 times
A microorganism (from the Greek: μικρός, mikros, "small" and ὀργανισμός, organismós, "organism") is a microscopic living organism, which may be single-celled[1] or multicellular. The study of microorganisms is called microbiology, a subject that began with the discovery of microorganisms in 1674 by Antonie van Leeuwenhoek, using a microscope of his own design.
Microorganisms are very diverse and include all bacterial, archaean and most of the protozoan species on the planet. This group also contains some species of fungi, algae, and certain animals, such as rotifers. Many macroscopic animals and plants have microscopic juvenile stages. Some microbiologists also classify viruses (and viroids) as microorganisms, but others consider these as nonliving.[2][3]
Microorganisms live in every part of the biosphere, including soil, hot springs, "seven miles deep" in
the ocean, "40 miles high" in the atmosphere and inside rocks far down within the Earth's crust (see
also endolith).[4] Microorganisms, under certain test conditions, have been observed to thrive in the
vacuum of outer space.[5][6] The total amount of soil and subsurface bacterial carbon is estimated as 5 × 10¹⁷ g, or the "weight of the United Kingdom".[4] The mass of prokaryote microorganisms — which
includes bacteria and archaea, but not the nucleated eukaryote microorganisms — may be as much
as 0.8 trillion tons of carbon (of the total biosphere mass, estimated at between 1 and 4 trillion
tons).[7] On 17 March 2013, researchers reported data that suggested microbial life forms thrive in
the Mariana Trench, the deepest spot in the Earth's oceans.[8] Other researchers reported related
studies that microorganisms thrive inside rocks up to 580 m (1,900 ft; 0.36 mi) below the sea floor
under 2,590 m (8,500 ft; 1.61 mi) of ocean off the coast of the northwestern United States,[8][10] as
well as 2,400 m (7,900 ft; 1.5 mi) beneath the seabed off Japan.[11] On 20 August 2014, scientists
confirmed the existence of microorganisms living 800 m (2,600 ft; 0.50 mi) below the ice
of Antarctica.[12][13] According to one researcher, "You can find microbes everywhere — they're
extremely adaptable to conditions, and survive wherever they are."[8]
Microorganisms are crucial to nutrient recycling in ecosystems as they act as decomposers. As
some microorganisms can fix nitrogen, they are a vital part of the nitrogen cycle, and recent studies
indicate that airborne microorganisms may play a role in precipitation and
weather.[14] Microorganisms are also exploited in biotechnology, both in traditional food and beverage
preparation, and in modern technologies based on genetic engineering. A small proportion of
microorganisms are pathogenic and cause disease and even death in plants and
animals.[15] Microorganisms are often referred to as microbes, but this is usually used in reference to
pathogens.
Evolution
Further information: Timeline of evolution and Experimental evolution
Single-celled microorganisms were the first forms of life to develop on Earth, approximately 3–
4 billion years ago.[16][17][18] Further evolution was slow,[19] and for about 3 billion years in
the Precambrian eon, all organisms were microscopic.[20] So, for most of the history of life on Earth,
the only forms of life were microorganisms.[21] Bacteria, algae and fungi have been identified
in amber that is 220 million years old, which shows that the morphology of microorganisms has
changed little since the Triassic period.[22] The newly discovered biological role played by nickel,
however — especially that engendered by volcanic eruptions from the Siberian Traps (site of the
modern city of Norilsk) — is thought to have accelerated the evolution of methanogens towards the
end of the Permian–Triassic extinction event.[23]
Microorganisms tend to have a relatively fast rate of evolution. Most microorganisms can reproduce
rapidly, and bacteria are also able to freely exchange genes
through conjugation, transformation and transduction, even between widely divergent
species.[24] This horizontal gene transfer, coupled with a high mutation rate and many other means
of genetic variation, allows microorganisms to swiftly evolve (via natural selection) to survive in new
environments and respond to environmental stresses. This rapid evolution is important in medicine,
as it has led to the recent development of "super-bugs", pathogenic bacteria that are resistant to
modern antibiotics.
Pre-microbiology
The possibility that microorganisms exist was discussed for many centuries before their discovery in
the 17th century. The existence of unseen microbiological life was postulated by Jainism, which is
based on Mahavira's teachings as early as the 6th century BCE.[26] Paul Dundas notes that Mahavira
asserted the existence of unseen microbiological creatures living in earth, water, air and
fire. The Jain scriptures also describe nigodas, which are sub-microscopic creatures living in large
clusters and having a very short life, which are said to pervade every part of the universe, even the
tissues of plants and animals. The earliest known idea to indicate the possibility of diseases
spreading by yet unseen organisms was that of the Roman scholar Marcus Terentius Varro in a 1st-century BC book titled On Agriculture, in which he warns against locating a homestead near swamps:
… and because there are bred certain minute creatures that cannot be seen by the eyes, which float
in the air and enter the body through the mouth and nose and they cause serious diseases.[29]
In The Canon of Medicine (1020), Abū Alī ibn Sīnā (Avicenna) hypothesized that tuberculosis and
other diseases might be contagious.
In 1546, Girolamo Fracastoro proposed that epidemic diseases were caused by transferable
seedlike entities that could transmit infection by direct or indirect contact, or even without contact
over long distances.
All these early claims about the existence of microorganisms were speculative and while grounded
on indirect observations, they were not based on direct observation of microorganisms or
systematized empirical investigation, e.g. experimentation. Microorganisms were neither proven,
observed, nor accurately described until the 17th century. The reason for this was that all these early
studies lacked the microscope.
History of microorganisms' discovery
Antonie van Leeuwenhoek, the first microbiologist and the first to observe microorganisms using a microscope
Lazzaro Spallanzani showed that boiling a broth stopped it from decaying
Louis Pasteur showed that Spallanzani's findings held even if air could enter through a filter that kept particles
out
Robert Koch showed that microorganisms caused disease
Antonie Van Leeuwenhoek (1632–1723) was one of the first people to observe microorganisms,
using microscopes of his own design.[32] Robert Hooke, a contemporary of Leeuwenhoek, also used
microscopes to observe microbial life; his 1665 book Micrographia describes these observations and
coined the term cell.
Before Leeuwenhoek's discovery of microorganisms in 1675, it had been a mystery
why grapes could be turned into wine, milk into cheese, or why food would spoil. Leeuwenhoek did
not make the connection between these processes and microorganisms, but using a microscope, he
did establish that there were signs of life that were not visible to the naked eye.[33][34] Leeuwenhoek's
discovery, along with subsequent observations by Spallanzani and Pasteur, ended the long-held
belief that life spontaneously appeared from non-living substances during the process of spoilage.
Lazzaro Spallanzani (1729–1799) found that boiling broth would sterilise it, killing any
microorganisms in it. He also found that new microorganisms could only settle in a broth if the broth
was exposed to air.
Louis Pasteur (1822–1895) expanded upon Spallanzani's findings by exposing boiled broths to the
air, in vessels that contained a filter to prevent all particles from passing through to the growth
medium, and also in vessels with no filter at all, with air being admitted via a curved tube that would
not allow dust particles to come in contact with the broth. By boiling the broth beforehand, Pasteur
ensured that no microorganisms survived within the broths at the beginning of his experiment.
Nothing grew in the broths in the course of Pasteur's experiment. This meant that the living
organisms that grew in such broths came from outside, as spores on dust, rather than
spontaneously generated within the broth. Thus, Pasteur dealt the death blow to the theory of
spontaneous generation and supported germ theory.
In 1876, Robert Koch (1843–1910) established that microorganisms can cause disease. He found
that the blood of cattle which were infected with anthrax always had large numbers of Bacillus
anthracis. Koch found that he could transmit anthrax from one animal to another by taking a small
sample of blood from the infected animal and injecting it into a healthy one, and this caused the
healthy animal to become sick. He also found that he could grow the bacteria in a nutrient broth,
then inject it into a healthy animal, and cause illness. Based on these experiments, he devised
criteria for establishing a causal link between a microorganism and a disease and these are now
known as Koch's postulates.[35] Although these postulates cannot be applied in all cases, they do
retain historical importance to the development of scientific thought and are still being used today.
On 8 November 2013, scientists reported the discovery of what may be the earliest signs of life on
Earth—the oldest complete fossils of a microbial mat (associated with sandstone in Western
Australia) estimated to be 3.48 billion years old.
Classification and structure
Evolutionary tree showing the common ancestry of all three domains of life.[39] Bacteria are colored blue, eukaryotes red, and archaea green. Relative positions of some phyla are shown around the tree.
Microorganisms can be found almost anywhere in the taxonomic organization of life on the
planet. Bacteria and archaea are almost always microscopic, while a number of eukaryotes are also
microscopic, including most protists, some fungi, as well as some animals and plants. Viruses are
generally regarded as not living and therefore not considered as microorganisms, although the field
of microbiology also encompasses the study of viruses.
Prokaryotes
Prokaryotes are organisms that lack a cell nucleus and other membrane-bound organelles. They
are almost always unicellular, although some species such as myxobacteria can aggregate into
complex structures as part of their life cycle.
Consisting of two domains, bacteria and archaea, the prokaryotes are the most diverse and
abundant group of organisms on Earth and inhabit practically all environments where the
temperature is below +140 °C. They are found in water, soil, air, animals' gastrointestinal tracts, hot
springs and even deep beneath the Earth's crust in rocks.[40] Practically all surfaces that have not
been specially sterilized are covered by prokaryotes. The number of prokaryotes on Earth is
estimated to be around five million trillion trillion, or 5 × 10³⁰, accounting for at least half
the biomass on Earth.[41]
Bacteria
Staphylococcus aureus bacteria magnified about 10,000x
Almost all bacteria are invisible to the naked eye, with a few extremely rare exceptions, such
as Thiomargarita namibiensis.[42] They lack a nucleus and other membrane-bound organelles, and
can function and reproduce as individual cells, but often aggregate in multicellular colonies.[43] Their
genome is usually a single loop of DNA, although they can also harbor small pieces of DNA
called plasmids. These plasmids can be transferred between cells through bacterial conjugation.
Bacteria are surrounded by a cell wall, which provides strength and rigidity to their cells. They
reproduce by binary fission or sometimes by budding, but do not undergo meiotic sexual
reproduction. However, many bacterial species can transfer DNA between individual cells by a
process referred to as natural transformation.[44][45] Some species form extraordinarily resilient spores,
but for bacteria this is a mechanism for survival, not reproduction. Under optimal conditions bacteria
can grow extremely rapidly and can double as quickly as every 20 minutes.[46]
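A 20-minute doubling time implies exponential growth. The short worked example below uses only the figure quoted above and assumes idealised, unlimited growth, which real cultures quickly depart from as nutrients run out:

# Worked example of the 20-minute doubling time quoted above: under idealised
# conditions the population after t minutes is N(t) = N0 * 2 ** (t / 20).
def population(n0: int, minutes: float, doubling_time: float = 20.0) -> float:
    return n0 * 2 ** (minutes / doubling_time)

for hours in (1, 4, 8):
    print(f"after {hours} h: {population(1, hours * 60):,.0f} cells")
# after 1 h: 8 cells
# after 4 h: 4,096 cells
# after 8 h: 16,777,216 cells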
Archaea
Archaea are also single-celled organisms that lack nuclei. In the past, the differences between
bacteria and archaea were not recognised and archaea were classified with bacteria as part of the
kingdom Monera. However, in 1990 the microbiologist Carl Woese proposed the three-domain
system that divided living things into bacteria, archaea and eukaryotes.[47] Archaea differ from
bacteria in both their genetics and biochemistry. For example, while bacterial cell membranes are
made from phosphoglycerides with ester bonds, archaean membranes are made of ether lipids.[48]
Archaea were originally described in extreme environments, such as hot springs, but have since
been found in all types of habitats.[49] Only now are scientists beginning to realize how common
archaea are in the environment, with crenarchaeota being the most common form of life in the
ocean, dominating ecosystems below 150 m in depth.[50][51] These organisms are also common in soil
and play a vital role in ammonia oxidation.[52]
Eukaryotes
Most living things that are visible to the naked eye in their adult form are eukaryotes,
including humans. However, a large number of eukaryotes are also microorganisms.
Unlike bacteria and archaea, eukaryotes contain organelles such as the cell nucleus, the Golgi
apparatus and mitochondria in their cells. The nucleus is an organelle that houses the DNA that
makes up a cell's genome. DNA itself is arranged in complex chromosomes.[53] Mitochondria are
organelles vital in metabolism as they are the site of the citric acid cycle and oxidative
phosphorylation. They evolved from symbiotic bacteria and retain a remnant genome.[54] Like
bacteria, plant cells have cell walls, and contain organelles such as chloroplasts in addition to the
organelles in other eukaryotes. Chloroplasts produce energy from light by photosynthesis, and were
also originally symbiotic bacteria.[54]
Unicellular eukaryotes consist of a single cell throughout their life cycle. This qualification is
significant since most multicellular eukaryotes consist of a single cell called a zygote only at the
beginning of their life cycles. Microbial eukaryotes can be either haploid or diploid, and some
organisms have multiple cell nuclei.[55]
Unicellular eukaryotes usually reproduce asexually by mitosis under favorable conditions. However,
under stressful conditions such as nutrient limitations and other conditions associated with DNA
damage, they tend to reproduce sexually by meiosis and syngamy.[56]
Protists
Of eukaryotic groups, the protists are most commonly unicellular and microscopic. This is a highly
diverse group of organisms that are not easy to
classify.[57][58] Several algae species are multicellular protists, and slime molds have unique life cycles
that involve switching between unicellular, colonial, and multicellular forms.[59] The number of species
of protists is unknown since we may have identified only a small portion. Studies from 2001-2004
have shown that a high degree of protist diversity exists in oceans, deep sea-vents, river sediment
and an acidic river which suggests that a large number of eukaryotic microbial communities have yet
to be discovered.[60][61]
Animals
Some micro animals are multicellular but at least one animal group, Myxozoa, is unicellular in its
adult form. Microscopic arthropods include dust mites and spider mites.
Microscopic crustaceans include copepods, some cladocera and water bears. Many nematodes are
also too small to be seen with the naked eye. A common group of microscopic animals are
the rotifers, which are filter feeders that are usually found in fresh water. Some micro-animals
reproduce both sexually and asexually and may reach new habitats by producing eggs which can
survive harsh environments that would kill the adult animal. However, some simple animals, such as
rotifers, tardigrades and nematodes, can dry out completely and remain dormant for long periods of
time.[62]
Fungi
The fungi have several unicellular species, such as baker's yeast (Saccharomyces cerevisiae) and
fission yeast (Schizosaccharomyces pombe). Some fungi, such as the pathogenic yeast Candida
albicans, can undergo phenotypic switching and grow as single cells in some environments,
and filamentous hyphae in others.[63] Fungi reproduce both asexually, by budding or binary fission, as well as by producing spores, which are called conidia when produced asexually, or basidiospores when produced sexually.
Plants
The green algae are a large group of photosynthetic eukaryotes that include many microscopic
organisms. Although some green algae are classified as protists, others such as charophyta are
classified with embryophyte plants, which are the most familiar group of land plants. Algae can grow
as single cells, or in long chains of cells. The green algae include unicellular and colonial flagellates,
usually but not always with two flagella per cell, as well as various colonial, coccoid, and filamentous
forms. In the Charales, which are the algae most closely related to higher plants, cells differentiate
into several distinct tissues within the organism. There are about 6000 species of green algae.[64]
Habitats and ecology
Microorganisms are found in almost every habitat present in nature, even in hostile environments such as the poles, deserts, geysers, rocks, and the deep sea. Some types of microorganisms have
adapted to the extreme conditions and sustained colonies; these organisms are known
as extremophiles. Extremophiles have been isolated from rocks as much as 7 kilometres below the
Earth's surface,[65] and it has been suggested that the amount of living organisms below the Earth's
surface may be comparable with the amount of life on or above the surface.[40] Extremophiles have
been known to survive for a prolonged time in a vacuum, and can be highly resistant to radiation,
which may even allow them to survive in space.[66] Many types of microorganisms have
intimate symbiotic relationships with other larger organisms; some of which are mutually beneficial
(mutualism), while others can be damaging to the host organism (parasitism). If microorganisms can cause disease in a host, they are known as pathogens, and they are then sometimes referred to as microbes.
Extremophiles
Extremophiles are microorganisms that have adapted so that they can survive and even thrive in
conditions that are normally fatal to most life-forms. For example, some species have been found in
the following extreme environments:
 Temperature: as high as 130 °C (266 °F),[67] as low as −17 °C (1 °F)[68]
 Acidity/alkalinity: less than pH 0,[69] up to pH 11.5[70]
 Salinity: up to saturation[71]
 Pressure: up to 1,000–2,000 atm, down to 0 atm (e.g. vacuum of space)[72]
 Radiation: up to 5 kGy[73]
Extremophiles are significant in different ways. They extend terrestrial life into much of the
Earth's hydrosphere, crust and atmosphere, their specific evolutionary adaptation mechanisms to
their extreme environment can be exploited in bio-technology, and their very existence under such
extreme conditions increases the potential for extraterrestrial life.[74]
Soil microorganisms
The nitrogen cycle in soils depends on the fixation of atmospheric nitrogen. One way this can occur
is in the nodules in the roots of legumes that contain symbiotic bacteria of the
genera Rhizobium, Mesorhizobium, Sinorhizobium, Bradyrhizobium, and Azorhizobium.[75]
Symbiotic microorganisms
Symbiotic microorganisms such as fungi and algae form an association in lichen. Certain fungi
form mycorrhizal symbioses with trees that increase the supply of nutrients to the tree.
Applications
Microorganisms are vital to humans and the environment, as they participate in
the carbon and nitrogen cycles, as well as fulfilling other vital roles in virtually all ecosystems, such
as recycling other organisms' dead remains and waste products through decomposition.
Microorganisms also have an important place in most higher-order multicellular organisms
as symbionts. Many blame the failure of Biosphere 2 on an improper balance of microorganisms.[76]
Digestion
Some forms of bacteria that live in animals' stomachs help in their digestion. For example, cows
have a variety of different microorganisms in their stomachs that are essential in their digestion of
grass and hay.
The gastrointestinal tract contains an immensely complex ecology of microorganisms known as
the gut flora. A typical person harbors more than 500 distinct species of bacteria, representing
dozens of different lifestyles and capabilities. The composition and distribution of this menagerie
varies with age, state of health and diet.
The number and type of bacteria in the gastrointestinal tract vary dramatically by region. In healthy
individuals the stomach and proximal small intestine contain few microorganisms, largely a result of
the bacteriocidal activity of gastric acid; those that are present are aerobes and facultative
anaerobes. One interesting testimony to the ability of gastric acid to suppress bacterial populations
is seen in patients with achlorhydria, a genetic condition which prevents secretion of gastric acid.
Such patients, who are otherwise healthy, may have as many as 10,000 to 100,000,000
microorganisms per ml of stomach contents.
In sharp contrast to the stomach and small intestine, the contents of the colon literally teem with
bacteria, predominantly strict anaerobes (bacteria that survive only in environments virtually devoid
of oxygen). Between these two extremes is a transitional zone, usually in the ileum, where moderate
numbers of both aerobic and anaerobic bacteria are found.
The gastrointestinal tract is sterile at birth, but colonization typically begins within a few hours of
birth, starting in the small intestine and progressing caudally over a period of several days. In most
circumstances, a "mature" microbial flora is established by 3 to 4 weeks of age.
It is also clear that microbial populations exert a profound effect on structure and function of the
digestive tract. For example:
The morphology of the intestine of germ-free animals differs considerably from normal animals - villi
of the small intestine are remarkably regular, the rate of epithelial cell renewal is reduced and, as one
would expect, the number and size of Peyer's patches is reduced. The cecum of germ-free rats is
roughly 10 times the size of that in a conventional rat. Bacteria in the intestinal lumen metabolize a
variety of sterols and steroids. For example, bacteria convert the bile salt cholic acid to deoxycholic
acid. Small intestinal bacteria also have an important role in sex steroid metabolism. Finally,
bacterial populations in the large intestine digest carbohydrates, proteins and lipids that escape
digestion and absorption in small intestine. This fermentation, particularly of cellulose, is of critical
importance to herbivores like cattle and horses which make a living by consuming plants. However,
it seems that even species like humans and rodents derive significant benefit from the nutrients
liberated by intestinal microorganisms.
Food production
Soil
Microbes can make nutrients and minerals in the soil available to plants, produce hormones that
spur growth, stimulate the plant immune system and trigger or dampen stress responses. In general
a more diverse soil microbiome results in fewer plant diseases and higher yield.[77]
Fermentation
Microorganisms are used in brewing, wine making, baking, pickling and other food-making
processes.
They are also used to control the fermentation process in the production of cultured dairy
products such as yogurt and cheese. The cultures also provide flavour and aroma, and inhibit
undesirable organisms.[78]
Hygiene
Hygiene is the avoidance of infection or food spoiling by eliminating microorganisms from the
surroundings. As microorganisms, in particular bacteria, are found virtually everywhere, the levels of harmful microorganisms can at best be reduced to acceptable levels rather than eliminated entirely. However, in some cases, it is required that an object or substance be completely sterile, i.e. devoid of all living entities and viruses. A good example of this is a hypodermic needle.
In food preparation microorganisms are reduced by preservation methods (such as the addition
of vinegar), clean utensils used in preparation, short storage periods, or by cool temperatures. If
complete sterility is needed, the two most common methods are irradiation and the use of
an autoclave, which resembles a pressure cooker.
There are several methods for investigating the level of hygiene in a sample of food, drinking water,
equipment, etc. Water samples can be filtrated through an extremely fine filter. This filter is then
placed in a nutrient medium. Microorganisms on the filter then grow to form a visible colony. Harmful
microorganisms can be detected in food by placing a sample in a nutrient broth designed to enrich
the organisms in question. Various methods, such as selective media or polymerase chain reaction,
can then be used for detection. The hygiene of hard surfaces, such as cooking pots, can be tested
by touching them with a solid piece of nutrient medium and then allowing the microorganisms to
grow on it.
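The colony counts obtained this way are usually converted back to a concentration in the original sample. The sketch below shows only that arithmetic; the volumes, dilution factors and counts are invented example values:

# Colony-count arithmetic behind filtration and dilution-plating methods.
# The sample volumes and colony counts are invented example values.
def cfu_per_ml(colonies_counted: int, volume_tested_ml: float, dilution_factor: float = 1.0) -> float:
    """Colony-forming units per millilitre of the original sample."""
    return colonies_counted * dilution_factor / volume_tested_ml

# 100 ml of water filtered, no dilution, 38 colonies grow on the filter:
print(cfu_per_ml(38, 100))        # 0.38 CFU/ml
# 1 ml of a 1:1000 dilution plated, 52 colonies counted:
print(cfu_per_ml(52, 1, 1000))    # 52000.0 CFU/ml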
There are no conditions where all microorganisms would grow, and therefore often several methods
are needed. For example, a food sample might be analyzed on three different nutrient
mediums designed to indicate the presence of "total" bacteria (conditions where many, but not all,
bacteria grow), molds (conditions where the growth of bacteria is prevented by, e.g., antibiotics)
and coliform bacteria (these indicate a sewage contamination).
Water treatment
The majority of all oxidative sewage treatment processes rely on a large range of microorganisms to
oxidise organic constituents which are not amenable to sedimentation or flotation. Anaerobic
microorganisms are also used to reduce sludge solids producing methane gas (amongst other
gases) and a sterile mineralised residue. In potable water treatment, one method, the slow sand
filter, employs a complex gelatinous layer composed of a wide range of microorganisms to remove
both dissolved and particulate material from raw water.[79]
Energy
Microorganisms are used in fermentation to produce ethanol,[80] and in biogas reactors to
produce methane. Scientists are researching the use of algae to produce liquid fuels,[82] and bacteria
to convert various forms of agricultural and urban waste into usable fuels.[83]
Chemicals, enzymes
Microorganisms are used in the commercial and industrial production of many chemicals, enzymes and other bioactive molecules.
Examples of organic acids produced include:
 Acetic acid: produced by the bacterium Acetobacter aceti and other acetic acid bacteria (AAB)
 Butyric acid (butanoic acid): produced by the bacterium Clostridium butyricum
 Lactic acid: produced by Lactobacillus and other bacteria commonly called lactic acid bacteria (LAB)
 Citric acid: produced by the fungus Aspergillus niger
Microorganisms are also used for the preparation of bioactive molecules and enzymes.
 Streptokinase, produced by the bacterium Streptococcus and modified by genetic engineering, is used as a clot buster for removing clots from the blood vessels of patients who have suffered a myocardial infarction (heart attack).
 Cyclosporin A is a bioactive molecule used as an immunosuppressive agent in organ transplantation.
 Statins, produced by the yeast Monascus purpureus, are commercialised as blood-cholesterol-lowering agents which act by competitively inhibiting the enzyme responsible for the synthesis of cholesterol.
Science
Microorganisms are essential tools in biotechnology, biochemistry, genetics, and molecular biology.
The budding yeast (Saccharomyces cerevisiae) and the fission yeast (Schizosaccharomyces pombe) are important model organisms in science, since they are simple eukaryotes that can be grown rapidly in
large numbers and are easily manipulated. They are particularly valuable
in genetics, genomics and proteomics. Microorganisms can be harnessed for uses such as creating
steroids and treating skin diseases. Scientists are also considering using microorganisms for
living fuel cells, and as a solution for pollution.
Warfare
In the Middle Ages, diseased corpses were thrown into castles during sieges using catapults or
other siege engines. Individuals near the corpses were exposed to the pathogen and were likely to
spread that pathogen to others.
Human health
Human digestion
Microorganisms can form an endosymbiotic relationship with other, larger organisms. For example,
the bacteria that live within the human digestive system contribute to gut immunity,
synthesise vitamins such as folic acid and biotin, and ferment complex indigestible carbohydrates.
Disease
Microorganisms are the cause of many infectious diseases. The organisms involved
include pathogenic bacteria, causing diseases such as plague, tuberculosis and anthrax; protozoa,
causing diseases such as malaria, sleeping sickness, dysentery and toxoplasmosis; and also fungi
causing diseases such as ringworm, candidiasis or histoplasmosis. However, other diseases such
as influenza, yellow fever or AIDS are caused by pathogenic viruses, which are not usually classified
as living organisms and are not, therefore, microorganisms by the strict definition. As of 2007, no
clear examples of archaean pathogens are known, although a relationship has been proposed
between the presence of some archaean methanogens and human periodontal disease.[93]
Ecology
Microorganisms play critical roles in Earth's biogeochemical cycles as they are responsible
for decomposition and play vital parts in carbon and nitrogen fixation, as well as oxygen production.