Principle of Biochemistry
Duration: 3 Hrs
Max Marks: 70
Note:
1. Answer any EIGHT questions from Section A. Each question carries 5 marks.
2. Answer any THREE questions from Section B. Each question carries 10 marks.
Section A
Answer any EIGHT questions from Section A. Each question carries 5 marks.
1. Write the structures of tyrosine and tryptophan.
Tyrosine (abbreviated as Tyr or Y)[1] or 4-hydroxyphenylalanine is one of the 22 amino
acids that are used by cells to synthesize proteins. Its codons are UAC and UAU. It is a
non-essential amino acid with a polar side group. The word "tyrosine" is from the Greek tyri,
meaning cheese, as it was first discovered in 1846 by German chemist Justus von Liebig in the
protein casein from cheese.[2][3] It is called tyrosyl when referred to as a functional group or side
chain. Aside from being a proteinogenic amino acid, tyrosine has a special role by virtue of
its phenol functionality. It occurs in proteins that are part of signal transduction processes. It
functions as a receiver of phosphate groups that are transferred by way of protein kinases
(so-called receptor tyrosine kinases). Phosphorylation of the hydroxyl group changes the activity of
the target protein.
A tyrosine residue also plays an important role in photosynthesis. In chloroplasts (photosystem
II), it acts as an electron donor in the reduction of oxidized chlorophyll. In this process, it
undergoes deprotonation of its phenolic OH-group; the resulting tyrosine radical is subsequently
reduced in photosystem II by the four-manganese core cluster.
Dietary sources
Tyrosine, which can also be synthesized in the body from phenylalanine, is found in many
high-protein food products such as soy products, chicken, turkey, fish, peanuts, almonds,
avocados, milk, cheese, yogurt, cottage cheese, lima beans, pumpkin seeds, and sesame seeds.[4]
Tyrosine can also be obtained through supplementation.
Biosynthesis
Plant biosynthesis of tyrosine from shikimic acid.
In plants and most microorganisms, tyrosine is produced via prephenate, an intermediate on
the shikimate pathway. Prephenate is oxidatively decarboxylated with retention of
the hydroxyl group to give p-hydroxyphenylpyruvate, which is transaminated using glutamate as
the nitrogen source to give tyrosine and α-ketoglutarate.
Mammals synthesize tyrosine from the essential amino acid phenylalanine (phe), which is
derived from food. The conversion of phe to tyrosine is catalyzed by the enzyme phenylalanine
hydroxylase, a monooxygenase. This enzyme catalyzes the addition of a hydroxyl group to the
para position of the six-carbon aromatic ring of phenylalanine, converting it into tyrosine.
Some of the tyrosine residues can be tagged with a phosphate group (phosphorylated) by protein
kinases. (In its phosphorylated state, it is referred to as phosphotyrosine). Tyrosine
phosphorylation is considered to be one of the key steps in signal transduction and regulation of
enzymatic activity. Phosphotyrosine can be detected through specific antibodies. Tyrosine
residues may also be modified by the addition of a sulfate group, a process known as tyrosine
sulfation.[5] Tyrosine sulfation is catalyzed by tyrosylprotein sulfotransferase (TPST). Like the
phosphotyrosine antibodies mentioned above, antibodies have recently been described that
specifically detect sulfotyrosine.
Precursor to neurotransmitters and hormones
In dopaminergic cells in the brain, tyrosine is converted to levodopa by the enzyme tyrosine
hydroxylase (TH). TH is the rate-limiting enzyme involved in the synthesis of
the neurotransmitter dopamine. Dopamine can then be converted into
the catecholamines norepinephrine (noradrenaline) and epinephrine (adrenaline).
The thyroid hormones triiodothyronine (T3) and thyroxine (T4) in the colloid of the thyroid also are
derived from tyrosine.
Precursor to alkaloids
In Papaver somniferum, the opium poppy, tyrosine is used to produce the alkaloid morphine.
Precursor to pigments
Tyrosine is also the precursor to the pigment melanin.
The decomposition of L-tyrosine (syn. para-hydroxyphenylalanine) begins with an
α-ketoglutarate-dependent transamination by tyrosine transaminase to
para-hydroxyphenylpyruvate. The positional descriptor para, abbreviated p, means that the
hydroxyl group and the side chain on the phenyl ring are across from each other.
The next oxidation step is catalyzed by p-hydroxyphenylpyruvate dioxygenase and, with the
splitting off of CO2, yields homogentisate (2,5-dihydroxyphenyl-1-acetate). To split the aromatic
ring of homogentisate, a further dioxygenase, homogentisate oxygenase, is required. Thereby,
through the incorporation of a further O2 molecule, maleylacetoacetate is created.
Fumarylacetoacetate is created by maleylacetoacetate cis-trans-isomerase through rotation of
the carboxyl group created from the hydroxyl group via oxidation. This cis-trans-isomerase
contains glutathione as a coenzyme. Fumarylacetoacetate is finally split by fumarylacetoacetate
hydrolase through the addition of a water molecule.
Thereby fumarate (also a metabolite of the citric acid cycle) and acetoacetate (3-ketobutyrate)
are liberated. Acetoacetate is a ketone body, which is activated with succinyl-CoA, and thereafter
it can be converted into acetyl-CoA, which in turn can be oxidized by the citric acid cycle or
used for fatty acid synthesis.
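The degradation sequence above is a linear chain of enzyme-catalyzed steps. As a minimal sketch (the variable and function names are my own, not from the text), the pathway can be written as (substrate, enzyme, product) triples and checked for end-to-end consistency:

```python
# The tyrosine degradation pathway described above, as (substrate, enzyme, product) steps.
PATHWAY = [
    ("L-tyrosine", "tyrosine transaminase", "p-hydroxyphenylpyruvate"),
    ("p-hydroxyphenylpyruvate", "p-hydroxyphenylpyruvate dioxygenase", "homogentisate"),
    ("homogentisate", "homogentisate oxygenase", "maleylacetoacetate"),
    ("maleylacetoacetate", "maleylacetoacetate cis-trans-isomerase", "fumarylacetoacetate"),
    ("fumarylacetoacetate", "fumarylacetoacetate hydrolase", "fumarate + acetoacetate"),
]

def chain_is_connected(steps):
    """Each step's product must be the next step's substrate."""
    return all(steps[i][2] == steps[i + 1][0] for i in range(len(steps) - 1))

print(chain_is_connected(PATHWAY))  # True
```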
2. What is the difference between adenosine and adenylate?
MECHANISM OF PHOTOSYNTHESIS
The process of photosynthesis is summed up in the word and chemical equations below:

carbon dioxide + water (raw materials) → glucose + oxygen (products)
(in the presence of sunlight and chlorophyll)

6CO2 (g) + 6H2O (l) → C6H12O6 + 6O2
(sunlight, chlorophyll)
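The overall equation can be checked for atom balance with a short script (an illustrative sketch, not part of the original answer):

```python
# Verify that 6CO2 + 6H2O -> C6H12O6 + 6O2 has the same element counts on both sides.
from collections import Counter

def atoms(terms):
    """Sum element counts over (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, elems in terms:
        for el, n in elems.items():
            total[el] += coeff * n
    return total

reactants = [(6, {"C": 1, "O": 2}), (6, {"H": 2, "O": 1})]   # 6 CO2 + 6 H2O
products  = [(1, {"C": 6, "H": 12, "O": 6}), (6, {"O": 2})]  # C6H12O6 + 6 O2

print(atoms(reactants) == atoms(products))  # True
```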
Photosynthesis occurs in two phases, namely the light-dependent stage (light reaction) and the
light-independent stage (dark reaction).
Light-dependent stage
Radiant energy is absorbed by chlorophyll. The chlorophyll is activated and converts
light energy into chemical energy in the form of ATP (adenosine triphosphate).
Water is split into hydrogen ions and hydroxyl ions:

H2O → [H]+ + [OH]¯
(light, chlorophyll)
Oxygen gas is formed from reactions, involving hydroxyl ions.
The products of the light dependent stage are ATP, H+ ions and oxygen.
Light-independent stage
The hydrogen ions and chemical energy that were produced during the light dependent
phase are used to reduce carbon dioxide to form glucose.
MACRO AND MICRO NUTRIENTS
Table: Macro- and micronutrients, their uses, and deficiency symptoms.

Macronutrients:
Nitrogen: Amino acid and protein synthesis. Deficiency: chlorosis and yellowing of leaves at
the margins and tips; retarded growth.
Phosphorus: Protein formation. Deficiency: reddish-purple leaves and stunted growth.
Sulphur: Formation of certain amino acids. Deficiency: stunted growth; yellow patches on leaves.
Calcium: Formation of the cell wall (middle lamella). Deficiency: poor development of leaves at
the shoot apex.
Magnesium: Chlorophyll formation; facilitates enzyme activity. Deficiency: chlorosis with purple
colourings.
Potassium: Concerned with synthesis of carbohydrates and protein metabolism in young leaves.
Deficiency: chlorosis of older leaves; stunted growth.
Iron: Required for the formation of chlorophyll but not part of the molecule. Deficiency:
chlorosis with pale leaves.

Micronutrients (cobalt, copper, zinc, manganese, boron): necessary for the manufacture of
enzymes.
3. Write the structures of cortisol and epinephrine.
Cortisol (hydrocortisone) is a steroid hormone, more specifically a glucocorticoid, produced by
the zona fasciculata of the adrenal gland.[1] It is released in response to stress and a low level of
blood glucocorticoids. Its primary functions are to increase blood sugar through gluconeogenesis;
suppress the immune system; and aid in fat, protein and carbohydrate metabolism. [1] It also
decreases bone formation. During pregnancy, increased production of cortisol between weeks 30
and 32 initiates production of fetal lung surfactant to promote maturation of the lungs. Various
synthetic forms of cortisol are used to treat a variety of diseases.
Cortisol is produced by the adrenal gland in the zona fasciculata, the second of three layers
comprising the outer adrenal cortex. This release is controlled by the hypothalamus, a part of the
brain. The secretion of corticotropin-releasing hormone (CRH) by the hypothalamus triggers
anterior pituitary secretion of adrenocorticotropic hormone (ACTH). ACTH is carried by the blood
to the adrenal cortex, where it triggers glucocorticoid secretion.
Main functions in the body

- increasing blood sugar through gluconeogenesis
- suppressing the immune system
- aiding in fat, protein, and carbohydrate metabolism
It suppresses the immune system by "muting" the white blood cells. Another function is to
decrease bone formation.
Cortisol is used to treat diseases such as Addison’s disease, inflammatory
and rheumatoid diseases, and allergies. Low-potency hydrocortisone, available over the counter
in some countries, is used to treat skin problems such as rashes, eczema and others.
Cortisol prevents the release of substances in the body that cause inflammation. It stimulates
gluconeogenesis (the breakdown of protein and fat to provide metabolites that can be converted
to glucose in the liver) and it activates anti-stress and anti-inflammatory pathways.
Patterns
The amount of cortisol present in the blood undergoes diurnal variation; the level peaks in the
early morning (approximately 8 am) and reaches its lowest level between about midnight and 4 am,
or three to five hours after the onset of sleep. Information about the light/dark cycle is
transmitted from the retina to the paired suprachiasmatic nuclei in the hypothalamus. This
pattern is not present at birth; estimates of when it begins vary from two weeks to nine months
of age.[2]
Changed patterns of serum cortisol levels have been observed in connection with
abnormal ACTH levels, clinical depression, psychological stress, and physiological stressors such
as hypoglycemia, illness, fever, trauma, surgery, fear, pain, physical exertion,
or temperature extremes. Cortisol levels may also differ for individuals with autism or Asperger's
syndrome.[3]
There is also significant individual variation, although a given person tends to have consistent
rhythms.
Insulin
Cortisol counteracts insulin, contributing to hyperglycemia by stimulating
hepatic gluconeogenesis[10] and inhibiting the peripheral utilization of glucose (insulin
resistance)[10] by decreasing the translocation of glucose transporters (especially GLUT4) to the
cell membrane.[11][12] However, cortisol increases glycogen synthesis (glycogenesis) in
the liver.[13] The permissive effect of cortisol on insulin action in liver glycogenesis is observed in
hepatocyte culture in the laboratory, although the mechanism for this is unknown.
Collagen
In laboratory rats, cortisol-induced collagen loss in the skin is ten times greater than in any other
tissue.[14][15] Cortisol (as opticortinol) may inversely inhibit IgA precursor cells in the intestines of
calves.[16] Cortisol also inhibits IgA in serum, as it does IgM; however, it is not shown to
inhibit IgE.[17]
Gastric and renal secretion
Cortisol stimulates gastric-acid secretion.[18] Cortisol's only direct effect on the hydrogen ion
excretion of the kidneys is to stimulate the excretion of ammonium ions by deactivating the renal
glutaminase enzyme.[19] Net chloride secretion in the intestines is inversely decreased by
cortisol in vitro (methylprednisolone).[20]
Sodium
Cortisol inhibits sodium loss through the small intestine of mammals. [21] Sodium depletion,
however, does not affect cortisol levels[22] so cortisol cannot be used to regulate serum sodium.
Cortisol's original purpose may have been sodium transport. This hypothesis is supported by the
fact that freshwater fish utilize cortisol to stimulate sodium inward, while saltwater fish have a
cortisol-based system for expelling excess sodium.[23]
Potassium
A sodium load augments the intense potassium excretion by cortisol; corticosterone is
comparable to cortisol in this case.[24] In order for potassium to move out of the cell, cortisol
moves an equal number of sodium ions into the cell.[25] This should make pH regulation much
easier (unlike the normal potassium-deficiency situation, in which two sodium ions move in for
each three potassium ions that move out—closer to the deoxycorticosterone effect).
Nevertheless, cortisol consistently causes serum alkalosis; in a deficiency, serum pH does not
change. The purpose of this may be to reduce serum pH to an optimum value for some immune
enzymes during infection, when cortisol declines. Potassium is also blocked from loss in the
kidneys by a decline in cortisol (9 alpha fluorohydrocortisone).[26]
Water
Cortisol acts as a diuretic hormone, controlling one-half of intestinal diuresis;[21] it has also been
shown to control kidney diuresis in dogs. The decline in water excretion following a decline in
cortisol (dexamethasone) in dogs is probably due to inverse stimulation of antidiuretic
hormone (ADH or arginine vasopressin), which is not overridden by water loading.[27] Humans
and other animals also use this mechanism.[28]
Copper
Cortisol stimulates many copper enzymes (often to 50% of their total potential), probably to
increase copper availability for immune purposes.[29] This includes lysyl oxidase, an enzyme
which is used to cross-link collagen and elastin.[30] Especially valuable for immune response is
cortisol's stimulation of the superoxide dismutase,[31] since this copper enzyme is almost certainly
used by the body to permit superoxides to poison bacteria. Cortisol causes an inverse four- or
fivefold decrease of metallothionein (a copper storage protein) in mice;[32] however, rodents do
not synthesize cortisol themselves. This may be to furnish more copper for ceruloplasmin
synthesis or to release free copper. Cortisol has an opposite effect on alpha-aminoisobutyric
acid than on the other amino acids.[33] If alpha-aminoisobutyric acid is used to transport copper
through the cell wall, this anomaly might be explained.
Immune system
Cortisol can weaken the activity of the immune system. Cortisol prevents proliferation of T-cells
by rendering the interleukin-2 producer T-cells unresponsive to interleukin-1 (IL-1), and unable to
produce the T-cell growth factor.[34] Cortisol also has a negative-feedback effect on interleukin-1.[35] IL-1 must be especially useful in combating some diseases; however, endotoxic bacteria
have gained an advantage by forcing the hypothalamus to increase cortisol levels (forcing the
secretion of CRH hormone, thus antagonizing IL-1). The suppressor cells are not affected by
glucosteroid response-modifying factor (GRMF),[36] so the effective setpoint for the immune cells
may be even higher than the setpoint for physiological processes
(reflecting leukocyte redistribution to lymph nodes, bone marrow, and skin). Rapid administration
of corticosterone (the endogenous Type I and Type II receptor agonist) or RU28362 (a specific
Type II receptor agonist) to adrenalectomized animals induced changes
in leukocyte distribution. Natural killer cells are not affected by cortisol.[37]
Bone metabolism
Cortisol reduces bone formation, favoring long-term development of osteoporosis. It
transports potassium out of cells in exchange for an equal number of sodium ions (see
above).[38] This can trigger the hyperkalemia of metabolic shock from surgery. Cortisol also
reduces calcium absorption in the intestine.[39]
Memory
Cortisol works with epinephrine (adrenaline) to create memories of short-term emotional events;
this is the proposed mechanism for storage of flashbulb memories, and may have originated as a
means to remember what to avoid in the future. However, long-term exposure to cortisol
damages cells in the hippocampus;[40] this damage results in impaired learning. Furthermore, it
has been shown that cortisol inhibits memory retrieval of already stored information.
4. Write notes on the synthesis and biological functions of prealbumin.
Prealbumin, mortality, and cause-specific hospitalization in hemodialysis patients.
Background
Prealbumin (transthyretin) is a hepatic secretory protein thought to be important in
the evaluation of nutritional deficiency and nutrition support. Prior studies have
suggested that the serum prealbumin concentration is independently associated with
mortality in hemodialysis patients, even with adjustment for serum albumin and
other nutritional parameters.
Methods
To determine whether prealbumin was independently associated with mortality and
morbidity (cause-specific hospitalization) in hemodialysis patients, we analyzed data
on 7815 hemodialysis patients with at least one determination of serum prealbumin
during the last three months of 1997. Unadjusted, case mix-adjusted, and
multivariable-adjusted relative risks of death were calculated for categories of
serum prealbumin using proportional hazards regression. We also determined
whether the prealbumin concentration was associated with all-cause,
cardiovascular, infection-related, and vascular access–related hospitalization.
Results
The relative risk (RR) of death was inversely related to the serum prealbumin
concentration. Relative to prealbumin ≥40 mg/dL, the adjusted RRs of death were
2.41, 1.85, 1.49, and 1.23 for prealbumin <15, 15–20, 20–25, and 25–30 mg/dL,
respectively. The adjusted RRs of hospitalization due to infection were 2.97, 1.95,
1.81, and 1.61 for prealbumin <15, 15–20, 20–25, and 25–30 mg/dL, respectively.
The adjusted RRs of vascular access-related hospitalization were 0.48, 0.52, 0.58,
and 0.71 for prealbumin <15, 15–20, 20–25, and 25–30 mg/dL, respectively. While
serum albumin was strongly associated with mortality and all-cause hospitalization,
it was not associated with hospitalization due to infection, and lower levels were
associated with higher rather than lower rates of vascular access–related
hospitalization.
Conclusion
In hemodialysis patients, lower prealbumin concentrations were associated with
mortality and hospitalization due to infection, independent of serum albumin and
other clinical characteristics. Higher prealbumin concentrations were associated
with vascular access–related hospitalization. In light of these findings, more
intensive study into the determinants and biological actions of prealbumin
(transthyretin) in end-stage renal disease is warranted.
Keywords:
prealbumin; mortality; dialysis; infection; vascular access; epidemiology
Protein energy malnutrition (PEM) affects a large fraction of maintenance hemodialysis
patients and is unequivocally associated with mortality and morbidity1. While difficult to
define, PEM depends on several intersecting dimensions of health and disease, including
reduced dietary intake, sarcopenia, and loss of subcutaneous fat (often referred to as
“somatic mass”) and reduced concentrations of plasma proteins and leukocytes (often
referred to as “visceral mass”)2. Inflammation directly affects the catabolism of plasma
proteins as well as hepatic synthesis3.
While serum albumin has proved to be a potent predictor of mortality and cardiovascular
morbidity in patients with end-stage renal disease (ESRD), several studies have suggested
that other plasma proteins, including prealbumin, have additive predictive value4,5,6,7,8,9.
For example, we previously demonstrated a significant (2.5-fold) increase in risk among
hemodialysis patients with prealbumin concentrations <20 mg/dL, but were unable to
identify an optimal prealbumin concentration, or a level below which a definitive
increase in risk could be identified10. Moreover, prior studies have focused only on
mortality without consideration of hospitalization or other morbidities.
Therefore, we aimed to determine the risk profile associated with the spectrum of
prealbumin concentrations in a large cohort of hemodialysis patients, using mortality as
the principal outcome of interest. We also explored the association between prealbumin
concentration and cause-specific hospitalization. We hypothesized that prealbumin would
be independently associated with mortality and associated with hospitalization due to
cardiovascular disease and infection.
METHODS
Data source
The sample of patients was taken from the Fresenius Medical Care North America
Patient Statistical Profile system. The database and methods of abstraction have been
previously described11. The cohort consisted of patients on thrice weekly hemodialysis as
of January 1, 1998 who had at least one determination of serum phosphorus and calcium
during the last three months of 1997. Where repeated, all laboratory data were averaged
to provide a better estimate of exposure. The sample included 40,538 patients. Of the
40,538 patients, 7815 (19.3%) had at least one serum prealbumin concentration during
the three-month period. Patients with and without prealbumin determinations were
compared to assess generalizability. Prealbumin was categorized a priori into seven
categories in 5 mg/dL increments: <15, 15–20, 20–25, 25–30, 30–35, 35–40, and ≥40
mg/dL.
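A hypothetical helper (my own naming, not the study's code) that assigns a serum prealbumin value to these seven categories could look like this; boundary values are placed in the upper category, which is one possible convention since the paper does not state one:

```python
# Map a serum prealbumin value (mg/dL) to the paper's seven a priori categories.
import bisect

CUTPOINTS = [15, 20, 25, 30, 35, 40]  # 5 mg/dL increments
LABELS = ["<15", "15-20", "20-25", "25-30", "30-35", "35-40", ">=40"]

def prealbumin_category(value_mg_dl):
    # bisect_right places boundary values (e.g. exactly 20) in the higher bin.
    return LABELS[bisect.bisect_right(CUTPOINTS, value_mg_dl)]

print(prealbumin_category(12))  # '<15'
print(prealbumin_category(27))  # '25-30'
print(prealbumin_category(40))  # '>=40'
```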
The primary ICD-9-CM code for each hospitalization was recorded. Cardiovascular
hospitalization incorporated the following ICD-9-CM codes: 390−459 (diseases of the
circulatory system), 518.4 (acute pulmonary edema), 276.6 (fluid overload), 785
(symptoms involving cardiovascular system), 786.5 (chest pain), 780.2 (syncope and
collapse), and 798 (sudden death). Infection-related hospitalization included the
following ICD-9-CM codes: 001−139 (infectious and parasitic diseases), 320−324
(meningitis and encephalitis), 421 (endocarditis), 480−486 (pneumonia), 590 (infections
of the kidney), 680−686 (infections of the skin and subcutaneous tissue), and 790.7
(bacteremia). Vascular access–related (non-infection–related) hospitalization included
ICD-9-CM codes 996.1, 996.70, 996.73, and 996.74.
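As an illustrative sketch (my own simplification, not the study's actual code), the cause-specific grouping above can be expressed as range checks on the primary ICD-9-CM code; infection is tested first so that endocarditis (421) is grouped with infections, as listed:

```python
# Classify a primary ICD-9-CM code into the study's hospitalization groups.
# Codes are treated as floats for range checks; "V" and "E" codes are ignored here.
INFECTION_RANGES = [(1, 139.99), (320, 324.99), (421, 421.99),
                    (480, 486.99), (590, 590.99), (680, 686.99), (790.7, 790.7)]
CARDIO_RANGES = [(390, 459.99), (518.4, 518.4), (276.6, 276.6),
                 (785, 785.99), (786.5, 786.5), (780.2, 780.2), (798, 798.99)]
ACCESS_CODES = {996.1, 996.70, 996.73, 996.74}

def classify(icd9):
    code = float(icd9)
    if code in ACCESS_CODES:
        return "vascular access"
    if any(lo <= code <= hi for lo, hi in INFECTION_RANGES):
        return "infection"
    if any(lo <= code <= hi for lo, hi in CARDIO_RANGES):
        return "cardiovascular"
    return "other"

print(classify("480"))    # 'infection'
print(classify("786.5"))  # 'cardiovascular'
print(classify("996.1"))  # 'vascular access'
```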
Several confounding variables were included in the analyses. Age, sex, race/ethnicity,
diabetes, and vintage (time since initiation of dialysis) were considered to represent "case
mix." Laboratory variables included parameters of mineral metabolism [phosphorus,
calcium, and parathyroid hormone (PTH)], hematologic status (hemoglobin and ferritin),
and other markers of nutritional status [serum albumin, predialysis blood urea nitrogen
(BUN), creatinine, cholesterol, and bicarbonate]. Body size was estimated using body
weight, body surface area, or Quetelet's (body mass) index. Dialysis dose was estimated
using the urea reduction ratio (URR) or the indexed or nonindexed urea clearance × time
product (Kt/Vurea and Kturea).
Statistical analyses
Continuous variables were expressed as mean ± standard deviation or median with
interquartile range and compared with parametric [Student t test or analysis of variance
(ANOVA)] or nonparametric tests (Wilcoxon rank sum test or the Kruskal–Wallis test),
where appropriate. Categorical variables were expressed as proportions and compared
with the χ2 test. We calculated unadjusted survival rates using the Kaplan–Meier product
limit method. Unadjusted, case mix–adjusted, and multivariable-adjusted survival
analyses were performed using the proportional hazards regression model. Relative risks
(RR) and 95% confidence intervals (95% CI) were calculated from model parameter
coefficients and standard errors, respectively. Multivariable models were constructed
with backward variable selection, using P < 0.05 for variable retention. Plots of log
(−log [survival rate]) against log (survival time) were performed to establish the validity
of the proportionality assumption. Effect modification was evaluated by including
multiplicative interaction terms for selected variables. Factors not included in
multivariable models were reentered individually to evaluate for residual confounding
(>10% change in the parameter estimate for prealbumin or albumin). There were few
missing laboratory data except for PTH (N = 1566, 20%) and cholesterol (N = 1946,
25%). To avoid a significant loss of power we categorized these data and included
missing indicator variables in regression models. Patients who underwent kidney
transplantation (N = 337, 4.3%), recovered kidney function (N = 34, 0.4%), transferred
dialysis facilities (N = 1071, 13.7%), withdrew from dialysis (N = 303, 3.9%), or were
lost to follow-up for unknown reasons (N = 4, 0.05%) were censored. Two-tailed P values < 0.05 were considered statistically significant. Statistical analyses were
conducted using SAS 8.2 (SAS Institute, Cary, NC, USA).
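Relative risks and 95% confidence intervals from a proportional hazards model are derived from the model coefficient (beta) and its standard error as RR = exp(beta) and CI = exp(beta +/- 1.96 * SE). A small worked sketch (the coefficient and SE below are hypothetical, chosen so the RR reproduces the 2.41 reported for prealbumin <15 mg/dL):

```python
# Derive a relative risk and its 95% CI from Cox model output.
import math

def rr_with_ci(beta, se, z=1.96):
    rr = math.exp(beta)
    return rr, math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient and standard error, for illustration only.
rr, lo, hi = rr_with_ci(beta=0.88, se=0.10)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 2.41 1.98 2.93
```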
5. Define iodine number and highlight its significance.
Olive oil is an oil obtained from the olive (Olea europaea; family Oleaceae), a traditional
tree crop of the Mediterranean Basin. The oil is produced by grinding whole olives and extracting
the oil by mechanical or chemical means. It is commonly used
in cooking, cosmetics, pharmaceuticals, and soaps, and as a fuel for traditional oil lamps. Olive oil
is used throughout the world, but especially in the Mediterranean countries.
6. Outline the principle of affinity chromatography.
The olive tree is native to the Mediterranean basin; wild olives were collected by Neolithic
peoples as early as the 8th millennium BC.[1] The wild olive tree originated in Asia Minor[2] in
modern Turkey.
It is not clear when and where olive trees were first domesticated: in Asia Minor in the 6th
millennium; along the Levantine coast stretching from the Sinai Peninsula to modern Turkey in
the 4th millennium;[1] or somewhere in the Mesopotamian Fertile Crescent in the 3rd millennium.
A widespread view exists that the first cultivation took place on the island of Crete.
Archeological evidence suggests that olives were being grown in Crete as long ago as 2,500 BC.
The earliest surviving olive oil amphorae date to 3500 BC (Early Minoan times), though the
production of olive oil is assumed to have started before 4000 BC. An alternative view holds
that olives were turned into oil by 4500 BC by Canaanites in present-day Israel.[3]
Ancient oil press
Bodrum Museum of Underwater Archaeology, Bodrum, Turkey
Homer called it "liquid gold." In ancient Greece, athletes ritually rubbed it all over their bodies.
Olive oil has been more than mere food to the peoples of the Mediterranean: it has been
medicinal, magical, an endless source of fascination and wonder and the fountain of great wealth
and power. Indeed the importance of the olive industry in ancient economies cannot be
overstated. The tree is extremely hardy and its useful lifespan can be measured in centuries. Its
wide and deep root system ensures its survival without additional watering, even in the water-sparse Mediterranean. It thrives close to the sea, where other plants cannot tolerate the
increased salt content of underground water. Other than pruning in late spring, it needs minimal
cultivation and its fruit matures in the late autumn in the Northern Mediterranean or through the
winter (further south), when other staple food harvests are over and there is no other agricultural
work to be done. Olive collecting and processing is relatively straightforward, and needs minimal,
mechanical technology. Olive oil, being almost pure fat, is dense in calories yet healthy, without
adverse health effects. Unlike cereals which can be destroyed by humidity and pests in storage,
olive oil can be very easily stored and will not go rancid for at least a year (unless needlessly
exposed to light or extremely hot weather), by which time a fresh harvest will be available. The
combination of these factors helped ensure that the olive industry has become the region's most
dependable food and cash crop since prehistoric times.
Besides food, olive oil has been used for religious rituals, medicines, as a fuel in oil lamps, soap-making, and skin care applications. The importance and antiquity of olive oil can be seen in the
fact that the English word oil derives from c. 1175, olive oil, from Anglo-Fr. and O.N.Fr. olie, from
O.Fr. oile (12c., Mod.Fr. huile), from L. oleum "oil, olive oil" (cf. It. olio), from Gk. elaion "olive
tree",[4] which may have been borrowed through trade networks from the Semitic Phoenician use
of el'yon meaning "superior", probably in recognized comparison to other vegetable or
animal fats available at the time. Robin Lane Fox suggests[5] that the Latin borrowing of
Greek elaion for oil (Latin oleum) is itself a marker for improved Greek varieties of oil-producing
olive, already present in Italy as Latin was forming, brought by Euboean traders, whose presence
in Latium is signaled by remains of their characteristic pottery, from the mid-eighth century.
Recent genetic studies suggest that species used by modern cultivators descend from multiple
wild populations, but a detailed history of domestication is not yet understood.[6]
Many ancient presses still exist in the Eastern Mediterranean region, and some dating to the
Roman period are still in use today.[citation needed]
Eastern Mediterranean
Over 5,000 years ago oil was being extracted from olives in the Eastern Mediterranean. In the
centuries that followed, olive presses became common, from the Atlantic shore of North Africa
to Persia and from the Po Valley to the settlements along the Nile.[citation needed]
Olive trees and oil production in the Eastern Mediterranean can be traced to archives of the
ancient city-state Ebla (2600–2240 BC), which were located on the outskirts of
the Syrian city Aleppo. Here some dozen documents dated 2400 BC describe lands of the king
and the queen. These belonged to a library of clay tablets perfectly preserved by having been
baked in the fire that destroyed the palace. A later source is the frequent mentions of oil
in Tanakh.
The International Olive Council (IOC) is an intergovernmental organization based
in Madrid, Spain, with 23 member states.[17] It promotes olive oil around the world by tracking
production, defining quality standards, and monitoring authenticity. More than 85% of the world's
olives are grown in IOC member nations. The United States is not a member of the IOC, and
the U.S. Department of Agriculture does not legally recognize its classifications (such as extra-virgin olive oil). The USDA uses a different system, which it defined in 1948 before the IOC
existed. On October 25, 2010, the United States adopted new olive oil standards, a revision of
those that had been in place since 1948, which affect importers and domestic growers and
producers by ensuring conformity with the benchmarks commonly accepted in the U.S. and
abroad.[18]
Olive oil is classified by how it was produced, by its chemistry, and by panels that perform olive oil
taste testing.[19] The IOC officially governs 95% of international production and holds great
influence over the rest. The EU regulates the use of different protected designation of origin labels
for olive oils.[20]
U.S. Customs regulations on "country of origin" state that if a non-origin nation is shown on the
label, then the real origin must be shown on the same side of the label and in comparable size
letters so as not to mislead the consumer.[21][22] Yet most major U.S. brands continue to put
"imported from Italy" on the front label in large letters and other origins on the back in very small
print.[23] "In fact, olive oil labeled 'Italian' often comes from Turkey, Tunisia, Morocco, Spain, and
Greece."[24] These products are a mixture of olive oil from more than one nation and it is not clear
what percentage of the olive oil is really of Italian origin. This practice makes it difficult for high
quality, lower cost producers outside of Italy to enter the U.S. market, and for genuine Italian
producers to compete.
Adulteration
Adulteration of olive oil can be as mild as passing off an inferior but safe product as superior olive oil, but there is no guarantee of safety. It is believed that almost 700 people died in Spain as a consequence of consuming rapeseed oil that had been adulterated with aniline, intended for use as an industrial lubricant, and sold in 1981 as olive oil (see toxic oil syndrome).[25]
There have been allegations that regulation, particularly in Italy, is extremely lax and corrupt. Major Italian shippers are claimed to routinely adulterate olive oil, and it is alleged that only about 40% of olive oil sold as "extra virgin" actually meets the specification.[26] In some cases, colza oil (Swedish turnip) with added color and flavor has been labeled and sold as olive oil.[27] This extensive fraud
prompted the Italian government to mandate a new labeling law in 2007 for companies selling
olive oil, under which every bottle of Italian olive oil would have to declare the farm and press on
which it was produced, as well as display a precise breakdown of the oils used, for blended
oils.[28] In February 2008, however, EU officials took issue with the new law, stating that under EU rules such labeling should be voluntary rather than compulsory.[29] Under EU rules, olive oil may be sold as Italian even if it only contains a small amount of Italian oil.[28]
In March 2008, 400 Italian police officers conducted "Operation Golden Oil", arresting 23 people
and confiscating 85 farms after an investigation revealed a large-scale scheme to relabel oils
from other Mediterranean nations as Italian.[30] In April 2008, another operation impounded seven
olive oil plants and arrested 40 people in nine provinces of northern and southern Italy for
adding chlorophyll to sunflower and soybean oil, and selling it as extra virgin olive oil, both in Italy
and abroad; 25,000 liters of the fake oil were seized and prevented from being exported.[31]
On March 15, 2011, the Florence, Italy prosecutor's office, working in conjunction with the forestry
department, indicted two managers and an officer of Carapelli, one of the brands of the Spanish
company Grupo SOS (which recently changed its name to Deoleo). The charges involved
falsified documents and food fraud. Carapelli lawyer Neri Pinucci said the company was not
worried about the charges and that "the case is based on an irregularity in the documents."[32]
Commercial grades
All production begins by transforming the olive fruit into olive paste. This paste is
then malaxed (slowly churned or mixed) to allow the microscopic oil droplets to concentrate. The
oil is extracted by means of pressure (traditional method) or centrifugation (modern method).
After extraction the remnant solid substance, called pomace, still contains a small quantity of oil.
The grades of oil extracted from the olive fruit can be classified as:
• Virgin means the oil was produced by the use of physical means and no chemical treatment. The term virgin oil referring to production is different from Virgin Oil on a retail label (see next section).
• Refined means that the oil has been chemically treated to neutralize strong tastes (characterized as defects) and to neutralize the acid content (free fatty acids). Refined oil is commonly regarded as lower quality than virgin oil; oils with the retail labels extra-virgin olive oil and virgin olive oil cannot contain any refined oil.
• Olive pomace oil means oil extracted from the pomace using solvents, mostly hexane, and by heat.
Quantitative analysis can determine the oil's acidity, defined as the percent, measured by weight,
of free oleic acid it contains. This is a measure of the oil's chemical degradation; as the oil
degrades, more fatty acids are freed from the glycerides, increasing the level of free acidity and
thereby increasing rancidity. Another measure of the oil's chemical degradation is the organic
peroxide level, which measures the degree to which the oil is oxidized, another cause of rancidity.
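As a worked illustration of this definition, free acidity can be computed from a simple acid-base titration. The Python sketch below is illustrative only: the function name, the sample figures, and the use of NaOH as the titrant are assumptions, while oleic acid's molar mass of about 282 g/mol is standard.

```python
def free_acidity_percent(naoh_ml, naoh_molarity, sample_g):
    """Percent free oleic acid by weight, from an NaOH titration.

    Assumes 1:1 neutralization of free fatty acid by NaOH and
    expresses all free fatty acids as oleic acid (approx. 282 g/mol).
    """
    moles_ffa = (naoh_ml / 1000.0) * naoh_molarity  # moles of NaOH used
    grams_oleic = moles_ffa * 282.0                 # mass of free oleic acid
    return 100.0 * grams_oleic / sample_g

# Hypothetical sample: 2.0 g of oil neutralized by 0.50 ml of 0.1 M NaOH
print(round(free_acidity_percent(0.50, 0.1, 2.0), 3))  # 0.705
```

A higher percentage indicates more glycerides have broken down, i.e. greater chemical degradation of the oil.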
To classify it by taste, olive oil is subjectively judged by a panel of professional tasters in a blind
taste test. This is also called its organoleptic quality.
6. How will you differentiate starch from glycogen in the laboratory?
The use of Lugol's iodine reagent (IKI) is useful to distinguish starch and glycogen from
other polysaccharides. Lugol's iodine yields a blue-black color in the presence of starch.
Glycogen reacts with Lugol's reagent to give a brown-blue color. Other polysaccharides
and monosaccharides yield no color change; the test solution remains the characteristic
brown-yellow of the reagent. It is thought that starch and glycogen form helical coils. Iodine atoms can then fit into the helices to form a starch-iodine or glycogen-iodine complex. Starch, in the form of amylose and amylopectin, has fewer branches than glycogen. This means that the helices of starch are longer than those of glycogen and therefore bind more iodine atoms. The result is that the color produced by a starch-iodine complex is more intense than that obtained with a glycogen-iodine complex.
Method
Add 2-3 drops of Lugol's iodine solution to 5 ml of solution to be tested. Starch gives a
blue-black color. A positive test for glycogen is a brown-blue color. A negative test is the
brown-yellow color of the test reagent.
A cell may be compared to a living chemistry laboratory. Most functions within the cell take the form of interactions between
organic (carbon-containing) molecules. Organic molecules found in living systems can be classified as carbohydrates, fats,
proteins, or nucleic acids. Each of these classes of molecules is made of smaller units and both the smaller and larger units
have specific properties that can be identified by simple chemical tests. In this laboratory investigation, you will learn to identify
three of the four major types of organic molecules (carbohydrates, fats, and proteins) and some of their smaller subunits.
The tests for the three types of organic molecules will be done:
1. on water (to demonstrate negative results)
2. on one or two substances which contain the molecules being tested for (to demonstrate positive results)
3. on several substances of unknown composition
Answer the questions in the laboratory exercise.
EXERCISE A: TESTING FOR CARBOHYDRATES
The basic structural unit of carbohydrates is the monosaccharide (single or simple sugar). Monosaccharides are classified by
the number of carbons they contain: for example, trioses have three carbons, pentoses have five carbons, and hexoses have
six carbons. Monosaccharides may contain as few as three or as many as ten carbons.
Monosaccharides are also characterized by the presence of a carbonyl (carbon-oxygen double bond) group. If found at the end of the molecule, it is called a terminal aldehyde group; if found in the interior, it is called a ketone group. Both of these groups contain a double-bonded oxygen that reacts with Benedict's solution to form a colored precipitate.
When two monosaccharides are bonded together, they form a disaccharide. If the reactive aldehyde or ketone groups are
involved in the bond between the monosaccharide units (as in sucrose), the disaccharide will NOT react with Benedict's
solution. If only one group is involved in the bond (as in maltose), the other is free to react with the Benedict's reagent. Sugars
with free aldehyde or ketone groups, whether monosaccharides or disaccharides, are called reducing sugars. These sugars are oxidized (lose electrons) by the copper ions in Benedict's reagent, which are reduced (gain electrons), hence the name reducing sugar. The color of the precipitate (material that settles to the bottom of the tube) varies depending on the strength of the reducing sugar present.
In this exercise, you will use Benedict's reagent to test for the presence of reducing sugars.
Monosaccharides may join together to form long chains (polysaccharides) that may be either straight or branched. Starch is an
example of a polysaccharide formed entirely of glucose units. Starch does not show a reaction with Benedict's reagent
because the number of free aldehyde groups (found only at the end of each chain) is small in proportion to the rest of the
molecule. Therefore, we will test for the presence of starch with Lugol's reagent (iodine/potassium iodide, I2KI).
Objectives:
Identify reducing sugars using Benedict's reagent
Identify polysaccharides using Lugol's reagent
PART 1. BENEDICT'S TEST FOR REDUCING SUGARS
When Benedict's reagent is heated with a reactive sugar, such as glucose or maltose, the color of the reagent changes from
blue to yellow to reddish-orange, depending on the amount of reactive sugar present. Orange and red indicate the highest
proportion of these sugars. Benedict's test will show a positive reaction for starch only if the starch has been broken down into
maltose or glucose units by excessive heating.
PART 2. LUGOL'S TEST FOR STARCH
Lugol's reagent changes from a brownish or yellowish color to blue-black when starch is present, but there is no color change
in the presence of monosaccharides or disaccharides.
These five tests identify the main biologically important chemical compounds.
For each test take a small amount of the substance to test, and shake it in
water in a test tube. If the sample is a piece of food, then grind it with some
water in a pestle and mortar to break up the cells and release the cell contents.
Many of these compounds are insoluble, but the tests work just as well on a
fine suspension.
• Starch (iodine test). To approximately 2 cm³ of test solution add two
drops of iodine/potassium iodide solution. A blue-black colour indicates the
presence of starch as a starch-polyiodide complex is formed. Starch is only
slightly soluble in water, but the test works well in a suspension or as a solid.
• Reducing Sugars (Benedict's test). All monosaccharides and most
disaccharides (except sucrose) will reduce copper (II) sulphate, producing a
precipitate of copper (I) oxide on heating, so they are called reducing sugars.
Benedict’s reagent is an aqueous solution of copper (II) sulphate, sodium
carbonate and sodium citrate. To approximately 2 cm³ of test solution add an
equal quantity of Benedict’s reagent. Shake, and heat for a few minutes at
95°C in a water bath. A precipitate indicates reducing sugar. The colour and
density of the precipitate gives an indication of the amount of reducing sugar
present, so this test is semi-quantitative. The original pale blue colour means
no reducing sugar, a green precipitate means relatively little sugar; a brown or
red precipitate means progressively more sugar is present.
• Non-reducing Sugars (Benedict's test). Sucrose is called a non-reducing sugar because it does not reduce copper sulphate, so there is no
direct test for sucrose. However, if it is first hydrolysed (broken down) to its
constituent monosaccharides (glucose and fructose), it will then give a positive
Benedict's test. So sucrose is the only sugar that will give a negative Benedict's
test before hydrolysis and a positive test afterwards. First test a sample for
reducing sugars, to see if there are any present before hydrolysis. Then,
using a separate sample, boil the test solution with dilute hydrochloric acid for a
few minutes to hydrolyse the glycosidic bond. Neutralise the solution by gently
adding small amounts of solid sodium hydrogen carbonate until it stops fizzing,
then test as before for reducing sugars.
• Lipids (emulsion test). Lipids do not dissolve in water, but do dissolve in
ethanol. This characteristic is used in the emulsion test. Do not start by
dissolving the sample in water, but instead shake some of the test sample with
about 4 cm³ of ethanol. Decant the liquid into a test tube of water, leaving any
undissolved substances behind. If there are lipids dissolved in the ethanol, they
will precipitate in the water, forming a cloudy white emulsion.
• Protein (biuret test). To about 2 cm³ of test solution add an equal volume
of biuret solution, down the side of the test tube. A blue ring forms at the
surface of the solution, which disappears on shaking, and the solution turns
lilac-purple, indicating protein. The colour is due to a complex between nitrogen
atoms in the peptide chain and Cu2+ ions, so this is really a test for peptide
bonds.
BENEDICT'S TEST
Introduction: Monosaccharides and some disaccharides can be detected because of their free aldehyde
groups, and thus test positive in the Benedict's test. Such sugars act as reducing agents and are
called reducing sugars. When the sugar solution is mixed with Benedict's solution and heated,
an oxidation-reduction reaction occurs. The sugar is oxidized, gaining an oxygen, and the
Benedict's reagent is reduced, losing an oxygen. If the resulting solution is red-orange, the test is
positive; a change to green indicates a smaller amount of reducing sugar, and if it remains blue, the
test is negative.
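The colour readings above can be summarized as an ordered scale. The short Python sketch below is only an illustrative encoding of that interpretation; the names and numeric ranks are assumptions of this sketch, not part of the test itself.

```python
# Benedict's colours, ordered from no reducing sugar to most (per the text).
BENEDICT_SCALE = ["blue", "green", "red-orange"]

def reducing_sugar_rank(colour):
    """Return 0 for a negative test (blue); higher ranks mean more sugar."""
    return BENEDICT_SCALE.index(colour)

print(reducing_sugar_rank("blue"))        # 0: negative
print(reducing_sugar_rank("red-orange"))  # 2: strongly positive
```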
Materials: onion juice, potato juice, glucose solution, starch solution, deionized water, Benedict's
reagent, 5 test tubes, 1 beaker, hot plate, ruler, tongs, permanent marker, labels, 6 barrel pipettes,
5 toothpicks
Procedure:
1. Marked 5 test tubes at 1 cm and 3 cm from the bottom. Labeled the test tubes #1-#5.
2. Using 5 different barrel pipettes, added onion juice up to the 1 cm mark of the first test tube,
potato juice to the 1 cm mark of the second, deionized water to the 1 cm mark of the third, glucose
solution to the 1 cm mark of the fourth, and starch solution to the 1 cm mark of the fifth.
3. Using the last barrel pipette, added Benedict's reagent to the 3 cm mark of all 5 test tubes and
mixed with a toothpick.
4. Heated all 5 tubes for 3 minutes in a boiling water bath, using a beaker, water, and a hot plate.
5. Removed the tubes using tongs. Recorded colors in the following table.
6. Cleaned out the 5 test tubes with deionized water.
Data:
Benedict's Test Results
Discussion: From the results, the Benedict's test was successful. Onion juice contains glucose, and
of course, glucose would test positive. Starch doesn't have a free aldehyde group, and neither does
potato juice, which contains starch. Water doesn't have glucose monomers in it, and was tested to
make sure the end result would be negative, a blue color.
IODINE TEST
Introduction: The iodine test is used to distinguish starch from monosaccharides, disaccharides,
and other polysaccharides. Because of its unique coiled geometric configuration, starch reacts with
iodine to produce a blue-black color and tests positive. A yellowish-brown color indicates that the
test is negative.
Materials: onion juice, potato juice, glucose solution, starch solution, water, iodine solution,
6 barrel pipettes, 5 test tubes, 5 toothpicks
Procedure:
1. Using 5 barrel pipettes, filled test tube #1 with onion juice, the second with potato juice, the
third with water, the fourth with glucose solution, and the fifth with starch solution.
2. Added 3 drops of iodine solution to each test tube with a barrel pipette. Mixed with 5 different
toothpicks.
3. Observed the reactions and recorded them in the table below. Cleaned out the 5 test tubes.
Data:
Iodine Test Results
Discussion: The iodine test was successful. Potato juice and the starch solution were the only two
substances containing starch. Again, the glucose solution and onion juice contain glucose, while
water contains neither starch nor glucose and was tested to make sure the test was done properly.
SUDAN III TEST
Introduction: The Sudan III test detects the hydrocarbon groups remaining in a molecule. Because
hydrocarbon groups are nonpolar, they cluster tightly together away from their polar surroundings;
this is called a hydrophobic interaction, and it is the basis for the Sudan III test. If the end
result is a visible orange, the test is positive.
Materials: scissors, deionized water, margarine, Sudan III solution
Pre-Laboratory Exercise
You are given solutions containing: fructose, glucose, lactose, galactose,
ribose, ribulose, sucrose, and starch. Devise a scheme by which you can
systematically identify these compounds.
Procedure
Perform the following qualitative tests on 0.2 M solutions (unless otherwise
stated) of starch, sucrose, glucose, lactose, galactose, ribose, and ribulose.
Use the scheme you devised in the prelab section to identify an unknown
solution. The unknown will be one of the above solutions or a mixture of
two of the above solutions.
Test 1. Molisch Test for Carbohydrates
The Molisch test is a general test for the presence of
carbohydrates. Molisch reagent is a solution of alpha-naphthol in 95%
ethanol. This test is useful for identifying any compound which can be
dehydrated to furfural or hydroxymethylfurfural in the presence of H2SO4.
Furfural is derived from the dehydration of pentoses and pentosans, while
hydroxymethylfurfural is produced from hexoses and hexosans.
Oligosaccharides and polysaccharides are hydrolyzed by the acid to yield their
monomers. The alpha-naphthol reacts with the cyclic
aldehydes to form purple-colored condensation products. Although
this test will detect compounds other than carbohydrates (e.g.,
glycoproteins), a negative result indicates the ABSENCE of carbohydrates.
Method: Add 2 drops of Molisch reagent to 2 ml of the sugar solution and
mix thoroughly. Incline the tube, and GENTLY pour 5 ml of concentrated
H2SO4 down the side of the test tube. A purple color at the interface of the
sugar and acid indicates a positive test. Disregard a green color if it
appears.
Test 2. Benedict's Test for Reducing Sugars
Alkaline solutions of copper are reduced by sugars having a free aldehyde
or ketone group, with the formation of colored cuprous oxide. Benedict's
solution is composed of copper sulfate, sodium carbonate, and sodium
citrate (pH 10.5). The citrate will form soluble complex ions with Cu++,
preventing the precipitation of CuCO3 in alkaline solutions.
Method: Add 1 ml of the solution to be tested to 5 ml of Benedict's
solution, and shake each tube. Place the tube in a boiling water bath and
heat for 3 minutes. Remove the tubes from the heat and allow them to
cool. Formation of a green, red, or yellow precipitate is a positive test for
reducing sugars.
Test 3. Barfoed's Test for Monosaccharides
This reaction will detect reducing monosaccharides in the presence of
disaccharides. This reagent uses copper ions to detect reducing sugars in
an acidic solution. Barfoed's reagent is copper acetate in dilute acetic acid
(pH 4.6). Look for the same color changes as in Benedict's test.
Method: Add 1 ml of the solution to be tested to 3 ml of freshly prepared
Barfoed's reagent. Place test tubes into a boiling water bath and heat for 3
minutes. Remove the tubes from the bath and allow to cool. Formation of a
green, red, or yellow precipitate is a positive test for reducing
monosaccharides. Do not heat the tubes longer than 3 minutes, as a
positive test can be obtained with disaccharides if they are heated long
enough.
Test 4. Lasker and Enkelwitz Test for Ketoses
The Lasker and Enkelwitz test utilizes Benedict's solution, although the
reaction is carried out at a much lower temperature. The color changes that
are seen during this test are the same as with Benedict's solution.
Use DILUTE sugar solutions with this test (0.02 M).
Method: Add 1 ml of the solution to be tested to 5 ml of Benedict's solution
to a test tube and mix well. The test tube is heated in a 55°C water bath for
10-20 minutes. Ketopentoses demonstrate a positive reaction within 10
minutes, while ketohexoses take about 20 minutes to react. Aldoses do
not react positively with this test.
Test 5. Bial's Test for Pentoses
Bial's reagent uses orcinol, HCl, and FeCl3. Orcinol forms colored
condensation products with furfural generated by the dehydration of
pentoses and pentosans. It is necessary to use DILUTE sugar solutions
with this test (0.02 M).
Method: Add 2 ml of the solution to be tested to 5 ml of Bial's reagent.
Gently heat the tube to boiling. Allow the tube to cool. Formation of a
green colored solution or precipitate denotes a positive reaction.
Test 6. Mucic Acid Test for Galactose
Oxidation of most monosaccharides by nitric acid yields soluble dicarboxylic
acids. However, oxidation of galactose yields an insoluble mucic acid.
Lactose will also yield a mucic acid, due to hydrolysis of the glycosidic
linkage between its glucose and galactose subunits.
Method: Add 1 ml of concentrated nitric acid to 5 ml of the solution to be
tested and mix well. Heat on a boiling water bath until the volume of the
solution is reduced to about 1 ml. Remove the mixture from the water bath
and let it cool at room temperature overnight. The presence of insoluble
crystals in the bottom of the tube indicates the presence of mucic
acid. CAUTION: PERFORM THE REACTION UNDER A FUME HOOD.
Test 7. Iodine Test for Starch and Glycogen
The use of Lugol's iodine reagent (IKI) is useful to distinguish starch and
glycogen from other polysaccharides. Lugol's iodine yields a blue-black
color in the presence of starch. Glycogen reacts with Lugol's reagent to
give a brown-blue color. Other polysaccharides and monosaccharides
yield no color change; the test solution remains the characteristic
brown-yellow of the reagent. It is thought that starch and glycogen form
helical coils. Iodine atoms can then fit into the helices to form a starch-iodine or
glycogen-iodine complex. Starch, in the form of amylose and amylopectin,
has fewer branches than glycogen. This means that the helices of starch are
longer than those of glycogen and therefore bind more iodine atoms. The result is
that the color produced by a starch-iodine complex is more intense than
that obtained with a glycogen-iodine complex.
Method: Add 2-3 drops of Lugol's iodine solution to 5 ml of solution to be
tested. Starch gives a blue-black color. A positive test for glycogen is a
brown-blue color. A negative test is the brown-yellow color of
the test reagent.
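One way to tackle the pre-laboratory exercise is to order Tests 1-7 into a decision tree. The Python sketch below encodes one such scheme; the ordering of tests and the function name are the assumptions of this sketch, not prescribed by the text, and each argument is the boolean outcome of the named test.

```python
def identify_sugar(iodine, molisch, benedict, bial, ketose_10min,
                   ketose_20min, barfoed, mucic):
    """Identify one unknown among starch, sucrose, ribose, ribulose,
    galactose, lactose, fructose, and glucose, using Tests 1-7.
    Each parameter is True for a positive result of that test."""
    if iodine:                      # Test 7: blue-black colour
        return "starch"
    if not molisch:                 # Test 1: negative rules out carbohydrates
        return "not a carbohydrate"
    if not benedict:                # Test 2: non-reducing sugar
        return "sucrose"
    if bial:                        # Test 5: pentose
        return "ribulose" if ketose_10min else "ribose"   # Test 4, 10 min
    if mucic:                       # Test 6: galactose unit present
        return "galactose" if barfoed else "lactose"      # Test 3: monosaccharide
    if ketose_20min:                # Test 4, 20 min: ketohexose
        return "fructose"
    return "glucose"

# Example: Molisch and Benedict positive, everything else negative
print(identify_sugar(False, True, True, False, False, False, False, False))  # glucose
```

A mixture of two solutions, as allowed for the unknown, would need the tests to be read jointly rather than down a single branch.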
7. What are the forces that stabilise the tertiary and quaternary structures of proteins?
Biomolecular structure is the structure of biomolecules, mainly proteins and the nucleic
acids DNA and RNA. The structure of these molecules is frequently decomposed into primary
structure, secondary structure, tertiary structure, and quaternary structure. The scaffold for this
structure is provided by secondary structural elements which are hydrogen bonds within the
molecule. This leads to several recognizable "domains" of protein structure and nucleic acid
structure, including secondary structure like hairpin loops, bulges and internal loops for nucleic
acids, and alpha helices and beta sheets for proteins.
The terms primary, secondary, tertiary, and quaternary structure were first coined by Kaj Ulrik
Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
In biochemistry, the primary structure of a biological molecule is the exact specification of its
atomic composition and the chemical bonds connecting those atoms (including stereochemistry).
For a typical unbranched, un-crosslinked biopolymer (such as a molecule of DNA, RNA or a typical
intracellular protein), the primary structure is equivalent to specifying the sequence of
its monomeric subunits, e.g., the nucleotide or peptide sequence.
Primary structure is sometimes mistakenly termed primary sequence, but there is no such term,
nor any parallel concept of secondary or tertiary sequence. By convention, the primary
structure of a protein is reported starting from the amino-terminal (N) end to the carboxyl-terminal
(C) end, while the primary structure of DNA or RNA molecule is reported from the 5' end to the 3'
end.
The primary structure of a nucleic acid molecule refers to the exact sequence of nucleotides that
comprise the whole molecule. Frequently the primary structure encodes motifs that are of
functional importance. Some examples of sequence motifs are: the C/D[1] and H/ACA
boxes[2] of snoRNAs, Sm binding site found in spliceosomal RNAs such
as U1, U2, U4, U5, U6, U12 and U3, the Shine-Dalgarno sequence,[3] the Kozak consensus
sequence[4] and the RNA polymerase III terminator.[5]
Secondary structure
In biochemistry and structural biology, secondary structure is the general three-dimensional
form of local segments of biopolymers such as proteinsand nucleic acids (DNA/RNA). It does not,
however, describe specific atomic positions in three-dimensional space, which are considered to
be tertiary structure. Secondary structure is formally defined by the hydrogen bonds of the
biopolymer, as observed in an atomic-resolution structure. In proteins, the secondary structure is
defined by patterns of hydrogen bonds between backbone amide and carbonyl groups (sidechain-mainchain and sidechain-sidechain hydrogen bonds are irrelevant), where the DSSP definition of
a hydrogen bond is used. In nucleic acids, the secondary structure is defined by the hydrogen
bonding between the nitrogenous bases.
For proteins, however, the hydrogen bonding is correlated with other structural features, which
has given rise to less formal definitions of secondary structure. For example, residues in protein
helices generally adopt backbone dihedral angles in a particular region of the Ramachandran
plot; thus, a segment of residues with such dihedral angles is often called a "helix", regardless of
whether it has the correct hydrogen bonds. Many other less formal definitions have been
proposed, often applying concepts from the differential geometry of curves, such
as curvature and torsion. Least formally, structural biologists solving a new atomic-resolution
structure will sometimes assign its secondary structure "by eye" and record their assignments in
the corresponding PDB file.
The secondary structure of a nucleic acid molecule refers to the base-pairing interactions within a
single molecule or set of interacting molecules. The secondary structure of biological RNAs can
often be uniquely decomposed into stems and loops. Frequently these elements, or combinations
of them, can be further classified, for example, tetraloops, pseudoknots and stem-loops. There
are many secondary structure elements of functional importance to biological RNAs; some
famous examples are the Rho-independent terminator stem-loops and the tRNA cloverleaf. There
is a minor industry of researchers attempting to determine the secondary structure of RNA
molecules. Approaches include both experimental and computational methods (see also the List
of RNA structure prediction software).
Tertiary structure
Main articles: Protein tertiary structure and Nucleic acid tertiary structure
In biochemistry and molecular biology, the tertiary structure of a protein or any
other macromolecule is its three-dimensional structure, as defined by the atomic
coordinates.[6] Proteins and nucleic acids are capable of diverse functions ranging from molecular
recognition to catalysis. Such functions require a precise three-dimensional tertiary structure.
While such structures are diverse and seemingly complex, they are composed of recurring, easily
recognizable tertiary structure motifs that serve as molecular building blocks. Tertiary structure is
considered to be largely determined by the biomolecule's primary structure, or the sequence
of amino acids or nucleotides of which it is composed. Efforts to predict tertiary structure from the
primary structure are known generally as structure prediction.
Quaternary structure
Main articles: Protein quaternary structure and Nucleic acid quaternary structure
In biochemistry, quaternary structure is the arrangement of multiple folded protein or coiling
protein molecules in a multi-subunit complex. For nucleic acids, the term is less common, but can
refer to the higher-level organization of DNA in chromatin,[7] including its interactions
with histones, or to the interactions between separate RNA units in
the ribosome[8][9] or spliceosome.
Structure determination
Main articles: Protein structure determination and Nucleic acid structure determination
Structure probing is the process by which biochemical techniques are used to determine
biomolecular structure.[10] This analysis can be used to define the patterns from which
molecular structure can be inferred, to support experimental analysis of molecular structure and
function, and to further the development of small molecules for biological research.[11] Structure
probing analysis can be done through many different methods, which include chemical probing,
hydroxyl radical probing, nucleotide analog interference mapping (NAIM), and in-line probing.
DNA structures can be determined using either nuclear magnetic resonance spectroscopy or
X-ray crystallography. The first published reports of A-DNA X-ray diffraction patterns (and also
B-DNA) employed analyses based on Patterson transforms that provided only a limited amount of
structural information for oriented fibers of DNA isolated from calf thymus.[12][13] An alternate
analysis was then proposed by Wilkins et al. in 1953 for B-DNA X-ray diffraction/scattering
patterns of hydrated, bacterial oriented DNA fibers and trout sperm heads in terms of squares
of Bessel functions.[14] Although the 'B-DNA form' is most common under the conditions found in
cells,[15] it is not a well-defined conformation but a family or fuzzy set of DNA conformations that
occur at the high hydration levels present in a wide variety of living cells.[16] Their corresponding
X-ray diffraction & scattering patterns are characteristic of molecular paracrystals with a
significant degree of disorder (>20%),[17][18] and concomitantly the structure is not tractable using
only the standard analysis.
On the other hand, the standard analysis, involving only Fourier transforms of Bessel
functions[19] and DNA molecular models, is still routinely employed for the analysis of A-DNA and
Z-DNA X-ray diffraction patterns.[20]
Structure prediction
Main articles: Protein structure prediction and Nucleic acid structure prediction
Biomolecular structure prediction is the prediction of the three-dimensional structure of
a protein from its amino acid sequence, or of a nucleic acid from its base sequence. In other
words, it is the prediction of secondary and tertiary structure from its primary structure. Structure
prediction is the inverse of biomolecular design.
Protein structure prediction is one of the most important goals pursued
by bioinformatics and theoretical chemistry. Protein structure prediction is of high importance
in medicine (for example, in drug design) and biotechnology (for example, in the design of
novel enzymes). Every two years, the performance of current methods is assessed in
the CASP experiment.
There has also been a significant amount of bioinformatics research directed at the RNA structure
prediction problem. A common problem for researchers working with RNA is to determine the
three-dimensional structure of the molecule given just the nucleic acid sequence. However, in the
case of RNA much of the final structure is determined by the secondary structure or intramolecular base-pairing interactions of the molecule. This is shown by the high conservation
of base-pairings across diverse species.
Secondary structure of small nucleic acid molecules is largely determined by strong, local
interactions such as hydrogen bonds and base stacking. Summing the free energy for such
interactions, usually using a nearest-neighbor model, provides an approximation for the stability
of a given structure.[21] The most straightforward way to find the lowest free energy structure would
be to generate all possible structures and calculate the free energy for each, but the number of
possible structures for a sequence increases exponentially with the length of the nucleic
acid.[22] For longer molecules, the number of possible secondary structures is enormous. [21]
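The free-energy summation described above can be sketched with a toy nearest-neighbor model. The dinucleotide steps and stacking values below are illustrative placeholders, not published thermodynamic parameters:

```python
# Toy nearest-neighbor stability estimate for a short RNA helix.
# Stacking free energies (kcal/mol) are illustrative placeholders,
# not measured parameters.
STACK_ENERGY = {
    ("GC", "CG"): -3.4,  # energy of stacking a C-G pair on a G-C pair
    ("CG", "AU"): -2.1,
    ("AU", "UA"): -1.1,
}

def helix_free_energy(pairs):
    """Sum one stacking term per adjacent pair of base pairs.

    `pairs` lists base pairs 5'->3' along one strand, e.g. ["GC", "CG", "AU"].
    Steps missing from the table contribute 0.0 in this sketch.
    """
    return sum(STACK_ENERGY.get(step, 0.0) for step in zip(pairs, pairs[1:]))
```

A more negative sum approximates a more stable helix; real predictors minimise this quantity over all candidate secondary structures rather than scoring a single helix.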
Sequence covariation methods rely on the existence of a data set composed of
multiple homologous RNA sequences with related but dissimilar sequences. These methods
analyze the covariation of individual base sites in evolution; maintenance at two widely separated
sites of a pair of base-pairing nucleotides indicates the presence of a structurally required
hydrogen bond between those positions. The general problem of pseudoknot prediction has been
shown to be NP-complete.[23]
Design
Main articles: Protein design and Nucleic acid design
Biomolecular design can be considered the inverse of structure prediction. In structure prediction,
the structure is determined from a known sequence, while in nucleic acid design, a sequence is
generated which will form a desired structure.
8. Which of the following is used as neutron absorber in nuclear reactors?
9. Explain the terms utility, value, wealth and consumption.
In economics, the marginal utility of a good or service is the utility gained (or lost) from an
increase (or decrease) in the consumption of that good or service. Economists sometimes speak
of a law of diminishing marginal utility, meaning that the first unit of consumption of a good or
service yields more utility than the second and subsequent units. [citation needed]
The concept of marginal utility played a crucial role in the marginal revolution of the late 19th
century, and led to the replacement of the labor theory of value by neoclassical value theory in
which the relative prices of goods and services are simultaneously determined by marginal rates
of substitution in consumption and marginal rates of transformation in production, which are equal
in economic equilibrium.
The term marginal refers to a small change, starting from some baseline level. As Philip
Wicksteed explained the term,
"Marginal considerations are considerations which concern a slight increase or diminution
of the stock of anything which we possess or are considering"[1]
Frequently the marginal change is assumed to start from the endowment, meaning the total
resources available for consumption (see Budget constraint). This endowment is determined
by many things including physical laws (which constrain how forms of energy and matter may
be transformed), accidents of nature (which determine the presence of natural resources),
and the outcomes of past decisions made both by others and by the individual himself or
herself.
For reasons of tractability, it is often assumed in neoclassical analysis that goods and
services are continuously divisible. Under this assumption, marginal concepts, including
marginal utility may be expressed in terms of differential calculus. Marginal utility can be
defined as a measure of relative satisfaction gained or lost from an increase or decrease in
the consumption of that good or service.
However, strictly speaking, the smallest relevant division may be quite large. Frequently,
economic analysis concerns the marginal values associated with a change of one unit of a
discrete good or service, such as a motor vehicle or a haircut.
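The continuous and discrete views just described can be compared with a small sketch. The concave utility function U(x) = √x is an illustrative assumption, not a canonical choice:

```python
import math

def total_utility(x):
    # Illustrative concave utility function; any increasing concave
    # function exhibits the same diminishing pattern.
    return math.sqrt(x)

def marginal_utility_discrete(x):
    # Utility gained from the x-th whole unit: U(x) - U(x - 1).
    return total_utility(x) - total_utility(x - 1)

def marginal_utility_continuous(x):
    # dU/dx = 1 / (2 * sqrt(x)) for U(x) = sqrt(x).
    return 1.0 / (2.0 * math.sqrt(x))
```

Both measures fall as x grows, which is the diminishing pattern discussed in the next section; the discrete version matches the motor-vehicle or haircut case where the smallest relevant division is one unit.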
Utility
Main article: Utility
Different concepts of utility underlie different theories in which marginal utility plays a role. It
has been common among economists to describe utility as if it were quantifiable, that is, as if
different levels of utility could be compared along a numerical scale.[2][3] This has significantly
affected the development and reception of theories of marginal utility. Concepts of utility that
entail quantification allow familiar arithmetic operations, and further assumptions of continuity
and differentiability greatly increase tractability.
Contemporary mainstream economic theory frequently defers metaphysical questions, and
merely notes or assumes that preference structures conforming to certain rules can be
usefully proxied by associating goods, services, or uses thereof with quantities,
and defines “utility” as such a quantification.[4]
Another conception is Benthamite philosophy, which equated usefulness with the production
of pleasure and avoidance of pain,[5] assumed subject to arithmetic operation.[6] British
economists, under the influence of this philosophy (especially by way of John Stuart Mill),
viewed utility as “the feelings of pleasure and pain”[7] and further as a “quantity of feeling”
(emphasis added).[8]
Though generally pursued outside of the mainstream methods, there are conceptions of
utility that do not rely on quantification. For example, the Austrian school generally attributes
value to the satisfaction of needs,[9][10][11] and sometimes rejects even the possibility of
quantification.[12] It has been argued that the Austrian framework makes it possible to
consider rational preferences that would otherwise be excluded.[10]
In any standard framework, the same object may have different marginal utilities for different
people, reflecting different preferences or individual circumstances.[13]
Diminishing marginal utility
The law concerns the utility one receives from additions to a stock one already holds: "The law of
diminishing marginal utility is at the heart of the explanation of numerous economic
phenomena, including time preference and the value of goods. . . . The law says, first, that the
marginal utility of each (homogenous) unit decreases as the supply of units increases (and
vice versa); second, that the marginal utility of a larger-sized unit is greater than the marginal
utility of a smaller-sized unit (and vice versa). The first law denotes the law of diminishing
marginal utility, the second law the law of increasing total utility."[14]
An individual will typically be able to partially order the potential uses of a good or service.
For example, a ration of water might be used to sustain oneself, a dog, or a rose bush. Say
that a given person gives her own sustenance highest priority, the dog's next, and the roses'
last. In that case, with three rations of water, the marginal utility of a third unit would be that
of watering the roses.
(The diminishing of marginal utility should not necessarily be taken to be itself
an arithmetic subtraction. It may be no more than a purely ordinal change.[10][11])
The notion that marginal utilities are diminishing across the ranges relevant to decision-making
is called "the law of diminishing marginal utility" (also known as "Gossen's First
Law"). However, it will not always hold. The case of the person, dog, and roses is one in
which potential uses operate independently—there is no complementarity across the three
uses. Sometimes an amount added brings things past a desired tipping point, or an amount
subtracted causes them to fall short. In such cases, the marginal utility of a good or service
might actually be increasing. For example:
- bed sheets, which up to some number may only provide warmth, but after that point may
allow one to effect an escape by being tied together into a rope;
- tickets, for travel or theatre, where a second ticket might allow one to take a date on an
otherwise uninteresting outing;
- dosages of antibiotics, where having too few pills would leave bacteria with greater
resistance, but a full supply could effect a cure.
The fact that a tipping point may be reached does not imply that marginal utility will continue
to increase indefinitely thereafter. For example, beyond some point, further doses of
antibiotics would kill no pathogens at all, and might even become harmful to the body. Simply
put, as the rate of commodity consumption increases, marginal utility decreases. If
commodity consumption continues to rise, marginal utility at some point falls to zero,
reaching maximum total utility. Further increase in consumption of units of commodities
causes marginal utility to become negative; this signifies dissatisfaction.
Independence from presumptions of self-interested behavior
While the above example of water rations conforms to ordinary notions of self-interested
behavior, the concept and logic of marginal utility are independent of the presumption that
people pursue self-interest.[15] For example, a different person might give highest priority to
the rose bush, next highest to the dog, and last to himself. In that case, if the individual has
three rations of water, then the marginal utility of any one of those rations is that of watering
the person. With just two rations, the person is left unwatered and the marginal utility of either
ration is that of watering the dog. Likewise, a person could give highest priority to the needs
of one of her neighbors, next to another, and so forth, placing her own welfare last; the
concept of diminishing marginal utility would still apply.
Marginalist theory
Marginalism explains choice with the hypothesis that people decide whether to effect any
given change based on the marginal utility of that change, with rival alternatives being
chosen based upon which has the greatest marginal utility.
Market price and diminishing marginal utility
If an individual has a stock or flow of a good or service whose marginal utility is less than
would be that of some other good or service for which he or she could trade, then it is in his
or her interest to effect that trade. Of course, as one thing is traded-away and another is
acquired, the respective marginal gains or losses from further trades are now changed. On
the assumption that the marginal utility of one is diminishing, and the other is not increasing,
all else being equal, an individual will demand an increasing ratio of that which is acquired to
that which is sacrificed. (One important way in which all else might not be equal is when the
use of the one good or service complements that of the other. In such cases, exchange ratios
might be constant.[10]) If any trader can better his or her own marginal position by offering a
trade more favorable to complementary traders, then he or she will do so.
In an economy with money, the marginal utility of a quantity is simply that of the best good or
service that it could purchase.
Hence, the “law” of diminishing marginal utility provides an explanation for
diminishing marginal rates of substitution and thus for the “laws” of supply and demand, as
well as essential aspects of models of “imperfect” competition.
The paradox of water and diamonds
Main article: Paradox of value
The “law” of diminishing marginal utility is said to explain the “paradox of water and
diamonds”, most commonly associated with Adam Smith[16] (though recognized by earlier
thinkers).[17] Human beings cannot even survive without water, whereas diamonds are mere
ornamentation or engraving bits. Yet water had a very low price, and diamonds a very high
price, by any normal measure. Marginalists explained that it is the marginal usefulness of any
given quantity that determines its price, rather than the usefulness of a class or of a totality.
For most people, water was sufficiently abundant that the loss or gain of a gallon would
withdraw or add only some very minor use if any; whereas diamonds were in much more
restricted supply, so that the lost or gained use would be much greater.
That is not to say that the price of any good or service is simply a function of the marginal
utility that it has for any one individual nor for some ostensibly typical individual. Rather,
individuals are willing to trade based upon the respective marginal utilities of the goods that
they have or desire (with these marginal utilities being distinct for each potential trader), and
prices thus develop constrained by these marginal utilities.
The “law” does not tell us such things as why diamonds are naturally less abundant on the
earth than is water, but helps us to understand how this affects the value imputed to a given
diamond and the price of diamonds in a market.
Section B
Answer any THREE questions from Section B. Each question carries 10 marks.
10. What is meant by elasticity of demand ? Discuss.
Price elasticity of demand (PED or Ed) is a measure used in economics to show the
responsiveness, or elasticity, of the quantity demanded of a good or service to a change in its
price. More precisely, it gives the percentage change in quantity demanded in response to a one
percent change in price (holding constant all the other determinants of demand, such as income).
It was devised by Alfred Marshall.
Price elasticities are almost always negative, although analysts tend to ignore the sign even
though this can lead to ambiguity. Only goods which do not conform to the law of demand, such
as Veblen and Giffen goods, have a positive PED. In general, the demand for a good is said to
be inelastic (or relatively inelastic) when the PED is less than one (in absolute value): that is,
changes in price have a relatively small effect on the quantity of the good demanded. The
demand for a good is said to be elastic (or relatively elastic) when its PED is greater than one (in
absolute value): that is, changes in price have a relatively large effect on the quantity of a good
demanded.
Revenue is maximized when price is set so that the PED is exactly one. The PED of a good can
also be used to predict the incidence (or "burden") of a tax on that good. Various research
methods are used to determine price elasticity, including test markets, analysis of historical sales
data and conjoint analysis.
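The claim that revenue peaks where the PED is exactly one can be checked on a hypothetical linear demand curve; the coefficients below are arbitrary, chosen only for illustration:

```python
# Illustrative linear demand curve Q = a - b*P; revenue P*Q should peak
# exactly where |PED| = 1.
a, b = 100.0, 2.0

def quantity(p):
    return a - b * p

def point_ped(p):
    # (dQ/dP) * P / Q, with dQ/dP = -b for linear demand.
    return -b * p / quantity(p)

def revenue(p):
    return p * quantity(p)

p_unit_elastic = a / (2 * b)  # price at which |PED| = 1
```

For a linear curve the unit-elastic price is the midpoint a/(2b); below it demand is inelastic (a price rise raises revenue), above it elastic (a price rise lowers revenue).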
PED is a measure of responsiveness of the quantity of a good or service demanded to changes
in its price.[1] The formula for the coefficient of price elasticity of demand for a good is:[2][3][4]

Ed = (% change in quantity demanded) / (% change in price) = (ΔQd/Qd) / (ΔP/P)
The above formula usually yields a negative value, due to the inverse nature of the
relationship between price and quantity demanded, as described by the "law of
demand".[3] For example, if the price increases by 5% and quantity demanded decreases by
5%, then the elasticity at the initial price and quantity = −5%/5% = −1. The only classes of
goods which have a PED of greater than 0 are Veblen and Giffen goods.[5] Because the PED
is negative for the vast majority of goods and services, however, economists often refer to
price elasticity of demand as a positive value (i.e., in absolute value terms).[4]
This measure of elasticity is sometimes referred to as the own-price elasticity of demand for a
good, i.e., the elasticity of demand with respect to the good's own price, in order to
distinguish it from the elasticity of demand for that good with respect to the change in the
price of some other good, i.e., a complementary or substitute good.[1] The latter type of
elasticity measure is called a cross-price elasticity of demand.[6][7]
As the difference between the two prices or quantities increases, the accuracy of the PED
given by the formula above decreases for a combination of two reasons. First, the PED for a
good is not necessarily constant; as explained below, PED can vary at different points along
the demand curve, due to its percentage nature.[8][9] Elasticity is not the same thing as
the slope of the demand curve, which is dependent on the units used for both price and
quantity.[10][11] Second, percentage changes are not symmetric; instead, the percentage
change between any two values depends on which one is chosen as the starting value and
which as the ending value. For example, if quantity demanded increases from 10 units to 15
units, the percentage change is 50%, i.e., (15 − 10) ÷ 10 (converted to a percentage). But if
quantity demanded decreases from 15 units to 10 units, the percentage change is −33.3%,
i.e., (10 − 15) ÷ 15.[12][13]
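The asymmetry in the 10-to-15-unit example can be checked directly:

```python
def pct_change(start, end):
    # Percentage change relative to the chosen starting value.
    return (end - start) / start * 100.0

# The same pair of quantities yields different magnitudes depending on
# which value is treated as the starting point.
up = pct_change(10, 15)    # +50%
down = pct_change(15, 10)  # about -33.3%
```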
Two alternative elasticity measures avoid or minimise these shortcomings of the basic
elasticity formula: point-price elasticity and arc elasticity.
Point-price elasticity
One way to avoid the accuracy problem described above is to minimise the difference
between the starting and ending prices and quantities. This is the approach taken in the
definition of point-price elasticity, which uses differential calculus to calculate the elasticity for
an infinitesimal change in price and quantity at any given point on the demand curve:[14]

Ed = (dQd/dP) × (P/Qd)
In other words, it is equal to the absolute value of the first derivative of quantity with
respect to price (dQd/dP) multiplied by the point's price (P) divided by its quantity (Qd).[15]
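A minimal numeric sketch of this definition, using a hypothetical linear demand curve and a central finite difference in place of the analytic derivative:

```python
def point_elasticity(demand, p, dp=1e-6):
    """Approximate (dQ/dP) * P / Q(P) for any demand function Q(P)."""
    dq_dp = (demand(p + dp) - demand(p - dp)) / (2 * dp)
    return dq_dp * p / demand(p)

# Hypothetical demand Q = 60 - 3P: at P = 10, Q = 30 and dQ/dP = -3,
# so the point elasticity is -3 * 10 / 30 = -1.
linear_demand = lambda p: 60.0 - 3.0 * p
```

Note how the elasticity varies along the same straight demand curve even though its slope is constant, which is the distinction drawn above between elasticity and slope.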
In terms of partial-differential calculus, point-price elasticity of demand can be defined as
follows:[16] let x(p, w) be the demand for goods as a function of the parameters price and
wealth, and let xℓ(p, w) be the demand for good ℓ. The elasticity of demand for good ℓ with
respect to price pk is

εxℓ,pk = (∂xℓ(p, w)/∂pk) × (pk/xℓ(p, w))

However, the point-price elasticity can be computed only if the formula for the demand
function, Qd = f(P), is known, so that its derivative with respect to price, dQd/dP, can be
determined.
Arc elasticity
A second solution to the asymmetry problem of having a PED dependent on which
of the two given points on a demand curve is chosen as the "original" point and
which as the "new" one is to compute the percentage change in P and Q relative to
the average of the two prices and the average of the two quantities, rather than just
the change relative to one point or the other. Loosely speaking, this gives an
"average" elasticity for the section of the actual demand curve—i.e., the arc of the
curve—between the two points. As a result, this measure is known as the arc
elasticity, in this case with respect to the price of the good. The arc elasticity is
defined mathematically as:[13][17][18]

Ed = [(Q2 − Q1)/((Q1 + Q2)/2)] / [(P2 − P1)/((P1 + P2)/2)]
This method for computing the price elasticity is also known as the "midpoints
formula", because the average price and average quantity are the coordinates
of the midpoint of the straight line between the two given points. [12][18] However,
because this formula implicitly assumes the section of the demand curve
between those points is linear, the greater the curvature of the actual demand
curve is over that range, the worse this approximation of its elasticity will
be.[17][19]
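The midpoints formula can be sketched as follows; the two price-quantity points are hypothetical:

```python
def arc_elasticity(p1, q1, p2, q2):
    # Percentage changes are taken relative to the averages of the two
    # prices and the two quantities (the "midpoints formula").
    pct_q = (q2 - q1) / ((q1 + q2) / 2.0)
    pct_p = (p2 - p1) / ((p1 + p2) / 2.0)
    return pct_q / pct_p

# Unlike the basic formula, the result does not depend on which point is
# treated as the "original" one.
forward = arc_elasticity(10.0, 15.0, 15.0, 10.0)
backward = arc_elasticity(15.0, 10.0, 10.0, 15.0)
```

This symmetry is exactly the property that motivates the arc measure in the discussion above.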
History
The illustration that accompanied Marshall's original definition of PED, the ratio of PT to Pt
Together with the concept of an economic "elasticity" coefficient, Alfred
Marshall is credited with defining PED ("elasticity of demand") in his
book Principles of Economics, published in 1890.[20] He described it thus: "And
we may say generally:— the elasticity (or responsiveness) of demand in a
market is great or small according as the amount demanded increases much or
little for a given fall in price, and diminishes much or little for a given rise in
price".[21] He reasons this since "the only universal law as to a person's desire
for a commodity is that it diminishes... but this diminution may be slow or rapid.
If it is slow... a small fall in price will cause a comparatively large increase in his
purchases. But if it is rapid, a small fall in price will cause only a very small
increase in his purchases. In the former case... the elasticity of his wants, we
may say, is great. In the latter case... the elasticity of his demand is
small."[22] Mathematically, the Marshallian PED was based on a point-price
definition, using differential calculus to calculate elasticities. [23]
Determinants
The overriding factor in determining PED is the willingness and ability of
consumers after a price change to postpone immediate consumption decisions
concerning the good and to search for substitutes ("wait and look"). [24] A number
of factors can thus affect the elasticity of demand for a good:[25]
- Availability of substitute goods: the more and closer the substitutes available, the higher
the elasticity is likely to be, as people can easily switch from one good to another if even a
minor price change is made;[25][26][27] there is a strong substitution effect.[28] If no close
substitutes are available, the substitution effect will be small and the demand inelastic.[28]
- Breadth of definition of a good: the broader the definition of a good (or service), the lower
the elasticity. For example, Company X's fish and chips would tend to have a relatively high
elasticity of demand if a significant number of substitutes are available, whereas food in
general would have an extremely low elasticity of demand because no substitutes exist.[29]
- Percentage of income: the higher the percentage of the consumer's income that the
product's price represents, the higher the elasticity tends to be, as people will pay more
attention when purchasing the good because of its cost;[25][26] the income effect is
substantial.[30] When the goods represent only a negligible portion of the budget the income
effect will be insignificant and demand inelastic.[30]
- Necessity: the more necessary a good is, the lower the elasticity, as people will attempt to
buy it no matter the price, such as the case of insulin for those that need it.[10][26]
- Duration: for most goods, the longer a price change holds, the higher the elasticity is likely
to be, as more and more consumers find they have the time and inclination to search for
substitutes.[25][27] When fuel prices increase suddenly, for instance, consumers may still fill
up their empty tanks in the short run, but when prices remain high over several years, more
consumers will reduce their demand for fuel by switching to carpooling or public
transportation, investing in vehicles with greater fuel economy or taking other measures.[26]
This does not hold for consumer durables such as the cars themselves, however; eventually,
it may become necessary for consumers to replace their present cars, so one would expect
demand to be less elastic.[26]
- Brand loyalty: an attachment to a certain brand—either out of tradition or because of
proprietary barriers—can override sensitivity to price changes, resulting in more inelastic
demand.[29][31]
- Who pays: where the purchaser does not directly pay for the good they consume, such as
with corporate expense accounts, demand is likely to be more inelastic.
11. What are the features of a partnership? Explain.
In the commercial and legal parlance of most countries, a general partnership (the basic form
of partnership under common law), refers to an association of persons or an unincorporated
company with the following major features:
- Created by agreement, proof of existence and estoppel.
- Formed by two or more persons.
- The owners are all personally liable for any legal actions and debts the company may face.
- It is a partnership in which partners share equally in both responsibility and liability.
Partnerships have certain default characteristics relating to both (a) the relationship between the
individual partners and (b) the relationship between the partnership and the outside world. The
former can generally be overridden by agreement between the partners, whereas the latter
generally cannot be.
The assets of the business are owned on behalf of all the partners, and they are each
personally liable, jointly and severally, for business debts, taxes or tortious liability. For example, if
a partnership defaults on a payment to a creditor, the partners' personal assets are subject to
attachment and liquidation to pay the creditor.
By default, profits are shared equally amongst the partners. However, a partnership agreement
will almost invariably expressly provide for the manner in which profits and losses are to be
shared.
Each general partner is deemed the agent of the partnership. Therefore, if that partner is
apparently carrying on partnership business, all general partners can be held liable for his
dealings with third persons.
By default a partnership will terminate upon the death, disability, or even withdrawal of any one
partner. However, most partnership agreements provide for these types of events, with the share
of the departed partner usually being purchased by the remaining partners in the partnership.
By default, each general partner has an equal right to participate in the management and control
of the business. Disagreements in the ordinary course of partnership business are decided by a
majority of the partners, and disagreements of extraordinary matters and amendments to the
partnership agreement require the consent of all partners. However, in a partnership of any size
the partnership agreement will provide for certain electees to manage the partnership along the
lines of a company board.
Unless otherwise provided in the partnership agreement, no one can become a member of the
partnership without the consent of all partners, though a partner may assign his share of the
profits and losses and right to receive distributions ("transferable interest"). A
partner's judgment creditor may obtain an order charging the partner's "transferable interest" to
satisfy a judgment.
Separate legal personality
There has been considerable debate in most states as to whether a partnership should remain
aggregate or be allowed to become a business entity with a separate continuing legal personality.
In the United States, section 201 of the Revised Uniform Partnership Act (RUPA) of 1994
provides that "A partnership is an entity distinct from its partners."
In England and Wales, a partnership does not have separate legal personality. Although the
English & Welsh Law Commission in Report 283 [1] proposed to amend the law to create
separate personality for all general partnerships, the British government decided not to implement
the proposals relating to general partnerships. The Limited Liability Partnerships Act 2000 confers
separate personality on limited liability partnerships—separating them almost entirely from
general partnerships and limited partnerships, despite the naming similarities.
In Scotland partnerships do have some degree of legal personality.
While France, Luxembourg, Norway, the Czech Republic and Sweden also grant some degree of
legal personality to business partnerships, other countries such
as Belgium, Germany, Italy,Switzerland and Poland do not allow partnerships to acquire a
separate legal personality, but permit partnerships the rights to sue and be sued, to hold property,
and to postpone a creditor's lawsuit against the partners until he or she has exhausted
all remedies against the partnership assets.
In December 2002 the Netherlands proposed to replace their general partnership, which does not
have legal personality, with a public partnership which allows the partners to opt for legal
personality.
Japanese law provides for Civil Code partnerships (組合 kumiai?), which have no legal
personality, and Commercial Code partnership corporations (持分会社 mochibun kaisha?) which
have full corporate personhood but otherwise function similarly to partnerships.
The two main consequences of allowing separate personality are that one partnership will be able
to become a partner in another partnership in the same way that a registered company can, and
a partnership will not be bound by the doctrine of ultra vires but will have unlimited legal capacity
like any other natural person.
12. Discuss the laws of returns and agents of production in detail.
Marshall began writing the Principles of Economics in 1881 and he spent much of the next
decade at work on the treatise. His plan for the work gradually extended to a two-volume
compilation on the whole of economic thought; the first volume was published in 1890 to
worldwide acclaim that established him as one of the leading economists of his time. The second
volume, which was to address foreign trade, money, trade fluctuations, taxation, and collectivism,
was never published at all. Over the next two decades he worked to complete his second volume
of the Principles, but his unyielding attention to detail and ambition for completeness prevented
him from mastering the work's breadth.
In economics, diminishing returns (also called diminishing marginal returns) is the decrease
in the marginal (per-unit) output of a production process as the amount of a single factor of
production is increased, while the amounts of all other factors of production stay constant.
The law of diminishing returns (also law of diminishing marginal returns or law of
increasing relative cost) states that in all productive processes, adding more of one factor of
production, while holding all others constant, will at some point yield lower per-unit returns.[1] The
law of diminishing returns does not imply that adding more of a factor will decrease
the total production, a condition known as negative returns, though in fact this is common.
For example, the use of fertilizer improves crop production on farms and in gardens; but at some
point, adding more and more fertilizer improves the yield less per unit of fertilizer, and excessive
quantities can even reduce the yield. A common sort of example is adding more workers to a job,
such as assembling a car on a factory floor. At some point, adding more workers causes problems
such as getting in each other's way, or workers frequently find themselves waiting for access to a
part. In all of these processes, producing one more unit of output per unit of time will eventually
cost increasingly more, due to inputs being used less and less effectively.
The law of diminishing returns is a fundamental principle of economics.[1] It plays a central role
in production theory.
The concept of diminishing returns can be traced back to the concerns of early economists such
as Johann Heinrich von Thünen, Turgot, Thomas Malthus and David Ricardo. However, classical
economists such as Malthus and Ricardo attributed the successive diminishment of output to the
decreasing quality of the inputs. Neoclassical economists assume that each "unit" of labor is
identical, i.e., perfectly homogeneous. Diminishing returns are due to the disruption of the entire
productive process as additional units of labor are added to a fixed amount of capital.
Karl Marx developed a version of the law of diminishing returns in his theory of the tendency of
the rate of profit to fall, described in Volume III of Capital.
[edit]Examples
[Figure: Seed yield gives diminishing returns to seeds planted in this hypothetical example;[2]
the slope of the response curve is continually decreasing.]
From ancient times until the Industrial Revolution, crop yields were sometimes expressed as
seeds harvested per seed planted. With some grains the yield is believed to have been six to
one, giving a net yield of five. Because plants compete with one another for moisture, minerals
and sunlight, planting too many seeds did not produce enough additional seed to offset the
additional seed planted plus the labor to cut and thresh the additional seed grain. Not planting
enough left scarce cropland underutilized. The seed drill gave more uniform seed spacing and
coverage and allowed higher yields for a given amount of seed. Even though today's yields are
several times higher, diminishing returns on seeding rates remain an important consideration in
farming.
There is an inverse relationship between returns of inputs and the cost of production. Suppose
that a kilogram of seed costs one dollar, and this price does not change; although there are other
costs, assume they do not vary with the amount of output and are therefore fixed costs. One
kilogram of seeds yields one ton of crop, so the first ton of the crop costs one extra dollar to
produce. That is, for the first ton of output, the marginal cost (MC) of the output is $1 per ton. If
there are no other changes, then if the second kilogram of seeds applied to land produces only
half the output of the first, the MC equals $1 per half ton of output, or $2 per ton. Similarly, if the
third kilogram produces only ¼ ton, then the MC equals $1 per quarter ton, or $4 per ton. Thus,
diminishing marginal returns imply increasing marginal costs. This also implies rising average
costs. In this numerical example, average cost rises from $1 for 1 ton to $2 for 1.5 tons to $3 for
1.75 tons, or approximately from 1 to 1.3 to 1.7 dollars per ton.
In this example, the marginal cost equals the extra amount of money spent on seed divided by
the extra amount of crop produced, while average cost is the total amount of money spent on
seeds divided by the total amount of crop produced.
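The arithmetic of the worked example can be checked directly. The sketch below reproduces the numbers from the text: each kilogram of seed costs $1, and successive kilograms yield 1, ½ and ¼ ton of crop.

```python
# Marginal output (tons of crop) from each successive kilogram of seed,
# taken from the worked example in the text.
marginal_output = [1.0, 0.5, 0.25]

total_cost = 0.0
total_output = 0.0
for extra_tons in marginal_output:
    total_cost += 1.0                  # each kilogram of seed costs $1
    total_output += extra_tons
    mc = 1.0 / extra_tons              # marginal cost, $ per ton
    ac = total_cost / total_output     # average cost, $ per ton
    print(f"output={total_output:.2f} t  MC=${mc:.2f}/t  AC=${ac:.2f}/t")
```

Running this confirms the figures in the text: marginal cost rises from $1 to $2 to $4 per ton, while average cost rises from $1.00 to about $1.33 to about $1.71 per ton.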
Cost can also be measured in terms of opportunity cost. In this case the law also applies to
societies; the opportunity cost of producing a single unit of a good generally increases as a
society attempts to produce more of that good. This explains the bowed-out shape of
the production possibilities frontier.
The marginal returns discussed apply to cases when only one of the many inputs is increased (for
example, the quantity of seed increases, but the amount of land remains constant). If all inputs
are increased in proportion, the result is generally constant or increased output.
As stated, however, the law is hard to defend as a long-run proposition: explaining exactly why it should hold true when people make investments in their firms has proven problematic. The idea Adam Smith articulated regarding increasing productivity from the division of labor in fact contradicts the theory, since in practice actions such as "division of labor" (or other organizational or technological improvements) almost always accompany an additional increase in a specific factor of production. The manager of a company is likely to add those improvements, or complementary factors of production, which ultimately leads to greater returns. As a thought experiment, it is difficult to imagine why, in reality, only one factor of production (e.g. hammers) would be added in the making of a specific product without the other factors needed to raise marginal productivity (e.g. the labor to use the extra hammers). Since such a situation rarely occurs in the real world, the "law of diminishing returns" can be modeled graphically but has few examples in practice. As has been understood since the time of Smith and Mill,[3] and further explained by more recent economists such as Paul Romer,[4] "increasing returns" are more likely to occur when companies invest in a factor of production, because they do not hold everything else constant. This is how companies such as Wal-Mart and Microsoft can become more profitable as they grow in size.
As a firm in the long run increases the quantities of all factors employed, all other things being equal, the rate of increase in output may at first exceed the rate of increase in inputs; later, output may increase in the same proportion as inputs; ultimately, output will increase less than proportionately to inputs.
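The long-run distinction between increasing, constant and decreasing returns to scale can be illustrated with a Cobb-Douglas production function, Q = K^a · L^b. This functional form is an illustrative choice (it does not appear in the text): doubling both inputs scales output by 2^(a+b), so returns to scale depend only on the sum of the exponents.

```python
# Cobb-Douglas production function Q = K**a * L**b (illustrative assumption).
# Doubling BOTH inputs multiplies output by 2**(a+b): returns to scale are
# increasing if a+b > 1, constant if a+b == 1, decreasing if a+b < 1.
def output(K: float, L: float, a: float, b: float) -> float:
    return K**a * L**b

cases = [(0.6, 0.6, "increasing"),
         (0.5, 0.5, "constant"),
         (0.3, 0.4, "decreasing")]
for a, b, label in cases:
    ratio = output(2.0, 2.0, a, b) / output(1.0, 1.0, a, b)
    print(f"a+b={a + b:.1f}: doubling inputs scales output by {ratio:.2f} ({label})")
```

Note the contrast with the short-run law discussed earlier: there, only one input varies while the rest are held fixed; here, all inputs change together, and the outcome can go in any of the three directions.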
13. Compare and contrast perfect competition and imperfect competition.