Esophagogram
# Overview
A barium swallow is a medical imaging procedure used to examine the upper GI (gastrointestinal) tract, which includes the oesophagus and, to a lesser extent, the stomach.
# Principle
Barium sulphate is a type of contrast medium that is opaque to x-rays. As the patient swallows the barium suspension, it coats the oesophagus with a thin layer of barium. This enables the hollow structure to be imaged.
# Examination
The patient is asked to drink a suspension of barium sulfate. Fluoroscopy images are taken as the barium is swallowed, typically at a rate of 2 or 3 frames per second. The patient is asked to swallow the barium a number of times while standing in different positions (AP, oblique, and lateral) to assess the three-dimensional structure as well as possible.
# Pathology
Pathologies detected on a barium swallow include:
- Achalasia
- Oesophageal pouch
- Cancer of oesophagus
- Tracheoesophageal fistula
- Schatzki ring
- Reflux
- Zenker's diverticulum
- Hiatus hernia
Baroreceptor
# Overview
Baroreceptors (or baroceptors) in the human body detect the pressure of blood flowing through them, and can send messages to the central nervous system to increase or decrease total peripheral resistance and cardiac output.
Baroreceptors can be divided into two categories: high-pressure arterial baroreceptors and low-pressure baroreceptors (also known as cardiopulmonary receptors).
# Arterial baroreceptors
There are baroreceptors present in the arch of the aorta, and the carotid sinuses of the left and right internal carotid arteries. In some sensitive people, due to baroreceptors, vigorous palpation of a carotid artery can cause severe bradycardia or even cardiac arrest.
Baroreceptors act to maintain mean arterial blood pressure to allow tissues to receive the right amount of blood.
See main article Baroreflex
If blood pressure falls, such as in shock, baroreceptor firing rate decreases. Signals from the carotid baroreceptors are sent via the glossopharyngeal nerve (cranial nerve IX). Signals from the aortic baroreceptors travel through the vagus nerve (cranial nerve X).
Baroreceptors work by detecting the amount of stretch. The more the baroreceptor walls are stretched, the more frequently they generate action potentials. The arterial baroreceptors have a lower threshold of around 70 mmHg (typical mean arterial blood pressure is around 80-90 mmHg). Below this the receptors stop firing completely, and any further decrease in pressure causes no additional effect. At such low pressures, however, the response of chemoreceptors becomes more vigorous, especially below 60 mmHg.
Baroreceptors respond very quickly to maintain a stable blood pressure, but they only respond to short term changes. Over a period of days or weeks they will reset to a new value. Thus, in people with essential hypertension the baroreceptors behave as if the elevated blood pressure is normal and aim to maintain this high blood pressure.
# Low pressure baroreceptors
These are found in the large veins and in the walls of the atria of the heart. The low pressure baroreceptors are involved with the regulation of blood volume. The blood volume determines the mean pressure throughout the system, in particular in the venous side where most of the blood is held.
The low pressure baroreceptors have both circulatory and renal effects: they produce changes in hormone secretion which have profound effects on the retention of salt and water, and they also influence the intake of salt and water. The renal effects allow the receptors to change the mean pressure in the system in the long term.
Denervating these receptors 'fools' the body into thinking that we have too low blood volume and initiates mechanisms which retain fluid and so push up the blood pressure to a higher level than we would otherwise have.
Baruch Modan
# Overview
Dr. Baruch Modan (born c. 1944) is a prominent Israeli physician. Dr. Modan has made significant findings in his specialized field, oncology, and he is also an expert in radiation.
Admired by many of his colleagues, Dr. Modan has worked with various types of cancer, and, in 1974, he demonstrated that the risk of breast cancer increases with X-ray doses as low as 1.6 rem. He is also an expert on treating cancer in children.
A professor at Tel Aviv University, Dr. Modan is the secretary of Israel's department of health. During a 2004 rally by Israeli demonstrators from an area where the government allegedly conducted atomic tests, Dr. Modan was criticized in person by some of that area's inhabitants, who accused him, among other things, of siding with the government rather than with the public. This rally was later televised in the United States.
Dr. Modan has travelled extensively to share his knowledge with other doctors as well as patients, to treat patients in need of emergency service, and to give lectures. He has also written a number of books on cancer and cancer treatment.
Basal lamina
The basal lamina is a layer on which epithelium sits and which is secreted by the epithelial cells. It is often confused with the basement membrane, and the two terms are sometimes used inconsistently in the literature (see below).
It is typically about 40-50 nanometres thick (with exceptions such as the basal laminae that compose the 100-200 nanometre thick glomerular basement membrane).
# Layers
The layers of the basal lamina ("BL") and those of the basement membrane ("BM") are described below:
Anchoring fibrils composed of type VII collagen extend from the basal lamina into the underlying reticular lamina and loop around collagen bundles. Although found beneath all basal laminae, they are especially numerous beneath the stratified squamous epithelium of the skin.
These layers should not be confused with the lamina propria, which is found outside the basal lamina.
# Basal lamina vs. basement membrane
The term "basal lamina" is usually used with electron microscopy, while the term "basement membrane" is usually used with light microscopy. The structure known as the basement membrane in light microscopy refers to the stained structure anchoring an epithelial layer. This encompasses the basal lamina secreted by epithelial cells and typically a reticular lamina secreted by other cells.
The basal lamina cannot be distinguished under the light microscope, but under the higher magnification of an electron microscope, the basal lamina and lamina reticularis are visibly distinct structures.
Some theorize that the lamina lucida is an artifact created when preparing the tissue, and that the basement membrane is therefore equal to the lamina densa in vivo.
Examples of basement membranes include:
- Basilar membrane
- Bruch's membrane
- Descemet's membrane
- Glomerular basement membrane
# Additional images
- Transverse section of a villus, from the human intestine. X 350.
- The basal lamina is a component of the basement membrane that separates epithelium from the underlying connective tissue.
Base of lung
The base of the lung is broad, concave, and rests upon the convex surface of the diaphragm, which separates the right lung from the right lobe of the liver, and the left lung from the left lobe of the liver, the stomach, and the spleen.
Since the diaphragm extends higher on the right than on the left side, the concavity on the base of the right lung is deeper than that on the left.
Laterally and behind, the base is bounded by a thin, sharp margin which projects for some distance into the phrenicocostal sinus of the pleura, between the lower ribs and the costal attachment of the diaphragm.
The base of the lung descends during inspiration and ascends during expiration.
Basilic vein
# Overview
In human anatomy, the basilic vein is a large superficial vein of the upper limb that helps drain parts of the hand and forearm. It originates on the medial (ulnar) side of the dorsal venous network of the hand, and it travels up the base of the forearm and arm. Most of its course is superficial; it generally travels in the fat and other fasciae that lie superficial to the muscles of the upper extremity. Because of this, it is usually visible through the skin.
Near the region anterior to the cubital fossa, in the bend of the elbow joint, the basilic vein usually connects with the other large superficial vein of the upper extremity, the cephalic vein, via the median cubital vein. The layout of superficial veins in the forearm is highly variable from person to person, and there are generally a variety of other unnamed superficial veins that the basilic vein communicates with.
About halfway up the arm proper (the part between the shoulder and elbow), the basilic vein goes deep, travelling under the muscles. There, around the lower border of the teres major muscle, it joins the brachial veins to form the axillary vein.
Along with other superficial veins in the forearm, the basilic vein is a possible site for venipuncture.
# Additional images
- Cross-section through the middle of upper arm.
- Cross-section through the middle of the forearm.
- The brachial artery.
- The veins on the dorsum of the hand.
Bathmophobia
# Overview
Bathmophobia is the fear of stairs or slopes, a type of specific phobia. It is similar to climacophobia, except that climacophobics suffer symptoms when actually climbing or descending stairs, while bathmophobics suffer symptoms simply by observing stairs or slopes.
This fear is often caused by a fall down stairs or a hill. The most prominent symptom in bathmophobics is vertigo, together with a fear of the vertigo itself (illyngophobia). Bathmophobia can also be triggered by another phobia, acrophobia, owing to the height differences involved in using stairs.
Like climacophobia, bathmophobia can most commonly be treated using cognitive-behavioral therapy.
Fear of fish
# Background
There are a number of specific meanings of the term fear of fish or ichthyophobia. Although the latter term technically refers to a specific phobia, in many contexts it may refer to any kind of fear of fish, such as fear of eating fish or fear of dead fish.
# Phobia
Ichthyophobia is an intense and persistent fear of fish, described in Psychology: An International Perspective as an "unusual" specific phobia. The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) classifies it as a fear that the individual who holds it recognizes as excessive. Both symptoms and remedies of ichthyophobia are common to most specific phobias.
John B. Watson, a renowned name in behaviorism, describes an example, quoted in many psychology books, of conditioned fear of a goldfish in an infant and a way of unconditioning the fear by what is now called graduated exposure therapy:
Try another method. Let his brother, aged four, who has no fear of fish, come up to the bowl and put his hands in the bowl and catch the fish. No amount of watching a fearless child play with these harmless animals will remove the fear from the toddler. Try shaming him, making a scapegoat of him. Your attempts are equally futile. Let us try, however, this simple method. Place the child at meal time at one end of a table ten or twelve feet long, and move the fish bowl to the extreme other end of the table and cover it. Just as soon as the meal is placed before him remove the cover from the bowl. If disturbance occurs, extend your table and place the bowl still farther off, so far away that no disturbance occurs. Eating takes place normally, nor is digestion interfered with. Repeat the procedure on the next day, but move the bowl a little nearer. In four or five days the bowl can be brought right up to the food tray without causing the slightest disturbance. Then take a small glass dish, fill it with water and move the dish back, and at subsequent meal times bring it nearer and nearer to him. Again in three or four days the small glass dish can be put on the tray alongside of his milk. The old fear has been driven out by training, unconditioning has taken place, and this unconditioning is permanent.
In contrast, radical exposure therapy was used successfully to cure a man with a "life affecting" fish phobia on the 2007 documentary series, The Panic Room.
# Cultural phenomenon
Historically, the Navajo people were described as being ichthyophobic, due to their aversion to fish. However, this was later recognised as a cultural or mythic aversion to aquatic animals, and not a psychological condition.
# Fear of eating fish
The Journal of the American Medical Association has published a research paper addressing fears of eating fish because contaminants, such as mercury, may accumulate in fish.
# Cases of ichthyophobia
In his autobiography, Italian footballer Paolo Di Canio describes finding that his then team-mate, Peter Grant, suffered from ichthyophobia. During a practical joke, Di Canio describes Grant's fearful reaction after finding a salmon head in his bed. Grant told The Independent that the item in his bed was in fact a "shark's head" and "to say I got a fright when I put my feet between the sheets is an understatement."
Bayes factor
# Overview
In statistics, the use of Bayes factors is a Bayesian alternative to classical hypothesis testing.
Given a model selection problem in which we have to choose between two models, M1 and M2, on the basis of a data vector x, the Bayes factor K is given by

K = p(x|M1) / p(x|M2),
where p(x|Mi) is called the marginal likelihood for model i. This is similar to a likelihood-ratio test, but instead of maximising the likelihood, Bayesians average it over the parameters. Generally, the models M1 and M2 will be parametrised by vectors of parameters θ1 and θ2; thus K is given by

K = [∫ p(θ1|M1) p(x|θ1, M1) dθ1] / [∫ p(θ2|M2) p(x|θ2, M2) dθ2].
The logarithm of K is sometimes called the weight of evidence given by x for M1 over M2, measured in bits, nats, or bans, according to whether the logarithm is taken to base 2, base e, or base 10.
A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. Harold Jeffreys gave a scale for interpretation of K:

| K | dB | Strength of evidence |
|---|---|---|
| < 1 | < 0 | Negative (supports M2) |
| 1 to 10^(1/2) (about 3.2) | 0 to 5 | Barely worth mentioning |
| 10^(1/2) to 10 | 5 to 10 | Substantial |
| 10 to 10^(3/2) (about 32) | 10 to 15 | Strong |
| 10^(3/2) to 100 | 15 to 20 | Very strong |
| > 100 | > 20 | Decisive |
The second column gives the corresponding weights of evidence in decibans (tenths of a power of 10). According to I. J. Good, a change in a weight of evidence of 1 deciban (i.e. a change in an odds ratio from evens to about 5:4) is about as fine a change as humans can reasonably perceive in their degree of belief in a hypothesis in everyday use.
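As a quick check of the units: an odds ratio of 5:4 corresponds to a weight of evidence of 10 · log10(5/4) ≈ 0.97, i.e. roughly 1 deciban, consistent with the figure quoted above.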
The use of Bayes factors or classical hypothesis testing takes place in the context of inference rather than decision-making under uncertainty. That is, we merely wish to find out which hypothesis is true, rather than actually making a decision on the basis of this information. Frequentist statistics draws a strong distinction between these two because classical hypothesis tests are not coherent in the Bayesian sense. Bayesian procedures, including Bayes factors, are coherent, so there is no need to draw such a distinction. Inference is then simply regarded as a special case of decision-making under uncertainty in which the resulting action is to report a value. In a decision-making context Bayesian statisticians might use a Bayes factor as part of making a choice, but would also combine it with a prior distribution and a loss function associated with making the wrong choice. In an inference context the loss function would take the form of a scoring rule. Use of a logarithmic score function for example, leads to the expected utility taking the form of the Kullback-Leibler divergence. If the logarithms are to the base 2 this is equivalent to Shannon information.
# Example
Suppose we have a random variable which produces either a success or a failure. We want to compare a model M1 where the probability of success is q = ½, and another model M2 where q is completely unknown and we take a prior distribution for q which is uniform on [0, 1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood is the binomial probability

C(200, 115) q^115 (1 − q)^85.

So we have

P(X = 115 | M1) = C(200, 115) (1/2)^200 ≈ 0.005956,

but

P(X = 115 | M2) = ∫₀¹ C(200, 115) q^115 (1 − q)^85 dq = 1/201 ≈ 0.004975.
The ratio is then 1.197..., which is "barely worth mentioning" even if it points very slightly towards M1.
This is not the same as a classical likelihood ratio test, which would have found the maximum likelihood estimate for q, namely 115⁄200 = 0.575, and from that get a ratio of 0.1045..., and so pointing towards M2. Alternatively, Edwards's "exchange rate" of two units of likelihood per degree of freedom suggests that M2 is preferable (just) to M1, as 0.1045... = e^(−2.25...) and 2.25 > 2: the extra likelihood compensates for the unknown parameter in M2.
A frequentist hypothesis test of M1 (here considered as the null hypothesis) would have produced a more dramatic result, saying that M1 could be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = ½ is 0.0200..., and the two-tailed probability of getting a figure as extreme as or more extreme than 115 is 0.0400... Note that 115 is more than two standard deviations away from 100.
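The figures in this example can be reproduced with a short script. The following is a minimal sketch (Python, standard library only) that computes the two marginal likelihoods, the Bayes factor, and the one- and two-tailed binomial tail probabilities quoted above; under the uniform prior the marginal likelihood reduces to 1/(n + 1).

```python
from math import comb, log10

n, k = 200, 115  # 200 trials, 115 successes

# Marginal likelihood under M1: q fixed at 1/2.
p_m1 = comb(n, k) * 0.5**n

# Marginal likelihood under M2: binomial likelihood averaged over a
# Uniform[0, 1] prior on q, which integrates to 1/(n + 1).
p_m2 = 1 / (n + 1)

K = p_m1 / p_m2
print(f"P(x|M1) = {p_m1:.6f}")                          # ~0.005956
print(f"P(x|M2) = {p_m2:.6f}")                          # ~0.004975
print(f"Bayes factor K = {K:.3f}")                      # ~1.197
print(f"Weight of evidence = {10 * log10(K):.2f} dB")   # ~0.78 decibans

# Frequentist comparison: exact binomial tail probabilities under q = 1/2.
p_upper_tail = sum(comb(n, i) for i in range(k, n + 1)) * 0.5**n
print(f"P(X >= 115 | q = 1/2) = {p_upper_tail:.4f}")    # ~0.0200
print(f"Two-tailed p-value = {2 * p_upper_tail:.4f}")   # ~0.0400
```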
M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.
Bazedoxifene
Bazedoxifene is a selective estrogen receptor modulator (SERM), developed by Wyeth Pharmaceuticals, undergoing clinical evaluation for the prevention and treatment of postmenopausal osteoporosis. It is currently in the early phases of review by the United States' Food and Drug Administration. When approved, bazedoxifene is to be sold by Wyeth under the tradename Viviant™. Bazedoxifene's combination with conjugated estrogens, Aprela™, is currently undergoing Phase III studies.
Wyeth received an approvable letter for Bazedoxifene in late April 2007. The FDA called for final safety and efficacy data from Phase III studies, and acceptable valuation of manufacturing and testing facilities where problems were found earlier in the year. Wyeth is working with the FDA to resolve these issues, and expects an FDA action date at year end.
# Citations
Brown, Emily. "Wyeth's osteoporosis drug cut risk of new fractures". Retrieved 2007-09-28.
"Wyeth Receives Approvable Letter from FDA for Bazedoxifene for the Prevention of Postmenopausal Osteoporosis". Retrieved 2007-09-28. | Bazedoxifene
Beau's lines
# Overview
Beau's lines are deep grooved lines that run from side to side on the fingernail. They may look like indentations or ridges in the nail plate that could be a sign of stress. Beau's lines are the result of a temporary cessation of cell division in the nail matrix, and they are associated with many serious conditions.
# Historical Perspective
This condition of the nail was named by a French physician, Joseph Honoré Simon Beau (1806–1865), who first described it in 1846.
# Causes
## Causes by Organ System
## Causes in Alphabetical Order
- Aging
- Arthritis
- Chemotherapy particularly cytotoxic agents
- Coronary occlusion
- Delirium
- Depression
- Dermatologic disorders
- Diabetes
- Drugs
- Gout
- Hypocalcemia
- Infection
- Iron deficiency
- Malnutrition
- Reiter's Disease
- Severe infectious disease
- Shock
- Stress
- Surgery
- Toxins
- Trauma
- Traumatic damage to nail matrix
# Diagnosis
## Physical Examination
Beau's lines should be distinguished from Muehrcke's lines of the fingernails. While Beau's lines are actual ridges and indentations in the nail plate, Muehrcke's lines are areas of hypopigmentation without palpable ridges.
# Research
A researcher found Beau's lines in the fingernails of 6 divers following a deep saturation dive to a pressure equal to 335 meters of sea water, and in 2 of 6 divers following a similar dive to 305 meters.
Bemotrizinol
Bemotrizinol (USAN, Tinosorb® S, INCI Bis-Ethylhexyloxyphenol Methoxyphenyl Triazine) is an oil-soluble chemical which is added to sunscreens to absorb UV rays. It is marketed by Ciba Specialty Chemicals. It is a broad-spectrum UV absorber, absorbing UVB as well as UVA rays. Bemotrizinol is highly photostable: even after 50 MED (minimal erythemal dose), 98.4% remains intact. It helps prevent photodegradation of other sunscreen actives.
Bemotrizinol is not approved by the United States Food and Drug Administration, but it has been approved in the European Union since 2000 and in other parts of the world, including Australia.
Unlike some other organic sunscreen actives, it shows no estrogenic effects in vitro.
Bendamustine
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Bendamustine is an alkylating drug that is FDA approved for the treatment of chronic lymphocytic leukemia (CLL) and non-Hodgkin lymphoma (NHL). Common adverse reactions include injection site pain, pruritus, rash, weight loss, constipation, diarrhea, loss of appetite, nausea, stomatitis, vomiting, headache, cough, dyspnea, dehydration, fatigue, and fever.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
### Chronic Lymphocytic Leukemia
- Dosing information
- Recommended dosage: 100 mg/m2 administered intravenously over 30 minutes on Days 1 and 2 of a 28-day cycle, up to 6 cycles.
- Dose Delays, Dose Modifications and Reinitiation of Therapy for CLL:
- Bendamustine administration should be delayed in the event of Grade 4 hematologic toxicity or clinically significant ≥ Grade 2 non-hematologic toxicity. Once non-hematologic toxicity has recovered to ≤ Grade 1 and/or the blood counts have improved [Absolute Neutrophil Count (ANC) ≥ 1 x 10^9/L, platelets ≥ 75 x 10^9/L], Bendamustine can be reinitiated at the discretion of the treating physician. In addition, dose reduction may be warranted.
- Dose modifications for hematologic toxicity: for Grade 3 or greater toxicity, reduce the dose to 50 mg/m2 on Days 1 and 2 of each cycle; if Grade 3 or greater toxicity recurs, reduce the dose to 25 mg/m2 on Days 1 and 2 of each cycle.
- Dose modifications for non-hematologic toxicity: for clinically significant Grade 3 or greater toxicity, reduce the dose to 50 mg/m2 on Days 1 and 2 of each cycle.
- Dose re-escalation in subsequent cycles may be considered at the discretion of the treating physician (these dose-modification rules are restated schematically in the sketch below).
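The dose-modification rules above amount to a small decision procedure. The following is a purely illustrative sketch of that logic in Python; the function names, argument structure, and the handling of combined toxicities are assumptions made for illustration, and it is not clinical software or a substitute for the prescribing information.

```python
def cll_dose_mg_per_m2(prior_grade3_hematologic_events: int,
                       significant_grade3_nonhematologic: bool) -> float:
    """Schematic restatement of the labeled CLL dose-modification rules.

    Starting dose: 100 mg/m2 on Days 1 and 2 of each 28-day cycle.
    - First Grade >=3 hematologic toxicity: reduce to 50 mg/m2.
    - Recurrent Grade >=3 hematologic toxicity: reduce to 25 mg/m2.
    - Clinically significant Grade >=3 non-hematologic toxicity: 50 mg/m2.
    Illustrative only -- not for clinical use.
    """
    if prior_grade3_hematologic_events >= 2:
        return 25.0
    if prior_grade3_hematologic_events == 1 or significant_grade3_nonhematologic:
        return 50.0
    return 100.0


def counts_allow_next_cycle(anc_per_l: float, platelets_per_l: float) -> bool:
    """Blood-count thresholds before the next cycle: ANC >= 1 x 10^9/L and
    platelets >= 75 x 10^9/L."""
    return anc_per_l >= 1e9 and platelets_per_l >= 75e9


# Example: one prior Grade 3 hematologic toxicity, counts recovered.
print(cll_dose_mg_per_m2(1, False))            # 50.0 (mg/m2 on Days 1 and 2)
print(counts_allow_next_cycle(1.2e9, 80e9))    # True
```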
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Bendamustine in adult patients.
### Non–Guideline-Supported Use
### Metastatic Breast Cancer
- Dosing information
- 120 mg/m(2) IV over 30 minutes on days 1 and 2 every 4 weeks [17872900]
- 60 mg/m(2) IV over 30 minutes on days 1, 8, and 15 every 28 days [17667603]
### Multiple Myeloma
- Dosing information
- 150 mg/m(2) (in 500 mL of normal saline) IV over 30 minutes on days 1 and 2 [16402269]
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
The effectiveness of Bendamustine in pediatric patients has not been established.
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Bendamustine in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Bendamustine in pediatric patients.
# Contraindications
Bendamustine is contraindicated in patients with a known hypersensitivity (e.g., anaphylactic and anaphylactoid reactions) to bendamustine.
# Warnings
### Myelosuppression
Bendamustine caused severe myelosuppression (Grade 3-4) in 98% of patients in the two NHL studies (see Table 4). Three patients (2%) died from myelosuppression-related adverse reactions; one each from neutropenic sepsis, diffuse alveolar hemorrhage with Grade 3 thrombocytopenia, and pneumonia from an opportunistic infection (CMV).
In the event of treatment-related myelosuppression, monitor leukocytes, platelets, hemoglobin (Hgb), and neutrophils frequently. In the clinical trials, blood counts were monitored every week initially. Hematologic nadirs were observed predominantly in the third week of therapy. Myelosuppression may require dose delays and/or subsequent dose reductions if recovery to the recommended values has not occurred by the first day of the next scheduled cycle. Prior to the initiation of the next cycle of therapy, the ANC should be ≥ 1 x 10^9/L and the platelet count should be ≥ 75 x 10^9/L.
### Infections
Infection, including pneumonia, sepsis, septic shock, and death have occurred in adult and pediatric patients in clinical trials and in postmarketing reports. Patients with myelosuppression following treatment with Bendamustine are more susceptible to infections. Advise patients with myelosuppression following Bendamustine treatment to contact a physician if they have symptoms or signs of infection.
### Anaphylaxis and Infusion Reactions
Infusion reactions to Bendamustine have occurred commonly in clinical trials. Symptoms include fever, chills, pruritus and rash. In rare instances severe anaphylactic and anaphylactoid reactions have occurred, particularly in the second and subsequent cycles of therapy. Monitor clinically and discontinue drug for severe reactions. Ask patients about symptoms suggestive of infusion reactions after their first cycle of therapy. Patients who experience Grade 3 or worse allergic-type reactions should not be rechallenged. Consider measures to prevent severe reactions, including antihistamines, antipyretics and corticosteroids in subsequent cycles in patients who have experienced Grade 1 or 2 infusion reactions. Discontinue Bendamustine for patients with Grade 4 infusion reactions. Consider discontinuation for Grade 3 infusions reactions as clinically appropriate considering individual benefits, risks, and supportive care.
### Tumor Lysis Syndrome
Tumor lysis syndrome associated with Bendamustine treatment has occurred in patients in clinical trials and in postmarketing reports. The onset tends to be within the first treatment cycle of Bendamustine and, without intervention, may lead to acute renal failure and death. Preventive measures include vigorous hydration and close monitoring of blood chemistry, particularly potassium and uric acid levels. Allopurinol has also been used during the beginning of Bendamustine therapy. However, there may be an increased risk of severe skin toxicity when Bendamustine and allopurinol are administered concomitantly.
### Skin Reactions
Skin reactions have been reported with Bendamustine treatment in clinical trials and postmarketing safety reports, including rash, toxic skin reactions and bullous exanthema. Some events occurred when Bendamustine was given in combination with other anticancer agents.
In a study of Bendamustine (90 mg/m2) in combination with rituximab, one case of toxic epidermal necrolysis (TEN) occurred. TEN has been reported for rituximab (see rituximab package insert). Cases of Stevens-Johnson syndrome (SJS) and TEN, some fatal, have been reported when Bendamustine was administered concomitantly with allopurinol and other medications known to cause these syndromes. The relationship to Bendamustine cannot be determined.
Where skin reactions occur, they may be progressive and increase in severity with further treatment. Monitor patients with skin reactions closely. If skin reactions are severe or progressive, withhold or discontinue Bendamustine.
### Other Malignancies
There are reports of pre-malignant and malignant diseases that have developed in patients who have been treated with Bendamustine, including myelodysplastic syndrome, myeloproliferative disorders, acute myeloid leukemia and bronchial carcinoma. The association with Bendamustine therapy has not been determined.
### Extravasation Injury
Bendamustine extravasations have been reported in post marketing resulting in hospitalizations from erythema, marked swelling, and pain. Assure good venous access prior to starting Bendamustine infusion and monitor the intravenous infusion site for redness, swelling, pain, infection, and necrosis during and after administration of Bendamustine.
### Embryo-fetal Toxicity
Bendamustine can cause fetal harm when administered to a pregnant woman. Single intraperitoneal doses of bendamustine in mice and rats administered during organogenesis caused an increase in resorptions, skeletal and visceral malformations, and decreased fetal body weights.
# Adverse Reactions
## Clinical Trials Experience
The data described below reflect exposure to Bendamustine in 153 patients with CLL studied in an active-controlled, randomized trial. The population was 45-77 years of age, 63% male, 100% white, and were treatment naïve. All patients started the study at a dose of 100 mg/m2 intravenously over 30 minutes on Days 1 and 2 every 28 days.
Adverse reactions were reported according to NCI CTC v.2.0. Non-hematologic adverse reactions (any grade) in the Bendamustine group that occurred with a frequency greater than 15% were pyrexia (24%), nausea (20%), and vomiting (16%).
Other adverse reactions seen frequently in one or more studies included asthenia, fatigue, malaise, and weakness; dry mouth; somnolence; cough; constipation; headache; mucosal inflammation and stomatitis.
Worsening hypertension was reported in 4 patients treated with Bendamustine in the CLL trial and in none treated with chlorambucil. Three of these 4 adverse reactions were described as a hypertensive crisis and were managed with oral medications and resolved.
The most frequent adverse reactions leading to study withdrawal for patients receiving Bendamustine were hypersensitivity (2%) and pyrexia (1%).
Table 1 contains the treatment emergent adverse reactions, regardless of attribution, that were reported in ≥ 5% of patients in either treatment group in the randomized CLL clinical study.
The Grade 3 and 4 hematology laboratory test values by treatment group in the randomized CLL clinical study are described in Table 2. These findings confirm the myelosuppressive effects seen in patients treated with Bendamustine. Red blood cell transfusions were administered to 20% of patients receiving Bendamustine compared with 6% of patients receiving chlorambucil.
In the CLL trial, 34% of patients had bilirubin elevations, some without associated significant elevations in AST and ALT. Grade 3 or 4 increased bilirubin occurred in 3% of patients. Increases in AST and ALT of Grade 3 or 4 were limited to 1% and 3% of patients, respectively. Patients treated with Bendamustine may also have changes in their creatinine levels. If abnormalities are detected, monitoring of these parameters should be continued to ensure that further deterioration does not occur.
### Clinical Trials Experience in NHL
The data described below reflect exposure to Bendamustine in 176 patients with indolent B-cell NHL treated in two single-arm studies. The population was 31-84 years of age, 60% male, and 40% female. The race distribution was 89% White, 7% Black, 3% Hispanic, 1% other, and <1% Asian. These patients received Bendamustine at a dose of 120 mg/m2 intravenously on Days 1 and 2 for up to eight 21-day cycles.
The adverse reactions occurring in at least 5% of the NHL patients, regardless of severity, are shown in Table 3. The most common non-hematologic adverse reactions (≥30%) were nausea (75%), fatigue (57%), vomiting (40%), diarrhea (37%) and pyrexia (34%). The most common non-hematologic Grade 3 or 4 adverse reactions (≥5%) were fatigue (11%), febrile neutropenia (6%), and pneumonia, hypokalemia and dehydration, each reported in 5% of patients.
Hematologic toxicities, based on laboratory values and CTC grade, in NHL patients treated in both single arm studies combined are described in Table 4. Clinically important chemistry laboratory values that were new or worsened from baseline and occurred in >1% of patients at Grade 3 or 4, in NHL patients treated in both single arm studies combined were hyperglycemia (3%), elevated creatinine (2%), hyponatremia (2%), and hypocalcemia (2%).
In both studies, serious adverse reactions, regardless of causality, were reported in 37% of patients receiving Bendamustine. The most common serious adverse reactions occurring in ≥5% of patients were febrile neutropenia and pneumonia. Other important serious adverse reactions reported in clinical trials and/or postmarketing experience were acute renal failure, cardiac failure, hypersensitivity, skin reactions, pulmonary fibrosis, and myelodysplastic syndrome.
Serious drug-related adverse reactions reported in clinical trials included myelosuppression, infection, pneumonia, tumor lysis syndrome, and infusion reactions. Adverse reactions occurring less frequently but possibly related to Bendamustine treatment were hemolysis, dysgeusia/taste disorder, atypical pneumonia, sepsis, herpes zoster, erythema, dermatitis, and skin necrosis.
## Postmarketing Experience
The following adverse reactions have been identified during post-approval use of Bendamustine. Because these reactions are reported voluntarily from a population of uncertain size, it is not always possible to reliably estimate their frequency or establish a causal relationship to drug exposure: anaphylaxis; injection or infusion site reactions including phlebitis, pruritus, irritation, pain, and swelling; Pneumocystis jirovecii pneumonia; and pneumonitis.
Skin reactions including SJS and TEN have occurred when Bendamustine was administered concomitantly with allopurinol and other medications known to cause these syndromes.
# Drug Interactions
No formal clinical assessments of pharmacokinetic drug-drug interactions between Bendamustine and other drugs have been conducted.
Bendamustine's active metabolites, gamma-hydroxy bendamustine (M3) and N-desmethyl-bendamustine (M4), are formed via cytochrome P450 CYP1A2. Inhibitors of CYP1A2 (e.g., fluvoxamine, ciprofloxacin) have potential to increase plasma concentrations of bendamustine and decrease plasma concentrations of active metabolites. Inducers of CYP1A2 (e.g., omeprazole, smoking) have potential to decrease plasma concentrations of bendamustine and increase plasma concentrations of its active metabolites. Caution should be used, or alternative treatments considered if concomitant treatment with CYP1A2 inhibitors or inducers is needed.
The role of active transport systems in bendamustine distribution has not been fully evaluated. In vitro data suggest that P-glycoprotein, breast cancer resistance protein (BCRP), and/or other efflux transporters may have a role in bendamustine transport.
Based on in vitro data, bendamustine is not likely to inhibit metabolism via human CYP isoenzymes CYP1A2, 2C9/10, 2D6, 2E1, or 3A4/5, or to induce metabolism of substrates of cytochrome P450 enzymes.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): D
Risk Summary
Bendamustine can cause fetal harm when administered to a pregnant woman. Bendamustine caused malformations in animals, when a single dose was administered to pregnant animals. Advise women to avoid becoming pregnant while receiving Bendamustine and for 3 months after therapy has stopped. If this drug is used during pregnancy, or if the patient becomes pregnant while receiving this drug, the patient should be apprised of the potential hazard to a fetus. Advise men receiving Bendamustine to use reliable contraception for the same time period.
Animal data
Single intraperitoneal doses of bendamustine from 210 mg/m2 (70 mg/kg) in mice administered during organogenesis caused an increase in resorptions, skeletal and visceral malformations (exencephaly, cleft palates, accessory rib, and spinal deformities) and decreased fetal body weights. This dose did not appear to be maternally toxic and lower doses were not evaluated. Repeat intraperitoneal dosing in mice on gestation days 7-11 resulted in an increase in resorptions from 75 mg/m2 (25 mg/kg) and an increase in abnormalities from 112.5 mg/m2 (37.5 mg/kg) similar to those seen after a single intraperitoneal administration. Single intraperitoneal doses of bendamustine from 120 mg/m2 (20 mg/kg) in rats administered on gestation days 4, 7, 9, 11, or 13 caused embryo and fetal lethality as indicated by increased resorptions and a decrease in live fetuses. A significant increase in external [effect on tail, head, and herniation of external organs (exomphalos)] and internal (hydronephrosis and hydrocephalus) malformations was seen in dosed rats. There are no adequate and well-controlled studies in pregnant women. If this drug is used during pregnancy, or if the patient becomes pregnant while taking this drug, the patient should be apprised of the potential hazard to the fetus.
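The paired mg/kg and mg/m2 values above are related by the standard body-surface-area conversion, dose in mg/m2 = km × dose in mg/kg, with km ≈ 3 for mice and km ≈ 6 for rats. These conversion factors are conventional interspecies scaling assumptions rather than figures stated in the label; a quick check against the quoted doses:

```latex
% Standard interspecies scaling check (k_m values are conventional assumptions)
70\,\mathrm{mg/kg} \times 3 \approx 210\,\mathrm{mg/m^2}\ \text{(mouse)},\qquad
20\,\mathrm{mg/kg} \times 6 = 120\,\mathrm{mg/m^2}\ \text{(rat)}
```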
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Bendamustine in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Bendamustine during labor and delivery.
### Nursing Mothers
It is not known whether this drug is excreted in human milk. Because many drugs are excreted in human milk and because of the potential for serious adverse reactions in nursing infants and tumorigenicity shown for bendamustine in animal studies, a decision should be made whether to discontinue nursing or to discontinue the drug, taking into account the importance of the drug to the mother.
### Pediatric Use
The effectiveness of Bendamustine in pediatric patients has not been established. Bendamustine was evaluated in a single Phase 1/2 trial in pediatric patients with leukemia. The safety profile for Bendamustine in pediatric patients was consistent with that seen in adults, and no new safety signals were identified.
The trial included pediatric patients from 1-19 years of age with relapsed or refractory acute leukemia, including 27 patients with acute lymphocytic leukemia (ALL) and 16 patients with acute myeloid leukemia (AML). Bendamustine was administered as an intravenous infusion over 60 minutes on Days 1 and 2 of each 21-day cycle. Doses of 90 and 120 mg/m2 were evaluated. The Phase 1 portion of the study determined that the recommended Phase 2 dose of Bendamustine in pediatric patients was 120 mg/m2.
A total of 32 patients entered the Phase 2 portion of the study at the recommended dose and were evaluated for response. There was no treatment response (CR+ CRp) in any patient at this dose. However, there were 2 patients with ALL who achieved a CR at a dose of 90 mg/m2 in the Phase 1 portion of the study.
In the above-mentioned pediatric trial, the pharmacokinetics of Bendamustine at 90 and 120 mg/m2 doses were evaluated in 5 and 38 patients, respectively, aged 1 to 19 years (median age of 10 years).
The geometric mean body surface adjusted clearance of bendamustine was 14.2 L/h/m2. The exposures (AUC0-24 and Cmax) to bendamustine in pediatric patients following a 120 mg/m2 intravenous infusion over 60 minutes were similar to those in adult patients following the same 120 mg/m2 dose.
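As a hedged illustration of how the body-surface-adjusted values above translate into absolute numbers, the sketch below multiplies the reported geometric mean clearance (14.2 L/h/m2) and the 120 mg/m2 dose by a hypothetical body surface area. The Mosteller BSA formula and the example height and weight are assumptions for illustration only and are not part of the label.

```python
# Illustrative sketch only: converts the BSA-adjusted values reported above
# into absolute values for a hypothetical pediatric patient.
# The Mosteller BSA formula and the example height/weight are assumptions,
# not part of the Bendamustine label.
import math

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Estimate body surface area (m^2) with the Mosteller formula."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

BSA_ADJUSTED_CLEARANCE_L_PER_H_PER_M2 = 14.2   # reported geometric mean
DOSE_MG_PER_M2 = 120.0                         # recommended Phase 2 dose

bsa = mosteller_bsa(height_cm=140.0, weight_kg=35.0)   # hypothetical patient
absolute_clearance_l_per_h = BSA_ADJUSTED_CLEARANCE_L_PER_H_PER_M2 * bsa
absolute_dose_mg = DOSE_MG_PER_M2 * bsa

print(f"BSA ≈ {bsa:.2f} m^2")
print(f"Absolute clearance ≈ {absolute_clearance_l_per_h:.1f} L/h")
print(f"Absolute dose ≈ {absolute_dose_mg:.0f} mg")
```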
### Geriatric Use
In CLL and NHL studies, there were no clinically significant differences in the adverse reaction profile between geriatric (≥ 65 years of age) and younger patients.
Chronic Lymphocytic Leukemia
In the randomized CLL clinical study, 153 patients received Bendamustine. The overall response rate for patients younger than 65 years of age was 70% (n=82) for Bendamustine and 30% (n = 69) for chlorambucil. The overall response rate for patients 65 years or older was 47% (n=71) for Bendamustine and 22% (n = 79) for chlorambucil. In patients younger than 65 years of age, the median progression-free survival was 19 months in the Bendamustine group and 8 months in the chlorambucil group. In patients 65 years or older, the median progression-free survival was 12 months in the Bendamustine group and 8 months in the chlorambucil group.
Non-Hodgkin Lymphoma
Efficacy (Overall Response Rate and Duration of Response) was similar in patients < 65 years of age and patients ≥ 65 years. Irrespective of age, all of the 176 patients experienced at least one adverse reaction.
### Gender
No clinically significant differences between genders were seen in the overall incidences of adverse reactions in either CLL or NHL studies.
Chronic Lymphocytic Leukemia
In the randomized CLL clinical study, the overall response rate (ORR) for men (n=97) and women (n=56) in the Bendamustine group was 60% and 57%, respectively. The ORR for men (n=90) and women (n=58) in the chlorambucil group was 24% and 28%, respectively. In this study, the median progression-free survival for men was 19 months in the Bendamustine treatment group and 6 months in the chlorambucil treatment group. For women, the median progression-free survival was 13 months in the Bendamustine treatment group and 8 months in the chlorambucil treatment group.
Non-Hodgkin Lymphoma
The pharmacokinetics of bendamustine were similar in male and female patients with indolent NHL. No clinically-relevant differences between genders were seen in efficacy (ORR and DR).
### Race
There is no FDA guidance on the use of Bendamustine with respect to specific racial populations.
### Renal Impairment
No formal studies assessing the impact of renal impairment on the pharmacokinetics of bendamustine have been conducted. Bendamustine should be used with caution in patients with mild or moderate renal impairment. Bendamustine should not be used in patients with CrCL < 40 mL/min.
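The renal threshold above is expressed as a creatinine clearance, but the label does not state which estimation method should be used. The sketch below applies the widely used Cockcroft-Gault equation purely as an illustrative screen against the CrCL < 40 mL/min cut-off; the choice of formula and the example patient values are assumptions.

```python
# Illustrative sketch: Cockcroft-Gault creatinine clearance estimate checked
# against the CrCL < 40 mL/min threshold stated in the label.
# The choice of Cockcroft-Gault and the example inputs are assumptions;
# the label itself does not specify an estimation method.

def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float, is_female: bool) -> float:
    """Estimated creatinine clearance in mL/min (Cockcroft-Gault)."""
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if is_female else crcl

crcl = cockcroft_gault_crcl(age_years=70, weight_kg=68,
                            serum_creatinine_mg_dl=1.4, is_female=True)
if crcl < 40.0:
    print(f"CrCL ≈ {crcl:.0f} mL/min: below 40 mL/min, bendamustine should not be used")
else:
    print(f"CrCL ≈ {crcl:.0f} mL/min: at or above 40 mL/min; "
          "use with caution if impairment is mild or moderate")
```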
### Hepatic Impairment
No formal studies assessing the impact of hepatic impairment on the pharmacokinetics of bendamustine have been conducted. Bendamustine should be used with caution in patients with mild hepatic impairment. Bendamustine should not be used in patients with moderate (AST or ALT 2.5-10 X ULN and total bilirubin 1.5-3 X ULN) or severe (total bilirubin > 3 X ULN) hepatic impairment.
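The moderate and severe hepatic impairment definitions above are expressed as multiples of the upper limit of normal (ULN). The sketch below simply encodes those stated cut-offs as a classifier; the example laboratory values are assumptions, and the sketch has no authority beyond restating the thresholds in the text.

```python
# Illustrative sketch of the hepatic-impairment cut-offs stated above.
# ULN multiples come from the label text; the example values are assumptions.

def hepatic_use_recommendation(ast_x_uln: float, alt_x_uln: float,
                               total_bilirubin_x_uln: float) -> str:
    """Map a transaminase/bilirubin pattern onto the label's use statements."""
    severe = total_bilirubin_x_uln > 3.0
    moderate = (1.5 <= total_bilirubin_x_uln <= 3.0) and (
        2.5 <= ast_x_uln <= 10.0 or 2.5 <= alt_x_uln <= 10.0)
    if severe or moderate:
        return "do not use (moderate or severe hepatic impairment)"
    return ("not moderate or severe by these cut-offs; "
            "the label advises caution in mild hepatic impairment")

print(hepatic_use_recommendation(ast_x_uln=3.0, alt_x_uln=2.0, total_bilirubin_x_uln=2.0))
# -> do not use (moderate or severe hepatic impairment)
```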
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Bendamustine in females of reproductive potential and males.
### Immunocompromised Patients
There is no FDA guidance on the use of Bendamustine in patients who are immunocompromised.
# Administration and Monitoring
### Administration
Bendamustine is administered by intravenous infusion.
### Monitoring
The FDA package insert for Bendamustine does not include a dedicated monitoring section; in the event of treatment-related myelosuppression, monitor leukocytes, platelets, hemoglobin, and neutrophils frequently (see Warnings).
# IV Compatibility
There is limited information about the IV compatibility of Bendamustine.
# Overdosage
The intravenous LD50 of bendamustine HCl is 240 mg/m2 in the mouse and rat. Toxicities included sedation, tremor, ataxia, convulsions and respiratory distress.
Across all clinical experience, the reported maximum single dose received was 280 mg/m2. Three of four patients treated at this dose showed ECG changes considered dose-limiting at 7 and 21 days post-dosing. These changes included QT prolongation (one patient), sinus tachycardia (one patient), ST and T wave deviations (two patients) and left anterior fascicular block (one patient). Cardiac enzymes and ejection fractions remained normal in all patients.
No specific antidote for Bendamustine overdose is known. Management of overdosage should include general supportive measures, including monitoring of hematologic parameters and ECGs.
# Pharmacology
## Mechanism of Action
Bendamustine is a bifunctional mechlorethamine derivative containing a purine-like benzimidazole ring. Mechlorethamine and its derivatives form electrophilic alkyl groups. These groups form covalent bonds with electron-rich nucleophilic moieties, resulting in interstrand DNA crosslinks. The bifunctional covalent linkage can lead to cell death via several pathways. Bendamustine is active against both quiescent and dividing cells. The exact mechanism of action of bendamustine remains unknown.
## Structure
Bendamustine contains bendamustine hydrochloride, an alkylating drug, as the active ingredient. The chemical name of bendamustine hydrochloride is 1H-benzimidazole-2-butanoic acid, 5-[bis(2-chloroethyl)amino]-1-methyl-, monohydrochloride. Its empirical molecular formula is C16H21Cl2N3O2 ∙ HCl, and the molecular weight is 394.7. Bendamustine hydrochloride contains a mechlorethamine group and a benzimidazole heterocyclic ring with a butyric acid substituent, and has the following structural formula:
Bendamustine (bendamustine hydrochloride) for Injection is intended for intravenous infusion only after reconstitution with Sterile Water for Injection, USP, and after further dilution with either 0.9% Sodium Chloride Injection, USP, or 2.5% Dextrose/0.45% Sodium Chloride Injection, USP. It is supplied as a sterile non-pyrogenic white to off-white lyophilized powder in a single-use vial. Each 25-mg vial contains 25 mg of bendamustine hydrochloride and 42.5 mg of mannitol, USP. Each 100-mg vial contains 100 mg of bendamustine hydrochloride and 170 mg of mannitol, USP. The pH of the reconstituted solution is 2.5 - 3.5.
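Because each single-use vial contains either 25 mg or 100 mg of bendamustine hydrochloride, the number of vials needed scales with the BSA-based dose. The sketch below is a hypothetical vial-count calculation; the example dose and body surface area are assumptions for illustration and do not replace the preparation instructions in the full prescribing information.

```python
# Hypothetical vial-count sketch for a BSA-based dose using the 100-mg and
# 25-mg single-use vial strengths described above. Example inputs are assumptions.
import math

def vials_needed(total_dose_mg: float) -> dict:
    """Greedy selection of 100-mg vials first, then 25-mg vials to cover the dose."""
    vials_100 = int(total_dose_mg // 100)
    remainder = total_dose_mg - 100 * vials_100
    vials_25 = math.ceil(remainder / 25) if remainder > 0 else 0
    return {"100 mg": vials_100, "25 mg": vials_25}

dose_mg = 100.0 * 1.8            # e.g., 100 mg/m2 x 1.8 m2 (hypothetical BSA)
print(f"Total dose ≈ {dose_mg:.0f} mg -> vials: {vials_needed(dose_mg)}")
# Total dose ≈ 180 mg -> vials: {'100 mg': 1, '25 mg': 4}
```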
## Pharmacodynamics
Based on the pharmacokinetics/pharmacodynamics analyses of data from adult NHL patients, nausea increased with increasing bendamustine Cmax.
Cardiac Electrophysiology
The effect of bendamustine on the QTc interval was evaluated in 53 patients with indolent NHL and mantle cell lymphoma on Day 1 of Cycle 1 after administration of rituximab at 375 mg/m2 intravenous infusion followed by a 30-minute intravenous infusion of bendamustine at 90 mg/m2/day. No mean changes greater than 20 milliseconds were detected up to one hour post-infusion. The potential for delayed effects on the QT interval after one hour was not evaluated.
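For reference, QTc denotes the QT interval corrected for heart rate; the label does not state which correction method was applied in this evaluation. The sketch below shows Bazett's formula, one common choice, with hypothetical values.

```python
# Bazett's correction is one common way to compute a QTc value; the label does
# not state which correction method was used in this evaluation.
import math

def qtc_bazett(qt_ms: float, rr_seconds: float) -> float:
    """Heart-rate-corrected QT interval (ms) using Bazett's formula."""
    return qt_ms / math.sqrt(rr_seconds)

print(f"QTc ≈ {qtc_bazett(qt_ms=400.0, rr_seconds=0.8):.0f} ms")  # hypothetical values
```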
## Pharmacokinetics
Absorption
Following a single IV dose of bendamustine hydrochloride, Cmax typically occurred at the end of infusion. The dose proportionality of bendamustine has not been studied.
Distribution
In vitro, the binding of bendamustine to human serum plasma proteins ranged from 94-96% and was concentration independent from 1-50 μg/mL. Data suggest that bendamustine is not likely to displace or to be displaced by highly protein-bound drugs. The blood to plasma concentration ratios in human blood ranged from 0.84 to 0.86 over a concentration range of 10 to 100 μg/mL indicating that bendamustine distributes freely in human red blood cells.
In a mass balance study, plasma radioactivity levels were sustained for a greater period of time than plasma concentrations of bendamustine, γ-hydroxybendamustine (M3), and N-desmethylbendamustine (M4). This suggests that there are other bendamustine-derived materials (detected via the radiolabel) that are cleared more slowly and have a longer half-life than bendamustine and its active metabolites.
The mean steady-state volume of distribution (Vss) of bendamustine was approximately 20-25 L. Steady-state volume of distribution for total radioactivity was approximately 50 L, indicating that neither bendamustine nor total radioactivity are extensively distributed into the tissues.
Metabolism
In vitro data indicate that bendamustine is primarily metabolized via hydrolysis to monohydroxy (HP1) and dihydroxy-bendamustine (HP2) metabolites with low cytotoxic activity. Two active minor metabolites, M3 and M4, are primarily formed via CYP1A2. However, concentrations of these metabolites in plasma are 1/10th and 1/100th that of the parent compound, respectively, suggesting that the cytotoxic activity is primarily due to bendamustine.
Results of a human mass balance study confirm that bendamustine is extensively metabolized via hydrolytic, oxidative, and conjugative pathways.
In vitro studies using human liver microsomes indicate that bendamustine does not inhibit CYP1A2, 2C9/10, 2D6, 2E1, or 3A4/5. Bendamustine did not induce metabolism of CYP1A2, CYP2A6, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2E1, or CYP3A4/5 enzymes in primary cultures of human hepatocytes.
Elimination
Mean recovery of total radioactivity in cancer patients following IV infusion of [14C]bendamustine hydrochloride was approximately 76% of the dose. Approximately 50% of the dose was recovered in the urine and approximately 25% of the dose was recovered in the feces. Urinary excretion was confirmed as a relatively minor pathway of elimination of bendamustine, with approximately 3.3% of the dose recovered in the urine as parent. Less than 1% of the dose was recovered in the urine as M3 and M4, and less than 5% of the dose was recovered in the urine as HP2.
Bendamustine clearance in humans is approximately 700 mL/minute. After a single dose of 120 mg/m2 bendamustine IV over 1-hour the intermediate t½ of the parent compound is approximately 40 minutes. The mean apparent terminal elimination t½ of M3 and M4 are approximately 3 hours and 30 minutes respectively. Little or no accumulation in plasma is expected for bendamustine administered on Days 1 and 2 of a 28-day cycle.
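The "little or no accumulation" statement follows from the short half-life relative to the dosing schedule. As a hedged back-of-the-envelope check, the sketch below computes a one-compartment accumulation ratio for the 24-hour gap between the Day 1 and Day 2 doses using the ~40-minute intermediate half-life reported above; the one-compartment assumption is an approximation for illustration, not a model taken from the prescribing information.

```python
# Back-of-the-envelope accumulation check (one-compartment approximation).
# Uses the ~40-minute intermediate half-life reported above and the 24-hour
# spacing between the Day 1 and Day 2 doses. This is an illustrative
# approximation, not a model from the prescribing information.
import math

half_life_h = 40.0 / 60.0           # ~40 minutes, expressed in hours
tau_h = 24.0                        # Day 1 -> Day 2 dosing interval
k = math.log(2) / half_life_h       # first-order elimination rate constant (1/h)

accumulation_ratio = 1.0 / (1.0 - math.exp(-k * tau_h))
print(f"Accumulation ratio ≈ {accumulation_ratio:.6f}")   # ≈ 1.0 -> negligible accumulation
```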
Renal Impairment
In a population pharmacokinetic analysis of bendamustine in patients receiving 120 mg/m2 there was no meaningful effect of renal impairment (CrCL 40 - 80 mL/min, N=31) on the pharmacokinetics of bendamustine. Bendamustine has not been studied in patients with CrCL < 40 mL/min.
These results are however limited, and therefore bendamustine should be used with caution in patients with mild or moderate renal impairment. Bendamustine should not be used in patients with CrCL < 40 mL/min.
Hepatic Impairment
In a population pharmacokinetic analysis of bendamustine in patients receiving 120 mg/m2 there was no meaningful effect of mild (total bilirubin ≤ ULN, AST ≥ ULN to 2.5 x ULN, and/or ALP ≥ ULN to 5.0 x ULN, N=26) hepatic impairment on the pharmacokinetics of bendamustine. Bendamustine has not been studied in patients with moderate or severe hepatic impairment.
These results are however limited, and therefore bendamustine should be used with caution in patients with mild hepatic impairment. Bendamustine should not be used in patients with moderate (AST or ALT 2.5 - 10 x ULN and total bilirubin 1.5 - 3 x ULN) or severe (total bilirubin > 3 x ULN) hepatic impairment.
Effect of Age
Bendamustine exposure (as measured by AUC and Cmax) has been studied in adult patients ages 31 through 84 years. The pharmacokinetics of bendamustine (AUC and Cmax) were not significantly different between patients less than or greater than/equal to 65 years of age.
Effect of Gender
The pharmacokinetics of bendamustine were similar in male and female patients.
Effect of Race
The effect of race on the safety, and/or efficacy of Bendamustine has not been established. Based on a cross-study comparison, Japanese subjects (n = 6) had on average exposures that were 40% higher than non-Japanese subjects receiving the same dose. The significance of this difference on the safety and efficacy of Bendamustine in Japanese subjects has not been established.
## Nonclinical Toxicology
### Carcinogenesis, Mutagenesis, Impairment of Fertility
Bendamustine was carcinogenic in mice. Intraperitoneal injections at 37.5 mg/m2/day (12.5 mg/kg/day, the lowest dose tested) and 75 mg/m2/day (25 mg/kg/day) for four days produced peritoneal sarcomas in female AB/jena mice. Oral administration at 187.5 mg/m2/day (62.5 mg/kg/day, the only dose tested) for four days induced mammary carcinomas and pulmonary adenomas.
Bendamustine is a mutagen and clastogen. In a reverse bacterial mutation assay (Ames assay), bendamustine was shown to increase revertant frequency in the absence and presence of metabolic activation. Bendamustine was clastogenic in human lymphocytes in vitro, and in rat bone marrow cells in vivo (increase in micronucleated polychromatic erythrocytes) from 37.5 mg/m2, the lowest dose tested.
Impaired spermatogenesis, azoospermia, and total germinal aplasia have been reported in male patients treated with alkylating agents, especially in combination with other drugs. In some instances spermatogenesis may return in patients in remission, but this may occur only several years after intensive chemotherapy has been discontinued. Patients should be warned of the potential risk to their reproductive capacities.
# Clinical Studies
### Chronic Lymphocytic Leukemia (CLL)
The safety and efficacy of Bendamustine were evaluated in an open-label, randomized, controlled multicenter trial comparing Bendamustine to chlorambucil. The trial was conducted in 301 previously-untreated patients with Binet Stage B or C (Rai Stages I - IV) CLL requiring treatment. Need-to-treat criteria included hematopoietic insufficiency, B-symptoms, rapidly progressive disease or risk of complications from bulky lymphadenopathy. Patients with autoimmune hemolytic anemia or autoimmune thrombocytopenia, Richter’s syndrome, or transformation to prolymphocytic leukemia were excluded from the study.
The patient populations in the Bendamustine and chlorambucil treatment groups were balanced with regard to the following baseline characteristics: age (median 63 vs. 66 years), gender (63% vs. 61% male), Binet stage (71% vs. 69% Binet B), lymphadenopathy (79% vs. 82%), enlarged spleen (76% vs. 80%), enlarged liver (48% vs. 46%), hypercellular bone marrow (79% vs. 73%), “B” symptoms (51% vs. 53%), lymphocyte count (mean 65.7x109/L vs. 65.1x109/L), and serum lactate dehydrogenase concentration (mean 370.2 vs. 388.4 U/L). Ninety percent of patients in both treatment groups had immuno-phenotypic confirmation of CLL (CD5, CD23 and either CD19 or CD20 or both).
Patients were randomly assigned to receive either Bendamustine at 100 mg/m2, administered intravenously over a period of 30 minutes on Days 1 and 2, or chlorambucil at 0.8 mg/kg (Broca's normal weight) administered orally on Days 1 and 15 of each 28-day cycle. Efficacy endpoints of objective response rate and progression-free survival were calculated using a pre-specified algorithm based on the NCI Working Group criteria for CLL.
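Chlorambucil dosing in this trial was weight-based using Broca's normal weight rather than actual body weight. As a hedged illustration, Broca's normal weight is commonly taken as height in centimeters minus 100; the sketch below applies that convention to the 0.8 mg/kg dose, with the height value chosen purely as an example.

```python
# Illustrative sketch of the comparator arm's weight-based dosing.
# Broca's normal weight is taken here as (height in cm - 100); this convention
# and the example height are assumptions for illustration only.

def broca_normal_weight_kg(height_cm: float) -> float:
    return height_cm - 100.0

def chlorambucil_dose_mg(height_cm: float, dose_mg_per_kg: float = 0.8) -> float:
    return dose_mg_per_kg * broca_normal_weight_kg(height_cm)

print(f"Chlorambucil dose ≈ {chlorambucil_dose_mg(height_cm=170.0):.0f} mg on Days 1 and 15")
# ≈ 56 mg for a hypothetical 170-cm patient
```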
The results of this open-label randomized study demonstrated a higher rate of overall response and a longer progression-free survival for Bendamustine compared to chlorambucil (see Table 5). Survival data are not mature.
- CR was defined as peripheral lymphocyte count ≤ 4.0 x 109/L, neutrophils ≥ 1.5 x 109/L, platelets >100 x 109/L, hemoglobin > 110g/L, without transfusions, absence of palpable hepatosplenomegaly, lymph nodes ≤ 1.5 cm, < 30% lymphocytes without nodularity in at least a normocellular bone marrow and absence of “B” symptoms. The clinical and laboratory criteria were required to be maintained for a period of at least 56 days (see the criteria-check sketch after these footnotes).
- nPR was defined as described for CR with the exception that the bone marrow biopsy shows persistent nodules.
† PR was defined as ≥ 50% decrease in peripheral lymphocyte count from the pretreatment baseline value, and either ≥50% reduction in lymphadenopathy, or ≥50% reduction in the size of spleen or liver, as well as one of the following hematologic improvements: neutrophils ≥ 1.5 x 109/L or 50% improvement over baseline, platelets >100 x 109/L or 50% improvement over baseline, hemoglobin >110g/L or 50% improvement over baseline without transfusions, for a period of at least 56 days.
†† PFS was defined as time from randomization to progression or death from any cause.
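As a hedged illustration, the sketch below encodes the CR definition from the first footnote above as a simple boolean check. The field names and the example assessment values are assumptions for illustration only and carry no clinical authority.

```python
# Illustrative sketch of the CR definition quoted above (randomized CLL study).
# Thresholds come from the footnote text; field names and example values are
# assumptions for illustration and carry no clinical authority.
from dataclasses import dataclass

@dataclass
class ResponseAssessment:
    lymphocytes_10e9_per_l: float
    neutrophils_10e9_per_l: float
    platelets_10e9_per_l: float
    hemoglobin_g_per_l: float
    transfusion_free: bool
    palpable_hepatosplenomegaly: bool
    largest_lymph_node_cm: float
    marrow_lymphocyte_percent: float
    marrow_nodularity: bool
    b_symptoms: bool
    criteria_maintained_days: int

def meets_cr(a: ResponseAssessment) -> bool:
    """True if all CR criteria from the footnote are met for >= 56 days."""
    return (a.lymphocytes_10e9_per_l <= 4.0
            and a.neutrophils_10e9_per_l >= 1.5
            and a.platelets_10e9_per_l > 100
            and a.hemoglobin_g_per_l > 110
            and a.transfusion_free
            and not a.palpable_hepatosplenomegaly
            and a.largest_lymph_node_cm <= 1.5
            and a.marrow_lymphocyte_percent < 30
            and not a.marrow_nodularity
            and not a.b_symptoms
            and a.criteria_maintained_days >= 56)

example = ResponseAssessment(3.2, 1.8, 150, 125, True, False, 1.0, 20, False, False, 60)
print(meets_cr(example))  # True for this hypothetical assessment
```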
Kaplan-Meier estimates of progression-free survival comparing Bendamustine with chlorambucil are shown in Figure 1.
### Non-Hodgkin Lymphoma (NHL)
The efficacy of Bendamustine was evaluated in a single arm study of 100 patients with indolent B-cell NHL that had progressed during or within six months of treatment with rituximab or a rituximab-containing regimen. Patients were included if they relapsed within 6 months of either the first dose (monotherapy) or last dose (maintenance regimen or combination therapy) of rituximab. All patients received Bendamustine intravenously at a dose of 120 mg/m2, on Days 1 and 2 of a 21-day treatment cycle. Patients were treated for up to 8 cycles.
The median age was 60 years, 65% were male, and 95% had a baseline WHO performance status of 0 or 1. Major tumor subtypes were follicular lymphoma (62%), diffuse small lymphocytic lymphoma (21%), and marginal zone lymphoma (16%). Ninety-nine percent of patients had received previous chemotherapy, 91% of patients had received previous alkylator therapy, and 97% of patients had relapsed within 6 months of either the first dose (monotherapy) or last dose (maintenance regimen or combination therapy) of rituximab.
Efficacy was based on the assessments by a blinded independent review committee (IRC) and included overall response rate (complete response + complete response unconfirmed + partial response) and duration of response (DR) as summarized in Table 6.
# How Supplied
### Safe Handling and Disposal
As with other potentially toxic anticancer agents, care should be exercised in the handling and preparation of solutions prepared from Bendamustine. The use of gloves and safety glasses is recommended to avoid exposure in case of breakage of the vial or other accidental spillage. If a solution of Bendamustine contacts the skin, wash the skin immediately and thoroughly with soap and water. If Bendamustine contacts the mucous membranes, flush thoroughly with water.
Bendamustine is an antineoplastic product. Follow special handling and disposal procedures.
### How Supplied
Bendamustine (bendamustine hydrochloride) for Injection is supplied in individual cartons as follows:
NDC 63459-390-08 Bendamustine (bendamustine hydrochloride) for Injection, 25 mg in 8 mL amber single-use vial
NDC 63459-391-20 Bendamustine (bendamustine hydrochloride) for Injection, 100 mg in 20 mL amber single-use vial
## Storage
Bendamustine may be stored up to 25°C (77°F) with excursions permitted up to 30°C (86°F) (see USP Controlled Room Temperature). Retain in original package until time of use to protect from light.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
### Allergic (hypersensitivity) Reactions
Inform patients of the possibility of mild or serious allergic reactions and to immediately report rash, facial swelling, or difficulty breathing during or soon after infusion.
### Myelosuppression
Inform patients of the likelihood that Bendamustine will cause a decrease in white blood cells, platelets, and red blood cells, and the need for frequent monitoring of blood counts. Advise patients to report shortness of breath, significant fatigue, bleeding, fever, or other signs of infection.
### Fatigue
Advise patients that Bendamustine may cause tiredness and to avoid driving any vehicle or operating any dangerous tools or machinery if they experience this side effect.
### Nausea and Vomiting
Advise patients that Bendamustine may cause nausea and/or vomiting. Patients should report nausea and vomiting so that symptomatic treatment may be provided.
### Diarrhea
Advise patients that Bendamustine may cause diarrhea. Patients should report diarrhea to the physician so that symptomatic treatment may be provided.
### Rash
Advise patients that a mild rash or itching may occur during treatment with Bendamustine. Advise patients to immediately report severe or worsening rash or itching.
### Pregnancy and Nursing
Bendamustine can cause fetal harm. Women should be advised to avoid becoming pregnant throughout treatment and for 3 months after Bendamustine therapy has stopped. Men receiving Bendamustine should use reliable contraception for the same time period. Advise patients to report pregnancy immediately. Advise patients to avoid nursing while receiving Bendamustine.
# Precautions with Alcohol
Alcohol-Bendamustine interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
TREANDA
# Look-Alike Drug Names
There is limited information about the look-alike drug names.
# Drug Shortage Status
# Price
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]; Associate Editor(s)-in-Chief: Sheng Shi, M.D. [2]; Sree Teja Yelamanchili, MBBS [3]
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Bendamustine is a Alkylating Drug that is FDA approved for the treatment of Chronic Lymphocytic Leukemia (CLL), Non-Hodgkin Lymphoma (NHL). Common adverse reactions include injection site pain, pruritus, rash, weight loss, constipation, diarrhea, loss of appetite, nausea, stomatitis, vomiting, headache, cough, dyspnea, dehydration, fatigue , fever.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
### Chronic Lymphocytic Leukemia
- Dosing information
- Recommended Dosage:
- Recommended dosage:100 mg/m2 administered intravenously over 30 minutes on Days 1 and 2 of a 28-day cycle, up to 6 cycles.
- Dose Delays, Dose Modifications and Reinitiation of Therapy for CLL:
- Bendamustine administration should be delayed in the event of Grade 4 hematologic toxicity or clinically significant ≥ Grade 2 non-hematologic toxicity. Once non-hematologic toxicity has recovered to ≤ Grade 1 and/or the blood counts have improved Absolute Neutrophil Count (ANC) ≥ 1 x 109/L, platelets ≥ 75 x 109/L], Bendamustine can be reinitiated at the discretion of the treating physician. In addition, dose reduction may be warranted.
- Dose modifications for hematologic toxicity: for Grade 3 or greater toxicity, reduce the dose to 50 mg/m2 on Days 1 and 2 of each cycle; if Grade 3 or greater toxicity recurs, reduce the dose to 25 mg/m2 on Days 1 and 2 of each cycle.
- Dose modifications for non-hematologic toxicity: for clinically significant Grade 3 or greater toxicity, reduce the dose to 50 mg/m2 on Days 1 and 2 of each cycle.
- Dose re-escalation in subsequent cycles may be considered at the discretion of the treating physician.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Bendamustine in adult patients.
### Non–Guideline-Supported Use
### Metastatic Breast Cancer
- Dosing information
- 120 mg/m(2) IV over 30 minutes on days 1 and 2 every 4 weeks17872900
- 60 mg/m(2) IV over 30 minutes on days 1, 8, and 15 every 28 days17667603
### Multiple myeloma
- Dosing information
- 150 mg/m(2) (in 500 mL of normal saline) IV over 30 minutes on days 1 and 216402269
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
The effectiveness of Bendamustine in pediatric patients has not been established
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Bendamustine in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Bendamustine in pediatric patients.
# Contraindications
Bendamustine is contraindicated in patients with a known hypersensitivity (e.g., anaphylactic and anaphylactoid reactions) to bendamustine.
# Warnings
### Myelosuppression
Bendamustine caused severe myelosuppression (Grade 3-4) in 98% of patients in the two NHL studies (see Table 4). Three patients (2%) died from myelosuppression-related adverse reactions; one each from neutropenic sepsis, diffuse alveolar hemorrhage with Grade 3 thrombocytopenia, and pneumonia from an opportunistic infection (CMV).
In the event of treatment-related myelosuppression, monitor leukocytes, platelets, hemoglobin (Hgb), and neutrophils frequently. In the clinical trials, blood counts were monitored every week initially. Hematologic nadirs were observed predominantly in the third week of therapy. Myelosuppression may require dose delays and/or subsequent dose reductions if recovery to the recommended values has not occurred by the first day of the next scheduled cycle. Prior to the initiation of the next cycle of therapy, the ANC should be ≥ 1 x 109/L and the platelet count should be ≥ 75 x 109/L.
### Infections
Infection, including pneumonia, sepsis, septic shock, and death have occurred in adult and pediatric patients in clinical trials and in postmarketing reports. Patients with myelosuppression following treatment with Bendamustine are more susceptible to infections. Advise patients with myelosuppression following Bendamustine treatment to contact a physician if they have symptoms or signs of infection.
### Anaphylaxis and Infusion Reactions
Infusion reactions to Bendamustine have occurred commonly in clinical trials. Symptoms include fever, chills, pruritus and rash. In rare instances severe anaphylactic and anaphylactoid reactions have occurred, particularly in the second and subsequent cycles of therapy. Monitor clinically and discontinue drug for severe reactions. Ask patients about symptoms suggestive of infusion reactions after their first cycle of therapy. Patients who experience Grade 3 or worse allergic-type reactions should not be rechallenged. Consider measures to prevent severe reactions, including antihistamines, antipyretics and corticosteroids in subsequent cycles in patients who have experienced Grade 1 or 2 infusion reactions. Discontinue Bendamustine for patients with Grade 4 infusion reactions. Consider discontinuation for Grade 3 infusions reactions as clinically appropriate considering individual benefits, risks, and supportive care.
### Tumor Lysis Syndrome
Tumor lysis syndrome associated with Bendamustine treatment has occurred in patients in clinical trials and in postmarketing reports. The onset tends to be within the first treatment cycle of Bendamustine and, without intervention, may lead to acute renal failure and death. Preventive measures include vigorous hydration and close monitoring of blood chemistry, particularly potassium and uric acid levels. Allopurinol has also been used during the beginning of Bendamustine therapy. However, there may be an increased risk of severe skin toxicity when Bendamustine and allopurinol are administered concomitantly [see Warnings and Precautions (5.5)].
### Skin Reactions
Skin reactions have been reported with Bendamustine treatment in clinical trials and postmarketing safety reports, including rash, toxic skin reactions and bullous exanthema. Some events occurred when Bendamustine was given in combination with other anticancer agents.
In a study of Bendamustine (90 mg/m2) in combination with rituximab, one case of toxic epidermal necrolysis (TEN) occurred. TEN has been reported for rituximab (see rituximab package insert). Cases of Stevens-Johnson syndrome (SJS) and TEN, some fatal, have been reported when Bendamustine was administered concomitantly with allopurinol and other medications known to cause these syndromes. The relationship to Bendamustine cannot be determined.
Where skin reactions occur, they may be progressive and increase in severity with further treatment. Monitor patients with skin reactions closely. If skin reactions are severe or progressive, withhold or discontinue Bendamustine.
### Other Malignancies
There are reports of pre-malignant and malignant diseases that have developed in patients who have been treated with Bendamustine, including myelodysplastic syndrome, myeloproliferative disorders, acute myeloid leukemia and bronchial carcinoma. The association with Bendamustine therapy has not been determined.
### Extravasation Injury
Bendamustine extravasations have been reported in post marketing resulting in hospitalizations from erythema, marked swelling, and pain. Assure good venous access prior to starting Bendamustine infusion and monitor the intravenous infusion site for redness, swelling, pain, infection, and necrosis during and after administration of Bendamustine.
### Embryo-fetal Toxicity
Bendamustine can cause fetal harm when administered to a pregnant woman. Single intraperitoneal doses of bendamustine in mice and rats administered during organogenesis caused an increase in resorptions, skeletal and visceral malformations, and decreased fetal body weights.
# Adverse Reactions
## Clinical Trials Experience
The data described below reflect exposure to Bendamustine in 153 patients with CLL studied in an active-controlled, randomized trial. The population was 45-77 years of age, 63% male, 100% white, and were treatment naïve. All patients started the study at a dose of 100 mg/m2 intravenously over 30 minutes on Days 1 and 2 every 28 days.
Adverse reactions were reported according to NCI CTC v.2.0. Non-hematologic adverse reactions (any grade) in the Bendamustine group that occurred with a frequency greater than 15% were pyrexia (24%), nausea (20%), and vomiting (16%).
Other adverse reactions seen frequently in one or more studies included asthenia, fatigue, malaise, and weakness; dry mouth; somnolence; cough; constipation; headache; mucosal inflammation and stomatitis.
Worsening hypertension was reported in 4 patients treated with Bendamustine in the CLL trial and in none treated with chlorambucil. Three of these 4 adverse reactions were described as a hypertensive crisis and were managed with oral medications and resolved.
The most frequent adverse reactions leading to study withdrawal for patients receiving Bendamustine were hypersensitivity (2%) and pyrexia (1%).
Table 1 contains the treatment emergent adverse reactions, regardless of attribution, that were reported in ≥ 5% of patients in either treatment group in the randomized CLL clinical study.
The Grade 3 and 4 hematology laboratory test values by treatment group in the randomized CLL clinical study are described in Table 2. These findings confirm the myelosuppressive effects seen in patients treated with Bendamustine. Red blood cell transfusions were administered to 20% of patients receiving Bendamustine compared with 6% of patients receiving chlorambucil.
In the CLL trial, 34% of patients had bilirubin elevations, some without associated significant elevations in AST and ALT. Grade 3 or 4 increased bilirubin occurred in 3% of patients. Increases in AST and ALT of Grade 3 or 4 were limited to 1% and 3% of patients, respectively. Patients treated with Bendamustine may also have changes in their creatinine levels. If abnormalities are detected, monitoring of these parameters should be continued to ensure that further deterioration does not occur.
### Clinical Trials Experience in NH
The data described below reflect exposure to Bendamustine in 176 patients with indolent B-cell NHL treated in two single-arm studies. The population was 31-84 years of age, 60% male, and 40% female. The race distribution was 89% White, 7% Black, 3% Hispanic, 1% other, and <1% Asian. These patients received Bendamustine at a dose of 120 mg/m2 intravenously on Days 1 and 2 for up to eight 21-day cycles.
The adverse reactions occurring in at least 5% of the NHL patients, regardless of severity, are shown in Table 3. The most common non-hematologic adverse reactions (≥30%) were nausea (75%), fatigue (57%), vomiting (40%), diarrhea (37%) and pyrexia (34%). The most common non-hematologic Grade 3 or 4 adverse reactions (≥5%) were fatigue (11%), febrile neutropenia (6%), and pneumonia, hypokalemia and dehydration, each reported in 5% of patients.
Hematologic toxicities, based on laboratory values and CTC grade, in NHL patients treated in both single arm studies combined are described in Table 4. Clinically important chemistry laboratory values that were new or worsened from baseline and occurred in >1% of patients at Grade 3 or 4, in NHL patients treated in both single arm studies combined were hyperglycemia (3%), elevated creatinine (2%), hyponatremia (2%), and hypocalcemia (2%).
In both studies, serious adverse reactions, regardless of causality, were reported in 37% of patients receiving Bendamustine. The most common serious adverse reactions occurring in ≥5% of patients were febrile neutropenia and pneumonia. Other important serious adverse reactions reported in clinical trials and/or postmarketing experience were acute renal failure, cardiac failure, hypersensitivity, skin reactions, pulmonary fibrosis, and myelodysplastic syndrome.
Serious drug-related adverse reactions reported in clinical trials included myelosuppression, infection, pneumonia, tumor lysis syndrome and infusion reactions . Adverse reactions occurring less frequently but possibly related to Bendamustine treatment were hemolysis, dysgeusia/taste disorder, atypical pneumonia, sepsis, herpes zoster, erythema, dermatitis, and skin necrosis.
## Postmarketing Experience
The following adverse reactions have been identified during post-approval use of Bendamustine. Because these reactions are reported voluntarily from a population of uncertain size, it is not always possible to reliably estimate their frequency or establish a causal relationship to drug exposure: anaphylaxis; and injection or infusion site reactions including phlebitis, pruritus, irritation, pain, and swelling; pneumocystis jiroveci pneumonia and pneumonitis.
Skin reactions including SJS and TEN have occurred when Bendamustine was administered concomitantly with allopurinol and other medications known to cause these syndromes.
# Drug Interactions
No formal clinical assessments of pharmacokinetic drug-drug interactions between Bendamustine and other drugs have been conducted.
Bendamustine's active metabolites, gamma-hydroxy bendamustine (M3) and N-desmethyl-bendamustine (M4), are formed via cytochrome P450 CYP1A2. Inhibitors of CYP1A2 (e.g., fluvoxamine, ciprofloxacin) have potential to increase plasma concentrations of bendamustine and decrease plasma concentrations of active metabolites. Inducers of CYP1A2 (e.g., omeprazole, smoking) have potential to decrease plasma concentrations of bendamustine and increase plasma concentrations of its active metabolites. Caution should be used, or alternative treatments considered if concomitant treatment with CYP1A2 inhibitors or inducers is needed.
The role of active transport systems in bendamustine distribution has not been fully evaluated. In vitro data suggest that P-glycoprotein, breast cancer resistance protein (BCRP), and/or other efflux transporters may have a role in bendamustine transport.
Based on in vitro data, bendamustine is not likely to inhibit metabolism via human CYP isoenzymes CYP1A2, 2C9/10, 2D6, 2E1, or 3A4/5, or to induce metabolism of substrates of cytochrome P450 enzymes.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): D
Risk Summary
Bendamustine can cause fetal harm when administered to a pregnant woman. Bendamustine caused malformations in animals, when a single dose was administered to pregnant animals. Advise women to avoid becoming pregnant while receiving Bendamustine and for 3 months after therapy has stopped. If this drug is used during pregnancy, or if the patient becomes pregnant while receiving this drug, the patient should be apprised of the potential hazard to a fetus. Advise men receiving Bendamustine to use reliable contraception for the same time period.
Animal data
Single intraperitoneal doses of bendamustine from 210 mg/m2 (70 mg/kg) in mice administered during organogenesis caused an increase in resorptions, skeletal and visceral malformations (exencephaly, cleft palates, accessory rib, and spinal deformities) and decreased fetal body weights. This dose did not appear to be maternally toxic and lower doses were not evaluated. Repeat intraperitoneal dosing in mice on gestation days 7-11 resulted in an increase in resorptions from 75 mg/m2 (25 mg/kg) and an increase in abnormalities from 112.5 mg/m2 (37.5 mg/kg) similar to those seen after a single intraperitoneal administration. Single intraperitoneal doses of bendamustine from 120 mg/m2 (20 mg/kg) in rats administered on gestation days 4, 7, 9, 11, or 13 caused embryo and fetal lethality as indicated by increased resorptions and a decrease in live fetuses. A significant increase in external [effect on tail, head, and herniation of external organs (exomphalos)] and internal (hydronephrosis and hydrocephalus) malformations were seen in dosed rats. There are no adequate and well-controlled studies in pregnant women. If this drug is used during pregnancy, or if the patient becomes pregnant while taking this drug, the patient should be apprised of the potential hazard to the fetus.
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Bendamustine in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Bendamustine during labor and delivery.
### Nursing Mothers
It is not known whether this drug is excreted in human milk. Because many drugs are excreted in human milk and because of the potential for serious adverse reactions in nursing infants and tumorigenicity shown for bendamustine in animal studies, a decision should be made whether to discontinue nursing or to discontinue the drug, taking into account the importance of the drug to the mother.
### Pediatric Use
The effectiveness of Bendamustine in pediatric patients has not been established. Bendamustine was evaluated in a single Phase 1/2 trial in pediatric patients with leukemia. The safety profile for Bendamustine in pediatric patients was consistent with that seen in adults, and no new safety signals were identified.
The trial included pediatric patients from 1-19 years of age with relapsed or refractory acute leukemia, including 27 patients with acute lymphocytic leukemia (ALL) and 16 patients with acute myeloid leukemia (AML). Bendamustine was administered as an intravenous infusion over 60 minutes on Days 1 and 2 of each 21-day cycle. Doses of 90 and 120 mg/m2 were evaluated. The Phase 1 portion of the study determined that the recommended Phase 2 dose of Bendamustine in pediatric patients was 120 mg/m2.
A total of 32 patients entered the Phase 2 portion of the study at the recommended dose and were evaluated for response. There was no treatment response (CR+ CRp) in any patient at this dose. However, there were 2 patients with ALL who achieved a CR at a dose of 90 mg/m2 in the Phase 1 portion of the study.
In the above-mentioned pediatric trial, the pharmacokinetics of Bendamustine at 90 and 120 mg/m2 doses were evaluated in 5 and 38 patients, respectively, aged 1 to 19 years (median age of 10 years).
The geometric mean body surface adjusted clearance of bendamustine was 14.2 L/h/m2. The exposures (AUC0-24 and Cmax) to bendamustine in pediatric patients following a 120 mg/m2 intravenous infusion over 60 minutes were similar to those in adult patients following the same 120 mg/m2 dose.
### Geriatic Use
In CLL and NHL studies, there were no clinically significant differences in the adverse reaction profile between geriatric (≥ 65 years of age) and younger patients.
Chronic Lymphocytic Leukemia
In the randomized CLL clinical study, 153 patients received Bendamustine. The overall response rate for patients younger than 65 years of age was 70% (n=82) for Bendamustine and 30% (n = 69) for chlorambucil. The overall response rate for patients 65 years or older was 47% (n=71) for Bendamustine and 22% (n = 79) for chlorambucil. In patients younger than 65 years of age, the median progression-free survival was 19 months in the Bendamustine group and 8 months in the chlorambucil group. In patients 65 years or older, the median progression-free survival was 12 months in the Bendamustine group and 8 months in the chlorambucil group.
Non-Hodgkin Lymphoma
Efficacy (Overall Response Rate and Duration of Response) was similar in patients < 65 years of age and patients ≥ 65 years. Irrespective of age, all of the 176 patients experienced at least one adverse reaction.
### Gender
No clinically significant differences between genders were seen in the overall incidences of adverse reactions in either CLL or NHL studies.
Chronic Lymphocytic Leukemia
In the randomized CLL clinical study, the overall response rate (ORR) for men (n=97) and women (n=56) in the Bendamustine group was 60% and 57%, respectively. The ORR for men (n=90) and women (n=58) in the chlorambucil group was 24% and 28%, respectively. In this study, the median progression-free survival for men was 19 months in the Bendamustine treatment group and 6 months in the chlorambucil treatment group. For women, the median progression-free survival was 13 months in the Bendamustine treatment group and 8 months in the chlorambucil treatment group.
Non-Hodgkin Lymphoma
The pharmacokinetics of bendamustine were similar in male and female patients with indolent NHL. No clinically-relevant differences between genders were seen in efficacy (ORR and DR).
### Race
There is no FDA guidance on the use of Bendamustine with respect to specific racial populations.
### Renal Impairment
No formal studies assessing the impact of renal impairment on the pharmacokinetics of bendamustine have been conducted. Bendamustine should be used with caution in patients with mild or moderate renal impairment. Bendamustine should not be used in patients with CrCL < 40 mL/min.
### Hepatic Impairment
No formal studies assessing the impact of hepatic impairment on the pharmacokinetics of bendamustine have been conducted. Bendamustine should be used with caution in patients with mild hepatic impairment. Bendamustine should not be used in patients with moderate (AST or ALT 2.5-10 X ULN and total bilirubin 1.5-3 X ULN) or severe (total bilirubin > 3 X ULN) hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Bendamustine in women of reproductive potentials and males.
### Immunocompromised Patients
There is no FDA guidance one the use of Bendamustine in patients who are immunocompromised.
# Administration and Monitoring
### Administration
Administered intravenously
### Monitoring
FDA Package Insert for Bendamustine contains no information regarding Adverse Reactions.
# IV Compatibility
There is limited information about the IV Compatibility.
# Overdosage
The intravenous LD50 of bendamustine HCl is 240 mg/m2 in the mouse and rat. Toxicities included sedation, tremor, ataxia, convulsions and respiratory distress.
Across all clinical experience, the reported maximum single dose received was 280 mg/m2. Three of four patients treated at this dose showed ECG changes considered dose-limiting at 7 and 21 days post-dosing. These changes included QT prolongation (one patient), sinus tachycardia (one patient), ST and T wave deviations (two patients) and left anterior fascicular block (one patient). Cardiac enzymes and ejection fractions remained normal in all patients.
No specific antidote for Bendamustine overdose is known. Management of overdosage should include general supportive measures, including monitoring of hematologic parameters and ECGs.
# Pharmacology
## Mechanism of Action
Bendamustine is a bifunctional mechlorethamine derivative containing a purine-like benzimidazole ring. Mechlorethamine and its derivatives form electrophilic alkyl groups. These groups form covalent bonds with electron-rich nucleophilic moieties, resulting in interstrand DNA crosslinks. The bifunctional covalent linkage can lead to cell death via several pathways. Bendamustine is active against both quiescent and dividing cells. The exact mechanism of action of bendamustine remains unknown.
## Structure
Bendamustine contains bendamustine hydrochloride, an alkylating drug, as the active ingredient. The chemical name of bendamustine hydrochloride is 1H-benzimidazole-2-butanoic acid, 5-[bis(2-chloroethyl)amino]-1 methyl-, monohydrochloride. Its empirical molecular formula is C16H21Cl2N3O2 ∙ HCl, and the molecular weight is 394.7. Bendamustine hydrochloride contains a mechlorethamine group and a benzimidazole heterocyclic ring with a butyric acid substituent, and has the following structural formula:
Bendamustine (bendamustine hydrochloride) for Injection is intended for intravenous infusion only after reconstitution with Sterile Water for Injection, USP, and after further dilution with either 0.9% Sodium Chloride Injection, USP, or 2.5% Dextrose/0.45% Sodium Chloride Injection, USP. It is supplied as a sterile non-pyrogenic white to off-white lyophilized powder in a single-use vial. Each 25-mg vial contains 25 mg of bendamustine hydrochloride and 42.5 mg of mannitol, USP. Each 100-mg vial contains 100 mg of bendamustine hydrochloride and 170 mg of mannitol, USP. The pH of the reconstituted solution is 2.5 - 3.5.
## Pharmacodynamics
Based on the pharmacokinetics/pharmacodynamics analyses of data from adult NHL patients, nausea increased with increasing bendamustine Cmax.
Cardiac Electrophysiology
The effect of bendamustine on the QTc interval was evaluated in 53 patients with indolent NHL and mantle cell lymphoma on Day 1 of Cycle 1 after administration of rituximab at 375 mg/m2 intravenous infusion followed by a 30-minute intravenous infusion of bendamustine at 90 mg/m2/day. No mean changes greater than 20 milliseconds were detected up to one hour post-infusion. The potential for delayed effects on the QT interval after one hour was not evaluated.
## Pharmacokinetics
Absorption
Following a single IV dose of bendamustine hydrochloride Cmax typically occurred at the end of infusion. The dose proportionality of bendamustine has not been studied.
Distribution
In vitro, the binding of bendamustine to human serum plasma proteins ranged from 94-96% and was concentration independent from 1-50 μg/mL. Data suggest that bendamustine is not likely to displace or to be displaced by highly protein-bound drugs. The blood to plasma concentration ratios in human blood ranged from 0.84 to 0.86 over a concentration range of 10 to 100 μg/mL indicating that bendamustine distributes freely in human red blood cells.
In a mass balance study, plasma radioactivity levels were sustained for a greater period of time than plasma concentrations of bendamustine, γ hydroxybendamustine (M3), and N desmethylbendamustine (M4). This suggests that there are bendamustine derived materials (detected via the radiolabel), that are rapidly cleared and have a longer half-life than bendamustine and its active metabolites.
The mean steady-state volume of distribution (Vss) of bendamustine was approximately 20-25 L. Steady-state volume of distribution for total radioactivity was approximately 50 L, indicating that neither bendamustine nor total radioactivity are extensively distributed into the tissues.
Metabolism
In vitro data indicate that bendamustine is primarily metabolized via hydrolysis to monohydroxy (HP1) and dihydroxy-bendamustine (HP2) metabolites with low cytotoxic activity. Two active minor metabolites, M3 and M4, are primarily formed via CYP1A2. However, concentrations of these metabolites in plasma are 1/10th and 1/100th that of the parent compound, respectively, suggesting that the cytotoxic activity is primarily due to bendamustine.
Results of a human mass balance study confirm that bendamustine is extensively metabolized via hydrolytic, oxidative, and conjugative pathways.
In vitro studies using human liver microsomes indicate that bendamustine does not inhibit CYP1A2, 2C9/10, 2D6, 2E1, or 3A4/5. Bendamustine did not induce metabolism of CYP1A2, CYP2A6, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2E1, or CYP3A4/5 enzymes in primary cultures of human hepatocytes.
Elimination
Mean recovery of total radioactivity in cancer patients following IV infusion of [14C] bendamustine hydrochloride was approximately 76% of the dose. Approximately 50% the dose was recovered in the urine and approximately a 25% of the dose was recovered in the feces. Urinary excretion was confirmed as a relatively minor pathway of elimination of bendamustine, with approximately 3.3% of the dose recovered in the urine as parent. Less than 1% of the dose was recovered in the urine as M3 and M4, and less than 5% of the dose was recovered in the urine as HP2.
Bendamustine clearance in humans is approximately 700 mL/minute. After a single dose of 120 mg/m2 bendamustine IV over 1 hour, the intermediate t½ of the parent compound is approximately 40 minutes. The mean apparent terminal elimination t½ is approximately 3 hours for M3 and approximately 30 minutes for M4. Little or no accumulation in plasma is expected for bendamustine administered on Days 1 and 2 of a 28-day cycle.
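As a rough illustration of why little accumulation is expected (a simplified sketch assuming single-exponential, first-order decay at the intermediate half-life quoted above; the true disposition is multi-exponential), the fraction of a dose remaining after time t is

$$\frac{C(t)}{C_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}}, \qquad \left(\frac{1}{2}\right)^{24\,\mathrm{h}/0.67\,\mathrm{h}} \approx \left(\frac{1}{2}\right)^{36} \approx 1.5 \times 10^{-11}$$

so by the time of the Day 2 dose, essentially none of the Day 1 dose remains in plasma.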
Renal Impairment
In a population pharmacokinetic analysis of bendamustine in patients receiving 120 mg/m2 there was no meaningful effect of renal impairment (CrCL 40 - 80 mL/min, N=31) on the pharmacokinetics of bendamustine. Bendamustine has not been studied in patients with CrCL < 40 mL/min.
These results are, however, limited; therefore, bendamustine should be used with caution in patients with mild or moderate renal impairment. Bendamustine should not be used in patients with CrCL < 40 mL/min. [See Use in Specific Populations (8.6)]
Hepatic Impairment
In a population pharmacokinetic analysis of bendamustine in patients receiving 120 mg/m2 there was no meaningful effect of mild (total bilirubin ≤ ULN, AST ≥ ULN to 2.5 x ULN, and/or ALP ≥ ULN to 5.0 x ULN, N=26) hepatic impairment on the pharmacokinetics of bendamustine. Bendamustine has not been studied in patients with moderate or severe hepatic impairment.
These results are, however, limited; therefore, bendamustine should be used with caution in patients with mild hepatic impairment. Bendamustine should not be used in patients with moderate (AST or ALT 2.5 - 10 x ULN and total bilirubin 1.5 - 3 x ULN) or severe (total bilirubin > 3 x ULN) hepatic impairment. [See Use in Specific Populations (8.7)]
Effect of Age
Bendamustine exposure (as measured by AUC and Cmax) has been studied in adult patients ages 31 through 84 years. The pharmacokinetics of bendamustine (AUC and Cmax) were not significantly different between patients younger than 65 years and patients 65 years of age or older. [See Use in Specific Populations (8.4, 8.5)]
Effect of Gender
The pharmacokinetics of bendamustine were similar in male and female patients. [See Use in Specific Populations (8.8)]
Effect of Race
The effect of race on the safety and/or efficacy of Bendamustine has not been established. Based on a cross-study comparison, Japanese subjects (n = 6) had, on average, exposures that were 40% higher than non-Japanese subjects receiving the same dose. The significance of this difference for the safety and efficacy of Bendamustine in Japanese subjects has not been established.
## Nonclinical Toxicology
### Carcinogenesis, Mutagenesis, Impairment of Fertility
Bendamustine was carcinogenic in mice. Intraperitoneal injections at 37.5 mg/m2/day (12.5 mg/kg/day, the lowest dose tested) and 75 mg/m2/day (25 mg/kg/day) for four days produced peritoneal sarcomas in female AB/jena mice. Oral administration at 187.5 mg/m2/day (62.5 mg/kg/day, the only dose tested) for four days induced mammary carcinomas and pulmonary adenomas.
Bendamustine is a mutagen and clastogen. In a reverse bacterial mutation assay (Ames assay), bendamustine was shown to increase revertant frequency in the absence and presence of metabolic activation. Bendamustine was clastogenic in human lymphocytes in vitro, and in rat bone marrow cells in vivo (increase in micronucleated polychromatic erythrocytes) from 37.5 mg/m2, the lowest dose tested.
Impaired spermatogenesis, azoospermia, and total germinal aplasia have been reported in male patients treated with alkylating agents, especially in combination with other drugs. In some instances spermatogenesis may return in patients in remission, but this may occur only several years after intensive chemotherapy has been discontinued. Patients should be warned of the potential risk to their reproductive capacities.
# Clinical Studies
### Chronic Lymphocytic Leukemia (CLL)
The safety and efficacy of Bendamustine were evaluated in an open-label, randomized, controlled multicenter trial comparing Bendamustine to chlorambucil. The trial was conducted in 301 previously-untreated patients with Binet Stage B or C (Rai Stages I - IV) CLL requiring treatment. Need-to-treat criteria included hematopoietic insufficiency, B-symptoms, rapidly progressive disease or risk of complications from bulky lymphadenopathy. Patients with autoimmune hemolytic anemia or autoimmune thrombocytopenia, Richter’s syndrome, or transformation to prolymphocytic leukemia were excluded from the study.
The patient populations in the Bendamustine and chlorambucil treatment groups were balanced with regard to the following baseline characteristics: age (median 63 vs. 66 years), gender (63% vs. 61% male), Binet stage (71% vs. 69% Binet B), lymphadenopathy (79% vs. 82%), enlarged spleen (76% vs. 80%), enlarged liver (48% vs. 46%), hypercellular bone marrow (79% vs. 73%), “B” symptoms (51% vs. 53%), lymphocyte count (mean 65.7 x 10^9/L vs. 65.1 x 10^9/L), and serum lactate dehydrogenase concentration (mean 370.2 vs. 388.4 U/L). Ninety percent of patients in both treatment groups had immuno-phenotypic confirmation of CLL (CD5, CD23 and either CD19 or CD20 or both).
Patients were randomly assigned to receive either Bendamustine at 100 mg/m2, administered intravenously over a period of 30 minutes on Days 1 and 2, or chlorambucil at 0.8 mg/kg (Broca’s normal weight) administered orally on Days 1 and 15 of each 28-day cycle. Efficacy endpoints of objective response rate and progression-free survival were calculated using a pre-specified algorithm based on NCI working group criteria for CLL.
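To make the body-surface-area dosing arithmetic concrete (an illustrative calculation for a hypothetical patient with an assumed body surface area of 1.8 m2, not a dosing recommendation):

$$100\ \mathrm{mg/m^2} \times 1.8\ \mathrm{m^2} = 180\ \mathrm{mg}\ \text{per infusion on each of Days 1 and 2}$$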
The results of this open-label randomized study demonstrated a higher rate of overall response and a longer progression-free survival for Bendamustine compared to chlorambucil (see Table 5). Survival data are not mature.
- CR was defined as peripheral lymphocyte count ≤ 4.0 x 10^9/L, neutrophils ≥ 1.5 x 10^9/L, platelets > 100 x 10^9/L, hemoglobin > 110 g/L, without transfusions, absence of palpable hepatosplenomegaly, lymph nodes ≤ 1.5 cm, < 30% lymphocytes without nodularity in at least a normocellular bone marrow and absence of “B” symptoms. The clinical and laboratory criteria were required to be maintained for a period of at least 56 days.
- nPR was defined as described for CR with the exception that the bone marrow biopsy shows persistent nodules.
† PR was defined as ≥ 50% decrease in peripheral lymphocyte count from the pretreatment baseline value, and either ≥ 50% reduction in lymphadenopathy, or ≥ 50% reduction in the size of spleen or liver, as well as one of the following hematologic improvements: neutrophils ≥ 1.5 x 10^9/L or 50% improvement over baseline, platelets > 100 x 10^9/L or 50% improvement over baseline, hemoglobin > 110 g/L or 50% improvement over baseline without transfusions, for a period of at least 56 days.
†† PFS was defined as time from randomization to progression or death from any cause.
Kaplan-Meier estimates of progression-free survival comparing Bendamustine with chlorambucil are shown in Figure 1.
### Non-Hodgkin Lymphoma (NHL)
The efficacy of Bendamustine was evaluated in a single arm study of 100 patients with indolent B-cell NHL that had progressed during or within six months of treatment with rituximab or a rituximab-containing regimen. Patients were included if they relapsed within 6 months of either the first dose (monotherapy) or last dose (maintenance regimen or combination therapy) of rituximab. All patients received Bendamustine intravenously at a dose of 120 mg/m2, on Days 1 and 2 of a 21-day treatment cycle. Patients were treated for up to 8 cycles.
The median age was 60 years, 65% were male, and 95% had a baseline WHO performance status of 0 or 1. Major tumor subtypes were follicular lymphoma (62%), diffuse small lymphocytic lymphoma (21%), and marginal zone lymphoma (16%). Ninety-nine percent of patients had received previous chemotherapy, 91% of patients had received previous alkylator therapy, and 97% of patients had relapsed within 6 months of either the first dose (monotherapy) or last dose (maintenance regimen or combination therapy) of rituximab.
Efficacy was based on the assessments by a blinded independent review committee (IRC) and included overall response rate (complete response + complete response unconfirmed + partial response) and duration of response (DR) as summarized in Table 6.
# How Supplied
### Safe Handling and Disposal
As with other potentially toxic anticancer agents, care should be exercised in the handling and preparation of solutions prepared from Bendamustine. The use of gloves and safety glasses is recommended to avoid exposure in case of breakage of the vial or other accidental spillage. If a solution of Bendamustine contacts the skin, wash the skin immediately and thoroughly with soap and water. If Bendamustine contacts the mucous membranes, flush thoroughly with water.
Bendamustine is an antineoplastic product. Follow special handling and disposal procedures.
### How Supplied
Bendamustine (bendamustine hydrochloride) for Injection is supplied in individual cartons as follows:
NDC 63459-390-08 Bendamustine (bendamustine hydrochloride) for Injection, 25 mg in 8 mL amber single-use vial
NDC 63459-391-20 Bendamustine (bendamustine hydrochloride) for Injection, 100 mg in 20 mL amber single-use vial
## Storage
Bendamustine may be stored up to 25°C (77°F) with excursions permitted up to 30°C (86°F) (see USP Controlled Room Temperature). Retain in original package until time of use to protect from light.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
### Allergic (hypersensitivity) Reactions
Inform patients of the possibility of mild or serious allergic reactions and to immediately report rash, facial swelling, or difficulty breathing during or soon after infusion.
### Myelosuppression
Inform patients of the likelihood that Bendamustine will cause a decrease in white blood cells, platelets, and red blood cells, and the need for frequent monitoring of blood counts. Advise patients to report shortness of breath, significant fatigue, bleeding, fever, or other signs of infection.
### Fatigue
Advise patients that Bendamustine may cause tiredness and to avoid driving any vehicle or operating any dangerous tools or machinery if they experience this side effect.
### Nausea and Vomiting
Advise patients that Bendamustine may cause nausea and/or vomiting. Patients should report nausea and vomiting so that symptomatic treatment may be provided.
### Diarrhea
Advise patients that Bendamustine may cause diarrhea. Patients should report diarrhea to the physician so that symptomatic treatment may be provided.
### Rash
Advise patients that a mild rash or itching may occur during treatment with Bendamustine. Advise patients to immediately report severe or worsening rash or itching.
### Pregnancy and Nursing
Bendamustine can cause fetal harm. Women should be advised to avoid becoming pregnant throughout treatment and for 3 months after Bendamustine therapy has stopped. Men receiving Bendamustine should use reliable contraception for the same time period. Advise patients to report pregnancy immediately. Advise patients to avoid nursing while receiving Bendamustine.
# Precautions with Alcohol
Alcohol-Bendamustine interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
TREANDA
# Look-Alike Drug Names
There is limited information about the look-alike drug names.
# Drug Shortage Status
# Price
Benfotiamine
# Overview
Benfotiamine (rINN, or S-benzoylthiamine O-monophosphate) is a synthetic S-acyl derivative of thiamine (vitamin B1).
It has been licensed for use in Germany since 1993 under the trade name Milgamma. (Combinations with pyridoxine or cyanocobalamin are also sold under this name.) It is prescribed there for treating sciatica and other painful nerve conditions.
It is marketed as a medicine and/or dietary supplement, depending on the respective Regulatory Authority.
# Uses
Benfotiamine is primarily marketed as an antioxidant dietary supplement. In a clinical study with six patients, benfotiamine lowered advanced glycation end products (AGE) by 40%.
Benfotiamine may be useful for the treatment of diabetic retinopathy, neuropathy, and nephropathy; however, "Most of the effects attributed to benfotiamine are extrapolated from in vitro and animal studies. Unfortunately apparent evidences from human studies are scarce and especially endpoint studies are missing. Therefore additional clinical studies are mandatory to explore the therapeutic potential of benfotiamine in both diabetic and non-diabetic pathological conditions". It is thought that treatment with benfotiamine leads to increased intracellular thiamine diphosphate levels, a cofactor of transketolase. This enzyme directs the substrates of advanced glycation and lipoxidation end products (AGEs, ALEs) to the pentose phosphate pathway, thus reducing tissue AGEs.
# Pharmacology
After absorption, benfotiamine can be dephosphorylated by cells bearing an ecto-alkaline phosphatase to the lipid-soluble S-benzoylthiamine. Benfotiamine should not be confused with allithiamine, a naturally occurring thiamine disulfide derivative with a distinct pharmacological profile.
Benign tumor
# Overview
A benign tumor is a tumor that lacks all three of the malignant properties of a cancer. Thus, by definition, a benign tumor:
- does not grow in an unlimited, aggressive manner
- does not invade surrounding tissues
- does not metastasize
Common examples of benign tumors include moles and uterine fibroids.
The term "benign" implies a mild and nonprogressive disease, and indeed, many kinds of benign tumor are harmless to the health. However, some neoplasms which are defined as 'benign tumors' because they lack the invasive properties of a cancer, may still produce negative health effects. Examples of this include tumors which produce a "mass effect" (compression of vital organs such as blood vessels), or "functional" tumors of endocrine tissues, which may overproduce certain hormones (examples include thyroid adenomas, adrenocortical adenomas, and pituitary adenomas).
Benign tumors typically are encapsulated, which inhibits their ability to behave in a malignant manner. Nonetheless, many types of benign tumors have the potential to become malignant and some types, such as teratoma, are notorious for this.
# Classification
The term "tumor" literally means "swelling", and the broadest definition of "benign tumor" encompasses all abnormal tissue masses which are not cancers. In practice, most of these entities are neoplasms, meaning that they contain a discrete population of cells which proliferate in an independent manner, usually as the result of acquired genetic abnormalities. Entities which may be referred to as "tumors" but are non-neoplastic include developmental abnormalities, such as hamartomas and ectopic rests (normal tissue in an anatomically abnormal location).
Benign neoplasms are typically composed of cells which bear a strong resemblance to a normal cell type in their organ of origin. These tumors are named for the cell or tissue type from which they originate, followed by the suffix "-oma" (but not -carcinoma, -sarcoma, or -blastoma, which are generally cancers). For example, a lipoma is a common benign tumor of fat cells (lipocytes), and a chondroma is a benign tumor of cartilage-forming cells (chondrocytes). Adenomas are benign tumors of gland-forming cells, and are usually specified further by their cell or organ of origin, as in hepatic adenoma (a benign tumor of hepatocytes, or liver cells). There are a few cancers with 'benign-sounding' names which have been retained for historical reasons, including melanoma (a cancer of pigmented skin cells, or melanocytes) and seminoma (a cancer of male reproductive cells).
In some cases, certain "benign" tumors may later give rise to malignant cancers, which result from additional genetic changes in a subpopulation of the tumor's neoplastic cells. A prominent example of this phenomenon is the tubular adenoma, a common type of colon polyp which is an important precursor to colon cancer. The cells in tubular adenomas, like most tumors which frequently progress to cancer, show certain abnormalities of cell maturation and appearance collectively known as dysplasia. These cellular abnormalities are not seen in benign tumors that rarely or never turn cancerous, but are seen in other pre-cancerous tissue abnormalities which do not form discrete masses, such as pre-cancerous lesions of the uterine cervix. Some authorities prefer to refer to dysplastic tumors as "pre-malignant", and reserve the term "benign" for tumors which rarely or never give rise to cancer.
# Signs and symptoms
Benign tumors are very diverse, and may be asymptomatic or may cause specific symptoms depending on their anatomic location and tissue type. Symptoms or pathological effects of some benign tumors may include:
- Bleeding or occult blood loss causing anemia
- Pressure causing pain or dysfunction
- Cosmetic changes
- Itching
- 'Hormonal syndromes' resulting from hormones secreted by the tumor
- Obstruction, e.g., of the intestines
- Compression of blood vessels or vital organs
# Treatment
Many benign tumors do not need to be treated at all. If a benign tumor is causing symptoms, presents a health risk, or causes a cosmetic concern for the patient, surgery is usually the most effective approach. Most benign tumors do not respond to chemotherapy or radiation therapy, although there are exceptions.
Benoxaprofen
# Overview
Benoxaprofen is a chemical compound with the formula C16H12ClNO3. It is a non-steroidal anti-inflammatory drug and was marketed under the brand name Oraflex in the United States and as Opren in Europe by Eli Lilly and Company. Lilly suspended sales of Oraflex in 1982 after reports from the British government and the U.S. Food and Drug Administration (FDA) of adverse effects and deaths linked to the drug.
# History
Benoxaprofen was discovered by a team of Lilly chemists at its British laboratory. This laboratory was assigned to explore new anti-arthritic compounds in 1966. Lilly applied for patents on benoxaprofen seven years later and also filed for permission from the FDA to start testing the drug on humans. It had to undergo the three-step clinical testing procedure required by the Federal Government.
Lilly began Phase I of the process by testing a handful of healthy human volunteers. These tests had to prove that the drug posed no clear and immediate safety hazards. In Phase II a larger number of human subjects, including some with minor illnesses, were tested. The drug’s effectiveness and safety were the major targets of these tests. Phase III was the largest test and began in 1976. More than 2,000 arthritis patients were administered the drug by more than 100 physicians. The physicians reported the results to the Lilly Company.
When the company formally requested to begin marketing the drug in January 1980 with the FDA, the document consisted of more than 100,000 pages of test results and patients’ records.
Benoxaprofen was first marketed abroad: in 1980 the drug was released for marketing in the UK. It came on the market in May 1982 in the USA.
When benoxaprofen was on the market as Oraflex in the USA, the first signs of trouble emerged for the Lilly Company. The British Medical Journal reported in May 1982 that physicians in the UK believed that the drug was responsible for at least 12 deaths, mainly caused by kidney and liver failure. A petition was filed to have Oraflex removed from the market.
On the fourth of August 1982 the British government temporarily suspended sales of the drug in UK ‘on grounds of safety’. The British Committee on the Safety of Medicines declared, in a telegram to the FDA, that it had received reports of more than 3,500 adverse side-effects among patients who had used Oraflex. There were also 61 deaths, most of which were of elderly people. Almost simultaneously, the FDA said it had reports of 11 deaths in the USA among Oraflex users, most of which were caused by kidney and liver damage.
The Eli Lilly Company suspended sales of benoxaprofen that afternoon.
# Structure and reactivity
The molecular formula of benoxaprofen is C16H12ClNO3 and the systematic (IUPAC) name is 2-[2-(4-chlorophenyl)-1,3-benzoxazol-5-yl]propionic acid. The molecule has a molecular mass of 301.050568 g/mol.
Benoxaprofen is essentially a planar molecule. This is due to the co-planarity of the benzoxazole and phenyl rings, but the molecule also has a non-planar side chain consisting of the propanoic acid moiety which acts as a carrier group. These findings were determined with the use of X-ray crystallographic measurements by the Lilly Research Centre Limited.
Furthermore, benoxaprofen is highly phototoxic. The free radical decarboxylated derivative of the drug is the toxic agent which, in the presence of oxygen, yields singlet oxygen and superoxy anion.
Irradiation of benoxaprofen in aqueous solution causes photochemical decarboxylation via a radical mechanism and results in single-strand breaks in DNA. The same happens with ketoprofen and naproxen, other non-steroidal anti-inflammatory drugs, which are even more active in this respect than benoxaprofen.
# Available forms
Benoxaprofen is a racemic mixture, (RS)-2-(p-chlorophenyl)-α-methyl-5-benzoxazoleacetic acid. The two enantiomers are R(-) and S(+).
Benoxaprofen is metabolized by inversion of the R(-) enantiomer and by glucuronide conjugation. However, it does not readily undergo oxidative metabolism.
It is, however, possible that, with cytochrome P450I as the catalyst, oxygenation of the 4-chlorophenyl ring occurs. With the S(+) enantiomer it is more likely that oxygenation of the aromatic ring of the 2-phenylpropionic acid moiety occurs; here too, cytochrome P450I is the catalyst.
# Toxicokinetics
Benoxaprofen is absorbed well after oral intake of doses ranging from 1 up to 10 mg/kg. Only the unchanged drug is detected in the plasma, mostly bound to plasma proteins. The plasma levels of benoxaprofen in eleven subjects have been accurately predicted, based on the two-compartment open model. The mean half-life of absorption was 0.4 hours, meaning that within about 24 minutes half of the dose is absorbed into the system. The mean half-life of distribution was 4.8 hours, meaning that within about 5 hours half of the dose is distributed throughout the system. The mean half-life of elimination was 37.8 hours, meaning that within about 38 hours half of the dose is excreted from the system.
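The interpretations above follow from the standard first-order relations implied by a two-compartment open model (an illustrative sketch; A, B, α and β are the generic macro-constants of a biexponential fit, not values reported for benoxaprofen):

$$C(t) = A\,e^{-\alpha t} + B\,e^{-\beta t}, \qquad \text{fraction remaining after time } t \approx \left(\frac{1}{2}\right)^{t/t_{1/2}}$$

For example, two elimination half-lives (about 76 hours) leave roughly 25% of the absorbed dose in the body.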
In female rats, after an oral dose of 20 mg/kg, the tissue concentration of benoxaprofen was the highest in liver, kidney, lungs, adrenals and ovaries. The distribution in pregnant females is the same, and the drug can also be found, in lower concentrations, in the foetus. There is a big difference between species in the route of excretion. In man, rhesus monkey and rabbit it is mostly excreted via the urine, while in rat and dog it was excreted via biliary-faecal excretion. In man and dog, the compound was excreted as the ester glucuronide, and in the other species as the unchanged compound. This means no major metabolic transformation of benoxaprofen takes place.
# Toxicodynamics
Unlike other non-steroidal anti-inflammatory compounds, benoxaprofen acts directly on mononuclear cells. It inhibits their chemotactic response by inhibiting the lipoxygenase enzyme.
# Efficacy and side effects
## Efficacy
Benoxaprofen is an analgesic, antipyretic and anti-inflammatory drug.
Benoxaprofen was given to patients with rheumatoid arthritis and osteoarthritis because of its anti-inflammatory effect. Patients with Paget’s disease, psoriatic arthritis, ankylosing spondylitis, a painful shoulder, mixed connective-tissue disease, polymyalgia rheumatica, back pain and Behçet’s disease also received benoxaprofen. A daily dose of 300–600 mg is effective for many patients.
## Adverse effects
There are different types of side effects. Most of them were cutaneous or gastrointestinal. Side effects rarely appear in the central nervous system, and miscellaneous side effects were not often observed. A study showed that most side effects appeared in patients with rheumatoid arthritis.
### Cutaneous side effects
Cutaneous side effects of benoxaprofen are photosensitivity, onycholysis, rash, milia, increased nail growth, pruritus (itch) and hypertrichosis. Photosensitivity leads to burning, itching or redness when patients are exposed to sunlight.
A study shows that benoxaprofen, or other lipoxygenase-inhibiting agents, might be helpful in the treatment of psoriasis because they inhibit the migration of inflammatory cells (leukocytes) into the skin.
### Gastrointestinal side effects
Gastrointestinal side effects of benoxaprofen are bleeding, diarrhoea, abdominal pain, anorexia (symptom), mouth ulcers and taste change. According to one study, the most common gastric side effects are vomiting, heartburn and epigastric pain.
### Side effects in the central nervous system
For a small number of people, taking benoxaprofen might result in depression, lethargy and feeling ill.
### Miscellaneous side effects
Faintness, dizziness, headache, palpitations, epistaxis, blurred vision, urinary urgency and gynaecomastia rarely appear in patients who take benoxaprofen.
Benoxaprofen also causes hepatotoxicity, which led to death of some elderly patients. That was the main reason why the drug was withdrawn from the market.
# Toxicity
After the suspension of sales in 1982, the toxic effects which benoxaprofen might have on humans were looked into more deeply. The fairly planar benoxaprofen molecule seems to be hepato- and phototoxic in the human body.
Benoxaprofen has a rather long half-life in man (t1/2 = 20-30 h), undergoes biliary excretion and enterohepatic circulation, and is also known to have a slow plasma clearance (CLp = 4.5 mL/min). The half-life may be further increased in elderly patients (> 80 years of age) and in patients who already have renal impairment, rising to values as high as 148 hours.
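As a back-of-the-envelope cross-check (an illustrative calculation assuming a simple one-compartment model, which ignores the enterohepatic recirculation noted above, and taking 25 h as the midpoint of the quoted half-life range), these figures imply a fairly small apparent volume of distribution:

$$V_d \approx \frac{CL_p \cdot t_{1/2}}{\ln 2} \approx \frac{0.27\ \mathrm{L/h} \times 25\ \mathrm{h}}{0.693} \approx 10\ \mathrm{L}$$

where 4.5 mL/min has been converted to 0.27 L/h.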
The fatal hepatotoxicity of benoxaprofen can be attributed to the accumulation of the drug after repeated dosing and is also associated with the slow plasma clearance. The hepatic accumulation of the drug is presumably the cause of an increase in the activity of the hepatic cytochrome P450I, which will oxygenate benoxaprofen and produce reactive intermediates. Benoxaprofen is very likely a substrate and weak inducer of cytochrome P450I and its enzyme family. Normally it is not metabolized by oxidative reactions, but with the S(+) enantiomer of benoxaprofen and cytochrome P450I as a catalyst the oxygenation of the 4-chlorophenyl ring and of the aromatic ring of 2-phenylpropionic acid seems to be possible. Therefore the induction of a minor metabolic pathway leads to the formation of toxic metabolites in considerable amounts. The toxic metabolites may bind to vital intracellular macromolecules and may generate reactive oxygen species by redox cycling if a quinone is formed. This could also lead to depletion of protective glutathione, which is responsible for the detoxification of reactive oxygen species.
The observed skin phototoxicity in patients treated with benoxaprofen can be explained with a look at the structure of the compound. There are significant structural similarities between the benzoxazole ring of benoxaprofen and the benzofuran ring of psoralen, a compound known to be phototoxic. The free decarboxylated derivative of the drug can produce singlet oxygen and superoxy anions in the presence of oxygen. Furthermore, possible explanations for the photochemical decarboxylation and oxygen radical formation may be accumulation from repeated dosing, the induction of cytochrome P450I and the emergence of reactive intermediates with covalent binding. The photochemical character of the compound can cause inflammation and severe tissue damage.
In animals peroxisomal proliferation is also observed but does not seem to be significant in man.
# Effects on animals
The effects of benoxaprofen on animals were tested in a series of experiments. Benoxaprofen had considerable anti-inflammatory, analgesic and anti-pyretic activity in those tests. In all six animal species tested (rats, dogs, rhesus monkeys, rabbits, guinea pigs and mice), the drug was well absorbed orally. In three of the six species benoxaprofen was then effectively taken up from the gastrointestinal tract (after oral doses of 1–10 mg/kg). The plasma half-life differed between species: it was less than 13 hours in the dog, rabbit and monkey, but notably longer in mice. Furthermore, there were species differences in the rate and route of excretion of the compound. Whereas benoxaprofen was excreted into the urine by the rabbit and guinea pig, biliary excretion was the route of clearance found in rats and dogs. In all species, only unchanged benoxaprofen was found in the plasma, mostly extensively bound to proteins.
The excretion of the unchanged compound into the bile did occur more slowly in rats. This is interpreted by the authors as evidence that no enterohepatic circulation takes place. Another study in rats showed that the plasma membrane of hepatocytes began to form blebs after administration of benoxaprofen. This is suggested to be due to disturbances in the calcium concentration, possibly the result of an altered cellular redox state that can affect mitochondrial function. In none of the species were significant levels of benoxaprofen metabolism found. Only in dogs could glucuronide be found in the bile, which is a sure sign of metabolism in that species. Also, no differences in distribution of the compound between normal and pregnant rats were found. It was shown in rats that benoxaprofen was distributed into the foetus, but at a notably lower concentration than in the maternal tissue.
Benzaldehyde
Benzaldehyde (C6H5CHO) is a chemical compound consisting of a benzene ring with an aldehyde substituent. It is the simplest representative of the aromatic aldehydes and one of the most industrially used members of this family of compounds. At room temperature it is a colorless liquid with a characteristic and pleasant almond-like odor: benzaldehyde is an important component of the scent of almonds, hence its typical odor. It is the primary component of bitter almond oil extract, and can be extracted from a number of other natural sources in which it occurs, such as apricot, cherry, and laurel leaves, peach seeds and, in a glycoside combined form (amygdalin), in certain nuts and kernels. Currently benzaldehyde is primarily made from toluene by a number of different processes.
# Production
Benzaldehyde can be obtained by many processes. Currently, liquid-phase chlorination or oxidation of toluene are among the most used processes. There are also a number of discontinued routes, such as partial oxidation of benzyl alcohol, alkali treatment of benzal chloride and the reaction between benzene and carbon monoxide.
# Reactions
On oxidation, benzaldehyde is converted into unpleasant smelling benzoic acid. Benzyl alcohol can be formed from benzaldehyde by means of hydrogenation or by treating the compound with alcoholic potassium hydroxide thus undergoing a simultaneous oxidation and reduction which result in the production of potassium benzoate and benzyl alcohol. Reaction of benzaldehyde with anhydrous sodium acetate and acetic anhydride yields cinnamic acid, while alcoholic potassium cyanide can be used to catalyze the condensation of benzaldehyde to benzoin.
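For reference, the overall stoichiometry of the transformations described above can be sketched as follows (illustrative equations with conditions simplified; the cinnamic acid route is the Perkin reaction, in which sodium acetate acts as the base, and the benzoin condensation is catalysed by cyanide):

$$2\ \mathrm{C_6H_5CHO} + \mathrm{O_2} \rightarrow 2\ \mathrm{C_6H_5COOH}$$

$$\mathrm{C_6H_5CHO} + (\mathrm{CH_3CO})_2\mathrm{O} \rightarrow \mathrm{C_6H_5CH{=}CHCOOH} + \mathrm{CH_3COOH}$$

$$2\ \mathrm{C_6H_5CHO} \xrightarrow{\ \mathrm{KCN}\ } \mathrm{C_6H_5CH(OH)COC_6H_5}$$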
Cannizzaro reaction
Benzaldehyde can also undergo disproportionation in concentrated alkali (Cannizzaro's reaction): one molecule of the aldehyde is reduced to the corresponding alcohol and another molecule is simultaneously oxidized to the salt of a carboxylic acid. The speed of this reaction depends on the substituents present in the aromatic ring.
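Written out with potassium hydroxide as the base (an illustrative overall equation), one benzaldehyde molecule ends up as the potassium salt of benzoic acid and the other as benzyl alcohol:

$$2\ \mathrm{C_6H_5CHO} + \mathrm{KOH} \rightarrow \mathrm{C_6H_5COOK} + \mathrm{C_6H_5CH_2OH}$$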
# Uses
While it is commonly employed as a commercial food flavourant (almond flavour) or industrial solvent, benzaldehyde is used chiefly in the synthesis of other organic compounds, ranging from pharmaceuticals to plastic additives. It is also an important intermediate for the processing of perfume and flavouring compounds and in the preparation of certain aniline dyes.
The synthesis of mandelic acid starts from benzaldehyde:
mandelic acid synthesis
First hydrocyanic acid is added to benzaldehyde and the resulting mandelic acid nitrile is subsequently hydrolysed to a racemic mixture of mandelic acid. (The scheme above depicts only one of the two formed enantiomers).
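The two steps can be sketched in formulae as follows (acidic hydrolysis is shown for concreteness; the exact work-up conditions vary):

$$\mathrm{C_6H_5CHO} + \mathrm{HCN} \rightarrow \mathrm{C_6H_5CH(OH)CN}$$

$$\mathrm{C_6H_5CH(OH)CN} + 2\ \mathrm{H_2O} + \mathrm{HCl} \rightarrow \mathrm{C_6H_5CH(OH)COOH} + \mathrm{NH_4Cl}$$

The intermediate C6H5CH(OH)CN is mandelonitrile; cyanide adds to either face of the planar carbonyl group, which is why the product is racemic.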
Glaciologists LaChapelle and Stillman reported in 1966 that benzaldehyde and N-heptaldehyde inhibit the recrystallization of snow and therefore the formation of depth hoar. This treatment may prevent avalanches caused by unstable depth hoar layers. However, the chemicals are not in widespread use because they damage vegetation and contaminate water supplies.
# Biology
amygdalin
Almonds and the kernels of apricots, apples and cherries contain significant amounts of amygdalin. This glycoside breaks up under enzyme catalysis into benzaldehyde, hydrocyanic acid and two molecules of glucose.
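The overall hydrolysis can be summarized as follows (a simplified overall equation; enzymatically the breakdown proceeds stepwise via prunasin and mandelonitrile):

$$\mathrm{C_{20}H_{27}NO_{11}} + 2\ \mathrm{H_2O} \rightarrow 2\ \mathrm{C_6H_{12}O_6} + \mathrm{C_6H_5CHO} + \mathrm{HCN}$$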
Benznidazole
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Benznidazole is a nitroimidazole antimicrobial that is FDA approved for the treatment of Chagas disease (American trypanosomiasis), caused by Trypanosoma cruzi. Common adverse reactions include abdominal pain, rash, decreased weight, headache, nausea, vomiting, neutropenia, urticaria, pruritus, eosinophilia, and decreased appetite.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
There is limited information regarding Benznidazole FDA-Labeled Indications and Dosage (Adult) in the drug label.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding benznidazole Off-Label Guideline-Supported Use and Dosage (Adult) in the drug label.
### Non–Guideline-Supported Use
There is limited information regarding benznidazole Off-Label Non-Guideline-Supported Use and Dosage (Adult) in the drug label.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
- Benznidazole tablets are indicated in pediatric patients 2 to 12 years of age for the treatment of Chagas disease (American trypanosomiasis) caused by Trypanosoma cruzi.
- This indication is approved under accelerated approval based on the number of treated patients who became Immunoglobulin G (IgG) antibody negative against the recombinant antigens of T. cruzi. Continued approval for this indication may be contingent upon verification and description of clinical benefit in confirmatory trials.
- The total daily dose for pediatric patients 2 to 12 years of age is 5 mg/kg to 8 mg/kg orally administered in two divided doses separated by approximately 12 hours, for a duration of 60 days (see TABLE 1).
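As illustrative arithmetic only (the weight-band tablet combinations in TABLE 1 are not reproduced here, and this sketch is not a dosing tool), a minimal Python example of how the labeled 5 mg/kg to 8 mg/kg total daily dose splits into two doses given approximately 12 hours apart:

```python
def daily_dose_range_mg(weight_kg: float) -> tuple[float, float]:
    """Labeled total daily dose range (5-8 mg/kg/day) for patients 2-12 years."""
    return 5.0 * weight_kg, 8.0 * weight_kg

def per_dose_range_mg(weight_kg: float) -> tuple[float, float]:
    """Each of the two daily doses, given approximately 12 hours apart."""
    low, high = daily_dose_range_mg(weight_kg)
    return low / 2, high / 2

# Example: a hypothetical 20 kg patient -> 100-160 mg/day, i.e. 50-80 mg per dose.
# Actual tablet strengths and combinations come from TABLE 1 of the label.
print(daily_dose_range_mg(20))   # (100.0, 160.0)
print(per_dose_range_mg(20))     # (50.0, 80.0)
```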
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding benznidazole Off-Label Guideline-Supported Use and Dosage (Pediatric) in the drug label.
### Non–Guideline-Supported Use
There is limited information regarding benznidazole Off-Label Non-Guideline-Supported Use and Dosage (Pediatric) in the drug label.
# Contraindications
### Hypersensitivity
- Benznidazole tablets are contraindicated in patients with a history of hypersensitivity reaction to benznidazole or other nitroimidazole derivatives. Reactions have included severe skin and soft tissue reactions.
### Disulfiram
- Benznidazole tablets are contraindicated in patients who have taken disulfiram within the last two weeks. Psychotic reactions may occur in patients who are using benznidazole and disulfiram concurrently.
### Alcohol and Products Containing Propylene Glycol
- Consumption of alcoholic beverages or products containing propylene glycol is contraindicated in patients during and for at least 3 days after therapy with Benznidazole tablets. A disulfiram-like reaction (abdominal cramps, nausea, vomiting, headaches, and flushing) may occur due to the interaction between alcohol or propylene glycol and benznidazole.
# Warnings
Genotoxicity
- Genotoxicity of benznidazole has been demonstrated in humans, in vitro in several bacterial species and mammalian cell systems, and in vivo in rodents.
- A study evaluating the cytogenetic effect of benznidazole in pediatric patients ranging from 11 months to 11 years of age (the safety and effectiveness of benznidazole tablets in patients less than 2 years old has not been established) with Chagas disease demonstrated a two-fold increase in chromosomal aberrations. In pediatric patients with Chagas disease who were treated with benznidazole, the median incidence of micronucleated interphase lymphocytes in 20 patients increased 2 fold compared to pre-dose values. In the same study, the mean incidence of chromosomal aberrations in 10 patients also increased 2 fold compared to pre-dose values.
Carcinogenicity
- Carcinogenicity has been observed in mice and rats treated chronically with nitroimidazole agents which are structurally similar to benznidazole. Similar data have not been reported for benznidazole. It is not known whether benznidazole is associated with carcinogenicity in humans.
- Based on findings from animal studies, benznidazole tablets can cause fetal harm when administered to a pregnant woman. In animal reproduction studies, benznidazole administered orally to pregnant rats and rabbits during organogenesis was associated with fetal malformations at doses approximately 1-3 times the maximum recommended human dose (MRHD) in rats (anasarca, anophthalmia, and/or microphthalmia) and doses approximately 0.3-1 times the MRHD in rabbits (ventricular septal defect). In rats, reduced maternal weights and smaller litter sizes occurred at a dose approximately 3 times the MRHD. In rabbits, reduced maternal weight gain and abortions in 2/20 females occurred at a dose approximately equal to the MRHD. Advise pregnant women of the potential risk to a fetus. Pregnancy testing is recommended for females of reproductive potential. Advise females of reproductive potential to use effective contraception during treatment with benznidazole tablets and for 5 days after the last dose.
- Serious skin and subcutaneous disorders including acute generalized exanthematous pustulosis (AGEP), toxic epidermal necrolysis (TEN), erythema multiforme, and eosinophilic drug reaction have been reported with benznidazole. Discontinue treatment at the first evidence of these serious cutaneous reactions.
- Extensive skin reactions, such as rash (maculopapular, pruritic macules, eczema, pustules, erythematous, generalized, and allergic dermatitis, exfoliative dermatitis) have also been reported. Most cases occurred after approximately 10 days of treatment with benznidazole. Most rashes resolved with treatment discontinuation.
- In case of skin reactions presenting with additional symptoms or signs of systemic involvement such as lymphadenopathy, fever and/or purpura, discontinuation of treatment is recommended.
- Treatment with benznidazole tablets can cause paresthesia or symptoms of peripheral neuropathy that may take several months to resolve. Headache and dizziness have been reported. In cases where neurological symptoms occur, immediate discontinuation of treatment is recommended. In most cases, symptoms occur late in the course of treatment.
- There have been reports of hematological manifestations of bone marrow depression, such as neutropenia, thrombocytopenia, anemia and leukopenia, which resolved after treatment discontinuation. Patients with hematological manifestations of bone marrow depression must take benznidazole tablets only under strict medical supervision. Monitor complete blood count. Total and differential leukocyte counts are recommended before, during and after therapy.
# Adverse Reactions
## Clinical Trials Experience
- Because clinical trials are conducted under widely varying conditions, adverse reaction rates observed in the clinical trials of a drug cannot be directly compared to rates in the clinical trials of another drug and may not reflect the rates observed in practice.
- Benznidazole was evaluated in two randomized, double-blind, placebo-controlled trials (Trial 1 and Trial 2) and one uncontrolled trial (Trial 3).
- Trial 1 was conducted in pediatric patients 6 to 12 years of age with chronic indeterminate Chagas disease in Argentina. The chronic indeterminate form includes patients with serologic evidence of T. cruzi infection without symptoms of cardiac or gastrointestinal disease. A total of 106 patients were randomized to receive either benznidazole (5 mg/kg/day twice daily for 60 days; N= 55) or placebo (N=51) and followed for 4 years.
- Trial 2 was conducted in pediatric patients 7 to 12 years of age with chronic indeterminate Chagas disease in Brazil. A total of 129 patients were randomized to receive either benznidazole (7.5 mg/kg/day twice daily for 60 days; N = 64) or placebo (N = 65) and followed for 3 years.
- Trial 3 was an uncontrolled study in pediatric patients 2 to 12 years of age with chronic indeterminate Chagas disease. A total of 37 pediatric patients with Chagas disease were enrolled in this safety and pharmacokinetics study. Patients were treated with benznidazole 5 to 8 mg/kg/day twice daily for 60 days.
- In Trial 1, benznidazole was discontinued due to an adverse reaction in 5/55 (9%) patients. Some patients had more than one adverse reaction resulting in treatment discontinuation. The adverse reactions included abdominal pain, nausea, vomiting, rash, decreased appetite, headache, and transaminases increased.
- The most frequently reported adverse reactions in pediatric patients treated with benznidazole in Trial 1 were abdominal pain (25%), rash (16%), decreased weight (13%), and headache (7%). TABLE 4 lists adverse reactions occurring at a rate of 1% or greater in pediatric patients 6 to 12 years of age with Chagas disease in Trial 1.
- In Trial 2, skin lesions were reported in 7 of 64 (11%) pediatric patients treated with benznidazole and in 2 of 65 patients receiving placebo. Adverse reactions reported in fewer than 5% of benznidazole-treated patients included nausea, anorexia, headache, abdominal pain and arthralgia.
- In a subset of 19 pediatric patients 2 to 6 years of age treated with benznidazole in Trial 3, 6 patients (32%) had the following adverse reactions: rash, leukopenia, urticaria, eosinophilia, decreased appetite, and neutropenia. These adverse reactions were similar to those observed in the overall population of 37 patients.
## Postmarketing Experience
- The following adverse reactions have been identified during the use of other formulations of benznidazole outside of the United States. Because these reactions are reported from a population of uncertain size, it is not always possible to reliably estimate their frequency or establish a causal relationship to drug exposure.
# Drug Interactions
- Disulfiram
- Alcohol and Products Containing Propylene Glycol
- Psychotic reactions have been reported in patients concurrently taking disulfiram and nitroimidazole agents that are structurally related to benznidazole, although not with benznidazole itself. Benznidazole tablets should not be given to patients who have taken disulfiram within the last two weeks.
- Abdominal cramps, nausea, vomiting, headaches, and flushing may occur if alcoholic beverages or products containing propylene glycol are consumed during or following therapy with nitroimidazole agents which are structurally related to benznidazole. Although no similar reactions have been reported with benznidazole, discontinue alcoholic beverage or products containing propylene glycol during and for at least 3 days after therapy with benznidazole tablets.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA):
### Risk Summary
- Based on findings from animal studies, benznidazole tablets may cause fetal harm when administered to a pregnant woman. Published postmarketing reports on benznidazole use during pregnancy are insufficient to inform a drug-associated risk of adverse pregnancy-related outcomes. There are risks to the fetus associated with Chagas Disease. In animal reproduction studies, benznidazole administered orally to pregnant rats and rabbits during organogenesis was associated with fetal malformations at doses approximately 1-3 times the MRHD in rats (anasarca, anophthalmia, and/or microphthalmia) and doses approximately 0.3-1.0 times the MRHD in rabbits (ventricular septal defect). Advise pregnant women of the potential risk to a fetus.
- The estimated background risk of major birth defects and miscarriage for the indicated population is unknown. All pregnancies have a background risk of birth defect, loss, or other adverse outcomes. In the U.S. general population, the estimated background risk of major birth defects and miscarriage in clinically recognized pregnancies is 2-4% and 15-20%, respectively.
### Clinical Considerations
- Disease-associated Maternal and/or Embryo/Fetal Risk
- Published data from case-control and observational studies on chronic Chagas disease during pregnancy are inconsistent in their findings. Some studies showed an increased risk of pregnancy loss, prematurity and neonatal mortality in pregnant women who have chronic Chagas disease while other studies did not demonstrate these findings. Chronic Chagas disease is usually not life-threatening. Since pregnancy findings are inconsistent, treatment of chronic Chagas disease during pregnancy is not recommended due to risk of embryo-fetal toxicity from benznidazole tablets.
- Acute symptomatic Chagas disease is rare in pregnant women; however, symptoms may be serious or life-threatening. There have been reports of pregnant women with life-threatening symptoms associated with acute Chagas disease who were treated with benznidazole. If a pregnant woman presents with acute symptomatic Chagas disease, the risks versus benefits of treatment with benznidazole tablets to the mother and the fetus should be evaluated on a case-by-case basis.
### Data (Animal Data)
- In an embryo-fetal toxicity study in pregnant rats, an oral dose of benznidazole of 150 mg/kg/day during organogenesis (days 6-17 of gestation) was associated with maternal weight loss, reduced fetal weights, and smaller litter sizes. Benznidazole was also associated with a low incidence of fetal malformations including anasarca in one fetus at a dose of 50 mg/kg/day and anasarca and eye abnormalities (anophthalmia and microphthalmia) in 5 fetuses in 5 litters at a high dose of 150 mg/kg/day (approximately equivalent to 1 and 3 times, respectively, the MRHD based on whole body surface area comparisons). The No Observed Adverse Effect Level (NOAEL) dose for maternal toxicity in this study, 50 mg/kg/day, is approximately equal to the MRHD based on body surface area comparisons. The NOAEL dose for fetal toxicity was 15 mg/kg/day which is approximately equivalent to 0.3 times the MRHD based on whole body surface area comparisons.
- In an embryo-fetal study in pregnant rabbits, oral (gavage) administration of benznidazole during organogenesis (days 6 to 19 of gestation) at a high dose of 25 mg/kg/day was associated with maternal toxicity including reduced weight gain and food consumption and abortions in 2/20 females. Benznidazole was also associated with a low incidence of fetal abnormalities including ventricular septal defect in 2 fetuses in 2 litters at a dose of 7.5 mg/kg/day and in 1 fetus at a dose of 25 mg/kg/day (approximately equivalent to 0.3 and 1 times respectively the MRHD based on whole body surface area comparisons). The NOAEL values for maternal and fetal toxicity in this study were 7.5 and 2.5 mg/kg/day respectively, which are respectively equivalent to approximately 0.3 and 0.1 times the MRHD based on body surface area comparisons.
- In a pre- and postnatal study in rats, first generation (F1) pups born to dams administered 15, 50, and 75 mg/kg/day benznidazole demonstrated normal pre-weaning behavior, physical and functional development, neurological findings, and reproductive parameters. However, cesarean section data for the pregnant first generation (F1) females in the high-dose group included significantly higher pre-implantation loss and significantly lower mean values for corpora lutea counts, number of implantations, and number of live embryos. Also, small testes and/or epididymides were observed in 1/20 and 2/20 first generation males in the mid- and high-dose groups respectively, and two of the affected animals failed to mate or induce pregnancy. However, the mean values for mating performance, fertility index, testes weight, testes and epididymides sperm counts, and epididymal sperm motility and progression were not altered in any of the F1 males in benznidazole treatment groups. The number of live second generation (F2) fetuses born to F1 dams was reduced in the high-dose group. The NOAEL value was considered to be 50 mg/kg/day which is approximately equal to the MRHD based on body surface area comparisons.
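For context, the "times the MRHD based on body surface area comparisons" multiples quoted in the studies above follow the usual FDA convention of converting an animal mg/kg dose to a human-equivalent dose (HED) using species conversion factors (Km; approximately 6 for rat, 12 for rabbit, 37 for adult human). These Km values are standard FDA guidance defaults rather than values stated in this label, so the worked number below is only a consistency check:

$$\text{HED (mg/kg)} = \text{animal dose (mg/kg)} \times \frac{K_m^{\text{animal}}}{K_m^{\text{human}}}, \qquad 150 \times \frac{6}{37} \approx 24\ \text{mg/kg}$$

which is roughly 3 times a dose of 8 mg/kg/day (the top of the labeled pediatric range), consistent with the "approximately 3 times the MRHD" figure quoted for the 150 mg/kg/day rat dose.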
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Benznidazole in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Benznidazole during labor and delivery.
### Nursing Mothers
Limited published literature based on breast milk sampling reports that benznidazole is present in human milk at infant doses of 5.5 to 17% of the maternal weight-adjusted dosage and a milk/plasma ratio ranging from 0.3 to 2.79. There are no reports of adverse effects on the breastfed infant and no information on the effects of benznidazole on milk production. Because of the potential for serious adverse reactions, and transmission of Chagas disease, advise patients that breastfeeding is not recommended during treatment with benznidazole tablets.
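For context, the "percentage of the maternal weight-adjusted dosage" quoted above corresponds to the relative infant dose (RID). A minimal sketch of how such a figure is conventionally derived (the 150 mL/kg/day milk-intake figure is a standard assumption, not a value stated in this label):

$$\text{RID (\%)} = \frac{C_{\text{milk}}\ (\text{mg/mL}) \times 150\ \text{mL/kg/day}}{\text{maternal dose (mg/kg/day)}} \times 100$$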
### Pediatric Use
The safety and effectiveness of benznidazole tablets have been established in pediatric patients 2 to 12 years of age for the treatment of Chagas disease. Use in pediatric patients 2 to 12 years of age was established in two adequate and well-controlled trials in pediatric patients 6 to 12 years old, with additional safety and pharmacokinetic data from pediatric patients 2 to 6 years of age. Safety and effectiveness in pediatric patients below the age of 2 years and above the age of 12 years have not been established.
### Geriatic Use
There is no FDA guidance on the use of Benznidazole in geriatric settings.
### Gender
There is no FDA guidance on the use of Benznidazole with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Benznidazole with respect to specific racial populations.
### Renal Impairment
Use of benznidazole tablets has not been evaluated in patients with renal impairment.
### Hepatic Impairment
Use of benznidazole tablets has not been evaluated in patients with hepatic impairment.
### Females of Reproductive Potential and Males
Pregnancy Testing
Contraception (Females)
Infertility (Males)
### Immunocompromised Patients
There is no FDA guidance on the use of Benznidazole in patients who are immunocompromised.
# Administration and Monitoring
### Administration
### Assessment Prior to Initiating Benznidazole Tablets
- Obtain a pregnancy test in females of reproductive potential prior to therapy with Benznidazole tablets.
### Preparation of Slurry as an Alternative Method of Administration
- Preparation of Slurry Using benznidazole tablets 12.5 mg for the Pediatric Population with Body Weight Less Than 30 kg.
- Benznidazole tablets 12.5 mg may be made into a slurry in a specified volume of water for the pediatric population with body weight less than 30 kg (see TABLE 2). The 12.5 mg tablet slurry is prepared by the following method:
- Preparation of Slurry Using benznidazole tablets 100 mg for the Pediatric Population with Body Weight 30 kg or Greater.
- Benznidazole tablets 100 mg may be made into a slurry in a specified volume of water for the pediatric population with body weight of 30 kg or greater (see TABLE 3). The 100 mg tablet slurry is prepared as follows:
### Monitoring
- Improvement of symptoms of Chagas disease, caused by Trypanosoma cruzi, may be indicative of efficacy.
- Complete blood count with differential: Before, during, and after therapy.
- Pregnancy Test: Prior to therapy initiation, in females of reproductive potential.
# IV Compatibility
There is limited information regarding the compatibility of Benznidazole and IV administrations.
# Overdosage
There is limited information regarding Benznidazole overdosage. If you suspect drug poisoning or overdose, please contact the National Poison Help hotline (1-800-222-1222) immediately.
# Pharmacology
## Mechanism of Action
- Benznidazole is a nitroimidazole antimicrobial drug.
## Structure
## Pharmacodynamics
- The pharmacodynamics of benznidazole is unknown.
## Pharmacokinetics
### Absorption
- The absorption of benznidazole from three different 100 mg benznidazole preparations was comparable when administered as a single dose under fasting conditions in adult healthy volunteers (TABLE 6).
Effect of Food
- Benznidazole Cmax and AUC were not affected by the administration of benznidazole 100 mg tablet with a high-fat, high-caloric meal (approximately 1034 total kcal, 67 kcal from fat, 42 kcal from carbohydrates, 59 kcal from protein) compared with fasted conditions in adult healthy volunteers. Serum concentrations of benznidazole reached peak levels at 3.2 hours (1-10 hours) after administration of benznidazole tablets 100 mg tablet after a high-fat, high-caloric meal, and at 2.0 hours (0.5-4 hours) in fasted conditions.
### Distribution
- Protein binding is reported to be approximately 44 to 60 %.
### Elimination
- The elimination half-life of benznidazole is approximately 13 hours in healthy volunteers following a single dose.
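Assuming simple first-order elimination (an assumption; the label does not characterize the elimination model), the fraction of a dose remaining after time $t$ is:

$$f(t) = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}$$

With $t_{1/2} \approx 13$ hours, roughly $0.5^{12/13} \approx 0.53$ of a dose remains at the next dose 12 hours later, and about $0.5^{24/13} \approx 0.28$ remains after 24 hours.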
### Metabolism
- Benznidazole metabolism pathway is unknown.
### Excretion
- Benznidazole and unknown metabolites are reported to be excreted in the urine and feces.
### Specific Populations
- The effect of sex, race, renal impairment, or hepatic impairment on the pharmacokinetics of benznidazole is unknown.
### Drug Interaction Studies
- In vitro studies showed that benznidazole is a P-gp substrate and does not notably induce cytochrome P450 enzymes 1A2, 2B6, and 3A4 at concentrations up to 100 µM.
## Nonclinical Toxicology
### Carcinogenesis, Mutagenesis, Impairment of Fertility
Carcinogenicity
- Long-term carcinogenicity studies for benznidazole have not been performed.
- Nitroimidazoles, which have chemical structures similar to benznidazole, have been reported to be carcinogenic in mice and rats.
Genetic Toxicity
- Genotoxicity of benznidazole has been demonstrated in vitro in several bacterial species and mammalian cell systems and in vivo in mammals.
- Benznidazole was mutagenic in several strains of S. typhimurium (TA 100, 102, 1535, 1537, 1538, 97, 98, 99, 53, and UTH8414), E. coli, and K. pneumoniae.
- Benznidazole was genotoxic in several in vitro mammalian cell assays including a chromosome aberration assay in human lymphocytes and in sister chromatid exchange assays in human lymphocytes and in Human Hep G2 cells.
- In vivo, benznidazole was shown to be positive for genotoxicity in a mouse bone marrow micronucleus assay, in mouse and human red blood cell micronucleus assays, in a mouse abnormal sperm head assay and in a human peripheral blood lymphocyte assay. However in other micronucleus studies in mice and rats, oral doses of benznidazole did not cause a significant increase in the frequency of chromosomal aberrations in bone marrow cells or micronuclei in peripheral blood cells.
Impairment of Fertility
- In a 6-month, chronic repeated-dosing study with Wistar rats, benznidazole was shown to produce dose-dependent testicular and epididymal atrophy at a dose of 30 mg/kg/day (approximately equivalent to 0.6 times the MRHD based on whole body surface area comparisons). Aspermia was also evident in affected rats, but fertility was not assessed in this study. The NOAEL value in this study was considered to be 10 mg/kg/day (5 mg/kg twice daily) in males, which is approximately 0.2 times the MRHD based on body surface area comparison. In other literature reports, benznidazole has been shown to cause testicular atrophy and inhibit spermatogenesis in pubertal and adult rats and mice [5-7].
- In a female fertility study, oral (gavage) administration of benznidazole to female Wistar rats twice daily for a 2-week pre-mating period, during mating, and through day 7 of gestation was associated with transient lower body weight gain and food consumption. There was no benznidazole-related effect on mating performance or fertility and no adverse macroscopic or reproductive organ weight changes. However, benznidazole was associated with higher post-implantation loss and lower live litter size at a dose of 150 mg/kg/day (equivalent to approximately 3 times the MRHD based on whole body surface area comparisons). The NOAEL value for this study was considered to be 50 mg/kg/day, which is approximately equivalent to the MRHD based on whole body surface area comparison.
### Animal Toxicology and/or Pharmacology
- Single oral dose toxicity studies in rats have established that benznidazole causes ultrastructural changes in the adrenal cortex, colon, esophagus, ovaries, and testis [5, 8-11]. In these tissues, the changes were associated with the presence of nitroreductase activity, the production of reactive metabolites, and/or covalent binding of metabolites.
- Neurotoxicity, including brain axonal degeneration and Purkinje cell degeneration, was observed with repeated oral dosing in dogs, without adverse changes in peripheral nerves [12-14]. Neurological signs included apathy, hypertonia, hyperreflexia, ataxia, loss of balance, oscillatory movements of the trunk and head, strong contractions of the back and leg muscles, opisthotonus, and nystagmus. Neurotoxicity was not observed in other test species, including mouse, rat, guinea pig, and rabbit.
# Clinical Studies
- The safety and effectiveness of benznidazole for the treatment of Chagas disease in patients 6 to 12 years of age was established in two adequate and well-controlled trials (Trial 1 and Trial 2) as described below.
- Trial 1 was a randomized, double-blind, placebo-controlled trial in children 6 to 12 years of age with chronic indeterminate Chagas disease conducted in Argentina. The chronic indeterminate form of Chagas disease includes patients with serologic evidence of T. cruzi infection without symptoms of cardiac or gastrointestinal disease. A total of 106 patients were randomized to receive either benznidazole (5 mg/kg/day for 60 days) or placebo and followed for 4 years. Patients with at least two positive conventional serologic tests for antibodies to T. cruzi were included in the study. The conventional serologic tests used include indirect hemagglutination assay (IHA), immunofluorescence antibody assay (IFA), and/or enzyme linked immunosorbent assay (ELISA) and were based on the detection of antibodies against T. cruzi parasites.
- Trial 2 was a randomized, double-blind, placebo-controlled trial in pediatric patients 7 to 12 years of age with chronic indeterminate Chagas disease conducted in Brazil. A total of 129 patients were randomized to receive either benznidazole (7.5 mg/kg/day for 60 days) or placebo and followed for 3 years. Patients with three positive conventional serologic tests for antibodies to T. cruzi were included in the study. The conventional serologic tests include IHA, IFA, and/or ELISA and were based on the detection of antibodies against T. cruzi parasites.
- Both trials measured antibodies by conventional and nonconventional assays. The nonconventional assays include F29-ELISA and AT-chemiluminescence-ELISA, which are based on detection of anti-T. cruzi IgG antibodies against the recombinant antigens F29 and AT from the flagella of T. cruzi parasites. Benznidazole treatment resulted in a significantly higher percentage of seronegative patients by a nonconventional assay. Results at the end of follow-up are reported in the following table.
- In Trial 1 using conventional ELISA, 4 of 53 (7.5%) benznidazole subjects and 2 of 50 (4.0%) placebo subjects seroconverted to negative by the end of follow-up (difference 3.5, 95% CI (-7.0, 14.9)). In Trial 2 using conventional ELISA, 4 of 64 (6.3%) of benznidazole subjects and 0 of 65 placebo subjects seroconverted to negative by the end of follow-up (difference 6.3, 95% CI (0.3, 15.2)).
# How Supplied
- Benznidazole tablets (12.5 mg or 100 mg) are supplied as follows:
- 100 mg white tablets, round and functionally scored twice as a cross on both sides. Each tablet is about 10 mm in diameter debossed with “E” on one side of each quarter portion.
- 12.5 mg white tablets, round and unscored. Each tablet is about 5 mm in diameter debossed with “E” on one side.
- Benznidazole tablets 100 mg are available in bottles of 100 tablets (NDC 0642-7464-10).
- Benznidazole tablets 12.5 mg are available in bottles of 100 tablets (NDC 0642-7463-12).
## Storage
- Store at controlled room temperature 20°C to 25°C (68°F to 77°F); excursions permitted to 15°C to 30°C (59°F to 86°F). Keep bottle tightly closed and protect from moisture.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
### Embryo-Fetal Toxicity
- Advise pregnant women and females of reproductive potential that exposure to benznidazole tablets during pregnancy can result in fetal harm.
- Advise females to inform their healthcare provider of a known or suspected pregnancy.
- Advise females of reproductive potential to use effective contraception while taking benznidazole tablets and for 5 days after the last dose.
### Lactation
- Advise women not to breastfeed during treatment with benznidazole tablets.
### Infertility
- Advise males of reproductive potential that benznidazole tablets may impair fertility.
### Important Administration Instructions
- Advise patients and parents/caregivers of pediatric patients taking Benznidazole tablets that:
- Benznidazole tablets 100 mg are functionally scored tablets which can be split into one-half (50 mg) or one-quarter (25 mg) at the scored lines to provide doses less than 100 mg.
- Benznidazole tablets 12.5 mg and 100 mg (whole or split) can be made into a slurry in a specified volume of water for the pediatric population.
### Hypersensitivity Skin Reactions
- Advise patients that serious skin reactions can occur with benznidazole tablets. In case of skin reactions, presenting with additional symptoms of systemic involvement such as lymphadenopathy, fever and/or purpura, discontinuation of treatment is necessary.
### Central and Peripheral Nervous System Effects
- Advise patients that treatment can potentially cause paresthesia or symptoms of peripheral neuropathy. In cases where neurological symptoms occur, immediate discontinuation of treatment is recommended.
### Hematological Manifestations of Bone Marrow Depression
- Advise patients that there have been hematological manifestations of bone marrow depression, such as anemia and leukopenia, which are reversible and normalize after treatment discontinuation.
### Interaction with Alcohol
- Advise patients to discontinue consumption of alcoholic beverages or products containing propylene glycol while taking benznidazole tablets and for at least three days afterward because abdominal cramps, nausea, vomiting, headaches, and flushing may occur.
# Precautions with Alcohol
Alcohol-Benznidazole interaction has not been established. Talk to your doctor regarding the effects of taking alcohol with this medication.
# Brand Names
There is limited information regarding Benznidazole Brand Names in the drug label.
# Look-Alike Drug Names
There is limited information regarding Benznidazole Look-Alike Drug Names in the drug label.
# Drug Shortage Status
Drug Shortage
# Price
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]; Associate Editor(s)-in-Chief: Yashasvi Aryaputra[2], Anmol Pitliya, M.B.B.S. M.D.[3]
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Benznidazole is a nitroimidazole antimicrobial that is FDA approved for the treatment of Chagas disease (American trypanosomiasis), caused by Trypanosoma cruzi. Common adverse reactions include abdominal pain, rash, decreased weight, headache, nausea, vomiting, neutropenia, urticaria, pruritis, eosinophilia, decreased appetite.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
There is limited information regarding Benznidazole FDA-Labeled Indications and Dosage (Adult) in the drug label.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding benznidazole Off-Label Guideline-Supported Use and Dosage (Adult) in the drug label.
### Non–Guideline-Supported Use
There is limited information regarding benznidazole Off-Label Non-Guideline-Supported Use and Dosage (Adult) in the drug label.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
- Benznidazole tablets are indicated in pediatric patients 2 to 12 years of age for the treatment of Chagas disease (American trypanosomiasis) caused by Trypanosoma cruzi.
- This indication is approved under accelerated approval based on the number of treated patients who became Immunoglobulin G (IgG) antibody negative against the recombinant antigens of T. cruzi. Continued approval for this indication may be contingent upon verification and description of clinical benefit in confirmatory trials.
- The total daily dose for pediatric patients 2 to 12 years of age is 5 mg/kg to 8 mg/kg orally administered in two divided doses separated by approximately 12 hours, for a duration of 60 days (see TABLE 1).
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding benznidazole Off-Label Guideline-Supported Use and Dosage (Pediatric) in the drug label.
### Non–Guideline-Supported Use
There is limited information regarding benznidazole Off-Label Non-Guideline-Supported Use and Dosage (Pediatric) in the drug label.
# Contraindications
### Hypersensitivity
- Benznidazole tablets are contraindicated in patients with a history of hypersensitivity reaction to benznidazole or other nitroimidazole derivatives. Reactions have included severe skin and soft tissue reactions.
### Disulfiram
- Benznidazole tablets are contraindicated in patients who have taken disulfiram within the last two weeks. Psychotic reactions may occur in patients who are using benznidazole and disulfiram concurrently.
### Alcohol and Products Containing Propylene Glycol
- Consumption of alcoholic beverages or products containing propylene glycol is contraindicated in patients during and for at least 3 days after therapy with Benznidazole tablets. A disulfiram-like reaction (abdominal cramps, nausea, vomiting, headaches, and flushing) may occur due to the interaction between alcohol or propylene glycol and benznidazole.
# Warnings
Genotoxicity
- Genotoxicity of benznidazole has been demonstrated in humans, in vitro in several bacterial species and mammalian cell systems, and in vivo in rodents.
- A study evaluating the cytogenetic effect of benznidazole in pediatric patients ranging from 11 months to 11 years of age (the safety and effectiveness of benznidazole tablets in patients less than 2 years old has not been established) with Chagas disease demonstrated a two-fold increase in chromosomal aberrations. In pediatric patients with Chagas disease who were treated with benznidazole, the median incidence of micronucleated interphase lymphocytes in 20 patients increased 2 fold compared to pre-dose values. In the same study, the mean incidence of chromosomal aberrations in 10 patients also increased 2 fold compared to pre-dose values.
Carcinogenicity
- Carcinogenicity has been observed in mice and rats treated chronically with nitroimidazole agents which are structurally similar to benznidazole. Similar data have not been reported for benznidazole. It is not known whether benznidazole is associated with carcinogenicity in humans.
- Based on findings from animal studies, benznidazole tablets can cause fetal harm when administered to a pregnant woman. In animal reproduction studies, benznidazole administered orally to pregnant rats and rabbits during organogenesis was associated with fetal malformations at doses approximately 1-3 times the maximum recommended human dose (MRHD) in rats (anasarca, anophthalmia, and/or microphthalmia) and doses approximately 0.3-1 times the MRHD in rabbits (ventricular septal defect). In rats, reduced maternal weights and smaller litter sizes occurred at a dose approximately 3 times the MRHD. In rabbits, reduced maternal weight gain, and abortions in 2/20 females occurred at a dose approximately equal to the MHRD. Advise pregnant women of the potential risk to a fetus. Pregnancy testing is recommended for females of reproductive potential. Advise females of reproductive potential to use effective contraception during treatment with benznidazole tablets and for 5 days after the last dose.
- Serious skin and subcutaneous disorders including acute generalized exanthematous pustulosis (AGEP), toxic epidermal necrolysis (TEN), erythema multiforme, and eosinophilic drug reaction have been reported with benznidazole. Discontinue treatment at the first evidence of these serious cutaneous reactions.
- Extensive skin reactions, such as rash (maculopapular, pruritic macules, eczema, pustules, erythematous, generalized, and allergic dermatitis, exfoliative dermatitis) have also been reported. Most cases occurred after approximately 10 days of treatment with benznidazole. Most rashes resolved with treatment discontinuation.
- In case of skin reactions presenting with additional symptoms or signs of systemic involvement such as lymphadenopathy, fever and/or purpura, discontinuation of treatment is recommended.
- Treatment with benznidazole tablets can cause paresthesia or symptoms of peripheral neuropathy that may take several months to resolve. Headache and dizziness have been reported. In cases where neurological symptoms occur, immediate discontinuation of treatment is recommended. In most cases, symptoms occur late in the course of treatment.
- There have been reports of hematological manifestations of bone marrow depression, such as neutropenia, thrombocytopenia, anemia and leukopenia, which resolved after treatment discontinuation. Patients with hematological manifestations of bone marrow depression must take benznidazole tablets only under strict medical supervision. Monitor complete blood count. Total and differential leukocyte counts are recommended before, during and after therapy.
# Adverse Reactions
## Clinical Trials Experience
- Because clinical trials are conducted under widely varying conditions, adverse reaction rates observed in the clinical trials of a drug cannot be directly compared to rates in the clinical trials of another drug and may not reflect the rates observed in practice.
- Benznidazole was evaluated in two randomized, double-blind, placebo-controlled trials (Trial 11 and Trial 22) and one uncontrolled trial (Trial 33).
- Trial 1 was conducted in pediatric patients 6 to 12 years of age with chronic indeterminate Chagas disease in Argentina. The chronic indeterminate form includes patients with serologic evidence of T. cruzi infection without symptoms of cardiac or gastrointestinal disease. A total of 106 patients were randomized to receive either benznidazole (5 mg/kg/day twice daily for 60 days; N= 55) or placebo (N=51) and followed for 4 years.
- Trial 2 was conducted in pediatric patients 7 to 12 years of age with chronic indeterminate Chagas disease in Brazil. A total of 129 patients were randomized to receive either benznidazole (7.5 mg/kg/day twice daily for 60 days; N = 64) or placebo (N = 65) and followed for 3 years.
- Trial 3 was an uncontrolled study in pediatric patients 2 to 12 years of age with chronic indeterminate Chagas disease. A total of 37 pediatric patients with Chagas disease were enrolled in this safety and pharmacokinetics study. Patients were treated with benznidazole 5 to 8 mg/kg/day twice daily for 60 days.
- In Trial 1, benznidazole was discontinued due to an adverse reaction in 5/55 (9%) patients. Some patients had more than one adverse reaction resulting in treatment discontinuation. The adverse reactions included abdominal pain, nausea, vomiting, rash, decreased appetite, headache, and transaminases increased.
- The most frequently reported adverse reactions in pediatric patients treated with benznidazole in Trial 1 were abdominal pain (25%), rash (16%), decreased weight (13%), and headache (7%). TABLE 4 lists adverse reactions occurring at a rate of 1% or greater in pediatric patients with Chagas disease aged 6 to 12 years of age in Trial 1.
- In Trial 2, skin lesions were reported in 7 of 64 (11%) pediatric patients treated with benznidazole and in 2 of 65 patients receiving placebo. Adverse reactions reported in fewer than 5% of benznidazole-treated patients included nausea, anorexia, headache, abdominal pain and arthralgia.
- In a subset of 19 pediatric patients 2 to 6 years of age treated with benznidazole in Trial 3, 6 patients (32%) had the following adverse reactions: rash, leukopenia, urticaria, eosinophilia, decreased appetite, and neutropenia. These adverse reactions were similar to those observed in the overall population of 37 patients.
## Postmarketing Experience
- The following adverse reactions have been identified during the use of other formulations of benznidazole outside of the United States. Because these reactions are reported from a population of uncertain size, it is not always possible to reliably estimate their frequency or establish a causal relationship to drug exposure.
# Drug Interactions
- Disulfiram
- Alcohol and Products Containing Propylene Glycol
- Psychotic reactions have been reported in patients who are concurrently taking disulfiram and nitroimidazole agents (structurally related to benznidazole, but not with benznidazole). Benznidazole tablets should not be given to patients who have taken disulfiram within the last two weeks.
- Abdominal cramps, nausea, vomiting, headaches, and flushing may occur if alcoholic beverages or products containing propylene glycol are consumed during or following therapy with nitroimidazole agents which are structurally related to benznidazole. Although no similar reactions have been reported with benznidazole, discontinue alcoholic beverage or products containing propylene glycol during and for at least 3 days after therapy with benznidazole tablets.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA):
### Risk Summary
- Based on findings from animal studies, benznidazole tablets may cause fetal harm when administered to a pregnant woman. Published postmarketing reports on benznidazole use during pregnancy are insufficient to inform a drug-associated risk of adverse pregnancy-related outcomes. There are risks to the fetus associated with Chagas Disease. In animal reproduction studies, benznidazole administered orally to pregnant rats and rabbits during organogenesis was associated with fetal malformations at doses approximately 1-3 times the MRHD in rats (anasarca, anophthalmia, and/or microphthalmia) and doses approximately 0.3-1.0 times the MRHD in rabbits (ventricular septal defect). Advise pregnant women of the potential risk to a fetus.
- The estimated background risk of major birth defects and miscarriage for the indicated population is unknown. All pregnancies have a background risk of birth defect, loss, or other adverse outcomes. In the U.S. general population, the estimated background risk of major birth defects and miscarriage in clinically recognized pregnancies is 2-4% and 15-20%, respectively.
### Clinical Considerations
- Disease-associated Maternal and/or Embryo/Fetal Risk
- Published data from case-control and observational studies on chronic Chagas disease during pregnancy are inconsistent in their findings. Some studies showed an increased risk of pregnancy loss, prematurity and neonatal mortality in pregnant women who have chronic Chagas disease while other studies did not demonstrate these findings. Chronic Chagas disease is usually not life-threatening. Since pregnancy findings are inconsistent, treatment of chronic Chagas disease during pregnancy is not recommended due to risk of embryo-fetal toxicity from benznidazole tablets.
- Acute symptomatic Chagas disease is rare in pregnant women; however, symptoms may be serious or life-threatening. There have been reports of pregnant women with life-threatening symptoms associated with acute Chagas disease who were treated with benznidazole. If a pregnant women presents with acute symptomatic Chagas disease, the risks versus benefits of treatment with benznidazole tablets to the mother and the fetus should be evaluated on a case-by-case basis.
### Data (Animal Data)
- In an embryo-fetal toxicity study in pregnant rats, an oral dose of benznidazole of 150 mg/kg/day during organogenesis (days 6-17 of gestation) was associated with maternal weight loss, reduced fetal weights, and smaller litter sizes. Benznidazole was also associated with a low incidence of fetal malformations including anasarca in one fetus at a dose of 50 mg/kg/day and anasarca and eye abnormalities (anophthalmia and microphthalmia) in 5 fetuses in 5 litters at a high dose of 150 mg/kg/day (approximately equivalent to 1 and 3 times, respectively, the MRHD based on whole body surface area comparisons). The No Observed Adverse Effect Level (NOAEL) dose for maternal toxicity in this study, 50 mg/kg/day, is approximately equal to the MRHD based on body surface area comparisons. The NOAEL dose for fetal toxicity was 15 mg/kg/day which is approximately equivalent to 0.3 times the MRHD based on whole body surface area comparisons.
- In an embryo-fetal study in pregnant rabbits, oral (gavage) administration of benznidazole during organogenesis (days 6 to 19 of gestation) at a high dose of 25 mg/kg/day was associated with maternal toxicity including reduced weight gain and food consumption and abortions in 2/20 females. Benznidazole was also associated with a low incidence of fetal abnormalities including ventricular septal defect in 2 fetuses in 2 litters at a dose of 7.5 mg/kg/day and in 1 fetus at a dose of 25 mg/kg/day (approximately equivalent to 0.3 and 1 times respectively the MRHD based on whole body surface area comparisons). The NOAEL values for maternal and fetal toxicity in this study were 7.5 and 2.5 mg/kg/day respectively, which are respectively equivalent to approximately 0.3 and 0.1 times the MRHD based on body surface area comparisons.
- In a pre- postnatal study in rats, first generation (F1) pups born to dams administered 15, 50, and 75 mg/kg/day benznidazole demonstrated normal pre-weaning behavior, physical and functional development, neurological findings, and reproductive parameters. However, cesarean section data for the pregnant first generation (F1) females in the high-dose group included significantly higher pre-implantation loss and significantly lower mean values for corpora lutea counts, number of implantations, and number of live embryos. Also small testes and/or epididymides were observed in 1/20 and 2/20 first generation males in the mid- and high-dose groups respectively, and two of the affected animals failed to mate or induce pregnancy. However, the mean values for mating performance, fertility index, testes weight, testes and epididymides sperm counts, and epididymal sperm motility and progression were not altered in any of the F1 males in benznidazole treatment groups. The number of live second generation (F2) fetuses born to F1 dams was reduced in the high-dose group. The NOAEL value was considered to be 50 mg/kg/day which is approximately equal to the MRHD based on body surface area comparisons.
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Benznidazole in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Benznidazole during labor and delivery.
### Nursing Mothers
Limited published literature based on breast milk sampling reports that benznidazole is present in human milk at infant doses of 5.5 to 17% of the maternal weight-adjusted dosage and a milk/plasma ratio ranging between 0.3-2.79. There are no reports of adverse effects on the breastfed infant and no information on the effects of benznidazole on milk production. Because of the potential for serious adverse reactions, and transmission of Chagas disease, advise patients that breastfeeding is not recommended during treatment with benznidazole tablets.
### Pediatric Use
The safety and effectiveness of benznidazole tablets have been established in pediatric patients 2 to 12 years of age for the treatment of Chagas disease. Use in pediatric patients 2 to 12 years of age was established in two adequate and well-controlled trials in pediatric patients 6 to 12 years old with additional safety and pharmacokinetic data from pediatric patients 2 to 6 years of age.Safety and effectiveness in pediatric patients below the age of 2 years and above the age of 12 years have not been established.
### Geriatic Use
There is no FDA guidance on the use of Benznidazole in geriatric settings.
### Gender
There is no FDA guidance on the use of Benznidazole with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Benznidazole with respect to specific racial populations.
### Renal Impairment
Use of benznidazole tablets has not been evaluated in patients with renal impairment.
### Hepatic Impairment
Use of benznidazole tablets has not been evaluated in patients with hepatic impairment.
### Females of Reproductive Potential and Males
Pregnancy Testing
Contraception (Females)
Infertility (Males)
### Immunocompromised Patients
There is no FDA guidance one the use of Benznidazole in patients who are immunocompromised.
# Administration and Monitoring
### Administration
### Assessment Prior to Initiating Benznidazole Tablets
- Obtain a pregnancy test in females of reproductive potential prior to therapy with Benznidzole tablets.
### Preparation of Slurry as an Alternative Method of Administration
- Preparation of Slurry Using benznidazole tablets 12.5 mg for the Pediatric Population with Body Weight Less Than 30 kg.
- Benznidazole tablets 12.5 mg may be made into slurry in a specified volume of water for the pediatric population with body weight less than 30 kg (see TABLE 2). The 12.5 mg tablet slurry is prepared by the following method:
- Preparation of Slurry Using benznidazole tablets 100 mg for the Pediatric Population with Body Weight (30 kg or greater).
- Benznidazole tablets 100 mg may be made into a slurry in a specified volume of water for the pediatric population with body weight of 30 kg or greater (see TABLE 3). The 100 mg tablet slurry is prepared as follows:
### Monitoring
- Improvement of symptoms of Chagas disease, caused by Trypanosoma cruzi, may be indicative of efficacy.
- Complete blood count with differential: Before, during, and after therapy.
- Pregnancy Test: Prior to therapy initiation, in females of reproductive potential.
# IV Compatibility
There is limited information regarding the compatibility of Benznidazole and IV administrations.
# Overdosage
There is limited information regarding Benznidazole overdosage. If you suspect drug poisoning or overdose, please contact the National Poison Help hotline (1-800-222-1222) immediately.
# Pharmacology
## Mechanism of Action
- Benznidazole is a nitroimidazole antimicrobial drug.
## Structure
## Pharmacodynamics
- The pharmacodynamics of benznidazole is unknown.
## Pharmacokinetics
### Absorption
- The absorption of benznidazole from three different 100 mg benznidazole preparations was comparable when administered as a single dose under fasting conditions in adult healthy volunteers (TABLE 6).
Effect of Food
- Benznidazole Cmax and AUC were not affected by the administration of benznidazole 100 mg tablet with a high-fat, high-caloric meal (approximately 1034 total kcal, 67 kcal from fat, 42 kcal from carbohydrates, 59 kcal from protein) compared with fasted conditions in adult healthy volunteers. Serum concentrations of benznidazole reached peak levels at 3.2 hours (1-10 hours) after administration of benznidazole tablets 100 mg tablet after a high-fat, high-caloric meal, and at 2.0 hours (0.5-4 hours) in fasted conditions.
### Distribution
- Protein binding is reported to be approximately 44 to 60 %.
### Elimination
- The elimination half-life on benznidazole is approximately 13 hours in healthy volunteers following single dose.
### Metabolism
- Benznidazole metabolism pathway is unknown.
### Excretion
- Benznidazole and unknown metabolites are reported to be excreted in the urine and feces.
### Specific Populations
- The effect of sex, race, renal impairment, or hepatic impairment on the pharmacokinetics of benznidazole is unknown.
### Drug Interaction Studies
- In vitro studies showed that benznidazole is a P-gp substrate and does not notably induce Cytochrome P450 enzymes 1A2, 2B6, and 3A4 at concentrations up to 100 uM.
## Nonclinical Toxicology
### Carcinogenesis, Mutagenesis, Impairment of Fertility
Carcinogenicity
- Long-term carcinogenicity studies for benznidazole have not been performed.
- Nitroimidazoles, which have similar chemical structures to benznidazole have been reported to be carcinogenic in mice and rats.
Genetic Toxicity
- Genotoxicity of benznidazole has been demonstrated in vitro in several bacterial species and mammalian cell systems and in vivo in mammals.
- Benznidazole was mutagenic in several strains of S. typhimurium (TA 100, 102 1535, 1537, 1538, 97, 98 99 53 and UTH8414), E.coli, and K. pneumoniae.
- Benznidazole was genotoxic in several in vitro mammalian cell assays including a chromosome aberration assay in human lymphocytes and in sister chromatid exchange assays in human lymphocytes and in Human Hep G2 cells.
- In vivo, benznidazole was shown to be positive for genotoxicity in a mouse bone marrow micronucleus assay, in mouse and human red blood cell micronucleus assays, in a mouse abnormal sperm head assay and in a human peripheral blood lymphocyte assay. However in other micronucleus studies in mice and rats, oral doses of benznidazole did not cause a significant increase in the frequency of chromosomal aberrations in bone marrow cells or micronuclei in peripheral blood cells.
Impairment of Fertility
- In a 6-month, chronic repeated-dosing study with Wistar rats, benznidazole was shown to produce dose-dependent testicular and epididymal atrophy at a dose of 30 mg/kg/day (approximately equivalent to 0.6 times the MRHD based on whole body surface area comparisons). Aspermia was also evident in affected rats, but fertility was not assessed in this study. The NOAEL value in this study was considered to be 10 mg/kg/day (5 mg/kg twice daily) in males which is approximately 0.2-times the MRHD based on body surface area comparison. In other literature reports, benznidazole has been shown to cause testicular atrophy and inhibit spermatogenesis in pubertal and adult rats and mice5-7.
- In a female fertility study, oral (gavage) administration of benznidazole to female Wistar rats twice daily for a 2-week pre-mating period, during mating, and through day 7 of gestation was associated with transient lower body weight gain and food consumption. There was no benznidazole-related effect on mating performance or fertility and no adverse macroscopic or reproductive organ weight changes. However, benznidazole reproductive performance was associated with a higher post-implantation loss with lower live litter size at a dose of 150 mg/kg/day (equivalent to approximately 3 times the MRHD based on whole body surface area comparisons). The NOAEL value for this study was consider to be 50 mg/kg/day which is approximately equivalent to the MRHD based on whole body surface area comparison.
### Animal Toxicology and/or Pharmacology
- Single oral dose toxicity studies in rats have established that benznidazole causes ultrastructural changes in the adrenal cortex, colon, esophagus, ovaries, and testis 5, 8-11. In these tissues, these changes were associated with the presences of nitro reductase activity, the production of reactive metabolites, and or covalent binding of metabolites.
- Neurotoxicity including brain axonal degeneration and Purkinje cell degeneration was observed with repeated-oral dosing in dogs without adverse changes in peripheral nerves12-14. Neurological signs included: apathy, hypertonia, hyperreflexia, ataxia, loss of balance, oscillatory movements of the trunk and head, strong contractions of the back and leg muscles, opisthotonus and nystagmus. Neurotoxicity was not observed in other test species, including mouse, rat, guinea pig, and rabbit.
# Clinical Studies
- The safety and effectiveness of benznidazole for the treatment of Chagas disease in patients 6 to 12 years of age was established in two adequate and well-controlled trials (Trial 1 and Trial 2) as described below.
- Trial 1 was a randomized, double-blind, placebo-controlled trial in children 6 to 12 years of age with chronic indeterminate Chagas disease conducted in Argentina. The chronic indeterminate form of Chagas disease includes patients with serologic evidence of T. cruzi infection without symptoms of cardiac or gastrointestinal disease. A total of 106 patients were randomized to receive either benznidazole (5 mg/kg/day for 60 days) or placebo and followed for 4 years. Patients with at least two positive conventional serologic tests for antibodies to T. cruzi were included in the study. The conventional serologic tests used include indirect hemagglutination assay (IHA), immunofluorescence antibody assay (IFA), and/or enzyme linked immunosorbent assay (ELISA) and were based on the detection of antibodies against T. cruzi parasites.
- Trial 2 was a randomized, double-blind, placebo-controlled trial in pediatric patients 7 to 12 years of age with chronic indeterminate Chagas disease conducted in Brazil. A total of 129 patients were randomized to receive either benznidazole (7.5 mg/kg/day for 60 days) or placebo and followed for 3 years. Patients with three positive conventional serologic tests for antibodies to T. cruzi were included in the study. The conventional serologic tests include IHA, IFA, and/or ELISA and were based on the detection of antibodies against T. cruzi parasites.
- Both trials measured antibodies by conventional and nonconventional assays. The nonconventional assays include F29-ELISA and AT-chemiluminescence-ELISA, which are based on detection of anti-T. cruzi IgG antibodies against the recombinant antigens F29 and AT from the flagella of T. cruzi parasites. At the end of follow-up, benznidazole treatment resulted in a significantly higher percentage of patients who were seronegative by a nonconventional assay.
- In Trial 1 using conventional ELISA, 4 of 53 (7.5%) benznidazole subjects and 2 of 50 (4.0%) placebo subjects seroconverted to negative by the end of follow-up (difference 3.5, 95% CI (-7.0, 14.9)). In Trial 2 using conventional ELISA, 4 of 64 (6.3%) of benznidazole subjects and 0 of 65 placebo subjects seroconverted to negative by the end of follow-up (difference 6.3, 95% CI (0.3, 15.2)).
# How Supplied
- Benznidazole tablets (12.5 mg or 100 mg) are supplied as follows:
- 100 mg white tablets, round and functionally scored twice as a cross on both sides. Each tablet is about 10 mm in diameter and is debossed with “E” on one side of each quarter portion.
- 12.5 mg white tablets, round and unscored. Each tablet is about 5 mm in diameter and is debossed with “E” on one side.
- Benznidazole tablets 100 mg are available in bottles of 100 tablets (NDC 0642-7464-10).
- Benznidazole tablets 12.5 mg are available in bottles of 100 tablets (NDC 0642-7463-12).
## Storage
- Store at controlled room temperature 20°C to 25°C (68°F to 77°F); excursions permitted to 15°C to 30°C (59°F to 86°F). Keep bottle tightly closed and protect from moisture.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
### Embryo-Fetal Toxicity
- Advise pregnant women and females of reproductive potential that exposure to benznidazole tablets during pregnancy can result in fetal harm.
- Advise females to inform their healthcare provider of a known or suspected pregnancy.
- Advise females of reproductive potential to use effective contraception while taking benznidazole tablets and for 5 days after the last dose.
### Lactation
- Advise women not to breastfeed during treatment with benznidazole tablets.
### Infertility
- Advise males of reproductive potential that benznidazole tablets may impair fertility.
### Important Administration Instructions
- Advise patients and parents/caregivers of pediatric patients taking Benznidazole tablets that:
- Benznidazole tablets 100 mg are functionally scored tablets which can be split into one-half (50 mg) or one-quarter (25 mg) at the scored lines to provide doses less than 100 mg.
- Benznidazole tablets 12.5 mg and 100 mg (whole or split) can be made into a slurry in a specified volume of water for the pediatric population.
### Hypersensitivity Skin Reactions
- Advise patients that serious skin reactions can occur with benznidazole tablets. In case of skin reactions presenting with additional symptoms of systemic involvement, such as lymphadenopathy, fever, and/or purpura, discontinuation of treatment is necessary.
### Central and Peripheral Nervous System Effects
- Advise patients that treatment can potentially cause paresthesia or symptoms of peripheral neuropathy. In cases where neurological symptoms occur, immediate discontinuation of treatment is recommended.
### Hematological Manifestations of Bone Marrow Depression
- Advise patients that there have been hematological manifestations of bone marrow depression, such as anemia and leukopenia, which are reversible and normalize after treatment discontinuation.
### Interaction with Alcohol
- Advise patients to discontinue consumption of alcoholic beverages or products containing propylene glycol while taking benznidazole tablets and for at least three days afterward because abdominal cramps, nausea, vomiting, headaches, and flushing may occur.
# Precautions with Alcohol
Benznidazole can interact with alcohol: abdominal cramps, nausea, vomiting, headaches, and flushing may occur if alcohol or products containing propylene glycol are consumed during treatment and for at least three days after the last dose. Talk to your doctor regarding the effects of taking alcohol with this medication.
# Brand Names
There is limited information regarding Benznidazole Brand Names in the drug label.
# Look-Alike Drug Names
There is limited information regarding Benznidazole Look-Alike Drug Names in the drug label.
# Drug Shortage Status
Drug Shortage
# Price | https://www.wikidoc.org/index.php/Benznidazole | |
7f9bb1807f1220b8aa051c2df69e31ae3ba2f447 | wikidoc | Benzoctamine | Benzoctamine
# Overview
Benzoctamine is a drug that possesses sedative and anxiolytic properties. Marketed as Tacitin by Ciba-Geigy, it is different from most sedative drugs because in most clinical trials it does not produce respiratory depression, but actually stimulates the respiratory system. As a result, when compared to other sedative and anxiolytic drugs such as alprazolam, chlordiazepoxide (Librium), and clonazepam, it is a safer form of tranquilization. However, when co-administered with other drugs that cause respiratory depression, like morphine, it can cause increased respiratory depression.
Medically, benzoctamine is used as a treatment for anxious outpatients to control aggression, enuresis, fear, and minor social maladjustment in children. While it is a relatively new anti-anxiety drug, its popularity is increasing because it provides anxiolytic and sedative effects comparable to those of other medications without their potentially fatal respiratory depressant side effects. Its pharmacological effects are most similar to those of diazepam, another anxiolytic, but unlike diazepam, benzoctamine has antagonistic effects on epinephrine and norepinephrine and appears to reduce serotonin turnover. While little is understood about how it carries out its effects, studies point to reduced serotonin, epinephrine, and norepinephrine as partial causes of its pharmacologic and behavioral effects.
Animal studies have shown that sedative-hypnotic drugs tend to produce dependency in animals, but benzoctamine has been shown not to be addictive. Other animal studies also point to the drug as a possible means of reducing blood pressure through the adrenergic system.
Chemically, benzoctamine belongs to the class of compounds called dibenzobicyclo-octadienes. It consists of four rings in a three-dimensional configuration.
# Medical uses
## Anxiety
Benzoctamine’s main clinical use is for the treatment of anxiety, and evidence points to it being as effective as other clinical anxiety drugs, in particular diazepam. In the treatment of symptoms of mild anxiety due to psychoneurosis, a daily dosage of 30 to 80 mg of benzoctamine was shown to be just as effective as 6–20 mg of diazepam. In another study, one group of patients was given 10 mg of benzoctamine three times a day, while another group was given 5 mg of diazepam, and the treatments were equivalent. While these studies point to higher doses of benzoctamine being needed to exert the same pharmacological effects, the drug is still popular because of its ability to act as an anxiolytic without producing the common respiratory depression associated with other sedative drugs. Some studies have even shown that it stimulates the respiratory system.
### Benzoctamine and sodium amylobarbitone
In a study used to compare benzoctamine to sodium amylobarbitone as a sleep promoter, it was found that during administration of both drugs, patients reported that their sleep was less restless, and drowsiness was diminished. The study further showed that while sodium amylobarbitone caused withdrawal rebound symptoms, benzoctamine did not. It was also found that benzoctamine reduced plasma corticosteroid hormone levels. There is a relationship between anxiety and adreno-corticosteroid activity, with raised levels commonly being reported as an indication of stress. The study showed that benzoctamine, a drug reported to reduce anxiety, was also able to reduce the hormones that potentially cause it. This points to a phenomenon often seen within pharmacology where drugs intended for other uses often have far-reaching and rarely considered effects.
### Benzoctamine vs. chlordiazepoxide in anxiety neurosis
Benzoctamine has been found to have the same efficacy as chlordiazepoxide when treating anxiety neurosis.
## Sleep sedation
While benzoctamine was made to be an alternative to the benzodiazepine line of anxiolytic drugs, other uses for the drug have been discovered. Due to benzoctamine's ability to tranquilize without causing respiratory depression, scientists are moving forward with studies that test its sedative effects in patients with respiratory failure. In one study that used benzoctamine in a clinical setting, researchers showed that the use of benzoctamine for sedation did not result in changes in forced expiratory volume in one second (FEV1) or carbon dioxide partial pressure (PCO2). This confirmed previous statements that claimed the drug did not cause respiratory failure. The main goal of this clinical study was to confirm the findings of another study that showed benzoctamine did not reduce CO2 responsiveness, but instead increased the ventilatory response to CO2.
There are usually many risks associated with using sedatives on patients who are suffering from respiratory failure, which has made it difficult to administer tranquillizing medications in situations when they are desirable. It’s not known why this drug is safe and its benzodiazepine cousins are not, but a possible explanation for this phenomenon might come from its similarity in structure to tricyclic antidepressants, which have also been shown to not cause respiratory failure. While further experimentation is necessary, this study points to benzoctamine’s possible consideration for sedation in respiratory failure patients.
## Other uses
### Hypertension
A possible treatment for hypertension is blocking peripheral vascular serotonergic neurons or alpha-adrenergic neurons on postsynaptic cell sites. One study showed that benzoctamine, a serotonin and alpha-adrenergic antagonist, does not reduce blood pressure through a serotonin mechanism but does reduce blood pressure by antagonizing alpha-adrenergic receptors in rats. Rats were given 10 mg of benzoctamine, and the drops in their blood pressure were approximately 30 mm Hg. The researchers further confirmed that serotonin antagonism was not sufficient to reduce blood pressure by using the highly selective serotonin antagonist 1-(1-naphthyl)-piperazine, which was not able to decrease the blood pressure of the rats. These studies have yet to be repeated in humans.
# Side effects
## Common Side Effects
- Drowsiness
- Dry mouth
- Headache
- Dizziness
## Serotonin turnover
Studies have shown that benzoctamine decreases the rate of turnover of serotonin. Scientists confirmed these results and proposed that the method of action was inhibition of serotonin uptake, since the drug also blocked the serotonin-depleting action of extra-neuronal monoamine transporters (EMT). This would lead to increased stimulation of serotonin receptors through a negative feedback mechanism, eventually decreasing serotonin output. However, the study points out that other studies have shown that drugs combined with EMT cause a lowering of body temperature that in fact results in a decrease in 5HT turnover. This means that body temperature effects cannot be ruled out.
# Pharmacology
Not much is understood about how benzoctamine produces its anti-anxiety effects, but rat studies have shown that the possible mechanism of action is by way of increased turnover of catecholamines. In addition to serotonin, it has also been shown to decrease epinephrine, dopamine, and norepinephrine turnover by antagonizing their receptors. When given intravenously in doses of 20–40 mg, there are no significant differences in efficacy. Oral doses exceeding 10 mg three times daily do not increase the effects of the drug. Assuming serotonin postsynaptic antagonism is the main mechanism by which benzoctamine carries out its effects, studies have shown it to have a half-maximal inhibitory concentration (IC50) value of 115 mM at the serotonin receptor.
# Pharmacokinetics
Benzoctamine can be injected directly into the blood or given as tablets. When given as tablets, it is given in doses of 10 mg three times daily. When given intravenously, patients are given the drug at a rate of 5 mg/minute until 20–40 mg of drug has been injected. Benzoctamine can be analyzed by radioactive analysis as the 3H-acetyl derivative and the N-methyl metabolite into which it is broken down. Benzoctamine has a half-life of 2–3 hours, with a bioavailability of 100% when given intravenously and greater than 90% when given orally. The average time to achieve peak plasma concentrations is 1 hour, and the volume of distribution for a 70 kg person is 1-2 L/kg.
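As a rough illustration of how these parameters fit together, the sketch below applies a one-compartment model to a single 10 mg oral dose. The one-compartment assumption, the 70 kg body weight, instantaneous absorption, and the mid-range parameter values chosen are illustrative assumptions, not figures from the studies cited here.

```python
import math

# Back-of-the-envelope, one-compartment estimate for a single 10 mg oral dose.
# All specific choices below (70 kg subject, F = 0.9, Vd = 1.5 L/kg,
# t1/2 = 2.5 h, complete and instantaneous absorption) are simplifying
# assumptions for illustration, taken from the middle of the quoted ranges.

dose_mg = 10.0
bioavailability = 0.9        # oral bioavailability, "greater than 90%"
vd_l_per_kg = 1.5            # volume of distribution, quoted as 1-2 L/kg
body_weight_kg = 70.0
half_life_h = 2.5            # half-life, quoted as 2-3 hours

vd_total_l = vd_l_per_kg * body_weight_kg
c_peak_mg_per_l = dose_mg * bioavailability / vd_total_l
k_el = math.log(2) / half_life_h

for t_h in (0, 1, 3, 6, 12):
    c_ng_per_ml = 1000 * c_peak_mg_per_l * math.exp(-k_el * t_h)
    print(f"t = {t_h:>2} h : ~{c_ng_per_ml:.0f} ng/mL")

# Gives a peak on the order of ~90 ng/mL falling to a few ng/mL by 12 hours,
# which is at least consistent with a three-times-daily dosing schedule.
```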
# Other studies
## Alcohol and benzoctamine
Benzoctamine, like other psychoactive drugs, has the ability to potentiate the effects of other drugs. However, a motor skill study looking at benzoctamine's capacity to potentiate the inhibitory effects of alcohol showed no significant decreases in motor skill function due to benzoctamine being administered with alcohol.
## Morphine and benzoctamine
Though benzoctamine does not potentiate the effects of alcohol, studies have shown it can potentiate the respiratory depression seen with morphine in rats, while also reducing morphine's analgesic effects.
## Dependency
Monkey studies looking at the dependence liability of several sedative drugs showed that benzoctamine was a dependence-free drug, while pentobarbital, alcohol, chloroform, meprobamate, diazepam, chlordiazepoxide, and oxazolam were not.
## Benzoctamine vs. chlordiazepoxide in serotonin turnover
In a rat study looking at the effects of benzoctamine and chlordiazepoxide on serotonin turnover, rats treated with the drug were found to have elevated levels of [14C]-5HT, indicating a decrease in serotonin turnover.
| https://www.wikidoc.org/index.php/Benzoctamine |
2d52365f49b9cd164954cd4d3ec75194c8b43735 | wikidoc | Benzoic acid | Benzoic acid
Benzoic acid, C7H6O2 (or C6H5COOH), is a colorless crystalline solid and the simplest aromatic carboxylic acid. The name is derived from gum benzoin, which was for a long time the only source for benzoic acid. This weak acid and its salts are used as a food preservative. Benzoic acid is an important precursor for the synthesis of many other organic substances.
# History
Benzoic acid was discovered in the 16th century. The dry distillation of gum benzoin was first described by Nostradamus (1556), and subsequently by Alexius Pedemontanus (1560) and Blaise de Vigenère (1596).
Justus von Liebig and Friedrich Wöhler determined the structure of benzoic acid in 1832. They also investigated how hippuric acid is related to benzoic acid.
In 1875 Salkowski discovered the antifungal abilities of benzoic acid, which were used for a long time in the preservation of benzoate containing fruits.
# Production
## Industrial preparations
Benzoic acid is produced commercially by partial oxidation of toluene with oxygen. The process is catalyzed by cobalt or manganese naphthenates. The process uses cheap raw materials, proceeds in high yield, and is considered environmentally green.
U.S. production capacity is estimated to be 126,000 tonnes per year (139,000 tons), much of which is consumed domestically to prepare other industrial chemicals.
## Historical preparations
The first industrial process involved the reaction of benzotrichloride (trichloromethyl benzene) with calcium hydroxide in water, using iron or iron salts as catalyst. The resulting calcium benzoate is converted to benzoic acid with hydrochloric acid. The product contains significant amounts of chlorinated benzoic acid derivatives. For this reason, benzoic acid for human consumption was obtained by dry distillation of gum benzoin. Food-grade benzoic acid is now produced synthetically.
Alkyl-substituted benzene derivatives give benzoic acid with stoichiometric oxidants such as potassium permanganate, chromium trioxide, or nitric acid.
# Uses
## Food preservative
Benzoic acid and its salts are used as a food preservative, represented by the E-numbers E210, E211, E212, and E213. Benzoic acid inhibits the growth of mold, yeast and some bacteria. It is either added directly or created from reactions with its sodium, potassium, or calcium salt. The mechanism starts with the absorption of benzoic acid into the cell. If the intracellular pH changes to 5 or lower, the anaerobic fermentation of glucose through phosphofructokinase is decreased by 95%. The efficacy of benzoic acid and benzoate is thus dependent on the pH of the food.
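The pH dependence follows from ordinary weak-acid equilibrium: only the undissociated acid readily crosses the cell membrane, and its share falls off quickly above the pKa. The sketch below applies the Henderson–Hasselbalch relationship, assuming a pKa of about 4.2 for benzoic acid; the listed pH values are merely illustrative.

```python
# Fraction of benzoic acid present as the undissociated (membrane-permeant)
# acid, from the Henderson-Hasselbalch relationship. Assumes pKa ~ 4.2 for
# benzoic acid; the pH values are illustrative food/beverage conditions.

PKA = 4.2

def fraction_undissociated(ph: float) -> float:
    return 1.0 / (1.0 + 10 ** (ph - PKA))

for ph in (3.0, 4.2, 5.0, 6.0, 7.0):
    print(f"pH {ph:.1f}: {100 * fraction_undissociated(ph):5.1f}% undissociated")

# ~94% at pH 3 but only ~1.6% at pH 6, which is why benzoate preservation
# works well only in acidic foods and drinks.
```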
Typical levels of use for benzoic acid as a preservative in food are between 0.05 – 0.1%. Foods in which benzoic acid may be used and maximum levels for its application are laid down in international food law.
Concern has been expressed that benzoic acid and its salts may react with ascorbic acid (vitamin C) in some soft drinks, forming small quantities of benzene.
## Synthesis
Benzoic acid is used to make a large number of chemicals, important examples of which are:
- Benzoyl chloride, C6H5C(O)Cl, is obtained by treatment of benzoic acid with thionyl chloride, phosgene or one of the chlorides of phosphorus. C6H5C(O)Cl is an important starting material for several benzoic acid derivatives like benzyl benzoate, which is used in artificial flavours and insect repellents.
- Benzoyl peroxide, [C6H5C(O)O]2, is obtained by treatment with peroxide. The peroxide is a radical starter in polymerization reactions and also a component in cosmetic products.
- Benzoate plasticizers, such as the glycol-, diethyleneglycol-, and triethyleneglycol esters, are obtained by transesterification of methyl benzoate with the corresponding diol. Alternatively these species arise by treatment of benzoyl chloride with the diol. These plasticizers are used similarly to those derived from terephthalic acid esters.
- Phenol, C6H5OH, is obtained by oxidative decarboxylation at 300-400°C. The temperature required can be lowered to 200°C by the addition of catalytic amounts of copper(II) salts. The phenol can be converted to cyclohexanol, which is a starting material for nylon synthesis.
## Medicinal
Benzoic acid is a constituent of Whitfield Ointment which is used for the treatment of fungal skin diseases such as tinea, ringworm, and athlete's foot.
# Purification
Benzoic acid is purified by recrystallisation of the crude product. This involves dissolving the material and allowing it to recrystallize (or re-solidify), leaving any impurities in solution and allowing the pure material to be isolated from the solution.
# Biology and health effects
Benzoic acid occurs naturally free and bound as benzoic acid esters in many plant and animal species. Appreciable amounts have been found in most berries (around 0.05%). Ripe fruits of several Vaccinium species (e.g., cranberry, V. macrocarpon; lingonberry, V. vitis-idaea) contain as much as 300-1300 mg free benzoic acid per kg fruit. Benzoic acid is also formed in apples after infection with the fungus Nectria galligena.
Among animals, benzoic acid has been identified primarily in omnivorous or phytophagous species, e.g., in viscera and muscles of the ptarmigan (Lagopus mutus) as well as in gland secretions of male muskoxen (Ovibos moschatus) or Asian bull elephants (Elephas maximus).
Gum benzoin contains up to 20% of benzoic acid and 40% benzoic acid esters.
Benzoic acid is present as part of hippuric acid (N-Benzoylglycine) in urine of mammals, especially herbivores (Gr. hippos = horse; ouron = urine). Humans produce about 0.44 g/L hippuric acid per day in their urine, and if the person is exposed to toluene or benzoic acid it can rise above that level.
For humans, the WHO's International Programme on Chemical Safety (IPCS) suggests a provisional tolerable intake would be 5 mg/kg body weight per day. Cats have a significantly lower tolerance against benzoic acid and its salts than rats and mice. The lethal dose for cats can be as low as 300 mg/kg body weight. The oral LD50 for rats is 3040 mg/kg; for mice it is 1940-2263 mg/kg.
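To put the provisional tolerable intake in perspective, the short calculation below compares it with the typical preservative use levels quoted earlier; the 70 kg adult and the 330 mL serving size are assumptions chosen only for illustration.

```python
# Illustrative comparison of the IPCS provisional tolerable intake with the
# typical use levels quoted above. The 70 kg adult and the 330 mL serving
# size are assumptions made only for this example.

tolerable_intake_mg_per_kg = 5.0
body_weight_kg = 70.0
daily_limit_mg = tolerable_intake_mg_per_kg * body_weight_kg   # 350 mg/day

use_level_g_per_ml = 0.0005      # 0.05% w/v, the low end of typical use
serving_ml = 330.0
mg_per_serving = use_level_g_per_ml * serving_ml * 1000

print(f"Provisional daily limit for a 70 kg adult: {daily_limit_mg:.0f} mg")
print(f"One 330 mL serving preserved at 0.05%: {mg_per_serving:.0f} mg "
      f"(~{100 * mg_per_serving / daily_limit_mg:.0f}% of that limit)")
```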
# Chemistry
Reactions of benzoic acid can occur at either the aromatic ring or the carboxylic group:
## Aromatic ring
Electrophilic aromatic substitution takes place mainly at the 3-position (meta) relative to the electron-withdrawing carboxylic group.
The second substitution reaction is slower because the first nitro group is deactivating. Conversely, if an activating group (electron-donating) was introduced (e.g., alkyl), a second substitution reaction would occur more readily than the first and the disubstituted product might not accumulate to a significant extent.
## Carboxylic group
All the reactions mentioned for carboxylic acids are also possible for benzoic acid.
- Benzoic acid esters are the product of the acid catalysed reaction with alcohols.
- Benzoic acid amides are more easily available by using activated acid derivatives (such as benzoyl chloride) or by coupling reagents used in peptide synthesis like DCC and DMAP.
- The more active benzoic anhydride is formed by dehydration using acetic anhydride or phosphorus pentoxide.
- Highly reactive acid derivatives such as acid halides are easily obtained by mixing with halogenation agents like phosphorus chlorides or thionyl chloride.
- Orthoesters can be obtained by the reaction of alcohols with benzonitrile under acidic, water-free conditions.
- Reduction to benzaldehyde and benzyl alcohol is possible using DIBAL-H, LiAlH4 or sodium borohydride.
- The copper-catalysed decarboxylation of benzoate to benzene may be effected by heating in quinoline. Also, Hunsdiecker decarboxylation can be achieved by forming the silver salt and heating.
# Laboratory preparations
Benzoic acid is cheap and readily available, so the laboratory synthesis of benzoic acid is mainly practiced for its pedagogical value. It is a common undergraduate preparation, and a convenient property of the compound is that its melting point (122 °C) is numerically equal to its molecular weight (122 g/mol). For all syntheses, benzoic acid can be purified by recrystallization from water because of its high solubility in hot water and poor solubility in cold water. The avoidance of organic solvents for the recrystallization makes this experiment particularly safe.
## By hydrolysis
Like any other nitrile or amide, benzonitrile and benzamide can be hydrolyzed to benzoic acid or its conjugate base under acidic or basic conditions.
## From benzaldehyde
The base-induced disproportionation of benzaldehyde, the Cannizzaro reaction, affords equal amounts of benzoate and benzyl alcohol; the latter can be removed by distillation.
## From bromobenzene
Bromobenzene in diethyl ether is stirred with magnesium turnings to produce phenylmagnesium bromide (C6H5MgBr). This Grignard reagent is slowly added to dry-ice (solid carbon dioxide) to give benzoate. Dilute acid is added to form benzoic acid.
## From benzyl alcohol
Benzyl alcohol is refluxed with potassium permanganate or other oxidizing reagents in water. The mixture is filtered hot to remove manganese dioxide and then allowed to cool to afford benzoic acid.
| https://www.wikidoc.org/index.php/Benzoic_acid |
af0c5b39e0cd460814a9914f2e22ca145eeb7847 | wikidoc | Bernard Lown | Bernard Lown
# Overview
Bernard Lown, M.D., was the original developer of the direct-current (DC) defibrillator and cardioverter, and is an internationally known peace activist.
Born in Lithuania, he emigrated at age 13 with his parents to the US, initially to Maine shortly before the outbreak of World War II, and subsequently studied to become a specialist in cardiology.
# Development of the defibrillator
Up until the late 1950's, fibrillation of the heart could be treated only by drug therapy. In 1956 American cardiologist Paul Zoll published a paper describing resuscitation of open-heart surgery patients by means of a 110 volt alternating current electric shock (derived from a wall socket) and conducted to the sides of the exposed heart by metal plate "paddles". While being an advance in emergency resuscitation, the technique was later to be shown to be both damaging to the heart muscle and of unpredictable effectiveness in reverting ventricular fibrillation.
In 1959, Lown, aware of the Zoll paper and of the complications resulting from the alternating current method, commenced animal research in an endeavour to define a less traumatic and more effective form of electric shock.
This work resulted in what became known as the "Lown waveform": a single, heavily damped sinusoidal waveform with a half-cycle time of approximately 5 milliseconds. The waveform was produced by charging a bank of capacitors to about 1000 volts, then discharging the capacitors through an inductor to deliver the waveform to the heart.
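A capacitor bank discharging through an inductor into a resistive load forms a series RLC circuit, and when underdamped it naturally produces exactly this kind of damped sinusoid. The sketch below uses illustrative component values picked only to give a half-cycle time near 5 ms; they are not the values of the historical Lown–Berkovits circuit.

```python
import math

# Underdamped series RLC discharge:
#   i(t) = (V0 / (wd * L)) * exp(-a * t) * sin(wd * t),
# with a = R / (2 * L) and wd = sqrt(1 / (L * C) - a**2). The component values
# below are purely illustrative (not the historical circuit), chosen to give a
# half-cycle time near the ~5 ms quoted for the Lown waveform.

V0 = 1000.0   # initial capacitor voltage, volts (as described above)
C = 16e-6     # capacitance, farads (assumed)
L = 0.1       # inductance, henries (assumed)
R = 50.0      # resistive load, ohms (typical transthoracic value, assumed)

a = R / (2 * L)
wd = math.sqrt(1 / (L * C) - a ** 2)
print(f"Half-cycle time: {1000 * math.pi / wd:.1f} ms")   # ~4 ms for these values

for t_ms in (0.5, 1.0, 2.0, 3.0, 4.0):
    t = t_ms / 1000
    i = (V0 / (wd * L)) * math.exp(-a * t) * math.sin(wd * t)
    print(f"t = {t_ms} ms: i ~= {i:.1f} A")
```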
Following the research findings, Lown contacted engineer Barouh Berkovits of the American Optical Company, who produced a clinical prototype defibrillator (often referred to as a "cardioverter") which became the basis for further technological evolution. The original machine, weighing some 60 lb (27 kg), delivered the Lown waveform at energy levels up to 100 joules for exposed heart application, and 200–400 joules for transthoracic application.
# Peace activist
In 1960 he was one of the founders of Physicians for Social Responsibility and later the co-founder of International Physicians for the Prevention of Nuclear War. He also founded two organisations, SATELLIFE and ProCOR, which provide health information and assistance to developing countries.
International Physicians for the Prevention of Nuclear War was awarded the 1985 Nobel Peace Prize.
Bernard Lown is currently Professor of Cardiology Emeritus at the Harvard School of Public Health. He and his wife Louise have three children.
| https://www.wikidoc.org/index.php/Bernard_Lown |
0e9c546bf8fbe1faaa0288295ab5ca9db222c68d | wikidoc | Bert Sakmann | Bert Sakmann
Bert Sakmann (born June 12, 1942) is a German cell physiologist. He shared the Nobel Prize in Physiology or Medicine with Erwin Neher in 1991 for their work on "the function of single ion channels in cells" and the invention of the patch clamp. Bert Sakmann was Professor at Heidelberg University and is an Emeritus Scientific Member of the Max Planck Institute for Medical Research in Heidelberg, Germany. Since 2008 he has led an emeritus research group at the Max Planck Institute of Neurobiology.
Born in Stuttgart, Sakmann enrolled in Volksschule in Lindau, and completed the Wagenburg gymnasium in Stuttgart in 1961. He studied medicine from 1967 onwards in Tübingen, Freiburg, Berlin, Paris and Munich. After completing his medical exams at Ludwig-Maximilians University in Munich, he became a medical assistant in 1968 at Munich University, while also working as a scientific assistant (Wissenschaftlicher Assistant) at Munich's Max-Planck-Institut für Psychiatrie, in the Neurophysiology Department under Otto Detlev Creutzfeldt. In 1971 he moved to University College London, where he worked in the Department of Biophysics under Bernard Katz. In 1974 he completed his medical dissertation, under the title Elektrophysiologie der neuralen Helladaptation in der Katzenretina (Electrophysiology of Neural Light Adaption in the Cat Retina) in the Medical Faculty of Göttingen University.
Afterwards (still in 1974), Sakmann returned to the lab of Otto Creutzfeldt, who had meanwhile moved to the Max Planck Institute for Biophysical Chemistry in Göttingen. Sakmann joined the membrane biology group in 1979.
In 1986, he was awarded the Louisa Gross Horwitz Prize from Columbia University together with Erwin Neher, his co-winner of the 1991 Nobel Prize in Physiology or Medicine.
In 1987, he received the Gottfried Wilhelm Leibniz Prize of the Deutsche Forschungsgemeinschaft, which is the highest honour awarded in German research.
In 1991 he received the Nobel prize for Physiology or Medicine together with Neher, with whom he had worked in Göttingen.
Sakmann is the founder of the Bert-Sakmann-Stiftung.
| https://www.wikidoc.org/index.php/Bert_Sakmann |
6c8fc7bf3c8876fc089a0adc59be85bb967b075d | wikidoc | Besifloxacin | Besifloxacin
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Besifloxacin is a quinolone antimicrobial that is FDA approved for the treatment of bacterial conjunctivitis. Common adverse reactions include conjunctival redness.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
### Indications
- Besivance® (besifloxacin ophthalmic suspension) 0.6%, is indicated for the treatment of bacterial conjunctivitis caused by susceptible isolates of the following bacteria:
- Efficacy for this organism was studied in fewer than 10 infections.
### Dosage
- Invert closed bottle and shake once before use.
- Instill one drop in the affected eye(s) 3 times a day, four to twelve hours apart for 7 days.
### DOSAGE FORMS AND STRENGTHS
- 7.5 mL bottle filled with 5 mL of besifloxacin ophthalmic suspension, 0.6%.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Besifloxacin in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Besifloxacin in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
There is limited information regarding FDA-Labeled Use of Besifloxacin in pediatric patients.
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Besifloxacin in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Besifloxacin in pediatric patients.
# Contraindications
- None
# Warnings
- Topical Ophthalmic Use Only
- NOT FOR INJECTION INTO THE EYE.
- Besivance is for topical ophthalmic use only, and should not be injected subconjunctivally, nor should it be introduced directly into the anterior chamber of the eye.
- As with other anti-infectives, prolonged use of Besivance (besifloxacin ophthalmic suspension) 0.6% may result in overgrowth of non-susceptible organisms, including fungi. If super-infection occurs, discontinue use and institute alternative therapy. Whenever clinical judgment dictates, the patient should be examined with the aid of magnification, such as slit-lamp biomicroscopy, and, where appropriate, fluorescein staining.
- Patients should not wear contact lenses if they have signs or symptoms of bacterial conjunctivitis or during the course of therapy with Besivance.
# Adverse Reactions
## Clinical Trials Experience
- Because clinical trials are conducted under widely varying conditions, adverse reaction rates observed in one clinical trial of a drug cannot be directly compared with the rates in the clinical trials of the same or another drug and may not reflect the rates observed in practice.
- The data described below reflect exposure to Besivance in approximately 1,000 patients between 1 and 98 years old with clinical signs and symptoms of bacterial conjunctivitis.
- The most frequently reported ocular adverse reaction was conjunctival redness, reported in approximately 2% of patients.
- Other adverse reactions reported in patients receiving Besivance occurring in approximately 1-2% of patients included: blurred vision, eye pain, eye irritation, eye pruritus and headache.
## Postmarketing Experience
There is limited information regarding Postmarketing Experience of Besifloxacin in the drug label.
# Drug Interactions
There is limited information regarding Besifloxacin Drug Interactions in the drug label.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): C
- Oral doses of besifloxacin up to 1000 mg/kg/day were not associated with visceral or skeletal malformations in rat pups in a study of embryo-fetal development, although this dose was associated with maternal toxicity (reduced body weight gain and food consumption) and maternal mortality. Increased post-implantation loss, decreased fetal body weights, and decreased fetal ossification were also observed. At this dose, the mean Cmax in the rat dams was approximately 20 mcg/mL, >45,000 times the mean plasma concentrations measured in humans. The No Observed Adverse Effect Level (NOAEL) for this embryo-fetal development study was 100 mg/kg/day (Cmax, 5 mcg/mL, >11,000 times the mean plasma concentrations measured in humans).
- In a prenatal and postnatal development study in rats, the NOAELs for both fetal and maternal toxicity were also 100 mg/kg/day. At 1000 mg/kg/day, the pups weighed significantly less than controls and had a reduced neonatal survival rate. Attainment of developmental landmarks and sexual maturation were delayed, although surviving pups from this dose group that were reared to maturity did not demonstrate deficits in behavior, including activity, learning and memory, and their reproductive capacity appeared normal.
- Since there are no adequate and well-controlled studies in pregnant women, Besivance should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus.
Pregnancy Category (AUS):
- Australian Drug Evaluation Committee (ADEC) Pregnancy Category
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Besifloxacin in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Besifloxacin during labor and delivery.
### Nursing Mothers
- Besifloxacin has not been measured in human milk, although it can be presumed to be excreted in human milk. Caution should be exercised when Besivance is administered to a nursing mother.
### Pediatric Use
- The safety and effectiveness of Besivance® in infants below one year of age have not been established. The efficacy of Besivance in treating bacterial conjunctivitis in pediatric patients one year or older has been demonstrated in controlled clinical trials.
- There is no evidence that the ophthalmic administration of quinolones has any effect on weight bearing joints, even though systemic administration of some quinolones has been shown to cause arthropathy in immature animals.
### Geriatric Use
- No overall differences in safety and effectiveness have been observed between elderly and younger patients.
### Gender
There is no FDA guidance on the use of Besifloxacin with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Besifloxacin with respect to specific racial populations.
### Renal Impairment
There is no FDA guidance on the use of Besifloxacin in patients with renal impairment.
### Hepatic Impairment
There is no FDA guidance on the use of Besifloxacin in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Besifloxacin in females of reproductive potential and males.
### Immunocompromised Patients
There is no FDA guidance on the use of Besifloxacin in patients who are immunocompromised.
# Administration and Monitoring
### Administration
- Topical ophthalmic
### Monitoring
There is limited information regarding Monitoring of Besifloxacin in the drug label.
# IV Compatibility
There is limited information regarding IV Compatibility of Besifloxacin in the drug label.
# Overdosage
There is limited information regarding Overdose of Besifloxacin in the drug label.
# Pharmacology
## Mechanism of Action
- Besifloxacin is an 8-chloro fluoroquinolone with a N-1 cyclopropyl group. The compound has activity against Gram-positive and Gram-negative bacteria due to the inhibition of both bacterial DNA gyrase and topoisomerase IV. DNA gyrase is an essential enzyme required for replication, transcription and repair of bacterial DNA. Topoisomerase IV is an essential enzyme required for partitioning of the chromosomal DNA during bacterial cell division. Besifloxacin is bactericidal with minimum bactericidal concentrations (MBCs) generally within one dilution of the minimum inhibitory concentrations (MICs).
## Structure
- Besivance (besifloxacin ophthalmic suspension) 0.6%, is a sterile ophthalmic suspension of besifloxacin formulated with DuraSite® (polycarbophil, edetate disodium dihydrate and sodium chloride). Each mL of Besivance contains 6.63 mg besifloxacin hydrochloride, equivalent to 6 mg besifloxacin base (a worked salt-to-base conversion is sketched after this list). It is an 8-chloro fluoroquinolone anti-infective for topical ophthalmic use.
- Mol Wt 430.30
- Chemical Name: (+)-7-[(3R)-3-aminohexahydro-1H-azepin-1-yl]-8-chloro-1-cyclopropyl-6-fluoro-4-oxo-1,4-dihydroquinoline-3-carboxylic acid hydrochloride.
- Besifloxacin hydrochloride is a white to pale yellowish-white powder.
- Each mL Contains:
- Besivance is an isotonic suspension with an osmolality of approximately 290 mOsm/kg.
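As a rough consistency check on the strength statement above (6.63 mg/mL besifloxacin hydrochloride equivalent to 6 mg/mL besifloxacin base), the sketch below assumes that the listed molecular weight of 430.30 refers to the hydrochloride salt and uses 36.46 g/mol for HCl; it is illustrative arithmetic, not label content.

```python
# Back-of-the-envelope salt-to-base conversion (assumptions noted above).
mw_hcl_salt = 430.30                  # g/mol, assumed to be besifloxacin hydrochloride
mw_hcl = 36.46                        # g/mol, hydrogen chloride
mw_free_base = mw_hcl_salt - mw_hcl   # ~393.84 g/mol

salt_mg_per_ml = 6.63
base_mg_per_ml = salt_mg_per_ml * mw_free_base / mw_hcl_salt
print(f"{salt_mg_per_ml} mg/mL of the HCl salt ≈ {base_mg_per_ml:.2f} mg/mL free base")
# -> ~6.07 mg/mL, consistent with the labeled 6 mg/mL (0.6%) of besifloxacin base
```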
## Pharmacodynamics
There is limited information regarding Pharmacodynamics of Besifloxacin in the drug label.
## Pharmacokinetics
- Plasma concentrations of besifloxacin were measured in adult patients with suspected bacterial conjunctivitis who received Besivance bilaterally three times a day (16 doses total). Following the first and last dose, the maximum plasma besifloxacin concentration in each patient was less than 1.3 ng/mL. The mean besifloxacin Cmax was 0.37 ng/mL on day 1 and 0.43 ng/mL on day 6. The average elimination half-life of besifloxacin in plasma following multiple dosing was estimated to be 7 hours (a worked half-life calculation is sketched after this list).
- Besifloxacin is an 8-chloro fluoroquinolone with a N-1 cyclopropyl group. The compound has activity against Gram-positive and Gram-negative bacteria due to the inhibition of both bacterial DNA gyrase and topoisomerase IV. DNA gyrase is an essential enzyme required for replication, transcription and repair of bacterial DNA. Topoisomerase IV is an essential enzyme required for partitioning of the chromosomal DNA during bacterial cell division. Besifloxacin is bactericidal with minimum bactericidal concentrations (MBCs) generally within one dilution of the minimum inhibitory concentrations (MICs).
- The mechanism of action of fluoroquinolones, including besifloxacin, is different from that of aminoglycoside, macrolide, and β-lactam antibiotics. Therefore, besifloxacin may be active against pathogens that are resistant to these antibiotics and these antibiotics may be active against pathogens that are resistant to besifloxacin. In vitro studies demonstrated cross-resistance between besifloxacin and some fluoroquinolones.
- In vitro resistance to besifloxacin develops via multiple-step mutations and occurs at a general frequency of < 3.3 × 10⁻¹⁰ for Staphylococcus aureus and < 7 × 10⁻¹⁰ for Streptococcus pneumoniae.
- Besifloxacin has been shown to be active against most isolates of the following bacteria both in vitro and in conjunctival infections treated in clinical trials as described in the INDICATIONS AND USAGE section:
- Aerococcus viridans*, CDC coryneform group G, Corynebacterium pseudodiphtheriticum*, C. striatum*, Haemophilus influenzae, Moraxella catarrhalis*, M. lacunata*, Pseudomonas aeruginosa*, Staphylococcus aureus, S. epidermidis, S. hominis*, S. lugdunensis*, S. warneri*, Streptococcus mitis group, S. oralis, S. pneumoniae, S. salivarius*
- Efficacy for this organism was studied in fewer than 10 infections.
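As a worked illustration of the 7-hour plasma half-life quoted in the pharmacokinetics bullet above, the sketch below applies simple first-order (single-compartment) decay to the reported day-6 mean Cmax of 0.43 ng/mL; the model and the chosen time points are simplifying assumptions for illustration only.

```python
# First-order decay: C(t) = C0 * 0.5 ** (t / t_half)
c0 = 0.43          # ng/mL, reported mean Cmax on day 6
t_half = 7.0       # hours, reported mean elimination half-life

for t in (7, 14, 24):
    c = c0 * 0.5 ** (t / t_half)
    print(f"t = {t:>2} h: ~{c:.3f} ng/mL")
# About half remains after one half-life (~0.22 ng/mL), a quarter after two (~0.11),
# and roughly 0.04 ng/mL by 24 h under this simple model.
```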
## Nonclinical Toxicology
- Long-term studies in animals to determine the carcinogenic potential of besifloxacin have not been performed.
- No in vitro mutagenic activity of besifloxacin was observed in an Ames test (up to 3.33 mcg/plate) on bacterial tester strains Salmonella typhimurium TA98, TA100, TA1535, TA1537 and Escherichia coli WP2uvrA. However, it was mutagenic in S. typhimurium strain TA102 and E. coli strain WP2(pKM101). Positive responses in these strains have been observed with other quinolones and are likely related to topoisomerase inhibition.
- Besifloxacin induced chromosomal aberrations in CHO cells in vitro and it was positive in an in vivo mouse micronucleus assay at oral doses ≥ 1500 mg/kg. Besifloxacin did not induce unscheduled DNA synthesis in hepatocytes cultured from rats given the test compound up to 2,000 mg/kg by the oral route. In a fertility and early embryonic development study in rats, besifloxacin did not impair the fertility of male or female rats at oral doses of up to 500 mg/kg/day. This is over 10,000 times higher than the recommended total daily human ophthalmic dose.
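The ">10,000 times" exposure margin quoted above can be sanity-checked with rough assumptions: an ophthalmic drop volume of about 40 µL, bilateral dosing three times daily, and a 50 kg patient. None of these patient-side figures come from the label, so treat the sketch as an order-of-magnitude illustration only.

```python
# Rough exposure-margin estimate (all patient-side inputs are assumptions).
drop_volume_ml = 0.040       # assumed ~40 microliter drop
conc_mg_per_ml = 6.0         # 0.6% suspension = 6 mg/mL besifloxacin base
drops_per_day = 3 * 2        # 3 times daily, both eyes (worst case)
body_weight_kg = 50          # assumed body weight

human_dose_mg_per_kg = drop_volume_ml * conc_mg_per_ml * drops_per_day / body_weight_kg
rat_noael_mg_per_kg = 500    # oral dose with no fertility effect in rats
print(f"Human topical dose ≈ {human_dose_mg_per_kg:.3f} mg/kg/day")
print(f"Margin ≈ {rat_noael_mg_per_kg / human_dose_mg_per_kg:,.0f}-fold")
# ≈ 0.029 mg/kg/day and a ~17,000-fold margin, consistent with ">10,000 times".
```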
# Clinical Studies
- In a randomized, double-masked, vehicle controlled, multicenter clinical trial, in which patients 1-98 years of age were dosed 3 times a day for 5 days, Besivance was superior to its vehicle in patients with bacterial conjunctivitis. Clinical resolution was achieved in 45% (90/198) for the Besivance treated group versus 33% (63/191) for the vehicle treated group (difference 12%, 95% CI 3% - 22%). Microbiological outcomes demonstrated a statistically significant eradication rate for causative pathogens of 91% (181/198) for the Besivance treated group versus 60% (114/191) for the vehicle treated group (difference 31%, 95% CI 23% - 40%). Microbiologic eradication does not always correlate with clinical outcome in anti-infective trials.
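The treatment differences and 95% confidence intervals reported above can be reproduced from the raw counts with a standard Wald interval for a difference of two proportions; this recalculation is illustrative and is not necessarily the method used in the trial analysis.

```python
from math import sqrt

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Clinical resolution: 90/198 (Besivance) vs 63/191 (vehicle)
print(risk_difference_ci(90, 198, 63, 191))    # ~ (0.125, 0.028, 0.221) -> 12% (3%-22%)
# Microbial eradication: 181/198 vs 114/191
print(risk_difference_ci(181, 198, 114, 191))  # ~ (0.317, 0.238, 0.397) -> 31% (23%-40%)
```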
# How Supplied
- Besivance® (besifloxacin ophthalmic suspension) 0.6%, is supplied as a sterile ophthalmic suspension in a white low density polyethylene (LDPE) bottle with a controlled dropper tip and tan polypropylene cap. Tamper evidence is provided with a shrink band around the cap and neck area of the package.
- 5 mL in 7.5 mL bottle
- NDC 24208-446-05
## Storage
- Store at 15°- 25°C (59° - 77°F). Protect from Light. Invert closed bottle and shake once before use.
# Images
## Drug Images
## Package and Label Display Panel
### PACKAGE/LABEL PRINCIPAL DISPLAY PANEL
### Ingredients and Appearance
# Patient Counseling Information
- Patients should be advised to avoid contaminating the applicator tip with material from the eye, fingers or other source.
- Although Besivance is not intended to be administered systemically, quinolones administered systemically have been associated with hypersensitivity reactions, even following a single dose. Patients should be advised to discontinue use immediately and contact their physician at the first sign of a rash or allergic reaction.
- Patients should be told that although it is common to feel better early in the course of the therapy, the medication should be taken exactly as directed. Skipping doses or not completing the full course of therapy may (1) decrease the effectiveness of the immediate treatment and (2) increase the likelihood that bacteria will develop resistance and will not be treatable by Besivance or other antibacterial drugs in the future.
- Patients should be advised not to wear contact lenses if they have signs or symptoms of bacterial conjunctivitis or during the course of therapy with Besivance.
- Patients should be advised to thoroughly wash hands prior to using Besivance.
- Patients should be instructed to invert closed bottle (upside down) and shake once before each use. Remove cap with bottle still in the inverted position. Tilt head back, and with bottle inverted, gently squeeze bottle to instill one drop into the affected eye(s).
- Manufactured by: Bausch & Lomb Incorporated
Tampa, Florida 33637
Besivance® is a registered trademark of Bausch & Lomb Incorporated.
©Bausch & Lomb Incorporated
U.S. Patent Nos. 6,685,958; 6,699,492; 5,447,926
- DuraSite is a trademark of InSite Vision Incorporated
9142605 (flat)
9142705 (folded)
# Precautions with Alcohol
- Alcohol-Besifloxacin interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
- BESIVANCE®
# Look-Alike Drug Names
There is limited information regarding Besifloxacin Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price | Besifloxacin
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]; Associate Editor(s)-in-Chief: Rabin Bista, M.B.B.S. [2]
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Besifloxacin is a quinolone antimicrobial that is FDA approved for the treatment of bacterial conjunctivitis. Common adverse reactions include conjunctival redness.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
### Indications
- Besivance® (besifloxacin ophthalmic suspension) 0.6%, is indicated for the treatment of bacterial conjunctivitis caused by susceptible isolates of the following bacteria:
- Aerococcus viridans*, CDC coryneform group G, Corynebacterium pseudodiphtheriticum*, C. striatum*, Haemophilus influenzae, Moraxella catarrhalis*, M. lacunata*, Pseudomonas aeruginosa*, Staphylococcus aureus, S. epidermidis, S. hominis*, S. lugdunensis*, S. warneri*, Streptococcus mitis group, S. oralis, S. pneumoniae, S. salivarius*
- Efficacy for this organism was studied in fewer than 10 infections.
### Dosage
- Invert closed bottle and shake once before use.
- Instill one drop in the affected eye(s) 3 times a day, four to twelve hours apart for 7 days.
### DOSAGE FORMS AND STRENGTHS
- 7.5 mL bottle filled with 5 mL of besifloxacin ophthalmic suspension, 0.6%.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Besifloxacin in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Besifloxacin in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
There is limited information regarding FDA-Labeled Use of Besifloxacin in pediatric patients.
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Besifloxacin in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Besifloxacin in pediatric patients.
# Contraindications
- None
# Warnings
- Topical Ophthalmic Use Only
- NOT FOR INJECTION INTO THE EYE.
- Besivance is for topical ophthalmic use only, and should not be injected subconjunctivally, nor should it be introduced directly into the anterior chamber of the eye.
- As with other anti-infectives, prolonged use of Besivance (besifloxacin ophthalmic suspension) 0.6% may result in overgrowth of non-susceptible organisms, including fungi. If super-infection occurs, discontinue use and institute alternative therapy. Whenever clinical judgment dictates, the patient should be examined with the aid of magnification, such as slit-lamp biomicroscopy, and, where appropriate, fluorescein staining.
- Patients should not wear contact lenses if they have signs or symptoms of bacterial conjunctivitis or during the course of therapy with Besivance.
# Adverse Reactions
## Clinical Trials Experience
- Because clinical trials are conducted under widely varying conditions, adverse reaction rates observed in one clinical trial of a drug cannot be directly compared with the rates in the clinical trials of the same or another drug and may not reflect the rates observed in practice.
- The data described below reflect exposure to Besivance in approximately 1,000 patients between 1 and 98 years old with clinical signs and symptoms of bacterial conjunctivitis.
- The most frequently reported ocular adverse reaction was conjunctival redness, reported in approximately 2% of patients.
- Other adverse reactions reported in patients receiving Besivance occurring in approximately 1-2% of patients included: blurred vision, eye pain, eye irritation, eye pruritus and headache.
## Postmarketing Experience
There is limited information regarding Postmarketing Experience of Besifloxacin in the drug label.
# Drug Interactions
There is limited information regarding Besifloxacin Drug Interactions in the drug label.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): C
- Oral doses of besifloxacin up to 1000 mg/kg/day were not associated with visceral or skeletal malformations in rat pups in a study of embryo-fetal development, although this dose was associated with maternal toxicity (reduced body weight gain and food consumption) and maternal mortality. Increased post-implantation loss, decreased fetal body weights, and decreased fetal ossification were also observed. At this dose, the mean Cmax in the rat dams was approximately 20 mcg/mL, >45,000 times the mean plasma concentrations measured in humans. The No Observed Adverse Effect Level (NOAEL) for this embryo-fetal development study was 100 mg/kg/day (Cmax, 5 mcg/mL, >11,000 times the mean plasma concentrations measured in humans).
- In a prenatal and postnatal development study in rats, the NOAELs for both fetal and maternal toxicity were also 100 mg/kg/day. At 1000 mg/kg/day, the pups weighed significantly less than controls and had a reduced neonatal survival rate. Attainment of developmental landmarks and sexual maturation were delayed, although surviving pups from this dose group that were reared to maturity did not demonstrate deficits in behavior, including activity, learning and memory, and their reproductive capacity appeared normal.
- Since there are no adequate and well-controlled studies in pregnant women, Besivance should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus.
Pregnancy Category (AUS):
- Australian Drug Evaluation Committee (ADEC) Pregnancy Category
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Besifloxacin in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Besifloxacin during labor and delivery.
### Nursing Mothers
- Besifloxacin has not been measured in human milk, although it can be presumed to be excreted in human milk. Caution should be exercised when Besivance is administered to a nursing mother.
### Pediatric Use
- The safety and effectiveness of Besivance® in infants below one year of age have not been established. The efficacy of Besivance in treating bacterial conjunctivitis in pediatric patients one year or older has been demonstrated in controlled clinical trials [see CLINICAL STUDIES (14)].
- There is no evidence that the ophthalmic administration of quinolones has any effect on weight bearing joints, even though systemic administration of some quinolones has been shown to cause arthropathy in immature animals.
### Geriatric Use
- No overall differences in safety and effectiveness have been observed between elderly and younger patients.
### Gender
There is no FDA guidance on the use of Besifloxacin with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Besifloxacin with respect to specific racial populations.
### Renal Impairment
There is no FDA guidance on the use of Besifloxacin in patients with renal impairment.
### Hepatic Impairment
There is no FDA guidance on the use of Besifloxacin in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Besifloxacin in females of reproductive potential and males.
### Immunocompromised Patients
There is no FDA guidance on the use of Besifloxacin in patients who are immunocompromised.
# Administration and Monitoring
### Administration
- Topical ophthalmic
### Monitoring
There is limited information regarding Monitoring of Besifloxacin in the drug label.
# IV Compatibility
There is limited information regarding IV Compatibility of Besifloxacin in the drug label.
# Overdosage
There is limited information regarding Overdose of Besifloxacin in the drug label.
# Pharmacology
## Mechanism of Action
- Besifloxacin is an 8-chloro fluoroquinolone with a N-1 cyclopropyl group. The compound has activity against Gram-positive and Gram-negative bacteria due to the inhibition of both bacterial DNA gyrase and topoisomerase IV. DNA gyrase is an essential enzyme required for replication, transcription and repair of bacterial DNA. Topoisomerase IV is an essential enzyme required for partitioning of the chromosomal DNA during bacterial cell division. Besifloxacin is bactericidal with minimum bactericidal concentrations (MBCs) generally within one dilution of the minimum inhibitory concentrations (MICs).
## Structure
- Besivance (besifloxacin ophthalmic suspension) 0.6%, is a sterile ophthalmic suspension of besifloxacin formulated with DuraSite®* (polycarbophil, edetate disodium dihydrate and sodium chloride). Each mL of Besivance contains 6.63 mg besifloxacin hydrochloride equivalent to 6 mg besifloxacin base. It is an 8-chloro fluoroquinolone anti-infective for topical ophthalmic use.
- Mol Wt 430.30
- Chemical Name: (+)-7-[(3R)-3-aminohexahydro-1H-azepin-1-yl]-8-chloro-1-cyclopropyl-6-fluoro-4-oxo-1,4-dihydroquinoline-3-carboxylic acid hydrochloride.
- Besifloxacin hydrochloride is a white to pale yellowish-white powder.
- Each mL Contains:
- Besivance is an isotonic suspension with an osmolality of approximately 290 mOsm/kg.
## Pharmacodynamics
There is limited information regarding Pharmacodynamics of Besifloxacin in the drug label.
## Pharmacokinetics
- Plasma concentrations of besifloxacin were measured in adult patients with suspected bacterial conjunctivitis who received Besivance bilaterally three times a day (16 doses total). Following the first and last dose, the maximum plasma besifloxacin concentration in each patient was less than 1.3 ng/mL. The mean besifloxacin Cmax was 0.37 ng/mL on day 1 and 0.43 ng/mL on day 6. The average elimination half-life of besifloxacin in plasma following multiple dosing was estimated to be 7 hours.
- Besifloxacin is an 8-chloro fluoroquinolone with a N-1 cyclopropyl group. The compound has activity against Gram-positive and Gram-negative bacteria due to the inhibition of both bacterial DNA gyrase and topoisomerase IV. DNA gyrase is an essential enzyme required for replication, transcription and repair of bacterial DNA. Topoisomerase IV is an essential enzyme required for partitioning of the chromosomal DNA during bacterial cell division. Besifloxacin is bactericidal with minimum bactericidal concentrations (MBCs) generally within one dilution of the minimum inhibitory concentrations (MICs).
- The mechanism of action of fluoroquinolones, including besifloxacin, is different from that of aminoglycoside, macrolide, and β-lactam antibiotics. Therefore, besifloxacin may be active against pathogens that are resistant to these antibiotics and these antibiotics may be active against pathogens that are resistant to besifloxacin. In vitro studies demonstrated cross-resistance between besifloxacin and some fluoroquinolones.
- In vitro resistance to besifloxacin develops via multiple-step mutations and occurs at a general frequency of < 3.3 × 10⁻¹⁰ for Staphylococcus aureus and < 7 × 10⁻¹⁰ for Streptococcus pneumoniae.
- Besifloxacin has been shown to be active against most isolates of the following bacteria both in vitro and in conjunctival infections treated in clinical trials as described in the INDICATIONS AND USAGE section:
- Aerococcus viridans*, CDC coryneform group G, Corynebacterium pseudodiphtheriticum*, C. striatum*, Haemophilus influenzae, Moraxella catarrhalis*, M. lacunata*, Pseudomonas aeruginosa*, Staphylococcus aureus, S. epidermidis, S. hominis*, S. lugdunensis*, S. warneri*, Streptococcus mitis group, S. oralis, S. pneumoniae, S. salivarius*
- Efficacy for this organism was studied in fewer than 10 infections.
## Nonclinical Toxicology
- Long-term studies in animals to determine the carcinogenic potential of besifloxacin have not been performed.
- No in vitro mutagenic activity of besifloxacin was observed in an Ames test (up to 3.33 mcg/plate) on bacterial tester strains Salmonella typhimurium TA98, TA100, TA1535, TA1537 and Escherichia coli WP2uvrA. However, it was mutagenic in S. typhimurium strain TA102 and E. coli strain WP2(pKM101). Positive responses in these strains have been observed with other quinolones and are likely related to topoisomerase inhibition.
- Besifloxacin induced chromosomal aberrations in CHO cells in vitro and it was positive in an in vivo mouse micronucleus assay at oral doses ≥ 1500 mg/kg. Besifloxacin did not induce unscheduled DNA synthesis in hepatocytes cultured from rats given the test compound up to 2,000 mg/kg by the oral route. In a fertility and early embryonic development study in rats, besifloxacin did not impair the fertility of male or female rats at oral doses of up to 500 mg/kg/day. This is over 10,000 times higher than the recommended total daily human ophthalmic dose.
# Clinical Studies
- In a randomized, double-masked, vehicle controlled, multicenter clinical trial, in which patients 1-98 years of age were dosed 3 times a day for 5 days, Besivance was superior to its vehicle in patients with bacterial conjunctivitis. Clinical resolution was achieved in 45% (90/198) for the Besivance treated group versus 33% (63/191) for the vehicle treated group (difference 12%, 95% CI 3% - 22%). Microbiological outcomes demonstrated a statistically significant eradication rate for causative pathogens of 91% (181/198) for the Besivance treated group versus 60% (114/191) for the vehicle treated group (difference 31%, 95% CI 23% - 40%). Microbiologic eradication does not always correlate with clinical outcome in anti-infective trials.
# How Supplied
- Besivance® (besifloxacin ophthalmic suspension) 0.6%, is supplied as a sterile ophthalmic suspension in a white low density polyethylene (LDPE) bottle with a controlled dropper tip and tan polypropylene cap. Tamper evidence is provided with a shrink band around the cap and neck area of the package.
- 5 mL in 7.5 mL bottle
- NDC 24208-446-05
## Storage
- Store at 15°- 25°C (59° - 77°F). Protect from Light. Invert closed bottle and shake once before use.
# Images
## Drug Images
## Package and Label Display Panel
### PACKAGE/LABEL PRINCIPAL DISPLAY PANEL
### Ingredients and Appearance
# Patient Counseling Information
- Patients should be advised to avoid contaminating the applicator tip with material from the eye, fingers or other source.
- Although Besivance is not intended to be administered systemically, quinolones administered systemically have been associated with hypersensitivity reactions, even following a single dose. Patients should be advised to discontinue use immediately and contact their physician at the first sign of a rash or allergic reaction.
- Patients should be told that although it is common to feel better early in the course of the therapy, the medication should be taken exactly as directed. Skipping doses or not completing the full course of therapy may (1) decrease the effectiveness of the immediate treatment and (2) increase the likelihood that bacteria will develop resistance and will not be treatable by Besivance or other antibacterial drugs in the future.
- Patients should be advised not to wear contact lenses if they have signs or symptoms of bacterial conjunctivitis or during the course of therapy with Besivance.
- Patients should be advised to thoroughly wash hands prior to using Besivance.
- Patients should be instructed to invert closed bottle (upside down) and shake once before each use. Remove cap with bottle still in the inverted position. Tilt head back, and with bottle inverted, gently squeeze bottle to instill one drop into the affected eye(s).
- Manufactured by: Bausch & Lomb Incorporated
Tampa, Florida 33637
Besivance® is a registered trademark of Bausch & Lomb Incorporated.
©Bausch & Lomb Incorporated
U.S. Patent Nos. 6,685,958; 6,699,492; 5,447,926
- DuraSite is a trademark of InSite Vision Incorporated
9142605 (flat)
9142705 (folded)
# Precautions with Alcohol
- Alcohol-Besifloxacin interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
- BESIVANCE®[1]
# Look-Alike Drug Names
There is limited information regarding Besifloxacin Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price | https://www.wikidoc.org/index.php/Besifloxacin | |
8aabd45d139905415bb80462178831734ca6c0a9 | wikidoc | Bestrophin 1 | Bestrophin 1
Bestrophin-1 (Best1) is a protein that, in humans, is encoded by the BEST1 gene (PDB IDs: 5T5N, 4RDQ).
The bestrophin family of proteins comprises four evolutionarily related genes (BEST1, BEST2, BEST3, and BEST4) that code for integral membrane proteins. This family was first identified in humans by linking a BEST1 mutation with Best vitelliform macular dystrophy (BVMD). Mutations in the BEST1 gene have been identified as the primary cause of at least five different degenerative retinal diseases.
The bestrophins are an ancient family of structurally conserved proteins that have been identified in nearly every organism studied from bacteria to humans. In humans, they function as calcium-activated anion channels, each of which has a unique tissue distribution throughout the body. Specifically, the BEST1 gene on chromosome 11q13 encodes the Bestrophin-1 protein in humans whose expression is highest in the retina.
# Structure
## Gene
The bestrophin genes share a conserved gene structure, with almost identical sizes of the 8 RFP-TM domain-encoding exons and highly conserved exon-intron boundaries. Each of the four bestrophin genes has a unique 3-prime end of variable length.
BEST1 has been shown by two independent studies to be regulated by Microphthalmia-associated transcription factor.
## Protein
Bestrophin-1 is an integral membrane protein found primarily in the retinal pigment epithelium (RPE) of the eye. Within the RPE layer, it is mainly located on the basolateral plasma membrane. Protein crystallization structures indicate this protein's primary ion channel function as well as its calcium regulatory capabilities. Bestrophin-1 consists of 585 amino acids and both N- and the C-termini are located within the cell.
The structure of Best1 consists of five identical subunits that each span the membrane four times and form a continuous, funnel-shaped pore via the second transmembrane domain containing a high content of aromatic residues, including an invariant arg-phe-pro (RFP) motif. The pore is lined with various nonpolar, hydrophobic amino acids. Both the structure and the composition of the pore help to ensure that only small anions are able to move completely through the channel. The channel acts as two funnels working together in tandem. It begins with a semi-selective, narrow entryway for anions, and then opens to a larger, positively charged area which then leads to a narrower pathway that further limits the size of anions passing through the pore. A calcium clasp acts as a belting mechanism around the larger, middle section of the channel. Calcium ions control the opening and closing of the channel due to conformational changes caused by calcium binding at the C-terminus directly following the last transmembrane domain.
# Tissue and subcellular distribution
The location of expression of the BEST1 gene is essential for protein functioning and mislocalization is often connected to a variety of retinal degenerative diseases. The BEST1 gene expresses the Best1 protein primarily in the cytosol of the retinal pigment epithelium. The protein is typically contained in vesicles near the cellular membrane. There is also research to support that the Best1 protein is localized and produced in the endoplasmic reticulum (intracellular organelle involved in protein and lipid synthesis). Best1 is typically expressed with other proteins also synthesized in the endoplasmic reticulum, such as calreticulin, calnexin and Stim-1. Calcium ion involvement in the countertransport of chloride ions also supports the idea that Best1 is involved in forming calcium stores within the cell.
# Function
Best1 primarily functions as an intracellular calcium-activated chloride channel on the cellular membrane that is not voltage-dependent. More recently Best1 has been shown to act as a volume-regulating anion channel.
# Diseases
## Best’s vitelliform macular dystrophy (BVMD)
Best’s vitelliform macular dystrophy (BVMD) is one of the most common Best1-associated diseases. BVMD typically becomes noticeable in children and is characterized by the buildup of lipofuscin (lipid residuals) lesions in the eye. Diagnosis normally follows an abnormal electrooculogram in which decreased activation of calcium channels in the basolateral membrane of the retinal pigment epithelium becomes apparent. A mutation in the BEST1 gene leads to a loss of channel function and eventually retinal degeneration. Although BVMD is an autosomal dominant form of macular dystrophy, expressivity varies within and between affected families, although the overwhelming majority of affected families are of northern European descent. Typically, people with this condition experience five progressively worsening stages, though timing and severity vary greatly. BVMD is often caused by single missense mutations; however, amino acid deletions have also been identified. A loss of function of the Best1 chloride channel could likely explain some of the most common issues associated with BVMD: an inability to regulate intracellular ion concentrations and overall cell volume. To date, over 100 disease-causing mutations have been related to BVMD as well as a number of other degenerative retinal diseases.
## Adult-onset vitelliform macular dystrophy (AVMD)
Adult-onset vitelliform macular dystrophy (AVMD) consists of lesions on the retina similar to those of BVMD. However, the cause is not as well defined as in BVMD. The inability to diagnose AVMD via genetic testing makes differentiating between AVMD and pattern dystrophy difficult. It is also unknown whether there is truly a clinical difference between AVMD caused by BEST1 mutations and AVMD caused by PRPH2 mutations. AVMD usually involves less vision loss than BVMD and cases do not usually run in families.
## Autosomal recessive bestrophinopathy (ARB)
Autosomal recessive bestrophinopathy (ARB) was first identified in 2008. People with ARB demonstrate a decrease in vision during the first ten years of life. Parents and family members typically show no abnormalities as the disease is autosomal recessive, indicating that both alleles of the BEST1 gene must be mutated. Vitelliform lesions are often present and some cases involve cystoid macular edema. In addition, other complications have been observed. Vision decreases slowly over time, although rates of decline vary. Mutations causing ARB range from missense mutations to single base mutations in non-coding regions.
## Autosomal dominant vitreoretinochoroidopathy
Autosomal dominant vitreoretinochoroidopathy was first identified in 1982 and presents in both eyes with decreased peripheral vision due to excessive fluid and changes in retinal pigmentation. Early-onset cataracts are also likely.
## Retinitis pigmentosa (RP)
Retinitis pigmentosa was first described in relation to the BEST1 gene in 2009 and was found to be associated with four different missense mutations in the BEST1 gene in people. All affected individuals experience a diminished response to light within their retina and may have changes in pigmentation, pale optic discs, fluid accumulation and decreased visual acuity.
None of the diseases above has any known treatment or cure. However, as of 2017, researchers are working to develop treatments using stem cell transplants of the retinal pigment epithelium. | Bestrophin 1
Bestrophin-1 (Best1) is a protein that, in humans, is encoded by the BEST1 gene (PDB IDs: 5T5N, 4RDQ).[1]
The bestrophin family of proteins comprises four evolutionarily related genes (BEST1, BEST2, BEST3, and BEST4) that code for integral membrane proteins.[2] This family was first identified in humans by linking a BEST1 mutation with Best vitelliform macular dystrophy (BVMD).[3] Mutations in the BEST1 gene have been identified as the primary cause of at least five different degenerative retinal diseases.[3]
The bestrophins are an ancient family of structurally conserved proteins that have been identified in nearly every organism studied from bacteria to humans. In humans, they function as calcium-activated anion channels, each of which has a unique tissue distribution throughout the body. Specifically, the BEST1 gene on chromosome 11q13 encodes the Bestrophin-1 protein in humans whose expression is highest in the retina.[3]
# Structure
## Gene
The bestrophin genes share a conserved gene structure, with almost identical sizes of the 8 RFP-TM domain-encoding exons and highly conserved exon-intron boundaries. Each of the four bestrophin genes has a unique 3-prime end of variable length.[1]
BEST1 has been shown by two independent studies to be regulated by Microphthalmia-associated transcription factor.[4][5]
## Protein
Bestrophin-1 is an integral membrane protein found primarily in the retinal pigment epithelium (RPE) of the eye.[6] Within the RPE layer, it is mainly located on the basolateral plasma membrane. Protein crystallization structures indicate this protein's primary ion channel function as well as its calcium regulatory capabilities.[6][3] Bestrophin-1 consists of 585 amino acids and both N- and the C-termini are located within the cell.
The structure of Best1 consists of five identical subunits that each span the membrane four times and form a continuous, funnel-shaped pore via the second transmembrane domain containing a high content of aromatic residues, including an invariant arg-phe-pro (RFP) motif.[3][7][8] The pore is lined with various nonpolar, hydrophobic amino acids. Both the structure and the composition of the pore help to ensure that only small anions are able to move completely through the channel. The channel acts as two funnels working together in tandem. It begins with a semi-selective, narrow entryway for anions, and then opens to a larger, positively charged area which then leads to a narrower pathway that further limits the size of anions passing through the pore. A calcium clasp acts as a belting mechanism around the larger, middle section of the channel. Calcium ions control the opening and closing of the channel due to conformational changes caused by calcium binding at the C-terminus directly following the last transmembrane domain.[3][8]
# Tissue and subcellular distribution
The location of expression of the BEST1 gene is essential for protein functioning and mislocalization is often connected to a variety of retinal degenerative diseases. The BEST1 gene expresses the Best1 protein primarily in the cytosol of the retinal pigment epithelium. The protein is typically contained in vesicles near the cellular membrane. There is also research to support that the Best1 protein is localized and produced in the endoplasmic reticulum (intracellular organelle involved in protein and lipid synthesis). Best1 is typically expressed with other proteins also synthesized in the endoplasmic reticulum, such as calreticulin, calnexin and Stim-1. Calcium ion involvement in the countertransport of chloride ions also supports the idea that Best1 is involved in forming calcium stores within the cell.[6]
# Function
Best1 primarily functions as an intracellular calcium-activated chloride channel on the cellular membrane that is not voltage-dependent.[2][6][8] More recently Best1 has been shown to act as a volume-regulating anion channel.
# Diseases
## Best’s vitelliform macular dystrophy (BVMD)
Best’s vitelliform macular dystrophy (BVMD) is one of the most common Best1-associated diseases. BVMD typically becomes noticeable in children and is characterized by the buildup of lipofuscin (lipid residuals) lesions in the eye.[2][6] Diagnosis normally follows an abnormal electrooculogram in which decreased activation of calcium channels in the basolateral membrane of the retinal pigment epithelium becomes apparent. A mutation in the BEST1 gene leads to a loss of channel function and eventually retinal degeneration.[6] Although BVMD is an autosomal dominant form of macular dystrophy, expressivity varies within and between affected families, although the overwhelming majority of affected families are of northern European descent.[3][6] Typically, people with this condition experience five progressively worsening stages, though timing and severity vary greatly. BVMD is often caused by single missense mutations; however, amino acid deletions have also been identified.[3] A loss of function of the Best1 chloride channel could likely explain some of the most common issues associated with BVMD: an inability to regulate intracellular ion concentrations and overall cell volume.[9] To date, over 100 disease-causing mutations have been related to BVMD as well as a number of other degenerative retinal diseases.[8]
## Adult-onset vitelliform macular dystrophy (AVMD)
Adult-onset vitelliform macular dystrophy (AVMD) consists of lesions on the retina similar to those of BVMD. However, the cause is not as well defined as in BVMD. The inability to diagnose AVMD via genetic testing makes differentiating between AVMD and pattern dystrophy difficult. It is also unknown whether there is truly a clinical difference between AVMD caused by BEST1 mutations and AVMD caused by PRPH2 mutations. AVMD usually involves less vision loss than BVMD and cases do not usually run in families.[3]
## Autosomal recessive bestrophinopathy (ARB)
Autosomal recessive bestrophinopathy (ARB) was first identified in 2008. People with ARB demonstrate a decrease in vision during the first ten years of life. Parents and family members typically show no abnormalities as the disease is autosomal recessive, indicating that both alleles of the BEST1 gene must be mutated. Vitelliform lesions are often present and some cases involve cystoid macular edema. In addition, other complications have been observed. Vision decreases slowly over time, although rates of decline vary. Mutations causing ARB range from missense mutations to single base mutations in non-coding regions.[3]
## Autosomal dominant vitreoretinochoroidopathy
Autosomal dominant vitreoretinochoroidopathy was first identified in 1982 and presents in both eyes with decreased peripheral vision due to excessive fluid and changes in retinal pigmentation. Early-onset cataracts are also likely.[3]
## Retinitis pigmentosa (RP)
Retinitis pigmentosa was first described in relation to the BEST1 gene in 2009 and was found to be associated with four different missense mutations in the BEST1 gene in people. All affected individuals experience a diminished response to light within their retina and may have changes in pigmentation, pale optic discs, fluid accumulation and decreased visual acuity.[3]
None of the diseases above has any known treatment or cure. However, as of 2017, researchers are working to develop treatments using stem cell transplants of the retinal pigment epithelium.[3] | https://www.wikidoc.org/index.php/Bestrophin_1 |
244bdd68788a30dd55cf17ffec2f5b2184e6cf16 | wikidoc | Beta-alanine | Beta-alanine
In biochemistry, beta-alanine (or β-alanine) is the only naturally occurring beta amino acid, that is, an amino acid in which the amino group is at the β-position relative to the carboxylate group (i.e., two atoms away). The IUPAC name for beta-alanine is 3-aminopropionic acid. Unlike its normal counterpart, L-α-alanine, beta-alanine has no chiral center.
Beta-alanine is not used in the biosynthesis of any major proteins or enzymes. It is formed in vivo by the degradation of dihydrouracil and carnosine. It is a component of the naturally occurring peptides carnosine and anserine and also of pantothenic acid (vitamin B5) which itself is a component of coenzyme A. Under normal conditions, beta-alanine is metabolized into acetic acid.
Beta-alanine is the rate-limiting precursor of carnosine, which is to say carnosine levels are limited by the amount of available beta-alanine. Supplementation with beta-alanine has been shown to increase the concentration of carnosine in muscles, decrease fatigue in athletes and increase total muscular work done.
Typically studies have used supplementing strategies of multiple doses of 400 mg or 800 mg, administered at regular intervals for up to eight hours, over periods ranging from 4 to 10 weeks. After a 10 week supplementing strategy, the reported increase in intramuscular carnosine content was an average of 80.1% (range 18 to 205%).
L-Histidine, with a pKa of 6.1, is a relatively weak buffer over the physiological intramuscular pH range. However, when bound to other amino acids, this pKa increases to nearer 6.8-7.0. In particular, when bound to beta-alanine the pKa value is 6.83, making this a very efficient intramuscular buffer. Furthermore, because of the position of the beta amino group, beta-alanine dipeptides are not incorporated into proteins and thus can be stored at relatively high concentrations (millimolar). Occurring at 17-25 mmol/kg (dry muscle), carnosine (beta-alanyl-L-histidine) is an important intramuscular buffer, constituting 10-20% of the total buffering capacity in type I and II muscle fibres.
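The buffering argument above can be made concrete with the Henderson-Hasselbalch relation: a buffer works best near its pKa, where comparable amounts of the protonated and unprotonated forms coexist. The sketch below compares carnosine (pKa 6.83) with free L-histidine (pKa 6.1) across an assumed resting-to-fatigued intramuscular pH range of 7.1 to 6.5; those pH endpoints are illustrative textbook values rather than figures from this article.

```python
# Henderson-Hasselbalch: pH = pKa + log10([unprotonated]/[protonated])
# => fraction protonated = 1 / (1 + 10 ** (pH - pKa))
pKa_carnosine = 6.83   # imidazole pKa of the histidine residue in carnosine
pKa_free_his = 6.1     # free L-histidine, for comparison

for pH in (7.1, 7.0, 6.5):   # assumed resting-to-fatigued intramuscular pH
    for name, pKa in (("carnosine", pKa_carnosine), ("free histidine", pKa_free_his)):
        protonated = 1 / (1 + 10 ** (pH - pKa))
        print(f"pH {pH}: {name:<14} {protonated:5.1%} protonated")
# Carnosine stays roughly between 35% and 68% protonated over this range (good buffering),
# whereas free histidine is already mostly unprotonated at pH 7.1.
```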
Beta-alanine, provided in solution or as powder in gelatine capsules, however, causes paraesthesia when ingested in amounts above 10 mg per kg body weight (bwt). This is variable between individuals. Symptoms may be experienced by some individuals as mild even at 10 mg per kg bwt, in a majority as significant at 20 mg per kg bwt, and severe at 40 mg per kg bwt. However, an equivalent amount (equimolar) to 40 mg per kg bwt, ingested in the form of histidine containing dipeptides in chicken broth extract, did not cause paraesthesia.
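To put the per-kilogram thresholds above into absolute amounts, the sketch below converts them for an assumed 70 kg adult and compares them with the 400-800 mg single doses used in the supplementation studies; the 70 kg body weight is an assumption for illustration only.

```python
body_weight_kg = 70   # assumed adult body weight
thresholds_mg_per_kg = {"mild": 10, "significant": 20, "severe": 40}

for label, mg_per_kg in thresholds_mg_per_kg.items():
    print(f"{label:>11} paraesthesia threshold: {mg_per_kg * body_weight_kg} mg")

for study_dose_mg in (400, 800):
    print(f"study dose {study_dose_mg} mg ≈ {study_dose_mg / body_weight_kg:.1f} mg/kg")
# 700 / 1400 / 2800 mg; the 400-800 mg study doses work out to ~5.7-11.4 mg/kg,
# i.e. around or below the level at which mild symptoms may begin.
```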
It is probable that the paraesthesia, a form of neuropathic pain, results from high peak blood-plasma concentrations of beta-alanine, since greater quantities, ingested in the form of the beta-alanine / histidine (or methylhistidine) containing dipeptides (i.e. carnosine and anserine) in meat, do not cause the same symptoms. In this case the beta-alanine absorption profile is flattened but sustained for a longer period of time, whereas the beta-alanine samples in the studies were administered as gelatine capsules containing powder. This resulted in plasma concentrations rising rapidly, peaking within 30 to 45 minutes, and being eliminated after 90 to 120 minutes. The paraesthesia caused is no indication of efficacy, since the published studies undertaken so far have utilised doses of 400 mg or 800 mg at a time to avoid the paraesthesia. Furthermore, excretion of beta-alanine in urine accounted for 0.60% (±0.09), 1.50% (±0.40) and 3.64% (±0.47) of the administered doses of 10, 20, or 40 mg per kg body weight, indicating greater losses with increasing dosage.
Even though much weaker than glycine (and thus with a debated role as a physiological transmitter), beta-alanine is, after the cognate ligand glycine itself, the next most active agonist for strychnine-sensitive inhibitory glycine receptors (GlyRs) (the agonist order: glycine >> beta-alanine > taurine >> alanine, L-serine > proline). | Beta-alanine
In biochemistry, beta-alanine (or β-alanine) is the only naturally occurring beta amino acid, that is, an amino acid in which the amino group is at the β-position relative to the carboxylate group (i.e., two atoms away). The IUPAC name for beta-alanine is 3-aminopropionic acid. Unlike its normal counterpart, L-α-alanine, beta-alanine has no chiral center.
Beta-alanine is not used in the biosynthesis of any major proteins or enzymes. It is formed in vivo by the degradation of dihydrouracil and carnosine. It is a component of the naturally occurring peptides carnosine and anserine and also of pantothenic acid (vitamin B5) which itself is a component of coenzyme A. Under normal conditions, beta-alanine is metabolized into acetic acid.
Beta-alanine is the rate-limiting precursor of carnosine, which is to say carnosine levels are limited by the amount of available beta-alanine. Supplementation with beta-alanine has been shown to increase the concentration of carnosine in muscles, decrease fatigue in athletes and increase total muscular work done.[1][2]
Typically studies have used supplementing strategies of multiple doses of 400 mg or 800 mg, administered at regular intervals for up to eight hours, over periods ranging from 4 to 10 weeks.[2][3] After a 10 week supplementing strategy, the reported increase in intramuscular carnosine content was an average of 80.1% (range 18 to 205%).[2]
L-Histidine, with a pKa of 6.1, is a relatively weak buffer over the physiological intramuscular pH range. However, when bound to other amino acids, this pKa increases to nearer 6.8-7.0. In particular, when bound to beta-alanine the pKa value is 6.83,[4] making this a very efficient intramuscular buffer. Furthermore, because of the position of the beta amino group, beta-alanine dipeptides are not incorporated into proteins and thus can be stored at relatively high concentrations (millimolar). Occurring at 17-25 mmol/kg (dry muscle),[5] carnosine (beta-alanyl-L-histidine) is an important intramuscular buffer, constituting 10-20% of the total buffering capacity in type I and II muscle fibres.
Beta-alanine, provided in solution or as powder in gelatine capsules, however, causes paraesthesia when ingested in amounts above 10 mg per kg body weight (bwt).[3] This is variable between individuals. Symptoms may be experienced by some individuals as mild even at 10 mg per kg bwt, in a majority as significant at 20 mg per kg bwt, and severe at 40 mg per kg bwt.[3] However, an equivalent amount (equimolar) to 40 mg per kg bwt, ingested in the form of histidine containing dipeptides in chicken broth extract, did not cause paraesthesia.[3]
It is probable that the paraesthesia, a form of neuropathic pain, results from high peak blood-plasma concentrations of beta-alanine, since greater quantities, ingested in the form of the beta-alanine / histidine (or methylhistidine) containing dipeptides (i.e. carnosine and anserine) in meat, do not cause the same symptoms. In this case the beta-alanine absorption profile is flattened but sustained for a longer period of time,[3] whereas the beta-alanine samples in the studies were administered as gelatine capsules containing powder. This resulted in plasma concentrations rising rapidly, peaking within 30 to 45 minutes, and being eliminated after 90 to 120 minutes. The paraesthesia caused is no indication of efficacy, since the published studies undertaken so far have utilised doses of 400 mg or 800 mg at a time to avoid the paraesthesia. Furthermore, excretion of beta-alanine in urine accounted for 0.60% (±0.09), 1.50% (±0.40) and 3.64% (±0.47) of the administered doses of 10, 20, or 40 mg per kg body weight,[3] indicating greater losses with increasing dosage.
Even though much weaker than glycine (and thus with a debated role as a physiological transmitter), beta-alanine is, after the cognate ligand glycine itself, the next most active agonist for strychnine-sensitive inhibitory glycine receptors (GlyRs) (the agonist order: glycine >> beta-alanine > taurine >> alanine, L-serine > proline).[6] | https://www.wikidoc.org/index.php/Beta-alanine |
a4362d9eda267aa53e6130ac5e46088d5929b8b5 | wikidoc | Beta-catenin | Beta-catenin
Catenin beta-1, also known as β-catenin, is a protein that in humans is encoded by the CTNNB1 gene.
β-catenin is a dual-function protein, involved in the regulation and coordination of cell–cell adhesion and gene transcription. In Drosophila, the homologous protein is called armadillo. β-catenin is a subunit of the cadherin protein complex and acts as an intracellular signal transducer in the Wnt signaling pathway. It is a member of the catenin protein family and homologous to γ-catenin, also known as plakoglobin. Beta-catenin is widely expressed in many tissues. In cardiac muscle, beta-catenin localizes to adherens junctions in intercalated disc structures, which are critical for electrical and mechanical coupling between adjacent cardiomyocytes.
Mutations and overexpression of β-catenin are associated with many cancers, including hepatocellular carcinoma, colorectal carcinoma, lung cancer, malignant breast tumors, ovarian and endometrial cancer. Alterations in the localization and expression levels of beta-catenin have been associated with various forms of heart disease, including dilated cardiomyopathy. β-catenin is regulated and destroyed by the beta-catenin destruction complex, and in particular by the adenomatous polyposis coli (APC) protein, encoded by the tumour-suppressing APC gene. Therefore, genetic mutation of the APC gene is also strongly linked to cancers, and in particular colorectal cancer resulting from familial adenomatous polyposis (FAP).
# Discovery
Beta-catenin was initially discovered in the early 1990s as a component of a mammalian cell adhesion complex: a protein responsible for cytoplasmatic anchoring of cadherins. But very soon, it was realized that the Drosophila protein armadillo – implicated in mediating the morphogenic effects of Wingless/Wnt – is homologous to the mammalian β-catenin, not just in structure but also in function. Thus beta-catenin became one of the very first examples of moonlighting: a protein performing more than one radically different cellular function.
# Structure
## Protein structure
The core of beta-catenin consists of several very characteristic repeats, each approximately 40 amino acids long. Termed armadillo repeats, all these elements fold together into a single, rigid protein domain with an elongated shape – called armadillo (ARM) domain. An average armadillo repeat is composed of three alpha helices. The first repeat of β-catenin (near the N-terminus) is slightly different from the others – as it has an elongated helix with a kink, formed by the fusion of helices 1 and 2. Due to the complex shape of individual repeats, the whole ARM domain is not a straight rod: it possesses a slight curvature, so that an outer (convex) and an inner (concave) surface is formed. This inner surface serves as a ligand-binding site for the various interaction partners of the ARM domains.
The segments N-terminal and far C-terminal to the ARM domain do not adopt any structure in solution by themselves. Yet these intrinsically disordered regions play a crucial role in beta-catenin function. The N-terminal disordered region contains a conserved short linear motif responsible for binding of TrCP1 (also known as β-TrCP) E3 ubiquitin ligase – but only when it is phosphorylated. Degradation of β-catenin is thus mediated by this N-terminal segment. The C-terminal region, on the other hand, is a strong transactivator when recruited onto DNA. This segment is not fully disordered: part of the C-terminal extension forms a stable helix that packs against the ARM domain, but may also engage separate binding partners. This small structural element (HelixC) caps the C-terminal end of the ARM domain, shielding its hydrophobic residues. HelixC is not necessary for beta-catenin to function in cell-cell adhesion. On the other hand, it is required for Wnt signaling: possibly to recruit various coactivators, such as 14-3-3zeta. Yet its exact partners among the general transcription complexes are still unknown. Notably, the C-terminal segment of β-catenin can mimic the effects of the entire Wnt pathway if artificially fused to the DNA binding domain of LEF1 transcription factor.
Plakoglobin (also called gamma-catenin) has a strikingly similar architecture to that of beta-catenin. Not only do their ARM domains resemble each other in both architecture and ligand-binding capacity, but the N-terminal β-TrCP-binding motif is also conserved in plakoglobin, implying common ancestry and shared regulation with β-catenin. However, plakoglobin is a very weak transactivator when bound to DNA – this is probably caused by the divergence of their C-terminal sequences (plakoglobin appears to lack the transactivator motifs, and thus inhibits the Wnt pathway target genes instead of activating them).
## Partners binding to the armadillo domain
As sketched above, the ARM domain of beta-catenin acts as a platform to which specific linear motifs may bind. Located in structurally diverse partners, the β-catenin binding motifs are typically disordered on their own, and typically adopt a rigid structure upon ARM domain engagement – as seen for short linear motifs. However, β-catenin interacting motifs also have a number of peculiar characteristics. First, they may reach or even surpass 30 amino acids in length, and contact the ARM domain over an unusually large surface area. Another unusual feature of these motifs is their frequently high degree of phosphorylation. Such Ser/Thr phosphorylation events greatly enhance the binding of many β-catenin associating motifs to the ARM domain.
The structure of beta-catenin in complex with the catenin-binding domain of the transcriptional transactivation partner TCF provided the initial structural roadmap for how many binding partners of beta-catenin may form interactions. This structure demonstrated how the otherwise disordered N-terminus of TCF adopted what appeared to be a rigid conformation, with the binding motif spanning many beta-catenin repeats. Relatively strong charged interaction "hot spots" were defined (predicted, and later verified, to be conserved for the beta-catenin/E-cadherin interaction), as well as hydrophobic regions deemed important in the overall mode of binding and considered potential targets for therapeutic small-molecule inhibitors against certain cancers. Subsequent studies demonstrated another peculiar characteristic: plasticity in the binding of the TCF N-terminus to beta-catenin.
The cytoplasmic tail of E-cadherin contacts the ARM domain in the same canonical fashion. The scaffold protein axin (present as two closely related paralogs, axin 1 and axin 2) contains a similar interaction motif on its long, disordered middle segment. Although one molecule of axin contains only a single β-catenin recruitment motif, its partner the Adenomatous Polyposis Coli (APC) protein contains 11 such motifs in tandem arrangement per protomer and is thus capable of interacting with several β-catenin molecules at once. Since the surface of the ARM domain can typically accommodate only one peptide motif at any given time, all these proteins compete for the same cellular pool of β-catenin molecules. This competition is the key to understanding how the Wnt signaling pathway works.
However, this "main" binding site on the ARM domain of β-catenin is by no means the only one. The first helices of the ARM domain form an additional, special protein-protein interaction pocket: this can accommodate a helix-forming linear motif found in the coactivator BCL9 (or the closely related BCL9L) – an important protein involved in Wnt signaling. Although the precise details are much less clear, it appears that the same site is used by alpha-catenin when beta-catenin is localized to the adherens junctions. Because this pocket is distinct from the ARM domain's "main" binding site, there is no competition between alpha-catenin and E-cadherin, or between TCF1 and BCL9, respectively. On the other hand, BCL9 and BCL9L must compete with α-catenin to access β-catenin molecules.
# Function
## Regulation of degradation through phosphorylation
The cellular level of beta-catenin is mostly controlled by its ubiquitination and proteasomal degradation. The E3 ubiquitin ligase TrCP1 (also known as β-TrCP) recognizes β-catenin as its substrate through a short linear motif on the disordered N-terminus. However, this motif (Asp-Ser-Gly-Ile-His-Ser) of β-catenin needs to be phosphorylated on its two serines in order to bind β-TrCP. Phosphorylation of the motif is performed by Glycogen Synthase Kinase 3 alpha and beta (GSK3α and GSK3β). GSK3s are constitutively active enzymes implicated in several important regulatory processes. There is one requirement, though: substrates of GSK3 need to be pre-phosphorylated four amino acids downstream (C-terminally) of the actual target site, so GSK3 also requires a "priming kinase" for its activity. In the case of beta-catenin, the most important priming kinase is Casein Kinase I (CKI). Once a serine/threonine-rich substrate has been "primed", GSK3 can "walk" across it in the C-terminal to N-terminal direction, phosphorylating every 4th serine or threonine residue in a row. This process results in dual phosphorylation of the aforementioned β-TrCP recognition motif as well.
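To make the priming-and-walking logic concrete, the minimal sketch below (illustrative only; the function names and the toy sequence are invented for this example) scans a sequence for a β-TrCP-type degron (Asp-Ser-Gly-x-x-Ser) and, starting from a primed Ser/Thr, lists the positions a GSK3-like kinase would phosphorylate in steps of four residues toward the N-terminus. The toy sequence places the key residues at the positions reported for human β-catenin (degron at residues 32-37; priming site S45, followed by T41, S37 and S33), but the surrounding filler residues are arbitrary.

```python
import re

def find_degrons(seq: str):
    """Return 1-based start positions of Asp-Ser-Gly-x-x-Ser (beta-TrCP-type) degrons."""
    return [m.start() + 1 for m in re.finditer(r"DSG..S", seq)]

def gsk3_walk(seq: str, priming_site: int):
    """Starting from a CKI-primed Ser/Thr (1-based position), step toward the
    N-terminus in increments of 4 and collect positions that are Ser or Thr,
    mimicking the processive GSK3 consensus (S/T-x-x-x-pS/pT)."""
    sites, pos = [], priming_site - 4
    while pos >= 1 and seq[pos - 1] in "ST":
        sites.append(pos)
        pos -= 4
    return sites

# Toy N-terminal fragment: degron "DSGIHS" at residues 32-37, priming Ser at 45.
# Residues outside these positions are arbitrary filler.
demo = "MAAQAELMELDMAMEPDRKAAVAHWQQQAYL" + "DSGIHS" + "GAAT" + "APAS" + "LSG"

print(find_degrons(demo))                # -> [32]
print(gsk3_walk(demo, priming_site=45))  # -> [41, 37, 33]
```

Real kinase specificity depends on far more than this positional rule, so the sketch is only meant to illustrate the priming and walking geometry described above.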
## The beta-catenin destruction complex
For GSK3 to be a highly effective kinase on a substrate, pre-phosphorylation is not enough. There is one additional requirement: Similar to the mitogen-activated protein kinases (MAPKs), substrates need to associate with this enzyme through high-affinity docking motifs. Beta-catenin contains no such motifs, but a special protein does: axin. What is more, its GSK3 docking motif is directly adjacent to a β-catenin binding motif. This way, axin acts as a true scaffold protein, bringing an enzyme (GSK3) together with its substrate (β-catenin) into close physical proximity.
But even axin does not act alone. Through its N-terminal regulator of G-protein signaling (RGS) domain, it recruits the adenomatous polyposis coli (APC) protein. APC is like a huge "Christmas tree": with a multitude of β-catenin binding motifs (one APC molecule alone possesses 11 such motifs), it may collect as many β-catenin molecules as possible. APC can interact with multiple axin molecules at the same time, as it has three SAMP motifs (Ser-Ala-Met-Pro) that bind the RGS domains found in axin. In addition, axin also has the potential to oligomerize through its C-terminal DIX domain. The result is a huge, multimeric protein assembly dedicated to β-catenin phosphorylation. This complex is usually called the beta-catenin destruction complex, although it is distinct from the proteasome machinery actually responsible for β-catenin degradation. It only marks β-catenin molecules for subsequent destruction.
## Wnt signaling and the regulation of destruction
In resting cells, axin molecules oligomerize with each other through their C-terminal DIX domains, which have two binding interfaces. Thus they can build linear oligomers or even polymers inside the cytoplasm of cells. DIX domains are unique: the only other proteins known to have a DIX domain are Dishevelled and DIXDC1. (The single Dsh protein of Drosophila corresponds to three paralogous genes, Dvl1, Dvl2 and Dvl3, in mammals.) Dsh associates with the cytoplasmic regions of Frizzled receptors through its PDZ and DEP domains. When a Wnt molecule binds to Frizzled, it induces a poorly understood cascade of events that results in the exposure of dishevelled's DIX domain and the creation of a perfect binding site for axin. Axin is then titrated away from its oligomeric assemblies – the β-catenin destruction complex – by Dsh. Once bound to the receptor complex, axin is rendered incompetent for β-catenin binding and GSK3 activity. Importantly, the cytoplasmic segments of the Frizzled-associated LRP5 and LRP6 proteins contain GSK3 pseudo-substrate sequences (Pro-Pro-Pro-Ser-Pro-x-Ser), appropriately "primed" (pre-phosphorylated) by CKI, as if they were true substrates of GSK3. These false target sites strongly inhibit GSK3 activity in a competitive manner. In this way, receptor-bound axin can no longer mediate the phosphorylation of β-catenin. Since beta-catenin is no longer marked for destruction but continues to be produced, its concentration will increase. Once β-catenin levels rise high enough to saturate all binding sites in the cytoplasm, it also translocates into the nucleus. Upon engaging the transcription factors LEF1, TCF1, TCF2 or TCF3, β-catenin forces them to disengage their previous partners, the Groucho proteins. Unlike Groucho proteins, which recruit transcriptional repressors (e.g. histone-lysine methyltransferases), beta-catenin binds transcriptional activators, switching on its target genes.
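Here "competitive" has its usual enzyme-kinetics meaning: the primed pseudo-substrate occupies the GSK3 active site and raises the apparent Michaelis constant for genuine substrates without changing the maximal rate. As a reminder, the generic textbook rate law for competitive inhibition (shown for orientation only, not a model fitted to GSK3 specifically) is

$$ v = \frac{V_{\max}\,[S]}{K_m\left(1 + \frac{[I]}{K_i}\right) + [S]} $$

where [S] is the genuine substrate (here, axin-bound β-catenin), [I] is the concentration of pseudo-substrate sites and K_i their inhibition constant; the larger [I]/K_i becomes, the more β-catenin phosphorylation is suppressed.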
## Role in cell-cell adhesion
Cell–cell adhesion complexes are essential for the formation of complex animal tissues. β-catenin is part of the protein complex that forms the so-called adherens junctions. These cell-cell adhesion complexes are necessary for the creation and maintenance of epithelial cell layers and barriers. As a component of the complex, β-catenin can regulate cell growth and adhesion between cells. It may also be responsible for transmitting the contact inhibition signal that causes cells to stop dividing once the epithelial sheet is complete. The E-cadherin – β-catenin – α-catenin complex is weakly associated with actin filaments. Adherens junctions thus form a dynamic, rather than a stable, link to the actin cytoskeleton.
At the heart of the adherens junction are the cadherin proteins. Cadherins form the cell-cell junctional structures known as adherens junctions, as well as the desmosomes. Cadherins are capable of homophilic interactions through their extracellular cadherin repeat domains in a Ca2+-dependent manner; this can hold adjacent epithelial cells together. While in the adherens junction, cadherins recruit β-catenin molecules onto their intracellular regions. β-catenin, in turn, associates with another important protein, α-catenin, which directly binds the actin filaments. This is possible because α-catenin and cadherins bind at distinct sites on β-catenin. The β-catenin – α-catenin complex can thus physically bridge cadherins with the actin cytoskeleton. Organization of the cadherin–catenin complex is additionally regulated through phosphorylation and endocytosis of its components.
## Roles in development
Beta-catenin has a central role in directing several developmental processes, as it can directly bind transcription factors and be regulated by a diffusible extracellular substance: Wnt. It acts upon early embryos to induce entire body regions, as well as individual cells in later stages of development. It also regulates physiological regeneration processes.
### Early embryonic patterning
Wnt signaling and beta-catenin-dependent gene expression play a critical role during the formation of different body regions in the early embryo. Experimentally modified embryos that do not express this protein fail to develop mesoderm and initiate gastrulation. During the blastula and gastrula stages, the Wnt, BMP and FGF pathways induce antero-posterior axis formation and regulate the precise placement of the primitive streak (gastrulation and mesoderm formation) as well as the process of neurulation (central nervous system development).
In Xenopus oocytes, β-catenin is initially distributed evenly across all regions of the egg, where it is targeted for ubiquitination and degradation by the β-catenin destruction complex. Fertilization of the egg causes a rotation of the outer cortical layers, moving clusters of the Frizzled and Dsh proteins closer to the equatorial region. β-catenin will be enriched locally, under the influence of the Wnt signaling pathway, in the cells that inherit this portion of the cytoplasm. It eventually translocates to the nucleus and binds TCF3 to activate several genes that induce dorsal cell characteristics. This signaling results in a region of cells known as the grey crescent, a classical organizer of embryonic development. If this region is surgically removed from the embryo, gastrulation does not occur at all. β-catenin also plays a crucial role in the induction of the blastopore lip, which in turn initiates gastrulation. Inhibition of GSK-3 translation by injection of antisense mRNA may cause a second blastopore and a superfluous body axis to form. A similar effect can result from the overexpression of β-catenin.
### Asymmetric cell division
Beta-catenin has also been implicated in the regulation of cell fates through asymmetric cell division in the model organism C. elegans. As in Xenopus oocytes, this is essentially the result of the unequal distribution of Dsh, Frizzled, axin and APC in the cytoplasm of the mother cell.
### Stem cell renewal
One of the most important results of Wnt signaling and the elevated level of beta-catenin in certain cell types is the maintenance of pluripotency. In other cell types and developmental stages, β-catenin may promote differentiation, especially towards mesodermal cell lineages.
### Epithelial-to-mesenchymal transition
Beta-catenin also acts as a morphogen in later stages of embryonic development. Together with TGF-β, β-catenin plays an important role in inducing a morphogenic change in epithelial cells: it induces them to abandon their tight adhesion and assume a more mobile, loosely associated mesenchymal phenotype. During this process, epithelial cells lose expression of proteins like E-cadherin, Zonula occludens 1 (ZO1) and cytokeratin, while turning on the expression of vimentin, alpha smooth muscle actin (ACTA2) and fibroblast-specific protein 1 (FSP1). They also produce extracellular matrix components, such as type I collagen and fibronectin. Aberrant activation of the Wnt pathway has been implicated in pathological processes such as fibrosis and cancer. In cardiac muscle development, beta-catenin performs a biphasic role: initially, activation of Wnt/beta-catenin signaling is essential for committing mesenchymal cells to a cardiac lineage, whereas in later stages of development downregulation of beta-catenin is required.
## Involvement in cardiac physiology
In cardiac muscle, beta-catenin forms a complex with N-cadherin at adherens junctions within intercalated disc structures, which are responsible for electrical and mechanical coupling of adjacent cardiac cells. Studies in a model of adult rat ventricular cardiomyocytes have shown that the appearance and distribution of beta-catenin are spatio-temporally regulated during the redifferentiation of these cells in culture. Specifically, beta-catenin is part of a distinct complex with N-cadherin and alpha-catenin that is abundant at adherens junctions in the early stages following cardiomyocyte isolation, when cell-cell contacts are being re-formed. Beta-catenin has also been shown to form a complex with emerin in cardiomyocytes at adherens junctions within intercalated discs; this interaction depends on the presence of the GSK3-beta phosphorylation sites on beta-catenin. Knocking out emerin significantly altered beta-catenin localization and the overall intercalated disc architecture, which came to resemble a dilated cardiomyopathy phenotype.
Functions of beta-catenin have also been revealed in animal models of cardiac disease. In a guinea pig model of aortic stenosis and left ventricular hypertrophy, beta-catenin was shown to change subcellular localization from intercalated discs to the cytosol, despite no change in the overall cellular abundance of beta-catenin; vinculin showed a similar profile of change. N-cadherin showed no change, and there was no compensatory upregulation of plakoglobin at intercalated discs in the absence of beta-catenin. In a hamster model of cardiomyopathy and heart failure, cell-cell adhesions were irregular and disorganized, and expression levels of the adherens junction/intercalated disc and nuclear pools of beta-catenin were decreased. These data suggest that a loss of beta-catenin may play a role in the diseased intercalated discs that have been associated with cardiac muscle hypertrophy and heart failure. In a rat model of myocardial infarction, adenoviral gene transfer of nonphosphorylatable, constitutively active beta-catenin decreased infarct size, activated the cell cycle, and reduced apoptosis in cardiomyocytes and cardiac myofibroblasts. This was accompanied by enhanced expression of the pro-survival proteins survivin and Bcl-2 and of vascular endothelial growth factor, and by promotion of the differentiation of cardiac fibroblasts into myofibroblasts. These findings suggest that beta-catenin can promote regeneration and healing following myocardial infarction. In a spontaneously hypertensive heart failure rat model, investigators detected a shuttling of beta-catenin from the intercalated disc/sarcolemma to the nucleus, evidenced by a reduction of beta-catenin expression in the membrane protein fraction and an increase in the nuclear fraction. Additionally, they found a weakening of the association between glycogen synthase kinase-3β and beta-catenin, which may indicate altered protein stability. Overall, these results suggest that enhanced nuclear localization of beta-catenin may be important in the progression of cardiac hypertrophy.
Regarding the mechanistic role of beta-catenin in cardiac hypertrophy, transgenic mouse studies have shown somewhat conflicting results as to whether upregulation of beta-catenin is beneficial or detrimental. A more recent study, using conditional knockout mice that either lacked beta-catenin altogether or expressed a non-degradable form of beta-catenin in cardiomyocytes, offered a potential explanation for these discrepancies: there appears to be strict control over the subcellular localization of beta-catenin in cardiac muscle. Mice lacking beta-catenin had no overt phenotype in the left ventricular myocardium; however, mice harboring a stabilized form of beta-catenin developed dilated cardiomyopathy, suggesting that the temporal regulation of beta-catenin by protein degradation mechanisms is critical for its normal functioning in cardiac cells. In a mouse model harboring a knockout of plakoglobin, a desmosomal protein implicated in arrhythmogenic right ventricular cardiomyopathy, stabilization of beta-catenin was also enhanced, presumably to compensate for the loss of its plakoglobin homolog. These changes were accompanied by Akt activation and glycogen synthase kinase 3β inhibition, suggesting once again that abnormal stabilization of beta-catenin may be involved in the development of cardiomyopathy. Further studies employing a double knockout of plakoglobin and beta-catenin showed that the double-knockout animals developed cardiomyopathy, fibrosis and arrhythmias resulting in sudden cardiac death. Intercalated disc architecture was severely impaired, and connexin 43-containing gap junctions were markedly reduced. Electrocardiogram measurements captured spontaneous lethal ventricular arrhythmias in the double-knockout animals, suggesting that the two catenins, beta-catenin and plakoglobin, are critical and indispensable for mechanoelectrical coupling in cardiomyocytes.
# Clinical significance
## Role in depression
According to a study conducted at the Icahn School of Medicine at Mount Sinai and published on November 12, 2014 in the journal Nature, how effectively a given individual's brain can deal with stress – and thus that person's susceptibility to depression – depends on beta-catenin signaling in the brain. Higher beta-catenin signaling increases behavioral flexibility, whereas defective beta-catenin signaling leads to depression and reduced stress management.
## Role in cardiac disease
Altered expression profiles of beta-catenin have been associated with dilated cardiomyopathy in humans. Upregulation of beta-catenin expression has generally been observed in patients with dilated cardiomyopathy. In one study, patients with end-stage dilated cardiomyopathy showed almost doubled estrogen receptor alpha (ER-alpha) mRNA and protein levels, and the ER-alpha/beta-catenin interaction, present at intercalated discs of control, non-diseased human hearts, was lost, suggesting that the loss of this interaction at the intercalated disc may contribute to the progression of heart failure.
## Involvement in cancer
Beta-catenin is a proto-oncogene. Mutations of this gene are commonly found in a variety of cancers: in primary hepatocellular carcinoma, colorectal cancer, ovarian carcinoma, breast cancer, lung cancer and glioblastoma. It has been estimated that approximately 10% of all tissue samples sequenced from all cancers display mutations in the CTNNB1 gene. Most of these mutations cluster in a tiny area of the N-terminal segment of β-catenin: the β-TrCP binding motif. Loss-of-function mutations of this motif essentially make ubiquitination and degradation of β-catenin impossible, causing β-catenin to translocate to the nucleus without any external stimulus and continuously drive transcription of its target genes. Increased nuclear β-catenin levels have also been noted in basal cell carcinoma (BCC), head and neck squamous cell carcinoma (HNSCC), prostate cancer (CaP), pilomatrixoma (PTR) and medulloblastoma (MDB). These observations may or may not implicate a mutation in the β-catenin gene itself: other Wnt pathway components can also be faulty.
Similar mutations are also frequently seen in the β-catenin recruiting motifs of APC. Hereditary loss-of-function mutations of APC cause a condition known as Familial Adenomatous Polyposis. Affected individuals develop hundreds of polyps in their large intestine. Most of these polyps are benign in nature, but they have the potential to transform into deadly cancer as time progresses. Somatic mutations of APC in colorectal cancer are also not uncommon. Beta-catenin and APC are among the key genes (together with others, like K-Ras and SMAD4) involved in colorectal cancer development. The potential of β-catenin to change the previously epithelial phenotype of affected cells into an invasive, mesenchyme-like type contributes greatly to metastasis formation.
## As a therapeutic target
Due to its involvement in cancer development, inhibition of beta-catenin continues to receive significant attention. However, targeting the binding site on its armadillo domain is not a simple task, owing to the domain's extensive and relatively flat surface. For efficient inhibition, though, binding to smaller "hotspots" of this surface is sufficient: a "stapled" helical peptide derived from the natural β-catenin binding motif of LEF1 proved sufficient for complete inhibition of β-catenin-dependent transcription. More recently, several small-molecule compounds have been developed to target the same, highly positively charged area of the ARM domain (CGP049090, PKF118-310, PKF115-584 and ZTM000990). In addition, β-catenin levels can be influenced by targeting upstream components of the Wnt pathway as well as the β-catenin destruction complex. The additional N-terminal binding pocket is also important for Wnt target gene activation (it is required for BCL9 recruitment); this site of the ARM domain can be pharmacologically targeted by carnosic acid, for example, and represents another attractive target for drug development. Despite intensive preclinical research, no β-catenin inhibitors are yet available as therapeutic agents, although β-catenin function can be probed experimentally by independently validated siRNA knockdown. Another therapeutic approach for reducing β-catenin nuclear accumulation is inhibition of galectin-3. The galectin-3 inhibitor GR-MD-02 is currently undergoing clinical trials in combination with the FDA-approved dose of ipilimumab in patients with advanced melanoma.
## Role in fetal alcohol syndrome
β-catenin destabilization by ethanol is one of two known pathways whereby alcohol exposure induces fetal alcohol syndrome (the other is ethanol-induced folate deficiency). Ethanol leads to β-catenin destabilization via a G-protein-dependent pathway, wherein activated Phospholipase Cβ hydrolyzes phosphatidylinositol-(4,5)-bisphosphate to diacylglycerol and inositol-(1,4,5)-trisphosphate. Soluble inositol-(1,4,5)-trisphosphate triggers calcium release from the endoplasmic reticulum. This sudden increase in cytoplasmic calcium activates Ca2+/calmodulin-dependent protein kinase II (CaMKII). Activated CaMKII destabilizes β-catenin via a poorly characterized mechanism that likely involves β-catenin phosphorylation by CaMKII. The β-catenin transcriptional program (which is required for normal neural crest cell development) is thereby suppressed, resulting in premature neural crest cell apoptosis (cell death).
# Interactions
Beta-catenin has been shown to interact with:
- APC,
- AXIN1,
- Androgen receptor,
- CBY1,
- CDH1,
- CDH2,
- CDH3,
- CDK5R1,
- CHUK,
- CTNND1,
- CTNNA1,
- EGFR,
- Emerin
- ESR1
- FHL2,
- GSK3B,
- HER2/neu,
- HNF4A,
- IKK2,
- LEF1,
- MAGI1,
- MUC1,
- NR5A1,
- PCAF,
- PHF17,
- Plakoglobin,
- PTPN14,
- PTPRF,
- PTPRK (PTPkappa),
- PTPRT (PTPrho),
- PTPRU (PCP-2),
- PSEN1,
- PTK7
- RuvB-like 1,
- SMAD7,
- SMARCA4
- SLC9A3R1,
- USP9X,
- VE-cadherin, and
- XIRP1.
Betacellulin
Betacellulin is a protein that in humans is encoded by the BTC gene, located on chromosome 4 at locus 4q13-q21. Betacellulin is a member of the EGF family of growth factors. It is synthesized primarily as a transmembrane precursor, which is then processed to the mature molecule by proteolytic events. This protein is a ligand for the EGF receptor.
# Structure
BTC is a polypeptide of approximately 62-111 amino acid residues.
Secondary structure: 6% helical (1 helix; 3 residues), 36% beta sheet (5 strands; 18 residues).
- BTC was originally identified as a growth-promoting factor in a mouse pancreatic β-cell carcinoma cell line and has since been identified in humans. Mouse BTC (mBTC) is expressed as a 178-amino acid precursor; the membrane-bound precursor is cleaved to yield mature, secreted mBTC. BTC is synthesized in a wide range of adult tissues and in many cultured cells, including smooth muscle cells and epithelial cells. The amino acid sequence of mature mBTC is 82.5% identical to that of human BTC (hBTC), and both exhibit significant overall similarity with other members of the EGF family (see the sketch below for how such a percent-identity figure is computed).
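As a side note on what a figure like "82.5% identical" means operationally, the short sketch below simply counts matching positions between two pre-aligned sequences of equal length. The two demo strings are made-up toy sequences, not the real mouse and human betacellulin sequences, and the function name is illustrative only.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent of positions carrying the same residue in two pre-aligned,
    equal-length sequences (real alignments would also need gap handling)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy example only; not the real mBTC/hBTC sequences.
print(percent_identity("RKGHFSRCPK", "RKGHYSRCPK"))  # -> 90.0
```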
# Structure determination
- The structure of the small protein betacellulin was determined by two-dimensional nuclear magnetic resonance (NMR) spectroscopy, using protein from Homo sapiens. This particular BTC molecule has a formula weight of 5916.9, and its sequence was determined to be RKGHFSRCPKQYKHYCIKGRCRFVVAEQTPSCVCDEGYIGARCERVDLFY. A quick consistency check of the quoted chain length, secondary-structure percentages and approximate mass is sketched below.
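The minimal sketch below (an illustration added here, not part of the original description) checks that the numbers quoted above are mutually consistent: it measures the length of the 50-residue sequence, recomputes the helix and sheet percentages from the stated residue counts, and estimates the average molecular mass from standard average residue masses. The computed mass comes out near, but not exactly at, the quoted 5916.9, since reported formula weights can differ slightly depending on how termini, protonation states and disulfide bonds are counted.

```python
# Standard average residue masses in daltons; one water is added for the free
# N- and C-termini. Disulfide bonds, if formed, would subtract ~2 Da each and
# are ignored in this rough estimate.
RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02

SEQ = "RKGHFSRCPKQYKHYCIKGRCRFVVAEQTPSCVCDEGYIGARCERVDLFY"

length = len(SEQ)  # 50 residues
approx_mass = sum(RESIDUE_MASS[aa] for aa in SEQ) + WATER

print(f"length: {length} residues")
print(f"helical: {3 / length:.0%}, beta sheet: {18 / length:.0%}")  # 6%, 36%
print(f"approximate average mass: {approx_mass:.1f} Da")            # ~5.9 kDa
```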
Bicalutamide
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Bicalutamide is an antiandrogen that is FDA approved for the treatment of Stage D2 metastatic carcinoma of the prostate in combination with a luteinizing hormone-releasing hormone (LHRH) analog. Common adverse reactions include hot flashes, general pain, back pain, pelvic pain and abdominal pain, asthenia, constipation, infection, nausea, peripheral edema, dyspnea, diarrhea, hematuria, nocturia, and anemia.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
### Stage D2 metastatic carcinoma of the prostate
- One 50 mg tablet once daily (morning or evening)
- In combination with a luteinizing hormone-releasing hormone (LHRH) analog.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of BICALUTAMIDE in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of BICALUTAMIDE in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
There is limited information regarding Bicalutamide FDA-Labeled Indications and Dosage (Pediatric) in the drug label.
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of BICALUTAMIDE in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of BICALUTAMIDE in pediatric patients.
# Contraindications
### Hypersensitivity
- Bicalutamide is contraindicated in any patient who has shown a hypersensitivity reaction to the drug or any of the tablet’s components.
- Hypersensitivity reactions including angioneurotic edema and urticaria have been reported.
### Women
- Bicalutamide has no indication for women, and should not be used in this population.
### Pregnancy
- Bicalutamide may cause fetal harm when administered to a pregnant woman.
- Bicalutamide is contraindicated in women, including those who are or may become pregnant.
- There are no studies in pregnant women using bicalutamide.
- If this drug is used during pregnancy, or if the patient becomes pregnant while taking this drug, the patient should be apprised of the potential hazard to the fetus.
# Warnings
### Hepatitis
- Cases of death or hospitalization due to severe liver injury (hepatic failure) have been reported postmarketing in association with the use of bicalutamide.
- Hepatotoxicity in these reports generally occurred within the first three to four months of treatment.
- Hepatitis or marked increases in liver enzymes leading to drug discontinuation occurred in approximately 1% of bicalutamide patients in controlled clinical trials.
- Serum transaminase levels should be measured prior to starting treatment with bicalutamide, at regular intervals for the first four months of treatment, and periodically thereafter.
- If clinical symptoms or signs suggestive of liver dysfunction occur (e.g., nausea, vomiting, abdominal pain, fatigue, anorexia, “flu-like” symptoms, dark urine, jaundice, or right upper quadrant tenderness), the serum transaminases, in particular the serum ALT, should be measured immediately.
- If at any time a patient has jaundice, or their ALT rises above two times the upper limit of normal, bicalutamide should be immediately discontinued with close follow-up of liver function.
### Gynecomastia and Breast Pain
- In clinical trials with bicalutamide 150 mg as a single agent for prostate cancer, gynecomastia and breast pain have been reported in up to 38% and 39% of patients, respectively.
### Glucose Tolerance
- A reduction in glucose tolerance has been observed in males receiving LHRH agonists.
- This may manifest as diabetes or loss of glycemic control in those with preexisting diabetes.
- Consideration should therefore be given to monitoring blood glucose in patients receiving bicalutamide in combination with LHRH agonists.
### Laboratory Tests
- Regular assessments of serum Prostate Specific Antigen (PSA) may be helpful in monitoring the patient’s response.
- If PSA levels rise during bicalutamide therapy, the patient should be evaluated for clinical progression.
- For patients who have objective progression of disease together with an elevated PSA, a treatment-free period of antiandrogen, while continuing the LHRH analogue, may be considered.
# Adverse Reactions
## Clinical Trials Experience
- In patients with advanced prostate cancer treated with bicalutamide in combination with an LHRH analogue, the most frequent adverse reaction was hot flashes (53%).
- In the multicenter, double-blind, controlled clinical trial comparing bicalutamide 50 mg once daily with flutamide 250 mg three times a day, each in combination with an LHRH analogue, the following adverse reactions with an incidence of 5% or greater, regardless of causality, have been reported.
- Other adverse reactions (greater than or equal to 2%, but less than 5%) reported in the bicalutamide-LHRH analogue treatment group are listed below by body system and are in order of decreasing frequency within each body system regardless of causality.
- Neoplasm
- Neck Pain
- Fever
- Chills
- Sepsis
- Hernia
- Cyst
- Angina Pectoris
- Congestive Heart Failure
- Myocardial Infarct
- Cardiac Arrest
- Coronary Artery Disorder
- Syncope
### Digestive
- Melena
- Rectal Hemorrhage
- Dry Mouth
- Dysphagia
- Gastrointestinal Disorder
- Periodontal Abscess
- Gastrointestinal Carcinoma
### Metabolic and Nutritional
- Edema
- BUN Increased
- Creatinine Increased
- Dehydration
- Gout
- Hypercholesteremia
### Musculoskeletal
- Myalgia
- Leg Cramps
### Nervous
- Hypertonia
- Confusion
- Somnolence
- Libido Decreased
- Neuropathy
- Nervousness
### Respiratory
- Lung Disorder
- Asthma
- Epistaxis
- Sinusitis
### Skin and Appendages
- Dry Skin
- Alopecia
- Pruritus
- Herpes Zoster
- Skin Carcinoma
- Skin Disorder
### Special Senses
- Cataract specified
### Urogenital
- Dysuria
- Urinary Urgency
- Hydronephrosis
- Urinary Tract Disorder
Abnormal Laboratory Test Values:
- Laboratory abnormalities including elevated AST, ALT, bilirubin, BUN, and creatinine and decreased hemoglobin and white cell count have been reported in both bicalutamide-LHRH analog treated and flutamide-LHRH analog treated patients.
## Postmarketing Experience
- The following adverse reactions have been identified during postapproval use of bicalutamide. Because these reactions are reported voluntarily from a population of uncertain size, it is not always possible to reliably estimate their frequency or establish a causal relationship to drug exposure.
- Uncommon cases of hypersensitivity reactions, including angioneurotic edema and urticaria have been seen.
- Cases of interstitial lung disease (some fatal), including interstitial pneumonitis and pulmonary fibrosis, have been reported with bicalutamide. Interstitial lung disease has been reported most often at doses greater than 50 mg.
- A few cases of fatal hepatic failure have been reported.
- Reduction in glucose tolerance, manifesting as diabetes or a loss of glycemic control in those with preexisting diabetes, has been reported during treatment with LHRH agonists.
# Drug Interactions
- Clinical studies have not shown any drug interactions between bicalutamide and LHRH analogue (goserelin or leuprolide). There is no evidence that bicalutamide induces hepatic enzymes.
- In vitro studies have shown that R-bicalutamide is an inhibitor of CYP3A4 with lesser inhibitory effects on CYP2C9, 2C19 and 2D6 activity. Clinical studies have shown that with coadministration of bicalutamide, mean midazolam (a CYP3A4 substrate) levels may be increased 1.5 fold (for Cmax) and 1.9 fold (for AUC). Hence, caution should be exercised when bicalutamide is coadministered with CYP3A4 substrates.
- In vitro protein-binding studies have shown that bicalutamide can displace coumarin anticoagulants from binding sites. Prothrombin times should be closely monitored in patients already receiving coumarin anticoagulants who are started on bicalutamide and adjustment of the anticoagulant dose may be necessary.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): X
- Based on its mechanism of action, bicalutamide may cause fetal harm when administered to a pregnant woman.
- Bicalutamide is contraindicated in women, including those who are or may become pregnant.
- If this drug is used during pregnancy, or if the patient becomes pregnant while taking this drug, the patient should be apprised of the potential hazard to a fetus.
- While there are no human data on the use of bicalutamide in pregnancy and bicalutamide is not for use in women, it is important to know that maternal use of an androgen receptor inhibitor could affect development of the fetus.
- In animal reproduction studies, male offspring of rats receiving doses of 10 mg/kg/day (approximately 2/3 of clinical exposure at the recommended dose) and above were observed to have reduced anogenital distance and hypospadias. These pharmacological effects have been observed with other antiandrogens.
- No other teratogenic effects were observed in rabbits receiving doses up to 200 mg/kg/day (approximately 1/3 of clinical exposure at the recommended dose) or rats receiving doses up to 250 mg/kg/day (approximately 2 times the clinical exposure at the recommended dose).
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Bicalutamide in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Bicalutamide during labor and delivery.
### Nursing Mothers
- Bicalutamide is not indicated for use in women.
### Pediatric Use
- The safety and effectiveness of bicalutamide in pediatric patients have not been established.
- Labeling describing pediatric clinical studies for bicalutamide is approved for AstraZeneca Pharmaceuticals LP’s bicalutamide tablet. However, due to AstraZeneca Pharmaceuticals LP’s marketing exclusivity rights, a description of those clinical studies is not approved for this bicalutamide labeling.
### Geriatric Use
- In two studies in patients given 50 or 150 mg daily, no significant relationship between age and steady-state levels of total bicalutamide or the active R-enantiomer has been shown.
### Gender
- Bicalutamide has not been studied in women.
### Race
There is no FDA guidance on the use of Bicalutamide with respect to specific racial populations.
### Renal Impairment
- Renal impairment (as measured by creatinine clearance) had no significant effect on the elimination of total bicalutamide or the active R-enantiomer.
### Hepatic Impairment
- Bicalutamide should be used with caution in patients with moderate-to-severe hepatic impairment.
- Bicalutamide is extensively metabolized by the liver.
- Limited data in subjects with severe hepatic impairment suggest that excretion of bicalutamide may be delayed and could lead to further accumulation.
- Periodic liver function tests should be considered for hepatic-impaired patients on long-term therapy.
- No clinically significant difference in the pharmacokinetics of either enantiomer of bicalutamide was noted in patients with mild-to-moderate hepatic disease as compared to healthy controls. However, the half-life of the R-enantiomer was increased approximately 76% (5.9 and 10.4 days for normal and impaired patients, respectively) in patients with severe liver disease (n=4).
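As a quick arithmetic check added here, the quoted 76% is simply the relative increase between the two reported half-lives:

```latex
\frac{10.4\ \text{days} - 5.9\ \text{days}}{5.9\ \text{days}} \approx 0.76 = 76\%
```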
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Bicalutamide in women of reproductive potential and in males.
### Immunocompromised Patients
There is no FDA guidance on the use of Bicalutamide in patients who are immunocompromised.
# Administration and Monitoring
### Administration
- Oral
### Monitoring
- Monitor serum transaminase levels prior to starting treatment with bicalutamide, at regular intervals for the first four months of treatment and periodically thereafter, and for symptoms or signs suggestive of hepatic dysfunction.
- Consideration should be given to monitoring blood glucose in patients receiving bicalutamide in combination with LHRH agonists.
- Monitoring of Prostate Specific Antigen (PSA) is recommended.
- Prothrombin times should be closely monitored in patients already receiving coumarin anticoagulants who are started on bicalutamide.
# IV Compatibility
There is limited information regarding the compatibility of Bicalutamide and IV administrations.
# Overdosage
- Long-term clinical trials have been conducted with dosages up to 200 mg of bicalutamide daily and these dosages have been well tolerated. A single dose of bicalutamide that results in symptoms of an overdose considered to be life threatening has not been established.
- There is no specific antidote; treatment of an overdose should be symptomatic.
- In the management of an overdose with bicalutamide, vomiting may be induced if the patient is alert. It should be remembered that, in this patient population, multiple drugs may have been taken. Dialysis is not likely to be helpful since bicalutamide is highly protein bound and is extensively metabolized. General supportive care, including frequent monitoring of vital signs and close observation of the patient, is indicated.
# Pharmacology
## Mechanism of Action
- Bicalutamide is a non-steroidal androgen receptor inhibitor.
- It competitively inhibits the action of androgens by binding to cytosol androgen receptors in the target tissue.
- Prostatic carcinoma is known to be androgen sensitive and responds to treatment that counteracts the effect of androgen and/or removes the source of androgen.
- When bicalutamide is combined with luteinizing hormone releasing hormone (LHRH) analog therapy, the suppression of serum testosterone induced by the LHRH analog is not affected. However, in clinical trials with bicalutamide as a single agent for prostate cancer, rises in serum testosterone and estradiol have been noted.
- In a subset of patients who have been treated with bicalutamide and an LHRH agonist, and who discontinue bicalutamide therapy due to progressive advanced prostate cancer, a reduction in Prostate Specific Antigen (PSA) and/or clinical improvement (antiandrogen withdrawal phenomenon) may be observed.
## Structure
- Bicalutamide tablets contain 50 mg of bicalutamide USP, a non-steroidal androgen receptor inhibitor with no other known endocrine activity. The chemical name is propanamide, N-[4-cyano-3-(trifluoromethyl)phenyl]-3-[(4-fluorophenyl)sulfonyl]-2-hydroxy-2-methyl-, (+-). The structural and molecular formulas are:
- Bicalutamide has a molecular weight of 430.37. The pKa’ is approximately 12.
- Bicalutamide is a fine white to off white powder which is practically insoluble in water at 37°C (5 mg per 1000 mL), slightly soluble in chloroform and absolute ethanol, sparingly soluble in methanol, and soluble in acetone and tetrahydrofuran.
## Pharmacodynamics
There is limited information regarding Bicalutamide Pharmacodynamics in the drug label.
## Pharmacokinetics
### Absorption
- Bicalutamide is well-absorbed following oral administration, although the absolute bioavailability is unknown.
- Coadministration of bicalutamide with food has no clinically significant effect on rate or extent of absorption.
### Distribution
- Bicalutamide is highly protein-bound (96%).
### Metabolism/Elimination
- Bicalutamide undergoes stereospecific metabolism.
- The S (inactive) isomer is metabolized primarily by glucuronidation. The R (active) isomer also undergoes glucuronidation but is predominantly oxidized to an inactive metabolite followed by glucuronidation.
- Both the parent and metabolite glucuronides are eliminated in the urine and feces.
- The S-enantiomer is rapidly cleared relative to the R-enantiomer, with the R-enantiomer accounting for about 99% of total steady-state plasma levels.
- Pharmacokinetics of the active enantiomer of bicalutamide in normal males and patients with prostate cancer are presented in Table 2.
## Nonclinical Toxicology
### Carcinogenesis, Mutagenesis, Impairment of Fertility
- Two-year oral carcinogenicity studies were conducted in both male and female rats and mice at doses of 5, 15 or 75 mg/kg/day of bicalutamide.
- A variety of tumor target organ effects were identified and were attributed to the antiandrogenicity of bicalutamide, namely, testicular benign interstitial (Leydig) cell tumors in male rats at all dose levels (the steady-state plasma concentration with the 5 mg/kg/day dose is approximately 2/3 human therapeutic concentrations) and uterine adenocarcinoma in female rats at 75 mg/kg/day (approximately 1 1/2 times the human therapeutic concentrations).
- There is no evidence of Leydig cell hyperplasia in patients; uterine tumors are not relevant to the indicated patient population.
- A small increase in the incidence of hepatocellular carcinoma in male mice given 75 mg/kg/day of bicalutamide (approximately 4 times human therapeutic concentrations) and an increased incidence of benign thyroid follicular cell adenomas in rats given 5 mg/kg/day (approximately 2/3 human therapeutic concentrations) and above were recorded.
- These neoplastic changes were progressions of non-neoplastic changes related to hepatic enzyme induction observed in animal toxicity studies.
- Enzyme induction has not been observed following bicalutamide administration in man.
- There were no tumorigenic effects suggestive of genotoxic carcinogenesis.
- A comprehensive battery of both in vitro and in vivo genotoxicity tests (yeast gene conversion, Ames, E. coli, CHO/HGPRT, human lymphocyte cytogenetic, mouse micronucleus, and rat bone marrow cytogenetic tests) has demonstrated that bicalutamide does not have genotoxic activity.
- Administration of bicalutamide may lead to inhibition of spermatogenesis.
- The long-term effects of bicalutamide on male fertility have not been studied.
- In male rats dosed at 250 mg/kg/day (approximately 2 times human therapeutic concentrations*), the precoital interval and time to successful mating were increased in the first pairing but no effects on fertility following successful mating were seen.
- These effects were reversed by 7 weeks after the end of an 11-week period of dosing.
- No effects on female rats dosed at 10, 50 and 250 mg/kg/day (approximately 2/3, 1 and 2 times human therapeutic concentrations, respectively) or their female offspring were observed.
- Administration of bicalutamide to pregnant females resulted in feminization of the male offspring leading to hypospadias at all dose levels.
- Affected male offspring were also impotent.
- *Based on a maximum dose of 50 mg/day of bicalutamide for an average 70 kg patient.
# Clinical Studies
### Bicalutamide 50 mg Daily in Combination with an LHRH-A
- In a multicenter, double-blind, controlled clinical trial, 813 patients with previously untreated advanced prostate cancer were randomized to receive bicalutamide 50 mg once daily (404 patients) or flutamide 250 mg (409 patients) three times a day, each in combination with LHRH analogs (either goserelin acetate implant or leuprolide acetate depot).
- In an analysis conducted after a median follow-up of 160 weeks was reached, 213 (52.7%) patients treated with bicalutamide-LHRH analog therapy and 235 (57.5%) patients treated with flutamide-LHRH analog therapy had died.
- There was no significant difference in survival between treatment groups (see Figure 1).
- The hazard ratio for time to death (survival) was 0.87 (95% confidence interval 0.72 to 1.05).
- There was no significant difference in time to objective tumor progression between treatment groups (see Figure 2).
- Objective tumor progression was defined as the appearance of any bone metastases or the worsening of any existing bone metastases on bone scan attributable to metastatic disease, or an increase by 25% or more of any existing measurable extraskeletal metastases.
- The hazard ratio for time to progression of bicalutamide plus LHRH analog to that of flutamide plus LHRH analog was 0.93 (95% confidence interval, 0.79 to 1.1).
- Quality of life was assessed with self-administered patient questionnaires on pain, social functioning, emotional well being, vitality, activity limitation, bed disability, overall health, physical capacity, general symptoms, and treatment related symptoms.
- Assessment of the Quality of Life questionnaires did not indicate consistent significant differences between the two treatment groups.
### Safety Data from Clinical Studies using Bicalutamide 150 mg
- Bicalutamide 150 mg is not approved for use either alone or with other treatments.
- Two identical multicenter, randomized, open-label trials comparing bicalutamide 150 mg daily monotherapy to castration were conducted in patients who had locally advanced (T3-4, NX, M0) or metastatic (M1) prostate cancer.
Monotherapy — M1 Group
- Bicalutamide 150 mg daily is not approved for use in patients with M1 cancer of the prostate.
- Based on an interim analysis of the two trials for survival, the Data Safety Monitoring Board recommended that bicalutamide treatment be discontinued in the M1 patients because the risk of death was 25% (HR 1.25, 95% CI 0.87 to 1.81) and 31% (HR 1.31, 95% CI 0.97 to 1.77) higher in the bicalutamide-treated group compared to that in the castrated group, respectively.
Locally Advanced (T3-4, NX, M0) Group
- Bicalutamide 150 mg daily is not approved for use in patients with locally advanced (T3-4, NX, M0) cancer of the prostate.
- Following discontinuation of all M1 patients, the trials continued with the T3-4, NX, M0 patients until study completion.
- In the larger trial (N=352), the risk of death was 25% (HR 1.25, 95% CI 0.92 to 1.71) higher in the bicalutamide group, and in the smaller trial (N=140) the risk of death was 36% (HR 0.64, 95% CI 0.39 to 1.03) lower in the bicalutamide group.
- In addition to the above two studies, there are three other on-going clinical studies that provide additional safety information for bicalutamide 150 mg, a dose that is not approved for use.
- These are three multicenter, randomized, double-blind, parallel group trials comparing bicalutamide 150 mg daily monotherapy (adjuvant to previous therapy or under watchful waiting) with placebo, for death or time to disease progression, in a population of 8113 patients with localized or locally advanced prostate cancer.
- Bicalutamide 150 mg daily is not approved for use as therapy for patients with localized prostate cancer who are candidates for watchful waiting.
- Data from a planned subgroup analysis of two of these trials in 1627 patients with localized prostate cancer who were under watchful waiting, revealed a trend toward decreased survival in the bicalutamide arm after a median follow-up of 7.4 years.
- There were 294 (37.7%) deaths in the bicalutamide treated patients versus 279 (32.9%) deaths in the placebo treated patients (localized watchful waiting group) for a hazard ratio of 1.16 (95% CI 0.99 to 1.37).
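As a reading aid added here (not label text), the percentage changes in risk quoted in this subsection are the hazard ratios re-expressed relative to 1, for example:

```latex
\mathrm{HR} = 1.25 \;\Rightarrow\; (1.25 - 1)\times 100\% = 25\%\ \text{higher risk},
\qquad
\mathrm{HR} = 0.64 \;\Rightarrow\; (1 - 0.64)\times 100\% = 36\%\ \text{lower risk}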
# How Supplied
- White to off white, circular, biconvex, film-coated tablets debossed with “485” on one side and plain on other side.
- Bottles of 30’s with Child Resistant Cap……….…..…. NDC 47335-485-83
- Bottles of 100’s with Child Resistant Cap………….…..NDC 47335-485-88
- Bottles of 100’s with Non Child Resistant Cap…..…….NDC 47335-485-08
- Bottles of 1000’s with Non Child Resistant Cap……….NDC 47335-485-18
## Storage
- Store at 20° to 25°C (68° to 77°F); excursions permitted between 15° and 30°C (59° and 86°F)
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
- Patients should be informed that therapy with bicalutamide tablets and the LHRH analog should be started at the same time and that they should not interrupt or stop taking these medications without consulting their physician.
- During treatment with bicalutamide tablets, somnolence has been reported, and those patients who experience this symptom should observe caution when driving or operating machines.
- Patients should be informed that diabetes, or loss of glycemic control in patients with preexisting diabetes has been reported during treatment with LHRH agonists.
- Consideration should therefore be given to monitoring blood glucose in patients receiving bicalutamide tablets in combination with LHRH agonists.
# Precautions with Alcohol
Alcohol-BICALUTAMIDE interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
- Casodex
# Look-Alike Drug Names
There is limited information regarding Bicalutamide Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price | Bicalutamide
Biclustering
Biclustering, co-clustering, or two-mode clustering is a data mining technique that allows simultaneous clustering of the rows and columns of a matrix.
The term was first introduced by Mirkin, and was more recently popularized by Cheng and Church in the context of gene expression analysis, although the technique itself was introduced much earlier by J.A. Hartigan.
Given a set of m rows and n columns (i.e., an m×n matrix), a biclustering algorithm generates biclusters: subsets of rows that exhibit similar behavior across a subset of columns, or vice versa.
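A minimal sketch of this idea (an illustration added here, not taken from the article) uses scikit-learn's spectral co-clustering on synthetic data with planted biclusters; the matrix shape, noise level, and cluster count are arbitrary choices:

```python
# Illustrative sketch: recover planted biclusters from a synthetic m x n matrix.
import numpy as np
from sklearn.datasets import make_biclusters
from sklearn.cluster import SpectralCoclustering

# 200 rows x 60 columns containing 4 planted, non-overlapping biclusters.
data, true_rows, true_cols = make_biclusters(
    shape=(200, 60), n_clusters=4, noise=5, shuffle=True, random_state=0)

model = SpectralCoclustering(n_clusters=4, random_state=0)
model.fit(data)

# Each bicluster is a subset of rows paired with a subset of columns.
for k in range(4):
    rows = np.where(model.rows_[k])[0]       # row indices in bicluster k
    cols = np.where(model.columns_[k])[0]    # column indices in bicluster k
    block = data[np.ix_(rows, cols)]         # the corresponding submatrix
    print(f"bicluster {k}: {len(rows)} rows x {len(cols)} cols, "
          f"mean value {block.mean():.2f}")
```

Each recovered bicluster is just a pair (row subset, column subset), and the corresponding submatrix can be pulled out with np.ix_ as shown.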
# Complexity
The complexity of the biclustering problem depends on the exact problem formulation, and particularly on the merit function used to evaluate the quality of a given bicluster. However, most interesting variants of this problem are NP-complete, requiring either large computational effort or the use of lossy heuristics to short-circuit the calculation.
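The article does not commit to a particular merit function, but one widely used choice, introduced in the Cheng and Church work mentioned above, is the mean squared residue of a candidate bicluster with row set I and column set J:

```latex
H(I, J) = \frac{1}{|I|\,|J|} \sum_{i \in I,\; j \in J} \left( a_{ij} - a_{iJ} - a_{Ij} + a_{IJ} \right)^{2}
```

Here a_{iJ} and a_{Ij} are the row and column means of the submatrix and a_{IJ} is its overall mean; the score is 0 for constant or additively coherent biclusters, and lower values indicate more coherent biclusters.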
# Type of Bicluster
Different biclustering algorithms have different definitions of bicluster.
They are:
- Bicluster with constant values (a),
- Bicluster with constant values on rows or columns (b, c),
- Bicluster with coherent values (d, e).
(Figure: Bicluster.JPG, illustrating bicluster types (a) through (e); small numeric examples are given below.)
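For concreteness, toy 3×3 examples of these patterns (values chosen here purely for illustration) are:

```latex
\underbrace{\begin{pmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \end{pmatrix}}_{\text{constant}}
\quad
\underbrace{\begin{pmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{pmatrix}}_{\text{constant rows}}
\quad
\underbrace{\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix}}_{\text{constant columns}}
\quad
\underbrace{\begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 4 & 5 & 6 \end{pmatrix}}_{\text{coherent (additive)}}
```

In the additive case each entry can be written as a_{ij} = \mu + \alpha_i + \beta_j; a multiplicative analogue (a_{ij} = \alpha_i \beta_j) is the other coherent-value pattern commonly distinguished.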
# Algorithms
There are many biclustering algorithms developed for bioinformatics, including: Block clustering, CTWC, ITWC, δ-bicluster, δ-pCluster, δ-pattern, FLOC, OPC, Plaid Model, OPSMs, Gibbs, SAMBA, Robust Biclustering Algorithm (RoBA), cMonkey, PRMs and DCC. Biclustering algorithms have also been proposed and used in other application fields under the names coclustering, bidimensional clustering, and subspace clustering.
Some recent algorithms have attempted to supplement the biclustering of rectangular matrices with support for other data types. One such algorithm, cMonkey, has recently been developed and applied to several systems-biology datasets.
There is an ongoing debate about how to judge the results of these methods, as biclustering allows overlap between clusters and some algorithms allow the exclusion of hard-to-reconcile columns/conditions. Not all of the available algorithms are deterministic, and the analyst needs to pay attention to the degree to which results represent stable minima. Because this is an unsupervised classification problem, the lack of a gold standard makes it difficult to spot errors in the results. One approach is to utilize multiple biclustering algorithms, with majority or super-majority voting amongst them deciding the best result. Another way is to analyse the quality of shifting and scaling patterns in biclusters.
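One lightweight way to probe stability along these lines (an illustrative workflow added here, not prescribed by the article) is to compare the biclusters found by two runs, or by two different algorithms, with scikit-learn's consensus_score, which matches biclusters between two results using Jaccard similarity:

```python
# Illustrative stability check: agreement between two biclustering runs.
from sklearn.datasets import make_biclusters
from sklearn.cluster import SpectralCoclustering
from sklearn.metrics import consensus_score

data, true_rows, true_cols = make_biclusters(
    shape=(200, 60), n_clusters=4, noise=10, shuffle=True, random_state=0)

run_a = SpectralCoclustering(n_clusters=4, random_state=1).fit(data)
run_b = SpectralCoclustering(n_clusters=4, random_state=2).fit(data)

# 1.0 means the two sets of biclusters match exactly.
print("run-to-run agreement:",
      consensus_score(run_a.biclusters_, run_b.biclusters_))
print("agreement with planted truth:",
      consensus_score(run_a.biclusters_, (true_rows, true_cols)))
```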
81eeed66a2fc27eedd4fa48c1da8068ec35e4a79 | wikidoc | Mitral valve | Mitral valve
# Overview
The mitral valve (also known as the bicuspid valve or left atrioventricular valve) is a dual-flap (bi = 2) valve in the heart that lies between the left atrium (LA) and the left ventricle (LV). In Latin, the term mitral means shaped like a miter, or bishop's cap. The mitral valve and the tricuspid valve are known collectively as the atrioventricular valves because they lie between the atria and the ventricles of the heart and control flow.
A normally functioning mitral valve opens in response to pressure on the superior surface of the valve, allowing blood to flow into the left ventricle during left atrial systole (contraction), and closes at the end of atrial contraction to prevent blood from flowing back into the atrium during left ventricular systole. In a normal cardiac cycle, the atrium contracts first, filling the ventricle. At the end of ventricular diastole, the bicuspid valve shuts and prevents backflow as the ventricle begins its systolic phase. Backflow may occur if the patient suffers from mitral valve prolapse, causing an audible heart murmur on auscultation.
# Anatomy
The mitral valve has two cusps/leaflets (the anteromedial leaflet and the posterolateral leaflet) which guard the opening. The opening is surrounded by a fibrous ring known as the mitral valve annulus. (The orientation of the two leaflets was once thought to resemble a bishop's miter, which is where the valve receives its name.) The anterior cusp protects approximately two-thirds of the valve (imagine a crescent moon within the circle, where the crescent represents the posterior cusp). These valve leaflets are prevented from prolapsing into the left atrium by the action of tendinous cords attached to the posterior surface of the valve, the chordae tendineae.
The inelastic chordae tendineae are attached at one end to the papillary muscles and at the other to the valve cusps. Papillary muscles are finger-like projections from the wall of the left ventricle. Chordae tendineae from each muscle are attached to both leaflets of the mitral valve. Thus, when the ventricle contracts, the intraventricular pressure forces the valve to close, while the tendons prevent the valve from opening in the wrong direction.
# Normal physiology
During left ventricular diastole, after the pressure drops in the left ventricle due to relaxation of the ventricular myocardium, the mitral valve opens, and blood travels from the left atrium to the left ventricle. About 70-80% of the flow across the mitral valve occurs during the early filling phase of the left ventricle. This early filling phase is due to active relaxation of the ventricular myocardium, causing a pressure gradient that allows a rapid flow of blood from the left atrium, across the mitral valve. This early filling across the mitral valve is seen on Doppler echocardiography of the mitral valve as the E wave.
After the E wave, there is a period of slow filling of the ventricle.
Left atrial contraction (left atrial systole), which occurs during left ventricular diastole, causes additional blood to flow across the mitral valve immediately before left ventricular systole. This late flow across the open mitral valve is seen on Doppler echocardiography of the mitral valve as the A wave. The late filling of the LV contributes about 20% of the volume in the left ventricle prior to ventricular systole, and is known as the atrial kick.
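Since the E and A waves above are usually reported together as an E/A ratio on Doppler echocardiography, here is a minimal Python sketch of that calculation; the function name and the example velocities are illustrative assumptions, not values from this article.

```python
def e_a_ratio(peak_e_velocity, peak_a_velocity):
    """Ratio of early (E) to atrial (A) peak mitral inflow velocities."""
    return peak_e_velocity / peak_a_velocity

# Hypothetical Doppler measurements in m/s (illustrative only).
e, a = 0.8, 0.6
print(f"E/A ratio: {e_a_ratio(e, a):.2f}")  # E/A ratio: 1.33
```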
# Surface anatomy
The opening and closing of the mitral valve is difficult to hear directly, but the flow of blood to the left ventricle is most audible at the apex of the heart, and so auscultation is usually performed at the intersection of the fifth intercostal space and the midclavicular line.
# Additional images
- Section of the heart showing the ventricular septum. (Bicuspid valve visible at center.)
- Front of thorax, showing surface relations of bones, lungs (purple), pleura (blue), and heart (red outline). Heart valves are labeled with "B", "T", "A", and "P".
# Related Chapters
- Anatomy of the heart
  - Heart
  - Heart valve
  - Papillary muscle
- Pathophysiology
  - Mitral valve prolapse
  - Mitral regurgitation
  - Mitral stenosis
- Procedures to fix the mitral valve
  - Mitral valve replacement
  - Mitral valve repair
  - Mitral valvuloplasty
# Resources
- Template:SUNYAnatomyFigs - "Valves of the heart."
- Mitral Valve Repair at The Mount Sinai Hospital - "Mitral Valve Function"
- Surgical Anatomy of the Mitral Valve at echoincontext.com
- Cleveland Clinic Webchat - Latest Innovations in Mitral Valve Surgery with Dr. Marc Gillinov | https://www.wikidoc.org/index.php/Bicuspid_valve | 
b04c2412216a51a1a947a1a69b74368f86f2124e | wikidoc | Macrocephaly | Macrocephaly
Synonyms and keywords: Macrocephalus; megacephaly; megalocephaly; head enlarged; big head; large head; enlarged head
# Overview
Macrocephaly (from the Greek words μακρύς, meaning "long", and κεφάλη, meaning "head") is a condition in which the head circumference is larger than average for the age and sex of the infant or child.
# Causes
## Common Causes
- Hydrocephalus
- Intraventricular hemorrhage
- Acromegaly
- Rickets
- Autism
- Hurler's syndrome
- Arnold-Chiari syndrome
- Cerebral arteriovenous malformation
## Causes by Organ System
## Causes in Alphabetical Order
# Diagnosis
## Symptoms
Increased pressure in the head (increased intracranial pressure) often occurs with increased head circumference. Symptoms of this condition include:
- Irritability
- Vomiting
## Physical Examination
### Head
Macrocephaly is customarily diagnosed if head circumference is greater than 2 standard deviations (SD) above the mean. Relative macrocephaly occurs if the measure is less than 2 SD above the mean but is disproportionately above that when ethnicity and stature are considered. In research, cranial height or brain imaging are also used to determine intracranial volume more accurately.
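As a worked illustration of the 2-SD criterion, the following Python sketch converts a measured head circumference into a z-score against an age- and sex-specific reference mean and standard deviation; the reference numbers used here are placeholders, not values taken from an actual growth chart.

```python
def is_macrocephalic(circumference_cm, ref_mean_cm, ref_sd_cm, threshold_sd=2.0):
    """Return (z_score, flag); flag is True if the measurement lies more than
    `threshold_sd` standard deviations above the reference mean."""
    z = (circumference_cm - ref_mean_cm) / ref_sd_cm
    return z, z > threshold_sd

# Placeholder reference values for illustration only.
z, flag = is_macrocephalic(circumference_cm=52.0, ref_mean_cm=47.0, ref_sd_cm=1.5)
print(f"z = {z:.1f}, macrocephaly: {flag}")  # z = 3.3, macrocephaly: True
```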
### Eyes
- Eyes moving downward
# Related Chapters
- Microcephaly | Macrocephaly
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] Associate Editor(s)-in-Chief: Kalsang Dolma, M.B.B.S.[2]
Synonyms and keywords: Macrocephalus; megacephaly; megalocephaly; head enlarged; big head; large head; enlarged head
# Overview
Macrocephaly (from the Greek words μακρύς, meaning "long", and κεφάλη, meaning "head"), is when thehead circumference is larger than average for the age and sex of the infant or child.
# Causes
## Common Causes
- Hydrocephalus
- Intraventricular hemorrhage
- Acromegaly
- Rickets
- Autism
- Hurler's syndrome
- Arnold-Chiari syndrome
- Cerebral arteriovenous malformation
## Causes by Organ System
## Causes in Alphabeical Order
# Diagnosis
## Symptoms
Increased pressure in the head (increased intracranial pressure) often occurs with increased head circumference. Symptoms of this condition include:
- Irritability
- Vomiting
## Physical Examination
### Head
Macrocephaly is customarily diagnosed if head circumference is greater than 2 standard deviations (SD) above the mean. Relative macrocephaly occurs if the measure is less than 2 SD above the mean but is disproportionately above that when ethnicity and stature are considered. In research, cranial height or brain imaging are also used to determine intracranial volume more accurately.[1]
### Eyes
- Eyes moving downward
# Related Chapters
- Microcephaly | https://www.wikidoc.org/index.php/Big_head | |
3b7bd0b73a6e46d453cdf4fbbb7affdf749c5ff2 | wikidoc | Bill Charman | Bill Charman
Bill Charman is an Australian pharmaceutical scientist whose work has developed medical treatments in a range of areas, including a new drug for the treatment of malaria. He was also the founder and director of biomedical sciences company Acrux Ltd. He has published more than 320 scientific papers on his research and has received tens of millions of dollars in funding to further his work. Prior to embarking on a career in academic research, he worked for a number of pharmaceutical companies in the USA.
He has received numerous international awards for his work, including the Glaxo Wellcome International Achievement award in Pharmaceutical Sciences from the Royal Pharmaceutical Society of Great Britain in 1999, the Drug Discovery Project of the Year award from the Medicines for Malaria Venture (Switzerland) in 2002, the Australasian Pharmaceutical Sciences Association Medal in 2005, the 2006 Controlled Release Society International Career Achievement in Oral Drug Delivery Award and the 2007 Research Achievement Award from the Pharmaceutical Sciences World Congress.
Currently, Charman is Dean of the Victorian College of Pharmacy at Monash University, where he holds a personal chair in pharmaceutics and is director of the Centre for Drug Candidate Optimisation. He also works as an adviser to the World Health Organisation. He is a regular commentator on many areas of drug development in the Australian media. | https://www.wikidoc.org/index.php/Bill_Charman | 
d66e21de349c3d8ca165f1d89ea6385ca304d9d4 | wikidoc | Bioallethrin | Bioallethrin
# Overview
Bioallethrin is a brand name for an ectoparasiticide. It consists of two of the eight stereoisomers of allethrin I in an approximate ratio of 1:1. The name Bioallethrin is a registered trademark of Sumitomo Chemical Co., Ltd.
Esbiothrin (CAS number 260359-57-5) is a mixture of the same two stereoisomers, but in an approximate ratio of R:S = 1:3.
Esbioallethrin or S-bioallethrin (CAS number 28434-00-6) is the pure S-form (that is, the wedge in the structure as shown in the box points down). | https://www.wikidoc.org/index.php/Bioallethrin | 
0ddf2d3bc89e2e51c94f449b04d509078099a7bc | wikidoc | Biocatalysis | Biocatalysis
# Overview
Biocatalysis can be defined as the use of natural catalysts, called enzymes, to perform chemical transformations on organic compounds. Both more or less isolated enzymes and enzymes still residing inside living cells are employed for this task.
# History
Biocatalysis underpins some of the oldest chemical transformations known to humans, for brewing predates recorded history. The oldest records of brewing are about 6000 years old and refer to the Sumerians.
The employment of enzymes and whole cells has been important to many industries for centuries. The most obvious uses have been in the food and drink businesses, where the production of wine, beer, cheese, etc. depends on the action of microorganisms.
More than one hundred years ago, biocatalysis was employed to do chemical transformations on non-natural man-made organic compounds, and the last 30 years have seen a substantial increase in the application of biocatalysis to produce fine chemicals, especially for the pharmaceutical industry.
# Advantages of Biocatalysis
The key word for organic synthesis is selectivity, which is necessary to obtain a high yield of a specific product. A large range of selective organic reactions is available for most synthetic needs. However, there is still one area where organic chemists struggle, and that is when chirality is involved, although considerable progress in chiral synthesis has been achieved in recent years.
Enzymes display three major types of selectivities:
- Chemoselectivity: Since the purpose of an enzyme is to act on a single type of functional group, other sensitive functionalities, which would normally react to a certain extent under chemical catalysis, survive. As a result, biocatalytic reactions tend to be "cleaner" and laborious purification of product(s) from impurities emerging through side-reactions can largely be omitted.
- Regioselectivity and Diastereoselectivity: Due to their complex three-dimensional structure, enzymes may distinguish between functional groups which are chemically situated in different regions of the substrate molecule.
- Enantioselectivity: Since almost all enzymes are made from L-amino acids, enzymes are chiral catalysts. As a consequence, any type of chirality present in the substrate molecule is "recognized" upon the formation of the enzyme-substrate complex. Thus a prochiral substrate may be transformed into an optically active product and both enantiomers of a racemic substrate may react at different rates.
These features, and especially the latter, are the major reasons why synthetic chemists have become interested in biocatalysis. This interest in turn is mainly due to the need to synthesise enantiopure compounds as chiral building blocks for drugs and agrochemicals.
Another important advantage of biocatalysts is that they are environmentally acceptable, being completely degraded in the environment. Furthermore, the enzymes act under mild conditions, which minimizes problems of undesired side-reactions such as decomposition, isomerization, racemization and rearrangement, which often plague traditional methodology.
# Asymmetric biocatalysis
The use of biocatalysis to obtain enantiopure compounds can be divided into two different methods;
- Kinetic resolution of a racemic mixture
- Biocatalysed asymmetric synthesis
In the kinetic resolution of a racemic mixture, a chiral catalyst (the enzyme) converts one of the enantiomers into product at a greater reaction rate than the other enantiomer.
The racemic mixture has now been transformed into a mixture of two different compounds, making them separable by normal methodology. The maximum yield in such kinetic resolutions is 50%, since a yield of more than 50% means that some of the wrong isomer has also reacted, giving a lower enantiomeric excess. Such reactions must therefore be terminated before equilibrium is reached. If it is possible to perform such resolutions under conditions where the two substrate enantiomers are racemizing continuously, all substrate may in theory be converted into enantiopure product. This is called dynamic resolution.
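The 50% ceiling can be illustrated with a little bookkeeping. The Python sketch below (an illustration with arbitrary starting amounts, not data from any study cited here) computes the yield and product enantiomeric excess (ee) for a kinetic resolution in which the enzyme prefers one enantiomer:

```python
def kinetic_resolution(fast_converted, slow_converted, start_each=50.0):
    """Yield and product ee for a kinetic resolution of a racemate.

    `start_each` is the starting amount of each enantiomer (racemic mixture);
    `fast_converted` / `slow_converted` are the amounts of the fast- and
    slow-reacting enantiomers turned into product.
    """
    total_start = 2 * start_each
    product = fast_converted + slow_converted
    yield_pct = 100.0 * product / total_start
    ee_pct = 100.0 * (fast_converted - slow_converted) / product
    return yield_pct, ee_pct

# Perfectly selective enzyme, stopped once the fast enantiomer is consumed:
# the theoretical 50% yield ceiling with 100% ee.
print(kinetic_resolution(fast_converted=50.0, slow_converted=0.0))   # (50.0, 100.0)

# Pushing past 50% yield only lets the wrong enantiomer react, lowering the ee.
print(kinetic_resolution(fast_converted=50.0, slow_converted=10.0))  # (60.0, 66.7)
```

Stopping once the fast-reacting enantiomer is consumed caps the yield at 50%; pushing conversion further lowers the enantiomeric excess, which is exactly the trade-off described above.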
In biocatalysed asymmetric synthesis, a non-chiral unit becomes chiral in such a way that the different possible stereoisomers are formed in different quantities. The chirality is introduced into the substrate by the influence of the enzyme, which is chiral. Yeast is a biocatalyst for the enantioselective reduction of ketones.
The Baeyer-Villiger oxidation is another example of a biocatalytic reaction. In one study, a specially designed mutant of Candida antarctica was found to be an effective catalyst for the Michael addition of acrolein with acetylacetone at 20°C in the absence of additional solvent.
Another study demonstrates how racemic nicotine (a mixture of the S- and R-enantiomers, 1 in scheme 3) can be deracemized in a one-pot procedure involving a monoamine oxidase isolated from Aspergillus niger, which is able to oxidize only the amine S-enantiomer to the imine 2, and an ammonia/borane reducing couple, which can reduce the imine 2 back to the amine 1. In this way the S-enantiomer is continuously consumed by the enzyme while the R-enantiomer accumulates. It is even possible to stereoinvert pure S to pure R. | https://www.wikidoc.org/index.php/Biocatalysis | 
62e638cc2aa46fa220a60fb7ec2d9aecce85845f | wikidoc | Biochemistry | Biochemistry
Biochemistry (from the Greek bios, "life", and Egyptian kēme, "earth") is the study of the chemical substances and vital processes occurring in living organisms.
The dawn of biochemistry may have been the discovery of the first enzyme, diastase (today called amylase), in 1833 by Anselme Payen. Eduard Buchner contributed the first demonstration of a complex biochemical process outside of a cell in 1896: alcoholic fermentation in cell extracts of yeast. Although the term “biochemistry” seems to have been first used in 1882, it is generally accepted that the formal coinage of biochemistry occurred in 1903 by Carl Neuberg, a German chemist. Previously, this area would have been referred to as physiological chemistry. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle).
Another significant historic event in biochemistry is the discovery of the gene and its role in the transfer of information in the cell. This part of biochemistry is often called molecular biology. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin, and Maurice Wilkins were instrumental in solving the structure of DNA and suggesting its relationship to the transfer of genetic information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression.
Today, biochemistry is often divided into three main branches. Plant biochemistry involves the study of the biochemistry of autotrophic organisms, including photosynthesis and other plant-specific biochemical processes. General biochemistry encompasses both plant and animal biochemistry. Human/medical/medicinal biochemistry focuses on the biochemistry of humans and medical illnesses.
# Carbohydrates
The functions of carbohydrates include energy storage and providing structure. Sugars are carbohydrates, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule.
## Monosaccharides
The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose, one of the most important carbohydrates, is an example of a monosaccharide. So is fructose, the sugar that gives fruits their sweet taste. Some carbohydrates (especially after condensation to oligo- and polysaccharides) contain less carbon relative to H and O, which still are present in 2:1 (H:O) ratio. Monosaccharides can be grouped into aldoses (having an aldehyde group at the end of the chain, e. g. glucose) and ketoses (having a keto group in their chain; e. g. fructose). Both aldoses and ketoses occur in an equilibrium between the open-chain forms and (starting with chain lengths of C4) cyclic forms. These are generated by bond formation between one of the hydroxyl groups of the sugar chain with the carbon of the aldehyde or keto group to form a hemiacetal bond. This leads to saturated five-membered (in furanoses) or six-membered (in pyranoses) heterocyclic rings containing one O as heteroatom.
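As a quick arithmetic check of the generalized formula CnH2nOn, the short Python sketch below prints the molecular formula and approximate molar mass for a few chain lengths; the atomic masses are standard rounded values, and the helper name is just for illustration.

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def monosaccharide(n):
    """Molecular formula and approximate molar mass for CnH2nOn."""
    formula = f"C{n}H{2 * n}O{n}"
    mass = n * ATOMIC_MASS["C"] + 2 * n * ATOMIC_MASS["H"] + n * ATOMIC_MASS["O"]
    return formula, round(mass, 2)

for n in (3, 5, 6):                # triose, pentose, hexose
    print(monosaccharide(n))       # n=6 gives C6H12O6 (glucose/fructose), ~180.16 g/mol
```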
## Disaccharides
Two monosaccharides can be joined together using dehydration synthesis, in which a hydrogen atom is removed from the end of one molecule and a hydroxyl group (—OH) is removed from the other; the remaining residues are then attached at the sites from which the atoms were removed. The H—OH or H2O is then released as a molecule of water, hence the term dehydration. The new molecule, consisting of two monosaccharides, is called a disaccharide and is conjoined together by a glycosidic or ether bond. The reverse reaction can also occur, using a molecule of water to split up a disaccharide and break the glycosidic bond; this is termed hydrolysis. The most well-known disaccharide is sucrose, ordinary sugar (in scientific contexts, called table sugar or cane sugar to differentiate it from other sugars). Sucrose consists of a glucose molecule and a fructose molecule joined together. Another important disaccharide is lactose, consisting of a glucose molecule and a galactose molecule. As most humans age, the production of lactase, the enzyme that hydrolyzes lactose back into glucose and galactose, typically decreases. This results in lactase deficiency, also called lactose intolerance.
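The loss of one water per bond can also be checked arithmetically: a disaccharide's molar mass is the sum of its two monosaccharides minus one water. A minimal Python sketch, using rounded molar masses:

```python
WATER = 18.015          # g/mol
GLUCOSE = 180.156       # g/mol, C6H12O6
FRUCTOSE = 180.156      # g/mol, C6H12O6
GALACTOSE = 180.156     # g/mol, C6H12O6

def disaccharide_mass(mono1, mono2):
    """Condensation of two monosaccharides releases one molecule of water."""
    return mono1 + mono2 - WATER

print(round(disaccharide_mass(GLUCOSE, FRUCTOSE), 2))   # sucrose  ~342.3
print(round(disaccharide_mass(GLUCOSE, GALACTOSE), 2))  # lactose  ~342.3
```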
Sugar polymers are characterised by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom which can be in equilibrium with the open-chain aldehyde or keto form. If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).
## Oligosaccharides and polysaccharides
When a few (around three to six) monosaccharides are joined together, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses.
Many monosaccharides joined together make a polysaccharide. They can be joined together in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers.
- Cellulose is made by plants and is an important structural component of their cell walls. Humans can neither manufacture nor digest it.
- Glycogen, on the other hand, is an animal carbohydrate; humans and other animals use it as a form of energy storage.
## Use of carbohydrates as an energy source
Glucose is the major energy source in most life forms. For instance, polysaccharides are broken down into their monomers (glycogen phosphorylase removes glucose residues from glycogen). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.
### Glycolysis (anaerobic)
Glucose is mainly metabolized by a very important and ancient ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate; this also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents in the form of converting NAD+ to NADH. This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), NAD+ is regenerated by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.
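The net result of glycolysis described above can be written as a small per-glucose balance sheet; the sketch below simply encodes the figures given in this paragraph.

```python
# Net yield of glycolysis per molecule of glucose, as described above.
glycolysis_net = {"pyruvate": 2, "ATP (net)": 2, "NADH": 2}

for product, count in glycolysis_net.items():
    print(f"{count} x {product} per glucose")
```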
### Aerobic
In aerobic cells with sufficient oxygen, like most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules of acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two more molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (inner mitochondrial membrane in eukaryotes). Thereby, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), for a total of 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
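The ATP arithmetic in this paragraph can be reproduced directly. The sketch below restates that tally using the per-carrier values implied here (3 ATP per mitochondrial NADH, 2 per quinol); it is only a restatement of this paragraph's bookkeeping, since textbook totals vary with the assumed stoichiometries.

```python
# Per-glucose tally as given in the paragraph above.
substrate_level_atp = 2 + 2          # glycolysis + citric acid cycle
mitochondrial_nadh = 8               # NADH feeding the respiratory chain
quinols = 2                          # reduced quinone (via FADH2)

oxidative_atp = 3 * mitochondrial_nadh + 2 * quinols   # 24 + 4 = 28
total_atp = substrate_level_atp + oxidative_atp        # 32
print(oxidative_atp, total_atp)      # 28 32
```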
### Gluconeogenesis
In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate. The liver regenerates the glucose, using a process called gluconeogenesis. This process is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides.
# Proteins
Like carbohydrates, some proteins perform largely structural roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules; they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. In fact, the enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is currently one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. These amazing molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a factor of 10^11 or more: a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process, and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
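The "3,000 years to under a second" comparison is just the 10^11 rate enhancement applied to a time scale, which is easy to check:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

uncatalyzed = 3000 * SECONDS_PER_YEAR      # ~9.5e10 seconds
rate_enhancement = 1e11
catalyzed = uncatalyzed / rate_enhancement
print(f"{uncatalyzed:.2e} s uncatalyzed -> {catalyzed:.2f} s with enzyme")
# 9.47e+10 s uncatalyzed -> 0.95 s with enzyme
```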
In essence, proteins are chains of amino acids. An amino acid consists of a carbon atom bound to four groups. One is an amino group, —NH2, and one is a carboxylic acid group, —COOH (although these exist as —NH3+ and —COO− under physiologic conditions). The third is a simple hydrogen atom. The fourth is commonly denoted "—R" and is different for each amino acid. There are twenty standard amino acids. Some of these have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter.
Amino acids can be joined together via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than around thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.
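Because each peptide bond forms with the loss of one water molecule, the mass of a chain of n amino acids is the sum of the free amino-acid masses minus (n - 1) waters. A minimal Python sketch with a short hypothetical tripeptide (rounded average masses):

```python
WATER = 18.02
# Average masses (g/mol) of a few free amino acids, rounded.
AA_MASS = {"G": 75.07, "A": 89.09, "S": 105.09}

def peptide_mass(sequence):
    """Sum of free amino-acid masses minus one water per peptide bond formed."""
    return sum(AA_MASS[aa] for aa in sequence) - WATER * (len(sequence) - 1)

print(round(peptide_mass("GAS"), 2))   # Gly-Ala-Ser tripeptide: 233.21
```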
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein simply consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-…". Secondary structure is concerned with local morphology. Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single amino acid change can alter the entire structure. The beta chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.
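The sickle-cell example amounts to a single substitution in the primary structure. The sketch below shows the idea on a short, made-up sequence fragment (not the real hemoglobin sequence), using one-letter codes where E is glutamate and V is valine.

```python
def substitute(sequence, position, new_residue):
    """Return the sequence with the residue at `position` (1-based) replaced."""
    idx = position - 1
    return sequence[:idx] + new_residue + sequence[idx + 1:]

# Hypothetical fragment, NOT the real globin sequence.
normal = "MKTAYEAGLK"
mutant = substitute(normal, position=6, new_residue="V")   # Glu6 -> Val
print(normal, "->", mutant)   # MKTAYEAGLK -> MKTAYVAGLK
```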
Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine, and then absorbed. They can then be joined together to make new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to make all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can only synthesize half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. These are the essential amino acids, since it is essential to ingest them. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to make a protein.
A similar process is used to break down proteins. The protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different strategies have evolved in different animals, depending on the animals' needs. Unicellular organisms, of course, simply release the ammonia into the environment. Similarly, bony fish can release the ammonia into the water where it is quickly diluted. In general, mammals convert the ammonia into urea, via the urea cycle.
# Lipids
The term lipid comprises a diverse range of molecules and to some extent is a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids and terpenoids (e.g. retinoids and steroids). Some lipids are linear aliphatic molecules, while others have ring structures. Some are aromatic, while others are not. Some are flexible, while others are rigid.
Most lipids have some polar character in addition to being largely nonpolar. Generally, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere -OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.
Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating, such as butter, cheese, and ghee, are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken down into fatty acids and glycerol, which are the final degradation products of fats and lipids.
# Nucleic acids
A nucleic acid is a complex, high-molecular-weight biochemical macromolecule composed of nucleotide chains that convey genetic information. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). Nucleic acids are found in all living cells and viruses. Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate, the primary energy-carrier molecule found in all living organisms.
Nucleic acid, so called because of its prevalence in cellular nuclei, is the generic name of the family of biopolymers. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group. Different nucleic acid types differ in the specific sugar found in their chain (e.g. DNA or deoxyribonucleic acid contains 2-deoxyriboses). Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
# Relationship to other "molecular-scale" biological sciences
Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas from genetics, molecular biology and biophysics. There has never been a hard line between these disciplines in terms of content and technique, but members of each discipline have in the past been very territorial; today the terms molecular biology and biochemistry are nearly interchangeable. One possible view of the relationship between the fields is the following:
- Biochemistry is the study of the chemical substances and vital processes occurring in living organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are examples of biochemistry.
- Genetics is the study of the effect of genetic differences on organisms. Often this can be inferred by the absence of a normal component (e.g. one gene). The study of "mutants" – organisms which lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knock-out" studies.
- Molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. The central dogma of molecular biology where genetic material is transcribed into RNA and then translated into protein, despite being an oversimplified picture of molecular biology, still provides a good starting point for understanding the field. This picture, however, is undergoing revision in light of emerging novel roles for RNA.
- Chemical Biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules). | Biochemistry
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]
Biochemistry (from Template:Lang-el, bios, "life" and Egyptian kēme, "earth"[1]
The dawn of biochemistry may have been the discovery of the first enzyme, diastase (today called amylase), in 1833 by Anselme Payen. Eduard Buchner contributed the first demonstration of a complex biochemical process outside of a cell in 1896: alcoholic fermentation in cell extracts of yeast. Although the term “biochemistry” seems to have been first used in 1882, it is generally accepted that the formal coinage of biochemistry occurred in 1903 by Carl Neuberg, a German chemist. Previously, this area would have been referred to as physiological chemistry. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle).
Another significant historic event in biochemistry is the discovery of the gene and its role in the transfer of information in the cell. This part of biochemistry is often called molecular biology. In the 1950's, James D. Watson, Francis Crick, Rosalind Franklin, and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi), in the silencing of gene expression.
Today, there are three main types of biochemistry as established by Michael E. Sugar. Plant biochemistry involves the study of the biochemistry of autotrophic organisms such as photosynthesis and other plant specific biochemical processes. General biochemistry encompasses both plant and animal biochemistry. Human/medical/medicinal biochemistry focuses on the biochemistry of humans and medical illnesses.
# Carbohydrates
The function of carbohydrates includes energy storage and providing structure. Sugars are carbohydrates, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule.
## Monosaccharides
The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose, one of the most important carbohydrates, is an example of a monosaccharide. So is fructose, the sugar that gives fruits their sweet taste. Some carbohydrates (especially after condensation to oligo- and polysaccharides) contain less carbon relative to H and O, which still are present in 2:1 (H:O) ratio. Monosaccharides can be grouped into aldoses (having an aldehyde group at the end of the chain, e. g. glucose) and ketoses (having a keto group in their chain; e. g. fructose). Both aldoses and ketoses occur in an equilibrium between the open-chain forms and (starting with chain lengths of C4) cyclic forms. These are generated by bond formation between one of the hydroxyl groups of the sugar chain with the carbon of the aldehyde or keto group to form a hemiacetal bond. This leads to saturated five-membered (in furanoses) or six-membered (in pyranoses) heterocyclic rings containing one O as heteroatom.
## Disaccharides
Two monosaccharides can be joined together using dehydration synthesis, in which a hydrogen atom is removed from the end of one molecule and a hydroxyl group (—OH) is removed from the other; the remaining residues are then attached at the sites from which the atoms were removed. The H—OH or H2O is then released as a molecule of water, hence the term dehydration. The new molecule, consisting of two monosaccharides, is called a disaccharide and is conjoined together by a glycosidic or ether bond. The reverse reaction can also occur, using a molecule of water to split up a disaccharide and break the glycosidic bond; this is termed hydrolysis. The most well-known disaccharide is sucrose, ordinary sugar (in scientific contexts, called table sugar or cane sugar to differentiate it from other sugars). Sucrose consists of a glucose molecule and a fructose molecule joined together. Another important disaccharide is lactose, consisting of a glucose molecule and a galactose molecule. As most humans age, the production of lactase, the enzyme that hydrolyzes lactose back into glucose and galactose, typically decreases. This results in lactase deficiency, also called lactose intolerance.
Sugar polymers are characterised by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom which can be in equilibrium with the open-chain aldehyde or keto form. If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety form a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).
## Oligosaccharides and polysaccharides
When a few (around three to six) monosaccharides are joined together, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses.
Many monosaccharides joined together make a polysaccharide. They can be joined together in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers.
- Cellulose is made by plants and is an important structural component of their cell walls. Humans can neither manufacture nor digest it.
- Glycogen, on the other hand, is an animal carbohydrate; humans and other animals use it as a form of energy storage.
## Use of carbohydrates as an energy source
Glucose is the major energy source in most life forms. For instance, polysaccharides are broken down into their monomers (glycogen phosphorylase removes glucose residues from glycogen). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.
### Glycolysis (anaerobic)
Glucose is mainly metabolized by a very important and ancient ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate; this also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents in the form of converting NAD+ to NADH. This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD is restored by converting the pyruvate to lactate (lactic acid) (e. g. in humans) or to ethanol plus carbon dioxide (e. g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.
### Aerobic
In aerobic cells with sufficient oxygen, like most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two more molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (inner mitochondrial membrane in eukaryotes). Thereby, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), totaling to 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
### Gluconeogenesis
In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate. The liver regenerates the glucose, using a process called gluconeogenesis. This process is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides.
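The "six molecules of ATP" figure reflects the usual textbook accounting of high-energy phosphates consumed per glucose synthesized from two molecules of pyruvate (strictly four ATP plus two GTP; the enzyme assignments below are standard biochemistry, not stated in this article):

```latex
\[ \underbrace{2\,\mathrm{ATP}}_{\text{pyruvate carboxylase}}
+ \underbrace{2\,\mathrm{GTP}}_{\text{PEP carboxykinase}}
+ \underbrace{2\,\mathrm{ATP}}_{\text{phosphoglycerate kinase}}
= 6 \ \text{nucleoside triphosphates, versus the 2 ATP gained in glycolysis} \]
```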
# Proteins
Like carbohydrates, some proteins perform largely structural roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules; they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. In fact, the enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is currently one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. These molecules recognize specific reactant molecules called substrates, and then catalyze the reaction between them. By lowering the activation energy, an enzyme can speed up a reaction by a factor of 10^11 or more: a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process, and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
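The two numbers quoted above are mutually consistent, as a rough order-of-magnitude check (not a measured value) shows: three thousand years is on the order of 10^11 seconds, so a 10^11-fold rate enhancement compresses millennia into about a second.

```latex
\[ 3000 \ \mathrm{yr} \times 3.15 \times 10^{7} \ \mathrm{s/yr} \approx 9.5 \times 10^{10} \ \mathrm{s} \approx 10^{11} \ \mathrm{s} \]
```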
In essence, proteins are chains of amino acids. An amino acid consists of a carbon atom bound to four groups. One is an amino group, —NH2, and one is a carboxylic acid group, —COOH (although these exist as —NH3+ and —COO− under physiologic conditions). The third is a simple hydrogen atom. The fourth is commonly denoted "—R" and is different for each amino acid. There are twenty standard amino acids. Some of these have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter.
File:Amino acids 1.png
Amino acids can be joined together via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than around thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.
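Schematically, the condensation of two amino acids into a dipeptide can be written as below, where R1 and R2 stand for the two side chains (a generic textbook scheme, not specific to any particular pair of amino acids):

```latex
\[ \mathrm{H_2N\text{-}CHR_1\text{-}COOH} + \mathrm{H_2N\text{-}CHR_2\text{-}COOH}
\longrightarrow
\mathrm{H_2N\text{-}CHR_1\text{-}CO\text{-}NH\text{-}CHR_2\text{-}COOH} + \mathrm{H_2O} \]
```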
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein simply consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-…". Secondary structure is concerned with local morphology. Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helices can be seen in the hemoglobin schematic above. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids; in fact, a single change can alter the entire structure. The beta chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.
Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine, and then absorbed. They can then be joined together to make new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to make all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can only synthesize about half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. These are the essential amino acids, since it is essential to ingest them. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to make a protein.
A similar process is used to break down proteins: the protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different strategies have evolved in different animals, depending on the animals' needs. Unicellular organisms, of course, simply release the ammonia into the environment. Similarly, bony fish can release the ammonia into the water where it is quickly diluted. In general, mammals convert the ammonia into urea, via the urea cycle.
# Lipids
The term lipid comprises a diverse range of molecules and to some extent is a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids and terpenoids (e.g. retinoids and steroids). Some lipids are linear aliphatic molecules, while others have ring structures. Some are aromatic, while others are not. Some are flexible, while others are rigid.
Most lipids have some polar character in addition to being largely nonpolar. Generally, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere -OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.
Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating, like butter, cheese and ghee, are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken down into fatty acids and glycerol, which are the final degradation products of fats and lipids.
# Nucleic acids
A nucleic acid is a complex, high-molecular-weight biochemical macromolecule composed of nucleotide chains that convey genetic information. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). Nucleic acids are found in all living cells and viruses. Aside from serving as the genetic material of the cell, nucleotides (the monomers of nucleic acids) often play a role as second messengers, and one of them, adenosine triphosphate, is the primary energy-carrier molecule found in all living organisms.
Nucleic acid, so called because of its prevalence in cellular nuclei, is the generic name of this family of biopolymers. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group. Different nucleic acid types differ in the specific sugar found in their chain (e.g. DNA, or deoxyribonucleic acid, contains 2-deoxyribose). Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
# Relationship to other "molecular-scale" biological sciences
Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas from genetics, molecular biology and biophysics. There has never been a hard line between these disciplines in terms of content and technique, but members of each discipline have in the past been very territorial; today the terms molecular biology and biochemistry are nearly interchangeable. The relationships between the fields can be summarized as follows:
- Biochemistry is the study of the chemical substances and vital processes occurring in living organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are examples of biochemistry.
- Genetics is the study of the effect of genetic differences on organisms. Often this can be inferred by the absence of a normal component (e.g. one gene), through the study of "mutants" – organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knock-out" studies.
- Molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. The central dogma of molecular biology where genetic material is transcribed into RNA and then translated into protein, despite being an oversimplified picture of molecular biology, still provides a good starting point for understanding the field. This picture, however, is undergoing revision in light of emerging novel roles for RNA.
- Chemical Biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).
Biodiversity
Biodiversity is the variation of taxonomic life forms within a given ecosystem, biome or for the entire Earth. Biodiversity is often used as a measure of the health of biological systems.
# Evolution and meaning
Biodiversity is a neologism and a portmanteau word, from biology and diversity. The Science Division of The Nature Conservancy used the term "natural diversity" in a 1975 study, "The Preservation of Natural Diversity." The term biological diversity was used even before that by conservation scientists like Robert E. Jenkins and Thomas Lovejoy. The word biodiversity itself may have been coined by W.G. Rosen in 1985 while planning the National Forum on Biological Diversity organized by the National Research Council (NRC) which was to be held in 1986, and first appeared in a publication in 1988 when entomologist E. O. Wilson used it as the title of the proceedings of that forum. The word biodiversity was deemed more effective in terms of communication than biological diversity.
Since 1986 the terms and the concept have achieved widespread use among biologists, environmentalists, political leaders, and concerned citizens worldwide. It is generally used to equate to a concern for the natural environment and nature conservation. This use has coincided with the expansion of concern over extinction observed in the last decades of the 20th century.
The term "natural heritage" pre-dates "biodiversity", though it is a less scientific term and more easily comprehended in some ways by the wider audience interested in conservation. "Natural Heritage" was used when Jimmy Carter set up the Georgia Heritage Trust while he was governor of Georgia; Carter's trust dealt with both natural and cultural heritage. It would appear that Carter picked the term up from Lyndon Johnson, who used it in a 1966 Message to Congress. "Natural Heritage" was picked up by the Science Division of The Nature Conservancy when, under Jenkins, it launched in 1974 the network of State Natural Heritage Programs. When this network was extended outside the USA, the term "Conservation Data Center" was suggested by Guillermo Mann and came to be preferred.
# Definitions
The most straightforward definition is "variation of life at all levels of biological organization".
A second definition holds that biodiversity is a measure of the relative diversity among organisms present in different ecosystems. "Diversity" in this definition includes diversity within a species and among species, and comparative diversity among ecosystems.
A third definition that is often used by ecologists is the "totality of genes, species, and ecosystems of a region". An advantage of this definition is that it seems to describe most circumstances and present a unified view of the traditional three levels at which biodiversity has been identified:
- genetic diversity - diversity of genes within a species. There is a genetic variability among the populations and the individuals of the same species. (See also population genetics.)
- species diversity - diversity among species in an ecosystem. "Biodiversity hotspots" are excellent examples of species diversity.
- ecosystem diversity - diversity at a higher level of organization, the ecosystem. To do with the variety of ecosystems on Earth.
This third definition, which conforms to the traditional five organization layers in biology, provides additional justification for multilevel approaches.
The 1992 United Nations Earth Summit in Rio de Janeiro defined "biodiversity" as "the variability among living organisms from all sources, including, 'inter alia', terrestrial, marine, and other aquatic ecosystems, and the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems". This is, in fact, the closest thing to a single legally accepted definition of biodiversity, since it is the definition adopted by the United Nations Convention on Biological Diversity.
If the gene is the fundamental unit of natural selection, according to E. O. Wilson, the real biodiversity is genetic diversity. For geneticists, biodiversity is the diversity of genes and organisms. They study processes such as mutations, gene exchanges, and genome dynamics that occur at the DNA level and generate evolution.
For ecologists, biodiversity is also the diversity of durable interactions among species. It not only applies to species, but also to their immediate environment (biotope) and their larger ecoregion. In each ecosystem, living organisms are part of a whole, interacting with not only other organisms, but also with the air, water, and soil that surround them.
# Measurement
Biodiversity is a broad concept, so a variety of objective measures have been created in order to empirically measure biodiversity. Each measure of biodiversity relates to a particular use of the data.
For practical conservationists, this measure should quantify a value that is broadly shared among locally affected people. For others, a more economically defensible definition should allow the ensuring of continued possibilities for both adaptation and future use by people, assuring environmental sustainability.
As a consequence, biologists argue that this measure is likely to be associated with the variety of genes. Since it cannot always be said which genes are more likely to prove beneficial, the best choice for conservation is to assure the persistence of as many genes as possible. For ecologists, this latter approach is sometimes considered too restrictive, as it prohibits ecological succession.
Biodiversity is usually plotted as taxonomic richness of a geographic area, with some reference to a temporal scale. Whittaker described three common metrics used to measure species-level biodiversity, encompassing attention to species richness or species evenness (formulas for the Simpson and Shannon indices are given after the list):
- Species richness - the least sophisticated of the indices available.
- Simpson index
- Shannon index
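For a community of S species in which species i makes up a proportion p_i of all individuals counted, the two indices named above are conventionally defined as follows (standard formulations; the notation is not taken from this article):

```latex
\[ \text{Shannon index:}\quad H' = -\sum_{i=1}^{S} p_i \ln p_i
\qquad
\text{Simpson index:}\quad D = \sum_{i=1}^{S} p_i^{2} \]
```

Simpson's index is often reported as 1 - D or 1/D so that larger values correspond to higher diversity.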
There are three other indices used by ecologists, which describe diversity at different spatial scales (a brief computational sketch follows this list):
- Alpha diversity refers to diversity within a particular area, community or ecosystem, and is measured by counting the number of taxa within the ecosystem (usually species)
- Beta diversity is species diversity between ecosystems; this involves comparing the number of taxa that are unique to each of the ecosystems.
- Gamma diversity is a measure of the overall diversity for different ecosystems within a region.
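A minimal computational sketch of these measures is given below. The site names and abundance counts are hypothetical, chosen only to illustrate the calculations, and beta diversity is computed simply as the set of taxa unique to each site, following the informal definition above.

```python
from math import log

def species_richness(abundances):
    """Number of taxa recorded with at least one individual."""
    return sum(1 for n in abundances if n > 0)

def shannon_index(abundances):
    """Shannon index H' = -sum(p_i * ln(p_i)) over proportional abundances p_i."""
    total = sum(abundances)
    return -sum((n / total) * log(n / total) for n in abundances if n > 0)

def simpson_index(abundances):
    """Simpson's index D = sum(p_i^2); often reported as 1 - D or 1/D."""
    total = sum(abundances)
    return sum((n / total) ** 2 for n in abundances if n > 0)

# Hypothetical survey data: individuals counted per species at two sites.
sites = {
    "site_A": {"oak": 40, "beech": 30, "birch": 30},
    "site_B": {"pine": 90, "juniper": 5, "oak": 5},
}

for name, counts in sites.items():
    a = list(counts.values())
    print(name,
          "richness:", species_richness(a),
          "Shannon:", round(shannon_index(a), 3),
          "Simpson:", round(simpson_index(a), 3))

# Alpha diversity: taxon count within each individual site.
alpha = {name: species_richness(list(c.values())) for name, c in sites.items()}

# Gamma diversity: taxon count for the pooled region.
gamma = len(set().union(*(set(c) for c in sites.values())))

# Beta diversity in the informal sense used above: taxa unique to each site.
unique = {name: set(c) - set().union(*(set(o) for other, o in sites.items() if other != name))
          for name, c in sites.items()}

print("alpha:", alpha, "gamma:", gamma, "unique taxa per site:", unique)
```

With these made-up counts, both sites have a richness of three, but the more even abundances at site_A give it a noticeably higher Shannon value and a lower Simpson D than site_B.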
# Distribution
Biodiversity is not distributed evenly on Earth. It is consistently richer in the tropics and in other localized regions such as the California Floristic Province. As one approaches polar regions one generally finds fewer species. Flora and fauna diversity depends on climate, altitude, soils and the presence of other species. As of 2006, large numbers of the Earth's species are formally classified as rare, endangered or threatened; moreover, most scientists estimate that there are millions more species actually endangered which have not yet been formally recognized. About 40 percent of the 40,177 species assessed using the IUCN Red List criteria are now listed as threatened with extinction - a total of 16,119 species.
A biodiversity hotspot is a region with a high level of endemic species. These biodiversity hotspots were first identified by Dr. Norman Myers in two articles in the scientific journal The Environmentalist. Hotspots unfortunately tend to occur near areas of dense human habitation, leading to threats to their many endemic species. As a result of the pressures of the rapidly growing human population, human activity in many of these areas is increasing dramatically. Most of these hotspots are located in the tropics and most of them are forests.
For example, Brazil's Atlantic Forest contains roughly 20,000 plant species, 1350 vertebrates, and millions of insects, about half of which occur nowhere else in the world. The island of Madagascar, including the unique Madagascar dry deciduous forests and lowland rainforests, possesses a very high ratio of endemic species. Since the island separated from mainland Africa 65 million years ago, most of its species and ecosystems have evolved independently, producing unique species different from those in other parts of Africa.
Many regions of high biodiversity (as well as high endemism) arise from very specialized habitats which require unusual adaptation mechanisms. For example, the peat bogs of Northern Europe and alvar regions such as the Stora Alvaret on Öland, Sweden, host a large diversity of plants and animals, many of which are not found elsewhere.
# Evolution
Biodiversity found on Earth today is the result of 4 billion years of evolution. The origin of life is not well known to science, though limited evidence suggests that life may already have been well established only a few hundred million years after the formation of the Earth. Until approximately 600 million years ago, all life consisted of bacteria and similar single-celled organisms.
The history of biodiversity during the Phanerozoic (the last 540 million years), starts with rapid growth during the Cambrian explosion—a period during which nearly every phylum of multicellular organisms first appeared. Over the next 400 million years or so, global diversity showed little overall trend, but was marked by periodic, massive losses of diversity classified as mass extinction events.
The apparent biodiversity shown in the fossil record suggests that the last few million years include the period of greatest biodiversity in the Earth's history. However, not all scientists support this view, since there is considerable uncertainty as to how strongly the fossil record is biased by the greater availability and preservation of recent geologic sections. Some (e.g. Alroy et al. 2001) argue that, when corrected for sampling artifacts, modern biodiversity is not much different from biodiversity 300 million years ago. Estimates of the present global macroscopic species diversity vary from 2 million to 100 million species, with a best estimate of somewhere near 10 million.
Most biologists agree however that the period since the emergence of humans is part of a new mass extinction, the Holocene extinction event, caused primarily by the impact humans are having on the environment. At present, the number of species estimated to have gone extinct as a result of human action is still far smaller than the numbers observed during the major mass extinctions of the geological past. However, it has been argued that the present rate of extinction is sufficient to create a major mass extinction in less than 100 years. Others dispute this and suggest that the present rate of extinctions could be sustained for many thousands of years before the loss of biodiversity matches the more than 20% losses seen in past global extinction events.
New species are regularly discovered (on average about three new species of birds each year) and many, though discovered, are not yet classified (an estimate states that about 40% of freshwater fish from South America are not yet classified). Most of the terrestrial diversity is found in tropical forests.
# Benefits
There is a multitude of benefits of biodiversity, in the sense of one diverse group of organisms aiding another, such as:
## Resistance to catastrophe
Monoculture, the lack of biodiversity, was a contributing factor to several agricultural disasters in history, including the Irish Potato Famine, the European wine industry collapse in the late 1800s, and the US Southern Corn Leaf Blight epidemic of 1970.
See also: Agricultural biodiversity
Higher biodiversity also controls the spread of certain diseases, as pathogens such as viruses must adapt to each new host species.
## Food and drink
Biodiversity provides food for humans. About 80 percent of our food supply comes from just 20 kinds of plants. Although many kinds of animals are utilized as food, again most consumption is focused on a few species.
There is vast untapped potential for increasing the range of food products suitable for human consumption, provided that the high present extinction rate can be stopped.
## Medicines
A significant proportion of drugs are derived, directly or indirectly, from biological sources; in most cases these medicines can not presently be synthesized in a laboratory setting. Moreover, only a small proportion of the total diversity of plants has been thoroughly investigated for potential sources of new drugs. Many medicines and antibiotics are also derived from microorganisms.
## Industrial materials
A wide range of industrial materials are derived directly from biological resources. These include building materials, fibers, dyes, resins, gums, adhesives, rubber and oil. There is enormous potential for further research into sustainably utilizing materials from a wider diversity of organisms.
## Intellectual value
Through the field of bionics, many technological advances have been made that would likely not have occurred without a rich biodiversity. (See also: Bionics)
## Better crop-varieties
For certain economically important crops (e.g. food crops), wild varieties of the domesticated species can be reintroduced to form a better variety than the previous (domesticated) one. The potential economic impact is gigantic: even for crops as common as the potato, which was bred from only one variety brought back from the Inca, much more can still be gained from these wild relatives. Wild varieties of the potato are, however, expected to suffer enormously from the effects of climate change; a report by the Consultative Group on International Agricultural Research (CGIAR) describes the resulting huge economic loss.
Rice, which has been improved by humans for thousands of years, can through the same process regain some of the nutritional value that has been lost along the way (a project to do just this is already being carried out).
## Other ecological services
Biodiversity provides many ecosystem services that are often not readily visible. It plays a part in regulating the chemistry of our atmosphere and water supply. Biodiversity is directly involved in recycling nutrients and providing fertile soils. Experiments with controlled environments have shown that humans cannot easily build ecosystems to support human needs; for example insect pollination cannot be mimicked by man-made construction, and that activity alone represents tens of billions of dollars in ecosystem services per annum to mankind.
## Leisure, cultural and aesthetic value
Many people derive value from biodiversity through leisure activities such as enjoying a walk in the countryside, birdwatching or natural history programs on television.
Biodiversity has inspired musicians, painters, sculptors, writers and other artists. Many cultural groups view themselves as an integral part of the natural world and show respect for other living organisms.
# Threats
During the last century, erosion of biodiversity has been increasingly observed. Some studies show that about one in eight known plant species is threatened with extinction. Some estimates put the loss at up to 140,000 species per year (based on species-area theory), a figure that remains subject to discussion. This figure indicates unsustainable ecological practices, because only a small number of species come into being each year. Almost all scientists acknowledge that the rate of species loss is greater now than at any time in human history, with extinctions occurring at rates hundreds of times higher than background extinction rates.
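Estimates of this kind typically rest on the empirical species-area relationship, in which the number of species S found in an area A scales roughly as a power law; applied to habitat loss, it yields the fraction of species expected to survive. This is a standard back-of-the-envelope model: the constant c and the exponent z (commonly taken between about 0.15 and 0.35) are fitted empirically and are not given in this article.

```latex
\[ S = c\,A^{z}
\qquad\Longrightarrow\qquad
\frac{S_{\mathrm{new}}}{S_{\mathrm{old}}} = \left(\frac{A_{\mathrm{new}}}{A_{\mathrm{old}}}\right)^{z} \]
```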
## Destruction of habitats
Most of the species extinctions from 1000 AD to 2000 AD are due to human activities, in particular destruction of plant and animal habitats. Raised rates of extinction are being driven by human consumption of organic resources, especially related to tropical forest destruction. While most of the species that are becoming extinct are not food species, their biomass is converted into human food when their habitat is transformed into pasture, cropland, and orchards. It is estimated that more than 40% of the Earth's biomass is tied up in only the few species that represent humans, livestock and crops. Because an ecosystem decreases in stability as its species are made extinct, these studies warn that the global ecosystem is destined for collapse if it is further reduced in complexity. Factors contributing to loss of biodiversity are: overpopulation, deforestation, pollution (air pollution, water pollution, soil contamination) and global warming or climate change, driven by human activity. These factors, while all stemming from overpopulation, produce a cumulative impact upon biodiversity.
Some characterize loss of biodiversity not as ecosystem degradation but by conversion to trivial standardized ecosystems (e.g., monoculture following deforestation). In some countries lack of property rights or access regulation to biotic resources necessarily leads to biodiversity loss (degradation costs having to be supported by the community).
A September 14, 2007 study conducted by the National Science Foundation found that biodiversity and genetic diversity are dependent upon each other: diversity within a species is necessary to maintain diversity among species, and vice versa. According to the lead researcher in the study, Dr. Richard Lankau, "If any one type is removed from the system, the cycle can break down, and the community becomes dominated by a single species."
## Exotic species
The rich diversity of unique species across many parts of the world exists only because they are separated by barriers, particularly large rivers, seas, oceans, mountains and deserts, from other species of other land masses, particularly the highly fecund, ultra-competitive, generalist "super-species". These are barriers that could never be crossed by natural processes, except over many millions of years through continental drift. However, humans have invented ships and airplanes, and now have the power to bring into contact species that have never met in their evolutionary history, and on a time scale of days, unlike the centuries that have historically accompanied major animal migrations.
The widespread introduction of exotic species by humans is a potent threat to biodiversity. When exotic species are introduced to ecosystems and establish self-sustaining populations, the endemic species in those ecosystems, which have not evolved to cope with the exotic species, may not survive. The exotic organisms may be predators, parasites, or simply aggressive species that deprive indigenous species of nutrients, water and light. These exotic or invasive species often have features, due to their evolutionary background and environment, that make them competitive, and that similarly leave endemic species defenceless and/or uncompetitive against them.
As a consequence of the above, if humans continue to combine species from different ecoregions, there is the potential that the world's ecosystems will end up dominated by a relatively few aggressive, cosmopolitan "super-species".
Declines in amphibian populations have been observed since the 1980s; these declines might critically threaten global biodiversity.
## Genetic pollution
Purebred, naturally evolved, region-specific wild species can be seriously threatened with extinction through the process of genetic pollution, i.e. uncontrolled hybridization, introgression and genetic swamping, which leads to homogenization or replacement of local genotypes as a result of a numerical and/or fitness advantage of an introduced plant or animal. Nonnative species can bring about a form of extinction of native plants and animals by hybridization and introgression, either through purposeful introduction by humans or through habitat modification bringing previously isolated species into contact. These phenomena can be especially detrimental for rare species coming into contact with more abundant ones: the abundant species can interbreed with them, swamping the rarer gene pool and creating hybrids, thus driving the original purebred native stock to complete extinction. Attention has to be focused on the extent of this under-appreciated problem, which is not always apparent from morphological (outward appearance) observations alone. Some degree of gene flow may be a normal, evolutionarily constructive process, and not all constellations of genes and genotypes can be preserved; however, hybridization with or without introgression may nevertheless threaten a rare species' existence.
## Genetic pollution and food security
In agriculture and animal husbandry, the Green Revolution popularized the use of conventional hybridization to increase yield many-fold. Often the handful of hybridized breeds of plants and animals originated in developed countries and were further hybridized with local varieties in the rest of the developing world to create high-yield strains resistant to local climate and diseases. Local governments and industry have since been pushing hybridization with such zeal that several wild and indigenous breeds, which evolved locally over thousands of years with high resistance to local climatic extremes and immunity to diseases, have already become extinct or are in grave danger of becoming so in the near future. Owing to complete disuse because of unprofitability, and to uncontrolled intentional and unintentional cross-pollination and crossbreeding (genetic pollution), formerly huge gene pools of various wild and indigenous breeds have collapsed, causing widespread genetic erosion and genetic pollution and resulting in a great loss of genetic diversity and biodiversity as a whole.
A genetically modified organism (GMO) is an organism whose genetic material has been altered using the genetic engineering techniques generally known as recombinant DNA technology. Genetic engineering has today become another serious and alarming cause of genetic pollution, because artificially created and genetically engineered plants and animals, which could never have evolved in nature even with conventional hybridization, can live and breed on their own and, more alarmingly, interbreed with naturally evolved wild varieties. Genetically modified (GM) crops have today become a common source of genetic pollution, not only of wild varieties but also of other domesticated varieties derived from relatively natural hybridization.
It is argued that genetic erosion, coupled with genetic pollution, is destroying this needed unique genetic base, thereby creating an unforeseen hidden crisis which will result in a severe threat to our food security in the future, when diverse genetic material will no longer exist with which to further improve or hybridize weakening food crops and livestock against more resistant diseases and climatic changes.
# Management
The conservation of biological diversity has become a global concern. Although not everybody agrees on the extent and significance of the current extinction, most consider biodiversity essential.
There are basically two main types of conservation options, in-situ conservation and ex-situ conservation. In-situ is usually seen as the ideal conservation strategy. However, its implementation is sometimes infeasible. For example, destruction of rare or endangered species' habitats sometimes requires ex-situ conservation efforts. Furthermore, ex-situ conservation can provide a backup solution to in-situ conservation projects. Some believe both types of conservation are required to ensure proper preservation.
An example of an in-situ conservation effort is the setting-up of protection areas. Examples of ex-situ conservation efforts, by contrast, would be storing germplasm in seed banks, or growing the Wollemi Pine in nurseries. Such efforts allow the preservation of large populations of plants with minimal genetic erosion.
At national levels a Biodiversity Action Plan is sometimes prepared to state the protocols necessary to protect an individual species. Usually this plan also details extant data on the species and its habitat. In the USA such a plan is called a Recovery Plan.
The threat to biological diversity was among the hot topics discussed at the UN World Summit for Sustainable Development, in hope of seeing the foundation of a Global Conservation Trust to help maintain plant collections.
# Judicial status
Biodiversity is beginning to be evaluated and its evolution analysed (through observations, inventories and conservation efforts), as well as being taken into account in political and judicial decisions.
- The relationship between law and ecosystems is very ancient and has consequences for biodiversity. It is related to property rights, both private and public. It can define protection for threatened ecosystems, but also some rights and duties (for example, fishing rights, hunting rights).
- Law regarding species is a more recent issue. It defines species that must be protected because they may be threatened by extinction. Some people question application of these laws. The U.S. Endangered Species Act is an example of an attempt to address the "law and species" issue.
- Laws regarding gene pools are only about a century old. While the genetic approach is not new (domestication, traditional plant selection methods), progress made in the genetic field in the past 20 years has led to a tightening of laws in this field. With the new technologies of genetic analysis and genetic engineering, people are turning to gene patenting, process patenting, and a totally new concept of genetic resources. A very hot debate today seeks to define whether the resource is the gene, the organism itself, or its DNA.
The 1972 UNESCO convention established that biological resources, such as plants, were the common heritage of mankind. These rules probably inspired the creation of great public banks of genetic resources, located outside the source-countries.
New global agreements (e.g. the Convention on Biological Diversity) now give sovereign national rights over biological resources (not property). The idea of static conservation of biodiversity is disappearing and being replaced by the idea of dynamic conservation, through the notion of resource and innovation.
The new agreements commit countries to conserve biodiversity, develop resources for sustainability and share the benefits resulting from their use. Under new rules, it is expected that bioprospecting or collection of natural products has to be allowed by the biodiversity-rich country, in exchange for a share of the benefits.
Sovereignty principles can rely upon what are better known as Access and Benefit Sharing Agreements (ABAs). The spirit of the Convention on Biological Diversity implies prior informed consent between the source country and the collector, to establish which resource will be used and for what, and to settle on a fair agreement on benefit sharing. Bioprospecting can become a type of biopiracy when those principles are not respected.
Uniform approval for use of biodiversity as a legal standard has not been achieved, however. At least one legal commentator has argued that biodiversity should not be used as a legal standard, arguing that the multiple layers of scientific uncertainty inherent in the concept of biodiversity will cause administrative waste and increase litigation without promoting preservation goals. See Fred Bosselman, A Dozen Biodiversity Puzzles, 12 N.Y.U. Environmental Law Journal 364 (2004)
# Criticisms
## Food
The notion that there is 'vast untapped potential' for reducing mankind's dependence on a relatively small number of domesticated plant and animal species should be challenged. Jared Diamond, based on studies of the domestication of plants and animals, argued that the rarity of species suitable for domestication, and their occurrence in just a few parts of the world, determined the limited number of locations in which major civilizations could arise. In recent times there have been many studies of minor food sources, but none of these sources have subsequently become major food crops.
## Founder effect
The field of biodiversity research (inevitably) suffers from natural human egocentric "myopic" cognitive biases. It has often been criticized for being overly defined by the personal interests of its founders (i.e. a focus on terrestrial mammals), giving it a narrow scope rather than extending to other areas where it could be useful. This is termed the founder effect by Norse and Irish (1996). (This was a play on words: the founder effect in ecology typically refers to the genetic outcome when a small population establishes an isolated breeding group.) France and Rigg reviewed the biodiversity literature in 1998 and found that there was a significant lack of papers studying marine ecosystems, leading them to dub marine biodiversity research the sleeping hydra. More work has been carried out for accessible, diverse coastal systems such as coral reefs than for inaccessible, species-poor deep sea areas.
It has been easier to mobilise public opinion and national legislation for the terrestrial realm, which has higher visibility and falls within countries' territorial boundaries. Marine conservation involves having to pioneer new and international mechanisms of protection as well as solving methodological problems in marine biology relating to marine ecosystem classification and data-gathering on some of the earth's most difficult species to access and monitor.
## Size bias
Biodiversity researcher Sean Nee points out that the vast majority of Earth's biodiversity is microbial, and that contemporary biodiversity physics is "firmly fixated on the visible world" (Nee uses "visible" as a synonym for macroscopic). For example, microbial life is very much more metabolically and environmentally diverse than multicellular life (see extremophile). Nee has stated: "On the tree of life, based on analyses of small-subunit ribosomal RNA, visible life consists of barely noticeable twigs. This should not be surprising — invisible life had at least three billion years to diversify and explore evolutionary space before the 'visibles' arrived".
The size bias is not restricted to consideration of microbes. Entomologist Nigel Stork states that "to a first approximation, all multicellular species on Earth are insects".
The reply to this, however, is that biodiversity conservation has never focused exclusively on visible (in this sense) species. From the very beginning, the classification and conservation of natural communities or ecosystem types has been a central part of the effort. The thought behind this has been that since invisible (in this sense) diversity is, due to lack of taxonomy, impossible to treat in the same manner as visible diversity, the best that can be done is to preserve a diversity of ecosystem types, thereby preserving as well as possible the diversity of invisible organisms.
Biodiversity is the variation of taxonomic life forms within a given ecosystem, biome or for the entire Earth. Biodiversity is often used as a measure of the health of biological systems.
# Evolution and meaning
Template:Wiktionarypar
Biodiversity is a neologism and a portmanteau word, from biology and diversity. The Science Division of The Nature Conservancy used the term "natural diversity" in a 1975 study, "The Preservation of Natural Diversity." The term biological diversity was used even before that by conservation scientists like Robert E. Jenkins and Thomas Lovejoy. The word biodiversity itself may have been coined by W.G. Rosen in 1985 while planning the National Forum on Biological Diversity organized by the National Research Council (NRC) which was to be held in 1986, and first appeared in a publication in 1988 when entomologist E. O. Wilson used it as the title of the proceedings[1] of that forum.[2] The word biodiversity was deemed more effective in terms of communication than biological diversity
Since 1986 the terms and the concept have achieved widespread use among biologists, environmentalists, political leaders, and concerned citizens worldwide. It is generally used to equate to a concern for the natural environment and nature conservation. This use has coincided with the expansion of concern over extinction observed in the last decades of the 20th century.
The term "natural heritage" pre-dates "biodiversity", though it is a less scientific term and more easily comprehended in some ways by the wider audience interested in conservation. "Natural Heritage" was used when Jimmy Carter set up the Georgia Heritage Trust while he was governor of Georgia; Carter's trust dealt with both natural and cultural heritage. It would appear that Carter picked the term up from Lyndon Johnson, who used it in a 1966 Message to Congress. "Natural Heritage" was picked up by the Science Division of The Nature Conservancy when, under Jenkins, it launched in 1974 the network of State Natural Heritage Programs. When this network was extended outside the USA, the term "Conservation Data Center" was suggested by Guillermo Mann and came to be preferred.
# Definitions
The most straightforward definition is "variation of life at all levels of biological organization".[3]
A second definition holds that biodiversity is a measure of the relative diversity among organisms present in different ecosystems. "Diversity" in this definition includes diversity within a species and among species, and comparative diversity among ecosystems.
A third definition that is often used by ecologists is the "totality of genes, species, and ecosystems of a region". An advantage of this definition is that it seems to describe most circumstances and present a unified view of the traditional three levels at which biodiversity has been identified:
- genetic diversity - diversity of genes within a species. There is a genetic variability among the populations and the individuals of the same species. (See also population genetics.)
- species diversity - diversity among species in an ecosystem. "Biodiversity hotspots" are excellent examples of species diversity.
- ecosystem diversity - diversity at a higher level of organization, the ecosystem. To do with the variety of ecosystems on Earth.
This third definition, which conforms to the traditional five organization layers in biology, provides additional justification for multilevel approaches.
The 1992 United Nations Earth Summit in Rio de Janeiro defined "biodiversity" as "the variability among living organisms from all sources, including, 'inter alia', terrestrial, marine, and other aquatic ecosystems, and the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems". This is, in fact, the closest thing to a single legally accepted definition of biodiversity, since it is the definition adopted by the United Nations Convention on Biological Diversity.
If the gene is the fundamental unit of natural selection, according to E. O. Wilson, the real biodiversity is genetic diversity. For geneticists, biodiversity is the diversity of genes and organisms. They study processes such as mutations, gene exchanges, and genome dynamics that occur at the DNA level and generate evolution.
For ecologists, biodiversity is also the diversity of durable interactions among species. It not only applies to species, but also to their immediate environment (biotope) and their larger ecoregion. In each ecosystem, living organisms are part of a whole, interacting with not only other organisms, but also with the air, water, and soil that surround them.
# Measurement
Template:Split2
Biodiversity is a broad concept, so a variety of objective measures have been created in order to empirically measure biodiversity. Each measure of biodiversity relates to a particular use of the data.
For practical conservationists, this measure should quantify a value that is broadly shared among locally affected people. For others, a more economically defensible definition should allow the ensuring of continued possibilities for both adaptation and future use by people, assuring environmental sustainability.
As a consequence, biologists argue that this measure is likely to be associated with the variety of genes. Since it cannot always be said which genes are more likely to prove beneficial, the best choice for conservation is to assure the persistence of as many genes as possible. For ecologists, this latter approach is sometimes considered too restrictive, as it prohibits ecological succession.
Biodiversity is usually plotted as taxonomic richness of a geographic area, with some reference to a temporal scale. Whittaker[4] described three common metrics used to measure species-level biodiversity, encompassing attention to species richness or species evenness:
- Species richness - the least sophisticated of the indices available.
- Simpson index
- Shannon index
There are three other indices which are used by ecologists:
- Alpha diversity refers to diversity within a particular area, community or ecosystem, and is measured by counting the number of taxa within the ecosystem (usually species)
- Beta diversity is species diversity between ecosystems; this involves comparing the number of taxa that are unique to each of the ecosystems.
- Gamma diversity is a measure of the overall diversity for different ecosystems within a region.
# Distribution
Biodiversity is not distributed evenly on Earth. It is consistently richer in the tropics and in other localized regions such as the California Floristic Province. As one approaches polar regions one generally finds fewer species. Flora and fauna diversity depends on climate, altitude, soils and the presence of other species. In the year 2006 large numbers of the Earth's species are formally classified as rare or endangered or threatened species; moreover, most scientists estimate that there are millions more species actually endangered which have not yet been formally recognized. About 40 percent of the 40,177 species assessed using the IUCN Red List criteria, are now listed as threatened species with extinction - a total of 16,119 species.[4]
A biodiversity hotspot is a region with a high level of endemic species. These biodiversity hotspots were first identified by Dr. Norman Myers in two articles in the scientific journal The Environmentalist.[5][6] Hotspots unfortunately tend to occur near areas of dense human habitation, leading to threats to their many endemic species. As a result of the pressures of the rapidly growing human population, human activity in many of these areas is increasing dramatically. Most of these hotspots are located in the tropics and most of them are forests.
For example, Brazil's Atlantic Forest contains roughly 20,000 plant species, 1350 vertebrates, and millions of insects, about half of which occur nowhere else in the world. The island of Madagascar including the unique Madagascar dry deciduous forests and lowland rainforests possess a very high ratio of species endemism and biodiversity, since the island separated from mainland Africa 65 million years ago, most of the species and ecosystems have evolved independently producing unique species different than other parts of Africa.
Many regions of high biodiversity (as well as high endemism) arise from very specialized habitats which require unusual adaptation mechanisms. For example the peat bogs of Northern Europe and the alvar regions such as the Stora Alvaret on Oland, Sweden host a large diversity of plants and animals, many of which are not found elsewhere.
# Evolution
Biodiversity found on Earth today is the result of 4 billion years of evolution. The origin of life is not well known to science, though limited evidence suggests that life may already have been well-established only a few 100 million years after the formation of the Earth. Until approximately 600 million years ago, all life consisted of bacteria and similar single-celled organisms.
The history of biodiversity during the Phanerozoic (the last 540 million years), starts with rapid growth during the Cambrian explosion—a period during which nearly every phylum of multicellular organisms first appeared. Over the next 400 million years or so, global diversity showed little overall trend, but was marked by periodic, massive losses of diversity classified as mass extinction events.
The apparent biodiversity shown in the fossil record suggests that the last few million years include the period of greatest biodiversity in the Earth's history. However, not all scientists support this view, since there is considerable uncertainty as to how strongly the fossil record is biased by the greater availability and preservation of recent geologic sections. Some (e.g. Alroy et al. 2001) argue that corrected for sampling artifacts, modern biodiversity is not much different from biodiversity 300 million years ago.[7] Estimates of the present global macroscopic species diversity vary from 2 million to 100 million species, with a best estimate of somewhere near 10 million.
Most biologists agree however that the period since the emergence of humans is part of a new mass extinction, the Holocene extinction event, caused primarily by the impact humans are having on the environment. At present, the number of species estimated to have gone extinct as a result of human action is still far smaller than are observed during the major mass extinctions of the geological past. However, it has been argued that the present rate of extinction is sufficient to create a major mass extinction in less than 100 years. Others dispute this and suggest that the present rate of extinctions could be sustained for many thousands of years before the loss of biodiversity matches the more than 20% losses seen in past global extinction events.
New species are regularly discovered (on average about three new species of birds each year) and many, though discovered, are not yet classified (an estimate states that about 40% of freshwater fish from South America are not yet classified). Most of the terrestrial diversity is found in tropical forests.
# Benefits
There are a multitude of benefits of biodiversity in the sense of one diverse group aiding another such as:
## Resistance to catastrophe
Monoculture, the lack of biodiversity, was a contributing factor to several agricultural disasters in history, including the Irish Potato Famine, the European wine industry collapse in the late 1800s, and the US Southern Corn Leaf Blight epidemic of 1970.
[8] See also: Agricultural biodiversity
Higher biodiversity also controls the spread of certain diseases as e.g. virusses will need adapt itself with every new species.
## Food and drink
Biodiversity provides food for humans. About 80 percent of our food supply comes from just 20 kinds of plants. Although many kinds of animals are utilized as food, again most consumption is focused on a few species.
There is vast untapped potential for increasing the range of food products suitable for human consumption, provided that the high present extinction rate can be stopped.
## Medicines
A significant proportion of drugs are derived, directly or indirectly, from biological sources; in most cases these medicines can not presently be synthesized in a laboratory setting. Moreover, only a small proportion of the total diversity of plants has been thoroughly investigated for potential sources of new drugs. Many medicines and antibiotics are also derived from microorganisms.
## Industrial materials
A wide range of industrial materials are derived directly from biological resources. These include building materials, fibers, dyes, resins, gums, adhesives, rubber and oil. There is enormous potential for further research into sustainably utilizing materials from a wider diversity of organisms.
## Intellectual value
Through the field of bionics, a lot of technological advancement has been done which may not have been the case without a rich biodiversity. (See also: Bionics)
## Better crop-varieties
For certain economical crops (e.g. foodcrops, ...), wild varieties of the domesticated species can be reintroduced to form a better variety than the previous (domesticated) species. The economic impact is gigantic, for even crops as common as the potato (which was bred through only one variety, brought back from the Inca), a lot more can come from these species. Wild varieties of the potato will all suffer enormously through the effects of climate change. A report by the Consultative Group on International Agricultural Research (CGIAR) describes the huge economic loss.
Rice, which has been improved for thousands of years by man, can through the same process regain some of its nutritional value that has been lost since (a project is already being carried out to do just this).
## Other ecological services
Biodiversity provides many ecosystem services that are often not readily visible. It plays a part in regulating the chemistry of our atmosphere and water supply. Biodiversity is directly involved in recycling nutrients and providing fertile soils. Experiments with controlled environments have shown that humans cannot easily build ecosystems to support human needs; for example insect pollination cannot be mimicked by man-made construction, and that activity alone represents tens of billions of dollars in ecosystem services per annum to mankind.
## Leisure, cultural and aesthetic value
Many people derive value from biodiversity through leisure activities such as enjoying a walk in the countryside, birdwatching or natural history programs on television.
Biodiversity has inspired musicians, painters, sculptors, writers and other artists. Many cultural groups view themselves as an integral part of the natural world and show respect for other living organisms.
# Threats
During the last century, erosion of biodiversity has been increasingly observed. Some studies show that about one in eight known plant species is threatened with extinction. Some estimates put the loss at up to 140,000 species per year (based on species-area theory), a figure that remains subject to discussion.[9] Such figures indicate unsustainable ecological practices, because only a small number of species come into being each year. Almost all scientists acknowledge[citation needed] that the rate of species loss is greater now than at any time in human history, with extinctions occurring at rates hundreds of times higher than background extinction rates.
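To make the species-area reasoning above concrete, here is a minimal sketch in Python of how such extinction estimates are derived, assuming the commonly used power-law form of the species-area relationship with an exponent z of about 0.25; the areas and the resulting percentage are illustrative only, not the figures behind the estimates cited here.

```python
# Species-area relationship: S = c * A**z, with z typically taken as ~0.25.
# If habitat area shrinks from A0 to A1, the predicted long-run fraction of
# species lost is 1 - (A1 / A0)**z.  All numbers here are illustrative.

def fraction_of_species_lost(original_area: float, remaining_area: float, z: float = 0.25) -> float:
    """Predicted eventual fraction of species lost after habitat reduction."""
    return 1.0 - (remaining_area / original_area) ** z

# Example: a habitat reduced to 10% of its original area
print(f"{fraction_of_species_lost(100.0, 10.0):.0%} of species predicted to be lost")  # about 44%
```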
## Destruction of habitats
Most of the species extinctions from 1000 AD to 2000 AD are due to human activities, in particular destruction of plant and animal habitats. Raised rates of extinction are being driven by human consumption of organic resources, especially related to tropical forest destruction.[10] While most of the species that are becoming extinct are not food species, their biomass is converted into human food when their habitat is transformed into pasture, cropland, and orchards. It is estimated that more than 40% of the Earth's biomass[citation needed] is tied up in only the few species that represent humans, livestock and crops. Because an ecosystem decreases in stability as its species are made extinct, these studies warn that the global ecosystem is destined for collapse if it is further reduced in complexity. Factors contributing to loss of biodiversity are: overpopulation, deforestation, pollution (air pollution, water pollution, soil contamination) and global warming or climate change, driven by human activity. These factors, while all stemming from overpopulation, produce a cumulative impact upon biodiversity.
Some characterize loss of biodiversity not as ecosystem degradation but by conversion to trivial standardized ecosystems (e.g., monoculture following deforestation). In some countries lack of property rights or access regulation to biotic resources necessarily leads to biodiversity loss (degradation costs having to be supported by the community).
A September 14, 2007 study conducted by the National Science Foundation found that biodiversity and genetic diversity are dependent upon each other--that diversity within a species is necessary to maintain diversity among species, and vice versa. According to the lead researcher in the study, Dr. Richard Lankau, "If any one type is removed from the system, the cycle can break down, and the community becomes dominated by a single species."[11]
## Exotic species
The rich diversity of unique species across many parts of the world exist only because they are separated by barriers, particularly large rivers, seas, oceans, mountains and deserts from other species of other land masses, particularly the highly fecund, ultra-competitive, generalist "super-species". These are barriers that could never be crossed by natural processes, except for many millions of years in the future through continental drift. However humans have invented ships and airplanes, and now have the power to bring into contact species that never have met in their evolutionary history, and on a time scale of days, unlike the centuries that historically have accompanied major animal migrations.
The widespread introduction of exotic species by humans is a potent threat to biodiversity. When exotic species are introduced to ecosystems and establish self-sustaining populations, the endemic species in that ecosystem, which have not evolved to cope with the exotic species, may not survive. The exotic organisms may be predators, parasites, or simply aggressive species that deprive indigenous species of nutrients, water and light. Owing to their evolutionary background and former environment, these exotic or invasive species often have features that make them highly competitive, and that likewise leave endemic species defenceless and/or uncompetitive against them.
As a consequence of the above, if humans continue to combine species from different ecoregions, there is the potential that the world's ecosystems will end up dominated by a relatively small number of aggressive, cosmopolitan "super-species".
Declines in amphibian populations have been observed since the 1980s. These might critically threaten global biodiversity.
## Genetic pollution
Purebred, naturally evolved, region-specific wild species can be seriously threatened with extinction[12] through the process of genetic pollution, i.e. uncontrolled hybridization, introgression and genetic swamping, which leads to homogenization or replacement of local genotypes as a result of either a numerical and/or fitness advantage of the introduced plant or animal[13]. Nonnative species can bring about a form of extinction of native plants and animals by hybridization and introgression, either through purposeful introduction by humans or through habitat modification that brings previously isolated species into contact. These phenomena can be especially detrimental for rare species coming into contact with more abundant ones: the abundant species can interbreed with them, swamping the entire rarer gene pool and creating hybrids, thus driving the original purebred native stock to complete extinction. Attention has to be focused on the extent of this underappreciated problem, which is not always apparent from morphological (outward appearance) observations alone. Some degree of gene flow may be a normal, evolutionarily constructive process, and not all constellations of genes and genotypes can be preserved; hybridization with or without introgression may, nevertheless, threaten a rare species' existence[14][15].
## Genetic pollution and food security
In agriculture and animal husbandry, the green revolution popularized the use of conventional hybridization to increase yield many times over. Often the handful of breeds of plants and animals that were hybridized originated in developed countries and were further hybridized with local varieties in the rest of the developing world to create high-yield strains resistant to local climate and diseases. Local governments and industry have since been pushing hybridization with such zeal that several of the wild and indigenous breeds, evolved locally over thousands of years with high resistance to local extremes in climate and immunity to diseases, have already become extinct or are in grave danger of becoming so in the near future. Through complete disuse because of unprofitability, compounded by uncontrolled intentional and unintentional cross-pollination and crossbreeding (genetic pollution), formerly huge gene pools of various wild and indigenous breeds have collapsed, causing widespread genetic erosion and genetic pollution and resulting in a great loss of genetic diversity and biodiversity as a whole[16].
A Genetically Modified Organism (GMO) is an organism whose genetic material has been altered using the genetic engineering techniques generally known as recombinant DNA technology. Genetic engineering has today become another serious and alarming cause of genetic pollution, because artificially created and genetically engineered plants and animals in laboratories, which could never have evolved in nature even with conventional hybridization, can live and breed on their own and, even more alarmingly, interbreed with naturally evolved wild varieties. Genetically Modified (GM) crops have today become a common source of genetic pollution, not only of wild varieties but also of other domesticated varieties derived from relatively natural hybridization[17][18][19][20][21].
It is argued that genetic erosion, coupled with genetic pollution, is destroying the unique genetic base that is needed, thereby creating an unforeseen hidden crisis which will result in a severe threat to our food security in the future, when diverse genetic material will cease to exist with which to further improve or hybridize weakening food crops and livestock against more resistant diseases and climatic changes.
# Management
Template:Mainarticle
The conservation of biological diversity has become a global concern. Although not everybody agrees on the extent and significance of current extinction, most consider biodiversity essential.
There are basically two main types of conservation options, in-situ conservation and ex-situ conservation. In-situ is usually seen as the ideal conservation strategy. However, its implementation is sometimes infeasible. For example, destruction of rare or endangered species' habitats sometimes requires ex-situ conservation efforts. Furthermore, ex-situ conservation can provide a backup solution to in-situ conservation projects. Some believe both types of conservation are required to ensure proper preservation.
An example of an in-situ conservation effort is the setting-up of protection areas. Examples of ex-situ conservation efforts, by contrast, would be preserving germplasm in seedbanks, or growing the Wollemi Pine in nurseries. Such efforts allow the preservation of large populations of plants with minimal genetic erosion.
At national levels a Biodiversity Action Plan is sometimes prepared to state the protocols necessary to protect an individual species. Usually this plan also details extant data on the species and its habitat. In the USA such a plan is called a Recovery Plan.
The threat to biological diversity was among the hot topics discussed at the UN World Summit for Sustainable Development, in hope of seeing the foundation of a Global Conservation Trust to help maintain plant collections.
# Judicial status
Biodiversity is beginning to be evaluated and its evolution analysed (through observations, inventories, conservation...) as well as being taken into account in political and judicial decisions.
- The relationship between law and ecosystems is very ancient and has consequences for biodiversity. It is related to property rights, both private and public. It can define protection for threatened ecosystems, but also some rights and duties (for example, fishing rights, hunting rights).
- Law regarding species is a more recent issue. It defines species that must be protected because they may be threatened by extinction. Some people question application of these laws[citation needed]. The U.S. Endangered Species Act is an example of an attempt to address the "law and species" issue.
- Laws regarding gene pools are only about a century old[citation needed]. While the genetic approach is not new (domestication, traditional plant selection methods), progress made in the genetic field in the past 20 years has led to a tightening of laws in this field. With the new technologies of genetic analysis and genetic engineering, issues such as gene patenting, process patenting, and a totally new concept of genetic resources have arisen[citation needed]. A very hot debate today seeks to define whether the resource is the gene, the organism itself, or its DNA.
The 1972 UNESCO convention established that biological resources, such as plants, were the common heritage of mankind. These rules probably inspired the creation of great public banks of genetic resources, located outside the source-countries.
New global agreements (e.g. the Convention on Biological Diversity) now give sovereign national rights over biological resources (not property). The idea of static conservation of biodiversity is disappearing and being replaced by the idea of dynamic conservation, through the notion of resource and innovation.
The new agreements commit countries to conserve biodiversity, develop resources for sustainability and share the benefits resulting from their use. Under new rules, it is expected that bioprospecting or collection of natural products has to be allowed by the biodiversity-rich country, in exchange for a share of the benefits.
Sovereignty principles can rely upon what is better known as Access and Benefit Sharing Agreements (ABAs). The Convention on Biodiversity spirit implies a prior informed consent between the source country and the collector, to establish which resource will be used and for what, and to settle on a fair agreement on benefit sharing. Bioprospecting can become a type of biopiracy when those principles are not respected.
Uniform approval for use of biodiversity as a legal standard has not been achieved, however. At least one legal commentator has argued that biodiversity should not be used as a legal standard, arguing that the multiple layers of scientific uncertainty inherent in the concept of biodiversity will cause administrative waste and increase litigation without promoting preservation goals. See Fred Bosselman, A Dozen Biodiversity Puzzles, 12 N.Y.U. Environmental Law Journal 364 (2004)
# Criticisms
## Food
The notion that there is 'vast untapped potential' for reducing mankind's dependence on a relatively small number of domesticated plant and animal species should be challenged. Jared Diamond,[22] based on studies of the domestication of plants and animals, argued that the rarity of species suitable for domestication, and their occurrence in just a few parts of the world, determined the limited number of locations in which major civilizations could arise. In recent times there have been many studies of minor food sources, but none of these sources have subsequently become major food crops.
## Founder effect
The field of biodiversity research (inevitably) suffers from natural human egocentric "myopic" cognitive biases. It has often been criticized for being overly defined by the personal interests of the founders (i.e. terrestrial mammals) giving a narrow focus, rather than extending to other areas where it could be useful. This is termed the founder effect by Norse and Irish, (1996).[23] (This was a play on words: the founder effect in ecology typically refers to the genetic outcome when a small population establishes an isolated breeding group). France and Rigg reviewed the biodiversity literature in 1998 and found that there was a significant lack of papers studying marine ecosystems,[24] leading them to dub marine biodiversity research the sleeping hydra. More work has been carried out for accessible, diverse coastal systems such as coral reefs than for inaccessible, species-poor deep sea areas.
It has been easier to mobilise public opinion and national legislation for the terrestrial realm, which has higher visibility and falls within countries' territorial boundaries. Marine conservation involves having to pioneer new and international mechanisms of protection as well as solving methodological problems in marine biology relating to marine ecosystem classification and data-gathering on some of the earth's most difficult species to access and monitor.
## Size bias
Biodiversity researcher Sean Nee points out that the vast majority of Earth's biodiversity is microbial, and that contemporary biodiversity physics is "firmly fixated on the visible world" (Nee uses "visible" as a synonym for macroscopic).[25] For example, microbial life is very much more metabolically and environmentally diverse than multicellular life (see extremophile). Nee has stated: "On the tree of life, based on analyses of small-subunit ribosomal RNA, visible life consists of barely noticeable twigs. This should not be surprising — invisible life had at least three billion years to diversify and explore evolutionary space before the 'visibles' arrived".
The size bias is not restricted to consideration of microbes. Entomologist Nigel Stork states that "to a first approximation, all multicellular species on Earth are insects" [26].
The reply to this, however, is that biodiversity conservation has never focused exclusively on visible (in this sense) species. From the very beginning, the classification and conservation of natural communities or ecosystem types has been a central part of the effort. The thought behind this has been that since invisible (in this sense) diversity is, due to lack of taxonomy, impossible to treat in the same manner as visible diversity, the best that can be done is to preserve a diversity of ecosystem types, thereby preserving as well as possible the diversity of invisible organisms. | https://www.wikidoc.org/index.php/Biodiversity | |
57a509817e8bd34e4cdf7402cb07ec87be5975d1 | wikidoc | Reproduction | Reproduction
# Overview
Reproduction is the biological process by which new individual organisms are produced. Reproduction is a fundamental feature of all known life; each individual organism exists as the result of reproduction. The known methods of reproduction are broadly grouped into two main types: sexual and asexual. Human reproduction is a form of sexual reproduction.
In asexual reproduction, an individual can reproduce without involvement with another individual of that species. The division of a bacterial cell into two daughter cells is an example of asexual reproduction. Asexual reproduction is not, however, limited to single-celled organisms. Most plants have the ability to reproduce asexually.
Sexual reproduction requires the involvement of two individuals, typically one of each sex. Normal human reproduction is a common example of sexual reproduction.
# Asexual reproduction
Asexual reproduction is the process by which an organism creates a genetically-similar or identical copy of itself without a contribution of genetic material from another individual. Bacteria divide asexually via binary fission; viruses take control of host cells to produce more viruses; Hydras (invertebrates of the order Hydroidea) and yeasts are able to reproduce by budding. These organisms do not have different sexes, and they are capable of "splitting" themselves into two or more individuals. Some 'asexual' species, like hydra and jellyfish, may also reproduce sexually. For instance, most plants are capable of vegetative reproduction—reproduction without seeds or spores—but can also reproduce sexually. Likewise, bacteria may exchange genetic information by conjugation. Other ways of asexual reproduction include parthenogenesis, fragmentation and spore formation that involves only mitosis. Parthenogenesis (from the Greek παρθένος parthenos, "virgin", + γένεσις genesis, "creation") is the growth and development of embryo or seed without fertilization by a male. Parthenogenesis occurs naturally in some species, including lower plants, invertebrates (e.g. water fleas, aphids, some bees and parasitic wasps), and vertebrates (e.g. some reptiles, fish, and, very rarely, birds and sharks). It is sometimes also used to describe reproduction modes in hermaphroditic species which can self-fertilize.
# Sexual reproduction
Sexual reproduction is a biological process by which organisms create descendants that have a combination of genetic material contributed from two (usually) different members of the species. Each of two parent organisms contributes half of the offspring's genetic makeup by creating haploid gametes. Most organisms form two different types of gametes. In these anisogamous species, the two sexes are referred to as male (producing sperm or microspores) and female (producing ova or megaspores). In isogamous species the gametes are similar or identical in form, but may have separable properties and then may be given other different names. For example, in the green alga, Chlamydomonas reinhardtii, there are so-called "plus" and "minus" gametes. A few types of organisms, such as ciliates, have more than two kinds of gametes.
Most animals (including humans) and plants reproduce sexually. Sexually reproducing organisms have two sets of genes for every trait (called alleles). Offspring inherit one allele for each trait from each parent, thereby ensuring that offspring have a combination of the parents' genes. Having two copies of every gene, only one of which is expressed, allows deleterious alleles to be masked, an advantage believed to have led to the evolutionary development of diploidy (Otto and Goldstein).
## Allogamy
Allogamy is a term used in the field of biological reproduction describing the fertilization of an ovum from one individual with the spermatozoa of another.
## Autogamy
Self-fertilization (also known as autogamy) occurs in hermaphroditic organisms where the two gametes fused in fertilization come from the same individual. The two gametes bind and merge to form a single new cell, the zygote.
## Mitosis and meiosis
Mitosis and meiosis are an integral part of cell division. Mitosis occurs in somatic cells, while meiosis occurs in gametes.
Mitosis
The resultant number of cells in mitosis is twice the number of original cells. The number of chromosomes in the daughter cells is the same as that of the parent cell.
Meiosis
The resultant number of cells is four times the number of original cells. This results in cells with half the number of chromosomes present in the parent cell. A diploid cell duplicates itself, then undergoes two divisions (tetraploid to diploid to haploid), in the process forming four haploid cells. This process occurs in two phases, meiosis I and meiosis II.
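The cell and chromosome arithmetic described above can be summarised in a short sketch; the starting chromosome number of 46 (the human diploid number) is used purely as an example.

```python
# Illustrative bookkeeping for the divisions described above,
# starting from one diploid cell (human example: 2n = 46 chromosomes).

def mitosis(cells: int, chromosomes: int):
    # One division: twice as many cells, chromosome number unchanged.
    return cells * 2, chromosomes

def meiosis(cells: int, chromosomes: int):
    # Two successive divisions: four cells, each with half the chromosomes.
    return cells * 4, chromosomes // 2

print(mitosis(1, 46))   # (2, 46) -> two diploid daughter cells
print(meiosis(1, 46))   # (4, 23) -> four haploid cells
```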
# Same-sex reproduction
In recent decades, developmental biologists have been researching and developing techniques to facilitate same-sex reproduction. The obvious approaches, subject to a growing amount of activity, are female sperm and male eggs, with female sperm closer to being a reality for humans, given that Japanese scientists have already created female sperm for chickens. More recently, by altering the function of a few genes involved with imprinting, other Japanese scientists combined two mouse eggs to produce daughter mice.
# Reproductive strategies
There is a wide range of reproductive strategies employed by different species. Some animals, such as the human and Northern Gannet, do not reach sexual maturity for many years after birth and even then produce few offspring. Others reproduce quickly; but, under normal circumstances, most offspring do not survive to adulthood. For example, a rabbit (mature after 8 months) can produce 10–30 offspring per year, and a fruit fly (mature after 10–14 days) can produce up to 900 offspring per year. These two main strategies are known as K-selection (few offspring) and r-selection (many offspring). Which strategy is favoured by evolution depends on a variety of circumstances. Animals with few offspring can devote more resources to the nurturing and protection of each individual offspring, thus reducing the need for many offspring. On the other hand, animals with many offspring may devote fewer resources to each individual offspring; for these types of animals it is common for many offspring to die soon after birth, but enough individuals typically survive to maintain the population.
## Other types of reproductive strategies
Polycyclic animals reproduce intermittently throughout their lives.
Semelparous organisms reproduce only once in their lifetime, such as annual plants. Often, they die shortly after reproduction. This is a characteristic of r-strategists.
Iteroparous organisms produce offspring in successive (e.g. annual or seasonal) cycles, such as perennial plants. Iteroparous animals survive over multiple seasons (or periodic condition changes). This is a characteristic of K-strategists.
# Asexual vs. sexual reproduction
Organisms that reproduce through asexual reproduction tend to grow in number exponentially. However, because they rely on mutation for variations in their DNA, all members of the species have similar vulnerabilities. Organisms that reproduce sexually yield a smaller number of offspring, but the large amount of variation in their genes makes them less susceptible to disease.
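The exponential increase mentioned above can be illustrated with a toy calculation for an organism dividing by binary fission; the generation counts below are arbitrary and purely for illustration.

```python
# Toy model of asexual growth by binary fission:
# starting from one cell, n successive divisions give 2**n cells.

def asexual_population(generations: int, start: int = 1) -> int:
    return start * 2 ** generations

for n in (10, 20, 30):
    print(f"after {n} generations: {asexual_population(n):,} cells")
# after 10 generations: 1,024 cells
# after 20 generations: 1,048,576 cells
# after 30 generations: 1,073,741,824 cells
```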
Many organisms can reproduce sexually as well as asexually. Aphids, slime molds, sea anemones, some species of starfish (by fragmentation), and many plants are examples. When environmental factors are favorable, asexual reproduction is employed to exploit suitable conditions for survival such as an abundant food supply, adequate shelter, favorable climate, absence of disease, optimum pH or a proper mix of other lifestyle requirements. Populations of these organisms increase exponentially via asexual reproductive strategies to take full advantage of the rich supply of resources.
When food sources have been depleted, the climate becomes hostile, or individual survival is jeopardized by some other adverse change in living conditions, these organisms switch to sexual forms of reproduction. Sexual reproduction ensures a mixing of the gene pool of the species. The variations found in offspring of sexual reproduction allow some individuals to be better suited for survival and provide a mechanism for selective adaptation to occur. In addition, sexual reproduction usually results in the formation of a life stage that is able to endure the conditions that threaten the offspring of an asexual parent. Thus, seeds, spores, eggs, pupae, cysts or other "over-wintering" stages of sexual reproduction ensure the survival during unfavorable times and the organism can "wait out" adverse situations until a swing back to suitability occurs.
# Life without reproduction
The existence of life without reproduction is the subject of some speculation. The biological study of how the origin of life led from non-reproducing elements to reproducing organisms is called abiogenesis. Whether or not there were several independent abiogenetic events, biologists believe that the last universal ancestor to all present life on earth lived about 3.5 billion years ago.
Today, some scientists have speculated about the possibility of creating life non-reproductively in the laboratory. Several scientists have succeeded in producing simple viruses from entirely non-living materials. Viruses are often regarded as not alive: being nothing more than a bit of RNA or DNA in a protein capsule, they have no metabolism and can only replicate with the assistance of a hijacked cell's metabolic machinery.
The production of a truly living organism (e.g. a simple bacterium) with no ancestors would be a much more complex task, but may well be possible according to current biological knowledge.
# Lottery principle
Sexual reproduction has many drawbacks, since it requires far more energy than asexual reproduction, and there is some argument about why so many species use it.
George C. Williams used lottery tickets as an analogy in one explanation for the widespread use of sexual reproduction. He argued that asexual reproduction, which produces little or no genetic variety in offspring, was like buying many tickets that all have the same number, limiting the chance of "winning" - that is, surviving. Sexual reproduction, he argued, was like purchasing fewer tickets but with a greater variety of numbers and therefore a greater chance of success.
The point of this analogy is that since asexual reproduction does not produce genetic variations, there is little ability to quickly adapt to a changing environment. The lottery principle is less accepted these days because of evidence that asexual reproduction is more prevalent in unstable environments, the opposite of what it predicts. | Reproduction
# Overview
Reproduction is the biological process by which new individual organisms are produced. Reproduction is a fundamental feature of all known life; each individual organism exists as the result of reproduction. The known methods of reproduction are broadly grouped into two main types: sexual and asexual. Human reproduction is a form of sexual reproduction.
In asexual reproduction, an individual can reproduce without involvement with another individual of that species. The division of a bacterial cell into two daughter cells is an example of asexual reproduction. Asexual reproduction is not, however, limited to single-celled organisms. Most plants have the ability to reproduce asexually.
Sexual reproduction requires the involvement of two individuals, typically one of each sex. Normal human reproduction is a common example of sexual reproduction.
# Asexual reproduction
Asexual reproduction is the process by which an organism creates a genetically-similar or identical copy of itself without a contribution of genetic material from another individual. Bacteria divide asexually via binary fission; viruses take control of host cells to produce more viruses; Hydras (invertebrates of the order Hydroidea) and yeasts are able to reproduce by budding. These organisms do not have different sexes, and they are capable of "splitting" themselves into two or more individuals. Some 'asexual' species, like hydra and jellyfish, may also reproduce sexually. For instance, most plants are capable of vegetative reproduction—reproduction without seeds or spores—but can also reproduce sexually. Likewise, bacteria may exchange genetic information by conjugation. Other ways of asexual reproduction include parthenogenesis, fragmentation and spore formation that involves only mitosis. Parthenogenesis (from the Greek παρθένος parthenos, "virgin", + γένεσις genesis, "creation") is the growth and development of embryo or seed without fertilization by a male. Parthenogenesis occurs naturally in some species, including lower plants, invertebrates (e.g. water fleas, aphids, some bees and parasitic wasps), and vertebrates (e.g. some reptiles,[1] fish, and, very rarely, birds[2] and sharks[3]). It is sometimes also used to describe reproduction modes in hermaphroditic species which can self-fertilize.
# Sexual reproduction
Sexual reproduction is a biological process by which organisms create descendants that have a combination of genetic material contributed from two (usually) different members of the species. Each of two parent organisms contributes half of the offspring's genetic makeup by creating haploid gametes. Most organisms form two different types of gametes. In these anisogamous species, the two sexes are referred to as male (producing sperm or microspores) and female (producing ova or megaspores). In isogamous species the gametes are similar or identical in form, but may have separable properties and then may be given other different names. For example, in the green alga, Chlamydomonas reinhardtii, there are so-called "plus" and "minus" gametes. A few types of organisms, such as ciliates, have more than two kinds of gametes.
Most animals (including humans) and plants reproduce sexually. Sexually reproducing organisms have two sets of genes for every trait (called alleles). Offspring inherit one allele for each trait from each parent, thereby ensuring that offspring have a combination of the parents' genes. Having two copies of every gene, only one of which is expressed, allows deleterious alleles to be masked, an advantage believed to have led to the evolutionary development of diploidy (Otto and Goldstein).
## Allogamy
Allogamy is a term used in the field of biological reproduction describing the fertilization of an ovum from one individual with the spermatozoa of another.
## Autogamy
Self-fertilization (also known as autogamy) occurs in hermaphroditic organisms where the two gametes fused in fertilization come from the same individual. The two gametes bind and merge to form a single new cell, the zygote.
## Mitosis and meiosis
Mitosis and meiosis are an integral part of cell division. Mitosis occurs in somatic cells, while meiosis occurs in gametes.
Mitosis
The resultant number of cells in mitosis is twice the number of original cells. The number of chromosomes in the daughter cells is the same as that of the parent cell.
Meiosis
The resultant number of cells is four times the number of original cells. This results in cells with half the number of chromosomes present in the parent cell. A diploid cell duplicates itself, then undergoes two divisions (tetraploid to diploid to haploid), in the process forming four haploid cells. This process occurs in two phases, meiosis I and meiosis II.
# Same-sex reproduction
In recent decades, developmental biologists have been researching and developing techniques to facilitate same-sex reproduction [4]. The obvious approaches, subject to a growing amount of activity, are female sperm and male eggs, with female sperm closer to being a reality for humans, given that Japanese scientists have already created female sperm for chickens. More recently, by altering the function of a few genes involved with imprinting, other Japanese scientists combined two mouse eggs to produce daughter mice.
# Reproductive strategies
There is a wide range of reproductive strategies employed by different species. Some animals, such as the human and Northern Gannet, do not reach sexual maturity for many years after birth and even then produce few offspring. Others reproduce quickly; but, under normal circumstances, most offspring do not survive to adulthood. For example, a rabbit (mature after 8 months) can produce 10–30 offspring per year, and a fruit fly (mature after 10–14 days) can produce up to 900 offspring per year. These two main strategies are known as K-selection (few offspring) and r-selection (many offspring). Which strategy is favoured by evolution depends on a variety of circumstances. Animals with few offspring can devote more resources to the nurturing and protection of each individual offspring, thus reducing the need for many offspring. On the other hand, animals with many offspring may devote fewer resources to each individual offspring; for these types of animals it is common for many offspring to die soon after birth, but enough individuals typically survive to maintain the population.
## Other types of reproductive strategies
Polycyclic animals reproduce intermittently throughout their lives.
Semelparous organisms reproduce only once in their lifetime, such as annual plants. Often, they die shortly after reproduction. This is a characteristic of r-strategists.
Iteroparous organisms produce offspring in successive (e.g. annual or seasonal) cycles, such as perennial plants. Iteroparous animals survive over multiple seasons (or periodic condition changes). This is a characteristic of K-strategists.
# Asexual vs. sexual reproduction
Organisms that reproduce through asexual reproduction tend to grow in number exponentially. However, because they rely on mutation for variations in their DNA, all members of the species have similar vulnerabilities. Organisms that reproduce sexually yield a smaller number of offspring, but the large amount of variation in their genes makes them less susceptible to disease.
Many organisms can reproduce sexually as well as asexually. Aphids, slime molds, sea anemones, some species of starfish (by fragmentation), and many plants are examples. When environmental factors are favorable, asexual reproduction is employed to exploit suitable conditions for survival such as an abundant food supply, adequate shelter, favorable climate, absence of disease, optimum pH or a proper mix of other lifestyle requirements. Populations of these organisms increase exponentially via asexual reproductive strategies to take full advantage of the rich supply of resources.
When food sources have been depleted, the climate becomes hostile, or individual survival is jeopardized by some other adverse change in living conditions, these organisms switch to sexual forms of reproduction. Sexual reproduction ensures a mixing of the gene pool of the species. The variations found in offspring of sexual reproduction allow some individuals to be better suited for survival and provide a mechanism for selective adaptation to occur. In addition, sexual reproduction usually results in the formation of a life stage that is able to endure the conditions that threaten the offspring of an asexual parent. Thus, seeds, spores, eggs, pupae, cysts or other "over-wintering" stages of sexual reproduction ensure the survival during unfavorable times and the organism can "wait out" adverse situations until a swing back to suitability occurs.
# Life without reproduction
The existence of life without reproduction is the subject of some speculation. The biological study of how the origin of life led from non-reproducing elements to reproducing organisms is called abiogenesis. Whether or not there were several independent abiogenetic events, biologists believe that the last universal ancestor to all present life on earth lived about 3.5 billion years ago.
Today, some scientists have speculated about the possibility of creating life non-reproductively in the laboratory. Several scientists have succeeded in producing simple viruses from entirely non-living materials[5]. Viruses are often regarded as not alive: being nothing more than a bit of RNA or DNA in a protein capsule, they have no metabolism and can only replicate with the assistance of a hijacked cell's metabolic machinery.
The production of a truly living organism (e.g. a simple bacterium) with no ancestors would be a much more complex task, but may well be possible according to current biological knowledge.
# Lottery principle
Sexual reproduction has many drawbacks, since it requires far more energy than asexual reproduction, and there is some argument about why so many species use it.
George C. Williams used lottery tickets as an analogy in one explanation for the widespread use of sexual reproduction[6]. He argued that asexual reproduction, which produces little or no genetic variety in offspring, was like buying many tickets that all have the same number, limiting the chance of "winning" - that is, surviving. Sexual reproduction, he argued, was like purchasing fewer tickets but with a greater variety of numbers and therefore a greater chance of success.
The point of this analogy is that since asexual reproduction does not produce genetic variations, there is little ability to quickly adapt to a changing environment. The lottery principle is less accepted these days because of evidence that asexual reproduction is more prevalent in unstable environments, the opposite of what it predicts. | https://www.wikidoc.org/index.php/Biological_reproduction | |
72dcdeddff6a9a8538c517af14c8123523586a43 | wikidoc | Biomphalaria | Biomphalaria
Biomphalaria is a genus of air-breathing freshwater snail, an aquatic pulmonate gastropod mollusk in the family Planorbidae, the ram's horn snails.
This genus of snails is medically important because the snails can carry a parasite which represents a serious disease risk to humans;
the snails serve as an intermediate host (vector) for the human parasitic blood fluke, Schistosoma mansoni.
The fluke, which is found primarily in tropical areas, infects mammals (including humans) via contact with water that contains schistosome larvae (cercariae) which have previously been released from the snail. Infection occurs via penetration of cercariae through the skin. In humans this fluke causes the debilitating disease schistosomiasis.
Other flukes which parasitize snails include Schistosoma japonicum, which parasitizes snails in the genus Oncomelania, and Schistosoma mekongi, which parasitizes snails in the genus Tricula. | Biomphalaria
Biomphalaria is a genus of air-breathing freshwater snail, an aquatic pulmonate gastropod mollusk in the family Planorbidae, the ram's horn snails.
This genus of snails is medically important because the snails can carry a parasite which represents a serious disease risk to humans;
the snails serve as an intermediate host (vector) for the human parasitic blood fluke, Schistosoma mansoni.
The fluke, which is found primarily in tropical areas, infects mammals (including humans) via contact with water that contains schistosome larvae (cercariae) which have previously been released from the snail. Infection occurs via penetration of cercariae through the skin. In humans this fluke causes the debilitating disease schistosomiasis.
Other flukes which parasitize snails include Schistosoma japonicum, which parasitizes snails in the genus Oncomelania, and Schistosoma mekongi, which parasitizes snails in the genus Tricula.
Template:WH
Template:WS | https://www.wikidoc.org/index.php/Biomphalaria | |
5dc0e6c8e7431a5ead7f5a567152c92466edaf17 | wikidoc | Birth weight | Birth weight
# Overview
Birth weight is the weight of a baby at its birth. It has direct links with the gestational age at which the child was born and can be estimated during the pregnancy by measuring fundal height. A baby born within the normal range of weight for that gestational age is known as appropriate for gestational age (AGA). Those born above or below that range have often had an unusual rate of development – this often indicates complications with the pregnancy that may affect the baby or its mother.
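As a rough sketch of how these categories are commonly applied, the hypothetical helper below classifies a newborn from its weight percentile for gestational age, using the conventional 10th and 90th percentile cut-offs; the percentile itself is assumed to have been read off a reference growth chart.

```python
# Hypothetical helper: classify a newborn from its weight percentile for
# gestational age (percentile assumed to come from a reference growth chart).
# Common convention: <10th percentile = SGA, >90th percentile = LGA, else AGA.

def classify_birth_weight(percentile: float) -> str:
    if percentile < 10:
        return "SGA (small for gestational age)"
    if percentile > 90:
        return "LGA (large for gestational age)"
    return "AGA (appropriate for gestational age)"

print(classify_birth_weight(50))  # AGA (appropriate for gestational age)
print(classify_birth_weight(95))  # LGA (large for gestational age)
```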
The incidence of birth weight being outside the AGA range is influenced by the parents in numerous ways, including:
- Genetics
- The health of the mother, particularly during the pregnancy
- Environmental factors
- Other factors, like multiple births, where each baby is likely to be outside the AGA, one more so than the other
There have been numerous studies that have attempted, with varying degrees of success, to show links between birth weight and later-life conditions, including diabetes, obesity, tobacco smoking and intelligence.
# Conditions
Associated conditions include:
- Large for gestational age
- Small for gestational age
# Influence on adult life
Studies have been conducted to investigate how a person's birth weight can influence aspects of their future life. This includes theorised links with obesity, diabetes and intelligence.
## Obesity
A baby born small or large for gestational age (either of the two extremes) is thought to have an increased risk of obesity in later life.
## Diabetes
Babies that have a low birth weight are thought to have an increased risk of developing type 2 diabetes in later life.
## Intelligence
Some studies have shown a direct link between an increased birth weight and an increased intelligence quotient.
# Effects on the mother
There is some evidence of a link between a child's birth weight and its mother's risk of cardiovascular disease. | Birth weight
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]
# Overview
Birth weight is the weight of a baby at its birth. It has direct links with the gestational age at which the child was born and can be estimated during the pregnancy by measuring fundal height. A baby born within the normal range of weight for that gestational age is known as appropriate for gestational age (AGA). Those born above or below that range have often had an unusual rate of development – this often indicates complications with the pregnancy that may affect the baby or its mother.
The incidence of birth weight being outside the AGA range is influenced by the parents in numerous ways, including:
- Genetics
- The health of the mother, particularly during the pregnancy
- Environmental factors
- Other factors, like multiple births, where each baby is likely to be outside the AGA, one more so than the other
There have been numerous studies that have attempted, with varying degrees of success, to show links between birth weight and later-life conditions, including diabetes, obesity, tobacco smoking and intelligence.
# Conditions
Associated conditions include:
- Large for gestational age
- Small for gestational age
# Influence on adult life
Studies have been conducted to investigate how a person's birth weight can influence aspects of their future life. This includes theorised links with obesity, diabetes and intelligence.
## Obesity
A baby born small or large for gestational age (either of the two extremes) is thought to have an increased risk of obesity in later life.[1][2][3]
## Diabetes
Babies that have a low birth weight are thought to have an increased risk of developing type 2 diabetes in later life.[4][5][6]
## Intelligence
Some studies have shown a direct link between an increased birth weight and an increased intelligence quotient.[7][8][9]
# Effects on the mother
There is some evidence of a link between a child's birth weight and its mother's risk of cardiovascular disease.[10] | https://www.wikidoc.org/index.php/Birth_weight | |
33d15c81d2c91be3cf9b82d5e82824f753014ab5 | wikidoc | Bishop score | Bishop score
# Overview
Bishop score, also Bishop's score, is a pre-labour scoring system to assist in predicting whether induction of labour will be required.
# Components
The total score is achieved by assessing the following five components on vaginal examination:
- Cervical dilatation
- Cervical effacement
- Cervical consistency
- Cervical position
- Fetal station
They can be remembered with the mnemonic: Call PEDS For Parturition = Cervical Position, Effacement, Dilation, Softness; Fetal Station.
# Scoring
Each component is given a score of 0-2 or 0-3. The highest possible score is 13.
# Interpretation
A score of 5 or less suggests that labour is unlikely to start without induction. A score of 9 or more indicates that labour will most likely commence spontaneously.
A low Bishop's score often indicates that induction is unlikely to be successful. Some sources indicate that only a score of 8 or greater is reliably predictive of a successful induction.
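To make the tallying explicit, here is a minimal sketch of how the total score and its interpretation might be computed, assuming the five component scores have already been assigned on vaginal examination; the example values are arbitrary.

```python
# Minimal sketch: sum the five pre-assigned component scores (each 0-2 or 0-3,
# maximum total 13) and interpret the total using the thresholds quoted above.

def bishop_score(dilatation: int, effacement: int, consistency: int,
                 position: int, station: int) -> int:
    return dilatation + effacement + consistency + position + station

def interpret(total: int) -> str:
    if total <= 5:
        return "labour unlikely to start without induction"
    if total >= 9:
        return "labour most likely to commence spontaneously"
    return "intermediate score"

total = bishop_score(dilatation=2, effacement=2, consistency=1, position=1, station=1)
print(total, "-", interpret(total))  # 7 - intermediate score
```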
# Modified Bishop score
According to the Modified Bishop's pre-induction cervical scoring system, effacement has been replaced by cervical length in cm, with scores as follows-
0 for >3 cm, 1 for >2 cm, 2 for >1 cm, 3 for >0 cm. | Bishop score
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]
# Overview
Bishop score, also Bishop's score, is a pre-labour scoring system to assist in predicting whether induction of labour will be required.[1]
# Components
The total score is achieved by assessing the following five components on vaginal examination:
- Cervical dilatation
- Cervical effacement
- Cervical consistency
- Cervical position
- Fetal station
They can be remembered with the mnemonic: Call PEDS For Parturition = Cervical Position, Effacement, Dilation, Softness; Fetal Station.
# Scoring
Each component is given a score of 0-2 or 0-3. The highest possible score is 13.
# Interpretation
A score of 5 or less suggests that labour is unlikely to start without induction. A score of 9 or more indicates that labour will most likely commence spontaneously.[2]
A low Bishop's score often indicates that induction is unlikely to be successful[3]. Some sources indicate that only a score of 8 or greater is reliably predictive of a successful induction.
# Modified Bishop score
According to the Modified Bishop's pre-induction cervical scoring system, effacement has been replaced by cervical length in cm, with scores as follows-
0 for >3 cm, 1 for >2 cm, 2 for >1 cm, 3 for >0 cm.[4] | https://www.wikidoc.org/index.php/Bishop_score | |
89d31d63a2eb34da0106475a4f2e31e26a78d615 | wikidoc | Bisoctrizole | Bisoctrizole
Bisoctrizole (USAN, Tinosorb® M, INCI Methylene Bis-Benzotriazolyl Tetramethylbutylphenol) is a chemical which is added to sunscreens to absorb UV rays. It's marketed by Ciba Specialty Chemicals.
Bisoctrizole is a broad-spectrum ultraviolet radiation absorber, absorbing UVB as well as UVA rays. It also reflects and scatters UV. Bisoctrizole is a hybrid UV absorber: it is produced as small particles (< 200 nm), like microfine zinc oxide and titanium dioxide, yet it is organic like most sunscreen actives. It is added to the water phase of a sunscreen as a 50% suspension, while mineral micropigments are usually added to the oil phase.
Bisoctrizole shows very little photodegradation and has a stabilizing effect on other UV absorbers, octyl methoxycinnamate (octinoxate) in particular.
Unlike some other organic sunscreen actives, it shows no estrogenic effects in vitro.
Bisoctrizole is not approved by the FDA, but is approved in the EU and other parts of the world. | Bisoctrizole
Template:Chembox new
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]
Bisoctrizole (USAN[1], Tinosorb® M, INCI Methylene Bis-Benzotriazolyl Tetramethylbutylphenol) is a chemical which is added to sunscreens to absorb UV rays. It's marketed by Ciba Specialty Chemicals.
Bisoctrizole is a broad-spectrum ultraviolet radiation absorber, absorbing UVB as well as UVA rays. It also reflects and scatters UV. Bisoctrizole is a hybrid UV absorber: it is produced as small particles (< 200 nm)[2], like microfine zinc oxide and titanium dioxide, yet it is organic like most sunscreen actives. It is added to the water phase of a sunscreen as a 50% suspension, while mineral micropigments are usually added to the oil phase.
Bisoctrizole shows very little photodegradation and has a stabilizing effect on other UV absorbers, octyl methoxycinnamate (octinoxate) in particular.
Unlike some other organic sunscreen actives, it shows no estrogenic effects in vitro.[3]
Bisoctrizole is not approved by the FDA, but is approved in the EU and other parts of the world.[4][5][6] | https://www.wikidoc.org/index.php/Bisoctrizole | |
4b90243bffccc42f8b320e22ec9ae6dabfbe3367 | wikidoc | Bitter melon | Bitter melon
Momordica charantia is a tropical and subtropical vine of the family Cucurbitaceae, widely grown for edible fruit, which is among the most bitter of all vegetables. English names for the plant and its fruit include bitter melon or bitter gourd (translated from the Chinese). The original home of the species is not known, other than that it is a native of the tropics. It is widely grown in India (Karela करेला in Hindi), Pakistan (Karela کریلا in Urdu, اردو), (komboze کمبوزه in Persian), South Asia, Southeast Asia, China, Africa and the Caribbean.
# Description
Also known as Ku gua, the herbaceous, tendril-bearing vine grows to 5 m. It bears simple, alternate leaves 4-12 cm across, with 3-7 deeply separated lobes. Each plant bears separate yellow male and female flowers.
The fruit has a distinct warty looking exterior and an oblong shape. It is hollow in cross-section, with a relatively thin layer of flesh surrounding a central seed cavity filled with large flat seeds and pith. Seeds and pith appear white in unripe fruits, ripening to red; they are not intensely bitter and can be removed before cooking. However, the pith becomes sweet when the fruit is fully ripe, and its color turns red. The pith can be eaten uncooked in this state, but the flesh of the melon is by then far too tough to eat. Red, sweet bitter melon pith is a popular ingredient in some Southeast Asian-style salads. The flesh is crunchy and watery in texture, similar to cucumber, chayote or green bell pepper. The skin is tender and edible. The fruit is most often eaten green. Although it can also be eaten when it has started to ripen and turn yellowish, it becomes more bitter as it ripens. The fully ripe fruit turns orange and mushy, is too bitter to eat, and splits into segments which curl back dramatically to expose seeds covered in bright red pulp.
Bitter Gourd comes in a variety of shapes and sizes. The typical Chinese phenotype is 20 to 30 cm long, oblong with bluntly tapering ends and pale green in color, with a gently undulating, warty surface. The bitter melon more typical of India has a narrower shape with pointed ends, and a surface covered with jagged, triangular "teeth" and ridges. Coloration is green or white. Between these two extremes are any number of intermediate forms. Some bear miniature fruit of only 6 - 10 cm in length, which may be served individually as stuffed vegetables. These miniature fruit are popular in Southeast Asia as well as India.
# Culinary uses
Bitter melons are seldom mixed with other vegetables due to the strong bitter taste, although this can be moderated to some extent by salting and then washing the cut melon before use.
Bitter melon is often used in Chinese cooking for its bitter flavor, typically in stir-fries (often with pork and douchi), soups, and also as tea.
It is also a popular vegetable in Indian cooking, where it is often prepared with potatoes and served with yogurt on the side to offset the bitterness, or used in sabji. Bitter melon fried in oil and then stuffed with other spicy ingredients is very popular in Andhra Pradesh, a south Indian state.
Bitter melon is rarely used in mainland Japan, but is a significant component of Okinawan cuisine.
In Indonesia, bitter melon is prepared in various dishes, such as stir fry, cooked in coconut milk, or steamed.
In Vietnam, raw bitter melon slices consumed with dried meat floss and bitter melon soup with shrimp are popular dishes.
It is prepared into various dishes in the Philippines, where it is known as ampalaya. Ampalaya may also be stir-fried with ground beef and oyster sauce, or with eggs and diced tomato. A very popular dish from the Ilocos region of the Philippines, pinakbet, consists mainly of bitter melons, eggplant, okra, string beans, tomatoes, lima beans, and other various regional vegetables stewed with a little bagoong-based stock.
The young shoots and leaves may also be eaten as greens; in the Philippines, where bitter melon leaves are most commonly consumed, they are called dahon (leaves) ng ampalaya. The seeds can also be eaten and have a sweet taste, but they have been known to cause vomiting and stomach upset.
In Nepal, bitter melon is prepared in various ways. Most often it is prepared as a fresh achar (a kind of salsa): the bitter gourd is cut into cubes or slices and sautéed, covered, in a little oil with a sprinkle of water; when it has softened and the water has dried out, it is minced in a traditional mortar with a few cloves of garlic, salt and red or green pepper. Another way is the sautéed version, in which the bitter gourd is cut into thin round slices or cubes and fried with very little oil, some salt, cumin and red chilli, until the vegetable softens and shows hints of golden brown on the sides. It is also prepared as a curry on its own or with potato, and made into stuffed vegetables.
In Pakistan, bitter melon is available in the summertime and is cooked mostly with plenty of onions. In a traditional bitter melon curry, the skin is peeled off and the melon cut into thin slices, which are salted and kept in the sun for a few hours to reduce the bitterness to some extent. The salty, bitter water is then squeezed out by pressing with the hands, and the bitter melon is washed several times with water. The bitter melon is fried in cooking oil in one pan while plenty of onions are fried in another; when the onions turn slightly pink, the fried bitter melon is added to them. After frying the onions and bitter melon together for a while, red chilli powder, turmeric powder, salt, coriander powder and a pinch of cumin seeds are added, and a little water is sprinkled in while frying the spices. A good amount of tomatoes is then added to the curry, along with green chillies if desired. The pan is covered with a lid and the heat reduced to a minimum so that the tomatoes become tender and the spices can work their magic; the curry is stirred a few times at intervals during this period. After about half an hour, the curry is ready to serve. It is served with soft, hot flat breads (chappatis, چپاتی) and yogurt chutney.
Another dish in Pakistan calls for whole, unpeeled bitter melon to be boiled and then stuffed with cooked ground beef. In this dish, it is recommended that the bitter melon be left 'de-bittered'. It is served with hot tandoori bread, naan, or chappati, or with khichri (a mixture of lentils and rice).
# Medicinal uses
Bitter melons have been used in various Asian traditional medicine systems for a long time. Like most bitter-tasting foods, bitter melon stimulates digestion. While this can be helpful in people with sluggish digestion, dyspepsia, and constipation, it can sometimes make heartburn and ulcers worse. The fact that bitter melon is also a demulcent and at least a mild inflammation modulator, however, means that it rarely does have these negative effects, based on clinical experience and traditional reports.
Though it has been claimed that bitter melon’s bitterness comes from quinine, no evidence could be located supporting this claim. Bitter melon is traditionally regarded by Asians, as well as Panamanians and Colombians, as useful for preventing and treating malaria. Laboratory studies have confirmed that various species of bitter melon have anti-malarial activity, though human studies have not yet been published.
Laboratory tests suggest that compounds in bitter melon might be effective for treating HIV infection. As most compounds isolated from bitter melon that impact HIV have either been proteins or glycoproteins (lectins), neither of which are well-absorbed, it is unlikely that oral intake of bitter melon will slow HIV in infected people. It is possible that oral ingestion of bitter melon could offset negative effects of anti-HIV drugs, if a test tube study can be shown to be true in people. In one preliminary clinical trial, an enema form of a bitter melon extract showed some benefits in people infected with HIV (Zhang 1992). Clearly more research is necessary before this could be recommended.
The other area showing the most promise for bitter melon is as an immunomodulator. One clinical trial found very limited evidence that bitter melon might improve immune cell function in people with cancer, but this needs to be verified and expanded on in further research. If proven correct, this would be another way bitter melon could help people infected with HIV.
Some claim bitter melon is "a cure for diabetes", although outside of anecdotal reports there is limited scientific evidence for this claim.
# Names in other languages
Austronesian languages
- Chavacano: amargozo
- Ilocano: pariya
- Malay and Indonesian: peria, pare, or parai
- Tagalog: ampalaya
Dravidian languages
- Kannada: hāgala kāyi
- Malayalam: kaipakka or pavakkya
- Tamil: pākaRkāi or pavakka
- Telugu: kākara kāyi
- Tulu: kānchaal
Indic languages
- Bengali: করল্লা kôrolla
- Bishnupriya Manipuri: কারল karol
- Gujarati: કારેલું kāreluṃ
- Hindi/Urdu: करेला کریلا karelā
- Marathi: कारले karla
- Konkani: kārate
- Punjabi: karaila
- Sinhalese: karawila
- Trinidad Hindi: karailī
Japonic languages
- Japanese: nigauri, tsurureishi, usually gōya
- Okinawan: gōyā
Sino-Tibetan languages
- Mandarin: 苦瓜 kǔ guā
- Taiwanese (Min Nan): 苦瓜 ko guai'
- Burmese: kyethinkhathee
Other languages
- Arabic: Hanzal
- Portuguese: melão-de-são-caetano
- Thai: มะระจีน marajin or มะระ mara
- Vietnamese: khổ qua
- Nepali: tito karela
- Korean: 여주
# Trivia
- A "bitter Gourd face" (苦瓜臉) is a common Chinese description for a serious or sad face. | Bitter melon
Black Cohosh
Cimicifuga racemosa (Black cohosh, Black bugbane, Black snakeroot, or Fairy candle; syn. Actaea racemosa) is a member of the family Ranunculaceae, native to eastern North America from the extreme south of Ontario south to central Georgia, and west to Missouri and Arkansas. It grows in a variety of woodland habitats, and is often found in small woodland openings.
It is a glabrous herbaceous perennial plant, producing large, compound leaves from an underground rhizome, growing 0.25-0.6 m (10-24 in) tall. The basal leaves are up to 1 m (39 in) long and broad, tripinnately compound, the leaflets with a coarsely toothed margin. The flowers are produced in late spring and early summer on a tall stem, 0.75-2.5 m (2½–8 ft) tall, in racemes up to 50 cm (20 in) long; they have no petals or sepals, only a tight cluster of 55-110 white stamens 5-10 mm long surrounding the white stigma. The flowers have a distinctly sweet smell. The fruit is a dry follicle 5-10 mm long containing several seeds.
Blue cohosh (Caulophyllum thalictroides), despite its similar common name, is a plant of another genus.
# Herbal use
Black cohosh has been included in herbal compounds and dietary supplements marketed to women as remedies for the symptoms of premenstrual tension, menopause, and other gynecological problems. However, a recent study published in Annals of Medicine (December 19, 2006) casts serious doubt on its efficacy. The researchers found black cohosh slightly less effective than a placebo and concluded that the herb "shows little potential as an important therapy for relief of vasomotor symptoms." That study, however, used a product that contained 5 mg of the active component per day, whereas the current daily recommended dose of the long-used standard Remifemin contains 2 mg. The American Botanical Council discusses that study.
It was thought that black cohosh contained estrogen-like chemicals, but recent research suggests that it works by binding to serotonin receptors. Native Americans used black cohosh to treat gynecological disorders and other disorders as well, including sore throats, kidney problems, and even depression.
Black cohosh has been used as an abortifacient (see side effects).
# Side effects
Black cohosh should not be used during pregnancy or lactation. There is a case report of neurological complications in a post-term baby after labor induction with a mixture of black cohosh and blue cohosh during a home birth. Other cases of adverse outcomes experienced by neonates born to women who reportedly used blue cohosh to induce labor have been published in peer-reviewed journals.
Black cohosh produces endometrial stimulation. Since black cohosh increases blood flow to the pelvic area, its use is not recommended during menses as it may increase or prolong bleeding. Because of its possible estrogenic action, it should be used with caution if taken for more than six months. Additionally, black cohosh contains tannin, which inhibits iron absorption. This, together with its possible enhancement of menstrual bleeding, gives good cause to monitor iron stores when taking black cohosh.
No studies have been published on long-term safety in humans. However, concerns have been raised that, because of its estrogen-like effects, long-term use in humans may promote metastasis of estrogen-sensitive cancer tissue via stimulation of cells in the endometrium or breast. Black cohosh increased metastasis of cancer to the lungs (but did not cause an increased incidence of breast cancer) in an experiment on mice, although that experiment was never published and the lung tumors were never biopsied, only observed (NIH.pdf).
The liver damage reported in a few individuals using black cohosh has been severe, but large numbers of women have taken the herb for years without reporting adverse health effects. See the NIH workshop report listed under External links for a thorough discussion of the liver issue. While studies of black cohosh have not proven that the herb causes liver damage, Australia has added a warning to the label of all products containing black cohosh, stating that it may cause harm to the liver of some individuals and should not be used without medical supervision.
Aside from pregnancy complications, increased menstrual bleeding, anemia, and rare but serious hepatic dysfunction, reported direct side effects also include dizziness, diarrhea, nausea, and occasional gastric discomfort. Additional possible side effects include headaches, seizures, vomiting, sweating, constipation, low blood pressure, slow heartbeat, and weight problems.
# Garden use
Cimicifuga racemosa grows in dependably moist, fairly heavy soil. It bears tall tapering racemes of white midsummer flowers on wiry black-purple stems, whose mildly unpleasant, medicinal smell at close range gives it the common name 'Bugbane'. The drying seed heads stay handsome in the garden for many weeks. Its burgundy, deeply cut leaves add interest to American gardens wherever summer heat and drought do not make it die back, making it a popular garden perennial.
# External links
- Safety Concerns
- Australian Adverse Drug Reactions Bulletin, April 2006: noted 49 cases of liver toxicity worldwide associated with the use of black cohosh, four of which required liver transplants; serious cases of liver toxicity have been reported with use for less than a month. Listed as "Do Not Use" in Worst Pills, Best Pills, August 2006, p. 63.
- Flora of North America: Cimicifuga racemosa
- Black cohosh root treatments and side effects
- Missouri plants: Cimicifuga racemosa (detailed photos)
- Chemical background of black cohosh.
- Article on a recent study of Black cohosh
- National Institutes of Health (NIH), "Workshop on the Safety of Black Cohosh in Clinical Studies", November 2004. This large .pdf file addresses the rare liver toxicity issues and the lung metastases in the 2003 mouse study, and concludes that there was no competent evidence to support concerns about safety with respect to use of black cohosh in breast cancer patients as long as they are being followed by their doctors.
Black locust
Black Locust (Robinia pseudoacacia) is a tree in the subfamily Faboideae of the pea family Fabaceae. It is native to the southeastern United States, but has been widely planted and naturalized elsewhere in temperate North America, Europe and Asia and is considered an invasive species in some areas. A less frequently used common name is False Acacia, which is a literal translation of the specific epithet. It was introduced into Britain in 1636.
# Description
It grows to 14–25 m tall, with a trunk up to 0.8 m diameter (exceptionally up to 27 m tall and 1.6 m diameter in very old trees), with thick, deeply furrowed blackish bark. The leaves are 10–25 cm long, pinnate with 9–19 oval leaflets, 2–5 cm long and 1.5–3 cm broad. Each leaf usually has a pair of short thorns at the base, 1–2 mm long or absent on adult crown shoots, up to 2 cm long on vigorous young plants. The intensely fragrant flowers are white, borne in pendulous racemes 8–20 cm long, and are considered edible. The fruit is a legume 5–10 cm long, containing 4–10 seeds.
Although similar in general appearance to Honey locust, it lacks that tree's characteristic long branched spines on the trunk, instead having the pairs of short thorns at the base of each leaf; the leaflets are also much broader.
The species is native from Pennsylvania to northern Georgia and westward as far as Arkansas and Oklahoma, but has been widely spread. It reaches a height of seventy feet, with a trunk three or four feet in diameter and brittle branches that form an oblong, narrow head, and it spreads by underground shoots. The leaflets fold together in wet weather and at night; some change of position at night is the habit of the entire leguminous family.
- Bark: Dark gray brown tinged with red, deeply furrowed, surface inclined to scale. Branchlets at first coated with white silvery down. This soon disappears and they become pale green, afterward reddish brown. Prickles develop from stipules, are short, somewhat triangular, dilated at base, sharp, dark purple, adhering only to the bark, but persistent.
- Wood: Pale yellowish brown; heavy, hard, strong, close-grained and very durable in contact with the ground. Sp. gr., 0.7333; weight of cu. ft., 45.70 lbs.
- Winter buds: Minute, naked, three or four together, protected in a depression by a scale-like covering lined on the inner surface with a thick coat of tomentum and opening in early spring; when forming are covered by the swollen base of the petiole.
- Leaves: Alternate, compound, odd-pinnate, eight to fourteen inches long, with slender hairy petioles, grooved and swollen at the base. Leaflets petiolate, seven to nine, one to two inches long, one-half to three-fourths of an inch broad, emarginate or rounded at apex. They come out of the bud conduplicate, yellow green, covered with silvery down which soon disappears; when full grown are dull dark green above, paler beneath. Feather-veined, midvein prominent. In autumn they turn a clear pale yellow. Stipules linear, downy, membranous at first, ultimately developing into hard woody prickles, straight or slightly curved. Each leaflet has a minute stipel which quickly falls and a short petiole.
- Flowers: May, after the leaves. Papilionaceous. Perfect, borne in loose drooping racemes four to five inches long, cream-white, about an inch long, nectar-bearing, fragrant. Pedicels slender, half an inch long, dark red or reddish green.
- Calyx: Campanulate, gibbous, hairy, five-toothed, slightly two-lipped, dark green blotched with red, especially on the upper side; teeth valvate in bud.
- Corolla: Imperfectly papilionaceous, petals inserted upon a tubular disk; standard white with pale yellow blotch; wings white, oblong-falcate; keel petals incurved, obtuse, united below.
- Stamens: Ten, inserted, with the petals, diadelphous, nine inferior, united into a tube which is cleft on the upper side, superior one free at the base. Anthers two-celled, cells opening longitudinally.
- Pistil: Ovary superior, linear-oblong, stipitate, one-celled; style inflexed, long, slender, bearded; stigma capitate; ovules several, two-ranked.
- Fruit: Legume, two-valved, smooth, three to four inches long and half an inch broad, usually four to eight seeded. Ripens late in autumn and hangs on the branches until early spring. Seeds dark orange brown with irregular markings. Cotyledons oval, fleshy.
# Cultivation
Black locust is a major honey plant in the eastern United States and, having been taken and planted in France, is the source of the renowned acacia monofloral honey from France. Flowering starts after 140 growing degree days, as illustrated in the sketch below.
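Growing degree days are an accumulated heat sum rather than a calendar interval. The short sketch below illustrates how such a sum is commonly computed; the 10 °C base temperature and the sample temperature series are illustrative assumptions only and are not taken from this article, which gives just the 140-GDD flowering figure.

```python
# Minimal sketch of growing-degree-day (GDD) accumulation.
# The 10 °C base temperature is an illustrative assumption, not a value
# stated in this article.

def daily_gdd(t_max_c, t_min_c, t_base_c=10.0):
    """GDD contribution of one day, from its max/min temperatures in °C."""
    mean_temp = (t_max_c + t_min_c) / 2.0
    return max(mean_temp - t_base_c, 0.0)

def days_to_threshold(daily_temps, threshold=140.0, t_base_c=10.0):
    """Count days until accumulated GDD reaches the threshold (e.g. flowering)."""
    total = 0.0
    for day, (t_max, t_min) in enumerate(daily_temps, start=1):
        total += daily_gdd(t_max, t_min, t_base_c)
        if total >= threshold:
            return day, total
    return None, total

if __name__ == "__main__":
    # Hypothetical spring series warming from 12/4 °C by 0.5/0.4 °C per day.
    temps = [(12 + 0.5 * d, 4 + 0.4 * d) for d in range(60)]
    # Reports the first day on which the 140-GDD threshold is crossed.
    print(days_to_threshold(temps))
```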
In Europe it is often planted alongside streets and in parks, especially in large cities, because it tolerates pollution well. The species is unsuitable for small gardens due to its large size and rapid growth, but the cultivar 'Frisia', a selection with bright yellow-green leaves, is occasionally planted as an ornamental tree.
Black locust has nitrogen-fixing bacteria on its root system; for this reason it can grow on poor soils and is an early colonizer of disturbed areas.
In 1900 it was reported that the value of Robinia pseudacacia is practically destroyed in nearly all parts of the United States beyond the mountain forests which are its home, by the borers which riddle the trunk and branches. Were it not for these insects it would be one of the most valuable timber trees that could be planted in the northern and middle states. Young trees grow quickly and vigorously for a number of years, but soon become stunted and diseased, and rarely live long enough to attain any commercial value.
# Uses
The wood is extremely hard, resistant to rot and long lasting, making it prized for fence posts and small watercraft. As a young man, Abraham Lincoln spent a lot of time splitting rails and fence posts from black locust logs. Flavonoids in the heartwood allow the wood to last over 100 years in soil. In the Netherlands and some other parts of Europe, black locust is the most rot-resistant local tree, and projects have started to limit the use of tropical wood by promoting this tree and creating plantations. It is one of the heaviest and hardest woods in North America.
Black locust is unsurpassed as firewood for wood stoves; it burns slowly, with little visible flame or smoke, and has a higher heat content than any other wood that grows in the eastern US, comparable to the heat content of anthracite. However, for this use it should be split when green, then dried for 2 to 3 years, and ignited by insertion into a stove already hot from burning a load of some other hardwood. In fireplaces it is less satisfactory, because knots and beetle damage make the wood prone to "spitting" coals for distances of up to several feet. If the black locust is cut, split, and cured while relatively young (within ten years), damage and "spitting" problems are typically minimal. It can be an excellent firewood in stoves, campfires, and fireplaces if properly cultivated, although some people find the smell of the smoke offensive. As it is fast-growing and highly resilient in a variety of soils, it renews itself readily for future use.
## Toxicity
Like the honey locust, the black locust reproduces through its distinct hanging pods, but on the black locust they are smaller and lighter and thus easily carried long distances by the wind. Unlike the pods of the honey locust, but like those of the related European Laburnum, the black locust's pods are toxic. In fact, every part of the tree, especially the bark, is considered toxic, with the exception of the flowers. However, various reports have suggested that the seeds and the young pods of the black locust can be edible when cooked, since the poisons contained in this plant are broken down by heat. Horses that consume the plant show signs of anorexia, depression, diarrhea, colic, weakness, and cardiac arrhythmia. Symptoms usually occur about 1 hour following consumption, and immediate veterinary attention is required.
# History
The name locust is said to have been given to Robinia by Jesuit missionaries, who fancied that this was the tree that supported St. John in the wilderness, but it is native only to North America. The locust tree of Spain, which is also native to Syria, is supposed to be the true locust of the New Testament; the fruit of this tree may be found in the shops under the name of St. John's bread.
Robinia is now a North American genus, but traces of it are found in the Eocene and Miocene rocks of Europe.
Black pepper
Black pepper (Piper nigrum) is a flowering vine in the family Piperaceae, cultivated for its fruit, which is usually dried and used as a spice and seasoning. The same fruit is also used to produce white pepper, red/pink pepper, and green pepper. Black pepper is native to South India and is extensively cultivated there and elsewhere in tropical regions. The fruit, known as a peppercorn when dried, is a small drupe five millimetres in diameter, dark red when fully mature, containing a single seed.
Dried ground pepper is one of the most common spices in European cuisine and its descendants, having been known and prized since antiquity for both its flavour and its use as a medicine. The spiciness of black pepper is due to the chemical piperine. Ground black peppercorn, usually referred to simply as "pepper", may be found on nearly every dinner table in some parts of the world, often alongside table salt.
The word "pepper" is derived from the Sanskrit pippali, the word for long pepper via the Latin piper which was used by the Romans to refer both to pepper and long pepper, as the Romans erroneously believed that both of these spices were derived from the same plant. The English word for pepper is derived from the Old English pipor. The Latin word is also the source of German pfeffer, French poivre, Dutch peper, and other similar forms. In the 16th century, pepper started referring to the unrelated New World chile peppers as well. "Pepper" was used in a figurative sense to mean "spirit" or "energy" at least as far back as the 1840s; in the early 20th century, this was shortened to pep.
# Varieties
Black pepper is produced from the still-green unripe berries of the pepper plant. The berries are cooked briefly in hot water, both to clean them and to prepare them for drying. The heat ruptures cell walls in the fruit, speeding the work of browning enzymes during drying. The berries are dried in the sun or by machine for several days, during which the fruit around the seed shrinks and darkens into a thin, wrinkled black layer. Once dried, the fruits are called black peppercorns.
White pepper consists of the seed only, with the fruit removed. This is usually accomplished by allowing fully ripe berries to soak in water for about a week, during which the flesh of the fruit softens and decomposes. Rubbing then removes what remains of the fruit, and the naked seed is dried. Alternative processes are used for removing the outer fruit from the seed, including removal of the outer layer from black pepper produced from unripe berries.
In the U.S., white pepper is often used in dishes like light-coloured sauces or mashed potatoes, where ground black pepper would visibly stand out. There is disagreement regarding which is generally spicier. They do have differing flavours due to the presence of certain compounds in the outer fruit layer of the berry that are not found in the seed.
Green pepper, like black, is made from the unripe berries. Dried green peppercorns are treated in a manner that retains the green colour, such as treatment with sulphur dioxide or freeze-drying. Pickled peppercorns, also green, are unripe berries preserved in brine or vinegar. Fresh, unpreserved green pepper berries, largely unknown in the West, are used in some Asian cuisines, particularly Thai cuisine. Their flavour has been described as piquant and fresh, with a bright aroma. They decay quickly if not dried or preserved.
A rarely seen product called pink pepper or red pepper consists of ripe red pepper berries preserved in brine and vinegar. Even more rarely seen, ripe red peppercorns can also be dried using the same colour-preserving techniques used to produce green pepper. Pink pepper from Piper nigrum is distinct from the more-common dried "pink peppercorns", which are the fruits of a plant from a different family, the Peruvian pepper tree, Schinus molle, and its relative the Brazilian pepper tree, Schinus terebinthifolius. In years past there was debate as to the health safety of pink peppercorns, which is mostly no longer an issue. Sichuan peppercorn is another "pepper" that is botanically unrelated to black pepper.
Peppercorns are often categorised under a label describing their region or port of origin. Two well-known types come from India's Malabar Coast: Malabar pepper and Tellicherry pepper. Tellicherry is a higher-grade pepper, made from the largest, ripest 10% of berries from Malabar plants grown on Mount Tellicherry. Sarawak pepper is produced in the Malaysian portion of Borneo, and Lampong pepper on Indonesia's island of Sumatra. White Muntok pepper is another Indonesian product, from Bangka Island.
# The pepper plant
The pepper plant is a perennial woody vine growing to four metres in height on supporting trees, poles, or trellises. It is a spreading vine, rooting readily where trailing stems touch the ground. The leaves are alternate, entire, five to ten centimetres long and three to six centimetres broad. The flowers are small, produced on pendulous spikes four to eight centimetres long at the leaf nodes, the spikes lengthening to seven to 15 centimetres as the fruit matures.
Black pepper is grown in soil that is neither too dry nor susceptible to flooding, moist, well-drained and rich in organic matter. The plants are propagated by cuttings about 40 to 50 centimetres long, tied up to neighbouring trees or climbing frames at distances of about two metres apart; trees with rough bark are favoured over those with smooth bark, as the pepper plants climb rough bark more readily. Competing plants are cleared away, leaving only sufficient trees to provide shade and permit free ventilation. The roots are covered in leaf mulch and manure, and the shoots are trimmed twice a year. On dry soils the young plants require watering every other day during the dry season for the first three years. The plants bear fruit from the fourth or fifth year, and typically continue to bear fruit for seven years. The cuttings are usually cultivars, selected both for yield and quality of fruit.
A single stem will bear 20 to 30 fruiting spikes. The harvest begins as soon as one or two berries at the base of the spikes begin to turn red, and before the fruit is mature, but when full grown and still hard; if allowed to ripen, the berries lose pungency, and ultimately fall off and are lost. The spikes are collected and spread out to dry in the sun, then the peppercorns are stripped off the spikes.
# History
Pepper has been used as a spice in India since prehistoric times. J. Innes Miller notes that while pepper was grown in southern Thailand and in Malaysia, its most important source was India, particularly the Malabar Coast, in what is now the state of Kerala. Peppercorns were a much prized trade good, often referred to as "black gold" and used as a form of commodity money. The term "peppercorn rent" still exists today.
The ancient history of black pepper is often interlinked with (and confused with) that of long pepper, the dried fruit of closely related Piper longum. The Romans knew of both and often referred to either as just "piper". In fact, it was not until the discovery of the New World and of chile peppers that the popularity of long pepper entirely declined. Chile peppers, some of which when dried are similar in shape and taste to long pepper, were easier to grow in a variety of locations more convenient to Europe.
Until well after the Middle Ages, virtually all of the black pepper found in Europe, the Middle East, and North Africa travelled there from India's Malabar region. By the 16th century, pepper was also being grown in Java, Sunda, Sumatra, Madagascar, Malaysia, and elsewhere in Southeast Asia, but these areas traded mainly with China, or used the pepper locally. Ports in the Malabar area also served as a stop-off point for much of the trade in other spices from farther east in the Indian Ocean.
Black pepper, along with other spices from India and lands farther east, changed the course of world history. It was in some part the preciousness of these spices that led to the European efforts to find a sea route to India and consequently to the European colonial occupation of that country, as well as the European discovery and colonization of the Americas.
## Ancient times
Black peppercorns were found lodged in the nostrils of Ramesses II, placed there as part of the mummification rituals shortly after his death in 1213 BCE. Little else is known about the use of pepper in ancient Egypt, nor how it reached the Nile from India.
Pepper (both long and black) was known in Greece at least as early as the 4th century BCE, though it was probably an uncommon and expensive item that only the very rich could afford. Trade routes of the time were by land, or in ships which hugged the coastlines of the Arabian Sea. Long pepper, growing in the north-western part of India, was more accessible than the black pepper from further south; this trade advantage, plus long pepper's greater spiciness, probably made black pepper less popular at the time.
By the time of the early Roman Empire, especially after Rome's conquest of Egypt in 30 BCE, open-ocean crossing of the Arabian Sea directly to southern India's Malabar Coast was near routine. Details of this trading across the Indian Ocean have been passed down in the Periplus of the Erythraean Sea. According to the Roman geographer Strabo, the early Empire sent a fleet of around 120 ships on an annual one-year trip to India and back. The fleet timed its travel across the Arabian Sea to take advantage of the predictable monsoon winds. Returning from India, the ships travelled up the Red Sea, from where the cargo was carried overland or via the Nile Canal to the Nile River, barged to Alexandria, and shipped from there to Italy and Rome. The rough geographical outlines of this same trade route would dominate the pepper trade into Europe for a millennium and a half to come.
With ships sailing directly to the Malabar coast, black pepper was now travelling a shorter trade route than long pepper, and the prices reflected it. Pliny the Elder's Natural History tells us the prices in Rome around 77 CE: "Long pepper ... is fifteen denarii per pound, while that of white pepper is seven, and of black, four." Pliny also complains "there is no year in which India does not drain the Roman Empire of fifty million sesterces," and further moralises on pepper:
It is quite surprising that the use of pepper has come so much into fashion, seeing that in other substances which we use, it is sometimes their sweetness, and sometimes their appearance that has attracted our notice; whereas, pepper has nothing in it that can plead as a recommendation to either fruit or berry, its only desirable quality being a certain pungency; and yet it is for this that we import it all the way from India! Who was the first to make trial of it as an article of food? and who, I wonder, was the man that was not content to prepare himself by hunger only for the satisfying of a greedy appetite? (N.H. 12.14)
Black pepper was a well-known and widespread, if expensive, seasoning in the Roman Empire. Apicius' De re coquinaria, a 3rd-century cookbook probably based at least partly on one from the 1st century CE, includes pepper in a majority of its recipes. Edward Gibbon wrote, in The History of the Decline and Fall of the Roman Empire, that pepper was "a favourite ingredient of the most expensive Roman cookery".
## Postclassical Europe
Pepper was so valuable that it was often used as collateral or even currency. The taste for pepper (or the appreciation of its monetary value) was passed on to those who would see Rome fall. It is said that Alaric the Visigoth and Attila the Hun each demanded from Rome a ransom of more than a ton of pepper when they besieged the city in the 5th century. After the fall of Rome, others took over the middle legs of the spice trade, first the Persians and then the Arabs; Innes Miller cites the account of Cosmas Indicopleustes, who travelled east to India, as proof that "pepper was still being exported from India in the sixth century". By the end of the Dark Ages, the central portions of the spice trade were firmly under Islamic control. Once into the Mediterranean, the trade was largely monopolised by Italian powers, especially Venice and Genoa. The rise of these city-states was funded in large part by the spice trade.
A riddle authored by Saint Aldhelm, a 7th-century Bishop of Sherborne, sheds some light on black pepper's role in England at that time.
It is commonly believed that during the Middle Ages, pepper was used to conceal the taste of partially rotten meat. There is no evidence to support this claim, and historians view it as highly unlikely: in the Middle Ages, pepper was a luxury item, affordable only to the wealthy, who certainly had unspoiled meat available as well. Similarly, the belief that pepper was widely used as a preservative is questionable: it is true that piperine, the compound that gives pepper its spiciness, has some antimicrobial properties, but at the concentrations present when pepper is used as a spice, the effect is small. Salt is a much more effective preservative, and salt-cured meats were common fare, especially in winter. However, pepper and other spices probably did play a role in improving the taste of long-preserved meats.
Its exorbitant price during the Middle Ages — and the monopoly on the trade held by Italy — was one of the inducements which led the Portuguese to seek a sea route to India. In 1498, Vasco da Gama became the first European to reach India by sea; asked by Arabs in Calicut (who spoke Spanish and Italian) why they had come, his representative replied, "we seek Christians and spices." Though this first trip to India by way of the southern tip of Africa was only a modest success, the Portuguese quickly returned in greater numbers and used their superior naval firepower to eventually gain complete control of trade on the Arabian sea. This was the start of the first European empire in Asia, given additional legitimacy (at least from a European perspective) by the 1494 Treaty of Tordesillas, which granted Portugal exclusive rights to the half of the world where black pepper originated.
The Portuguese proved unable to maintain their stranglehold on the spice trade for long. The old Arab and Venetian trade networks successfully smuggled enormous quantities of spices through the patchy Portuguese blockade, and pepper once again flowed through Alexandria and Italy, as well as around Africa. In the 17th century, the Portuguese lost almost all of their valuable Indian Ocean possessions to the Dutch and the English. The pepper ports of Malabar fell to the Dutch in the period 1661–1663.
As pepper supplies into Europe increased, the price of pepper declined (though the total value of the import trade generally did not). Pepper, which in the early Middle Ages had been an item exclusively for the rich, started to become more of an everyday seasoning among those of more average means. Today, pepper accounts for one-fifth of the world's spice trade.
## China
It is possible that black pepper was known in China in the 2nd century BCE, if poetic reports regarding an explorer named Tang Meng (唐蒙) are correct. Sent by Emperor Wu to what is now south-west China, Tang Meng is said to have come across something called jujiang or "sauce-betel". He was told it came from the markets of Shu, an area in what is now the Sichuan province. The traditional view among historians is that "sauce-betel" is a sauce made from betel leaves, but arguments have been made that it actually refers to pepper, either long or black.
In the 3rd century CE, black pepper made its first definite appearance in Chinese texts, as hujiao or "foreign pepper". It does not appear to have been widely known at the time, failing to appear in a 4th-century work describing a wide variety of spices from beyond China's southern border, including long pepper. By the 12th century, however, black pepper had become a popular ingredient in the cuisine of the wealthy and powerful, sometimes taking the place of China's native Sichuan pepper (the tongue-numbing dried fruit of an unrelated plant).
Marco Polo testifies to pepper's popularity in 13th-century China when he relates what he is told of its consumption in the city of Kinsay (Zhejiang): "... Messer Marco heard it stated by one of the Great Kaan's officers of customs that the quantity of pepper introduced daily for consumption into the city of Kinsay amounted to 43 loads, each load being equal to 223 lbs." Marco Polo is not considered a very reliable source regarding China, and this second-hand data may be even more suspect, but if this estimated 10,000 pounds (4,500 kg) a day for one city is anywhere near the truth, China's pepper imports may have dwarfed Europe's.
## Pepper as a medicine
Like all eastern spices, pepper was historically both a seasoning and a medicine. Long pepper, being stronger, was often the preferred medication, but both were used.
Black peppercorns figure in remedies in Ayurveda, Siddha and Unani medicine in India. The 5th century Syriac Book of Medicines prescribes pepper (or perhaps long pepper) for such illnesses as constipation, diarrhea, earache, gangrene, heart disease, hernia, hoarseness, indigestion, insect bites, insomnia, joint pain, liver problems, lung disease, oral abscesses, sunburn, tooth decay, and toothaches. Various sources from the 5th century onward also recommend pepper to treat eye problems, often by applying salves or poultices made with pepper directly to the eye. There is no current medical evidence that any of these treatments has any benefit; pepper applied directly to the eye would be quite uncomfortable and possibly damaging.
Pepper has long been believed to cause sneezing, a belief that persists today. Some sources say that piperine irritates the nostrils, causing the sneezing; some say that it is just the effect of the fine dust in ground pepper; and some say that pepper is not in fact a very effective sneeze-producer at all. Few if any controlled studies have been carried out to answer the question.
Pepper is eliminated from the diet of patients undergoing abdominal surgery and of patients with ulcers because of its irritating effect upon the intestines; it is replaced by what is referred to as a bland diet.
Pepper is sometimes used to stop light or mild cuts from bleeding in restaurant kitchens.
# Flavour
Pepper gets its spicy heat mostly from the piperine compound, which is found both in the outer fruit and in the seed. Refined piperine, milligram-for-milligram, is about one per cent as hot as the capsaicin in chile peppers. The outer fruit layer, left on black pepper, also contains important odour-contributing terpenes including pinene, sabinene, limonene, caryophyllene, and linalool, which give citrusy, woody, and floral notes. These scents are mostly missing in white pepper, which is stripped of the fruit layer. White pepper can gain some different odours (including musty notes) from its longer fermentation stage.
Pepper loses flavour and aroma through evaporation, so airtight storage helps preserve pepper's original spiciness longer. Pepper can also lose flavour when exposed to light, which can transform piperine into nearly tasteless isochavicine. Once ground, pepper's aromatics can evaporate quickly; most culinary sources recommend grinding whole peppercorns immediately before use for this reason. Handheld pepper mills (or "pepper grinders"), which mechanically grind or crush whole peppercorns, are used for this, sometimes instead of pepper shakers, dispensers of pre-ground pepper. Spice mills such as pepper mills were found in European kitchens as early as the 14th century, but the mortar and pestle used earlier for crushing pepper remained a popular method for centuries after as well.
# World trade
Peppercorns are, by monetary value, the most widely traded spice in the world, accounting for 20 percent of all spice imports in 2002. The price of pepper can be volatile, and this figure fluctuates a great deal year to year; for example, pepper made up 39 percent of all spice imports in 1998. By weight, slightly more chile peppers are traded worldwide than peppercorns. The International Pepper Exchange is located in Kochi, India.
Vietnam has recently become the world's largest producer and exporter of pepper (85,000 long tons in 2003). Other major producers include Indonesia (67,000 tons), India (65,000 tons), Brazil (35,000 tons), Malaysia (22,000 tons), Sri Lanka (12,750 tons), Thailand, and China. Vietnam dominates the export market, using almost none of its production domestically. In 2003, Vietnam exported 82,000 tons of pepper, Indonesia 57,000 tons, Brazil 37,940 tons, Malaysia 18,500 tons, and India 17,200 tons.
# Notes
- ↑ Green capsicum or bell pepper may also be called "green pepper"; it is an unrelated plant.
- ↑ Pippali is Sanskrit for long pepper; black pepper is marica. Greek and Latin borrowed pippali to refer to either.
- ↑ Douglas Harper's Online Etymology Dictionary entries for pepper and pep. Retrieved 13 November 2005.
- ↑ See Thai Ingredients Glossary. Retrieved 6 November 2005.
- ↑ Ochef, Using fresh green peppercorns. Retrieved 6 November 2005.
- ↑ Katzer, Gernot (2006). Pepper. Gernot Katzer's Spice Pages. Retrieved 12 August 2006.
- ↑ Peppercorns, from Penzey's Spices. Retrieved 17 October 2006.
- ↑ Pepper varieties information from A Cook's Wares. Retrieved 6 November 2005.
- ↑ J. Innes Miller, The Spice Trade of the Roman Empire (Oxford: Clarendon Press, 1969), p. 80
- ↑ Dalby p. 93.
- ↑ From Bostock and Riley's 1855 translation. Text online.
- ↑ Innes Miller, The Spice Trade, p. 83
- ↑ Translation from Turner, p 94. The riddle's answer is of course pepper.
- ↑ Dalby p. 156; also Turner pp. 108–109, though Turner does go on to discuss spices (not pepper specifically) being used to disguise the taste of partially spoiled wine or ale.
- ↑ H. J. D. Dorman and S. G. Deans (2000). "Antimicrobial agents from plants: antibacterial activity of plant volatile oils". Journal of Applied Microbiology 88 (2): 308. Full text at Blackwell website; purchase required. "Spices, which are used as integral ingredients in cuisine or added as flavouring agents to foods, are present in insufficient quantities for their antimicrobial properties to be significant."
- ↑ Jaffee p. 10.
- ↑ Dalby pp. 74–75. The argument that jujiang was long pepper goes back to the 4th century CE botanical writings of Ji Han; Hui-lin Li's 1979 translation of and commentary on Ji Han's work makes the case that it was piper nigrum.
- ↑ Dalby p. 77.
- ↑ Translation from The Travels of Marco Polo: The Complete Yule-Cordier Edition, Vol. 2, Dover. ISBN 0-486-27587-6. p. 204.
- ↑ Turner p. 160.
- ↑ Turner p. 171.
- ↑ U.S. Library of Congress Science Reference Services "Everyday Mysteries", Why does pepper make you sneeze?. Retrieved November 12, 2005.
- ↑ McGee p. 428.
- ↑ ibid.
- ↑ Montagne, Prosper (2001). Larousse Gastronomique. Hamlyn. p. 726. ISBN 0-600-60235-4. "Mill".
- ↑ Jaffee p. 12, table 2.
- ↑ Data from Multi Commodity Exchange of India, Ltd. Retrieved 6 November 2005. | Black pepper
| https://www.wikidoc.org/index.php/Black_pepper |
d192b6826431e420363a91c976fa9dbaa950e355 | wikidoc | Blast injury | Blast injury
# Overview
Blast injuries are inflicted on individuals subjected to the effects of the detonation of high-order explosives, explosives that produce a supersonic over-pressurization shock wave, as well as low order explosives which produce a subsonic explosion with no over-pressurization wave. These injuries are compounded when the explosion takes place in a confined space.
# Classification
Blast injuries are divided into four classes:
- Primary: Injuries due to high-order explosive over-pressurization shock wave as it moves through the body from solid and liquid sections to gas-filled organs, in particular the lungs, gastrointestinal tract and middle ear. Solid- and liquid-filled organs are not subject to primary blast injury. These injuries are not necessarily obvious to observers.
- Secondary: Injuries due to bomb fragments and other objects propelled by the explosion. These injuries may affect any part of the body and sometimes result in visible hemorrhage. At times the propelled object may become embedded in the body, obstructing the loss of blood to the outside. However, there may be extensive loss of blood within the body cavities. Shrapnel wounds may be lethal and therefore many anti-personnel bombs are designed to generate shrapnel and fragments.
- Tertiary: Injuries as a result of the victim becoming a missile and being thrown against other objects. The injuries sustained are then similar to those that are sustained by blunt trauma, including bone fractures and coup/contre-coup injuries.
- Quaternary: All other injuries not included in the first three classes. These include burns, crushing injuries and respiratory injuries. | Blast injury
| https://www.wikidoc.org/index.php/Blast_injury |
4c6dccf2177eff3976e81e156b0ee8edab7296ad | wikidoc | Blastocystis | Blastocystis
# Overview
Blastocystis is a highly prevalent single-celled parasite that infects the gastrointestinal tract of humans and animals. Many different types of Blastocystis exist, and they can infect humans, farm animals, birds, rodents, amphibians, reptiles, fish, and even cockroaches.
# Blastocystosis
Infection with Blastocystis can produce the disease Blastocystosis. The most frequently described symptoms of Blastocystosis are abdominal pain, constipation, and diarrhea.
# Genetic classification
Blastocystis has presented a challenge to the medical and scientific community due to the diversity of hosts the organism can infect, the diversity of Blastocystis species which exist, and the fact that most species of Blastocystis found in mammals and birds are able to cause infection in humans. The organism has been called controversial, cryptic, and enigmatic. Even its classification has proved challenging. Blastocystis was originally classified as a yeast, then as a protozoan. An analysis of gene sequences was finally performed in 1996, which placed it into the Stramenopile kingdom.
For many years, scientists believed one species of Blastocystis infected humans, while different species of Blastocystis infected other animals. So they called Blastocystis from humans Blastocystis hominis and gave different species names to Blastocystis from other animals, for example Blastocystis ratti from rats. Various genetic analyses showed that Blastocystis hominis as a unique entity does not really exist -- there is no single species of Blastocystis that infects humans. In fact, nine distinct 'species' of Blastocystis (as defined by genetic differences) can infect humans, including those previously called Blastocystis ratti. Because of this, in 2007 scientists proposed discontinuing the use of the term Blastocystis hominis. Their proposal is to refer to Blastocystis from humans and animals as Blastocystis sp. subtype nn where nn is a number from 1 to 9 assigned to each species group.
# Microbiology
The appropriate classification of Blastocystis has only recently been resolved. The original description of Blastocystis was as a yeast due to its yeast-like glistening appearance in fresh wet mounts and the absence of pseudopodia and locomotion. This was then contradicted by Zierdt who reclassified it under subphylum Sporozoa based on some distinctive protistan features that the Blastocystis cell has, such as the presence of nuclei, smooth and rough endoplasmic reticulum, Golgi complex, and mitochondrion-like organelles. Its sensitivity to antiprotozoal drugs and its inability to grow on fungal media further indicated that it was a protozoan. However, major revisions were made to its classification more recently based on modern molecular approaches to classification, and these studies have shown that Blastocystis is neither yeast nor a protozoan. It is placed in a new Kingdom known as the Stramenopiles. Other Stramenopiles include brown algae, mildew, diatoms, the organism that caused the Irish potato famine, and the organism responsible for Sudden oak death disease.
The great diversity of morphological forms in which Blastocystis exists poses identification and diagnostic problems. Four commonly described forms are the vacuolar (otherwise known as central body), granular, amoeboid, and cyst forms. The appearance of the organism is largely dependent upon environmental conditions as it is extremely sensitive to oxygen. Whether all of these forms exist in the host intestine is unclear.
- Vacuolar form
- Granular form
- Amoeboid form
- Cyst form
The proposed life cycle begins with ingestion of the cyst form. After ingestion, the cyst develops into other forms which may in turn re-develop into cyst forms. Through human feces, the cyst forms enter the external environment and are transmitted to humans and other animals via the fecal-oral route, repeating the entire cycle.
## Obtaining and culturing Blastocystis
The ATCC maintains a collection of Blastocystis isolates. Some records show whether the isolates were obtained from symptomatic or asymptomatic carriers. As yet, no publication has identified the subtypes of most of the ATCC isolates, which are mostly axenic. Researchers have reported that patients with Irritable bowel syndrome may provide a reliable source for xenic Blastocystis isolates. Some researchers have reported being able to culture Blastocystis from 46% of IBS patients. Researchers have described different culture mechanisms for growing Blastocystis. Colony growth on solid culture medium and growth in a synthetic medium with added supplements have both been described. However, most cultivation is performed in liquid media of various types.
# Treatment
## Antimicrobial regimen
- Blastocystis
- Preferred regimen (1): Metronidazole 750 mg PO tid or 1.5 g qd for 10 days
- Preferred regimen (2): Trimethoprim-sulfamethoxazole 1 DS PO bid or 2 DS PO qd for 7 days
- Preferred regimen (3): Iodoquinol 650 mg PO tid for 20 days
- Preferred regimen (4): Nitazoxanide 500 mg PO bid for 3 days
- Preferred regimen (5): Paromomycin 25-35 mg/kg/day PO tid for 7 days
- Note (1): Treatment of asymptomatic infections is unnecessary
- Note (2): One double strength tablet contains 160 mg trimethoprim/800 mg sulfamethoxazole | Blastocystis
| https://www.wikidoc.org/index.php/Blastocystis |
85dce33023f31731f584cb4839f41a6326020100 | wikidoc | Coagulopathy | Coagulopathy
# Overview
Coagulopathy is a medical term for a defect in the body's mechanism for blood clotting. While there are several possible causes, they generally result in excessive bleeding and a lack of clotting.
Hemophilia is one type of congenital disease characterized by coagulopathy; such diseases are examples of a severe lack of blood clotting.
# Causes
- Acquired causes of coagulopathy include anticoagulation with warfarin, liver failure, and disseminated intravascular coagulation. Additionally, the haemotoxic venom from certain species of snakes can cause this condition e.g. Bothrops, rattlesnakes and other species of viper.
- Drugs: caspofungin acetate, hydroxyethyl starch, Ixabepilone, Pegasparaginase
# Differential Diagnosis of Coagulopathy | Coagulopathy
| https://www.wikidoc.org/index.php/Bleeding_disorder |
1d7e54382bda4cadbf50066b708123bb1906215e | wikidoc | Blinatumomab | Blinatumomab
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Black Box Warning
# Overview
Blinatumomab is an antineoplastic agent that is FDA approved for the treatment of Philadelphia chromosome-negative relapsed or refractory B-cell precursor acute lymphoblastic leukemia (ALL). There is a Black Box Warning for this drug as shown here. Common adverse reactions include pyrexia, headache, peripheral edema, febrile neutropenia, nausea, hypokalemia, and constipation; the most common serious adverse reactions include febrile neutropenia, pyrexia, pneumonia, sepsis, neutropenia, device-related infection, tremor, encephalopathy, infection, overdose, confusion, Staphylococcal bacteremia, and headache.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
- Blinatumomab is indicated for the treatment of Philadelphia chromosome-negative relapsed or refractory B-cell precursor acute lymphoblastic leukemia (ALL).
- This indication is approved under accelerated approval. Continued approval for this indication may be contingent upon verification of clinical benefit in subsequent trials.
- Hospitalization is recommended for the first 9 days of the first cycle and the first 2 days of the second cycle. For all subsequent cycle starts and reinitiation (e.g., if treatment is interrupted for 4 or more hours), supervision by a healthcare professional or hospitalization is recommended.
- Do not flush the Blinatumomab infusion line, especially when changing infusion bags. Flushing when changing bags or at completion of infusion can result in excess dosage and complications thereof. Preparation and administration errors resulting in overdose have occurred.
- A single cycle of treatment of Blinatumomab consists of 4 weeks of continuous intravenous infusion followed by a 2-week treatment-free interval.
- For patients at least 45 kg in weight:
- In Cycle 1, administer Blinatumomab at 9 mcg/day on Days 1–7 and at 28 mcg/day on Days 8–28.
- For subsequent cycles, administer Blinatumomab at 28 mcg/day on Days 1–28.
- Allow for at least 2 weeks treatment-free between cycles of Blinatumomab.
- A treatment course consists of up to 2 cycles of Blinatumomab for induction followed by 3 additional cycles for consolidation treatment (up to a total of 5 cycles).
- Premedicate with dexamethasone 20 mg intravenously 1 hour prior to the first dose of Blinatumomab of each cycle, prior to a step dose (such as Cycle 1 day 8), or when restarting an infusion after an interruption of 4 or more hours.
- Administer Blinatumomab as a continuous intravenous infusion at a constant flow rate using an infusion pump. The pump should be programmable, lockable, non-elastomeric, and have an alarm.
- Blinatumomab infusion bags should be infused over 24 hours or 48 hours (a worked arithmetic sketch of the dosing schedule and these rates follows this list). Infuse the total 240 mL Blinatumomab solution according to the instructions on the pharmacy label on the bag at one of the following constant infusion rates:
- Infusion rate of 10 mL/h for a duration of 24 hours, OR
- Infusion rate of 5 mL/h for a duration of 48 hours
- The Blinatumomab solution for infusion must be administered using IV tubing that contains a sterile, non-pyrogenic, low protein-binding, 0.2 micron in-line filter.
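As a plain-arithmetic illustration of the schedule and rates above, the following sketch restates the quoted figures in code. It is illustrative only and not dosing software; the function names are invented for this example. It shows the nominal daily dose for a patient of at least 45 kg by cycle and day, and checks that either infusion option delivers the same 240 mL bag volume.

```python
# Illustrative sketch only: it restates figures quoted above, namely 9 mcg/day
# on Days 1-7 of Cycle 1 and 28 mcg/day thereafter (patients at least 45 kg),
# with each bag delivering 240 mL at a constant pump rate.

def nominal_daily_dose_mcg(cycle: int, day: int) -> float:
    """Nominal Blinatumomab dose (mcg/day); `cycle` and `day` are 1-based,
    with `day` counted within the 28 infusion days of a cycle."""
    if not 1 <= day <= 28:
        raise ValueError("infusion days run from 1 to 28 in each cycle")
    if cycle == 1 and day <= 7:
        return 9.0   # Cycle 1, Days 1-7
    return 28.0      # Cycle 1, Days 8-28 and all days of subsequent cycles

def bag_volume_delivered_ml(rate_ml_per_h: float, duration_h: float) -> float:
    """Volume infused from one bag at a constant pump rate."""
    return rate_ml_per_h * duration_h

if __name__ == "__main__":
    assert nominal_daily_dose_mcg(1, 5) == 9.0
    assert nominal_daily_dose_mcg(1, 8) == 28.0
    assert nominal_daily_dose_mcg(3, 1) == 28.0
    # Both administration options quoted above empty the same 240 mL bag:
    assert bag_volume_delivered_ml(10, 24) == 240.0   # 10 mL/h for 24 hours
    assert bag_volume_delivered_ml(5, 48) == 240.0    # 5 mL/h for 48 hours
```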
Important Note: Do not flush the infusion line, especially when changing infusion bags. Flushing when changing bags or at completion of infusion can result in excess dosage. Blinatumomab should be infused through a dedicated lumen.
- At the end of the infusion, any unused Blinatumomab solution in the IV bag and IV lines should be disposed of in accordance with local requirements.
- If the interruption after an adverse event is no longer than 7 days, continue the same cycle to a total of 28 days of infusion inclusive of days before and after the interruption in that cycle. If an interruption due to an adverse event is longer than 7 days, start a new cycle.
- It is very important that the instructions for preparation (including admixing) and administration provided in this section are strictly followed to minimize medication errors (including underdose and overdose).
NOTE: 1 package of Blinatumomab includes 1 vial of Blinatumomab and 1 vial of IV Solution Stabilizer.
- Before preparation, ensure you have the following supplies ready:
- 1 package of Blinatumomab for preparation of 9 mcg/day dose infused over 24 hours at a rate of 10 mL/h, 9 mcg/day dose infused over 48 hours at a rate of 5 mL/h, and 28 mcg/day dose infused over 24 hours at a rate of 10 mL/h
- 2 packages of Blinatumomab for preparation of 28 mcg/day dose infused over 48 hours at a rate of 5 mL/h
- The following supplies are also required, but not included in the package:
- Sterile, single-use disposable syringes
- 21- to 23- gauge needle(s) (recommended)
- Preservative-free Sterile Water for Injection, USP
- 250 mL 0.9% Sodium Chloride IV bag
- To minimize the number of aseptic transfers, it is recommended to use a 250 mL-prefilled IV bag. 250 mL-prefilled IV bags typically contain overfill with a total volume of 265 to 275 mL. Blinatumomab dose calculations provided in section 2.4.4 are based on a starting volume of 265 mL to 275 mL 0.9% Sodium Chloride.
- Use only polyolefin, PVC non-di-ethylhexylphthalate (non-DEHP), or ethyl vinyl acetate (EVA) infusion bags/pump cassettes.
- Polyolefin, PVC non-DEHP, or EVA IV tubing with a sterile, non-pyrogenic, low protein-binding 0.2 micron in-line filter
- Ensure that the IV tubing is compatible with the infusion pump.
- Aseptic technique must be strictly observed when preparing the solution for infusion since Blinatumomab vials do not contain antimicrobial preservatives. To prevent accidental contamination, prepare Blinatumomab according to aseptic standards, including but not limited to:
- Preparation must be done in a USP compliant facility.
- Preparation must be done in an ISO Class 5 laminar flow hood or better.
- The admixing area should have appropriate environmental specifications, confirmed by periodic monitoring.
- Personnel should be appropriately trained in aseptic manipulations and admixing of oncology drugs.
- Personnel should wear appropriate protective clothing and gloves.
- Gloves and surfaces should be disinfected.
- IV Solution Stabilizer is provided with the Blinatumomab package and is used to coat the prefilled IV bag prior to addition of reconstituted Blinatumomab to prevent adhesion of Blinatumomab to IV bags and IV lines. Therefore, add IV Solution Stabilizer to the IV bag containing 0.9% Sodium Chloride. Do not use IV Solution Stabilizer for reconstitution of Blinatumomab.
- The entire volume of the admixed Blinatumomab will be more than the volume administered to the patient (240 mL) to account for the priming of the IV line and to ensure that the patient will receive the full dose of Blinatumomab .
- When preparing an IV bag, remove air from IV bag. This is particularly important for use with an ambulatory infusion pump.
- Use the specific volumes described in the admixing instructions to minimize errors in calculation.
- Specific admixing instructions are provided for each dose and infusion time. Verify the prescribed dose and infusion time of Blinatumomab and identify the appropriate dosing preparation section listed below. Follow the steps for reconstituting Blinatumomab and preparing the IV bag.
- 9 mcg/day infused over 24 hours at a rate of 10 mL/h.
- 9 mcg/day infused over 48 hours at a rate of 5 mL/h.
- 28 mcg/day infused over 24 hours at a rate of 10 mL/h.
- 28 mcg/day infused over 48 hours at a rate of 5 mL/h.
Preparation of the 9 mcg/day dose infused over 24 hours at a rate of 10 mL/h:
- Use a prefilled 250 mL 0.9% Sodium Chloride IV bag. 250 mL-prefilled bags typically contain overfill to a total volume of 265 to 275 mL. If necessary, adjust the IV bag volume by adding or removing 0.9% Sodium Chloride to achieve a starting volume between 265 and 275 mL.
- Using a 10 mL syringe, aseptically transfer 5.5 mL of IV Solution Stabilizer to the IV bag with 0.9% Sodium Chloride. Gently mix the contents of the bag to avoid foaming. Discard remaining IV Solution Stabilizer vial.
- Using a 5 mL syringe, reconstitute one vial of Blinatumomab using 3 mL of preservative-free Sterile Water for Injection, USP. Direct preservative-free Sterile Water for Injection, USP, toward the side of the vial during reconstitution. Gently swirl contents to avoid excess foaming. Do not shake.
- Do not reconstitute Blinatumomab with IV Solution Stabilizer.
- The addition of preservative-free Sterile Water for Injection, USP, to the lyophilized powder results in a final Blinatumomab concentration of 12.5 mcg/mL.
- Visually inspect the reconstituted solution for particulate matter and discoloration during reconstitution and prior to infusion. The resulting solution should be clear to slightly opalescent, colorless to slightly yellow. Do not use if solution is cloudy or has precipitated.
- Using a 1 mL syringe, aseptically transfer 0.83 mL of reconstituted Blinatumomab into the IV bag. Gently mix the contents of the bag to avoid foaming.
- Under aseptic conditions, attach the IV tubing to the IV bag with the sterile 0.2 micron in-line filter.
- Remove air from the IV bag and prime the IV line only with the prepared solution for infusion. Do not prime with 0.9% Sodium Chloride.
- Store at 2°C to 8°C if not used immediately.
Preparation of the 9 mcg/day dose infused over 48 hours at a rate of 5 mL/h:
- Use a prefilled 250 mL 0.9% Sodium Chloride IV bag. 250 mL-prefilled bags typically contain overfill to a total volume of 265 to 275 mL. If necessary, adjust the IV bag volume by adding or removing 0.9% Sodium Chloride to achieve a starting volume between 265 and 275 mL.
- Using a 10 mL syringe, aseptically transfer 5.5 mL of IV Solution Stabilizer to the IV bag with 0.9% Sodium Chloride. Gently mix the contents of the bag to avoid foaming. Discard remaining IV Solution Stabilizer vial.
- Using a 5 mL syringe, reconstitute one vial of Blinatumomab using 3 mL of preservative-free Sterile Water for Injection, USP. Direct preservative-free Sterile Water for Injection, USP, toward the side of the vial during reconstitution. Gently swirl contents to avoid excess foaming. Do not shake.
- Do not reconstitute Blinatumomab with IV Solution Stabilizer.
- The addition of preservative-free Sterile Water for Injection, USP, to the lyophilized powder results in a final Blinatumomab concentration of 12.5 mcg/mL.
- Visually inspect the reconstituted solution for particulate matter and discoloration during reconstitution and prior to infusion. The resulting solution should be clear to slightly opalescent, colorless to slightly yellow. Do not use if solution is cloudy or has precipitated.
- Using a 3 mL syringe, aseptically transfer 1.7 mL of reconstituted Blinatumomab into the IV bag. Gently mix the contents of the bag to avoid foaming.
- Under aseptic conditions, attach the IV tubing to the IV bag with the sterile 0.2 micron in-line filter.
- Remove air from the IV bag and prime the IV line only with the prepared solution for infusion. Do not prime with 0.9% Sodium Chloride.
- Store at 2°C to 8°C if not used immediately.
Preparation of the 28 mcg/day dose infused over 24 hours at a rate of 10 mL/h:
- Use a prefilled 250 mL 0.9% Sodium Chloride IV bag. 250 mL-prefilled bags typically contain overfill to a total volume of 265 to 275 mL. If necessary, adjust the IV bag volume by adding or removing 0.9% Sodium Chloride to achieve a starting volume between 265 and 275 mL.
- Using a 10 mL syringe, aseptically transfer 5.6 mL of IV Solution Stabilizer to the IV bag with 0.9% Sodium Chloride. Gently mix the contents of the bag to avoid foaming. Discard remaining IV Solution Stabilizer vial.
- Using a 5 mL syringe, reconstitute one vial of Blinatumomab using 3 mL of preservative-free Sterile Water for Injection, USP. Direct preservative-free Sterile Water for Injection, USP, toward the side of the vial during reconstitution. Gently swirl contents to avoid excess foaming. Do not shake.
- Do not reconstitute Blinatumomab with IV Solution Stabilizer.
- The addition of preservative-free Sterile Water for Injection, USP, to the lyophilized powder results in a final Blinatumomab concentration of 12.5 mcg/mL.
- Visually inspect the reconstituted solution for particulate matter and discoloration during reconstitution and prior to infusion. The resulting solution should be clear to slightly opalescent, colorless to slightly yellow. Do not use if solution is cloudy or has precipitated.
- Using a 3 mL syringe, aseptically transfer 2.6 mL of reconstituted Blinatumomab into the IV bag. Gently mix the contents of the bag to avoid foaming.
- Under aseptic conditions, attach the IV tubing to the IV bag with the sterile 0.2 micron in-line filter.
- Remove air from the IV bag and prime the IV line only with the prepared solution for infusion. Do not prime with 0.9% Sodium Chloride.
- Store at 2°C to 8°C if not used immediately.
Preparation of the 28 mcg/day dose infused over 48 hours at a rate of 5 mL/h:
- Use a prefilled 250 mL 0.9% Sodium Chloride IV bag. 250 mL-prefilled bags typically contain overfill to a total volume of 265 to 275 mL. If necessary, adjust the IV bag volume by adding or removing 0.9% Sodium Chloride to achieve a starting volume between 265 and 275 mL.
- Using a 10 mL syringe, aseptically transfer 5.6 mL of IV Solution Stabilizer to the IV bag with 0.9% Sodium Chloride. Gently mix the contents of the bag to avoid foaming. Discard remaining IV Solution Stabilizer vials.
- Use two vials of Blinatumomab . Using a 5 mL syringe, reconstitute each vial of Blinatumomab using 3 mL of preservative-free Sterile Water for Injection, USP. Direct preservative-free Sterile Water for Injection, USP, toward the side of the vial during reconstitution. Gently swirl contents to avoid excess foaming. Do not shake.
- Do not reconstitute Blinatumomab with IV Solution Stabilizer.
- The addition of preservative-free Sterile Water for Injection, USP, to the lyophilized powder results in a final Blinatumomab concentration of 12.5 mcg/mL.
- Visually inspect the reconstituted solution for particulate matter and discoloration during reconstitution and prior to infusion. The resulting solution should be clear to slightly opalescent, colorless to slightly yellow. Do not use if solution is cloudy or has precipitated.
- Using a 3 mL syringe, aseptically transfer 5.2 mL of reconstituted Blinatumomab into the IV bag (2.7 mL from one vial and the remaining 2.5 mL from the second vial). Gently mix the contents of the bag to avoid foaming.
- Under aseptic conditions, attach the IV tubing to the IV bag with the sterile 0.2 micron in-line filter.
- Remove air from the IV bag and prime the IV line only with the prepared solution for infusion. Do not prime with 0.9% Sodium Chloride.
- Store at 2°C to 8°C if not used immediately.
- The information in Table 1 indicates the storage time for the reconstituted Blinatumomab vial and prepared IV bag containing Blinatumomab solution for infusion. Lyophilized Blinatumomab vial and IV Solution Stabilizer may be stored for a maximum of 8 hours at room temperature.
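As an illustrative cross-check only (not part of the prescribing information), the admixing volumes above can be reconciled with the nominal doses using the stated 12.5 mcg/mL reconstituted concentration, the transfer and stabilizer volumes, and the 240 mL delivered per bag. The sketch below assumes a bag starting volume of 270 mL, the midpoint of the 265 to 275 mL overfill range described above; the function and variable names are ad hoc and do not come from the label.

```python
# Illustrative cross-check of the admixing arithmetic above (not part of the label).
# Assumption: bag starting volume of 270 mL, the midpoint of the 265-275 mL range.

RECONSTITUTED_CONC_MCG_PER_ML = 12.5  # after adding 3 mL Sterile Water for Injection
INFUSED_VOLUME_ML = 240.0             # volume actually delivered to the patient per bag

def approx_daily_dose_mcg(transfer_ml, stabilizer_ml, infusion_hours, start_ml=270.0):
    """Approximate delivered dose in mcg/day for one prepared bag."""
    drug_mcg = transfer_ml * RECONSTITUTED_CONC_MCG_PER_ML
    bag_volume_ml = start_ml + stabilizer_ml + transfer_ml
    delivered_mcg = drug_mcg / bag_volume_ml * INFUSED_VOLUME_ML
    return delivered_mcg * 24.0 / infusion_hours

regimens = [
    ("9 mcg/day over 24 h at 10 mL/h",  0.83, 5.5, 24),
    ("9 mcg/day over 48 h at 5 mL/h",   1.7,  5.5, 48),
    ("28 mcg/day over 24 h at 10 mL/h", 2.6,  5.6, 24),
    ("28 mcg/day over 48 h at 5 mL/h",  5.2,  5.6, 48),
]

for name, transfer_ml, stabilizer_ml, hours in regimens:
    print(f"{name}: ~{approx_daily_dose_mcg(transfer_ml, stabilizer_ml, hours):.1f} mcg/day")
```

The computed values come out close to, but not exactly on, 9 and 28 mcg/day (roughly 9.0, 9.2, 28.0, and 27.8 mcg/day) because the true bag volume may fall anywhere within the stated overfill range; the fixed transfer volumes given in the instructions above are what should actually be used.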
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Blinatumomab in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Blinatumomab in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
Limited experience in pediatric patients
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Blinatumomab in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Blinatumomab in pediatric patients.
# Contraindications
- Blinatumomab is contraindicated in patients with known hypersensitivity to blinatumomab or to any component of the product formulation.
# Warnings
- Cytokine Release Syndrome (CRS), which may be life-threatening or fatal, occurred in patients receiving Blinatumomab .
- Infusion reactions have occurred with the Blinatumomab infusion and may be clinically indistinguishable from manifestations of CRS.
- Serious adverse events that may be associated with CRS included pyrexia, headache, nausea, asthenia, hypotension, increased alanine aminotransferase, increased aspartate aminotransferase, and increased total bilirubin; these events infrequently led to Blinatumomab discontinuation. Life-threatening or fatal CRS was infrequently reported in patients receiving Blinatumomab . In some cases, disseminated intravascular coagulation (DIC), capillary leak syndrome (CLS), and hemophagocytic lymphohistiocytosis/macrophage activation syndrome (HLH/MAS) have been reported in the setting of CRS.
- Patients should be closely monitored for signs or symptoms of these events. Management of these events may require either temporary interruption or discontinuation of Blinatumomab .
- In patients receiving Blinatumomab in clinical trials, neurological toxicities have occurred in approximately 50% of patients. The median time to onset of any neurological toxicity was 7 days. Grade 3 or higher (severe, life-threatening, or fatal) neurological toxicities following initiation of Blinatumomab administration occurred in approximately 15% of patients and included encephalopathy, convulsions, speech disorders, disturbances in consciousness, confusion and disorientation, and coordination and balance disorders. The majority of events resolved following interruption of Blinatumomab , but some resulted in treatment discontinuation.
- Monitor patients receiving Blinatumomab for signs and symptoms of neurological toxicities, and interrupt or discontinue Blinatumomab as recommended.
- In patients receiving Blinatumomab in clinical trials, serious infections such as sepsis, pneumonia, bacteremia, opportunistic infections, and catheter-site infections were observed in approximately 25% of patients, some of which were life-threatening or fatal. As appropriate, administer prophylactic antibiotics and employ surveillance testing during treatment with Blinatumomab . Monitor patients for signs and symptoms of infection and treat appropriately.
- Tumor lysis syndrome (TLS), which may be life-threatening or fatal, has been observed in patients receiving Blinatumomab . Appropriate prophylactic measures, including pretreatment nontoxic cytoreduction and on-treatment hydration, should be used for the prevention of TLS during Blinatumomab treatment. Monitor for signs or symptoms of TLS. Management of these events may require either temporary interruption or discontinuation of Blinatumomab .
- Neutropenia and febrile neutropenia, including life-threatening cases, have been observed in patients receiving Blinatumomab . Monitor laboratory parameters (including, but not limited to, white blood cell count and absolute neutrophil count) during Blinatumomab infusion. Interrupt Blinatumomab if prolonged neutropenia occurs.
- Due to the potential for neurologic events, including seizures, patients receiving Blinatumomab are at risk for loss of consciousness. Advise patients to refrain from driving and engaging in hazardous occupations or activities such as operating heavy or potentially dangerous machinery while Blinatumomab is being administered.
- Treatment with Blinatumomab was associated with transient elevations in liver enzymes. Although the majority of these events were observed in the setting of CRS, some were observed outside of this setting. For these events, the median time to onset was 15 days. In patients receiving Blinatumomab in clinical trials, Grade 3 or greater elevations in liver enzymes occurred in approximately 6% of patients outside the setting of CRS and resulted in treatment discontinuation in less than 1% of patients.
- Monitor alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GGT), and total blood bilirubin prior to the start of and during Blinatumomab treatment. Interrupt Blinatumomab if the transaminases rise to greater than 5 times the upper limit of normal or if bilirubin rises to more than 3 times the upper limit of normal.
- Cranial magnetic resonance imaging (MRI) changes showing leukoencephalopathy have been observed in patients receiving Blinatumomab , especially in patients with prior treatment with cranial irradiation and antileukemic chemotherapy (including systemic high-dose methotrexate or intrathecal cytarabine). The clinical significance of these imaging changes is unknown.
- Preparation and administration errors have occurred with Blinatumomab treatment. Follow instructions for preparation (including admixing) and administration strictly to minimize medication errors (including underdose and overdose)
# Adverse Reactions
## Clinical Trials Experience
- The following adverse reactions are discussed in greater detail in other sections of the label:
- Cytokine release syndrome
- Neurological Toxicities
- Infections
- Tumor Lysis Syndrome
- Neutropenia and Febrile Neutropenia
- Effects on Ability to Drive and Use Machines
- Elevated Liver Enzymes
- Leukoencephalopathy
- Preparation and Administration Errors
- Because clinical trials are conducted under widely varying conditions, adverse reaction rates observed in the clinical trials of a drug cannot be directly compared to rates in the clinical trials of another drug and may not reflect the rates observed in practice.
- The safety data described in this section reflect exposure to Blinatumomab in clinical trials in which 212 patients with relapsed or refractory ALL received up to 28 mcg/day. All patients received at least one dose of Blinatumomab . The median age of the study population was 37 years (range: 18 to 79 years), 63% were male, 79% were White, 3% were Asian, and 3% were Black or African American.
- The most common adverse reactions (≥ 20%) were pyrexia (62%), headache (36%), peripheral edema (25%), febrile neutropenia (25%), nausea (25%), hypokalemia (23%), and constipation (20%).
- Serious adverse reactions were reported in 65% of patients. The most common serious adverse reactions (≥ 2%) included febrile neutropenia, pyrexia, pneumonia, sepsis, neutropenia, device-related infection, tremor, encephalopathy, infection, overdose, confusion, Staphylococcal bacteremia, and headache.
- Adverse reactions of Grade 3 or higher were reported in 80% of patients. Discontinuation of therapy due to adverse reactions occurred in 18% of patients treated with Blinatumomab . The adverse reactions reported most frequently as the reason for discontinuation of treatment included encephalopathy and sepsis. Fatal adverse events occurred in 15% of patients. The majority of these events were infections. No fatal adverse events occurred on treatment among patients in remission.
- The adverse reactions with ≥ 10% incidence for any grade or ≥ 5% incidence for Grade 3 or higher are summarized in Table 2.
- Additional important adverse reactions that did not meet the threshold criteria for inclusion in Table 2 were:
- Blood and lymphatic system disorders: leukocytosis (2%), lymphopenia (1%)
- Cardiac disorders: tachycardia (8%)
- General disorders and administration site conditions: edema (5%)
- Immune system disorders: cytokine storm (1%)
- Investigations: decreased immunoglobulins (9%), increased blood bilirubin (8%), increased gamma-glutamyl-transferase (6%), increased liver enzymes (1%)
- Metabolism and nutrition disorders: tumor lysis syndrome (4%), hypoalbuminemia (4%)
- Nervous system disorders: encephalopathy (5%), paresthesia (5%), aphasia (4%), convulsion (2%), memory impairment (2%), cognitive disorder (1%), speech disorder (< 1%)
- Psychiatric disorders: confusion (7%), disorientation (3%)
- Vascular disorders: capillary leak syndrome (< 1%).
- Hypersensitivity reactions related to Blinatumomab treatment were hypersensitivity (1%) and bronchospasm (< 1%).
- As with all therapeutic proteins, there is potential for immunogenicity. The immunogenicity of Blinatumomab has been evaluated using either an electrochemiluminescence detection technology (ECL) or an enzyme-linked immunosorbent assay (ELISA) screening immunoassay for the detection of binding anti-blinatumomab antibodies. For patients whose sera tested positive in the screening immunoassay, an in vitro biological assay was performed to detect neutralizing antibodies.
- In clinical studies, less than 1% of patients treated with Blinatumomab tested positive for binding anti-blinatumomab antibodies. All patients who tested positive for binding antibodies also tested positive for neutralizing anti-blinatumomab antibodies.
- Anti-blinatumomab antibody formation may affect pharmacokinetics of Blinatumomab . No association was seen between antibody development and development of adverse events.
- The detection of anti-blinatumomab antibody formation is highly dependent on the sensitivity and specificity of the assay. Additionally, the observed incidence of antibody (including neutralizing antibody) positivity in an assay may be influenced by several factors, including assay methodology, sample handling, timing of sample collection, concomitant medications, and underlying disease. For these reasons, comparison of the incidence of antibodies to blinatumomab with the incidence of antibodies to other products may be misleading.
## Postmarketing Experience
There is limited information regarding Blinatumomab Postmarketing Experience in the drug label.
# Drug Interactions
- No formal drug interaction studies have been conducted with Blinatumomab. Initiation of Blinatumomab treatment causes transient release of cytokines that may suppress CYP450 enzymes. The highest drug-drug interaction risk is during the first 9 days of the first cycle and the first 2 days of the second cycle in patients who are receiving concomitant CYP450 substrates, particularly those with a narrow therapeutic index. In these patients, monitor for toxicity (eg, warfarin) or drug concentrations (eg, cyclosporine). Adjust the dose of the concomitant drug as needed.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): C
- There are no adequate and well-controlled studies of Blinatumomab in pregnant women. Based on its mechanism of action, Blinatumomab may cause fetal toxicity including B-cell lymphocytopenia when administered to a pregnant woman. Blinatumomab should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus.
- Animal reproduction studies have not been conducted with blinatumomab. In embryo-fetal developmental toxicity studies, a murine surrogate molecule was administered intravenously to pregnant mice during the period of organogenesis. The surrogate molecule crossed the placental barrier and did not cause embryo-fetal toxicity or teratogenicity. The expected depletions of B and T cells were observed in the pregnant mice, but hematological effects were not assessed in fetuses.
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Blinatumomab in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Blinatumomab during labor and delivery.
### Nursing Mothers
- It is not known whether blinatumomab is excreted in human milk. Because many drugs are excreted in human milk and because of the potential for serious adverse reactions in nursing infants from blinatumomab, a decision should be made whether to discontinue nursing or to discontinue the drug, taking into account the importance of the drug to the mother.
### Pediatric Use
- There is limited experience in pediatric patients. Blinatumomab was evaluated in a dose-escalation study of 41 pediatric patients with relapsed or refractory B-precursor ALL. The median age was 6 years (range: 2 to 17 years). Blinatumomab was administered at doses of 5 to 30 mcg/m2/day. The recommended phase 2 regimen was 5 mcg/m2/day on Days 1-7 and 15 mcg/m2/day on Days 8-28 for cycle 1, and 15 mcg/m2/day on Days 1-28 for subsequent cycles. At a higher dose, a fatal cardiac failure event occurred in the setting of life-threatening cytokine release syndrome (CRS) .
- The steady-state concentrations of blinatumomab were comparable in adult and pediatric patients at the equivalent dose levels based on body surface area (BSA)-based regimens.
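As a purely illustrative aid (the body surface area value below is hypothetical and not taken from the label), the BSA-based pediatric regimen described above translates into absolute daily doses as in this sketch:

```python
# Illustrative arithmetic for the BSA-based pediatric regimen described above.
# The body surface area value is a hypothetical example, not taken from the label.

def pediatric_daily_dose_mcg(bsa_m2: float, cycle: int, day: int) -> float:
    """5 mcg/m2/day on Days 1-7 of cycle 1; 15 mcg/m2/day on Days 8-28 of cycle 1
    and on Days 1-28 of subsequent cycles (the reported recommended phase 2 regimen)."""
    dose_per_m2 = 5 if (cycle == 1 and day <= 7) else 15
    return dose_per_m2 * bsa_m2

example_bsa_m2 = 0.8  # hypothetical example value
print(pediatric_daily_dose_mcg(example_bsa_m2, cycle=1, day=3))   # 4.0 mcg/day
print(pediatric_daily_dose_mcg(example_bsa_m2, cycle=1, day=10))  # 12.0 mcg/day
print(pediatric_daily_dose_mcg(example_bsa_m2, cycle=2, day=1))   # 12.0 mcg/day
```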
### Geriatric Use
- Of the total number of patients with relapsed or refractory ALL, approximately 13% were 65 years of age and over. Generally, safety and efficacy were similar between elderly patients (≥ 65 years of age) and patients less than 65 years of age treated with Blinatumomab. Elderly patients experienced a higher rate of neurological toxicities, including cognitive disorder, encephalopathy, and confusion, as well as a higher rate of serious infections.
### Gender
There is no FDA guidance on the use of Blinatumomab with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Blinatumomab with respect to specific racial populations.
### Renal Impairment
- No formal pharmacokinetic studies using Blinatumomab have been conducted in patients with renal impairment. No dose adjustment is needed for patients with baseline creatinine clearance (CrCL) equal to or greater than 30 mL/min. There is no information available in patients with CrCL less than 30 mL/min or patients on hemodialysis
### Hepatic Impairment
- No formal pharmacokinetic studies using Blinatumomab have been conducted in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Blinatumomab in women of reproductive potential and in males.
### Immunocompromised Patients
There is no FDA guidance on the use of Blinatumomab in patients who are immunocompromised.
# Administration and Monitoring
### Administration
- Intravenous
### Monitoring
- Monitor patients receiving Blinatumomab for signs and symptoms of neurological toxicities, and interrupt or discontinue Blinatumomab as recommended
- Monitor for signs or symptoms of TLS.
- Monitor laboratory parameters (including, but not limited to, white blood cell count and absolute neutrophil count) during Blinatumomab infusion. Interrupt Blinatumomab if prolonged neutropenia occurs.
- Monitor alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GGT), and total blood bilirubin prior to the start of and during Blinatumomab treatment.
# IV Compatibility
There is limited information regarding the IV compatibility of Blinatumomab with other intravenous medications.
# Overdosage
- Overdoses have been observed, including one patient who received 133-fold the recommended therapeutic dose of Blinatumomab delivered over a short duration. Overdoses resulted in adverse reactions which were consistent with the reactions observed at the recommended therapeutic dose and included fever, tremors, and headache. In the event of overdose, interrupt the infusion, monitor the patient for signs of toxicity, and provide supportive care. Consider reinitiation of Blinatumomab at the correct therapeutic dose when all toxicities have resolved and no earlier than 12 hours after interruption of the infusion.
# Pharmacology
## Mechanism of Action
Blinatumomab is a bispecific CD19-directed CD3 T-cell engager that binds to CD19 expressed on the surface of cells of B-lineage origin and CD3 expressed on the surface of T cells. It activates endogenous T cells by connecting CD3 in the T-cell receptor (TCR) complex with CD19 on benign and malignant B cells. Blinatumomab mediates the formation of a synapse between the T cell and the tumor cell, upregulation of cell adhesion molecules, production of cytolytic proteins, release of inflammatory cytokines, and proliferation of T cells, which result in redirected lysis of CD19+ cells.
## Structure
- Blinatumomab is a bispecific CD19-directed CD3 T-cell engager that binds to CD19 (expressed on cells of B-lineage origin) and CD3 (expressed on T cells). Blinatumomab is produced in Chinese hamster ovary cells. It consists of 504 amino acids and has a molecular weight of approximately 54 kilodaltons.
- Each Blinatumomab package contains 1 vial Blinatumomab and 1 vial IV Solution Stabilizer.
- Blinatumomab is supplied in a single-use vial as a sterile, preservative-free, white to off-white lyophilized powder for intravenous administration. Each single-use vial of Blinatumomab contains 35 mcg blinatumomab, citric acid monohydrate (3.35 mg), lysine hydrochloride (23.23 mg), polysorbate 80 (0.64 mg), trehalose dihydrate (95.5 mg), and sodium hydroxide to adjust pH to 7.0. After reconstitution with 3 mL of preservative-free Sterile Water for Injection, USP, the resulting concentration is 12.5 mcg/mL blinatumomab.
- IV Solution Stabilizer is supplied in a single-use vial as a sterile, preservative-free, colorless to slightly yellow, clear solution. Each single-use vial of IV Solution Stabilizer contains citric acid monohydrate (52.5 mg), lysine hydrochloride (2283.8 mg), polysorbate 80 (10 mg), sodium hydroxide to adjust pH to 7.0, and water for injection.
## Pharmacodynamics
- During the continuous intravenous infusion over 4 weeks, the pharmacodynamic response was characterized by T-cell activation and initial redistribution, reduction in peripheral B cells, and transient cytokine elevation.
- Peripheral T cell redistribution (ie, T cell adhesion to blood vessel endothelium and/or transmigration into tissue) occurred after start of Blinatumomab infusion or dose escalation. T cell counts initially declined within 1 to 2 days and then returned to baseline levels within 7 to 14 days in the majority of patients. An increase of T cell counts above baseline (T cell expansion) was observed in a few patients.
- Peripheral B cell counts decreased to less than or equal to 10 cells/microliter during the first treatment cycle at doses ≥ 5 mcg/m2/day or ≥ 9 mcg/day in the majority of patients. No recovery of peripheral B-cell counts was observed during the 2-week Blinatumomab -free period between treatment cycles. Incomplete depletion of B cells occurred at doses of 0.5 mcg/m2/day and 1.5 mcg/m2/day and in a few patients at higher doses.
- Cytokines including IL-2, IL-4, IL-6, IL-8, IL-10, IL-12, TNF-α, and IFN-γ were measured, and IL-6, IL-10, and IFN-γ were elevated. The highest elevation of cytokines was observed in the first 2 days following start of Blinatumomab infusion. The elevated cytokine levels returned to baseline within 24 to 48 hours during the infusion. In subsequent treatment cycles, cytokine elevation occurred in fewer patients with lesser intensity compared to the initial 48 hours of the first treatment cycle.
## Pharmacokinetics
- The pharmacokinetics of blinatumomab appear linear over a dose range from 5 to 90 mcg/m2/day (approximately equivalent to 9 to 162 mcg/day) in adult patients. Following continuous intravenous infusion, the steady-state serum concentration (Css) was achieved within a day and remained stable over time. The increase in mean Css values was approximately proportional to the dose in the range tested. At the clinical doses of 9 mcg/day and 28 mcg/day for the treatment of relapsed/refractory ALL, the mean (SD) Css was 211 (258) pg/mL and 621 (502) pg/mL, respectively.
- The estimated mean (SD) volume of distribution based on terminal phase (Vz) was 4.52 (2.89) L with continuous intravenous infusion of blinatumomab.
- The metabolic pathway of blinatumomab has not been characterized. Like other protein therapeutics, Blinatumomab is expected to be degraded into small peptides and amino acids via catabolic pathways.
- The estimated mean (SD) systemic clearance with continuous intravenous infusion in patients receiving blinatumomab in clinical studies was 2.92 (2.83) L/hour. The mean (SD) half-life was 2.11 (1.42) hours. Negligible amounts of blinatumomab were excreted in the urine at the tested clinical doses.
- Results of population pharmacokinetic analyses indicate that age (18 to 80 years of age), gender, body weight (44 to 134 kg), and body surface area (1.39 to 2.57 m2) do not influence the pharmacokinetics of blinatumomab.
- No formal pharmacokinetic studies of blinatumomab have been conducted in patients with renal impairment.
- Pharmacokinetic analyses showed an approximately 2-fold difference in mean blinatumomab clearance values between patients with moderate renal impairment (CrCL ranging from 30 to 59 mL/min, N = 21) and normal renal function (CrCL more than 90 mL/min, N = 215). However, high interpatient variability was discerned (CV% up to 95.6%), and clearance values in renal impaired patients were essentially within the range observed in patients with normal renal function. There is no information available in patients with severe renal impairment (CrCL less than 30 mL/min) or patients on hemodialysis.
- Transient elevation of cytokines may suppress CYP450 enzyme activities
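The exposure figures above can be loosely cross-checked with the standard relationship for a constant-rate intravenous infusion, Css = infusion rate / clearance. The sketch below uses only the mean values reported above; because the reported standard deviations are of the same order as the means, the population means are not expected to reconcile exactly, so this is an order-of-magnitude check rather than a derivation from the label.

```python
# Order-of-magnitude check of the reported pharmacokinetic means (values taken from
# the text above); population means need not reconcile exactly given the variability.

MEAN_CLEARANCE_L_PER_H = 2.92
REPORTED_MEAN_CSS_PG_PER_ML = {9: 211.0, 28: 621.0}  # at 9 and 28 mcg/day

def css_pg_per_ml(dose_mcg_per_day: float, clearance_l_per_h: float) -> float:
    """Css = infusion rate / clearance for a constant-rate infusion, in pg/mL."""
    rate_mcg_per_h = dose_mcg_per_day / 24.0
    return rate_mcg_per_h / clearance_l_per_h * 1000.0  # mcg/L -> pg/mL

for dose, observed in REPORTED_MEAN_CSS_PG_PER_ML.items():
    predicted = css_pg_per_ml(dose, MEAN_CLEARANCE_L_PER_H)
    print(f"{dose} mcg/day: predicted ~{predicted:.0f} pg/mL vs reported mean {observed:.0f} pg/mL")

# The ratio of the reported mean Css values (~2.9) is close to the dose ratio
# (28/9 ~ 3.1), consistent with the approximately linear kinetics stated above.
print(REPORTED_MEAN_CSS_PG_PER_ML[28] / REPORTED_MEAN_CSS_PG_PER_ML[9])
```

The clearance-based predictions fall within roughly a factor of two of the reported means, which is within the reported interpatient variability.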
## Nonclinical Toxicology
- No carcinogenicity or genotoxicity studies have been conducted with blinatumomab.
- No studies have been conducted to evaluate the effects of blinatumomab on fertility. A murine surrogate molecule had no adverse effects on male and female reproductive organs in a 13-week repeat-dose toxicity study in mice.
# Clinical Studies
- The safety and efficacy of Blinatumomab were evaluated in an open-label, multicenter, single-arm study. Eligible patients were ≥ 18 years of age with Philadelphia chromosome-negative relapsed or refractory B-precursor ALL (relapsed with a first remission duration of ≤ 12 months in first salvage, relapsed or refractory after first salvage therapy, or relapsed within 12 months of allogeneic hematopoietic stem cell transplantation) and had ≥ 10% blasts in bone marrow.
- Blinatumomab was administered as a continuous intravenous infusion. In the first cycle, the initial dose was 9 mcg/day for week 1, then 28 mcg/day for the remaining 3 weeks. The target dose of 28 mcg/day was administered in cycle 2 and subsequent cycles starting on day 1 of each cycle. Dose adjustment was possible in case of adverse events. The treated population included 185 patients who received at least 1 infusion of Blinatumomab ; the median number of treatment cycles was 2 (range: 1 to 5). Patients who responded to Blinatumomab but later relapsed had the option to be retreated with Blinatumomab . Among treated patients, the median age was 39 years (range: 18 to 79 years), 63 out of 185 (34.1%) had undergone HSCT prior to receiving Blinatumomab , and 32 out of 185 (17.3%) had received more than 2 prior salvage therapies.
- The primary endpoint was the complete remission/complete remission with partial hematological recovery (CR/CRh*) rate within 2 cycles of treatment with Blinatumomab. Seventy-seven out of 185 (41.6%) evaluable patients achieved CR/CRh* within the first 2 treatment cycles, with the majority of responses (81%, 62 out of 77) occurring within cycle 1 of treatment. See Table 3 for efficacy results from this study. The HSCT rate among those who achieved CR/CRh* was 39% (30 out of 77).
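The response proportions quoted above follow directly from the reported counts; the short check below simply restates that arithmetic and adds nothing beyond the figures in the text.

```python
# Arithmetic restatement of the response figures quoted above.
treated = 185
cr_crh_responders = 77       # CR/CRh* within the first 2 treatment cycles
responses_in_cycle_1 = 62
hsct_after_response = 30

print(f"CR/CRh* rate: {cr_crh_responders / treated:.1%}")                                 # 41.6%
print(f"Responses occurring in cycle 1: {responses_in_cycle_1 / cr_crh_responders:.0%}")  # ~81%
print(f"HSCT rate among responders: {hsct_after_response / cr_crh_responders:.0%}")       # 39%
```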
# How Supplied
- Each Blinatumomab package (NDC 55513-160-01) contains:
- One Blinatumomab 35 mcg single-use vial containing a sterile, preservative-free, white to off-white lyophilized powder and
- One IV Solution Stabilizer 10 mL single-use glass vial containing a sterile, preservative-free, colorless to slightly yellow, clear solution. Do not use the IV Solution Stabilizer to reconstitute Blinatumomab .
## Storage
- Store Blinatumomab and IV Solution Stabilizer vials in the original package refrigerated at 2°C to 8°C (36°F to 46°F) and protect from light until time of use. Do not freeze.
- Store and transport the prepared IV bag containing Blinatumomab solution for infusion at 2°C to 8°C (36°F to 46°F) conditions. Ship in packaging that has been validated to maintain temperature of the contents at 2°C to 8°C (36°F to 46°F). Do not freeze.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
- Advise patients to contact a healthcare professional for any of the following:
- Signs and symptoms that may be associated with cytokine release syndrome and infusion reactions including pyrexia, fatigue, nausea, vomiting, chills, hypotension, rash, and wheezing
- Signs and symptoms of neurological toxicities including convulsions, speech disorders, and confusion
- Signs and symptoms of infections including pneumonia
- Advise patients to refrain from driving and engaging in hazardous occupations or activities such as operating heavy or potentially dangerous machinery while Blinatumomab is being administered. Patients should be advised that they may experience neurological events .
- Inform patients that:
- It is very important to keep the area around the intravenous catheter clean to reduce the risk of infection.
- They should not adjust the setting on the infusion pump. Any changes to pump function may result in dosing errors. If there is a problem with the infusion pump or the pump alarms, patients should contact their doctor or nurse immediately.
# Precautions with Alcohol
Alcohol-Blinatumomab interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
- Blincyto
# Look-Alike Drug Names
There is limited information regarding Blinatumomab Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price
- Before preparation, ensure you have the following supplies ready:
- 1 package of Blinatumomab for preparation of 9 mcg/day dose infused over 24 hours at a rate of 10 mL/h, 9 mcg/day dose infused over 48 hours at a rate of 5 mL/h, and 28 mcg/day dose infused over 24 hours at a rate of 10 mL/h
- 2 packages of Blinatumomab for preparation of 28 mcg/day dose infused over 48 hours at a rate of 5 mL/h
- The following supplies are also required, but not included in the package:
- Sterile, single-use disposable syringes
- 21- to 23- gauge needle(s) (recommended)
- Preservative-free Sterile Water for Injection, USP
- 250 mL 0.9% Sodium Chloride IV bag
To minimize the number of aseptic transfers, it is recommended to use a 250 mL-prefilled IV bag. 250 mL-prefilled IV bags typically contain overfill with a total volume of 265 to 275 mL. Blinatumomab dose calculations provided in section 2.4.4 are based on a starting volume of 265 mL to 275 mL 0.9% Sodium Chloride.
Use only polyolefin, PVC non-di-ethylhexylphthalate (non-DEHP), or ethyl vinyl acetate (EVA) infusion bags/pump cassettes.
- To minimize the number of aseptic transfers, it is recommended to use a 250 mL-prefilled IV bag. 250 mL-prefilled IV bags typically contain overfill with a total volume of 265 to 275 mL. Blinatumomab dose calculations provided in section 2.4.4 are based on a starting volume of 265 mL to 275 mL 0.9% Sodium Chloride.
- Use only polyolefin, PVC non-di-ethylhexylphthalate (non-DEHP), or ethyl vinyl acetate (EVA) infusion bags/pump cassettes.
- Polyolefin, PVC non-DEHP, or EVA IV tubing with a sterile, non-pyrogenic, low protein-binding 0.2 micron in-line filter
- Ensure that the IV tubing is compatible with the infusion pump.
- Aseptic technique must be strictly observed when preparing the solution for infusion since Blinatumomab vials do not contain antimicrobial preservatives. To prevent accidental contamination, prepare Blinatumomab according to aseptic standards, including but not limited to:
- Preparation must be done in a USP <797> compliant facility.
- Preparation must be done in an ISO Class 5 laminar flow hood or better.
- The admixing area should have appropriate environmental specifications, confirmed by periodic monitoring.
- Personnel should be appropriately trained in aseptic manipulations and admixing of oncology drugs.
- Personnel should wear appropriate protective clothing and gloves.
- Gloves and surfaces should be disinfected.
- IV Solution Stabilizer is provided with the Blinatumomab package and is used to coat the prefilled IV bag prior to addition of reconstituted Blinatumomab to prevent adhesion of Blinatumomab to IV bags and IV lines. Therefore, add IV Solution Stabilizer to the IV bag containing 0.9% Sodium Chloride. Do not use IV Solution Stabilizer for reconstitution of Blinatumomab .
- The entire volume of the admixed Blinatumomab will be more than the volume administered to the patient (240 mL) to account for the priming of the IV line and to ensure that the patient will receive the full dose of Blinatumomab .
- When preparing an IV bag, remove air from IV bag. This is particularly important for use with an ambulatory infusion pump.
- Use the specific volumes described in the admixing instructions to minimize errors in calculation.
- Specific admixing instructions are provided for each dose and infusion time. Verify the prescribed dose and infusion time of Blinatumomab and identify the appropriate dosing preparation section listed below. Follow the steps for reconstituting Blinatumomab and preparing the IV bag.
- 9 mcg/day infused over 24 hours at a rate of 10 mL/h.
- 9 mcg/day infused over 48 hours at a rate of 5 mL/h.
- 28 mcg/day infused over 24 hours at a rate of 10 mL/h.
- 28 mcg/day infused over 48 hours at a rate of 5 mL/h.
- Use a prefilled 250 mL 0.9% Sodium Chloride IV bag. 250 mL-prefilled bags typically contain overfill to a total volume of 265 to 275 mL. If necessary adjust the IV bag volume by adding or removing 0.9% Sodium Chloride to achieve a starting volume between 265 and 275 mL.
- Using a 10 mL syringe, aseptically transfer 5.5 mL of IV Solution Stabilizer to the IV bag with 0.9% Sodium Chloride. Gently mix the contents of the bag to avoid foaming. Discard remaining IV Solution Stabilizer vial.
- Using a 5 mL syringe, reconstitute one vial of Blinatumomab using 3 mL of preservative-free Sterile Water for Injection, USP. Direct preservative-free Sterile Water for Injection, USP, toward the side of the vial during reconstitution. Gently swirl contents to avoid excess foaming. Do not shake.
- Do not reconstitute Blinatumomab with IV Solution Stabilizer.
- The addition of preservative-free Sterile Water for Injection, USP, to the lyophilized powder results in a final Blinatumomab concentration of 12.5 mcg/mL.
- Visually inspect the reconstituted solution for particulate matter and discoloration during reconstitution and prior to infusion. The resulting solution should be clear to slightly opalescent, colorless to slightly yellow. Do not use if solution is cloudy or has precipitated.
- Using a 1 mL syringe, aseptically transfer 0.83 mL of reconstituted Blinatumomab into the IV bag. Gently mix the contents of the bag to avoid foaming.
- Under aseptic conditions, attach the IV tubing to the IV bag with the sterile 0.2 micron in-line filter.
- Remove air from the IV bag and prime the IV line only with the prepared solution for infusion. Do not prime with 0.9% Sodium Chloride.
- Store at 2°C to 8°C if not used immediately.
- Use a prefilled 250 mL 0.9% Sodium Chloride IV bag. 250 mL-prefilled bags typically contain overfill to a total volume of 265 to 275 mL. If necessary adjust the IV bag volume by adding or removing 0.9% Sodium Chloride to achieve a starting volume between 265 and 275 mL.
- Using a 10 mL syringe, aseptically transfer 5.5 mL of IV Solution Stabilizer to the IV bag with 0.9% Sodium Chloride. Gently mix the contents of the bag to avoid foaming. Discard remaining IV Solution Stabilizer vial.
- Using a 5 mL syringe, reconstitute one vial of Blinatumomab using 3 mL of preservative-free Sterile Water for Injection, USP. Direct preservative-free Sterile Water for Injection, USP, toward the side of the vial during reconstitution. Gently swirl contents to avoid excess foaming. Do not shake.
- Do not reconstitute Blinatumomab with IV Solution Stabilizer.
- The addition of preservative-free Sterile Water for Injection, USP, to the lyophilized powder results in a final Blinatumomab concentration of 12.5 mcg/mL.
- Visually inspect the reconstituted solution for particulate matter and discoloration during reconstitution and prior to infusion. The resulting solution should be clear to slightly opalescent, colorless to slightly yellow. Do not use if solution is cloudy or has precipitated.
- Using a 3 mL syringe, aseptically transfer 1.7 mL of reconstituted Blinatumomab into the IV bag. Gently mix the contents of the bag to avoid foaming.
- Under aseptic conditions, attach the IV tubing to the IV bag with the sterile 0.2 micron in-line filter.
- Remove air from the IV bag and prime the IV line only with the prepared solution for infusion. Do not prime with 0.9% Sodium Chloride.
- Store at 2°C to 8°C if not used immediately.
- Use a prefilled 250 mL 0.9% Sodium Chloride IV bag. 250 mL-prefilled bags typically contain overfill to a total volume of 265 to 275 mL. If necessary adjust the IV bag volume by adding or removing 0.9% Sodium Chloride to achieve a starting volume between 265 and 275 mL.
- Using a 10 mL syringe, aseptically transfer 5.6 mL of IV Solution Stabilizer to the IV bag with 0.9% Sodium Chloride. Gently mix the contents of the bag to avoid foaming. Discard remaining IV Solution Stabilizer vial.
- Using a 5 mL syringe, reconstitute one vial of Blinatumomab using 3 mL of preservative-free Sterile Water for Injection, USP. Direct preservative-free Sterile Water for Injection, USP, toward the side of the vial during reconstitution. Gently swirl contents to avoid excess foaming. Do not shake.
- Do not reconstitute Blinatumomab with IV Solution Stabilizer.
- The addition of preservative-free Sterile Water for Injection, USP, to the lyophilized powder results in a final Blinatumomab concentration of 12.5 mcg/mL.
- Visually inspect the reconstituted solution for particulate matter and discoloration during reconstitution and prior to infusion. The resulting solution should be clear to slightly opalescent, colorless to slightly yellow. Do not use if solution is cloudy or has precipitated.
- Using a 3 mL syringe, aseptically transfer 2.6 mL of reconstituted Blinatumomab into the IV bag. Gently mix the contents of the bag to avoid foaming.
- Under aseptic conditions, attach the IV tubing to the IV bag with the sterile 0.2 micron in-line filter.
- Remove air from the IV bag and prime the IV line only with the prepared solution for infusion. Do not prime with 0.9% Sodium Chloride.
- Store at 2°C to 8°C if not used immediately.
- Use a prefilled 250 mL 0.9% Sodium Chloride IV bag. 250 mL-prefilled bags typically contain overfill to a total volume of 265 to 275 mL. If necessary adjust the IV bag volume by adding or removing 0.9% Sodium Chloride to achieve a starting volume between 265 and 275 mL.
- Using a 10 mL syringe, aseptically transfer 5.6 mL of IV Solution Stabilizer to the IV bag with 0.9% Sodium Chloride. Gently mix the contents of the bag to avoid foaming. Discard remaining IV Solution Stabilizer vials.
- Use two vials of Blinatumomab . Using a 5 mL syringe, reconstitute each vial of Blinatumomab using 3 mL of preservative-free Sterile Water for Injection, USP. Direct preservative-free Sterile Water for Injection, USP, toward the side of the vial during reconstitution. Gently swirl contents to avoid excess foaming. Do not shake.
- Do not reconstitute Blinatumomab with IV Solution Stabilizer.
- The addition of preservative-free Sterile Water for Injection, USP, to the lyophilized powder results in a final Blinatumomab concentration of 12.5 mcg/mL.
- Visually inspect the reconstituted solution for particulate matter and discoloration during reconstitution and prior to infusion. The resulting solution should be clear to slightly opalescent, colorless to slightly yellow. Do not use if solution is cloudy or has precipitated.
- Using a 3 mL syringe, aseptically transfer 5.2 mL of reconstituted Blinatumomab into the IV bag (2.7 mL from one vial and the remaining 2.5 mL from the second vial). Gently mix the contents of the bag to avoid foaming.
- Under aseptic conditions, attach the IV tubing to the IV bag with the sterile 0.2 micron in-line filter.
- Remove air from the IV bag and prime the IV line only with the prepared solution for infusion. Do not prime with 0.9% Sodium Chloride.
- Store at 2°C to 8°C if not used immediately.
- The information in Table 1 indicates the storage time for the reconstituted Blinatumomab vial and prepared IV bag containing Blinatumomab solution for infusion. Lyophilized Blinatumomab vial and IV Solution Stabilizer may be stored for a maximum of 8 hours at room temperature.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Blinatumomab in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Blinatumomab in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
Limited experience in pediatric patients
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Blinatumomab in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Blinatumomab in pediatric patients.
# Contraindications
- Blinatumomab is contraindicated in patients with known hypersensitivity to blinatumomab or to any component of the product formulation.
# Warnings
- Cytokine Release Syndrome (CRS), which may be life-threatening or fatal, occurred in patients receiving Blinatumomab .
- Infusion reactions have occurred with the Blinatumomab infusion and may be clinically indistinguishable from manifestations of CRS.
- Serious adverse events that may be associated with CRS included pyrexia, headache, nausea, asthenia, hypotension, increased alanine aminotransferase, increased aspartate aminotransferase, and increased total bilirubin; these events infrequently led to Blinatumomab discontinuation. Life-threatening or fatal CRS was infrequently reported in patients receiving Blinatumomab . In some cases, disseminated intravascular coagulation (DIC), capillary leak syndrome (CLS), and hemophagocytic lymphohistiocytosis/macrophage activation syndrome (HLH/MAS) have been reported in the setting of CRS.
- Patients should be closely monitored for signs or symptoms of these events. Management of these events may require either temporary interruption or discontinuation of Blinatumomab .
- In patients receiving Blinatumomab in clinical trials, neurological toxicities have occurred in approximately 50% of patients. The median time to onset of any neurological toxicity was 7 days. Grade 3 or higher (severe, life-threatening, or fatal) neurological toxicities following initiation of Blinatumomab administration occurred in approximately 15% of patients and included encephalopathy, convulsions, speech disorders, disturbances in consciousness, confusion and disorientation, and coordination and balance disorders. The majority of events resolved following interruption of Blinatumomab , but some resulted in treatment discontinuation.
- Monitor patients receiving Blinatumomab for signs and symptoms of neurological toxicities, and interrupt or discontinue Blinatumomab as recommended.
- In patients receiving Blinatumomab in clinical trials, serious infections such as sepsis, pneumonia, bacteremia, opportunistic infections, and catheter-site infections were observed in approximately 25% of patients, some of which were life-threatening or fatal. As appropriate, administer prophylactic antibiotics and employ surveillance testing during treatment with Blinatumomab . Monitor patients for signs and symptoms of infection and treat appropriately.
- Tumor lysis syndrome (TLS), which may be life-threatening or fatal, has been observed in patients receiving Blinatumomab . Appropriate prophylactic measures, including pretreatment nontoxic cytoreduction and on-treatment hydration, should be used for the prevention of TLS during Blinatumomab treatment. Monitor for signs or symptoms of TLS. Management of these events may require either temporary interruption or discontinuation of Blinatumomab .
- Neutropenia and febrile neutropenia, including life-threatening cases, have been observed in patients receiving Blinatumomab . Monitor laboratory parameters (including, but not limited to, white blood cell count and absolute neutrophil count) during Blinatumomab infusion. Interrupt Blinatumomab if prolonged neutropenia occurs.
- Due to the potential for neurologic events, including seizures, patients receiving Blinatumomab are at risk for loss of consciousness . Advise patients to refrain from driving and engaging in hazardous occupations or activities such as operating heavy or potentially dangerous machinery while Blinatumomab is being administered.
- Treatment with Blinatumomab was associated with transient elevations in liver enzymes. Although the majority of these events were observed in the setting of CRS, some were observed outside of this setting. For these events, the median time to onset was 15 days. *In patients receiving Blinatumomab in clinical trials, Grade 3 or greater elevations in liver enzymes occurred in approximately 6% of patients outside the setting of CRS and resulted in treatment discontinuation in less than 1% of patients.
- Monitor alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GGT), and total blood bilirubin prior to the start of and during Blinatumomab treatment. Interrupt Blinatumomab if the transaminases rise to greater than 5 times the upper limit of normal or if bilirubin rises to more than 3 times the upper limit of normal.
- Cranial magnetic resonance imaging (MRI) changes showing leukoencephalopathy have been observed in patients receiving Blinatumomab , especially in patients with prior treatment with cranial irradiation and antileukemic chemotherapy (including systemic high-dose methotrexate or intrathecal cytarabine). The clinical significance of these imaging changes is unknown.
- Preparation and administration errors have occurred with Blinatumomab treatment. Follow instructions for preparation (including admixing) and administration strictly to minimize medication errors (including underdose and overdose)
# Adverse Reactions
## Clinical Trials Experience
- The following adverse reactions are discussed in greater detail in other sections of the label:
- Cytokine release syndrome
- Neurological Toxicities
- Infections
- Tumor Lysis Syndrome
- Neutropenia and Febrile Neutropenia
- Effects on Ability to Drive and Use Machines
- Elevated Liver Enzymes
- Leukoencephalopathy
- Preparation and Administration Errors
- Because clinical trials are conducted under widely varying conditions, adverse reaction rates observed in the clinical trials of a drug cannot be directly compared to rates in the clinical trials of another drug and may not reflect the rates observed in practice.
- The safety data described in this section reflect exposure to Blinatumomab in clinical trials in which 212 patients with relapsed or refractory ALL received up to 28 mcg/day. All patients received at least one dose of Blinatumomab . The median age of the study population was 37 years (range: 18 to 79 years), 63% were male, 79% were White, 3% were Asian, and 3% were Black or African American.
- The most common adverse reactions (≥ 20%) were pyrexia (62%), headache (36%), peripheral edema (25%), febrile neutropenia (25%), nausea (25%), hypokalemia (23%), and constipation (20%).
- Serious adverse reactions were reported in 65% of patients. The most common serious adverse reactions (≥ 2%) included febrile neutropenia, pyrexia, pneumonia, sepsis, neutropenia, device-related infection, tremor, encephalopathy, infection, overdose, confusion, Staphylococcal bacteremia, and headache.
- Adverse reactions of Grade 3 or higher were reported in 80% of patients. Discontinuation of therapy due to adverse reactions occurred in 18% of patients treated with Blinatumomab. The adverse reactions reported most frequently as the reason for discontinuation of treatment included encephalopathy and sepsis. Fatal adverse events occurred in 15% of patients. The majority of these events were infections. No fatal adverse events occurred on treatment among patients in remission.
- The adverse reactions with ≥ 10% incidence for any grade or ≥ 5% incidence for Grade 3 or higher are summarized in Table 2.
- Additional important adverse reactions that did not meet the threshold criteria for inclusion in Table 2 were:
- Blood and lymphatic system disorders: leukocytosis (2%), lymphopenia (1%)
- Cardiac disorders: tachycardia (8%)
- General disorders and administration site conditions: edema (5%)
- Immune system disorders: cytokine storm (1%)
- Investigations: decreased immunoglobulins (9%), increased blood bilirubin (8%), increased gamma-glutamyl-transferase (6%), increased liver enzymes (1%)
- Metabolism and nutrition disorders: tumor lysis syndrome (4%), hypoalbuminemia (4%)
- Nervous system disorders: encephalopathy (5%), paresthesia (5%), aphasia (4%), convulsion (2%), memory impairment (2%), cognitive disorder (1%), speech disorder (< 1%)
- Psychiatric disorders: confusion (7%), disorientation (3%)
- Vascular disorders: capillary leak syndrome (< 1%).
- Hypersensitivity reactions related to Blinatumomab treatment were hypersensitivity (1%) and bronchospasm (< 1%).
- As with all therapeutic proteins, there is potential for immunogenicity. The immunogenicity of Blinatumomab has been evaluated using either an electrochemiluminescence detection technology (ECL) or an enzyme-linked immunosorbent assay (ELISA) screening immunoassay for the detection of binding anti-blinatumomab antibodies. For patients whose sera tested positive in the screening immunoassay, an in vitro biological assay was performed to detect neutralizing antibodies.
- In clinical studies, less than 1% of patients treated with Blinatumomab tested positive for binding anti-blinatumomab antibodies. All patients who tested positive for binding antibodies also tested positive for neutralizing anti-blinatumomab antibodies.
- Anti-blinatumomab antibody formation may affect the pharmacokinetics of Blinatumomab. No association was seen between antibody development and the development of adverse events.
- The detection of anti-blinatumomab antibody formation is highly dependent on the sensitivity and specificity of the assay. Additionally, the observed incidence of antibody (including neutralizing antibody) positivity in an assay may be influenced by several factors, including assay methodology, sample handling, timing of sample collection, concomitant medications, and underlying disease. For these reasons, comparison of the incidence of antibodies to blinatumomab with the incidence of antibodies to other products may be misleading.
## Postmarketing Experience
There is limited information regarding Blinatumomab Postmarketing Experience in the drug label.
# Drug Interactions
- No formal drug interaction studies have been conducted with Blinatumomab. Initiation of Blinatumomab treatment causes transient release of cytokines that may suppress CYP450 enzymes. The highest drug-drug interaction risk is during the first 9 days of the first cycle and the first 2 days of the second cycle in patients who are receiving concomitant CYP450 substrates, particularly those with a narrow therapeutic index. In these patients, monitor for toxicity (eg, warfarin) or drug concentrations (eg, cyclosporine). Adjust the dose of the concomitant drug as needed.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): C
- There are no adequate and well-controlled studies of Blinatumomab in pregnant women. Based on its mechanism of action, Blinatumomab may cause fetal toxicity including B-cell lymphocytopenia when administered to a pregnant woman. Blinatumomab should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus.
- Animal reproduction studies have not been conducted with blinatumomab. In embryo-fetal developmental toxicity studies, a murine surrogate molecule was administered intravenously to pregnant mice during the period of organogenesis. The surrogate molecule crossed the placental barrier and did not cause embryo-fetal toxicity or teratogenicity. The expected depletions of B and T cells were observed in the pregnant mice, but hematological effects were not assessed in fetuses.
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Blinatumomab in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Blinatumomab during labor and delivery.
### Nursing Mothers
- It is not known whether blinatumomab is excreted in human milk. Because many drugs are excreted in human milk and because of the potential for serious adverse reactions in nursing infants from blinatumomab, a decision should be made whether to discontinue nursing or to discontinue the drug, taking into account the importance of the drug to the mother.
### Pediatric Use
- There is limited experience in pediatric patients. Blinatumomab was evaluated in a dose-escalation study of 41 pediatric patients with relapsed or refractory B-precursor ALL. The median age was 6 years (range: 2 to 17 years). Blinatumomab was administered at doses of 5 to 30 mcg/m2/day. The recommended phase 2 regimen was 5 mcg/m2/day on Days 1-7 and 15 mcg/m2/day on Days 8-28 for cycle 1, and 15 mcg/m2/day on Days 1-28 for subsequent cycles. At a higher dose, a fatal cardiac failure event occurred in the setting of life-threatening cytokine release syndrome (CRS).
- The steady-state concentrations of blinatumomab were comparable in adult and pediatric patients at the equivalent dose levels based on body surface area (BSA)-based regimens.
### Geriatric Use
- Of the total number of patients with relapsed or refractory ALL, approximately 13% were 65 years of age and over. Generally, safety and efficacy were similar between elderly patients (≥ 65 years of age) and patients less than 65 years of age treated with Blinatumomab. Elderly patients experienced a higher rate of neurological toxicities, including cognitive disorder, encephalopathy, confusion, and serious infections.
### Gender
There is no FDA guidance on the use of Blinatumomab with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Blinatumomab with respect to specific racial populations.
### Renal Impairment
- No formal pharmacokinetic studies using Blinatumomab have been conducted in patients with renal impairment. No dose adjustment is needed for patients with baseline creatinine clearance (CrCL) equal to or greater than 30 mL/min. There is no information available in patients with CrCL less than 30 mL/min or patients on hemodialysis.
### Hepatic Impairment
- No formal pharmacokinetic studies using Blinatumomab have been conducted in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Blinatumomab in women of reproductive potential and in males.
### Immunocompromised Patients
There is no FDA guidance on the use of Blinatumomab in patients who are immunocompromised.
# Administration and Monitoring
### Administration
- Intravenous
### Monitoring
- Monitor patients receiving Blinatumomab for signs and symptoms of neurological toxicities, and interrupt or discontinue Blinatumomab as recommended.
- Monitor for signs or symptoms of TLS.
- Monitor laboratory parameters (including, but not limited to, white blood cell count and absolute neutrophil count) during Blinatumomab infusion. Interrupt Blinatumomab if prolonged neutropenia occurs.
- Monitor alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyl transferase (GGT), and total blood bilirubin prior to the start of and during Blinatumomab treatment (an illustrative sketch of the interruption thresholds appears after this list).
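The following minimal sketch is not part of the prescribing information; it simply encodes the liver-test interruption thresholds quoted in the Warnings section above, with laboratory values expressed as multiples of the upper limit of normal (ULN). The function name and the idea of expressing the rule as code are illustrative assumptions, and clinical decisions must be based on the full label and clinical judgment.

```python
# Illustrative sketch only -- not from the prescribing information, not for clinical use.
# Encodes the thresholds stated above: transaminases > 5 x ULN or total bilirubin > 3 x ULN.

def exceeds_interruption_thresholds(alt_x_uln: float, ast_x_uln: float,
                                    bilirubin_x_uln: float) -> bool:
    """Return True when ALT or AST exceed 5 x ULN, or total bilirubin exceeds 3 x ULN."""
    return alt_x_uln > 5 or ast_x_uln > 5 or bilirubin_x_uln > 3

# Example: ALT at 6.2 x ULN with normal AST and bilirubin meets the interruption criteria.
print(exceeds_interruption_thresholds(6.2, 1.1, 0.8))  # True
```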
# IV Compatibility
There is limited information regarding the compatibility of Blinatumomab and IV administrations.
# Overdosage
- Overdoses have been observed, including one patient who received 133-fold the recommended therapeutic dose of Blinatumomab delivered over a short duration. Overdoses resulted in adverse reactions which were consistent with the reactions observed at the recommended therapeutic dose and included fever, tremors, and headache. In the event of overdose, interrupt the infusion, monitor the patient for signs of toxicity, and provide supportive care. Consider reinitiation of Blinatumomab at the correct therapeutic dose when all toxicities have resolved and no earlier than 12 hours after interruption of the infusion.
# Pharmacology
## Mechanism of Action
- Blinatumomab is a bispecific CD19-directed CD3 T-cell engager that binds to CD19 expressed on the surface of cells of B-lineage origin and CD3 expressed on the surface of T cells. It activates endogenous T cells by connecting CD3 in the T-cell receptor (TCR) complex with CD19 on benign and malignant B cells. Blinatumomab mediates the formation of a synapse between the T cell and the tumor cell, upregulation of cell adhesion molecules, production of cytolytic proteins, release of inflammatory cytokines, and proliferation of T cells, which result in redirected lysis of CD19+ cells.
## Structure
- Blinatumomab is a bispecific CD19-directed CD3 T-cell engager that binds to CD19 (expressed on cells of B-lineage origin) and CD3 (expressed on T cells). Blinatumomab is produced in Chinese hamster ovary cells. It consists of 504 amino acids and has a molecular weight of approximately 54 kilodaltons.
- Each Blinatumomab package contains 1 vial of Blinatumomab and 1 vial of IV Solution Stabilizer.
- Blinatumomab is supplied in a single-use vial as a sterile, preservative-free, white to off-white lyophilized powder for intravenous administration. Each single-use vial of Blinatumomab contains 35 mcg blinatumomab, citric acid monohydrate (3.35 mg), lysine hydrochloride (23.23 mg), polysorbate 80 (0.64 mg), trehalose dihydrate (95.5 mg), and sodium hydroxide to adjust pH to 7.0. After reconstitution with 3 mL of preservative-free Sterile Water for Injection, USP, the resulting concentration is 12.5 mcg/mL blinatumomab.
- IV Solution Stabilizer is supplied in a single-use vial as a sterile, preservative-free, colorless to slightly yellow, clear solution. Each single-use vial of IV Solution Stabilizer contains citric acid monohydrate (52.5 mg), lysine hydrochloride (2283.8 mg), polysorbate 80 (10 mg), sodium hydroxide to adjust pH to 7.0, and water for injection.
## Pharmacodynamics
- During the continuous intravenous infusion over 4 weeks, the pharmacodynamic response was characterized by T-cell activation and initial redistribution, reduction in peripheral B cells, and transient cytokine elevation.
- Peripheral T cell redistribution (ie, T cell adhesion to blood vessel endothelium and/or transmigration into tissue) occurred after start of Blinatumomab infusion or dose escalation. T cell counts initially declined within 1 to 2 days and then returned to baseline levels within 7 to 14 days in the majority of patients. An increase of T cell counts above baseline (T cell expansion) was observed in a few patients.
- Peripheral B cell counts decreased to less than or equal to 10 cells/microliter during the first treatment cycle at doses ≥ 5 mcg/m2/day or ≥ 9 mcg/day in the majority of patients. No recovery of peripheral B-cell counts was observed during the 2-week Blinatumomab-free period between treatment cycles. Incomplete depletion of B cells occurred at doses of 0.5 mcg/m2/day and 1.5 mcg/m2/day and in a few patients at higher doses.
- Cytokines including IL-2, IL-4, IL-6, IL-8, IL-10, IL-12, TNF-α, and IFN-γ were measured, and IL-6, IL-10, and IFN-γ were elevated. The highest elevation of cytokines was observed in the first 2 days following start of Blinatumomab infusion. The elevated cytokine levels returned to baseline within 24 to 48 hours during the infusion. In subsequent treatment cycles, cytokine elevation occurred in fewer patients with lesser intensity compared to the initial 48 hours of the first treatment cycle.
## Pharmacokinetics
- The pharmacokinetics of blinatumomab appear linear over a dose range from 5 to 90 mcg/m2/day (approximately equivalent to 9 to 162 mcg/day) in adult patients. Following continuous intravenous infusion, the steady-state serum concentration (Css) was achieved within a day and remained stable over time. The increase in mean Css values was approximately proportional to the dose in the range tested. At the clinical doses of 9 mcg/day and 28 mcg/day for the treatment of relapsed/refractory ALL, the mean (SD) Css was 211 (258) pg/mL and 621 (502) pg/mL, respectively.
- The estimated mean (SD) volume of distribution based on terminal phase (Vz) was 4.52 (2.89) L with continuous intravenous infusion of blinatumomab.
- The metabolic pathway of blinatumomab has not been characterized. Like other protein therapeutics, Blinatumomab is expected to be degraded into small peptides and amino acids via catabolic pathways.
- The estimated mean (SD) systemic clearance with continuous intravenous infusion in patients receiving blinatumomab in clinical studies was 2.92 (2.83) L/hour. The mean (SD) half-life was 2.11 (1.42) hours. Negligible amounts of blinatumomab were excreted in the urine at the tested clinical doses. (An illustrative calculation relating infusion rate, clearance, and steady-state concentration appears after this list.)
- Results of population pharmacokinetic analyses indicate that age (18 to 80 years of age), gender, body weight (44 to 134 kg), and body surface area (1.39 to 2.57 m2) do not influence the pharmacokinetics of blinatumomab.
- No formal pharmacokinetic studies of blinatumomab have been conducted in patients with renal impairment.
- Pharmacokinetic analyses showed an approximately 2-fold difference in mean blinatumomab clearance values between patients with moderate renal impairment (CrCL ranging from 30 to 59 mL/min, N = 21) and normal renal function (CrCL more than 90 mL/min, N = 215). However, high interpatient variability was discerned (CV% up to 95.6%), and clearance values in renal impaired patients were essentially within the range observed in patients with normal renal function. There is no information available in patients with severe renal impairment (CrCL less than 30 mL/min) or patients on hemodialysis.
- Transient elevation of cytokines may suppress CYP450 enzyme activities.
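As an illustrative aside (not part of the label), the standard relationship for a continuous intravenous infusion, Css = infusion rate / clearance, can be applied to the mean clearance quoted above. Because clearance varies widely between patients, and a mean of individual Css values is not the same as a Css computed from mean clearance, the result is only in the same general range as the reported mean Css values; the helper function and unit conversions below are assumptions made for the sketch.

```python
# Illustrative sketch only -- not from the prescribing information, not for clinical use.
# Continuous-infusion steady state: Css = R0 / CL, using the mean clearance quoted above.

MEAN_CLEARANCE_L_PER_H = 2.92  # mean systemic clearance reported in this section

def approx_css_pg_per_ml(dose_mcg_per_day: float,
                         clearance_l_per_h: float = MEAN_CLEARANCE_L_PER_H) -> float:
    """Rough steady-state concentration (pg/mL) for a continuous IV infusion."""
    infusion_rate_pg_per_h = dose_mcg_per_day * 1e6 / 24.0  # mcg/day -> pg/h
    return infusion_rate_pg_per_h / clearance_l_per_h / 1000.0  # pg/L -> pg/mL

for dose in (9, 28):
    print(f"{dose} mcg/day -> ~{approx_css_pg_per_ml(dose):.0f} pg/mL")
# Prints roughly 128 and 400 pg/mL, versus reported mean Css values of 211 and 621 pg/mL;
# the gap reflects the large inter-patient variability in clearance noted above.
```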
## Nonclinical Toxicology
- No carcinogenicity or genotoxicity studies have been conducted with blinatumomab.
- No studies have been conducted to evaluate the effects of blinatumomab on fertility. A murine surrogate molecule had no adverse effects on male and female reproductive organs in a 13-week repeat-dose toxicity study in mice.
# Clinical Studies
- The safety and efficacy of Blinatumomab were evaluated in an open-label, multicenter, single-arm study. Eligible patients were ≥ 18 years of age with Philadelphia chromosome-negative relapsed or refractory B‑precursor ALL (relapsed with first remission duration of ≤ 12 months in first salvage or relapsed or refractory after first salvage therapy or relapsed within 12 months of allogeneic hematopoietic stem cell transplantation [HSCT], and had ≥ 10% blasts in bone marrow).
- Blinatumomab was administered as a continuous intravenous infusion. In the first cycle, the initial dose was 9 mcg/day for week 1, then 28 mcg/day for the remaining 3 weeks. The target dose of 28 mcg/day was administered in cycle 2 and subsequent cycles starting on day 1 of each cycle. Dose adjustment was possible in case of adverse events. The treated population included 185 patients who received at least 1 infusion of Blinatumomab; the median number of treatment cycles was 2 (range: 1 to 5). Patients who responded to Blinatumomab but later relapsed had the option to be retreated with Blinatumomab. Among treated patients, the median age was 39 years (range: 18 to 79 years), 63 out of 185 (34.1%) had undergone HSCT prior to receiving Blinatumomab, and 32 out of 185 (17.3%) had received more than 2 prior salvage therapies.
- The primary endpoint was the complete remission/complete remission with partial hematological recovery (CR/CRh*) rate within 2 cycles of treatment with Blinatumomab. Seventy-seven out of 185 (41.6%) evaluable patients achieved CR/CRh* within the first 2 treatment cycles, with the majority of responses (81%, 62 out of 77) occurring within cycle 1 of treatment. See Table 3 for efficacy results from this study. The HSCT rate among those who achieved CR/CRh* was 39% (30 out of 77).
# How Supplied
- Each Blinatumomab package (NDC 55513-160-01) contains:
- One Blinatumomab 35 mcg single-use vial containing a sterile, preservative-free, white to off-white lyophilized powder and
- One IV Solution Stabilizer 10 mL single-use glass vial containing a sterile, preservative-free, colorless to slightly yellow, clear solution. Do not use the IV Solution Stabilizer to reconstitute Blinatumomab.
## Storage
- Store Blinatumomab and IV Solution Stabilizer vials in the original package refrigerated at 2°C to 8°C (36°F to 46°F) and protect from light until time of use. Do not freeze.
- Store and transport the prepared IV bag containing Blinatumomab solution for infusion under refrigerated conditions at 2°C to 8°C (36°F to 46°F). Ship in packaging that has been validated to maintain the temperature of the contents at 2°C to 8°C (36°F to 46°F). Do not freeze.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
- Advise patients to contact a healthcare professional for any of the following:
- Signs and symptoms that may be associated with cytokine release syndrome and infusion reactions including pyrexia, fatigue, nausea, vomiting, chills, hypotension, rash, and wheezing
- Signs and symptoms of neurological toxicities including convulsions, speech disorders, and confusion
- Signs and symptoms of infections including pneumonia
- Advise patients to refrain from driving and engaging in hazardous occupations or activities such as operating heavy or potentially dangerous machinery while Blinatumomab is being administered. Patients should be advised that they may experience neurological events.
- Inform patients that:
- It is very important to keep the area around the intravenous catheter clean to reduce the risk of infection.
- They should not adjust the setting on the infusion pump. Any changes to pump function may result in dosing errors. If there is a problem with the infusion pump or the pump alarms, patients should contact their doctor or nurse immediately.
# Precautions with Alcohol
Alcohol-Blinatumomab interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
- Blincyto
# Look-Alike Drug Names
There is limited information regarding Blinatumomab Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price | https://www.wikidoc.org/index.php/Blinatumomab | |
6ce9d113f6eeae9ae92b8b643f414db0522badac | wikidoc | Block design | Block design
# Overview
In combinatorial mathematics, a block design (more fully, a balanced incomplete block design) is a particular kind of set system, which has long-standing applications to experimental design (an area of statistics) as well as purely combinatorial aspects.
Given a finite set X (of elements called points) and integers k, r, λ ≥ 1, we define a 2-design B to be a set of k-element subsets of X, called blocks, such that the number r of blocks containing x in X is independent of x, and the number λ of blocks containing given distinct points x and y in X is also independent of the choices.
Here v (the number of elements of X, called points), b (the number of blocks), k, r, and λ are the parameters of the design. (Also, B may not consist of all k-element subsets of X; that is the meaning of incomplete.) The design is called a (v, k, λ)-design or a (v, b, r, k, λ)-design. The parameters are not all independent; v, k, and λ determine b and r, and not all combinations of v, k, and λ are possible. The two basic equations connecting these parameters are bk = vr and λ(v − 1) = r(k − 1); the first counts point-block incidences in two ways, and the second counts, for a fixed point, the pairs that point forms inside the blocks through it.
A fundamental theorem (Fisher's inequality) is that b ≥ v in any block design. The case of equality is called a symmetric design; it has many special features.
Examples of block designs include the lines in finite projective planes (where X is the set of points of the plane and λ = 1), and Steiner triple systems (k = 3). The former is a relatively simple example of a symmetric design.
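As a concrete illustration of these definitions (a sketch added here, not part of the original text), the short Python check below builds the Fano plane, the projective plane of order 2, as a 2-(7, 3, 1) design from the difference set {0, 1, 3} modulo 7 and verifies r, λ, and the two parameter equations by brute force.

```python
from itertools import combinations

# Points are 0..6; blocks are translates of the perfect difference set {0, 1, 3} mod 7.
points = range(7)
blocks = [frozenset((i + d) % 7 for d in (0, 1, 3)) for i in range(7)]

v, b, k = 7, len(blocks), 3
r = sum(0 in blk for blk in blocks)             # blocks through a given point
lam = sum({0, 1} <= blk for blk in blocks)      # blocks through a given pair

# Every point lies in the same number of blocks, and every pair in the same number.
assert all(sum(p in blk for blk in blocks) == r for p in points)
assert all(sum(set(pair) <= blk for blk in blocks) == lam
           for pair in combinations(points, 2))

assert b * k == v * r                           # bk = vr
assert lam * (v - 1) == r * (k - 1)             # lambda(v - 1) = r(k - 1)
print(f"Fano plane: 2-({v}, {k}, {lam}) design with b = {b}, r = {r}")
```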
# Generalization: t-designs
Given any integer t ≥ 2, a t-design B is a class of k-element subsets of X (the set of points), called blocks, such that the number r of blocks that contain any point x in X is independent of x, and the number λ of blocks that contain any given t-element subset T is independent of the choice of T. The numbers v (the number of elements of X), b (the number of blocks), k, r, λ, and t are the parameters of the design. The design may be called a t-(v,k,λ)-design. Again, the four numbers t, v, k, and λ determine b and r, and the numbers themselves cannot be chosen arbitrarily. The equations are bi = λ C(v − i, t − i) / C(k − i, t − i) for i = 0, 1, ..., t, where bi is the number of blocks that contain any i-element set of points and C(n, m) denotes the binomial coefficient "n choose m"; in particular, b = b0 and r = b1.
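Continuing the Fano plane example from the previous section (again an illustrative sketch, not part of the original text), the snippet below evaluates the formula above for i = 0, 1, 2 and compares it with a direct count: b0 recovers b = 7, b1 recovers r = 3, and b2 recovers λ = 1.

```python
from math import comb

# Fano plane viewed as a t-design with t = 2, v = 7, k = 3, lambda = 1.
t, v, k, lam = 2, 7, 3, 1
blocks = [frozenset((i + d) % 7 for d in (0, 1, 3)) for i in range(7)]

for i in range(t + 1):
    predicted = lam * comb(v - i, t - i) // comb(k - i, t - i)
    fixed_set = set(range(i))            # one representative i-element point set
    observed = sum(fixed_set <= blk for blk in blocks)
    assert predicted == observed
    print(f"b_{i} = {predicted}")        # prints 7, 3, 1
```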
Examples include the d-dimensional subspaces of a finite projective geometry (where t = d + 1 and λ = 1).
The term block design by itself usually means a 2-design. | https://www.wikidoc.org/index.php/Block_design |
f705390549a4ae659e48e7556246bee2e952f432 | wikidoc | Blood doping | Blood doping
Blood doping is the practice of illicitly boosting the number of red blood cells (RBCs) in the circulation in order to enhance athletic performance. Because they carry oxygen from the lungs to the muscles, more RBCs in the blood can improve an athlete’s aerobic capacity and stamina.
# Methods
The term blood doping originally meant literally doping with blood, i.e. the transfusion of RBCs. RBCs are uniquely suited to this process because they can be concentrated, frozen and later thawed with little loss of viability or activity. There are two possible types of transfusion: homologous and autologous. In a homologous transfusion, RBCs from a compatible donor are harvested, concentrated and then transfused into the athlete’s circulation prior to endurance competitions. In an autologous transfusion, the athlete's own RBCs are harvested well in advance of competition and then re-introduced before a critical event. For some time after the harvesting the athlete may be anemic.
Both types of transfusion can be dangerous because of the risk of infection and the potential toxicity of improperly stored blood. Homologous transfusions present the additional risks of communication of infectious diseases and the possibility of a transfusion reaction. From a logistical standpoint, either type of transfusion requires the athlete to surreptitiously transport frozen RBCs, thaw and re-infuse them in a non-clinical setting and then dispose of the medical paraphernalia.
In the late 1980s an advance in medicine led to an entirely new form of blood doping involving the hormone erythropoietin (EPO). EPO is a naturally-occurring growth factor that stimulates the formation of RBCs. Recombinant DNA technology made it possible to produce EPO economically on a large scale and it was approved in the US and Europe as a pharmaceutical product for the treatment of anemia resulting from renal failure or cancer chemotherapy. Easily injected under the skin, pharmaceutical EPO can boost hematocrit for six weeks or longer. The use of EPO is now believed by many to be widespread in endurance sports.
EPO is also not free of health hazards: excessive use of the hormone can cause polycythemia, a condition where the level of RBCs in the blood is abnormally high. This causes the blood to be more viscous than normal, a condition that strains the heart. Some elite athletes who died of heart failure—usually during sleep, when heart rate is naturally low—were found to have unnaturally high RBC concentrations in their blood.
# Testing and enforcement
## General methods
A time-honored approach to the detection of doping is the random and often repeated search of athletes’ homes and team facilities for evidence of a banned substance or practice. Professional cyclists customarily submit to random drug testing and searches of their homes as an obligation of team membership and participation in the UCI ProTour. In 2004, British cyclist David Millar was stripped of his world time-trial championship after pharmaceutical EPO was found in his possession. Because athletes sometimes inject or infuse non-banned substances such as vitamin B or electrolytes, the possession of syringes or other medical equipment is not necessarily evidence of doping.
It has also been possible to link athletes to blood doping entirely through documentary evidence, even if no banned substance has been found and no athlete has failed a doping test. The Operación Puerto case is a recent example.
A more modern approach, which has been applied to blood doping with mixed success, is to test the blood or urine of an athlete for evidence of a banned substance or practice. This approach requires a well-documented chain of custody of the sample and a test method that can be relied upon to be accurate and reproducible.
One strategy has been to regard any "non-negative" or "unusual" result as evidence of blood doping. The Union Cycliste Internationale (UCI), for example, imposes a 15-day suspension from racing on any male athlete found to have a hematocrit above 50% and a hemoglobin concentration above 17 grams per deciliter (g/dL). A few athletes have normally high RBC concentrations, especially if they have polycythemia, which they must demonstrate through a series of consistently high hematocrit and hemoglobin results over an extended period of time. (Hematocrit (HCT) is the fraction of blood cells by volume that are RBCs. A normal HCT is 41-50% in adult men and 36-44% in adult women. Hemoglobin (Hb) is the iron-containing protein that binds oxygen in RBCs. Normal Hb levels are 14-17 g/dL of blood in men and 12-15 g/dL in women.)
A more recent and more sophisticated method of analysis, which has not yet reached the level of an official standard, is to compare the levels of mature and immature RBCs in an athlete's circulation. If a high number of mature RBCs is not accompanied by a high number of immature RBCs--called reticulocytes--it suggests that the mature RBCs were artificially introduced by transfusion. EPO use can also lead to a similar RBC profile because a preponderance of mature RBCs tends to suppress the formation of reticulocytes. A measure known as the "stimulation index" or "off-score" has been proposed based on an equation involving hemoglobin and reticulocyte concentrations. A normal score is 85-95 and scores over 133 are considered evidence of doping. (The stimulation index is defined as Hb (g/L) minus sixty times the square root of the percentage of RBCs identified as reticulocytes.)
These threshold levels, and their specific numeric values are sources of controversy. Establishment of incorrect threshold values is one way that false positive test results can be produced by a doping control program.
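To make the arithmetic in the paragraphs above concrete, the short sketch below computes the off-score exactly as described (hemoglobin in g/L minus sixty times the square root of the reticulocyte percentage) and applies the UCI no-start figures quoted earlier. It illustrates the text only; the precise operational definitions and cut-offs are set by the anti-doping authorities, and the function names are assumptions made for this sketch.

```python
from math import sqrt

def off_score(hb_g_per_l: float, reticulocyte_percent: float) -> float:
    """Stimulation index ('off-score') as described above."""
    return hb_g_per_l - 60.0 * sqrt(reticulocyte_percent)

def uci_no_start(hematocrit_percent: float, hb_g_per_dl: float) -> bool:
    """UCI health-test rule as described above for male athletes: hematocrit above 50%
    and hemoglobin above 17 g/dL trigger a 15-day suspension from racing."""
    return hematocrit_percent > 50.0 and hb_g_per_dl > 17.0

print(off_score(150, 1.0))       # 90.0  -> inside the normal 85-95 band
print(off_score(170, 0.2))       # ~143.2 -> above the 133 level treated as evidence of doping
print(uci_no_start(52.0, 17.5))  # True  -> would trigger the 15-day suspension
```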
## EPO
Some success has also been realized in applying a specific test to detect EPO use. In 2000 a test developed by scientists at the French national anti-doping laboratory (LNDD) and endorsed by the World Anti-Doping Agency (WADA) was introduced to detect pharmaceutical EPO by distinguishing it from the nearly identical natural hormone normally present in an athlete’s urine. The test method relies on scientific techniques known as gel electrophoresis and isoelectric focusing. Although the test has been widely applied, especially among cyclists and triathletes, it is highly controversial and its accuracy has been called into question. The principal criticism has been the ability of the test to distinguish pharmaceutical EPO from other proteins that may normally be present in the urine of an athlete after strenuous exercise.
The validity of a doping conviction based on the EPO test method was first challenged successfully by Belgian triathlete Rutger Beke. Beke was suspended from competition for 18 months in March 2005 by the Flemish Disciplinary Commission after a positive urine test for EPO in September 2004. In August 2005 the Commission reversed its decision and exonerated him based on scientific and medical information presented by Beke. He asserted that his sample had become degraded as a result of bacterial contamination and that the substance identified by the laboratory as pharmaceutical EPO was, in fact, an unrelated protein indistinguishable from pharmaceutical EPO in the test method. He claimed, therefore, that the test had produced a false positive result in his case.
In May 2007 Bjarne Riis, Rolf Aldag, Erik Zabel and Brian Holm, all former members of the Telekom cycling team, admitted to using EPO during their cycling careers in the mid 1990s. Riis also relinquished his title as champion of the 1996 Tour de France.
## Transfusions
In the case of detecting blood transfusions, a test for detecting homologous blood transfusions (from a donor to a doping athlete) has been in use since 2000. The test method is based on a technique known as fluorescence-activated cell sorting. By examining markers on the surface of blood cells, the method can determine whether blood from more than one person is present in an athlete’s circulation.
The American cyclist Tyler Hamilton failed this test during the 2004 Olympics but was allowed to keep his gold medal because the processing of his sample precluded conducting a second, confirmatory test. He appealed a second positive test for homologous transfusion from the 2004 Vuelta a España to the International Court of Arbitration for Sport but his appeal was denied. Hamilton's lawyers proposed Hamilton may be a genetic chimera or have had a 'vanishing twin' to explain the presence of RBCs from more than one person. While theoretically possible, these explanations were ruled to be of 'negligible probability'.
At present there is no accepted way of detecting autologous transfusions (that is using the athlete’s own RBCs) but research is in progress and the World Anti-Doping Agency (WADA) has promised that a test will eventually be introduced. The test method and its introduction date are to be kept secret in order to avoid tipping off doping athletes, though the most likely assay is a measure of 2,3-bisphosphoglycerate (2,3-BPG) levels in red blood cells. As 2,3-BPG is readily degraded, autologous transfusions will have lower 2,3-BPG levels. Since 2,3-BPG does not readily diffuse across the cellular membrane, it is extremely difficult to restore 2,3-BPG levels in transfused cells.
## Notable Blood Doping Cases
Tour de France rider Alexander Vinokourov, of the Astana Team, tested positive for two different types of blood, one type from himself, and another type from a compatible donor, various news sources reported on July 24, 2007. Vinokourov was tested after his victory in the 13th stage time trial of the Tour on July 21, 2007. A doping test is not considered to be positive until a second sample is tested to confirm the first. Vinokourov's B sample has tested positive, and he now faces a potential suspension of 2 years and a fine equal to one year's salary. He also tested positive after stage 15.
Vinokourov's teammate Andrej Kashechkin also tested positive for
homologous blood doping on August 1st, 2007, just a few days after the conclusion of the 2007 Tour de France, from which his team withdrew after the revelation that Vinokourov had doped. | https://www.wikidoc.org/index.php/Blood_doping |
f08e0ba4effb1a8e91e4313c0a1b19675db3479a | wikidoc | Bloodletting | Bloodletting
Bloodletting (or blood-letting, in modern medicine referred to as phlebotomy) was a popular medical practice from antiquity up to the late 19th century, involving the withdrawal of often considerable quantities of blood from a patient in the hopeful belief that this would cure or prevent a great many illnesses and diseases. The practice, of unproven efficacy, has been abandoned for all except a few specific conditions as modern treatments proved or believed to be effective have been introduced. It is conceivable that historically, in the absence of other treatments for hypertension, bloodletting could sometimes have had a beneficial effect in temporarily reducing blood pressure by a reduction in blood volume.
Today the term "phlebotomy" refers to the drawing of blood for laboratory analysis or blood transfusion (see Phlebotomy (modern)). Therapeutic phlebotomy refers to the drawing of a unit of blood in specific cases like hemochromatosis, polycythemia vera, porphyria cutanea tarda etc., to reduce the amount of red blood cells.
# In the ancient world
Bloodletting is one of the oldest medical practices, having been practiced among diverse ancient peoples, including the Mesopotamians, the Egyptians, the Greeks, the Mayans, and the Aztecs. In Greece, bloodletting was in use around the time of Hippocrates, who mentions bloodletting but in general relied on dietary techniques. Erasistratus, however, theorized that many diseases were caused by plethoras, or overabundances, in the blood, and advised that these plethoras be treated, initially, by exercise, sweating, reduced food intake, and vomiting. Herophilus advocated bloodletting. Archagathus, one of the first Greek physicians to practice in Rome, practiced bloodletting extensively and gained a most sanguinary reputation.
The popularity of bloodletting in Greece was reinforced by the ideas of Galen, after he discovered the veins and arteries were filled with blood, not air as was commonly believed at the time. There were two key concepts in his system of bloodletting. The first was that blood was created and then used up, it did not circulate and so it could 'stagnate' in the extremities. The second was that humoral balance was the basis of illness or health, the four humours being blood, phlegm, black bile, and yellow bile, relating to the four Greek classical elements of air, water, earth and fire. Galen believed that blood was the dominant humour and the one in most need of control. In order to balance the humours, a physician would either remove 'excess' blood (plethora) from the patient or give them an emetic to induce vomiting, or a diuretic to induce urination.
Galen created a complex system of how much blood should be removed based on the patient's age, constitution, the season, the weather and the place. Symptoms of plethora were believed to include fever, apoplexy, and headache. The blood to be let was of a specific nature determined by the disease: either arterial or venous, and distant or close to the area of the body affected. He linked different blood vessels with different organs, according to their supposed drainage. For example, the vein in the right hand would be let for liver problems and the vein in the left hand for problems with the spleen. The more severe the disease, the more blood would be let. Fevers required copious amounts of bloodletting.
The Talmud recommended a specific day of the week and days of the month for bloodletting, and similar rules, though less codified, can be found among Christian writings advising which saints' days were favourable for bloodletting. Islamic authors too advised bloodletting, particularly for fevers. The practice was probably passed to them by the Greeks; when Islamic theories became known in the Latin-speaking countries of Europe, bloodletting became more widespread. Together with cautery it was central to Arabic surgery; the key texts Kitab al-Qanun and especially Al-Tasrif li-man 'ajaza 'an al-ta'lif both recommended it. It was also known in Ayurvedic medicine, described in the Susruta Samhita.
# In the 2nd millennium
Even after the humoral system fell into disuse, the practice was continued by surgeons and barber-surgeons. Though the bloodletting was often recommended by physicians, it was carried out by barbers. This division of labour led to the distinction between physicians and surgeons. The red-and-white-striped pole of the barbershop, still in use today, is derived from this practice: the red represents the blood being drawn, the white represents the tourniquet used, and the pole itself represents the stick squeezed in the patient's hand to dilate the veins. Bloodletting was used to 'treat' a wide range of diseases, becoming a standard treatment for almost every ailment, and was practiced prophylactically as well as therapeutically.
The practice continued throughout the Middle Ages but began to be questioned in the 16th century, particularly in northern Europe and the Netherlands. In France, the court and university physicians advocated frequent phlebotomy. In England, the efficacy of bloodletting was hotly debated, declining throughout the 18th century, and briefly revived for treating tropical fevers in the 19th century.
A 19th-century bloodletting device called a scarificator (originally illustrated here with three photographs and a diagram) had a spring-loaded mechanism with gears that snapped the blades out through slits in the front cover and back in, in a circular motion. The case was cast brass and the mechanism and blades steel. One knife-bar gear had slipped teeth, turning those blades in a different direction than the blades on the other bars, and a depth-adjustment bar ran along the back and sides of the device.
A number of different methods were employed. The most common was phlebotomy or venesection (often called "breathing a vein"), in which blood was drawn from one or more of the larger external veins, such as those in the forearm or neck. In arteriotomy an artery was punctured, although generally only in the temples. In scarification (not to be confused with scarification, a method of body modification) the "superficial" vessels were attacked, often using a syringe, a spring-loaded lancet, or a glass cup that contained heated air, producing a vacuum within. A scarificator is a bloodletting tool used primarily in 19th century medicine. Leeches could also be used. The withdrawal of so much blood as to induce syncope (fainting) was considered beneficial, and many sessions would only end when the patient began to swoon.
William Harvey disproved the basis of the practice in 1628, and the introduction of scientific medicine, la méthode numérique, allowed Pierre Charles Alexandre Louis to demonstrate that phlebotomy was entirely ineffective in the treatment of pneumonia and various fevers in the 1830s. Nevertheless, in 1840 a lecturer at the Royal College of Physicians would still state that "blood-letting is a remedy which, when judiciously employed, it is hardly possible to estimate too highly" and Louis was dogged by the sanguinary Broussais, who could recommend leeches fifty at a time.
Bloodletting was especially popular in the young United States of America, where Benjamin Rush (a signatory of the Declaration of Independence) saw the state of the arteries as the key to disease, recommending levels of blood-letting that were high, even for the time. George Washington was treated in this manner following a horseback riding accident: almost 4 pounds (1.7 litres) of blood was withdrawn, contributing to his death by throat infection in 1799.
One reason for the continued popularity of bloodletting (and purging) was that, while anatomical knowledge, surgical and diagnostic skills increased tremendously in Europe from the 17th century, the key to curing disease remained elusive and the underlying belief was that it was better to give any treatment than nothing at all. The psychological benefit of bloodletting to the patient (a placebo effect) may sometimes have outweighed the physiological problems it caused. Bloodletting slowly lost favour during the 19th century, but a number of other ineffective or harmful treatments were available as placebos—mesmerism, various processes involving the new technology of electricity, many potions, tonics, and elixirs.
In the absence of other treatments bloodletting actually is beneficial in some circumstances, including the fluid overload of heart failure, and possibly simply to reduce blood pressure. In other cases, such as those involving agitation, the reduction in blood pressure might appear beneficial due to the sedative effect. In 1844 Joseph Pancoast listed the advantages of bloodletting in "A Treatise on Operative Surgery"; not all of the reasons he gave seem outrageous today.
# Phlebotomy
Today it is well-established that bloodletting is not effective for most diseases, or at best less effective than modern treatments. Bloodletting still has its place in the treatment of a few diseases, including hemochromatosis and polycythemia; it is practiced by specifically trained practitioners in hospitals, using modern techniques.
In most cases, phlebotomy now refers to the removal of small quantities of blood for the purpose of performing blood tests. For more details on this subject, see Phlebotomy (modern). | Bloodletting
Bloodletting (or blood-letting, in modern medicine referred to as phlebotomy) was a popular medical practice from antiquity up to the late 19th century, involving the withdrawal of often considerable quantities of blood from a patient in the hopeful belief that this would cure or prevent a great many illnesses and diseases. The practice, of unproven efficacy, has been abandoned for all except a few specific conditions as modern treatments proved or believed to be effective have been introduced. It is conceivable that historically, in the absence of other treatments for hypertension, bloodletting could sometimes have had a beneficial effect in temporarily reducing blood pressure by a reduction in blood volume.
Today the term "phlebotomy" refers to the drawing of blood for laboratory analysis or blood transfusion (see Phlebotomy (modern)). Therapeutic phlebotomy refers to the drawing of a unit of blood in specific cases like hemochromatosis, polycythemia vera, porphyria cutanea tarda etc., to reduce the amount of red blood cells.
# In the ancient world
Bloodletting is one of the oldest medical practices, having been practiced among diverse ancient peoples, including the Mesopotamians, the Egyptians, the Greeks, the Mayans, and the Aztecs. In Greece, bloodletting was in use around the time of Hippocrates, who mentions bloodletting but in general relied on dietary techniques. Erasistratus, however, theorized that many diseases were caused by plethoras, or overabundances, in the blood, and advised that these plethoras be treated, initially, by exercise, sweating, reduced food intake, and vomiting. Herophilus advocated bloodletting. Archagathus, one of the first Greek physicians to practice in Rome, practiced bloodletting extensively and gained a most sanguinary reputation.
The popularity of bloodletting in Greece was reinforced by the ideas of Galen, after he discovered the veins and arteries were filled with blood, not air as was commonly believed at the time. There were two key concepts in his system of bloodletting. The first was that blood was created and then used up, it did not circulate and so it could 'stagnate' in the extremities. The second was that humoral balance was the basis of illness or health, the four humours being blood, phlegm, black bile, and yellow bile, relating to the four Greek classical elements of air, water, earth and fire. Galen believed that blood was the dominant humour and the one in most need of control. In order to balance the humours, a physician would either remove 'excess' blood (plethora) from the patient or give them an emetic to induce vomiting, or a diuretic to induce urination.
Galen created a complex system of how much blood should be removed based on the patient's age, constitution, the season, the weather and the place. Symptoms of plethora were believed to include fever, apoplexy, and headache. The blood to be let was of a specific nature determined by the disease: either arterial or venous, and distant or close to the area of the body affected. He linked different blood vessels with different organs, according to their supposed drainage. For example, the vein in the right hand would be let for liver problems and the vein in the left hand for problems with the spleen. The more severe the disease, the more blood would be let. Fevers required copious amounts of bloodletting.
The Talmud recommended a specific day of the week and days of the month for bloodletting, and similar rules, though less codified, can be found among Christian writings advising which saints' days were favourable for bloodletting. Islamic authors too advised bloodletting, particularly for fevers. The practice was probably passed to them by the Greeks; when Islamic theories became known in the Latin-speaking countries of Europe, bloodletting became more widespread. Together with cautery it was central to Arabic surgery; the key texts Kitab al-Qanum and especially Al-Tasrif li-man 'ajaza 'an al-ta'lif both recommended it. It was also known in Ayurvedic medicine, described in the Susruta Samhita.
# In the 2nd millennium
Even after the humoral system fell into disuse, the practice was continued by surgeons and barber-surgeons. Though the bloodletting was often recommended by physicians, it was carried out by barbers. This division of labour led to the distinction between physicians and surgeons. The red-and-white-striped pole of the barbershop, still in use today, is derived from this practice: the red represents the blood being drawn, the white represents the tourniquet used, and the pole itself represents the stick squeezed in the patient's hand to dilate the veins. Bloodletting was used to 'treat' a wide range of diseases, becoming a standard treatment for almost every ailment, and was practiced prophylactically as well as therapeutically.
The practice continued throughout the Middle Ages but began to be questioned in the 16th century, particularly in northern Europe and the Netherlands. In France, the court and university physicians advocated frequent phlebotomy. In England, the efficacy of bloodletting was hotly debated, declining throughout the 18th century, and briefly revived for treating tropical fevers in the 19th century.
At right are three photos and a diagram of a 19th century bloodletting device called a scarificator. It has a spring loaded mechanism with gears that snaps the blades out through slits in the front cover and back in, in a circular motion. The case is cast brass and the mechanism and blades steel. One knife bar gear has slipped teeth, turning the blades in a different direction than those on the other bars. The last photo and the diagram show the depth adjustment bar at the back and sides.
A number of different methods were employed. The most common was phlebotomy or venesection (often called "breathing a vein"), in which blood was drawn from one or more of the larger external veins, such as those in the forearm or neck. In arteriotomy an artery was punctured, although generally only in the temples. In scarification (not to be confused with scarification, a method of body modification) the "superficial" vessels were attacked, often using a syringe, a spring-loaded lancet, or a glass cup that contained heated air, producing a vacuum within. A scarificator is a bloodletting tool used primarily in 19th century medicine. Leeches could also be used. The withdrawal of so much blood as to induce syncope (fainting) was considered beneficial, and many sessions would only end when the patient began to swoon.
William Harvey disproved the basis of the practice in 1628, and the introduction of scientific medicine, la méthode numérique, allowed Pierre Charles Alexandre Louis to demonstrate that phlebotomy was entirely ineffective in the treatment of pneumonia and various fevers in the 1830s. Nevertheless, in 1840 a lecturer at the Royal College of Physicians would still state that "blood-letting is a remedy which, when judiciously employed, it is hardly possible to estimate too highly" and Louis was dogged by the sanguinary Broussais, who could recommend leeches fifty at a time.
Bloodletting was especially popular in the young United States of America, where Benjamin Rush (a signatory of the Declaration of Independence) saw the state of the arteries as the key to disease, recommending levels of blood-letting that were high, even for the time. George Washington was treated in this manner following a horseback riding accident: almost 4 pounds (1.7 litres) of blood was withdrawn, contributing to his death by throat infection in 1799.
One reason for the continued popularity of bloodletting (and purging) was that, while anatomical knowledge, surgical and diagnostic skills increased tremendously in Europe from the 17th century, the key to curing disease remained elusive and the underlying belief was that it was better to give any treatment than nothing at all. The psychological benefit of bloodletting to the patient (a placebo effect) may sometimes have outweighed the physiological problems it caused. Bloodletting slowly lost favour during the 19th century, but a number of other ineffective or harmful treatments were available as placebos—mesmerism, various processes involving the new technology of electricity, many potions, tonics, and elixirs.
In the absence of other treatments bloodletting actually is beneficial in some circumstance, including the fluid overload of heart failure, and possibly simply to reduce blood pressure. In other cases, such as those involving agitation, the reduction in blood pressure might appear beneficial due to the sedative effect. In 1844 Joseph Pancoast listed the advantages of bloodletting in "A Treatise on Operative Surgery". Not all of these reasons are outrageous nowadays:
# Phlebotomy
Today it is well-established that bloodletting is not effective for most diseases, or at best less effective than modern treatments. Bloodletting still has its place in the treatment of a few diseases, including hemochromatosis and polycythemia; it is practiced by specifically trained practitioners in hospitals, using modern techniques.
In most cases, phlebotomy now refers to the removal of small quantities of blood for the purpose of performing blood tests. For more details on this subject, see Phlebotomy (modern). | https://www.wikidoc.org/index.php/Blood_letting | |
Blood phobia
# Background
Blood phobia (also known as hemophobia in American English or haemophobia in British English) is the extreme and irrational fear of blood. Acute cases of this fear can cause physical reactions that are uncommon in most other phobias, specifically vasovagal syncope (fainting). Similar reactions can also occur with trypanophobia and traumatophobia. For this reason, these three phobias are categorized as "blood-injection-injury phobia" by the DSM-IV. Some early texts refer to this category as "blood-injury-illness phobia."
# Etiology
Blood phobia is often caused by direct or vicarious trauma in childhood or adolescence. There is also a genetic component to blood phobia.
# Treatment
Patients with vasovagal blood phobia who are successfully treated with psychological interventions alone are considered unusual. In contrast, many behavioral techniques useful in mitigating vasovagal syncope, such as applying tension to the muscles in an effort to increase blood pressure, are helpful to patients with blood phobia. Medical devices, such as pacemakers, are also used to treat patients with blood phobia.
https://www.wikidoc.org/index.php/Blood_phobia
Blood plasma
Blood plasma is the liquid component of blood, in which the blood cells are suspended. It makes up about 55% of total blood volume. Blood plasma is prepared simply by spinning a tube of fresh blood in a centrifuge until the blood cells fall to the bottom of the tube. The blood plasma is then poured or drawn off.
Plasmapheresis is a type of medical therapy involving separation of plasma from red blood cells.
# Description
Blood plasma contains many vital proteins including fibrinogen (a clotting factor), globulins and human serum albumin. Sometimes blood plasma may contain viral impurities which must be extracted through viral processing.
Blood plasma is clear and has a pale yellow color. It is mainly composed of water, blood proteins, and inorganic electrolytes. Its protein content is necessary to maintain oncotic pressure: this "holds" the blood plasma within the blood vessels, which are "leaky". Blood plasma serves as a transport medium for glucose, lipids, amino acids, hormones, metabolic end products, carbon dioxide (CO2) and oxygen (O2). The oxygen transport capacity and oxygen content of plasma is much lower than that of the hemoglobin in red blood cells; the amount of oxygen dissolved in plasma will, however, increase under hyperbaric conditions. Plasma is the storage and transport medium of clotting factors. Blood serum is blood plasma from which clotting factors have been removed. This is done by allowing fresh blood to clot before spinning it.
# Fresh frozen plasma
"Fresh frozen plasma" (FFP) is prepared from a single unit of blood, drawn from a single person. It is frozen after collection and can be stored for one year from date of collection. FFP contains all of the coagulation factors and proteins present in the original unit of blood. It is used to treat coagulopathies from warfarin overdose, liver disease, or dilutional coagulopathy. FFP which has been stored more than the standard length of time is re-classified as simply "frozen plasma," which is identical except that the coagulation factors are no longer considered completely viable.
It is also used to treat TTP (thrombotic thrombocytopenic purpura) because it is not possible to treat this disease by transfusing platelets.
# Dried plasma
"Dried plasma" was developed and first used during World War II. Prior to the United States' involvement in the war, liquid plasma and "whole blood" were used. The "Blood for Britain" program during the early 1940s was quite successful (and popular stateside), based in part on Dr. Charles Drew's contribution. A large project was begun in August 1940 to collect blood in New York City hospitals for the export of plasma to Britain. Dr. Drew was appointed medical supervisor of the "Plasma for Britain" project. His notable contribution at this time was to transform the test tube methods of many blood researchers, including himself, into the first successful mass production techniques.
Nonetheless, the decision was made to develop a dried plasma package for the armed forces as it would reduce breakage and make the transportation, packaging, and storage much simpler.
The resulting Army-Navy dried plasma package came in two tin cans containing 400 cc bottles. One bottle contained enough distilled water to completely reconstitute the dried plasma contained within the other bottle. In about three minutes, the plasma would be ready to use and could stay fresh for around four hours.
Following the "Plasma for Britain" project, Dr. Drew was named director of the Red Cross blood bank and assistant director of the National Research Council, in charge of blood collection for the United States Army and Navy. He argued against the armed forces directive that blood/plasma was to be separated by the race of the donor, maintaining that there was no racial difference in human blood and that the policy would lead to needless deaths as soldiers and sailors were required to wait for "same race" blood.
By the end of the war the American Red Cross had provided enough blood for over six million plasma packages. Most of the surplus plasma was returned stateside for civilian use. Serum albumin replaced dried plasma for combat use during the Korean War.
https://www.wikidoc.org/index.php/Blood_plasma
Blunt trauma
Synonyms and keywords: Blunt injury; non-penetrating trauma; blunt force trauma.
# Overview
In medical terminology, blunt trauma refers to a type of physical trauma caused to a body part, either by impact, injury or physical attack; the latter usually being referred to as blunt force trauma. The term itself is used to refer to the precursory trauma, from which there is further development of more specific types of trauma, such as contusions, abrasions, lacerations, and/or bone fracturing.
# Variations
## Blunt Abdominal Trauma (BAT)
Blunt abdominal trauma is often referred to as the most common type of trauma, representing around 50 to 75 percent of blunt trauma. The majority of BAT cases are attributed to car-to-car collisions, in which rapid deceleration often propels the driver forwards into the steering wheel or dashboard, causing contusions in less serious cases or rupturing of internal organs due to briefly increased intraluminal pressure in more serious cases where speed or forward force is greater.
Abdominal trauma caused by deceleration and impact shows a similar effect to trauma to any other part of the body, namely the rupture or damage of both free and relatively fixed structures. A classic example of such an injury is a hepatic tear along the ligamentum teres, followed by injuries to the renal arteries.
As with most trauma, blunt abdominal trauma is often the cause of further injury, depending upon the severity of the accident. In the majority of cases, the liver and spleen (see Blunt splenic trauma) are most severely affected, followed by damage to the small intestine. Recent studies utilizing CT scanning have suggested that hepatic and other concomitant injuries may develop from blunt abdominal trauma.
In rare cases, BAT has been attributed to several medical techniques such as the Heimlich maneuver, attempts at cardiopulmonary resuscitation, and manual thrusts to clear an airway. Although these are rare causes of blunt abdominal trauma, it is often thought that they are caused by applying unnecessary pressure when administering such techniques.
# Diagnosis
Although blunt trauma is a condition in itself, the main emphasis in the diagnosis of blunt trauma is to ascertain the cause of the accident and any further injury, and to correlate these with the medical, dietary, and physiological history of the patient gathered from various sources, such as family and friends or previous physicians, in order to establish the swiftest path to recovery. This history-taking is given the mnemonic "SITEMAP":
- Social history and/or evidence of substance abuse
- Immunization history
- Time of last meal or sign of nutrient intake
- Events leading to the accident or incident
- Medication status, history
- Allergies
- Past surgical and medical treatment history
Usually, on examination, areas such as the head or those linked with the respiratory system have a higher priority and are examined before the abdomen, so as to administer, if necessary, medical treatments that will immediately limit the amount of progressive damage that could be caused by such injuries. The amount of time spent on diagnosing abdominal injury should be minimal, and expedited by using relatively quick methods of determining the extent of such injury, such as identifying free intra-abdominal fluid through diagnostic peritoneal lavage (DPL) before recommending a laparotomy if the situation requires one.
# Treatment
Whenever blunt trauma is sustained by the body, it is normal first to ensure that there is no bleeding, internal or back injury, or breathing problems before administering any type of rehabilitative care to the patient. In cases of car accidents, or where a patient has had some form of accelerated impact, the likelihood is that there will be progressive damage to internal organs, as well as the fracturing of bones, both of which are dealt with by splinting fractures and controlling external hemorrhaging. Most cases require IV therapy along with other methods of stabilization such as securing the airway or providing a respirator.
# Related Chapters
- Penetrating trauma
- Blunt splenic trauma
https://www.wikidoc.org/index.php/Blunt_force_trauma
Bodily fluid
The bodily fluids listed below are found in the bodies of men and/or women, and some may be found in animals as well. They include fluids that are excreted or secreted from the body as well as fluids that normally are not:
- Amniotic fluid surrounding a fetus
- Aqueous humour
- Bile
- Blood and blood plasma
- Cerumen also known as earwax
- Cowper's fluid or pre-ejaculatory fluid
- Chyle
- Chyme
- Female ejaculate
- Interstitial fluid
- Lymph
- Menses
- Breast milk
- Mucus (including snot and phlegm)
- Pleural fluid
- Pus
- Saliva
- Sebum (skin oil)
- Semen
- Serum
- Sweat
- Tears
- Urine
- Vaginal lubrication
- Vomit
- Water
Feces, while not generally classed as a body fluid, are often treated similarly to body fluids, and are sometimes fluid or semi-fluid in nature.
Internal body fluids, which are not usually leaked or excreted to the outside world, include:
- cerebrospinal fluid surrounding the brain and the spinal cord
- synovial fluid surrounding bone joints
- intracellular fluid, the fluid inside cells
- blood
- aqueous humour and vitreous humour, the fluids in the eyeball.
# Bodily fluids in religion and history
Bodily fluids are regarded with varying levels of disgust among world cultures, including the Abrahamic faiths (Christianity, Islam, Judaism) and Hinduism. In Hinduism substances that have left the body are considered unclean, although there are some sects which smear cremated body ash on their foreheads as symbolic gestures.
Feces and urine have been used by religions on every continent for atonement, rites of passage, and funerary rites.
One interesting example is the alleged consumption by some ancient sects of the urine of people intoxicated with hallucinogenic mushrooms or creepers, as the urine contained high concentrations of the drug and could be "re-used."
Attitudes concerning bodily fluids aside, there is a long human history of their use in religion, medicine, art, sex, and folklore. Some believe that the tradition of shaking hands with the right hand stems from using the left hand to clean up after defecation; as a result, shaking hands with the left hand is considered insulting in many cultures.
# Body fluids and health
Modern medical hygiene and public health practices also treat body fluids as unclean. This is because they can be vectors for infectious diseases, such as sexually transmitted diseases or blood-borne diseases.
Safer sex practices try to avoid exchanges of body fluids.
https://www.wikidoc.org/index.php/Bodily_fluid
Bodybuilding
Bodybuilding is the process of maximizing muscle hypertrophy through the combination of weight training, sufficient caloric intake, and rest. Someone who engages in this activity is referred to as a bodybuilder. As a sport, called competitive bodybuilding, bodybuilders display their physiques to a panel of judges, who assign points based on their aesthetic appearance. The muscles are revealed through a combination of fat loss, oils, and tanning (or tanning lotions) which combined with lighting make the definition of the muscle group more distinct. Famous bodybuilders include Arnold Schwarzenegger, Sergio Oliva, Dorian Yates, Lou Ferrigno, Franco Columbu, Frank Zane, Lee Haney, Ronnie Coleman, and Jay Cutler.
# History
## Early years
The "Early Years" of Bodybuilding are considered to be the period between 1880 and 1930.
Bodybuilding (the art of displaying the muscles) did not really exist prior to the late 19th century, when it was promoted by a man from Prussia named Eugen Sandow, who is now generally referred to as "The Father of Modern Bodybuilding". He is credited as being a pioneer of the sport because he allowed an audience to enjoy viewing his physique in "muscle display performances". Although audiences were thrilled to see a well-developed physique, earlier strongmen had simply displayed their bodies as part of strength demonstrations or wrestling matches. Sandow had a stage show built around these displays through his manager, Florenz Ziegfeld. He became so successful at it that he later created several businesses around his fame and was among the first to market products branded with his name alone. As he became more popular, he was credited with inventing and selling the first exercise equipment for the masses (machined dumbbells, spring pulleys and tension bands).
Sandow was a strong advocate of "the Grecian Ideal" (a standard where a mathematical "ideal" was set up and the "perfect physique" was close to the proportions of ancient Greek and Roman statues from classical times). This is how Sandow built his own physique, and in the early years, men were judged by how closely they matched these "ideal" proportions. Sandow organised the first bodybuilding contest, called the "Great Competition", on 14 September 1901 in the Royal Albert Hall, London, UK. Judged by Sandow himself, Sir Charles Lawes, and Sir Arthur Conan Doyle, the contest was a huge success: it sold out and hundreds of physical culture enthusiasts were turned away. The trophy presented to the winner was a bronze statue of Sandow himself sculpted by Frederick Pomeroy. The winner was William L. Murray of Nottingham, England. The most prestigious bodybuilding contest today is the Mr. Olympia, and since 1977, the winner has been presented with the same bronze statue of Sandow that he himself presented to the winner at the first contest.
On 16 January 1904, the first large-scale bodybuilding competition in America took place at Madison Square Garden in New York City. The winner was Al Treloar, and he was declared "The Most Perfectly Developed Man in the World". Treloar won a $1,000 cash prize, a substantial sum at that time. Two weeks later, Thomas Edison made a film of Al Treloar's posing routine. Edison had also made two films of Sandow a few years before, making Edison the maker of the first three motion pictures featuring a bodybuilder. In the early 20th century, Bernarr Macfadden and Charles Atlas continued to promote bodybuilding across the world. Alois P. Swoboda was an early pioneer in America and the man whom Charles Atlas credited with his success in his statement: "Everything that I know I learned from A. P. (Alois) Swoboda."
Other important bodybuilders in the early history of bodybuilding prior to 1930 include: Earle Liederman (writer of some of the earliest bodybuilding instruction books), Seigmund Breitbart (famous Jewish bodybuilder), Georg Hackenschmidt, George F. Jowett, Maxick (a pioneer in the art of posing), Monte Saldo, Launceston Elliot, Sig Klein, Sgt. Alfred Moss, Joe Nordquist, Lionel Strongfort (Strongfortism), Gustav Fristensky (the Czech champion), and Alan C. Mead, who became an impressive muscle champion despite the fact that he lost a leg in World War I.
## The "Golden Age"
The period of around 1940 to 1970 is often referred to as the "Golden Age" of bodybuilding because the aesthetic shifted towards greater muscle mass, in addition to the muscular symmetry and definition that had characterised the "early years". This was due in large part to the advent of World War II, which inspired many young men to be bigger, stronger and more aggressive in their attitudes. This was accomplished by improved training techniques, better nutrition and more effective equipment. Several important publications came into being as well, and new contests emerged as the popularity of the sport grew.
This period of bodybuilding was typified at Muscle Beach in Venice, California. Famous names in bodybuilding from this period included Steve Reeves (notable in his day for portraying Hercules and other sword-and-sandal heroes), Clancy Ross, Reg Park, John Grimek, Larry Scott, Bill Pearl, and Irvin "Zabo" Koszewski.
The increasingly popular Amateur Athletic Union (AAU) added a bodybuilding competition to its existing weightlifting contest in 1939, and the following year this competition was named AAU Mr. America. Around the mid-1940s most bodybuilders became disgruntled with the AAU, since it allowed only amateur competitors and placed more focus on the Olympic sport of weightlifting. This caused brothers Ben and Joe Weider to form the International Federation of BodyBuilders (IFBB), which organized its own competition, IFBB Mr. America, open to professional athletes.
In 1950, another organization, the National Amateur Bodybuilders Association (NABBA), started its NABBA Mr. Universe contest in the UK. Another major contest, Mr. Olympia, was first held in 1965, and this is currently the most prestigious title in bodybuilding.
Initially contests were only for men, but the NABBA added Miss Universe in 1965 and Ms. Olympia was started in 1980. (For more, see female bodybuilding.)
## 1970s onwards
In the 1970s, bodybuilding had major publicity thanks to Arnold Schwarzenegger and the 1977 film Pumping Iron. By this time the IFBB dominated the sport and the AAU took a back seat.
The National Physique Committee (NPC) was formed in 1981 by Jim Manion, who had just stepped down as chairman of the AAU Physique Committee. The NPC has gone on to become the most successful bodybuilding organization in the U.S., and is the amateur division of the IFBB. The late 1980s and early 1990s saw the decline of AAU sponsored bodybuilding contests. In 1999, the AAU voted to discontinue its bodybuilding events.
This period also saw the rise of anabolic steroids used both in bodybuilding and many other sports. To combat this, and to be allowed to be an IOC member, the IFBB introduced doping tests for both steroids and other banned substances. Although doping tests occurred, the majority of professional bodybuilders still used anabolic steroids for competition. During the 1970s the use of anabolic steroids was openly discussed, partly due to the fact that they were legal. However, with the Anabolic Steroid Control Act of 1990 the U.S. Congress placed anabolic steroids into Schedule III of the Controlled Substances Act (CSA).
In 1990, wrestling promoter Vince McMahon announced he was forming a new bodybuilding organization, the World Bodybuilding Federation (WBF). McMahon wanted to bring WWF-style showmanship and bigger prize money to the sport of bodybuilding. McMahon signed 13 competitors to lucrative long-term contracts, something virtually unheard of in bodybuilding up until then. Most of the WBF competitors immediately abandoned the IFBB. In response to the WBF's formation, IFBB president Ben Weider blacklisted all the bodybuilders who had signed with the WBF. The IFBB also quietly stopped testing its athletes for anabolic steroid use, since it was difficult to compete with a new organization that did not test for steroids. In 1992, Vince McMahon instituted drug testing for WBF athletes because he and the WWF were under investigation by the federal government for alleged involvement in anabolic steroid trafficking. The result was that the competitors in the 1992 WBF contest looked sub-par, according to some contemporary accounts. McMahon formally dissolved the WBF in July 1992. Reasons for this probably included lack of income from the pay-per-view broadcasts of the WBF contests, slow sales of the WBF's magazine Bodybuilding Lifestyles (which later became WBF Magazine), and the expense of paying multiple 6-figure contracts as well as producing two TV shows and a monthly magazine. However, the formation of the WBF had two positive effects for the IFBB athletes: (1) it caused IFBB founder Joe Weider to sign many of his top stars to contracts, and (2) it caused the IFBB to raise prize money in its sanctioned contests. Joe Weider eventually offered to accept the WBF bodybuilders back into the IFBB for a fine of 10% of their former yearly WBF salary.
In the early 2000s, the IFBB was attempting to make bodybuilding an Olympic sport. It obtained full IOC membership in 2000 and was attempting to get approved as a demonstration event at the Olympics which would hopefully lead to it being added as a full contest. This did not happen. Olympic recognition for bodybuilding remains controversial since some argue that bodybuilding is not a sport because the actual contest does not involve athletic effort. Also, some still have the misperception that bodybuilding necessarily involves the use of anabolic steroids, which are prohibited in Olympic competitions. Proponents argue that the posing routine requires skill and preparation, and bodybuilding should therefore be considered a sport.
In 2003, Joe Weider sold Weider Publications to AMI, which owns The National Enquirer. Ben Weider is still the president of the IFBB. In 2004, contest promoter Wayne DeMilia broke ranks with the IFBB and AMI took over the promotion of the Mr. Olympia contest.
# Areas of Bodybuilding
## Professional bodybuilding
In the modern bodybuilding industry, "professional" generally means a bodybuilder who has won qualifying competitions as an amateur and has earned a 'pro card' from the IFBB. Professionals earn the right to compete in sanctioned competitions including the Arnold Classic and the Night of Champions. Placings at such competitions in turn earn them the right to compete at the Mr. Olympia; the title is considered to be the highest accolade in the professional bodybuilding field.
## Natural bodybuilding
In natural contests bodybuilders are routinely tested for illegal substances and are banned from future contests for any violations. Testing can be done on urine samples, but in many cases a less expensive polygraph (lie detector) test is performed instead. What qualifies as an "illegal" substance, in the sense that it is prohibited by regulatory bodies, varies between natural federations, and does not necessarily include only substances that are illegal under the laws of the relevant jurisdiction. Anabolic steroids, prohormones and diuretics are generally banned in natural organizations. Natural bodybuilding organizations include the NANBF (North American Natural Bodybuilding Federation) and the NPA (Natural Physique Association). Natural bodybuilders assert that their method is more focused on competition and a healthy lifestyle than other forms of bodybuilding.
## Teenage bodybuilding
Bodybuilding also has many competition categories for young entrants. Many current professional bodybuilders started weight training during their teenage years. Bodybuilders such as Arnold Schwarzenegger, Lee Priest and Jay Cutler all started competing when they were teenagers. Today many teenagers compete in bodybuilding competitions.
## Female bodybuilding
In the 1970s, women began to take part in bodybuilding competitions, and the sport was extremely popular for a time. More than ever, women are training with weights for exercise, out of a desire for a more attractive body and to prevent bone loss. Many women, however, still fear that weight training will make them "bulky" and believe weight training is only for men. In reality, strength training has many benefits for women, including increased bone mass and prevention of bone loss, as well as increased muscle strength and balance. In recent years, the related areas of fitness and figure competition have gained in popularity, providing an alternative for women who choose not to develop the level of muscularity necessary for bodybuilding. The first Ms. Olympia contest in 1980, won by Rachel McLish, closely resembled what is thought of today as a fitness and figure competition.
# Competition
For biographies of professional bodybuilders see list of female bodybuilders, list of male professional bodybuilders, and Category:Professional bodybuilders
In competitive bodybuilding, bodybuilders aspire to develop and maintain an aesthetically pleasing (by bodybuilding standards) body and balanced physique. The competitors show off their bodies by performing a number of poses - bodybuilders spend time practicing their posing as this has a large effect on how they are judged.
A bodybuilder's size and shape are far more important than how much he or she can lift. The sport should therefore not be confused with strongman competition or powerlifting, where the emphasis is on actual physical strength, or with Olympic weightlifting, where the emphasis is split equally between strength and technique. Though superficially similar to the casual observer, the fields entail a different regimen of training, diet, and basic motivation.
## Contest preparation
The general strategy adopted by most present-day competitive bodybuilders is to make muscle gains for most of the year (known as the "off-season") and approximately 3-4 months from competition attempt to lose body fat (referred to as "cutting"). In doing this some muscle will be lost but the aim is to keep this to a minimum. There are many approaches used but most involve reducing calorie intake and increasing cardio, while monitoring body fat percentage.
In the week leading up to a contest, bodybuilders will begin increasing their water intake so as to condition the body's water-regulating systems to keep flushing water. They will also increase their sodium intake. At the same time they will decrease their carbohydrate consumption in an attempt to "carb deplete". The goal during this week is to deplete the muscles of glycogen. Two days before the show, sodium intake is reduced by half, and then eliminated completely. The day before the show, water is removed from the diet, and diuretics may be introduced. At the same time carbohydrates are re-introduced into the diet to expand the muscles. This is typically known as "carb-loading". The end result is an ultra-lean bodybuilder with full, hard muscles and a dry, vascular appearance.
Prior to performing on stage, bodybuilders will apply various products to their skin to improve their muscle definition - these include fake tan commonly called "pro tan" (to make the skin darker) and various oils (to make the skin shiny). They will also use weights to "pump up" by forcing blood to their muscles to improve size and vascularity.
# Strategy
Bodybuilders use three main strategies to maximize muscle hypertrophy:
- Strength training through weights or elastic/hydraulic resistance
- Specialised nutrition, incorporating extra protein and supplements where necessary
- Adequate rest, including sleep and recuperation between workouts
## Weight training
Weight training causes micro-tears to the muscles being trained; this is generally known as microtrauma. These micro-tears contribute to the soreness felt after exercise, called delayed onset muscle soreness (DOMS). It is the repair of this microtrauma that results in muscle growth. Normally, this soreness becomes most apparent a day or two after a workout.
## Nutrition
The high levels of muscle growth and repair achieved by bodybuilders require a specialized diet. Generally speaking, bodybuilders require more calories than the average person of the same weight to meet the protein and energy requirements needed to support their training and increase muscle mass. A sub-maintenance level of food energy is combined with cardiovascular exercise to lose body fat in preparation for a contest. The ratios of food energy from carbohydrates, proteins, and fats vary depending on the goals of the bodybuilder.
Carbohydrates play an important role for bodybuilders. Carbohydrates give the body energy to deal with the rigors of training and recovery. Bodybuilders seek out low-glycemic polysaccharides and other slowly-digesting carbohydrates, which release energy in a more stable fashion than high-glycemic sugars and starches. This is important as high-glycemic carbohydrates cause a sharp insulin response, which places the body in a state where it is likely to store additional food energy as fat rather than muscle, and which can waste energy that should be directed towards muscle growth. However, bodybuilders frequently do ingest some quickly-digesting sugars (often in form of pure dextrose or maltodextrin) after a workout. This may help to replenish glycogen stores within the muscle, and to stimulate muscle protein synthesis.
Protein is probably one of the most important parts of the diet for the bodybuilder to consider. Functional proteins such as motor proteins, which include myosin, kinesin, and dynein, generate the forces exerted by contracting muscles. Current advice says that bodybuilders should consume 25-30% of their total calorie intake as protein to further their goal of maintaining and improving their body composition. This is a widely debated topic, with many arguing that 1 gram of protein per pound of body weight is ideal, some suggesting that less is sufficient, and others recommending 1.5 grams, 2 grams, or more. It is believed that protein needs to be consumed frequently throughout the day, especially during/after a workout, and before sleep. There is also some debate concerning the best type of protein to take. Chicken, beef, pork, fish, eggs and dairy foods are high in protein, as are some nuts, seeds, beans and lentils. Casein or whey are often used to supplement the diet with additional protein. Whey protein is the type of protein contained in many popular brands of protein supplements, and is preferred by many bodybuilders because of its high Biological Value (BV) and quick absorption rates. Bodybuilders usually require higher quality protein with a high BV rather than relying on protein such as soy, which is often avoided due to its claimed estrogenic properties. Still, some nutrition experts believe that soy, flax seeds and many other plants that contain weak estrogen-like compounds known as phytoestrogens can be used beneficially, as phytoestrogens compete with estrogen for receptor sites in the male body and can block its actions. This can also include some inhibition of pituitary functions, while stimulating the P450 system (the system that eliminates chemicals, hormones, drugs and metabolic waste products from the body) in the liver to more actively process and excrete excess estrogen.
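As a rough, purely illustrative comparison of the two rules of thumb mentioned above, the short Python sketch below computes the daily protein targets each rule implies. The body weight, calorie intake and helper names are made-up example values rather than anything stated in the text, and it uses the standard approximation of about 4 kcal per gram of protein.

```python
# Illustrative only: compares two common protein rules of thumb.
# The body weight and calorie intake below are arbitrary example values.

PROTEIN_KCAL_PER_GRAM = 4  # standard approximation for the energy content of protein


def protein_from_calories(total_kcal: float, fraction: float) -> float:
    """Grams of protein implied by allotting a fraction of total calories to protein."""
    return total_kcal * fraction / PROTEIN_KCAL_PER_GRAM


def protein_from_bodyweight(weight_lb: float, grams_per_lb: float) -> float:
    """Grams of protein implied by a grams-per-pound-of-body-weight rule."""
    return weight_lb * grams_per_lb


if __name__ == "__main__":
    weight_lb, total_kcal = 180, 3000  # hypothetical bodybuilder and diet
    print(f"25-30% of calories: {protein_from_calories(total_kcal, 0.25):.0f}"
          f"-{protein_from_calories(total_kcal, 0.30):.0f} g/day")
    print(f"1.0-1.5 g per lb of body weight: {protein_from_bodyweight(weight_lb, 1.0):.0f}"
          f"-{protein_from_bodyweight(weight_lb, 1.5):.0f} g/day")
```

With these example numbers the two guidelines land in a broadly similar range, which may be why both are commonly quoted.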
Bodybuilders usually split their food intake for the day into 5 to 7 meals of roughly equal nutritional content and attempt to eat at regular intervals (normally between 2 and 3 hours). This method purports to serve two purposes: to limit overindulging and to increase basal metabolic rate compared to the traditional 3 meals a day. However, the latter claim has been debunked: the most reliable research, using whole-body calorimetry and doubly-labelled water, finds no metabolic advantage to eating more frequently.
### Dietary supplements
The important role of nutrition in building muscle and losing fat means bodybuilders may consume a wide variety of dietary supplements. Various products are used in an attempt to augment muscle size, increase the rate of fat loss, improve joint health and prevent potential nutrient deficiencies. Scientific consensus supports the effectiveness of only a small number of commercially available supplements when used by healthy, physically active adults. Creatine is probably the most widely used legal performance-enhancing supplement. Creatine works by being converted into creatine phosphate (phosphocreatine), which donates a phosphate group to ADP to regenerate ATP. This provides the muscle with additional, rapidly available energy during short, intense bouts of work such as weight training.
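For reference, a simplified way to write the reaction described above (the creatine kinase reaction, in which phosphocreatine hands its phosphate group to ADP) is shown below; this is standard biochemistry rather than something stated in the original text, and protons and cofactors are omitted for brevity.

$$\text{PCr} + \text{ADP} \xrightarrow{\text{creatine kinase}} \text{Cr} + \text{ATP}$$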
## Performance enhancing substances
Some bodybuilders use drugs to gain an advantage in hypertrophy, especially in professional competitions. Although these substances are illegal without prescription in many countries, in professional bodybuilding anabolic steroids and precursor substances such as prohormones are used very frequently. Anabolic steroids cause hypertrophy of both types (I and II) of muscle fibers, likely due to an increased synthesis of muscle proteins. Some negative side-effects accompany steroid abuse, such as hepatotoxicity, gynecomastia, acne, male pattern baldness and a temporary decline in the body's own testosterone production, which can cause testicular atrophy.
Growth Hormone (GH) and insulin are also used. GH is relatively expensive compared to steroids, while insulin is very readily available yet fatal if misused. See Growth hormone treatment for bodybuilding.
## Rest
Although muscle stimulation occurs in the gym lifting weights, muscle growth occurs afterward during rest. Without adequate rest and sleep, muscles do not have an opportunity to recover and build. About eight hours of sleep a night is desirable for the bodybuilder to be refreshed, although this varies from person to person. Additionally, many athletes find a daytime nap further increases their body's ability to build muscle. Some bodybuilders take several naps per day, during peak anabolic phases.
## Overtraining
Overtraining refers to when a bodybuilder has trained to the point where his workload exceeds his recovery capacity. There are many reasons that overtraining occurs, including lack of adequate nutrition, lack of recovery time between workouts, insufficient sleep, and training at a high intensity for too long (a lack of splitting apart workouts). Training at a high intensity too frequently also stimulates the central nervous system (CNS) and can result in a hyper-adrenergic state that interferes with sleep patterns. To avoid overtraining, intense frequent training must be met with at least an equal amount of purposeful recovery. Timely provision of carbohydrates, proteins, and various micronutrients such as vitamins, minerals, and phytochemicals, and even nutritional supplements, is acutely critical.
It has been argued that overtraining can be beneficial. One article published by Muscle & Fitness magazine stated that you can "Overtrain for Big Gains". It suggested that if one is planning a restful holiday and does not wish to inhibit one's bodybuilding lifestyle too much, one should overtrain before taking the holiday, so the body can rest easily, recuperate and grow. Overtraining can be used advantageously, as when a bodybuilder is purposely overtrained for a brief period of time to supercompensate during a regeneration phase. These are known as "shock micro-cycles" and were a key training technique used by Soviet athletes. However, the vast majority of overtraining that occurs in average bodybuilders is generally unplanned and completely unnecessary.
Bodybuilding is the process of maximizing muscle hypertrophy through the combination of weight training, sufficient caloric intake, and rest. Someone who engages in this activity is referred to as a bodybuilder. As a sport, called competitive bodybuilding, bodybuilders display their physiques to a panel of judges, who assign points based on their aesthetic appearance. The muscles are revealed through a combination of fat loss, oils, and tanning (or tanning lotions) which combined with lighting make the definition of the muscle group more distinct. Famous bodybuilders include Arnold Schwarzenegger, Sergio Oliva, Dorian Yates, Lou Ferrigno, Franco Columbu, Frank Zane, Lee Haney, Ronnie Coleman, and Jay Cutler.
# History
## Early years
The "Early Years" of Bodybuilding are considered to be the period between 1880 and 1930.
Bodybuilding (the art of displaying the muscles) did not really exist prior to the late 19th century, when it was promoted by a man from Prussia named Eugen Sandow,[1] who is now generally referred to as "The Father of Modern Bodybuilding". He is credited as being a pioneer of the sport because he allowed an audience to enjoy viewing his physique in "muscle display performances". Although audiences were thrilled to see a well-developed physique, those men simply displayed their bodies as part of strength demonstrations or wrestling matches. Sandow had a stage show built around these displays through his manager, Florenz Ziegfeld. He became so successful at it, he later created several businesses around his fame and was among the first to market products branded with his name alone. As he became more popular, he was credited with inventing and selling the first exercise equipment for the masses (machined dumbbells, spring pulleys and tension bands).
Sandow was a strong advocate of "the Grecian Ideal" (a standard in which a mathematical "ideal" was set up and the "perfect physique" was close to the proportions of ancient Greek and Roman statues from classical times). This is how Sandow built his own physique, and in the early years men were judged by how closely they matched these "ideal" proportions. Sandow organised the first bodybuilding contest on 14 September 1901, called the "Great Competition" and held in the Royal Albert Hall, London, UK. Judged by Sandow himself, Sir Charles Lawes, and Sir Arthur Conan Doyle, the contest was a huge success; it sold out, and hundreds of physical culture enthusiasts were turned away. The trophy presented to the winner was a bronze statue of Sandow himself sculpted by Frederick Pomeroy. The winner was William L. Murray of Nottingham, England. The most prestigious bodybuilding contest today is the Mr. Olympia, and since 1977 the winner has been presented with the same bronze statue of Sandow that he himself presented to the winner at the first contest.[2]
On 16 January 1904, the first large-scale bodybuilding competition in America took place at Madison Square Garden in New York City. The winner was Al Treloar and he was declared "The Most Perfectly Developed Man in the World". Treloar won a $1,000 cash prize, a substantial sum at that time. Two weeks later, Thomas Edison made a film of Al Treloar's posing routine. Edison also made two films of Sandow a few years before, making him the man who made the first three motion pictures featuring a bodybuilder. In the early 20th century, Bernarr Macfadden and Charles Atlas, continued to promote bodybuilding across the world. Alois P. Swoboda was an early pioneer in America and the man whom Charles Atlas credited with his success in his statement: "Everything that I know I learned from A. P. (Alois) Swoboda."[citation needed]
Other important bodybuilders in the early history of bodybuilding prior to 1930 include: Earle Liederman (writer of some of the earliest bodybuilding instruction books), Seigmund Breitbart (famous Jewish bodybuilder), Georg Hackenschmidt, George F. Jowett, Maxick (a pioneer in the art of posing), Monte Saldo, Launceston Elliot, Sig Klein, Sgt. Alfred Moss, Joe Nordquist, Lionel Strongfort (Strongfortism), Gustav Fristensky (the Czech champion), and Alan C. Mead, who became an impressive muscle champion despite the fact that he lost a leg in World War I.
## The "Golden Age"
The period of around 1940 to 1970 is often referred to as the "Golden Age" of bodybuilding because of a shift in the aesthetic toward greater mass, in addition to the muscular symmetry and definition that had characterised the "early years". This was due in large part to the advent of World War II, which inspired many young men to become bigger, stronger and more aggressive in their attitudes. This was accomplished through improved training techniques, better nutrition and more effective equipment. Several important publications came into being as well, and new contests emerged as the popularity of the sport grew.
This period of bodybuilding was typified at Muscle Beach in Venice, California. Famous names in bodybuilding from this period included Steve Reeves (notable in his day for portraying Hercules and other sword-and-sandal heroes), Clancy Ross, Reg Park, John Grimek, Larry Scott, Bill Pearl, and Irvin "Zabo" Koszewski.
In 1939 the increasingly popular Amateur Athletic Union (AAU) added a bodybuilding competition to its existing weightlifting contest, and the following year this competition was named AAU Mr. America. Around the mid-1940s most bodybuilders became disgruntled with the AAU, since it only allowed amateur competitors and placed more focus on the Olympic sport of weightlifting. This caused brothers Ben and Joe Weider to form the International Federation of BodyBuilders (IFBB), which organized its own competition, the IFBB Mr. America, open to professional athletes.
In 1950, another organization, the National Amateur Bodybuilders Association (NABBA), started their NABBA Mr. Universe contest in the UK. Another major contest, the Mr. Olympia, was first held in 1965 and is currently the most prestigious title in bodybuilding.
Initially contests were only for men, but the NABBA added Miss Universe in 1965 and Ms. Olympia was started in 1980. (For more, see female bodybuilding.)
## 1970s onwards
In the 1970s, bodybuilding had major publicity thanks to Arnold Schwarzenegger and the 1977 film Pumping Iron. By this time the IFBB dominated the sport and the AAU took a back seat.
The National Physique Committee (NPC) was formed in 1981 by Jim Manion, who had just stepped down as chairman of the AAU Physique Committee. The NPC has gone on to become the most successful bodybuilding organization in the U.S., and is the amateur division of the IFBB. The late 1980s and early 1990s saw the decline of AAU sponsored bodybuilding contests. In 1999, the AAU voted to discontinue its bodybuilding events.
This period also saw the rise of anabolic steroids, used both in bodybuilding and many other sports. To combat this, and to be allowed to become an IOC member, the IFBB introduced doping tests for both steroids and other banned substances. Although doping tests occurred, the majority of professional bodybuilders still used anabolic steroids for competition. During the 1970s the use of anabolic steroids was openly discussed, partly due to the fact that they were legal.[3] However, with the Anabolic Steroid Control Act of 1990, the U.S. Congress placed anabolic steroids into Schedule III of the Controlled Substances Act (CSA).
In 1990, wrestling promoter Vince McMahon announced he was forming a new bodybuilding organization, the World Bodybuilding Federation (WBF). McMahon wanted to bring WWF-style showmanship and bigger prize money to the sport of bodybuilding. McMahon signed 13 competitors to lucrative long-term contracts, something virtually unheard of in bodybuilding up until then. Most of the WBF competitors immediately abandoned the IFBB. In response to the WBF's formation, IFBB president Ben Weider blacklisted all the bodybuilders who had signed with the WBF. The IFBB also quietly stopped testing its athletes for anabolic steroid use, since it was difficult to compete against a new organization that did not test for steroids. In 1992, Vince McMahon instituted drug testing for WBF athletes because he and the WWF were under investigation by the federal government for alleged involvement in anabolic steroid trafficking. The result was that the competitors in the 1992 WBF contest looked sub-par, according to some contemporary accounts. McMahon formally dissolved the WBF in July 1992. Reasons for this probably included lack of income from the pay-per-view broadcasts of the WBF contests, slow sales of the WBF's magazine Bodybuilding Lifestyles (which later became WBF Magazine), and the expense of paying multiple six-figure contracts as well as producing two TV shows and a monthly magazine. However, the formation of the WBF had two positive effects for the IFBB athletes: (1) it caused IFBB founder Joe Weider to sign many of his top stars to contracts, and (2) it caused the IFBB to raise prize money in its sanctioned contests. Joe Weider eventually offered to accept the WBF bodybuilders back into the IFBB for a fine of 10% of their former yearly WBF salary.
In the early 2000s, the IFBB was attempting to make bodybuilding an Olympic sport. It obtained full IOC membership in 2000 and was attempting to get approved as a demonstration event at the Olympics which would hopefully lead to it being added as a full contest. This did not happen. Olympic recognition for bodybuilding remains controversial since some argue that bodybuilding is not a sport because the actual contest does not involve athletic effort. Also, some still have the misperception that bodybuilding necessarily involves the use of anabolic steroids, which are prohibited in Olympic competitions. Proponents argue that the posing routine requires skill and preparation, and bodybuilding should therefore be considered a sport.
In 2003, Joe Weider sold Weider Publications to AMI, which owns The National Enquirer. Ben Weider is still the president of the IFBB. In 2004, contest promoter Wayne DeMilia broke ranks with the IFBB and AMI took over the promotion of the Mr. Olympia contest.
# Areas of Bodybuilding
## Professional bodybuilding
In the modern bodybuilding industry, "professional" generally means a bodybuilder who has won qualifying competitions as an amateur and has earned a 'pro card' from the IFBB. Professionals earn the right to compete in sanctioned competitions including the Arnold Classic and the Night of Champions. Placings at such competitions in turn earn them the right to compete at the Mr. Olympia; the title is considered to be the highest accolade in the professional bodybuilding field.
## Natural bodybuilding
In natural contests, bodybuilders are routinely tested for "illegal" substances and are barred from future contests for any violations. Testing can be done on urine samples, but in many cases a less expensive polygraph (lie detector) test is performed instead. What qualifies as an "illegal" substance, in the sense that it is prohibited by regulatory bodies, varies between natural federations, and does not necessarily include only substances that are illegal under the laws of the relevant jurisdiction. Anabolic steroids, prohormones and diuretics are generally banned in natural organizations. Natural bodybuilding organizations include the NANBF (North American Natural Bodybuilding Federation) and the NPA (Natural Physique Association). Natural bodybuilders assert that their method is more focused on competition and a healthy lifestyle than other forms of bodybuilding.
## Teenage bodybuilding
Bodybuilding also has many competition categories for young entrants. Many current professional bodybuilders started weight training during their teenage years. Bodybuilders such as Arnold Schwarzenegger, Lee Priest and Jay Cutler all started competing when they were teenagers. Today many teenagers compete in bodybuilding competitions.
## Female bodybuilding
In the 1970s, women began to take part in bodybuilding competitions, and the sport was extremely popular for a time. More women than ever are now training with weights, both in pursuit of a more attractive body and to prevent bone loss.[4] Many women, however, still fear that weight training will make them "bulky" and believe weight training is only for men. In fact, strength training has many benefits for women, including increased bone mass and prevention of bone loss as well as increased muscle strength and balance.[5][6] In recent years, the related areas of fitness and figure competition have gained in popularity, providing an alternative for women who choose not to develop the level of muscularity necessary for bodybuilding. The first Ms. Olympia contest in 1980, won by Rachel McLish, would closely resemble what is thought of today as a fitness and figure competition.
# Competition
For biographies of professional bodybuilders see list of female bodybuilders, list of male professional bodybuilders, and Category:Professional bodybuilders
In competitive bodybuilding, bodybuilders aspire to develop and maintain an aesthetically pleasing (by bodybuilding standards) body and balanced physique. The competitors show off their bodies by performing a number of poses - bodybuilders spend time practicing their posing as this has a large effect on how they are judged.
A bodybuilder's size and shape are far more important than how much he or she can lift. The sport should therefore not be confused with strongman competition or powerlifting, where the main point is on actual physical strength, or with Olympic weightlifting, where the main point is equally split between strength and technique. Though superficially similar to the casual observer, the fields entail a different regimen of training, diet, and basic motivation.
## Contest preparation
The general strategy adopted by most present-day competitive bodybuilders is to make muscle gains for most of the year (known as the "off-season") and approximately 3-4 months from competition attempt to lose body fat (referred to as "cutting"). In doing this some muscle will be lost but the aim is to keep this to a minimum. There are many approaches used but most involve reducing calorie intake and increasing cardio, while monitoring body fat percentage.
In the week leading up to a contest, bodybuilders will begin increasing their water intake so as to deregulate the systems in the body associated with water flushing. They will also increase their sodium intake. At the same time they will decrease their carbohydrate consumption in an attempt to "carb deplete". The goal during this week is to deplete the muscles of glycogen. Two days before the show, sodium intake is reduced by half, and then eliminated completely. The day before the show, water is removed from the diet, and diuretics may be introduced. At the same time carbohydrates are re-introduced into the diet to expand the muscles. This is typically known as "carb-loading." The end result is an ultra-lean bodybuilder with full hard muscles and a dry, vascular appearance.
Prior to performing on stage, bodybuilders will apply various products to their skin to improve their muscle definition - these include fake tan commonly called "pro tan" (to make the skin darker) and various oils (to make the skin shiny). They will also use weights to "pump up" by forcing blood to their muscles to improve size and vascularity.
# Strategy
Bodybuilders use three main strategies to maximize muscle hypertrophy:
- Strength training through weights or elastic/hydraulic resistance
- Specialised nutrition, incorporating extra protein and supplements where necessary
- Adequate rest, including sleep and recuperation between workouts
## Weight training
Weight training causes micro-tears in the muscles being trained; this is generally known as microtrauma. These micro-tears contribute to the soreness felt after exercise, called delayed onset muscle soreness (DOMS). It is the repair of this microtrauma that results in muscle growth. Normally, the soreness becomes most apparent a day or two after a workout.[7]
## Nutrition
The high levels of muscle growth and repair achieved by bodybuilders require a specialized diet. Generally speaking, bodybuilders require more calories than the average person of the same weight in order to meet the protein and energy demands of their training and to increase muscle mass. A sub-maintenance level of food energy is combined with cardiovascular exercise to lose body fat in preparation for a contest. The ratios of food energy from carbohydrates, proteins, and fats vary depending on the goals of the bodybuilder.[8]
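To make the idea of a sub-maintenance energy intake concrete, the short Python sketch below estimates a daily calorie target and a rough fat-loss timeline. The maintenance intake, the daily deficit, and the commonly cited figure of roughly 3,500 kcal per pound of body fat are illustrative assumptions and rules of thumb, not recommendations drawn from this article.

```python
# Illustrative sketch of "sub-maintenance" calorie planning for a cutting phase.
# The maintenance level, deficit, and the ~3,500 kcal/lb of body fat figure are
# assumed rules of thumb, not recommendations from this article.

KCAL_PER_LB_FAT = 3500  # commonly cited approximation

def cutting_calories(maintenance_kcal: float, daily_deficit_kcal: float) -> float:
    """Daily calorie target during a cutting phase."""
    return maintenance_kcal - daily_deficit_kcal

def weeks_to_lose(fat_lb: float, daily_deficit_kcal: float) -> float:
    """Rough number of weeks to lose a given amount of fat at a steady deficit."""
    return fat_lb * KCAL_PER_LB_FAT / (daily_deficit_kcal * 7)

if __name__ == "__main__":
    maintenance = 3200   # assumed maintenance intake, kcal/day
    deficit = 500        # assumed daily deficit, kcal/day

    print(f"Cutting target: {cutting_calories(maintenance, deficit):.0f} kcal/day")
    print(f"Approx. weeks to lose 10 lb of fat: {weeks_to_lose(10, deficit):.1f}")
```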
Carbohydrates play an important role for bodybuilders. Carbohydrates give the body energy to deal with the rigors of training and recovery. Bodybuilders seek out low-glycemic polysaccharides and other slowly-digesting carbohydrates, which release energy in a more stable fashion than high-glycemic sugars and starches. This is important as high-glycemic carbohydrates cause a sharp insulin response, which places the body in a state where it is likely to store additional food energy as fat rather than muscle, and which can waste energy that should be directed towards muscle growth. However, bodybuilders frequently do ingest some quickly-digesting sugars (often in form of pure dextrose or maltodextrin) after a workout. This may help to replenish glycogen stores within the muscle, and to stimulate muscle protein synthesis.[9]
Protein is probably one of the most important parts of the diet for the bodybuilder to consider. Functional proteins such as motor proteins, which include myosin, kinesin, and dynein, generate the forces exerted by contracting muscles. Current advice says that bodybuilders should consume 25-30% of their total calorie intake as protein to further their goal of maintaining and improving their body composition.[10] This is a widely debated topic, with many arguing that 1 gram of protein per pound of body weight is ideal, some suggesting that less is sufficient, and others recommending 1.5, 2, or more.[11][12][13][14] It is believed that protein needs to be consumed frequently throughout the day, especially during/after a workout, and before sleep.[15] There is also some debate concerning the best type of protein to take. Chicken, beef, pork, fish, eggs and dairy foods are high in protein, as are some nuts, seeds, beans and lentils. Casein or whey are often used to supplement the diet with additional protein. Whey protein is the type of protein contained in many popular brands of protein supplements, and is preferred by many bodybuilders because of its high Biological Value (BV) and quick absorption rate. Bodybuilders usually require higher quality protein with a high BV rather than relying on protein such as soy, which is often avoided due to its claimed estrogenic properties.[16] Still, some nutrition experts believe that soy, flax seeds and many other plants that contain weak estrogen-like compounds, or phytoestrogens, can be used beneficially, as phytoestrogens compete with estrogen for receptor sites in the male body and can block its actions. This can also include some inhibition of pituitary functions while stimulating the P450 system (the system that eliminates chemicals, hormones, drugs and metabolic waste products from the body) in the liver to more actively process and excrete excess estrogen.[17][18]
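As a rough illustration of the arithmetic behind these competing recommendations, the Python sketch below compares a protein target derived from a share of total calories with targets based on grams per pound of body weight. The body weight, calorie intake, and ratios used are assumed example values, not advice.

```python
# Illustrative sketch only: compares protein targets from the two common
# rules of thumb discussed above. All inputs are assumed example values.

CALORIES_PER_GRAM_PROTEIN = 4  # approximate energy content of protein

def protein_from_calorie_share(total_calories: float, share: float) -> float:
    """Grams of protein when a given share of calories comes from protein."""
    return total_calories * share / CALORIES_PER_GRAM_PROTEIN

def protein_from_bodyweight(bodyweight_lb: float, grams_per_lb: float) -> float:
    """Grams of protein from a grams-per-pound-of-bodyweight rule."""
    return bodyweight_lb * grams_per_lb

if __name__ == "__main__":
    calories = 3000      # assumed daily intake, kcal
    bodyweight = 200     # assumed body weight, lb

    for share in (0.25, 0.30):
        print(f"{share:.0%} of {calories} kcal -> "
              f"{protein_from_calorie_share(calories, share):.0f} g protein/day")

    for g_per_lb in (1.0, 1.5, 2.0):
        print(f"{g_per_lb} g/lb at {bodyweight} lb -> "
              f"{protein_from_bodyweight(bodyweight, g_per_lb):.0f} g protein/day")
```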
Bodybuilders usually split their food intake for the day into 5 to 7 meals of roughly equal nutritional content and attempt to eat at regular intervals (normally between 2 and 3 hours apart). This method purports to serve two purposes: to limit overindulging and to increase basal metabolic rate compared to the traditional 3 meals a day. However, the metabolic claim has been debunked; the most reliable research, using whole-body calorimetry and doubly-labelled water, finds no metabolic advantage to eating more frequently.[19][20]
### Dietary supplements
The important role of nutrition in building muscle and losing fat means bodybuilders may consume a wide variety of dietary supplements.[21] Various products are used in an attempt to augment muscle size, increase the rate of fat loss, improve joint health and prevent potential nutrient deficiencies. Scientific consensus supports the effectiveness of only a small number of commercially available supplements when used by healthy, physically active adults[citation needed]. Creatine is probably the most widely used legal performance-enhancing supplement. Creatine works by being converted into creatine phosphate, which donates a phosphate group for the regeneration of ATP. This provides the body with more energy that lasts longer during short, intense bouts of work such as weight training.
## Performance enhancing substances
Some bodybuilders use drugs to gain an advantage in hypertrophy, especially in professional competitions. Although these substances are illegal without a prescription in many countries, anabolic steroids and precursor substances such as prohormones are used very frequently in professional bodybuilding. Anabolic steroids cause hypertrophy of both types (I and II) of muscle fibers, likely through an increased synthesis of muscle proteins. Steroid abuse is accompanied by negative side-effects such as hepatotoxicity, gynecomastia, acne, male pattern baldness and a temporary decline in the body's own testosterone production, which can cause testicular atrophy.[22][23][24]
Growth Hormone (GH) and insulin are also used. GH is relatively expensive compared to steroids, while insulin is very readily available yet fatal if misused. See Growth hormone treatment for bodybuilding.
## Rest
Although muscle stimulation occurs in the gym lifting weights, muscle growth occurs afterward during rest. Without adequate rest and sleep, muscles do not have an opportunity to recover and build. About eight hours of sleep a night is desirable for the bodybuilder to be refreshed, although this varies from person to person. Additionally, many athletes find a daytime nap further increases their body's ability to build muscle. Some bodybuilders take several naps per day, during peak anabolic phases.
## Overtraining
Overtraining occurs when a bodybuilder has trained to the point where his workload exceeds his recovery capacity. There are many reasons overtraining occurs, including inadequate nutrition, insufficient recovery time between workouts, insufficient sleep, and training at a high intensity for too long or failing to split workouts apart. Training at a high intensity too frequently also stimulates the central nervous system (CNS) and can result in a hyper-adrenergic state that interferes with sleep patterns.[25] To avoid overtraining, intense, frequent training must be met with at least an equal amount of purposeful recovery. Timely provision of carbohydrates, proteins, and various micronutrients such as vitamins, minerals, and phytochemicals, as well as nutritional supplements, is critical.
It has been argued that overtraining can be beneficial. One article published by Muscle & Fitness magazine stated that you can "Overtrain for Big Gains". It suggested that if a bodybuilder is planning a restful holiday and does not wish to inhibit his bodybuilding lifestyle too much, he should overtrain before taking the holiday so the body can rest easily, recuperate, and grow. Overtraining can be used advantageously, as when a bodybuilder is purposely overtrained for a brief period of time to supercompensate during a regeneration phase. These are known as "shock micro-cycles" and were a key training technique used by Soviet athletes.[26] However, the vast majority of overtraining that occurs in average bodybuilders is generally unplanned and completely unnecessary.[27] | https://www.wikidoc.org/index.php/Body_building | |
5765cabcc05b558889925ef3b8e722d0b6381598 | wikidoc | Body orifice | Body orifice
A body orifice is an opening in the body of an animal. In a typical mammalian body such as the human body, the body orifices are:
- The nostrils, for breathing and the associated sense of smell.
- The eyes, for the sense of sight and crying.
- The mouth, for eating, breathing and vocalizations such as speech.
- The ear canals, for the sense of hearing.
- The anus, for defecation.
- The urethra, for urination, and in males, also for ejaculation.
- In females, the vagina, for sexual intercourse, menstruation and childbirth.
- The breast, especially in females for breastfeeding.
In other organisms with different body plans, there are other body orifices, such as the cloaca in reptiles, and the siphon in cephalopods. | Body orifice
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]
A body orifice is an opening in the body of an animal. In a typical mammalian body such as the human body, the body orifices are:
- The nostrils, for breathing and the associated sense of smell.
- The eyes, for the sense of sight and crying.
- The mouth, for eating, breathing and vocalizations such as speech.
- The ear canals, for the sense of hearing.
- The anus, for defecation.
- The urethra, for urination, and in males, also for ejaculation.
- In females, the vagina, for sexual intercourse, menstruation and childbirth.
- The breast, especially in females for breastfeeding.
In other organisms with different body plans, there are other body orifices, such as the cloaca in reptiles, and the siphon in cephalopods. | https://www.wikidoc.org/index.php/Body_orifice | |
b8d52b79bfa0b0093f65c1b69f536a88fb639d9f | wikidoc | Body packers | Body packers
# Body packing
The practice of transporting goods outside the body is called body packing; this is done by a person usually called a mule, or bait. This method is, in general, rarely used today. However, some narcotics-trafficking organizations such as the Mexican cartels will deliberately send one or two people carrying drugs on the outside of their bodies to be caught, so that the authorities are occupied while dozens of mules pass by undetected with drugs inside their bodies. But even these diversion tactics are becoming less and less prevalent as airport security increases.
Swallowing has been used for the transportation of heroin, cocaine, and sometimes for ecstasy.
A swallower typically fills tiny balloons, often made with multilayered condoms or more sophisticated hollow pellets, with small quantities of a drug, usually heroin or cocaine. These balloons may be swallowed or may be hidden in other natural or artificial body cavities such as the rectum, a colostomy, or the vagina.
The swallower then attempts to cross international borders, excrete the balloons, and then sell the drugs for profit. It is far more common for the swallower to be making the trip on behalf of a drug lord or drug dealer. Swallowers are often impoverished and agree to transport the drugs in exchange for money or other favors. In fewer cases, the drug dealers can attempt extortion against people by threatening physical harm against friends or family, but the more common practice is for swallowers to willingly accept the job in exchange for big payoffs. An increasingly popular type of swallowing involves having the drug in the form of liquid-filled balloons or condoms/packages. These are impossible to detect unless the airport has high-sensitivity X-Ray equipment. Most of the major airports in Europe, Canada, and the US have these machines. Note that a liquid mixture of water and the drug will most likely not be detected using a standard X-Ray Machine. As reported in Lost Rights by James Bovard: "Nigerian drug lords have employed an army of 'swallowers', those who will swallow as many as 150 balloons and smuggle drugs into the United States. Given the per capita yearly income of Nigeria is $2,100, Nigerians can collect as much as $15,000 per trip." | Body packers
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]Vidit Bhargava, M.B.B.S [2]
# Body packing
The practice of transporting goods outside the body is called body packing; this is done by a person usually called a mule, or bait. This method is, in general, rarely used today. However, some narcotics-trafficking organizations such as the Mexican cartels will deliberately send one or two people carrying drugs on the outside of their bodies to be caught, so that the authorities are occupied while dozens of mules pass by undetected with drugs inside their bodies. But even these diversion tactics are becoming less and less prevalent as airport security increases.
Swallowing has been used for the transportation of heroin, cocaine, and sometimes for ecstasy.[1]
A swallower typically fills tiny balloons, often made with multilayered condoms or more sophisticated hollow pellets, with small quantities of a drug, usually heroin or cocaine. These balloons may be swallowed or may be hidden in other natural or artificial body cavities such as the rectum, a colostomy,[2] or the vagina.
The swallower then attempts to cross international borders, excrete the balloons, and then sell the drugs for profit. It is far more common for the swallower to be making the trip on behalf of a drug lord or drug dealer. Swallowers are often impoverished and agree to transport the drugs in exchange for money or other favors. In fewer cases, the drug dealers can attempt extortion against people by threatening physical harm against friends or family, but the more common practice is for swallowers to willingly accept the job in exchange for big payoffs. An increasingly popular type of swallowing involves having the drug in the form of liquid-filled balloons or condoms/packages. These are impossible to detect unless the airport has high-sensitivity X-Ray equipment. Most of the major airports in Europe, Canada, and the US have these machines. Note that a liquid mixture of water and the drug will most likely not be detected using a standard X-Ray Machine. As reported in Lost Rights by James Bovard: "Nigerian drug lords have employed an army of 'swallowers', those who will swallow as many as 150 balloons and smuggle drugs into the United States. Given the per capita yearly income of Nigeria is $2,100, Nigerians can collect as much as $15,000 per trip."[3] | https://www.wikidoc.org/index.php/Body_packer | |
8ca533fea1c3d7a4d4c9886869826a0945a58c62 | wikidoc | Body shaping | Body shaping
# Overview
Body contouring is a general term that refers to any surgical procedure that alters different areas of the body, whether it is in a massive weight loss patient or not. Body contouring after massive weight loss refers to a series of procedures that eliminate and/or reduce excess skin and fat that remains after obese individuals lose a significant amount of weight, in a variety of places including the torso, upper arms, chest, and thighs.
# History
Obesity has reached epidemic proportions in the US and many parts of the world. It is defined as a condition in which a person’s Body Mass Index (BMI) reaches 30 or more. BMI is calculated by dividing the patient's weight in kilograms by their height in meters, squared. Normal-weight individuals have a BMI that ranges from 18 to 25. Overweight people have a BMI from 26 to 30, and people at 30 and above are considered obese. Once the BMI reaches 35 and above, patients are considered morbidly obese. From a BMI of 30 and above, a person's life span is shortened. In addition, obesity negatively affects the economic health of a society as well as other aspects of adult and child health, often for life. Childhood obesity is on the rise in Europe as well.
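The following minimal Python sketch restates the BMI arithmetic and the category thresholds quoted in this paragraph; the example patient values are assumed, and the function is an illustration rather than a clinical tool.

```python
# Minimal sketch of the BMI arithmetic described above. The thresholds follow
# the figures quoted in this article; the example patient values are assumed.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / (height_m ** 2)

def category(bmi_value: float) -> str:
    """Rough classification using the ranges given in the paragraph above."""
    if bmi_value < 18:
        return "underweight"
    if bmi_value <= 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    if bmi_value < 35:
        return "obese"
    return "morbidly obese"

if __name__ == "__main__":
    value = bmi(weight_kg=100.0, height_m=1.75)  # assumed example patient
    print(f"BMI = {value:.1f} ({category(value)})")
```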
# Bariatric surgery
In response to a serious obesity crisis, medical science has devised a handful of bariatric (obesity treatment) surgeries, including gastric bypass, stomach stapling, lap banding, stomach reduction and other techniques that reduce the amount of food the stomach can hold. For instance, in the United States, the American Society of Bariatric Surgery (ASBS) reports that the year 2000 saw an estimated 37,700 surgeries to restrict the size of a patient’s stomach. But in 2006, the most recent year for which statistics are available, there were 177,600 such operations. Usually, by 18 months after the surgery, patients report having lost anywhere from 45 to 136 kg (100 to 300 pounds).
# Body lifting
Food-restriction operations on the stomach have several side effects. One undesirable side effect that is very bothersome and visible is the loose, hanging skin that covers much of a weight loss patient's body. Because hundreds of pounds have stretched the patient’s skin to the maximum, it has lost its elasticity and the ability to spring back. Instead, the newly slimmed patient must deal with so much extra hanging skin that he or she can actually stumble over an overhanging panniculus, the large apron of skin hanging from the stomach that can cover the pubis and groin areas. Notably, many extra inches (and sometimes feet) of floppy skin hang from the upper arms, the chest, the stomach, the upper thighs and buttocks.
Most people who have lost massive amounts of weight complain about the difficulty of getting their fleshy arms into sleeves and their excess stomach skin tucked into clothing. Many say that sitting on the loose skin is like sitting on Jell-O. Most women in this condition require a mastopexy, or breast lift, often in conjunction with breast implants. Men who have body shaping surgery usually undergo male breast reduction surgery to remove the pendulous skin hanging from their chests.
The extra rolls and sheets of skin rub against each other, creating many spots of irritation and leading to hygienic difficulties. The masses of excess skin also make any form of exercise difficult. Many patients become reclusive and are hesitant to enter romantic or social relationships. Said one massive weight loss patient: "After I lost 155 pounds, (70 kg) it was like I had a size 26 skin hanging on my size 8 body".
The procedure is expensive, often running in the neighborhood of US$20,000 to US$50,000 for an entire body, and it usually leaves long, visible scars on the arms, chest, stomach and legs. Nonetheless, the body lifting procedure is growing in popularity, thanks to a concomitant rise in weight loss surgeries on the stomach. For instance, in the year 2000, 207 lower body lifts took place, according to the American Society for Aesthetic Plastic Surgery (ASAPS). But in 2006, 10,323 such lift operations took place, an increase of 19 percent over the previous year. Most surgeons break the surgical task into an upper, and a lower, body lift. A lower body lift removes the sagging skin on the back, abdomen, buttocks and thighs, while the upper body procedure removes loose skin from the arms, breasts and chest.
# Potential risks and side-effects
Body lifting is not lightly undertaken. The process requires a commitment on the part of the patient, who must stay with the program through bariatric surgery, the 18 months required for weight loss, and then the body contouring procedures and recovery. Often, beginning to end takes three years. A single body lifting operation can require seven to 10 hours under general anesthesia, blood transfusions and, often, another surgeon to assist. Plastic surgeons advise patients that body shaping is not an obesity operation. A patient who is more than 50 percent over his or her ideal weight must first drop as many pounds as possible before proceeding. Other medical considerations the plastic surgeon must take into account include scars already present on the body, current medical conditions such as heart disease or bleeding disorders, and whether the patient smokes. Other possible risks include infections, reactions, and complications due to being under anesthesia for longer than six hours. The patient may also experience seroma, a build-up of fluid; dehiscence (wound separation); and deep vein thrombosis (blood clots forming in the legs). Rare complications include lymphatic injury and major wound dehiscence. The hospital stay for the procedure can require from one to four days, while recovery can take about a month for a total body lift. Essentially, the patient trades "skin for scars". But skin relaxation is always a risk and may not be stopped with a single procedure. Reputable plastic surgeons will explain all the risks and complications in full to their patients and even encourage a second or third consultation visit with other plastic surgeons to get additional views on such a major undertaking.
# Body lifting surgical procedures
While body shaping can be done in one marathon session, it is usually broken into one to three surgical stages, with the patient under general anesthesia. But if the patient is a smoker, has a history of deep venous thrombosis or clotting disorders along with a high BMI and other medical risk factors, the surgeon will probably insist on doing several short procedures in a hospital setting to ensure maximum safety for the patient.
Please note that there are also non-surgical procedures such as the UltraShape procedure.
It is safe, non-invasive and backed-up by extensive clinical trials.
The following are the individual components of body contouring:
Arm lift or brachioplasty. The extra flesh on the arms of bariatric patients virtually always appears on the underside of the upper arm and is sometimes referred to as "bat wings". Surgeons realize they must use long incisions made from the armpit to the elbow to remove the skin and create a more pleasing contour. Consequently, surgeons open the arm on its underside so that the resulting scar is fairly well hidden. A brachioplasty procedure can employ some liposuction after the incision is made. With the arm opened, the surgeon pulls the skin tight and then trims away the excess skin which, depending on the patient, can be a pound of skin per arm or more.
Breast lift or mastopexy. By trimming excess tissue from the upper breast, the surgeon can move breasts which usually droop to the umbilicus to a more upright and full position. The procedure also often requires an implant to make up for lost fat and tissue inside the breast. Scars on women are almost always hidden inside the area covered by the bra. Most men gladly exchange several light, quickly fading horizontal scars across their chest muscles for a sleeker upper trunk.
Stomach lift or abdominoplasty. Excess skin hanging down over the pubic region is often the distorting feature that most concerns and bothers patients. The stomach pannus retains moisture and causes rashes due to skin rubbing against itself, which usually leads to poor hygiene. While the surgical procedure to remove it is known as a panniculectomy, there is often more work to be done for patients who suffer from large amounts of hanging skin. To provide improved contours on the waist, back and flanks, surgeons sometimes perform a belt lipectomy (also known as a torsoplasty or a circumferential lipectomy). The incision goes all the way around the patient’s midsection at the level of the lower waist. The surgeon uses more liposuction on the stomach and flanks while trimming excess skin from the patient’s back and sides as well. The abdominoplasty and belt lipectomy incisions are placed so that the resulting scar is hidden within most underwear and swimsuits.
Lower body lift trims excess skin on the buttocks and thighs. For an inner thigh lift, the surgeon makes an incision high on the inner leg, starting near the groin and continuing down to the knee. Some fat may be removed with liposuction. The surgeon then removes excess skin and redrapes the remaining skin before closing the long incision, leaving the patient with tighter and more attractive thighs.
The outer thigh and buttock can be lifted through a hip-to-hip incision across the back, above the buttocks.
# Usual results
While considered major surgery, the outcome of body shaping is generally extremely satisfying to patients although it can require several months to see the full effects of the procedure. But with a flatter stomach, more curve to the waist and smaller hips, less "back fat" and smaller, better shaped buttocks, patients usually develop more self confidence and become more active. After healing, most patients can buy and fit into easily available clothing, participate in sports and physical fitness activities again and become more involved in social and romantic situations.
Researchers at the University of Pittsburgh enrolled 18 bariatric patients just before the subjects underwent body contouring; their average age was 46, plus or minus ten years. The researchers studied the patients’ body perception, quality of life and mood at three and six months after the body contouring procedures. They found that the subjects’ quality of life improved and their moods were significantly enhanced, remaining stable at the six-month point.
Most body lifting patients return to non-strenuous work in about two to three weeks.
Except for brachioplasty, virtually all body shaping procedures require the patient to wear a support or compression garment for two to six weeks. The garment speeds and aids in healing.
Patients can usually drive again within one to three weeks, depending on the extent of the surgery, their health and general robustness.
# Further reading
- Total Body Lift: Reshaping the Breasts, Chest, Arms, Thighs, Hips, Waist, Abdomen & Knees after Weight Loss, Aging & Pregnancies, Dennis J. Hurwitz, M.D. F.A.C.S., M.D. Publish, NYC
- Body Contouring Surgery After Weight Loss, Joseph Capella, M.D., Peter Rubin, M.D., and Jeffrey Sebastian, M.D., Addicus Books, Omaha, Nebraska
- Eating Well After Weight Loss Surgery, Patt Levine, Michele Bontmpo-Saray and William B. Inabnet, Marlowe & Company. Washington, D.C. | Body shaping
Editors-In-Chief: Martin I. Newman, M.D., FACS, Cleveland Clinic Florida, [1]; Michel C. Samson, M.D., FRCSC, FACS [2]
# Overview
Body contouring is a general term that refers to any surgical procedure that alters different areas of the body, whether it is in a massive weight loss patient or not. Body contouring after massive weight loss refers to a series of procedures that eliminate and/or reduce excess skin and fat that remains after obese individuals lose a significant amount of weight, in a variety of places including the torso, upper arms, chest, and thighs.
# History
Obesity has reached epidemic proportions in the US and many parts of the world. It is defined as a condition in which a person’s Body Mass Index (BMI) reaches 30 or more. BMI is calculated by dividing the patient's weight in kilograms by their height in meters, squared. Normal-weight individuals have a BMI that ranges from 18 to 25. Overweight people have a BMI from 26 to 30, and people at 30 and above are considered obese. Once the BMI reaches 35 and above, patients are considered morbidly obese. From a BMI of 30 and above, a person's life span is shortened. In addition, obesity negatively affects the economic health of a society as well as other aspects of adult and child health, often for life.[1][2] Childhood obesity is on the rise in Europe as well.[3]
# Bariatric surgery
In response to a serious obesity crisis, medical science has devised a handful of bariatric (obesity treatment) surgeries, including gastric bypass, stomach stapling, lap banding, stomach reduction and other techniques that reduce the amount of food the stomach can hold. For instance, in the United States, the American Society of Bariatric Surgery (ASBS) reports that the year 2000 saw an estimated 37,700 surgeries to restrict the size of a patient’s stomach. But in 2006, the most recent year for which statistics are available, there were 177,600 such operations. Usually, by 18 months after the surgery, patients report having lost anywhere from 45 to 136 kg (100 to 300 pounds).
# Body lifting
Food-restriction operations on the stomach have several side effects. One undesirable side effect that is very bothersome and visible is the loose, hanging skin that covers much of a weight loss patient's body. Because hundreds of pounds have stretched the patient’s skin to the maximum, it has lost its elasticity and the ability to spring back. Instead, the newly slimmed patient must deal with so much extra hanging skin that he or she can actually stumble over an overhanging panniculus, the large apron of skin hanging from the stomach that can cover the pubis and groin areas. Notably, many extra inches (and sometimes feet) of floppy skin hang from the upper arms, the chest, the stomach, the upper thighs and buttocks.
Most people who have lost massive amounts of weight complain about the difficulty of getting their fleshy arms into sleeves and their excess stomach skin tucked into clothing. Many say that sitting on the loose skin is like sitting on Jell-O. Most women in this condition require a mastopexy, or breast lift, often in conjunction with breast implants. Men who have body shaping surgery usually undergo male breast reduction surgery to remove the pendulous skin hanging from their chests.
The extra rolls and sheets of skin rub against each other, creating many spots of irritation and leading to hygienic difficulties. The masses of excess skin also make any form of exercise difficult. Many patients become reclusive and are hesitant to enter romantic or social relationships. Said one massive weight loss patient: "After I lost 155 pounds, (70 kg) it was like I had a size 26 skin hanging on my size 8 body".[4]
The procedure is expensive, often running in the neighborhood of US$20,000 to US$50,000 for an entire body, and it usually leaves long, visible scars on the arms, chest, stomach and legs. Nonetheless, the body lifting procedure is growing in popularity, thanks to a concomitant rise in weight loss surgeries on the stomach. For instance, in the year 2000, 207 lower body lifts took place, according to the American Society for Aesthetic Plastic Surgery (ASAPS). But in 2006, 10,323 such lift operations took place, an increase of 19 percent over the previous year.[5] Most surgeons break the surgical task into an upper, and a lower, body lift. A lower body lift removes the sagging skin on the back, abdomen, buttocks and thighs, while the upper body procedure removes loose skin from the arms, breasts and chest.
# Potential risks and side-effects
Body lifting is not lightly undertaken. The process requires a commitment on the part of the patient, who must stay with the program through bariatric surgery, the 18 months required for weight loss, and then the body contouring procedures and recovery. Often, beginning to end takes three years.[6] A single body lifting operation can require seven to 10 hours under general anesthesia, blood transfusions and, often, another surgeon to assist. Plastic surgeons advise patients that body shaping is not an obesity operation. A patient who is more than 50 percent over his or her ideal weight must first drop as many pounds as possible before proceeding. Other medical considerations the plastic surgeon must take into account include scars already present on the body, current medical conditions such as heart disease or bleeding disorders, and whether the patient smokes. Other possible risks include infections, reactions, and complications due to being under anesthesia for longer than six hours. The patient may also experience seroma, a build-up of fluid; dehiscence (wound separation); and deep vein thrombosis (blood clots forming in the legs). Rare complications include lymphatic injury and major wound dehiscence. The hospital stay for the procedure can require from one to four days, while recovery can take about a month for a total body lift. Essentially, the patient trades "skin for scars". But skin relaxation is always a risk and may not be stopped with a single procedure.[7] Reputable plastic surgeons will explain all the risks and complications in full to their patients and even encourage a second or third consultation visit with other plastic surgeons to get additional views on such a major undertaking.
# Body lifting surgical procedures
While body shaping can be done in one marathon session, it is usually broken into one to three surgical stages, with the patient under general anesthesia. But if the patient is a smoker, has a history of deep venous thrombosis or clotting disorders along with a high BMI and other medical risk factors, the surgeon will probably insist on doing several short procedures in a hospital setting to ensure maximum safety for the patient.
Please note that there are also non-surgical procedures such as the UltraShape procedure.
It is safe, non-invasive and backed-up by extensive clinical trials.
The following are the individual components of body contouring:
Arm lift or brachioplasty. The extra flesh on the arms of bariatric patients virtually always appears on the underside of the upper arm and is sometimes referred to as "bat wings". Surgeons realize they must use long incisions made from the armpit to the elbow to remove the skin and create a more pleasing contour. Consequently, surgeons open the arm on its underside so that the resulting scar is fairly well hidden. A brachioplasty procedure can employ some liposuction after the incision is made. With the arm opened, the surgeon pulls the skin tight and then trims away the excess skin which, depending on the patient, can be a pound of skin per arm or more.
Breast lift or mastopexy. By trimming excess tissue from the upper breast, the surgeon can move breasts which usually droop to the umbilicus to a more upright and full position. The procedure also often requires an implant to make up for lost fat and tissue inside the breast. Scars on women are almost always hidden inside the area covered by the bra. Most men gladly exchange several light, quickly fading horizontal scars across their chest muscles for a sleeker upper trunk.
Stomach lift or abdominoplasty. Excess skin hanging down over the pubic region is often the distorting feature that most concerns and bothers patients. The stomach pannus retains moisture and causes rashes due to skin rubbing against itself, which usually leads to poor hygiene. While the surgical procedure to remove it is known as a panniculectomy, there is often more work to be done for patients who suffer from large amounts of hanging skin. To provide improved contours on the waist, back and flanks, surgeons sometimes perform a belt lipectomy (also known as a torsoplasty or a circumferential lipectomy).[8] The incision goes all the way around the patient’s midsection at the level of the lower waist. The surgeon uses more liposuction on the stomach and flanks while trimming excess skin from the patient’s back and sides as well. The abdominoplasty and belt lipectomy incisions are placed so that the resulting scar is hidden within most underwear and swimsuits.
Lower body lift trims excess skin on the buttocks and thighs. For an inner thigh lift, the surgeon makes an incision high on the inner leg, starting near the groin and continuing down to the knee. Some fat may be removed with liposuction. The surgeon then removes excess skin and redrapes the remaining skin before closing the long incision, leaving the patient with tighter and more attractive thighs.
The outer thigh and buttock can be lifted through a hip-to-hip incision across the back, above the buttocks.
# Usual results
While considered major surgery, the outcome of body shaping is generally extremely satisfying to patients although it can require several months to see the full effects of the procedure. But with a flatter stomach, more curve to the waist and smaller hips, less "back fat" and smaller, better shaped buttocks, patients usually develop more self confidence and become more active. After healing, most patients can buy and fit into easily available clothing, participate in sports and physical fitness activities again and become more involved in social and romantic situations.
Researchers at the University of Pittsburgh enrolled 18 bariatric patients just before the subjects underwent body contouring; their average age was 46, plus or minus ten years. The researchers studied the patients’ body perception, quality of life and mood at three and six months after the body contouring procedures. They found that the subjects’ quality of life improved and their moods were significantly enhanced, remaining stable at the six-month point.[9]
Most body lifting patients return to non-strenuous work in about two to three weeks.
Except for brachioplasty, virtually all body shaping procedures require the patient to wear a support or compression garment for two to six weeks. The garment speeds and aids in healing.
Patients can usually drive again within one to three weeks, depending on the extent of the surgery, their health and general robustness.
# Further reading
- Total Body Lift: Reshaping the Breasts, Chest, Arms, Thighs, Hips, Waist, Abdomen & Knees after Weight Loss, Aging & Pregnancies, Dennis J. Hurwitz, M.D. F.A.C.S., M.D. Publish, NYC
- Body Contouring Surgery After Weight Loss, Joseph Capella, M.D., Peter Rubin, M.D., and Jeffrey Sebastian, M.D., Addicus Books, Omaha, Nebraska
- Eating Well After Weight Loss Surgery, Patt Levine, Michele Bontmpo-Saray and William B. Inabnet, Marlowe & Company. Washington, D.C. | https://www.wikidoc.org/index.php/Body_shaping | |
fd92ddb239ac54105a31ae399efc2a6b38c2809f | wikidoc | Superheating | Superheating
In physics, superheating (sometimes referred to as boiling retardation, or boiling delay) is the phenomenon in which a liquid is heated to a temperature higher than its standard boiling point, without actually boiling. This can be caused by rapidly heating a homogeneous substance while leaving it undisturbed (in order to avoid the introduction of bubbles at nucleation sites). Superheated liquids can be stable above their usual boiling point if the pressure is above atmospheric (see superheated water). This article refers only to liquids above their actual boiling point in a metastable state
# Mechanics
With the exception of superheated water below the Earth's crust, a superheated liquid is usually the result of artificial circumstances. Being such, it is metastable, and is disrupted once the circumstances abate, leading to the liquid boiling very suddenly and violently (a steam explosion). Superheating is sometimes a concern with microwave ovens, some of which can quickly heat water without physical disturbance. A person agitating a container full of superheated water by attempting to remove it from a microwave could easily be scalded.
Superheating is common when a person puts an undisturbed cup of water into the microwave and heats it. Once finished, the water appears to have not come to a boil. Once the water is disturbed, it violently comes to a boil. This can be simply from contact with the cup, or the addition of substances like instant coffee or sugar, which could result in hot scalding water shooting out. The chances of superheating are greater with smooth containers, like brand-new glassware that lacks any scratches (scratches can house small pockets of air, which can serve as a nucleation point).
Rotating dishes in modern microwave ovens can also provide enough perturbation to prevent superheating.
There have been some injuries caused by superheated water, as when a person makes instant coffee and adds the coffee to the superheated water. This sometimes results in an "explosion" of bubbles. There are some ways to prevent superheating in a microwave oven, such as putting a popsicle stick in the glass or boiling the water in a scratched container. However, this is very rare and can only happen under certain conditions. A foreign object added to the water prior to heating, whether it be a plastic spoon or a salt cube, greatly diminishes the chance of an explosion because it provides nucleation sites.
Superheating also occurs in nuclear reactors and other types of high-temperature steam generators used for producing electricity, and is guarded against when it leads to corrosion or embrittlement of metal pipes.
Magnetrons, such as those used in microwave ovens, can also superheat steam in steam-power or steam-heating circuits, exponentially increasing steam thermal capacity. Advanced theories include powering the magnetron superheating circuit from electricity generated by the waste heat from the main steam circuit, resulting in additional heating BTUs for buildings at zero additional fuel cost or additional fossil fuel pollution.
# Myth
A commonly mistaken belief is that superheating can only occur in pure substances. This is untrue because nucleation points for boiling do not include solid nucleation centres, but rather, seed-bubbles that occur due to the presence of solid nucleation centres. In other words, if there are solid nucleation centres in a substance (e.g. impure water) but without seed-bubbles (e.g. leaving impure water to stand or boiling it once to rid the water of the bubbles), superheating can occur. It is interesting to note however, that nucleation points for freezing include solid nucleation centres. That is to say, an impure substance cannot undergo supercooling.
# Scope restriction
Milk and water with starch content do not boil over because of superheating, but rather result in extreme foam buildup. This foam is stabilized by special substances in the liquids and therefore does not burst. | Superheating
In physics, superheating (sometimes referred to as boiling retardation, or boiling delay) is the phenomenon in which a liquid is heated to a temperature higher than its standard boiling point, without actually boiling. This can be caused by rapidly heating a homogeneous substance while leaving it undisturbed (in order to avoid the introduction of bubbles at nucleation sites). Superheated liquids can be stable above their usual boiling point if the pressure is above atmospheric (see superheated water). This article refers only to liquids above their actual boiling point in a metastable state
# Mechanics
With the exception of superheated water below the Earth's crust, a superheated liquid is usually the result of artificial circumstances. Being such, it is metastable, and is disrupted once the circumstances abate, leading to the liquid boiling very suddenly and violently (a steam explosion). Superheating is sometimes a concern with microwave ovens, some of which can quickly heat water without physical disturbance. A person agitating a container full of superheated water by attempting to remove it from a microwave could easily be scalded.
Superheating is common when a person puts an undisturbed cup of water into the microwave and heats it. Once finished, the water appears to have not come to a boil. Once the water is disturbed, it violently comes to a boil. This can be simply from contact with the cup, or the addition of substances like instant coffee or sugar, which could result in hot scalding water shooting out. The chances of superheating are greater with smooth containers, like brand-new glassware that lacks any scratches (scratches can house small pockets of air, which can serve as a nucleation point).
Rotating dishes in modern microwave ovens can also provide enough perturbation to prevent superheating.
There have been some injuries by superheating water, like when a person makes instant coffee and adds the coffee to the superheated water[1]. This sometimes results in an "explosion" of bubbles. There are some ways to prevent superheating in a microwave oven, like putting a popsicle stick in the glass, or having a scratched container to boil the water in. However this is very, very rare and can only happen under certain conditions. A foreign object added to the water prior to heating, whether it be a plastic spoon or a salt cube, greatly diminishes the chance of an explosion because it provides nucleation sites.
Superheating also occurs in nuclear reactors and other types of high-temperature steam generators used for producing electricity, and is guarded against when it leads to corrosion or embrittlement of metal pipes.
Magnetrons, such as those used in microwave ovens, can also superheat steam in steam-power or steam-heating circuits, exponentially increasing steam thermal capacity. Advanced theories include powering the magnetron superheating circuit from electricity generated by the waste heat from the main steam circuit, resulting in additional heating BTUs for buildings at zero additional fuel cost or additional fossil fuel pollution.
# Myth
A commonly mistaken belief is that superheating can only occur in pure substances. This is untrue because nucleation points for boiling do not include solid nucleation centres, but rather, seed-bubbles that occur due to the presence of solid nucleation centres. In other words, if there are solid nucleation centres in a substance (e.g. impure water) but without seed-bubbles (e.g. leaving impure water to stand or boiling it once to rid the water of the bubbles), superheating can occur[2][3]. It is interesting to note however, that nucleation points for freezing include solid nucleation centres. That is to say, an impure substance cannot undergo supercooling.
# Scope restriction
Milk and water with starch content do not boil over because of superheating, but rather result in extreme foam buildup. This foam is stabilized by special substances in the liquids and therefore does not burst. | https://www.wikidoc.org/index.php/Boiling_delay | |
7dc3eb3b05a611d83d2f1b280aa91f526dbd180a | wikidoc | Bone density | Bone density
# Overview
Bone density is a medical term referring to the amount of matter per cubic centimeter of bones. It is measured by a procedure called densitometry, often performed in the radiology or nuclear medicine departments of hospitals or clinics. The measurement is painless and non-invasive and involves minimal radiation exposure. Measurements are most commonly made over the lumbar spine and over the upper part of the hip. The forearm is scanned if either the hip or the lumbar spine can't be.
# Indication
The most common reason for measuring bone density is to screen for, or diagnose, osteoporosis.
# Interpretation
Results are often reported in 3 terms:
- Measured density in g/cm3
- Z-score, the number of standard deviations above or below the mean for the patient's age and sex
- T-score, the number of standard deviations above or below the mean for a healthy 30 year old adult of the same sex as the patient
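As an illustration of how these scores are derived from a measured value, here is a minimal Python sketch; the function name and the reference means and standard deviations are illustrative placeholders, not values from any real reference database.

```python
def t_and_z_scores(measured_bmd, young_adult_mean, young_adult_sd,
                   age_matched_mean, age_matched_sd):
    """Return (T-score, Z-score) for a measured bone mineral density.

    T-score: standard deviations from the mean of a healthy 30-year-old
    of the same sex; Z-score: standard deviations from the mean for the
    patient's age and sex.
    """
    t_score = (measured_bmd - young_adult_mean) / young_adult_sd
    z_score = (measured_bmd - age_matched_mean) / age_matched_sd
    return t_score, z_score

# Placeholder reference values, for illustration only.
t, z = t_and_z_scores(measured_bmd=0.85,
                      young_adult_mean=1.00, young_adult_sd=0.12,
                      age_matched_mean=0.90, age_matched_sd=0.12)
print(f"T-score: {t:.1f}, Z-score: {z:.1f}")
```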
# Limitations
The technique has several limitations.
- Measurement can be affected by the size of the patient, the thickness of tissue overlying the bone, and other factors extraneous to the bones.
- Bone density is a proxy measurement for bone strength, which is the resistance to fracture and the truly significant characteristic. Although the two are usually related, there are some circumstances in which bone density is a poorer indicator of bone strength.
- Reference standards for some populations (e.g., children) are unavailable for many of the methods used.
- Crushed vertebrae can result in falsely high bone density, so they must be excluded from analysis. | Bone density
# Overview
Bone density is a medical term referring to the amount of matter per cubic centimeter of bones. It is measured by a procedure called densitometry, often performed in the radiology or nuclear medicine departments of hospitals or clinics. The measurement is painless and non-invasive and involves minimal radiation exposure. Measurements are most commonly made over the lumbar spine and over the upper part of the hip. The forearm is scanned if either the hip or the lumbar spine can't be.
# Indication
The most common reason for measuring bone density is to screen for, or diagnose, osteoporosis.
# Interpretation
Results are often reported in 3 terms:
- Measured density in g/cm3
- Z-score, the number of standard deviations above or below the mean for the patient's age and sex
- T-score, the number of standard deviations above or below the mean for a healthy 30 year old adult of the same sex as the patient
# Limitations
The technique has several limitations.
- Measurement can be affected by the size of the patient, the thickness of tissue overlying the bone, and other factors extraneous to the bones.
- Bone density is a proxy measurement for bone strength, which is the resistance to fracture and the truly significant characteristic. Although the two are usually related, there are some circumstances in which bone density is a poorer indicator of bone strength.
- Reference standards for some populations (e.g., children) are unavailable for many of the methods used.
- Crushed vertebrae can result in falsely high bone density so must be excluded from analysis.
| https://www.wikidoc.org/index.php/Bone_Density |
99beb7138e6dd77e65b7460dcc6fbc0300d2a52b | wikidoc | Bone healing | Bone healing
Bone healing or fracture healing is a proliferative physiological process in which the body facilitates the repair of bone fractures.
# Physiology and process of healing
In the process of fracture healing, several phases of recovery facilitate the proliferation and protection of the areas surrounding fractures and dislocations. The length of the process depends on the extent of the injury; usual margins of two to three weeks are given for the repair of most fractures of the upper body, and four weeks or more for injuries of the lower body.
The process of complete bone regeneration can depend upon the angle of dislocation or fracture, and dislocated bones are generally pushed back into place via relocation, with or without anaesthetic. While bone formation usually spans the entire duration of the healing process, in some instances the bone marrow within the fracture has healed two or fewer weeks before the final remodeling phase.
While immobilization and surgery may facilitate healing, a fracture ultimately heals through physiological processes. The healing process is mainly determined by the periosteum (the connective tissue membrane covering the bone). The periosteum is the primary source of precursor cells which develop into chondroblasts and osteoblasts that are essential to the healing of bone. The bone marrow (when present), endosteum, small blood vessels, and fibroblasts are secondary sources of precursor cells.
## Phases of fracture healing
There are three major phases of fracture healing, two of which can be further sub-divided to make a total of five phases;
- 1. Reactive Phase
  - i. Fracture and inflammatory phase
  - ii. Granulation tissue formation
- 2. Reparative Phase
  - iii. Callus formation
  - iv. Lamellar bone deposition
- 3. Remodeling Phase
  - v. Remodeling to original bone contour
### Reactive Phase
After fracture, the first change seen by light and electron microscopy is the presence of blood cells within the tissues which are adjacent to the injury site. Soon after fracture, the blood vessels constrict, stopping any further bleeding. Within a few hours after fracture, the extravascular blood cells, known as a "hematoma", form a blood clot. All of the cells within the blood clot degenerate and die. Some of the cells outside of the blood clot, but adjacent to the injury site, also degenerate and die. Within this same area, the fibroblasts survive and replicate. They form a loose aggregate of cells, interspersed with small blood vessels, known as granulation tissue.
### Reparative Phase
Days after fracture, the cells of the periosteum replicate and transform. The periosteal cells proximal to the fracture gap develop into chondroblasts and form hyaline cartilage. The periosteal cells distal to the fracture gap develop into osteoblasts and form woven bone. The fibroblasts within the granulation tissue also develop into chondroblasts and form hyaline cartilage. These two new tissues grow in size until they unite with their counterparts from other pieces of the fracture. This process forms the fracture callus. Eventually, the fracture gap is bridged by the hyaline cartilage and woven bone, restoring some of its original strength.
The next phase is the replacement of the hyaline cartilage and woven bone with "lamellar bone". The replacement process is known as "endochondral ossification" with respect to the hyaline cartilage and "bony substitution" with respect to the woven bone. Substitution of the woven bone with lamellar bone precedes the substitution of the hyaline cartilage with lamellar bone. The lamellar bone begins forming soon after the collagen matrix of either tissue becomes mineralized. At this point, "vascular channels" with many accompanying osteoblasts penetrate the mineralized matrix. The osteoblasts form new lamellar bone upon the recently exposed surface of the mineralized matrix. This new lamellar bone is in the form of "trabecular bone". Eventually, all of the woven bone and cartilage of the original fracture callus is replaced by trabecular bone, restoring much, if not all, of the bone's original strength.
### Remodeling Phase
The remodeling process substitutes the trabecular bone with "compact bone". The trabecular bone is first resorbed by osteoclasts, creating a shallow resorption pit known as a "Howship's lacuna". Then osteoblasts deposit compact bone within the resorption pit. Eventually, the fracture callus is remodelled into a new shape which closely duplicates the bone's original shape and strength.
# Other forms and complications
## Inadequate healing or formation
Inadequate bone healing, an "incomplete" form of bone healing, occurs when the regeneration of bone through natural processes is impeded by other factors, such as malnutrition or immune disorders, which may prevent the repair of bone through lack of nutrient intake, as seen in osteomalacia and osteoporosis.
Similarly, factors such as the intake of carcinogens (for example, nicotine) or exposure to radiation may lead to the malformation or incomplete healing of bones. This can further facilitate the formation of new fractures, because the already weakened site of injury is more easily affected by impact or strain, and can also lead to pseudarthrosis, an undesired mobility in what appears to have become a new joint.
# Medical Treatments
In terms of medical treatments and procedures, several options are available to facilitate faster repair of bone in patients with one of the aforementioned bone disorders. Bone morphogenetic proteins are used in small amounts in clinical practice, alongside immobilising surgical procedures such as vertebroplasty or percutaneous kyphoplasty in the case of bone malformation, to stimulate the growth of bone in areas which require "strengthening", such as in spinal fusion.
## Osseointegration
Osseointegration is the pattern of growth exhibited by bone tissue during assimilation of surgically-implanted devices, prostheses or bone grafts to be used as either replacement parts (e.g., hip) or as anchors (e.g., endosseous dental implants). | Bone healing
Bone healing or fracture healing is a proliferative physiological process, in which the body facilitates repair of Bone fractures.
# Physiology and process of healing
In the process of fracture healing, several phases of recovery facilitate the proliferation and protection of the areas surrounding fractures and dislocations. The length of the process is relevant to the extent of the injury, and usual margins of two to three weeks are given for the reparation of the majority of upper bodily fractures; anywhere above four weeks given for lower bodily injury.
The process of the entire regeneration of the bone can depend upon the angle of dislocation or fracture, and dislocated bones are generally pushed back into place via relocation with or without anaesthetic. While the bone formation usually spans the entire duration of the healing process, in some instances, bone marrow within the fracture having healed two or fewer weeks before the final remodeling phase.
While immobilization and surgery may facilitate healing, a fracture ultimately heals through physiological processes. The healing process is mainly determined by the periosteum (the connective tissue membrane covering the bone). The periosteum is the primary source of precursor cells which develop into chondroblasts and osteoblasts that are essential to the healing of bone. The bone marrow (when present), endosteum, small blood vessels, and fibroblasts are secondary sources of precursor cells.
## Phases of fracture healing
There are three major phases of fracture healing, two of which can be further sub-divided to make a total of five phases;
- 1. Reactive Phase
  - i. Fracture and inflammatory phase
  - ii. Granulation tissue formation
- 2. Reparative Phase
  - iii. Callus formation
  - iv. Lamellar bone deposition
- 3. Remodeling Phase
  - v. Remodeling to original bone contour
### Reactive Phase
After fracture, the first change seen by light and electron microscopy is the presence of blood cells within the tissues which are adjacent to the injury site. Soon after fracture, the blood vessels constrict, stopping any further bleeding.[1] Within a few hours after fracture, the extravascular blood cells, known as a "hematoma", form a blood clot. All of the cells within the blood clot degenerate and die.[2] Some of the cells outside of the blood clot, but adjacent to the injury site, also degenerate and die.[3] Within this same area, the fibroblasts survive and replicate. They form a loose aggregate of cells, interspersed with small blood vessels, known as granulation tissue.[4]
### Reparative Phase
Days after fracture, the cells of the periosteum replicate and transform. The periosteal cells proximal to the fracture gap develop into chondroblasts and form hyaline cartilage. The periosteal cells distal to the fracture gap develop into osteoblasts and form woven bone. The fibroblasts within the granulation tissue also develop into chondroblasts and form hyaline cartilage.[5] These two new tissues grow in size until they unite with their counterparts from other pieces of the fracture. This process forms the fracture callus.[6] Eventually, the fracture gap is bridged by the hyaline cartilage and woven bone, restoring some of its original strength.
The next phase is the replacement of the hyaline cartilage and woven bone with "lamellar bone". The replacement process is known as "endochondral ossification" with respect to the hyaline cartilage and "bony substitution" with respect to the woven bone. Substitution of the woven bone with lamellar bone precedes the substitution of the hyaline cartilage with lamellar bone. The lamellar bone begins forming soon after the collagen matrix of either tissue becomes mineralized. At this point, "vascular channels" with many accompanying osteoblasts penetrate the mineralized matrix. The osteoblasts form new lamellar bone upon the recently exposed surface of the mineralized matrix. This new lamellar bone is in the form of "trabecular bone".[7] Eventually, all of the woven bone and cartilage of the original fracture callus is replaced by trabecular bone, restoring much, if not all, of the bone's original strength.
### Remodeling Phase
The remodeling process substitutes the trabecular bone with "compact bone". The trabecular bone is first resorbed by osteoclasts, creating a shallow resorption pit known as a "Howship's lacuna". Then osteoblasts deposit compact bone within the resorption pit. Eventually, the fracture callus is remodelled into a new shape which closely duplicates the bone's original shape and strength.[8]
# Other forms and complications
## Inadequate healing or formation
Inadequate bone healing is known as an "incomplete" form of bone healing, in which the regeneration of bone through natural processes is impeded due to other factors, such as malnutrition or immune disorders, which may prevent the reparation of bone due to the lack of nutrient intake, such as that seen in the case of osteomalacia and osteoporosis.
Similarly, factors such as the intake of carcinogens, such as nicotine or exposure to radiation may lead to the malformation or incomplete healing of bones, which can further facilitate the formation of newer fractures, due to the already weakened site of injury being more easily affected by impact or strain, as well pseudarthrosis, undesired mobility in what appears to have become a new joint.
# Medical Treatments
In terms of medical treatments and procedures, several options are available which facilitate faster reparation of bone if the specific patient has an aforementioned bone disorder. The use of Bone morphogenetic proteins is incurred in small amounts, and is also used in clinical practice, alongside immobilising surgical procedures involving vertebroplasty or percutaneous kyphoplasty in the case of bone malformation, and stimulate the growth of bone in areas which require "strengthening", such as in the case of spinal fusion.
## Osseointegration
Osseointegration is the pattern of growth exhibited by bone tissue during assimilation of surgically-implanted devices, prostheses or bone grafts to be used as either replacement parts (e.g., hip) or as anchors (e.g., endosseous dental implants). | https://www.wikidoc.org/index.php/Bone_healing | |
23541179ed1d549ac008c27f8ef05678d1f26d72 | wikidoc | Boosterspice | Boosterspice
In Larry Niven's fictional Known Space universe, boosterspice is a compound that increases the longevity and reverses aging of human beings. With the use of boosterspice, humans can easily live into hundreds of years and, theoretically, it can extend life indefinitely.
Humans have been led to believe it is made from genetically engineered ragweed (although early stories have it ingested in the form of edible seeds) but, in Ringworld's Children, we discover it is actually adapted from Tree-of-Life, without the symbiotic virus that enabled hominids to metamorphose from Pak Breeder stage to Pak Protector stage (mutated Pak breeders were the ancestors of both homo sapiens and the hominids of the Ringworld in the Known Space universe).
On the Ringworld, there is an analogous compound, but they are mutually incompatible; In The Ringworld Engineers, Louis Wu learns that the character Halrloprillalar died when in ARM custody after leaving the Ringworld, as a result of having taken boosterspice and previously having used the Ringworld equivalent. | Boosterspice
In Larry Niven's fictional Known Space universe, boosterspice is a compound that increases the longevity and reverses aging of human beings. With the use of boosterspice, humans can easily live into hundreds of years and, theoretically, it can extend life indefinitely.
Humans have been led to believe it is made from genetically engineered ragweed (although early stories have it ingested in the form of edible seeds) but, in Ringworld's Children, we discover it is actually adapted from Tree-of-Life, without the symbiotic virus that enabled hominids to metamorphose from Pak Breeder stage to Pak Protector stage (mutated Pak breeders were the ancestors of both homo sapiens and the hominids of the Ringworld in the Known Space universe).
On the Ringworld, there is an analogous compound, but they are mutually incompatible; In The Ringworld Engineers, Louis Wu learns that the character Halrloprillalar died when in ARM custody after leaving the Ringworld, as a result of having taken boosterspice and previously having used the Ringworld equivalent. | https://www.wikidoc.org/index.php/Boosterspice | |
6f1484bc14277aa105c40288b46fa86597c212c7 | wikidoc | Boracic lint | Boracic lint
Boracic lint was a type of medical dressing made from surgical lint that was soaked in a hot, saturated solution of boracic acid and glycerine and then left to dry.
It has been in use since at least the 19th century, but is now less commonly used.
The term boracic lint, or often just "boracic", pronounced "brassic", is also used as Cockney rhyming slang for having no money. Boracic lint -> skint. | Boracic lint
Boracic lint was a type of medical dressing made from surgical lint that was soaked in a hot, saturated solution of boracic acid and glycerine and then left to dry.
It has been in use since at least the 19th century,[1] but is now less commonly used.
The term boracic lint, or often just "boracic", pronounced "brassic", is also used as Cockney rhyming slang for having no money. Boracic lint -> skint. | https://www.wikidoc.org/index.php/Boracic_lint | |
e94866d78549fc48c915fec17d66d439bef9250c | wikidoc | Boronic acid | Boronic acid
A boronic acid is an alkyl- or aryl-substituted boric acid containing a carbon-to-boron chemical bond, belonging to the larger class of organoboranes. Boronic acids act as Lewis acids. Their unique feature is that they are capable of forming reversible covalent complexes with sugars, amino acids, hydroxamic acids, etc. (molecules with vicinal (1,2), or occasionally (1,3), substituted Lewis base donors such as alcohol, amine, or carboxylate groups). The pKa of a boronic acid is ~9, but upon complexation in aqueous solution they form tetrahedral boronate complexes with pKa ~7. They are occasionally used in the area of molecular recognition to bind to saccharides for fluorescent detection or selective transport of saccharides across membranes.
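To illustrate what these pKa values imply in practice, the following Python sketch applies the Henderson-Hasselbalch relation to estimate the fraction of a boronic acid present as the anionic tetrahedral boronate at a given pH; it is a generic back-of-the-envelope calculation, and the pH and pKa figures are the approximate values quoted above rather than measurements for any particular compound.

```python
def boronate_fraction(pH, pKa):
    """Fraction of a boronic acid present as the anionic (tetrahedral)
    boronate at the given pH, via the Henderson-Hasselbalch relation:
    fraction = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# Approximate values from the text: pKa ~9 for the free boronic acid,
# ~7 for the diol (sugar) complex; pH 7.4 is roughly physiological.
print(f"free acid at pH 7.4:    {boronate_fraction(7.4, 9.0):.1%}")
print(f"diol complex at pH 7.4: {boronate_fraction(7.4, 7.0):.1%}")
```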
Boronic acids are used extensively in organic chemistry as chemical building blocks and intermediates predominantly in the Suzuki coupling. A key concept in its chemistry is transmetallation of its organic residue to a transition metal.
The compound bortezomib, which carries a boronic acid group, is a drug used in chemotherapy. The boron atom in this molecule is a key substructure, because through it certain proteasomes that would otherwise degrade proteins are blocked.
# Boronic acids
Many air-stable boronic acids are commercially available. They are characterised by high melting points.
# Borinic acids and esters
Borinic acids and borinic esters have the general structure R2BOR.
# Boronic esters
When hydrogen is replaced by an organic residue, the resulting compound is called a boronic ester or boronate ester. The compounds can be obtained from boric esters by condensation with alcohols and diols. Phenylboronic acid can be self-condensed to the cyclic trimer called triphenyl anhydride or triphenylboroxin.
Compounds with 6-membered cyclic structures containing the C-O-B-O-C linkage are called dioxaborolanes and those with 5-membered rings dioxaborinanes.
# Boronate or borate salts
Boronate salts or borate salts (not encouraged) have the general structure R4B-M+ for example potassium tetraphenylborate.
# Boronic acids in organic chemistry
## Suzuki coupling reaction
Boronic acids are used in organic chemistry in the Suzuki reaction. In this reaction the boron atom exchanges its aryl group with an alkoxy group from palladium.
## Chan-Lam coupling
In the Chan-Lam coupling, an alkyl, alkenyl or aryl boronic acid reacts with an N-H or O-H containing compound in the presence of a Cu(II) source such as copper(II) acetate, oxygen, and a base such as pyridine, forming a new carbon-nitrogen or carbon-oxygen bond, as in this reaction of 2-pyridone with trans-1-hexenylboronic acid:
The reaction mechanism sequence is deprotonation of the amine, coordination of the amine to the copper(II), transmetallation (transferring the alkyl boron group to copper and the copper acetate group to boron), oxidation of Cu(II) to Cu(III) by oxygen and finally reductive elimination of Cu(III) to Cu(I) with formation of the product. Direct reductive elimination of Cu(II) to Cu(0) also takes place but is very slow. In catalytic systems oxygen also regenerates the Cu(II) catalyst.
## Conjugate addition
The boronic acid organic residue is a nucleophile in conjugate additions, also in conjunction with a metal. In one study the pinacol ester of allylboronic acid is reacted with dibenzylidene acetone in such a conjugate addition:
Another conjugate addition is that of gramine with phenylboronic acid, catalyzed by cyclooctadiene rhodium chloride dimer:
## Oxidation
Boronic esters are oxidized to the corresponding alcohols with base and hydrogen peroxide (for an example see: carbenoid)
## Homologization
- In boronic ester homologization an alkyl group shifts from boron in a boronate to carbon:
In this reaction dichloromethyllithium converts the boronic ester into a boronate. A Lewis acid then induces a rearrangement of the alkyl group with displacement of the chlorine group. Finally, an organometallic reagent such as a Grignard reagent displaces the second chlorine atom, effectively leading to insertion of an RCH2 group into the C-B bond.
## Electrophilic allyl shifts
Allyl boronic esters engage in electrophilic allyl shifts, very much like their silicon counterparts in the Sakurai reaction. In one study a diallylation reagent combines both:
## Hydrolysis
Hydrolysis of boronic esters back to the boronic acid and the alcohol can be accomplished in certain systems with thionyl chloride and pyridine.
A boronic acid is an alkyl or aryl substituted boric acid containing a carbon to boron chemical bond belonging to the larger class of organoboranes. Boronic acids act as Lewis acids. Their unique feature are that they are capable of forming reversible covalent complexes with sugars, amino acids, hydroxamic acids, etc. (molecules with vicinal, (1,2) or occasionally (1,3) substituted Lewis base donors (alcohol, amine, carboxylate). The pKa of a boronic acid is ~9, but upon complexion in aqueous solutions, they form tetrahedral boronate complexes with pKa ~7. They are occasionally used in the area of molecular recognition to bind to saccharides for fluorescent detection or selective transport of saccharides across membranes.
Boronic acids are used extensively in organic chemistry as chemical building blocks and intermediates predominantly in the Suzuki coupling. A key concept in its chemistry is transmetallation of its organic residue to a transition metal.
The compound bortezomib with a boronic acid group is a drug used in Chemotherapy. The boron atom in this molecule is a key substructure because through it certain proteasomes are blocked that would otherwise degrade proteins
# Boronic acids
Many air-stable boronic acids are commercially available. They are characterised by high melting points.
# Borinic acids and esters
Borinic acids and borinic esters have the general structure R2BOR.
# Boronic esters
When hydrogen is replaced by any organic residue the resulting compound is called a boronic ester or boronate ester. The compounds can be obtained from boric esters [2] by condensation with alcohols and diols. Phenylboronic acid can be selfcondensed to the cyclic trimer called triphenyl anhydride or triphenylboroxin [3]
Compounds with 6-membered cyclic structures containing the C-O-B-O-C linkage are called dioxaborolanes and those with 5-membered rings dioxaborinanes.
# Boronate or borate salts
Boronate salts or borate salts (not encouraged) have the general structure R4B-M+ for example potassium tetraphenylborate.
# Boronic acids in organic chemistry
## Suzuki coupling reaction
Boronic acids are used in organic chemistry in the Suzuki reaction. In this reaction the boron atom exchanges its aryl group with an alkoxy group from palladium.
## Chan-Lam coupling
In the Chan-Lam coupling the alkyl, alkenyl or aryl boronic acid reacts with a N-H or O-H containing compound with Cu(II) such as copper(II) acetate and oxygen and a base such as pyridine [5] [6] forming a new carbon-nitrogen bond or carbon-oxygen bond for example in this reaction of 2-pyridone with trans-1-hexenylboronic acid:
The reaction mechanism sequence is deprotonation of the amine, coordination of the amine to the copper(II), transmetallation (transferring the alkyl boron group to copper and the copper acetate group to boron), oxidation of Cu(II) to Cu(III) by oxygen and finally reductive elimination of Cu(III) to Cu(I) with formation of the product. Direct reductive elimination of Cu(II) to Cu(0) also takes place but is very slow. In catalytic systems oxygen also regenerates the Cu(II) catalyst.
## Conjugate addition
The boronic acid organic residue is a nucleophile in conjugate addition also in conjunction with a metal. In one study the pinacol ester of allylboronic acid is reacted with dibenzylidene acetone in a such a conjugate addition [7]:
Another conjugate addition is that of gramine with phenylboronic acid catalyzed by cyclooctadiene rhodium chloride dimer [8]:
## Oxidation
Boronic esters are oxidized to the corresponding alcohols with base and hydrogen peroxide (for an example see: carbenoid)
## Homologization
- In boronic ester homologization an alkyl group shifts from boron in a boronate to carbon [9]:
In this reaction dichloromethyllithium converts the boronic ester into a boronate. A lewis acid then induces a rearrangement of the alkyl group with displacement of the chlorine group. Finally an organometallic reagent such as a Grignard reagent displaces the second chlorine atom effectively leading to insertion of a RCH2 group into the C-B bond.
## Electrophilic allyl shifts
Allyl boronic esters engage in electrophilic allyl shifts very much like silicon pendant in the Sakurai reaction. In one study a diallylation reagent combines both [10][11]:
## Hydrolysis
Hydrolysis of boronic esters back to the boronic acid and the alcohol can be accomplished in certain systems with thionyl chloride and pyridine [12]. | https://www.wikidoc.org/index.php/Boronic_acid | |
ed7dda44ba96b5b70a10d31c0d115fda89e31b9e | wikidoc | Botallackite | Botallackite
Botallackite, chemical formula Cu2(OH)3Cl, is a secondary copper mineral, named for its type locality at the Botallack mine, St Just in Penwith, Cornwall. It is polymorphous with Atacamite, Paratacamite and Clinoatacamite.
In the monoclinic crystal system, Botallackite is mountain-green to green in colour, with one distinct to good cleavage.
Botallackite forms in copper deposits exposed to weathering and salt water. | Botallackite
Botallackite, chemical formula Cu2[(OH)3|Cl] is a secondary copper mineral, named for its type locality at the Botallack mine, St Just in Penwith, Cornwall. It is polymorphous with Atacamite, Paratacamite and Clinoatacamite.[1]
In the monoclinic crystal system, Botallackite is mountain-green to green in colour, with one distinct to good cleavage.[1]
Botallackite forms in copper deposits exposed to weathering and salt water.[1] | https://www.wikidoc.org/index.php/Botallackite | |
1644426d05ad0c36bf76d0583c513dfafc42d741 | wikidoc | Brain damage | Brain damage
# Overview
Brain damage or brain injury is the destruction or degeneration of brain cells.
# Causes
Brain damage may occur due to a wide range of conditions, illnesses, injuries, and as a result of iatrogenesis. Possible causes of widespread (diffuse) brain damage include prolonged hypoxia (shortage of oxygen), poisoning by teratogens (including alcohol), infection, and neurological illness. Chemotherapy can cause brain damage to the neural stem cells and oligodendrocyte cells that produce myelin. Common causes of focal or localized brain damage are physical trauma (traumatic brain injury), stroke, aneurysm, surgery, or neurological illness.
# Complications
Brain injury does not necessarily result in long-term impairment or disability, although the location and extent of damage both have a significant effect on the likely outcome. In serious cases of brain injury, the result can be permanent disability, including neurocognitive deficits, delusions (often specifically monothematic delusions), speech or movement problems, and mental handicap. There may also be personality changes. Severe brain damage may result in persistent vegetative state, coma, or death.
## Brain Damage in Children
It is a common misconception that brain damage sustained during childhood has a better chance of successful recovery than similar injury acquired in adult life. Some recent studies have suggested, although this remains contested, that severe brain damage inflicted upon children can be alleviated by nicotinamide acting on nerve cells. In fact, the consequences of childhood injury may simply be more difficult to detect in the short term. This is because different cortical areas mature at different stages, with some major cell populations and their corresponding cognitive faculties remaining unrefined until early adulthood. In the case of a child with frontal brain injury, for example, the impact of the damage may be undetectable until that child fails to develop normal executive functions in his or her late teens and early twenties.
# Diagnosis
The extent and effect of brain injury is often assessed by the use of neurological examination, neuroimaging, and neuropsychological assessment.
# Treatment
Various professions may be involved in the medical care and rehabilitation of someone who suffers impairment after brain damage. Neurologists, neurosurgeons, and physiatrists are physicians who specialise in treating brain injury. Neuropsychologists (especially clinical neuropsychologists) are psychologists who specialise in understanding the effects of brain injury and may be involved in assessing the extent of brain damage or creating rehabilitation programmes. Occupational therapists may be involved in running rehabilitation programs to help restore lost function or help re-learn essential skills.
The effects of impairment or disability resulting from brain injury may be treated by a number of methods, including medication, psychotherapy, neuropsychological rehabilitation, snoezelen, surgery, or physical implants such as deep brain stimulation. | Brain damage
# Overview
Brain damage or brain injury is the destruction or degeneration of brain cells.
# Causes
Brain damage may occur due to a wide range of conditions, illnesses, injuries, and as a result of iatrogenesis. Possible causes of widespread (diffuse) brain damage include prolonged hypoxia (shortage of oxygen), poisoning by teratogens (including alcohol), infection, and neurological illness. Chemotherapy can cause brain damage to the neural stem cells and oligodendrocyte cells that produce myelin. Common causes of focal or localized brain damage are physical trauma (traumatic brain injury), stroke, aneurysm, surgery, or neurological illness.
# Complications
Brain injury does not necessarily result in long-term impairment or disability, although the location and extent of damage both have a significant effect on the likely outcome. In serious cases of brain injury, the result can be permanent disability, including neurocognitive deficits, delusions (often specifically monothematic delusions), speech or movement problems, and mental handicap. There may also be personality changes. Severe brain damage may result in persistent vegetative state, coma, or death.
## Brain Damage in Children
It is a common misconception that brain damage sustained during childhood has a better chance of successful recovery than similar injury acquired in adult life. It is contested that in recent studies, severe brain damage inflicted upon children can be alleviated by the interaction of nicotinamide repropagation in nerve cells. In fact, the consequences of childhood injury may simply be more difficult to detect in the short term. This is because different cortical areas mature at different stages, with some major cell populations and their corresponding cognitive faculties remaining unrefined until early adulthood. In the case of a child with frontal brain injury, for example, the impact of the damage may be undetectable until that child fails to develop normal executive functions in his or her late teens and early twenties.
# Diagnosis
The extent and effect of brain injury is often assessed by the use of neurological examination, neuroimaging, and neuropsychological assessment.
# Treatment
Various professions may be involved in the medical care and rehabilitation of someone who suffers impairment after brain damage. Neurologists, neurosurgeons, and physiatrists are physicians who specialise in treating brain injury. Neuropsychologists (especially clinical neuropsychologists) are psychologists who specialise in understanding the effects of brain injury and may be involved in assessing the extent of brain damage or creating rehabilitation programmes. Occupational therapists may be involved in running rehabilitation programs to help restore lost function or help re-learn essential skills.
The effects of impairment or disability resulting from brain injury may be treated by a number of methods, including medication, psychotherapy, neuropsychological rehabilitation, snoezelen, surgery, or physical implants such as deep brain stimulation.
# Related Chapters
- Cerebral Palsy
- Epilepsy
- Fetal alcohol syndrome
- Head injury
- Lobotomy
- Neurocognitive deficit
- Neurology
- Rehabilitation (neuropsychology)
- Traumatic brain injury
| https://www.wikidoc.org/index.php/Brain_damage |
f827e30698d2d29f29081235d360374817f2dba3 | wikidoc | Brain freeze | Brain freeze
Brain freeze, cold headache, ice cream headache, shakeache, frigid face, freezie, Frozen Brain Syndrome, cold-stimulus headache, or its given scientific name sphenopalatine ganglioneuralgia are terms used to describe a form of cranial pain or headache which people are known to sometimes experience after consuming cold beverages or foods such as ice cream, slurpees, or margaritas, particularly when consumed quickly.
# Mechanism and cause
The reaction can sometimes be triggered within a few seconds after a very cold substance comes into contact with the roof of the mouth. The pain is not caused by the cold temperature alone, but rather by the quick rewarming of the hard palate; letting the mouth adjust slowly back to normal temperature can prevent it from occurring. Brain freeze is often a result of speaking or breathing out of the mouth after consuming something cold. The body's response to cold environments is to vasoconstrict the peripheral vasculature (reduce the diameter of blood vessels); this vasoconstriction reduces blood flow to the area and thus minimizes heat loss, keeping warmth in the body. After the vasoconstriction, the vessels return to their normal state, and the change in artery size results in massive dilation (vasodilation) of the arteries that supply the palate (the descending palatine arteries). The nerves in the region of the palate (the greater and lesser palatine nerves) sense this as pain and transmit the sensation back to the trigeminal ganglion. This results in pain referred to the forehead, the area below the orbit, and other regions from which the trigeminal nerve receives sensation. (This phenomenon is similar to the pain felt in the left arm when someone is having a myocardial infarction, or heart attack.) A similar effect occurs when one takes a prescription vasodilator such as nitroglycerin or Viagra. The pain is of a stabbing or aching type that usually recedes within 10–20 seconds after its onset, but sometimes lasts 30–60 seconds, and can persist for up to five minutes in rare cases. It is usually located in the midfrontal area, but can be unilateral in the temporal, frontal, or retro-orbital regions.
It has been reported that the pain can be relieved by moving the tongue to the roof of the mouth, which will cause greater warmth in the region; it is also believed that the pain can be relieved by slowly sipping room temperature water. Laying the head to the side may also provide relief. Creating a mask with one's hands placed over the mouth and nose while breathing rapidly is also said to be useful since the temperature in the mouth rises quickly.
A report was submitted to the British Medical Journal on brain freeze; it focused on the effect of speed of consumption of ice cream on causing brain freeze. Commonly referred to as "ice cream headaches," it has been studied as an example of referred pain, an unpleasant sensation localized to an area separate from the site of the painful stimulation.
It has been estimated that "30% of the population" experiences brain freeze or freeze head from ice cream. Some studies suggest that brain freeze is more common in people who experience migraines. Raskin and Knittle found this to be the case, with brain freeze occurring in 93% of migraine sufferers and in only 31% of controls. However, other studies found that it is more common in people without migraines. These inconsistencies may be due to differences in subject selection–the subjects of the first study were drawn from a hospital population, whereas the controls in the second were student volunteers, making the tests inconclusive. | Brain freeze
Brain freeze, cold headache, ice cream headache, shakeache, frigid face, freezie, Frozen Brain Syndrome, cold-stimulus headache, or its given scientific name sphenopalatine ganglioneuralgia are terms used to describe a form of cranial pain or headache which people are known to sometimes experience after consuming cold beverages or foods such as ice cream, slurpees, or margaritas, particularly when consumed quickly.
# Mechanism and cause
The reaction can sometimes be triggered within a few seconds after a very cold substance consumed comes into contact with the roof of the mouth. The pain is not caused by the cold temperature alone, rather quick warming of the hard palate. Letting the mouth slowly adjust back to normal temperatures can prevent this from occurring. Brain freeze is often a result of speaking or breathing out of the mouth after consuming something cold. The body's response to cold environments is to vasoconstrict the peripheral vasculature (to reduce the diameter of blood vessels). This vasoconstriction is in place to reduce blood flow to the area, and thus minimize heat loss to keep warmth in the body. After vasoconstriction, they return to normal status and artery size results in massive dilation (vasodilation) of the arteries that supply the palate (descending palatine arteries). The nerves in the region of the palate (greater and lesser palatine nerves) sense this as pain and transmits the sensation of this pain back to the trigeminal ganglia. This results in pain that is referred to the forehead and below the orbit, and other regions from which the trigeminal nerve receives sensation. (This phenomenon is similar to the pain that is present in the left arm when someone is having a myocardial infarction or heart attack). A similar effect occurs when one takes a prescription vasodilator, such as Nitroglycerin or Viagra. It is a stabbing or aching type of pain that usually recedes within 10–20 seconds after its onset, but sometimes 30–60 seconds, and can persist for up to five minutes in rare cases. The pain is usually located in the midfrontal area, but can be unilateral in the temporal, frontal, or retro-orbital regions.
It has been reported that the pain can be relieved by moving the tongue to the roof of the mouth,[1] which will cause greater warmth in the region; it is also believed that the pain can be relieved by slowly sipping room temperature water. Laying the head to the side may also provide relief. Creating a mask with one's hands placed over the mouth and nose while breathing rapidly is also said to be useful since the temperature in the mouth rises quickly.
A report was submitted to the British Medical Journal on brain freeze; it focused on the effect of speed of consumption of ice cream on causing brain freeze. Commonly referred to as "ice cream headaches," it has been studied as an example of referred pain,[2] an unpleasant sensation localized to an area separate from the site of the painful stimulation.
It has been estimated that "30% of the population" experiences brain freeze or freeze head from ice cream.[3] Some studies suggest that brain freeze is more common in people who experience migraines. Raskin and Knittle found this to be the case, with brain freeze occurring in 93% of migraine sufferers and in only 31% of controls. However, other studies found that it is more common in people without migraines. These inconsistencies may be due to differences in subject selection–the subjects of the first study were drawn from a hospital population, whereas the controls in the second were student volunteers, making the tests inconclusive. | https://www.wikidoc.org/index.php/Brain_freeze | |
b67c41bfc74a20148e60217e38e25f2b6a635a6f | wikidoc | Neuroimaging | Neuroimaging
# Overview
Neuroimaging includes the use of various techniques to either directly or indirectly image the structure, function, or pharmacology of the brain. It is a relatively new discipline within medicine and neuroscience.
Neuroimaging falls into two broad categories: structural imaging and functional imaging. Structural imaging deals with the structure of the brain and the diagnosis of gross (large scale) intracranial disease (such as tumor), and injury. Functional imaging is used to diagnose metabolic diseases and lesions on a finer scale (such as Alzheimer's disease) and also for neurological and cognitive science research and building brain-computer interfaces. Functional imaging enables, for example, the processing of information by centers in the brain to be visualized directly. Such processing causes the involved area of the brain to increase metabolism and "light up" on the scan.
# Types of brain imaging
## CAT
Computed Tomography (CT) or Computed Axial Tomography (CAT) scanning uses a series of x-rays of the head taken from many different directions. Typically used for quickly viewing brain injuries, CT scanning uses a computer program that performs a numerical integral calculation (the inverse Radon transform) on the measured x-ray series to estimate how much of an x-ray beam is absorbed in a small volume of the brain. Typically the information is presented as cross sections of the brain. In approximation, the more dense a material is, the whiter a volume of it will appear on the scan (just as in the more familiar "flat" X-rays). CT scans are primarily used for evaluating swelling from tissue damage in the brain and in assessment of ventricle size. Modern CT scanning can provide reasonably good images in a matter of minutes.
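To make the reconstruction step concrete, here is a minimal Python sketch of filtered back-projection (an implementation of the inverse Radon transform) applied to a standard test phantom; it assumes the scikit-image and NumPy libraries and illustrates the principle only, not the processing pipeline of any particular scanner.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Simulate the measurement: project a test image (the Shepp-Logan phantom,
# a standard stand-in for a head cross section) at many angles to obtain a
# sinogram, i.e. the set of X-ray absorption profiles.
image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)

# Reconstruct the cross section with the (filtered) inverse Radon transform,
# which is essentially the numerical step a CT scanner's computer performs.
reconstruction = iradon(sinogram, theta=theta)
rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.3f}")
```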
## MRI
Magnetic Resonance Imaging (MRI) uses magnetic fields and radio waves to produce high quality two- or three-dimensional images of brain structures without use of ionizing radiation (X-rays) or radioactive tracers. During an MRI, a large cylindrical magnet creates a magnetic field around the head of the patient through which radio waves are sent. When the magnetic field is imposed, each point in space has a unique radio frequency at which the signal is received and transmitted (Preuss). Sensors read the frequencies and a computer uses the information to construct an image. The detection mechanisms are so precise that changes in structures over time can be detected. Using MRI, scientists can create images of both surface and subsurface structures with a high degree of anatomical detail. MRI scans can produce cross sectional images in any direction from top to bottom, side to side, or front to back. The problem with original MRI technology was that while it provides a detailed assessment of the physical appearance, water content, and many kinds of subtle derangements of structure of the brain (such as inflammation or bleeding), it fails to provide information about the metabolism of the brain (i.e. how actively it is functioning) at the time of imaging. A distinction is therefore made between "MRI imaging" and "functional MRI imaging" (fMRI), where MRI provides only structural information on the brain while fMRI yields both structural and functional data.
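The position-dependent resonance frequency described above arises, in standard MR physics, from the Larmor relation f = gamma * B combined with applied magnetic field gradients. The short Python sketch below illustrates the idea; the 1.5 T field strength and 10 mT/m gradient are generic illustrative values, not the specification of any particular scanner.

```python
# Gyromagnetic ratio of hydrogen-1 divided by 2*pi, in Hz per tesla.
GAMMA_H = 42.577e6

def larmor_frequency_hz(b0_tesla, gradient_t_per_m=0.0, position_m=0.0):
    """Proton resonance frequency at a given position along a linear
    field gradient superimposed on the main field B0: f = gamma*(B0 + G*x)."""
    return GAMMA_H * (b0_tesla + gradient_t_per_m * position_m)

f_centre = larmor_frequency_hz(1.5)               # isocentre of a 1.5 T magnet
f_offset = larmor_frequency_hz(1.5, 0.010, 0.05)  # 10 mT/m gradient, 5 cm away
print(f"isocentre: {f_centre / 1e6:.3f} MHz")
print(f"5 cm off-centre: {f_offset / 1e6:.3f} MHz")
```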
## fMRI
Functional Magnetic Resonance Imaging (fMRI) relies on the paramagnetic properties of oxygenated and deoxygenated hemoglobin to see images of changing blood flow in the brain associated with neural activity. This allows images to be generated that reflect which brain structures are activated (and how) during performance of different tasks. Most fMRI scanners allow subjects to be presented with different visual images, sounds and touch stimuli, and to make different actions such as pressing a button or moving a joystick. Consequently, fMRI can be used to reveal brain structures and processes associated with perception, thought and action. The resolution of fMRI is about 2-3 millimeters at present, limited by the spatial spread of the hemodynamic response to neural activity. It has largely superseded PET for the study of brain activation patterns. PET, however, retains the significant advantage of being able to identify specific brain receptors (or transporters) associated with particular neurotransmitters through its ability to image radiolabelled receptor "ligands" (receptor ligands are any chemicals that stick to receptors).
As well as research on healthy subjects, fMRI is increasingly used for the medical diagnosis of disease. Because fMRI is exquisitely sensitive to blood flow, it is extremely sensitive to early changes in the brain resulting from ischemia (abnormally low blood flow), such as the changes which follow stroke. Early diagnosis of certain types of stroke is increasingly important in neurology, since substances which dissolve blood clots may be used in the first few hours after certain types of stroke occur, but are dangerous to use afterwards. Brain changes seen on fMRI may help to make the decision to treat with these agents.
## PET
Positron Emission Tomography (PET) measures emissions from radioactively labeled metabolically active chemicals that have been injected into the bloodstream. The emission data are computer-processed to produce 2- or 3-dimensional images of the distribution of the chemicals throughout the brain (Nilsson 57). The positron emitting radioisotopes used are produced by a cyclotron, and chemicals are labelled with these radioactive atoms. The labeled compound, called a radiotracer, is injected into the bloodstream and eventually makes its way to the brain. Sensors in the PET scanner detect the radioactivity as the compound accumulates in various regions of the brain. A computer uses the data gathered by the sensors to create multicolored 2- or 3-dimensional images that show where the compound acts in the brain. Especially useful are a wide array of ligands used to map different aspects of neurotransmitter activity, with by far the most commonly used PET tracer being a labeled form of glucose (see FDG).
The greatest benefit of PET scanning is that different compounds can show blood flow and oxygen and glucose metabolism in the tissues of the working brain. These measurements reflect the amount of brain activity in the various regions of the brain and allow us to learn more about how the brain works. PET scans were superior to all other metabolic imaging methods in terms of resolution and speed of completion (as little as 30 seconds), when they first became available. The improved resolution permitted better study to be made as to the area of the brain activated by a particular task. The biggest drawback of PET scanning is that because the radioactivity decays rapidly, it is limited to monitoring short tasks (Nilsson 60). Before fMRI technology came online, PET scanning was the preferred method of functional (as opposed to structural) brain imaging, and it still continues to make large contributions to neuroscience.
PET scanning is also used for diagnosis of brain disease, most notably because brain tumors, strokes, and neuron-damaging diseases which cause dementia (such as Alzheimer's disease) all cause great changes in brain metabolism, which in turn causes easily detectable changes in PET scans. PET is probably most useful in early cases of certain dementias (with classic examples being Alzheimer's disease and Pick's disease) where the early damage is too diffuse and makes too little difference in brain volume and gross structure to change CT and standard MRI images enough to be able to reliably differentiate it from the "normal" range of cortical atrophy which occurs with aging (in many but not all persons), and which does not cause clinical dementia.
## SPECT
Single Photon Emission Computed Tomography (SPECT) is similar to PET and uses gamma ray emitting radioisotopes and a gamma camera to record data that a computer uses to construct two- or three-dimensional images of active brain regions (Ball). SPECT relies on an injection of radioactive tracer, which is rapidly taken up by the brain but does not redistribute. Uptake of SPECT agent is nearly 100% complete within 30 – 60s, reflecting cerebral blood flow (CBF) at the time of injection. These properties of SPECT make it particularly well suited for epilepsy imaging, which is usually made difficult by problems with patient movement and variable seizure types. SPECT provides a "snapshot" of cerebral blood flow since scans can be acquired after seizure termination (so long as the radioactive tracer was injected at the time of the seizure). A significant limitation of SPECT is its poor resolution (about 1 cm) compared to that of MRI.
Like PET, SPECT can also be used to differentiate between the different kinds of disease process which produce dementia, and it is increasingly used for this purpose. Neuro-PET has the disadvantage of requiring the use of tracers with half-lives of at most 110 minutes, such as FDG. These must be made in a cyclotron, and are expensive or even unavailable if necessary transport times are prolonged more than a few half-lives. SPECT, however, is able to make use of tracers with much longer half-lives, such as technetium-99m, and as a result, is far more widely available.
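The practical consequence of these half-lives can be illustrated with a simple exponential-decay calculation; the Python sketch below compares FDG (half-life about 110 minutes, as noted above) with technetium-99m (half-life about 6 hours) for a few hypothetical transport times, which are chosen purely for illustration.

```python
def fraction_remaining(elapsed_minutes, half_life_minutes):
    """Fraction of a radiotracer's activity left after a given time,
    from simple exponential decay: N/N0 = 0.5 ** (t / t_half)."""
    return 0.5 ** (elapsed_minutes / half_life_minutes)

# FDG half-life ~110 min (as noted in the text); technetium-99m ~360 min.
for tracer, half_life in [("FDG (PET)", 110.0), ("Tc-99m (SPECT)", 360.0)]:
    for transport in (60, 220, 440):  # hypothetical minutes from production to injection
        left = fraction_remaining(transport, half_life)
        print(f"{tracer}: {left:.0%} of activity left after {transport} min")
```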
## DOT
Diffuse Optical Imaging (DOI) or Diffuse Optical Tomography (DOT) is a medical imaging modality which uses near infrared light to generate images of the body. The technique measures the optical absorption of haemoglobin, and relies on the absorption spectrum of haemoglobin varying with its oxygenation status.
# History
In 1918 the American neurosurgeon Walter Dandy introduced the technique of ventriculography. X-ray images of the ventricular system within the brain were obtained by injection of filtered air directly into one or both lateral ventricles of the brain. Dandy also observed that air introduced into the subarachnoid space via lumbar spinal puncture could enter the cerebral ventricles and also demonstrate the cerebrospinal fluid compartments around the base of the brain and over its surface. This technique was called pneumoencephalography.
In 1927 Egas Moniz, professor of neurology in Lisbon, introduced cerebral angiography, whereby both normal and abnormal blood vessels in and around the brain could be visualized with great accuracy.
In the early 1970s, Allan McLeod Cormack and Godfrey Newbold Hounsfield introduced computerized axial tomography (CAT or CT scanning), and ever more detailed anatomic images of the brain became available for diagnostic and research purposes. Cormack and Hounsfield won the 1979 Nobel Prize for Physiology or Medicine for their work. Soon after the introduction of CAT in the early 1980s, the development of radioligands allowed single photon emission computed tomography (SPECT) and positron emission tomography (PET) of the brain.
More or less concurrently, magnetic resonance imaging (MRI or MR scanning) was developed by researchers including Peter Mansfield and Paul Lauterbur, who were awarded the Nobel Prize for Physiology or Medicine in 2003. In the early 1980s MRI was introduced clinically, and during the 1980s a veritable explosion of technical refinements and diagnostic MR applications took place. Scientists soon learned that the large blood flow changes measured by PET could also be imaged by the correct type of MRI. Functional magnetic resonance imaging (fMRI) was born, and since the 1990s fMRI has come to dominate the brain mapping field due to its low invasiveness, lack of radiation exposure, and relatively wide availability. As noted above, fMRI is also beginning to dominate the field of stroke treatment.
In the early 2000s, the field of neuroimaging reached the stage where limited practical applications of functional brain imaging became feasible. The main application area is crude forms of brain-computer interface. | https://www.wikidoc.org/index.php/Brain_imaging | |
301c464a9839757d6c0f7d232781066df77e4191 | wikidoc | Neurosurgery | Neurosurgery
# Overview
Neurosurgery is the surgical discipline focused on treating diseases of the central nervous system, peripheral nervous system, and spinal column that are amenable to mechanical intervention.
# Definition and scope
According to the U.S. Accreditation Council for Graduate Medical Education (ACGME),
# History
Unearthed remains of successful brain operations, as well as surgical implements, were found in France at one of Europe's noted archeological digs.
The success rate was remarkable, even circa 7,000 B.C.
But pre-historic evidence of brain surgery was not limited to Europe. Pre-Incan civilizations practiced brain surgery extensively as early as 2,000 B.C. In Paracas, Peru, a desert strip south of Lima, archeologic evidence indicates that brain surgery was used extensively. Here, too, a remarkably high success rate was noted as patients were restored to health. The treatment was used for mental illnesses, epilepsy, headaches, organic diseases, and osteomyelitis, as well as head injuries.
Brain surgery was also used for both spiritual and magical reasons; often, the practice was limited to kings, priests and the nobility.
Surgical tools in South America were made of both bronze and man-shaped obsidian (a hard, sharp-edged volcanic rock).
Africa showed evidence of brain surgery as early as 3,000 B.C. in papyrus writings found in Egypt. "Brain," the actual word itself, is used here for the first time in any language. Egyptian knowledge of anatomy may have been rudimentary, but the ancient civilization did contribute important notations on the nervous system.
Hippocrates, the father of modern medical ethics, left many texts on brain surgery. Born on the Aegean island of Cos in 470 B.C., Hippocrates was quite familiar with the clinical signs of head injuries. He also described seizures and spasms accurately, and classified head contusions, fractures and depressions. Many concepts found in his texts remained valid two thousand years after his death in 360 B.C.
Ancient Rome in the first century A.D. had its brain surgeon star, Aulus Cornelius Celsus. Hippocrates did not operate on depressed skull fractures; Celsus often did. Celsus also described the symptoms of brain injury in great detail.
Asia was home to many talented brain surgeons: Galenus of Pergamon, born in Turkey, and the physicians of Byzantium such as Oribasius (4th century) and Paul of Aegina. An Islamic school of brain surgery also flourished from 800 to 1200 A.D., the height of Islamic influence in the world. Abu Bekr Muhammed el Razi, who lived from 852 to 932 in the Common Era, was perhaps the greatest of Islamic brain surgeons. A second Islamic brain surgeon, Abu al-Qasim Khalaf, lived and practiced in Cordoba, Spain, and was one of the great influences on western brain surgery.
The Christian surgeons of the Middle Ages were clerics, well educated, knowledgeable in Latin, and familiar with the realm of medical literature. Despite the Church's ban on the study of anatomy, many churchmen of great renown (advisors and confessors to a succession of Popes) were outstanding physicians and surgeons. Leonardo da Vinci's portfolio containing hundreds of accurate anatomical sketches indicates the intense intellectual interest in the workings of the human body despite the Church's ban.
# Risks
There are many risks to neurosurgery. Any operation dealing with the brain or spinal cord can cause paralysis, brain damage, severe blood loss or even death.
# Conditions
Neurosurgical conditions include primarily brain, spinal cord, vertebral column and peripheral nerve disorders.
Conditions treated by neurosurgeons include:
- Spinal disc herniation
- Spinal stenosis
- Hydrocephalus
- Head trauma (brain hemorrhages, skull fractures, etc.)
- Spinal cord trauma
- Traumatic injuries of peripheral nerves
- Brain tumors
- Infections and infestations
- Tumors of the spine, spinal cord and peripheral nerves
- Cerebral aneurysms
- Some forms of hemorrhagic stroke, such as subarachnoid hemorrhages, as well as intraparenchymal and intraventricular hemorrhages
- Some forms of pharmacologically resistant epilepsy
- Some forms of movement disorders (advanced Parkinson's disease, chorea) - this involves the use of specially developed minimally invasive stereotactic techniques (functional, stereotactic neurosurgery)
- Intractable pain of cancer or trauma patients and cranial/peripheral nerve pain
- Some forms of intractable psychiatric disorders
- Malformations of the nervous system
- Carotid artery stenosis
- Vascular malformations (i.e., arteriovenous malformations, venous angiomas, cavernous angiomas, capillary telangectasias) of the brain and spinal cord
- Peripheral neuropathies such as Carpal Tunnel Syndrome and ulnar neuropathy
- Moyamoya disease
- Congenital malformations of the nervous system, including spina bifida and craniosynostosis
# Job field
Neurosurgeons work in a variety of practice settings. Some neurosurgeons practice general neurosurgery, while others choose to limit their practice to specific subspecialties. Some areas of specialty include pediatric, spine, vascular/endovascular, tumor, peripheral nerve, functional, and skull base. Practices range from solo practices to large group practices with multidisciplinary components. Increasingly, neurosurgeons are working together with psychiatrists, neurologists and therapists to provide comprehensive care for patients with neurologic disorders such as back pain. About 20 percent of neurosurgeons practice under the auspices of a university practice plan, while the majority of neurosurgeons maintain private practices, often with academic affiliations. Typical work schedules for a neurosurgeon include call coverage for one or more emergency rooms, sometimes requiring frequent emergency surgeries. Most averages found online describing the typical salary for a practicing neurosurgeon in the United States are between $300,000 and $500,000 annually, though these should be considered weak, small-survey estimates based on the values given by the AAMC.
In the United States, neurosurgical training is very competitive and grueling. It usually requires six to eight years of residency after completing medical school, plus the option of a fellowship for subspecialization (lasting an additional one to three years). Most applicants to neurosurgery training programs have excellent medical school grades and evaluations, have published scientific and/or clinical research, and have obtained board scores of 95 or higher. Resident work hour limits are set at 88 hours per week for many programs, although many neurosurgical programs have had problems meeting these new work hour limits due to the small size of residency programs, the high volume of neurosurgical patients, and the need to provide constant coverage in the ER, OR, and ICU. On average 50-60% of neurosurgery applicants match into a residency program (~85% of US senior medical student applicants). | https://www.wikidoc.org/index.php/Brain_surgery | |
dbd4f01a0d49a02d655257af91dbac840aacd89a | wikidoc | Breast shell | Breast shell
Breast shells are hollow plastic disks worn inside the brassiere to protect the nipple from becoming flattened. The disk has a hole in the middle worn toward the nipple side. It is slightly concave to conform to the shape of the breast, but can sometimes still be slightly visible under tight clothing. Shells come apart for washing. This should be done frequently, as the shell also tends to make the mother's breast sweat, which can increase bacteria growth and cause irritation.
Breast shells may be used to protect engorged or sore nipples during breastfeeding. The shell can also encourage an inverted nipple to protract (come out). If the shell is used to help ready the mother for breastfeeding, this is best done during pregnancy, as the shell can increase leaking of breast milk or colostrum. Some research suggests that breast shells used on inverted nipples may actually hinder the mother's ability to breastfeed successfully.
It is also used to collect milk when the baby has not finished feeding at the breast.
# See Also
- Nipple shields may be confused with breast shells, but shields are intended for use during the act of breastfeeding, whereas breast shells are worn in preparation for or after breastfeeding. | https://www.wikidoc.org/index.php/Breast_shell | |
bb303b1b83c0d2fa92a67546d23f19008d2491e9 | wikidoc | Breathalyzer | Breathalyzer
A breathalyzer (or breathalyser) is a device for estimating blood alcohol content (BAC) from a breath sample. "Breathalyzer" is the brand name of a series of models made by one manufacturer of these instruments (originally Smith and Wesson; later sold to National Draeger), but it has become a genericized trademark for all such instruments. Intoxilyzer, Intoximeter, AlcoScan, Alcotest, AlcoSensor, Alcolizer, and Datamaster are the other most common brand names in use today. In Canada, a preliminary non-evidentiary screening device can be approved by Parliament as an approved screening device, and an evidentiary breath instrument can be similarly designated as an approved instrument. The U.S. Government's National Highway Traffic Safety Administration maintains a "Conforming Products List" of breath alcohol devices approved for evidentiary use, as well as for preliminary screening use.
# Origins
In late 1927, in a case in Marlborough, England, a Dr. Gorsky, Police Surgeon, asked a suspect to inflate a football bladder with his breath. Since the 2 liters of the man's breath contained 1.5 ml of ethanol, Dr. Gorsky testified before the court that the defendant was "50% drunk". Though technologies for detecting alcohol vary, Dr. Robert Borkenstein (1912–2002), a captain with the Indiana State Police and later a professor at Indiana University at Bloomington, is widely regarded as the first to create a device that measures a subject's blood alcohol level based on a breath sample. In 1954, Borkenstein invented his breathalyzer, which used chemical oxidation and photometry to determine alcohol concentration. Subsequent breathalyzers have converted primarily to infrared spectroscopy. The invention of the breathalyzer provided law enforcement with a non-invasive test providing immediate results to determine an individual's BAC at the time of testing. It does not, however, determine an individual's level of intoxication, as this varies with a subject's individual alcohol tolerance. Also, the BAC test result itself can vary between individuals consuming identical amounts of alcohol due to factors such as gender, weight, and genetic pre-disposition.
# Law enforcement
Breath analyzers do not directly measure blood alcohol content or concentration, which requires the analysis of a blood sample. Instead, they estimate BAC indirectly by measuring the amount of alcohol in one's breath. Two form factors are most prevalent. Desktop analyzers generally utilize infrared spectrophotometer technology, electrochemical fuel cell technology, or a combination of the two. Hand-held field testing devices are generally based on electrochemical fuel cell analysis and, depending upon jurisdiction, may be used by officers in the field as a form of "field sobriety test" commonly called PBT (preliminary breath test) or PAS (preliminary alcohol screening), or as evidential devices in POA (point of arrest) testing.
# Consumer use
There are a number of models of breath alcohol analyzers that are intended for the consumer market. These hand-held devices are less expensive and can be much smaller than the devices used by law enforcement, and are less accurate, but can still give a useful indication of the user’s BAC. Almost all of these devices use less expensive tin-oxide semiconductor alcohol sensors (frequently called "Taguchi cell" based sensors), which are not as stable as fuel cell sensors or infrared devices, and are more prone to false positives. Breath alcohol analyzers sold to consumers in the United States are required to be certified by the Food and Drug Administration, while those used by law enforcement must be approved by the Department of Transportation's National Highway Traffic Safety Administration.
# Breath test evidence
The breath alcohol reading is used in criminal prosecutions in two ways. Unless the suspect refuses to submit to chemical testing, he will be charged with a violation of the illegal per se law: that is, it is a misdemeanor throughout the United States to drive a vehicle with a BAC of .08% or higher (.02% in most states for drivers under 21). One exception is the State of Wisconsin, where a first time drunk driving offense is normally a civil ordinance violation. The breathalyzer reading will be offered as evidence of that crime, although the issue is what the BAC was at the time of driving rather than at the time of the test. The suspect will also be charged with driving under the influence of alcohol (sometimes referred to as driving or operating while intoxicated). While BAC tests are not necessary to prove a defendant was under the influence, laws in most states require the jury to presume that he was under the influence if his BAC was over .08% when driving. This is a rebuttable presumption, however: the jury can disregard the test if they find it unreliable or if other evidence establishes a reasonable doubt.
If a defendant refused to take a breathalyzer test, most states allow evidence of that fact to be introduced; in many states, the jury is instructed that they can draw a permissible inference of "consciousness of guilt." Many states also operate under "implied consent," meaning that anyone issued a driver's license in the state agrees to submit to a test of his or her breath, blood, or urine when requested by a law enforcement officer. Failure to submit to such a test may result in automatic suspension of his or her driver's license even if not convicted of drunk driving. Failure to submit to such a test may also serve to enhance the penalties for a drunk driving conviction. In drunk driving cases in Massachusetts and Delaware, if the defendant refuses the breathalyzer there can be no mention of the test during the trial.
Instruments such as the Intoxilyzer 5000 are known as Evidentiary Breath Tests (EBT's) and generally produce court-admissible results. Other instruments, such as the SD-2 by CMI or the Alcosensor III by Intoximeters, are known as Preliminary Breath Tests (PBT's), and their results, while valuable to an officer attempting to establish probable cause for a drunk driving arrest, are generally not admissible in court. Some states do not permit data or "readings" from hand-held PBTs to be presented as evidence in court. They are generally admissible, if at all, only to show the presence of alcohol or as a pass-fail field sobriety test to help determine probable cause to arrest. South Dakota does not permit data from any type or size of breath tester but relies entirely on blood tests to ensure accuracy.
# Common sources of error
Breath testers can be very sensitive to temperature and will give false readings if not adjusted or recalibrated to account for ambient or surrounding air temperatures. The temperature of the subject is also very important.
Breathing pattern can also significantly affect breath test results. One study found that the BAC readings of subjects decreased 11 to 14% after running up one flight of stairs and 22–25% after doing so twice. Another study found a 15% decrease in BAC readings after vigorous exercise or hyperventilation. Hyperventilation for 20 seconds has been shown to lower the reading by approximately 32%. On the other hand, holding one's breath for 30 seconds can increase the breath test result by about 28%.
Some breath analysis machines assume a hematocrit (cell volume of blood) of 47%. However, hematocrit values range from 42 to 52% in men and from 37 to 47% in women. A person with a lower hematocrit will have a falsely high BAC reading.
Failure of law enforcement officers to use the devices properly or of administrators to have the machines properly maintained and re-calibrated as required are particularly common sources of error. However, most states have very strict guidelines regarding officer training and instrument maintenance and calibration.
Research indicates that breath tests can vary at least 15% from actual blood alcohol concentration. An estimated 23% of individuals tested will have a BAC reading higher than their true BAC. Police in Victoria, Australia use breathalyzers that give a recognized 20 percent tolerance on readings. Noel Ashby, former Victoria Police Assistant Commissioner (Traffic & Transport), claims that this tolerance is to allow for different body types.
## Calibration
Most handheld breathalyzers sold to consumers use a tin-oxide semiconductor sensor to determine the blood alcohol concentration. The accuracy of these sensors degrades over time and with repeated use, so the devices require periodic recalibration. The calibration process realigns the sensor's output with reference samples of known alcohol concentration so that the instrument again reads accurately.
New advances in breathalyzer design allow some models to self-calibrate or easily replace the sensor module without the need to send the unit to a calibration lab.
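In practice, recalibration usually means checking the unit against a reference standard of known alcohol concentration (for example, a wet-bath simulator) and adjusting the conversion from raw sensor output to BAC. The snippet below is only a schematic two-point linear calibration with made-up numbers; actual devices follow manufacturer-specific procedures.

```python
# Schematic two-point linear calibration: fit raw reading -> BAC against two
# reference standards of known concentration. All numbers are made up.
known_bac = [0.000, 0.080]       # reference standards (% BAC)
raw_reading = [0.004, 0.092]     # what the drifted sensor reported for them

# Model the drift as reading = gain * bac + offset, then invert it.
gain = (raw_reading[1] - raw_reading[0]) / (known_bac[1] - known_bac[0])
offset = raw_reading[0] - gain * known_bac[0]

def corrected_bac(reading: float) -> float:
    return (reading - offset) / gain

print(f"Corrected estimate for a raw 0.085 reading: {corrected_bac(0.085):.3f}% BAC")
```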
## Non-specific analysis
One major problem with older breathalyzers is non-specificity: the machines identify not only the ethyl alcohol (or ethanol) found in alcoholic beverages, but also other substances similar in molecular structure or reactivity.
The oldest breathalyzer models pass breath through a solution of potassium dichromate, which oxidizes ethanol into acetic acid, changing color in the process. A monochromatic light beam is passed through this sample, and a detector records the change in intensity and, hence, the change in color, which is used to calculate the percent alcohol in the breath. However, since potassium dichromate is a strong oxidizer, numerous other compounds containing hydroxyl (alcohol) groups can also be oxidized by it, producing false positives.
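For reference, the overall reaction commonly written for these dichromate-based devices is shown below: ethanol is oxidized to acetic acid while the orange dichromate is reduced to green chromium(III) sulfate in sulfuric acid.

```latex
2\,K_2Cr_2O_7 + 3\,C_2H_5OH + 8\,H_2SO_4 \;\longrightarrow\; 2\,Cr_2(SO_4)_3 + 3\,CH_3COOH + 2\,K_2SO_4 + 11\,H_2O
```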
Infrared-based breathalyzers project an infrared beam of radiation through the captured breath in the sample chamber and detect the absorbance of the compound as a function of the wavelength of the beam, producing an absorbance spectrum that can be used to identify the compound, as the absorbance is due to the harmonic vibration and stretching of specific bonds in the molecule at specific wavelengths (see infrared spectroscopy). The characteristic bond of alcohols in infrared is the O-H bond, which gives a strong absorbance at a short wavelength. The more light is absorbed by compounds containing the alcohol group, the less reaches the detector on the other side, and the higher the reading. Other groups, most notably aromatic rings and carboxylic acids, can give similar absorbance readings.
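The quantitative step in such instruments rests on the Beer-Lambert relationship between absorbance and concentration; in the idealized single-wavelength case:

```latex
A(\lambda) = \varepsilon(\lambda)\,\ell\,c
\quad\Longrightarrow\quad
c = \frac{A(\lambda)}{\varepsilon(\lambda)\,\ell}
```

where A is the measured absorbance, ε the absorptivity of ethanol at the chosen wavelength, ℓ the optical path length of the sample chamber, and c the concentration. Real instruments measure at several wavelengths precisely so that interfering absorbers can be recognized rather than misread as ethanol.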
## Interfering compounds
Some natural and volatile interfering compounds do exist, however. For example, the National Highway Traffic Safety Administration (NHTSA) has found that dieters and diabetics may have acetone levels hundreds or even thousands of times higher than those in others. Acetone is one of the many substances that can be falsely identified as ethyl alcohol by some breath machines. However, newer machines such as the Draeger Breathalyzer use technology that filters out substances like acetone.
A study in Spain showed that metered-dose inhalers (MDIs) used in asthma treatment are also a cause of false positives in breath machines.
Substances in the environment can also lead to false BAC readings. For example, methyl tert-butyl ether (MTBE), a common gasoline additive, has been alleged anecdotally to cause false positives in persons exposed to it. Tests have shown this to be true for older machines; however, newer machines detect this interference and compensate for it. Any number of other products found in the environment or workplace can also cause erroneous BAC results. These include compounds found in lacquer, paint remover, celluloid, gasoline, and cleaning fluids, especially ethers, alcohols, and other volatile compounds.
## Homeostatic variables
Breathalyzers assume that the subject being tested has a 2100-to-1 partition ratio in converting alcohol measured in the breath to estimates of alcohol in the blood; that is, the alcohol concentration in blood is taken to be 2100 times the concentration in breath. Under this assumption, the grams of alcohol in 210 liters of breath equal the grams of alcohol in 100 ml of blood, which is the figure the instrument reports as BAC. However, this assumed "partition ratio" varies from 1300:1 to 3100:1 or wider among individuals and within a given individual over time. Assuming a true (and legal) blood-alcohol concentration of .07%, for example, a person with a partition ratio of 1500:1 would have a breath test reading of .10%—over the legal limit.
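The arithmetic behind that example is straightforward: the instrument multiplies the alcohol it measures in the breath by its fixed 2100:1 assumption, so the reported value scales by (2100 / true ratio). A small sketch with illustrative numbers only:

```python
# Illustrative only: how a fixed 2100:1 blood/breath partition ratio assumption
# scales the reported BAC when a subject's true ratio differs.
ASSUMED_RATIO = 2100.0

def reported_bac(true_bac: float, true_ratio: float) -> float:
    breath_alcohol = true_bac / true_ratio   # alcohol actually present in the breath
    return breath_alcohol * ASSUMED_RATIO    # what the instrument reports

for ratio in (1500.0, 2100.0, 3100.0):
    print(f"true 0.07% BAC, true ratio {ratio:.0f}:1 -> instrument reads {reported_bac(0.07, ratio):.3f}%")
```

With a true ratio of 1500:1 the reported value comes out near .098%, matching the .10% figure quoted above; with a ratio of 3100:1 it under-reports at about .047%.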
Most individuals do, in fact, have close to a 2100-to-1 partition ratio in accordance with William Henry's Law (1803), which states that when the water solution of a volatile compound is brought into equilibrium with air, there is a fixed ratio between the concentration of the compound in air and its concentration in water. This ratio is constant at a given temperature. The human body averages 37 degrees Celsius, and breath leaves the mouth at a temperature of about 34 degrees Celsius. Alcohol in the body obeys Henry's Law, as it is a volatile compound and diffuses in body water. To ensure that variables such as fever and hypothermia cannot be pointed to as influencing the results in a way that is harmful to the accused, the instrument is calibrated at a ratio of 2100:1, underestimating by 9 percent. For a fever to cause a significant overestimate, it would have to be so high that the subject would likely be in the hospital rather than driving in the first place. Studies suggest that about 1.8% of the population have a partition ratio below 2100; for them, a machine assuming a 2100-to-1 ratio will actually over-report. As much as 14% of the population has a partition ratio above 2100, causing the machine to under-report their BAC.
Further, the assumption that the test subject's partition ratio will be average—that there will be 2100 parts in the blood for every part in the breath—means that accurate analysis of a given individual's blood alcohol by measuring breath alcohol is difficult, as the ratio varies considerably.
Variance in how much one breathes out can also give false readings, usually low. This is due to biological variance in breath alcohol concentration as a function of the volume of air in the lungs, an example of a factor which interferes with the liquid-gas equilibrium assumed by the devices. The presence of volatile components is another example of this; mixtures of volatile compounds can be more volatile than their components, which can create artificially high levels of ethanol (or other) vapors relative to the normal biological blood/breath alcohol equilibrium.
## Mouth alcohol
One of the most common causes of falsely high breathalyzer readings is the existence of mouth alcohol. In analyzing a subject's breath sample, the breathalyzer's internal computer is making the assumption that the alcohol in the breath sample came from alveolar air—that is, air exhaled from deep within the lungs. However, alcohol may have come from the mouth, throat or stomach for a number of reasons. To help guard against mouth-alcohol contamination, certified breath test operators are trained to carefully observe a test subject for at least 15-20 minutes before administering the test.
The problem with mouth alcohol being analyzed by the breathalyzer is that it was not absorbed through the stomach and intestines and passed through the blood to the lungs. In other words, the machine's computer is mistakenly applying the "partition ratio" (see above) and multiplying the result. Consequently, a very tiny amount of alcohol from the mouth, throat or stomach can have a significant impact on the breath alcohol reading.
Other than recent drinking, the most common source of mouth alcohol is belching or burping, or in medical terms "eructation." This causes the liquids and/or gases from the stomach—including any alcohol—to rise up into the soft tissue of the esophagus and oral cavity, where it will stay until it has dissipated. The American Medical Association concludes in its Manual for Chemical Tests for Intoxication (1959): "True reactions with alcohol in expired breath from sources other than the alveolar air (eructation, regurgitation, vomiting) will, of course, vitiate the breath alcohol results." For this reason, police officers are supposed to keep a DUI suspect under observation for at least 15 minutes prior to administering a breath test. Instruments such as the Intoxilyzer 5000 also feature a "slope" parameter. This parameter detects any decrease in alcohol concentration of .006 g per 210 L of breath in 0.6 seconds, a condition indicative of residual mouth alcohol, and will result in an "invalid sample" warning to the operator, notifying the operator of the presence of the residual mouth alcohol. PBT's, however, feature no such safeguard.
Acid reflux, or gastroesophageal reflux disease, can greatly exacerbate the mouth alcohol problem. The stomach is normally separated from the throat by a valve, but when this valve becomes herniated, there is nothing to stop the liquid contents in the stomach from rising and permeating the esophagus and mouth. The contents—including any alcohol—are then later exhaled into the breathalyzer.
Mouth alcohol can also be created in other ways. Dentures, for example, will trap alcohol. Periodontal disease can also create pockets in the gums which will contain the alcohol for longer periods. Passionate kissing with an intoxicated person is also known to produce false results due to residual alcohol in the mouth. In addition, mouthwashes and breath fresheners, which a driver may use to disguise the smell of alcohol when being pulled over by police, contain fairly high levels of alcohol.
## Testing during absorptive phase
One of the most common sources of error in breath alcohol analysis is simply testing the subject too early—while his or her body is still absorbing the alcohol. Absorption of alcohol continues for anywhere from 45 minutes to two hours after drinking or even longer. Peak absorption normally occurs within an hour; this can range from as little as 15 minutes to as much as two-and-a-half hours.
During this absorptive phase, the distribution of alcohol throughout the body is not uniform; uniformity of distribution—called equilibrium—will not occur until absorption is complete. In other words, some parts of the body will have a higher blood alcohol content (BAC) than others. One aspect of this non-uniformity is that the BAC in arterial blood will be higher than in venous blood (laws generally require blood samples to be venous). During peak absorption arterial BAC can be as much as 60 percent higher than venous.
## Retrograde extrapolation
The breathalyzer test is usually administered at a police station, commonly an hour or so after the arrest. Although this gives the BAC at the time of testing, it does not by itself answer the question of what it was at the time of driving. The prosecution typically provides evidence of this in the form of retrograde extrapolation. Usually presented in the form of an expert opinion, this involves projecting the BAC backwards in time—that is, estimating the probable BAC at the time of driving by applying a mathematical formula, commonly based on the Widmark factor. This process, however, has been the subject of considerable criticism.
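In its simplest form, the calculation assumes the subject was past peak absorption and eliminated alcohol at a roughly constant rate, often quoted around 0.015% per hour, and adds that elimination back for the time elapsed between driving and testing. The sketch below is a deliberately simplified illustration of that arithmetic, not a forensically valid procedure; as noted above, if the subject was still absorbing alcohol, the assumption breaks down.

```python
# Deliberately simplified retrograde extrapolation: assumes a post-absorptive
# subject eliminating alcohol at a constant rate. The 0.015 %/hour figure is a
# commonly quoted average, not a fixed constant, and real cases vary widely.
def extrapolated_bac(test_bac: float, hours_since_driving: float,
                     elimination_rate: float = 0.015) -> float:
    return test_bac + elimination_rate * hours_since_driving

estimate = extrapolated_bac(test_bac=0.07, hours_since_driving=1.0)
print(f"Tested at 0.07% one hour after driving -> roughly {estimate:.3f}% at the time of driving")
```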
# Photovoltaic assay
The photovoltaic assay, used only in the dated Intoximeter 3000, is a form of breath testing rarely encountered today. The process works by using photocells to analyze the color change of a redox (oxidation-reduction) reaction. A breath sample is bubbled through an aqueous solution of sulfuric acid, potassium dichromate, and silver nitrate. The silver nitrate acts as a catalyst, allowing the alcohol to be oxidized at an appreciable rate, while the sulfuric acid provides the acidic conditions required for the reaction. In solution, ethanol reacts with the potassium dichromate, reducing the dichromate ion to the chromium (III) ion. This reduction results in a change of the solution's colour from red-orange to green. The reacted solution is compared to a vial of nonreacted solution by a photocell, which creates an electric current proportional to the degree of the colour change; this current moves the needle that indicates BAC.
Like other methods, breath testing devices using chemical analysis are somewhat prone to false readings. Compounds which have compositions similar to ethanol, for example, could also act as reducing agents, creating the necessary color change to indicate increased BAC.
# Myths
A common myth is that breath testers can be "fooled" (that is, made to generate estimates making one's blood alcohol content appear lower) by using certain substances.
An episode of the Discovery Channel's MythBusters tested substances usually recommended in this practice—including breath mints, mouthwash, and onion—and found them to be ineffective. Adding an odor to mask the smell of alcohol might fool a person, but does not change the actual alcohol concentration in the body or on the breath. Interestingly, substances that might actually reduce the BAC reading were not tested on the show. These include a bag of activated charcoal concealed in the mouth (to absorb alcohol vapor), an oxidizing gas (such as N2O, Cl2, O3, etc.) which would fool a fuel cell type detector, or an organic interferent to fool an infra-red absorption detector. The infra-red absorption detector is especially vulnerable to countermeasures, since it only makes measurements at particular discrete wavelengths rather than producing a continuous absorption spectrum as a laboratory instrument would do.
On the other hand, products such as mouthwash or breath spray can "fool" breath machines by significantly raising test results. Listerine, for example, contains 27% alcohol; because the breath machine will assume the alcohol is coming from alcohol in the blood diffusing into the lung rather than directly from the mouth, it will apply a "partition ratio" of 2100:1 in computing blood alcohol concentration—resulting in a false high test reading. To counter this, officers are not supposed to administer a PBT for 15 minutes after the subject eats, vomits, or puts anything in their mouth. In addition, most instruments require that the individual be tested twice at least two minutes apart. Mouthwash or other mouth alcohol will have dissipated after two minutes and cause the second reading to disagree with the first, requiring a retest. (Also see the discussion of the "slope parameter" of the Intoxilyzer 5000 in the "Mouth Alcohol" section above.)
This was clearly illustrated in a study conducted with Listerine mouthwash on a breath machine and reported in an article entitled "Field Sobriety Testing: Intoxilyzers and Listerine Antiseptic," published in the July 1985 issue of The Police Chief (p. 70). Seven individuals were tested at a police station, with readings of .00%. Each then rinsed his mouth with 20 milliliters of Listerine mouthwash for 30 seconds in accordance with directions on the label. All seven were then tested on the machine at intervals of one, three, five and ten minutes. The results indicated an average reading of .43 blood-alcohol concentration, indicating a level that, if accurate, approaches lethal proportions. After three minutes, the average level was still .20, despite the absence of any alcohol in the system. Even after five minutes, the average level was .11.
In another study, reported in 8(22) Drinking/Driving Law Letter 1, a scientist tested the effects of Binaca breath spray on an Intoxilyzer 5000. He performed 23 tests with subjects who sprayed their throats, and obtained readings as high as .81 — far beyond lethal levels. The scientist also noted that the effects of the spray did not fall below detectable levels until after 18 minutes. | Breathalyzer
A breathalyzer (or breathalyser) is a device for estimating blood alcohol content (BAC) from a breath sample. "Breathalyzer" is the brand name of a series of models made by one manufacturer of these instruments (originally Smith and Wesson, later it was sold to National Draeger), but has become a genericized trademark for all such instruments. Intoxilyzer, Intoximeter, AlcoScan, Alcotest, AlcoSensor, Alcolizer, Datamaster are the other most common brand names in use today. In Canada, a preliminary non-evidentiary screening device can be approved by Parliament as an approved screening device and an evidentiary breath instrument can be similarly designated as an approved instrument. The U.S. Government's National Highway Traffic Safety Administration maintains a "Conforming Products List" of breath alcohol devices approved for evidentiary use [1], as well as for preliminary screening use [2].
# Origins
In late 1927, in a case in Marlborough, England, a Dr. Gorsky, Police Surgeon, asked a suspect to inflate a football bladder with his breath. Since the 2 liters of the man's breath contained 1.5 ml of ethanol, Dr. Gorsky testified before the court that the defendant was "50% drunk".[1] Though technologies for detecting alcohol vary, it's widely accepted that Dr. Robert Borkenstein (1912–2002), a captain with the Indiana State Police and later a professor at Indiana University at Bloomington, is regarded as the first to create a device that measures a subject's blood alcohol level based on a breath sample. In 1954, Borkenstein invented his breathalyzer, which used chemical oxidation and photometry to determine alcohol concentration. Subsequent breathalyzers have converted primarily to infrared spectroscopy. The invention of the breathalyzer provided law enforcement with a non-invasive test providing immediate results to determine an individual's BAC at the time of testing. It does not, however, determine an individual's level of intoxication, as this varies by a subject's individual alcohol tolerance. Also, the BAC test result itself can vary between individuals consuming identical amounts of alcohol due to gender, weight, genetic pre-disposition,
# Law enforcement
Breath analyzers do not directly measure blood alcohol content or concentration, which requires the analysis of a blood sample. Instead, they estimate BAC indirectly by measuring the amount of alcohol in one's breath. Two form factors are most prevalent. Desktop analyzers generally utilize infrared spectrophotometer technology, electrochemical fuel cell technology, or a combination of the two. Hand-held field testing devices, are generally based on electrochemical fuel cell analysis, and depending upon jurisdiction may be used by officers in the field as a form of "field sobriety test" commonly called PBT (preliminary breath test) or PAS (preliminary alcohol screening), or as evidential devices in POA (point of arrest) testing.
# Consumer use
There are a number of models of breath alcohol analyzers that are intended for the consumer market. These hand-held devices are less expensive and can be much smaller than the devices used by law enforcement, and are less accurate, but can still give a useful indication of the user’s BAC. Almost all of these devices use less expensive tin-oxide semiconductor alcohol sensors (frequently called "Taguchi cell" based sensors), which are not as stable as fuel cell sensors or infrared devices, and are more prone to false positives. Breath alcohol analyzers sold to consumers in the United States are required to be certified by the Food and Drug Administration, while those used by law enforcement must be approved by the Department of Transportation's National Highway Traffic Safety Administration.
# Breath test evidence
The breath alcohol reading is used in criminal prosecutions in two ways. Unless the suspect refuses to submit to chemical testing, he will be charged with a violation of the illegal per se law: that is, it is a misdemeanor throughout the United States to drive a vehicle with a BAC of .08% or higher (.02% in most states for drivers under 21). One exception is the State of Wisconsin, where a first time drunk driving offense is normally a civil ordinance violation.[2] The breathalyzer reading will be offered as evidence of that crime, although the issue is what the BAC was at the time of driving rather than at the time of the test. The suspect will also be charged with driving under the influence of alcohol (sometimes referred to as driving or operating while intoxicated). While BAC tests are not necessary to prove a defendant was under the influence, laws in most states require the jury to presume that he was under the influence if his BAC was over .08% when driving. This is a rebuttable presumption, however: the jury can disregard the test if they find it unreliable or if other evidence establishes a reasonable doubt.
If a defendant refuses to take a breathalyzer test, most states allow evidence of that fact to be introduced; in many states, the jury is instructed that it can draw a permissible inference of "consciousness of guilt." Many states also operate under "implied consent," meaning that anyone issued a driver's license in the state agrees to submit to a test of his or her breath, blood, or urine when requested by a law enforcement officer. Failure to submit to such a test may result in automatic suspension of the driver's license even without a drunk driving conviction, and may also serve to enhance the penalties for a drunk driving conviction. In drunk driving cases in Massachusetts and Delaware, if the defendant refuses the breathalyzer there can be no mention of the test during the trial.
Instruments such as the Intoxilyzer 5000 are known as Evidentiary Breath Tests (EBTs) and generally produce court-admissible results. Other instruments, such as the SD-2 by CMI or the Alcosensor III by Intoximeters, are known as Preliminary Breath Tests (PBTs), and their results, while valuable to an officer attempting to establish probable cause for a drunk driving arrest, are generally not admissible in court. Some states do not permit data or "readings" from hand-held PBTs to be presented as evidence in court. They are generally admissible, if at all, only to show the presence of alcohol or as a pass-fail field sobriety test to help determine probable cause to arrest. South Dakota does not permit data from any type or size of breath tester but relies entirely on blood tests to ensure accuracy.
# Common sources of error
Breath testers can be very sensitive to temperature and will give false readings if not adjusted or recalibrated to account for ambient air temperature. The temperature of the subject is also very important.
Breathing pattern can also significantly affect breath test results. One study found that the BAC readings of subjects decreased 11 to 14% after running up one flight of stairs and 22–25% after doing so twice. Another study found a 15% decrease in BAC readings after vigorous exercise or hyperventilation. Hyperventilation for 20 seconds has been shown to lower the reading by approximately 32%. On the other hand, holding one's breath for 30 seconds can increase the breath test result by about 28%.
Some breath analysis machines assume a hematocrit (cell volume of blood) of 47%. However, hematocrit values range from 42 to 52% in men and from 37 to 47% in women. A person with a lower hematocrit will have a falsely high BAC reading.
Failure of law enforcement officers to use the devices properly, or of administrators to have the machines properly maintained and recalibrated as required, is a particularly common source of error. However, most states have very strict guidelines regarding officer training and instrument maintenance and calibration.
Research indicates that breath tests can vary by at least 15% from actual blood alcohol concentration. An estimated 23% of individuals tested will have a BAC reading higher than their true BAC. Police in Victoria, Australia, use breathalyzers that give a recognized 20 percent tolerance on readings. Noel Ashby, former Victoria Police Assistant Commissioner (Traffic & Transport), claims that this tolerance is to allow for different body types.[3]
## Calibration
Most handheld breathalyzers use a semiconductor (tin-oxide) sensor to determine the blood alcohol concentration. Without proper software calibration, the accuracy of these sensors degrades over time and with repeated use. Calibration re-establishes the correspondence between the sensor's raw output and a known alcohol concentration so that readings remain accurate.
New advances in breathalyzer design allow some models to self-calibrate, or to have the sensor module easily replaced, without the need to send the unit to a calibration lab.
## Non-specific analysis
One major problem with older breathalyzers is non-specificity: the machines identify not only the ethyl alcohol (ethanol) found in alcoholic beverages, but also other substances similar in molecular structure or reactivity.
The oldest breathalyzer models pass breath through a solution of potassium dichromate, which oxidizes ethanol into acetic acid, changing color in the process. A monochromatic light beam is passed through this sample, and a detector records the change in intensity and, hence, the change in color, which is used to calculate the percent alcohol in the breath. However, since potassium dichromate is a strong oxidizer, it can oxidize many other compounds that contain alcohol groups, producing false positives.
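The color change referred to above reflects the reduction of orange dichromate to green chromium(III) as ethanol is oxidized to acetic acid. The balanced equation below is standard redox chemistry added here for illustration; it is not taken from any particular instrument's documentation.

```latex
% Oxidation of ethanol by acidified potassium dichromate (illustrative)
3\,\mathrm{CH_3CH_2OH} + 2\,\mathrm{Cr_2O_7^{2-}} + 16\,\mathrm{H^+}
  \longrightarrow 3\,\mathrm{CH_3COOH} + 4\,\mathrm{Cr^{3+}} + 11\,\mathrm{H_2O}
```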
Infrared-based breathalyzers project an infrared beam of radiation through the captured breath in the sample chamber and detect the absorbance of the compound as a function of the wavelength of the beam, producing an absorbance spectrum that can be used to identify the compound, as the absorbance is due to the harmonic vibration and stretching of specific bonds in the molecule at specific wavelengths (see infrared spectroscopy). The characteristic bond of alcohols in infrared is the O-H bond, which gives a strong absorbance at a short wavelength. The more light is absorbed by compounds containing the alcohol group, the less reaches the detector on the other side, and the higher the reading. Other groups, most notably aromatic rings and carboxylic acids, can give similar absorbance readings [3].
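Both the photometric and the infrared approaches ultimately rely on the Beer-Lambert relation between absorbed light and concentration. The sketch below illustrates only that relation; the molar absorptivity and path length are invented placeholders, not constants from any real breath instrument.

```python
import math

def concentration_from_absorbance(intensity_in, intensity_out, molar_absorptivity, path_length_cm):
    """Beer-Lambert law: A = log10(I_in / I_out) = epsilon * l * c, hence c = A / (epsilon * l)."""
    absorbance = math.log10(intensity_in / intensity_out)
    return absorbance / (molar_absorptivity * path_length_cm)

# Hypothetical numbers purely for illustration.
c = concentration_from_absorbance(
    intensity_in=1.00,        # light entering the sample chamber (arbitrary units)
    intensity_out=0.80,       # light reaching the detector after absorption
    molar_absorptivity=50.0,  # placeholder epsilon in L/(mol*cm)
    path_length_cm=20.0,      # placeholder chamber length
)
print(f"estimated ethanol concentration: {c:.2e} mol/L")
```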
## Interfering compounds
Some natural and volatile interfering compounds do exist, however. For example, the National Highway Traffic Safety Administration (NHTSA) has found that dieters and diabetics may have acetone levels hundreds or even thousands of times higher than those in others. Acetone is one of the many substances that can be falsely identified as ethyl alcohol by some breath machines. However, newer machines such as the Draeger Breathalyzer use technology that filters out substances like acetone.
A study in Spain showed that metered-dose inhalers (MDIs) used in asthma treatment are also a cause of false positives in breath machines.
Substances in the environment can also lead to false BAC readings. For example, methyl tert-butyl ether (MTBE), a common gasoline additive, has been alleged anecdotally to cause false positives in persons exposed to it. Tests have shown this to be true for older machines; however, newer machines detect this interference and compensate for it [4]. Any number of other products found in the environment or workplace can also cause erroneous BAC results. These include compounds found in lacquer, paint remover, celluloid, gasoline, and cleaning fluids, especially ethers, alcohols, and other volatile compounds.
## Homeostatic variables
Breathalyzers assume that the subject being tested has a 2100-to-1 partition ratio [5] when converting the alcohol measured in the breath into an estimate of the alcohol in the blood. In other words, the instrument measures the mass of alcohol in a fixed volume of exhaled breath and assumes that the number of grams of alcohol in 2100 ml of breath equals the number of grams of alcohol in 100 ml of blood; the blood-to-breath ratio of alcohol is taken to be 2100 to 1. However, this assumed "partition ratio" varies from 1300:1 to 3100:1 or wider among individuals and within a given individual over time. Assuming a true (and legal) blood-alcohol concentration of .07%, for example, a person with a partition ratio of 1500:1 would have a breath test reading of about .10%, over the legal limit.
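The arithmetic behind the example above can be sketched as follows, assuming only that the instrument multiplies the measured breath alcohol concentration by a fixed 2100:1 ratio; this is an illustration of the calculation, not the firmware of any actual device.

```python
ASSUMED_RATIO = 2100.0  # blood:breath partition ratio assumed by the instrument

def displayed_bac(true_bac, subject_ratio, assumed_ratio=ASSUMED_RATIO):
    """BAC the instrument reports when the subject's real partition ratio differs from
    the assumed one: breath alcohol = true_bac / subject_ratio, which the instrument
    then scales back up by the assumed ratio."""
    breath_alcohol = true_bac / subject_ratio
    return breath_alcohol * assumed_ratio

print(round(displayed_bac(0.07, 1500), 3))  # 0.098 -- the roughly .10% reading in the example
print(round(displayed_bac(0.07, 3100), 3))  # 0.047 -- a high-ratio subject reads low
```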
Most individuals do, in fact, have a partition ratio close to 2100-to-1, in accordance with William Henry's Law (1803), which states that when the water solution of a volatile compound is brought into equilibrium with air, there is a fixed ratio between the concentration of the compound in air and its concentration in water, and that this ratio is constant at a given temperature. The human body averages 37 degrees Celsius, and breath leaves the mouth at a temperature of about 34 degrees Celsius. Alcohol in the body obeys Henry's Law, as it is a volatile compound that diffuses in body water. To ensure that variables such as fever and hypothermia cannot be argued to have influenced the results in a way harmful to the accused, the instrument is calibrated at a ratio of 2100:1, underestimating the average subject's BAC by about 9 percent. For a fever to produce a significant overestimate, it would have to be so high that the subject would more likely be in a hospital than driving in the first place. Studies suggest that about 1.8% of the population have a partition ratio below 2100; for these individuals a machine using a 2100-to-1 ratio will tend to over-report the true BAC. As much as 14% of the population has a partition ratio above 2100, causing the machine to under-report their BAC.
Further, the assumption that the test subject's partition ratio will be average—that there will be 2100 parts in the blood for every part in the breath—means that accurate analysis of a given individual's blood alcohol by measuring breath alcohol is difficult, as the ratio varies considerably.
Variance in how much one breathes out can also give false readings, usually low [6]. This is due to biological variance in breath alcohol concentration as a function of the volume of air in the lungs, an example of a factor which interferes with the liquid-gas equilibrium assumed by the devices. The presence of volatile components is another example of this; mixtures of volatile compounds can be more volatile than their components, which can create artificially high levels of ethanol (or other) vapors relative to the normal biological blood/breath alcohol equilibrium.
## Mouth alcohol
One of the most common causes of falsely high breathalyzer readings is the existence of mouth alcohol. In analyzing a subject's breath sample, the breathalyzer's internal computer is making the assumption that the alcohol in the breath sample came from alveolar air—that is, air exhaled from deep within the lungs. However, alcohol may have come from the mouth, throat or stomach for a number of reasons. To help guard against mouth-alcohol contamination, certified breath test operators are trained to carefully observe a test subject for at least 15-20 minutes before administering the test.
The problem with mouth alcohol being analyzed by the breathalyzer is that it was not absorbed through the stomach and intestines and did not pass through the blood to the lungs. In other words, the machine's computer mistakenly applies the "partition ratio" (see above), multiplying the measured breath alcohol by 2100. Consequently, a very tiny amount of alcohol from the mouth, throat or stomach can have a significant impact on the breath alcohol reading.
Other than recent drinking, the most common source of mouth alcohol is belching or burping, or in medical terms "eructation." This causes liquids and/or gases from the stomach, including any alcohol, to rise into the soft tissue of the esophagus and oral cavity, where they remain until they have dissipated. The American Medical Association concludes in its Manual for Chemical Tests for Intoxication (1959): "True reactions with alcohol in expired breath from sources other than the alveolar air (eructation, regurgitation, vomiting) will, of course, vitiate the breath alcohol results." For this reason, police officers are supposed to keep a DUI suspect under observation for at least 15 minutes prior to administering a breath test. Instruments such as the Intoxilyzer 5000 also feature a "slope" parameter. This parameter detects any decrease in alcohol concentration of .006 g per 210 L of breath within six-tenths of a second, a pattern indicative of residual mouth alcohol, and triggers an "invalid sample" warning that notifies the operator of the presence of mouth alcohol. PBTs, however, feature no such safeguard.
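The idea behind such a slope check can be sketched as follows. This is a toy illustration of the principle described above (a fall of 0.006 g/210 L within 0.6 seconds flags mouth alcohol); the sampling times and values are invented, and the code is not the manufacturer's actual algorithm.

```python
def flags_mouth_alcohol(samples, window_s=0.6, drop_threshold=0.006):
    """samples: list of (time_s, concentration_g_per_210L) recorded during one exhalation.
    Returns True if the concentration falls by at least drop_threshold within any span
    of window_s seconds, the pattern attributed above to residual mouth alcohol."""
    for i, (t_start, c_start) in enumerate(samples):
        for t_later, c_later in samples[i + 1:]:
            if t_later - t_start > window_s:
                break
            if c_start - c_later >= drop_threshold:
                return True
    return False

# Invented exhalation profile: the reading spikes early, then falls off quickly.
profile = [(0.0, 0.110), (0.2, 0.107), (0.4, 0.102), (0.6, 0.096), (0.8, 0.092)]
print(flags_mouth_alcohol(profile))  # True: a drop of about 0.008 within 0.4 s exceeds the threshold
```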
Acid reflux, or gastroesophageal reflux disease, can greatly exacerbate the mouth alcohol problem. The stomach is normally separated from the throat by a valve, but when this valve becomes herniated, there is nothing to stop the liquid contents in the stomach from rising and permeating the esophagus and mouth. The contents—including any alcohol—are then later exhaled into the breathalyzer.[4]
Mouth alcohol can also be created in other ways. Dentures, for example, will trap alcohol. Periodontal disease can create pockets in the gums which hold alcohol for longer periods. Passionate kissing with an intoxicated person is also known to produce false results due to residual alcohol in the mouth. Recent use of mouthwash or breath freshener, possibly to disguise the smell of alcohol when being pulled over by police, can have the same effect, since these products often contain fairly high levels of alcohol.
## Testing during absorptive phase
One of the most common sources of error in breath alcohol analysis is simply testing the subject too early—while his or her body is still absorbing the alcohol.[7] Absorption of alcohol continues for anywhere from 45 minutes to two hours after drinking or even longer. Peak absorption normally occurs within an hour; this can range from as little as 15 minutes to as much as two-and-a-half hours.
During this absorptive phase, the distribution of alcohol throughout the body is not uniform; uniformity of distribution—called equilibrium—will not occur until absorption is complete. In other words, some parts of the body will have a higher blood alcohol content (BAC) than others. One aspect of this non-uniformity is that the BAC in arterial blood will be higher than in venous blood (laws generally require blood samples to be venous). During peak absorption arterial BAC can be as much as 60 percent higher than venous.
## Retrograde extrapolation
The breathalyzer test is usually administered at a police station, commonly an hour or so after the arrest. Although this gives the BAC at the time of testing, it does not by itself answer the question of what the BAC was at the time of driving. The prosecution typically provides evidence of this in the form of retrograde extrapolation. Usually presented as an expert opinion, this involves projecting the BAC backwards in time, that is, estimating the probable BAC at the time of driving by applying a mathematical formula, commonly based on the Widmark factor. This process, however, has been the subject of considerable criticism.
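A minimal sketch of how such an extrapolation is typically presented is shown below. It assumes a constant elimination ("burn-off") rate; the 0.015% per hour figure is a commonly quoted average rather than a value taken from this article, the true rate varies widely between individuals, and the calculation is not valid if the subject was still absorbing alcohol at the time of driving (see the preceding section).

```python
def bac_at_driving(bac_at_test, hours_elapsed, elimination_rate_per_hour=0.015):
    """Project a measured BAC backwards in time by adding back the alcohol assumed to
    have been eliminated between the time of driving and the time of the test."""
    return bac_at_test + elimination_rate_per_hour * hours_elapsed

# Example: a 0.07% reading taken 1.5 hours after the traffic stop.
print(bac_at_driving(0.07, 1.5))  # roughly 0.0925 under these assumptions
```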
# Photovoltaic assay
The photovoltaic assay, used only in the dated Intoximeter 3000, is a form of breath testing rarely encountered today. The process works by using photocells to analyze the color change of a redox (oxidation-reduction) reaction. A breath sample is bubbled through an aqueous solution of sulfuric acid, potassium dichromate, and silver nitrate. The silver nitrate acts as a catalyst, allowing the alcohol to be oxidized at an appreciable rate, while the sulfuric acid provides the acidic conditions needed for the reaction. In solution, ethanol reacts with the potassium dichromate, reducing the dichromate ion to the chromium (III) ion. This reduction changes the solution's colour from red-orange to green. The reacted solution is compared to a vial of nonreacted solution by a photocell, which creates an electric current proportional to the degree of the colour change; this current moves the needle that indicates BAC.
As with other methods, breath testing devices based on chemical analysis are somewhat prone to false readings. Compounds with structures or reactivity similar to ethanol, for example, could also act as reducing agents, creating the color change needed to indicate an increased BAC.
# Myths
A common myth is that breath testers can be "fooled" (that is, made to generate estimates making one's blood alcohol content appear lower) by using certain substances.
An episode of the Discovery Channel's MythBusters tested substances usually recommended in this practice—including breath mints, mouthwash, and onion—and found them to be ineffective. Adding an odor to mask the smell of alcohol might fool a person, but does not change the actual alcohol concentration in the body or on the breath. Interestingly, substances that might actually reduce the BAC reading were not tested on the show. These include a bag of activated charcoal concealed in the mouth (to absorb alcohol vapor), an oxidizing gas (such as N2O, Cl2, O3, etc.) which would fool a fuel cell type detector, or an organic interferent to fool an infra-red absorption detector. The infra-red absorption detector is especially vulnerable to countermeasures, since it only makes measurements at particular discrete wavelengths rather than producing a continuous absorption spectrum as a laboratory instrument would do.
On the other hand, products such as mouthwash or breath spray can "fool" breath machines by significantly raising test results. Listerine, for example, contains 27% alcohol; because the breath machine will assume the alcohol is coming from alcohol in the blood diffusing into the lung rather than directly from the mouth, it will apply a "partition ratio" of 2100:1 in computing blood alcohol concentration—resulting in a false high test reading. To counter this, officers are not supposed to administer a PBT for 15 minutes after the subject eats, vomits, or puts anything in their mouth. In addition, most instruments require that the individual be tested twice at least two minutes apart. Mouthwash or other mouth alcohol will have dissipated after two minutes and cause the second reading to disagree with the first, requiring a retest. (Also see the discussion of the "slope parameter" of the Intoxilyzer 5000 in the "Mouth Alcohol" section above.)
This was clearly illustrated in a study conducted with Listerine mouthwash on a breath machine and reported in an article entitled "Field Sobriety Testing: Intoxilyzers and Listerine Antiseptic," published in the July 1985 issue of The Police Chief (p. 70). Seven individuals were tested at a police station, all with readings of .00%. Each then rinsed his mouth with 20 milliliters of Listerine mouthwash for 30 seconds in accordance with directions on the label. All seven were then tested on the machine at intervals of one, three, five and ten minutes. The results showed an average reading of .43% blood-alcohol concentration, a level that, if accurate, approaches lethal proportions. After three minutes, the average level was still .20%, despite the absence of any alcohol in the system. Even after five minutes, the average level was .11%.
In another study, reported in 8(22) Drinking/Driving Law Letter 1, a scientist tested the effects of Binaca breath spray on an Intoxilyzer 5000. He performed 23 tests with subjects who sprayed their throats, and obtained readings as high as .81 — far beyond lethal levels. The scientist also noted that the effects of the spray did not fall below detectable levels until after 18 minutes.
Bredt's rule
Bredt's rule is an empirical observation in organic chemistry that states that a double bond cannot be placed at the bridgehead of a bridged ring system, unless the rings are large enough.
For example, two of the following isomers of norbornene violate Bredt's rule, which makes them too unstable to prepare:
In the figure, the bridgehead atoms involved in Bredt's rule violation are highlighted in red.
Bredt's rule is a consequence of the fact that having a double bond on a bridgehead would be equivalent to having a trans double bond on a ring, which is not possible for small rings (fewer than eight atoms) due to ring strain, and angle strain in particular.
Bredt's rule can be useful for predicting which isomer is obtained from an elimination reaction in a bridged ring system. It can also be applied to reaction mechanisms that go via carbocations and, to a lesser degree, via free radicals, because these intermediates, like carbon atoms involved in a double bond, prefer to have a planar geometry with 120 degree angles and sp2 hybridization.
An anti-Bredt molecule is one that is found to exist and be stable (within certain parameters) despite this rule. A recent (2006) example of such a molecule is 2-quinuclidonium tetrafluoroborate.
# History
The first publication of what would later become known as Bredt's rule was in an article by Julius Bredt in 1924 about the chemistry of naturally occurring bicyclic terpenes. For an extensive review of this topic, see the article by Shea.
Breech birth
# Overview
A breech birth (also known as breech presentation) refers to the position of the baby in the uterus such that it will be delivered buttocks first as opposed to the normal head first position.
# Etiology
Certain factors can encourage a breech presentation. These include multiple (or multifoetal) pregnancy (twins, triplets or more), excessive amounts of amniotic fluid, hydrocephaly, anencephaly, a very short umbilical cord, and some uterine abnormalities. Babies with congenital abnormalities are also more likely to present by the breech. It has been postulated that the baby normally assumes a head down presentation because of the weight of the baby's head, but since the mass of the fetal head is about the same as that of the fetal breech, a more likely explanation is that the enlarging fetus becomes progressively restricted in its movements and simply becomes entrapped. The shape of the uterus is a more likely determinant of the final fetal presentation, as uterine shape anomalies are strong predictors of breech presentation and other malpresentations.
# Epidemiology
Researchers generally cite a breech presentation frequency at term of 3-4% at the onset of labour though some claim a frequency as high as 7%. When labour is premature, the incidence of breech presentation is higher. At 28 weeks' gestation 25% of babies are breech, and the percentage decreases approaching term (40 weeks' gestation).
# Categories
There are four main categories of breech births:
- Frank breech - the baby's bottom comes first, and his or her legs are flexed at the hip and extended at the knees (with feet near the ears). 65-70% of breech babies are in the frank breech position.
- Complete breech - the baby's hips and knees are flexed so that the baby is sitting crosslegged, with feet beside the bottom.
- Footling breech - one or both feet come first, with the bottom at a higher position. This is rare at term but relatively common with premature fetuses.
- Kneeling breech - the baby is in a kneeling position, with one or both legs extended at the hips and flexed at the knees. This is extremely rare.
# Process of breech birth
As in labour with a baby in a normal head-down position, uterine contractions typically occur at regular intervals and gradually cause the cervix to become thinner and to open. In the more common breech presentations, the baby’s bottom (rather than feet or knees) is what is first to descend through the maternal pelvis and emerge from the vagina.
At the beginning of labour, the baby is generally in an oblique position, facing either the right or left side of the mother's back. In the term baby, the bottom is about the same size as the head, so descent proceeds much as it does for a presenting fetal head; delay in descent is a cardinal sign of possible problems with the delivery of the head.
In order to begin the birth, internal rotation needs to occur. This happens when the mother's pelvic floor muscles cause the baby to turn so that it can be born with one hip directly in front of the other. At this point the baby is facing one of the mother's inner thighs. Then, the shoulders follow the same path as the hips did. At this time the baby usually turns to face the mother's back. Next occurs external rotation, which is when the shoulders emerge as the baby’s head enters the maternal pelvis. The combination of maternal muscle tone and uterine contractions cause the baby’s head to flex, chin to chest. Then the face emerges, and finally the back of the baby's head.
Due to the increased pressure during labour and birth, it is normal for the baby's leading hip to be bruised and genitalia to be swollen; this usually resolves shortly after birth.
Babies who assumed the frank breech position in utero may continue to hold their legs in this position for some days after birth - this is normal.
# Risks
Umbilical cord prolapse may occur, particularly in the complete, footling, or kneeling breech. This is caused by the lowermost parts of the baby not completely filling the space of the dilated cervix. When the amniotic sac ruptures (the waters break), it is possible for the umbilical cord to drop down and become compressed. This complication severely diminishes oxygen flow to the baby, and the baby must be delivered immediately (usually by Caesarean section) so that he or she can breathe. If there is a delay in delivery, the brain can be damaged. Among full-term, head down babies, cord prolapse is quite rare, occurring in 0.4 percent. Among frank breech babies the incidence is 0.5 percent, among complete breeches 4-6 percent, and among footling breeches 15-18 percent.
Head entrapment is caused by the failure of the fetal head to negotiate the maternal pelvis. At full term, the bitrochanteric diameter (the distance between the outer points of the hips) is about the same as the biparietal diameter (the transverse diameter of the skull); simply put, the hips are about the same size as the head. The relatively large buttocks dilate the cervix as effectively as the head does in the typical head-down presentation. In a premature fetus, however, the head is significantly larger relative to the buttocks. If the baby is premature, it may therefore be possible for the baby's body to emerge while the cervix has not dilated enough for the head to emerge.
Because the umbilical cord—the baby’s oxygen supply—is significantly compressed while the head is in the pelvis during a breech birth, it is important that the delivery of the aftercoming fetal head not be delayed. The head only just fits through the pelvis, and if the arm is extended alongside the head, delivery will not occur. If this occurs, the Lovset manoeuvre may be employed, or the arm may be manually brought to a position in front of the chest. The Lovset Manoeuvre works by rotating the fetal body by holding the fetal pelvis. Twisting the body such that an arm trails behind the shoulder, it will tend to cross down over the face to a position where it can be reached by the obstetrician's finger, and brought to a position below the head. A similar rotation in the opposite direction is made to deliver the other arm. In order to present the smallest diameter (9.5 cm) to the pelvis, the baby’s head must be flexed (chin to chest). If the head is in a deflexed position, the risk of entrapment is increased. Uterine contractions and maternal muscle tone encourage the head to flex. If the birth attendant pulls on the baby’s body, this action may deflex the head.
Oxygen deprivation may occur from either cord prolapse or prolonged compression of the cord during birth, as in head entrapment. If oxygen deprivation is prolonged, it may cause permanent neurological damage or death.
Injury to the brain and skull may occur due to the rapid passage of the baby's head through the mother's pelvis, which causes rapid decompression of the baby's head. In contrast, a baby going through labor in the head-down position usually experiences gradual molding (temporary reshaping of the skull) over the course of a few hours. This sudden compression and decompression in breech birth may cause no problems at all, but it can injure the brain, and the injury is more likely in preterm babies. The fetal head may be controlled by a special two-handed grip called the Mauriceau-Smellie-Veit manoeuvre or by the elective application of forceps. These are of value in controlling the rate of delivery of the head and so reduce sudden decompression.
Squeezing the baby’s abdomen can damage internal organs. Positioning the baby incorrectly while using forceps to deliver the aftercoming head can damage the spine or spinal cord. It is important for the birth attendant to be knowledgeable, skilled, and experienced with all variations of breech birth. In the United States, because Cesarean section is increasingly being used for breech babies, fewer and fewer birth attendants are developing these skills.
Injury may occur even if a birth attendant uses appropriate interventions during labour. A majority of full-term frank breech babies would be born without problems even without assistance. However, in a minority of cases, expert assistance is needed for the baby to be born safely, and it is this minority that determines the safety of choosing vaginal delivery of the breech. A fetal death rate as low as 1% might be acceptable to some societies if a greater benefit could accrue. Take a country like the United States, with a population of 300 million and a birth rate of 14.14 per 1,000, and assume a 3% breech rate and the aforementioned 1% mortality: this would result in an attributable death rate from breech delivery of about 1,273 babies per year. "Attributable" implies that the deaths occurred because of the selection of vaginal delivery and not from concurrent problems, such as congenital abnormalities or prematurity.
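The attributable-mortality arithmetic above can be reproduced directly. The sketch below simply restates the figures assumed in the preceding paragraph; it is illustrative arithmetic, not an epidemiological model.

```python
population = 300_000_000         # assumed US population, from the text
birth_rate_per_1000 = 14.14      # assumed births per 1,000 population per year
breech_rate = 0.03               # assumed proportion of births that are breech
attributable_mortality = 0.01    # assumed 1% fetal death rate attributable to vaginal breech delivery

annual_births = population * birth_rate_per_1000 / 1000
breech_births = annual_births * breech_rate
attributable_deaths = breech_births * attributable_mortality
print(round(attributable_deaths))  # about 1,273 per year, matching the text
```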
# Factors influencing the safety
- Type of breech presentation - the frank breech has the most favorable outcomes in vaginal birth, with many studies suggesting no difference in outcome compared to head down babies. (Some studies, however, find that planned caesarean sections for all breech babies improve outcome. The difference may rest in part on the skill of the doctors who delivered babies in different studies.) Complete breech presentation is the next most favorable position, but these babies sometimes shift and become footling breeches during labour. Footling and kneeling breeches have a higher risk of cord prolapse and head entrapment.
- Parity - Parity refers to the number of times a woman has given birth before. If a woman has given birth vaginally, her pelvis has "proven" it is big enough to allow a baby of that baby's size to pass through it. However, a head-down baby's head often molds (shifts its shape to fit the maternal pelvis) and so may present a smaller diameter than the same size baby born breech. Research on the issue has been contradictory as far as whether vaginal breech birth is safer when the mother has given birth before, or not.
- Fetal size in relation to maternal pelvic size - If the mother's pelvis is roomy and the baby is not large, this is favorable for vaginal breech delivery. However, prenatal estimates of the size of the baby and the size of the pelvis are unreliable.
- Hyperextension of the fetal head - this can be evaluated with ultrasound. Less than 5% of breech babies have their heads in the "star gazing" position, face looking straight upwards and the back of the head resting against the back of the neck. Caesarean delivery is absolutely necessary, because vaginal birth with the baby's head in this position confers a high risk of spinal cord trauma and death.
- Maturity of the Baby - Premature babies appear to be at higher risk of complications if delivered vaginally than if delivered by caesarean section.
- Progress of Labour - A spontaneous, normally progressing, straightforward labour requiring no intervention is a favourable sign.
- Second twins - If a first twin is born head down and the second twin is breech, the chances are good that the second twin can have a safe breech birth.
- Birth attendant's skill (and experience with breech birth) - The skill of the doctor or midwife and the number of breech births previously assisted is of crucial importance. Many of the dangers in vaginal birth for breech babies come from mistakes made by birth attendants.
# Diagnosis
Early in pregnancy the baby changes position freely and frequently. By 28 weeks' gestation about 30% of babies present by the breech; this falls to around three percent at full term. The mother carrying a breech fetus often feels that there is a hard, round part of the baby under her ribs; she feels kicking in the lower part of her uterus or around her umbilicus rather than at the top of her uterus; she may feel the baby hiccuping just under her ribs and may report that something feels different compared to previous pregnancies.
The midwife or doctor can usually feel the baby's position by palpating the mother's abdomen (Leopold's maneuvers). The baby's head and bottom may feel similar, but the head is characteristically ballotable.
Listening to the baby's heartbeat with a stethoscope or fetoscope can also raise suspicion that the baby might be breech. Hearing the heartbeat above the mother's umbilicus suggests a breech presentation. Listening to the fetal heartbeat with an ultrasound-based electronic device gives similar information. There is no change in the symphysiofundal height (SFH, the measurement from the pubic bone to the top of the uterus) that is characteristic of a breech presentation. If it is late in pregnancy and the cervix has opened slightly, the midwife or doctor may be able to identify the presenting part by vaginal examination. However, even then the similarity in palpation between the sacrum and the fetal head makes this a relatively unreliable examination in all but the most obvious of cases; palpating the sagittal suture, which runs between the baby's unfused parietal skull bones, is helpful. An ultrasound scan can visualize the fetus and reveal its position and is the most reliable test.
# Turning the baby to avoid breech birth
There are many methods which have been attempted with the aim of turning breech babies, with varying degrees of success:
- External cephalic version (ECV), where a midwife or doctor turns the baby by manipulating it through the mother's abdomen. ECV has a success rate of between 40 and 70% depending on the practitioner (Goer, 1995, 111). The fetal heart is monitored after the turn attempt, usually in the context of an institutional protocol. Studies show that turning the baby at term (after 36 weeks) is effective in reducing the number of babies born in the breech position, and complications from external cephalic version are rare. Studies have also shown that attempting to turn the baby prior to this point has no impact on the presentation at term.
- Maternal positioning, for a few minutes several times a day, to give the baby more room and encourage turning (including the knee-chest position, the all-fours position, crawling, and lying down with several pillows under the mother's buttocks to elevate her pelvis). Swimming is postulated by some to be of value. A study has shown that there is insufficient evidence as to the benefit of maternal positioning in reducing the incidence of breech presentation.
# Breech birth versus Caesarean section
Caesarean section is the most common way to deliver a breech baby in the USA, Australia, and Great Britain. Like any major surgery, it involves risks. Maternal mortality is increased by a Caesarean section, but it remains a rare complication in the First World; Third World statistics are dramatically different, and mortality is increased significantly. There is a remote risk of injury to the mother's internal organs, injury to the baby, and severe hemorrhage requiring hysterectomy with resultant infertility. More commonly seen are problems with noncatastrophic bleeding, postoperative infection and wound healing. Obesity increases both the section rate and the complication rate.
Overall, large studies have confirmed that elective cesarean section has lower risk to the fetus and a slightly increased risk to the mother, than planned vaginal delivery of the breech.
The same birth injuries that can occur in vaginal breech birth may rarely occur in caesarean breech delivery; a Caesarean breech delivery is still a breech delivery. However, the soft tissues of the uterus and abdominal wall are more forgiving of breech delivery than the hard bony ring of the pelvis. If a caesarean is scheduled in advance (rather than waiting for the onset of labor) there is a risk of accidentally delivering the baby too early, so that the baby might have complications of prematurity; with proper prenatal care, including first trimester ultrasound, such dating errors are almost unheard of. The mother's subsequent pregnancies will be riskier than they would be after a vaginal birth (risk of unexplained stillbirth, uterine rupture, placental abnormalities), and the presence of a uterine scar will be a risk factor for any subsequent pregnancies.
Methohexital
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Black Box Warning
# Overview
Methohexital is a general anesthetic that is FDA approved for the induction and maintenance of anesthesia. There is a Black Box Warning for this drug as shown here. Common adverse reactions include hypotension (cardiovascular), injection site pain (dermatologic), spasmodic movement (musculoskeletal), and cough, hiccoughs, and laryngeal spasm (respiratory).
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
- Anesthesia: induction, 1 to 1.5 mg/kg (50 to 120 mg, mean 70 mg) IV, administered at a rate of 1 mL every 5 seconds (1% solution), which usually provides anesthesia for 5 to 7 minutes; gaseous anesthetics and skeletal muscle relaxants may be administered concomitantly (a worked example of this dose arithmetic follows this list)
- Anesthesia: maintenance, intermittent IV injections of 20 to 40 mg (2 to 4 mL of a 1% solution) as required, usually every 4 to 7 minutes OR by continuous IV drip of 3 mL/min (0.2% solution); individualize flow rate for each patient; for longer surgical procedures, gradual reduction in the administration rate is recommended.
- Procedural sedation: 0.75 to 1 mg/kg IV; can be re-dosed 0.5 mg/kg every 2-5 min as needed
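As a rough illustration of the arithmetic implied by the adult dosing above, the sketch below converts a weight-based induction dose into milligrams, millilitres of the 1% (10 mg/mL) solution, and an approximate injection time at the labeled rate of 1 mL every 5 seconds, and notes the maintenance drip rate in mg/min. The 70 kg example weight and the helper names are illustrative assumptions rather than label content, and the snippet is not a dosing tool.

```python
# Illustrative arithmetic only -- not a dosing tool.
# Label figures used: induction 1 to 1.5 mg/kg IV of a 1% solution,
# injected at roughly 1 mL every 5 seconds; maintenance drip of a
# 0.2% solution at 3 mL/min.

MG_PER_ML_1_PERCENT = 10.0            # 1% w/v = 1 g/100 mL = 10 mg/mL
INJECTION_RATE_ML_PER_S = 1.0 / 5.0   # 1 mL every 5 seconds

def induction_example(weight_kg: float, dose_mg_per_kg: float = 1.0) -> dict:
    """Convert a weight-based induction dose into mg, mL of 1% solution, and seconds."""
    dose_mg = weight_kg * dose_mg_per_kg
    volume_ml = dose_mg / MG_PER_ML_1_PERCENT
    return {"dose_mg": dose_mg,
            "volume_ml": volume_ml,
            "injection_s": volume_ml / INJECTION_RATE_ML_PER_S}

# Hypothetical 70 kg adult at 1 mg/kg: 70 mg, 7 mL of the 1% solution,
# roughly 35 seconds to inject -- inside the labeled 50 to 120 mg range.
print(induction_example(70))

# Maintenance drip: a 0.2% solution contains 2 mg/mL, so 3 mL/min delivers about 6 mg/min.
print(2.0 * 3.0)
```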
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
- Procedural sedation.
### Non–Guideline-Supported Use
There is limited information about Off-Label Non–Guideline-Supported Use of Methohexital in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
- Anesthesia: (older than 1 month) 6.6 to 10 mg/kg IM (5% solution) OR 25 mg/kg rectally (1% solution); a worked volume calculation follows this list.
- Procedural sedation: 25 mg/kg rectally as 1% solution.
- Procedural sedation: 0.5 to 1 mg/kg IV.
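For the IM and rectal routes above, the concentration of the solution determines the volume actually administered. The sketch below works through that conversion for a hypothetical 20 kg child; the example weight and the helper name are assumptions made only for illustration, and the snippet is not a dosing tool.

```python
# Illustrative arithmetic only -- not a dosing tool.
# Label figures used: IM 6.6 to 10 mg/kg of a 5% solution (50 mg/mL);
# rectal 25 mg/kg of a 1% solution (10 mg/mL).

def volume_ml(dose_mg_per_kg: float, weight_kg: float, solution_percent: float) -> float:
    """Volume (mL) of an x% solution that delivers dose_mg_per_kg to a patient of weight_kg."""
    mg_per_ml = solution_percent * 10.0   # x% w/v = 10x mg/mL
    return dose_mg_per_kg * weight_kg / mg_per_ml

# Hypothetical 20 kg child:
print(volume_ml(10, 20, 5))   # IM upper end: 200 mg -> 4.0 mL of the 5% solution
print(volume_ml(25, 20, 1))   # rectal: 500 mg -> 50.0 mL of the 1% solution
```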
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
- Procedural sedation
### Non–Guideline-Supported Use
There is limited information about Off-Label Non–Guideline-Supported Use of Methohexital in pediatric patients.
# Contraindications
- Brevital Sodium is contraindicated in patients in whom general anesthesia is contraindicated, in those with latent or manifest porphyria, or in patients with a known hypersensitivity to barbiturates.
# Warnings
- As with all potent anesthetic agents and adjuncts, Brevital should be used only in hospital or ambulatory care settings that provide for continuous monitoring of respiratory (e.g. pulse oximetry) and cardiac function. Immediate availability of resuscitative drugs and age- and size-appropriate equipment for bag/valve/mask ventilation and intubation and personnel trained in their use and skilled in airway management should be assured. For deeply sedated patients, a designated individual other than the practitioner performing the procedure should be present to continuously monitor the patient.
- Maintenance of a patent airway and adequacy of ventilation must be ensured during induction and maintenance of anesthesia with methohexital sodium solution. Laryngospasm is common during induction with all barbiturates and may be due to a combination of secretions and accentuated reflexes following induction or may result from painful stimuli during light anesthesia. Apnea/hypoventilation may be noted during induction, which may impair pulmonary ventilation; the duration of apnea may be longer than that produced by other barbiturate anesthetics. Cardiorespiratory arrest may occur.
- This prescribing information describes intravenous use of methohexital sodium in adults. It also discusses intramuscular and rectal administration in pediatric patients older than one month. Although the published literature discusses intravenous administration in pediatric patients, the safety and effectiveness of intravenous administration of methohexital sodium in pediatric patients have not been established in well-controlled, prospective studies. (See Precautions— Pediatric Use)
- Seizures may be elicited in subjects with a previous history of convulsive activity, especially partial seizure disorders.
- Because the liver is involved in demethylation and oxidation of methohexital and because barbiturates may enhance preexisting circulatory depression, severe hepatic dysfunction, severe cardiovascular instability, or a shock-like condition may be reason for selecting another induction agent.
- Prolonged administration may result in cumulative effects, including extended somnolence, protracted unconsciousness, and respiratory and cardiovascular depression. Respiratory depression in the presence of an impaired airway may lead to hypoxia, cardiac arrest, and death.
- The CNS-depressant effect of Brevital Sodium may be additive with that of other CNS depressants, including ethyl alcohol and propylene glycol.
# Adverse Reactions
## Clinical Trials Experience
- Side effects associated with Brevital Sodium are extensions of pharmacologic effects and include:
- Circulatory depression, thrombophlebitis, hypotension, tachycardia, peripheral vascular collapse, and convulsions in association with cardiorespiratory arrest.
- Respiratory depression (including apnea), cardiorespiratory arrest, laryngospasm, bronchospasm, hiccups, and dyspnea
- Skeletal muscle hyperactivity (twitching), injury to nerves adjacent to injection site, and seizures
- Emergence delirium, restlessness, and anxiety may occur, especially in the presence of postoperative pain
- Nausea, emesis, abdominal pain, and abnormal liver function tests
- Erythema, pruritus, urticaria, and cases of anaphylaxis have been reported rarely
- Other adverse reactions include pain at injection site, salivation, headache, and rhinitis
- For medical advice about adverse reactions contact your medical professional. To report suspected adverse reactions, contact JHP at 1-866-923-2547 or MEDWATCH at 1-800-FDA-1088 (1-800-332-1088) or http://www.fda.gov/medwatch/.
## Postmarketing Experience
There is limited information regarding Methohexital Postmarketing Experience in the drug label.
# Drug Interactions
There is limited information regarding Methohexital Drug Interactions in the drug label.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): B
There is no FDA guidance on usage of Methohexital in women who are pregnant.
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Methohexital in women who are pregnant.
### Labor and Delivery
- Brevital Sodium has been used in cesarean section delivery but, because of its solubility and lack of protein binding, it readily and rapidly traverses the placenta.
### Nursing Mothers
- Caution should be exercised when Brevital Sodium is administered to a nursing woman.
### Pediatric Use
- The safety and effectiveness of methohexital sodium in pediatric patients below the age of 1 month have not been established. Seizures may be elicited in subjects with a previous history of convulsive activity, especially partial seizure disorders. Apnea has been reported following dosing with methohexital regardless of the route of administration used. Studies using methohexital sodium intravenously in pediatric patients have been reported in the published literature. This literature is not adequate to establish the safety and effectiveness of intravenous administration of methohexital sodium in pediatric patients. Due to a variety of limitations such as study design, biopharmaceutic issues, and the wide range of effects observed with similar doses of intravenous methohexital, additional studies of intravenous methohexital in pediatric patients are necessary before this route can be recommended in pediatric patients. (See Warnings)
### Geriatric Use
- Clinical studies of Brevital did not include sufficient numbers of subjects aged 65 and over to determine whether they respond differently from younger subjects. Other reported clinical experience has not identified differences in responses between the elderly and younger patients. Elderly subjects may commonly have conditions in which methohexital should be used cautiously such as obstructive pulmonary disease, severe hypertension or hypotension, preexisting circulatory depression, myocardial disease, congestive heart failure, or severe anemia. Caution should be exercised in debilitated patients or in those with impaired function of respiratory, circulatory, renal, hepatic, or endocrine systems (see Warnings, precautions and adverse reactions). Barbiturates may influence the metabolism of other concomitantly used drugs that are commonly taken by the elderly, such as anticoagulants and corticosteroids. In general, dose selection for an elderly patient should be cautious, usually starting at the low end of the dosing range, reflecting the greater frequency of decreased hepatic, renal, or cardiac function, and of concomitant disease or other drug therapy (see Precautions-Drug Interactions).
### Gender
There is no FDA guidance on the use of Methohexital with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Methohexital with respect to specific racial populations.
### Renal Impairment
There is no FDA guidance on the use of Methohexital in patients with renal impairment.
### Hepatic Impairment
There is no FDA guidance on the use of Methohexital in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Methohexital in women of reproductive potential and males.
### Immunocompromised Patients
There is no FDA guidance on the use of Methohexital in patients who are immunocompromised.
# Administration and Monitoring
### Administration
There is limited information regarding Methohexital Administration in the drug label.
### Monitoring
There is limited information regarding Methohexital Monitoring in the drug label.
# IV Compatibility
There is limited information regarding the compatibility of Methohexital and IV administrations.
# Overdosage
- The onset of toxicity following an overdose of intravenously administered methohexital will be within seconds of the infusion. If methohexital is administered rectally or is ingested, the onset of toxicity may be delayed. The manifestations of an ultrashort-acting barbiturate in overdose include central nervous system depression, respiratory depression, hypotension, loss of peripheral vascular resistance, and muscular hyperactivity ranging from twitching to convulsive-like movements. Other findings may include convulsions and allergic reactions. Following massive exposure to any barbiturate, pulmonary edema, circulatory collapse with loss of peripheral vascular tone, and cardiac arrest may occur.
- To obtain up-to-date information about the treatment of overdose, a good resource is your certified Regional Poison Control Center. Telephone numbers of certified poison control centers are listed in the Physicians' Desk Reference (PDR). In managing overdosage, consider the possibility of multiple drug overdoses, interaction among drugs, and unusual drug kinetics in your patient.
- Establish an airway and ensure oxygenation and ventilation. Resuscitative measures should be initiated promptly. For hypotension, intravenous fluids should be administered and the patient's legs raised. If desirable increase in blood pressure is not obtained, vasopressor and/or inotropic drugs may be used as dictated by the clinical situation.
- For convulsions, diazepam intravenously and phenytoin may be required. If the seizures are refractory to diazepam and phenytoin, general anesthesia and paralysis with a neuromuscular blocking agent may be necessary.
- Protect the patient's airway and support ventilation and perfusion. Meticulously monitor and maintain, within acceptable limits, the patient's vital signs, blood gases, serum electrolytes, etc. Absorption of drugs from the gastrointestinal tract may be decreased by giving activated charcoal, which, in many cases, is more effective than emesis or lavage; consider charcoal instead of or in addition to gastric emptying. Repeated doses of charcoal over time may hasten elimination of some drugs that have been absorbed. Safeguard the patient's airway when employing gastric emptying or charcoal.
# Pharmacology
## Mechanism of Action
There is limited information regarding Methohexital Mechanism of Action in the drug label.
## Structure
- Brevital® Sodium (Methohexital Sodium for Injection, USP) is 2,4,6 (1H, 3H, 5H)-Pyrimidinetrione, 1-methyl-5-(1-methyl-2-pentynyl)-5-(2-propenyl)-, (±)-, monosodium salt and has the empirical formula C14H17N2NaO3. Its molecular weight is 284.29 (see the arithmetic check at the end of this section).
- Methohexital sodium is a rapid, ultrashort-acting barbiturate anesthetic. Methohexital sodium for injection is a freeze-dried, sterile, nonpyrogenic mixture of methohexital sodium with 6% anhydrous sodium carbonate added as a buffer. It contains not less than 90% and not more than 110% of the labeled amount of methohexital sodium. It occurs as a white, freeze-dried plug that is freely soluble in water.
- This product is oxygen sensitive. The pH of the 1% solution is between 10 and 11; the pH of the 0.2% solution in 5% dextrose is between 9.5 and 10.5.
- Methohexital sodium may be administered by direct intravenous injection or continuous intravenous drip, intramuscular or rectal routes (see Precautions—Pediatric Use). Reconstituting instructions vary depending on the route of administration (see Dosage and Administration).
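The molecular weight quoted above can be sanity-checked directly from the empirical formula C14H17N2NaO3 and standard atomic masses; the short sketch below does only that check, with atomic masses rounded to three decimals.

```python
# Sanity check of the labeled molecular weight from the empirical formula
# C14H17N2NaO3, using standard atomic masses (rounded).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "Na": 22.990, "O": 15.999}
FORMULA = {"C": 14, "H": 17, "N": 2, "Na": 1, "O": 3}

mw = sum(ATOMIC_MASS[element] * count for element, count in FORMULA.items())
print(round(mw, 2))   # 284.29, matching the value quoted above
```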
## Pharmacodynamics
- Compared with thiamylal and thiopental, methohexital is at least twice as potent on a weight basis, and its duration of action is only about half as long. Although the metabolic fate of methohexital in the body is not clear, the drug does not appear to concentrate in fat depots to the extent that other barbiturate anesthetics do. Thus, cumulative effects are fewer and recovery is more rapid with methohexital than with thiobarbiturates. In experimental animals, the drug cannot be detected in the blood 24 hours after administration.
- Methohexital differs chemically from the established barbiturate anesthetics in that it contains no sulfur. Little analgesia is conferred by barbiturates; their use in the presence of pain may result in excitation.
- Intravenous administration of methohexital results in rapid uptake by the brain (within 30 seconds) and rapid induction of sleep.
## Pharmacokinetics
- Following intramuscular administration to pediatric patients, the onset of sleep occurs in 2 to 10 minutes. A plasma concentration of 3 µg/mL was achieved in pediatric patients 15 minutes after an intramuscular dose (10 mg/kg) of a 5% solution. Following rectal administration to pediatric patients, the onset of sleep occurs in 5 to 15 minutes. Plasma methohexital concentrations achieved following rectal administration tend to increase both with dose and with the use of more dilute solution concentrations when using the same dose. A 25 mg/kg dose of a 1% methohexital solution yielded plasma concentrations of 6.9 to 7.9 µg/mL 15 minutes after dosing. The absolute bioavailability of rectal methohexital sodium is 17%.
- With single doses, the rate of redistribution determines duration of pharmacologic effect. Metabolism occurs in the liver through demethylation and oxidation. Side-chain oxidation is the most important biotransformation involved in termination of biologic activity. Excretion occurs via the kidneys through glomerular filtration.
## Nonclinical Toxicology
There is limited information regarding Methohexital Nonclinical Toxicology in the drug label.
# Clinical Studies
There is limited information regarding Methohexital Clinical Studies in the drug label.
# How Supplied
- Store at 20° to 25°C (68° to 77°F). (See USP Controlled Room Temperature.)
## Storage
There is limited information regarding Methohexital Storage in the drug label.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
There is limited information regarding Methohexital Patient Counseling Information in the drug label.
# Precautions with Alcohol
Alcohol-Methohexital interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
There is limited information regarding Methohexital Brand Names in the drug label.
# Look-Alike Drug Names
There is limited information regarding Methohexital Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price
Brinzolamide
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Brinzolamide is a carbonic anhydrase inhibitor that is FDA approved for the treatment of elevated intraocular pressure in patients with ocular hypertension or open-angle glaucoma. Common adverse reactions include abnormal taste in the mouth and blurred vision.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
- Dosing Information
- The recommended dose is 1 drop of AZOPT® (brinzolamide ophthalmic suspension) 1% in the affected eye(s) three times daily.
- If more than one topical ophthalmic drug is being used, the drugs should be administered at least ten (10) minutes apart.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Brinzolamide in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Brinzolamide in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
Safety and efficacy not established in pediatric patients.
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Brinzolamide in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Brinzolamide in pediatric patients.
# Contraindications
- Hypersensitivity
- AZOPT® (brinzolamide ophthalmic suspension) 1% is contraindicated in patients who are hypersensitive to any component of this product.
# Warnings
- Sulfonamide Hypersensitivity Reactions
- AZOPT® (brinzolamide ophthalmic suspension) 1% is a sulfonamide and although administered topically it is absorbed systemically. Therefore, the same types of adverse reactions that are attributable to sulfonamides may occur with topical administration of AZOPT® (brinzolamide ophthalmic suspension) 1%. Fatalities have occurred, although rarely, due to severe reactions to sulfonamides including Stevens-Johnson syndrome, toxic epidermal necrolysis, fulminant hepatic necrosis, agranulocytosis, aplastic anemia, and other blood dyscrasias. Sensitization may recur when a sulfonamide is re-administered irrespective of the route of administration. If signs of serious reactions or hypersensitivity occur, discontinue the use of this preparation.
- Corneal Endothelium
- Carbonic anhydrase activity has been observed in both the cytoplasm and around the plasma membranes of the corneal endothelium. There is an increased potential for developing corneal edema in patients with low endothelial cell counts. Caution should be used when prescribing AZOPT® (brinzolamide ophthalmic suspension) 1% to this group of patients.
- Severe Renal Impairment
- AZOPT® (brinzolamide ophthalmic suspension) 1% has not been studied in patients with severe renal impairment (CrCl < 30 mL/min). Because AZOPT® (brinzolamide ophthalmic suspension) 1% and its metabolite are excreted predominantly by the kidney, AZOPT® (brinzolamide ophthalmic suspension) 1% is not recommended in such patients.
- Acute Angle-Closure Glaucoma
- The management of patients with acute angle-closure glaucoma requires therapeutic interventions in addition to ocular hypotensive agents. AZOPT® (brinzolamide ophthalmic suspension) 1% has not been studied in patients with acute angle-closure glaucoma.
- Contact Lens Wear
- The preservative in AZOPT® (brinzolamide ophthalmic suspension) 1%, benzalkonium chloride, may be absorbed by soft contact lenses. Contact lenses should be removed during instillation of AZOPT® (brinzolamide ophthalmic suspension) 1%, but may be reinserted 15 minutes after instillation.
# Adverse Reactions
## Clinical Trials Experience
- Because clinical studies are conducted under widely varying conditions, adverse reaction rates observed in the clinical studies of a drug cannot be directly compared to the rates in the clinical studies of another drug and may not reflect the rates observed in practice.
- In clinical studies of AZOPT® (brinzolamide ophthalmic suspension) 1%, the most frequently reported adverse events, occurring in 5-10% of patients, were blurred vision and a bitter, sour, or unusual taste. Adverse events occurring in 1-5% of patients were blepharitis, dermatitis, dry eye, foreign body sensation, headache, hyperemia, ocular discharge, ocular discomfort, ocular keratitis, ocular pain, ocular pruritus and rhinitis.
- The following adverse reactions were reported at an incidence below 1%: allergic reactions, alopecia, chest pain, conjunctivitis, diarrhea, diplopia, dizziness, dry mouth, dyspnea, dyspepsia, eye fatigue, hypertonia, keratoconjunctivitis, keratopathy, kidney pain, lid margin crusting or sticky sensation, nausea, pharyngitis, tearing and urticaria.
## Postmarketing Experience
There is limited information regarding Postmarketing Experience of Brinzolamide in the drug label.
# Drug Interactions
- Oral Carbonic Anhydrase Inhibitors
- There is a potential for an additive effect on the known systemic effects of carbonic anhydrase inhibition in patients receiving an oral carbonic anhydrase inhibitor and AZOPT® (brinzolamide ophthalmic suspension) 1%. The concomitant administration of AZOPT® (brinzolamide ophthalmic suspension) 1% and oral carbonic anhydrase inhibitors is not recommended.
- High-Dose Salicylate Therapy
- Carbonic anhydrase inhibitors may produce acid-base and electrolyte alterations. These alterations were not reported in the clinical trials with brinzolamide. However, in patients treated with oral carbonic anhydrase inhibitors, rare instances of acid-base alterations have occurred with high-dose salicylate therapy. Therefore, the potential for such drug interactions should be considered in patients receiving AZOPT® (brinzolamide ophthalmic suspension) 1%.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA):
- Pregnancy Category C
- Developmental toxicity studies with brinzolamide in rabbits at oral doses of 1, 3, and 6 mg/kg/day (20, 62, and 125 times the recommended human ophthalmic dose) produced maternal toxicity at 6 mg/kg/day and a significant increase in the number of fetal variations, such as accessory skull bones, which was only slightly higher than the historic value at 1 and 6 mg/kg. In rats, statistically decreased body weights of fetuses from dams receiving oral doses of 18 mg/kg/day (375 times the recommended human ophthalmic dose) during gestation were proportional to the reduced maternal weight gain, with no statistically significant effects on organ or tissue development. Increases in unossified sternebrae, reduced ossification of the skull, and unossified hyoid that occurred at 6 and 18 mg/kg were not statistically significant. No treatment-related malformations were seen. Following oral administration of 14C-brinzolamide to pregnant rats, radioactivity was found to cross the placenta and was present in the fetal tissues and blood.
- There are no adequate and well-controlled studies in pregnant women. AZOPT® (brinzolamide ophthalmic suspension) 1% should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus.
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Brinzolamide in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Brinzolamide during labor and delivery.
### Nursing Mothers
- In a study of brinzolamide in lactating rats, decreases in body weight gain in offspring at an oral dose of 15 mg/kg/day (312 times the recommended human ophthalmic dose) were seen during lactation. No other effects were observed. However, following oral administration of 14C-brinzolamide to lactating rats, radioactivity was found in milk at concentrations below those in the blood and plasma.
- It is not known whether this drug is excreted in human milk. Because many drugs are excreted in human milk and because of the potential for serious adverse reactions in nursing infants from AZOPT® (brinzolamide ophthalmic suspension) 1%, a decision should be made whether to discontinue nursing or to discontinue the drug, taking into account the importance of the drug to the mother.
### Pediatric Use
- A three-month controlled clinical study was conducted in which AZOPT® (brinzolamide ophthalmic suspension) 1% was dosed only twice a day in pediatric patients 4 weeks to 5 years of age. Patients were not required to discontinue their IOP-lowering medication(s) until initiation of monotherapy with AZOPT®. IOP-lowering efficacy was not demonstrated in this study in which the mean decrease in elevated IOP was between 0 and 2 mmHg. Five out of 32 patients demonstrated an increase in corneal diameter of one millimeter.
### Geriatric Use
- No overall differences in safety or effectiveness have been observed between elderly and younger patients.
### Gender
There is no FDA guidance on the use of Brinzolamide with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Brinzolamide with respect to specific racial populations.
### Renal Impairment
There is no FDA guidance on the use of Brinzolamide in patients with renal impairment.
### Hepatic Impairment
There is no FDA guidance on the use of Brinzolamide in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Brinzolamide in women of reproductive potential and males.
### Immunocompromised Patients
There is no FDA guidance on the use of Brinzolamide in patients who are immunocompromised.
# Administration and Monitoring
### Administration
Ophthalmic
### Monitoring
- Serum electrolyte levels (particularly potassium) and blood pH levels should be monitored.
# IV Compatibility
There is limited information regarding IV Compatibility of Brinzolamide in the drug label.
# Overdosage
## Acute Overdose
- Although no human data are available, electrolyte imbalance, development of an acidotic state, and possible nervous system effects may occur following oral administration of an overdose. Serum electrolyte levels (particularly potassium) and blood pH levels should be monitored.
## Chronic Overdose
There is limited information regarding Chronic Overdose of Brinzolamide in the drug label.
# Pharmacology
## Mechanism of Action
- Carbonic anhydrase (CA) is an enzyme found in many tissues of the body including the eye. It catalyzes the reversible reaction involving the hydration of carbon dioxide and the dehydration of carbonic acid. In humans, carbonic anhydrase exists as a number of isoenzymes, the most active being carbonic anhydrase II (CA-II), found primarily in red blood cells (RBCs), but also in other tissues. Inhibition of carbonic anhydrase in the ciliary processes of the eye decreases aqueous humor secretion, presumably by slowing the formation of bicarbonate ions with subsequent reduction in sodium and fluid transport. The result is a reduction in intraocular pressure (IOP).
- AZOPT® (brinzolamide ophthalmic suspension) 1% contains brinzolamide, an inhibitor of carbonic anhydrase II (CA-II). Following topical ocular administration, brinzolamide inhibits aqueous humor formation and reduces elevated intraocular pressure. Elevated intraocular pressure is a major risk factor in the pathogenesis of optic nerve damage and glaucomatous visual field loss.
## Structure
- AZOPT® (brinzolamide ophthalmic suspension) 1% contains a carbonic anhydrase inhibitor formulated for multidose topical ophthalmic use. Brinzolamide is described chemically as: (R)-(+)-4-Ethylamino-2-(3-methoxypropyl)-3,4-dihydro-2H-thieno[3,2-e]-1,2-thiazine-6-sulfonamide-1,1-dioxide. Its empirical formula is C12H21N3O5S3.
- Brinzolamide has a molecular weight of 383.5 (see the arithmetic check at the end of this section) and a melting point of about 131°C. It is a white powder, which is insoluble in water, very soluble in methanol and soluble in ethanol.
- AZOPT® (brinzolamide ophthalmic suspension) 1% is supplied as a sterile, aqueous suspension of brinzolamide which has been formulated to be readily suspended and slow settling, following shaking. It has a pH of approximately 7.5 and an osmolality of 300 mOsm/kg.
- Each mL of AZOPT® (brinzolamide ophthalmic suspension) 1% contains: Active ingredient: brinzolamide 10 mg. Preservative: Benzalkonium chloride 0.1 mg. Inactives: mannitol, carbomer 974P, tyloxapol, edetate disodium, sodium chloride, purified water, with hydrochloric acid and/or sodium hydroxide to adjust pH.
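The molecular weight quoted above follows directly from the empirical formula C12H21N3O5S3 and standard atomic masses; the brief sketch below is only a sanity check of that figure.

```python
# Sanity check of the labeled molecular weight from the empirical formula
# C12H21N3O5S3, using standard atomic masses (rounded).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
FORMULA = {"C": 12, "H": 21, "N": 3, "O": 5, "S": 3}

mw = sum(ATOMIC_MASS[element] * count for element, count in FORMULA.items())
print(round(mw, 1))   # 383.5, matching the value quoted above
```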
## Pharmacodynamics
There is limited information regarding Pharmacodynamics of Brinzolamide in the drug label.
## Pharmacokinetics
- Following topical ocular administration, brinzolamide is absorbed into the systemic circulation. Due to its affinity for CA-II, brinzolamide distributes extensively into the RBCs and exhibits a long half-life in whole blood (approximately 111 days). In humans, the metabolite N-desethyl brinzolamide is formed, which also binds to CA and accumulates in RBCs. This metabolite binds mainly to CA-I in the presence of brinzolamide. In plasma, both parent brinzolamide and N-desethyl brinzolamide concentrations are low and generally below assay quantitation limits (<10 ng/mL). Binding to plasma proteins is approximately 60%. Brinzolamide is eliminated predominantly in the urine as unchanged drug. N-Desethyl brinzolamide is also found in the urine along with lower concentrations of the N-desmethoxypropyl and O-desmethyl metabolites.
- An oral pharmacokinetic study was conducted in which healthy volunteers received 1 mg capsules of brinzolamide twice per day for up to 32 weeks. This regimen approximates the amount of drug delivered by topical ocular administration of AZOPT® (brinzolamide ophthalmic suspension) 1% dosed to both eyes three times per day and simulates systemic drug and metabolite concentrations similar to those achieved with long-term topical dosing. RBC CA activity was measured to assess the degree of systemic CA inhibition. Brinzolamide saturation of RBC CA-II was achieved within 4 weeks (RBC concentrations of approximately 20 μM). N-Desethyl brinzolamide accumulated in RBCs to steady-state within 20-28 weeks reaching concentrations ranging from 6-30 μM. The inhibition of CA-II activity at steady-state was approximately 70-75%, which is below the degree of inhibition expected to have a pharmacological effect on renal function or respiration in healthy subjects.
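The statement above that the 1 mg twice-daily oral regimen approximates the topically delivered amount can be made plausible with rough arithmetic, assuming a typical eye-drop volume of about 30 µL; that drop volume is a common approximation rather than a figure from this label, and the estimate below is order-of-magnitude only.

```python
# Order-of-magnitude estimate only. The ~30 microlitre drop volume is an
# assumed typical value and does not come from this label.
DROP_VOLUME_ML = 0.03        # assumed ~30 microlitres per drop
MG_PER_ML_1_PERCENT = 10.0   # 1% suspension = 10 mg/mL

drops_per_day = 2 * 3        # both eyes, three times per day
topical_mg_per_day = drops_per_day * DROP_VOLUME_ML * MG_PER_ML_1_PERCENT
oral_mg_per_day = 1.0 * 2    # 1 mg capsules twice per day

print(round(topical_mg_per_day, 2))   # about 1.8 mg/day delivered topically (estimate)
print(oral_mg_per_day)                # 2.0 mg/day oral -- the same order of magnitude
```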
## Nonclinical Toxicology
- Carcinogenesis, Mutagenesis, Impairment of Fertility
- Carcinogenicity data on brinzolamide are not available. The following tests for mutagenic potential were negative: (1) in vivo mouse micronucleus assay; (2) in vivo sister chromatid exchange assay; and (3) Ames E. coli test. The in vitro mouse lymphoma forward mutation assay was negative in the absence of activation, but positive in the presence of microsomal activation. In reproduction studies of brinzolamide in rats, there were no adverse effects on the fertility or reproductive capacity of males or females at doses up to 18 mg/kg/day (375 times the recommended human ophthalmic dose).
# Clinical Studies
- In two three-month clinical studies, AZOPT® (brinzolamide ophthalmic suspension) 1% dosed three times per day (TID) in patients with elevated intraocular pressure (IOP) produced significant reductions in IOP (4–5 mm Hg). These IOP reductions are equivalent to the reductions observed with TRUSOPT® (dorzolamide hydrochloride ophthalmic solution) 2% dosed TID in the same studies.
- In two clinical studies in patients with elevated intraocular pressure, AZOPT® (brinzolamide ophthalmic suspension) 1% was associated with less stinging and burning upon instillation than TRUSOPT® 2%.
# How Supplied
- AZOPT® (brinzolamide ophthalmic suspension) 1% is supplied in plastic DROP-TAINER® dispensers with a controlled dispensing-tip as follows:
- 10 mL: NDC 0065-0275-10
- 15 mL: NDC 0065-0275-15
- Store AZOPT® (brinzolamide ophthalmic suspension) 1% at 4-30°C (39-86°F). Shake well before use.
## Storage
There is limited information regarding Brinzolamide Storage in the drug label.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
- Sulfonamide Reactions
- Patients should be advised that if serious or unusual ocular or systemic reactions or signs of hypersensitivity occur, they should discontinue the use of the product and consult their physician.
- Temporary Blurred Vision
- Vision may be temporarily blurred following dosing with AZOPT® (brinzolamide ophthalmic suspension) 1%. Care should be exercised in operating machinery or driving a motor vehicle.
- Avoiding Contamination of the Product
- Patients should be instructed to avoid allowing the tip of the dispensing container to contact the eye or surrounding structures or other surfaces, since the product can become contaminated by common bacteria known to cause ocular infections. Serious damage to the eye and subsequent loss of vision may result from using contaminated solutions.
- Intercurrent Ocular Conditions
- Patients should also be advised that if they have ocular surgery or develop an intercurrent ocular condition (e.g., trauma or infection), they should immediately seek their physician's advice concerning the continued use of the present multidose container.
- Concomitant Topical Ocular Therapy
- If more than one topical ophthalmic drug is being used, the drugs should be administered at least ten minutes apart.
- Contact Lens Wear
- The preservative in AZOPT® (brinzolamide ophthalmic suspension) 1%, benzalkonium chloride, may be absorbed by soft contact lenses. Contact lenses should be removed during instillation of AZOPT® (brinzolamide ophthalmic suspension) 1%, but may be reinserted 15 minutes after instillation.
# Precautions with Alcohol
Alcohol-Brinzolamide interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
AZOPT®
# Look-Alike Drug Names
- N/A
# Drug Shortage Status
# Price
- It is not known whether this drug is excreted in human milk. Because many drugs are excreted in human milk and because of the potential for serious adverse reactions in nursing infants from AZOPT® (brinzolamide ophthalmic suspension) 1%, a decision should be made whether to discontinue nursing or to discontinue the drug, taking into account the importance of the drug to the mother.
### Pediatric Use
- A three-month controlled clinical study was conducted in which AZOPT® (brinzolamide ophthalmic suspension) 1% was dosed only twice a day in pediatric patients 4 weeks to 5 years of age. Patients were not required to discontinue their IOP-lowering medication(s) until initiation of monotherapy with AZOPT®. IOP-lowering efficacy was not demonstrated in this study in which the mean decrease in elevated IOP was between 0 and 2 mmHg. Five out of 32 patients demonstrated an increase in corneal diameter of one millimeter.
### Geriatric Use
- No overall differences in safety or effectiveness have been observed between elderly and younger patients.
### Gender
There is no FDA guidance on the use of Brinzolamide with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Brinzolamide with respect to specific racial populations.
### Renal Impairment
There is no FDA guidance on the use of Brinzolamide in patients with renal impairment.
### Hepatic Impairment
There is no FDA guidance on the use of Brinzolamide in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Brinzolamide in women of reproductive potential or in males.
### Immunocompromised Patients
There is no FDA guidance on the use of Brinzolamide in patients who are immunocompromised.
# Administration and Monitoring
### Administration
Ophthalmic
### Monitoring
- Serum electrolyte levels (particularly potassium) and blood pH levels should be monitored.
# IV Compatibility
There is limited information regarding IV Compatibility of Brinzolamide in the drug label.
# Overdosage
## Acute Overdose
- Although no human data are available, electrolyte imbalance, development of an acidotic state, and possible nervous system effects may occur following oral administration of an overdose. Serum electrolyte levels (particularly potassium) and blood pH levels should be monitored.
## Chronic Overdose
There is limited information regarding Chronic Overdose of Brinzolamide in the drug label.
# Pharmacology
## Mechanism of Action
- Carbonic anhydrase (CA) is an enzyme found in many tissues of the body including the eye. It catalyzes the reversible reaction involving the hydration of carbon dioxide and the dehydration of carbonic acid. In humans, carbonic anhydrase exists as a number of isoenzymes, the most active being carbonic anhydrase II (CA-II), found primarily in red blood cells (RBCs), but also in other tissues. Inhibition of carbonic anhydrase in the ciliary processes of the eye decreases aqueous humor secretion, presumably by slowing the formation of bicarbonate ions with subsequent reduction in sodium and fluid transport. The result is a reduction in intraocular pressure (IOP).
- AZOPT® (brinzolamide ophthalmic suspension) 1% contains brinzolamide, an inhibitor of carbonic anhydrase II (CA-II). Following topical ocular administration, brinzolamide inhibits aqueous humor formation and reduces elevated intraocular pressure. Elevated intraocular pressure is a major risk factor in the pathogenesis of optic nerve damage and glaucomatous visual field loss.
## Structure
- AZOPT® (brinzolamide ophthalmic suspension) 1% contains a carbonic anhydrase inhibitor formulated for multidose topical ophthalmic use. Brinzolamide is described chemically as: (R)-(+)-4-Ethylamino-2-(3-methoxypropyl)-3,4-dihydro-2H-thieno [3,2-e]-1,2-thiazine-6-sulfonamide-1,1- dioxide. Its empirical formula is C12H21N3O5S3, and its structural formula is:
- Brinzolamide has a molecular weight of 383.5 and a melting point of about 131°C. It is a white powder, which is insoluble in water, very soluble in methanol and soluble in ethanol.
- AZOPT® (brinzolamide ophthalmic suspension) 1% is supplied as a sterile, aqueous suspension of brinzolamide which has been formulated to be readily suspended and slow settling, following shaking. It has a pH of approximately 7.5 and an osmolality of 300 mOsm/kg.
- Each mL of AZOPT® (brinzolamide ophthalmic suspension) 1% contains: Active ingredient: brinzolamide 10 mg. Preservative: Benzalkonium chloride 0.1 mg. Inactives: mannitol, carbomer 974P, tyloxapol, edetate disodium, sodium chloride, purified water, with hydrochloric acid and/or sodium hydroxide to adjust pH.
## Pharmacodynamics
There is limited information regarding Pharmacodynamics of Brinzolamide in the drug label.
## Pharmacokinetics
- Following topical ocular administration, brinzolamide is absorbed into the systemic circulation. Due to its affinity for CA-II, brinzolamide distributes extensively into the RBCs and exhibits a long half-life in whole blood (approximately 111 days). In humans, the metabolite N-desethyl brinzolamide is formed, which also binds to CA and accumulates in RBCs. This metabolite binds mainly to CA-I in the presence of brinzolamide. In plasma, both parent brinzolamide and N-desethyl brinzolamide concentrations are low and generally below assay quantitation limits (<10 ng/mL). Binding to plasma proteins is approximately 60%. Brinzolamide is eliminated predominantly in the urine as unchanged drug. N-Desethyl brinzolamide is also found in the urine along with lower concentrations of the N-desmethoxypropyl and O-desmethyl metabolites.
- An oral pharmacokinetic study was conducted in which healthy volunteers received 1 mg capsules of brinzolamide twice per day for up to 32 weeks. This regimen approximates the amount of drug delivered by topical ocular administration of AZOPT® (brinzolamide ophthalmic suspension) 1% dosed to both eyes three times per day and simulates systemic drug and metabolite concentrations similar to those achieved with long-term topical dosing. RBC CA activity was measured to assess the degree of systemic CA inhibition. Brinzolamide saturation of RBC CA-II was achieved within 4 weeks (RBC concentrations of approximately 20 μM). N-Desethyl brinzolamide accumulated in RBCs to steady-state within 20-28 weeks reaching concentrations ranging from 6-30 μM. The inhibition of CA-II activity at steady-state was approximately 70-75%, which is below the degree of inhibition expected to have a pharmacological effect on renal function or respiration in healthy subjects.
## Nonclinical Toxicology
- Carcinogenesis, Mutagenesis, Impairment of Fertility
- Carcinogenicity data on brinzolamide are not available. The following tests for mutagenic potential were negative: (1) in vivo mouse micronucleus assay; (2) in vivo sister chromatid exchange assay; and (3) Ames E. coli test. The in vitro mouse lymphoma forward mutation assay was negative in the absence of activation, but positive in the presence of microsomal activation. In reproduction studies of brinzolamide in rats, there were no adverse effects on the fertility or reproductive capacity of males or females at doses up to 18 mg/kg/day (375 times the recommended human ophthalmic dose).
# Clinical Studies
- In two, three-month clinical studies, AZOPT® (brinzolamide ophthalmic suspension) 1% dosed three times per day (TID) in patients with elevated intraocular pressure (IOP), produced significant reductions in IOPs (4–5 mm Hg). These IOP reductions are equivalent to the reductions observed with TRUSOPT* (dorzolamide hydrochloride ophthalmic solution) 2% dosed TID in the same studies.
- In two clinical studies in patients with elevated intraocular pressure, AZOPT® (brinzolamide ophthalmic suspension) 1% was associated with less stinging and burning upon instillation than TRUSOPT* 2%.
# How Supplied
- AZOPT® (brinzolamide ophthalmic suspension) 1% is supplied in plastic DROP-TAINER® dispensers with a controlled dispensing-tip as follows:
10 mL NDC 0065-0275-10
15 mL NDC 0065-0275-15
- Store AZOPT® (brinzolamide ophthalmic suspension) 1% at 4-30°C (39-86°F). Shake well before use.
## Storage
There is limited information regarding Brinzolamide Storage in the drug label.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
- Sulfonamide Reactions
- Patients should be advised that if serious or unusual ocular or systemic reactions or signs of hypersensitivity occur, they should discontinue the use of the product and consult their physician.
- Temporary Blurred Vision
- Vision may be temporarily blurred following dosing with AZOPT® (brinzolamide ophthalmic suspension) 1%. Care should be exercised in operating machinery or driving a motor vehicle.
- Avoiding Contamination of the Product
- Patients should be instructed to avoid allowing the tip of the dispensing container to contact the eye or surrounding structures or other surfaces, since the product can become contaminated by common bacteria known to cause ocular infections. Serious damage to the eye and subsequent loss of vision may result from using contaminated solutions.
- Intercurrent Ocular Conditions
- Patients should also be advised that if they have ocular surgery or develop an intercurrent ocular condition (e.g., trauma or infection), they should immediately seek their physician's advice concerning the continued use of the present multidose container.
- Concomitant Topical Ocular Therapy
- If more than one topical ophthalmic drug is being used, the drugs should be administered at least ten minutes apart.
- Contact Lens Wear
- The preservative in AZOPT® (brinzolamide ophthalmic suspension) 1%, benzalkonium chloride, may be absorbed by soft contact lenses. Contact lenses should be removed during instillation of AZOPT® (brinzolamide ophthalmic suspension) 1%, but may be reinserted 15 minutes after instillation.
# Precautions with Alcohol
Alcohol-Brinzolamide interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.[1]
# Brand Names
AZOPT®
# Look-Alike Drug Names
- N/A[2]
# Drug Shortage Status
# Price
Brivaracetam
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Brivaracetam is an anticonvulsant that is FDA approved for the treatment of epilepsy in patients 16 years of age and older with partial-onset seizures. Common adverse reactions include somnolence/sedation (16%), dizziness (12%), fatigue (9%), and nausea/vomiting (5%).
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
Brivaracetam is indicated as adjunctive therapy in the treatment of partial-onset seizures in patients 16 years of age and older with epilepsy.
When initiating treatment, gradual dose escalation is not required. The recommended starting dosage is 50 mg twice daily (100 mg per day). Based on individual patient tolerability and therapeutic response, the dosage may be adjusted down to 25 mg twice daily (50 mg per day) or up to 100 mg twice daily (200 mg per day).
Brivaracetam injection may be used when oral administration is temporarily not feasible. Brivaracetam injection should be administered at the same dosage and same frequency as Brivaracetam tablets and oral solution.
The clinical study experience with Brivaracetam injection is limited to 4 consecutive days of treatment.
- Discontinuation of Brivaracetam
Avoid abrupt withdrawal from Brivaracetam in order to minimize the risk of increased seizure frequency and status epilepticus.
- Patients with Hepatic Impairment
For all stages of hepatic impairment, the recommended starting dosage is 25 mg twice daily (50 mg per day) and the recommended maximum dosage is 75 mg twice daily (150 mg per day).
- Co-administration with Rifampin
Increase the Brivaracetam dosage in patients on concomitant rifampin by up to 100% (i.e., double the dosage).
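As a purely illustrative aid, the labeled dose bounds above can be summarized in a short sketch. The function and variable names below are hypothetical, are not part of the label, and carry no clinical weight.

```python
# Purely illustrative sketch of the labeled twice-daily dose bounds summarized above.
# Names are hypothetical; this is not clinical guidance and not part of the label.

LABELED_TWICE_DAILY_DOSE_MG = {
    # hepatic status: (starting dose, lowest adjusted dose, highest adjusted dose)
    "normal hepatic function": (50, 25, 100),        # 100 mg/day start; 50-200 mg/day range
    "hepatic impairment (any stage)": (25, 25, 75),  # 50 mg/day start; 50-150 mg/day range
}


def starting_twice_daily_dose_mg(hepatic_impairment: bool) -> int:
    """Return the labeled starting single dose in mg, taken twice daily."""
    key = "hepatic impairment (any stage)" if hepatic_impairment else "normal hepatic function"
    return LABELED_TWICE_DAILY_DOSE_MG[key][0]


def rifampin_adjusted_twice_daily_dose_mg(current_dose_mg: int) -> int:
    """Apply the labeled rifampin adjustment: increase the dose by up to 100%, i.e., double it."""
    return current_dose_mg * 2


if __name__ == "__main__":
    # Example: starting dose with normal hepatic function, then adjusted for rifampin.
    start = starting_twice_daily_dose_mg(hepatic_impairment=False)   # 50 mg twice daily
    print(start, rifampin_adjusted_twice_daily_dose_mg(start))       # 50 100
```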
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Brivaracetam in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Brivaracetam in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
Safety and effectiveness of Brivaracetam in adolescents 16 years of age have been established (same indication and dosage as adults).
Safety and effectiveness of Brivaracetam in patients less than 16 years of age have not been established.
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Brivaracetam in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Brivaracetam in pediatric patients.
# Contraindications
Hypersensitivity to Brivaracetam or any of the inactive ingredients in Brivaracetam (bronchospasm and angioedema have occurred).
# Warnings
- Suicidal Behavior and Ideation
Antiepileptic drugs (AEDs), including Brivaracetam, increase the risk of suicidal thoughts or behavior in patients taking these drugs for any indication. Patients treated with any AED for any indication should be monitored for the emergence or worsening of depression, suicidal thoughts or behavior, and/or any unusual changes in mood or behavior.
Pooled analyses of 199 placebo-controlled clinical trials (mono- and adjunctive therapy) of 11 different AEDs showed that patients randomized to one of the AEDs had approximately twice the risk (adjusted Relative Risk 1.8, 95% CI:1.2, 2.7) of suicidal thinking or behavior compared to patients randomized to placebo. In these trials, which had a median treatment duration of 12 weeks, the estimated incidence rate of suicidal behavior or ideation among 27,863 AED-treated patients was 0.43%, compared to 0.24% among 16,029 placebo-treated patients, representing an increase of approximately one case of suicidal thinking or behavior for every 530 patients treated. There were four suicides in drug-treated patients in the trials and none in placebo-treated patients, but the number is too small to allow any conclusion about drug effect on suicide.
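The figure of approximately one additional case for every 530 patients treated follows from the absolute risk difference reported above:

$$0.43\% - 0.24\% = 0.19\% = 0.0019, \qquad \frac{1}{0.0019} \approx 526 \approx 530 \text{ patients per additional case.}$$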
The increased risk of suicidal thoughts or behavior with AEDs was observed as early as one week after starting drug treatment with AEDs and persisted for the duration of treatment assessed. Because most trials included in the analysis did not extend beyond 24 weeks, the risk of suicidal thoughts or behavior beyond 24 weeks could not be assessed.
The risk of suicidal thoughts or behavior was generally consistent among drugs in the data analyzed. The finding of increased risk with AEDs of varying mechanisms of action and across a range of indications suggests that the risk applies to all AEDs used for any indication. The risk did not vary substantially by age (5-100 years) in the clinical trials analyzed. Table 1 shows absolute and relative risk by indication for all evaluated AEDs.
- Table 1: Risk of Suicidal Thoughts or Behaviors by Indication for Antiepileptic Drugs in the Pooled Analysis
The relative risk for suicidal thoughts or behavior was higher in clinical trials in patients with epilepsy than in clinical trials in patients with psychiatric or other conditions, but the absolute risk differences were similar for the epilepsy and psychiatric indications.
Anyone considering prescribing Brivaracetam or any other AED must balance the risk of suicidal thoughts or behaviors with the risk of untreated illness. Epilepsy and many other illnesses for which AEDs are prescribed are themselves associated with morbidity and mortality and an increased risk of suicidal thoughts and behavior. Should suicidal thoughts and behavior emerge during treatment, consider whether the emergence of these symptoms in any given patient may be related to the illness being treated.
- Neurological Adverse Reactions
Brivaracetam causes somnolence, fatigue, dizziness, and disturbance in coordination. Patients should be monitored for these signs and symptoms and advised not to drive or operate machinery until they have gained sufficient experience on Brivaracetam to gauge whether it adversely affects their ability to drive or operate machinery.
- Somnolence and Fatigue
Brivaracetam causes dose-dependent increases in somnolence and fatigue-related adverse reactions (fatigue, asthenia, malaise, hypersomnia, sedation, and lethargy). In the Phase 3 controlled adjunctive epilepsy trials, these events were reported in 25% of patients randomized to receive Brivaracetam at least 50 mg/day (20% at 50 mg/day, 26% at 100 mg/day, and 27% at 200 mg/day) compared to 14% of patients who received placebo. The risk is greatest early in treatment but can occur at any time.
- Dizziness and Disturbance in Gait and Coordination
Brivaracetam causes adverse reactions related to dizziness and disturbance in gait and coordination (dizziness, vertigo, balance disorder, ataxia, nystagmus, gait disturbance, and abnormal coordination). In the Phase 3 controlled adjunctive epilepsy trials, these events were reported in 16% of patients randomized to receive Brivaracetam at least 50 mg/day compared to 10% of patients who received placebo. The risk is greatest early in treatment but can occur at any time.
- Psychiatric Adverse Reactions
Brivaracetam causes psychiatric adverse reactions. In the Phase 3 controlled adjunctive epilepsy trials, psychiatric adverse reactions were reported in approximately 13% of patients who received Brivaracetam (at least 50 mg/day) compared to 8% of patients who received placebo. Psychiatric events included both non-psychotic symptoms (irritability, anxiety, nervousness, aggression, belligerence, anger, agitation, restlessness, depression, depressed mood, tearfulness, apathy, altered mood, mood swings, affect lability, psychomotor hyperactivity, abnormal behavior, and adjustment disorder) and psychotic symptoms (psychotic disorder along with hallucination, paranoia, acute psychosis, and psychotic behavior). A total of 1.7% of adult patients treated with Brivaracetam discontinued treatment because of psychiatric reactions compared to 1.3% of patients who received placebo.
- Hypersensitivity: Bronchospasm and Angioedema
Brivaracetam can cause hypersensitivity reactions. Bronchospasm and angioedema have been reported in patients taking Brivaracetam. If a patient develops hypersensitivity reactions after treatment with Brivaracetam, the drug should be discontinued. Brivaracetam is contraindicated in patients with a prior hypersensitivity reaction to Brivaracetam or any of the inactive ingredients.
- Withdrawal of Antiepileptic Drugs
As with most antiepileptic drugs, Brivaracetam should generally be withdrawn gradually because of the risk of increased seizure frequency and status epilepticus. However, if withdrawal is needed because of a serious adverse event, rapid discontinuation can be considered.
# Adverse Reactions
## Clinical Trials Experience
The following serious adverse reactions are described elsewhere in labeling:
- Suicidal Behavior and Ideation
- Neurological Adverse Reactions
- Psychiatric Adverse Reactions
- Hypersensitivity: Bronchospasm and Angioedema
- Withdrawal of Antiepileptic Drugs
Because clinical trials are conducted under widely varying conditions, adverse reaction rates observed in the clinical trials of a drug cannot be directly compared to rates in the clinical trials of another drug and may not reflect the rates observed in practice.
In all controlled and uncontrolled trials performed in adult epilepsy patients, Brivaracetam was administered as adjunctive therapy to 2437 patients. Of these patients, 1929 were treated for at least 6 months, 1500 for at least 12 months, 1056 for at least 24 months, and 758 for at least 36 months. A total of 1558 patients (1099 patients treated with Brivaracetam and 459 patients treated with placebo) constituted the safety population in the pooled analysis of Phase 3 placebo-controlled studies in patients with partial-onset seizures (Studies 1, 2, and 3). The adverse reactions presented in Table 2 are based on this safety population; the median length of treatment in these studies was 12 weeks. Of the patients in those studies, approximately 51% were male, 74% were Caucasian, and the mean age was 38 years.
In the Phase 3 controlled epilepsy studies, adverse events occurred in 68% of patients treated with Brivaracetam and 62% treated with placebo. The most common adverse reactions occurring at a frequency of at least 5% in patients treated with Brivaracetam doses of at least 50 mg/day and greater than placebo were somnolence and sedation (16%), dizziness (12%), fatigue (9%), and nausea and vomiting symptoms (5%).
The discontinuation rates due to adverse events were 5%, 8%, and 7% for patients randomized to receive Brivaracetam at the recommended doses of 50 mg, 100 mg, and 200 mg/day, respectively, compared to 4% in patients randomized to receive placebo.
Table 2 lists adverse reactions for Brivaracetam that occurred at least 2% more frequently for Brivaracetam doses of at least 50 mg/day than placebo.
- Table 2: Adverse Reactions in Pooled Placebo-Controlled Adjunctive Therapy Studies in Patients with Partial-Onset Seizures (Brivaracetam 50 mg/day, 100 mg/day, and 200 mg/day)
There was no apparent dose-dependent increase in adverse reactions listed in Table 2 with the exception of somnolence and sedation.
- Hematologic Abnormalities
Brivaracetam can cause hematologic abnormalities. In the Phase 3 controlled adjunctive epilepsy studies, a total of 1.8% of Brivaracetam-treated patients and 1.1% of placebo-treated patients had at least one clinically significant decreased white blood cell count (<3.0 × 10⁹/L), and 0.3% of Brivaracetam-treated patients and 0% of placebo-treated patients had at least one clinically significant decreased neutrophil count (<1.0 × 10⁹/L).
- Adverse Reactions with Brivaracetam Injection
Adverse reactions with Brivaracetam injection were generally similar to those observed with Brivaracetam tablets. Other adverse events that occurred in at least 3% of patients who received Brivaracetam injection included dysgeusia, euphoric mood, feeling drunk, and infusion site pain.
- Comparison by Sex
There were no significant differences by sex in the incidence of adverse reactions.
## Postmarketing Experience
There is limited information regarding Brivaracetam Postmarketing Experience in the drug label.
# Drug Interactions
Co-administration with rifampin decreases Brivaracetam plasma concentrations likely because of CYP2C19 induction. Prescribers should increase the Brivaracetam dose by up to 100% (i.e., double the dosage) in patients while receiving concomitant treatment with rifampin.
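Because brivaracetam exhibits dose-proportional pharmacokinetics (see Pharmacokinetics) and rifampin lowers brivaracetam plasma concentrations by about 45%, a simple linear-kinetics approximation (not stated explicitly in the label) illustrates why doubling the dose is recommended:

$$2 \times (1 - 0.45) = 1.10 \approx 110\% \text{ of the unadjusted exposure.}$$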
Co-administration with carbamazepine may increase exposure to carbamazepine-epoxide, the active metabolite of carbamazepine. Though available data did not reveal any safety concerns, if tolerability issues arise when co-administered, carbamazepine dose reduction should be considered.
Because Brivaracetam can increase plasma concentrations of phenytoin, phenytoin levels should be monitored in patients when concomitant Brivaracetam is added to or discontinued from ongoing phenytoin therapy.
Brivaracetam provided no added therapeutic benefit to levetiracetam when the two drugs were co-administered.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA):
C. There are no adequate and well-controlled studies in pregnant women. In animal studies, Brivaracetam produced evidence of developmental toxicity at plasma exposures greater than clinical exposures. Brivaracetam should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus.
Oral administration of Brivaracetam (0, 150, 300, or 600 mg/kg/day) to pregnant rats during the period of organogenesis did not produce any significant maternal or embryofetal toxicity. The highest dose tested was associated with maternal plasma exposures (area under the Brivaracetam plasma concentration versus time curve, an exposure metric, AUC) approximately 30 times exposures in humans at the maximum recommended dose (MRD) of 200 mg/day. Oral administration of Brivaracetam (0, 30, 60, 120, or 240 mg/kg/day) to pregnant rabbits during the period of organogenesis resulted in embryofetal mortality and decreased fetal body weights at the highest dose tested, which was also maternally toxic. The highest no-effect dose (120 mg/kg/day) was associated with maternal plasma exposures approximately 4 times human exposures at the MRD.
When Brivaracetam (0, 150, 300, or 600 mg/kg/day) was orally administered to rats throughout pregnancy and lactation, decreased growth, delayed sexual maturation (female), and long-term neurobehavioral changes were observed in the offspring at the highest dose. The highest no-effect dose (300 mg/kg/day) was associated with maternal plasma exposures approximately 7 times human exposures at the MRD.
- Pregnancy Registry
Physicians are advised to recommend that pregnant patients taking Brivaracetam enroll in the North American Antiepileptic Drug Pregnancy Registry. This can be done by calling the toll-free number 1-888-233-2334, and must be done by patients themselves. Information on the registry can also be found on the North American Antiepileptic Drug Pregnancy Registry website.
Pregnancy Category (AUS):
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Brivaracetam in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Brivaracetam during labor and delivery.
### Nursing Mothers
It is not known whether Brivaracetam is excreted in human milk. Studies in rats have shown excretion of Brivaracetam in milk. Because many drugs are excreted into human milk, a decision should be made whether to discontinue nursing or to discontinue Brivaracetam, taking into account the importance of the drug to the mother.
### Pediatric Use
Safety and effectiveness of Brivaracetam in adolescents 16 years of age have been established.
Safety and effectiveness of Brivaracetam in patients less than 16 years of age have not been established.
The potential adverse effects of Brivaracetam on postnatal growth and development were investigated in juvenile rats and dogs. Oral administration (0, 150, 300, or 600 mg/kg/day) to rats during the neonatal and juvenile periods of development resulted in increased mortality, decreased body weight gain, delayed male sexual maturation, and adverse neurobehavioral effects at the highest dose tested and decreased brain size and weight at all doses. Therefore, a no-effect dose was not established; the lowest dose tested in juvenile rats was associated with plasma exposures (AUC) approximately 2 times those in adult humans at the maximum recommended dose (MRD) of 200 mg/day. In dogs, oral administration (0, 15, 30, or 100 mg/kg/day) throughout the neonatal and juvenile periods of development induced liver changes similar to those observed in adult animals at the highest dose but produced no adverse effects on growth, bone density or strength, neurological testing, or neuropathology evaluation. The overall no-effect dose (30 mg/kg/day) and the no-effect dose for adverse effects on developmental parameters (100 mg/kg/day) were associated with plasma exposures approximately equal to and 4 times, respectively, adult human exposures at the MRD.
### Geriatric Use
There were insufficient numbers of patients 65 years of age and older in the double-blind, placebo-controlled epilepsy trials (n=38) to allow adequate assessment of the effectiveness of Brivaracetam in this population. In general, dose selection for an elderly patient should be judicious, usually starting at the low end of the dosing range, reflecting the greater frequency of decreased hepatic, renal, or cardiac function, and of concomitant disease or other drug therapy.
### Gender
There is no FDA guidance on the use of Brivaracetam with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Brivaracetam with respect to specific racial populations.
### Renal Impairment
Dose adjustments are not required for patients with impaired renal function. There are no data in patients with end-stage renal disease undergoing dialysis, and use of Brivaracetam is not recommended in this patient population.
### Hepatic Impairment
Because of increases in Brivaracetam exposure, dosage adjustment is recommended for all stages of hepatic impairment.
### Females of Reproductive Potential and Males
The effect of Brivaracetam on labor and delivery in humans is unknown.
- Controlled Substance
Brivaracetam is listed as a Schedule V controlled substance.
- Abuse
In a human abuse potential study, single doses of Brivaracetam at therapeutic and supratherapeutic doses were compared to alprazolam (C-IV) (1.5 mg and 3 mg). Brivaracetam at the recommended single dose (50 mg) caused fewer sedative and euphoric effects than alprazolam; however, Brivaracetam at supratherapeutic single doses (200 mg and 1000 mg) was similar to alprazolam on other measures of abuse.
- Dependence
There was no evidence of physical dependence potential or a withdrawal syndrome with Brivaracetam in a pooled review of placebo-controlled adjunctive therapy studies.
### Immunocompromised Patients
There is no FDA guidance on the use of Brivaracetam in patients who are immunocompromised.
# Administration and Monitoring
### Administration
- Administration Instructions for Brivaracetam Tablets and Brivaracetam Oral Solution
Brivaracetam can be initiated with either intravenous or oral administration.
Brivaracetam tablets and oral solution may be taken with or without food.
- Brivaracetam Tablets
Brivaracetam tablets should be swallowed whole with liquid. Brivaracetam tablets should not be chewed or crushed.
- Brivaracetam Oral Solution
A calibrated measuring device is recommended to measure and deliver the prescribed dose accurately. A household teaspoon or tablespoon is not an adequate measuring device.
When using Brivaracetam oral solution, no dilution is necessary. Brivaracetam oral solution may also be administered using a nasogastric tube or gastrostomy tube.
Discard any unused Brivaracetam oral solution remaining after 5 months of first opening the bottle.
- Preparation and Administration Instructions for Brivaracetam Injection
Brivaracetam injection is for intravenous use only.
- Preparation
Brivaracetam injection can be administered intravenously without further dilution or may be mixed with diluents listed below.
- Diluents
- 0.9% Sodium Chloride injection, USP
- Lactated Ringer's injection
- 5% Dextrose injection, USP
- Administration
Brivaracetam injection should be administered intravenously over 2 to 15 minutes.
Parenteral drug products should be inspected visually for particulate matter and discoloration prior to administration, whenever solution and container permit. Product with particulate matter or discoloration should not be used. Brivaracetam injection is for single dose only.
- Storage and Stability
The diluted solution should not be stored for more than 4 hours at room temperature and may be stored in polyvinyl chloride (PVC) bags. Discard any unused portion of the Brivaracetam injection vial contents.
### Monitoring
There is limited information regarding Brivaracetam Monitoring in the drug label.
# IV Compatibility
There is limited information regarding IV compatibility of Brivaracetam in the drug label.
# Overdosage
There is limited clinical experience with Brivaracetam overdose in humans. Somnolence and dizziness were reported in a patient taking a single dose of 1400 mg (14 times the highest recommended single dose) of Brivaracetam. The following adverse reactions were reported with Brivaracetam overdose: vertigo, balance disorder, fatigue, nausea, diplopia, anxiety, and bradycardia. In general, the adverse reactions associated with Brivaracetam overdose were consistent with the known adverse reactions.
There is no specific antidote for overdose with Brivaracetam. In the event of overdose, standard medical practice for the management of any overdose should be used. An adequate airway, oxygenation, and ventilation should be ensured; monitoring of cardiac rate and rhythm and vital signs is recommended. A certified poison control center should be contacted for updated information on the management of overdose with Brivaracetam. There are no data on the removal of Brivaracetam using hemodialysis, but because less than 10% of Brivaracetam is excreted in urine, hemodialysis is not expected to enhance Brivaracetam clearance.
# Pharmacology
## Mechanism of Action
The precise mechanism by which Brivaracetam exerts its anticonvulsant activity is not known. Brivaracetam displays a high and selective affinity for synaptic vesicle protein 2A (SV2A) in the brain, which may contribute to the anticonvulsant effect.
## Structure
The chemical name of Brivaracetam is (2S)-2-[(4R)-2-oxo-4-propylpyrrolidin-1-yl]butanamide. Its molecular formula is C11H20N2O2 and its molecular weight is 212.29. The chemical structure is:
Brivaracetam is a white to off-white crystalline powder. It is very soluble in water, buffer (pH 1.2, 4.5, and 7.4), ethanol, methanol, and glacial acetic acid. It is freely soluble in acetonitrile and acetone and soluble in toluene. It is very slightly soluble in n-hexane.
- Tablets
Brivaracetam tablets are for oral administration and contain the following inactive ingredients: croscarmellose sodium, lactose monohydrate, betadex (β-cyclodextrin), anhydrous lactose, magnesium stearate, and film coating agents specified below:
10 mg tablets: polyvinyl alcohol, talc, polyethylene glycol 3350, titanium dioxide
25 mg and 100 mg tablets: polyvinyl alcohol, talc, polyethylene glycol 3350, titanium dioxide, yellow iron oxide, black iron oxide
50 mg tablets: polyvinyl alcohol, talc, polyethylene glycol 3350, titanium dioxide, yellow iron oxide, red iron oxide
75 mg tablets: polyvinyl alcohol, talc, polyethylene glycol 3350, titanium dioxide, yellow iron oxide, red iron oxide, black iron oxide
- Oral Solution
Brivaracetam oral solution contains 10 mg of Brivaracetam per mL. The inactive ingredients are sodium citrate, anhydrous citric acid, methylparaben, sodium carboxymethylcellulose, sucralose, sorbitol solution, glycerin, raspberry flavor, and purified water.
- Injection
Brivaracetam injection is a clear, colorless liquid provided as a sterile, preservative-free solution. Brivaracetam injection contains 10 mg Brivaracetam per mL for intravenous administration. One vial contains 50 mg of Brivaracetam drug substance. It contains the following inactive ingredients: sodium acetate (trihydrate), glacial acetic acid (for pH adjustment to 5.5), sodium chloride, and water for injection.
## Pharmacodynamics
- Interactions with Alcohol
In a pharmacokinetic and pharmacodynamic interaction study in healthy subjects, co-administration of Brivaracetam (single dose, 200 mg) and ethanol (continuous intravenous infusion to achieve a blood alcohol concentration of 60 mg/100 mL over 5 hours) increased the effects of alcohol on psychomotor function, attention, and memory. Co-administration of Brivaracetam and ethanol caused a larger decrease from baseline in saccadic peak velocity, smooth pursuit, adaptive tracking performance, and Visual Analog Scale (VAS) alertness, and a larger increase from baseline in body sway and in saccadic reaction time compared with Brivaracetam alone or ethanol alone. The immediate word recall scores were generally lower for Brivaracetam when co-administered with ethanol.
- Cardiac Electrophysiology
At a dose 4 times the maximum recommended dose, Brivaracetam did not prolong the QT interval to a clinically relevant extent.
## Pharmacokinetics
Brivaracetam tablets, oral solution, and injection can be used interchangeably. Brivaracetam exhibits linear and time-independent pharmacokinetics at the approved doses.
- Absorption
Brivaracetam is highly permeable and is rapidly and almost completely absorbed after oral administration. Pharmacokinetics is dose-proportional from 10 to 600 mg (a range that extends beyond the minimum and maximum single-administration dose levels). The median Tmax for tablets taken without food is 1 hour (range 0.25 to 3 hours). Co-administration with a high-fat meal slowed absorption, but the extent of absorption remained unchanged. Specifically, when a 50 mg tablet was administered with a high-fat meal, Cmax (maximum brivaracetam plasma concentration during a dose interval, an exposure metric) was decreased by 37% and Tmax was delayed by 3 hours, but AUC (area under the brivaracetam plasma concentration versus time curve, an exposure metric) was essentially unchanged (decreased by 5%).
- Distribution
Brivaracetam is weakly bound to plasma proteins (≤20%). The volume of distribution is 0.5 L/kg, a value close to that of the total body water. Brivaracetam is rapidly and evenly distributed in most tissues.
- Elimination
- Metabolism
Brivaracetam is primarily metabolized by hydrolysis of the amide moiety to form the corresponding carboxylic acid metabolite, and secondarily by hydroxylation on the propyl side chain to form the hydroxy metabolite. The hydrolysis reaction is mediated by hepatic and extra-hepatic amidase. The hydroxylation pathway is mediated primarily by CYP2C19. In human subjects possessing genetic variations in CYP2C19, production of the hydroxy metabolite is decreased 2-fold or 10-fold, while the blood level of brivaracetam itself is increased by 22% or 42%, respectively, in individuals with one or both mutated alleles. CYP2C19 poor metabolizers and patients using inhibitors of CYP2C19 may require dose reduction. An additional hydroxy acid metabolite is created by hydrolysis of the amide moiety on the hydroxy metabolite or hydroxylation of the propyl side chain on the carboxylic acid metabolite (mainly by CYP2C9). None of the 3 metabolites are pharmacologically active.
- Excretion
Brivaracetam is eliminated primarily by metabolism and by excretion in the urine. More than 95% of the dose, including metabolites, is excreted in the urine within 72 hours after intake. Fecal excretion accounts for less than 1% of the dose. Less than 10% of the dose is excreted unchanged in the urine. Thirty-four percent of the dose is excreted as the carboxylic acid metabolite in urine. The terminal plasma half-life (t1/2) is approximately 9 hours.
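Assuming simple first-order elimination (an approximation shown only for illustration), the approximately 9-hour terminal half-life implies that very little drug remains in plasma after 72 hours, in line with the near-complete urinary recovery of the dose over that interval:

$$\left(\tfrac{1}{2}\right)^{72/9} = \left(\tfrac{1}{2}\right)^{8} \approx 0.004 \approx 0.4\% \text{ of a dose remaining.}$$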
- Specific Populations
- Age
Geriatric Population: In a study in elderly subjects (65 to 79 years old; creatinine clearance 53 to 98 mL/min/1.73 m²) receiving Brivaracetam 200 mg twice daily (2 times the highest recommended dosage), the plasma half-life of brivaracetam was 7.9 hours and 9.3 hours in the 65 to 75 and >75 years groups, respectively. The steady-state plasma clearance of brivaracetam was slightly lower (0.76 mL/min/kg) than in young healthy controls (0.83 mL/min/kg).
- Sex
There were no differences observed in the pharmacokinetics of Brivaracetam between male and female subjects.
- Race/Ethnicity
A population pharmacokinetic analysis comparing Caucasian and non-Caucasian patients showed no significant pharmacokinetic difference.
- Renal Impairment
A study in subjects with severe renal impairment (creatinine clearance <30 mL/min/1.73 m² and not requiring dialysis) revealed that the plasma AUC of Brivaracetam was moderately increased (21%) relative to healthy controls, while the AUCs of the acid, hydroxy, and hydroxyacid metabolites were increased 3-fold, 4-fold, and 21-fold, respectively. The renal clearance of these inactive metabolites was decreased 10-fold. Brivaracetam has not been studied in patients undergoing hemodialysis.
- Hepatic Impairment
A pharmacokinetic study in subjects with hepatic cirrhosis, Child-Pugh grades A, B, and C, showed 50%, 57%, and 59% increases in Brivaracetam exposure, respectively, compared to matched healthy controls.
- Drug Interaction Studies
In Vitro Assessment of Drug Interactions
- Drug-Metabolizing Enzyme Inhibition
Brivaracetam did not inhibit CYP1A2, 2A6, 2B6, 2C8, 2C9, 2D6, or 3A4. Brivaracetam weakly inhibited CYP2C19 and would not be expected to cause significant inhibition of CYP2C19 in humans. Brivaracetam was an inhibitor of epoxide hydrolase (IC50 = 8.2 μM), suggesting that brivaracetam can inhibit the enzyme in vivo.
- Drug-Metabolizing Enzyme Induction
Brivaracetam at concentrations up to 10 μM caused little or no change of mRNA expression of CYP1A2, 2B6, 2C9, 2C19, 3A4, and epoxide hydrolase. It is unlikely that brivaracetam will induce these enzymes in vivo.
- Transporters
Brivaracetam was not a substrate of P-gp, MRP1, or MRP2. Brivaracetam did not inhibit or weakly inhibit BCRP, BSEP, MATE1, MATE2/K, MRP2, OAT1, OAT3, OCT1, OCT2, OATP1B1, OATP1B3, or P-gp, suggesting that brivaracetam is unlikely to inhibit these transporters in vivo.
In Vivo Assessment of Drug Interactions
- Drug Interaction Studies with Antiepileptic Drugs (AEDs)
Potential interactions between Brivaracetam (25 mg twice daily to 100 mg twice daily) and other AEDs were investigated in a pooled analysis of plasma drug concentrations from all Phase 2 and 3 studies and in a population exposure-response analysis of placebo-controlled, Phase 3 studies in adjunctive therapy in the treatment of partial-onset seizures. None of the interactions require changes in the dose of Brivaracetam. Interactions with carbamazepine and phenytoin can be clinically important. The interactions are summarized in Table 3.
- Table 3: Drug Interactions Between Brivaracetam and Concomitant Antiepileptic Drugs
Drug Interaction Studies with Other Drugs
- Effect of Other Drugs on Brivaracetam
Co-administration with CYP inhibitors or transporter inhibitors is unlikely to significantly affect Brivaracetam exposure.
Co-administration with rifampin decreases Brivaracetam plasma concentrations by 45%, an effect that is probably the result of CYP2C19 induction.
- Oral Contraceptives
Co-administration of Brivaracetam 200 mg twice daily (twice the recommended maximum daily dosage) with an oral contraceptive containing ethinylestradiol (0.03 mg) and levonorgestrel (0.15 mg) reduced estrogen and progestin AUCs by 27% and 23%, respectively, without impact on suppression of ovulation. However, co-administration of Brivaracetam 50 mg twice daily with an oral contraceptive containing ethinylestradiol (0.03 mg) and levonorgestrel (0.15 mg) did not significantly influence the pharmacokinetics of either substance. The interaction is not expected to be of clinical significance.
## Nonclinical Toxicology
- Carcinogenesis
In a carcinogenicity study in mice, oral administration of Brivaracetam (0, 400, 550, or 700 mg/kg/day) for 104 weeks increased the incidence of liver tumors (hepatocellular adenoma and carcinoma) in male mice at the two highest doses tested. At the dose (400 mg/kg) not associated with an increase in liver tumors, plasma exposures (AUC) were approximately equal to those in humans at the maximum recommended dose (MRD) of 200 mg/day. Oral administration (0, 150, 230, 450, or 700 mg/kg/day) to rats for 104 weeks resulted in an increased incidence of thymus tumors (benign thymoma) in female rats at the highest dose tested. At the highest dose not associated with an increase in thymus tumors, plasma exposures were approximately 9 times those in humans at the MRD.
- Mutagenesis
Brivaracetam was negative for genotoxicity in in vitro (Ames, mouse lymphoma, and CHO chromosomal aberration) and in vivo (rat bone marrow micronucleus) assays.
- Impairment of Fertility
Oral administration of Brivaracetam (0, 100, 200, or 400 mg/kg/day) to male and female rats prior to and throughout mating and early gestation produced no adverse effects on fertility. The highest dose tested was associated with plasma exposures approximately 6 (males) and 13 (females) times those in humans at the MRD.
# Clinical Studies
The effectiveness of Brivaracetam as adjunctive therapy in partial-onset seizures with or without secondary generalization was established in 3 fixed-dose, randomized, double-blind, placebo-controlled, multicenter studies (Studies 1, 2, and 3), which included 1550 patients. Patients enrolled had partial-onset seizures that were not adequately controlled with 1 to 2 concomitant antiepileptic drugs (AEDs). In each of these studies, 72% to 86% of patients were taking 2 or more concomitant AEDs with or without vagal nerve stimulation. The median baseline seizure frequency across the 3 studies was 9 seizures per 28 days. Patients had a mean duration of epilepsy of approximately 23 years.
All trials had an 8-week baseline period, during which patients were required to have at least 8 partial-onset seizures. The baseline period was followed by a 12-week treatment period. There was no titration period in these studies. Study 1 compared doses of Brivaracetam 50 mg/day and 100 mg/day with placebo. Study 2 compared a dose of Brivaracetam 50 mg/day with placebo. Study 3 compared doses of Brivaracetam 100 mg/day and 200 mg/day with placebo. Brivaracetam was administered in equally divided twice-daily doses. Upon termination of Brivaracetam treatment, patients receiving 25, 50, and 100 mg twice daily were down-titrated over 1, 2, and 4 weeks, respectively.
The primary efficacy outcome in Study 1 and Study 2 was the percent reduction in 7-day partial-onset seizure frequency over placebo, while the primary outcome for Study 3 was the percent reduction in 28-day partial-onset seizure frequency over placebo. The criterion for statistical significance in all 3 studies was p<0.05. Table 4 presents the primary efficacy outcome of the percent change in seizure frequency over placebo, based upon each study's protocol-defined 7- and 28-day seizure frequency efficacy outcome.
- Table 4: Percent Reduction in Partial-Onset Seizure Frequency over Placebo (Studies 1, 2, and 3)
Figure 1 presents the percentage of patients by category of reduction from baseline in partial-onset seizure frequency per 28 days for all pooled patients in the 3 pivotal studies. Patients in whom the seizure frequency increased are shown at left as "worse." Patients with an improvement in percent reduction from baseline partial-onset seizure frequency are shown in the 4 right-most categories.
- Figure 1: Proportion of Patients by Category of Seizure Response for Brivaracetam and Placebo Across all Three Double-Blind Trials
- Treatment with Levetiracetam
In Studies 1 and 2, which evaluated Brivaracetam dosages of 50 mg and 100 mg daily, approximately 20% of the patients were on concomitant levetiracetam. Although the numbers of patients were limited, Brivaracetam provided no added benefit when it was added to levetiracetam.
Although patients on concomitant levetiracetam were excluded from Study 3, which evaluated 100 and 200 mg daily, approximately 54% of patients in this study had prior exposure to levetiracetam.
# How Supplied
- Tablets
- Oral Solution
- Injection
## Storage
- Store at 25°C (77°F); excursions permitted between 15°C to 30°C (59°F to 86°F). Do not freeze Brivaracetam injection or oral solution.
- Discard any unused Brivaracetam oral solution remaining after 5 months of first opening the bottle.
- Brivaracetam injection vials are single-dose only.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
Advise the patient to read the FDA-approved patient labeling (Medication Guide).
- Suicidal Behavior and Ideation
Counsel patients, their caregivers, and/or families that antiepileptic drugs, including Brivaracetam, may increase the risk of suicidal thoughts and behavior, and advise patients to be alert for the emergence or worsening of symptoms of depression; unusual changes in mood or behavior; or suicidal thoughts, behavior, or thoughts about self-harm. Advise patients, their caregivers, and/or families to report behaviors of concern immediately to a healthcare provider.
- Neurological Adverse Reactions
Counsel patients that Brivaracetam causes somnolence, fatigue, dizziness, and gait disturbance. These adverse reactions, if observed, are more likely to occur early in treatment but can occur at any time. Advise patients not to drive or operate machinery until they have gained sufficient experience on Brivaracetam to gauge whether it adversely affects their ability to drive or operate machinery.
- Psychiatric Adverse Reactions
Advise patients that Brivaracetam causes changes in behavior (e.g., aggression, agitation, anger, anxiety, and irritability) and psychotic symptoms. Instruct patients to report these symptoms immediately to their healthcare provider.
- Hypersensitivity: Bronchospasm and Angioedema
Advise patients that symptoms of hypersensitivity including bronchospasm and angioedema can occur with Brivaracetam. Instruct them to seek immediate medical care should they experience signs and symptoms of hypersensitivity.
- Withdrawal of Antiepileptic Drugs
Advise patients not to discontinue use of Brivaracetam without consulting with their healthcare provider. Brivaracetam should normally be gradually withdrawn to reduce the potential for increased seizure frequency and status epilepticus.
- Pregnancy
Advise patients to notify their healthcare provider if they become pregnant or intend to become pregnant during Brivaracetam therapy. Encourage patients to enroll in the North American Antiepileptic Drug Pregnancy Registry if they become pregnant. This registry is collecting information about the safety of antiepileptic drugs during pregnancy.
- Dosing Instructions
Counsel patients that Brivaracetam may be taken with or without food. Instruct patients that Brivaracetam tablets should be swallowed whole with liquid and not chewed or crushed.
Advise patients that the dosage of Brivaracetam oral solution should be measured using a calibrated measuring device and not a household teaspoon. Instruct patients to discard any unused Brivaracetam oral solution after 5 months of first opening the bottle.
# Precautions with Alcohol
In a pharmacokinetic and pharmacodynamic interaction study in healthy subjects, co-administration of Brivaracetam (single dose, 200 mg) and ethanol (continuous intravenous infusion to achieve a blood alcohol concentration of 60 mg/100 mL over 5 hours) increased the effects of alcohol on psychomotor function, attention, and memory. Co-administration of Brivaracetam and ethanol caused a larger decrease from baseline in saccadic peak velocity, smooth pursuit, adaptive tracking performance, and Visual Analog Scale (VAS) alertness, and a larger increase from baseline in body sway and in saccadic reaction time compared with Brivaracetam alone or ethanol alone. The immediate word recall scores were generally lower for Brivaracetam when co-administered with ethanol.
# Brand Names
BRIVIACT®
# Look-Alike Drug Names
There is limited information regarding Brivaracetam Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price
Bromadiolone
# Overview
Bromadiolone is a potent anticoagulant rodenticide. It is a second-generation 4-hydroxycoumarin derivative and vitamin K antagonist, often called a "super-warfarin" for its added potency and tendency to accumulate in the liver of the poisoned organism. When first introduced to the UK market in 1980, it was effective against rodent populations that had become resistant to first-generation anticoagulants.
The product may be used both indoors and outdoors for rats and mice.
# Toxicity
Bromadiolone can be absorbed through the digestive tract, through the lungs, or through skin contact. The pesticide is generally given orally. As a vitamin K antagonist, it depletes vitamin K from the circulation, which impairs blood clotting and causes death by internal hemorrhaging.
Signs of poisoning do not appear until 24 to 36 hours after the poison is eaten, and often take 2–5 days to develop.
The following are acute LD50 values for various animals (mammals); a worked dose conversion follows the list:
- rats 1.125 mg/kg b.w.
- mice 1.75 mg/kg b.w.
- rabbits 1 mg/kg b.w.
- dogs > 10 mg/kg b.w. (oral maximum tolerated dose, MTD)
- cats > 25 mg/kg b.w. (oral MTD)
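As a rough, hypothetical illustration of how these figures translate into absolute amounts (the 0.3 kg body weight is an assumed example value, not taken from the list above), the dose corresponding to an LD50 is the LD50 value multiplied by body weight:
1.125 mg/kg × 0.3 kg ≈ 0.34 mg of bromadiolone for a 0.3 kg rat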
# Chemistry
The compound is used as a mixture of four stereoisomers. Its two stereoisomeric centers are at the phenyl- and the hydroxyl-substituted carbons in the carbon chain of the substituent at the 3 position of the coumarin.
# Antidote
Vitamin K1 is used as the antidote.
Halogenation
# Overview
Halogenation is a chemical reaction that incorporates a halogen atom into a molecule. More specific descriptions exist that specify the type of halogen: fluorination, chlorination, bromination, and iodination.
In a halogen addition reaction, a halogen such as bromine reacts with an alkene: the π-bond breaks and a haloalkane is formed. This makes the hydrocarbon more reactive, and bromine, as it turns out, is a good leaving group in further chemical reactions such as nucleophilic aliphatic substitution and elimination reactions.
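As a minimal worked example (the ethylene substrate is chosen here purely for illustration and is not mentioned in the text above), the addition of bromine across the carbon–carbon double bond gives a vicinal dibromide:
CH2=CH2 + Br2 → BrCH2CH2Br (1,2-dibromoethane)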
Several main types of halogenation exist, including:
- Free radical halogenation
- Ketone halogenation
- Electrophilic halogenation
- Halogen addition reaction
Specific halogenation methods are the Hunsdiecker reaction (from carboxylic acids) and the Sandmeyer reaction (aryl halides).
An example of halogenation can be found in the organic synthesis of the anesthetic halothane from trichloroethylene, which involves a high-temperature bromination in the second step.
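The original reaction scheme is not reproduced here. Purely as an illustrative sketch of a commonly described route (an assumption, not taken from the text above), trichloroethylene is first hydrofluorinated to 2-chloro-1,1,1-trifluoroethane, which is then brominated at high temperature:
CF3CH2Cl + Br2 → CF3CHBrCl (halothane) + HBr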
Bromomethane
# Overview
The chemical compound bromomethane, commonly known as methyl bromide, is an organic halogen compound with formula CH3Br. It is a colorless, nonflammable gas with no distinctive smell. Its chemical properties are quite similar to those of chloromethane. Trade names for bromomethane include Embafume and Terabol.
# Origin
Bromomethane originates from both natural and human sources. It occurs naturally in the ocean, where it is probably formed by algae and kelp. It is also produced by certain terrestrial plants, such as members of the Brassicaceae family. It is manufactured for agricultural and industrial use by reacting methanol with hydrobromic acid.
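For illustration, the manufacturing reaction described above can be written as:
CH3OH + HBr → CH3Br + H2O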
# Uses
Until its production and use were curtailed by the Montreal Protocol, it was widely used as a soil sterilant, mainly for production of seed but also for some crops such as strawberries. In seed production, unlike crop production, it is of vital importance to avoid contaminating the crop with off-type seed of the same species. Therefore, selective herbicides cannot be used. While bromomethane is dangerous to use, it is considerably safer and more effective than the few other soil sterilants available. Its loss to the seed industry has resulted in changes to cultural practices, with increased reliance on mechanical rogueing and fallow seasons.
Bromomethane was also used as a general-purpose fumigant to kill a variety of pests including rats and insects, although it has poor fungicidal properties. (Bromomethane is the preferred fumigant under ISPM No. 15 regulations when exporting wooden packaging to certain countries.) It is also a precursor in the manufacture of other chemicals as a methylation agent, and has been used as a solvent to extract oil from seeds and wool.
While the Montreal Protocol has severely restricted the use of bromomethane internationally, the United States has successfully pushed for critical-use exemptions of the chemical. In 2004, the most recent year with available data, over 7 million pounds of bromomethane were applied to California fields, according to pesticide use statistics compiled by the California Department of Pesticide Regulation.
# Ozone depletion
Bromomethane is on the list of banned ozone-depleting substances of the Montreal Protocol. Because bromine is 60 times more destructive to ozone than chlorine, even small amounts of bromomethane cause considerable damage to the ozone layer. In 2005 and 2006, however, it was granted a critical use exemption under the Montreal Protocol.
# Controversy
Bromomethane is used to prepare golf courses and sod for golf courses and elsewhere, particularly to control Bermuda grass. The Montreal Protocol stipulates that bromomethane use be phased out. The Bush Administration has adopted exceptions to prevent market disruptions.
# Health effects
If inhaled in high concentration for a short period, it produces headaches, dizziness, nausea, vomiting and weakness; this may be followed by mental excitement, convulsions and even acute mania. More prolonged inhalation of lower concentrations may cause bronchitis and pneumonia.
The liquid burns the skin, producing itching and reddening, then blisters several hours after contact. Both liquid and vapour severely damage the eyes.
Exposure levels leading to death vary from 1,600 to 60,000 ppm, depending on the duration of exposure.
The respiratory, kidney, and neurologic effects are of the greatest concern to people. No cases of severe effects on the nervous system from long-term exposure to low levels have been noted in people, but studies in rabbits and monkeys have shown moderate to severe injury.
# Sources and sinks
Sources of CH3Br include oceanic production, biomass burning, leaded fuel combustion, plant and marsh emissions, and fumigation of soils, durable goods, perishables, and structures. Sinks include photochemical decomposition in the atmosphere (reaction with hydroxyl radicals (OH) and photolysis at higher altitudes), loss to soils, chemical and biological degradation in the ocean, and uptake by green plants.
Bronchophony
Bronchophony, also known as bronchiloquy, is the abnormal transmission of sounds from the lungs or bronchi. It is a general sign, detected by auscultation. The patient is requested to repeat a word several times (the numbers "ninety-nine" or "sixty-six" are traditional) while the physician auscultates symmetrical areas of each lung. Normally, the sound of the patient's voice becomes less distinct as the auscultation moves peripherally; bronchophony is the phenomenon of the patient's voice remaining loud at the periphery of the lungs or sounding louder than usual over a distinct area of consolidation (such as pneumonia). This is a valuable tool in physical diagnosis used by medical personnel when auscultating the chest.
Often, the patient does not have to speak for the physician to hear signs of bronchophony. Rather, the normal breath sounds are increased in loudness over the affected area of the lungs (referred to by doctors as "increased breath sounds.")
Bronchophony may be caused by a solidification of lung tissue around the bronchi, which may indicate lung cancer, or by fluid in the alveoli, which may indicate pneumonia. However, it may also have benign causes, such as wide bronchi. As such, it is usually an indication for further investigation rather than the main basis of a diagnosis.
Other tools used in auscultation include listening for egophony, whispered pectoriloquy, rales, rhonchi or wheezing. Also, percussion is often used to determine diseases of the chest.
Bronchoscopy
# Overview
Bronchoscopy is a medical procedure where a tube is inserted into the airways, usually through the nose or mouth. This allows the practitioner to examine inside a patient's airway for abnormalities such as foreign bodies, bleeding, tumors, or inflammation. The practitioner often takes samples from inside the lungs: biopsies, fluid (bronchoalveolar lavage), or endobronchial brushing. The practitioner may use either a rigid bronchoscope or flexible bronchoscope.
# History
A German, Gustav Killian, performed the first bronchoscopy in 1897. From then until the 1970s, doctors evaluated people’s airways using a rigid bronchoscope.
# Rigid Bronchoscopy
A rigid bronchoscope is a straight, hollow, metal tube. Doctors perform rigid bronchoscopy less often today, but it remains the procedure of choice for removing foreign material. Rigid bronchoscopy also becomes useful when bleeding interferes with the view of the area being examined.
# Flexible Bronchoscopy
A flexible bronchoscope is a long thin tube that contains small clear optical fibers that transmit light images as the tube bends. Its flexibility allows this instrument to reach further into the airway. The procedure can be performed easily and safely under local anesthesia.
# Indications
Diagnostic Procedures
- To view abnormalities of the airway
- To obtain samples of an abnormality or specimens in undiagnosed infections
- To obtain tissue specimens of the lung in a variety of disorders
- To evaluate a person who has bleeding in the lungs, possible lung cancer, a chronic cough, or a collapsed lung
Therapeutic Procedures
- To remove foreign objects lodged in the airway
- Laser photocoagulation, electrocauterization, or argon plasma coagulation of exophytic tumors, granulation tissue, or benign lesions such as papilloma, hamartoma, lipoma, and adenoma
- Laser resection of benign tracheal and bronchial strictures
- Stent insertion to palliate extrinsic compression of the tracheobronchial lumen from either malignant or benign disease processes
# Bronchoscopy - The Procedure
The bronchoscopy is performed in 1 of 3 areas:
- A special room designated for such procedures
- An operating room
- An intensive care unit
One will be given antianxiety and antisecretory medications (to prevent oral secretions from obstructing the view), generally atropine (Atropair, I-Tropine) and morphine (Duramorph, Oramorph, Roxanol), half an hour before the procedure.
During the procedure, doctors provide an agent such as midazolam (Versed) to sedate although one would remain conscious. Lidocaine may also be used to anesthetize the upper airways.
One will be monitored during the procedure with periodic blood pressure checks, continuous ECG monitoring of the heart and oxygen measurement. Monitoring is particularly important when the patient remains conscious during the procedure.
The doctor inserts a flexible bronchoscope through either the nose or mouth either in the sitting or lying down position.
Once the bronchoscope is inserted into the upper airway, the doctor examines the vocal cords. The doctor continues to advance the instrument to the trachea and further down into the bronchus, examining each area as the bronchoscope passes.
If doctors discover an abnormality, they may sample it using a brush, a needle, or forceps. They also may sample a large number of alveoli. Doctors can obtain a specimen of lung tissue (transbronchial biopsy), often using a real-time x-ray (fluoroscopy).
# After the procedure
Although most adults tolerate bronchoscopy well, doctors require a brief period of observation afterward.
Nurses watch the patient closely for 2-4 hours following the procedure, usually checking every 15 minutes, and keep the patient in the semi-Fowler position.
Most complications occur early and are readily apparent at the time of the procedure. Assess for respiratory difficulty (stridor and dyspnea resulting from laryngeal edema or laryngospasm).
Monitoring continues until the effects of sedative drugs wear off and gag reflex has returned.
If one has had a transbronchial biopsy, doctors will take a chest x-ray to rule out any air leakage in the lungs (pneumothorax) after the procedure.
One will be hospitalized if there occurs any bleeding, air leakage (pneumothorax), or respiratory distress.
# Risks
Although the rigid bronchoscope can scratch or tear the airway or damage the vocal cords, the risk of bronchoscopy is limited. The conditions for which doctors use it are ongoing, life-threatening cardiac problems or severely low oxygen.
Complications from fiberoptic bronchoscopy remain extremely low.
Common complications include either heart and blood vessel problems or excessive bleeding following biopsy.
A lung biopsy also may cause leakage of air called pneumothorax. Pneumothorax occurs in less than 1% of cases requiring lung biopsy.
Buccal index
The buccal index is a term used in different fields and is defined accordingly:
- In ultrasound diagnostics: The buccal index >20 mm was first introduced by E. E. Kortshagina as a marker for diabetic fetopathy. The buccal index <10 mm was first used as a marker for intrauterine growth restriction (IUGR) by M. S. Walid.
- In dentistry: The cross-mounting buccal index was developed by N. Chaimattayompol for the definitive implant abutment selection, framework design and fabrication.
Diazomethane
# Overview
Diazomethane is the chemical compound CH2N2. In the pure form at room temperature, it is a yellow gas, but it is almost universally used as a solution in diethyl ether. It is one of the more common diazo compounds. It is also toxic and potentially explosive.
# Preparation
CH2N2 is usually prepared as a solution in diethyl ether and used for converting carboxylic acids into their methyl esters or into their homologues (see Arndt-Eistert synthesis). In the Buchner-Curtius-Schlotterbeck reaction (1885), diazomethane reacts with an aldehyde to form ketones. Diazomethane is also frequently used as a carbene equivalent. Diazomethane is prepared in the laboratory at mmol scale from precursors such as Diazald (N-methyl-N-nitroso-p-toluenesulfonamide) and MNNG (1-methyl-3-nitro-1-nitrosoguanidine). Diazald in a solution of diglyme and diethyl ether reacts with a warm aqueous solution of sodium hydroxide and the generated CH2N2 is collected by distillation. Diazomethane is liberated from a solution of MNNG in diethyl ether by addition of aqueous potassium hydroxide at low temperatures.
Diazomethane precursors
CH2N2 reacts with basic solutions of 2H2O (deuterium oxide, D2O) to give the deuterated derivative C2H2N2 (CD2N2).
# Assay
The concentration of CH2N2 can be determined in either of two convenient ways. It can be treated with an excess of benzoic acid in cold Et2O. Unreacted benzoic acid is then assayed using titration with standard NaOH. Alternatively, the concentration of CH2N2 in Et2O can be determined spectrophotometrically at 410 nm where its extinction coefficient, ε, is 7.2.
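Both assay methods above amount to simple arithmetic. The sketch below is a minimal illustration only; the function names, the 1 cm path length, and the sample absorbance are assumptions, while ε = 7.2 and the benzoic acid back-titration are taken from the text.

```python
def diazomethane_concentration(absorbance, path_length_cm=1.0, epsilon=7.2):
    """Estimate the CH2N2 concentration (mol/L) in Et2O from absorbance at 410 nm.

    Beer-Lambert law, A = epsilon * c * l, with the extinction coefficient
    epsilon = 7.2 quoted above; a 1 cm cuvette path length is assumed.
    """
    return absorbance / (epsilon * path_length_cm)

def diazomethane_mmol(benzoic_acid_mmol, naoh_mmol):
    """Titration variant: CH2N2 and benzoic acid react 1:1, so the CH2N2 present
    equals the benzoic acid added minus the unreacted acid found by NaOH titration."""
    return benzoic_acid_mmol - naoh_mmol

# e.g. an absorbance of 0.72 at 410 nm corresponds to roughly 0.1 M CH2N2
print(diazomethane_concentration(0.72))  # 0.1
```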
# Other diazomethane compounds
Many substituted derivatives of diazomethane have been prepared:
- The very stable (CF3)2CN2 (b.p. 12–13 °C),
- Ph2CN2 (m.p. 29–30 °C).
- (CH3)3SiCHN2, which is commercially available as a solution and is as effective as CH2N2 for methylation.
- PhC(H)N2, a red liquid b.p.< 25 °C at 0.1 mm Hg.
# Safety
Diazomethane is toxic by inhalation or by contact with the skin or eyes (TLV 0.2 ppm). Symptoms include chest discomfort, headache, weakness and, in severe cases, collapse. CH2N2 may explode when in contact with ground-glass joints or when heated to about 100 °C. Consequently, specialized, scratch-free glassware and a blast shield should be employed for its use.
Bulk modulus
The bulk modulus (K) of a substance measures the substance's resistance to uniform compression. It is defined as the pressure increase needed to effect a given relative decrease in volume.
As an example, suppose an iron cannon ball with bulk modulus 160 GPa (gigapascal) is to be reduced in volume by 0.5%. This requires a pressure increase of 0.005×160 GPa = 0.8 GPa. If the cannon ball is subjected to a pressure increase of only 100 MPa, it will decrease in volume by a factor of 100 MPa/160 GPa = 0.000625, or 0.0625%.
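The arithmetic in this example follows directly from ΔP ≈ K·(ΔV/V). A minimal sketch reproducing the two numbers above (the variable names are illustrative):

```python
K = 160e9              # bulk modulus of iron, Pa (160 GPa)

dV_over_V = 0.005      # desired 0.5% volume reduction
print(K * dV_over_V / 1e9)   # 0.8  -> pressure increase of 0.8 GPa

dP = 100e6             # applied pressure increase of 100 MPa
print(dP / K)                # 0.000625  -> volume decrease of 0.0625%
```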
The bulk modulus K can be formally defined by the equation:
K = -V ∂p/∂V
where p is pressure, V is volume, and ∂p/∂V denotes the partial derivative of pressure with respect to volume. The inverse of the bulk modulus gives a substance's compressibility.
Other moduli describe the material's response (strain) to other kinds of stress: the shear modulus describes the response to shear, and Young's modulus describes the response to linear strain. For a fluid, only the bulk modulus is meaningful. For an anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized Hooke's law.
Strictly speaking, the bulk modulus is a thermodynamic quantity, and it is necessary to specify how the temperature varies in order to specify a bulk modulus: constant-temperature (isothermal K_T), constant-entropy (adiabatic K_S), and other variations are possible. In practice, such distinctions are usually only relevant for gases.
For a gas, the adiabatic bulk modulus K_S is approximately given by
K_S = κp
where κ is the adiabatic index (the ratio of heat capacities) and p is the pressure.
In a fluid, the bulk modulus K and the density ρ determine the speed of sound c (pressure waves), according to the formula
c = √(K/ρ)
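As a numerical illustration of c = √(K/ρ), and of K_S = κp for a gas, a minimal sketch; the round-number values for water and air are assumptions used only for illustration:

```python
from math import sqrt

def speed_of_sound(K, rho):
    """Speed of pressure waves in a fluid, c = sqrt(K / rho)."""
    return sqrt(K / rho)

# Water: K ~ 2.2 GPa, rho ~ 1000 kg/m^3  ->  about 1480 m/s
print(speed_of_sound(2.2e9, 1000.0))

# Air: K_S = kappa * p with kappa ~ 1.4 and p ~ 101 kPa, rho ~ 1.2 kg/m^3
# ->  about 340 m/s
print(speed_of_sound(1.4 * 101e3, 1.2))
```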
Solids can also sustain transverse waves, for these one additional elastic modulus, for example the shear modulus, is needed to determine wave speeds.
# Anisotropy
For crystalline solids with a symmetry lower than cubic the bulk modulus is not the same in all directions and needs to be described with a tensor with more than one independent value. It is possible to study the tensor elements using powder diffraction under applied pressure.
Bungarotoxin
Bungarotoxin (more accurately α-bungarotoxin) is one of the components of the venom of the elapid snake Taiwanese banded krait (Bungarus multicinctus). It binds irreversibly to the acetylcholine receptor found at the neuromuscular junction, causing paralysis, respiratory failure and death in the victim.
α-bungarotoxin is also a selective antagonist of the α7 nicotinic acetylcholine receptor in the brain, and as such has applications in neuroscience research.
In addition to the α-type bungarotoxin, a β-bungarotoxin is fairly common in some snake venom. The target of this neurotoxin is at the pre-synaptic terminal, where it prevents release of acetylcholine by binding to proteins, most commonly actin.
# History
Bungarotoxin was discovered by Chuan-Chiung Chang and Chen-Yuan Lee of the National Taiwan University in 1963.
Bunyaviridae
Bunyaviridae is a family of negative-stranded RNA viruses. Though generally found in arthropods or rodents, certain viruses in this family occasionally infect humans.
Bunyaviridae are vector-borne viruses. With the exception of Hantaviruses, transmission occurs via an arthropod vector (mosquitoes, ticks, or sandflies). Hantaviruses are transmitted through contact with deer mouse feces. Incidence of infection is closely linked to vector activity; for example, mosquito-borne viruses are more common in the summer.
Human infections with certain Bunyaviruses, such as Crimean-Congo Hemorrhagic Fever virus, are associated with high levels of morbidity and mortality; consequently, handling of these viruses must occur in a Biosafety Level 4 laboratory.
Hantavirus infection, or hantavirus hemorrhagic fever, common in Korea, Scandinavia, Russia, and the American Southwest, is associated with high fever, lung edema, and pulmonary failure. Mortality is around 55%.
The antibody reaction plays an important role in decreasing levels of viremia.
The family Bunyaviridae contains the genera:
- Genus Hantavirus; type species: Hantaan virus (Hantavirus pulmonary syndrome, Korean hemorrhagic fever)
- Genus Nairovirus; type species: Dugbe virus
- Genus Orthobunyavirus; type species: Bunyamwera virus
- Genus Phlebovirus; type species: Rift Valley fever virus
- Genus Tospovirus; type species: Tomato spotted wilt virus
Of these genera, all infect vertebrates except Tospoviruses, which only infect arthropods and plants.
# Morphology
Bunyavirus morphology is somewhat similar to that of the Paramyxoviridae family; Bunyaviruses form enveloped, spherical virions with diameters of 90-100 nm. These viruses contain no matrix proteins.
# Genome
Bunyaviruses have tripartite genomes consisting of a large (L), medium (M), and small (S) RNA segment. These RNA segments are single-stranded, and exist in a helical formation within the virion. In addition, they exhibit a pseudo-circular structure due to each segment's complementary ends. The L segment encodes the RNA-dependent RNA polymerase, necessary for viral RNA replication and mRNA synthesis. The M segment encodes the viral glycoproteins, which project from the viral surface and aid the virus in attaching to and entering the host cell. The S segment encodes the nucleocapsid protein (N).
The L and M segments are negative sense. For the genera Phlebovirus and Tospovirus, the S segment is ambisense. Ambisense means that the segment encodes some proteins in the negative sense and others in the positive sense. The S segment codes for the viral nucleoprotein (N) in the negative sense and a nonstructural (NSs) protein in the ambisense orientation.
Total genome size ranges from 11-19 kbp.
# Replication
This ambisense arrangement requires two rounds of transcription to be carried out. First, the negative-sense RNA is transcribed to produce mRNA and a full-length replicative intermediate. From this intermediate, a subgenomic mRNA encoding the small-segment nonstructural protein is produced, while the polymerase produced following the first round of transcription can now replicate the full length of RNA to produce viral genomes.
Bunyavirus RNA replicates in the cytoplasm, while the viral proteins transit through the ER and Golgi apparatus. Mature virions bud from the Golgi apparatus into vesicles which are transported to the cell surface.
c18aa938b206314aff7e032ea59d3ffa74f7a38b | wikidoc | Burmese tofu | Burmese tofu
Burmese tofu is a food of Shan origin and is different from Chinese tofu, which is made from soybeans. Shan tofu is made from yellow split peas and the Burmese version from besan flour. The flour is mixed with water, turmeric, and a little salt and heated, stirring constantly, until it reaches a creamy consistency. It is then transferred into a tray and allowed to set. It is matte yellow in colour, jelly-like but firm in consistency, and does not crumble when cut or sliced. It may be eaten fresh as a salad or deep fried. It may also be sliced and dried to make crackers for deep frying.
# Varieties and etymology
- Pè bya (literally pressed peas) refers to Chinese tofu and is translated into 'beancurd' in English in Myanmar. Stinky tofu or the fermented form of Chinese tofu, however, is called si to hpu, probably a corruption of the Chinese word chòu dòufu.
- Won ta hpo is the yellow form of tofu made from yellow split peas or zadaw bè in Shan State.
- To hpu gyauk or dried tofu is yellow tofu sliced into a long thin rectangular form and dried in the sun. They are similar to fish or prawn crackers and sold in bundles.
- To hpu made from chickpea (kala bè) flour or pè hmont is the common version in mainland Burma. It has the same yellow colour and taste but slightly firmer than Shan tofu.
- Hsan ta hpo is still mainly confined to Shan regions, made from rice flour called hsan hmont or mont hmont, and is white in colour. It has the same consistency but slightly different in taste. It is as popular as the yellow form as a salad.
There is no f sound (as in "French") in the Burmese language; hence, the p sound (as in "prince") is used in to hpu, the Burmese version of "tofu".
# Preparation
## Fried
- To hpu gyaw is yellow tofu cut into rectangular shapes, scored in the middle, and deep fried. Tofu fritters may be eaten with a spicy sour dip, or cut and made into a salad. They are crispy outside and soft inside.
- Hnapyan gyaw is so called because the fritters are "twice fried" after the tofu is cut into triangular shapes. It is the traditional form in the Shan States.
- To hpu gyauk kyaw or deep fried tofu crackers, like hnapyan gyaw, are usually served with htamin gyin (rice balls kneaded together with fish or potato), another popular Shan dish.
Fried tofu goes very well with kau hnyin baung (glutinous rice) as a breakfast option, and also with mohinga (rice vermicelli in fish soup) or rice noodles called hsan hkauk swè, especially Shan hkauk swè. Green tea is the preferred traditional drink to go with all these in Burma.
## Salad
- To hpu thouk or tofu salad with either to hpu or hsan ta hpo is very popular as a snack or a meal in itself whereas fried tofu on its own is considered a snack. Both may form part of a meal where all the dishes are customarily shared at the same time. Fresh tofu, cut into small rectangular slices, constitutes the main ingredient of the salad, dressed and garnished with peanut oil, dark soy sauce, rice vinegar, toasted crushed dried chilli, crushed garlic, crushed roasted peanuts, crisp-fried onions, and coriander.
- To hpu gyaw thouk refers to tofu fritters cut up and served as a salad as above.
- To hpu nway (warm tofu) or to hpu byaw (soft tofu) is the soft creamy tofu served hot before it sets, usually as a salad dressed and garnished the same way. It may be combined in the same dish with tofu fritters or rice noodles.
## Curried
- To hpu gyet - Sliced yellow tofu may also be curried with fresh tomatoes, onions and garlic, cooked in peanut oil and fish sauce, and garnished with coriander and green chilli. It makes a good pescatarian dish to go with rice, and it is also popular among the poor when meat or poultry is unaffordable.
# Notes
- Also called gram flour, besan flour is made from chana dal (also called kala chana or Bengal gram), a type of small, dark-colored chickpea also used in Indian cuisine.
- Hsan ta hpo (rice tofu) salad from the Shan States is as popular as the yellow Burmese tofu salad.
- Shan hkauk swè (Shan rice noodles) with to hpu gyaw (tofu fritters) served with monnyingyin (pickled mustard greens) on the side
- To hpu nway (warm Burmese tofu) and to hpu gyaw (Burmese tofu fritters) salad combines the creamy and crispy forms into a satisfying meal.
- To hpu thouk (Burmese tofu salad) hawker at the Kuthodaw Pagoda, Mandalay
a2c2e15e622f0ea595d529e5a4f74047c7d8591a | wikidoc | Bus bunching | Bus bunching
Bus bunching refers to two things: (1) a bus route having highly irregular service intervals, and (2) a classical theory for a causal model for irregular intervals, on the premise that a late bus tends to get later and later as it completes its run, while the bus following it tends to get earlier and earlier.
# Theory
The theory is that the two buses eventually form a pair, one right after another, and the service breaks down as the headway degrades from its nominal value. The buses that are stuck together are called a bus bunch or banana bus and may involve more than two buses. It has been theorized to be the primary cause of reliability problems on bus and metro systems.
# Causes
## Abnormal Passenger Loads
The time taken for a bus to complete its duties is related to the number of people attempting to board or alight at stops. A bus that is already late tends to attract a higher number of riders due to the longer service gap between it and the previous bus. The higher number of riders boarding the bus delays it further.
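This feedback loop (a longer gap means more waiting passengers, which means a longer dwell and an even longer gap) is easy to see in a toy simulation. The sketch below is a minimal illustration of the classical theory only, not a validated transit model; the headway, dwell-time constant, and 60-second perturbation are arbitrary assumptions.

```python
HEADWAY = 600.0   # scheduled gap between buses, seconds (assumed)
BOARD = 0.1       # extra dwell seconds per second of gap, i.e. passenger
                  # arrival rate times boarding time per passenger (assumed)

def gap_at_stops(initial_gap, stops=20):
    """Gap to the bus ahead at each stop under the classical dwell-time model.

    Each stop adds a dwell proportional to the current gap while the bus
    ahead dwells for the nominal amount, so the deviation from the scheduled
    headway is multiplied by (1 + BOARD) at every stop.
    """
    gaps = [initial_gap]
    for _ in range(stops):
        gaps.append(gaps[-1] + BOARD * (gaps[-1] - HEADWAY))
    return gaps

# A bus that starts 60 s behind schedule falls further and further behind...
print([round(g) for g in gap_at_stops(HEADWAY + 60)][::5])   # 660, 697, 756, 851, 1004
# ...while a bus that starts 60 s too close keeps closing in until the pair bunches.
print([round(g) for g in gap_at_stops(HEADWAY - 60)][::5])   # 540, 503, 444, 349, 196
```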
## Speed of Individual Drivers
Another cause is that some drivers are faster than others. This results in catching up on long or high-frequency routes.
## Deliberate Acts
According to the article "Progress Has Passed Metrobus" by Lyndsey Layton (December 27, 2005) bus bunching may be deliberately caused by bus drivers, so that the bus ahead of them picks up more passengers and decreases their own workload.
# Practice
The existence of bunching has not been borne out by vehicle tracking systems data. Studies into metro operations have broadly debunked the theory of pairwise bunching as a major cause of irregular intervals on metro lines, and have tied irregularity largely to problems in other key scheduling and operational processes.
Recently research has demonstrated that simulation models of bus routes based on the classical theory of bus bunching have failed to replicate actual conditions of bus service intervals as captured in bus location tracking databases, even when random external events are incorporated into the model. One researcher attributed the classical theory's claim to the phenomenon of physics envy.
While station dwell time does influence interval variability, other explanations of bus service unreliability have included:
- The lack of ability to reset scheduled departure times at the start of the line. This is often the case because outer terminals in bus networks are often remote and on an isolated route rather than at a convergence of routes, and it is uneconomical to position a supervisor for only a single bus route. AVL/CAD systems have been used successfully in some surface transit systems to remotely revise terminal departure times, thus improving overall variability in service intervals.
- Schedules and service plans that provide very little recovery margin compared to actual running time performance will accumulate lateness. Service control actions may be taken to keep bus drivers within union-agreed contractual work parameters, in many bus systems through the use of unscheduled short-turning. Unscheduled short-turning often occurs in the post-peak period and results in many passengers off-loaded onto the following vehicle, which itself may also be crowded.
- Bus routes are subject to street closures, which may increase running times, leading to further lateness of drivers and greater levels of intervention necessary to keep drivers within work parameters.
- Bus operation is also dependent on the aggressiveness of driving. This effect has been quantified by researchers studying the Portland (Oregon) bus network.
# Chaos Theory
Bus bunching is an example of chaos theory. The orderly procession of buses is inherently unstable and buses will tend towards bunches if left unchecked. However, it is impossible to predict from the outset which buses will be bunched and which buses will proceed on schedule to the destination, because bunching is caused by random conditions such as traffic, stoplights, and the number of passengers at a stop.
Butabarbital
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Butabarbital is an intermediate acting barbiturate that is FDA approved for the treatment of insomnia and preoperative sedation. Common adverse reactions include confusion, dizziness, somnolence, and agitation.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
- BUTISOL SODIUM® (butabarbital sodium tablets, USP and butabarbital sodium oral solution, USP) is indicated for use as a sedative or hypnotic.
- Since barbiturates appear to lose their effectiveness for sleep induction and sleep maintenance after 2 weeks, use of BUTISOL SODIUM® in treating insomnia should be limited to this time.
- Daytime sedative - 15 to 30 mg, 3 or 4 times daily.
- Bedtime hypnotic - 50 to 100 mg.
- Preoperative sedative - 50 to 100 mg, 60 to 90 minutes before surgery.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Butabarbital in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Butabarbital in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
- Preoperative sedative - 2 to 6 mg/kg maximum 100 mg.
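Purely as an illustration of the weight-based arithmetic in the range above (2 to 6 mg/kg, capped at 100 mg), a minimal sketch; the function name and example weight are hypothetical, and the snippet is not a dosing tool.

```python
def preoperative_dose_range_mg(weight_kg, low_mg_per_kg=2.0, high_mg_per_kg=6.0, cap_mg=100.0):
    """Weight-based range from the label text above: 2-6 mg/kg, not to exceed 100 mg."""
    return (min(low_mg_per_kg * weight_kg, cap_mg),
            min(high_mg_per_kg * weight_kg, cap_mg))

# e.g. a 20 kg child: (40.0, 100.0) -- the upper end is limited by the 100 mg cap
print(preoperative_dose_range_mg(20))
```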
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Butabarbital in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Butabarbital in pediatric patients.
# Contraindications
- Barbiturates are contraindicated in patients with known barbiturate sensitivity. Barbiturates are also contraindicated in patients with a history of manifest or latent porphyria.
# Warnings
- Because sleep disturbances may be the presenting manifestation of a physical and/or psychiatric disorder, symptomatic treatment of insomnia should be initiated only after a careful evaluation of the patient. The failure of insomnia to remit after 7 to 10 days of treatment may indicate the presence of a primary psychiatric and/or medical illness that should be evaluated.
- Worsening of insomnia or the emergence of new thinking or behavior abnormalities may be the consequences of an unrecognized psychiatric or physical disorder. Such findings have emerged during the course of treatment with sedative-hypnotic drugs. Because some of the important adverse effects of sedative-hypnotics appear to be dose related, it is important to use the smallest possible effective dose, especially in the elderly.
- Complex behaviors such as "sleep driving" (i.e., driving while not fully awake after ingestion of a sedative-hypnotic, with amnesia for the event) have been reported. These events can occur in sedative-hypnotic-naive as well as in sedative-hypnotic-experienced persons. Although behaviors such as sleep-driving may occur with sedative-hypnotics alone at therapeutic doses, the use of alcohol and other CNS depressants with sedative-hypnotics appears to increase the risk of such behaviors, as does the use of sedative-hypnotics at doses exceeding the maximum recommended dose. Due to the risk to the patient and the community, discontinuation of sedative-hypnotics should be strongly considered for patients who report a "sleep driving" episode.
- Other complex behaviors (e.g., preparing and eating food, making phone calls, or having sex) have been reported in patients who are not fully awake after taking a sedative-hypnotic. As with sleep-driving, patients usually do not remember these events.
- Severe anaphylactic and anaphylactoid reactions: Rare cases of angioedema involving the tongue, glottis or larynx have been reported in patients after taking the first or subsequent doses of sedative-hypnotics. Some patients have had additional symptoms such as dyspnea, throat closing, or nausea and vomiting that suggest anaphylaxis. Some patients have required medical therapy in the emergency department. If angioedema involves the tongue, glottis or larynx, airway obstruction may occur and be fatal. Patients who develop angioedema after treatment with sedative-hypnotics should not be rechallenged with the drug.
- Habit forming: Barbiturates may be habit forming. Tolerance, psychological and physical dependence may occur with continued use. Patients who have psychological dependence on barbiturates may increase the dosage or decrease the dosage interval without consulting a physician and may subsequently develop a physical dependence on barbiturates. To minimize the possibility of overdosage or the development of dependence, the prescribing and dispensing of sedative-hypnotic barbiturates should be limited to the amount required for the interval until the next appointment. Abrupt cessation after prolonged use in the dependent person may result in withdrawal symptoms, including delirium, convulsions, and possibly death. Barbiturates should be withdrawn gradually from any patient known to be taking excessive dosage over long periods of time.
- Acute or chronic pain: Caution should be exercised when barbiturates are administered to patients with acute or chronic pain, because paradoxical excitement could be induced, or important symptoms could be masked. However, the use of barbiturates as sedatives in the postoperative surgical period, and as adjuncts to cancer chemotherapy, is well established.
- Use in pregnancy: Barbiturates can cause fetal damage when administered to a pregnant woman. Retrospective, case-controlled studies have suggested a connection between the maternal consumption of barbiturates and a higher than expected incidence of fetal abnormalities. Following oral administration, barbiturates readily cross the placental barrier and are distributed throughout fetal tissues with highest concentrations found in the placenta, fetal liver, and brain.
- Withdrawal symptoms occur in infants born to mothers who receive barbiturates throughout the last trimester of pregnancy. If this drug is used during pregnancy, or if the patient becomes pregnant while taking this drug, the patient should be apprised of the potential hazard to the fetus.
### Precautions
- Barbiturates should be administered with caution, if at all, to patients who are mentally depressed, have suicidal tendencies, or a history of drug abuse.
- Elderly or debilitated patients may react to barbiturates with marked excitement, depression, and confusion. In some persons, barbiturates repeatedly produce excitement rather than depression.
- In patients with hepatic damage, barbiturates should be administered with caution and initially in reduced doses. Barbiturates should not be administered to patients showing the premonitory signs of hepatic coma.
- BUTISOL SODIUM® (butabarbital sodium tablets, USP and butabarbital sodium oral solution, USP) Tablets and Oral Solution contain FD&C Yellow No. 5 (tartrazine) which may cause allergic-type reactions (including bronchial asthma) in certain susceptible individuals. Although the overall incidence of FD&C Yellow No. 5 (tartrazine) sensitivity in the general population is low, it is frequently seen in patients who also have aspirin hypersensitivity.
# Adverse Reactions
## Clinical Trials Experience
- The following adverse reactions have been observed with the use of barbiturates in hospitalized patients. Because such patients may be less aware of certain of the milder adverse effects of barbiturates, the incidence of these reactions may be somewhat higher in fully ambulatory patients.
- More than 1 in 100 patients. The most common adverse reaction, somnolence, is estimated to occur at a rate of 1 to 3 patients per 100.
- Less than 1 in 100 patients. The most common adverse reactions estimated to occur at a rate of less than 1 in 100 patients, listed below by organ system and in decreasing order of occurrence, are:
Nervous system: Agitation, confusion, hyperkinesia, ataxia, CNS depression, nightmares, nervousness, psychiatric disturbance, hallucinations, insomnia, anxiety, dizziness, thinking abnormality.
Respiratory system: Hypoventilation, apnea.
Cardiovascular system: Bradycardia, hypotension, syncope.
Digestive system: Nausea, vomiting, constipation.
Other reported reactions: Headache, hypersensitivity (angioedema, skin rashes, exfoliative dermatitis), fever, liver damage.
## Postmarketing Experience
There is limited information regarding Postmarketing Experience of Butabarbital in the drug label.
# Drug Interactions
- Most reports of clinically significant drug interactions occurring with the barbiturates have involved phenobarbital. However, the application of these data to other barbiturates appears valid and warrants serial blood level determinations of the relevant drugs when there are multiple therapies.
- Anticoagulants. Phenobarbital lowers the plasma levels of dicumarol and causes a decrease in anticoagulant activity as measured by the prothrombin time. Barbiturates can induce hepatic microsomal enzymes resulting in increased metabolism and decreased anticoagulant response of oral anticoagulants (e.g., warfarin, acenocoumarol, dicumarol, and phenprocoumon). Patients stabilized on anticoagulant therapy may require dosage adjustments if barbiturates are added to or withdrawn from their dosage regimen.
- Corticosteroids. Barbiturates appear to enhance the metabolism of exogenous corticosteroids probably through the induction of hepatic microsomal enzymes. Patients stabilized on corticosteroid therapy may require dosage adjustments if barbiturates are added to or withdrawn from their dosage regimen.
- Griseofulvin. Phenobarbital appears to interfere with the absorption of orally administered griseofulvin, thus decreasing its blood level. The effect of the resultant decreased blood levels of griseofulvin on therapeutic response has not been established. However, it would be preferable to avoid concomitant administration of these drugs.
- Doxycycline. Phenobarbital has been shown to shorten the half-life of doxycycline for as long as 2 weeks after barbiturate therapy is discontinued. This mechanism is probably through the induction of hepatic microsomal enzymes that metabolize the antibiotic. If phenobarbital and doxycycline are administered concurrently, the clinical response to doxycycline should be monitored closely.
- Phenytoin, sodium valproate, valproic acid. The effect of barbiturates on the metabolism of phenytoin appears to be variable. Some investigators report an accelerating effect, while others report no effect. Because the effect of barbiturates on the metabolism of phenytoin is not predictable, phenytoin and barbiturate blood levels should be monitored more frequently if these drugs are given concurrently. Sodium valproate and valproic acid appear to decrease barbiturate metabolism; therefore, barbiturate blood levels should be monitored and appropriate dosage adjustments made as indicated.
- Central nervous system. The concomitant use of other central nervous system depressants, including other sedatives or hypnotics, antihistamines, tranquilizers, or alcohol, may produce additive depressant effects.
- Monoamine oxidase inhibitors (MAOI). MAOI prolong the effects of barbiturates probably because metabolism of the barbiturate is inhibited.
- Estradiol, estrone, progesterone, and other steroid hormones. Pretreatment with or concurrent administration of phenobarbital may decrease the effect of estradiol by increasing its metabolism. There have been reports of patients treated with antiepileptic drugs (e.g., phenobarbital) who become pregnant while taking oral contraceptives. An alternate contraceptive method might be suggested to women taking phenobarbital.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA):
- Pregnancy Category D
- Nonteratogenic effects - Infants suffering from long-term barbiturate exposure in utero may have an acute withdrawal syndrome of seizures and hyperirritability from birth to a delayed onset of up to 14 days (see Drug Abuse and Dependence).
Pregnancy Category (AUS):
- Australian Drug Evaluation Committee (ADEC) Pregnancy Category
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Butabarbital in women who are pregnant.
### Labor and Delivery
- Hypnotic doses of barbiturates do not appear to significantly impair uterine activity during labor. Administration of sedative-hypnotic barbiturates to the mother during labor may result in respiratory depression in the newborn. Premature infants are particularly susceptible to the depressant effects of barbiturates. If barbiturates are used during labor and delivery, resuscitation equipment should be available.
### Nursing Mothers
- Caution should be exercised when a barbiturate is administered to a nursing woman since small amounts of some barbiturates are excreted in the milk.
### Pediatric Use
There is no FDA guidance on the use of Butabarbital with respect to pediatric patients.
### Geriatric Use
- Clinical studies of Butisol Sodium Tablets/Oral Solution did not include sufficient numbers of subjects aged 65 and over to determine whether they respond differently from younger subjects. Other reported clinical experience has not identified differences in responses between the elderly and younger patients. In general, dose selection for an elderly patient should be cautious, usually starting at the low end of the dosing range, reflecting the greater frequency of decreased hepatic, renal, or cardiac function, and of concomitant disease or other drug therapy.
### Gender
There is no FDA guidance on the use of Butabarbital with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Butabarbital with respect to specific racial populations.
### Renal Impairment
There is no FDA guidance on the use of Butabarbital in patients with renal impairment.
### Hepatic Impairment
There is no FDA guidance on the use of Butabarbital in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Butabarbital in females of reproductive potential and males.
### Immunocompromised Patients
There is no FDA guidance on the use of Butabarbital in patients who are immunocompromised.
# Administration and Monitoring
### Administration
- Oral
### Monitoring
There is limited information regarding Monitoring of Butabarbital in the drug label.
# IV Compatibility
There is limited information regarding IV Compatibility of Butabarbital in the drug label.
# Overdosage
## Acute Overdose
### Signs and Symptoms
- The toxic dose of barbiturates varies considerably. In general, an oral dose of 1 gram of most barbiturates produces serious poisoning in an adult. Death commonly occurs after 2 to 10 grams of ingested barbiturates. Symptoms of acute intoxication with barbiturates include unsteady gait, slurred speech, and sustained nystagmus. Mental signs of chronic intoxication include confusion, poor judgment, irritability, insomnia, and somatic complaints. Barbiturate intoxication may be confused with alcoholism, bromide intoxication, and with various neurological disorders.
- Acute overdosage with barbiturates is manifested by CNS and respiratory depression which may progress to Cheyne-Stokes respiration, areflexia, constriction of the pupils to a slight degree (though in severe poisoning they may show paralytic dilation), oliguria, tachycardia, hypotension, lowered body temperature, and coma. Typical shock syndrome (apnea, circulatory collapse, respiratory arrest, and death) may occur.
- In extreme overdose, all electrical activity in the brain may cease, in which case a “flat” EEG normally equated with clinical death cannot be accepted. This effect is fully reversible unless hypoxic damage occurs. Consideration should be given to the possibility of barbiturate intoxication even in situations that appear to involve trauma.
### Management
- Maintenance of an adequate airway, with assisted respiration and oxygen administration as necessary.
- Monitoring of vital signs and fluid balance.
- If the patient is conscious and has not lost the gag reflex, emesis may be induced with ipecac. Care should be taken to prevent pulmonary aspiration of vomitus. After completion of vomiting, 30 grams activated charcoal in a glass of water may be administered.
- If emesis is contraindicated, gastric lavage may be performed with a cuffed endotracheal tube in place with the patient in the face down position. Activated charcoal may be left in the emptied stomach and a saline cathartic administered.
- Fluid therapy and other standard treatment for shock, if needed.
- If renal function is normal, forced diuresis may aid in the elimination of the barbiturate.
- Although not recommended as a routine procedure, hemodialysis may be used in severe barbiturate intoxications or if the patient is anuric or in shock.
- Appropriate nursing care, including rolling patients from side-to-side every 30 minutes, to prevent hypostatic pneumonia, decubiti, aspiration, and other complications of patients with altered states of consciousness.
- Antibiotics should be given if pneumonia is suspected.
## Chronic Overdose
There is limited information regarding Chronic Overdose of Butabarbital in the drug label.
# Pharmacology
## Mechanism of Action
- BUTISOL SODIUM® (butabarbital sodium tablets, USP and butabarbital sodium oral solution, USP), like other barbiturates, is capable of producing all levels of CNS mood alteration from excitation to mild sedation, to hypnosis, and deep coma. Overdosage can produce death. Barbiturates depress the sensory cortex, decrease motor activity, alter cerebellar function, and produce drowsiness, sedation, and hypnosis.
## Structure
- BUTISOL SODIUM® (butabarbital sodium tablets, USP and butabarbital sodium oral solution, USP) is a non-selective central nervous system depressant which is used as a sedative or hypnotic. It is available for oral administration as Tablets containing 30 mg or 50 mg butabarbital sodium; and as Oral Solution containing 30 mg/5 mL, with alcohol (by volume) 7%. Other ingredients in the Tablets are: calcium stearate, corn starch, dibasic calcium phosphate, FD&C Blue No. 1 (30 mg only), FD&C Yellow No. 5 (30 mg and 50 mg), FD&C Yellow No. 6 (50 mg only). Other ingredients in the Oral Solution are: D&C Green No. 5, edetate disodium, FD&C Yellow No. 5, flavors (natural and artificial), propylene glycol, purified water, saccharin sodium, sodium benzoate. Butabarbital sodium occurs as a white, bitter powder which is freely soluble in water and alcohol, but practically insoluble in benzene and ether. The structural formula for butabarbital sodium is:
## Pharmacodynamics
- Barbiturate-induced sleep differs from physiological sleep. Sleep laboratory studies have demonstrated that barbiturates reduce the amount of time spent in the rapid eye movement (REM) phase of sleep or dreaming stage. Also, Stages III and IV sleep are decreased. Following abrupt cessation of barbiturates used regularly, patients may experience markedly increased dreaming, nightmares, and/or insomnia. Therefore, withdrawal of a single therapeutic dose over 5 or 6 days has been recommended to lessen the REM rebound and disturbed sleep which contribute to drug withdrawal syndrome (for example, decrease the dose from 3 to 2 doses a day for 1 week).
- In studies, secobarbital sodium and pentobarbital sodium have been found to lose most of their effectiveness for both inducing and maintaining sleep by the end of 2 weeks of continued drug administration even with the use of multiple doses. As with secobarbital sodium and pentobarbital sodium, other barbiturates might be expected to lose their effectiveness for inducing and maintaining sleep after about 2 weeks. The short-, intermediate-, and, to a lesser degree, long-acting barbiturates have been widely prescribed for treating insomnia. Although the clinical literature abounds with claims that the short-acting barbiturates are superior for producing sleep while the intermediate-acting compounds are more effective in maintaining sleep, controlled studies have failed to demonstrate these differential effects. Therefore, as sleep medications, the barbiturates are of limited value beyond short-term use.
- Barbiturates are respiratory depressants. The degree of respiratory depression is dependent upon dose. With hypnotic doses, respiratory depression produced by barbiturates is similar to that which occurs during physiologic sleep with slight decrease in blood pressure and heart rate.
- Barbiturates do not impair normal hepatic function, but have been shown to induce liver microsomal enzymes, thus increasing and/or altering the metabolism of barbiturates and other drugs.
## Pharmacokinetics
- BUTISOL SODIUM® (butabarbital sodium tablets, USP and butabarbital sodium oral solution, USP) is the sodium salt of a weak acid. Barbiturates are weak acids that are absorbed and rapidly distributed to all tissues and fluids with high concentrations in the brain, liver, and kidneys. Barbiturates are bound to plasma and tissue proteins. The rate of absorption is increased if it is ingested as a dilute solution or taken on an empty stomach.
- Barbiturates are metabolized primarily by the hepatic microsomal enzyme system, and most metabolic products are excreted in the urine. The excretion of unchanged butabarbital in the urine is negligible. BUTISOL SODIUM® (butabarbital sodium tablets, USP and butabarbital sodium oral solution, USP) is classified as an intermediate-acting barbiturate. The average plasma half-life for butabarbital is 100 hours in the adult.
- Although variable from patient to patient, butabarbital has an onset of action of about 3/4 to 1 hour, and a duration of action of about 6 to 8 hours.
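To make the kinetics above concrete, the standard first-order elimination relationships can be applied to the 100-hour half-life quoted in this label. The short sketch below is illustrative only; the once-daily dosing interval used for the accumulation estimate is an assumption made for the example, not a recommendation from the label.

```python
import math

half_life_h = 100.0                     # average plasma half-life of butabarbital (per the label)
k = math.log(2) / half_life_h           # first-order elimination rate constant, per hour

# Fraction of a single dose remaining after t hours: e^(-k*t), i.e. 0.5 ** (t / half_life)
for t in (8, 24, 100, 500):
    print(f"after {t:>3} h: {math.exp(-k * t):.2f} of the dose remains")

# Illustrative accumulation if dosed every 24 h (hypothetical interval for the example)
tau_h = 24.0
accumulation = 1.0 / (1.0 - math.exp(-k * tau_h))
print(f"steady-state accumulation factor at a 24 h interval: about {accumulation:.1f}x")
# Steady state is approached after roughly 4-5 half-lives, i.e. about 2-3 weeks.
```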
## Nonclinical Toxicology
- No long-term studies in animals have been performed with butabarbital sodium to determine carcinogenic and mutagenic potential, or effects on fertility.
# Clinical Studies
There is limited information regarding Clinical Studies of Butabarbital in the drug label.
# How Supplied
- BUTISOL SODIUM® (butabarbital sodium tablets, USP):
- 30 mg - colored green, scored, imprinted “BUTISOL SODIUM” and 37/113 in bottles of 100 (NDC 0037-0113-60).
- 50 mg - colored orange, scored, imprinted “BUTISOL SODIUM” and 37/114 in bottles of 100 (NDC 0037-0114-60).
- BUTISOL SODIUM® (butabarbital sodium oral solution, USP): 30 mg/ 5 mL, alcohol (by volume) 7% - colored green, in bottles of one pint (NDC 0037-0110-16).
- Store at controlled room temperature 20°-25°C (68°-77°F).
- Dispense in a tight container.
## Storage
There is limited information regarding Butabarbital Storage in the drug label.
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
- Practitioners should give the following information and instructions to patients receiving barbiturates.
- “Sleep Driving” and other complex behaviors: There have been reports of people getting out of bed after taking a sedative-hypnotic and driving their cars while not fully awake, often with no memory of the event. If a patient experiences such an episode, it should be reported to his or her doctor immediately, since “sleep driving” can be dangerous. This behavior is more likely to occur when sedative-hypnotics are taken with alcohol or other central nervous system depressants. Other complex behaviors (e.g., preparing and eating food, making phone calls, or having sex) have been reported in patients who are not fully awake after taking a sedative-hypnotic. As with sleep-driving, patients usually do not remember these events.
- The use of barbiturates carries with it an associated risk of psychological and/or physical dependence. The patient should be warned against increasing the dose of the drug without consulting a physician.
- Barbiturates may impair mental and/or physical abilities required for the performance of potentially hazardous tasks, such as driving or operating machinery.
- Alcohol should not be consumed while taking barbiturates. Concurrent use of the barbiturates with other CNS depressants, including other sedatives or hypnotics, alcohol, narcotics, tranquilizers, and antihistamines, may result in additional CNS depressant effects.
# Precautions with Alcohol
- Alcohol should not be consumed while taking barbiturates.
# Brand Names
- BUTISOL SODIUM®
# Look-Alike Drug Names
There is limited information regarding Butabarbital Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price | https://www.wikidoc.org/index.php/Butabarbital |
ea9ac09c4dbe1fb1dbfa6f9f478fe33ea61b3e8c | wikidoc | Butyric acid | Butyric acid
Butyric acid (from Greek βούτυρος = butter), also known under the systematic name butanoic acid, is a carboxylic acid with the structural formula CH3CH2CH2-COOH. It is found in rancid butter, parmesan cheese, vomit, and body odor and has an unpleasant smell and acrid taste, with a sweetish aftertaste (similar to ether). Butyric acid can be detected by mammals with good scent detection abilities such as dogs at 10 ppb, whereas humans can detect it in concentrations above 10 ppm.
Butyric acid is a fatty acid occurring in the form of esters in animal fats and plant oils. The glyceride of butyric acid makes up 3% to 4% of butter. When butter goes rancid, butyric acid is liberated from the glyceride by hydrolysis leading to the unpleasant odor. It is an important member of the fatty acid sub-group called short chain fatty acids. Butyric acid is a weak acid with a pKa of 4.82, similar to acetic acid which has pKa 4.76. The similar strength of these acids results from their common -CH2COOH terminal structure. Butyric acid has density 0.96 g/cm3 and molecular mass 88.1051; thus pure butyric acid is 10.9 molar.
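The 10.9 molar figure follows directly from the density and molecular mass quoted above, and the pKa determines how much of the acid is ionized at a given pH. The sketch below simply reproduces that arithmetic; the pH of 7.4 is an illustrative physiological value chosen for the example, not a number taken from the text.

```python
density_g_per_cm3 = 0.96        # density of pure butyric acid
molar_mass_g_per_mol = 88.1051
pKa = 4.82

# Molarity of the neat liquid: grams per litre divided by grams per mole
molarity = density_g_per_cm3 * 1000 / molar_mass_g_per_mol
print(f"pure butyric acid: {molarity:.1f} mol/L")               # ~10.9 M, as stated

# Henderson-Hasselbalch: fraction present as the butyrate anion at a chosen pH
pH = 7.4                                                        # illustrative physiological pH
fraction_ionized = 1 / (1 + 10 ** (pKa - pH))
print(f"fraction ionized at pH {pH}: {fraction_ionized:.3f}")   # ~0.997
```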
Butyric acid or fermentation butyric acid is also found as a hexyl ester (hexyl butanoate) in the oil of Heracleum giganteum (a type of cow parsnip) and as an octyl ester (octyl butanoate) in parsnip (Pastinaca sativa); it has also been noticed in the fluids of the flesh and in perspiration.
It is industrially prepared by the fermentation of sugar or starch, brought about by the addition of putrefying cheese, with calcium carbonate added to neutralize the acids formed in the process. The butyric fermentation of starch is aided by the direct addition of Bacillus subtilis. Salts and esters of the acid are called butanoates.
Butyric acid is used in the preparation of various butanoate esters. Low-molecular-weight esters of butyric acid, such as methyl butanoate, have mostly pleasant aromas or tastes. As a consequence, they find use as food and perfume additives. They are also used in organic laboratory courses, to teach the Fischer esterification reaction.
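For example, the acid-catalysed Fischer esterification of butyric acid with methanol gives the methyl butanoate mentioned above; the choice of sulfuric acid as the catalyst is the usual textbook one, not something specified in the text:

CH3CH2CH2COOH + CH3OH ⇌ CH3CH2CH2COOCH3 + H2O (catalytic H2SO4)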
The acid is an oily colorless liquid that freezes at -8 °C; it boils at 164 °C. It is easily soluble in water, ethanol, and ether, and is precipitated out of its aqueous solution by the addition of calcium chloride. Potassium dichromate and sulfuric acid oxidize it to carbon dioxide and acetic acid, while alkaline potassium permanganate oxidizes it to carbon dioxide. The calcium salt, Ca(C4H7O2)2·H2O, is less soluble in hot water than in cold.
Butyric acid has a structural isomer called isobutyric acid (2-methylpropanoic acid).
# Butanoate fermentation
Butanoate is produced as an end-product of a fermentation process performed solely by obligate anaerobic bacteria. Fermented Kombucha "tea" includes butyric acid as a result of the fermentation. This fermentation pathway was discovered by Louis Pasteur in 1861. Examples of butanoate-producing species of bacteria:
- Clostridium acetobutylicum
- Clostridium butyricum
- Clostridium kluyveri
- Clostridium pasteurianum
- Fusobacterium nucleatum
- Butyrivibrio fibrisolvens
- Eubacterium limosum
The pathway starts with the glycolytic cleavage of glucose to two molecules of pyruvate, as happens in most organisms. Pyruvate is then oxidized into acetyl coenzyme A using a unique mechanism that involves an enzyme system called pyruvate-ferredoxin oxidoreductase. Two molecules of carbon dioxide (CO2) and two molecules of elemental hydrogen (H2) are formed as waste products from the cell. Two molecules of acetyl-CoA are then condensed to acetoacetyl-CoA, which is reduced stepwise to butyryl-CoA and finally converted to butanoate.
ATP is produced in this last step of the fermentation, when butyryl-CoA is converted to butanoate via butyryl phosphate. Three molecules of ATP are produced for each glucose molecule, a relatively high yield. The balanced equation for this fermentation is:
C6H12O6 → C4H8O2 + 2CO2 + 2H2
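A quick element count confirms that this equation is balanced as written; the snippet below is only a bookkeeping check of the stoichiometry shown above.

```python
# Element counts for each species in C6H12O6 -> C4H8O2 + 2 CO2 + 2 H2
glucose  = {"C": 6, "H": 12, "O": 6}
butyrate = {"C": 4, "H": 8,  "O": 2}   # butyric acid, C4H8O2
co2      = {"C": 1, "H": 0,  "O": 2}
h2       = {"C": 0, "H": 2,  "O": 0}

products = {el: butyrate[el] + 2 * co2[el] + 2 * h2[el] for el in ("C", "H", "O")}
assert products == glucose, "the equation would not balance"
print("balanced:", products)   # {'C': 6, 'H': 12, 'O': 6}
```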
## Acetone and butanol fermentation
Several species form acetone and butanol in an alternative pathway, which starts as butyrate fermentation. Some of these species are:
- Clostridium acetobutylicum: the most prominent acetone and butanol producer, used also in industry
- Clostridium beijerinckii
- Clostridium tetanomorphum
- Clostridium aurantibutyricum
These bacteria begin with butanoate fermentation as described above, but, when the pH drops below 5, they switch to butanol and acetone production in order to prevent further lowering of the pH. Two molecules of butanol are formed for each molecule of acetone.
The change in the pathway occurs after acetoacetyl CoA formation. This intermediate then takes two possible pathways:
- Acetoacetyl CoA → acetoacetate → acetone, or
- Acetoacetyl CoA → butyryl CoA → butanal → butanol.
## Butyric acid function/activity
Highly fermentable fibers like oat bran, pectin, and guar are transformed by colonic bacteria into short chain fatty acids, including butyrate.
Butanoate has diverse and, it seems, paradoxical effects on cellular proliferation, apoptosis and differentiation that may be either pro-neoplastic or anti-neoplastic, depending upon factors such as the level of exposure, availability of other metabolic substrate, and the intracellular milieu. Butanoate is thought by some to be protective against colon cancer. However, not all studies support a chemopreventive effect, and the lack of agreement (particularly between in vivo and in vitro studies) on butyrate and colon cancer has been termed the "butyrate paradox." There are many reasons for this discrepant effect, including differences between the in vitro and in vivo environments, the timing of butanoate administration, the amount administered, the source (usually dietary fiber) as a potential confounder, and an interaction with dietary fat. Together, the studies suggest that the chemopreventive benefits of butanoate depend in part on amount, time of exposure with respect to the tumorigenic process, and the type of fat in the diet. Low carbohydrate diets like the Atkins diet are known to reduce the amount of butanoate produced in the colon.
Butyric acid has been associated with the ability to inhibit the function of histone deacetylase enzymes, thereby favouring an acetylated state of histones in the cell. Acetylated histones have a lower affinity for DNA than non-acetylated histones, due to the neutralisation of electrostatic charge interactions. In general, it is thought that transcription factors will be unable to access regions where histones are tightly associated with DNA (i.e., non-acetylated regions, e.g., heterochromatin). Therefore, it is thought that butyric acid enhances the transcriptional activity at promoters, which are typically silenced/downregulated due to histone deacetylase activity.
This article incorporates information from the 1911 encyclopedia. | https://www.wikidoc.org/index.php/Butanoate |
a6b266311aae72f9edcca7d0a05e0f300440cbbe | wikidoc | Butoconazole | Butoconazole
# Disclaimer
WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here.
# Overview
Butoconazole is an antifungal that is FDA approved for the treatment of vulvovaginal candidiasis (infections caused by Candida). Common adverse reactions include vulvar/vaginal burning, itching, soreness and swelling, pelvic or abdominal pain or cramping.
# Adult Indications and Dosage
## FDA-Labeled Indications and Dosage (Adult)
- GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% is indicated for the local treatment of vulvovaginal candidiasis (infections caused by Candida). The diagnosis should be confirmed by KOH smears and/or cultures.
- Note: GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% is safe and effective in non-pregnant women; however, the safety and effectiveness of this product in pregnant women has not been established.
## Off-Label Use and Dosage (Adult)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Butoconazole in adult patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Butoconazole in adult patients.
# Pediatric Indications and Dosage
## FDA-Labeled Indications and Dosage (Pediatric)
There is limited information regarding FDA-Labeled Use of Butoconazole in pediatric patients.
## Off-Label Use and Dosage (Pediatric)
### Guideline-Supported Use
There is limited information regarding Off-Label Guideline-Supported Use of Butoconazole in pediatric patients.
### Non–Guideline-Supported Use
There is limited information regarding Off-Label Non–Guideline-Supported Use of Butoconazole in pediatric patients.
# Contraindications
- GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% is contraindicated in patients with a history of hypersensitivity to any of the components of the product.
# Warnings
- This cream contains mineral oil. Mineral oil may weaken latex or rubber products such as condoms or vaginal contraceptive diaphragms; therefore, use of such products within 72 hours following treatment with GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% is not recommended.
- Recurrent vaginal yeast infections, especially those that are difficult to eradicate, can be an early sign of infection with the human immunodeficiency virus (HIV) in women who are considered at risk for HIV infection.
# Adverse Reactions
## Clinical Trials Experience
There is limited information regarding Clinical Trial Experience of Butoconazole in the drug label.
## Postmarketing Experience
- Of the 314 patients treated with GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% for 1 day in controlled clinical trials, 18 patients (5.7%) reported complaints such as vulvar/vaginal burning, itching, soreness and swelling, pelvic or abdominal pain or cramping, or a combination of two or more of these symptoms. In 3 patients (1%) these complaints were considered treatment-related. Five of the 18 patients reporting adverse events discontinued the study because of them.
# Drug Interactions
There is limited information regarding Butoconazole Drug Interactions in the drug label.
# Use in Specific Populations
### Pregnancy
Pregnancy Category (FDA): C
- In pregnant rats administered 6 mg/kg/day of butoconazole nitrate intravaginally during the period of organogenesis, there was an increase in resorption rate and decrease in litter size; however, no teratogenicity was noted. This dose represents a 130- to 353-fold margin of safety based on serum levels achieved in rats following intravaginal administration compared to the serum levels achieved in humans following intravaginal administration of the recommended therapeutic dose of butoconazole nitrate.
- Butoconazole nitrate has no apparent adverse effect when administered orally to pregnant rats throughout organogenesis at dose levels up to 50 mg/kg/day (5 times the human dose based on mg/m2). Daily oral doses of 100, 300 or 750 mg/kg/day (10, 30 or 75 times the human dose based on mg/m2 respectively) resulted in fetal malformations (abdominal wall defects, cleft palate), but maternal stress was also evident at these higher dose levels. There were, however, no adverse effects on litters of rabbits who received butoconazole nitrate orally, even at maternally stressful dose levels (e.g., 150 mg/kg, 24 times the human dose based on mg/m2).
- Butoconazole nitrate, like other azole antifungal agents, causes dystocia in rats when treatment is extended through parturition. However, this effect was not apparent in rabbits treated with as much as 100 mg/kg/day orally (16 times the human dose based on mg/m2).
- There are, however, no adequate and well-controlled studies in pregnant women. GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus.
Pregnancy Category (AUS):
- Australian Drug Evaluation Committee (ADEC) Pregnancy Category
There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Butoconazole in women who are pregnant.
### Labor and Delivery
There is no FDA guidance on use of Butoconazole during labor and delivery.
### Nursing Mothers
- It is not known whether this drug is excreted in human milk. Because many drugs are excreted in human milk, caution should be exercised when butoconazole nitrate is administered to a nursing woman.
### Pediatric Use
There is no FDA guidance on the use of Butoconazole with respect to pediatric patients.
### Geriatric Use
There is no FDA guidance on the use of Butoconazole with respect to geriatric patients.
### Gender
There is no FDA guidance on the use of Butoconazole with respect to specific gender populations.
### Race
There is no FDA guidance on the use of Butoconazole with respect to specific racial populations.
### Renal Impairment
There is no FDA guidance on the use of Butoconazole in patients with renal impairment.
### Hepatic Impairment
There is no FDA guidance on the use of Butoconazole in patients with hepatic impairment.
### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Butoconazole in females of reproductive potential and males.
### Immunocompromised Patients
There is no FDA guidance on the use of Butoconazole in patients who are immunocompromised.
# Administration and Monitoring
### Administration
- Intravaginal (supplied as a vaginal cream in a prefilled applicator)
### Monitoring
There is limited information regarding Monitoring of Butoconazole in the drug label.
# IV Compatibility
There is limited information regarding IV Compatibility of Butoconazole in the drug label.
# Overdosage
There is limited information regarding Butoconazole overdosage. If you suspect drug poisoning or overdose, please contact the National Poison Help hotline (1-800-222-1222) immediately.
# Pharmacology
## Mechanism of Action
- The exact mechanism of the antifungal action of butoconazole nitrate is unknown; however, it is presumed to function as other imidazole derivatives via inhibition of steroid synthesis. Imidazoles generally inhibit the conversion of lanosterol to ergosterol, resulting in a change in fungal cell membrane lipid composition. This structural change alters cell permeability and, ultimately, results in the osmotic disruption or growth inhibition of the fungal cell.
- Butoconazole nitrate is an imidazole derivative that has fungicidal activity in vitro against Candida spp. and has been demonstrated to be clinically effective against vaginal infections due to Candida albicans. Candida albicans has been identified as the predominant species responsible for vulvovaginal candidiasis.
## Structure
- GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% contains butoconazole nitrate 2%, an imidazole derivative with antifungal activity and it has the following chemical structure:
- Butoconazole nitrate is a white to off-white crystalline powder with a molecular weight of 474.79. It is sparingly soluble in methanol; slightly soluble in chloroform, methylene chloride, acetone, and ethanol; very slightly soluble in ethyl acetate; and practically insoluble in water. It melts at about 159°C with decomposition.
- GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% contains 2% butoconazole nitrate in a cream of edetate disodium, glyceryl monoisostearate, methylparaben, mineral oil, polyglyceryl-3 oleate, propylene glycol, propylparaben, colloidal silicon dioxide, sorbitol solution, purified water, and microcrystalline wax.
## Pharmacodynamics
There is limited information regarding Pharmacodynamics of Butoconazole in the drug label.
## Pharmacokinetics
- Following vaginal administration of butoconazole nitrate vaginal cream, 2%, to 3 women, an average of 1.7% (range 1.3-2.2%) of the dose was absorbed. Peak plasma levels (13.6-18.6 ng radioequivalents/mL of plasma) of the drug and its metabolites are attained between 12 and 24 hours after vaginal administration.
## Nonclinical Toxicology
There is limited information regarding Nonclinical Toxicology of Butoconazole in the drug label.
# Clinical Studies
- Vulvovaginal Candidiasis: Two studies were conducted that compared 2% butoconazole nitrate cream with clotrimazole tablets. Of the 322 enrolled patients, 161 received 2.0% butoconazole vaginal cream and 161 inserted the 500-mg clotrimazole vaginal tablet. At the second follow-up visit (30 days post-therapy), 118 patients in the butoconazole group and 116 in the clotrimazole group were evaluable for efficacy analysis. All of these patients had infection caused by Candida albicans.
- The efficacy of the study drugs was assessed by evaluating clinical, mycologic and therapeutic cure rates, which are summarized in Table 1.
- The therapeutic cure was defined as complete resolution of signs and symptoms of vaginal candidiasis (clinical cure) along with a negative KOH examination and negative culture for Candida spp. (microbiologic eradication) at the long term follow-up (30 days). The therapeutic cure rate was 67% in the butoconazole group and 61% in the clotrimazole group.
# How Supplied
- GYNAZOLE - 1® Butoconazole Nitrate Vaginal Cream USP, 2% is available in cartons containing one single-dose prefilled disposable applicator (NDC 64011-246-01).
## Storage
- Store at 25°C (77°F); excursions permitted to 15°-30°C (59°-86°F).
- Avoid heat above 30°C (86°F).
# Images
## Drug Images
## Package and Label Display Panel
# Patient Counseling Information
# Precautions with Alcohol
- Alcohol-Butoconazole interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication.
# Brand Names
- GYNAZOLE 1 ®
# Look-Alike Drug Names
There is limited information regarding Butoconazole Look-Alike Drug Names in the drug label.
# Drug Shortage Status
# Price | https://www.wikidoc.org/index.php/Butoconazole |
69ee963bbfd046f91e83ed973ef50b5fafdc77cc | wikidoc | Butriptyline | Butriptyline
# Overview
Butriptyline (Evadene, Evadyne, Evasidol, Centrolese) is a tricyclic antidepressant (TCA) which has been used in Europe since 1974. It is the isobutyl side chain homologue of amitriptyline and produces similar effects to it, but with less marked side effects like sedation and interactions with adrenergic drugs.
In vitro, butriptyline is a strong antihistamine and anticholinergic, moderate 5-HT2 and α1-adrenergic receptor antagonist, and weak serotonin reuptake inhibitor, with negligible affinity for the norepinephrine and dopamine transporters. These actions appear to confer a profile similar to that of iprindole and trimipramine with serotonin-blocking effects as the predominant mediator of mood-lifting efficacy.
However, in clinical trials, using similar doses, butriptyline was found to be even more effective than amitriptyline as an antidepressant, despite the fact that amitriptyline is much, much stronger as both a 5-HT2 antagonist and serotonin-norepinephrine reuptake inhibitor. As a result, it may be that butriptyline, in vivo, functions as a prodrug to a metabolite with more appreciable pharmacodynamics. | https://www.wikidoc.org/index.php/Butriptyline |
411b8135c82bfb11be2ed9532f0b16d766be54fe | wikidoc | Butz-Choquin | Butz-Choquin
Butz-Choquin is a French pipe maker. It was founded in 1858 by tobacconist Jean-Baptiste Choquin and Gustave Butz.
# History
The company was established in Metz; it remained there until 1951, when it was purchased by the Berrod-Regad company. It was then relocated to Saint-Claude, Jura. The company began to export pipes in 1960, receiving the Oscar of Export and the Gold Cup of the French Good Taste. The company was acquired by Fabien Guichon in 2002.
# Pipes
Butz-Choquin's first pipe, the Choquin pipe, was a curved pipe with a flat-bottomed hearth, albatross bone, and silver rings. The company currently produces over 70 different series of pipes. Butz-Choquin pipes have only been readily available in the United States of America since 1999. | https://www.wikidoc.org/index.php/Butz-Choquin |
20ff3eb0903587cfa7030af90cd1cd65093ca1f2 | wikidoc | C1-inhibitor | C1-inhibitor
C1-inhibitor (C1-inh, C1 esterase inhibitor) is a protease inhibitor belonging to the serpin superfamily. Its main function is the inhibition of the complement system to prevent spontaneous activation. C1-inhibitor is an acute-phase protein that circulates in blood at levels of around 0.25 g/L. The levels rise ~2-fold during inflammation. C1-inhibitor irreversibly binds to and inactivates C1r and C1s proteases in the C1 complex of classical pathway of complement. MASP-1 and MASP-2 proteases in MBL complexes of the lectin pathway are also inactivated. This way, C1-inhibitor prevents the proteolytic cleavage of later complement components C4 and C2 by C1 and MBL. Although named after its complement inhibitory activity, C1-inhibitor also inhibits proteases of the fibrinolytic, clotting, and kinin pathways. Note that C1-inhibitor is the most important physiological inhibitor of plasma kallikrein, fXIa, and fXIIa.
# Proteomics
C1-inhibitor is the largest member of the serpin superfamily of proteins. Unlike most family members, C1-inhibitor has a 2-domain structure. The C-terminal serpin domain is similar to other serpins and is the part of C1-inhibitor that provides the inhibitory activity. The N-terminal domain (sometimes referred to as the N-terminal tail) is not essential for C1-inhibitor to inhibit proteases; it has no similarity to other proteins. C1-inhibitor is highly glycosylated, bearing both N- and O-glycans, and the N-terminal domain is especially heavily glycosylated.
# Genetics
The human C1-inhibitor gene (SERPING1) is located on the eleventh chromosome (11q11-q13.1).
# Role in disease
Deficiency of this protein is associated with hereditary angioedema ("hereditary angioneurotic edema"), or swelling due to leakage of fluid from blood vessels into connective tissue. Deficiency of C1-inhibitor permits plasma kallikrein activation, which leads to the production of the vasoactive peptide bradykinin. Also, C4 and C2 cleavage goes unchecked, resulting in auto-activation of the complement system. In its most common form, it presents as marked swelling of the face, mouth and/or airway that occurs spontaneously or to minimal triggers (such as mild trauma), but such swelling can occur in any part of the body. In 85% of the cases, the levels of C1-inhibitor are low, while in 15% the protein circulates in normal amounts but it is dysfunctional. In addition to the episodes of facial swelling and/or abdominal pain, it also predisposes to autoimmune diseases, most markedly lupus erythematosus, due to its consumptive effect on complement factors 3 and 4. Mutations in the gene that codes for C1-inhibitor, SERPING1, may also play a role in the development of age related macular degeneration.
Despite uncontrolled auto-activation, it is important to note that levels of key complement components are low during an acute attack, because they are being consumed - indeed, low levels of C4 are a key diagnostic test for hereditary angioedema. This situation is analogous to the low levels of clotting factors found in disseminated intravascular coagulation (DIC).
# Medical use
## Hereditary angioedema
Blood-derived C1-inhibitor is effective, but does carry the risk associated with the use of any human blood product. Cinryze, a pharmaceutical-grade C1-inhibitor, was approved for the use of HAE in 2008. It is a highly purified, pasteurized and nanofiltered plasma-derived C1 esterase inhibitor product; it has been approved for routine prophylaxis against angioedema attacks in adolescent and adult patients with HAE.
A recombinant C1 inhibitor obtained from the milk of transgenic rabbits, conestat alfa (trade name Ruconest), is approved for the treatment of acute HAE attacks in adults.
While C1 inhibitor therapy has been used acutely for more than 35 years in Europe in patients with C1 inhibitor deficiency, new methods of treating acute attacks have emerged: a plasma kallikrein inhibitor and the bradykinin receptor antagonist icatibant.
## For other conditions
The activation of the complement cascade can cause damage to cells; therefore, inhibition of the complement cascade can work as a medicine in certain conditions. When someone has a heart attack, for instance, the lack of oxygen in heart cells causes necrosis of heart cells: dying heart cells spill their contents into the extracellular environment, which triggers the complement cascade. Activation of the complement cascade attracts phagocytes that leak peroxide and other reagents, which may increase the damage to the surviving heart cells. Inhibition of the complement cascade can decrease this damage.
## Synthesis
C1-inhibitor is contained in human blood; it can, therefore, be isolated from donated blood. Risks of infectious disease transmission (viruses, prions, etc.) and the relative expense of isolation have prevented widespread use. It is also possible to produce it by recombinant technology, but Escherichia coli (the most commonly used organism for this purpose) lacks the eukaryotic ability to glycosylate proteins; as C1-inhibitor is particularly heavily glycosylated, such a non-glycosylated recombinant form would have a short circulatory life (even though the carbohydrates are not relevant to the inhibitory function itself). Therefore, C1-inhibitor has also been produced in glycosylated form using transgenic rabbits. This form of recombinant C1-inhibitor also has been given orphan drug status for delayed graft function following organ transplantation and for capillary leakage syndrome. | https://www.wikidoc.org/index.php/C1-inhibitor |
6b2f4b19dc9334fcf9af4423ca9a799c8a685fe2 | wikidoc | C3a receptor | C3a receptor
The C3a receptor also known as complement component 3a receptor 1 (C3AR1) is a G protein-coupled receptor protein involved in the complement system.
The receptor binds the complement component C3a. Although there is limited evidence that this receptor also binds C4a in lesser mammals, this has yet to be proven true in humans. The C3a receptor modulates immunity, arthritis, diet-induced obesity and cancers.
# Agonists and antagonists
Potent and selective agonists and antagonists for C3aR have been discovered. | https://www.wikidoc.org/index.php/C3a_receptor |
5678155d9ebb84ee0cf28e561f3ddab62ac8ef49 | wikidoc | CHADS2 score | CHADS2 score
Synonyms and keywords: CHADS score
# Overview
CHADS2 score is a clinical prediction rule for the estimation of the risk of stroke among patients with non-rheumatic atrial fibrillation (AF), a common and serious cardiac arrhythmia associated with an increased risk of thromboembolic stroke. AF can cause stasis of blood in the atria, leading to the formation of a mural thrombus that can dislodge into the blood flow, reach the brain, and cause a stroke. The CHADS2 score is used to assess the risk of stroke and to determine whether antithrombotic therapy, with either anticoagulant or antiplatelet agents, is required for the prevention of thromboembolism. A high CHADS2 score corresponds to a greater risk of stroke, while a low CHADS2 score corresponds to a lower risk of stroke. The CHADS2 score was validated by a study on non-rheumatic AF patients aged 65 to 95 who were not prescribed the anticoagulant warfarin.
# CHADS2 Score Original Study
## Description
The CHADS2 index was developed by Gage et al. and published in the Journal of the American Medical Association in June 2001, with the objective of assessing the predictive value of classification schemes that estimate stroke risk in patients with AF. To develop the index, two existing classification schemes from the Atrial Fibrillation Investigators (AFI) and the Stroke Prevention and Atrial Fibrillation investigators (SPAF) were combined, and all 3 classification schemes were validated. 1 point each was assigned for the presence of congestive heart failure, hypertension, age 75 and older, and diabetes mellitus, and 2 points were assigned for a history of stroke or transient ischemic attack. Data were obtained from peer review organizations representing 7 different states to create a National Registry of Atrial Fibrillation consisting of 1733 Medicare beneficiaries aged 65 to 95 years who had non-rheumatic AF and were not prescribed warfarin at discharge. The outcome measured was hospitalization for ischemic stroke, which was determined from Medicare claims data. The 1733 patients were followed for a median of 1.2 years.
The results were as follows;
- During the 2121 patient-years of follow up, 94 patients were re-admitted for an ischemic event; 73 of these patients were admitted for stroke, and 23 patients for transient cerebral ischemia.
- The stroke rate was lowest amongst the 120 patients who had a CHADS2 score of 0.
- The stroke rate increased by a factor of 1.5 (95% CI, 1.3-1.7) for each 1 point increase in the CHADS2 score (a worked example of this relationship follows this list).
- Aspirin was associated with a hazard rate of 0.80 (95% CI, 0.5-1.3) corresponding to a nonsignificant 20% RR reduction in the rate of stroke (p=0.27)
- Compared to the schemes developed by the AFI and SPAF, the CHADS2 index was the most accurate predictor of stroke with a c-statistic of 0.82 (95% CI, 0.80-0.84).
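As a rough check of the multiplicative relationship noted above (an illustration added here, not a figure reported in the original study): starting from the 1.9% annual stroke rate observed at a score of 0, multiplying by 1.5 for each additional point predicts roughly 2.9%, 4.3%, 6.4%, 9.6%, 14.4%, and 21.7% for scores of 1 through 6, which tracks the observed rates of 2.8%, 4%, 5.9%, 8.5%, 12.5%, and 18.2% listed in the interpretation section below.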
## Strengths
- The CHADS2 study used chart reviews rather than ICD-9-CM claims to document the presence of AF and to identify stroke risk factors.
- The chart reviews included patients who received aspirin after being discharged from the hospital, enabling adjustment for the use of aspirin in the calculation of the CHADS2 specific stroke rate.
- The cohort of persons used in the study were Medicare beneficiaries from 7 different states, and all geographic regions of the United States were represented.
- As the CHADS2 study used Medicare beneficiaries who were recently hospitalized rather than healthier individuals, it is thought that CHADS2 should be generalizable to frail and elderly individuals.
## Limitations
- The CHADS2 score has various limitations, which have been debated. Notably, many stroke risk factors have not been included, and whilst simple, the score has only modest predictive value for thromboembolism.
- The CHADS2 score may underestimate the risk of stroke in those patients over the age of 75 years. For this reason, some authors have advocated the use of anticoagulation among patients who are over the age of 75 years if there are no contraindications.
- When compared to data from clinical trials from The Stroke Prevention and Atrial Fibrillation investigators (SPAF) and the Atrial Fibrillation Investigators (AFI), the CHADS2 study used participants who were older and sicker. The CHADS2 study was based on the SPAF and AFI schemes; therefore, the study may have performed better if it was used in a younger cohort of patients.
- A single chart review was used to measure the stroke risk factors, and therefore the study was unable to capture new stroke risk factors that may have developed in the cohort participants.
- The study only looked at patients who were hospitalized and were not prescribed warfarin.
- As Medicare claims were used to ascertain the number of ischemic events, there was no way to verify these events.
- The 20% risk reduction of stroke with aspirin administration was not statistically significant in this study (however there is clinical significance when the study is combined with other research).
- While the CHADS2 score provides prognostic information regarding the natural history of non-valvular AF in the absence of warfarin therapy, it should be noted that warfarin therapy also has an associated stroke risk (particularly hemorrhagic stroke) and a risk of major bleeding, and these considerations were taken into account in the development of the recommendations in the next section.
# CHADS2 Risk Score Calculator
## Calculation of the CHADS2 Score for Atrial Fibrillation Stroke Risk
// Reads the five risk-factor checkboxes (input1-input5) of the form named "CHADS2" and writes
// the total score and its interpretation to the form's "result" and "longanswer" fields.
// The first four inputs add 1 point each and the fifth adds 2 points.
function calcScore(){
var score = 0;
if(document.forms["CHADS2"]["input1"].checked == 1){score += 1;}
if(document.forms["CHADS2"]["input2"].checked == 1){score += 1;}
if(document.forms["CHADS2"]["input3"].checked == 1){score += 1;}
if(document.forms["CHADS2"]["input4"].checked == 1){score += 1;}
if(document.forms["CHADS2"]["input5"].checked == 1){score += 2;}
document.forms["CHADS2"]["result"].value = score;
if(score == 0){document.forms["CHADS2"]["longanswer"].value = "Low risk (1.9% stroke risk, 95% CI 1.2-3.0); Consider Aspirin daily";}
if(score == 1){document.forms["CHADS2"]["longanswer"].value = "Moderate risk (2.8% stroke risk, CI 2.0-3.8); Consider Aspirin or Warfarin";}
if(score == 2){document.forms["CHADS2"]["longanswer"].value = "Moderate risk (4% stroke risk, 95% CI 3.1-5.); Warfarin with an INR target of 2-3";}
if(score == 3){document.forms["CHADS2"]["longanswer"].value = "Moderate risk (5.9% stroke risk, 95% CI 4.6-7.3); Warfarin with an INR target of 2-3";}
if(score == 4){document.forms["CHADS2"]["longanswer"].value = "High risk (8.5% stroke risk, 95% CI 6.3-11.1); Warfarin with an INR target of 2-3";}
if(score == 5){document.forms["CHADS2"]["longanswer"].value = "High risk (12.5% stroke risk, 95% CI 8.2-17.5); Warfarin with an INR target of 2-3";}
if(score == 6){document.forms["CHADS2"]["longanswer"].value = "High risk (18.2% stroke risk, 95% CI 10.5-27.4); Warfarin with an INR target of 2-3";}
}
## Interpretation of the CHADS2 Score for Atrial Fibrillation Stroke Risk
Shown below is the probability of the annual stroke risk by the corresponding CHADS2 score value; a standalone script implementing this mapping follows the list.
- Score 0: Low risk (1.9% stroke risk, 95% CI 1.2-3.0); Consider Aspirin daily
- Score 1: Moderate risk (2.8% stroke risk, CI 2.0-3.8); Consider Aspirin or Warfarin depends on patient preference
- Score 2: Moderate risk (4% stroke risk, 95% CI 3.1-5.); Warfarin with an INR target of 2-3, unless contraindicated
- Score 3: Moderate risk (5.9% stroke risk, 95% CI 4.6-7.3); Warfarin with an INR target of 2-3, unless contraindicated
- Score 4: High risk (8.5% stroke risk, 95% CI 6.3-11.1); Warfarin with an INR target of 2-3, unless contraindicated
- Score 5: High risk (12.5% stroke risk, 95% CI 8.2-17.5); Warfarin with an INR target of 2-3, unless contraindicated
- Score 6: High risk (18.2% stroke risk, 95% CI 10.5-27.4); Warfarin with an INR target of 2-3, unless contraindicated
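The point assignments and risk bands above can also be written as a small standalone script. The sketch below is illustrative only and is separate from the form-based calculator above; the patient field names are assumptions, and the risk strings simply restate the figures quoted in the list above.

// Standalone sketch of the CHADS2 calculation (illustrative; field names are assumptions).
function chads2Score(patient) {
    var score = 0;
    if (patient.congestiveHeartFailure) { score += 1; } // C: 1 point
    if (patient.hypertension) { score += 1; }           // H: 1 point
    if (patient.age75OrOlder) { score += 1; }           // A: 1 point
    if (patient.diabetesMellitus) { score += 1; }       // D: 1 point
    if (patient.priorStrokeOrTia) { score += 2; }       // S2: 2 points
    return score;
}

// Annual stroke risk and suggested therapy for each score, as listed above.
var chads2Interpretation = [
    "Low risk (1.9% stroke risk); consider Aspirin daily",
    "Moderate risk (2.8% stroke risk); consider Aspirin or Warfarin",
    "Moderate risk (4% stroke risk); Warfarin with an INR target of 2-3",
    "Moderate risk (5.9% stroke risk); Warfarin with an INR target of 2-3",
    "High risk (8.5% stroke risk); Warfarin with an INR target of 2-3",
    "High risk (12.5% stroke risk); Warfarin with an INR target of 2-3",
    "High risk (18.2% stroke risk); Warfarin with an INR target of 2-3"
];

// Example: a 78-year-old with hypertension and a prior TIA scores 1 + 1 + 2 = 4.
var exampleScore = chads2Score({ age75OrOlder: true, hypertension: true, priorStrokeOrTia: true });
console.log(exampleScore + ": " + chads2Interpretation[exampleScore]);
// Prints "4: High risk (8.5% stroke risk); Warfarin with an INR target of 2-3"

Unlike the form-based calculator, this version has no dependence on the page's HTML and can be run directly in a browser console or Node.js.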
# 2014 AHA/ACC/HRS Guideline for the Management of Patients With Atrial Fibrillation (DO NOT EDIT)
## Prevention of Thromboembolism | CHADS2 score
Template:Seealso
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] Associate Editor(s)-in-Chief: Sadaf Sharfaei M.D.[2]
Synonyms and keywords: CHADS score
# Overview
CHADS2 score is a clinical prediction rule for the estimation of the risk of stroke among patients with non-rheumatic atrial fibrillation (AF), a common and serious cardiac arrhythmia associated with an increased risk of thromboembolic stroke. AF can cause stasis of blood in the atria, leading to the formation of a mural thrombus that can dislodge into the blood flow, reach the brain, and cause a stroke. CHADS2 score is used to assess the risk of stroke and determine whether or not antithrombotic therapy is required with either anticoagulants therapy or antiplatelets for the prevention of thromboembolism.[1] A high CHADS2 score corresponds to a greater risk of stroke, while a low CHADS2 score corresponds to a lower risk of stroke. The CHADS2 score was validated by a study on non-rheumatic AF patients aged 65 to 95 who were not prescribed the anticoagulant warfarin.[2]
# CHADS2 Score Original Study
## Description
The CHADS2 index was developed by the Gage et al., published in the Journal of the American Medical Association in June 2001, with the objective of assessing the predictive value of classification schemes that estimate stroke risk in patients with AF. To develop the index, two existing classification schemes from the Atrial Fibrillation Investigators (AFI), and the Stroke Prevention and Atrial Fibrillation investigators (SPAF) were combined, and all 3 classification schemes were validated. 1 point each was assigned for the presence of congestive heart failure, hypertension, age 75 and older, and diabetes mellitus, and 2 points were assigned for history of stroke or transient ischemic attack. Data was obtained from peer review organizations representing 7 different states to create a National Registry of Atrial Fibrillation consisting of 1733 Medicare beneficiaries aged 65 to 95 years who had non-rheumatic AF and were not prescribed warfarin at discharge. The outcome measured was the hospitalization for ischemic stroke, which was determined by medicare claims data. The 1733 patients were followed for a median of 1.2 years.
The results were as follows;
- During the 2121 patient-years of follow up, 94 patients were re-admitted for an ischemic event; 73 of these patients were admitted for stroke, and 23 patients for transient cerebral ischemia.
- The stroke rate was lowest amongst the 120 patients who had a CHADS2 score of 0.
- The stroke rate increased by a factor of 1.5 (95% CI, 1.3-1.7) for each 1 point increase in the CHADS2 score.
- Aspirin was associated with a hazard rate of 0.80 (95% CI, 0.5-1.3) corresponding to a nonsignificant 20% RR reduction in the rate of stroke (p=0.27)
- Compared to the schemes developed by the AFI and SPAF, the CHADS2 index was the most accurate predictor of stroke with a c-statistic of 0.82 (95% CI, 0.80-0.84).
## Strengths
- The CHADS2 study used chart reviews rather than ICD-9-CM claims to document the presence of AF and to identify stroke risk factors.
- The chart reviews included patients who received aspirin after being discharged from the hospital, enabling adjustment for the use of aspirin in the calculation of the CHADS2 specific stroke rate.
- The cohort of persons used in the study were Medicare beneficiaries from 7 different states, and all geographic regions of the United States were represented.
- As the CHADS2 study used Medicare beneficiaries who were recently hospitalized rather than healthier individuals, it is thought that CHADS2 should be generalizable to frail and elderly individuals
## Limitations
- The CHADS2 score has various limitations, which have been debated.[3] Notably, many stroke risk factors have not been included, and whilst simple, the score has only modest predictive value for thromboembolism.
- The CHADS2 score may underestimate the risk of stroke in those patients over the age of 75 years. For this reason, some authors have advocated the use of anticoagulation among patients who are over the age of 75 years if there are no contraindications.[4]
- When compared to data from clinical trials from The Stroke Prevention and Atrial Fibrillation investigators (SPAF) and the Atrial Fibrillation Investigators (AFI), the CHADS2 study used participants who were older and sicker. The CHADS2 study was based on the SPAF and AFI schemes; therefore, the study may have performed better if it was used in a younger cohort of patients.
- A single chart review was used to measure the stroke risk factors, and therefore the study was unable to capture new stroke risk factors that may have developed in the cohort participants.
- The study only looked at patients who were hospitalized and were not prescribed warfarin.
- As Medicare claims were used to ascertain the number of ischemic events, there was no way to verify these events.
- The 20% risk reduction of stroke with aspirin administration was not statistically significant in this study (however there is clinical significance when the study is combined with other research).
- While the CHADS2 score provides prognostic information regarding the natural history of non-valvular AF in the absence of warfarin therapy, it should be noted that warfarin therapy also has an associated stroke risk (particularly hemorrhagic stroke) and a risk of major bleeding, and these considerations were taken into account in the development of the recommendations in the next section.[5]
# CHADS2 Risk Score Calculator
## Calculation of the CHADS2 Score for Atrial Fibrillation Stroke Risk
function calcScore(){
var score = 0;
if(document.forms["CHADS2"]["input1"].checked == 1){score += 1;}
if(document.forms["CHADS2"]["input2"].checked == 1){score += 1;}
if(document.forms["CHADS2"]["input3"].checked == 1){score += 1;}
if(document.forms["CHADS2"]["input4"].checked == 1){score += 1;}
if(document.forms["CHADS2"]["input5"].checked == 1){score += 2;}
document.forms["CHADS2"]["result"].value = score;
if(score == 0){document.forms["CHADS2"]["longanswer"].value = "Low risk (1.9% stroke risk, 95% CI 1.2-3.0); Consider Aspirin daily";}
if(score == 1){document.forms["CHADS2"]["longanswer"].value = "Moderate risk (2.8% stroke risk, CI 2.0-3.8); Consider Aspirin or Warfarin";}
if(score == 2){document.forms["CHADS2"]["longanswer"].value = "Moderate risk (4% stroke risk, 95% CI 3.1-5.); Warfarin with an INR target of 2-3";}
if(score == 3){document.forms["CHADS2"]["longanswer"].value = "Moderate risk (5.9% stroke risk, 95% CI 4.6-7.3); Warfarin with an INR target of 2-3";}
if(score == 4){document.forms["CHADS2"]["longanswer"].value = "High risk (8.5% stroke risk, 95% CI 6.3-11.1); Warfarin with an INR target of 2-3";}
if(score == 5){document.forms["CHADS2"]["longanswer"].value = "High risk (12.5% stroke risk, 95% CI 8.2-17.5); Warfarin with an INR target of 2-3";}
if(score == 6){document.forms["CHADS2"]["longanswer"].value = "High risk (18.2% stroke risk, 95% CI 10.5-27.4); Warfarin with an INR target of 2-3";}
}
## Interpretation of the CHADS2 Score for Atrial Fibrillation Stroke Risk
Shown below is the probability of the annual stroke risk by the corresponding CHADS2 score value.[2]
- Score 0: Low risk (1.9% stroke risk, 95% CI 1.2-3.0); Consider Aspirin daily
- Score 1: Moderate risk (2.8% stroke risk, CI 2.0-3.8); Consider Aspirin or Warfarin depends on patient preference
- Score 2: Moderate risk (4% stroke risk, 95% CI 3.1-5.); Warfarin with an INR target of 2-3, unless contraindicated
- Score 3: Moderate risk (5.9% stroke risk, 95% CI 4.6-7.3); Warfarin with an INR target of 2-3, unless contraindicated
- Score 4: High risk (8.5% stroke risk, 95% CI 6.3-11.1); Warfarin with an INR target of 2-3, unless contraindicated
- Score 5: High risk (12.5% stroke risk, 95% CI 8.2-17.5); Warfarin with an INR target of 2-3, unless contraindicated
- Score 6: High risk (18.2% stroke risk, 95% CI 10.5-27.4); Warfarin with an INR target of 2-3, unless contraindicated
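For readers who want to reproduce this mapping outside the form-based calculator above, the sketch below shows a minimal, framework-free JavaScript function that computes the score and looks up the corresponding annual stroke risk. The function name (`chads2`) and the property names of its input object are illustrative choices, not part of the original calculator; the point values and risk figures are taken from the list above.

```javascript
// Minimal sketch of the CHADS2 calculation: 1 point each for congestive heart
// failure, hypertension, age >= 75 and diabetes, and 2 points for prior
// stroke or TIA. Risk figures are the annual stroke risks listed above.
function chads2(patient) {
    var score =
        (patient.congestiveHeartFailure ? 1 : 0) +
        (patient.hypertension ? 1 : 0) +
        (patient.ageAtLeast75 ? 1 : 0) +
        (patient.diabetes ? 1 : 0) +
        (patient.priorStrokeOrTIA ? 2 : 0);

    var annualStrokeRiskPercent = [1.9, 2.8, 4.0, 5.9, 8.5, 12.5, 18.2][score];
    var category = score === 0 ? "Low" : (score <= 3 ? "Moderate" : "High");

    return { score: score, annualStrokeRiskPercent: annualStrokeRiskPercent, category: category };
}

// Example: a 78-year-old with hypertension and diabetes, no CHF and no prior stroke/TIA.
console.log(chads2({
    congestiveHeartFailure: false,
    hypertension: true,
    ageAtLeast75: true,
    diabetes: true,
    priorStrokeOrTIA: false
}));
// -> { score: 3, annualStrokeRiskPercent: 5.9, category: "Moderate" }
```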
# 2014 AHA/ACC/HRS Guideline for the Management of Patients With Atrial Fibrillation (DO NOT EDIT)
## Prevention of Thromboembolism[6] | https://www.wikidoc.org/index.php/CHADS2_score | |
2cbb6e75265f2de214703aab3ae89cdacdfaad93 | wikidoc | Facial nerve | Facial nerve
The facial nerve is the seventh (VII) of twelve paired cranial nerves. It emerges from the brainstem between the pons and the medulla, controls the muscles of facial expression, and carries taste sensation from the anterior two-thirds of the tongue. It also supplies preganglionic parasympathetic fibers to several head and neck ganglia.
# Structure
The motor part of the facial nerve arises from the facial nerve nucleus in the pons while the sensory part of the facial nerve arises from the nervus intermedius.
The motor part of the facial nerve enters the petrous temporal bone through the internal auditory meatus (intimately close to the inner ear), then runs a tortuous course (including two tight turns) through the facial canal, emerges from the stylomastoid foramen, and passes through the parotid gland, where it divides into five major branches. Although it passes through the parotid gland, it does not innervate the gland; that innervation is provided by cranial nerve IX, the glossopharyngeal nerve.
Inside one of the tight turns in the facial canal, the facial nerve forms the geniculate ganglion.
No other nerve in the body travels such a long distance through a bony canal.
## Branches
### Inside the facial canal
- Greater petrosal nerve - provides parasympathetic innervation to the lacrimal gland, as well as special taste sensory fibers to the palate via the nerve of the pterygoid canal (Vidian nerve).
- Nerve to stapedius - provides motor innervation for the stapedius muscle in the middle ear
- Chorda tympani - provides parasympathetic innervation to the submandibular and sublingual glands and special sensory taste fibers for the anterior 2/3 of the tongue.
### Outside skull (distal to stylomastoid foramen)
- Posterior auricular nerve - controls movements of some of the scalp muscles around the ear
- Branch to the posterior belly of the digastric muscle and to the stylohyoid muscle
- Five major facial branches (in parotid gland) - from top to bottom:
 - Temporal branch of the facial nerve
 - Zygomatic branch of the facial nerve
 - Buccal branch of the facial nerve
 - Marginal mandibular branch of the facial nerve
 - Cervical branch of the facial nerve
Helpful mnemonic devices for remembering the major branches include the phrases: "To Zanzibar By Motor Car", "Two Zebras Bit My Cat", "Tell Ziggy Bob Marley Called", and "Two Zulus Buggered My Cat".
# Function
## Efferent
Its main function is motor control of most of the muscles of facial expression. It also innervates the posterior belly of the digastric muscle, the stylohyoid muscle, and the stapedius muscle of the middle ear. All of these muscles are striated muscles of branchiomeric origin developing from the 2nd pharyngeal arch.
The facial nerve also supplies parasympathetic fibers to the submandibular and sublingual glands via the chorda tympani and the submandibular ganglion. Parasympathetic innervation serves to increase the flow of saliva from these glands. It also supplies parasympathetic innervation to the nasal mucosa and the lacrimal gland via the pterygopalatine ganglion.
## Afferent
In addition, it receives taste sensations from the anterior two-thirds of the tongue and sends them to the nucleus of the solitary tract. The facial nerve also supplies a small amount of afferent innervation to the oropharynx above the palatine tonsil. A small amount of cutaneous sensation from the skin in and around the auricle (outer ear) is also carried by the nervus intermedius.
# Location of Cell Bodies
The cell bodies for the facial nerve are grouped in anatomical areas called nuclei or ganglia. The cell bodies for the afferent nerves are found in the geniculate ganglion for both taste and general afferent sensation. The cell bodies for muscular efferent nerves are found in the facial motor nucleus whereas the cell bodies for the parasympathetic efferent nerves are found in the superior salivatory nucleus.
# Pathology
People may suffer from acute facial nerve paralysis, which usually manifests as weakness of the muscles of facial expression on the affected side.
Bell's palsy is one type of idiopathic acute facial nerve paralysis, more accurately described as a multiple cranial nerve ganglionitis involving the facial nerve; it most likely results from viral infection and sometimes occurs as a result of Lyme disease.
# Testing the facial nerve
Voluntary facial movements, such as wrinkling the brow, showing the teeth, frowning, closing the eyes tightly (an inability to do so is known as lagophthalmos), pursing the lips, and puffing out the cheeks, all test the facial nerve. There should be no noticeable asymmetry.
In an upper motor neuron lesion, called a central seven, only the lower part of the face on the opposite side is affected, because the upper facial muscles receive bilateral cortical control.
Lower motor neuron lesions can result in Bell's palsy, manifested as both upper and lower facial weakness on the same side as the lesion.
Taste can be tested on the anterior 2/3 of the tongue. This can be tested with a swab dipped in a flavoured solution, or with electronic stimulation (similar to putting your tongue on a battery).
# Additional images
- Superficial dissection of the right side of the neck, showing the carotid and subclavian arteries.
- Dura mater and its processes exposed by removing part of the right half of the skull, and the brain.
- Superficial dissection of brain-stem. Ventral view.
- Hind- and mid-brains; postero-lateral view.
- The sphenopalatine ganglion and its branches.
- Mandibular division of the trifacial nerve.
- Mandibular division of trifacial nerve, seen from the middle line.
- Plan of the facial and intermediate nerves and their communication with other nerves.
- The course and connections of the facial nerve in the temporal bone.
- Upper part of medulla spinalis and hind- and mid-brains; posterior aspect, exposed in situ.
- View of the inner wall of the tympanum (enlarged.)
- The right membrana tympani with the hammer and the chorda tympani, viewed from within, from behind, and from above.
- Position of the right bony labyrinth of the ear in the skull, viewed from above.
- Left temporal bone showing surface markings for the tympanic antrum (red), transverse sinus (blue), and facial nerve (yellow).
- Side of neck, showing chief surface markings.
- Cranial nerves
- Head facial nerve branches | https://www.wikidoc.org/index.php/CN_VII |
5c5a7d94976e1f80d0ef57c1fbbea353c7ed6442 | wikidoc | COMMIT-CCS 2 | COMMIT-CCS 2
COMMIT-CCS 2: Clopidogrel and Metoprolol in Myocardial Infarction Trial-Second Chinese Cardiac Study
# Overview
The cardioprotective effects of aspirin in patients with acute myocardial infarction (MI) have been well established. However, whether the routine administration of the ADP inhibitor clopidogrel (Plavix, Sanofi-Aventis and Bristol-Myers Squibb) together with aspirin adds further protection has not been assessed in a randomized, controlled study.
# Study Design
The Clopidogrel and Metoprolol in Myocardial Infarction Trial-Second Chinese Cardiac Study (COMMIT-CCS 2) was the largest clinical study ever conducted in China, enrolling 45,852 patients at 1250 centers. COMMIT-CCS 2 was a randomized, parallel-group, placebo-controlled trial that used a 2 x 2 factorial design to assess the effects of adding 75 mg of clopidogrel daily (vs. placebo) and of adding the beta blocker metoprolol (vs. placebo) in acute MI patients on aspirin therapy (162 mg daily).
Patients with suspected AMI (ST change or new left bundle branch block) within 24 hours of symptom onset were enrolled in the study; patients undergoing primary PCI or those with a high risk of bleeding were excluded.
Primary endpoints: death from any cause, and the composite of death, reinfarction, or stroke, assessed up to 4 weeks in hospital or prior to discharge.
Mean treatment duration and follow-up was 16 days.
# Results
A total of 22,960 patients were randomized to clopidogrel 75 mg daily and 22,891 patients were randomized to placebo; all patients were treated with aspirin. Baseline characteristics were well balanced between study groups. In each group, 26% of patients were older than 70 years of age, 34% were randomized within 6 hours of symptom onset, and fibrinolytic therapy was administered to 50% of patients, including 68% of ST segment elevation MI patients presenting within 12 hours. Other concomitant drug therapies were widely used, and prescribing patterns among Chinese physicians matched those used in Western countries.
The incidence of the study's primary composite endpoint (death, reinfarction, or stroke) was significantly lower in the clopidogrel arm than in the control arm (9.3% vs 10.1%, 2P=0.002). This difference translated into a 9% reduction in the relative risk of the composite endpoint.
A total of 1728 (7.7%) in-hospital deaths were reported in the clopidogrel group vs 1846 (8.1%) in-hospital deaths in the control group, accounting for a 7% relative risk reduction (2p=0.03).
Clopidogrel was also found to reduce the risk of reinfarction by 13% relative to placebo (2p=0.02) and to reduce the risk of all strokes by 14%, mostly through a reduced incidence of ischemic stroke, although the stroke difference did not reach statistical significance. In addition, there was no difference in the rate of major bleeding events between the 2 groups.
The treatment effect of clopidogrel on the primary endpoint emerged quickly, with a 10% relative reduction in favor of clopidogrel within the first 12 hours, and further benefit accrued with each additional day of treatment.
The risk reduction in the primary endpoint with clopidogrel vs placebo remained consistent across all prespecified subgroups, including those treated and not treated with lytic therapy. There was a slight trend suggesting that greater benefit was achieved when the study drug was administered within 6 hours.
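As a quick sanity check on how the quoted relative risk reduction follows from the raw counts, the sketch below recomputes the relative risk reduction for in-hospital death from the numbers reported above (1728/22,960 deaths with clopidogrel versus 1846/22,891 with placebo). It is only an illustration of the arithmetic; the function name is an arbitrary choice, and the sketch is not part of the trial's statistical analysis, which also produced confidence intervals and p-values.

```javascript
// Relative risk reduction (RRR) = (control rate - treated rate) / control rate.
function relativeRiskReduction(treatedEvents, treatedN, controlEvents, controlN) {
    var treatedRate = treatedEvents / treatedN;   // event rate in the clopidogrel arm
    var controlRate = controlEvents / controlN;   // event rate in the placebo arm
    return (controlRate - treatedRate) / controlRate;
}

// In-hospital death, using the counts reported above.
var rrr = relativeRiskReduction(1728, 22960, 1846, 22891);
console.log((rrr * 100).toFixed(1) + "%"); // prints "6.7%", consistent with the ~7% reduction quoted above
```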
# Conclusions
- Adding 75 mg of clopidogrel daily in acute myocardial infarction patients prevents approximately 10 major vascular events per 1000 treated patients.
- There was no excess of cerebral, fatal, or transfused bleeds.
- Each million acute myocardial infarction patients treated for approximately 2 weeks would avoid roughly 5000 deaths and 5000 nonfatal major vascular events (reinfarctions or strokes); the arithmetic behind these per-1000 and per-million figures is sketched below.
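The per-1000 and per-million figures in these conclusions are simply the absolute risk reduction of the composite endpoint scaled up. The sketch below, using the rounded composite event rates from the Results section (9.3% vs 10.1%), illustrates that arithmetic; the helper function name is an arbitrary choice, and with these rounded inputs the result is on the order of, though not exactly, the figures quoted above.

```javascript
// Events prevented per 1000 treated = absolute risk reduction (ARR) x 1000.
function eventsPreventedPer1000(treatedRate, controlRate) {
    var absoluteRiskReduction = controlRate - treatedRate;
    return absoluteRiskReduction * 1000;
}

// Composite of death, reinfarction or stroke: 9.3% with clopidogrel vs 10.1% with placebo.
var per1000 = eventsPreventedPer1000(0.093, 0.101);
console.log(per1000.toFixed(0) + " events prevented per 1000 treated");              // "8" -- on the order of the ~10 quoted above
console.log((per1000 * 1000).toFixed(0) + " events prevented per million treated");  // "8000"
```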
# Reference
- AHA Scientific Sessions 2005
- COMMIT CCS2 Website
# Additional and Up-to-Date Information about All Cardiovascular Trials
- Clinical Trial Results | https://www.wikidoc.org/index.php/COMMIT-CCS_2 |